CN114119797B - Data processing method, data processing device, computer readable medium, processor and electronic equipment - Google Patents


Info

Publication number
CN114119797B
CN114119797B (application CN202111394510.8A)
Authority
CN
China
Prior art keywords
target
game scene
picture
unit
virtual camera
Prior art date
Legal status
Active
Application number
CN202111394510.8A
Other languages
Chinese (zh)
Other versions
CN114119797A
Inventor
张桥
李京燕
李小海
Current Assignee
Beijing Shi Guan Jin Yang Technology Development Co ltd
Original Assignee
Beijing Shi Guan Jin Yang Technology Development Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Shi Guan Jin Yang Technology Development Co ltd
Priority to CN202111394510.8A
Publication of CN114119797A
Application granted
Publication of CN114119797B
Status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a data processing method, a data processing apparatus, a computer readable medium, a processor and an electronic device. A rendering texture (RenderTexture) is created, and a plurality of virtual cameras with preset rendering attributes corresponding to a target character are determined; among the virtual cameras, each virtual camera whose depth value is not the minimum renders a game scene picture with a transparent background. The texture attribute of each virtual camera is set to the RenderTexture, and the game scene pictures rendered by the virtual cameras are saved to the RenderTexture in ascending order of depth value. A picture carrier for the map to be generated is created, the RenderTexture is mapped onto the picture carrier to obtain a target rendering map, and the target rendering map is saved under a target storage path. The invention can effectively display the game scene picture seen from the viewing angle of the target character, improving the user's game experience and increasing user stickiness.

Description

Data processing method, data processing device, computer readable medium, processor and electronic equipment
Technical Field
The present invention relates to the field of computer science and technology, and in particular, to a data processing method, apparatus, computer readable medium, processor, and electronic device.
Background
With the development of computer science and technology, the technology for developing online game applications continues to improve.
The program screen of an online game application may include a game scene picture and a user interface. The game scene picture may include game characters, monsters, buildings, and the like. It will be appreciated that the user interface is not part of what is seen from the game character's perspective.
However, when the game scene picture needs to be displayed from the viewing angle of a particular game character, the prior art cannot display it effectively.
Disclosure of Invention
In view of the foregoing, the present invention provides a data processing method, apparatus, computer readable medium, processor and electronic device that overcome, or at least partially solve, the foregoing problems. The technical solutions are as follows:
a data processing method, comprising:
creating a rendering texture (RenderTexture);
determining a plurality of virtual cameras with preset rendering attributes corresponding to a target character; among the virtual cameras, each virtual camera whose depth value is not the minimum renders a game scene picture with a transparent background;
setting the texture attribute of each virtual camera to the RenderTexture;
saving the game scene pictures rendered by the virtual cameras to the RenderTexture sequentially, in ascending order of depth value;
creating a picture carrier for the map to be generated;
mapping the RenderTexture onto the picture carrier to obtain a target rendering map;
and saving the target rendering map under a target storage path.
Optionally, mapping the RenderTexture onto the picture carrier to obtain the target rendering map includes:
reading pixel data from the RenderTexture;
encoding the pixel data according to a predefined encoding mode to obtain processed pixel data in a target format;
and assigning the processed pixel data to the picture carrier to obtain the target rendering map.
Optionally, after a virtual camera whose depth value is not the minimum captures an initial game scene picture, the initial game scene picture is pixel-processed in a predefined manner by a configured material shader, which outputs a game scene picture whose rendered background is transparent.
A data processing apparatus, the apparatus comprising: a first creation unit, a first determination unit, a first setting unit, a first saving unit, a second creation unit, a first obtaining unit, and a second saving unit; wherein:
the first creation unit is used for creating a rendering texture (RenderTexture);
the first determining unit is used for determining a plurality of virtual cameras with preset rendering attributes corresponding to a target character; among the virtual cameras, each virtual camera whose depth value is not the minimum renders a game scene picture with a transparent background;
the first setting unit is used for setting the texture attribute of each virtual camera to the RenderTexture;
the first saving unit is used for saving the game scene pictures rendered by the virtual cameras to the RenderTexture sequentially, in ascending order of depth value;
the second creation unit is used for creating a picture carrier for the map to be generated;
the first obtaining unit is used for mapping the RenderTexture onto the picture carrier to obtain a target rendering map;
the second saving unit is configured to save the target rendering map under a target storage path.
Optionally, the first obtaining unit includes: a reading unit, a second obtaining unit, and a third obtaining unit; wherein:
the reading unit is used for reading pixel data from the RenderTexture;
the second obtaining unit is configured to encode the pixel data according to a predefined encoding mode, so as to obtain processed pixel data in a target format;
and the third obtaining unit is used for assigning the processed pixel data to the picture carrier to obtain the target rendering map.
Optionally, after a virtual camera whose depth value is not the minimum captures an initial game scene picture, the initial game scene picture is pixel-processed in a predefined manner by a configured material shader, which outputs a game scene picture whose rendered background is transparent.
A computer readable medium having stored thereon a computer program, wherein the program, when executed by a processor, implements the data processing method described above.
A processor for running a program, wherein the program, when run, implements the data processing method described above.
An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data processing methods described above.
The data processing method, apparatus, computer readable medium, processor and electronic device provided by this embodiment can create a rendering texture (RenderTexture) and determine a plurality of virtual cameras with preset rendering attributes corresponding to a target character; among the virtual cameras, each virtual camera whose depth value is not the minimum renders a game scene picture with a transparent background. The texture attribute of each virtual camera is set to the RenderTexture, the game scene pictures rendered by the virtual cameras are saved to the RenderTexture in ascending order of depth value, a picture carrier for the map to be generated is created, the RenderTexture is mapped onto the picture carrier to obtain a target rendering map, and the target rendering map is saved under a target storage path. In this way, the game scene pictures rendered by the virtual cameras are sent to the RenderTexture in depth order and effectively fused, and the fused game scene picture is displayed by generating the target rendering map, so that the game scene picture seen from the viewing angle of the target character can be displayed effectively, improving the user's game experience and increasing user stickiness.
The foregoing is only an overview of the technical solutions of the present invention. To make the technical means of the present invention clearer, and to make its above and other objects, features and advantages more comprehensible, specific embodiments are set forth in the following detailed description.
Drawings
To describe the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for the embodiments are briefly introduced below. It is apparent that the drawings described below illustrate only embodiments of the present invention, and a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flowchart of a first data processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a first data processing apparatus according to an embodiment of the present invention;
fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As shown in fig. 1, this embodiment proposes a first data processing method, which may include the following steps:
s101, creating a rendering texture render texture;
it should be noted that the present invention may be applied to electronic devices, such as mobile phones and tablet computers.
Optionally, when creating the RenderTexture, the present invention may define its width and height in advance, for example defaulting to the width and height of the screen.
S102, determining a plurality of virtual cameras with preset rendering attributes corresponding to a target character; among the virtual cameras, each virtual camera whose depth value is not the minimum renders a game scene picture with a transparent background;
the virtual camera may be a camera that photographs a scene of a game in a game at a view angle of a target character.
Specifically, the present invention may determine in advance the depth value of each virtual camera relative to the target character, sort the virtual cameras by depth value, and determine the virtual camera with the smallest depth value as the target virtual camera. Thereafter, rendering attributes may be set for all virtual cameras except the target virtual camera, so that the backgrounds of the game scene pictures rendered by those cameras are transparent.
Optionally, after a virtual camera whose depth value is not the minimum captures an initial game scene picture, the camera pixel-processes the initial game scene picture in a predefined manner through the configured material shader, and outputs a game scene picture whose rendered background is transparent.
Specifically, the present invention can set the rendering attributes of the virtual cameras by configuring a material shader. The configured shader is applied to all virtual cameras except the target virtual camera.
The color handled by the shader may consist of R, G, B and A (transparency) components. In the present invention, when configuring the shader, the tolerance value of the color transparency may be predefined as 0.01, and the base color of the transparent color may be defined as black, i.e., (0, 0, 0, 1).
Specifically, after a virtual camera captures the initial game scene picture, the shader determines the color to be rendered as follows. In a CG program (a shader program executed on the graphics card GPU), the map _MainTex is sampled at the input point, the R, G and B values of the sampled color are assigned to the output pixel color, and the A value is assigned as the transparency. The shader therefore knows how to work: it finds the corresponding uv point on the map and colors directly with that color information. It then subtracts the R value of the transparent base color from the R value of the color to be presented, obtaining deltaR; deltaG and deltaB are computed in the same way for G and B. The transparent base color has R = 0, G = 0, B = 0 and A = 1. It should be noted that if deltaR < the R tolerance of the color transparency, and deltaG < the G tolerance, and deltaB < the B tolerance, the pixel is treated as transparent; otherwise it is treated as an ordinary opaque color.
Optionally, the present invention may set a background color for each of the above virtual cameras. Black (0, 0, 0, 1) may be used: black minus the transparent base color is (0, 0, 0), which satisfies the tolerance condition, so background pixels are treated as transparent. That is, when there is no target rendering object within the visual range of a virtual camera, the game scene picture rendered by that camera displays as fully transparent, showing nothing.
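The tolerance test described above can be sketched in engine-agnostic form. The following Python fragment is an illustrative sketch only (it is not code from the patent): a pixel is treated as transparent when each color channel's distance from the transparent base color (black) falls below the predefined tolerance of 0.01.

```python
# Illustrative sketch of the per-pixel transparency test described above.
# The transparent base color is black (R=0, G=0, B=0, A=1) and the
# per-channel tolerance is 0.01, as stated in the text.

TRANSPARENT_BASE = (0.0, 0.0, 0.0)  # R, G, B of the transparent base color
TOLERANCE = 0.01                    # predefined color-transparency tolerance

def is_transparent_pixel(color, base=TRANSPARENT_BASE, tol=TOLERANCE):
    """Return True when every channel delta is below the tolerance."""
    delta_r = abs(color[0] - base[0])
    delta_g = abs(color[1] - base[1])
    delta_b = abs(color[2] - base[2])
    return delta_r < tol and delta_g < tol and delta_b < tol

# A background pixel rendered as black is classified transparent,
# while an ordinary scene color remains opaque.
```

With this test, a camera whose background color is set to black produces only transparent background pixels, which is exactly the behavior the paragraph above relies on.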
Optionally, the present invention can solve the problem of the game scene picture flashing white for a frame during loading. This is an initial-value problem, and the present invention can set the initial value to transparent: before rendering, every pixel of the frame is set to zero, giving a colorless, fully transparent effect. The target color to be displayed is then computed, and the colorless transparent effect is maintained until the target color is ready to display.
S103, setting the texture attribute of each virtual camera to the RenderTexture;
Specifically, by setting the texture attribute of each virtual camera to the RenderTexture, the present invention saves the game scene picture output by each virtual camera into the RenderTexture.
S104, saving the game scene pictures rendered by the virtual cameras to the RenderTexture sequentially, in ascending order of depth value;
It should be noted that when the game scene pictures rendered by the virtual cameras are saved to the RenderTexture in ascending order of depth value, a picture rendered earlier is placed behind, a picture rendered later is placed in front, and a picture rendered later can cover a picture rendered earlier. Specifically, the game scene picture rendered by the virtual camera with the smallest depth value serves as the background, while the backgrounds of the pictures rendered by the other virtual cameras are transparent and contain only the appearance of their target rendering objects. Therefore a picture rendered later does not completely cover a picture rendered earlier: only its target rendering object covers the corresponding region of the earlier picture. In this way, the present invention effectively fuses the different contents of the game scene pictures rendered by the virtual cameras.
S105, creating a picture carrier for the map to be generated;
The picture carrier may be a picture template (a whiteboard) that displays a corresponding color when a color is assigned to it. For example, the picture carrier may be of type Texture2D. It should be noted that the present invention may predefine the width, height and color depth (such as RGB16, RGB24 or RGB32) of the picture carrier when creating it.
S106, mapping the RenderTexture onto the picture carrier to obtain a target rendering map;
The target rendering map may be the map obtained by mapping the RenderTexture onto the picture carrier.
It should be noted that the target rendering map displays the content-fused game scene pictures rendered by the virtual cameras, thereby effectively displaying the game scene picture seen from the viewing angle of the target character, improving the user's game experience and increasing user stickiness.
S107, saving the target rendering map under the target storage path.
The target storage path may be set or adjusted by a technician or a user according to actual situations, which is not limited by the present invention.
Specifically, after the target rendering map is obtained, the target rendering map can be stored in the data storage space corresponding to the target storage path.
It should be noted that when the target character moves in the game or its viewing direction changes, the game scene pictures rendered by the corresponding virtual cameras change accordingly; the game scene picture saved in the RenderTexture changes with them, and the picture displayed in the target rendering map is updated synchronously. Thus the game scene picture displayed by the target rendering map updates in step with the target character's movement and changes of viewing direction.
The data processing method provided by this embodiment can create a rendering texture (RenderTexture) and determine a plurality of virtual cameras with preset rendering attributes corresponding to a target character; among the virtual cameras, each virtual camera whose depth value is not the minimum renders a game scene picture with a transparent background. The texture attribute of each virtual camera is set to the RenderTexture, the game scene pictures rendered by the virtual cameras are saved to the RenderTexture in ascending order of depth value, a picture carrier for the map to be generated is created, the RenderTexture is mapped onto the picture carrier to obtain a target rendering map, and the target rendering map is saved under a target storage path. In this way, the game scene pictures rendered by the virtual cameras are sent to the RenderTexture in depth order and effectively fused, and the fused game scene picture is displayed by generating the target rendering map, so that the game scene picture seen from the viewing angle of the target character can be displayed effectively, improving the user's game experience and increasing user stickiness.
Based on fig. 1, the present embodiment proposes a second data processing method. In the method, step S106 may include steps S1061, S1062, S1063, and S1064; wherein:
s1061, reading pixel data from the render texture;
the present invention can activate the renderings after creating the renderings and setting the texture attributes of each virtual camera to the renderings.
Specifically, after the rendering is activated, the present invention can read pixel data in game scene pictures rendered by different virtual cameras from the rendering by assigning the rendering to the rendering.
Specifically, the invention can use the ReadPixels interface to read pixel data. The pixel data read by the present invention may be pixel data in a region of Texture2D, and the present invention may store the read pixel data as Texture data.
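The region read performed by ReadPixels can be pictured as copying a rectangular sub-block out of the source pixel grid. The following Python sketch is illustrative only (it is not the engine API): it reads a w x h region starting at (x, y) from a row-major pixel list.

```python
# Illustrative sketch of reading a rectangular pixel region, analogous to
# copying an (x, y, w, h) rectangle out of the render target.

def read_pixel_region(pixels, src_width, x, y, w, h):
    """pixels: row-major flat list for an image src_width pixels wide.
    Returns the w*h region starting at column x, row y, also row-major."""
    region = []
    for row in range(y, y + h):
        start = row * src_width + x
        region.extend(pixels[start:start + w])
    return region

# Hypothetical 4x2 source image holding the values 0..7:
source = list(range(8))
# Read the 2x2 region whose top-left corner is (1, 0).
patch = read_pixel_region(source, 4, 1, 0, 2, 2)
```

The extracted block can then be handed to the encoding step described next, independent of how the source buffer was produced.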
S1062, performing coding processing on the pixel data according to a predefined coding mode;
s1063, obtaining processed pixel data in a target format;
specifically, the invention can output the pixel data read from the render texture, i.e., the pixel data read from the target texture of each virtual camera, into processed pixel data in a certain format (e.g., JPG or PNG format) through a certain encoding format.
S1064, assigning the processed pixel data to a picture carrier to obtain a target rendering map.
Optionally, the present invention can activate the current render target using the active property of RenderTexture. Setting RenderTexture.active is equivalent to calling Graphics.SetRenderTarget. The active render texture can be changed or queried when performing custom graphical effects; if all that is needed is for a camera to render into a texture, Camera.targetTexture can be used instead.
It should be noted that the present invention implements step S106 by executing steps S1061 to S1064, thereby generating the target rendering map, effectively displaying the game scene picture seen from the viewing angle of the target character, improving the user's game experience and increasing user stickiness.
The data processing method provided by this embodiment can generate the target rendering map, effectively display the game scene picture seen from the viewing angle of the target character, improve the user's game experience and increase user stickiness.
Corresponding to the steps shown in fig. 1, as shown in fig. 2, the present embodiment proposes a first data processing apparatus. The apparatus may include: a first creation unit 101, a first determination unit 102, a first setting unit 103, a first saving unit 104, a second creation unit 105, a first obtaining unit 106, and a second saving unit 107; wherein:
a first creation unit 101 for creating a rendering texture render texture;
it should be noted that the present invention may be applied to electronic devices, such as mobile phones and tablet computers.
Optionally, when creating the RenderTexture, the present invention may define its width and height in advance, for example defaulting to the width and height of the screen.
A first determining unit 102, configured to determine a plurality of virtual cameras with preset rendering attributes corresponding to a target character; among the virtual cameras, each virtual camera whose depth value is not the minimum renders a game scene picture with a transparent background;
the virtual camera may be a camera that photographs a scene of a game in a game at a view angle of a target character.
Specifically, the method can determine the depth value of each virtual camera from the target role in advance, order each virtual camera according to the sequence of the depth values from big to small, and determine the virtual camera with the smallest depth value as the target virtual camera; after that, the invention can set rendering attributes for all virtual cameras except the target virtual camera, so that the background of the game scene picture rendered by all virtual cameras except the target virtual camera can be transparent.
Optionally, after the virtual camera with the depth value being the non-minimum depth value shoots and obtains the initial game scene picture, the virtual camera performs the pixel processing of the predefined mode on the initial game scene picture through the set material shader, and outputs the game scene picture with the rendered background being transparent.
Specifically, the invention can realize the setting of rendering attributes of the virtual camera by setting the texture shader. The set material shader loader can be applied to all virtual cameras except the target virtual camera.
A first setting unit 103 for setting the texture attribute of each virtual camera to the RenderTexture;
Specifically, by setting the texture attribute of each virtual camera to the RenderTexture, the present invention saves the game scene picture output by each virtual camera into the RenderTexture.
A first saving unit 104, configured to save the game scene pictures rendered by the virtual cameras to the RenderTexture sequentially, in ascending order of depth value;
It should be noted that when the game scene pictures rendered by the virtual cameras are saved to the RenderTexture in ascending order of depth value, a picture rendered earlier is placed behind, a picture rendered later is placed in front, and a picture rendered later can cover a picture rendered earlier. Specifically, the game scene picture rendered by the virtual camera with the smallest depth value serves as the background, while the backgrounds of the pictures rendered by the other virtual cameras are transparent and contain only the appearance of their target rendering objects. Therefore a picture rendered later does not completely cover a picture rendered earlier: only its target rendering object covers the corresponding region of the earlier picture. In this way, the present invention effectively fuses the different contents of the game scene pictures rendered by the virtual cameras.
A second creation unit 105 for creating a picture carrier for the map to be generated;
The picture carrier may be a picture template (a whiteboard) that displays a corresponding color when a color is assigned to it.
A first obtaining unit 106 for mapping the RenderTexture onto the picture carrier to obtain a target rendering map;
The target rendering map may be the map obtained by mapping the RenderTexture onto the picture carrier.
It should be noted that the target rendering map displays the content-fused game scene pictures rendered by the virtual cameras, thereby effectively displaying the game scene picture seen from the viewing angle of the target character, improving the user's game experience and increasing user stickiness.
A second saving unit 107, configured to save the target rendering map under the target storage path.
The target storage path may be set or adjusted by a technician or a user according to actual situations, which is not limited by the present invention.
Specifically, after the target rendering map is obtained, the target rendering map can be stored in the data storage space corresponding to the target storage path.
It should be noted that when the target character moves in the game or its viewing direction changes, the game scene pictures rendered by the corresponding virtual cameras change accordingly; the game scene picture saved in the RenderTexture changes with them, and the picture displayed in the target rendering map is updated synchronously. Thus the game scene picture displayed by the target rendering map updates in step with the target character's movement and changes of viewing direction.
The data processing apparatus provided by this embodiment can send the game scene pictures rendered by the virtual cameras to the RenderTexture in depth order for effective fusion, and display the fused game scene picture by generating the target rendering map, so that the game scene picture seen from the viewing angle of the target character can be displayed effectively, improving the user's game experience and increasing user stickiness.
Based on fig. 2, the present embodiment proposes a second data processing apparatus. In this apparatus, the first obtaining unit 106 includes: a reading unit, a second obtaining unit, and a third obtaining unit; wherein:
a reading unit for reading pixel data from the renderTexture;
the second obtaining unit is used for carrying out coding processing on the pixel data according to a predefined coding mode to obtain processed pixel data in a target format;
and the third obtaining unit is used for assigning the processed pixel data to the picture carrier to obtain the target rendering map.
The present invention can activate the renderTexture after creating the renderTexture and setting the texture attribute of each virtual camera to the renderTexture.
Specifically, after the renderTexture is activated, the present invention can read, from the renderTexture, the pixel data of the game scene pictures rendered by the different virtual cameras.
Specifically, the invention can use the ReadPixels interface to read the pixel data. ReadPixels reads the pixels in a specified region of the active render target into a Texture2D, and the invention may store the read pixel data as texture data.
Specifically, the invention can encode the pixel data read from the renderTexture, i.e., the pixel data of the target texture of each virtual camera, into processed pixel data in a target format (e.g., JPG or PNG format) through a predefined encoding mode.
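As an illustration of such a predefined encoding mode (the patent names JPG and PNG but gives no code; this minimal stdlib PNG encoder is an assumption, not the patented implementation):

```python
import struct
import zlib

def encode_png(pixels, width, height):
    """Encode a flat list of RGBA bytes (row-major) into a minimal PNG byte
    string: signature, IHDR, one zlib-compressed IDAT, and IEND."""
    def chunk(tag, data):
        # a PNG chunk: 4-byte length, tag, data, CRC over tag+data
        payload = tag + data
        return (struct.pack(">I", len(data)) + payload
                + struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF))

    # prefix every scanline with filter type 0 (no filtering)
    raw = b"".join(b"\x00" + bytes(pixels[y * width * 4:(y + 1) * width * 4])
                   for y in range(height))
    # IHDR: width, height, bit depth 8, color type 6 (RGBA), defaults
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)
    return (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))
```

The resulting bytes are the "processed pixel data in a target format" that can then be assigned to a picture carrier or saved under the target storage path.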
The data processing device provided by this embodiment can generate the target rendering map, effectively display the game scene picture seen from the perspective of the target character, improve the user's game experience, and increase user stickiness.
Referring now to fig. 3, a schematic diagram of an electronic device 300 suitable for use in implementing some embodiments of the present invention is shown. The electronic device shown in fig. 3 is only an example and should not be construed as limiting the functionality and scope of use of the embodiments of the invention.
As shown in fig. 3, the electronic device 300 may include a processor 301, a memory 302, a communication interface 303, an input unit 304, an output unit 305, and a communication bus 306. Wherein the processor 301 and the memory 302 are connected to each other via a communication bus 306. The communication interface 303, the input unit 304 and the output unit 305 are also connected to a communication bus 306.
The communication interface 303 may be an interface of a communication module, such as an interface of a GSM module. The communication interface 303 may be used to obtain data or instructions sent by other devices. The communication interface 303 is also used to send data or instructions to other devices.
In an embodiment of the present invention, the processor 301 may be a central processing unit (Central Processing Unit, CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device, etc.
In one possible implementation, the memory 302 may include a storage program area and a storage data area, where the storage program area may store an operating system, and application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and so on; the storage data area may store data created during use of the computer, such as user data, user access data, audio data, and the like.
In addition, the memory 302 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or another non-volatile solid-state storage device.
The processor 301 may call a program stored in the memory 302, and in particular, the processor 301 may execute the data processing method of any of the above embodiments.
The memory 302 is used for storing one or more programs, and the programs may include program codes including computer operation instructions, and in the embodiment of the present invention, at least the programs for implementing the following functions are stored in the memory 302:
creating a rendering texture renderTexture;
determining a plurality of virtual cameras with preset rendering attributes corresponding to the target character; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum depth value is transparent;
setting the texture attribute of each virtual camera to the renderTexture, respectively;
sequentially storing the game scene pictures rendered by the virtual cameras to the renderTexture in order of depth value from small to large;
creating a picture carrier for the image to be mapped;
obtaining a target rendering map by mapping the renderTexture to the picture carrier;
and saving the target rendering map under the target storage path.
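The seven stored functions above can be sketched end to end with plain Python stand-ins for the engine objects; every class and function name here is hypothetical and only illustrates the control flow:

```python
class RenderTexture:
    """Stand-in for the shared render target the cameras write into."""
    def __init__(self):
        self.picture = None

def run_pipeline(cameras, target_storage_path):
    """cameras: [(depth, render_fn)] where render_fn(previous_picture)
    returns that camera's picture fused over the previous content."""
    rt = RenderTexture()                                   # 1. create renderTexture
    # 2-4. cameras render into the shared target in ascending depth order
    for depth, render in sorted(cameras, key=lambda c: c[0]):
        rt.picture = render(rt.picture)
    carrier = {"texture": None}                            # 5. picture carrier
    carrier["texture"] = rt.picture                        # 6. map renderTexture onto it
    with open(target_storage_path, "w") as f:              # 7. save under target path
        f.write(str(carrier["texture"]))
    return carrier
```

In the real device steps 2-4 would be Unity cameras and step 6 would assign encoded pixel data; the sketch only shows the ordering and data flow between the units.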
In one possible implementation, the electronic device 300 may include: one or more processors 301;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors 301, cause the one or more processors 301 to implement the data processing methods described in the method embodiments above.
The electronic device 300 may further include an input unit 304, and the input unit 304 may include at least one of a touch sensing unit that senses touch events on a touch display panel, a keyboard, a mouse, a camera, a microphone, and the like.
The output unit 305 may include: at least one of a display, a speaker, a vibration mechanism, a light, etc. The display may include a display panel, such as a touch display panel or the like. In one possible case, the display panel may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like. The vibration mechanism may be operable to displace the electronic device 300, and in one possible implementation, the vibration mechanism includes a motor and an eccentric vibrator, the motor driving the eccentric vibrator to rotate to generate vibration. The brightness and/or color of the light may be adjustable, and in one possible implementation, different information may be represented by at least one of the brightness, the color, and the on/off state of the light, such as alarm information indicated by the light emitting red light.
Of course, the structure of the electronic device 300 shown in fig. 3 does not constitute a limitation on the electronic device in the embodiment of the present invention; in practical applications the electronic device may include more or fewer components than those shown in fig. 3, or may combine certain components.
The embodiments of the present invention provide a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the data processing method described in the above method embodiments.
The embodiment of the invention provides a processor for running a program, wherein the program, when run, implements the data processing method described in each of the above method embodiments.
The present invention also provides a computer program product which, when executed on a data processing apparatus, causes the data processing apparatus to implement the data processing method described in the above method embodiments.
The electronic device, the processor, the computer readable medium or the computer program product provided in the above embodiments of the present invention may be used to perform the corresponding methods provided above, and therefore, the advantages achieved by the electronic device, the processor, the computer readable medium or the computer program product may refer to the advantages in the corresponding methods provided above, and are not repeated herein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, does not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is merely illustrative of the preferred embodiments of the present invention and the technical principles applied, and is not intended to limit the present invention. Various modifications and variations will be apparent to those skilled in the art. The scope of the invention is not limited to the specific combination of the above technical features; it also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, solutions in which the above features are replaced with (but not limited to) technical features with similar functions disclosed in the present invention.

Claims (9)

1. A method of data processing, comprising:
creating a rendering texture renderTexture;
determining a plurality of virtual cameras with preset rendering attributes corresponding to a target character; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum depth value is transparent;
setting the texture attribute of each virtual camera to the renderTexture, respectively;
sequentially storing the game scene pictures rendered by the virtual cameras to the renderTexture in order of depth value from small to large;
creating a picture carrier for the image to be mapped;
obtaining a target rendering map by mapping the renderTexture to the picture carrier;
and saving the target rendering map under a target storage path, so that a game scene picture seen from the perspective of the target character can be displayed.
2. The data processing method of claim 1, wherein the obtaining a target rendering map by mapping the renderTexture to the picture carrier comprises:
reading pixel data from the renderTexture;
according to a predefined coding mode, coding the pixel data to obtain processed pixel data in a target format;
and assigning the processed pixel data to the picture carrier to obtain the target rendering map.
3. The data processing method according to claim 1, wherein the virtual camera whose depth value is not the minimum depth value, after capturing an initial game scene picture, performs pixel processing on the initial game scene picture in a predefined manner through a set texture shader, and outputs the game scene picture whose rendered background is transparent.
4. A data processing apparatus, the apparatus comprising: a first creation unit, a first determination unit, a first setting unit, a first saving unit, a second creation unit, a first obtaining unit, and a second saving unit; wherein:
the first creation unit is used for creating a rendering texture renderTexture;
the first determining unit is used for determining a plurality of virtual cameras with preset rendering attributes corresponding to a target character; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum depth value is transparent;
the first setting unit is used for setting the texture attribute of each virtual camera to the renderTexture;
the first saving unit is used for sequentially storing the game scene pictures rendered by the virtual cameras to the renderTexture in order of depth value from small to large;
the second creation unit is used for creating a picture carrier for the image to be mapped;
the first obtaining unit is used for obtaining a target rendering map by mapping the renderTexture to the picture carrier;
the second saving unit is configured to save the target rendering map under a target storage path, so that a game scene picture seen from the perspective of the target character can be displayed.
5. The data processing apparatus according to claim 4, wherein the first obtaining unit includes: a reading unit, a second obtaining unit, and a third obtaining unit; wherein:
the reading unit is used for reading pixel data from the renderTexture;
the second obtaining unit is configured to perform encoding processing on the pixel data according to a predefined encoding manner, so as to obtain processed pixel data in a target format;
and the third obtaining unit is used for assigning the processed pixel data to the picture carrier to obtain the target rendering map.
6. The data processing apparatus according to claim 4, wherein the virtual camera whose depth value is not the minimum depth value, after capturing an initial game scene picture, performs pixel processing on the initial game scene picture in a predefined manner through a set texture shader, and outputs the game scene picture whose rendered background is transparent.
7. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the data processing method according to any of claims 1 to 3.
8. A processor for running a program, wherein the program when run implements a data processing method as claimed in any one of claims 1 to 3.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data processing method of any of claims 1 to 3.
CN202111394510.8A 2021-11-23 2021-11-23 Data processing method, data processing device, computer readable medium, processor and electronic equipment Active CN114119797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111394510.8A CN114119797B (en) 2021-11-23 2021-11-23 Data processing method, data processing device, computer readable medium, processor and electronic equipment


Publications (2)

Publication Number Publication Date
CN114119797A CN114119797A (en) 2022-03-01
CN114119797B true CN114119797B (en) 2023-08-15

Family

ID=80440115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111394510.8A Active CN114119797B (en) 2021-11-23 2021-11-23 Data processing method, data processing device, computer readable medium, processor and electronic equipment

Country Status (1)

Country Link
CN (1) CN114119797B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103959340A (en) * 2011-12-07 2014-07-30 英特尔公司 Graphics rendering technique for autostereoscopic three dimensional display
CN111803932A (en) * 2020-07-22 2020-10-23 网易(杭州)网络有限公司 Skill release method for virtual character in game, terminal and storage medium
CN112274921A (en) * 2020-10-23 2021-01-29 完美世界(重庆)互动科技有限公司 Rendering method and device of game role, electronic equipment and storage medium
CN112652046A (en) * 2020-12-18 2021-04-13 完美世界(重庆)互动科技有限公司 Game picture generation method, device, equipment and storage medium
CN112891940A (en) * 2021-03-16 2021-06-04 天津亚克互动科技有限公司 Image data processing method and device, storage medium and computer equipment



Similar Documents

Publication Publication Date Title
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN111225150B (en) Method for processing interpolation frame and related product
KR101952983B1 (en) System and method for layering using tile-based renderers
US8952981B2 (en) Subpixel compositing on transparent backgrounds
WO2019228013A1 (en) Method, apparatus and device for displaying rich text on 3d model
CN105550973B (en) Graphics processing unit, graphics processing system and anti-aliasing processing method
CN109636885B (en) Sequential frame animation production method and system for H5 page
US20220417591A1 (en) Video rendering method and apparatus, electronic device, and storage medium
WO2023273114A1 (en) Dynamic resolution rendering method and apparatus, device, program, and readable medium
KR20190048360A (en) Method and apparatus for processing image
CN114742931A (en) Method and device for rendering image, electronic equipment and storage medium
CN108256072B (en) Album display method, apparatus, storage medium and electronic device
CN114119797B (en) Data processing method, data processing device, computer readable medium, processor and electronic equipment
CN114428573B (en) Special effect image processing method and device, electronic equipment and storage medium
CN112116719B (en) Method and device for determining object in three-dimensional scene, storage medium and electronic equipment
US11748911B2 (en) Shader function based pixel count determination
CN114693780A (en) Image processing method, device, equipment, storage medium and program product
WO2021184303A1 (en) Video processing method and device
CN113724364A (en) Setting method and device for realizing shielding by utilizing polygon and no rendering of body
JP2002369076A (en) Three-dimensional special effect device
CN112686984B (en) Rendering method, device, equipment and medium for sub-surface scattering effect
CN117274106B (en) Photo restoration method, electronic equipment and related medium
CN109803163B (en) Image display method and device and storage medium
CN114359081A (en) Liquid material dissolving method, device, electronic equipment and storage medium
US20120120197A1 (en) Apparatus and method for sharing hardware between graphics and lens distortion operation to generate pseudo 3d display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant