CN114119797A - Data processing method and device, computer readable medium, processor and electronic equipment - Google Patents

Data processing method and device, computer readable medium, processor and electronic equipment

Info

Publication number
CN114119797A
Authority
CN
China
Prior art keywords
game scene
target
rendertexture
picture
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111394510.8A
Other languages
Chinese (zh)
Other versions
CN114119797B (en)
Inventor
张桥
李京燕
李小海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Shi Guan Jin Yang Technology Development Co ltd
Original Assignee
Beijing Shi Guan Jin Yang Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Shi Guan Jin Yang Technology Development Co ltd filed Critical Beijing Shi Guan Jin Yang Technology Development Co ltd
Priority to CN202111394510.8A priority Critical patent/CN114119797B/en
Publication of CN114119797A publication Critical patent/CN114119797A/en
Application granted granted Critical
Publication of CN114119797B publication Critical patent/CN114119797B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/52: Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a data processing method, a data processing apparatus, a computer-readable medium, a processor, and an electronic device. The method creates a rendering texture (RenderTexture) and determines a plurality of virtual cameras that correspond to a target character and have preset rendering attributes; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum is transparent. The texture attribute of each virtual camera is set to the RenderTexture, the game scene pictures rendered by the virtual cameras are sequentially saved to the RenderTexture in ascending order of depth value, a picture carrier to be mapped is created, the RenderTexture is applied as a map to the picture carrier to obtain a target rendering map, and the target rendering map is saved under a target storage path. The invention can effectively display the game scene picture seen from the perspective of the target character, improve the user's game experience, and increase user stickiness.

Description

Data processing method and device, computer readable medium, processor and electronic equipment
Technical Field
The present invention relates to the field of computer science technologies, and in particular, to a data processing method and apparatus, a computer readable medium, a processor, and an electronic device.
Background
With the development of computer science technology, the development technology of online game application programs is continuously improved.
The program picture of an online game application may include a game scene picture and a user interface. The game scene picture may contain game characters, monsters, buildings, and the like. It will be appreciated that, from the perspective of a game character, the user interface is not visible.
However, when the game scene picture needs to be displayed from the perspective of a particular game character, conventional techniques cannot display it effectively.
Disclosure of Invention
In view of the above problems, the present invention provides a data processing method, apparatus, computer-readable medium, processor, and electronic device that overcome, or at least partially solve, the above problems. The technical solution is as follows:
a method of data processing, comprising:
creating a rendering texture (RenderTexture);
determining a plurality of virtual cameras that correspond to a target character and have preset rendering attributes, wherein, among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum is transparent;
setting the texture attribute of each of the virtual cameras to the RenderTexture;
sequentially saving the game scene pictures rendered by the virtual cameras to the RenderTexture in ascending order of depth value;
creating a picture carrier to be mapped;
applying the RenderTexture as a map to the picture carrier to obtain a target rendering map;
and saving the target rendering map under a target storage path.
Optionally, the applying the RenderTexture as a map to the picture carrier to obtain the target rendering map includes:
reading pixel data from the RenderTexture;
encoding the pixel data according to a predefined encoding mode to obtain processed pixel data in a target format;
and assigning the processed pixel data to the picture carrier to obtain the target rendering map.
Optionally, after a virtual camera whose depth value is not the minimum captures an initial game scene picture, the virtual camera performs predefined pixel processing on the initial game scene picture through a configured texture shader and outputs a rendered game scene picture with a transparent background.
A data processing apparatus, the apparatus comprising: the device comprises a first creating unit, a first determining unit, a first setting unit, a first saving unit, a second creating unit, a first obtaining unit and a second saving unit; wherein:
the first creating unit is configured to create a rendering texture (RenderTexture);
the first determining unit is configured to determine a plurality of virtual cameras that correspond to a target character and have preset rendering attributes; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum is transparent;
the first setting unit is configured to set the texture attribute of each of the virtual cameras to the RenderTexture;
the first saving unit is configured to sequentially save the game scene pictures rendered by the virtual cameras to the RenderTexture in ascending order of depth value;
the second creating unit is configured to create a picture carrier to be mapped;
the first obtaining unit is configured to apply the RenderTexture as a map to the picture carrier to obtain a target rendering map;
and the second saving unit is configured to save the target rendering map under a target storage path.
Optionally, the first obtaining unit includes: a reading unit, a second obtaining unit and a third obtaining unit; wherein:
the reading unit is used for reading pixel data from the RenderTexture;
the second obtaining unit is configured to encode the pixel data according to a predefined encoding mode to obtain processed pixel data in a target format;
and the third obtaining unit is configured to assign the processed pixel data to the picture carrier to obtain the target rendering map.
Optionally, after a virtual camera whose depth value is not the minimum captures an initial game scene picture, the virtual camera performs predefined pixel processing on the initial game scene picture through a configured texture shader and outputs a rendered game scene picture with a transparent background.
A computer-readable medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the data processing method described above.
A processor configured to run a program, wherein the program, when run, implements the data processing method described above.
An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the data processing methods described above.
The data processing method, apparatus, computer-readable medium, processor, and electronic device provided by this embodiment can create a rendering texture (RenderTexture) and determine a plurality of virtual cameras that correspond to a target character and have preset rendering attributes; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum is transparent. The texture attribute of each virtual camera is set to the RenderTexture, the game scene pictures rendered by the virtual cameras are sequentially saved to the RenderTexture in ascending order of depth value, a picture carrier to be mapped is created, the RenderTexture is applied as a map to the picture carrier to obtain a target rendering map, and the target rendering map is saved under a target storage path. In this way, the game scene pictures rendered by the virtual cameras are sent to the RenderTexture in depth-value order for effective fusion, and the fused game scene picture is effectively displayed by generating the target rendering map, so that the game scene picture seen from the perspective of the target character can be effectively displayed, improving the user's game experience and increasing user stickiness.
The foregoing description is only an overview of the technical solutions of the present invention. To make the technical means of the present invention clearer, and to make the above and other objects, features, and advantages of the present invention easier to understand, detailed embodiments of the invention are set forth below.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart illustrating a first data processing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a first data processing apparatus according to an embodiment of the present invention;
fig. 3 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As shown in fig. 1, the present embodiment proposes a first data processing method, which may include the steps of:
s101, creating rendering texture render texture;
it should be noted that the present invention can be applied to electronic devices, such as mobile phones and tablet computers.
Optionally, the present invention may define the width and height of the RenderTexture in advance when creating it; for example, they may default to the width and height of the screen.
S102, determining a plurality of virtual cameras that correspond to the target character and have preset rendering attributes; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum is transparent;
the virtual camera may be a camera that captures a game scene picture in the game from the perspective of the target character.
Specifically, the depth value of each virtual camera relative to the target character can be determined in advance, the virtual cameras can be sorted in descending order of depth value, and the virtual camera with the smallest depth value can be determined as the target virtual camera. The present invention can then set rendering attributes for each virtual camera other than the target virtual camera, so that the background of the game scene picture rendered by each of these cameras becomes transparent.
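The camera selection described above can be sketched as follows. This is an illustrative sketch, not code from the patent; the function and field names are hypothetical, and each camera is reduced to a (name, depth value) pair.

```python
# Illustrative sketch of target-camera selection: sort the virtual cameras by
# depth value, take the one with the smallest depth value as the target
# camera, and mark every other camera for the transparent-background
# rendering attribute.

def classify_cameras(cameras):
    """cameras: list of (name, depth_value) pairs.
    Returns (target camera name, names of cameras made transparent)."""
    ordered = sorted(cameras, key=lambda c: c[1], reverse=True)  # large to small
    target = ordered[-1][0]                  # smallest depth value
    transparent = [name for name, _ in ordered[:-1]]
    return target, transparent
```

For example, with cameras at depth values 2, 0, and 1, the camera at depth 0 becomes the target camera and the other two receive the transparent background.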
Optionally, after the virtual camera with the non-minimum depth value captures an initial game scene picture, the virtual camera performs pixel processing in a predefined manner on the initial game scene picture through a set texture shader, and outputs a rendered game scene picture with a transparent background.
Specifically, the method and the device can realize the setting of the rendering attribute of the virtual camera by setting the texture shader. The set texture shader may be applied to virtual cameras other than the target virtual camera.
The color components handled by the texture shader consist of R, G, B, and A (transparency) values. When configuring the texture shader, the tolerance value for color transparency can be predefined as 0.01, and the base color of the transparent color can be defined as black, namely (0, 0, 0, 1).
Specifically, after a virtual camera captures an initial game scene picture, the texture shader obtains the color to be presented for the initial game scene picture by sampling the map in the CG program (a shader program executed on the graphics card GPU): _MainTex is sampled at the input point, the sampled R, G, and B values are assigned to the output pixel color, and the A value is assigned to its transparency. The shader then works as follows: it finds the corresponding uv point on the map and colors it directly with that color information, then subtracts the R value of the transparent base color from the R value of the color to be presented to obtain deltaR, and likewise for G and B. The R, G, and B values of the transparent base color are all 0, and its A value is 1. It should be noted that if deltaR, deltaG, and deltaB are each smaller than the corresponding color-transparency tolerance value, the pixel is treated as transparent; otherwise it is treated as a regular, non-transparent color.
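The per-pixel transparency test above can be sketched in a few lines. This is a minimal illustration of the described logic, not the patent's actual shader code; function names are hypothetical and colors are (R, G, B, A) tuples with components in [0, 1].

```python
# Illustrative sketch of the tolerance-based transparency test: a pixel is
# treated as transparent when each of its R, G, B components is within the
# predefined tolerance of the transparent base color (0, 0, 0, 1).

TOLERANCE = 0.01                          # predefined color-transparency tolerance
TRANSPARENT_BASE = (0.0, 0.0, 0.0, 1.0)   # black base color of the transparent color

def is_transparent(color):
    """color: an (R, G, B, A) tuple with components in [0, 1]."""
    delta_r = color[0] - TRANSPARENT_BASE[0]
    delta_g = color[1] - TRANSPARENT_BASE[1]
    delta_b = color[2] - TRANSPARENT_BASE[2]
    return delta_r < TOLERANCE and delta_g < TOLERANCE and delta_b < TOLERANCE

def shade(color):
    """Output RGBA: alpha 0 for a transparent pixel, unchanged otherwise."""
    r, g, b, a = color
    return (r, g, b, 0.0) if is_transparent(color) else (r, g, b, a)
```

Under this test, a black background pixel (0, 0, 0, 1) is shaded to (0, 0, 0, 0), while an ordinary color such as (0.5, 0.2, 0.1, 1) passes through unchanged.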
Optionally, the present invention may set a background color for each of the virtual cameras described above. Black (0, 0, 0, 1) may be used: subtracting the transparent base color from this black yields (0, 0, 0, 0), which satisfies the color-transparency tolerance, so the pixel is treated as transparent. That is, when there is no target rendering object within a virtual camera's visual range, the game scene picture rendered by that camera displays as fully transparent, with nothing in it.
Optionally, the present invention can also solve the problem of a white frame flashing while a game scene picture loads. This is an initial-value problem, which the present invention solves by setting the initial value to transparent: the frame's pixels can be initialized with an alpha of 0, a colorless and fully transparent effect, which is kept until the target color to be displayed has been computed and presented.
S103, setting the texture attribute of each virtual camera to the RenderTexture;
Specifically, setting the texture attribute of each virtual camera to the RenderTexture allows the game scene picture output by each virtual camera to be saved to the RenderTexture.
S104, sequentially saving the game scene pictures rendered by the virtual cameras to the RenderTexture in ascending order of depth value;
It should be noted that, when the game scene pictures rendered by the virtual cameras are saved to the RenderTexture in ascending order of depth value, a picture rendered earlier sits behind a picture rendered later, so the later picture can cover the earlier one. Specifically, only the game scene picture rendered by the virtual camera with the smallest depth value has a background; the backgrounds of the pictures rendered by the other virtual cameras are transparent, containing only the appearance of their target rendering objects. Therefore, a later-rendered game scene picture does not completely cover an earlier-rendered one: only its target rendering object covers the corresponding area of the earlier picture. In this way, the present invention effectively fuses the different contents of the game scene pictures rendered by the virtual cameras.
S105, creating a picture carrier to be mapped;
The picture carrier may be a picture template (a blank board): when a color is assigned to it, it displays that color. For example, the picture carrier may be of type Texture2D. It should be noted that the present invention may define the width, height, and color depth (such as RGB16, RGB24, or RGB32) of the picture carrier in advance when creating it.
S106, applying the RenderTexture as a map to the picture carrier to obtain a target rendering map;
The target rendering map is the map obtained after the content of the RenderTexture is assigned to the picture carrier.
It should be noted that the target rendering map can display the game scene picture obtained by effectively fusing the contents of the game scene pictures rendered by the virtual cameras, thereby effectively displaying the game scene picture seen from the perspective of the target character, improving the user's game experience, and increasing user stickiness.
And S107, saving the target rendering map under the target storage path.
The target storage path may be set or adjusted by a technician or a user according to actual conditions, which is not limited in the present invention.
Specifically, after the target rendering map is obtained, it can be saved in the data storage space corresponding to the target storage path.
It should be noted that when the target character moves in the game or its viewing direction changes, the game scene picture rendered by each corresponding virtual camera changes accordingly; the game scene picture saved in the RenderTexture then changes as well, and the game scene picture displayed in the target rendering map is updated synchronously. The game scene picture displayed by the target rendering map therefore updates in step with the target character's movement or change of viewing direction, displaying in real time the game scene picture seen from the target character's perspective.
The data processing method provided by this embodiment can create a rendering texture (RenderTexture) and determine a plurality of virtual cameras that correspond to a target character and have preset rendering attributes; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum is transparent. The texture attribute of each virtual camera is set to the RenderTexture, the game scene pictures rendered by the virtual cameras are sequentially saved to the RenderTexture in ascending order of depth value, a picture carrier to be mapped is created, the RenderTexture is applied as a map to the picture carrier to obtain a target rendering map, and the target rendering map is saved under a target storage path. In this way, the game scene pictures rendered by the virtual cameras are sent to the RenderTexture in depth-value order for effective fusion, and the fused game scene picture is effectively displayed by generating the target rendering map, so that the game scene picture seen from the perspective of the target character can be effectively displayed, improving the user's game experience and increasing user stickiness.
Based on fig. 1, the present embodiment proposes a second data processing method. In the method, step S106 may include steps S1061, S1062, S1063, and S1064; wherein:
S1061, reading pixel data from the RenderTexture;
it should be noted that the present invention can activate the RenderTexture after the RenderTexture is created and the texture attribute of each virtual camera is set to the RenderTexture.
Specifically, after the RenderTexture is activated by assigning it to the RenderTexture.active property, the pixel data of the game scene pictures rendered by the different virtual cameras can be read from it.
In particular, the present invention can read the pixel data using the ReadPixels interface. It should be noted that the pixel data read may be the pixel data within the region of the Texture2D, and the read pixel data may be stored as texture data.
S1062, coding the pixel data according to a predefined coding mode;
s1063, obtaining the processed pixel data in the target format;
specifically, the present invention can output the pixel data read from the render texture, that is, the pixel data read from the target texture of each virtual camera, in a certain encoding format, to the processed pixel data in a certain format (for example, in the JPG or PNG format).
And S1064, assigning the processed pixel data to a picture carrier to obtain a target rendering map.
Optionally, the present invention can use the RenderTexture.active property to activate the current render target; setting RenderTexture.active is equivalent to calling Graphics.SetRenderTarget. The active render texture can be changed or queried while executing custom graphics effects; if all of a camera's rendering should go into a texture, Camera.targetTexture can be used instead.
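Steps S1061 through S1064 and S107 can be sketched in Unity C# as follows. This is an illustrative sketch, not the patent's code: it assumes Unity's standard RenderTexture/Texture2D API (RenderTexture.active, ReadPixels, EncodeToPNG), the class and method names are hypothetical, and the code only runs inside a Unity project.

```csharp
using System.IO;
using UnityEngine;

// Illustrative sketch of steps S1061-S1064 and S107: read pixel data from
// the RenderTexture, encode it to a target format (PNG here), and save the
// target rendering map under the target storage path.
public static class RenderMapSaver
{
    public static void Save(RenderTexture renderTexture, string targetPath)
    {
        // S1061: activate the RenderTexture and read its pixels into a
        // Texture2D "picture carrier" of the same size (RGB24 color depth).
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = renderTexture;
        var carrier = new Texture2D(renderTexture.width, renderTexture.height,
                                    TextureFormat.RGB24, false);
        carrier.ReadPixels(
            new Rect(0, 0, renderTexture.width, renderTexture.height), 0, 0);
        carrier.Apply();                       // S1064: assign pixels to the carrier
        RenderTexture.active = previous;       // restore the previous render target

        // S1062-S1063: encode the pixel data to the target format.
        byte[] encoded = carrier.EncodeToPNG();

        // S107: save the target rendering map under the target storage path.
        File.WriteAllBytes(targetPath, encoded);
    }
}
```

EncodeToJPG could be substituted for EncodeToPNG where the JPG target format mentioned above is wanted.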
It should be noted that the present invention can implement step S106 by executing steps S1061, S1062, S1063, and S1064, thereby generating the target rendering map, effectively displaying the game scene picture seen from the perspective of the target character, improving the user's game experience, and increasing user stickiness.
The data processing method provided by this embodiment can generate the target rendering map, effectively display the game scene picture seen from the perspective of the target character, improve the user's game experience, and increase user stickiness.
Corresponding to the steps shown in fig. 1, the present embodiment proposes a first data processing apparatus as shown in fig. 2. The apparatus may include: a first creating unit 101, a first determining unit 102, a first setting unit 103, a first saving unit 104, a second creating unit 105, a first obtaining unit 106, and a second saving unit 107; wherein:
a first creating unit 101 for creating a rendering texture RenderTexture;
it should be noted that the present invention can be applied to electronic devices, such as mobile phones and tablet computers.
Alternatively, the present invention may define the width and height of the RenderTexture in advance when creating the RenderTexture, for example, the width and height of the screen may be defaulted.
A first determining unit 102, configured to determine a plurality of virtual cameras that correspond to a target character and have preset rendering attributes; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum is transparent;
the virtual camera may be a camera that captures a game scene picture in the game from the perspective of the target character.
Specifically, the depth value of each virtual camera relative to the target character can be determined in advance, the virtual cameras can be sorted in descending order of depth value, and the virtual camera with the smallest depth value can be determined as the target virtual camera. The present invention can then set rendering attributes for each virtual camera other than the target virtual camera, so that the background of the game scene picture rendered by each of these cameras becomes transparent.
Optionally, after a virtual camera whose depth value is not the minimum captures an initial game scene picture, the virtual camera performs predefined pixel processing on the initial game scene picture through a configured texture shader and outputs a rendered game scene picture with a transparent background.
Specifically, the method and the device can realize the setting of the rendering attribute of the virtual camera by setting the texture shader. The set texture shader may be applied to virtual cameras other than the target virtual camera.
A first setting unit 103, configured to set the texture attribute of each virtual camera to the RenderTexture;
specifically, the texture attribute of each virtual camera is set as the RenderTexture, so that the game scene pictures output by each virtual camera can be stored in the RenderTexture.
A first saving unit 104, configured to sequentially save the game scene pictures rendered by the virtual cameras to the RenderTexture in ascending order of depth value;
It should be noted that, when the game scene pictures rendered by the virtual cameras are saved to the RenderTexture in ascending order of depth value, a picture rendered earlier sits behind a picture rendered later, so the later picture can cover the earlier one. Specifically, only the game scene picture rendered by the virtual camera with the smallest depth value has a background; the backgrounds of the pictures rendered by the other virtual cameras are transparent, containing only the appearance of their target rendering objects. Therefore, a later-rendered game scene picture does not completely cover an earlier-rendered one: only its target rendering object covers the corresponding area of the earlier picture. In this way, the present invention effectively fuses the different contents of the game scene pictures rendered by the virtual cameras.
A second creating unit 105, configured to create a picture carrier to be mapped;
the picture carrier may be a picture template (whiteboard), and when a color is given to the picture carrier, the picture carrier can display a corresponding color.
A first obtaining unit 106, configured to obtain a target rendering map, and apply a RenderTexture map to the picture carrier;
the target rendering map can be a map obtained after the RenderTexture map value is carried in a picture.
It should be noted that the target rendering map may display the game scene pictures, which are obtained by effectively fusing the contents of the game scene pictures rendered by the virtual camera, so as to effectively display the game scene pictures seen from the view angle of the target character, improve the game experience of the user, and increase the user viscosity.
And a second saving unit 107, configured to save the target rendering map to a position below the target storage path.
The target storage path may be set or adjusted by a technician or a user according to actual conditions, which is not limited in the present invention.
Specifically, the target rendering map can be stored in the data storage space corresponding to the target storage path after the target rendering map is obtained.
It should be noted that when the target character moves in the game or its viewing direction changes, the game scene picture rendered by each corresponding virtual camera changes accordingly; the game scene picture saved in the RenderTexture then changes as well, and the game scene picture displayed in the target rendering map is updated synchronously. The game scene picture displayed by the target rendering map therefore updates in step with the target character's movement or change of viewing direction, displaying in real time the game scene picture seen from the target character's perspective.
The data processing apparatus provided by this embodiment can send the game scene pictures rendered by the virtual cameras to the RenderTexture in depth-value order for effective fusion, and effectively display the fused game scene picture by generating the target rendering map, so that the game scene picture seen from the perspective of the target character can be effectively displayed, improving the user's game experience and increasing user stickiness.
Based on fig. 2, the present embodiment proposes a second data processing apparatus. In the apparatus, the first obtaining unit 106 includes: a reading unit, a second obtaining unit and a third obtaining unit; wherein:
a reading unit configured to read pixel data from a RenderTexture;
the second obtaining unit is used for encoding the pixel data according to a predefined encoding mode to obtain processed pixel data in a target format;
and the third obtaining unit is used for assigning the processed pixel data to the picture carrier to obtain the target rendering map.
It should be noted that the present invention can activate the RenderTexture after the RenderTexture is created and the texture attribute of each virtual camera is set to the RenderTexture.
Specifically, after the RenderTexture is activated, the pixel data in the game scene pictures rendered by the different virtual cameras can be read from the RenderTexture by assigning the RenderTexture to the RenderTexture.active interface.
In particular, the present invention can read the pixel data using the ReadPixels interface. It should be noted that the pixel data read by the present invention may be pixel data in a Texture2D region, and the present invention may store the read pixel data as texture data.
Specifically, the present invention can output the pixel data read from the RenderTexture, that is, the pixel data read from the target texture of each virtual camera, in a certain encoding format as processed pixel data in a certain target format (for example, the JPG or PNG format).
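The read-and-encode path described above (reading from the active RenderTexture with ReadPixels, then encoding to a target format such as PNG or JPG) could be sketched in Unity C# as follows. The method name, the choice of PNG, and the texture format are illustrative assumptions; only the UnityEngine and System.IO calls are real APIs, and the code requires the Unity runtime:

```csharp
using System.IO;
using UnityEngine;

public static class RenderTextureReader
{
    // Reads the pixel data of a RenderTexture into a Texture2D region,
    // encodes it with a chosen encoding mode (PNG here), and writes the
    // processed pixel data under the given storage path.
    public static void SaveToPng(RenderTexture source, string path)
    {
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = source;   // assign source to RenderTexture.active

        var texture = new Texture2D(source.width, source.height,
                                    TextureFormat.RGBA32, false);
        texture.ReadPixels(new Rect(0, 0, source.width, source.height), 0, 0);
        texture.Apply();

        RenderTexture.active = previous; // restore the previously active target

        byte[] encoded = texture.EncodeToPNG();  // target format: PNG
        File.WriteAllBytes(path, encoded);
    }
}
```

EncodeToJPG could be substituted for EncodeToPNG where the JPG target format is preferred; JPG discards the alpha channel, so PNG is the natural choice when the fused picture's transparency must survive.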
The data processing device provided by this embodiment can realize generation of the target rendering map, effectively display the game scene picture seen from the view angle of the target character, improve the user's game experience, and increase user stickiness.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for implementing some embodiments of the invention is shown. The electronic device shown in fig. 3 is only an example and should not limit the functions and scope of use of the embodiments of the present invention.
As shown in fig. 3, the electronic device 300 may include a processor 301, a memory 302, a communication interface 303, an input unit 304, an output unit 305, and a communication bus 306. Wherein the processor 301 and the memory 302 are connected to each other by a communication bus 306. A communication interface 303, an input unit 304 and an output unit 305 are also connected to the communication bus 306.
The communication interface 303 may be an interface of a communication module, such as an interface of a GSM module. The communication interface 303 may be used to obtain data or instructions sent by other devices. The communication interface 303 is also used to transmit data or instructions to other devices.
In the embodiment of the present invention, the processor 301 may be a Central Processing Unit (CPU), an Application-Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or another programmable logic device.
In one possible implementation, the memory 302 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created during use of the computer, such as user data, user access data, audio data, and the like.
Further, the memory 302 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or another non-volatile solid-state storage device.
The processor 301 may call a program stored in the memory 302, and in particular, the processor 301 may execute the data processing method of any of the above embodiments.
The memory 302 is used for storing one or more programs; a program may include program code, and the program code includes computer operation instructions. In the embodiment of the present invention, the memory 302 stores at least a program for realizing the following functions:
creating a rendering texture RenderTexture;
determining a plurality of virtual cameras which correspond to the target role and have preset rendering attributes; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum depth value is transparent;
respectively setting the texture attribute of each virtual camera to the RenderTexture;
sequentially storing the game scene pictures rendered by the virtual cameras to the RenderTexture according to the order of the depth values from small to large;
creating a picture carrier onto which a map is to be pasted;
applying the RenderTexture as a map to the picture carrier to obtain a target rendering map;
and saving the target rendering map to a target storage path.
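The stored steps above can be sketched in Unity C# as a single setup routine. This is a hedged illustration under stated assumptions: the texture size, the use of a UI RawImage as the "picture carrier", and all field names are choices made here, not fixed by the text, and the code requires the Unity runtime:

```csharp
using System.Linq;
using UnityEngine;
using UnityEngine.UI;

// Hedged sketch of the stored program steps. Assumes each camera whose depth
// is not the minimum clears only the depth buffer and renders a transparent
// background, as the text specifies, so its picture composites over the
// pictures of lower-depth cameras in the shared RenderTexture.
public class SceneCaptureSetup : MonoBehaviour
{
    public Camera[] virtualCameras;   // cameras corresponding to the target role
    public RawImage pictureCarrier;   // picture carrier the map is pasted onto

    void Start()
    {
        // Step 1: create a RenderTexture (size chosen arbitrarily here).
        var renderTexture = new RenderTexture(1024, 1024, 24);

        // Steps 2-4: point each camera's texture attribute at the shared
        // RenderTexture. Unity renders cameras in increasing depth order,
        // so their pictures are fused from the smallest depth value upward;
        // the explicit OrderBy below only makes that ordering visible.
        foreach (var cam in virtualCameras.OrderBy(c => c.depth))
            cam.targetTexture = renderTexture;

        // Steps 5-6: apply the RenderTexture as a map to the picture carrier.
        pictureCarrier.texture = renderTexture;

        // Step 7 (saving to a target storage path) would read the fused
        // pixels back and encode them, e.g. via ReadPixels and EncodeToPNG.
    }
}
```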
In one possible implementation, the electronic device 300 may include: one or more processors 301;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors 301, cause the one or more processors 301 to implement the data processing methods described above in the method embodiments.
The electronic device 300 may further include an input unit 304; the input unit 304 may include at least one of a touch sensing unit for sensing a touch event on the touch display panel, a keyboard, a mouse, a camera, a microphone, and the like.
The output unit 305 may include at least one of a display, a speaker, a vibration mechanism, a lamp, and the like. The display may comprise a display panel, such as a touch display panel. In one possible case, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. The vibration mechanism can make the electronic device 300 vibrate; in one possible implementation, the vibration mechanism includes a motor and an eccentric vibrator, and the motor drives the eccentric vibrator to rotate so as to generate vibration. The brightness and/or color of the lamp can be adjusted; in one possible implementation, different information can be conveyed through at least one of the on/off state, brightness, and color of the lamp; for example, alarm information can be conveyed by the lamp emitting red light.
Of course, the structure of the electronic device 300 shown in fig. 3 does not limit the electronic device in the embodiment of the present invention; in practical applications, the electronic device may include more or fewer components than those shown in fig. 3, or some components may be combined.
Embodiments of the present invention provide a computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the data processing method described in the above method embodiments.
The embodiment of the invention provides a processor for running a program, wherein the program, when run, implements the data processing method described in the above method embodiments.
The present invention also provides a computer program product which, when executed on a data processing apparatus, causes the data processing apparatus to implement the data processing method described in the above method embodiments.
In addition, the electronic device, the processor, the computer-readable medium, and the computer program product provided in the foregoing embodiments of the present invention can all be used for executing the corresponding methods provided above; for their beneficial effects, reference may be made to the beneficial effects of the corresponding methods, which are not described herein again.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description covers only the preferred embodiments of the present invention and the technical principles applied, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. The scope of the present invention is not limited to the specific combinations of the above features, and also covers other technical solutions formed by arbitrary combinations of the above features or their equivalents without departing from the spirit of the present invention, for example, solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present invention.

Claims (9)

1. A data processing method, comprising:
creating a rendering texture RenderTexture;
determining a plurality of virtual cameras which correspond to the target role and have preset rendering attributes; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum depth value is transparent;
setting the texture attribute of each of the virtual cameras to the RenderTexture, respectively;
sequentially saving the game scene pictures rendered by the virtual cameras to the RenderTexture according to the order of the depth values from small to large;
creating a picture carrier onto which a map is to be pasted;
obtaining a target rendering map by applying the RenderTexture as a map to the picture carrier;
and saving the target rendering map to a target storage path.
2. The data processing method according to claim 1, wherein said applying the RenderTexture map to the picture carrier to obtain a target rendering map comprises:
reading pixel data from the RenderTexture;
coding the pixel data according to a predefined coding mode to obtain processed pixel data in a target format;
and assigning the processed pixel data to the picture carrier to obtain the target rendering map.
3. The data processing method of claim 1, wherein, after capturing an initial game scene picture, the virtual camera whose depth value is not the minimum depth value performs pixel processing on the initial game scene picture in a predefined manner through a set texture shader, and outputs the game scene picture with a transparent background.
4. A data processing apparatus, characterized in that the apparatus comprises: a first creating unit, a first determining unit, a first setting unit, a first saving unit, a second creating unit, a first obtaining unit and a second saving unit; wherein:
the first creating unit is used for creating a rendering texture RenderTexture;
the first determining unit is used for determining a plurality of virtual cameras which correspond to the target role and have preset rendering attributes; among the virtual cameras, the background of the game scene picture rendered by each virtual camera whose depth value is not the minimum depth value is transparent;
the first setting unit is configured to set the texture attribute of each of the virtual cameras to the RenderTexture, respectively;
the first saving unit is used for sequentially saving the game scene pictures rendered by the virtual cameras to the RenderTexture according to the order of the depth values from small to large;
the second creating unit is used for creating a picture carrier onto which a map is to be pasted;
the first obtaining unit is used for applying the RenderTexture as a map to the picture carrier to obtain a target rendering map;
and the second saving unit is used for saving the target rendering map to a target storage path.
5. The data processing apparatus according to claim 4, wherein the first obtaining unit includes: a reading unit, a second obtaining unit and a third obtaining unit; wherein:
the reading unit is used for reading pixel data from the RenderTexture;
the second obtaining unit is configured to perform encoding processing on the pixel data according to a predefined encoding manner, so as to obtain processed pixel data in a target format;
and the third obtaining unit is used for assigning the processed pixel data to the picture carrier to obtain the target rendering map.
6. The data processing apparatus according to claim 4, wherein, after capturing an initial game scene picture, the virtual camera whose depth value is not the minimum depth value performs pixel processing on the initial game scene picture in a predefined manner through a set texture shader, and outputs the game scene picture with a transparent background.
7. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements a data processing method as claimed in any one of claims 1 to 3.
8. A processor for running a program, wherein the program when running implements the data processing method of any of claims 1 to 3.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a data processing method as recited in any of claims 1 to 3.
CN202111394510.8A 2021-11-23 2021-11-23 Data processing method, data processing device, computer readable medium, processor and electronic equipment Active CN114119797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111394510.8A CN114119797B (en) 2021-11-23 2021-11-23 Data processing method, data processing device, computer readable medium, processor and electronic equipment


Publications (2)

Publication Number Publication Date
CN114119797A (en) 2022-03-01
CN114119797B CN114119797B (en) 2023-08-15

Family

ID=80440115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111394510.8A Active CN114119797B (en) 2021-11-23 2021-11-23 Data processing method, data processing device, computer readable medium, processor and electronic equipment

Country Status (1)

Country Link
CN (1) CN114119797B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103959340A (en) * 2011-12-07 2014-07-30 Intel Corporation Graphics rendering technique for autostereoscopic three dimensional display
CN111803932A (en) * 2020-07-22 2020-10-23 NetEase (Hangzhou) Network Co., Ltd. Skill release method for virtual character in game, terminal and storage medium
CN112274921A (en) * 2020-10-23 2021-01-29 Perfect World (Chongqing) Interactive Technology Co., Ltd. Rendering method and device of game role, electronic equipment and storage medium
CN112652046A (en) * 2020-12-18 2021-04-13 Perfect World (Chongqing) Interactive Technology Co., Ltd. Game picture generation method, device, equipment and storage medium
CN112891940A (en) * 2021-03-16 2021-06-04 Tianjin Yake Interactive Technology Co., Ltd. Image data processing method and device, storage medium and computer equipment


Also Published As

Publication number Publication date
CN114119797B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
US20230053462A1 (en) Image rendering method and apparatus, device, medium, and computer program product
CN110969685A (en) Customizable rendering pipeline using rendering maps
WO2010000126A1 (en) Method and system for generating interactive information
CN112891946B (en) Game scene generation method and device, readable storage medium and electronic equipment
EP4290464A1 (en) Image rendering method and apparatus, and electronic device and storage medium
WO2023197762A1 (en) Image rendering method and apparatus, electronic device, computer-readable storage medium, and computer program product
RU2680355C1 (en) Method and system of removing invisible surfaces of a three-dimensional scene
CN107767437B (en) Multilayer mixed asynchronous rendering method
CN112714357A (en) Video playing method, video playing device, electronic equipment and storage medium
CN114742931A (en) Method and device for rendering image, electronic equipment and storage medium
CN115082609A (en) Image rendering method and device, storage medium and electronic equipment
CN114842120A (en) Image rendering processing method, device, equipment and medium
US9153193B2 (en) Primitive rendering using a single primitive type
CN114428573B (en) Special effect image processing method and device, electronic equipment and storage medium
CN114119797B (en) Data processing method, data processing device, computer readable medium, processor and electronic equipment
CN108010095B (en) Texture synthesis method, device and equipment
US9230508B2 (en) Efficient feedback-based illumination and scatter culling
US11748911B2 (en) Shader function based pixel count determination
CN112116719B (en) Method and device for determining object in three-dimensional scene, storage medium and electronic equipment
CN114693780A (en) Image processing method, device, equipment, storage medium and program product
CN114241172A (en) Three-dimensional model display method and device based on holographic projection and computer equipment
CN113724364A (en) Setting method and device for realizing shielding by utilizing polygon and no rendering of body
CN111145358A (en) Image processing method, device and hardware device
JP2002369076A (en) Three-dimensional special effect device
CN112686984B (en) Rendering method, device, equipment and medium for sub-surface scattering effect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant