CN115170712A - Data processing method, data processing apparatus, storage medium, and electronic apparatus - Google Patents

Data processing method, data processing apparatus, storage medium, and electronic apparatus

Info

Publication number
CN115170712A
CN115170712A
Authority
CN
China
Prior art keywords
data, rendering, compressed, compressed data, type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210713381.2A
Other languages
Chinese (zh)
Inventor
梁哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Neteasy Brilliant Network Technology Co ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202210713381.2A priority Critical patent/CN115170712A/en
Publication of CN115170712A publication Critical patent/CN115170712A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/308 Details of the user interface
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)

Abstract

The invention discloses a data processing method, a data processing apparatus, a storage medium, and an electronic apparatus. The method comprises the following steps: acquiring initial rendering data, wherein the initial rendering data is used for rendering a game scene picture to be displayed in a graphical user interface of a mobile terminal; dividing the initial rendering data into a first part of data and a second part of data based on the data type of the initial rendering data, wherein the first part of data is compressible data in the initial rendering data and the second part of data is compression-prohibited data in the initial rendering data; compressing the first part of data to obtain compressed data; determining target rendering data using the compressed data and the second part of data; and performing a rendering operation based on the target rendering data. The invention solves the technical problem of poor picture rendering performance.

Description

Data processing method, data processing apparatus, storage medium, and electronic apparatus
Technical Field
The present invention relates to the field of computers, and in particular, to a data processing method, apparatus, storage medium, and electronic apparatus.
Background
Currently, deferred rendering is generally implemented using memoryless buffers (Memoryless Buffers); however, the mobile terminal limits the number of such buffers, and this limited buffer count causes the technical problem of poor picture rendering performance.
For the problem of poor picture rendering performance, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present invention provide a data processing method, an apparatus, a storage medium, and an electronic apparatus, so as to at least solve the technical problem of poor image rendering performance.
According to an embodiment of the present invention, there is provided a data processing method, including: acquiring initial rendering data, wherein the initial rendering data is used for rendering a game scene picture to be displayed in a graphical user interface of a mobile terminal; dividing the initial rendering data into a first part of data and a second part of data based on the data type of the initial rendering data, wherein the first part of data is compressible data in the initial rendering data and the second part of data is compression-prohibited data in the initial rendering data; compressing the first part of data to obtain compressed data; determining target rendering data using the compressed data and the second part of data; and performing a rendering operation based on the target rendering data.
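Read as an algorithm, the claimed steps reduce to a type-based split followed by selective compression and a merge. Below is a minimal Python sketch with hypothetical field names and a placeholder compressor; the claim itself does not fix any concrete data layout.

```python
# Hypothetical field sets; the patent's detailed description suggests
# this split but does not mandate these exact names.
COMPRESSIBLE = {"Normal", "Diffuse", "CustomData", "Roughness"}   # first part
INCOMPRESSIBLE = {"Lighting", "Depth", "InShadow", "MaterialID"}  # second part

def split_by_type(initial):
    """Divide initial rendering data by data type (claimed step 2)."""
    first = {k: v for k, v in initial.items() if k in COMPRESSIBLE}
    second = {k: v for k, v in initial.items() if k in INCOMPRESSIBLE}
    return first, second

def compress(first):
    """Placeholder: real compression merges channels / re-encodes normals."""
    return dict(first)

def determine_target(compressed, second):
    """Combine compressed and compression-prohibited data (claimed step 4)."""
    return {**compressed, **second}
```

A rendering operation would then consume the result of `determine_target`; the GPU-side pass is omitted here.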
Optionally, compressing the first part of data to obtain compressed data, including: dividing the first part of data into first type data and second type data, wherein the first type data is used for representing corresponding normal data, and the second type data is used for representing data except the normal data in the first part of data; and compressing the first type data and the second type data to obtain compressed data.
Optionally, compressing the first type of data to obtain compressed data includes: and converting the first type data into planar data to obtain first compressed data, wherein the channel number of the first compressed data is less than that of the first type data.
Optionally, compressing the second type of data to obtain compressed data, including: determining data to be saved in the second type data based on the pixel position of the second type data; and storing the data to be stored to obtain second compressed data.
Optionally, determining data to be saved in the second type of data based on the pixel position of the second type of data includes: determining the data content corresponding to the first position as data to be stored in the second type data in response to the pixel position being the first position; and determining the data content corresponding to the second position as the data to be saved in the second type data in response to the pixel position being the second position.
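One way to realize this position-dependent selection is a checkerboard (interleaved) layout in which pixels at "first" and "second" positions store different data contents; the figures referenced later (inter-pixel storage, the "01" pixel-position array) suggest such an interleaving, though the exact scheme is not spelled out here. A hedged sketch, with the parity assignment as an assumption:

```python
def value_to_save(x, y, content_a, content_b):
    """Pick which of two data contents the pixel at (x, y) stores:
    'first position' pixels keep content_a, 'second position' pixels keep
    content_b (hypothetical parity-based assignment)."""
    return content_a if (x + y) % 2 == 0 else content_b

def pack_checkerboard(width, height, plane_a, plane_b):
    """Interleave two full-resolution planes into one, halving storage."""
    return [[value_to_save(x, y, plane_a[y][x], plane_b[y][x])
             for x in range(width)] for y in range(height)]
```

Each output pixel keeps only one of the two original values; the missing value is later reconstructed from neighbors, as described in the restoration steps below.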
Optionally, determining target rendering data using the compressed data and the second portion of data, comprising: restoring the second compressed data to obtain rendering data corresponding to the second compressed data; and obtaining target rendering data based on the rendering data, the first compressed data and the second part of data.
Optionally, the restoring the second compressed data to obtain rendering data corresponding to the second compressed data includes: determining compressed data horizontally adjacent to the second compressed data and compressed data vertically adjacent to the second compressed data based on an objective function for determining a difference between pixel values corresponding to the adjacent compressed data; and restoring the second compressed data based on the compressed data adjacent in the horizontal direction and the compressed data adjacent in the vertical direction to obtain rendering data corresponding to the second compressed data.
Optionally, restoring the second compressed data based on the compressed data adjacent in the horizontal direction and the compressed data adjacent in the vertical direction to obtain the rendering data corresponding to the second compressed data includes: determining a difference between the luminance data in the second compressed data and the luminance data in the horizontally adjacent compressed data to obtain a first difference; determining a difference between the luminance data in the second compressed data and the luminance data in the vertically adjacent compressed data to obtain a second difference; and performing interpolation calculation on the horizontally adjacent compressed data and the vertically adjacent compressed data based on the first difference and the second difference to obtain the rendering data corresponding to the second compressed data.
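This restoration resembles edge-directed demosaicing: the missing value is interpolated along whichever direction (horizontal or vertical) shows the smaller luminance difference, so interpolation avoids crossing an edge. A sketch under that reading; the luminance inputs and the tie-breaking rule are assumptions, not claim language:

```python
def restore_pixel(center_luma, left, right, up, down):
    """left/right/up/down are (luma, value) tuples of neighboring
    compressed pixels. Interpolate along the direction whose luminance
    differs least from the center pixel (first vs. second difference)."""
    d_h = abs(center_luma - left[0]) + abs(center_luma - right[0])
    d_v = abs(center_luma - up[0]) + abs(center_luma - down[0])
    if d_h <= d_v:
        return (left[1] + right[1]) / 2   # horizontal neighbors agree better
    return (up[1] + down[1]) / 2          # vertical neighbors agree better
```

For example, a pixel whose horizontal neighbors match its luminance but whose vertical neighbors straddle an edge would be reconstructed from the horizontal pair only.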
Optionally, a difference between the pixel value of the second compressed data and the pixel value of compressed data at an adjacent position satisfies a pixel threshold.
According to an embodiment of the present invention, there is also provided a data processing apparatus, including: an acquisition unit configured to acquire initial rendering data, wherein the initial rendering data is used for rendering a game scene picture to be displayed in a graphical user interface of a mobile terminal; a dividing unit configured to divide the initial rendering data into a first part of data and a second part of data based on the data type of the initial rendering data, wherein the first part of data is compressible data in the initial rendering data and the second part of data is compression-prohibited data in the initial rendering data; a processing unit configured to compress the first part of data to obtain compressed data; a determining unit configured to determine target rendering data using the compressed data and the second part of data; and an execution unit configured to execute a rendering operation based on the target rendering data.
According to an embodiment of the present invention, there is further provided a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to execute the data processing method in any one of the above methods when the computer program is executed.
There is further provided, according to an embodiment of the present invention, an electronic apparatus including a memory and a processor, the memory storing a computer program therein, and the processor being configured to execute the computer program to perform the data processing method in any one of the above.
In the embodiment of the invention, initial rendering data is obtained, wherein the initial rendering data is used for rendering a game scene picture to be displayed in a graphical user interface of a mobile terminal; the initial rendering data is divided into a first part of data and a second part of data based on the data type of the initial rendering data, wherein the first part of data is compressible data in the initial rendering data and the second part of data is compression-prohibited data in the initial rendering data; the first part of data is compressed to obtain compressed data; target rendering data is determined using the compressed data and the second part of data; and a rendering operation is performed based on the target rendering data. That is to say, in the embodiment of the present invention, the initial rendering data is classified according to its data type to obtain the first part of data and the second part of data, and the first part of data is compressed. The compression of the initial rendering data is thereby completed and the amount of data to be stored is reduced, achieving the technical effect of improved picture rendering performance and solving the technical problem of poor picture rendering performance.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention and do not constitute a limitation of the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a data processing method according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a data processing method according to one embodiment of the invention;
FIG. 3 is a schematic diagram of a spherical projection method according to one embodiment of the present invention;
FIG. 4 is a schematic diagram of inter-pixel storage according to one embodiment of the present invention;
FIG. 5 is a schematic diagram of an array of 01 pixel locations according to one embodiment of the invention;
FIG. 6 is a diagram illustrating the operation result of the CPU according to one embodiment of the present invention;
FIG. 7 is a schematic diagram of pixel-by-pixel holding of data according to one embodiment of the invention;
FIG. 8 is a schematic diagram of rendering completion according to one embodiment of the invention;
FIG. 9 is a diagram illustrating homogeneous texture rendering according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating rendering of different textures in accordance with one embodiment of the present invention;
fig. 11 is a block diagram of a data processing apparatus according to an embodiment of the present invention;
fig. 12 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions of the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present application are explained as follows:
Render buffer (Buffer): a memory region used during rendering, where different stored data correspond to different regions; it may include a color buffer (Color Buffer) for storing material data and color data, a depth buffer (Depth Buffer) for storing depth data, and the like;
Render target (RT): a map used to record a GPU rendering result, i.e., a block of memory on the GPU; for example, GBufferA, GBufferB, GBufferC, and GBufferD may denote different render targets;
Deferred rendering (Deferred Rendering): a rendering strategy that records material data into a plurality of render targets and then reads the material data back in a single pass to render the final illumination result;
Forward rendering (Forward Rendering): another rendering strategy, which directly acquires all data of a model and computes all illumination at once without storing a temporary set of render targets; in theory it has low bandwidth, but its lighting effects are limited;
Bandwidth: the volume of reads and writes of the depth and color buffers during rendering; it increases power consumption and device heating and is an important index for performance evaluation;
Geometric buffer (GBuffer for short): in deferred rendering, the collective term for the set of render targets that temporarily store material data;
Memoryless buffer (Memoryless Buffer): a cache type specific to mobile GPUs, which has extremely low bandwidth cost but is limited in number;
Low-frequency image data: data whose content does not change sharply with position on the image;
High-frequency data: data whose content changes relatively sharply with position on the image.
In accordance with one embodiment of the present invention, an embodiment of a data processing method is provided. It should be noted that the steps illustrated in the flowchart of the figure may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps illustrated or described may be performed in an order different from that presented herein.
The method embodiments may be performed in a mobile terminal, a computer terminal or a similar computing device. Taking the Mobile terminal as an example, the Mobile terminal may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet device (MID for short), a PAD, a game console, and the like. Fig. 1 is a block diagram of a hardware structure of a mobile terminal of a data processing method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microprocessor (MCU), a programmable logic device (FPGA), a neural Network Processor (NPU), a Tensor Processor (TPU), an Artificial Intelligence (AI) type processor, etc.) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106, an input/output device 108, and a display device 110 for communication functions. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is only an illustration and is not intended to limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as computer programs corresponding to the data processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer programs stored in the memory 104, that is, implementing the data processing method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices via a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The inputs in the input output Device 108 may come from a plurality of Human Interface Devices (HIDs). For example: keyboard and mouse, game pad, other special game controller (such as steering wheel, fishing rod, dance mat, remote controller, etc.). In addition to providing input functionality, some human interface devices may also provide output functionality, such as: force feedback and vibration of the gamepad, audio output of the controller, etc.
The display device 110 may be, for example, a head-up display (HUD), a touch screen type Liquid Crystal Display (LCD), and a touch display (also referred to as a "touch screen" or "touch display screen"). The liquid crystal display may enable a user to interact with a user interface of the mobile terminal. In some embodiments, the mobile terminal has a Graphical User Interface (GUI) with which a user can interact by touching finger contacts and/or gestures on a touch-sensitive surface, where the human interaction functionality optionally includes the following interactions: executable instructions for creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, emailing, call interfacing, playing digital video, playing digital music, and/or web browsing, etc., for performing the above-described human-computer interaction functions, are configured/stored in one or more processor-executable computer program products or readable storage media.
In a possible implementation manner, an embodiment of the present invention provides a data processing method, and fig. 2 is a flowchart of the data processing method according to an embodiment of the present invention, as shown in fig. 2, the method includes the following steps:
step S202, initial rendering data are obtained, wherein the initial rendering data are used for rendering a game scene picture to be displayed in a graphic user interface of the mobile terminal.
In the technical solution provided in the above step S202 of the present invention, the initial rendering data may be obtained from a storage location; for example, it may be read from a geometric buffer (GBuffer), thereby achieving the purpose of obtaining the initial rendering data. The initial rendering data may be used to render a game scene picture to be displayed in a graphical user interface of a mobile terminal, and may include material data such as normal data (Normal), color data (Diffuse), custom data (CustomData) set according to different materials, material type data (MaterialID), roughness data (Roughness), shadow data (InShadow), depth data (Depth), and lighting data (Lighting).
Optionally, rendering technologies can be divided into deferred rendering and forward rendering. Deferred rendering is a current trend in real-time rendering: it achieves better lighting and material effects, is more flexible in content production, and is widely applied in most console and PC games. Deferred rendering requires that material data be recorded into a plurality of render targets, and the render buffers that store these data may be referred to as geometric buffers (GBuffers); the purpose of obtaining the initial rendering data may be achieved by reading the material data in the GBuffers.
For example, the initial rendering data to be stored in the necessary geometric buffer (GBuffer), 160 bits in total, may include: normal data (Normal), 3 × 8 bit; color data (Diffuse), 3 × 8 bit; custom data (CustomData) set according to the material, 3 × 8 bit, which may be, for example, metallic data (Metallic) stored in the x channel, thickness data (Thickness) stored in the y channel, or the like; material type data (MaterialID), 8 bit; roughness data (Roughness), 8 bit; shadow data (InShadow), 8 bit; depth data (Depth), 16 bit; and lighting data (Lighting), 3 × 16 bit.
It should be noted that the above initial rendering data is only an example, and is not limited herein.
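As a quick sanity check, the per-field widths listed above do total 160 bits, which is why the layout cannot fit a 128-bit per-pixel memoryless-buffer budget without compression:

```python
# Bit widths as listed in the example GBuffer layout above.
GBUFFER_BITS = {
    "Normal":     3 * 8,
    "Diffuse":    3 * 8,
    "CustomData": 3 * 8,
    "MaterialID": 8,
    "Roughness":  8,
    "InShadow":   8,
    "Depth":      16,
    "Lighting":   3 * 16,
}

total_bits = sum(GBUFFER_BITS.values())  # 160 bits per pixel, uncompressed
```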
Step S204, based on the data type of the initial rendering data, dividing the initial rendering data into a first part of data and a second part of data, where the first part of data is compressible data in the initial rendering data, and the second part of data is compression-prohibited data in the initial rendering data.
In the technical solution provided in step S204 of the present invention, the data type of the obtained initial rendering data is determined, and the initial rendering data is divided based on that data type to obtain a first part of data and a second part of data. The data types may be compressible data and compression-prohibited data (non-compressible data). The first part of data may be compressible data, for example low-frequency data such as color data (Diffuse); the second part of data is the compression-prohibited data in the initial rendering data that needs to be stored accurately, for example lighting data (Lighting), depth data (Depth), shadow data (InShadow), and the like.
Optionally, determining a data type of the initial rendering data, and when the initial rendering data is compressible data, dividing the initial rendering data into a first part of data; when the initial rendering data is the non-compressible data, dividing the initial rendering data into a second part of data, for example, when the initial rendering data is the low-frequency data, dividing the initial rendering data into a first part of data; when the initial rendering data is depth data, the initial rendering data may be determined as the second partial data.
In this embodiment, since the lighting data, the depth data, and the shadow data need to be stored accurately, these three types of data cannot be compressed; likewise, the material type data is marker data and cannot be compressed. Therefore, in the embodiment of the present invention, the normal data, the color data, the custom data, and the roughness data can be compressed.
Optionally, in the embodiment of the present invention, the initial rendering data is divided by using the data type to obtain the compressible data and the incompressible data, and the compressible data is compressed, so that the purpose of reducing the data amount is achieved, and the rendering effect on the game is further improved.
And step S206, compressing the first part of data to obtain compressed data.
In the technical solution provided by step S206 of the present invention, the compressible first part of data is compressed to obtain compressed data, where compressing the first part of data can be used to reduce the number of data, for example, channel merging is performed by using the characteristics of the data itself, so as to achieve the purpose of compressing the data; it is also possible to compress data or the like by utilizing image storage characteristics.
At present, in order to use deferred rendering on a mobile phone while controlling bandwidth and power consumption, memoryless buffers are usually used to complete the rendering of the game scene picture, and the game engine limits the number of GBuffers accordingly. However, mobile phones impose strict limits on GBuffers: for example, the per-pixel data of each memoryless buffer cannot exceed 128 bits, and some phones support only four render targets (Render Targets); so to use deferred rendering on most phones, the game engine must be designed to be downward compatible. On some phone architectures, the single-pixel storage of 4 GBuffers exceeds 128 bits, so the memoryless-buffer function cannot be fully used, and picture rendering performance on those phones is poor. To solve the above problem, in the embodiment of the present invention the initial rendering data is classified to obtain the compressible first part of data, and the purpose of compressing the initial rendering data is achieved by compressing the first part of data.
In step S208, target rendering data is determined using the compressed data and the second portion of data.
In the technical solution provided by the foregoing step S208 of the present invention, the target rendering data is determined using the compressed data and the second part of data (the uncompressed data), where the target rendering data is used to render the game scene picture to be displayed in the graphical user interface of the mobile terminal.
Optionally, the target rendering data comprises the compressed data and the second portion of data, and thus, using the compressed data and the second portion of data, the target rendering data may be determined.
In step S210, a rendering operation is performed based on the target rendering data.
In the technical solution provided in step S210 of the present invention, the target rendering data is used to render the game scene picture to be displayed in the graphical user interface of the mobile terminal; a rendering operation is performed based on the target rendering data, thereby obtaining the game scene picture.
Through steps S202 to S210, initial rendering data is obtained, wherein the initial rendering data is used for rendering a game scene picture to be displayed in a graphical user interface of a mobile terminal; the initial rendering data is divided into a first part of data and a second part of data based on its data type, wherein the first part of data is compressible data in the initial rendering data and the second part of data is compression-prohibited data in the initial rendering data; the first part of data is compressed to obtain compressed data; target rendering data is determined using the compressed data and the second part of data; and a rendering operation is performed based on the target rendering data. That is to say, the initial rendering data is classified according to its data type to obtain the first part of data and the second part of data, and the first part of data is compressed; the compression of the initial rendering data is thereby completed, the amount of stored data is reduced, the technical effect of improved picture rendering performance is achieved, and the technical problem of poor picture rendering performance is solved.
The following further describes embodiments of the present invention.
As an alternative embodiment, in step S206, the compressing the first part of data to obtain compressed data includes: dividing the first part of data into first type data and second type data, wherein the first type data is used for representing corresponding normal data, and the second type data is used for representing data except the normal data in the first part of data; and compressing the first type data and the second type data to obtain compressed data.
In this embodiment, since the normal data may be compressed by using the image storage characteristic, and other data cannot be compressed by using the image storage characteristic, the first part of data may be divided into a first type of data and a second type of data, where the first type of data may be used to represent corresponding normal data, and the second type of data may be used to represent data other than the normal data in the first part of data, for example, color data in low-frequency data.
Optionally, the first part of data may include data that can be channel-merged using the characteristics of the data itself and data that is compressed using the characteristics of image storage, and the first part of data is divided to obtain data of a first type that can be compressed using the characteristics of image storage and data that can be channel-merged using the characteristics of the data itself.
Optionally, since low-frequency data does not change significantly within a range, data channels may be merged according to the characteristics of the data itself; for example, different data may be stored in one channel or in adjacent pixels according to the characteristics of the material data. On the other hand, the normal is a unit vector, so its compression coding can be performed through two channels. The first part of data is therefore divided into first type data and second type data, different compression modes are selected for the different types of data, and the purpose of improving the data compression efficiency is achieved.
As an alternative embodiment, in step S206, performing compression processing on the first type data to obtain compressed data includes: and converting the first type data into planar data to obtain first compressed data, wherein the channel number of the first compressed data is less than that of the first type data.
In this embodiment, the first type data may be normal data (Normal), which generally occupies three channels when stored; since the normal data itself is normalized data, the third channel may be omitted from the data storage perspective, so that the first type data is converted into plane data having at most two channels, and the first compressed data is obtained.
In the embodiment of the invention, the purpose of omitting the third channel and compressing the data is realized by converting the three-dimensional data of the first type data into planar data, so that the number of channels of the obtained first compressed data is necessarily less than the number of channels of the first type data; otherwise, the purpose of compressing the data would not be achieved.
Alternatively, to better preserve accuracy, the normal data may be processed by using a stereographic projection strategy (Stereographic Projection) to obtain the plane information of the normal data, yielding data of at most two channels.
For example, the normal data may be set as any point P on the sphere, the point P is connected to the reference point N, and the projection point P' on the plane circle is the converted data.
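As an illustrative sketch (not taken from the patent itself), the projection described above and its inverse can be written as follows, assuming the reference point N = (0, 0, 1) and the plane z = 0 as the projection plane:

```python
def encode_normal(n):
    """Stereographically project a unit normal n = (x, y, z) from the
    reference point N = (0, 0, 1) onto the plane z = 0, keeping two channels."""
    x, y, z = n
    return (x / (1.0 - z), y / (1.0 - z))

def decode_normal(p):
    """Invert the projection: recover the three-channel unit normal."""
    u, v = p
    d = u * u + v * v
    return (2.0 * u / (d + 1.0), 2.0 * v / (d + 1.0), (d - 1.0) / (d + 1.0))

# Round trip: a unit normal survives the 3-channel -> 2-channel -> 3-channel trip.
n = (0.6, 0.0, -0.8)
restored = decode_normal(encode_normal(n))
```

Note that the projection is undefined at the reference point itself (z = 1), so an implementation would have to reserve or slightly perturb that one direction.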
As an alternative embodiment, in step S206, the compressing the second type data to obtain compressed data includes: determining data to be saved in the second type data based on the pixel position of the second type data; and storing the data to be stored to obtain second compressed data.
In this embodiment, a pixel position of the second type of data is determined, the data to be stored in the second type of data is determined based on the pixel position, and the data to be stored is stored to obtain second compressed data, where the data to be stored may be set in advance according to operation habits or needs; for example, when the pixel position corresponds to flag 0, the corresponding data to be stored may be GBufferC1, and when the pixel position corresponds to flag 1, the corresponding data to be stored may be GBufferC2; the pixel position may be denoted by texPos.
Optionally, in the same material, the low frequency data similar to the color data in the second type of data may further include: metal data, roughness data, and custom data (CustomData) with a lower utilization, and this low frequency data may be stored together with the color data.
Optionally, different data to be stored is set for different pixel positions, the pixel position (texPos) of the second type data is determined, the pixel position of the data can be determined in the form of coordinates, so that the data to be stored in the second type data is determined, and the data to be stored is stored to obtain second compressed data.
As an alternative embodiment, determining data to be saved in the second type of data based on the pixel position of the second type of data includes: determining the data content corresponding to the first position as data to be stored in the second type data in response to the pixel position being the first position; and determining the data content corresponding to the second position as the data to be saved in the second type data in response to the pixel position being the second position.
In this embodiment, the pixel position of the second type data is determined; in response to the pixel position being the first position, the data content set to be saved at the first position is determined as the data to be saved in the second type data, and in response to the pixel position being the second position, the data content set to be saved at the second position is determined as the data to be saved in the second type data, wherein the first position may be a coordinate position whose corresponding coordinate is identified as odd (or even), the second position may be a coordinate position whose corresponding coordinate is identified as even (or odd), and the first position and the second position are adjacent.
Alternatively, different areas in the texture coordinates (uv coordinates) may be represented by two-dimensional positive integer data; for example, different areas may be represented by 0 and 1, where 0 and 1 may correspond to the data in GBufferC1 and GBufferC2, respectively, for alternate storage. The purpose of determining the pixel position of the currently processed data can be achieved by the number of the area corresponding to the data; it should be noted that the above numbers are only used for illustration and are not specifically limited here.
For example, the current pixel position is input, whether the coordinate of the pixel position of the second type data is odd or even (i.e., maps to 0 or 1) is judged according to a modulo operation, and the data content to be stored corresponding to the second type data is determined according to the parity of the two-dimensional coordinate: if the two-dimensional coordinate corresponding to the pixel position is 0, the pixel stores the content in GBufferC1, and if the two-dimensional coordinate corresponding to the pixel position is 1, the pixel stores the content in GBufferC2.
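The parity check described above can be sketched as follows; the exact checkerboard pattern ((x + y) mod 2) and which parity maps to which buffer are illustrative assumptions, not fixed by the text:

```python
def gbuffer_index(tex_pos):
    """Map a pixel position (texPos) to GBufferC1 or GBufferC2 in a
    checkerboard pattern via a modulo operation. Which parity maps to
    which buffer is an illustrative choice."""
    x, y = tex_pos
    return 1 if (x + y) % 2 == 0 else 2

# Horizontally and vertically adjacent pixels always land in the other
# buffer, so each pixel's missing half can later be restored from its
# neighbours.
assert gbuffer_index((0, 0)) != gbuffer_index((1, 0))
assert gbuffer_index((0, 0)) != gbuffer_index((0, 1))
```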
Alternatively, since the luminance data in the second type data is high-frequency data, the full amount of the luminance data may be retained; since the color change is low-frequency data, the accuracy of color data storage can be reduced, and in the embodiment of the invention, the spatial domain compression is performed by using the mapping space pixel storage, so that the purpose of compressing the data is realized.
In the related art, compressing color data generally means converting the data into color spaces such as HSV (Hue, Saturation, Value), Lab, HSL (Hue, Saturation, Lightness), and YCbCr.
In the embodiment of the present invention, color data of the three primary colors (Red Green Blue, abbreviated as RGB) may be converted into the YCbCr color space and encoded; after encoding, the luminance data stored in the Y channel is stored in the x channel of the buffer area GBufferC, and the blue chrominance component (Cb) channel and the red chrominance component (Cr) channel of the color data are subjected to pixel-skipping compression together with the subsequent data.
Alternatively, two sets of pixel-compressed data may be set to be alternately stored in GBufferC, for example, GBufferC1 and GBufferC2, where it may be set that the luminance data (Diffuse.Y), the blue chrominance component (Diffuse.Cb), the roughness data (Roughness), and the custom data (CustomData.y) are stored in GBufferC1, which may be:
GBufferC1:Diffuse.Y、Diffuse.Cb、Roughness、CustomData.y
it may be set that the luminance data (Diffuse.Y), the red chrominance component (Diffuse.Cr), the metal data (CustomData.x (Metallic)) in the custom data, and the custom data (CustomData.z) are stored in GBufferC2, which may be:
GBufferC2:Diffuse.Y、Diffuse.Cr、CustomData.x(Metallic)、CustomData.z
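A sketch of how one pixel might be packed under these two layouts; the RGB-to-YCbCr coefficients (full-range BT.601-style) are an illustrative assumption, since the text does not fix the exact conversion:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601-style RGB -> YCbCr conversion (illustrative choice).
    Inputs and outputs are in [0, 1]; chroma is offset by 0.5."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 * (b - y) / (1.0 - 0.114) + 0.5
    cr = 0.5 * (r - y) / (1.0 - 0.299) + 0.5
    return y, cb, cr

def pack_gbuffer_c(diffuse_rgb, roughness, custom, slot):
    """Pack one pixel into the GBufferC1 or GBufferC2 layout listed above:
    C1 = (Diffuse.Y, Diffuse.Cb, Roughness,    CustomData.y)
    C2 = (Diffuse.Y, Diffuse.Cr, CustomData.x, CustomData.z)"""
    y, cb, cr = rgb_to_ycbcr(*diffuse_rgb)
    if slot == 1:
        return (y, cb, roughness, custom[1])
    return (y, cr, custom[0], custom[2])

# A mid-gray diffuse has neutral chroma (Cb = Cr = 0.5).
px = pack_gbuffer_c((0.5, 0.5, 0.5), 0.3, (0.0, 0.7, 0.2), 1)
```

Note that Diffuse.Y appears in both layouts, matching the statement that the full amount of luminance data is retained at every pixel while only the chroma halves alternate.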
in this embodiment, the data content to be saved at the first position and the data content to be saved at the second position are alternately saved, and the data content is selectively saved by determining the pixel position of the data, thereby achieving the purpose of compressing the data.
It should be noted that, the contents of the data to be stored in GBufferC1 and GBufferC2 are only for illustration, and are not limited specifically here.
As an alternative embodiment, step S208, determining target rendering data by using the compressed data and the second partial data, includes: restoring the second compressed data to obtain rendering data corresponding to the second compressed data; and obtaining target rendering data based on the rendering data, the first compressed data and the second part of data.
In this embodiment, since the mobile phone usually uses a memoryless buffer (Memoryless Buffer) to store the second compressed data, and the data stored in the memoryless buffer is only locally available in the on-chip memory (Memory) and cannot be directly read, the second compressed data needs to be restored to obtain the rendering data corresponding to the second compressed data, and the target rendering data can then be obtained based on the rendering data, the first compressed data, and the second partial data.
In this embodiment, because the data in GBufferC1 and GBufferC2 are alternately stored, the pixels surrounding a GBufferC1 pixel store GBufferC2, and thus the operation of restoring the second compressed data is to restore the missing partial cache data from the surrounding geometric cache.
As an optional embodiment, performing reduction processing on the second compressed data to obtain rendering data corresponding to the second compressed data includes: determining compressed data adjacent to the second compressed data in the horizontal direction and compressed data adjacent to the second compressed data in the vertical direction based on an objective function, wherein the objective function is used for determining a difference value between pixel values corresponding to the adjacent compressed data; and restoring the second compressed data based on the compressed data adjacent in the horizontal direction and the compressed data adjacent in the vertical direction to obtain rendering data corresponding to the second compressed data.
In this embodiment, since the mobile phone usually uses the memoryless buffer area to store the second compressed data, the stored data can only be used locally in the on-chip memory and cannot be directly read, so there is the problem that the adjacent stored data cannot be determined directly. In the embodiment of the present invention, however, the restoration of the second compressed data can be completed only by using the adjacent stored data. An objective function can therefore be used to determine the compressed data adjacent to the second compressed data in the horizontal direction and the compressed data adjacent to the second compressed data in the vertical direction, and the second compressed data is restored based on these horizontally and vertically adjacent compressed data to obtain the rendering data corresponding to the second compressed data, where the objective function can be used to determine the difference between the pixel values corresponding to the adjacent compressed data and may be the partial derivative function (Partial Derivative) provided as a hardware characteristic of the graphics processor.
Alternatively, the operations of the graphics processor are committed to run in thread groups; at the pixel stage, threads are run with the quad (2 × 2 pixels) as the smallest organization unit, and the graphics processor can calculate the derivative by taking the difference between pixel values within a quad, for example, dFdx(p(x, y)) = p(x+1, y) - p(x, y) or dFdy(p(x, y)) = p(x, y+1) - p(x, y), where p(x, y) may be the pixel value of the second compressed data, p(x+1, y) may be the pixel value of the data to the right of p(x, y), p(x, y+1) may be the pixel value of the data below p(x, y), and so on.
Optionally, by calculating the partial derivative provided by the graphics processor on the second compressed data, the difference between the second compressed data and the results of the neighboring pixels can be obtained, so as to determine the GBufferC1 or GBufferC2 corresponding to the neighboring pixels.
For example, let InGBufferC be the second compressed data. When the second compressed data is the top left corner data, Neighbor_H may be the pixel adjacent to the second compressed data in the horizontal direction (the left pixel or the right pixel), and Neighbor_V may be the pixel adjacent to the second compressed data in the vertical direction (the lower pixel or the upper pixel); whether the neighbor is on the left, right, upper, or lower side is determined by the position of the second compressed data, which is not specifically limited here. Neighbor_H may be calculated by adding the partial derivative of the second compressed data to the second compressed data, by the following formula:
Neighbor_H=InGBufferC+ddx(InGBufferC)
Neighbor_V may be the second compressed data plus the partial derivative of the second compressed data, and may be calculated by the following formula:
Neighbor_V=InGBufferC+ddy(InGBufferC)
For example, when the second compressed data is the upper right corner data, Neighbor_H may be calculated by subtracting the partial derivative of the second compressed data from the second compressed data, by the following formula:
Neighbor_H=InGBufferC-ddx(InGBufferC)
Neighbor_V may be the second compressed data minus the partial derivative of the second compressed data, and may be calculated by the following formula:
Neighbor_V=InGBufferC-ddy(InGBufferC)
For example, when the second compressed data is the lower left corner data, Neighbor_H may be calculated by subtracting the partial derivative of the second compressed data from the second compressed data, by the following formula:
Neighbor_H=InGBufferC-ddx(InGBufferC)
Neighbor_V may be the second compressed data plus the partial derivative of the second compressed data, and may be calculated by the following formula:
Neighbor_V=InGBufferC+ddy(InGBufferC)
For example, when the second compressed data is the bottom right corner data, Neighbor_H may be calculated by adding the partial derivative of the second compressed data to the second compressed data, by the following formula:
Neighbor_H=InGBufferC+ddx(InGBufferC)
Neighbor_V may be the second compressed data minus the partial derivative of the second compressed data, and may be calculated by the following formula:
Neighbor_V=InGBufferC-ddy(InGBufferC)
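The four sign rules above can be simulated on the CPU with coarse quad derivatives; this is only a model of the GPU behavior (the actual sign convention depends on the hardware's ddx/ddy strategy, so the choices here are illustrative assumptions):

```python
def neighbors_in_quad(quad, x, y):
    """Simulate coarse ddx/ddy over a 2x2 quad (quad[y][x], x and y in {0, 1})
    and recover the horizontal/vertical neighbour of pixel (x, y) using the
    sign rules of the text: add the derivative on the low side, subtract it
    on the high side."""
    p = quad[y][x]
    ddx = quad[y][1] - quad[y][0]  # coarse: same value for both pixels in a row
    ddy = quad[1][x] - quad[0][x]  # coarse: same value for both pixels in a column
    neighbor_h = p + ddx if x == 0 else p - ddx
    neighbor_v = p + ddy if y == 0 else p - ddy
    return neighbor_h, neighbor_v

quad = [[1.0, 4.0],
        [9.0, 16.0]]
# Top-left pixel: Neighbor = InGBufferC + ddx/ddy recovers its right and
# lower neighbours exactly.
h, v = neighbors_in_quad(quad, 0, 0)
```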
Optionally, in the embodiment of the present invention, the pixel attributes at adjacent positions are close, and the same attribute changes smoothly across the quad; the difference p(x+1, y+1) - p(x, y+1) is therefore close to p(x+1, y) - p(x, y), so that it can be deduced that p(x+1, y+1) is approximately p(x, y+1) + p(x+1, y) - p(x, y).
It should be noted that the addition and subtraction operations above depend on different calculation strategies for the partial derivative function (ddx), and the calculation method of ddx is not specifically limited herein, and the calculation method of the adjacent pixel may be changed according to the calculation method of ddx.
As an optional embodiment, performing reduction processing on the second compressed data based on the compressed data adjacent in the horizontal direction and the compressed data adjacent in the vertical direction to obtain rendering data corresponding to the second compressed data includes: determining a difference value between the brightness data in the second compressed data and the brightness data in the compressed data adjacent to the horizontal direction to obtain a first difference value; determining a difference value between the brightness data in the second compressed data and the brightness data in the compressed data adjacent to the vertical direction to obtain a second difference value; and performing interpolation calculation on the compressed data adjacent in the horizontal direction and the compressed data adjacent in the vertical direction based on the first difference and the second difference to obtain rendering data corresponding to the second compressed data.
In this embodiment, an objective function is used to achieve the steps of determining compressed data horizontally adjacent to the second compressed data and vertically adjacent to the second compressed data, determining a difference between luminance data in the second compressed data and luminance data in the horizontally adjacent compressed data to obtain a first difference, and determining a difference between luminance data in the second compressed data and luminance data in the vertically adjacent compressed data to obtain a second difference, wherein the first difference may be represented by biasL and the second difference may be represented by biasU.
Optionally, in the embodiment of the present invention, pixel-by-pixel storage in a checkerboard form may be performed, so that only the data content that needs to be stored in GBufferC1 is stored in the GBufferC1, the data content that needs to be stored in GBufferC2 is missing, and the missing data content in GBufferC1 may be obtained by performing linear interpolation on the calculated adjacent compressed data.
Alternatively, the difference between the luminance data (Current.x) in the second compressed data and the luminance data (Left.x) in the horizontally adjacent compressed data is calculated to obtain the first difference (biasL), and the difference between the luminance data (Current.x) in the second compressed data and the luminance data (Up.x) in the vertically adjacent compressed data is calculated to obtain the second difference (biasU); the vertically adjacent data weighted by the second difference is added to the horizontally adjacent data weighted by the first difference, and the result is divided by the sum of the first difference and the second difference, thereby obtaining the missing data content in the second compressed data, which may be calculated by the following formula:
(Up*biasU+Left*biasL)/(biasU+biasL)
In order to avoid the problem of dividing by 0, the absolute values of the first difference and the second difference need to be taken, and a lower bound of 0.01 is applied when the first difference or the second difference approaches 0, which may be:
biasU=max(0.01,abs(Current.x-Up.x))
biasL=max(0.01,abs(Current.x-Left.x))
wherein Current may be used to represent the second compressed data, Current.x represents the luminance data, and the other components (e.g., y, z, w) may represent GBufferC1 or GBufferC2; Up and Left may represent the data above and the data to the left, respectively, where the x component represents luminance data and the other components (e.g., y, z, w) may represent the GBufferC1 or GBufferC2 data that the second compressed data is missing.
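Putting these formulas together, the restoration step can be sketched as follows (the names Current/Up/Left follow the text; the tuple layout, with luminance in the first component, is an illustrative assumption):

```python
def restore_missing(current, up, left):
    """Restore the missing GBufferC half of `current` from its Up and Left
    neighbours, weighting them by the luminance difference stored in the x
    component (first element), per (Up*biasU + Left*biasL)/(biasU + biasL)
    with a 0.01 lower bound to avoid dividing by zero."""
    bias_u = max(0.01, abs(current[0] - up[0]))
    bias_l = max(0.01, abs(current[0] - left[0]))
    return tuple((u * bias_u + l * bias_l) / (bias_u + bias_l)
                 for u, l in zip(up, left))

current = (0.5, 0.0, 0.0, 0.0)   # x = luminance; y/z/w would hold this pixel's own half
up      = (0.4, 0.2, 0.4, 0.6)   # neighbours carry the half this pixel is missing
left    = (0.6, 0.8, 0.0, 0.2)
# Equal luminance differences (0.1 each) make this a plain average of Up and Left.
restored = restore_missing(current, up, left)
```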
Optionally, by the first difference and the second difference, interpolation calculation is performed on the compressed data adjacent in the horizontal direction and the compressed data adjacent in the vertical direction, so that reduction of the second compressed data is completed, and the purpose of obtaining complete rendering data corresponding to the second compressed data is achieved.
As an alternative embodiment, the difference between the pixel value of the second compressed data and the pixel value of the adjacent position compressed data satisfies the pixel threshold.
In the embodiment of the present invention, the difference between the pixel value of the second compressed data and the pixel value of the adjacent compressed data satisfies the pixel threshold, where the pixel threshold may be a value set according to actual requirements.
Optionally, the problem of material loss exists when different materials are stored in different pixels, and the difference between the pixel value of the second compressed data and the pixel value of the compressed data at the adjacent position satisfies the pixel threshold, so that the condition of material loss can be avoided.
It should be noted that, since the color components of a color are low-frequency data, they do not change obviously on the image; if the color components were stored at a lower resolution, much space could be saved, but if the resolutions of the geometric buffers (GBuffer) are different, an uncontrollable abnormal effect occurs on the boundary, such as color banding (Banding) and aliasing, and thus the original resolution may be used for storage in the embodiment of the present invention. Optionally, when a sudden high-frequency change occurs at the boundary between different materials, the resulting picture rendering abnormality can be solved by using anti-aliasing.
In the embodiment of the invention, the initial rendering data is classified according to the data type to obtain the first part of data and the second part of data, and the first part of data is compressed, so that the compression of the initial rendering data is completed, the storage capacity of the data is improved, the technical effect of improving the picture rendering performance is further realized, and the technical problem of poor picture rendering performance is solved.
The technical solutions of the embodiments of the present invention are further described below by way of examples in connection with preferred embodiments. Specifically, a compact geometric buffer (MicroGBuffer) format design scheme for deferred rendering on a mobile phone is described.
At present, game rendering technologies can be divided into deferred rendering and forward rendering. Deferred rendering is the current trend of real-time rendering: it can achieve better light and material effects and is more flexible in production, and it is widely applied in most console and computer games at present. Although mobile phone games are gradually transitioning to deferred rendering, some problems still exist.
Deferred rendering needs to record the material data in multiple render targets; the rendering cache (Buffer) storing this data may be called a geometric cache (GBuffer), and the final illumination result is obtained by uniformly reading the material data in the GBuffer and rendering it.
In order to use deferred rendering on a mobile phone and address the bandwidth and power consumption constraints of the mobile phone, a memoryless buffer (Memoryless Buffer) is generally used to complete the rendering of the game scene picture, while the number of GBuffers is limited by the game engine. Since mobile phones have strict limits on the number of GBuffers, the design of the game engine needs to be downward compatible in order to use deferred rendering on most mobile phones; for example, on some mobile phones the per-pixel storage cannot exceed 128 bits, and some mobile phones only support four render targets (RenderTarget). Therefore, the following problems exist for the memoryless technology on mobile phones: for the graphics processor architecture of some mobile phones, the single-pixel storage of 4 GBuffers exceeds 128 bits, and the memoryless GBuffer function cannot be fully used, so rendering performance is poor on some mobile phones; moreover, the four-channel GBuffer contains a lot of low-frequency data, and if general material data is stored wastefully, the utilization rate of the GBuffer is reduced, so there is the technical problem of a large data volume of rendering data.
In the related art, a four-render-target (4RT) GBuffer is already the limit of a game engine (Unity), and the deferred pipeline (Deferred) of a 3D rendering game engine (for example, UE4) needs to be reconstructed on a mobile phone. In order to increase the amount of rendering data that can be stored, on one hand, since low-frequency data does not change obviously within a range, interleaved-pixel or single-pixel compression can achieve a better effect; therefore, data channels are merged according to the characteristics of the data itself, and different data can, for example, be stored in one channel or in adjacent pixels according to the characteristics of the material data. On the other hand, the image storage characteristics are used to compress certain data types; for example, the normal is a unit vector, so it can be compression-encoded through two channels. Through these two aspects, the embodiment of the present invention achieves the purpose of further compressing the GBuffer, for example, compressing the GBuffer to use only two or three texture caches to store data (Multi-Render Target, MRT2 or MRT3), while compressing the single-pixel storage space to within 128 bits, thereby reducing the data amount of the render target and improving the rendering effect of the game.
The following further describes embodiments of the present invention.
In this embodiment, the necessary content to be stored in the GBuffer, 160 bits in total, may include: normal (Normal), 3 × 8 bits; color (Diffuse), 3 × 8 bits; custom data (CustomData) set according to material differences, 3 × 8 bits, which may be, for example, metal data (Metallic) stored in the x channel, thickness data (Thickness) stored in the y channel, and the like; material type (MaterialID), 8 bits; roughness (Roughness), 8 bits; shadow data (InShadow), 8 bits; depth data (Depth), 16 bits; and illumination data (Lighting), 3 × 16 bits.
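As a quick check, the listed fields can be tallied; the 160-bit total exceeds the 128-bit per-pixel limit mentioned above, which is why compression is needed:

```python
# Bit budget of the uncompressed GBuffer contents listed above.
GBUFFER_BITS = {
    "Normal":     3 * 8,
    "Diffuse":    3 * 8,
    "CustomData": 3 * 8,
    "MaterialID": 8,
    "Roughness":  8,
    "InShadow":   8,
    "Depth":      16,
    "Lighting":   3 * 16,
}
total = sum(GBUFFER_BITS.values())  # 160 bits per pixel, over the 128-bit limit
```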
In this embodiment, since the illumination data, the depth data, and the shadow data need to be stored accurately, these three types of data cannot be compressed; meanwhile, the material type data is flag data and cannot be compressed either, so the normal data, the color data, the custom data, and the roughness data can be compressed in the embodiment of the present invention.
As an alternative embodiment, the normal data is compressed.
In the related art, the normal data size is 3 × 8 bits and generally requires three channels for storage; since the Normal itself is normalized data, the third channel can be omitted from the perspective of data storage.
Optionally, in order to better retain accuracy, a stereographic projection strategy (Stereographic Projection) may be used to process the normal data to obtain its plane information, so as to obtain data of at most two channels. Fig. 3 is a schematic diagram of the stereographic projection method according to an embodiment of the present invention; as shown in fig. 3, the normal data may be set as any point P on the sphere, the point P is connected to the reference point N, and the projection point P' on the plane circle is the converted data. As shown in fig. 3, the projection point has only plane information, so only two channels are needed, thereby achieving the purpose of compressing the data.
As an alternative embodiment, the second type of data is compressed.
In this embodiment, compressing the second type of data may include compressing low frequency data, such as color data.
Alternatively, since the luminance is high frequency data, the full amount of data of the luminance may be retained; since the color change is low-frequency data, the accuracy of color data storage can be reduced, and in the embodiment of the invention, the purpose of compressing data is realized by performing spatial domain compression by using tile space pixel storage.
In the related art, compressing color data generally means converting the data into color spaces such as HSV (Hue, Saturation, Value), Lab, HSL (Hue, Saturation, Lightness), and YCbCr.
It should be noted that, since the color components of a color are low-frequency data, they do not change significantly on the image; if the color components were stored at a lower resolution, much space could be saved, but if the resolutions of the geometric buffers (GBuffer) are different, an uncontrollable abnormal effect occurs on the boundary, such as color banding (Banding) and aliasing, and thus the original resolution may be used for storage in the embodiment of the present invention. Fig. 4 is a schematic diagram of interleaved pixel storage according to an embodiment of the present invention; as shown in fig. 4, positions with the same color store the same data content, and positions with different colors store different data content.
Alternatively, when a sudden high frequency change occurs at a different material boundary, the problem of a sudden high frequency change occurring at a different material boundary can be solved by using anti-aliasing.
Optionally, in the same material, the low frequency data similar to the color data may further include: metal data (Metallic), roughness (Roughness), and custom data (CustomData) with a lower utilization, and this low frequency data may be stored together with Diffuse.
Alternatively, the three-primary-color (Red Green Blue, abbreviated as RGB) Diffuse is converted into the YCbCr color space and encoded; after encoding, the luminance data stored in the Y channel is stored in the x channel of the buffer area GBufferC, and the blue chrominance component (Cb) channel and the red chrominance component (Cr) channel of the Diffuse are subjected to pixel-skipping compression together with the subsequent data.
Alternatively, two sets of pixel-compressed data may be set to be alternately stored in GBufferC, for example, GBufferC1 and GBufferC2, where it may be set that the luminance data (Diffuse.Y), the blue chrominance component (Diffuse.Cb), the roughness (Roughness), and the custom data (CustomData.y) are stored in GBufferC1.
Alternatively, it may be set that the luminance data (Diffuse.y), the red chrominance component (Diffuse.cr), the metallic data (CustomData.x) in the custom data, and the custom data (CustomData.z) are saved in GBufferC2.
In this embodiment, because the data in GBufferC1 and GBufferC2 are stored alternately, the pixels surrounding a GBufferC1 pixel store GBufferC2 data, and the operation of restoring the compressed data is therefore to recover the missing portion of the cached data from the surrounding geometric cache (GBufferC).
Alternatively, the pixel position (texPos) of the second type data is determined; the pixel position of the data may be expressed as coordinates, where different areas in uv coordinates may be represented by two-dimensional positive integer data. For example, fig. 5 is a schematic diagram of a 0/1 array of pixel positions according to an embodiment of the present invention; as shown in fig. 5, different areas may be labeled 0 or 1, where 0 and 1 respectively correspond to data in GBufferC1 and GBufferC2, which are stored alternately. The pixel position of the data currently being processed can thus be determined from the number of the area that the data corresponds to. It should be noted that the above numbers are only used for illustration and are not specifically limited here.
For example, given the pixel position currently being processed, a modulo operation determines whether the coordinate of the pixel position (texPos) of the second type data is odd or even, and the data content to be stored for the second type data is then determined from the parity of the two-dimensional coordinate: if the two-dimensional coordinate corresponding to the pixel position is 0, the pixel stores the GBufferC1 content, and if it is 1, the pixel stores the GBufferC2 content.
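The parity selection described above can be sketched as follows; the function name is illustrative, and the checkerboard pattern (sum of coordinates modulo 2) is one plausible reading of the 0/1 array in fig. 5.

```python
def select_gbuffer(x, y):
    """Return 1 if pixel (x, y) stores the GBufferC1 payload, else 2.

    A checkerboard assignment: even coordinate sum -> GBufferC1,
    odd coordinate sum -> GBufferC2."""
    return 1 if (x + y) % 2 == 0 else 2
```

Under this assignment every pixel's four direct neighbours hold the other buffer's payload, which is what makes the neighbour-based restoration possible.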
As an alternative embodiment, the compressed second type data is restored.
The compressed second type data is usually stored on a mobile phone using a memoryless framebuffer, but data stored in a memoryless framebuffer exists only locally in the on-chip buffer memory and cannot be read back directly, so there is the problem that adjacent stored data cannot be directly determined.
Alternatively, work submitted to the graphics processor runs in thread groups, and threads in the pixel stage execute with a quad (2 × 2 pixels) as the minimum organization unit. Fig. 6 is a schematic diagram of the graphics processor's computation result according to an embodiment of the present invention. As shown in fig. 6, the graphics processor calculates a derivative by taking the difference between pixel values within the quad, for example, dFdx(p(x, y)) = p(x+1, y) - p(x, y) or dFdy(p(x, y)) = p(x, y+1) - p(x, y), where p(x, y) may be the pixel value of the current data, p(x+1, y) may be the pixel value of the data to its right, and p(x, y+1) may be the pixel value of the data below it, and so on.
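A minimal model of the quad derivative described above, with the quad represented as a 2 × 2 list of pixel values; this is an illustration of the finite-difference behaviour, not the actual hardware implementation.

```python
def dfdx(quad, row):
    """Horizontal derivative within a 2x2 quad: right value minus left value."""
    return quad[row][1] - quad[row][0]

def dfdy(quad, col):
    """Vertical derivative within a 2x2 quad: lower value minus upper value."""
    return quad[1][col] - quad[0][col]
```

For a quad [[1.0, 3.0], [2.0, 5.0]] the top-row horizontal derivative is 2.0 and the left-column vertical derivative is 1.0, matching dFdx(p(x, y)) = p(x+1, y) - p(x, y) and dFdy(p(x, y)) = p(x, y+1) - p(x, y).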
Optionally, by applying the partial derivatives provided by the graphics processor to the second compressed data GBufferC, the difference between the second compressed data and the results of the neighboring pixels may be obtained, so that the GBufferC1 or GBufferC2 content corresponding to the neighboring pixels may be determined.
For example, as shown in fig. 8, let InGBufferC denote the second compressed data, and let the second compressed data be the upper left corner pixel. Neighbor_H may be the adjacent pixel in the horizontal direction of the second compressed data (either the left pixel or the right pixel), and Neighbor_V may be the adjacent pixel in the vertical direction (either the lower pixel or the upper pixel); whether the second compressed data is on the left, right, upper, or lower side is not specifically limited here. Neighbor_H may be calculated by adding the partial derivative of the second compressed data to the second compressed data:
Neighbor_H=InGBufferC+ddx(InGBufferC)
Neighbor_V may be the second compressed data plus the partial derivative of the second compressed data:
Neighbor_V=InGBufferC+ddy(InGBufferC)
For example, as shown in fig. 8, when the second compressed data is the upper right corner data, Neighbor_H may be calculated by subtracting the partial derivative of the second compressed data from the second compressed data:
Neighbor_H=InGBufferC-ddx(InGBufferC)
Neighbor_V may be the second compressed data minus the partial derivative of the second compressed data:
Neighbor_V=InGBufferC-ddy(InGBufferC)
For example, as shown in fig. 8, when the second compressed data is the lower left corner data, Neighbor_H may be calculated by subtracting the partial derivative of the second compressed data from the second compressed data:
Neighbor_H=InGBufferC-ddx(InGBufferC)
Neighbor_V may be the second compressed data plus the partial derivative of the second compressed data:
Neighbor_V=InGBufferC+ddy(InGBufferC)
For example, as shown in fig. 8, when the second compressed data is the lower right corner data, Neighbor_H may be calculated by adding the partial derivative of the second compressed data to the second compressed data:
Neighbor_H=InGBufferC+ddx(InGBufferC)
Neighbor_V may be the second compressed data minus the partial derivative of the second compressed data:
Neighbor_V=InGBufferC-ddy(InGBufferC)
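The four cases above can be combined into one lookup; the sign table below simply transcribes those cases, and, as the following note explains, the actual signs depend on the derivative convention of the target hardware. Names are illustrative.

```python
# Sign applied to ddx / ddy per quad corner, transcribing the four cases above.
NEIGHBOR_SIGNS = {
    "upper_left": (+1, +1),
    "upper_right": (-1, -1),
    "lower_left": (-1, +1),
    "lower_right": (+1, -1),
}

def neighbors(in_gbuffer_c, ddx, ddy, corner):
    """Return (Neighbor_H, Neighbor_V) for the second compressed data
    InGBufferC, given its partial derivatives and its corner in the quad."""
    sx, sy = NEIGHBOR_SIGNS[corner]
    return in_gbuffer_c + sx * ddx, in_gbuffer_c + sy * ddy
```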
Optionally, in the embodiment of the present invention, the pixel attributes at adjacent positions are close, and diagonal data maintain the same attribute; the values of p(x, y) and p(x, y+1) are therefore close for the same attribute, and the difference p(x+1, y) - p(x, y) is close to the difference p(x+1, y+1) - p(x, y+1), so it can be deduced that p(x+1, y+1) is approximately p(x, y+1) + p(x+1, y) - p(x, y).
It should be noted that the addition and subtraction operations above depend on the calculation strategy of the partial derivative function (ddx); the calculation of ddx is not specifically limited here, and the calculation of the adjacent pixels may change according to how ddx is computed.
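The diagonal approximation above holds exactly on a locally linear signal, which can be checked numerically (the coefficients a, b, c below are arbitrary illustration values):

```python
def p(x, y, a=0.3, b=0.7, c=1.1):
    """A locally linear pixel signal p(x, y) = a*x + b*y + c."""
    return a * x + b * y + c

# p(x+1, y+1) is recovered from the three known quad values.
estimate = p(0, 1) + p(1, 0) - p(0, 0)
assert abs(estimate - p(1, 1)) < 1e-9
```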
In this embodiment of the present invention, fig. 7 is a schematic diagram of storing data by pixels according to an embodiment of the present invention. As shown in fig. 7, the compressed data may be stored pixel-by-pixel in checkerboard form, so that GBufferC1 stores only the data content that needs to be stored in GBufferC1 (corresponding to C in fig. 7) and is missing the data content that needs to be stored in GBufferC2 (corresponding to U in fig. 7); the data content missing from GBufferC1 can be obtained by linear interpolation from the computed upper data and right data.
Alternatively, the data content missing from the second compressed data may be obtained as follows: calculate the difference between the luminance data (Current.x) in the second compressed data and the luminance data (Left.x) in the horizontally adjacent compressed data to obtain a first difference (biasL); calculate the difference between the luminance data (Current.x) in the second compressed data and the luminance data (Up.x) in the vertically adjacent compressed data to obtain a second difference (biasU); and then combine the vertical data weighted by the second difference with the horizontal data weighted by the first difference, divided by the sum of the first difference and the second difference:
(Up*biasU+Left*biasL)/(biasU+biasL)
In order to avoid division by 0, the absolute values of the first difference and the second difference are taken, and a lower bound of 0.01 is applied when the first difference or the second difference is 0:
biasU=max(0.01,abs(Current.x-Up.x))
biasL=max(0.01,abs(Current.x-Left.x))
wherein Current may be used to represent the second compressed data; Current.x represents the luminance data, and the other components (e.g., y, z, w) may represent GBufferC1 or GBufferC2. Up and Left may respectively represent the data above and the data to the left; their x components represent luminance data, and their other components (e.g., y, z, w) may represent the GBufferC1 or GBufferC2 content that the second compressed data is missing.
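Putting the formulas above together, a sketch of the luminance-weighted reconstruction (the function name and the (luma, payload) pair representation are illustrative, not from the patent):

```python
def reconstruct(current_luma, up, left):
    """Interpolate the payload missing at the current pixel.

    up and left are (luma, payload) pairs from the vertically and
    horizontally adjacent pixels; the 0.01 floor avoids division by
    zero, as in biasU = max(0.01, abs(Current.x - Up.x))."""
    bias_u = max(0.01, abs(current_luma - up[0]))
    bias_l = max(0.01, abs(current_luma - left[0]))
    return (up[1] * bias_u + left[1] * bias_l) / (bias_u + bias_l)
```

When both neighbours match the current luminance, both weights hit the 0.01 floor and the result is the plain average of the two payloads.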
Optionally, interpolation is performed on the horizontally adjacent compressed data and the vertically adjacent compressed data in the above manner, so that the restoration of the second compressed data is completed and complete rendering data corresponding to the second compressed data is obtained.
Fig. 8 is a schematic diagram of rendering completion according to an embodiment of the present invention, and as shown in fig. 8, a graphical user interface is rendered based on complete second type data, normal data, and uncompressed data, resulting in a rendering result.
FIG. 9 is a schematic diagram of rendering the same texture according to an embodiment of the invention; as shown in FIG. 9, the rendering result shows no significant error. FIG. 10 is a schematic diagram of rendering different textures according to an embodiment of the invention; as shown in FIG. 10, different textures exhibit a texture loss problem.
In the embodiment of the invention, the GBuffer content is compressed from 160 bits to 128 bits, a reduction of 20 percent, while it is ensured that the mobile phone can make full use of the memoryless GBuffer. In the embodiment of the invention, all data are kept within the corresponding quad by using partial-derivative calculations; if sampling of adjacent pixels is considered, the total number of samples per position remains unchanged, and the pixel-interleaved cache reduces the number of GBuffers, thereby saving more total bandwidth.
In this embodiment, data channels are merged according to the characteristics of the data: different data are stored in one channel or in adjacent pixels according to the characteristics of the material data, and a certain data type is compressed by using the image storage characteristics, thereby achieving the technical effect of improving the rendering effect and solving the technical problem of poor rendering performance.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a data processing apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and details are not repeated for what has been described. As used below, the term "unit" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 11 is a block diagram of a data processing apparatus according to an embodiment of the present invention, and as shown in fig. 11, the data processing apparatus may include: an acquisition unit 1102, a dividing unit 1104, a processing unit 1106, a determination unit 1108, and an execution unit 1110.
An obtaining unit 1102, configured to obtain initial rendering data, where the initial rendering data is used to render a game scene picture to be displayed in a graphical user interface of a mobile terminal;
a dividing unit 1104, configured to divide the initial rendering data into a first part of data and a second part of data based on a data type of the initial rendering data, where the first part of data is compressible data in the initial rendering data, and the second part of data is compression-prohibited data in the initial rendering data;
a processing unit 1106, configured to perform compression processing on the first part of data to obtain compressed data;
a determining unit 1108, configured to determine target rendering data by using the compressed data and the second partial data;
an execution unit 1110 for executing a rendering operation based on the target rendering data.
In this embodiment, initial rendering data is acquired through the acquisition unit, wherein the initial rendering data is used for rendering a game scene picture to be displayed in a graphical user interface of a mobile terminal; the dividing unit divides the initial rendering data into a first part of data and a second part of data based on the data type of the initial rendering data, wherein the first part of data is compressible data in the initial rendering data and the second part of data is compression-prohibited data in the initial rendering data; the processing unit compresses the first part of data to obtain compressed data; the determining unit determines target rendering data using the compressed data and the second part of data; and the execution unit performs a rendering operation based on the target rendering data.
It should be noted that, the above units may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the units are all positioned in the same processor; or, the above units may be located in different processors in any combination.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps in any of the above method embodiments when executed.
Optionally, in this embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Alternatively, in the present embodiment, the above-mentioned computer-readable storage medium may be configured to store a computer program for executing the steps of:
s1, obtaining initial rendering data, wherein the initial rendering data is used for rendering a game scene picture to be displayed in a graphic user interface of a mobile terminal;
s2, dividing the initial rendering data into a first part of data and a second part of data based on the data type of the initial rendering data, wherein the first part of data is compressible data in the initial rendering data, and the second part of data is compression-prohibited data in the initial rendering data;
s3, compressing the first part of data to obtain compressed data;
s4, determining target rendering data by using the compressed data and the second part of data;
and S5, performing rendering operation based on the target rendering data.
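Steps S1 to S5 above can be sketched as a single pipeline; the split predicate, the compressor, and the renderer below are placeholders passed in by the caller, not the patent's actual codecs:

```python
def render_pipeline(initial_data, is_compressible, compress, render):
    """Sketch of steps S1-S5 with caller-supplied placeholder callables."""
    # S2: divide by data type into compressible / compression-prohibited parts
    first = [d for d in initial_data if is_compressible(d)]
    second = [d for d in initial_data if not is_compressible(d)]
    # S3: compress the first part of the data
    compressed = [compress(d) for d in first]
    # S4: target rendering data = compressed data plus the untouched second part
    target = compressed + second
    # S5: perform the rendering operation
    return render(target)
```

For instance, with toy integer "rendering data", an even/odd split, halving as the compressor, and summation as the renderer, the pipeline runs end to end without knowing the concrete codecs.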
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: dividing the first part of data into first type data and second type data, wherein the first type data is used for representing corresponding normal data, and the second type data is used for representing data except the normal data in the first part of data; and compressing the first type data and the second type data to obtain compressed data.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: and converting the first type data into planar data to obtain first compressed data, wherein the channel number of the first compressed data is less than that of the first type data.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining data to be saved in the second type data based on the pixel position of the second type data; and storing the data to be stored to obtain second compressed data.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining the data content corresponding to the first position as the data to be stored in the second type data in response to the pixel position being the first position; and determining the data content corresponding to the second position as the data to be saved in the second type data in response to the pixel position being the second position.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: restoring the second compressed data to obtain rendering data corresponding to the second compressed data; and obtaining target rendering data based on the rendering data, the first compressed data and the second part of data.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining compressed data horizontally adjacent to the second compressed data and compressed data vertically adjacent to the second compressed data based on an objective function for determining a difference between pixel values corresponding to the adjacent compressed data; and restoring the second compressed data based on the compressed data adjacent in the horizontal direction and the compressed data adjacent in the vertical direction to obtain rendering data corresponding to the second compressed data.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: determining a difference value between the brightness data in the second compressed data and the brightness data in the compressed data adjacent to the horizontal direction to obtain a first difference value; determining a difference value between the brightness data in the second compressed data and the brightness data in the compressed data adjacent to the vertical direction to obtain a second difference value; and performing interpolation calculation on the compressed data adjacent in the horizontal direction and the compressed data adjacent in the vertical direction based on the first difference and the second difference to obtain rendering data corresponding to the second compressed data.
Optionally, the computer-readable storage medium is further configured to store program code for performing the following steps: the difference between the pixel value of the second compressed data and the pixel value of the adjacent position compressed data satisfies the pixel threshold.
In the computer-readable storage medium of this embodiment, a technical solution of data processing is provided, where initial rendering data is classified according to data types to obtain a first part of data and a second part of data, and the first part of data is compressed, so that the initial rendering data is compressed, the storage amount of data is increased, a technical effect of improving image rendering performance is achieved, and a technical problem of poor image rendering performance is solved.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a computer-readable storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which can be a personal computer, a server, a terminal device, or a network device, etc.) execute the method according to the embodiment of the present invention.
In an exemplary embodiment of the present application, a computer readable storage medium has stored thereon a program product capable of implementing the above-described method of the present embodiment. In some possible implementations, various aspects of the embodiments of the present invention may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary implementations of the present invention described in the above section "exemplary method" of this embodiment, when the program product is run on the terminal device.
The program product for implementing the above method according to the embodiment of the present invention may employ a portable compact disc read only memory (CD-ROM) and include program codes, and may be run on a terminal device, such as a personal computer. However, the program product of the embodiments of the invention is not limited thereto, and in the embodiments of the invention, the computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product described above may employ any combination of one or more computer-readable media. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It should be noted that the program code embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, obtaining initial rendering data, wherein the initial rendering data are used for rendering a game scene picture to be displayed in a graphic user interface of a mobile terminal;
s2, dividing the initial rendering data into a first part of data and a second part of data based on the data type of the initial rendering data, wherein the first part of data is compressible data in the initial rendering data, and the second part of data is compression-prohibited data in the initial rendering data;
s3, compressing the first part of data to obtain compressed data;
s4, determining target rendering data by using the compressed data and the second part of data;
and S5, performing rendering operation based on the target rendering data.
Optionally, the processor may be further configured to execute the following steps by a computer program: dividing the first part of data into first type data and second type data, wherein the first type data is used for representing corresponding normal data, and the second type data is used for representing data except the normal data in the first part of data; and compressing the first type data and the second type data to obtain compressed data.
Optionally, the processor may be further configured to execute the following steps by a computer program: and converting the first type data into planar data to obtain first compressed data, wherein the channel number of the first compressed data is less than that of the first type data.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining data to be saved in the second type data based on the pixel position of the second type data; and storing the data to be stored to obtain second compressed data.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining the data content corresponding to the first position as the data to be stored in the second type data in response to the pixel position being the first position; and determining the data content corresponding to the second position as the data to be saved in the second type data in response to the pixel position being the second position.
Optionally, the processor may be further configured to execute the following steps by a computer program: restoring the second compressed data to obtain rendering data corresponding to the second compressed data; and obtaining target rendering data based on the rendering data, the first compressed data and the second part of data.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining compressed data adjacent to the second compressed data in the horizontal direction and compressed data adjacent to the second compressed data in the vertical direction based on an objective function, wherein the objective function is used for determining a difference value between pixel values corresponding to the adjacent compressed data; and performing reduction processing on the second compressed data based on the compressed data adjacent to the horizontal direction and the compressed data adjacent to the vertical direction to obtain rendering data corresponding to the second compressed data.
Optionally, the processor may be further configured to execute the following steps by a computer program: determining a difference value between the brightness data in the second compressed data and the brightness data in the compressed data adjacent to the horizontal direction to obtain a first difference value; determining a difference value between the brightness data in the second compressed data and the brightness data in the compressed data adjacent to the vertical direction to obtain a second difference value; and performing interpolation calculation on the compressed data adjacent in the horizontal direction and the compressed data adjacent in the vertical direction based on the first difference and the second difference to obtain rendering data corresponding to the second compressed data.
Optionally, the processor may be further configured to execute the following steps by a computer program: the difference between the pixel value of the second compressed data and the pixel value of the adjacent position compressed data satisfies the pixel threshold.
In the electronic device according to the embodiment, a technical solution of data processing is provided, where initial rendering data is classified according to data types to obtain a first part of data and a second part of data, and the first part of data is compressed, so that the initial rendering data is compressed, the storage amount of the data is increased, a technical effect of improving a picture rendering performance is achieved, and a technical problem of poor picture rendering performance is solved.
Fig. 12 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 12, the electronic device 1200 is only an example and should not bring any limitation to the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 12, the electronic apparatus 1200 is embodied in the form of a general purpose computing device. The components of the electronic device 1200 may include, but are not limited to: the at least one processor 1210, the at least one memory 1220, the bus 1230 connecting the various system components (including the memory 1220 and the processor 1210), and the display 1240.
The memory 1220 stores therein program codes that can be executed by the processor 1210, such that the processor 1210 performs the steps according to various exemplary embodiments of the present invention described in the method section of the embodiments of the present application.
The memory 1220 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 12201 and/or a cache memory unit 12202, may further include a read-only memory unit (ROM) 12203, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
In some examples, memory 1220 may also include a program/utility 12204 having a set (at least one) of program modules 12205, such program modules 12205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which or some combination thereof may comprise an implementation of a network environment. The memory 1220 may further include memory that is remotely located from the processor 1210 and that may be connected to the electronic device 1200 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Bus 1230 may be any of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, processor 1210, or a local bus using any of a variety of bus architectures.
Display 1240 may, for example, be a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of electronic device 1200.
Optionally, the electronic apparatus 1200 may also communicate with one or more external devices 1400 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic apparatus 1200, and/or with any devices (e.g., router, modem, etc.) that enable the electronic apparatus 1200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 1250. Also, the electronic device 1200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 1260. As shown in FIG. 12, the network adapter 1260 communicates with the other modules of the electronic device 1200 via the bus 1230. It should be appreciated that although not shown in FIG. 12, other hardware and/or software modules may be used in conjunction with the electronic device 1200, which may include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The electronic device 1200 may further include: a keyboard, a cursor control device (e.g., a mouse), an input/output interface (I/O interface), a network interface, a power source, and/or a camera.
It will be understood by those skilled in the art that the structure shown in FIG. 12 is only illustrative and does not limit the structure of the electronic device. For example, the electronic device 1200 may include more or fewer components than shown in FIG. 12, or have a configuration different from that shown in FIG. 12. The memory 1220 may be used to store a computer program and corresponding data, such as the computer program and data corresponding to the data processing method in the embodiments of the present invention. The processor 1210 executes various functional applications and data processing by running the computer program stored in the memory 1220, thereby implementing the data processing method described above.
The serial numbers of the above embodiments of the present invention are for description only and do not indicate the relative merits of the embodiments.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis; for parts not described in detail in a given embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (12)

1. A data processing method, comprising:
acquiring initial rendering data, wherein the initial rendering data is used for rendering a game scene picture to be displayed in a graphical user interface of a mobile terminal;
dividing the initial rendering data into a first portion of data and a second portion of data based on the data type of the initial rendering data, wherein the first portion of data is compressible data in the initial rendering data, and the second portion of data is data in the initial rendering data that is prohibited from being compressed;
compressing the first portion of data to obtain compressed data;
determining target rendering data using the compressed data and the second portion of data;
performing a rendering operation based on the target rendering data.
2. The method of claim 1, wherein compressing the first portion of data to obtain the compressed data comprises:
dividing the first portion of data into first type data and second type data, wherein the first type data represents normal data, and the second type data represents data other than the normal data in the first portion of data;
compressing the first type data and the second type data to obtain the compressed data.
3. The method of claim 2, wherein compressing the first type data to obtain the compressed data comprises:
converting the first type data into planar data to obtain first compressed data, wherein the first compressed data has fewer channels than the first type data.
4. The method of claim 2, wherein compressing the second type data to obtain the compressed data comprises:
determining data to be saved in the second type data based on the pixel positions of the second type data;
saving the data to be saved to obtain second compressed data.
5. The method of claim 4, wherein determining the data to be saved in the second type data based on the pixel positions of the second type data comprises:
in response to the pixel position being a first position, determining the data content corresponding to the first position as the data to be saved in the second type data;
in response to the pixel position being a second position, determining the data content corresponding to the second position as the data to be saved in the second type data.
6. The method of claim 4, wherein determining the target rendering data using the compressed data and the second portion of data comprises:
restoring the second compressed data to obtain rendering data corresponding to the second compressed data;
obtaining the target rendering data based on the rendering data, the first compressed data, and the second portion of data.
7. The method of claim 6, wherein performing restoration processing on the second compressed data to obtain the rendering data corresponding to the second compressed data comprises:
determining compressed data horizontally adjacent to the second compressed data and compressed data vertically adjacent to the second compressed data based on an objective function, wherein the objective function is used to determine a difference between pixel values corresponding to adjacent compressed data;
restoring the second compressed data based on the horizontally adjacent compressed data and the vertically adjacent compressed data to obtain the rendering data corresponding to the second compressed data.
8. The method of claim 7, wherein restoring the second compressed data based on the horizontally adjacent compressed data and the vertically adjacent compressed data to obtain the rendering data corresponding to the second compressed data comprises:
determining a difference between luminance data in the second compressed data and luminance data in the horizontally adjacent compressed data to obtain a first difference;
determining a difference between luminance data in the second compressed data and luminance data in the vertically adjacent compressed data to obtain a second difference;
performing interpolation calculation on the horizontally adjacent compressed data and the vertically adjacent compressed data based on the first difference and the second difference to obtain the rendering data corresponding to the second compressed data.
9. The method according to any one of claims 4 to 8, wherein a difference between a pixel value of the second compressed data and a pixel value of compressed data at an adjacent position satisfies a pixel threshold.
10. A data processing apparatus, comprising:
an acquisition unit, configured to acquire initial rendering data, wherein the initial rendering data is used for rendering a game scene picture to be displayed in a graphical user interface of a mobile terminal;
a dividing unit, configured to divide the initial rendering data into a first portion of data and a second portion of data based on the data type of the initial rendering data, wherein the first portion of data is compressible data in the initial rendering data, and the second portion of data is data in the initial rendering data that is prohibited from being compressed;
a processing unit, configured to compress the first portion of data to obtain compressed data;
a determination unit, configured to determine target rendering data using the compressed data and the second portion of data;
an execution unit, configured to perform a rendering operation based on the target rendering data.
11. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to, when executed by a processor, perform the method of any one of claims 1 to 9.
12. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the method of any one of claims 1 to 9.
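The overall pipeline of claim 1 can be sketched as follows. This is a minimal illustration, not the patented scheme: the type labels (`"normal"`, `"albedo"`, `"depth"`) and the stand-in compressor are assumptions introduced only to make the split/compress/recombine flow concrete.

```python
# Sketch of the claim-1 pipeline: split initial rendering data by data type,
# compress only the compressible portion, then recombine into target rendering
# data. Type labels and the stand-in compressor are hypothetical.

COMPRESSIBLE_TYPES = {"normal", "albedo"}  # assumed compressible data types

def split_by_type(initial_data):
    """Divide initial rendering data into a compressible first portion
    and a compression-prohibited second portion."""
    first = {k: v for k, v in initial_data.items() if k in COMPRESSIBLE_TYPES}
    second = {k: v for k, v in initial_data.items() if k not in COMPRESSIBLE_TYPES}
    return first, second

def compress(first_portion):
    """Stand-in compressor: keep every other sample (illustration only)."""
    return {k: v[::2] for k, v in first_portion.items()}

def target_rendering_data(initial_data):
    first, second = split_by_type(initial_data)
    compressed = compress(first)
    return {**compressed, **second}  # data handed to the rendering operation
```

The compression-prohibited portion passes through untouched, which is the point of the split: only data whose type tolerates lossy treatment is compressed.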
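Claim 3 reduces the channel count of the normal data. The claim does not fix an encoding, so the sketch below uses one common assumption: store only two components of a unit normal and rebuild the third from the unit-length constraint.

```python
import math

def compress_normal(n):
    """Drop the z channel of a unit normal (3 channels -> 2).
    Assumes view-space normals with non-negative z; this restriction
    is an assumption of the sketch, not of the claim."""
    x, y, z = n
    if z < 0.0:
        raise ValueError("encoding assumes z >= 0")
    return (x, y)

def restore_normal(p):
    """Rebuild z from the unit-length constraint x^2 + y^2 + z^2 = 1."""
    x, y = p
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```

Two channels stored instead of three satisfies the "fewer channels" condition of claim 3 while remaining exactly invertible for front-facing unit normals.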
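Claims 4 and 5 keep only the data content at certain pixel positions. One plausible reading, assumed here, is a checkerboard pattern in which the "first position" / "second position" distinction is pixel parity:

```python
def select_data_to_save(pixels, width):
    """Keep only samples whose pixel position has even (x + y) parity --
    an assumed checkerboard interpretation of the claimed 'first position'."""
    saved = {}
    for i, value in enumerate(pixels):
        x, y = i % width, i // width
        if (x + y) % 2 == 0:
            saved[(x, y)] = value  # data content to be saved
    return saved
```

Under this reading the second compressed data holds half the samples, and the discarded half is later reconstructed from neighbours as in claims 7 and 8.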
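Claims 7 and 8 restore a missing sample from its horizontal and vertical neighbours, steering the interpolation by luminance differences. The claim's objective function is not specified, so the comparison below is a minimal edge-directed sketch under that assumption:

```python
def restore_sample(left, right, up, down):
    """Edge-directed interpolation: compare the horizontal and vertical
    luminance differences and average along the smoother direction."""
    first_difference = abs(left - right)   # horizontal luminance difference
    second_difference = abs(up - down)     # vertical luminance difference
    if first_difference <= second_difference:
        return (left + right) / 2.0        # interpolate horizontally
    return (up + down) / 2.0               # interpolate vertically
```

Averaging along the direction of smaller luminance change avoids interpolating across an edge, which is why the two differences are computed before choosing the interpolation axis.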
CN202210713381.2A 2022-06-22 2022-06-22 Data processing method, data processing apparatus, storage medium, and electronic apparatus Pending CN115170712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210713381.2A CN115170712A (en) 2022-06-22 2022-06-22 Data processing method, data processing apparatus, storage medium, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210713381.2A CN115170712A (en) 2022-06-22 2022-06-22 Data processing method, data processing apparatus, storage medium, and electronic apparatus

Publications (1)

Publication Number Publication Date
CN115170712A true CN115170712A (en) 2022-10-11

Family

ID=83486856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210713381.2A Pending CN115170712A (en) 2022-06-22 2022-06-22 Data processing method, data processing apparatus, storage medium, and electronic apparatus

Country Status (1)

Country Link
CN (1) CN115170712A (en)

Similar Documents

Publication Publication Date Title
CN109166159B (en) Method and device for acquiring dominant tone of image and terminal
CN101802872B (en) Depth buffer compression
KR20200092418A (en) Method and apparatus for processing duplicate points in point cloud compression
US9449362B2 (en) Techniques for reducing accesses for retrieving texture images
KR102569371B1 (en) Video application of delta color compression
TWI517089B (en) Color buffer caching
EP2797049B1 (en) Color buffer compression
CN110782387B (en) Image processing method and device, image processor and electronic equipment
CN111111172A (en) Method and device for processing ground surface of game scene, processor and electronic device
JP2023545660A (en) Landscape virtual screen display method and device, electronic device and computer program
WO2011126424A1 (en) Texture compression and decompression
US20180097527A1 (en) 32-bit hdr pixel format with optimum precision
CN112843700B (en) Terrain image generation method and device, computer equipment and storage medium
CN117456079A (en) Scene rendering method, device, equipment, storage medium and program product
KR20170005035A (en) Depth offset compression
CN115170712A (en) Data processing method, data processing apparatus, storage medium, and electronic apparatus
CN108668170B (en) Image information processing method and device, and storage medium
CN116843736A (en) Scene rendering method and device, computing device, storage medium and program product
CN108200433B (en) Image compression and decompression method
CN114882149A (en) Animation rendering method and device, electronic equipment and storage medium
CN113766319A (en) Image information processing method and device, and storage medium
CN113613011A (en) Light field image compression method and device, electronic equipment and storage medium
KR20180037837A (en) Method and apparatus for determining the number of bits assigned a channel based on a variation of the channel
CN118135079B (en) Three-dimensional scene roaming drawing method, device and equipment based on cloud fusion
CN117115299A (en) Display information processing method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230829

Address after: Room 3040, 3rd floor, 2879 Longteng Avenue, Xuhui District, Shanghai, 2002

Applicant after: Shanghai NetEasy Brilliant Network Technology Co.,Ltd.

Address before: 310000 7 storeys, Building No. 599, Changhe Street Network Business Road, Binjiang District, Hangzhou City, Zhejiang Province

Applicant before: NETEASE (HANGZHOU) NETWORK Co.,Ltd.
