CN117499663B - Video decoding system and method, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117499663B
CN117499663B (application CN202311865166.5A)
Authority
CN
China
Prior art keywords
tile
cache
engine
reference data
video decoding
Prior art date
Legal status
Active
Application number
CN202311865166.5A
Other languages
Chinese (zh)
Other versions
CN117499663A (en)
Inventor
Name withheld at the inventor's request
Current Assignee
Moore Threads Technology Co Ltd
Original Assignee
Moore Threads Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Moore Threads Technology Co Ltd
Priority to CN202311865166.5A
Publication of CN117499663A
Application granted
Publication of CN117499663B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/423 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
    • H04N19/426 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements using memory downsizing methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present disclosure relates to the field of computer technology, and in particular to a video decoding system and method, an electronic device, and a storage medium. The video decoding system includes a DMA engine, a cache engine, and a video decoding engine. The DMA engine determines cache parameter information corresponding to a target reference data block required for decoding the video data to be decoded, and sends the cache parameter information to the cache engine. The cache engine performs a cache lookup in its internal cache according to the cache parameter information and determines a cache lookup result; when the result is a cache hit, it reads the target reference data block from the cache and sends it to the video decoding engine. The video decoding engine then performs video decoding processing on the video data to be decoded using the target reference data block. Embodiments of the present disclosure can accelerate the reading of the data required during video decoding, thereby effectively improving the overall efficiency of the video decoding system.

Description

Video decoding system and method, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a video decoding system and method, an electronic device, and a storage medium.
Background
In the related art, the video decoding engine (Calculation Engine) in a video decoding system interacts directly with Double Data Rate (DDR) synchronous dynamic random access memory, reading the reference data required during video decoding from the DDR. However, because the volume of reference data required during decoding is huge, direct interaction between the video decoding engine and the DDR generates a large number of frequent read and write operations. Since DDR access is relatively slow, frequent DDR accesses limit the speed at which the video decoding system can obtain data during decoding and reduce the efficiency of the video decoding system.
Disclosure of Invention
The present disclosure provides a video decoding system and method, an electronic device, and a storage medium.
According to an aspect of the present disclosure, there is provided a video decoding system including: a DMA engine, a cache engine, and a video decoding engine; the DMA engine is configured to determine cache parameter information corresponding to a target reference data block and send the cache parameter information to the cache engine, wherein the target reference data block is the reference data required for decoding the video data to be decoded; the cache engine is configured to perform a cache lookup in a cache within the cache engine according to the cache parameter information, determine a cache lookup result, and, when the cache lookup result is a cache hit, read the target reference data block from the cache and send it to the video decoding engine; the video decoding engine is configured to perform video decoding processing on the video data to be decoded using the target reference data block.
In one possible implementation, the minimum storage unit of the cache is a tile, and the target reference data block comprises a plurality of tiles; the cache engine includes an information parsing module and a cache control module; the information parsing module is configured to parse the cache parameter information to obtain a control instruction corresponding to each tile contained in the target reference data block and send the control instruction corresponding to each tile to the cache control module; the cache control module is configured to perform a cache lookup for each tile according to the control instruction corresponding to that tile and determine the cache lookup result corresponding to each tile.
In one possible implementation, the control instruction corresponding to each tile includes starting position information corresponding to the tile and identification information corresponding to the target reference data block; the cache engine includes a tag storage module configured to store a plurality of tags; the cache control module is configured to determine, for any tile, the tag corresponding to that tile according to the tile's starting position information and the identification information corresponding to the target reference data block, match the tag against the tags stored in the tag storage module, and determine the cache lookup result corresponding to the tile.
In one possible implementation, the cache engine includes a tile instruction queue and an area configuration queue; the cache control module is configured to determine a tile instruction corresponding to each tile according to the cache lookup result corresponding to that tile and send it to the tile instruction queue, wherein the tile instruction corresponding to each tile comprises the cache lookup result and the cache address corresponding to the tile; the cache control module is further configured to determine the vertical expansion information and horizontal expansion information corresponding to each tile according to the tile's starting position information and send them to the area configuration queue.
In one possible implementation, the cache engine includes a tile reading module configured to read a tile instruction from the tile instruction queue and, when the cache lookup result included in the tile instruction is a cache hit, read the corresponding tile from the cache according to the cache address included in the tile instruction.
In one possible implementation, the cache engine includes a data acquisition module; when the cache lookup result corresponding to any tile is a cache miss, the cache control module is configured to generate a tile read request corresponding to that tile and send it to a tile read request queue in the data acquisition module; the data acquisition module is configured to read a tile read request from the tile read request queue when the queue is not empty and a ring buffer in the data acquisition module has storage space for at least one tile, and then, based on the tile read request, read the corresponding tile from the external storage device of the video decoding system and store it in the ring buffer.
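The miss path described above can be sketched in software as follows. This is a hedged illustrative model, not the hardware design: the `DataAcquisition` class name, the ring-buffer capacity, the 64-byte tile footprint, and the dictionary standing in for the external DDR are all assumptions made for the sketch.

```python
from collections import deque

TILE_BYTES = 64  # assumed tile footprint; the patent does not fix this value

class DataAcquisition:
    """Sketch of the miss path: read requests drain into a ring buffer
    only while the buffer has room for at least one more tile."""
    def __init__(self, ring_capacity_tiles=4):
        self.read_requests = deque()   # tile read request queue
        self.ring = deque()            # ring buffer contents
        self.capacity = ring_capacity_tiles

    def submit(self, tile_id):
        self.read_requests.append(tile_id)

    def service(self, ddr):
        """Pop requests while the queue is non-empty and the ring buffer
        can hold at least one more tile, fetching each tile from `ddr`."""
        fetched = []
        while self.read_requests and len(self.ring) < self.capacity:
            tile_id = self.read_requests.popleft()
            tile = ddr[tile_id]        # stands in for the external-memory read
            self.ring.append((tile_id, tile))
            fetched.append(tile_id)
        return fetched

ddr = {("rb0", 0, 0): b"\x00" * TILE_BYTES, ("rb0", 1, 0): b"\x01" * TILE_BYTES}
acq = DataAcquisition(ring_capacity_tiles=1)
acq.submit(("rb0", 0, 0))
acq.submit(("rb0", 1, 0))
print(acq.service(ddr))   # → [('rb0', 0, 0)]: only one tile fits until the ring drains
```

With a one-tile ring buffer, the second request stays queued until the tile reading module consumes the buffered tile, mirroring the back-pressure the paragraph describes.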
In one possible implementation, the cache engine includes a tile reading module configured to read a tile instruction from the tile instruction queue and, when the cache lookup result included in the tile instruction is a cache miss, read the corresponding tile from the ring buffer and write it into the cache according to the cache address included in the tile instruction.
In one possible implementation, the cache engine includes a vertical expansion module and a tile array; the tile reading module is configured to send any tile it reads to the vertical expansion module; the vertical expansion module is configured to read the vertical expansion information corresponding to the tile from the area configuration queue, perform vertical expansion processing on the tile based on that information to obtain a vertical expansion result, and write the vertical expansion result corresponding to the tile into the tile array.
In one possible implementation, the cache engine includes a horizontal expansion module and a block random access memory; the horizontal expansion module is configured to read the vertical expansion result corresponding to each tile from the tile array in raster scan order, and, for any tile, read the horizontal expansion information corresponding to the tile from the area configuration queue, perform horizontal expansion processing on the tile's vertical expansion result based on that information to obtain a target expansion result, and write the target expansion result corresponding to the tile into the block random access memory.
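One common form of expansion for reference data near region borders is edge replication. The patent does not spell out the expansion operation itself, so the following is only an assumed sketch in which vertical expansion replicates boundary rows and horizontal expansion then replicates boundary columns of the vertically expanded result:

```python
def expand_vertical(tile, pad_top, pad_bottom):
    """Replicate the first/last rows (an assumed edge-padding expansion)."""
    return [tile[0]] * pad_top + tile + [tile[-1]] * pad_bottom

def expand_horizontal(tile, pad_left, pad_right):
    """Replicate the first/last columns of every row."""
    return [[row[0]] * pad_left + row + [row[-1]] * pad_right for row in tile]

tile = [[1, 2], [3, 4]]   # toy 2x2 tile
v = expand_vertical(tile, pad_top=1, pad_bottom=0)
out = expand_horizontal(v, pad_left=0, pad_right=1)
print(out)   # → [[1, 2, 2], [1, 2, 2], [3, 4, 4]]
```

The two-stage order (vertical first, then horizontal on the vertical result) matches the pipeline of the two modules described above; the padding amounts stand in for the per-tile expansion information read from the area configuration queue.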
In one possible implementation, the cache engine includes a reference data write agent module configured to read the target expansion result corresponding to each tile from the block random access memory and write the target expansion result corresponding to each tile into the video decoding engine.
According to an aspect of the present disclosure, there is provided a video decoding method applied to a video decoding system comprising a DMA engine, a cache engine, and a video decoding engine, the method comprising: determining, by the DMA engine, cache parameter information corresponding to a target reference data block and sending the cache parameter information to the cache engine, wherein the target reference data block is the reference data required for decoding the video data to be decoded; performing a cache lookup in a cache within the cache engine according to the cache parameter information and determining a cache lookup result; when the cache lookup result is a cache hit, reading the target reference data block from the cache and sending it to the video decoding engine; and controlling the video decoding engine to perform video decoding processing on the video data to be decoded using the target reference data block.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In an embodiment of the present disclosure, a video decoding system includes a DMA engine, a cache engine, and a video decoding engine; the DMA engine determines the cache parameter information corresponding to the target reference data block required for decoding the video data to be decoded and sends it to the cache engine, and the cache engine performs a cache lookup in its internal cache according to the cache parameter information and determines a cache lookup result.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a schematic diagram of a video decoding system in the related art.
Fig. 2 shows a block diagram of a video decoding system according to an embodiment of the present disclosure.
Fig. 3 illustrates a block diagram of a video decoding system according to an embodiment of the present disclosure.
Fig. 4 shows a flowchart of a video decoding method according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, and C" may mean including any one or more elements selected from the set consisting of A, B, and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In the related art, the video decoding engine (Calculation Engine) in a video decoding system interacts directly with the external storage device of the video decoding system and reads the reference data required during decoding from that device. The external storage device may be Double Data Rate (DDR) synchronous dynamic random access memory. Fig. 1 shows a schematic diagram of a video decoding system in the related art. As shown in Fig. 1, the video decoding engine interacts directly with DDR external to the video decoding system and reads the reference data required during decoding from the DDR. Because the volume of reference data required during decoding is huge, direct interaction between the video decoding engine and the DDR generates a large number of read and write operations. In addition, because DDR access is relatively slow, frequent DDR accesses limit the data acquisition speed of the video decoding system during decoding, which in turn reduces the overall efficiency of the system and occupies considerable DDR bandwidth.
In video decoding, a video frame to be decoded is typically divided into a plurality of macroblocks, and decoding proceeds with the macroblock as the minimum unit of video decoding data. Because of this macroblock partitioning, the reference data required when decoding different macroblocks is highly repetitive. For example, a 64-byte reference data block may be read repeatedly as reference data for several different macroblocks (video data to be decoded); that is, the reference data blocks corresponding to different macroblocks overlap to a large extent.
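As a toy illustration of this repeatability (not taken from the patent: the macroblock size, motion vectors, and the `ref_window`/`overlap_area` helpers are all assumed for the sketch), two different macroblocks whose motion vectors point at the same static background region end up reading the identical reference window:

```python
def ref_window(mb_x, mb_y, mv_x, mv_y, size=16):
    """Corners (x0, y0, x1, y1) of the reference region a macroblock reads."""
    x0, y0 = mb_x * size + mv_x, mb_y * size + mv_y
    return (x0, y0, x0 + size, y0 + size)

def overlap_area(a, b):
    """Area of the intersection of two axis-aligned windows."""
    w = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    h = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    return w * h

win0 = ref_window(0, 0, mv_x=0, mv_y=0)      # macroblock (0, 0), zero motion
win1 = ref_window(1, 0, mv_x=-16, mv_y=0)    # neighbour pointing at the same area
shared = overlap_area(win0, win1)            # pixels both macroblocks re-read
print(f"{shared} of {16 * 16} reference pixels are shared")   # → 256 of 256
```

Without a cache, every shared pixel is fetched from external memory once per macroblock; a cache turns all but the first fetch into fast local reads.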
Therefore, to reduce the impact on video decoding speed of the slow data acquisition caused by frequent accesses to the external storage device, embodiments of the present disclosure provide a video decoding system that exploits this characteristic of video decoding by introducing a caching mechanism. Because data can be read from a cache quickly, the caching mechanism accelerates the reading of the data required during decoding and thereby effectively improves the overall efficiency of the video decoding system. The video decoding system provided by the embodiments of the present disclosure is described in detail below.
Fig. 2 shows a schematic diagram of a video decoding system according to an embodiment of the present disclosure. As shown in Fig. 2, the video decoding system includes: a DMA engine, a cache engine, and a video decoding engine. The DMA engine is configured to determine cache parameter information corresponding to a target reference data block and send the cache parameter information to the cache engine, wherein the target reference data block is the reference data required for decoding the video data to be decoded. The cache engine is configured to perform a cache lookup in a cache within the cache engine according to the cache parameter information, determine a cache lookup result, and, when the result is a cache hit, read the target reference data block from the cache and send it to the video decoding engine. The video decoding engine is configured to perform video decoding processing on the video data to be decoded using the target reference data block.
According to an embodiment of the present disclosure, a video decoding system includes a DMA engine, a cache engine, and a video decoding engine; the DMA engine determines the cache parameter information corresponding to the target reference data block required for decoding the video data to be decoded and sends it to the cache engine, and the cache engine performs a cache lookup in its internal cache according to the cache parameter information and determines a cache lookup result.
In an example, the Video decoding system may be a hardware Video decoding device (Video Codec IP).
A DMA Engine included in the video decoding system generates the cache parameter information corresponding to each reference data block. For a given reference data block, its cache parameter information may indicate the size, position, access mode, expansion, and cache update policy of the reference data block; the specific format and content of the cache parameter information can be set flexibly according to the actual situation, which is not specifically limited in this disclosure.
When certain video data to be decoded needs to be decoded, the DMA Engine sends the cache parameter information of the target reference data block required for decoding that video data to a Cache Engine in the video decoding system, so that the target reference data block can be looked up in the cache included in the cache engine. The correspondence between the video data to be decoded and its reference data blocks is already determined during video encoding, which is not specifically limited in this disclosure.
In one possible implementation, the minimum storage unit of the cache is a tile, and the target reference data block comprises a plurality of tiles; the cache engine includes an information parsing module and a cache control module; the information parsing module is configured to parse the cache parameter information to obtain a control instruction corresponding to each tile contained in the target reference data block and send the control instruction corresponding to each tile to the cache control module; the cache control module is configured to perform a cache lookup for each tile according to the control instruction corresponding to that tile and determine the cache lookup result corresponding to each tile.
The minimum data storage unit of the cache is a tile, and the specific data size of a tile can be set flexibly according to the actual situation: for example, a tile may be 4×4 data or 8×8 data, or may be set to another size, which is not specifically limited in this disclosure.
The target reference data block is a large data block and may include multiple tiles. Therefore, to match the data storage scheme of the cache engine, the cache engine includes an information parsing module (Message Parser): the DMA engine sends the cache parameter information corresponding to the target reference data block to the information parsing module, which parses it to obtain a control instruction (CmdCtrl) for each tile included in the target reference data block. Fig. 3 illustrates a block diagram of a video decoding system according to an embodiment of the present disclosure. Taking Fig. 3 as an example, the DMA engine may send the cache parameter information corresponding to the target reference data block to the information parsing module in the cache engine.
The cache engine includes a Cache Control module. After the information parsing module parses the cache parameter information corresponding to the target reference data block, it sends the resulting control instruction for each tile to the cache control module, as illustrated in Fig. 3.
In one possible implementation, the control instruction corresponding to each tile includes starting position information corresponding to the tile and identification information corresponding to the target reference data block; the cache engine includes a tag storage module configured to store a plurality of tags; the cache control module is configured to determine, for any tile, the tag corresponding to that tile according to the tile's starting position information and the identification information corresponding to the target reference data block, match the tag against the tags stored in the tag storage module, and determine the cache lookup result corresponding to the tile.
To distinguish the different tiles in the target reference data block, the control instruction corresponding to each tile is set to include the starting position information corresponding to the tile and the identification information corresponding to the target reference data block. From the starting position information of the tile currently being processed and the identification information of the target reference data block, the cache control module can quickly determine the tag corresponding to that tile, so that different tiles can be distinguished.
The tag corresponding to each tile must identify the tile accurately and uniquely, yet must not be overly long or complex, so as to keep the subsequent tag-based cache lookup simple; how the tag corresponding to each tile is generated is therefore important.
In one example, the tile currently being processed is an 8×4 luminance (Luma) tile whose starting position information includes x-direction position information (xTile) and y-direction position information (yTile), together with the identification information (rBID, Reference Block ID) corresponding to the target reference data block. The tag is formed as tag = {valid, rBID[5:1], xTile[8:3], yTile[7:2]}, where valid indicates that the tag is valid, rBID[5:1] denotes bits 1–5 of the identification information corresponding to the target reference data block, xTile[8:3] denotes bits 3–8 of the x-direction position information, and yTile[7:2] denotes bits 2–7 of the y-direction position information. The tag corresponding to the luminance tile can thus be represented accurately and simply using only 18 bits.
In another example, the tile currently being processed is a 4×4 chrominance (Chroma) tile whose starting position information likewise includes x-direction position information (xTile) and y-direction position information (yTile), together with the identification information (rBID, Reference Block ID) corresponding to the target reference data block. The tag is again formed as tag = {valid, rBID[5:1], xTile[8:3], yTile[7:2]}, with the fields defined as above, so the tag corresponding to the chrominance tile can also be represented accurately and simply using only 18 bits.
Besides the manner described above, the tag corresponding to each tile may be determined in other ways according to actual needs, which is not specifically limited in this disclosure.
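Based on the field layout described above, the 18-bit tag packing can be modeled as follows. The `make_tag` helper and its bit-shift arithmetic are an illustrative software reconstruction of the concatenation {valid, rBID[5:1], xTile[8:3], yTile[7:2]}, not the patent's hardware implementation:

```python
def make_tag(rbid, x_tile, y_tile, valid=1):
    """Pack {valid, rBID[5:1], xTile[8:3], yTile[7:2]} into an 18-bit tag:
    1 valid bit + 5 rBID bits + 6 xTile bits + 6 yTile bits."""
    rbid_f = (rbid >> 1) & 0x1F    # rBID[5:1], 5 bits
    x_f = (x_tile >> 3) & 0x3F     # xTile[8:3], 6 bits
    y_f = (y_tile >> 2) & 0x3F     # yTile[7:2], 6 bits
    return (valid << 17) | (rbid_f << 12) | (x_f << 6) | y_f

tag = make_tag(rbid=0b000110, x_tile=0b010101000, y_tile=0b01100100)
print(f"{tag:018b}")   # → 100011010101011001
```

Dropping the low bits of xTile and yTile effectively addresses at tile granularity, which is why a short 18-bit tag still uniquely identifies a cached tile.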
The cache engine includes a tag storage module in which a plurality of tags are stored. After the cache control module determines, in the manner described above, the tag corresponding to any tile in the target reference data block, it matches that tag against the tags stored in the tag storage module and determines the cache lookup result corresponding to the tile, as illustrated in Fig. 3.
In one example, the tag storage module may be a Static Random-Access Memory (SRAM). The tag storage module may also take other memory forms according to actual needs, which is not specifically limited in this disclosure.
In an example, for any tile in the target reference data block, when the tag corresponding to the tile is found in the tag storage module, the cache lookup result for the tile is determined to be a cache hit, that is, the tile is currently stored in the cache of the cache engine; when the tag corresponding to the tile is not found in the tag storage module, the cache lookup result for the tile is determined to be a cache miss, that is, the tile is not currently stored in the cache of the cache engine.
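The hit/miss decision can be sketched as a simple tag-store membership test. The `TagStore` class below is an illustrative model only; the allocate-on-miss step is an assumption made so that a repeated request for the same tile hits, and the store's capacity and replacement policy are not modeled:

```python
class TagStore:
    """Sketch of hit/miss: a tag present in the store is a hit."""
    def __init__(self):
        self.tags = set()

    def lookup(self, tag):
        if tag in self.tags:
            return "hit"
        self.tags.add(tag)   # record the tag on a miss (assumed policy)
        return "miss"

store = TagStore()
print(store.lookup(0x1234))  # → miss: tile not yet cached
print(store.lookup(0x1234))  # → hit: same tile requested again
```

The first lookup for a tile misses and installs its tag; every later lookup for the same tile hits, which is exactly the repeatability the cache is designed to exploit.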
For any tile in the target reference data block, when the cache lookup result for the tile is a cache hit, the cache address corresponding to the tile is looked up in the cache according to the tile's tag; this cache address was generated when the tile was first stored in the cache. When the cache lookup result for the tile is a cache miss, a cache address corresponding to the tile is generated, and a mapping between that cache address and the tile's tag is established for subsequent caching and cache lookups.
In an example, for any tile, the x-direction position information and the y-direction position information of the tile may be taken modulo 256 and combined with preset information to generate the cache address cacheAddr = {uv, lineAddr%256, rsvd} corresponding to the tile. Besides the manner described above, the cache address corresponding to the tile may be generated flexibly in other ways according to actual needs, which is not specifically limited in the present disclosure.
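The hit/miss behavior and the tag-to-address mapping described above can be sketched in software as follows. This is a minimal sketch: the dict-based store and the sequential address allocation are assumptions for illustration, whereas the hardware uses an SRAM tag store and the cacheAddr composition given in the text.

```python
class TagStore:
    """Sketch of tag lookup: a hit returns the stored cache address,
    a miss allocates an address and records the tag->address mapping."""

    def __init__(self, num_lines: int = 256):
        self.addr_of = {}        # tag -> cache address mapping
        self.next_line = 0
        self.num_lines = num_lines

    def lookup(self, tag: int):
        if tag in self.addr_of:                 # cache hit
            return True, self.addr_of[tag]
        addr = self.next_line % self.num_lines  # cache miss: allocate
        self.next_line += 1
        self.addr_of[tag] = addr                # record mapping for reuse
        return False, addr
```

On a repeated lookup of the same tag, the mapping established on the miss is reused, matching the "generated when the tile is stored in the cache for the first time" behavior in the text.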
In one possible implementation, the caching engine includes: tile instruction queue and area configuration queue; the buffer control module is used for determining a tile instruction corresponding to each tile according to the buffer searching result corresponding to each tile and sending the tile instruction corresponding to each tile to the tile instruction queue, wherein the tile instruction corresponding to each tile comprises the buffer searching result and the buffer address corresponding to the tile; the buffer control module is used for determining the vertical expansion information and the horizontal expansion information corresponding to each tile according to the initial position information corresponding to each tile, and sending the vertical expansion information and the horizontal expansion information corresponding to each tile to the area configuration queue.
The cache control module determines the tile instruction corresponding to each tile according to the cache lookup result and the cache address corresponding to that tile, and sends it to the tile instruction queue (tile cmdq) included in the cache engine, so that each tile included in the target reference data block can be read in sequence according to the tile instruction queue. Taking fig. 3 as an example, the cache control module generates the tile instruction corresponding to each tile and sends it to the tile instruction queue.
The cache control module determines the vertical expansion information and horizontal expansion information corresponding to each tile according to the start position information corresponding to that tile, and sends them to the region configuration queue (Region Config Queue) in the cache engine, so that each tile can be vertically/horizontally expanded according to the region configuration queue. Taking fig. 3 as an example, the cache control module generates the vertical expansion information and horizontal expansion information corresponding to each tile and sends them to the region configuration queue.
In one possible implementation, the caching engine includes: a data acquisition module; when the cache lookup result corresponding to any tile is a cache miss, the cache control module generates a tile read request corresponding to that tile and sends it to a tile read request queue in the data acquisition module; when the tile read request queue is not empty and the ring buffer in the data acquisition module has storage space for at least one tile, the data acquisition module reads a tile read request from the tile read request queue; based on the tile read request, the data acquisition module reads the corresponding tile from the external storage device associated with the video decoding system and stores the tile into the ring buffer.
In an example, the external storage device corresponding to the video decoding system may be a DDR memory, or may be another form of external memory, which is not specifically limited in this disclosure.
For any tile in the target reference data block, when the cache lookup result for the tile is a cache miss, the cache control module generates a tile read request corresponding to the tile and sends it to the tile read request queue (OcpCmdQ) in the data acquisition module (Reference Fetch) of the cache engine, so that the tile can subsequently be read from the external storage device according to the tile read request queue. In this way, in the case of a cache miss, the required tile can be read effectively without affecting the subsequent video decoding process. Taking fig. 3 as an example, the cache control module generates tile read requests for the tiles that miss and sends them to the tile read request queue.
The data acquisition module also includes a ring buffer (TolBuf) for storing data read from the external storage device. The data acquisition module executes tile read requests queued in the tile read request queue as soon as possible while ensuring that the ring buffer does not overflow. Specifically, as long as the ring buffer has enough free space for at least one read (space for one tile) and the tile read request queue is not empty, the data acquisition module executes the next tile read request in the queue. Taking fig. 3 as an example, when the tile read request queue is not empty and the ring buffer has storage space for at least one tile, the data acquisition module reads a tile read request from the queue, reads the corresponding tile from the external storage device based on that request, and stores the tile in the ring buffer.
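The gating condition above — queue not empty and ring buffer has room for at least one tile — can be sketched as follows. The queue names, the tile-indexed external memory, and the capacity value are illustrative assumptions, not the actual hardware interfaces.

```python
from collections import deque

class ReferenceFetch:
    """Issues one queued tile read per step, but only when the ring
    buffer has free space for at least one tile (no overflow)."""

    def __init__(self, capacity_tiles: int = 4):
        self.read_queue = deque()     # pending tile read requests
        self.ring = deque()           # tiles fetched from external storage
        self.capacity = capacity_tiles

    def step(self, external_mem: dict) -> bool:
        # Execute a request only if the queue is non-empty and the
        # ring buffer will not overflow.
        if self.read_queue and len(self.ring) < self.capacity:
            req = self.read_queue.popleft()
            self.ring.append(external_mem[req])
            return True
        return False
```

When the ring buffer is full, `step` simply stalls until a consumer drains a tile, which mirrors the "execute as soon as possible without overflow" policy in the text.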
In one possible implementation, the caching engine includes: a tile reading module; the tile reading module is used for reading a tile instruction from the tile instruction queue, and reading a tile corresponding to the tile instruction from the cache according to the cache address included in the tile instruction when the cache searching result included in the tile instruction is cache hit.
In one possible implementation manner, the tile reading module is configured to read a tile instruction from the tile instruction queue, read a tile corresponding to the tile instruction from the ring buffer when a cache lookup result included in the tile instruction is a cache miss, and write the tile into the cache according to a cache address included in the tile instruction.
The cache engine includes a Tile reading module (Tile Assembly). The Tile reading module reads a tile instruction from the tile instruction queue; when the cache lookup result included in the tile instruction is a cache hit, it reads the corresponding tile from the cache according to the cache address included in the instruction, and when the cache lookup result is a cache miss, it reads the corresponding tile from the ring buffer. In this way, whether or not the cache hits, the tile corresponding to the tile instruction can be read quickly. Taking fig. 3 as an example, the tile reading module may read tiles from the cache or from the ring buffer.
In addition, for a tile whose cache lookup result is a cache miss, the cache control module writes the tile into the cache according to the cache address included in the corresponding tile instruction. If data is already stored at that cache address, the stored data is updated with the newly read tile; if no data is stored at that address, the read tile is written directly to the storage location corresponding to the cache address. Taking fig. 3 as an example, the tile reading module may write a tile into the cache. When a tile whose cache lookup result is a cache miss is written into the cache, the tag storage module is updated accordingly, that is, the tag corresponding to that tile is stored into the tag (tag) storage module.
In one possible implementation, the caching engine includes: a vertical expansion module and a tile array; the tile reading module is used for sending any one of the read tiles to the vertical expansion module; the vertical expansion module is used for reading the vertical expansion information corresponding to the tile from the region configuration queue, performing vertical expansion processing on the tile based on that information to obtain the vertical expansion result corresponding to the tile, and writing that result into the tile array.
To meet the vertical data-expansion requirements of the video decoding process, the tile reading module sends each tile it reads to the vertical expansion module (Vertical Expansion) included in the cache engine for vertical expansion processing. Taking fig. 3 as an example, the tile reading module sends tiles to the vertical expansion module.
For any received tile, the vertical expansion module reads the corresponding vertical expansion information from the region configuration queue, performs vertical expansion processing on the tile based on that information, and writes the resulting vertical expansion result into the Tile Array in the cache engine. Taking fig. 3 as an example, after the vertical expansion module performs vertical expansion processing on each tile, it writes the resulting vertical expansion result for each tile into the tile array.
In an example, for any tile located at the left or right edge of the image corresponding to the video data to be decoded, left/right edge extension is performed on the tile according to its vertical expansion information to extend the tile boundary, and the vertical expansion result corresponding to the tile is written into the tile array.
In an example, for any tile located at the top edge of the image corresponding to the video data to be decoded, the top row of the tile is vertically expanded according to the tile's vertical expansion information, and the vertical expansion result corresponding to the tile is written into the tile array.
In an example, for any tile located at the bottom edge of the image corresponding to the video data to be decoded, the bottom row of the tile is vertically expanded according to the tile's vertical expansion information, and the vertical expansion result corresponding to the tile is written into the tile array.
For any tile, the specific manner of vertical expansion based on the tile's vertical expansion information may be chosen according to actual requirements, which is not specifically limited in the present disclosure.
After vertical expansion processing has been executed for each tile in the target reference data block, the vertical expansion results corresponding to all tiles are written into the tile array.
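One common way to realize the edge expansion described above is row replication at the picture border. The sketch below assumes replication-style padding; the actual expansion rule carried in the vertical expansion information may differ.

```python
def vertical_expand(tile_rows, pad_top: int = 0, pad_bottom: int = 0):
    """Replicate the top/bottom rows of a tile at the picture edges."""
    rows = [list(r) for r in tile_rows]
    top = [list(rows[0]) for _ in range(pad_top)]      # copy top row up
    bottom = [list(rows[-1]) for _ in range(pad_bottom)]  # copy bottom row down
    return top + rows + bottom
```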
In an example, the tile array may be SRAM, but may also be in other memory forms, which is not specifically limited in this disclosure.
In one possible implementation, the caching engine includes: a horizontal expansion module and a block random access memory; the horizontal expansion module is used for reading the vertical expansion result corresponding to each tile from the tile array in raster scan order; for any tile, the horizontal expansion module reads the horizontal expansion information corresponding to the tile from the region configuration queue, performs horizontal expansion processing on the tile's vertical expansion result based on that information to obtain the target expansion result corresponding to the tile, and writes that result into the block random access memory.
To meet the horizontal data-expansion requirements of the video decoding process, the horizontal expansion module (Horizontal Expansion) included in the cache engine reads the vertical expansion result corresponding to each tile from the tile array in raster scan order, reads the corresponding horizontal expansion information from the region configuration queue, performs horizontal expansion processing on the vertical expansion result based on that information, and writes the resulting target expansion result into the block random access memory (Block Ram) in the cache engine. Taking fig. 3 as an example, after performing horizontal expansion processing on the vertical expansion result corresponding to each tile, the horizontal expansion module writes the resulting target expansion result for each tile into the block random access memory.
In one example, the goal of the horizontal expansion module is to shift pixels of the same line in the image corresponding to the video data to be decoded to the left/right and to perform horizontal expansion at the left and right edges of the image. The specific horizontal expansion manner adopted by the horizontal expansion module can be set flexibly according to actual needs and is not specifically limited in the present disclosure.
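Under the same replication assumption as for vertical expansion (an illustrative sketch, not the disclosed hardware rule), horizontal expansion at the left/right image edges can be applied per line:

```python
def horizontal_expand(line, pad_left: int = 0, pad_right: int = 0):
    """Replicate the first/last pixel of a line at the image edges."""
    pixels = list(line)
    return [pixels[0]] * pad_left + pixels + [pixels[-1]] * pad_right
```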
In one possible implementation, the caching engine includes: a reference data write agent module; the reference data write agent module is used for reading the target expansion result corresponding to each tile from the block random access memory and writing the target expansion result corresponding to each tile into the video decoding engine.
The reference data write agent module (Reference Write Agent) included in the cache engine serves as the data connection between the cache engine and the video decoding engine: it reads the target expansion result corresponding to each tile from the block random access memory and writes it into the video decoding engine.
The block random access memory may be a pair of ping-pong buffers that bridge the difference between the data format required by the video decoding engine and the tile data format used inside the cache engine. While the horizontal expansion module writes data into one of the ping-pong buffers, the data in the other buffer is read out and sent to the video decoding engine through the interface of the reference data write agent module.
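A minimal sketch of the ping-pong arrangement (class and method names are illustrative): while one buffer is being written, the other is available for reading, and a swap flips the roles.

```python
class PingPongBuffer:
    """Two buffers alternating between write and read roles."""

    def __init__(self):
        self.bufs = [[], []]
        self.write_idx = 0           # buffer currently being written

    def write(self, data):
        self.bufs[self.write_idx] = list(data)

    def swap(self):
        # The just-written buffer becomes the read buffer and vice versa.
        self.write_idx ^= 1

    def read(self):
        return self.bufs[self.write_idx ^ 1]   # the non-write buffer
```

This lets the producer (horizontal expansion) and the consumer (the video decoding engine) operate concurrently without contending for the same buffer.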
In an embodiment of the present disclosure, a video decoding system includes: a DMA engine, a cache engine, and a video decoding engine. The DMA engine determines the cache parameter information corresponding to the target reference data block required by the decoding process of the video data to be decoded and sends the cache parameter information to the cache engine; the cache engine then executes a cache lookup in its cache according to the cache parameter information and determines the cache lookup result.
Fig. 4 shows a flowchart of a video decoding method according to an embodiment of the present disclosure. The method may be applied to a video decoding system shown in fig. 2 and/or 3, the video decoding system including: a DMA engine, a cache engine, a video decoding engine. The video decoding system shown in fig. 2 and/or 3 may be deployed in an electronic device, which may perform the method to control the operation of the video decoding system. As shown in fig. 4, the method may include:
In step S41, based on the DMA engine, cache parameter information corresponding to a target reference data block is determined, and the cache parameter information is sent to the cache engine, where the target reference data block is reference data required in the decoding process of the video data to be decoded.
In step S42, according to the cache parameter information, a cache lookup is performed in a cache in the cache engine, a cache lookup result is determined, and if the cache lookup result is a cache hit, a target reference data block is read from the cache and sent to the video decoding engine.
In step S43, the video decoding engine is controlled to perform video decoding processing on the video data to be decoded using the target reference data block.
The details of the video decoding process may refer to the details of the above related embodiments, which are not described herein.
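Steps S41-S43 can be sketched end to end as follows. The dict-based cache and external memory stand in for the cache engine and the DDR, and the decode step is a placeholder; all names are illustrative.

```python
def decode_with_cache(block_id, cache: dict, external_mem: dict):
    """Sketch of the method flow: determine the parameter info (S41),
    perform the cache lookup and fetch on miss (S42), then hand the
    reference data to the decoding step (S43, placeholder)."""
    params = block_id                    # S41: DMA-side parameter info
    if params in cache:                  # S42: cache hit -> read from cache
        ref = cache[params]
    else:                                # S42: miss -> fetch and fill cache
        ref = external_mem[params]
        cache[params] = ref
    return ref                           # S43: decoder consumes the reference
```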
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from their principles and logic; for brevity, such combinations are not described in this disclosure. Those skilled in the art will appreciate that, in the above methods of the embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure further provides an electronic device, a computer-readable storage medium, and a program, all of which may be used to implement any video decoding method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the method sections, which will not be repeated.
The method has a specific technical association with the internal structure of the computer system and can solve the technical problem of improving hardware operation efficiency or execution effect (including reducing data storage, reducing data transmission, and improving hardware processing speed), thereby obtaining a technical effect of improving the internal performance of the computer system in conformity with the laws of nature.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 5 shows a block diagram of an electronic device, according to an embodiment of the disclosure. Referring to fig. 5, an electronic device 1900 may be provided as a server or terminal device. Referring to FIG. 5, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output interface 1958. The electronic device 1900 may operate an operating system stored in memory 1932, such as Microsoft's server operating system (Windows Server™), Apple's graphical-user-interface-based operating system (Mac OS X™), the multi-user multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, a mechanical encoding device such as a punch card or raised-in-groove structure having instructions stored thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber-optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), with state information of the computer-readable program instructions; the electronic circuitry can execute the computer-readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the various embodiments emphasizes the differences between them; for their identical or similar aspects, the embodiments may refer to one another, which, for brevity, will not be repeated herein.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
If the technical scheme of the application relates to personal information, the product applying the technical scheme of the application clearly informs the personal information processing rule before processing the personal information, and obtains independent consent of the individual. If the technical scheme of the application relates to sensitive personal information, the product applying the technical scheme of the application obtains individual consent before processing the sensitive personal information, and simultaneously meets the requirement of 'explicit consent'. For example, a clear and remarkable mark is set at a personal information acquisition device such as a camera to inform that the personal information acquisition range is entered, personal information is acquired, and if the personal voluntarily enters the acquisition range, the personal information is considered as consent to be acquired; or on the device for processing the personal information, under the condition that obvious identification/information is utilized to inform the personal information processing rule, personal authorization is obtained by popup information or a person is requested to upload personal information and the like; the personal information processing rule may include information such as a personal information processor, a personal information processing purpose, a processing mode, and a type of personal information to be processed.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (13)

1. A video decoding system, comprising: a DMA engine, a cache engine, and a video decoding engine;
the DMA engine is configured to determine cache parameter information corresponding to a target reference data block and send the cache parameter information to the cache engine, wherein the target reference data block is reference data required for decoding video data to be decoded, and the correspondence between the video data to be decoded and the target reference data block is determined during video encoding;
the cache engine is configured to perform a cache lookup in a cache within the cache engine according to the cache parameter information to determine a cache lookup result, and, when the cache lookup result is a cache hit, read the target reference data block from the cache and send the target reference data block to the video decoding engine;
the video decoding engine is configured to perform video decoding processing on the video data to be decoded using the target reference data block;
the minimum storage unit of the cache is a tile; the target reference data block comprises a plurality of tiles; and the cache engine comprises a cache control module;
the cache control module is configured to determine, for any tile in the target reference data block, a tag corresponding to the tile according to start position information corresponding to the tile and identification information corresponding to the target reference data block, wherein the tag corresponding to the tile is used to perform a cache lookup for the tile.
2. The system of claim 1, wherein the cache engine comprises: an information analysis module;
the information analysis module is configured to parse the cache parameter information to obtain a control instruction corresponding to each tile contained in the target reference data block, and send the control instruction corresponding to each tile to the cache control module;
the cache control module is configured to perform a cache lookup for each tile according to the control instruction corresponding to that tile, and determine a cache lookup result corresponding to each tile.
3. The system of claim 2, wherein the control instruction corresponding to each tile includes start position information corresponding to the tile and identification information corresponding to the target reference data block; and the cache engine comprises a tag storage module configured to store a plurality of tags;
the cache control module is configured to match, for any tile, the tag corresponding to the tile against the tags stored in the tag storage module, and determine a cache lookup result corresponding to the tile.
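The hit/miss decision in claim 3 is a match of a tile's tag against the stored tags. The toy tag store below models that bookkeeping; the entry count and the trivial eviction policy (reuse way 0) are illustrative assumptions, not the patented design:

```python
class TagStore:
    """Toy fully-associative tag store: a lookup compares a tile's tag
    against every stored tag and reports hit or miss."""

    def __init__(self, num_ways: int):
        self.tags = [None] * num_ways  # one tag slot per cache way

    def lookup(self, tag):
        """Return (hit, way). On a miss the tag is installed in a free
        way, or way 0 as a naive victim choice."""
        if tag in self.tags:
            return True, self.tags.index(tag)
        way = self.tags.index(None) if None in self.tags else 0
        self.tags[way] = tag
        return False, way

store = TagStore(num_ways=4)
hit1, _ = store.lookup(0xAB012)  # first access: miss, tag installed
hit2, _ = store.lookup(0xAB012)  # repeated access: hit
```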
4. The system of claim 3, wherein the cache engine comprises: a tile instruction queue and an area configuration queue;
the cache control module is configured to determine a tile instruction corresponding to each tile according to the cache lookup result corresponding to the tile, and send the tile instruction corresponding to each tile to the tile instruction queue, wherein the tile instruction corresponding to each tile includes the cache lookup result and the cache address corresponding to the tile;
the cache control module is configured to determine vertical expansion information and horizontal expansion information corresponding to each tile according to the start position information corresponding to the tile, and send the vertical expansion information and the horizontal expansion information corresponding to each tile to the area configuration queue.
5. The system of claim 4, wherein the cache engine comprises: a tile reading module;
the tile reading module is configured to read a tile instruction from the tile instruction queue, and, when the cache lookup result included in the tile instruction is a cache hit, read the tile corresponding to the tile instruction from the cache according to the cache address included in the tile instruction.
6. The system of claim 4, wherein the cache engine comprises: a data acquisition module;
the cache control module is configured to, when the cache lookup result corresponding to any tile is a cache miss, generate a tile read request corresponding to the tile and send the tile read request to a tile read request queue in the data acquisition module;
the data acquisition module is configured to read a tile read request from the tile read request queue when the tile read request queue is not empty and a ring buffer in the data acquisition module has storage space for at least one tile;
the data acquisition module is configured to read, based on the tile read request, the tile corresponding to the tile read request from an external storage device corresponding to the video decoding system, and store the tile in the ring buffer.
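Claim 6 gates external reads on two conditions: a pending request exists and the ring buffer can hold at least one more tile. A minimal simulation of that flow control — the tile size, buffer capacity, and dict-as-DRAM stand-in are illustrative assumptions:

```python
from collections import deque

TILE_BYTES = 4  # assumed tile size for illustration only


class RingBuffer:
    """Minimal ring-buffer stand-in that tracks only occupancy."""

    def __init__(self, capacity_tiles: int):
        self.capacity = capacity_tiles * TILE_BYTES
        self.entries = deque()

    def free_bytes(self) -> int:
        return self.capacity - sum(len(e) for e in self.entries)

    def push(self, tile: bytes) -> None:
        assert len(tile) <= self.free_bytes()
        self.entries.append(tile)


def drain_read_requests(read_queue, ring, external_memory):
    """Issue external reads only while the request queue is non-empty AND
    the ring buffer has room for at least one tile, as claim 6 requires."""
    while read_queue and ring.free_bytes() >= TILE_BYTES:
        tag = read_queue.popleft()
        ring.push(external_memory[tag])  # stands in for a DRAM read


external_memory = {1: b"AAAA", 2: b"BBBB", 3: b"CCCC"}
ring = RingBuffer(capacity_tiles=2)
queue = deque([1, 2, 3])
drain_read_requests(queue, ring, external_memory)
# Only two tiles fit; request 3 stays queued until space frees up.
```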
7. The system of claim 6, wherein the cache engine comprises: a tile reading module;
the tile reading module is configured to read a tile instruction from the tile instruction queue, and, when the cache lookup result included in the tile instruction is a cache miss, read the tile corresponding to the tile instruction from the ring buffer and write the tile into the cache according to the cache address included in the tile instruction.
8. The system of claim 5 or 7, wherein the cache engine comprises: a vertical expansion module and a tile array;
the tile reading module is configured to send each read tile to the vertical expansion module;
the vertical expansion module is configured to, for any tile, read the vertical expansion information corresponding to the tile from the area configuration queue, perform vertical expansion processing on the tile based on that information to obtain a vertical expansion result corresponding to the tile, and write the vertical expansion result corresponding to the tile into the tile array.
9. The system of claim 8, wherein the cache engine comprises: a horizontal expansion module and a block random access memory;
the horizontal expansion module is configured to read the vertical expansion result corresponding to each tile from the tile array in raster scan order;
the horizontal expansion module is configured to, for any tile, read the horizontal expansion information corresponding to the tile from the area configuration queue, perform horizontal expansion processing on the vertical expansion result corresponding to the tile based on that information to obtain a target expansion result corresponding to the tile, and write the target expansion result corresponding to the tile into the block random access memory.
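Claims 8 and 9 describe a vertical-then-horizontal expansion pass over each tile. If the expansion is edge replication (the usual padding when a reference block reaches past a picture boundary — an assumption on our part, since the claims do not fix the expansion rule), it can be sketched as:

```python
def extend_vertical(tile, pad_top, pad_bottom):
    """Replicate the tile's first/last rows to grow it vertically."""
    return [tile[0]] * pad_top + tile + [tile[-1]] * pad_bottom


def extend_horizontal(rows, pad_left, pad_right):
    """Replicate each row's first/last samples to grow it horizontally."""
    return [[r[0]] * pad_left + r + [r[-1]] * pad_right for r in rows]


tile = [[1, 2],
        [3, 4]]
# Vertical expansion first, then horizontal, matching the module order
# in claims 8 and 9.
padded = extend_horizontal(extend_vertical(tile, pad_top=1, pad_bottom=0),
                           pad_left=0, pad_right=1)
# padded == [[1, 2, 2], [1, 2, 2], [3, 4, 4]]
```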
10. The system of claim 9, wherein the cache engine comprises: a reference data write agent module;
the reference data write agent module is configured to read the target expansion result corresponding to each tile from the block random access memory and write the target expansion result corresponding to each tile into the video decoding engine.
11. A video decoding method, applied to a video decoding system comprising: a DMA engine, a cache engine, and a video decoding engine, the method comprising:
determining, by the DMA engine, cache parameter information corresponding to a target reference data block, and sending the cache parameter information to the cache engine, wherein the target reference data block is reference data required for decoding video data to be decoded, and the correspondence between the video data to be decoded and the target reference data block is determined during video encoding;
performing, according to the cache parameter information, a cache lookup in a cache within the cache engine to determine a cache lookup result, and, when the cache lookup result is a cache hit, reading the target reference data block from the cache and sending the target reference data block to the video decoding engine;
controlling the video decoding engine to perform video decoding processing on the video data to be decoded using the target reference data block;
wherein the minimum storage unit of the cache is a tile; the target reference data block comprises a plurality of tiles; and the cache engine comprises a cache control module;
the method further comprising:
determining, by the cache control module, for any tile in the target reference data block, a tag corresponding to the tile according to start position information corresponding to the tile and identification information corresponding to the target reference data block, wherein the tag corresponding to the tile is used for a cache lookup for the tile.
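The method of claim 11 amounts to: per-tile lookup, fill from external memory on a miss, then feed the assembled reference block to the decoder. A toy end-to-end model of that hit/miss bookkeeping — the dict-backed cache and "decode by concatenation" are illustrative stand-ins, not the claimed hardware:

```python
class TileCache:
    """Toy cache: a dict stands in for the tag store + data RAM, and
    `memory` stands in for the external storage device."""

    def __init__(self, memory):
        self.memory = memory
        self.lines = {}
        self.hits = 0
        self.misses = 0

    def fetch(self, tag):
        if tag in self.lines:
            self.hits += 1               # cache hit: serve from the cache
        else:
            self.misses += 1             # cache miss: fill from external memory
            self.lines[tag] = self.memory[tag]
        return self.lines[tag]


def decode_reference_block(tile_tags, cache):
    """Gather every tile of the target reference data block; a real system
    would hand the assembled data to the video decoding engine."""
    return b"".join(cache.fetch(t) for t in tile_tags)


memory = {10: b"A", 11: b"B"}
cache = TileCache(memory)
data = decode_reference_block([10, 11, 10], cache)  # tile 10 is reused
```

The repeated tag illustrates the point of the design: the second access to tile 10 is served from the cache instead of external memory.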
12. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of claim 11.
13. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of claim 11.
CN202311865166.5A 2023-12-29 2023-12-29 Video decoding system and method, electronic equipment and storage medium Active CN117499663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311865166.5A CN117499663B (en) 2023-12-29 2023-12-29 Video decoding system and method, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN117499663A CN117499663A (en) 2024-02-02
CN117499663B true CN117499663B (en) 2024-03-15

Family

ID=89678561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311865166.5A Active CN117499663B (en) 2023-12-29 2023-12-29 Video decoding system and method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117499663B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107846597A (en) * 2016-09-20 2018-03-27 上海澜至半导体有限公司 Data cache method and device for Video Decoder
WO2022115999A1 (en) * 2020-12-01 2022-06-09 深圳市大疆创新科技有限公司 Data processing method and data processing device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8593469B2 (en) * 2006-03-29 2013-11-26 Nvidia Corporation Method and circuit for efficient caching of reference video data



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant