CN113596506B - Performance optimization method and system for live cache, electronic device and storage medium


Info

Publication number
CN113596506B
CN113596506B (application CN202110911359.4A)
Authority
CN
China
Prior art keywords
data
frame
memory block
live
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110911359.4A
Other languages
Chinese (zh)
Other versions
CN113596506A (en)
Inventor
杨大维
徐小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AVIT Ltd
Original Assignee
AVIT Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AVIT Ltd filed Critical AVIT Ltd
Priority to CN202110911359.4A
Publication of CN113596506A
Application granted
Publication of CN113596506B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106 Content storage operation involving caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331 Caching operations, e.g. of an advertisement for later insertion during playback

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a performance optimization method, system, electronic device and storage medium for a live cache. The method comprises the following steps: providing a calling interface for a live application, wherein the calling interface is used for receiving instructions of the live application to read or write the cache; allocating a group of memory blocks to each live channel of the live application, wherein the memory blocks are used for storing the most recently acquired live data of the corresponding live channel; providing a corresponding virtual file for each live channel, wherein the virtual file is used for encapsulating the live data of the corresponding live channel; receiving a write-cache instruction of the live application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file; or, receiving a read-cache instruction of the live application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file. The invention simplifies use of the cache, improves the concurrency performance of live broadcasting, and reduces live latency.

Description

Performance optimization method and system for live cache, electronic device and storage medium
Technical Field
The present invention relates to the field of cache optimization technologies, and in particular, to a method, a system, an electronic device, and a storage medium for optimizing performance of a live cache.
Background
With the development of technology, live broadcasting has become an indispensable part of daily life, for example online education, video conferencing, large-scale gala broadcasts, training broadcasts and live event coverage.
As network bandwidth increases, more and more viewers watch live streams, so live systems face higher concurrency and stricter latency requirements.
In some existing live systems the cache subsystem is not general-purpose, and read and write operations from the application layer are inconvenient, which results in poor live concurrency performance.
Disclosure of Invention
The main object of the present invention is to provide a performance optimization method, system, electronic device and storage medium for a live cache, so as to solve the problem in the prior art that live concurrency performance is poor because the cache system is not general-purpose and read and write operations from the application layer are inconvenient.
In order to achieve the above object, a first aspect of the present invention provides a method for optimizing performance of a live cache, which is characterized by comprising: providing a calling interface for a live broadcast application, wherein the calling interface is used for receiving an instruction of the live broadcast application for reading or writing a cache; distributing a group of memory blocks to each live broadcast channel of the live broadcast application, wherein the memory blocks are used for storing live broadcast data which are newly acquired by the corresponding live broadcast channel; providing a corresponding virtual file for each live channel, wherein the virtual file is used for packaging live data of the corresponding live channel; receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file; or receiving a read cache instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file.
Wherein writing cache data into the memory block corresponding to the virtual file includes: acquiring one frame of the latest live data to be written; determining the memory block position into which the frame of data should be written; writing the frame of data into the corresponding memory block according to the memory block position; and writing each frame of the latest live data into the memory blocks in time order.
Wherein writing the frame of data into the memory block includes: acquiring the write index value and a first value, namely the size of the frame data array, for the frame of data to be written; dividing the write index value by the first value and taking the remainder of the quotient as the numbered position of the storage area of the memory block into which the frame of data should be written; and writing the frame of data into the memory block corresponding to the numbered position.
After one frame of data is written into the memory block, writing cache data into the memory block corresponding to the virtual file further includes: when the written frame of data is pointed to a new memory block through the frame data array, setting the frame type at the written position using the frame type array, recording the frame type of the memory block at that position, releasing the old memory block previously pointed to by the frame data array for the already-written live data, and incrementing the write index value by one.
Wherein, the reading the cache data from the memory block corresponding to the virtual file includes: determining the memory block position of a frame of data to be read; reading the frame data in the corresponding memory block according to the memory block position to which the frame data belongs; and reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.
Determining the memory block position of the frame of data to be read includes: recording a read index value of the frame of data to be read, wherein the initial value of the read index value is zero; computing, for the frame indicated by the read index value, the time difference between when the data was written to and read from the cache, to obtain delay data; judging, from the delay data and a preset frame-skip threshold, whether to skip frames when reading the live data; if so, determining the position corresponding to the frame of data to be read after frame skipping; if not, determining the position corresponding to the frame of data currently being read.
Determining the position corresponding to the frame of data to be read after frame skipping includes: acquiring the adjacent frame closest to the frame of data currently being read; assigning the position corresponding to that adjacent frame to the read index value, and calculating the position corresponding to the frame of data to be read from the read index value and the frame data array. After one frame of data is read from the memory block, reading cache data from the memory block corresponding to the virtual file further includes: incrementing the read index value by one so that the next frame of data can be read from the memory block corresponding to the virtual file.
A second aspect of the present application provides a performance optimization system for live cache, including: the application interface layer module is used for providing a calling interface for the live broadcast application, and the calling interface is used for receiving an instruction of the live broadcast application for reading or writing the cache; the memory channel layer module is used for distributing a group of memory blocks to each live channel of the live application, and the memory blocks are used for storing live data which are newly acquired by the corresponding live channel; the virtual file layer module is used for providing a corresponding virtual file for each live channel; the data writing module is used for receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file and writing cache data into the memory block corresponding to the virtual file; and the data reading module is used for receiving the read cache instruction of the live broadcast application through the calling interface, opening the virtual file and reading cache data from the memory block corresponding to the virtual file.
A third aspect of the present application provides an electronic device, comprising: the system comprises a memory and a processor, wherein the memory is stored with a computer program which can run on the processor, and the processor realizes the performance optimization method of the live cache in any one of the above modes when executing the computer program.
A fourth aspect of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for optimizing performance of a live cache as described in any of the above.
The performance optimization method, system, electronic device and storage medium for a live cache provided by the invention have the following beneficial effects: a unified calling interface is provided, which is convenient for the application layer where the live application is located to use and improves the maintainability and reliability of the live system; the use of the cache is simplified, the concurrency of live broadcasting is improved, and live latency is reduced.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are necessary for the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention and that other drawings may be obtained from them without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for optimizing performance of a live cache according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of writing cache data into a memory block corresponding to a virtual file according to a performance optimization method of live cache according to an embodiment of the present application;
fig. 3 is a schematic flow chart of writing one frame of data into the corresponding memory block in the performance optimization method for a live cache according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of a method for optimizing performance of live cache according to an embodiment of the present application, in which cache data is read from a memory block corresponding to a virtual file;
FIG. 5 is a flowchart illustrating a method for determining a memory block of a frame of data to be read according to a performance optimization method for live cache according to an embodiment of the present application;
FIG. 6 is a block diagram illustrating a performance optimization system for live cache according to an embodiment of the present application;
fig. 7 is a schematic block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention will be clearly described in conjunction with the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, a method for optimizing performance of a live cache includes:
s101, providing a calling interface for a live broadcast application, wherein the calling interface is used for receiving an instruction of the live broadcast application for reading or writing a cache;
s102, distributing a group of memory blocks to each live channel of a live application, wherein the memory blocks are used for storing live data which are newly acquired by the corresponding live channel;
s103, providing a corresponding virtual file for each live channel, wherein the virtual file is used for packaging live data of the corresponding live channel;
s104, receiving a write cache instruction of the live broadcast application through a calling interface, opening a virtual file, and writing cache data into a memory block corresponding to the virtual file;
s105, receiving a read cache instruction of the live broadcast application through a calling interface, opening the virtual file, and reading cache data from a memory block corresponding to the virtual file.
In step S101, at least four call interfaces are provided, namely the Open, Write, Read and Close interfaces.
The Open interface is used to open a virtual file; the URL of the live channel is passed in as a parameter, and a virtual file ID is returned after the file is opened successfully.
The Read interface is used to read the latest data of the live channel; the virtual file ID is passed in, and one complete frame block of data is returned. When the application layer does not call the Read interface in time and the data falls behind, the live cache system automatically skips the stale data and resumes from the latest position, so that low live latency is guaranteed.
The Write interface is used to write the latest live data; the virtual file ID and one complete frame block are passed in.
The Close interface is used to close the virtual file and release its resources.
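By way of illustration only, a minimal C++ sketch of such a call interface is given below; the class name, parameter types and return conventions are assumptions made for this sketch rather than the actual interface of the cache system.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical frame container: one complete frame plus its frame type.
struct FrameBlock {
    std::vector<uint8_t> data;   // one complete frame of encoded data
    int                  type;   // frame type, e.g. I, P or B
};

// Hypothetical shape of the four call interfaces described above.
class LiveCacheApi {
public:
    virtual ~LiveCacheApi() = default;

    // Open: pass the live channel URL, receive a virtual file ID (negative on failure).
    virtual int64_t Open(const std::string& channelUrl) = 0;

    // Write: store one complete frame block for the given virtual file.
    virtual bool Write(int64_t fileId, const FrameBlock& frame) = 0;

    // Read: return the newest complete frame block; if the caller lags,
    // stale frames may be skipped so the stream stays close to the live edge.
    virtual bool Read(int64_t fileId, FrameBlock* outFrame) = 0;

    // Close: close the virtual file and release its resources.
    virtual void Close(int64_t fileId) = 0;
};
```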
In step S102, a memory channel (MemChannel) is created first. The memory channel holds two fixed-size arrays, FData (the frame data array) and FType (the frame type array). FType is an array of int values that records the video frame type (I, P or B frame); FData is an array of Block pointers that point to Block cache blocks, and each Block cache block buffers one complete frame of data.
The memory channel maintains a write index value write_idx that indicates where the video data is written; when data is written, write_idx must be converted into an index of the array. For example, if the array size is 100 and the current write_idx is 120, the write position is the remainder of 120 divided by 100, i.e. position 20: position 20 of the FData array is pointed to a new Block, and the Block originally at that position is marked as a free block.
The FType array records the frame type of the FData entry at each position point and is used to search for an I frame when frames are dropped.
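For illustration, the following C++ sketch shows one possible in-memory layout for MemChannel with the FData and FType arrays and the write index described above; the Block definition and the default array size of 100 are assumptions made for this sketch.

```cpp
#include <cstdint>
#include <memory>
#include <vector>

// One Block cache block buffers one complete frame of data.
struct Block {
    std::vector<uint8_t> bytes;
};

// Sketch of a memory channel: fixed-size FData / FType arrays plus a write index.
struct MemChannel {
    std::vector<std::shared_ptr<Block>> FData;  // Block pointers, one per slot
    std::vector<int>                    FType;  // frame type per slot, e.g. 1=I, 2=P, 3=B
    uint64_t write_idx = 0;                     // monotonically increasing write index

    explicit MemChannel(size_t slots = 100) : FData(slots), FType(slots, 0) {}

    // Map the ever-growing index onto a slot, e.g. 120 % 100 == 20.
    size_t SlotFor(uint64_t idx) const { return static_cast<size_t>(idx % FData.size()); }
};
```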
In step S103, the virtual file layer provides the virtual files; every user who wants to read or write the data of a live channel needs to open a virtual file and operate through the returned file ID.
In steps S104 and S105, data is transmitted through the network transport layer, for example in the chunked mode of HTTP: one frame of data is packed into one chunk, so when the receiving end receives a chunk it knows that it contains exactly one frame of data and knows the frame type; the receiving end therefore does not need to re-align frame data or re-detect frame types, which improves the efficiency of the receiving end. When a chunk is transmitted, the last byte of the chunk data is used to record the frame type: the sending end appends this byte when sending chunk data, and the receiving end determines the frame type from this byte when receiving data and removes the byte before the frame is stored in a Block.
Because the network transport layer transmits the live data frame by frame together with its frame type, live latency can be reduced.
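As a small illustration of the convention just described, namely carrying the frame type in the last byte of each chunk, the following C++ sketch shows the byte packing on the sending side and the unpacking on the receiving side; the HTTP chunked transfer itself is not shown, and the function names are assumptions made for this sketch.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Sender side: append one byte carrying the frame type to the frame payload.
std::vector<uint8_t> PackChunk(const std::vector<uint8_t>& frame, uint8_t frameType) {
    std::vector<uint8_t> chunk = frame;
    chunk.push_back(frameType);          // last byte of the chunk records the frame type
    return chunk;
}

// Receiver side: read the frame type from the last byte and strip it before
// the frame is stored in a Block.
std::pair<std::vector<uint8_t>, uint8_t> UnpackChunk(std::vector<uint8_t> chunk) {
    uint8_t frameType = chunk.empty() ? 0 : chunk.back();
    if (!chunk.empty()) chunk.pop_back();
    return {std::move(chunk), frameType};
}
```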
In this embodiment, the live data to be cached may be audio/video data encapsulated in a TS container, or raw data encapsulated in some other format. In addition, the performance optimization method for the live cache provided in this embodiment may be deployed on a Windows, Linux or Unix system of the electronic device; preferably, the method of this embodiment is deployed on a Linux system.
In the embodiment, a unified calling interface is provided, so that the application layer where the live broadcast application is located is convenient to use, maintainability and reliability of a live broadcast system are improved, the use of a cache can be simplified, concurrency performance of live broadcast is improved, and live broadcast delay is reduced.
In addition, because HTTP chunked transfer is used to send data as video frames together with their frame types, the CPU cost of reassembling and parsing frames at the receiving end is reduced, which improves system performance and concurrency.
Referring to fig. 2, in step S104, writing cache data into a memory block corresponding to a virtual file includes:
s1041, acquiring a frame of data to be written for the latest live broadcast data;
s1042, determining the memory block position where a frame of written data should be written;
s1043, writing a corresponding frame of data in a corresponding memory block according to the memory block position;
s1044, writing each frame of data of the latest live broadcast data into the memory block according to the time sequence.
A virtual file is opened first, with the URL of the channel passed in as the only parameter. After the file is opened successfully, a file ID is returned, and this file ID is carried in subsequent operations. The VirtualFile is associated with one MemChannel.
The Write interface is then called to write one frame of data, with the Block and the frame type as input parameters.
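Under the same hypothetical C++ interface sketched earlier, the open-then-write flow just described might be exercised as follows; the URL and the frame type value are placeholders for this sketch.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Assumes the LiveCacheApi / FrameBlock sketch shown earlier.
void WriteOneFrame(LiveCacheApi& cache, const std::vector<uint8_t>& encodedFrame) {
    const int64_t fileId = cache.Open("http://example.invalid/live/channel1");
    if (fileId < 0) return;                       // open failed

    FrameBlock block;
    block.data = encodedFrame;                    // one complete frame
    block.type = 1;                               // e.g. treat this frame as an I frame
    cache.Write(fileId, block);                   // input parameters: Block and frame type

    cache.Close(fileId);                          // release the virtual file
}
```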
Referring to fig. 3, in one embodiment, step S1043, writing the frame data in the memory block includes:
s10431, obtaining a first numerical value of a writing index value and a frame number group size of one frame of data to be written;
s10432, using the written index value to obtain a remainder of the quotient as the number position of a storage area of a memory block to which a frame of data to be written should be written;
s10433, writing the frame data into the memory block corresponding to the number position.
MemChannel determines the write position through write_idx. Specifically, when implementing step S10432, assuming for example that the FData array size is 100 and the current write_idx is 120, the write position is the remainder of 120 divided by 100, i.e. 20, and 20 is the numbered position of the storage area of the memory block into which the frame of data should be written.
In one embodiment, in step S104, after writing one frame of data into the memory block, writing the cache data into the memory block corresponding to the virtual file further includes:
s1044, when a frame data written in is pointed to a new memory block according to the frame array, setting the position of the written memory block as a new frame type by using the frame type array, recording the frame type of the memory block at the position, releasing the frame array in the old memory block pointed to by the live broadcast data already written in, and adding one to the writing index value.
Referring to fig. 4, in one embodiment, in step S105, reading cache data from a memory block corresponding to a virtual file includes:
s1051, determining the memory block position of a frame of data to be read;
s1052, reading a frame of data in a corresponding memory block according to the memory block position to which the frame of data belongs;
s1053, reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.
In this embodiment, a virtual file is opened with the URL of the channel passed in as the only parameter. After the file is opened successfully, a file ID is returned, and this file ID is carried in subsequent operations. The VirtualFile is associated with one MemChannel. The Read interface is then called to read one frame of data.
Referring to fig. 5, in one embodiment, step S1052, determining the memory block location of a frame of data to be read includes:
s10521, recording a read index value of a frame of data to be read, wherein the initial value of the read index value is zero;
s10522, calculating a frame of data of the read index value, and performing time difference between writing cache and reading cache to obtain delay data;
s10523, judging whether to skip frames to read live broadcast data or not through the delay data and a preset frame skip threshold;
s10524, if yes, determining a position corresponding to frame data to be read after frame skipping;
s10525, if not, determining the position corresponding to the currently read frame data.
The VirtualFile maintains a read index read_idx, whose initial value is 0. The VirtualFile reads data from the MemChannel through read_idx.
In step S10523, memChannel determines the delay by read_idx and write_idx. If write_idx minus read_idx is equal to 10, this indicates a delay of 10 frames of data.
Whether the delay is greater than a configured threshold is then judged; if so, frame skipping is started. Frame skipping keeps live latency consistently low, thereby reducing the delay of the live stream.
In one embodiment, in step S10524, determining the position corresponding to the frame of data to be read after frame skipping includes: acquiring the adjacent frame closest to the frame of data currently being read; assigning the position corresponding to that adjacent frame to the read index value, and calculating the position corresponding to the frame of data to be read from the read index value and the frame data array.
When frame skipping, an I-frame data closest to the write_idx position is found and the position is assigned to read_idx.
In this embodiment, when network conditions are poor, a fast frame-dropping mode is adopted, which reduces live latency while avoiding mosaic artifacts and improves the user experience.
The position corresponding to the frame of data to be read is calculated from the read index value and the frame data array in the same way as for the write cache, and may specifically include: acquiring the read index value and a second value, namely the size of the frame data array; dividing the read index value by the second value and taking the remainder of the quotient as the numbered position of the storage area of the memory block from which the frame of data should be read; and reading the live data from the memory block corresponding to that numbered position.
In step S105, after reading one frame of data from the memory block, reading the cache data from the memory block corresponding to the virtual file further includes: s1054, adding one to the read index value to read the cache data from the memory block corresponding to the virtual file for the next frame of data.
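The read path with frame skipping (steps S10521 to S1054) could then be sketched as below, again reusing the hypothetical MemChannel from the earlier sketch. The frame-count delay check, the default threshold of 10 frames and the I-frame type code are assumptions made for this sketch; the embodiment also describes measuring delay as a write/read time difference.

```cpp
#include <cstdint>
#include <vector>

// Uses the MemChannel / Block sketch shown earlier.
bool ReadFrame(MemChannel& ch, uint64_t& read_idx, std::vector<uint8_t>& out,
               uint64_t skipThresholdFrames = 10, int iFrameType = 1) {
    if (read_idx >= ch.write_idx) return false;            // nothing new to read yet

    // Delay expressed as a frame count: write_idx minus read_idx.
    if (ch.write_idx - read_idx > skipThresholdFrames) {
        // Frame skip: walk back from the newest written frame towards read_idx
        // and jump to the closest I frame, so playback stays near the live edge.
        for (uint64_t idx = ch.write_idx; idx-- > read_idx;) {
            if (ch.FType[ch.SlotFor(idx)] == iFrameType) { read_idx = idx; break; }
        }
    }

    // Same modulo mapping as the write path, then advance the read index.
    const auto& block = ch.FData[ch.SlotFor(read_idx)];
    if (!block) return false;
    out = block->bytes;
    read_idx += 1;
    return true;
}
```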
Referring to fig. 6, in an embodiment, the present application further provides a performance optimization system for live cache, including: an application interface layer module 1, a virtual file layer module 2, a memory channel layer module 3, a data writing module 4 and a data reading module 5; the application interface layer module 1 is used for providing a calling interface for the live broadcast application, and the calling interface is used for receiving an instruction of the live broadcast application for reading or writing the cache; the virtual file layer module 2 is used for providing a corresponding virtual file for each live channel; the memory channel layer module 3 is used for distributing a group of memory blocks to each live channel of the live application, and the memory blocks are used for storing live data which are newly acquired by the corresponding live channel; the data writing module 4 is used for receiving a write cache instruction of the live broadcast application through a calling interface, opening a virtual file, and writing cache data into a memory block corresponding to the virtual file; the data reading module is used for receiving a read cache instruction of the live broadcast application through the calling interface, opening the virtual file and reading cache data from the memory block corresponding to the virtual file.
In this embodiment, the network transport layer is used to transmit the live data, and the data writing module transmits the live data of a live channel to the receiving end, for example in the chunked mode of HTTP: one frame of data is packed into one chunk, so when the receiving end receives a chunk it knows that it contains exactly one frame of data and knows the frame type; the receiving end therefore does not need to re-align frame data or re-detect frame types, which improves the efficiency of the receiving end. When a chunk is transmitted, the last byte of the chunk data is used to record the frame type: the sending end appends this byte when sending chunk data, and the receiving end determines the frame type from this byte when receiving data and removes the byte before the frame is stored in a Block.
Because the network transport layer transmits the live data frame by frame together with its frame type, live latency can be reduced.
The performance optimization system of the live broadcast cache of the embodiment provides a unified calling interface, is convenient for the application layer where live broadcast applications are located to use, and improves maintainability and reliability of the live broadcast system, so that concurrency of live broadcast can be improved.
In one embodiment, the data writing module 4 comprises: a first data writing unit, a write-memory determining unit and a second data writing unit; the first data writing unit is used for acquiring one frame of the latest live data to be written; the write-memory determining unit is used for determining the memory block position into which the frame of data should be written; the second data writing unit is used for writing the frame of data into the corresponding memory block according to the memory block position, and for writing each frame of the latest live data into the memory blocks in time order.
In one embodiment, the second data writing unit includes: a value acquisition subunit, a first calculation subunit and a live data writing unit; the value acquisition subunit is used for acquiring the write index value and a first value, namely the size of the frame data array, for the frame of data to be written; the first calculation subunit is configured to divide the write index value by the first value and take the remainder of the quotient as the numbered position of the storage area of the memory block into which the frame of data should be written; and the live data writing unit is used for writing the frame of data into the memory block corresponding to the numbered position.
The data writing module 4 further includes: a frame type recording module, which is used for, when the written frame of data is pointed to a new memory block through the frame data array, setting the frame type at the written position using the frame type array, recording the frame type of the memory block at that position, releasing the old memory block previously pointed to by the frame data array for the already-written live data, and incrementing the write index value by one.
In one embodiment, the data reading module 5 comprises: a read memory determination unit and a second data reading unit; the read memory determining unit is used for determining the memory block position to which a frame of data to be read belongs; the second data reading unit is used for reading one frame of data in the corresponding memory block according to the memory block position to which the one frame of data belongs, and reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.
In one embodiment, the read-memory determination unit includes: a value recording subunit, a second calculation subunit, a judging subunit and a position determining subunit; the value recording subunit is used for recording a read index value of the frame of data to be read, the initial value of the read index value being zero; the second calculation subunit is used for computing, for the frame indicated by the read index value, the time difference between writing to and reading from the cache, to obtain delay data; the judging subunit is used for judging, from the delay data and a preset frame-skip threshold, whether to skip frames when reading the live data; the position determining subunit is used for determining the position corresponding to the frame of data to be read after frame skipping if the judging subunit judges that frames should be skipped, and for determining the position corresponding to the frame of data currently being read if the judging subunit judges that frames do not need to be skipped.
In one embodiment, the position determining subunit is further configured to: acquire the adjacent frame closest to the frame of data currently being read; assign the position corresponding to that adjacent frame to the read index value; and calculate the position corresponding to the frame of data to be read from the read index value and the frame data array.
In one embodiment, the data reading module 5 further includes an accumulation unit, configured to increment the read index value by one, so that the next frame of data can be read from the memory block corresponding to the virtual file.
Referring to fig. 7, an electronic device according to an embodiment of the present application includes: the system comprises a memory 601, a processor 602 and a computer program stored in the memory 601 and capable of running on the processor 602, wherein the processor 602 implements the performance optimization method of the live cache described in the foregoing description when executing the computer program.
Further, the electronic device further includes: at least one input device 603 and at least one output device 604.
The memory 601, the processor 602, the input device 603, and the output device 604 are connected via a bus 605.
The input device 603 may be a camera, a touch panel, a physical key, a mouse, or the like. The output device 604 may be, in particular, a display screen.
The memory 601 may be a high-speed random access memory (RAM) or a non-volatile memory, such as disk storage. The memory 601 is used for storing a set of executable program code, and the processor 602 is coupled to the memory 601.
Further, the embodiments of the present application also provide a computer readable storage medium, which may be provided in the electronic device in the foregoing embodiments, and the computer readable storage medium may be the memory 601 in the foregoing embodiments. The computer readable storage medium has stored thereon a computer program which, when executed by the processor 602, implements the method for optimizing performance of a live cache as described in the foregoing embodiments.
Further, the computer-readable medium may be any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disk.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present invention is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the present invention.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The foregoing has described in detail the performance optimization method, system, electronic device and storage medium for a live cache provided by the present invention; in accordance with the ideas of the embodiments of the present invention, the content of this specification should not be construed as limiting the present invention.

Claims (7)

1. A performance optimization method for a live cache, characterized by comprising the following steps:
providing a calling interface for a live broadcast application, wherein the calling interface is used for receiving an instruction of the live broadcast application for reading or writing a cache;
distributing a group of memory blocks to each live broadcast channel of the live broadcast application, wherein the memory blocks are used for storing live broadcast data which are newly acquired by the corresponding live broadcast channel;
providing a corresponding virtual file for each live channel, wherein the virtual file is used for packaging live data of the corresponding live channel;
receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file;
or, receiving a read cache instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file;
the writing the cache data into the memory block corresponding to the virtual file includes: acquiring a frame of data to be written for the latest live broadcast data; determining the memory block position where a frame of data to be written should be written; writing corresponding frame data in the corresponding memory block according to the memory block position; writing each frame of data of the latest live broadcast data into the memory block according to the time sequence;
the writing the corresponding frame data in the corresponding memory block according to the memory block position comprises: acquiring the write index value and a first value, namely the size of the frame data array, for the frame of data to be written; dividing the write index value by the first value to obtain the remainder of the quotient, the remainder being used as the numbered position of the storage area of the memory block into which the frame of data to be written is written; and writing the frame of data into the memory block corresponding to the numbered position;
after writing a corresponding frame of data into a corresponding memory block according to the memory block position, writing cache data into the memory block corresponding to the virtual file further includes: when the written frame of data is pointed to a new memory block through the frame data array, setting the frame type at the written position of the memory block using the frame type array, recording the frame type of the memory block at the position, releasing the old memory block previously pointed to by the frame data array for the already-written live data, and adding one to the write index value.
2. The method for optimizing performance of a live cache as claimed in claim 1, wherein,
the reading the cache data from the memory block corresponding to the virtual file includes:
determining the memory block position of a frame of data to be read;
reading the frame data in the corresponding memory block according to the memory block position to which the frame data belongs;
and reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.
3. The method for optimizing performance of a live cache as claimed in claim 2, wherein,
the determining the memory block position to which the frame data to be read belongs includes:
recording a read index value of a frame of data to be read, wherein the initial value of the read index value is zero;
computing, for the frame indicated by the read index value, the time difference between when the data was written to and read from the cache, to obtain delay data;
judging whether to skip frames to read the live broadcast data or not through the delay data and a preset frame skip threshold;
if yes, determining a position corresponding to one frame of data to be read after frame skipping;
if not, determining the position corresponding to the currently read frame data.
4. The method for optimizing performance of a live cache as claimed in claim 3, wherein,
the determining the position corresponding to the frame data to be read after frame skipping comprises the following steps: acquiring the adjacent frame closest to the frame of data currently being read; assigning the position corresponding to the adjacent frame to the read index value, and calculating the position corresponding to the frame of data to be read from the read index value and the frame data array;
after reading a frame of data from the memory block, the reading cache data from the memory block corresponding to the virtual file further includes: and adding one to the read index value to read cache data from the memory block corresponding to the virtual file for the next frame of data.
5. A performance optimization system for live cache, comprising:
the application interface layer module is used for providing a calling interface for the live broadcast application, and the calling interface is used for receiving an instruction of the live broadcast application for reading or writing the cache;
the virtual file layer module is used for providing a corresponding virtual file for each live channel;
the memory channel layer module is used for distributing a group of memory blocks to each live channel of the live application, and the memory blocks are used for storing live data which are newly acquired by the corresponding live channel;
the data writing module is used for receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file and acquiring a frame of the latest live data to be written; determining the memory block position into which the frame of data to be written should be written; acquiring the write index value and a first value, namely the size of the frame data array, for the frame of data to be written; dividing the write index value by the first value to obtain the remainder of the quotient, the remainder being used as the numbered position of the storage area of the memory block into which the frame of data to be written is written; writing the frame of data into the memory block corresponding to the numbered position; when the written frame of data is pointed to a new memory block through the frame data array, setting the frame type at the written position of the memory block using the frame type array, recording the frame type of the memory block at the position, releasing the old memory block previously pointed to by the frame data array for the already-written live data, and adding one to the write index value; and writing each frame of the latest live data into the memory blocks in time order;
and the data reading module is used for receiving the read cache instruction of the live broadcast application through the calling interface, opening the virtual file and reading cache data from the memory block corresponding to the virtual file.
6. An electronic device, comprising: a memory, a processor, on which a computer program is stored which is executable on the processor, characterized in that the processor implements the method for optimizing the performance of a live cache according to any one of claims 1 to 4 when executing the computer program.
7. A computer readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method of performance optimization of a live cache as claimed in any of claims 1 to 4.
CN202110911359.4A 2021-08-09 2021-08-09 Performance optimization method and system for live cache, electronic device and storage medium Active CN113596506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110911359.4A CN113596506B (en) 2021-08-09 2021-08-09 Performance optimization method and system for live cache, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110911359.4A CN113596506B (en) 2021-08-09 2021-08-09 Performance optimization method and system for live cache, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN113596506A CN113596506A (en) 2021-11-02
CN113596506B true CN113596506B (en) 2024-03-12

Family

ID=78256722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110911359.4A Active CN113596506B (en) 2021-08-09 2021-08-09 Performance optimization method and system for live cache, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113596506B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030218A (en) * 2007-03-23 2007-09-05 北京中星微电子有限公司 Virtual file management unit, system and method
CN103299600A (en) * 2011-01-04 2013-09-11 汤姆逊许可公司 Apparatus and method for transmitting live media content
WO2020155295A1 (en) * 2019-01-30 2020-08-06 网宿科技股份有限公司 Live data processing method and system, and server
WO2020221186A1 (en) * 2019-04-30 2020-11-05 广州虎牙信息科技有限公司 Virtual image control method, apparatus, electronic device and storage medium
CN112650720A (en) * 2020-12-18 2021-04-13 深圳市佳创视讯技术股份有限公司 Cache system management method and device and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HDTV视频解码器中***控制的分析与实现; 王少勇, 王金刚, 王兆华; Journal of Tianjin University (Science and Technology), No. 06; sections 1.2.1-1.2.3 *
尤晋元 et al., 《Windows操作***原理》, China Machine Press (机械工业出版社), 2001, p. 216 *

Also Published As

Publication number Publication date
CN113596506A (en) 2021-11-02

Similar Documents

Publication Publication Date Title
US10705974B2 (en) Data processing method and NVME storage device
US10268398B2 (en) Storage system, recording medium for storing control program and control method for storage system
KR20200027413A (en) Method, device and system for storing data
US10831612B2 (en) Primary node-standby node data transmission method, control node, and database system
CN102129434B (en) Method and system for reading and writing separation database
US10838691B2 (en) Method and apparatus of audio/video switching
US9055268B2 (en) Multi-tier recorder to enable seek-back unique copy recording
EP3869313A1 (en) Data storage method and apparatus
CN103152606A (en) Video file processing method, device and system
CN111163297A (en) Method for realizing high concurrency and quick playback of video monitoring cloud storage
CN117312201B (en) Data transmission method and device, accelerator equipment, host and storage medium
CN113596506B (en) Performance optimization method and system for live cache, electronic device and storage medium
CN110740374A (en) multimedia data processing method, device, computer equipment and storage medium
WO2023083064A1 (en) Video processing method and apparatus, electronic device, and readable storage medium
CN104133781A (en) Network storage equipment and method thereof for improving data access speed
CN104581403A (en) Method and device for sharing video content
CN111208946A (en) Data persistence method and system supporting KB-level small file concurrent IO
CN103248912A (en) Network television time shifting play method as well as network television system and device
US8588591B2 (en) Reproducing apparatus and reproducing method
JP5787129B2 (en) Data transfer method and program for remote connection screen
JP7073737B2 (en) Communication log recording device, communication log recording method, and communication log recording program
CN114327260B (en) Data reading method, system, server and storage medium
KR102399661B1 (en) Apparatus and method for remote connection
US20150088943A1 (en) Media-Aware File System and Method
CN113127222B (en) Data transmission method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant