CN113596506A - Live broadcast cache performance optimization method and system, electronic device and storage medium

Live broadcast cache performance optimization method and system, electronic device and storage medium

Info

Publication number
CN113596506A
Authority
CN
China
Prior art keywords
data
frame
cache
memory block
live
Prior art date
Legal status
Granted
Application number
CN202110911359.4A
Other languages
Chinese (zh)
Other versions
CN113596506B (en)
Inventor
杨大维
徐小龙
Current Assignee
AVIT Ltd
Original Assignee
AVIT Ltd
Priority date
Filing date
Publication date
Application filed by AVIT Ltd
Priority to CN202110911359.4A
Publication of CN113596506A
Application granted
Publication of CN113596506B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331Caching operations, e.g. of an advertisement for later insertion during playback

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method, a system, an electronic device and a storage medium for optimizing the performance of a live broadcast cache. The method comprises the following steps: providing a calling interface for the live broadcast application, wherein the calling interface is used for receiving a command of reading cache or writing cache of the live broadcast application; allocating a group of memory blocks to each live channel of the live broadcast application, wherein the memory blocks are used for storing newly acquired live broadcast data of the corresponding live channel; providing a corresponding virtual file for each live channel, wherein the virtual file is used for encapsulating live data of the corresponding live channel; receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file; or receiving a cache reading instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file. The method and the device can simplify the use of the cache, improve the concurrency performance of live broadcasting and reduce live broadcast delay.

Description

Live broadcast cache performance optimization method and system, electronic device and storage medium
Technical Field
The present invention relates to the field of cache optimization technologies, and in particular, to a method, a system, an electronic device, and a storage medium for optimizing performance of a live broadcast cache.
Background
With the development of science and technology, live broadcasting has become an indispensable part of our lives, for example in online education, video conferencing, large-scale gala live broadcasts, training live broadcasts, event live broadcasts and the like.
With the growth of network bandwidth, the number of viewers watching live broadcasts has increased, the concurrency of live broadcasts has increased, and the requirements on live broadcast delay have also risen.
Some existing live broadcast systems are not general-purpose, their read and write operations are inconvenient for the application layer, and their live broadcast concurrency performance is poor.
Disclosure of Invention
The invention mainly aims to provide a performance optimization method, a system, an electronic device and a storage medium for a live broadcast cache, so as to solve the problems in the prior art that the cache system is not general-purpose and read and write operations are inconvenient for the application layer, resulting in poor live broadcast concurrency performance.
In order to achieve the above object, a first aspect of the present invention provides a method for optimizing performance of a live broadcast cache, including: providing a calling interface for a live broadcast application, wherein the calling interface is used for receiving a command of reading cache or writing cache of the live broadcast application; allocating a group of memory blocks to each live channel of the live application, wherein the memory blocks are used for storing newly acquired live data of the corresponding live channel; providing a corresponding virtual file for each live channel, wherein the virtual file is used for packaging live data of the corresponding live channel; receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file; or, receiving a cache reading instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file.
Wherein the writing of the cache data into the memory block corresponding to the virtual file comprises: acquiring a frame of data needing to be written in for the latest live broadcast data; determining the position of a memory block to which a frame of data needing to be written should be written; writing corresponding frame data in the corresponding memory block according to the position of the memory block; and writing each frame of the latest live broadcast data into the memory block according to the time sequence.
Wherein writing the frame data in the memory block comprises: acquiring the write index value of the frame of data to be written and a first numerical value, namely the size of the frame data array; dividing the write index value by the first numerical value and taking the remainder, wherein the remainder is used as the numbered position of the memory block storage area into which the frame of data should be written; and writing the frame data into the memory block corresponding to that numbered position.
After a frame of data is written into the memory block, writing the cache data into the memory block corresponding to the virtual file further includes: when the frame data array is pointed to the new memory block holding the written frame data, using the frame type array to set and record the frame type of the memory block at that position, releasing the old memory block to which the frame data array previously pointed, and adding one to the write index value.
Wherein the reading the cache data from the memory block corresponding to the virtual file comprises: determining the position of a memory block to which a frame of data needing to be read belongs; reading the frame data in the corresponding memory block according to the memory block position to which the frame data belongs; and reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.
Wherein determining the memory block position of the frame of data to be read includes: recording a read index value of the frame of data to be read, wherein the initial value of the read index value is zero; calculating, for the frame of data at the read index value, the time difference between writing to the cache and reading from the cache to obtain delay data; judging whether to skip frames when reading the live broadcast data according to the delay data and a preset frame skipping threshold; if so, determining the position corresponding to the frame of data to be read after frame skipping; if not, determining the position corresponding to the currently read frame of data.
Wherein determining the position corresponding to the frame of data to be read after frame skipping includes: acquiring the adjacent frame data closest to the currently read frame data; assigning the position corresponding to that adjacent frame data to the read index value, and calculating the position corresponding to the frame of data to be read according to the read index value and the frame data array. After a frame of data is read from the memory block, reading the cache data from the memory block corresponding to the virtual file further includes: adding one to the read index value so that the next frame of data is read from the memory block corresponding to the virtual file.
A second aspect of the present application provides a performance optimization system for live broadcast caching, including: the application interface layer module is used for providing a calling interface for live broadcast application, and the calling interface is used for receiving a command of reading cache or writing cache of the live broadcast application; the memory channel layer module is used for allocating a group of memory blocks to each live channel of the live broadcast application, and the memory blocks are used for storing newly acquired live broadcast data of the corresponding live broadcast channel; the virtual file layer module is used for providing a corresponding virtual file for each live channel; the data writing module is used for receiving a writing cache instruction of the live broadcast application through the calling interface, opening the virtual file and writing cache data into the memory block corresponding to the virtual file; and the data reading module is used for receiving a reading cache instruction of the live broadcast application through the calling interface, opening the virtual file and reading cache data from the memory block corresponding to the virtual file.
A third aspect of the present application provides an electronic device, including: a memory and a processor, wherein the memory stores a computer program operable on the processor, and the processor, when executing the computer program, implements any one of the above methods for optimizing the performance of a live broadcast cache.
A fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for optimizing performance of a live cache as described in any one of the above.
The invention provides a method and a system for optimizing the performance of a live broadcast cache, an electronic device and a storage medium, with the following advantages: a unified calling interface is provided, which is convenient for the application layer where the live broadcast application is located to use and improves the maintainability and reliability of the live broadcast system, thereby simplifying use of the cache, improving the concurrency of live broadcasts and reducing live broadcast delay.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a performance optimization method for live broadcast caching according to an embodiment of the present application;
fig. 2 is a schematic flow chart illustrating a process of writing cache data into a memory block corresponding to a virtual file according to the performance optimization method for live caching in an embodiment of the present application;
fig. 3 is a schematic flowchart of all live broadcast data corresponding to one frame of data written in a memory block in a cache according to a performance optimization method for live broadcast caching in an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a process of reading cache data from a memory block corresponding to a virtual file according to a performance optimization method for live caching in an embodiment of the present application;
fig. 5 is a schematic flowchart illustrating a process of determining a read memory block of data of a frame according to a performance optimization method for live broadcast caching according to an embodiment of the present application;
fig. 6 is a block diagram illustrating a structure of a performance optimization system of a live broadcast cache according to an embodiment of the present application;
fig. 7 is a block diagram illustrating a structure of an electronic device according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, a method for optimizing performance of a live broadcast cache includes:
s101, providing a calling interface for the live broadcast application, wherein the calling interface is used for receiving a command of reading cache or writing cache of the live broadcast application;
s102, distributing a group of memory blocks to each live channel of live broadcast application, wherein the memory blocks are used for storing newly acquired live broadcast data of the corresponding live broadcast channel;
s103, providing a corresponding virtual file for each live channel, wherein the virtual file is used for packaging live data of the corresponding live channel;
s104, receiving a write cache instruction of the live broadcast application through a calling interface, opening a virtual file, and writing cache data into a memory block corresponding to the virtual file;
and S105, receiving a reading cache instruction of the live broadcast application through a calling interface, opening a virtual file, and reading cache data from a memory block corresponding to the virtual file.
In step S101, there are at least four calling interfaces: the Open, Write, Read and Close interfaces.
The Open interface is used for opening a virtual file, transmitting a live channel url as a parameter, and returning a virtual file ID after the virtual file is successfully opened.
The Read interface is used for reading the latest live channel data; a virtual file ID is passed in and one complete frame Block of data is returned. When the application layer does not call the Read interface in time, the data can fall behind; the live broadcast cache system then automatically skips the overdue data and returns data from the latest position, ensuring low live broadcast delay.
The Write interface is used to write the latest live data; a virtual file ID and one complete frame Block are passed in.
The Close interface is used for closing the virtual file and releasing the resources.
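As an illustrative sketch only (the C++ types, function names and signatures below are assumptions, not the patent's actual implementation), the four calling interfaces and the Block that buffers one complete frame might be declared as follows:

#include <cstdint>
#include <string>
#include <vector>

// A Block buffers one complete frame of live data (audio/video payload bytes).
struct Block {
    std::vector<uint8_t> data;
};

// Hypothetical signatures for the four calling interfaces exposed to the
// live broadcast application; a negative file ID signals a failed open.
int  CacheOpen(const std::string& channel_url);                     // open a virtual file, url is the only parameter
bool CacheWrite(int file_id, const Block& frame, int frame_type);   // write one complete frame Block plus its frame type
bool CacheRead(int file_id, Block& frame_out, int& frame_type_out); // read the latest complete frame Block
void CacheClose(int file_id);                                       // close the virtual file and release resources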
In step S102, there is first a memory channel (MemChannel). The memory channel has two fixed-size arrays, FData (the frame data array) and FType (the frame type array). FType is an int array that records the video frame types (e.g., I, P and B frames); FData is an array of Block pointers, each pointing to a Block cache. A Block buffers one complete frame of data.
The memory channel has a write index value, write_idx, which records the write position of the video data; write_idx must be converted into an array index when data is written. For example, if the array size is 100 and the current write_idx is 120, the write position is the remainder of 120 divided by 100, i.e., position 20: position 20 of the FData array is pointed to the new Block, and the Block at the original position is set as a free Block.
The FType array records the frame type of the Block at the same position in FData, and is used for finding an I frame when frames are dropped.
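A minimal sketch of the memory channel structure described above, assuming C++ and reusing the Block type from the interface sketch; the array size constant and the use of shared_ptr are illustrative choices, not taken from the patent:

#include <array>
#include <cstdint>
#include <memory>

constexpr size_t kFrameArraySize = 100;  // fixed array size, matching the example above

struct MemChannel {
    // FData: Block pointers, one complete frame per Block; a slot being
    // overwritten releases the old Block it pointed to.
    std::array<std::shared_ptr<Block>, kFrameArraySize> fdata;
    // FType: int frame type (e.g., I, P, B) recorded at the same position,
    // used to locate an I frame when frames are skipped.
    std::array<int, kFrameArraySize> ftype{};
    // write_idx: ever-increasing write position; the array index is the
    // remainder of write_idx divided by the array size.
    uint64_t write_idx = 0;
};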
In step S103, virtual files are provided by the virtual file layer. Each user who wants to read or write the data of a live channel needs to open a virtual file (VirtualFile) and performs operations through its file ID.
In step S104 and step S105, data transmission is performed through the network transport layer, for example in the chunk mode of HTTP: one frame of data is packed into one chunk, so after receiving a chunk the receiving end knows that it holds exactly one frame of data and knows the frame type. The receiving end therefore does not need to re-align the frame data or judge the frame type again, which improves its efficiency. When a chunk is transmitted, the last byte of the chunk data is used to record the frame type: the transmitting end appends this byte when sending chunk data, and the receiving end judges the frame type from this byte when receiving data and removes it before saving the frame to a Block.
Because the network transmission layer transmits the live broadcast data according to the data frame and the frame type, the live broadcast delay can be reduced.
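The frame-type byte handling described for chunk transmission could look like the following sketch (the helper names are assumptions; only the "one frame per chunk, last byte records the frame type" convention comes from the description):

#include <cstdint>
#include <vector>

// Sender side: append one byte recording the frame type as the last byte of the chunk.
std::vector<uint8_t> PackChunk(const std::vector<uint8_t>& frame, uint8_t frame_type) {
    std::vector<uint8_t> chunk = frame;
    chunk.push_back(frame_type);
    return chunk;
}

// Receiver side: read the frame type from the last byte and strip it before
// the frame is saved into a Block.
uint8_t UnpackChunk(std::vector<uint8_t>& chunk) {
    uint8_t frame_type = chunk.back();
    chunk.pop_back();
    return frame_type;
}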
In this embodiment, the live data to be cached may be audio and video data encapsulated in a TS container, or raw data encapsulated in some other format; in addition, the performance optimization method for the live broadcast cache provided by this embodiment may be deployed on the Windows, Linux, Unix and similar operating systems of the electronic device.
In this embodiment, a unified calling interface is provided, which is convenient for the application layer where the live broadcast application is located to use and improves the maintainability and reliability of the live broadcast system, thereby simplifying the use of the cache, improving the concurrency performance of live broadcasting, and reducing live broadcast delay.
In addition, transmission is carried out in the HTTP chunk mode according to video frames and frame types, which reduces the CPU consumption of the receiving end for parsing data and assembling frames, improves system performance and increases concurrency.
Referring to fig. 2, in an embodiment, the writing the cache data into the memory block corresponding to the virtual file in step S104 includes:
s1041, acquiring a frame of data needing to be written in for the latest live broadcast data;
s1042, determining the position of a memory block to which the written data of one frame should be written;
s1043, writing corresponding frame data in the corresponding memory block according to the position of the memory block;
and S1044, writing each frame of the latest live broadcast data into the memory block according to the time sequence.
A virtual file is opened first, with the url of the channel passed in as the only parameter. After the file is opened successfully, a file ID is returned, and this file ID is carried in subsequent operations. The VirtualFile is associated with a MemChannel.
The Write interface is then called to write one frame of data; the input parameters are the Block and the frame type.
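A hypothetical usage of the write path through the calling interface sketched earlier (the url, the CaptureNextFrame helper and the frame-type code are illustrative):

// CaptureNextFrame() is an assumed helper that yields one complete encoded frame.
extern Block CaptureNextFrame();

void WriteOneFrameExample() {
    int fid = CacheOpen("http://127.0.0.1/live/channel1");  // channel url is the only parameter
    if (fid < 0) return;                                     // open failed
    Block frame = CaptureNextFrame();
    CacheWrite(fid, frame, /*frame_type=*/0);                // 0 standing in for an I-frame code here
    CacheClose(fid);
}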
Referring to fig. 3, in an embodiment, the step S1043 of writing the frame data in the memory block includes:
s10431, obtaining a write index value of a frame data to be written and a first numerical value of the frame number group size;
s10432, using the write index value and the first numerical value to obtain a remainder of the quotient, and using the remainder as a number position of a storage area of a memory block into which the data of one frame to be written should be written;
s10433, writing the frame data into the memory block corresponding to the serial number position.
MemChannel determines the write location by write_idx. Specifically, when step S10432 is implemented, assume for example that the FData array size is 100 and the current write_idx is 120; the write position is the remainder of 120 divided by 100, which is 20, and 20 is the numbered position of the memory block storage area into which the frame of data should be written.
In an embodiment, in step S104, after writing a frame of data into the memory block, writing cache data into the memory block corresponding to the virtual file further includes:
S1044, when the frame data array is pointed to the new memory block holding the written frame data, using the frame type array to set and record the frame type of the memory block at that position, releasing the old memory block to which the frame data array previously pointed, and adding one to the write index value.
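Putting steps S10431 to S10433 and S1044 together, one write into the memory channel might look like the following sketch (based on the MemChannel structure assumed earlier; locking and error handling are omitted):

// Write one frame: compute the slot from write_idx, point FData at the new Block,
// record its frame type in FType, release the old Block, and advance write_idx.
void WriteFrame(MemChannel& ch, std::shared_ptr<Block> frame, int frame_type) {
    size_t pos = ch.write_idx % kFrameArraySize;  // remainder gives the numbered position
    ch.fdata[pos] = std::move(frame);             // old Block at this slot is released here
    ch.ftype[pos] = frame_type;                   // frame type recorded for later frame skipping
    ++ch.write_idx;                               // next frame goes to the following position
}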
Referring to fig. 4, in an embodiment, in step S105, reading the cache data from the memory block corresponding to the virtual file includes:
s1051, determining the position of a memory block to which a frame of data needing to be read belongs;
s1052, reading a frame of data in the corresponding memory block according to the memory block position to which the frame of data belongs;
and S1053, reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.
In this embodiment, a VirtualFile is opened with the url of the channel passed in as the only parameter. After the file is opened successfully, a file ID is returned, and this file ID is carried in subsequent operations. The VirtualFile is associated with a MemChannel. The Read interface is then called to read one frame of data.
Referring to fig. 5, in an embodiment, the step S1052 of determining the memory block location of the frame data to be read includes:
s10521, recording a read index value of a frame of data needing to be read, wherein the initial value of the read index value is zero;
s10522, calculating a frame of data of the read index value, and performing time difference between write caching and read caching to obtain delay data;
s10523, judging whether to skip the frame to read the live broadcast data according to the delay data and a preset frame skipping threshold;
s10524, if yes, determining the position corresponding to the frame data which needs to be read after frame skipping;
s10525, if not, determining the position corresponding to the currently read frame data.
VirtualFile maintains a read index, read_idx, with an initial value of 0. VirtualFile reads data from the MemChannel via read_idx.
In step S10523, MemChannel determines the delay from read_idx and write_idx. If write_idx minus read_idx equals 10, this indicates a delay of 10 frames.
It is then judged whether the delay is greater than a configured threshold; if so, frame skipping is started. Frame skipping keeps the live broadcast delay low at all times, thereby reducing live broadcast delay.
In an embodiment, in step S10524, determining a corresponding position of a frame of data that needs to be read after frame skipping includes: acquiring adjacent frame data of a frame closest to the currently read frame data; and assigning the corresponding position of the adjacent frame data to a reading index value, and calculating the corresponding position of the frame data to be read according to the reading index value and the frame data array.
When skipping frames, the I frame closest to the write_idx position is found and its position is assigned to read_idx.
In this embodiment, under poor network conditions, frames are dropped quickly to reduce live broadcast delay while avoiding mosaic artifacts, improving the user experience.
The position corresponding to the frame of data to be read is calculated from the read index value and the frame data array in the same way that the write position is determined when writing to the cache. Specifically, this may include: obtaining the read index value of the frame of data and a second numerical value, namely the size of the frame data array; dividing the read index value by the second numerical value and taking the remainder, which is used as the numbered position of the memory block storage area from which the frame of data is read; and reading the live broadcast data from the memory block corresponding to that numbered position.
In step S105, after a frame of data is read from the memory block, reading the cache data from the memory block corresponding to the virtual file further includes: S1054, adding one to the read index value so that the next frame of data is read from the memory block corresponding to the virtual file.
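Combining the delay check, the skip to the nearest I frame and the position calculation above, one read might look like the following sketch (based on the MemChannel structure assumed earlier; the I-frame code and the backward scan are illustrative, and the sketch assumes the delay never exceeds the array size):

#include <memory>

constexpr int kIFrameType = 0;  // assumed FType code for an I frame

// Read one frame for a virtual file that keeps its own read_idx. If the delay
// (write_idx - read_idx, in frames) exceeds the configured frame-skip threshold,
// jump read_idx to the I frame closest to write_idx before reading.
std::shared_ptr<Block> ReadFrame(MemChannel& ch, uint64_t& read_idx, uint64_t skip_threshold) {
    if (read_idx >= ch.write_idx) return nullptr;                 // nothing new to read yet

    uint64_t delay = ch.write_idx - read_idx;
    if (delay > skip_threshold) {
        for (uint64_t idx = ch.write_idx; idx-- > read_idx; ) {   // scan back from the newest frame
            if (ch.ftype[idx % kFrameArraySize] == kIFrameType) {
                read_idx = idx;                                   // skip ahead to the closest I frame
                break;
            }
        }
    }

    std::shared_ptr<Block> frame = ch.fdata[read_idx % kFrameArraySize];
    ++read_idx;                                                   // next call reads the following frame
    return frame;
}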
Referring to fig. 6, in an embodiment, the present application further provides a performance optimization system for a live broadcast cache, including: an application interface layer module 1, a virtual file layer module 2, a memory channel layer module 3, a data writing module 4 and a data reading module 5. The application interface layer module 1 is used for providing a calling interface for the live broadcast application, and the calling interface is used for receiving a command of reading cache or writing cache of the live broadcast application; the virtual file layer module 2 is used for providing a corresponding virtual file for each live channel; the memory channel layer module 3 is configured to allocate a group of memory blocks to each live channel of the live broadcast application, where the memory blocks are configured to store newly acquired live broadcast data of the corresponding live channel; the data writing module 4 is used for receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file; and the data reading module 5 is used for receiving a cache reading instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file.
In this embodiment, live data is transmitted through the network transport layer from a live channel to the data writing module at the receiving end, for example in the chunk mode of HTTP: one frame of data is packed into one chunk, so the receiving end knows that a received chunk holds exactly one frame of data and knows the frame type, without re-aligning the frame data or judging the frame type again, which improves the efficiency of the receiving end. When a chunk is transmitted, the last byte of the chunk data is used to record the frame type: the transmitting end appends this byte when sending chunk data, and the receiving end judges the frame type from this byte when receiving data and removes it before saving the frame to a Block.
Because the network transmission layer transmits the live broadcast data according to the data frame and the frame type, the live broadcast delay can be reduced.
The performance optimization system of the live broadcast cache of the embodiment provides a uniform calling interface, is convenient for an application layer where the live broadcast application is located to use, and improves maintainability and reliability of a live broadcast system, so that concurrency of live broadcast can be improved.
In one embodiment, the data writing module 4 comprises: a first data writing unit, a write memory determining unit and a second data writing unit. The first data writing unit is used for acquiring, from the latest live broadcast data, a frame of data to be written; the write memory determining unit is used for determining the position of the memory block into which the frame of data should be written; the second data writing unit is used for writing the corresponding frame of data into the corresponding memory block according to the memory block position, and for writing each frame of the latest live broadcast data into the memory blocks in time order.
In one embodiment, the second data writing unit includes: the device comprises a numerical value acquisition subunit, a first calculation subunit and a live broadcast data writing unit; the numerical value obtaining subunit is configured to obtain a write index value of a frame of data to be written and a first numerical value of a frame group size; the first calculating subunit is configured to use a remainder of a quotient obtained by dividing the write index value by the first numerical value as a number position of a storage area of a memory block into which data of one frame to be written should be written; the live broadcast data writing unit is used for writing the frame data into the memory block corresponding to the serial number position.
The data writing module 4 further includes a frame type recording module, which is used for: when the frame data array is pointed to the new memory block holding the written frame data, using the frame type array to set and record the frame type of the memory block at that position, releasing the old memory block to which the frame data array previously pointed, and adding one to the write index value.
In one embodiment, the data reading module 5 includes: a read memory determining unit and a second data reading unit. The read memory determining unit is used for determining the position of the memory block to which the frame of data to be read belongs; the second data reading unit is configured to read the frame of data from the corresponding memory block according to the memory block position to which it belongs, and to read each frame of the live broadcast data from the corresponding memory blocks in time order.
In one embodiment, the read memory determining unit includes: a numerical value recording subunit, a second calculating subunit, a judging subunit and a position determining subunit. The numerical value recording subunit is used for recording the read index value of the frame of data to be read, where the initial value of the read index value is zero; the second calculating subunit is used for calculating, for the frame of data at the read index value, the time difference between writing to the cache and reading from the cache to obtain delay data; the judging subunit is used for judging whether to skip frames when reading the live broadcast data according to the delay data and a preset frame skipping threshold; the position determining subunit is used for determining the position corresponding to the frame of data to be read after frame skipping if the judging subunit judges that frame skipping is required, and is further configured to determine the position corresponding to the currently read frame of data if the judging subunit judges that no frame skipping is required.
In one embodiment, the location determining subunit is further to: acquiring adjacent frame data of a frame closest to the currently read frame data; and assigning the corresponding position of the adjacent frame data to a reading index value, and calculating the corresponding position of the frame data to be read according to the reading index value and the frame data array.
In an embodiment, the data reading module 5 further includes an accumulation unit, configured to add one to the read index value so that the next frame of data is read from the memory block corresponding to the virtual file.
An embodiment of the present application provides an electronic device; please refer to fig. 7. The device includes a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602; when the processor 602 executes the computer program, the performance optimization method for the live broadcast cache described above is implemented.
Further, the electronic device further includes: at least one input device 603 and at least one output device 604.
The memory 601, the processor 602, the input device 603, and the output device 604 are connected by a bus 605.
The input device 603 may be a camera, a touch panel, a physical button, a mouse, or the like. The output device 604 may be embodied as a display screen.
The memory 601 may be a high-speed random access memory (RAM) or a non-volatile memory, such as disk storage. The memory 601 is used for storing a set of executable program code, and the processor 602 is coupled to the memory 601.
Further, an embodiment of the present application also provides a computer-readable storage medium, which may be disposed in the electronic device in the foregoing embodiments, and the computer-readable storage medium may be the memory 601 in the foregoing. The computer-readable storage medium has stored thereon a computer program which, when executed by the processor 602, implements the performance optimization method of the live cache described in the foregoing embodiments.
Further, the computer-readable storage medium may be any of various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk or an optical disk.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that, for the sake of simplicity, the above-mentioned method embodiments are described as a series of acts or combinations, but those skilled in the art should understand that the present invention is not limited by the described order of acts, as some steps may be performed in other orders or simultaneously according to the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no acts or modules are necessarily required of the invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
For a person skilled in the art, according to the ideas of the embodiments of the present invention, there may be variations in the specific implementation and application scope; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A performance optimization method for a live broadcast cache is characterized by comprising the following steps:
providing a calling interface for a live broadcast application, wherein the calling interface is used for receiving a command of reading cache or writing cache of the live broadcast application;
allocating a group of memory blocks to each live channel of the live application, wherein the memory blocks are used for storing newly acquired live data of the corresponding live channel;
providing a corresponding virtual file for each live channel, wherein the virtual file is used for packaging live data of the corresponding live channel;
receiving a write cache instruction of the live broadcast application through the calling interface, opening the virtual file, and writing cache data into the memory block corresponding to the virtual file;
or, receiving a cache reading instruction of the live broadcast application through the calling interface, opening the virtual file, and reading cache data from the memory block corresponding to the virtual file.
2. The method of optimizing performance of a live cache of claim 1,
the writing of the cache data into the memory block corresponding to the virtual file includes:
acquiring a frame of data needing to be written in for the latest live broadcast data;
determining the position of a memory block to which a frame of data needing to be written should be written;
writing corresponding frame data into the corresponding memory block according to the position of the memory block;
and writing each frame of the latest live broadcast data into the memory block according to the time sequence.
3. The method of optimizing performance of a live cache of claim 2,
writing the corresponding frame of data in the corresponding memory block according to the memory block position includes:
acquiring a write index value of a frame data to be written and a first numerical value of the size of a frame group;
dividing the writing index value by the first numerical value to obtain a remainder of a quotient, wherein the remainder is used as a number position of a storage area of a memory block into which a frame of data to be written should be written;
and writing the frame data into the memory block corresponding to the serial number position.
4. The method of optimizing performance of a live cache of claim 3,
after writing the corresponding frame of data into the corresponding memory block according to the memory block position, the writing the cache data into the memory block corresponding to the virtual file further includes:
when the frame data array is pointed to the new memory block holding the written frame data, using the frame type array to set and record the frame type of the memory block at that position, releasing the old memory block to which the frame data array previously pointed, and adding one to the write index value.
5. The method of optimizing performance of a live cache of claim 1,
the reading of the cache data from the memory block corresponding to the virtual file includes:
determining the position of a memory block to which a frame of data needing to be read belongs;
reading the frame data in the corresponding memory block according to the memory block position to which the frame data belongs;
and reading each frame of data of the live broadcast data from the corresponding memory block according to the time sequence.
6. The method of optimizing performance of a live cache of claim 5,
the determining the memory block location to which the frame of data needing to be read belongs includes:
recording a read index value of a frame of data to be read, wherein the initial value of the read index value is zero;
calculating, for the frame of data at the read index value, the time difference between writing to the cache and reading from the cache to obtain delay data;
judging whether to skip frames to read the live broadcast data or not according to the delay data and a preset frame skipping threshold;
if so, determining the position corresponding to the frame data needing to be read after frame skipping;
if not, determining the position corresponding to the currently read frame data.
7. The method of optimizing performance of a live cache of claim 6,
determining the position corresponding to the data of the frame needing to be read after frame skipping comprises: acquiring adjacent frame data of a frame closest to the currently read frame data; assigning the position corresponding to the adjacent frame data to the reading index value, and calculating the position corresponding to the frame data to be read according to the reading index value and the frame data array;
after reading a frame of data from the memory block, the reading cache data from the memory block corresponding to the virtual file further includes: and adding one to the read index value to read the cache data from the memory block corresponding to the virtual file by the next frame data.
8. A system for optimizing performance of a live cache, comprising:
the application interface layer module is used for providing a calling interface for live broadcast application, and the calling interface is used for receiving a command of reading cache or writing cache of the live broadcast application;
the virtual file layer module is used for providing a corresponding virtual file for each live channel;
the memory channel layer module is used for allocating a group of memory blocks to each live channel of the live broadcast application, and the memory blocks are used for storing newly acquired live broadcast data of the corresponding live broadcast channel;
the data writing module is used for receiving a writing cache instruction of the live broadcast application through the calling interface, opening the virtual file and writing cache data into the memory block corresponding to the virtual file;
and the data reading module is used for receiving a reading cache instruction of the live broadcast application through the calling interface, opening the virtual file and reading cache data from the memory block corresponding to the virtual file.
9. An electronic device, comprising: a memory, and a processor, where the memory stores thereon a computer program operable on the processor, and when the processor executes the computer program, the method for optimizing performance of a live cache according to any one of claims 1 to 7 is implemented.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a method for optimizing the performance of a live cache according to any one of claims 1 to 7.
CN202110911359.4A 2021-08-09 2021-08-09 Performance optimization method and system for live cache, electronic device and storage medium Active CN113596506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110911359.4A CN113596506B (en) 2021-08-09 2021-08-09 Performance optimization method and system for live cache, electronic device and storage medium


Publications (2)

Publication Number Publication Date
CN113596506A true CN113596506A (en) 2021-11-02
CN113596506B CN113596506B (en) 2024-03-12

Family

ID=78256722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110911359.4A Active CN113596506B (en) 2021-08-09 2021-08-09 Performance optimization method and system for live cache, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113596506B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101030218A (en) * 2007-03-23 2007-09-05 北京中星微电子有限公司 Virtual file management unit, system and method
CN103299600A (en) * 2011-01-04 2013-09-11 汤姆逊许可公司 Apparatus and method for transmitting live media content
WO2020155295A1 (en) * 2019-01-30 2020-08-06 网宿科技股份有限公司 Live data processing method and system, and server
WO2020221186A1 (en) * 2019-04-30 2020-11-05 广州虎牙信息科技有限公司 Virtual image control method, apparatus, electronic device and storage medium
CN112650720A (en) * 2020-12-18 2021-04-13 深圳市佳创视讯技术股份有限公司 Cache system management method and device and computer readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Shaoyong, Wang Jingang, Wang Zhaohua: "Analysis and Implementation of *** Control in an HDTV Video Decoder", Journal of Tianjin University (Science and Technology), no. 06, pages 216 - 1 *

Also Published As

Publication number Publication date
CN113596506B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN109254733B (en) Method, device and system for storing data
WO2017028514A1 (en) Method and device for storing and reading data
US20180300250A1 (en) Method and apparatus for storing data
CN107484011B (en) Video resource decoding method and device
CN107341062B (en) Data pushing method, device, equipment and storage medium
CN109634824B (en) Distributed storage performance test method and system in broadcasting and television service scene
US7624227B2 (en) Drive device and related computer program
WO2023015866A1 (en) Data writing method, apparatus and system, and electronic device and storage medium
CN110334145A (en) The method and apparatus of data processing
CN103078810A (en) Efficient rich media showing system and method
CN111163297A (en) Method for realizing high concurrency and quick playback of video monitoring cloud storage
US20170163555A1 (en) Video file buffering method and system
CN103761194B (en) A kind of EMS memory management process and device
CN113596506A (en) Live broadcast cache performance optimization method and system, electronic device and storage medium
CN111984198A (en) Message queue implementation method and device and electronic equipment
CN107885807B (en) File saving method and device, intelligent tablet and storage medium
CN111538705B (en) Video thumbnail preview method, control server and medium
CN111857462B (en) Server, cursor synchronization method and device, and computer readable storage medium
WO2019104688A1 (en) Input data processing method for handwriting board, electronic device and storage medium
CN111625524B (en) Data processing method, device, equipment and storage medium
CN113127222B (en) Data transmission method, device, equipment and medium
CN115329006B (en) Data synchronization method and system for background and third party interface of network mall
CN114327260B (en) Data reading method, system, server and storage medium
CN111614998B (en) Method for processing promotion video, electronic device and computer-readable storage medium
CN110727402B (en) High-speed FC data real-time receiving and frame loss-free storage method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant