CN115190352A - Video data storage method and device, computer readable storage medium and electronic equipment - Google Patents

Video data storage method and device, computer readable storage medium and electronic equipment

Info

Publication number
CN115190352A
CN115190352A
Authority
CN
China
Prior art keywords
data
information
video data
storage
fragment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210543941.4A
Other languages
Chinese (zh)
Inventor
白国瑞
韦利东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Genyan Network Technology Co ltd
Original Assignee
Shanghai Genyan Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Genyan Network Technology Co ltd filed Critical Shanghai Genyan Network Technology Co ltd
Priority to CN202210543941.4A
Publication of CN115190352A
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video data storage method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: acquiring a plurality of fragment data corresponding to original video data and fragment information of the plurality of fragment data, and storing the plurality of fragment data to obtain a storage identifier of each fragment data; creating a corresponding information packet using the video identifier of the original video data; and storing the storage identifiers in the information packet in association with the fragment information. With this scheme, any server can splice the fragment data by using the fragment information stored in the information packet and thereby regenerate the original video data. This removes the prior-art requirement that the same server receive all fragment data to guarantee successful splicing, greatly reduces the complexity of video data transmission, and improves the user experience of uploading video data in a distributed environment.

Description

Video data storage method and device, computer readable storage medium and electronic equipment
Technical Field
The present application relates to the field of video technologies, and in particular, to a method and an apparatus for storing video data, a computer-readable storage medium, and an electronic device.
Background
With the development of internet technology, users watch a wide variety of videos over the internet. In particular, as video capture technology improves, more and more users want to upload high-resolution video files to a cloud server to share with other users. Such high-resolution video files tend to be large, so uploading them to the cloud server not only occupies considerable bandwidth but also places a heavy computational load on the sending and receiving servers, and an overloaded server transmitting the video file lowers the transmission rate. A technical solution that can increase the transmission rate of video files is therefore needed.
Disclosure of Invention
Embodiments of the present application provide a video data storage method and apparatus, a computer-readable storage medium, and an electronic device, so as to overcome the prior-art defect that the system becomes unstable because the routing configuration of the server must be changed.
In order to achieve the above object, an embodiment of the present application provides a video data storage method, including:
acquiring a plurality of fragment data corresponding to original video data and fragment information of the plurality of fragment data, wherein the plurality of fragment data are obtained by dividing the original video data;
storing the plurality of fragment data to acquire a storage identifier of each fragment data;
creating a corresponding information packet using the video identifier of the original video data;
storing the storage identity in the information packet in association with the fragmentation information.
In the video data storage method according to the embodiment of the present application, the fragment information includes a fragment data sequence number of each of the plurality of fragment data, and the storing the storage identifier in the information packet in association with the fragment information includes:
storing, for each fragment data, its storage identifier in the information packet in association with its fragment data sequence number.
In the video data storage method according to an embodiment of the present application, the fragment information includes the number of the plurality of fragment data, and the storing the storage identifier in the information packet in association with the fragment information includes:
storing the number of the plurality of fragment data in association with the video identifier in the information packet.
According to the video data storage method in the embodiment of the present application, the storing the plurality of fragment data to obtain the storage identifier of each fragment data includes:
storing the plurality of fragment data in a cloud database;
and acquiring the storage identifier of each fragment data from the cloud database.
According to the video data storage method of the embodiment of the application, the creating the corresponding information packet by using the video identifier of the original video data includes:
creating the information packet in a non-relational database and using the video identification as a packet identification for the information packet.
An embodiment of the present application further provides a video data storage device, including:
a fragment information acquisition module, configured to acquire a plurality of fragment data corresponding to original video data and fragment information of the plurality of fragment data, wherein the plurality of fragment data are obtained by dividing the original video data;
the storage identifier acquisition module is used for storing the plurality of fragment data to acquire the storage identifier of each fragment data;
the grouping creation module is used for creating a corresponding information grouping by using the video identifier of the original video data;
an information storage module to store the storage identity in association with the fragmentation information in the information packet.
In the video data storage apparatus according to an embodiment of the present application, the fragment information includes a fragment data sequence number of each of the plurality of fragment data, and the information storage module is further configured to:
store, for each fragment data, its storage identifier in the information packet in association with its fragment data sequence number.
In the video data storage apparatus according to an embodiment of the present application, the fragment information includes the number of the plurality of fragment data, and the information storage module is further configured to:
store the number of the plurality of fragment data in association with the video identifier in the information packet.
Embodiments of the present application further provide a computer-readable storage medium on which a computer program executable by a processor is stored, wherein the program, when executed by the processor, implements the video data storage method provided in the embodiments of the present application.
An embodiment of the present application further provides an electronic device, including:
a memory for storing a program;
and the processor is used for running the program stored in the memory so as to execute the video data storage method provided in the embodiments of the present application.
According to the video data storage method and apparatus, the computer-readable storage medium, and the electronic device of the embodiments of the present application, a plurality of fragment data corresponding to original video data and fragment information of the plurality of fragment data are acquired, and the plurality of fragment data are stored to obtain the storage identifier of each fragment data; a corresponding information packet is created using the video identifier of the original video data, and the storage identifiers are stored in the information packet in association with the fragment information. Therefore, a user can use any server to splice the fragment data by using the fragment information stored in the information packet, so as to regenerate the original video data. This eliminates the prior-art requirement that the same server receive all the fragment data to ensure successful splicing, greatly reduces the complexity of video data transmission, and improves the user experience of uploading video data in a distributed environment.
The above description is only an overview of the technical solutions of the present application. In order to make the technical means of the present application more clearly understood and implementable in accordance with the content of the description, and to make the above and other objects, features, and advantages of the present application more apparent, the detailed description of the present application is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic view of an application scenario of a video data storage scheme provided in an embodiment of the present application;
FIG. 2 is a flow chart of an embodiment of a video data storage method provided by the present application;
FIG. 3 is a schematic structural diagram of an embodiment of a video data storage apparatus provided in the present application;
fig. 4 is a schematic structural diagram of an embodiment of an electronic device provided in the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The scheme provided by the embodiment of the application can be applied to any device or system with video storage capacity and the like. Fig. 1 is a schematic view of an application scenario of a video data storage scheme provided in an embodiment of the present application, and the scenario shown in fig. 1 is only one example of a scenario to which the technical scheme of the present application may be applied.
In particular, as video capture technology improves, more and more users want to upload high-resolution video files to a cloud server to share them with other users. Such high-resolution video files tend to be large, so uploading them to the cloud server not only occupies considerable bandwidth but also places a heavy computational load on the sending and receiving servers, and an overloaded server transmitting the video file lowers the transmission rate.
A video transmission scheme based on a distributed environment has been proposed in the prior art, in which a user can use a client to segment a video file to be uploaded into a plurality of fragment files, so that a large video file can be divided into many small fragment files and uploaded to a plurality of servers in parallel; after receiving the fragment files, the servers extract and concatenate them to regenerate the original video file on the server side. However, because a large video file is divided into many small fragment files for uploading, this prior-art solution must ensure that all of the divided fragment files are sent to the same server instance; otherwise the server may be unable to find one or more fragments when combining them, and restoration fails. Therefore, in the prior art, the load balancing routing rule of the server needs to be changed during transmission to ensure that all fragment files are finally delivered to the same server. Such a modification increases the complexity of server configuration for operation and maintenance personnel on the one hand, and reduces the stability of the whole system on the other.
In the prior art, transmitting a plurality of fragment files in a distributed environment requires modifying routing settings, such as the load balancing rule of the current transmission server, according to the current service logic, and also requires using server memory to store the fragment files. As the video files transmitted by users grow larger, such a scheme easily causes the server memory to overflow.
In this regard, fig. 1 is a schematic diagram showing an application scenario of a video data transmission scheme according to an embodiment of the present application. In the video data transmission scheme of the embodiment of the present application, the client can acquire video information of the video data that the user wants to upload. For example, the client may acquire the video information by parsing the video data, and the video information may include the size of the video data, the identifier of the server to be uploaded to, and other information.
Then, the parameters for dividing the video data may be calculated according to the size of the video data in the video information. For example, the number of fragments into which the video data is to be divided is determined, and further, the sequence number of each fragment data and the raw data identifier assigned to each fragment data to identify the video data to which it belongs are determined. In particular, in the embodiment of the present application, the raw data identifier may be randomly generated based on the fragmented file.
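As a minimal illustration of this parameter calculation (a sketch, not part of the application itself), the following Python snippet derives the fragment count, the fragment sequence numbers, and a randomly generated raw data identifier from the video size; the 8 MiB fragment size and the use of a UUID as the identifier are assumptions made only for the example.

```python
import math
import uuid


def compute_fragment_parameters(video_size_bytes, fragment_size_bytes=8 * 1024 * 1024):
    """Derive the dividing parameters described above from the video size.

    The 8 MiB fragment size is an illustrative assumption; the application
    does not prescribe a particular value.
    """
    fragment_count = math.ceil(video_size_bytes / fragment_size_bytes)
    # Randomly generated raw data identifier marking the original video
    # that every fragment belongs to (sketched here as a UUID).
    raw_data_id = uuid.uuid4().hex
    # Sequence numbers assigned to the fragments, used later for splicing.
    sequence_numbers = list(range(1, fragment_count + 1))
    return raw_data_id, fragment_count, sequence_numbers
```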
After the fragment parameters are generated, the client may divide the video data based on the parameters to generate a plurality of fragment data, and allocate the fragment data sequence numbers and the original data identifiers to the generated fragment data. And then, the client can transmit the generated fragments to the target server through the uploading interface of the corresponding server according to the identification of the target server in the video data uploading request of the user.
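A client-side sketch of this dividing-and-uploading step might look as follows; the `/upload_fragment` endpoint, its form fields, and the use of the `requests` library are illustrative assumptions rather than an interface defined by this application.

```python
import requests  # any HTTP client would do; the endpoint below is hypothetical


def split_and_upload(video_path, raw_data_id, target_server_url,
                     fragment_size_bytes=8 * 1024 * 1024):
    """Divide the local video file into fragments and upload each one."""
    sequence_number = 0
    with open(video_path, "rb") as f:
        while True:
            chunk = f.read(fragment_size_bytes)
            if not chunk:
                break
            sequence_number += 1
            # Each fragment carries its sequence number and the raw data
            # identifier so the server can later splice them back together.
            requests.post(
                f"{target_server_url}/upload_fragment",
                data={"raw_data_id": raw_data_id,
                      "sequence_number": sequence_number},
                files={"fragment": chunk},
            )
    return sequence_number  # total number of fragments produced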
After receiving each fragment data uploaded by the client, the target server acquires the fragment storage identifier of each fragment data. For example, the target server may upload the received fragment data to the cloud database and thereby obtain, from the cloud database, the fragment storage identifier corresponding to each fragment data.
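The sketch below shows one assumed way a target server could persist a received fragment and obtain its storage identifier; Amazon S3 accessed through `boto3` stands in for the cloud database, and the bucket name and key layout are hypothetical.

```python
import uuid

import boto3  # S3 is used here only as a stand-in for the cloud database

s3 = boto3.client("s3")
BUCKET = "video-fragments"  # hypothetical bucket name


def store_fragment(fragment_bytes, raw_data_id, sequence_number):
    """Persist one received fragment and return its storage identifier.

    The storage identifier is simply the object key under which the fragment
    was written; any cloud database that returns a retrievable handle works.
    """
    storage_id = f"{raw_data_id}/{sequence_number}-{uuid.uuid4().hex}"
    s3.put_object(Bucket=BUCKET, Key=storage_id, Body=fragment_bytes)
    return storage_id
```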
Using the raw data identifier generated when the fragment data were divided, the target server creates a fragment data information packet for the original video data to which the fragment data belong, and takes the raw data identifier as the packet identifier of the information packet. Each fragment data sequence number is stored in the information packet in association with the corresponding fragment storage identifier acquired from the cloud database, and the raw data identifier may further be stored in the information packet in association with the number of fragment data. Therefore, the information packet has, as its packet identifier, the raw data identifier of the original video data to which the fragment data belong, and stores the sequence number of each fragment data, the storage identifier under which each fragment data is stored in the cloud database, and the total number of fragment data. That is, all information relating to these fragment data is stored in the information packet. Therefore, when the fragment data need to be combined to obtain the original video data, the information packet can be found according to the raw data identifier of the original video data, and the information of each fragment can then be obtained from the information packet. For example, the storage identifier of a fragment obtained from the information packet may be used to retrieve the corresponding fragment data from the database storing the fragment data, and the fragment data may be spliced according to the sequence numbers, the raw data identifier, and the total number of fragment data to obtain the complete original video data.
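Since the embodiments later mention Redis as a possible non-relational store, the following sketch lays the information packet out as a Redis hash keyed by the raw data identifier; the key pattern `fragment-packet:<raw_data_id>` and the `total` field are illustrative choices, not part of the application.

```python
import redis  # Redis is named later in the description as one possible store

r = redis.Redis(host="localhost", port=6379)


def record_fragment_info(raw_data_id, sequence_number, storage_id, total_fragments):
    """Store fragment bookkeeping in an information packet keyed by the raw data id.

    Layout (an assumption for this sketch): one hash field per fragment
    sequence number mapping to its storage identifier, plus a "total" field
    holding the number of fragments.
    """
    packet_key = f"fragment-packet:{raw_data_id}"
    r.hset(packet_key, str(sequence_number), storage_id)
    r.hset(packet_key, "total", total_fragments)
```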
Therefore, in the embodiment of the present application, the target server receiving the fragment data may be used only for receiving, and any server may be used for splicing the fragment data. In particular, the target server receiving the fragment data can store the received fragment data in the cloud database, so that any other server can obtain the fragment data from the cloud database, obtain the information of the fragment data from the information packet, and complete the assembly. This eliminates the prior-art requirement that the same server both receive and splice the fragment data, improves the flexibility of video transmission and splicing in a distributed environment, and reduces the load on the servers. In particular, any server can be used to receive the uploaded video fragments, which improves the utilization of server resources in a distributed environment.
According to the video data storage scheme of the embodiment of the application, a plurality of fragment data corresponding to original video data and fragment information of the plurality of fragment data are obtained, and the plurality of fragment data are stored to obtain a storage identifier of each fragment data; corresponding information packets are created using the video identification of the original video data and the storage identification is stored in the information packets in association with the fragmentation information. Therefore, a user can use any server to splice the fragmented data by using the fragmented information stored in the information packet, so as to generate the original video data, thereby eliminating the need of receiving all fragmented data by using the same server in order to ensure successful splicing in the prior art, greatly reducing the complexity of video data transmission, and improving the experience of uploading video data in a distributed environment.
Fig. 2 is a flowchart of an embodiment of a video data storage method provided in the present application. As shown in fig. 2, the video data storage method may include the steps of:
s201, acquiring a plurality of fragment data corresponding to the original video data and fragment information of the plurality of fragment data.
In step S201, a plurality of fragment data uploaded by a user and the fragment information corresponding to the fragment data may be acquired. In the embodiment of the present application, when a user uploads video data by using a client, the video data can be divided according to its size to generate a plurality of fragment data, and a raw data identifier can be generated for the original video data so that, after being uploaded to a server, the original video data can be uniquely identified and distinguished by the raw data identifier. In addition, after the video data is divided, the user may further specify the server identifier of the server to which the data is to be uploaded, or the client currently used by the user may determine the server identifier of the destination server. In step S201, the fragment information of the fragment data generated by the user dividing the original video data with the client may be obtained, for example, the sequence number of each fragment data and the raw data identifier assigned to each fragment data to identify the video data to which it belongs.
S202, storing the plurality of fragment data to obtain the storage identifier of each fragment data.
In step S202, the server may store the received plurality of fragment data to obtain the storage identifier of each fragment data. For example, in step S202, the target server may upload the received fragment data to the cloud database and thereby obtain, from the cloud database, the fragment storage identifier corresponding to each fragment data.
S203, create a corresponding information packet using the video identifier of the original video data.
S204, storing the storage identification in the information packet in association with the fragment information.
In step S203, an information packet may be created using the video identifier of the original video data; in other words, the packet may use the video identifier of the original video data as its packet identifier, so as to indicate that the information recorded in the packet describes the fragment data divided from that original video data. For example, in step S203, the information packet may be created in a non-relational database; Redis (Remote Dictionary Server), a memory-based key-value NoSQL database, may be used, for example, as the non-relational database in step S203.
In step S204, each fragment data sequence number and the corresponding storage identifier acquired from the cloud database in step S202 may be stored, in association with each other, in the information packet created in step S203.
In addition, in step S204, the raw data identifier and the number of all the fragment data generated by dividing the original video data may further be stored in the information packet in association with each other. Therefore, the information packet has, as its packet identifier, the raw data identifier of the original video data to which the fragment data belong, and stores the sequence number of each fragment data, the storage identifier under which each fragment data is stored in the cloud database, and the total number of fragment data. That is, all information relating to these fragment data is stored in the information packet. Therefore, when the fragment data need to be combined to obtain the original video data, the information packet can be found according to the raw data identifier of the original video data, and the information of each fragment can then be obtained from the information packet. For example, the storage identifier of a fragment obtained from the information packet may be used to retrieve the corresponding fragment data from the database storing the fragment data, and the fragment data may be spliced according to the sequence numbers, the raw data identifier, and the total number of fragment data to obtain the complete original video data.
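A minimal reassembly sketch, under the same assumptions as the earlier examples (a Redis hash as the information packet and an S3-style store holding the fragments), is given below; all key names, the bucket, and the error handling are hypothetical.

```python
import boto3
import redis

r = redis.Redis(host="localhost", port=6379)
s3 = boto3.client("s3")
BUCKET = "video-fragments"  # must match the bucket used when storing fragments


def reassemble_video(raw_data_id, output_path):
    """Rebuild the original video on any server from the information packet."""
    packet = r.hgetall(f"fragment-packet:{raw_data_id}")
    total = int(packet.pop(b"total"))
    if len(packet) != total:
        raise ValueError("some fragments are missing from the information packet")
    # Sort the remaining fields by fragment sequence number before splicing.
    ordered = sorted(packet.items(), key=lambda item: int(item[0]))
    with open(output_path, "wb") as out:
        for _sequence_number, storage_id in ordered:
            obj = s3.get_object(Bucket=BUCKET, Key=storage_id.decode())
            out.write(obj["Body"].read())
```

Because the packet records both the storage identifiers and the total fragment count, any server with access to the packet and the fragment store can perform this splicing, which is the point made above.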
According to the video data storage method provided by the embodiment of the application, a plurality of fragment data and fragment information of the plurality of fragment data corresponding to original video data are acquired, and the plurality of fragment data are stored to acquire the storage identifier of each fragment data; corresponding information packets are created using the video identification of the original video data and the storage identification is stored in the information packets in association with the fragmentation information. Therefore, a user can use any server to splice the fragmented data by using the fragmented information stored in the information packet, so as to generate the original video data, thereby eliminating the need of receiving all fragmented data by using the same server in order to ensure successful splicing in the prior art, greatly reducing the complexity of video data transmission, and improving the experience of uploading video data in a distributed environment.
Fig. 3 is a schematic block diagram of an embodiment of a video data storage apparatus provided in the present application, which can be used to execute the steps of the method shown in fig. 2. As shown in fig. 3, the video data storage apparatus may include: a fragmentation information acquisition module 31, a storage identity acquisition module 32, a grouping creation module 33 and an information storage module 34.
The fragment information obtaining module 31 may be configured to obtain a plurality of fragment data corresponding to the original video data and fragment information of the plurality of fragment data.
The fragment information obtaining module 31 may obtain a plurality of fragment data uploaded by the user and the fragment information corresponding to the fragment data. In the embodiment of the present application, when a user uploads video data by using a client, the video data can be divided according to its size to generate a plurality of fragment data, and a raw data identifier can be generated for the original video data so that, after being uploaded to a server, the original video data can be uniquely identified and distinguished by the raw data identifier. In addition, after the video data is divided, the user may further specify the server identifier of the server to which the data is to be uploaded, or the client currently used by the user may determine the server identifier of the destination server. The fragment information obtaining module 31 may obtain the fragment information of the fragment data generated by the user dividing the original video data with the client, for example, the sequence number of each fragment data and the raw data identifier assigned to each fragment data to identify the video data to which it belongs.
The storage identifier acquiring module 32 may be configured to store the plurality of fragment data to acquire the storage identifier of each fragment data.
The storage identifier obtaining module 32 may store the received plurality of fragment data to obtain the storage identifier of each fragment data. For example, the storage identifier obtaining module 32 may upload the received fragment data to the cloud database and thereby obtain, from the cloud database, the fragment storage identifier corresponding to each fragment data.
The packet creation module 33 may be used to create a corresponding information packet using the video identifier of the original video data; in other words, the packet may use the video identifier of the original video data as its packet identifier, so as to indicate that the information recorded in the packet describes the fragment data divided from that original video data. For example, the packet creation module 33 may create the information packet in a non-relational database, and Redis (Remote Dictionary Server), a memory-based key-value NoSQL database, may be used, for example, as that non-relational database.
The information storage module 34 may be configured to store, for each fragment data sequence number, the corresponding storage identifier in the information packet in association with the fragment information.
Furthermore, the information storage module 34 may further store the raw data identifier and the number of all the fragment data generated by dividing the original video data in the information packet in association with each other. Therefore, the information packet has, as its packet identifier, the raw data identifier of the original video data to which the fragment data belong, and stores the sequence number of each fragment data, the storage identifier under which each fragment data is stored in the cloud database, and the total number of fragment data. That is, all information relating to these fragment data is stored in the information packet. Therefore, when the fragment data need to be combined to obtain the original video data, the information packet can be found according to the raw data identifier of the original video data, and the information of each fragment can then be obtained from the information packet. For example, the storage identifier of a fragment obtained from the information packet may be used to retrieve the corresponding fragment data from the database storing the fragment data, and the fragment data may be spliced according to the sequence numbers, the raw data identifier, and the total number of fragment data to obtain the complete original video data.
According to the video data storage device provided by the embodiment of the application, a plurality of fragment data corresponding to original video data and fragment information of the plurality of fragment data are obtained, and the plurality of fragment data are stored to obtain the storage identifier of each fragment data; corresponding information packets are created using the video identification of the original video data and the storage identification is stored in the information packets in association with the fragmentation information. Therefore, a user can use any server to splice the fragmented data by using the fragmented information stored in the information packet, so as to generate the original video data, thereby eliminating the need of using the same server to receive all the fragmented data in order to ensure successful splicing in the prior art, greatly reducing the complexity of video data transmission, and improving the experience of uploading the video data in a distributed environment for the user.
The internal functions and structure of the video data storage apparatus are described above, and the apparatus can be implemented as an electronic device. Fig. 4 is a schematic structural diagram of an embodiment of an electronic device provided in the present application. As shown in fig. 4, the electronic device includes a memory 41 and a processor 42.
The memory 41 is used for storing a program. In addition to the above-described program, the memory 41 may also be configured to store other various data to support operations on the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, videos, and the like.
The memory 41 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The processor 42 is not limited to a central processing unit (CPU), and may also be a processing chip such as a graphics processing unit (GPU), a field programmable gate array (FPGA), an embedded neural network processor (NPU), or an artificial intelligence (AI) chip. The processor 42 is coupled to the memory 41 and runs the program stored in the memory 41, wherein the program, when run, executes the video data storage method of the above embodiment.
Further, as shown in fig. 4, the electronic device may further include: communication components 43, power components 44, audio components 45, display 46, and the like. Only some of the components are schematically shown in fig. 4, and the electronic device is not meant to include only the components shown in fig. 4.
The communication component 43 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device may access a wireless network based on a communication standard, such as WiFi, 3G, 4G, or 5G, or a combination thereof. In an exemplary embodiment, the communication component 43 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 43 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
A power supply component 44 provides power to the various components of the electronic device. The power components 44 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic devices.
The audio component 45 is configured to output and/or input audio signals. For example, the audio component 45 includes a microphone (MIC) configured to receive external audio signals when the electronic device is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode. The received audio signal may further be stored in the memory 41 or transmitted via the communication component 43. In some embodiments, the audio component 45 also includes a speaker for outputting audio signals.
The display 46 includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and these modifications or substitutions do not depart from the spirit of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A video data storage method, comprising:
acquiring a plurality of fragment data corresponding to original video data and fragment information of the plurality of fragment data, wherein the plurality of fragment data are obtained by dividing the original video data;
storing the plurality of fragment data to obtain a storage identifier of each fragment data;
creating a corresponding information packet using the video identifier of the original video data;
storing the storage identity in the information packet in association with the fragmentation information.
2. The video data storage method of claim 1, wherein the fragmentation information comprises a fragmentation data sequence number for each of the plurality of fragmentation data, and said storing the storage identity in the information packet in association with the fragmentation information comprises:
for each fragment data, storing its storage identity in the information packet in association with its fragment data sequence number.
3. The video data storage method of claim 1, wherein the fragmentation information comprises a number of the plurality of fragmentation data, and said storing the storage identity in association with the fragmentation information in the information packet comprises:
storing the number of the plurality of fragment data in association with the video identification in the information packet.
4. The video data storage method of claim 1, wherein said storing the plurality of fragment data to obtain the storage identity of each fragment data comprises:
storing the plurality of fragment data in a cloud database;
and acquiring the storage identifier of each fragment data from the cloud database.
5. The video data storage method according to any of claims 1-4, wherein said creating a corresponding information packet using a video identification of said original video data comprises:
creating the information packet in a non-relational database and using the video identification as a packet identification for the information packet.
6. A video data storage apparatus comprising:
a fragment information acquisition module, configured to acquire a plurality of fragment data corresponding to original video data and fragment information of the plurality of fragment data, wherein the plurality of fragment data are obtained by dividing the original video data;
the storage identifier acquisition module is used for storing the plurality of fragment data to acquire the storage identifier of each fragment data;
the grouping creation module is used for creating corresponding information groups by using the video identifiers of the original video data;
an information storage module to store the storage identity in association with the fragmentation information in the information packet.
7. The video data storage device of claim 6, wherein the fragmentation information comprises a fragmentation data sequence number for each of the plurality of fragmentation data, and the information storage module is further configured to:
for each fragment data, storing its storage identity in the information packet in association with its fragment data sequence number.
8. The video data storage device of claim 6, wherein the fragmentation information comprises a number of the plurality of fragmentation data, and the information storage module is further configured to:
storing the number of the plurality of fragment data in association with the video identity in the information packet.
9. A computer-readable storage medium, on which a computer program executable by a processor is stored, wherein the program, when executed by the processor, implements the video data storage method according to any one of claims 1 to 6.
10. An electronic device, comprising:
a memory for storing a program;
a processor for executing the program stored in the memory to perform the video data storage method according to any one of claims 1 to 6.
CN202210543941.4A 2022-05-18 2022-05-18 Video data storage method and device, computer readable storage medium and electronic equipment Pending CN115190352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210543941.4A CN115190352A (en) 2022-05-18 2022-05-18 Video data storage method and device, computer readable storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210543941.4A CN115190352A (en) 2022-05-18 2022-05-18 Video data storage method and device, computer readable storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115190352A true CN115190352A (en) 2022-10-14

Family

ID=83513951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210543941.4A Pending CN115190352A (en) 2022-05-18 2022-05-18 Video data storage method and device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115190352A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115643453A (en) * 2022-12-23 2023-01-24 北京安锐卓越信息技术股份有限公司 Video uploading method, system, user terminal, server and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010075795A1 (en) * 2008-12-31 2010-07-08 华为技术有限公司 Method and device for fragment information processing
CN110602122A (en) * 2019-09-20 2019-12-20 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN112511650A (en) * 2020-12-22 2021-03-16 湖南新云网科技有限公司 Video uploading method, device, equipment and readable storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010075795A1 (en) * 2008-12-31 2010-07-08 华为技术有限公司 Method and device for fragment information processing
CN110602122A (en) * 2019-09-20 2019-12-20 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN112511650A (en) * 2020-12-22 2021-03-16 湖南新云网科技有限公司 Video uploading method, device, equipment and readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115643453A (en) * 2022-12-23 2023-01-24 北京安锐卓越信息技术股份有限公司 Video uploading method, system, user terminal, server and storage medium
CN115643453B (en) * 2022-12-23 2023-03-21 北京安锐卓越信息技术股份有限公司 Video uploading method, system, user terminal, server and storage medium

Similar Documents

Publication Publication Date Title
KR101758167B1 (en) Using quality information for adaptive streaming of media content
WO2018223842A1 (en) Video file transcoding system, segmentation method, and transcoding method and device
CN111800443B (en) Data processing system and method, device and electronic equipment
CN109542361B (en) Distributed storage system file reading method, system and related device
EP3197167B1 (en) Image transmission method and apparatus
CN104994401A (en) Barrage processing method, device and system
CN109729386B (en) Video file playing starting method and system, electronic equipment and storage medium
CN111431813B (en) Access current limiting method, device and storage medium
US10824901B2 (en) Image processing of face sets utilizing an image recognition method
KR20170073605A (en) Composite partition functions
CN109510754B (en) Online document generation method, device and system and electronic equipment
EP3125501A1 (en) File synchronization method, server, and terminal
CN110297944B (en) Distributed XML data processing method and system
CN108874825B (en) Abnormal data verification method and device
CN109525622B (en) Fragment resource ID generation method, resource sharing method, device and electronic equipment
US20160203144A1 (en) Method and System for Processing Associated Content
CN115190352A (en) Video data storage method and device, computer readable storage medium and electronic equipment
CN109788251B (en) Video processing method, device and storage medium
US20240202035A1 (en) Systems and methods for determining target allocation parameters for initiating targeted communications in complex computing networks
CN103152606A (en) Video file processing method, device and system
CN110719526A (en) Video playing method and device
CN114245175A (en) Video transcoding method and device, electronic equipment and storage medium
CN113395487A (en) Video data storage management method and device, computer equipment and storage medium
CN110677443A (en) Data transmitting and receiving method, transmitting end, receiving end, system and storage medium
CN113038192A (en) Video processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination