CN113891112A - Live broadcast method, device, medium and equipment for billion pixel video - Google Patents

Live broadcast method, device, medium and equipment for billion pixel video

Info

Publication number
CN113891112A
CN113891112A (application CN202111149485.7A)
Authority
CN
China
Prior art keywords
video
canvas
resolution
target
client
Prior art date
Legal status
Granted
Application number
CN202111149485.7A
Other languages
Chinese (zh)
Other versions
CN113891112B (en)
Inventor
赵月峰
温建伟
袁潮
Current Assignee
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202111149485.7A
Publication of CN113891112A
Application granted
Publication of CN113891112B
Status: Active; anticipated expiration pending

Classifications

    • H04N 21/23424: splicing one content stream with another, e.g. for inserting or substituting an advertisement
    • H04N 21/234363: altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N 21/2393: handling client requests on the upstream path of the transmission network
    • H04N 21/437: interfacing the upstream path, e.g. for transmitting client requests to a VOD server
    • H04N 21/44016: splicing one content stream with another, e.g. for substituting a video clip
    • H04N 21/440263: altering the spatial resolution, e.g. for displaying on a connected PDA
    • H04N 21/4728: end-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H04N 5/265: mixing (studio circuits)
    • Y02D 10/00: energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to a live broadcast method, apparatus, medium, and device for gigapixel video. Applied to a server, the method comprises the following steps: acquiring all camera videos shot by an array camera, wherein each camera video comprises M video streams of different resolutions; fusing the video streams of the same resolution across all camera videos into M canvases of different resolutions, where M ≥ 2; and, after receiving a live broadcast request from a client, providing the client with information about the M canvases of different resolutions, so that the client determines, according to a selection area and a display resolution, the target-resolution canvas among the M canvases, and pulls the video streams corresponding to the target-resolution canvas. The server thus provides fixed computing power while the computation is shifted to the client, so that a large number of clients can be supported without increasing server-side computing power.

Description

Live broadcast method, device, medium and equipment for billion pixel video
Technical Field
The present disclosure relates to the field of live video, and in particular, to a live broadcast method, apparatus, medium, and device for billion-pixel video.
Background
In the related art, an array camera comprises multiple cameras that simultaneously shoot multiple video channels, which can be stitched into a single gigapixel video. During interactive live broadcast, the client sends the coordinates of its selection area to the server; the server determines which videos need to be decoded, then crops, renders, and encodes them. When multiple clients view interactively, each client's selection area may differ, and the server must perform the corresponding computation for each request. However, the server's computing and encoding capabilities are limited: once the number of clients reaches the server's capacity, the system can only be expanded by adding servers. This traditional live broadcast approach is costly and ill-suited to gigapixel live broadcast for large numbers of users.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a live broadcast method, apparatus, medium, and device for gigapixel video.
According to a first aspect of the present disclosure, there is provided a live broadcast method of gigapixel video, applied to a server, including:
acquiring all camera videos shot by an array camera, wherein each camera video comprises M video streams of different resolutions, and M ≥ 2;
fusing the video streams of the same resolution across all camera videos into M canvases of different resolutions;
after receiving a live broadcast request from a client, providing the client with information about the M canvases of different resolutions, so that the client determines, according to a selection area and a display resolution, the target-resolution canvas among the M canvases, and pulls the video streams corresponding to the target-resolution canvas and the selection area.
Based on the above scheme, in some embodiments, the method for live broadcasting of billion pixel video further includes:
dividing the M canvases of different resolutions according to the same division rule, so that each canvas is divided into N blocks, where N ≥ 1; numbering each block, each block corresponding to multiple camera videos;
and storing the video streams of different resolutions of the multiple camera videos corresponding to each block on one or more designated servers, and establishing the correspondence among block numbers, the video streams of different resolutions, and the storage servers.
Based on the above scheme, in some embodiments, after receiving the live broadcast request from the client, the method further includes:
providing the division rule to the client.
According to another aspect of the present disclosure, there is provided a live broadcast method of gigapixel video, applied to a client, including:
sending a live broadcast request to a server and acquiring information about M canvases of different resolutions;
determining, by the client according to a selection area and a display resolution, the target-resolution canvas among the M canvases of different resolutions;
determining the multiple camera videos corresponding to the selection area, and the target video streams in those camera videos corresponding to the target-resolution canvas;
pulling the target video streams;
and stitching and fusing the target video streams according to their positions in the selection area, cropping the result, and rendering it to a display device.
Based on the above scheme, in some embodiments, after sending the live broadcast request to the server, the live broadcast method of gigapixel video further includes:
acquiring a division rule from the server, the division rule comprising the correspondence among block numbers, video streams of different resolutions, and storage servers.
Based on the above scheme, in some embodiments, determining the multiple camera videos corresponding to the selection area and the target video streams corresponding to the target-resolution canvas includes:
determining the blocks corresponding to the selection area and their numbers;
determining the target video streams according to the blocks and the target resolution;
querying the division rule to determine the target servers where the target video streams are stored;
and pulling the target video streams from the target servers.
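The four steps above can be sketched as a simple lookup chain. This is a minimal illustration only: the structure of the division rule, the server names, and the function name are assumptions, not defined by the disclosure.

```python
# Hypothetical division rule: block number -> resolution level -> storage server.
# The concrete mapping structure and server names are illustrative assumptions.
DIVISION_RULE = {
    1: {1: "server-a", 2: "server-a", 3: "server-b"},
    2: {1: "server-c", 2: "server-a", 3: "server-b"},
}

def locate_stream(block_no: int, level: int) -> str:
    """Return the server storing the target video stream of block
    `block_no` at resolution level `level` (1 = highest resolution)."""
    return DIVISION_RULE[block_no][level]

# A client whose selection area falls in block 2 and whose target canvas
# is resolution level 1 would pull from:
target_server = locate_stream(2, 1)
```

The client only queries this table; the server never recomputes anything per request, which is the point of shifting work to the client.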
According to another aspect of the present disclosure, there is provided a live broadcast device of gigapixel video, applied to a server, including:
the array video acquisition module, used for acquiring all camera videos shot by the array camera, wherein each camera video comprises M video streams of different resolutions;
the canvas fusion module, used for fusing the video streams of the same resolution across all camera videos into M canvases of different resolutions;
and the response module, used for providing the client, after receiving its live broadcast request, with information about the M canvases of different resolutions, so that the client determines, according to a selection area and a display resolution, the target-resolution canvas among the M canvases and pulls the video streams corresponding to the target-resolution canvas and the selection area.
According to another aspect of the present disclosure, there is provided a live device of gigapixel video, applied to a client, including:
the request module, used for sending a live broadcast request to the server and acquiring information about M canvases of different resolutions;
the canvas selection module, used for determining, according to a selection area and a display resolution, the target-resolution canvas among the M canvases of different resolutions;
the target video stream determining module, used for determining the multiple camera videos corresponding to the selection area and the target video streams in those camera videos corresponding to the target-resolution canvas;
the pulling module, used for pulling the target video streams;
and the rendering module, used for stitching and fusing the target video streams according to their positions in the selection area, cropping the result, and rendering it to a display device.
According to another aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed, implements the steps of the live broadcast method of gigapixel video.
According to another aspect of the present disclosure, there is provided a computer device comprising a processor, a memory, and a computer program stored on the memory, wherein the processor, when executing the computer program, implements the steps of the live broadcast method of gigapixel video.
In summary, the server acquires video streams of different resolutions for the multiple camera videos of an array camera and builds multiple canvases of different resolutions. After receiving a live broadcast request from a client, it provides the client with information about those canvases, so that the client determines, according to its selection area and display resolution, the target-resolution canvas and pulls the video streams corresponding to that canvas and the selection area. Computation is thereby shifted to the client: the server's computing power remains fixed, and a large number of clients can be supported without increasing it.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the disclosure. In the drawings:
FIG. 1 is a flow diagram illustrating a live method of gigapixel video according to an exemplary embodiment.
FIG. 2 is a schematic diagram of an array camera image shown in accordance with an exemplary embodiment.
FIG. 3 is a diagram illustrating the segmentation of a canvas according to an exemplary embodiment.
FIG. 4 is a flow diagram illustrating a live method of gigapixel video, according to an exemplary embodiment.
FIG. 5 is a block diagram illustrating a live device of gigapixel video, according to an example embodiment.
FIG. 6 is a block diagram illustrating a live device of gigapixel video, according to an example embodiment.
FIG. 7 is a block diagram illustrating a computer device in accordance with an example embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments clearer, the technical solutions are described below clearly and completely with reference to the drawings. The described embodiments are some, but not all, of the possible embodiments; all other embodiments derived from them by a person skilled in the art without creative effort fall within the scope of protection of the present disclosure. It should be noted that the embodiments and their features may be combined with one another in any manner provided there is no conflict.
In the related art, an array camera comprises multiple cameras that simultaneously shoot multiple video channels, which can be stitched into a single gigapixel video. During interactive live broadcast, the client sends the coordinates of its selection area to the server; the server determines which videos need to be decoded, then crops, renders, and encodes them. When multiple clients view interactively, each client's selection area may differ, and the server must perform the corresponding computation for each request. However, the server's computing and encoding capabilities are limited: once the number of clients reaches the server's capacity, the system can only be expanded by adding servers, which is costly and unsuitable for live gigapixel video with large numbers of users.
To solve the above problems, a live broadcast method for gigapixel video is provided herein. FIG. 1 is a flow diagram illustrating a live broadcast method of gigapixel video according to an exemplary embodiment. Referring to FIG. 1, the server-side method includes at least steps S11 to S13, described in detail below.
Step S11: acquire all camera videos shot by the array camera, wherein each camera video comprises M video streams of different resolutions, and M ≥ 2.
The array camera comprises multiple cameras arranged in a defined order, each capturing video of a different region of the target field of view. Arranging the regional videos in camera order allows them to be stitched and fused into a video of a billion pixels or more.
Here, the server acquires all camera videos shot by the array camera. The server may be a single machine or a cluster of servers; in practice, the number of servers may be determined by each server's service capability and the number of videos to be processed.
Each camera in the array camera acquires a camera video.
In one example, a single camera of the array camera may have encoding capability, obtaining a video stream at its native resolution while also encoding that stream into several streams of lower resolutions. For example, if the native stream is 4K (3840 × 2160), it may additionally be encoded into a 2K (2560 × 1440) stream and a 1080P (1920 × 1080) stream, so that each camera video includes multiple streams of different resolutions, such as 4K, 2K, and 1080P. The number of encoded streams and their resolutions depend on the specific application scenario and are not limited herein.
In another example, a single camera in the array camera may lack encoding capability and output only a native-resolution stream, which the server receives and encodes into multiple streams of different resolutions.
In this way, the server can store multiple video streams of different resolutions for each camera video; these streams show the same picture content at different resolutions. The acquired streams of the different camera videos may also be marked.
For example, the cameras are numbered according to the position of their image content in the field of view. FIG. 2 is a schematic diagram of an array camera image shown according to an exemplary embodiment. Referring to FIG. 2, take an array camera of 12 cameras acquiring 12 channels of camera video; each camera video is numbered 1 through 12 according to the corresponding camera position. For the first camera video, the server encodes the native-resolution stream into the following 3 streams: a 4K (3840 × 2160) stream identified as 1-1, a 2K (2560 × 1440) stream identified as 1-2, and a 1080P (1920 × 1080) stream identified as 1-3. Similarly, for the second camera video, the three streams are identified as 2-1, 2-2, and 2-3, and so on through the 12th camera video.
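The numbering scheme above can be sketched in a few lines. This is an illustrative assumption of how such identifiers might be generated; the names `RESOLUTION_LEVELS` and `stream_id` are not from the patent.

```python
# Resolution ladder: level 1 is the native (highest) resolution.
RESOLUTION_LEVELS = ["4K", "2K", "1080P"]

def stream_id(camera_no: int, level: int) -> str:
    """Identifier '<camera>-<level>', e.g. camera 1 at 2K -> '1-2'."""
    return f"{camera_no}-{level}"

# Enumerate every stream identifier for the 12-camera array:
all_streams = [stream_id(cam, lvl)
               for cam in range(1, 13)
               for lvl in range(1, len(RESOLUTION_LEVELS) + 1)]
```

With 12 cameras and 3 resolution levels, the server would track 36 distinct streams.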
Step S12: fuse the video streams of the same resolution across all camera videos into M canvases of different resolutions.
Still taking the 12-channel camera video above as an example, the server synchronizes all video streams by timestamp and performs decoding and fusion rendering on the streams of the same resolution. Video images with the same timestamp in the 4K (3840 × 2160) streams identified as 1-1, 2-1, …, 12-1 are fused into one canvas, which may be identified as the first-layer canvas; similarly, video images with the same timestamp in the 2K (2560 × 1440) streams identified as 1-2, 2-2, …, 12-2 are fused into a second-layer canvas, and the 1080P streams into a third-layer canvas. Thus, 3 canvases corresponding to 4K, 2K, and 1080P are obtained.
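To make the scale of the three canvas layers concrete, here is a sketch of their pixel dimensions for a 4 × 3 camera grid as in FIG. 2. The helper is an assumption, and it ignores the overlap that real stitching between adjacent views would introduce, so these figures are upper bounds.

```python
def canvas_size(cols: int, rows: int, stream_w: int, stream_h: int):
    """Pixel dimensions of a canvas that fuses a cols x rows grid of
    same-resolution streams (no stitching overlap assumed)."""
    return cols * stream_w, rows * stream_h

# The three canvas layers for the 12-camera (4 x 3) example:
LAYERS = {
    "4K":    canvas_size(4, 3, 3840, 2160),
    "2K":    canvas_size(4, 3, 2560, 1440),
    "1080P": canvas_size(4, 3, 1920, 1080),
}
```

Even this small 12-camera example yields a 15360 × 6480 first-layer canvas (roughly 100 megapixels); gigapixel systems simply use larger arrays, which is why no single client could decode the whole canvas.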
In practice, the frame images with the latest timestamp of the multiple video streams may be fused in real time and the canvases updated in real time, so that the canvas content is always the latest video content. Alternatively, frames with the same timestamp may be collected and fused periodically, updating the canvases at intervals; this reduces server load and increases fusion capacity.
Those skilled in the art will understand that decoding and fusion rendering of the multiple video streams can be performed cooperatively by multiple servers. For example, one or more servers may decode and render the 4K (3840 × 2160) streams, while one or more other servers decode and render the 2K (2560 × 1440) and 1080P (1920 × 1080) streams. Deployment can be adapted flexibly to the performance of the available servers.
Step S13: after receiving a live broadcast request from a client, provide the client with information about the M canvases of different resolutions, so that the client determines, according to a selection area and a display resolution, the target-resolution canvas among the M canvases, and pulls the video streams corresponding to the target-resolution canvas and the selection area.
After receiving the client's live broadcast request, the server can provide the client with information about the fused canvases of different resolutions. This information may include the pixel size of each canvas, how many camera videos are stitched to form it, the position of each camera video within it, and the camera videos' resolutions. It may also include the image content of each area: for example, after the frames with the latest timestamps are fused, a thumbnail may be generated and provided to the client so that the client can select a region of interest.
The client may then select the canvas of a resolution appropriate to the selection area, based on the selection area and either the display resolution of the client's display device or a display resolution specified by the client.
For example, the client acquires the information of the 3 canvas layers corresponding to FIG. 2 from the server, with selection area A1, and computes the resolution corresponding to A1 in each of the 3 layers. If the resolution corresponding to A1 in the third-layer (lowest-resolution) canvas is greater than or equal to the client's display resolution, the third-layer canvas is the target-resolution canvas. If it is smaller, the second layer is compared next: if the resolution corresponding to A1 in the second-layer canvas is greater than or equal to the display resolution, the second layer is the target; otherwise the first-layer canvas is the target, even if the resolution corresponding to A1 in it is still below the display resolution. Alternatively, the comparison may start from the first-layer canvas and proceed layer by layer, selecting the canvas whose resolution for A1 is greater than and closest to the client's display resolution.
Similarly, when M is greater than 3, the determination still starts from the canvas with the lowest resolution or the canvas with the highest resolution, judging layer by layer whether the resolution corresponding to the selection area in each canvas is greater than or equal to the display resolution of the client and closest to it; if so, the corresponding canvas is taken as the canvas with the target resolution.
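The layer-by-layer selection described above can be sketched as follows. This is a minimal illustration assuming a 4 × 3 grid of camera tiles and the three resolutions of fig. 2; the helper names and the convention of expressing the selection area in full-canvas pixel coordinates are assumptions, not part of the patented method.

```python
# Illustrative sketch of target-canvas selection; names and the coordinate
# convention are assumptions based on the 12-camera example in the text.

def area_resolution(canvas_px, full_px, area_px):
    """Pixel size of the selection area when mapped into one canvas layer.

    canvas_px / full_px / area_px are (width, height) tuples; the area is
    given in coordinates of the full (highest-resolution) canvas.
    """
    sx = canvas_px[0] / full_px[0]
    sy = canvas_px[1] / full_px[1]
    return (area_px[0] * sx, area_px[1] * sy)

def pick_target_canvas(canvases, full_px, area_px, display_px):
    """Walk from the lowest-resolution layer upward; return the first layer
    whose mapped selection area meets the display resolution, otherwise the
    highest-resolution layer as the best available fallback."""
    for canvas_px in sorted(canvases):            # lowest resolution first
        w, h = area_resolution(canvas_px, full_px, area_px)
        if w >= display_px[0] and h >= display_px[1]:
            return canvas_px
    return max(canvases)

# 3 layers as in fig. 2: a 4 x 3 grid of 4K / 2K / 1080P tiles.
layers = [(4 * 3840, 3 * 2160), (4 * 2560, 3 * 1440), (4 * 1920, 3 * 1080)]
full = layers[0]
target = pick_target_canvas(layers, full, area_px=(4000, 2200),
                            display_px=(2560, 1440))
```

Walking from the lowest layer upward means the client never pulls more pixels than its display can use, which is the bandwidth-saving point of the multi-layer canvas.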
After the canvas of the target resolution is determined, the video streams corresponding to the canvas of the target resolution and the selection area can be pulled by the client, and operations such as decoding, fusion and rendering can be performed at the client. For example, in the embodiment shown in fig. 2, assuming that the canvas of the target resolution is the second layer canvas of 2K, the client may pull the 2K video streams 1-2, 2-2, 3-2, 5-2, 6-2, 7-2, 9-2, 10-2 and 11-2 of videos 1, 2, 3, 5, 6, 7, 9, 10 and 11 corresponding to the selection area A1, then decode, merge, and render the pulled 9 video streams to the display device.
In this embodiment, the server provides fixed resources, that is, the computing power of the server is fixed, and the computation is transferred to the client. Therefore, when the number of clients increases, the resource occupation of the server does not increase, and the server can support a large number of clients without capacity expansion.
If the number of cameras in the array camera is large, the video streams with different resolutions need to be cooperatively decoded by a plurality of servers and stored across those servers. When a client pulls the video streams corresponding to the target resolution, it needs to pull from several of these servers, and therefore needs to know in advance on which server each resolution of each camera video is stored.
In an exemplary embodiment, the live broadcast method of gigapixel video further comprises: dividing M layers of canvas with different resolutions by using the same division rule, dividing each canvas into N blocks, numbering each block, wherein each block corresponds to a plurality of paths of camera videos, and N is more than or equal to 1;
and storing the video streams with different resolutions of the multi-path camera video corresponding to each block in one or more designated servers, and establishing the corresponding relation among the block numbers, the video streams with different resolutions and the storage servers.
The canvas is divided into a plurality of blocks according to the camera videos it contains, each block corresponds to multiple camera videos, and a number is set for each block. The video streams with different resolutions of the multi-path camera video corresponding to each block are stored in one or more designated servers.
The multilayer canvases are divided using the same division rule, and the corresponding relation between the block numbers, the video streams with different resolutions and the storage servers is established. This specifies which camera videos correspond to each block number, and on which server the video streams of different resolutions of each camera video are stored. When a client initiates a live broadcast request, or changes the selection area through operations such as translation and scaling, the target server can be quickly determined and the corresponding video stream pulled, improving the response speed.
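The block-number / stream / storage-server correspondence described above can be sketched as a lookup table. The block layout, server names and stream-identifier scheme below follow the 12-camera example in the specification but are otherwise illustrative assumptions.

```python
# Illustrative sketch of the segmentation-rule lookup; block layout and
# server names are assumptions based on the 12-camera example, not the
# patent's actual data structures.

# Block number -> camera videos it contains (columns of the 4 x 3 grid).
BLOCKS = {
    "B1": [1, 5, 9],
    "B2": [2, 6, 10],
    "B3": [3, 7, 11],
    "B4": [4, 8, 12],
}

# Block number -> server that stores all resolutions of its streams.
BLOCK_SERVER = {"B1": "server-1", "B2": "server-2",
                "B3": "server-3", "B4": "server-4"}

# Resolution layer -> suffix used in stream identifiers such as "6-2".
RES_SUFFIX = {"4K": 1, "2K": 2, "1080P": 3}

def streams_for(block, resolution):
    """Stream identifiers of one block at one resolution, plus their server."""
    suffix = RES_SUFFIX[resolution]
    ids = [f"{cam}-{suffix}" for cam in BLOCKS[block]]
    return ids, BLOCK_SERVER[block]

ids, server = streams_for("B2", "2K")  # → (['2-2', '6-2', '10-2'], 'server-2')
```

Because every layer uses the same division rule, one table serves all resolutions; only the identifier suffix changes per layer.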
In an exemplary embodiment, the method for live broadcasting a billion pixel video, after receiving a play request from a client, further includes:
providing a segmentation rule to the client.
After the client initiates a live broadcast request, the server provides the client with the related information of the multi-layer canvases together with the segmentation rule. After the client determines the canvas with the target resolution, it queries the segmentation rule according to the blocks covered by the selection area, determines the video streams and storage locations corresponding to the block numbers, connects to the servers at those storage locations, and pulls the video streams.
FIG. 3 is a diagram illustrating the segmentation of a canvas according to an exemplary embodiment. Referring to fig. 3, still taking the 12-way camera video as an example, each way of camera video includes a video stream of 4K (3840 × 2160) resolution, a video stream of 2K (2560 × 1440) resolution, and a video stream of 1080P (1920 × 1080) resolution. The canvas is divided: in fig. 3, camera videos 1, 5 and 9 form block B1, camera videos 2, 6 and 10 form block B2, camera videos 3, 7 and 11 form block B3, and camera videos 4, 8 and 12 form block B4. When the canvas is divided, the blocks can be the same size or different sizes and can be divided flexibly according to actual requirements, as long as the same division rule is adopted for all layers of canvas.
The whole query and calculation process is completed by the client without the participation of the server, so that the computing power required of the server stays fixed and its resource occupation does not increase when the number of clients grows.
FIG. 4 is a flow diagram illustrating a live method of gigapixel video, according to an exemplary embodiment. Referring to fig. 4, the live broadcast method of gigapixel video is applied to the client, and includes at least steps S41 to S45, which are described in detail as follows:
and step S41, sending a live broadcast request to the server, and acquiring the related information of the canvas of the M layers with different resolutions. The client sends a live broadcast request for acquiring the target video to the server, and acquires the related information of the canvas with different resolutions of the M layers of the target video from the server. The M layers of canvas with different resolutions are obtained by fusing video streams with different resolutions of a plurality of camera videos by the server.
In step S42, the client determines, from among M layers of canvases with different resolutions, a canvas with a target resolution corresponding to the selection area and the display resolution according to the selection area and the display resolution.
And the client determines a canvas with a resolution suitable for the client from the M layers of canvases according to the selection area and the display resolution. The selection area is selected by the client in the canvas, and the display resolution of the client can be determined according to the resolution of the display device of the client or can be specified by the client.
For example, the client acquires the 3-layer canvas as shown in fig. 2 from the server, the selection area is A1, and the resolution corresponding to the selection area A1 in each of the 3 layers of canvas is calculated. If the resolution corresponding to the area A1 in the third layer of canvas with the lowest resolution is greater than or equal to the display resolution of the client, the third layer of canvas is determined to be the canvas with the target resolution; if the resolution corresponding to the area A1 in the third layer of canvas is less than the display resolution of the client, the resolution corresponding to the area A1 in the second layer of canvas is compared next, and if it is greater than or equal to the display resolution of the client, the second layer of canvas is determined to be the canvas with the target resolution; if the resolution corresponding to the area A1 in the second layer of canvas is also less than the display resolution of the client, the first layer canvas is determined to be the canvas of the target resolution, even if the resolution corresponding to the area A1 in the first layer canvas is still less than the display resolution of the client. When M is greater than 3, starting from the canvas with the lowest resolution, whether the resolution corresponding to the selection area in each canvas is greater than or equal to the display resolution of the client is judged layer by layer, and if so, the canvas of the current layer is taken as the canvas with the target resolution. How to determine the canvas of the target resolution has been described above and is not repeated here.
In step S43, multiple camera videos corresponding to the selection area and a target video stream corresponding to the canvas of the target resolution in the multiple camera videos are determined.
The selection area may be a partial area within a single camera video, or may be an area spanning multiple camera videos; the corresponding multiple camera videos can be determined according to the selection area. According to the canvas of the target resolution, the video streams of the target resolution in the selected multi-path camera videos can then be determined as the target video streams.
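Determining which camera videos a selection area covers reduces to a rectangle-to-grid intersection. The sketch below assumes the 4 × 3 grid of equally sized 4K tiles from the running example; the tile dimensions and row-major camera numbering are illustrative assumptions.

```python
# Minimal geometric sketch of mapping a selection rectangle to the camera
# videos it covers; grid layout and coordinates are illustrative.

TILE_W, TILE_H = 3840, 2160   # one 4K camera tile
COLS = 4                      # cameras numbered 1..12 row by row

def cameras_in_area(x, y, w, h):
    """Camera numbers whose tiles intersect the rectangle (x, y, w, h),
    given in full-canvas pixel coordinates."""
    c0, c1 = x // TILE_W, (x + w - 1) // TILE_W
    r0, r1 = y // TILE_H, (y + h - 1) // TILE_H
    return [r * COLS + c + 1
            for r in range(r0, r1 + 1)
            for c in range(c0, c1 + 1)]

# A selection spanning parts of tiles 2, 3, 6 and 7 (cf. area A2 in fig. 3):
cams = cameras_in_area(5000, 1000, 4000, 3000)
```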
Step S44, pulling the target video stream.
After the target video stream is determined, the client can pull the target video stream to the client locally.
And step S45, splicing and fusing the target video stream according to the position in the selected area, and rendering the target video stream to a display device after cutting.
Finally, the client decodes the pulled video streams, splices and fuses them according to the positions of the target-resolution video streams in the selection area, crops the result to obtain a video stream of the selection area at the target resolution, and renders it to the display device.
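The splice-and-crop step can be sketched as computing, for each pulled tile, its blit position on the target-resolution canvas together with the final crop rectangle. Tile size, camera numbering and the selection coordinates below are illustrative assumptions, not values from the specification.

```python
# Sketch of the splice-and-crop plan: where each decoded tile is placed on
# the target-resolution canvas, and which rectangle is cropped for display.

TILE_W, TILE_H = 2560, 1440   # one 2K tile on the target-resolution canvas
COLS = 4                      # cameras numbered 1..12 row by row

def tile_origin(cam):
    """Top-left corner of camera `cam` (1-based, row-major) on the canvas."""
    r, c = divmod(cam - 1, COLS)
    return c * TILE_W, r * TILE_H

def splice_plan(cams, area):
    """(camera -> blit position) plus the crop rectangle, both in canvas
    coordinates of the target-resolution layer."""
    placements = {cam: tile_origin(cam) for cam in cams}
    x, y, w, h = area
    return placements, (x, y, x + w, y + h)

placements, crop = splice_plan([2, 3, 6, 7], area=(3000, 700, 2560, 1440))
```

The actual decode and blend would be done by a video pipeline; the plan above only captures the geometry the text describes.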
The server only provides a fixed number of video streams with fixed resolutions for the client to pull. After the client pulls the multiple video streams corresponding to the selection area and resolution, the client performs the decoding, splicing and rendering of those streams, shifting the computation to the client.
In an exemplary embodiment, the method for live broadcasting the gigapixel video further comprises the following steps after the client sends a live broadcasting request to the server:
and acquiring a segmentation rule from the server, wherein the segmentation rule comprises a block number, and the corresponding relation between video streams with different resolutions and the storage server.
In order to accelerate acquisition of the target-resolution video streams corresponding to the selection area, after the server segments the canvases, the client sending a live broadcast request acquires both the related information of the multilayer canvases and the segmentation rule, which comprises the block numbers and the correspondence between the video streams with different resolutions and the storage servers. The client can quickly determine the target video streams according to the blocks covered by the selection area, quickly determine the addresses of the servers storing the target video streams according to the correspondence, and request the target video streams from those servers. The query work is completed at the client, further reducing the resource consumption of the server.
In an exemplary embodiment, the determining of the multi-path camera video corresponding to the selection area and the target video stream corresponding to the canvas of the target resolution in the multi-path camera video in step S43 includes:
determining a block corresponding to the selected area and the number of the block;
determining a target video stream in the block according to the block and the target resolution;
inquiring a segmentation rule, and determining a target server where a target video stream is located;
the target video stream is pulled from the target server.
For example, taking fig. 3 as an example, the client may determine, according to the selection area A2, that the blocks corresponding to the selection area are block B2 and block B3, and if the resolution specified by the client is 2K, the target resolution is 2K. It can thus be determined that the video streams of 2K resolution in camera videos 2, 3, 6 and 7 are the target video streams, namely video streams 2-2, 3-2, 6-2 and 7-2. The client then queries the segmentation rule and, according to the block numbers and the correspondence between the video streams with different resolutions and the storage servers, quickly determines the target servers where the target video streams are located, connects to them and pulls the target video streams, accelerating the display speed at the client.
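The four steps above can be sketched end to end for selection area A2. The segmentation-rule contents and server names below mirror the illustrative 12-camera example rather than any actual deployment.

```python
# End-to-end sketch of the lookup steps: blocks touched by the selection
# area -> target stream identifiers -> storage servers. All names are
# illustrative assumptions based on the example in the text.

SEGMENTATION_RULE = {          # block -> (camera videos, storage server)
    "B1": ([1, 5, 9], "server-1"),
    "B2": ([2, 6, 10], "server-2"),
    "B3": ([3, 7, 11], "server-3"),
    "B4": ([4, 8, 12], "server-4"),
}
RES_SUFFIX = {"4K": 1, "2K": 2, "1080P": 3}

def resolve_pull_targets(blocks, cameras, resolution):
    """For each touched block, keep only the cameras inside the selection
    and emit (stream id, server) pairs at the target resolution."""
    suffix = RES_SUFFIX[resolution]
    targets = []
    for block in blocks:
        block_cams, server = SEGMENTATION_RULE[block]
        for cam in block_cams:
            if cam in cameras:
                targets.append((f"{cam}-{suffix}", server))
    return targets

# Area A2 touches blocks B2 and B3 and cameras 2, 3, 6, 7, at 2K:
targets = resolve_pull_targets(["B2", "B3"], {2, 3, 6, 7}, "2K")
```

This reproduces the example's result: streams 2-2 and 6-2 pulled from server 2, and 3-2 and 7-2 from server 3.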
To better understand the live broadcast method of gigapixel video provided herein, an example is illustrated.
Referring to fig. 2 and 3, the server cluster acquires videos of a plurality of cameras from the array camera. The array camera is composed of 12 cameras, shooting 12 camera videos in total, and each camera video is numbered 1-12 according to the position of its camera. When shooting, in addition to the video stream at the original resolution of 4K (3840 × 2160), each camera also encodes the stream into a 2K (2560 × 1440) video stream and a 1080P (1920 × 1080) video stream; that is, each camera video comprises 3 video streams of different resolutions. The video streams of the three resolutions are numbered respectively: in the first camera video, the 4K (3840 × 2160) stream is identified as 1-1, the 2K (2560 × 1440) stream as 1-2, and the 1080P (1920 × 1080) stream as 1-3. Similarly, in the second camera video, the three streams are identified as 2-1, 2-2 and 2-3, and so on up to the 12th camera video.
After receiving the 36 paths of video streams, the server splices and fuses 12 video streams with original resolution of 4K (3840 multiplied by 2160) into a first layer of canvas according to corresponding camera positions; merging the 12 video streams with the resolution of 2K (2560 x 1440) into a second layer of canvas; the 12 video streams with a resolution of 1080P (1920 × 1080) are merged into a third layer canvas.
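The stream numbering and per-layer grouping just described can be sketched as follows; the resolution-index convention (1 for the original 4K stream, 2 for 2K, 3 for 1080P) follows the example above, while the code structure itself is an illustrative assumption.

```python
# Sketch of the 36-stream numbering: 12 cameras x 3 resolutions, with the
# streams of each resolution fused into one canvas layer.

RESOLUTIONS = [("4K", (3840, 2160)), ("2K", (2560, 1440)),
               ("1080P", (1920, 1080))]

def layer_streams():
    """Layer index (1-based) -> the 12 stream identifiers fused into that
    canvas layer."""
    return {
        layer + 1: [f"{cam}-{layer + 1}" for cam in range(1, 13)]
        for layer in range(len(RESOLUTIONS))
    }

canvas_streams = layer_streams()
# The second-layer (2K) canvas fuses streams 1-2, 2-2, ..., 12-2.
```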
The 12 camera videos in each layer of canvas are divided: camera videos 1, 5 and 9 form block B1, camera videos 2, 6 and 10 form block B2, camera videos 3, 7 and 11 form block B3, and camera videos 4, 8 and 12 form block B4, and the 36 video streams are stored to the designated servers, as shown in table 1.
Table 1:
[Table 1, rendered as images in the original publication, records for each block number (B1-B4) the video streams of each resolution and the server on which they are stored; for example, the streams of block B2 (camera videos 2, 6 and 10) are stored on server 2, and those of block B3 (camera videos 3, 7 and 11) on server 3.]
Table 1 records the correspondence between the block numbers, the video streams with different resolutions and the storage servers; after receiving a live broadcast request from the client, the related information of the 3-layer canvases and the segmentation rule are provided to the client.
Suppose the client's selection area is area A2 and the display resolution is 2K. After acquiring the three layers of canvas information from the server, the client judges, starting from the third layer canvas with the lowest resolution, whether the resolution corresponding to the selection area in each layer of canvas is greater than or equal to the display resolution of the client display device; suppose the second layer canvas meets the condition. The client can then determine, according to the segmentation rule, to pull the video streams identified as 2-2 and 6-2 from server 2 corresponding to block B2 and the video streams identified as 3-2 and 7-2 from server 3 corresponding to block B3, decode the 4 video streams, merge them, and render them to the display device.
Through the above embodiments, the server side obtains video streams of different resolutions for the multiple camera videos of the array camera, establishes multiple layers of canvas with different resolutions and the segmentation rule, and stores the video streams of the multiple camera videos in one or more designated servers. When a client initiates a live broadcast request, it obtains the related information of the multi-layer canvases and the segmentation rule from the server, determines the canvas with the target resolution based on the selection area and the display resolution of the client, and then further determines the target video streams corresponding to that canvas. The client determines the corresponding block numbers according to the segmentation rule, quickly queries the storage locations of the target video streams by block number, pulls the target video streams, and fuses and renders them to the client display device. In this way the service capability of each server can be integrated, providing video content with fixed computing power to clients; each client selects the target video streams of the appropriate resolution according to its own performance, pulls them for splicing and fusion, and renders them to the display device. The calculation work is completed by the client, so that a large number of clients can be supported while the computing power of the server remains unchanged.
FIG. 5 is a block diagram illustrating a live device of gigapixel video, according to an example embodiment. Referring to fig. 5, the live broadcast device of gigapixel video is applied to a server, and includes: an array video acquisition module 501, a canvas fusion module 502 and a response module 503.
The array video acquiring module 501 is configured to acquire all camera videos captured by the array cameras, wherein each camera video includes M video streams with different resolutions.
The canvas fusion module 502 is configured to fuse video streams of the same resolution in all camera videos into M layers of canvases of different resolutions.
The response module 503 is configured to, after receiving a live broadcast request from the client, provide the client with the relevant information of the M layers of canvases with different resolutions, so that the client determines, according to the selection area and the display resolution, a canvas with a target resolution corresponding to the selection area and the display resolution from among the M layers of canvases with different resolutions, and pulls a video stream corresponding to the canvas with the target resolution and the selection area.
In an exemplary embodiment, the canvas fusion module 502 is further configured to segment the M layers of canvases with different resolutions using the same segmentation rule, segment each canvas into N blocks, and number each block, each block corresponding to multiple camera videos;
and storing the video streams with different resolutions of the multi-path camera video corresponding to each block in one or more designated servers, and establishing the corresponding relation among the block numbers, the video streams with different resolutions and the storage servers.
In an exemplary embodiment, the response module 503 is further configured to provide the splitting rule to the client after receiving the play request from the client.
FIG. 6 is a block diagram illustrating a live device of gigapixel video, according to an example embodiment. Referring to fig. 6, a live device of gigapixel video is applied to a client, including: a request module 601, a canvas selection module 602, a target video stream determination module 603, a pull module 604, and a rendering module 605.
The request module 601 is configured to send a live broadcast request to a server, and obtain information related to M layers of canvases with different resolutions.
The canvas selection module 602 is configured for the client to determine a canvas of a target resolution corresponding to the selection area and the display resolution among the M layers of canvases of different resolutions according to the selection area and the display resolution.
The target video stream determination module 603 is configured to determine a plurality of camera videos corresponding to the selection area and a target video stream corresponding to a canvas of a target resolution in the plurality of camera videos.
The pull module 604 is configured for pulling the target video stream.
The rendering module 605 is configured to splice and fuse the target video streams according to the positions in the selection area, and render the target video streams to the display device after being cut.
In an exemplary embodiment, the request module 601 is further configured to obtain a splitting rule from the server after sending the live broadcast request to the server, where the splitting rule includes a block number, and a correspondence between video streams with different resolutions and a storage server.
In an exemplary embodiment, the target video stream determination module 603 is further configured to:
determining a block corresponding to the selected area and the number of the block;
determining a target video stream in a block according to the block and a target resolution;
querying the segmentation rule, and determining a target server where the target video stream is located;
pulling the target video stream from a target server.
FIG. 7 is a block diagram illustrating a computer device 700 for gigapixel video live, according to an example embodiment. For example, the computer device 700 may be provided as a server. Referring to fig. 7, the computer device 700 includes a processor 701, and the number of the processors may be set to one or more as necessary. The computer device 700 also includes a memory 702 for storing instructions, such as application programs, that are executable by the processor 701. The number of the memories can be set to one or more according to needs. Which may store one or more application programs. The processor 701 is configured to execute instructions to perform the method of gigapixel video live described above.
As will be appreciated by one skilled in the art, the embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer, and the like. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of additional like elements in the article or device comprising the element.
While the preferred embodiments herein have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of this disclosure.
It will be apparent to those skilled in the art that various changes and modifications may be made herein without departing from the spirit and scope thereof. Thus, it is intended that such changes and modifications be included herein, provided they come within the scope of the appended claims and their equivalents.

Claims (10)

1. A billion pixel video live broadcasting method is applied to a server side and comprises the following steps:
acquiring all camera videos shot by an array camera, wherein each camera video comprises M video streams with different resolutions, and M is more than or equal to 2;
fusing video streams with the same resolution in all the camera videos into M layers of canvas with different resolutions;
after receiving a live broadcast request of a client, providing relevant information of the M layers of canvases with different resolutions for the client, so that the client determines the canvases with target resolutions corresponding to the selection area and the display resolution in the canvases with the M layers of different resolutions according to the selection area and the display resolution, and pulls the video streams corresponding to the canvas with the target resolutions and the selection area.
2. A method of live broadcast of gigapixel video as claimed in claim 1, further comprising:
dividing the M layers of canvas with different resolutions by using the same division rule, dividing each canvas into N blocks, wherein N is more than or equal to 1, numbering each block, and each block corresponds to multiple paths of camera videos;
and storing the video streams with different resolutions of the multi-path camera video corresponding to each block in one or more appointed servers, and establishing the corresponding relation among the block numbers, the video streams with different resolutions and the storage servers.
3. A method of live broadcast of gigapixel video as claimed in claim 2, wherein after receiving the live broadcast request from the client, the method further comprises:
providing the segmentation rule to the client.
4. A billion pixel video live broadcasting method is applied to a client side and comprises the following steps:
sending a live broadcast request to a server to acquire related information of M layers of canvas with different resolutions;
the client determines a canvas with a target resolution corresponding to the selection area and the display resolution from the M layers of canvases with different resolutions according to the selection area and the display resolution;
determining a plurality of camera videos corresponding to the selection area and a target video stream corresponding to the canvas of the target resolution in the plurality of camera videos;
pulling the target video stream;
and splicing and fusing the target video stream according to the position in the selected area, and rendering the target video stream to a display device after cutting.
5. The method of claim 4, wherein sending a live request to the server further comprises:
and acquiring a segmentation rule from the server, wherein the segmentation rule comprises a block number, and the corresponding relation between video streams with different resolutions and the storage server.
6. A live method of gigapixel video as recited in claim 5, wherein the determining a plurality of camera videos corresponding to the selection area and a target video stream of the plurality of camera videos corresponding to the canvas of the target resolution comprises:
determining a block corresponding to the selected area and the number of the block;
determining a target video stream in the block according to the block and the target resolution;
querying the segmentation rule and determining a target server where the target video stream is located;
pulling the target video stream from the target server.
7. A live broadcast device of gigapixel video, characterized in that the device is applied to a server side and comprises:
the array video acquisition module is used for acquiring all camera videos shot by the array camera, wherein each path of camera video comprises M video streams with different resolutions;
the canvas fusion module is used for fusing video streams with the same resolution in all the camera videos into M layers of canvases with different resolutions;
and the response module is used for providing the relevant information of the M layers of canvases with different resolutions for the client after receiving a live broadcast request of the client, so that the client determines the canvases with the target resolution corresponding to the selection area and the display resolution in the canvases with the M layers of different resolutions according to the selection area and the display resolution, and pulls the video streams corresponding to the canvas with the target resolution and the selection area.
8. A live broadcast device for billion pixel video, applied to a client, the device comprising:
a request module, configured to send a live broadcast request to a server and acquire information about M canvas layers of different resolutions;
a canvas selection module, configured to determine, according to the selection area and the display resolution, the canvas of the target resolution from the M canvas layers of different resolutions;
a target video stream determining module, configured to determine the multiple camera videos corresponding to the selection area and the target video streams, among those camera videos, corresponding to the canvas of the target resolution;
a pulling module, configured to pull the target video streams; and
a rendering module, configured to stitch and fuse the target video streams according to their positions in the selection area, crop the result, and render it to a display device.
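One plausible reading of the canvas selection module in claim 8 is: pick the lowest-resolution canvas layer whose rendering of the selection area still covers the display, so the client never pulls more pixels than the screen can show. The function below is a sketch under that assumption; its name and parameters are illustrative, not from the patent:

```python
# Hypothetical sketch of claim 8's canvas selection: choose the
# smallest canvas layer on which the selection area, once scaled,
# meets or exceeds the display width.

def pick_target_canvas(canvas_widths, full_scene_width,
                       selection_width, display_width):
    """canvas_widths: widths of the M canvas layers.
    Returns the width of the smallest canvas whose mapping of the
    selection area covers the display (else the largest canvas)."""
    for canvas_w in sorted(canvas_widths):
        # width of the selection when projected onto this canvas layer
        sel_on_canvas = selection_width * canvas_w / full_scene_width
        if sel_on_canvas >= display_width:
            return canvas_w
    return max(canvas_widths)
```

For example, with layers 4,000 / 16,000 / 64,000 pixels wide over a 64,000-pixel scene, an 8,000-pixel-wide selection shown on a 1,920-pixel display maps to only 500 pixels on the smallest layer, so the 16,000-pixel layer (2,000 mapped pixels) is chosen.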
9. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed, implements the steps of the method according to any one of claims 1-6.
10. A computer device comprising a processor, a memory, and a computer program stored in the memory, wherein the steps of the method according to any one of claims 1-6 are implemented when the computer program is executed by the processor.
CN202111149485.7A 2021-09-29 2021-09-29 Live broadcasting method, device, medium and equipment of billion pixel video Active CN113891112B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111149485.7A CN113891112B (en) 2021-09-29 2021-09-29 Live broadcasting method, device, medium and equipment of billion pixel video

Publications (2)

Publication Number Publication Date
CN113891112A true CN113891112A (en) 2022-01-04
CN113891112B CN113891112B (en) 2023-12-05

Family

ID=79007953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111149485.7A Active CN113891112B (en) 2021-09-29 2021-09-29 Live broadcasting method, device, medium and equipment of billion pixel video

Country Status (1)

Country Link
CN (1) CN113891112B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170302719A1 (en) * 2016-04-18 2017-10-19 Qualcomm Incorporated Methods and systems for auto-zoom based adaptive video streaming
CN107749857A (en) * 2017-11-01 2018-03-02 深圳市普天宜通技术股份有限公司 Method, storage medium and client a kind of while that check multi-channel video
US10341605B1 (en) * 2016-04-07 2019-07-02 WatchGuard, Inc. Systems and methods for multiple-resolution storage of media streams
CN110460871A (en) * 2019-08-29 2019-11-15 香港乐蜜有限公司 Generation method, device, system and the equipment of live video
CN110662100A (en) * 2018-06-28 2020-01-07 中兴通讯股份有限公司 Information processing method, device and system and computer readable storage medium
CN111193937A (en) * 2020-01-15 2020-05-22 北京拙河科技有限公司 Processing method, device, equipment and medium for live video data
CN111385607A (en) * 2018-12-29 2020-07-07 浙江宇视科技有限公司 Resolution determination method and device, storage medium, client and server
CN111601151A (en) * 2020-04-13 2020-08-28 北京拙河科技有限公司 Method, device, medium and equipment for reviewing hundred million-level pixel video
CN112473130A (en) * 2020-11-26 2021-03-12 成都数字天空科技有限公司 Scene rendering method and device, cluster, storage medium and electronic equipment
US20210258621A1 (en) * 2018-06-12 2021-08-19 Ela KLIOTS SHAPIRA Method and system for automatic real-time frame segmentation of high resolution video streams into constituent features and modifications of features in each frame to simultaneously create multiple different linear views from same video source
WO2021179783A1 (en) * 2020-03-11 2021-09-16 叠境数字科技(上海)有限公司 Free viewpoint-based video live broadcast processing method, device, system, chip and medium

Similar Documents

Publication Publication Date Title
CN111193937B (en) Live video data processing method, device, equipment and medium
CN101778257B (en) Generation method of video abstract fragments for digital video on demand
US20140292746A1 (en) Clustering crowdsourced videos by line-of-sight
US20150222815A1 (en) Aligning videos representing different viewpoints
US11539983B2 (en) Virtual reality video transmission method, client device and server
CN109698949B (en) Video processing method, device and system based on virtual reality scene
US11694303B2 (en) Method and apparatus for providing 360 stitching workflow and parameter
CN111225228B (en) Video live broadcast method, device, equipment and medium
CN112468832A (en) Billion-level pixel panoramic video live broadcast method, device, medium and equipment
CN104301769B (en) Method, terminal device and the server of image is presented
CN111614975B (en) Hundred million-level pixel video playing method, device, medium and equipment
CN109389550B (en) Data processing method, device and computing equipment
CN113891111B (en) Live broadcasting method, device, medium and equipment of billion pixel video
CN111542862A (en) Method and apparatus for processing and distributing live virtual reality content
CN105282526A (en) Panorama video stitching method and system
KR20220149574A (en) 3D video processing method, apparatus, readable storage medium and electronic device
CN111601151A (en) Method, device, medium and equipment for reviewing hundred million-level pixel video
US11871137B2 (en) Method and apparatus for converting picture into video, and device and storage medium
CN108401163B (en) Method and device for realizing VR live broadcast and OTT service system
CN107707830B (en) Panoramic video playing and photographing system based on one-way communication
JP2015050572A (en) Information processing device, program, and information processing method
CN113891112A (en) Live broadcast method, device, medium and equipment for billion pixel video
CN112188219A (en) Video receiving method and device and video transmitting method and device
CN113259601A (en) Video processing method and device, readable medium and electronic equipment
CN114513702B (en) Web-based block panoramic video processing method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant