CN111510759A - Video display method, device and readable storage medium

Video display method, device and readable storage medium

Info

Publication number
CN111510759A
Authority
CN
China
Prior art keywords
video
target data
data packet
packet
queue
Prior art date
Legal status
Granted
Application number
CN202010188270.5A
Other languages
Chinese (zh)
Other versions
CN111510759B (en)
Inventor
蔡耀
王海
胡磊
杨春晖
Current Assignee
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority claimed from CN202010188270.5A
Publication of CN111510759A
Application granted
Publication of CN111510759B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the invention provides a video display method, a video display device and a readable storage medium. The video display method comprises: receiving, through a first receiving thread, a media packet sent by a streaming media server, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet; analyzing the media packet to obtain a target data packet whose type is the video type, and placing the target data packet at the tail of a video queue; obtaining the target data packet from the head of the video queue through a decoding thread, and calling a decoder through the decoding thread to decode the target data packet into an original pixel stream; and rendering the original pixel stream with a drawing control to obtain a rendered video picture, and displaying the rendered video picture. In this way, the video picture of the chairman participant terminal side can be displayed on the mobile terminal.

Description

Video display method, device and readable storage medium
Technical Field
The present invention relates to the field of communications, and in particular, to a video display method and apparatus, and a readable storage medium.
Background
In scenarios supported by the video network, such as emergency command, emergency rescue and disaster relief, and comprehensive governance conferences, multiple video network terminals can be networked together for a conference, and an unmanned aerial vehicle (UAV) serving as a participant terminal is very important in such conferences. A UAV conference is a good solution for real-time tracking of on-site emergencies and for covering dangerous areas that people temporarily cannot reach (such as landslides or barrier lakes): the UAV transmits real-time pictures of the dangerous area back to the chairman participant terminal, so that the leaders and commanders in the command hall can understand the on-site situation and carry out efficient dispatching and command.
In a 4th Generation mobile communication technology (4G) environment, the existing UAV conference scheme is as follows: the mobile terminal is connected to the remote control handle of the UAV through a data line to receive the audio and video streams sent by the remote control handle, the audio and video streams are encapsulated using the Transmission Control Protocol (TCP) and transmitted to the streaming media server, and the streaming media server then sends the encapsulated data packets to the chairman participant terminal.
However, at present, the mobile terminal can send the received audio and video data of the unmanned aerial vehicle to the chairman participant terminal, but cannot receive the video data acquired by the chairman participant terminal, so that the video picture of the chairman participant terminal at the far end cannot be displayed.
Disclosure of Invention
The embodiment of the invention provides a video display method, a video display device and a readable storage medium, to solve the problem that an existing mobile terminal cannot receive the video data collected by a chairman participant terminal and therefore cannot display the video picture of the remote chairman participant terminal.
In a first aspect of the embodiments of the present invention, a video display method is provided, executed on a mobile terminal, and includes:
receiving a media packet sent by a streaming media server through a first receiving thread, wherein the media packet comprises a data packet acquired by a chairman participant terminal and the type of the data packet;
analyzing the media packet to obtain a target data packet with a video type, and putting the target data packet at the tail of a video queue;
acquiring the target data packet from the head of the video queue through a decoding thread, and calling a decoder to decode the target data packet through the decoding thread to obtain an original pixel stream;
and rendering the original pixel stream by adopting a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
In a second aspect of the embodiments of the present invention, there is provided a video display device disposed in a mobile terminal, including:
the receiving module is used for receiving a media packet sent by a streaming media server through a first receiving thread, wherein the media packet comprises a data packet acquired by a chairman participant terminal and the type of the data packet;
the analysis module is used for analyzing the media packet to obtain a target data packet with the video type, and putting the target data packet at the tail of the video queue;
the decoding module is used for acquiring the target data packet from the head of the video queue through a decoding thread and calling a decoder to decode the target data packet through the decoding thread to obtain an original pixel stream;
and the display module is used for rendering the original pixel stream by adopting a drawing control to obtain a rendered video picture and displaying the rendered video picture.
In a third aspect of the embodiments of the present invention, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when being executed by a processor, implements the steps of the video display method described above.
In a fourth aspect of the present invention, there is provided a video display apparatus comprising a processor, a memory and a computer program stored on the memory and operable on the processor, the computer program, when executed by the processor, implementing the steps of the video display method described above.
Compared with the prior art, the invention has the following advantages:
the video display method provided by the embodiment of the invention receives, through a first receiving thread, a media packet sent by a streaming media server, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet; analyzes the media packet to obtain a target data packet whose type is the video type, and places the target data packet at the tail of a video queue; obtains the target data packet from the head of the video queue through a decoding thread, and calls a decoder through the decoding thread to decode the target data packet into an original pixel stream; and renders the original pixel stream with a drawing control to obtain a rendered video picture, and displays the rendered video picture. In this way, the video picture of the chairman participant terminal side can be displayed on the mobile terminal.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a diagram of a system architecture on which a video display method according to an embodiment of the present invention is based;
FIG. 2 is a flowchart illustrating steps of a video display method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of another video display method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a video display apparatus according to an embodiment of the present invention;
FIG. 5 is a networking schematic of a video network of the present invention;
FIG. 6 is a schematic diagram of a hardware architecture of a node server according to the present invention;
fig. 7 is a schematic diagram of a hardware structure of an access switch of the present invention;
fig. 8 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it; the described embodiments are only some, not all, of the embodiments of the invention.
Referring to fig. 1, fig. 1 is a diagram of a system architecture on which a video display method according to an embodiment of the present invention is based. The system comprises a UAV device 101, a remote control handle 102, a mobile terminal 103, a streaming media server 104, a video network terminal 105 and a client 106. The UAV device 101 is connected to the remote control handle 102 by a remote radio link, and the mobile terminal 103 is connected to the remote control handle 102 through a Universal Serial Bus (USB). The client 106 is used to control the conference; the conference-joining signaling sent by the client 106 is forwarded to the streaming media server 104 through the video network, and the streaming media server 104 then sends the conference-joining instruction to the mobile terminal 103, which joins the conference after receiving it. After joining the conference, the mobile terminal 103 can send the video data collected by the UAV device 101 and received through the remote control handle to the streaming media server 104, and the streaming media server sends it to the video network terminal 105 through the video network, so the participants at the video network terminal 105 side can see the pictures collected by the UAV. The video network terminal 105 is the chairman participant terminal; the participant at the video network terminal 105 side is generally a commander, who issues commands according to the pictures collected by the UAV. At present, the mobile terminal 103 can send the received video data of the UAV device 101 to the chairman participant terminal, but cannot receive the video data collected by the chairman participant terminal, so the video picture of the remote chairman participant terminal cannot be displayed.
In order to solve the above technical problem, an embodiment of the present invention provides a video display method. Referring to fig. 2, fig. 2 is a flowchart illustrating steps of a video display method according to an embodiment of the present invention, where the method may be executed in a mobile terminal, and the method of the embodiment includes the following steps:
step 201, receiving a media packet sent by a streaming media server through a first receiving thread, where the media packet includes a data packet collected by a chairman participant terminal and a type of the data packet.
The first receiving thread of the step is used for receiving a media packet sent by the streaming media server, wherein the media packet comprises a data packet acquired by the chairman participant terminal and the type of the data packet. The data packets collected by the chairman participant terminal are video data packets or audio data packets, that is, the data packets in the media packets are video type data packets or audio type data packets.
Step 202, analyzing the media packet to obtain a target data packet of a video type, and placing the target data packet at the end of the video queue.
The mobile terminal can analyze the media packet and determine, according to the analysis result, the type of the data packet collected by the chairman participant terminal in the media packet. The analysis result comprises the data packet collected by the chairman participant terminal and the type of the data packet. If the type of the data packet is determined to be the video type according to the analysis result, the data packet is used as a target data packet and is placed at the tail of the video queue; if the type of the data packet is the audio type, the data packet is not placed in the video queue and is not used as a target data packet. For example, when the first target data packet (target data packet 1) is placed at the tail of the video queue, target data packet 1 is both the first and the last data packet in the video queue. When a second target data packet (target data packet 2) is placed at the tail of the video queue, target data packet 1 is the first data packet in the video queue and target data packet 2 is the second, i.e. the last, data packet in the video queue, so the target data packets in the video queue are arranged in order of arrival.
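As a rough illustration of this step, the sketch below appends video-type packets to the tail of an in-memory queue. It is only a sketch: the one-byte type field at the start of the media packet, the type codes, and every class and method name are assumptions for illustration, not details taken from the patent.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only: the packet layout (a one-byte type field followed by
// the payload) and all names below are assumptions, not taken from the patent.
public class MediaPacketDispatcher {

    static final byte TYPE_VIDEO = 0x01;   // assumed type codes
    static final byte TYPE_AUDIO = 0x02;

    // The video queue; new target data packets are appended at the tail.
    private final Deque<byte[]> videoQueue = new ArrayDeque<>();

    /** Analyze one media packet and enqueue it only if it is a video-type packet. */
    public synchronized void onMediaPacket(byte[] mediaPacket) {
        byte type = mediaPacket[0];
        if (type != TYPE_VIDEO) {
            return;                         // audio-type packets are not queued here
        }
        byte[] targetPacket = new byte[mediaPacket.length - 1];
        System.arraycopy(mediaPacket, 1, targetPacket, 0, targetPacket.length);
        videoQueue.addLast(targetPacket);   // placed at the tail of the video queue
    }
}
```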
And step 203, acquiring a target data packet from the head of the video queue through the decoding thread, and calling a decoder to decode the target data packet through the decoding thread to obtain an original pixel stream.
The target data packet is obtained from the head of the video queue by the decoding thread, which guarantees that the target data packet received earliest is decoded first.
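A decoding thread that always consumes from the head of the queue could look roughly like the sketch below; the Decoder interface and the use of a BlockingQueue are placeholders chosen for illustration, not structures named by the patent.

```java
import java.util.concurrent.BlockingQueue;

// Placeholder decoder abstraction; a real implementation could wrap a hard or
// soft H.264 decoder (a MediaCodec-based sketch appears later in this document).
interface Decoder {
    byte[] decode(byte[] targetPacket);              // returns a raw (e.g. YUV) frame, if any
}

public class DecodingThread extends Thread {
    private final BlockingQueue<byte[]> videoQueue;  // take() removes from the head
    private final Decoder decoder;

    public DecodingThread(BlockingQueue<byte[]> videoQueue, Decoder decoder) {
        this.videoQueue = videoQueue;
        this.decoder = decoder;
    }

    @Override
    public void run() {
        try {
            while (!isInterrupted()) {
                byte[] targetPacket = videoQueue.take();   // oldest packet first (FIFO)
                byte[] rawFrame = decoder.decode(targetPacket);
                // rawFrame is handed to the drawing control in the rendering step
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();            // exit cleanly when interrupted
        }
    }
}
```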
And step 204, rendering the original pixel stream by adopting a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
Because step 203 ensures that the target data packets at the front of the video queue are decoded first into original pixel streams, this step can first render the original pixel stream decoded from the target data packet at the front of the video queue, obtain the rendered video picture and display it; the rendered video picture is the video picture of the chairman participant terminal side. Therefore, the mobile terminal can, to a certain extent, display the video pictures in the time order in which the video data packets were collected by the chairman participant terminal, that is, the video picture of the video data packet collected first by the chairman participant terminal is displayed first on the mobile terminal.
In the video display method provided by this embodiment, a media packet sent by a streaming media server is received through a first receiving thread, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet; the media packet is analyzed to obtain a target data packet whose type is the video type, and the target data packet is placed at the tail of a video queue; the target data packet is obtained from the head of the video queue through a decoding thread, and a decoder is called through the decoding thread to decode the target data packet into an original pixel stream; the original pixel stream is rendered with a drawing control to obtain a rendered video picture, and the rendered video picture is displayed. In this way, the video picture of the chairman participant terminal side can be displayed on the mobile terminal.
Referring to fig. 3, fig. 3 is a flowchart illustrating steps of another video display method according to an embodiment of the present invention. The method comprises the following steps:
step 301, a first receiving thread is created, and the first receiving thread and a second thread created by the mobile terminal for receiving the signaling are asynchronous threads.
The second thread is used for receiving a signaling sent by the streaming media server, the signaling is, for example, a conference entering signaling, and the mobile terminal can join the video networking conference after receiving the conference entering signaling. The first thread and the second thread are asynchronous threads, and the first thread is used for receiving a media packet sent by the streaming media server.
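A minimal sketch of how the two asynchronous receiving threads could be set up is shown below. The use of UDP sockets and the port numbers are assumptions made for illustration only; the patent does not specify the transport protocol.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

// Minimal sketch of step 301: two independent receiving threads. Ports and the
// choice of UDP are illustrative assumptions, not taken from the patent.
public class ReceiverThreads {

    public static void main(String[] args) {
        // Second thread: receives signaling (e.g. the conference-joining instruction).
        Thread signalingThread = new Thread(() -> receiveLoop(9000, "signaling"));
        // First thread: receives media packets from the streaming media server.
        Thread mediaThread = new Thread(() -> receiveLoop(9001, "media"));

        // The two threads run asynchronously and independently of each other.
        signalingThread.start();
        mediaThread.start();
    }

    private static void receiveLoop(int port, String tag) {
        try (DatagramSocket socket = new DatagramSocket(port)) {
            byte[] buf = new byte[64 * 1024];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);          // blocks until a packet arrives
                // dispatch packet.getData() to the signaling or media handler
                System.out.println(tag + ": received " + packet.getLength() + " bytes");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```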
Step 302, receiving a media packet sent by the streaming media server through a first receiving thread, where the media packet includes a data packet collected by the chairman participant terminal and a type of the data packet.
And 303, analyzing the media packet to obtain a target data packet with the video type, and placing the target data packet at the tail of the video queue.
Step 304, detecting the length of the target data packet in the video queue.
Step 305, judging whether the length of the target data packet in the video queue is larger than a preset length.
Step 306, if the length of the target data packets in the video queue is greater than the preset length, discarding the N target data packets in the video queue.
And N is equal to the difference value between the number of the target data packets in the video queue and the preset length, and the N target data packets comprise N data packets which are sequentially selected from the head of the video queue.
For steps 304 to 306, the mobile terminal may detect the length of the target data packets in the video queue in real time and judge whether it is greater than a preset length; if it is, N target data packets in the video queue are discarded. For example, if the preset length is 30 and the currently detected length of the target data packets in the video queue is 70 (there are 70 target data packets in the video queue), the length is greater than the preset length, so 40 data packets are selected in order from the head of the video queue and discarded.
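The trimming logic of steps 304 to 306 can be sketched as follows, using the preset length of 30 and the queue length of 70 from the example above; the queue type and all names are illustrative assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of steps 304-306. PRESET_LENGTH = 30 matches the example above; the
// queue type and method names are illustrative assumptions.
public class VideoQueueTrimmer {

    static final int PRESET_LENGTH = 30;

    /** Discard N packets from the head when the queue grows beyond the preset length. */
    public static void trim(Deque<byte[]> videoQueue) {
        int length = videoQueue.size();                  // current queue length
        if (length > PRESET_LENGTH) {
            int n = length - PRESET_LENGTH;              // N = length - preset length
            for (int i = 0; i < n; i++) {
                videoQueue.pollFirst();                  // drop the oldest packets first
            }
        }
    }

    public static void main(String[] args) {
        Deque<byte[]> queue = new ArrayDeque<>();
        for (int i = 0; i < 70; i++) {
            queue.addLast(new byte[] { (byte) i });      // 70 packets, as in the example
        }
        trim(queue);
        System.out.println(queue.size());                // prints 30
    }
}
```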
It should be noted that when the processor occupancy or memory occupancy of the mobile terminal is high, or when video playback stutters for network reasons, a large number of target data packets may accumulate in the video queue. Meanwhile, because the number of audio data packets from the chairman participant terminal is small, while the number of video data packets is generally about one hundred times that of the audio data packets, playback of the audio data remains smooth. If a certain number of target data packets were not discarded, the target data packets would only continue to be taken from the head of the video queue and decoded once the mobile terminal again has sufficient processor, memory and other resources, and by then the rendered video picture would no longer be synchronized with the audio data. Therefore, in this embodiment, steps 304 to 306 prevent the rendered video picture from losing synchronization with the audio data.
And 307, acquiring a target data packet from the head of the video queue through the decoding thread, and calling a decoder to decode the target data packet through the decoding thread to obtain an original pixel stream.
It should be noted that the target data packet is, for example, a video data packet in H.264 format, and the decoded original pixel stream is an original pixel stream in YUV format, where "Y" in YUV represents luminance (Luminance or Luma), that is, the gray-scale value, and "U" and "V" represent chrominance (Chrominance or Chroma).
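On an Android mobile terminal, one common way to realize a hard decoder for such H.264 packets is the MediaCodec API. The sketch below is only an assumption about how such a decoder could be driven; the patent does not name a specific API. Note that when the codec is configured with a Surface, the decoded YUV frames stay inside the codec and are rendered straight to that Surface; configuring it with a null Surface instead would expose the YUV data as output buffers.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.nio.ByteBuffer;

// Assumed hard-decoder wrapper based on Android MediaCodec; resolution, timeouts
// and the decision to render directly to a Surface are illustrative choices.
public class H264HardDecoder {
    private final MediaCodec codec;

    public H264HardDecoder(Surface outputSurface, int width, int height) throws Exception {
        codec = MediaCodec.createDecoderByType("video/avc");            // H.264
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        codec.configure(format, outputSurface, null, 0);                // decode to the Surface
        codec.start();
    }

    /** Feed one target data packet (assumed to be an H.264 access unit) to the decoder. */
    public void decode(byte[] targetPacket, long ptsUs) {
        int inIndex = codec.dequeueInputBuffer(10_000);
        if (inIndex >= 0) {
            ByteBuffer in = codec.getInputBuffer(inIndex);
            in.clear();
            in.put(targetPacket);
            codec.queueInputBuffer(inIndex, 0, targetPacket.length, ptsUs, 0);
        }
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = codec.dequeueOutputBuffer(info, 10_000);
        if (outIndex >= 0) {
            // true = render the decoded (YUV) frame to the configured Surface
            codec.releaseOutputBuffer(outIndex, true);
        }
    }

    public void release() {
        codec.stop();
        codec.release();
    }
}
```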
And 308, rendering the original pixel stream by adopting a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
Wherein the drawing control comprises a SurfaceView control. The SurfaceView control can render the original pixel stream directly.
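One typical way to hand the drawing control to the decoder is to wait for the SurfaceView's Surface through a SurfaceHolder callback, as in the hypothetical sketch below; the class names and the 1280x720 resolution are assumptions, and H264HardDecoder refers to the illustrative wrapper sketched earlier.

```java
import android.content.Context;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

// Sketch only: wires a SurfaceView (the drawing control) to the decoder sketched
// above once the underlying Surface is ready. Error handling is kept minimal.
public class VideoSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    private H264HardDecoder decoder; // assumed decoder wrapper from the earlier sketch

    public VideoSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            // 1280x720 is an arbitrary example resolution
            decoder = new H264HardDecoder(holder.getSurface(), 1280, 720);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        if (decoder != null) {
            decoder.release();
        }
    }
}
```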
In this embodiment, the second thread is used to receive the conference-joining instruction sent by the streaming media server, and the mobile terminal joins the video network conference after receiving the conference-joining instruction. After joining the conference, the mobile terminal can send the video data, which is collected by the UAV device and received through the remote control handle, to the streaming media server; the streaming media server sends it to the chairman participant terminal through the video network, so the participants at the chairman participant terminal side can see the video pictures collected by the UAV. In addition, the media packets sent by the streaming media server can be received through the first thread, and the video picture of the chairman participant terminal can then be displayed. That is, the mobile terminal can both transmit the video data collected by the UAV device to the chairman participant terminal and display the video picture of the chairman participant terminal.
In the video display method provided by this embodiment, the length of the target data packets in the video queue is detected and compared with a preset length; if it is greater than the preset length, N target data packets in the video queue are discarded. A target data packet is then obtained from the head of the video queue by the decoding thread, a decoder is called by the decoding thread to decode it into an original pixel stream, the original pixel stream is rendered by the drawing control to obtain a rendered video picture, and the rendered video picture is displayed. In this way, the video picture of the chairman participant terminal side can be displayed on the mobile terminal, and to a certain extent the displayed video picture of the chairman participant terminal can be kept synchronized with the audio data. As a result, the operator of the UAV device at the mobile terminal side can also watch the video picture of the chairman participant terminal, which makes it convenient for the commander at the chairman participant terminal side to direct the operator in controlling the UAV device.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a video display apparatus according to an embodiment of the present invention, where the apparatus 400 is disposed in a mobile terminal, and includes:
a receiving module 410, configured to receive, by a first receiving thread, a media packet sent by a streaming media server, where the media packet includes a data packet acquired by a chairman participant terminal and a type of the data packet;
the parsing module 420 is configured to parse the media packet to obtain a target data packet of a video type, and place the target data packet at the end of a video queue;
a decoding module 430, configured to obtain the target data packet from the head of the video queue through a decoding thread, and call a decoder to decode the target data packet through the decoding thread to obtain an original pixel stream;
the display module 440 is configured to render the original pixel stream by using a drawing control, obtain a rendered video frame, and display the rendered video frame.
With the video display apparatus provided by this embodiment, a media packet sent by a streaming media server is received through a first receiving thread, wherein the media packet comprises a data packet collected by a chairman participant terminal and the type of the data packet; the media packet is analyzed to obtain a target data packet whose type is the video type, and the target data packet is placed at the tail of a video queue; the target data packet is obtained from the head of the video queue through a decoding thread, and a decoder is called through the decoding thread to decode the target data packet into an original pixel stream; the original pixel stream is rendered with a drawing control to obtain a rendered video picture, and the rendered video picture is displayed. In this way, the video picture of the chairman participant terminal side can be displayed on the mobile terminal.
Optionally, the method further includes:
the detection module is used for detecting the length of a target data packet in the video queue;
the judging module is used for judging whether the length of the target data packet in the video queue is greater than a preset length or not;
and the discarding module is used for discarding N target data packets in the video queue if the length of the target data packets in the video queue is greater than the preset length, wherein N is equal to the difference value between the number of the target data packets in the video queue and the preset length, and the N target data packets comprise N data packets sequentially selected from the head of the video queue.
Optionally, before the receiving, by the first receiving thread, the media packet sent by the streaming media server, the method further includes:
and the creating module is used for creating the first receiving thread, and the first receiving thread and a second receiving thread created by the mobile terminal and used for receiving the signaling are asynchronous threads.
Optionally, the decoder is a hard decoder or a soft decoder.
Optionally, the drawing control includes a surface view control, and the original pixel stream is a pixel stream in a YUV format.
In addition, an embodiment of the present invention further provides a video display apparatus, where the video display apparatus includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, and when the computer program is executed by the processor, the computer program implements each process of the video display method embodiment of the foregoing embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the video display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiment of the invention also provides a computer program, which can be stored on a cloud or local storage medium. When executed by a computer or a processor, the computer program is used to perform the corresponding steps of the video display method of the embodiment of the present invention and to implement the corresponding modules in the video display apparatus of the embodiment of the present invention.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As is readily imaginable to the person skilled in the art: any combination of the above embodiments is possible, and thus any combination between the above embodiments is an embodiment of the present invention, but the present disclosure is not necessarily detailed herein for reasons of space.
The video display methods provided herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The structure required to construct a system incorporating aspects of the present invention will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the video display method according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
To better understand the embodiments of the present invention, the video network is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
The network technology innovation of the video network improves on traditional Ethernet to cope with the potentially enormous video traffic on the network. Unlike pure network Packet Switching or network Circuit Switching, the video network technology adopts packet switching to satisfy streaming requirements. The video network technology has the flexibility, simplicity and low cost of packet switching, while also offering the quality and security guarantees of circuit switching, realizing the seamless connection of whole-network switched virtual circuits and of the data format.
Switching Technology (Switching Technology)
The video network adopts two advantages of asynchronism and packet switching of the Ethernet, eliminates the defects of the Ethernet on the premise of full compatibility, has end-to-end seamless connection of the whole network, is directly communicated with a user terminal, and directly bears an IP data packet. The user data does not require any format conversion across the entire network. The video networking is a higher-level form of the Ethernet, is a real-time exchange platform, can realize the real-time transmission of the whole-network large-scale high-definition video which cannot be realized by the existing Internet, and pushes a plurality of network video applications to high-definition and unification.
Server Technology (Server Technology)
The server technology on the video networking and unified video platform is different from the traditional server, the streaming media transmission of the video networking and unified video platform is established on the basis of connection orientation, the data processing capacity of the video networking and unified video platform is independent of flow and communication time, and a single network layer can contain signaling and data transmission. For voice and video services, the complexity of video networking and unified video platform streaming media processing is much simpler than that of data processing, and the efficiency is greatly improved by more than one hundred times compared with that of a traditional server.
Storage Technology (Storage Technology)
The super-high speed storage technology of the unified video platform adopts the most advanced real-time operating system in order to adapt to the media content with super-large capacity and super-large flow, the program information in the server instruction is mapped to the specific hard disk space, the media content is not passed through the server any more, and is directly sent to the user terminal instantly, and the general waiting time of the user is less than 0.2 second. The optimized sector distribution greatly reduces the mechanical motion of the magnetic head track seeking of the hard disk, the resource consumption only accounts for 20% of that of the IP internet of the same grade, but concurrent flow which is 3 times larger than that of the traditional hard disk array is generated, and the comprehensive efficiency is improved by more than 10 times.
Network Security Technology (Network Security Technology)
Through measures such as independent permission control for each service and complete isolation of devices and user data, the structural design of the video network structurally eliminates the network security problems that trouble the Internet. It generally needs no antivirus programs or firewalls, is free from attacks by hackers and viruses, and provides users with a structurally worry-free secure network.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates service and transmission: whether for a single user, a private-network user or an aggregate of networks, everything is just one automatic connection. User terminals, set-top boxes or PCs connect directly to the unified video platform to obtain a variety of multimedia video services in various forms. The unified video platform adopts a menu-style configuration table instead of traditional complex application programming, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 5, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server is a node server of the access network part, namely the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 6, the system mainly includes a network interface module 601, a switching engine module 602, a CPU module 603, and a disk array module 604;
the network interface module 601, the CPU module 603, and the disk array module 604 all enter the switching engine module 602; the switching engine module 602 performs an operation of looking up the address table 605 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of a corresponding packet buffer 606 according to the packet's steering information; if the queue of the packet buffer 606 is nearly full, it is discarded; the switching engine module 602 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 604 mainly implements control over the hard disk, including initialization, read-write, and other operations of the hard disk; the CPU module 603 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 605 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 604.
The access switch:
as shown in fig. 7, the network interface module mainly includes a network interface module (a downlink network interface module 701, an uplink network interface module 702), a switching engine module 703 and a CPU module 704;
wherein, the packet (uplink data) coming from the downlink network interface module 701 enters the packet detection module 705; the packet detection module 705 detects whether the Destination Address (DA), Source Address (SA), packet type and packet length of the packet meet the requirements: if so, it allocates a corresponding stream identifier (stream-id) and the packet enters the switching engine module 703; otherwise, the packet is discarded. The packet (downlink data) coming from the uplink network interface module 702 enters the switching engine module 703, and the data packet coming from the CPU module 704 also enters the switching engine module 703. The switching engine module 703 looks up the address table 706 for each incoming packet to obtain the packet's steering information. If a packet entering the switching engine module 703 goes from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 707 according to its stream-id; if that queue is nearly full, the packet is discarded. If a packet entering the switching engine module 703 does not go from the downlink network interface to the uplink network interface, it is stored in the queue of the corresponding packet buffer 707 according to its steering information; if that queue is nearly full, the packet is discarded.
The switching engine module 703 polls all packet buffer queues, which in this embodiment of the present invention is divided into two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions must be met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the rate control module has been obtained;
if the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 708 is configured by the CPU module 704 to generate tokens for packet buffer queues going to the upstream network interface for all downstream network interfaces at programmable intervals to control the rate of upstream forwarding.
The CPU module 704 is mainly responsible for protocol processing with the node server, configuration of the address table 706, and configuration of the code rate control module 708.
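The polling rules above can be summarized, purely as a software analogy of what the hardware switching engine and rate control module do, by the following sketch; all names and structures are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Software analogy of the access-switch polling rules above; the real switching
// engine and rate control module are hardware, and all names here are illustrative.
public class ForwardingPoller {

    static class PacketQueue {
        final Queue<byte[]> packets = new ArrayDeque<>();
        final boolean towardUplink;   // queue going from the downlink to the uplink interface
        int tokens;                   // tokens granted by the rate control module

        PacketQueue(boolean towardUplink) { this.towardUplink = towardUplink; }
    }

    /** Poll one queue and decide whether its head packet may be forwarded. */
    static byte[] poll(PacketQueue q, boolean portSendBufferFull) {
        if (portSendBufferFull) return null;                 // 1) send buffer must not be full
        if (q.packets.isEmpty()) return null;                // 2) packet counter must be > 0
        if (q.towardUplink) {
            if (q.tokens <= 0) return null;                  // 3) must hold a rate-control token
            q.tokens--;                                      // consume one token per packet
        }
        return q.packets.poll();                             // forward the head packet
    }

    /** Called at programmable intervals, mimicking token generation by the rate control module. */
    static void grantTokens(PacketQueue q, int count) {
        if (q.towardUplink) q.tokens += count;
    }
}
```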
Ethernet protocol gateway:
as shown in fig. 8, the system mainly includes a network interface module (a downlink network interface module 801, an uplink network interface module 802), a switching engine module 803, a CPU module 804, a packet detection module 805, a video display module 808, an address table 806, a packet buffer 807, a MAC adding module 809, and a MAC deleting module 810.
Wherein, the data packet coming from the downlink network interface module 801 enters the packet detection module 805; the packet detection module 805 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address DA, video network source address SA, video network packet type and packet length of the packet meet the requirements, and if so, allocates a corresponding stream identifier (stream-id); then the MAC deleting module 810 strips the MAC DA, MAC SA and length or frame type (2 bytes) and the packet enters the corresponding receiving buffer; otherwise, the packet is discarded;
the downlink network interface module 801 detects the sending buffer of the port, and if there is a packet, acquires the ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MACSA of the ethernet coordination gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 The devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA SA Reserved Payload CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (such as various protocol packets, multicast data packets, unicast data packets, etc.), there are 256 possibilities at most, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses;
the Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has different lengths according to the type of datagram: it is 64 bytes if the datagram is one of the various protocol packets, and 32 + 1024 = 1056 bytes if it is a unicast data packet; of course, the length is not limited to the above 2 cases;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph and there may be 2, or even more than 2, connections between two devices, i.e., there may be more than 2 connections between a node switch and a node server, a node switch and a node switch, and a node switch and a node server. However, the metro network address of the metro network device is unique, and in order to accurately describe the connection relationship between the metro network devices, parameters are introduced in the embodiment of the present invention: a label to uniquely describe a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label. Assuming that there are two connections between device A and device B, there are 2 labels for packets from device A to device B, and 2 labels for packets from device B to device A. Labels are divided into incoming labels and outgoing labels: assuming that the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA SA Reserved Label Payload CRC
Namely Destination Address (DA), Source Address (SA), Reserved bytes, label, payload (PDU) and CRC. The format of the label may be defined with reference to the following: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; its position is between the reserved bytes and the payload of the packet.
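Following the same layout, the label of a metropolitan-area-network packet could be read and rewritten roughly as sketched below; the byte order and offsets are assumptions derived from the field sizes given above.

```java
// Sketch of the metropolitan-area-network packet layout described above: the
// 32-bit label sits between the 2 reserved bytes and the payload, with only its
// lower 16 bits used. Offsets follow the access-network layout (8-byte DA,
// 8-byte SA, 2 reserved bytes); names and big-endian byte order are assumptions.
public class MetroLabel {

    static final int LABEL_OFFSET = 8 + 8 + 2;   // DA + SA + reserved bytes

    /** Read the 16 usable label bits of a metro-network packet. */
    public static int readLabel(byte[] packet) {
        // the upper 16 bits of the 32-bit label field are reserved
        int hi = packet[LABEL_OFFSET + 2] & 0xFF;    // high byte of the usable 16 bits
        int lo = packet[LABEL_OFFSET + 3] & 0xFF;    // low byte of the usable 16 bits
        return (hi << 8) | lo;
    }

    /** Rewrite the label, e.g. swap an incoming label 0x0000 for an outgoing label 0x0001. */
    public static void writeLabel(byte[] packet, int outLabel) {
        packet[LABEL_OFFSET]     = 0;                // reserved upper 16 bits
        packet[LABEL_OFFSET + 1] = 0;
        packet[LABEL_OFFSET + 2] = (byte) ((outLabel >> 8) & 0xFF);
        packet[LABEL_OFFSET + 3] = (byte) (outLabel & 0xFF);
    }
}
```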

Claims (10)

1. A video display method, implemented in a mobile terminal, includes:
receiving a media packet sent by a streaming media server through a first receiving thread, wherein the media packet comprises a data packet acquired by a chairman participant terminal and the type of the data packet;
analyzing the media packet to obtain a target data packet with a video type, and putting the target data packet at the tail of a video queue;
acquiring the target data packet from the head of the video queue through a decoding thread, and calling a decoder to decode the target data packet through the decoding thread to obtain an original pixel stream;
and rendering the original pixel stream by adopting a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
2. The method of claim 1, further comprising:
detecting the length of a target data packet in the video queue;
judging whether the length of a target data packet in the video queue is greater than a preset length or not;
if the length of the target data packets in the video queue is greater than the preset length, discarding N target data packets in the video queue, wherein N is equal to the difference value between the number of the target data packets in the video queue and the preset length, and the N target data packets comprise N data packets sequentially selected from the head of the video queue.
3. The method according to claim 1 or 2, wherein before the receiving the media packet sent by the streaming media server through the first receiving thread, the method further comprises:
and creating the first receiving thread, wherein the first receiving thread and a second receiving thread created by the mobile terminal and used for receiving the signaling are asynchronous threads.
4. The method of claim 1 or 2, wherein the decoder is a hard decoder or a soft decoder.
5. The method of claim 1 or 2, wherein the drawing control comprises a surface view control, and wherein the raw pixel stream is a YUV format pixel stream.
6. A video display device, provided in a mobile terminal, comprising:
the receiving module is used for receiving a media packet sent by a streaming media server through a first receiving thread, wherein the media packet comprises a data packet acquired by a chairman participant terminal and the type of the data packet;
the parsing module is used for parsing the media packet to obtain a target data packet of the video type, and putting the target data packet at the tail of the video queue;
the decoding module is used for acquiring the target data packet from the head of the video queue through a decoding thread and calling a decoder to decode the target data packet through the decoding thread to obtain an original pixel stream;
and the display module is used for rendering the original pixel stream by using a drawing control to obtain a rendered video picture, and displaying the rendered video picture.
7. The apparatus of claim 6, further comprising:
the detection module is used for detecting the length of the video queue, namely the number of target data packets currently in the video queue;
the judging module is used for judging whether the length of the video queue is greater than a preset length;
and the discarding module is used for discarding N target data packets from the video queue if the length of the video queue is greater than the preset length, wherein N is equal to the difference between the number of target data packets in the video queue and the preset length, and the N target data packets are N data packets selected sequentially from the head of the video queue.
8. The apparatus according to claim 6 or 7, further comprising:
the creating module, used for creating the first receiving thread before the media packet sent by the streaming media server is received through the first receiving thread, wherein the first receiving thread and a second receiving thread, which is created by the mobile terminal and used for receiving signaling, are asynchronous threads.
9. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the video display method of any one of claims 1 to 5.
10. A video display apparatus comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the video display method according to any one of claims 1 to 5.
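The following is a minimal, non-authoritative sketch of the pipeline described in claims 1 and 3: a receiving thread parses incoming media packets and appends video-type target data packets to the tail of a video queue, while an asynchronous decoding thread takes packets from the head of the queue, decodes them into raw pixel data and hands the result to a drawing control for display. The MediaPacket, Decoder and Renderer types, the TYPE_VIDEO constant and the method names are placeholders assumed for this sketch; in practice the decoder could be a hard or soft decoder (claim 4) and the renderer a SurfaceView-based drawing control producing a picture from YUV data (claim 5).

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Sketch of the receive -> queue -> decode -> render pipeline of claims 1 and 3.
 * All type and member names are hypothetical placeholders.
 */
public class VideoPipelineSketch {

    public interface Decoder { byte[] decode(byte[] videoPacket); } // hard or soft decoder
    public interface Renderer { void draw(byte[] rawPixels); }      // e.g. SurfaceView wrapper

    /** Media packet carrying a data packet and its type, as parsed by the receiving thread. */
    public static final class MediaPacket {
        final int type;
        final byte[] payload;
        public MediaPacket(int type, byte[] payload) { this.type = type; this.payload = payload; }
    }

    private static final int TYPE_VIDEO = 1;   // assumed type code for video packets
    private final BlockingQueue<byte[]> videoQueue = new LinkedBlockingQueue<>();

    /** Called on the first receiving thread: keep only video-type packets, enqueue at the tail. */
    public void onMediaPacket(MediaPacket packet) {
        if (packet.type == TYPE_VIDEO) {
            videoQueue.offer(packet.payload);
        }
    }

    /** Starts the asynchronous decoding thread: take from the head, decode, then render. */
    public Thread startDecodingThread(Decoder decoder, Renderer renderer) {
        Thread t = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    byte[] target = videoQueue.take();         // head of the video queue
                    byte[] rawPixels = decoder.decode(target); // e.g. YUV pixel data
                    renderer.draw(rawPixels);                  // drawing control displays the picture
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "video-decode");
        t.start();
        return t;
    }
}
```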
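Similarly, the queue-control rule of claims 2 and 7 can be sketched as follows: when the number of target data packets in the video queue exceeds a preset length, N packets are discarded from the head of the queue, where N is the difference between the current queue length and the preset length. The preset length of 25 used here, and the class and method names, are assumptions made purely for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Sketch of the queue-trimming rule of claims 2 and 7: discard N packets from
 * the head when the queue grows beyond a preset length, with
 * N = (current queue length) - (preset length).
 */
public class VideoQueueTrimmer {
    private static final int PRESET_LENGTH = 25;   // assumed threshold

    /** Drops the oldest packets (from the head) until the queue is back to the preset length. */
    public static <T> int trim(Deque<T> videoQueue) {
        int overflow = videoQueue.size() - PRESET_LENGTH;
        int dropped = 0;
        while (dropped < overflow) {
            videoQueue.pollFirst();   // discard from the head of the queue
            dropped++;
        }
        return dropped;
    }

    public static void main(String[] args) {
        Deque<byte[]> queue = new ArrayDeque<>();
        for (int i = 0; i < 30; i++) queue.add(new byte[]{(byte) i});
        // With 30 packets and a preset length of 25, 5 packets are dropped from the head.
        System.out.println("dropped " + trim(queue) + ", remaining " + queue.size());
    }
}
```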
CN202010188270.5A 2020-03-17 2020-03-17 Video display method, device and readable storage medium Active CN111510759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010188270.5A CN111510759B (en) 2020-03-17 2020-03-17 Video display method, device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010188270.5A CN111510759B (en) 2020-03-17 2020-03-17 Video display method, device and readable storage medium

Publications (2)

Publication Number Publication Date
CN111510759A 2020-08-07
CN111510759B (en) 2023-10-13

Family

ID=71872471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010188270.5A Active CN111510759B (en) 2020-03-17 2020-03-17 Video display method, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN111510759B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101674486A (en) * 2009-09-29 2010-03-17 深圳市融创天下科技发展有限公司 Streaming media audio and video synchronization method and system
CN103108186A (en) * 2013-02-21 2013-05-15 中国对外翻译出版有限公司 Method of achieving high-definition transmission of videos
CN103237191A (en) * 2013-04-16 2013-08-07 成都飞视美视频技术有限公司 Method for synchronously pushing audios and videos in video conference
CN105430322A (en) * 2016-01-22 2016-03-23 深圳市华玺科技有限公司 Method and system for accessing video conference by unmanned aerial vehicle
WO2017211112A1 (en) * 2016-06-06 2017-12-14 中兴通讯股份有限公司 Video live streaming method, user device, and unmanned aerial vehicle controller
CN106230839A (en) * 2016-08-03 2016-12-14 青岛海信宽带多媒体技术有限公司 The acceptance control method of Real Time Streaming and device
CN205901964U (en) * 2016-08-09 2017-01-18 陕西中联电科电子有限公司 Unmanned aerial vehicle image teletransmission interactive terminal
US20180063548A1 (en) * 2016-08-30 2018-03-01 Qualcomm Incorporated Intra-coded video frame caching for video telephony sessions
CN108270732A (en) * 2016-12-30 2018-07-10 北京视联动力国际信息技术有限公司 A kind of Streaming Media processing method and system
CN110519627A (en) * 2018-05-21 2019-11-29 视联动力信息技术股份有限公司 A kind of synchronous method and device of audio data
CN110771160A (en) * 2018-07-23 2020-02-07 深圳市大疆创新科技有限公司 Video uniform display method, terminal device and machine-readable storage medium
CN110832875A (en) * 2018-07-23 2020-02-21 深圳市大疆创新科技有限公司 Video processing method, terminal device and machine-readable storage medium
CN109361945A (en) * 2018-10-18 2019-02-19 广州市保伦电子有限公司 The meeting audiovisual system and its control method of a kind of quick transmission and synchronization
CN109756789A (en) * 2018-12-28 2019-05-14 视联动力信息技术股份有限公司 A kind of loss treating method and system of audio, video data packet
CN209676370U (en) * 2019-01-21 2019-11-22 华平信息技术股份有限公司 Portable satellite video terminal and video-signal system
CN110418186A (en) * 2019-02-01 2019-11-05 腾讯科技(深圳)有限公司 Audio and video playing method, apparatus, computer equipment and storage medium
CN110062192A (en) * 2019-04-18 2019-07-26 视联动力信息技术股份有限公司 Data processing method, device and storage medium in video conference
CN110324644A (en) * 2019-07-05 2019-10-11 视联动力信息技术股份有限公司 UAV Video live broadcasting method, system, electronic equipment and readable storage medium storing program for executing
CN110324580A (en) * 2019-07-05 2019-10-11 视联动力信息技术股份有限公司 A kind of monitor video playback method and device based on view networking

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113727175A (en) * 2021-08-11 2021-11-30 西安诺瓦星云科技股份有限公司 Multi-channel video source playback method and system and display control equipment
CN113870799A (en) * 2021-09-09 2021-12-31 瑞芯微电子股份有限公司 System display method and storage device of electronic ink screen device
CN114205662A (en) * 2021-12-13 2022-03-18 北京蔚领时代科技有限公司 Low-delay video rendering method and device for iOS (internet operating system) end
CN114205662B (en) * 2021-12-13 2024-02-20 北京蔚领时代科技有限公司 Low-delay video rendering method and device of iOS (integrated operation system) terminal
CN114465990A (en) * 2022-02-16 2022-05-10 锐捷网络股份有限公司 Communication method, system, working terminal and network camera
CN116112739A (en) * 2022-12-29 2023-05-12 广东中兴新支点技术有限公司 Picture splitting screen protection method and device based on active frame loss and storage medium

Also Published As

Publication number Publication date
CN111510759B (en) 2023-10-13

Similar Documents

Publication Publication Date Title
CN111510759B (en) Video display method, device and readable storage medium
CN108737768B (en) Monitoring method and monitoring device based on monitoring system
CN109495713B (en) Video conference control method and device based on video networking
CN110049271B (en) Video networking conference information display method and device
CN110460804B (en) Conference data transmitting method, system, device and computer readable storage medium
CN109889779B (en) Message out-of-order processing method and device
CN111464817A (en) Code rate control method and device and readable storage medium
CN108881948B (en) Method and system for video inspection network polling monitoring video
CN110049273B (en) Video networking-based conference recording method and transfer server
CN110191304B (en) Data processing method, device and storage medium
CN111131754A (en) Control split screen method and device of conference management system
CN111447393A (en) Conference maintenance method and device and readable storage medium
CN110049268B (en) Video telephone connection method and device
CN110830750A (en) Data transmission method and device based on video network
CN111210462A (en) Alarm method and device
CN111327868A (en) Method, terminal, server, device and medium for setting conference speaking party role
CN110769179B (en) Audio and video data stream processing method and system
CN111131743A (en) Video call method and device based on browser, electronic equipment and storage medium
CN108965783B (en) Video data processing method and video network recording and playing terminal
CN110719432A (en) Data transmission method and device, electronic equipment and storage medium
CN110769297A (en) Audio and video data processing method and system
CN110677617A (en) Data processing method and device
CN109922300B (en) Method and server for inviting video network MCU conference to Internet MCU conference
CN109889516B (en) Method and device for establishing session channel
CN111447396A (en) Audio and video transmission method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant