CN113766567A - Communication method and device - Google Patents

Communication method and device

Info

Publication number
CN113766567A
Authority
CN
China
Prior art keywords
data packet
data
communication device
video frame
packet
Legal status
Pending
Application number
CN202010504771.XA
Other languages
Chinese (zh)
Inventor
黄曲芳
曾清海
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN202010504771.XA
Priority to PCT/CN2021/092358 (published as WO2021244218A1)
Publication of CN113766567A

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 - Network traffic management; Network resource management
    • H04W 28/02 - Traffic management, e.g. flow control or congestion control
    • H04W 28/06 - Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H04W 28/065 - Optimizing the usage of the radio link using assembly or disassembly of packets

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A communication method and apparatus for reducing the probability that a video frame fails to be decoded. The method includes: a communication device obtains an extended delay budget; the communication device receives data packets of a first service; and the communication device processes the received data packets according to the extended delay budget. For example, the communication device may be an access network device. Based on the extended delay budget of an I frame, the access network device can allocate a reasonable transmission occasion and, at that occasion, send the data packets of the I frame received from the core network to the terminal device. This reduces the probability that the I-frame data packets exceed the extended delay budget limit when they reach the access stratum of the terminal device, and thereby reduces the probability of I-frame decoding failure.

Description

Communication method and device
Technical Field
Embodiments of this application relate to the field of communication technologies, and in particular to a communication method and device.
Background
In the field of video processing, different video frames compress to different sizes because different compression modes are used. Among compressed video frames, the reference frame, also called the I frame, is the largest; the compressed P frames and B frames are smaller. Because the I frame is so large, it is divided into multiple fragments by the transmission control protocol/internet protocol (TCP/IP) layer or the Ethernet layer before being handed to the wireless communication network for transmission. For example, a typical I frame is divided into 64 IP packets. The application layer of the receiver has a time limit for decoding each video frame; for example, the time taken to decode a video frame cannot exceed the extended delay budget. How the receiver should process multiple data packets of the same video frame is therefore the technical problem currently to be solved.
Disclosure of Invention
Embodiments of this application provide a communication method and a communication device, to address the technical problem of how a receiver processes multiple data packets of the same video frame.
In a first aspect, a communication method is provided, and the method includes: the communication device obtains an extended delay budget; the communication device receives a data packet of a first service; the communication device processes the data packet according to the extended delay budget.
Optionally, the extended delay budget may also be referred to as a spread delay budget. The extended delay budget may be pre-configured or specified by a protocol, among other options, without limitation. If pre-configured, the communication device may receive first information that directly indicates the size of the extended delay budget, or indirectly indicates it, for example by indicating a service type or a decoding type. The service type and the decoding type may each have a correspondence with an extended delay budget, and the communication device may determine the extended delay budget based on the first information.
In a possible design, the communication device may be an access network device, and the scheme of the first aspect may be applied to downlink video transmission. In that case, processing the data packets according to the extended delay budget may include: the access network device determines, according to the extended delay budget and the data amount of the N data packets, the occasion at which to transmit the N data packets to the terminal device, and sends the N data packets at that occasion. This keeps the delay with which the terminal device receives or processes the N data packets from exceeding the extended delay budget and improves the decoding success rate of the terminal device. Alternatively, the scheme of the first aspect may be applied to uplink video transmission: the access network device determines the transmission occasion of the N data packets according to the extended delay budget and the data amount of the N data packets, and sends scheduling information to the terminal device to schedule it to transmit the N data packets to the network device at that occasion, thereby reducing the probability that the delay of receiving or processing the N data packets exceeds the extended delay budget and improving the decoding success rate of the video server.
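By way of illustration only, the following is a minimal sketch of how such a transmission occasion could be derived from the extended delay budget and the total data amount of the N data packets. The function name, the constant-rate air-interface assumption, and all parameter values are illustrative and not part of this application.

```python
# Illustrative sketch only: choose the latest downlink start time such that
# all N packets of a video frame still fit within the extended delay budget.
# The constant air-interface rate and all names/values are assumed.

def latest_start_time_ms(t_first_arrival_ms: float,
                         extended_delay_budget_ms: float,
                         total_bytes: int,
                         air_rate_bytes_per_ms: float) -> float:
    """Latest time the access network device can begin transmitting and
    still deliver the whole frame before the extended delay budget expires."""
    tx_duration_ms = total_bytes / air_rate_bytes_per_ms
    return t_first_arrival_ms + extended_delay_budget_ms - tx_duration_ms

# Example: an I frame of 64 packets of ~1500 bytes each, a 20 ms budget,
# and an assumed air-interface rate of 12,000 bytes/ms.
deadline = latest_start_time_ms(0.0, 20.0, 64 * 1500, 12_000)
print(f"start transmitting no later than t = {deadline:.1f} ms")  # -> 12.0 ms
```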
In another possible design, the communication device may be a terminal device. The access stratum of the terminal device can deliver the N data packets to the upper layer according to the extended delay budget, which reduces the probability that the delay with which the upper-layer decoder processes the N data packets exceeds the extended delay budget, and improves the probability of successful decoding.
In one possible design, the communication device may receive N data packets belonging to the same video frame of the first service, where N may be less than or equal to the number of data packets included in the video frame. For example, when a video frame includes 64 packets, the value of N may be a positive integer less than or equal to 64.
In a possible design, the head packet, that is, the first data packet of the N data packets, may carry a frame start identifier; or the communication device may receive an independent frame start identifier together with the first data packet, with no limitation on their order. Alternatively, the packet preceding the first data packet carries a frame end identifier, or the communication device may receive an independent frame end identifier together with the last packet of the previous frame. Alternatively, if the communication device receives no data packet for a period of time T, the next data packet received may be regarded as the first data packet. The value of T may be pre-configured, specified by a protocol, or determined by the communication device itself. After receiving the head packet, the communication device may continue to receive the other data packets of the N data packets; recognizing those packets may be based on a preset time length, a preset number, or a preset data amount, without limitation.
In one possible design, a frame end identifier may be carried in a second data packet of the N data packets. The communication device may determine that the data packets received after one frame end identifier, up to and including the end packet corresponding to the next frame end identifier, constitute the N data packets.
In one possible design, the N data packets may all carry the same indication information. The communication device may determine the data packets carrying the same indication information to be the N data packets, without limitation.
In one possible design, when the communication device is an access network device, the access network device may perform data scheduling according to the extended delay budget. To do so, after receiving the end packet of a video frame, the access network device would need to calculate the total data amount of all the data packets in the video frame. Optionally, indication information of the size of the N data packets may instead be carried in the head packet, that is, the first of the N data packets, of each video frame. When the access network device receives the head packet, it can then determine the size of the N data packets and look for an occasion to transmit them, improving the transmission efficiency of the video frame. Alternatively, the "indication information of the size of the N data packets" may be replaced by "indication information of the average size of the N data packets".
In a second aspect, a communication method is provided, including: an access stratum receives a first data packet, where the first data packet is the head packet of a video frame; the access stratum receives other data packets belonging to the same video frame as the first data packet; and when a preset time expires, or when all of the N data packets belonging to the same video frame have been received, the access stratum delivers the N data packets to an upper layer, where the N data packets include the first data packet and the other data packets.
In one possible implementation, the preset time may be pre-configured or specified by a protocol. The scheme may be implemented with a timer: when the access stratum receives the head packet of a video frame, it starts the timer, and when the timer expires, it delivers the received data packets to the upper layer. Alternatively, the access stratum may receive indication information sent by the access network device, a core network element, or the video data source, and determine the preset time from that indication information. The indication information may indicate the preset time directly, or it may indicate the extended delay budget, from which the preset time can be determined; the preset time may be less than or equal to the extended delay budget. In one alternative scheme, the access stratum delivers each data packet to the upper layer as soon as it is received. This may cause the time interval between the upper layer receiving the head packet and the end packet of a video frame to exceed the extended delay budget limit, so that the upper-layer decoder fails to decode. In the present implementation, instead of delivering each packet individually, the access stratum delivers the data packets received continuously over a period of time (namely, the preset time) to the upper layer together, which reduces the probability that the delay spread of a video frame at the upper layer exceeds the extended delay budget. If the preset time is set appropriately, the access stratum can deliver all the data packets of one video frame to the upper layer at once, so that the delay spread of each video frame at the upper layer never exceeds the extended delay budget and the upper-layer decoder is guaranteed to decode successfully.
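A minimal sketch of this timer-based delivery rule follows, assuming a simple callback interface; the class and parameter names are illustrative and not defined by this application.

```python
# Illustrative sketch: buffer packets from the head packet onward and deliver
# them to the upper layer together when the preset time expires.

import threading

class AccessStratumBuffer:
    def __init__(self, preset_time_s: float, deliver_to_upper_layer):
        self.preset_time_s = preset_time_s   # <= extended delay budget
        self.deliver = deliver_to_upper_layer
        self.buffer = []
        self.timer = None

    def on_packet(self, packet: bytes, is_head_packet: bool) -> None:
        if is_head_packet:
            if self.timer is not None:
                self.timer.cancel()          # a new frame begins
            self.buffer = [packet]
            self.timer = threading.Timer(self.preset_time_s, self._flush)
            self.timer.start()               # clock starts at the head packet
        else:
            self.buffer.append(packet)

    def _flush(self) -> None:
        self.deliver(self.buffer)            # one delivery per frame
        self.buffer = []
```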
In another possible implementation, the access stratum may deliver the N data packets to the upper layer together once all N have been received. N may be less than or equal to the number of data packets included in one video frame; for example, if a video frame includes 64 packets, the value of N may be less than or equal to 64. Compared with delivering the data packets to the upper layer one by one, delivering N packets together also reduces the probability that the delay spread of a video frame at the upper layer exceeds the extended delay budget limit, and thus reduces the probability of decoding failure. If the value of N equals the total number of packets of a video frame, the access stratum delivers all packets of the frame at once, ensuring that the delay spread of each video frame at the upper layer does not exceed the extended delay budget and that the upper-layer decoder can decode successfully.
In another possible implementation, when the access stratum receives the end packet of a video frame, it may consider that all data packets of the video frame have been received and deliver them to the upper layer; otherwise, it does not deliver data packets to the upper layer. The access stratum may identify the end packet in many ways: for example, the end packet may carry a video frame end identifier or end-packet indication information, or the video frame end identifier or the end-packet indication information may be sent separately, without limitation. Optionally, in another mode, the UE may determine whether all data packets of a video frame have been received by using the PDCP sequence number (SN) that the PDCP layer of the base station allocates to each data packet. Optionally, if the UE has not collected all the data packets of the i-th video frame by the time it receives the first data packet of the (i+1)-th frame, the UE may discard the data packets of the i-th frame and no longer deliver them to the upper layer.
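The end-packet variant, including the PDCP SN completeness check and the rule of discarding an incomplete i-th frame when the (i+1)-th frame begins, might look as follows; the per-packet fields are assumptions made for the sketch.

```python
# Illustrative sketch: deliver a frame only when its end packet has arrived
# and the PDCP SNs are contiguous; discard an incomplete frame as soon as
# the head packet of the next frame appears.

def assemble_frames(packets, deliver) -> None:
    """packets: iterable of dicts with assumed keys 'sn', 'is_head',
    'is_tail', and 'data'."""
    current = []
    for p in packets:
        if p["is_head"] and current:
            current = []                     # frame i incomplete: discard it
        current.append(p)
        if p["is_tail"]:
            sns = [q["sn"] for q in current]
            if sns == list(range(sns[0], sns[0] + len(sns))):  # no SN gaps
                deliver([q["data"] for q in current])
            current = []                     # start collecting the next frame
```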
In a third aspect, there is provided an apparatus comprising means for performing the steps of the first or second aspect.
In a fourth aspect, an apparatus is provided that includes a processor and an interface circuit. The processor is configured to communicate with other apparatuses via the interface circuit and to perform the method provided in the first or second aspect; there may be one or more processors.
In a fifth aspect, an apparatus is provided that includes a processor connected to a memory. The processor is configured to call a program stored in the memory to perform the method provided in the first or second aspect. The memory may be located inside or outside the apparatus, and there may be one or more processors.
In a sixth aspect, an apparatus is provided, which includes at least one processor and at least one memory, where the at least one processor is configured to execute the method provided in the first or second aspect.
In a seventh aspect, a program is provided which, when executed by a processor, performs the method provided in the first or second aspect.
In an eighth aspect, there is provided a program product, such as a computer readable storage medium, comprising the program of the first or second aspect.
In a ninth aspect, there is provided a computer readable storage medium comprising a program which, when executed by a processor, performs the method provided in the first or second aspect.
The above apparatus may be a chip. The processor may be implemented by hardware or software: when implemented by hardware, it may be a logic circuit, an integrated circuit, or the like; when implemented by software, it may be a general-purpose processor that reads software code stored in a memory. The memory may be integrated with the processor or located outside the processor as a stand-alone component. There may be one or more processors and one or more memories. In a specific implementation, the memory and the processor may be integrated on the same chip or disposed on different chips.
Drawings
Fig. 1 is a schematic diagram of a video frame provided in an embodiment of the present application;
fig. 2 is a schematic diagram of different transmission schemes provided by an embodiment of the present application;
fig. 3 is a schematic diagram of a network architecture provided in an embodiment of the present application;
fig. 4 to fig. 7 are flowcharts of a communication method according to an embodiment of the present application;
fig. 8 is a schematic diagram of a protocol stack of a receiving party according to an embodiment of the present application;
fig. 9 and fig. 10 are flowcharts of a communication method provided in an embodiment of the present application;
fig. 11 and 12 are schematic structural diagrams of a communication device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. In the description of this application, unless otherwise stated, "/" indicates an "or" relationship between the associated objects; for example, A/B may represent A or B. "And/or" describes an association between associated objects and indicates three possible relationships; for example, "A and/or B" may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. Unless otherwise specified, "a plurality of" means two or more. "At least one of the following" or a similar expression refers to any combination of the listed items, including any combination of single items or plural items; for example, at least one of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple. In addition, to describe the technical solutions of the embodiments clearly, terms such as "first" and "second" are used to distinguish between items that are identical or similar in function and purpose. Those skilled in the art will understand that these terms do not limit quantity or execution order, and do not indicate relative importance.
In addition, the network architectures and service scenarios described in the embodiments of this application are intended to illustrate the technical solutions more clearly and do not limit them. A person skilled in the art will know that, as network architectures evolve and new service scenarios emerge, the technical solutions provided in the embodiments of this application remain applicable to similar technical problems.
For a video service, the basic process is to sample the video into N pictures per second, with each picture encoded as a video frame. Each video frame is encoded into digital information along two dimensions: color and brightness. Because each video frame contains many pixels, the encoded digital information is relatively large, and transmitting it directly would occupy a large bandwidth. Video services are therefore compressed before transmission.
Owing to the nature of video services, as long as the shot does not change, most of the picture content stays the same and only a small amount differs between adjacent frames. The video frames can therefore be grouped, with the first frame of each group serving as the reference frame and subsequent frames as dependent frames. During compression, the reference frame undergoes intra-frame compression: only the code stream of that frame itself is referenced, and no other frame. The decompressor can thus decompress a received reference frame independently, without other frames. The dependent frames following the reference frame undergo inter-frame compression: their compression references not only the code stream within the frame but also other frames, such as the reference frame. This greatly improves the compression ratio and reduces the size of the compressed data.
Because of this compression method, the sizes of compressed video frames differ greatly. As shown in fig. 1, the reference frame (also called the I frame) is the largest. The dependent frames, that is, the P frames (whose decoding depends only on preceding frames) and B frames (whose decoding depends on both preceding and following frames) in the figure, are smaller.
Because the I frame is so large, it is generally divided into multiple IP packets by the transmission control protocol/internet protocol (TCP/IP) layer or the Ethernet layer before being handed to the communication network for transmission; a typical I frame is divided into 64 IP packets. From the perspective of the base station (e.g., the gNB), tens of IP packets may arrive together, followed by the subsequent P frames and B frames. Because P frames and B frames are smaller, each may consist of one or a few IP packets.
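As a rough illustration of the fragmentation described above (assuming the typical 1500-byte Ethernet MTU, which this application does not specify):

```python
# Illustrative sketch: splitting one compressed frame into MTU-sized IP
# packet payloads. The 1500-byte MTU is an assumed, typical Ethernet value.

MTU_BYTES = 1500

def fragment(frame: bytes, mtu: int = MTU_BYTES) -> list:
    return [frame[i:i + mtu] for i in range(0, len(frame), mtu)]

i_frame = bytes(64 * MTU_BYTES)      # a "typical" I frame: ~96 kB
print(len(fragment(i_frame)))        # -> 64 IP packets
```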
Referring to fig. 2, a base station (e.g., a gNB) may transmit a video frame according to one of the following three transmission schemes. The schemes may be used to transmit I frames, B frames, or P frames; the following description takes the IP packets of an I frame as an example:
the first transmission scheme corresponds to an ideal situation. The air interface load of the gNB is light, and the gNB quickly transmits the IP packet of the I frame to the terminal equipment. After receiving the IP packet, the terminal equipment decodes the IP packet of the I frame immediately, the time delay is very small, the extension time delay is also very small, and both the time delay and the extension time delay do not exceed the limit.
The second transmission scheme corresponds to a heavily loaded air interface: the gNB has other, more urgent data to send, so the IP packets of the I frame are scheduled very late. As a result, the gNB may not have scheduled all IP packets of the I frame by the time the delay budget of the I frame is exceeded. From the perspective of the gNB, only a portion of the IP packets fail to arrive at the receiver on time, so the quality of service (QoS) still appears to be satisfied. From the perspective of the receiver's video decoder, however, the I frame has exceeded the delay budget, which affects not only the decoding of the I frame itself but also the decoding of the following P frames and B frames.
The third transmission scheme corresponds to a moderately loaded air interface. Because the gNB does not know that the tens of IP packets belong to the same I frame, the delay spread of the I frame may become too large. Although all IP packets of the I frame arrive at the receiver within the delay budget, the delay spread of the I frame exceeds the extended delay budget limit and does not meet the requirements of the video decoder.
The delay budget and the extended delay budget (which may also be referred to as a spread delay budget) are two different concepts. The delay budget defines an upper limit on the transmission delay of a data packet between a core network element and the terminal device, that is, the time from when the core network element receives the data packet to when the packet is delivered to the terminal device, for example to the non-access stratum of the terminal device; here the core network element is one that handles the user plane, for example a user plane function (UPF) network element. The extended delay budget is an upper limit on the time interval between the video decoder receiving the first packet and the last packet of a video frame. Optionally, if the video decoder has not received all the IP packets of an I frame when the extended delay budget is reached, it discards the previously received IP packets, and decoding of the I frame fails.
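Under assumed notation, the two limits can be written as follows, where p ranges over the data packets of one video frame, t_arrive(p) is the time the core network element receives packet p, t_deliver(p) is the time it is delivered to the terminal device, and t_rx(p) is the time the video decoder receives it:

```latex
\underbrace{t_{\mathrm{deliver}}(p) - t_{\mathrm{arrive}}(p) \le D_{\mathrm{delay}} \ \ \forall p}_{\text{delay budget (per packet)}}
\qquad
\underbrace{\max_{p} t_{\mathrm{rx}}(p) - \min_{p} t_{\mathrm{rx}}(p) \le D_{\mathrm{ext}}}_{\text{extended delay budget (per frame)}}
```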
On this basis, an embodiment of this application provides a communication method and apparatus. The method may be: a communication device obtains an extended delay budget; the communication device receives data packets of a first service; and the communication device processes the received data packets according to the extended delay budget. For example, the communication device may be an access network device. According to the extended delay budget of the I frame, the access network device can allocate a reasonable transmission occasion at which to send the data packets of the I frame received from the core network to the terminal device, thereby reducing the probability that the I-frame data packets exceed the extended delay budget limit when reaching the access stratum of the terminal device and resolving the technical problem of I-frame decoding failure.
As shown in fig. 3, there is provided a network architecture comprising: an access network and a core network.
The access network implements functions related to wireless access, and an access network device is a device that provides access for terminal devices. Access network devices include radio access network (RAN) devices and/or access network (AN) devices. A RAN device may be an access network device as defined by 3GPP; an AN device may be an access network device defined outside 3GPP (non-3GPP).
The RAN device is mainly responsible for radio resource management, quality of service (QoS) management, data compression, security processing, and the like on the air-interface side. RAN devices include base stations in various forms, for example macro base stations, micro base stations (small cells), relay stations, and access points. RAN devices include, but are not limited to: a next generation base station (gNB) in 5G, an evolved NodeB (eNB), a radio network controller (RNC), a NodeB (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (for example, a home evolved NodeB or home NodeB, HNB), a baseband unit (BBU), a transmission and reception point (TRP), a transmission point (TP), or a mobile switching center (MSC). The RAN device may also be a radio controller, a centralized unit (CU), and/or a distributed unit (DU) in a cloud radio access network (CRAN) scenario; or the RAN device may be a relay station, an access point, a vehicle-mounted device, a terminal device, a wearable device, an access network device in a future 6G network, an access network device in a future evolved public land mobile network (PLMN), and the like.
The AN device enables the terminal device and the 3GPP core network to interconnect and interwork using non-3GPP technologies. The non-3GPP technologies include, but are not limited to: wireless fidelity (WiFi), worldwide interoperability for microwave access (WiMAX), code division multiple access (CDMA) network technologies, and the like.
The core network device is mainly used for managing terminal devices and providing a gateway for communicating with external networks. The core network device may be, for example, a core network element of a given network standard, and may include one or more of the following network elements: an access and mobility management function (AMF) network element, a session management function (SMF) network element, a user plane function (UPF) network element, a policy control function (PCF) network element, an application function (AF) network element, a unified data management (UDM) network element, an authentication server function (AUSF) network element, and a network slice selection function (NSSF) network element.
AMF network element: mainly responsible for mobility management in the mobile network, such as user location update, user registration with the network, and user handover. SMF network element: mainly responsible for session management in the mobile network, such as session establishment, modification, and release; specific functions include allocating IP addresses to users and selecting the UPF network element that provides the packet forwarding function. UPF network element: mainly responsible for forwarding and receiving user data. In downlink transmission, the UPF network element may receive user data from a data network (DN) and transmit it to the terminal device through the access network device; in uplink transmission, the UPF network element may receive user data from the terminal device through the access network device and forward it to the DN. Optionally, the transmission resources and scheduling functions with which the UPF network element serves the terminal device may be managed and controlled by the SMF network element. PCF network element: mainly supports providing a unified policy framework to control network behavior, provides policy rules to control-plane network functions, and is responsible for obtaining user subscription information related to policy decisions. AF network element: mainly supports interacting with the 3GPP core network to provide services, for example influencing data routing decisions, providing policy control functions, or providing certain third-party services to the network side. The UDM network element is mainly used for generating authentication credentials, user identification processing (such as storing and managing users' permanent identities), access authorization control, and subscription data management. The AUSF network element is mainly used for authentication when the terminal device accesses the network, including receiving authentication requests sent by the security anchor function (SEAF), selecting an authentication method, and requesting authentication vectors from the authentication credential repository and processing function (ARPF). The NSSF network element is mainly used for selecting network slice instances for the terminal device, determining the allowed network slice selection assistance information (NSSAI), configuring the NSSAI, and determining the AMF set that serves the terminal device.
Optionally, the network architecture shown in fig. 3 may further include a terminal device. A terminal device, or terminal for short, is a device with a wireless transceiver function. Terminal devices can be deployed on land, including indoor, outdoor, handheld, or vehicle-mounted; on the water surface (such as on ships); and in the air (such as on airplanes, balloons, or satellites). The terminal device may be a mobile phone, a tablet computer (pad), a computer with a wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal device in industrial control, in self driving, in remote medical, in a smart grid, in transportation safety, in a smart city, or in a smart home, among others. The terminal device may also be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, a wearable device, a terminal device in a future fifth generation (5G) network, or a terminal device in a future public land mobile network (PLMN). A terminal device may also sometimes be called user equipment (UE), an access terminal device, a vehicle-mounted terminal device, an industrial control terminal device, a UE unit, a UE station, a mobile station, a remote terminal device, a mobile device, a wireless communication device, a UE agent, or a UE apparatus. The terminal device may be fixed or mobile; the embodiments of this application do not limit this. By way of example and not limitation, in the embodiments of this application the terminal device may be a wearable device. A wearable device, also called a wearable smart device, is the general term for devices designed and developed by applying wearable technology to items worn daily, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories. Wearable devices are not merely hardware; they realize powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-sized devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on a certain type of application function and must be used together with other devices such as smartphones, for example various smart bracelets and smart jewelry for monitoring physical signs.
In this application, the terminal device may be a terminal in an internet of things (IoT) system. IoT is an important part of the future development of information technology; its main technical feature is connecting things to the network through communication technologies, realizing an intelligent network of human-machine interconnection and thing-thing interconnection. The terminal device in this application may also be a terminal device in machine type communication (MTC). The terminal device of this application may be an on-board module, on-board component, on-board chip, or on-board unit built into a vehicle as one or more components or units, and the vehicle can implement the method of this application through that built-in module, component, chip, or unit. The embodiments of this application may therefore be applied to the internet of vehicles, for example vehicle to everything (V2X), long term evolution-vehicle (LTE-V), and vehicle to vehicle (V2V).
Optionally, the network architecture shown in fig. 3 may further include a data network (DN). The DN may be a service network that provides data services to users, for example an IP multimedia service network or the internet. The terminal device may establish a protocol data unit (PDU) session from the terminal device to the DN in order to access the DN.
It should be noted that network elements in the core network may have different names in different communication systems. The schematic diagram in fig. 3 takes the fifth generation mobile communication system as an example and is not intended to limit this application. For example, in an LTE communication system, the core network device may include one or more of the following network elements: a mobility management entity (MME), a serving gateway (S-GW), and the like. Furthermore, the core network elements shown in fig. 3 are only illustrative and do not limit the embodiments of this application. For example, in the network architecture shown in fig. 3, the core network elements may further include one or more of: a network exposure function (NEF), a network repository function (NRF), a service communication proxy (SCP), and the like.
As shown in fig. 4, an embodiment of this application provides a communication method, the flow of which includes, but is not limited to, the following steps.
In step 401, the communication device obtains an extended delay budget.
Optionally, the extended delay budget may be described in either of the following ways:
1. The extended delay budget is a time limit for the communication device to process all the data packets of one video frame.
2. The extended delay budget refers to a time limit for the communication device to process all packets of a video frame, such as the maximum processing delay, the tolerable time, or the maximum decoding duration of all packets of a video frame. A video frame is typically broken into multiple packets, and the video decoder of the communication device can use the extended delay budget to limit the processing time of all packets of the frame. For example, the video decoder starts timing when it receives the first packet of a video frame, or when it starts processing that first packet. If the decoder still has not received, or has not successfully decoded, all the data packets when the extended delay budget expires, decoding of the video frame fails and the decoder may discard the packets received so far; if all the data packets are collected, or decoding succeeds, within the extended delay budget, the video frame is decoded successfully. It will of course be appreciated that "collecting all the packets" or "decoding success" above could instead be replaced by "delivering the packets to the upper layer".
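A minimal after-the-fact check of this rule, with assumed names and values, might be:

```python
# Illustrative sketch: a frame decodes successfully only if all expected
# packets arrived and their arrival spread stayed within the budget.

def frame_decodable(arrival_times_ms, n_expected: int,
                    extended_budget_ms: float) -> bool:
    if len(arrival_times_ms) < n_expected:
        return False                                     # never collected all
    spread = max(arrival_times_ms) - min(arrival_times_ms)
    return spread <= extended_budget_ms

print(frame_decodable([0, 3, 7, 18], 4, 20.0))   # True: 18 ms spread
print(frame_decodable([0, 3, 7, 25], 4, 20.0))   # False: 25 ms spread
```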
In a possible implementation, the extended delay budget may be pre-configured or defined by a protocol, among other options, without limitation. If pre-configured, the communication device may receive first information and determine the extended delay budget based on it. The first information may directly indicate the size of the extended delay budget, or may indicate it indirectly. For example, the first information may indicate a service type, and the communication device determines the extended delay budget from a correspondence between service types and extended delay budgets: the extended delay budget for an animation video might be 20 ms, for a landscape video 15 ms, and for an action video 10 ms. Alternatively, the first information may indicate the decoding type of the video frames, with different decoding types corresponding to different extended delay budgets, and the communication device determines the extended delay budget from that correspondence, without limitation. The communication device may be an access network device, which may receive the first information from a core network element; for example, the SMF network element may send the first information to the access network device through the AMF network element. Alternatively, the communication device may be a terminal device, which may receive the first information from an access network device; for example, the access network device may first obtain the first information from the core network element and then transparently forward it to the terminal device. Optionally, the first information obtained by the access network device may be a direct or an indirect indication, and likewise the first information forwarded to the terminal device may be a direct or an indirect indication, without limitation. The first information may be carried in an application layer packet that the access network device sends to the terminal device; the terminal device obtains the first information by parsing the application layer packet. The application layer packet may be provided to the access stratum by the application layer, and the access stratum may forward it to the terminal device without parsing it. Alternatively, the first information may be sent by the access network device to the terminal device as an access stratum control element.
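As a sketch of the direct/indirect indication (reusing the example budgets above; the message layout is an assumption, not defined by this application):

```python
# Illustrative sketch: resolve the extended delay budget from the first
# information, either directly or via a service-type correspondence.

EXT_DELAY_BUDGET_MS = {          # example values from the text above
    "animation": 20,
    "landscape": 15,
    "action": 10,
}

def budget_from_first_info(first_info: dict) -> int:
    if "extended_delay_budget_ms" in first_info:             # direct
        return first_info["extended_delay_budget_ms"]
    return EXT_DELAY_BUDGET_MS[first_info["service_type"]]   # indirect

print(budget_from_first_info({"service_type": "action"}))        # -> 10
print(budget_from_first_info({"extended_delay_budget_ms": 12}))  # -> 12
```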
Pre-configuration in this embodiment and the following embodiments refers to configuration provided in advance to the communication device; the embodiments of this application place no limitation on which message carries the configured content.
In step 402, the communication device receives data packets of a first service. For example, the communication device may receive N data packets of the same video frame, which may be all or some of the packets of that frame. For example, for a reference frame, that is, an I frame, assuming the I frame includes 64 IP packets, the value of N may be a positive integer less than or equal to 64. Optionally, the N data packets of the same video frame may also be referred to as a cluster of data. The data packets may also be referred to as IP packets; no distinction is made between the two.
In this embodiment of the application, the communication device may determine that a received first data packet is the head packet of a video frame in the following ways. For example, the first data packet carries a frame start identifier, or the communication device may receive an independent frame start identifier and the first data packet, with no limitation on their order. Alternatively, the packet preceding the first data packet carries a frame end identifier, or the communication device may receive an independent frame end identifier and the last packet of the previous frame. Alternatively, if the communication device receives no data packet for a period of time T, the next data packet received may be regarded as the first data packet. The value of T may be pre-configured, specified by a protocol, or determined by the communication device itself. In the following description, the frame start identifier may also be called the indication information of the head packet, or the first indication information; the frame end identifier may also be called the indication information of the end packet, or the second indication information, without limitation. Besides the first data packet of the video frame, the communication device may receive the other data packets in the video frame.
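The three head-packet heuristics could be combined roughly as follows (the flag names and the idle threshold T are assumptions made for the sketch):

```python
# Illustrative sketch: decide whether a newly received packet is the head
# packet of a video frame using the three ways listed above.

def is_head_packet(pkt: dict, prev_pkt, gap_ms: float,
                   idle_threshold_t_ms: float) -> bool:
    if pkt.get("frame_start"):                 # 1. carries a frame start identifier
        return True
    if prev_pkt is not None and prev_pkt.get("frame_end"):
        return True                            # 2. previous packet ended a frame
    return gap_ms >= idle_threshold_t_ms       # 3. idle for at least time T
```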
In one possible implementation, the communication device may receive a first data packet, referred to as data packet P1, where data packet P1 is the first of the N data packets and may carry first indication information indicating the head packet. The other packets received by the communication device within a predetermined time belong to the same video frame as data packet P1. This can be implemented with a timer: the communication device starts a first timer upon receiving data packet P1, and the other packets received while the first timer runs are the remaining packets of the N data packets. The timer may be specified by a protocol or pre-configured, without limitation. Alternatively, data packet P1 and the first indication information may be transmitted independently, with no limitation on their order.
In a possible implementation, the communication device receives a first data packet, referred to as data packet P1, which is the first of the N data packets and may carry first indication information indicating that data packet P1 is the head packet of the N data packets. A preset number of further packets received by the communication device belong to the same video frame as data packet P1. The preset number may be specified by a protocol; or pre-configured, for example through application layer configuration information or access stratum information; or carried in the first data packet, for example as a cell indicating the preset number that is independent of the first indication information, or the first indication information may indicate both the preset number and the head packet of the video frame. For example, after receiving data packet P1, the communication device may treat the next preset number of packets as belonging to the same video frame as data packet P1: if the pre-configured or protocol-specified preset number is N, the communication device continues to receive N-1 packets after receiving head packet P1 of the video frame. Similarly, in this implementation the first indication information and data packet P1 may be sent independently of each other, with no limitation on their order; in that case the information indicating the preset number may be carried in the same message as the first indication information, or the first indication information may indicate both the preset number and the head packet of the video frame.
The preset number may be the total number of data packets of one video frame including the head packet, or the number of data packets of one video frame excluding the head packet.
In another possible implementation, the communication device receives a first data packet, referred to as data packet P1, which is the first of the N data packets and may carry first indication information indicating that data packet P1 is the head packet of the N data packets. Further packets amounting to a preset data amount received by the communication device belong to the same video frame as data packet P1. The preset data amount may be specified by a protocol; or pre-configured, for example through application layer configuration information or access stratum information; or carried in the first data packet, for example as a cell indicating the preset data amount that is independent of the first indication information, or the first indication information may indicate both the preset data amount and the head packet of the video frame; the unit of the preset data amount may be the byte, or the like. For example, suppose the data amount of a video frame is 1500 bytes and this value is pre-configured to the communication device. When the communication device receives the first packet of a video frame, it determines the packet's data amount. After receiving the second packet it computes the sum of the data amounts of the first and second packets; likewise, after receiving the third packet it computes the sum over the first, second, and third packets; and so on, until the accumulated data amount reaches 1500 bytes, at which point it stops collecting packets for the current video frame and regards the collected packets as belonging to the same video frame. The communication device may then go on to receive the data packets of the next video frame. Similarly, in this implementation the first indication information and data packet P1 may be sent independently of each other, with no limitation on their order; in that case the information indicating the preset data amount may be carried in the same message as the first indication information, or the first indication information may indicate both the preset data amount and the head packet of the video frame.
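The accumulation walked through above might be sketched as follows (the 1500-byte figure is the example value from the text; the structure is assumed):

```python
# Illustrative sketch: collect packets until the accumulated data amount
# reaches the preconfigured amount for one video frame.

def collect_frame(packets, preset_amount_bytes: int = 1500):
    frame, total = [], 0
    for pkt in packets:                   # pkt: payload bytes of one packet
        frame.append(pkt)
        total += len(pkt)
        if total >= preset_amount_bytes:  # head packet + subsequent packets
            return frame                  # these form one video frame
    return frame                          # stream ended, frame incomplete

pkts = [b"\x00" * 500, b"\x00" * 500, b"\x00" * 500, b"\x00" * 400]
print(len(collect_frame(pkts)))           # -> 3 (500 + 500 + 500 = 1500)
```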
The preset data amount may be the total data amount of one video frame including the head packet, or the remaining data amount of one video frame excluding the data amount of the head packet.
In another possible implementation, the communication device receives a first data packet, referred to as data packet P1, which is the first of the N data packets and may carry first indication information indicating that data packet P1 is the head packet of the N data packets. The other packets received between data packet P1 and data packet P2 (which may also be called the second data packet) belong to the same video frame as data packet P1, where data packet P2 is the head packet of the next video frame. In this way, the communication device can distinguish different video frames by the indication of each video frame's head packet. Similarly, the first indication information and data packet P1 may be sent independently, with no limitation on their order.
In another possible implementation, the communication device receives a data packet Pn (which may be called the third data packet), where data packet Pn is the last of the N data packets and carries second indication information indicating that it is the last of the N data packets, that is, the end packet. The other packets received by the communication device between the end packet of the previous video frame and the end packet of the current video frame (that is, data packet Pn) belong to the same video frame as data packet Pn. In this way, the communication device can distinguish different video frames by the indication of each video frame's end packet.
For example, the communication device may receive a data packet Pm carrying end-packet indication information. Thereafter, the communication device receives other data packets that do not carry end-packet indication information, and then a data packet Pn that does. The communication device can determine that the other packets received between data packet Pm and data packet Pn belong to the same video frame as data packet Pn. Optionally, the second indication information and data packet Pn may also be sent separately, with no limitation on their order.
In another possible implementation, the communication device receives the N data packets of the same frame, where each of the N data packets carries third indication information identifying the corresponding video frame, and data packets carrying the same third indication information belong to the same video frame. For example, the communication device receives N1 data packets each carrying indication information 1 and determines from it that the N1 packets belong to one video frame; it then receives N2 data packets each carrying indication information 0 and determines that the N2 packets belong to the next video frame; by analogy, it then receives N3 data packets each carrying indication information 1, and so on.
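Grouping by the third indication information reduces to splitting the packet stream wherever the tag changes, for example (the (tag, payload) pairing is an assumption made for the sketch):

```python
# Illustrative sketch: consecutive packets carrying the same third
# indication information (e.g. alternating 1, 0, 1, ...) form one frame.

from itertools import groupby

def group_by_frame_tag(tagged_packets):
    """tagged_packets: iterable of (tag, payload) tuples."""
    return [[payload for _, payload in grp]
            for _, grp in groupby(tagged_packets, key=lambda tp: tp[0])]

stream = [(1, "a"), (1, "b"), (0, "c"), (0, "d"), (1, "e")]
print(group_by_frame_tag(stream))   # -> [['a', 'b'], ['c', 'd'], ['e']]
```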
Among the above, the first indication information indicates the first data packet of a video frame; the second indication information indicates the last data packet, that is, the end packet, of a video frame; and the third indication information can be understood as an identifier of the video frame itself, meaning that data packets carrying the same third indication information belong to the same video frame. The first indication information and the second indication information may also be used in combination, giving another implementation: the data packets between a data packet carrying the first indication information and a data packet carrying the second indication information (including the head packet and the end packet themselves) belong to the same video frame.
In addition, the second indication information may be used in combination with a preset number or a preset data amount instead of with the first indication information: for example, when the end packet is received, the preset number of data packets, or the data packets amounting to the preset data amount, preceding the end packet are understood to belong to the same video frame as the end packet.
Step 403, the communication device processes the data packets according to the extended delay budget. For example, the communication device may determine the occasion to transmit or deliver the N data packets based on the data amount of the N data packets and the extended delay budget. Optionally, the communication device determines the transmission occasion according to the data amount of the N data packets of the same video frame. For this purpose, indication information of the data amount of the N data packets may be carried in the first of the N data packets, so that when the communication device receives the head packet of each video frame it can determine the data amount of the whole frame and look for an occasion to transmit it. Alternatively, the head packet of each video frame may carry the average data amount of the N data packets it contains, from which the communication device can likewise determine the data amount of the whole video frame.
Alternatively, the indication information of the data amount of the N data packets may not be carried in the head packet at all: it may be specified by a protocol, or indicated by information independent of the first data packet.
In one possible implementation, the communication device may be an access network device. The scheme shown in fig. 4 may be applied to downlink video transmission: in step 403, the access network device may determine, according to the extended delay budget and the data amount of the N data packets, the occasion at which to transmit the N data packets to the terminal device, and send them at that occasion, which reduces the probability that the delay of the terminal device receiving or processing the N data packets exceeds the extended delay budget and improves the terminal device's decoding success rate. Alternatively, the scheme shown in fig. 4 may be applied to uplink video transmission: in step 403, the access network device may determine the transmission occasion of the N data packets according to the extended delay budget and the data amount of the N data packets, and send scheduling information to the terminal device to schedule it to transmit the N data packets to the network device at that occasion, thereby reducing the probability that the delay of receiving or processing the N data packets exceeds the extended delay budget and improving the decoding success rate of the video server. Optionally, in the uplink scheme, the terminal device transmits the uplink video service to the access network device, the access network device transmits it to the video server through the UPF network element, and the video server decodes it.
In another possible implementation manner, the communication device may be a terminal device. The access stratum of the terminal device may deliver the N data packets to the upper layer according to the extended delay budget, reducing the probability that the delay of processing the N data packets by the upper-layer decoder exceeds the extended delay budget, and improving the probability of successful decoding.
Alternatively, the scheme of the flow of fig. 4 may be applied to processing any video frame comprising a plurality of data packets, for example an I frame, a P frame, or a B frame, without limitation.
Fig. 5 shows a flowchart of a communication method, which can be applied to downlink video transmission, including but not limited to:
Step 501, the SMF network element sends first information to the AMF network element, where the first information may directly indicate the size of the extended delay budget of the first service; alternatively, the first information may indicate the type of the first service, or the decoding type of the first service, and so on, thereby indirectly indicating the size of the extended delay budget of the first service. The first information may be generated by the SMF network element, or obtained by the SMF network element from another network element, such as a UDM network element, a PCF network element, or a video application server.
Step 502, the AMF network element sends the first information to the gNB. Optionally, the gNB may determine the extended delay budget of the first service according to the first information, which may indicate the size of the extended delay budget either directly or indirectly. For example, the first information may indicate the type of the first service, or the decoding type of the first service, etc., without limitation.
Step 503, the application server sends N data packets of the same video frame to the UPF network element. In one possible implementation, the application server may send N packets of the same video frame to the UPF network element through the DN.
In step 504, the UPF network element sends N data packets of the same video frame to the gNB.
In a possible implementation manner, the first packet of each video frame may carry first indication information, which indicates the first packet of a video frame. The gNB may use it to distinguish the data packets belonging to different video frames. Optionally, the first indication information may also be sent separately from the first data packet, with no limitation on their order. The gNB may then determine which other packets belong to the same frame in any of the following ways (a combined sketch in code follows the list).
1. The gNB may start a timer upon receiving the first packet of each video frame. The timer duration may be specified by a protocol or preconfigured. The other data packets received from the UPF network element while the timer is running belong to the same video frame as the first data packet; together they form the N data packets of the video frame.
2. After receiving the first data packet of each video frame, the gNB further receives a preset number of other data packets, or other data packets amounting to a preset data volume, from the UPF network element. The preset number or preset data volume may be specified by a protocol, preconfigured, or carried in the first data packet, without limitation; the manner of carrying it in the first data packet is as described for fig. 4 and is not repeated here.
3. The gNB may determine that the other packets received between two first packets constitute the same video frame as the earlier first packet. For example, the gNB receives the first packet of the ith video frame from the UPF network element, then receives other data packets, and then receives the first packet of the (i+1)th video frame. The gNB can determine that the other data packets, together with the first packet of the ith video frame, form the ith video frame, where i is a positive integer greater than or equal to 1.
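The options above can be pictured with a short Python sketch (an illustration under assumed packet fields, not normative patent text). This version combines option 3 (a new head packet closes the previous frame) with option 2 (a preset packet count); option 1's timer would close a frame analogously to the preset count used here:

```python
# Hypothetical frame-grouping sketch. A packet is a dict; the boolean
# field "is_head" stands in for the first indication information.

class FrameAssembler:
    def __init__(self, preset_count: int | None = None):
        self.preset_count = preset_count   # option 2: group by preset number
        self.current: list[dict] = []

    def on_packet(self, pkt: dict) -> list[dict] | None:
        """Feed one packet; return a completed frame (list of packets) or None."""
        if pkt.get("is_head") and self.current:
            # option 3: the next head packet closes the previous frame
            done, self.current = self.current, [pkt]
            return done
        self.current.append(pkt)
        if self.preset_count and len(self.current) >= self.preset_count:
            done, self.current = self.current, []   # preset number reached
            return done
        return None

asm = FrameAssembler()
for pkt in [{"is_head": True}, {}, {}, {"is_head": True}]:
    frame = asm.on_packet(pkt)
    if frame:
        print(f"frame complete with {len(frame)} packets")  # prints 3
```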
In another possible implementation manner, the last packet of each video frame may carry second indication information, where the second indication information is used to indicate a tail packet of one video frame. The gNB may distinguish the data packets included in different video frames according to the second indication information.
1. The gNB may determine that the other packets received between two tail packets constitute the same video frame as the later tail packet. For example, the gNB receives the tail packet of the ith video frame from the UPF network element, then receives other data packets, and then receives the tail packet of the (i+1)th video frame. The gNB may determine that the other data packets, together with the tail packet of the (i+1)th video frame, form the (i+1)th video frame.
In another possible implementation manner, the data packets of different video frames may carry different indication information to identify their respective video frames. The gNB may determine the data packets included in each video frame from this indication information.
Step 505, the gNB determines the transmission occasion(s) for starting to transmit the N data packets according to the data amount of the N data packets of the same video frame and the extended delay budget. Optionally, there may be one or more transmission occasions. When there is one transmission occasion, the gNB finishes transmitting the N data packets at that occasion. When there are multiple, the gNB may split the N data packets into multiple parts and transmit the corresponding part at each occasion.
In one possible implementation, the gNB may arrange data scheduling according to the extended delay budget: after receiving the tail packet of a video frame, the gNB calculates the sum of the data amounts of all data packets of the frame, and arranges continuous or discontinuous transmission resources to transmit them. Alternatively, "indication information of the video frame size" may be added to a packet of the video frame to notify the gNB. In theory, this indication information may be added to any packet of the frame; optionally, it is added to the head packet, so that the gNB need not wait for the tail packet of a video frame before calculating the total data amount and arranging continuous or discontinuous transmission resources. In this way, as soon as the gNB receives the head packet of each video frame, it can determine the frame size and look for an opportunity to transmit the frame, improving transmission efficiency.
Alternatively, the "indication information of the size of the video frame" may be replaced by "indication information of the average size of all the packets in the video frame". The gNB can also estimate the total size of the video frame based on the average size of the packets and the number of packets included in the video frame.
Fig. 6 shows a further flow of the communication method, which is likewise applicable to downlink video transmission. Unlike the flow of fig. 5, here the gNB is informed of the extended delay budget by the video data source (e.g., an application server).
Step 601, the video receiver reports any one or more of the following parameters to the video data source: the video receiver's buffer size, the video receiver's buffering time, the compression/decompression algorithm, the compression/decompression parameters, and the video type (e.g., animation, landscape, character, etc.). Optionally, for the downlink transmission scheme, the video receiver may specifically be a UE.
Step 602, the video data source determines the extended delay budget of the video service according to the parameters.
In a possible implementation manner, the video data source determines the extended delay budget of the video service according to the "buffer size of the video receiver". The video receiver may report its buffer size in any of the following ways.
1. A different receiver buffer size is configured for each service. The video receiver, e.g., a UE, obtains the receiver buffer size configured for the current service and reports it in step 601 above.
2. The same receiver buffer size is configured for every service. Since the buffer size is identical for each video service, the receiver may report it to the video data source in advance, and the video data source determines the extended delay budget from that size regardless of the service type.
3. A unified receiver buffer is configured and shared by multiple services. The video data source determines the buffer size available to each video service according to the number of video services the video receiver is receiving simultaneously, subject to the total size limit. Before determining the extended delay budget of each service, the video data source may need to query the video receiver: in this way it obtains the size of the receiver's shared buffer and the number of video services currently being received simultaneously. From these two, the video data source determines an appropriate buffer size for the service, and then determines the extended delay budget of the current service from that buffer size.
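A minimal sketch of option 3's arithmetic, assuming (as an illustrative model only, not a statement of this application) that the budget follows from the per-service share of the buffer divided by the service's data rate:

```python
# Hypothetical: derive a per-service extended delay budget from a shared
# receiver buffer. The proportionality to the data rate is an assumed model.

def extended_delay_budget_ms(shared_buffer_bytes: int,
                             concurrent_services: int,
                             service_rate_bytes_per_ms: float) -> float:
    per_service_buffer = shared_buffer_bytes / concurrent_services
    # The receiver can absorb at most per_service_buffer bytes of a frame
    # before the decoder must start; the drain rate bounds the budget.
    return per_service_buffer / service_rate_bytes_per_ms

# Example: 2 MB shared buffer, 4 concurrent services, 2500 bytes/ms stream.
print(extended_delay_budget_ms(2_000_000, 4, 2_500.0))  # 200.0 ms upper bound
```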
Step 603, the video data source sends first information to the gNB, where the first information may directly indicate the size of the extended delay budget, or indicate it indirectly. For example, the first information may indicate a video service type or a decoding type, which may have a correspondence with the extended delay budget; the gNB can then determine the extended delay budget from the video service type or the decoding type. Optionally, besides the video service type, the terminal device capabilities may also be considered, such as the terminal device buffer size and buffering time. The terminal device then determines the overall extended delay budget by combining the video service type with the terminal device capabilities.
In step 604, the UPF network element sends N data packets of the same video frame to the gNB. The way the gNB identifies the N packets of the same video frame is as described for fig. 5 or fig. 4 and is not repeated here.
Step 605, the gNB determines the timing for transmitting the N data packets according to the extended delay budget and the data amount of the N data packets, so that the delay from the first to the last data packet of the video frame does not exceed the extended delay budget.
The methods shown in fig. 5 and fig. 6 address downlink video transmission. In a real network, most video transmission is downlink, but uplink video transmission also exists. For example, in a live-streaming service, a broadcaster needs to upload real-time video to a video server, and the video server then distributes the live video to each viewer. The uplink video service may likewise face the extended-delay-budget problem. Fig. 7 provides a specific flow of the communication method applicable to uplink video transmission, including but not limited to:
In step 701, the SMF network element sends first information to the AMF network element. The first information may directly or indirectly indicate the size of the extended delay budget, without limitation. Optionally, the extended delay budget may be an uplink extended delay budget, for example one corresponding to the data of a certain session.
In step 702, the AMF network element sends the first information to the gNB.
Optionally, the flow of fig. 7 is described taking as an example that a core network element notifies the gNB of the extended delay budget. Alternatively, the video data source, as in fig. 6, may notify the gNB of the extended delay budget, without limitation.
Step 703, the UE sends a notification message to the gNB, where the notification message notifies the gNB that there is an uplink video frame to be transmitted. Optionally, the notification message may further include indication information of the data amount of the uplink video frame. Optionally, the UE's upper layer may notify the access stratum that there is a video frame to be transmitted, after which the access stratum sends the notification message to the gNB. Alternatively, the upper layer may directly deliver the first packet of the video frame to the access stratum, and the access stratum sends the notification message to the gNB upon receiving that first packet.
In one possible implementation, the UE may notify the gNB via a Buffer Status Report (BSR); for example, the BSR may carry the notification message. Alternatively, the UE may notify the gNB through a medium access control control element (MAC CE) other than the BSR, the MAC CE carrying the notification message. Alternatively, the UE may generate control signaling in the SDAP layer, the PDCP layer, or the RLC layer to inform the gNB. Alternatively, when transmitting other data, the UE may notify the gNB through a reserved field of a Protocol Data Unit (PDU) header of the SDAP, PDCP, or RLC layer, i.e., the reserved field may carry the notification message. In a more specific implementation, the UE has multiple video frames to transmit, and the extended delay budgets of different video frames may be the same or different. When they differ, the UE can report the notification messages of different video frames through different BSRs. For example, a BSR-reported notification might read: "there is a 5000-byte video frame to transmit, and transmission should be completed within a 30 ms delay". The gNB can then allocate transmission resources for the video frame according to the UE's requirement and the extended delay budget corresponding to the frame.
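To make the notification concrete, the following is a hypothetical byte encoding of such a per-frame notification ("frame size plus desired completion delay"); the field layout is invented for illustration and is not a 3GPP-defined BSR or MAC CE format:

```python
import struct

# Hypothetical notification message: frame size in bytes (uint32) and the
# desired completion delay in ms (uint16). Not a standardized format.

def pack_notification(frame_bytes: int, delay_budget_ms: int) -> bytes:
    return struct.pack("!IH", frame_bytes, delay_budget_ms)

def unpack_notification(buf: bytes) -> tuple[int, int]:
    return struct.unpack("!IH", buf)

msg = pack_notification(5000, 30)   # "5000-byte frame, finish within 30 ms"
print(unpack_notification(msg))     # (5000, 30)
```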
Step 704, the gNB receives the notification message and determines its scheduling behavior, ensuring that sufficient uplink resources are allocated to the UE within the extended delay budget to complete the uplink transmission of the video frame. For example, upon receiving the notification message, the gNB may determine appropriate timing according to the data amount of the uplink video frame and the extended delay budget, and send Downlink Control Information (DCI) to the UE, the DCI allocating resources for uplink data transmission to the UE. The UE then transmits the uplink video frame to the gNB on the uplink resources allocated by the DCI.
Optionally, the DCI may include an uplink grant (UL grant). The gNB may allocate uplink resources to the UE through one or more UL grants. When the gNB allocates uplink resources through multiple UL grants, the first UL grant may carry indication information indicating that further uplink resources will be allocated to the UE subsequently. For example, the indication may specify: within the coming time T, the gNB will allocate another X bits of resources to the UE. Further, when the UE receives this indication, it may optimize its Logical Channel Prioritization (LCP) behavior. For example, for the logical channel corresponding to the video service, the UE may temporarily increase the Guaranteed Bit Rate (GBR) value of the video service, or temporarily raise its priority, so as to transmit as much of the current video frame's data as possible during LCP, thereby ensuring that the delay of the current video frame does not exceed the uplink extended delay budget.
Further, suppose the UE decides to optimize the transmission of the current video service, e.g., temporarily increases its GBR value or temporarily raises its priority. There are several ways to decide when this preferential treatment ends. 1. The UE decides: for example, the UE may end the preferential treatment once the current frame of the video service has been transmitted. 2. The gNB decides: for example, when allocating uplink resources to the UE, the gNB may add a note that the currently allocated uplink resources are expected to favor video traffic, so that the UE prioritizes the video service when performing LCP. Alternatively, the gNB may notify the UE to end the preferential treatment through indication information carried in DCI, a MAC CE, or Radio Resource Control (RRC) signaling. For example, if the gNB allocates uplink resources by dynamic scheduling, it may notify the UE through DCI to end the preferential treatment; if it allocates uplink resources by semi-persistent scheduling, it may notify the UE through RRC or a MAC CE. A sketch of the LCP adjustment follows.
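In this sketch, logical channels are served in priority order, each up to a bucket of prioritized bytes, and temporarily boosting the video channel favors the current frame. The classes, priorities, and byte figures are assumptions for illustration, not values from this application:

```python
# Hypothetical LCP sketch: channels are served in priority order, each up
# to its bucket of prioritized bytes. Boosting the video channel's
# priority and bucket favors the current video frame.

from dataclasses import dataclass

@dataclass
class LogicalChannel:
    name: str
    priority: int        # lower value = higher priority
    bucket_bytes: int    # prioritized bytes available this round
    queued_bytes: int

def lcp_allocate(channels: list[LogicalChannel], grant_bytes: int) -> dict[str, int]:
    alloc: dict[str, int] = {}
    for ch in sorted(channels, key=lambda c: c.priority):
        take = min(grant_bytes, ch.bucket_bytes, ch.queued_bytes)
        alloc[ch.name] = take
        grant_bytes -= take
    return alloc

video = LogicalChannel("video", priority=5, bucket_bytes=3000, queued_bytes=5000)
data = LogicalChannel("data", priority=3, bucket_bytes=4000, queued_bytes=4000)

print(lcp_allocate([video, data], grant_bytes=6000))  # data served first
video.priority, video.bucket_bytes = 1, 6000          # temporary boost
print(lcp_allocate([video, data], grant_bytes=6000))  # video served first
```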
Through the method described in fig. 7, the UE notifies the gNB of the data amount of the uplink video data to be transmitted and the fact that these data belong to the same frame of the video service. The gNB may refer to this information when scheduling, ensuring that the transmission of the uplink video frame does not exceed the extended delay budget.
Fig. 8 shows a schematic diagram of the protocol stack of a video receiver (e.g., a terminal device). In a possible implementation manner, the protocol stack of the video receiver may include, from top to bottom: an Application (APP) layer, a Service Data Adaptation Protocol (SDAP) layer, a Packet Data Convergence Protocol (PDCP) layer, a Radio Link Control (RLC) layer, a Medium Access Control (MAC) layer, and a physical (PHY) layer. Optionally, the "access stratum" mentioned in the embodiments of the present application may include one or more of the SDAP layer, the PDCP layer, the RLC layer, the MAC layer, or the PHY layer; the upper layer may include the APP layer. The access stratum and the upper layer may be adjacent or non-adjacent protocol layers, without limitation. For example, an IP layer, a TCP layer, or a User Datagram Protocol (UDP) layer may further be present between the upper layer and the access stratum, without limitation.
As shown in fig. 9, a flowchart of a communication method is provided. Optionally, the method of this flow may be performed by a video receiver (e.g., a terminal device), including but not limited to:
step 901, an access stratum receives a first data packet, where the first data packet is a first data packet of a video frame.
In the embodiment of the present application, the access stratum may determine that the received first packet is the head packet of a video frame in the following ways. For example, the first data packet carries a frame-start identifier, or the access stratum receives an independent frame-start identifier along with the first data packet (the order of the two is not limited). Or, the packet preceding the first data packet carries a frame-end identifier, or the access stratum receives an independent frame-end identifier along with the last packet of the previous frame (again, the order is not limited). Alternatively, if the access stratum receives no data packet for a period of time T, the next packet received may be regarded as a head packet. The value of T may be preconfigured, specified by a protocol, or determined by the UE itself.
In step 902, the access stratum receives other data packets belonging to the same video frame as the first data packet.
Step 903, when a preset time expires, or when all N data packets belonging to the same video frame have been received, the access stratum delivers the N data packets to the upper layer, where the N data packets include the first data packet and the other data packets.
In one possible implementation, the preset time may be preconfigured or specified by a protocol. Optionally, the scheme may be implemented with a timer: when the access stratum receives the head packet of a video frame, it starts the timer, and when the timer expires it delivers the received packets to the upper layer. Alternatively, the access stratum may receive indication information sent by the access network device, a core network element, or the video data source, and determine the preset time from it: the indication information may indicate the preset time directly, or indicate the extended delay budget, from which a preset time less than or equal to the extended delay budget can be derived. In a scheme where the access stratum hands each packet to the upper layer as soon as it is received, the interval between the upper layer receiving the head packet and the tail packet of a video frame may exceed the extended delay budget, causing the upper-layer decoder to fail. In the present implementation, the access stratum does not deliver each packet on arrival, but delivers the packets received over a period (the preset time) to the upper layer together, which reduces the probability that a video frame's delay at the upper layer exceeds the extended delay budget. Indeed, if the preset time is set properly, all packets of a video frame are delivered to the upper layer together, so that the extended delay of each video frame at the upper layer stays within the extended delay budget and the upper-layer decoder decodes successfully.
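A minimal sketch of this buffer-then-deliver behavior, assuming a per-frame timer started at the head packet (the callback and packet fields are illustrative, not part of the application):

```python
import threading

# Hypothetical access-stratum sketch: buffer packets of the current video
# frame and deliver them to the upper layer together when a timer expires.

class AccessStratum:
    def __init__(self, preset_time_s: float, deliver_to_upper):
        self.preset_time_s = preset_time_s
        self.deliver_to_upper = deliver_to_upper  # callback into the upper layer
        self.buffer: list[bytes] = []
        self.timer: threading.Timer | None = None

    def on_packet(self, pkt: bytes, is_head: bool) -> None:
        if is_head:
            # head packet: start the preset-time timer for this frame
            self.buffer = [pkt]
            self.timer = threading.Timer(self.preset_time_s, self._flush)
            self.timer.start()
        else:
            self.buffer.append(pkt)

    def _flush(self) -> None:
        packets, self.buffer = self.buffer, []
        self.deliver_to_upper(packets)  # unified delivery of the whole frame

stack = AccessStratum(0.03, lambda pkts: print(f"decoder got {len(pkts)} packets"))
stack.on_packet(b"head", is_head=True)
stack.on_packet(b"body", is_head=False)   # printed together after 30 ms
```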
In this embodiment, the manner of determining whether the data packets belong to the same video frame may be any manner of the foregoing embodiments, and details are not repeated here.
In another possible implementation, the access stratum may deliver the N data packets to the upper layer together once all N have been received. N may be less than or equal to the number of packets included in one video frame; for example, if a video frame includes 64 packets, N may be any value up to 64. Compared with delivering packets to the upper layer one by one, this also reduces the probability that the extended delay of a video frame at the upper layer exceeds the extended delay budget, and thus the probability of decoding failure. Indeed, if N equals the total number of packets of a video frame, the access stratum delivers all packets of the frame to the upper layer together, ensuring that the extended delay of each video frame at the upper layer stays within the extended delay budget and the upper-layer decoder decodes successfully.
In another possible implementation, when the access stratum receives the tail packet of a video frame, it may consider that all data packets of the frame have been received and deliver them all to the upper layer; otherwise, it delivers no packets to the upper layer. The access stratum may recognize a tail packet in many ways: for example, the tail packet may carry a frame-end identifier or tail-packet indication information, or the frame-end identifier or tail-packet indication may be sent separately, without limitation. Optionally, in another mode, the UE may use the PDCP Sequence Number (SN) that the base station's PDCP layer allocates to each packet to determine whether all packets of a video frame have been received. Optionally, if the UE never manages to collect all packets of the ith video frame and then receives the head packet of the (i+1)th frame, it may discard the packets of the ith frame and no longer deliver them to the upper layer.
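For the SN-based variant, a sketch of the completeness check, under the assumption (made only for this illustration) that the head packet announces the frame's contiguous SN range; PDCP itself does not define frame boundaries:

```python
# Hypothetical SN-based completeness check: the head packet is assumed to
# announce the frame's SN range [first_sn, last_sn]. Contiguity of PDCP
# SNs within the frame is an assumption of this sketch.

def frame_complete(received_sns: set[int], first_sn: int, last_sn: int) -> bool:
    return all(sn in received_sns for sn in range(first_sn, last_sn + 1))

received = {100, 101, 102, 104}
print(frame_complete(received, 100, 104))  # False: SN 103 missing
received.add(103)
print(frame_complete(received, 100, 104))  # True: deliver frame to upper layer
```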
Alternatively, the method shown in fig. 9 may be used in combination with the methods shown in fig. 4 to fig. 7. For example, in one possible implementation, the gNB allocates transmission occasions for the N data packets according to the extended delay budget, so that their transmission meets the extended delay budget; further, when the UE's access stratum has received the N data packets, it delivers them to the upper layer together.
As shown in fig. 10, a flow chart of a method of communication is provided, including but not limited to:
Step 1000, the SMF network element sends first information to the UE. Optionally, the SMF may send the first information to the UE through the AMF network element and the gNB. The first information may directly or indirectly indicate the size of the extended delay budget.
Step 1001, the UPF network element receives N data packets of the same video frame. For example, the video data source may send the N packets of the same video frame to the UPF network element through the DN.
In step 1002, the UPF network element sends N packets of the same video frame to the gNB. For the way the gNB determines N packets of the same video frame, see the above description.
Step 1003, the gNB sends N packets of the same video frame to the UE.
Step 1004, when all N data packets have been received, the UE's access stratum delivers them to the upper layer together.
Similarly, the N data packets of the same video frame that the gNB sends to the UE carry head-packet indication information, tail-packet indication information, or per-frame indication information, so that the UE's access stratum, upon receiving the packets, can distinguish which data packets each video frame includes. Unlike the existing scheme, the UE's access stratum delivers all data packets of the same video frame to the upper layer together. In a specific implementation, when the UE's access stratum receives a data packet and finds that it belongs to the current video frame, it does not deliver it to the upper layer immediately but buffers it; once all packets of the current video frame have been received, they are delivered to the upper layer in one batch. Alternatively, if per-frame indication information is used, the UE may determine that packets carrying the same indication information belong to the same video frame; when the UE receives a packet with different indication information, it can conclude that all packets of the current video frame have been received and deliver them to the upper layer together.
In this embodiment of the present application, in order to prevent the extended delay budget from being exceeded, the video receiver (e.g., a UE) collects all packets of the same video frame and delivers them to the upper layer in one batch.
It is understood that the schemes in the above flows can be used individually or in combination, without limitation. For example, in one possible approach, in downlink video transmission the gNB allocates different transmission occasions for different video frames according to the extended delay budget and transmits each frame to the UE at its occasion; the UE side, having received the video frames, determines the data packets included in each video frame and delivers all packets of each frame to the upper layer together.
The methods provided by the embodiments of the present application are described in detail above with reference to fig. 1 to fig. 10. The apparatuses provided by the embodiments of the present application are described in detail below with reference to fig. 11 and fig. 12. It is to be understood that the apparatus embodiments correspond to the method embodiments; what is not described in detail for the apparatuses can be found in the description of the method embodiments above.
Fig. 11 is a schematic block diagram of an apparatus 1100 provided in an embodiment of the present application, configured to implement the functions of the access network device or the terminal device in the foregoing methods. The apparatus may be, for example, a software unit or a chip system; the chip system may consist of a chip, or may include a chip and other discrete devices. The apparatus includes a communication unit 1101 and may further include a processing unit 1102. The communication unit 1101 is configured to communicate with other devices, and the processing unit 1102 is configured to perform processing. The communication unit 1101 may also be referred to as a communication interface, a transceiving unit, an input/output interface, and so on.
In one example, the apparatus 1100 may implement the steps performed by the access network device in the above method embodiments; the apparatus 1100 may be the access network device, or a chip or circuit configured in the access network device. The communication unit 1101 performs the transceiving operations of the access network device in the above method embodiments, and the processing unit 1102 performs the processing-related operations on the access network device side. Alternatively, the apparatus 1100 may implement the steps performed by the terminal device in the above method embodiments; the apparatus 1100 may be the terminal device, or a chip or circuit configured in the terminal device. The communication unit 1101 performs the transceiving operations of the terminal device in the above method embodiments, and the processing unit 1102 performs the processing-related operations on the terminal device side.
For example, the processing unit 1102 is configured to obtain an extended delay budget, where the extended delay budget is used for time limiting processing of all data packets of one video frame by the communication device; a communication unit 1101 for receiving a data packet of a first service; the processing unit 1102 is further configured to process the data packet according to the extended delay budget.
Optionally, the extended delay budget is preconfigured, or is indicated by first information. For example, the communication unit 1101 may receive first information indicating the extended delay budget from a core network element; alternatively, it may receive first information indicating the extended delay budget from an access network device.
Optionally, when the first information is used to indicate the service type of the first service, the obtaining, by the processing unit 1102, an extended delay budget includes: determining the service type of the first service according to the first information; and determining the extended delay budget according to the corresponding relation between the service type and the extended delay budget.
Optionally, the communication unit 1101 receives a data packet of the first service, including: receiving N data packets belonging to the same video frame in the first service, wherein N is an integer greater than 1; the processing unit 1102 processes the data packet according to the extended delay budget, including: and determining the time for transmitting or submitting the N data packets according to the data volume of the N data packets and the extended delay budget of the first service.
Optionally, the receiving, by the communication unit 1101, N data packets belonging to the same video frame in the first service includes: receiving a first data packet, where the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is a first data packet of the N data packets; and other data packets received within the preset time belong to the same video frame as the first data packet.
Optionally, the receiving, by the communication unit 1101, N data packets belonging to the same video frame in the first service includes: receiving a first data packet, where the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is the first data packet of the N data packets; the preset number of other data packets, or other data packets amounting to a preset data volume, received subsequently belong to the same video frame as the first data packet.
Optionally, the receiving, by the communication unit 1101, N data packets belonging to the same video frame in the first service includes: receiving a first data packet, where the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is the first data packet of the N data packets; other data packets received between the first data packet and a second data packet belong to the same video frame as the first data packet, where the second data packet is the first data packet of the next video frame.
Optionally, the receiving, by the communication unit 1101, N data packets belonging to the same video frame in the first service includes: receiving a third data packet, where the third data packet carries second indication information, and the second indication information is used to indicate that the third data packet is the tail data packet of the N data packets; other data packets received between the tail data packet of the previous video frame and the third data packet belong to the same video frame as the third data packet.
Optionally, a first data packet of the N data packets carries third indication information, where the third indication information is used to indicate a data size of a video frame in the first service; or, a first data packet of the N data packets carries fourth indication information, where the fourth indication information is used to indicate an average data size of data packets included in a video frame in the first service.
Optionally, the extended delay budget of the first service is determined according to one or more of the following parameters: the buffer space of the video receiver, the buffer duration of the video receiver, the decompression algorithm of the video receiver, the decompression parameter of the video receiver, and the video type of the first service.
The division of units in the embodiments of the present application is schematic and is merely a logical function division; there may be other division manners in actual implementation. In addition, each functional unit in the embodiments of the present application may be integrated in one processor, may exist alone physically, or two or more units may be integrated in one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
It is to be understood that the functions of the communication unit in the above embodiments may be implemented by a transceiver, and the functions of the processing unit may be implemented by a processor. The transceiver may comprise a transmitter and/or a receiver or the like for performing the functions of the transmitting unit and/or the receiving unit, respectively. This is illustrated below with reference to fig. 12.
Fig. 12 is a schematic block diagram of an apparatus 1200 provided in an embodiment of the present application, and the apparatus 1200 shown in fig. 12 may be implemented as a hardware circuit of the apparatus shown in fig. 11. The device can execute the functions of the access network equipment or the terminal equipment in the method embodiment. For convenience of explanation, fig. 12 shows only main components of the communication apparatus.
The communication device 1200 shown in fig. 12 includes at least one processor 1201. The communication apparatus 1200 may also include at least one memory 1202 for storing program instructions and/or data. The memory 1202 is coupled to the processor 1201. The coupling in the embodiments of the present application is an indirect coupling or a communication connection between devices, units, or modules, which may be electrical, mechanical, or in other form, for information interaction between the devices, units, or modules. The processor 1201 may operate in conjunction with the memory 1202: the processor 1201 may execute the program instructions stored in the memory 1202, and at least one of the at least one memory 1202 may be included in the processor 1201.
The apparatus 1200 may also include a communication interface 1203 for communicating with other devices via a transmission medium, such that the apparatus 1200 may communicate with other devices. In embodiments of the present application, the communication interface may be a transceiver, circuit, bus, module, or other type of communication interface. In the embodiment of the present application, when the communication interface is a transceiver, the transceiver may include an independent receiver and an independent transmitter; a transceiver that integrates transceiving functions, or an interface circuit may be used.
It should be understood that the connection medium between the processor 1201, the memory 1202, and the communication interface 1203 is not limited in the embodiments of the present application. In fig. 12, the memory 1202, the processor 1201, and the communication interface 1203 are connected by a communication bus 1204, represented by a thick line; the connection manner between other components is merely illustrative and not limiting. The bus may include an address bus, a data bus, a control bus, and the like. For ease of illustration, fig. 12 shows only one thick line, but this does not mean that there is only one bus or one type of bus.
In one example, the apparatus 1200 is used to implement the steps performed by the access network device in the above method embodiments. The communication interface 1203 is configured to perform operations related to transceiving of the access network device in the foregoing method embodiment, and the processor 1201 is configured to perform operations related to processing on the access network device side in the foregoing method embodiment. Alternatively, the apparatus 1200 is configured to implement the steps performed by the terminal device in the foregoing method embodiment. The communication interface 1203 is configured to perform operations related to transceiving of the terminal device in the foregoing method embodiment, and the processor 1201 is configured to perform operations related to processing on the terminal device side in the foregoing method embodiment.
For example, the processor 1201 is configured to obtain an extended delay budget, where the extended delay budget is used for time limiting processing of all packets of one video frame by the communication apparatus; a communication interface 1203, configured to receive a data packet of a first service; the processor 1201 is further configured to process the data packet according to the extended delay budget.
Optionally, the extended delay budget is preconfigured, or is indicated by first information. For example, the communication interface 1203 may receive first information indicating the extended delay budget from a core network element, or receive first information indicating the extended delay budget from an access network device.
Optionally, when the first information is used to indicate a service type of the first service, the processor 1201 obtains an extended delay budget, including: determining the service type of the first service according to the first information; and determining the extended delay budget according to the corresponding relation between the service type and the extended delay budget.
Optionally, the receiving, by the communication interface 1203, a data packet of the first service includes: receiving N data packets belonging to the same video frame in the first service, wherein N is an integer greater than 1; the processor 1201 processes the data packet according to the extended delay budget, including: and determining the time for transmitting or submitting the N data packets according to the data volume of the N data packets and the extended delay budget of the first service.
Optionally, the receiving, by the communication interface 1203, N data packets belonging to the same video frame in the first service includes: receiving a first data packet, where the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is a first data packet of the N data packets; and other data packets received within the preset time belong to the same video frame as the first data packet.
Optionally, the receiving, by the communication interface 1203, N data packets belonging to the same video frame in the first service includes: receiving a first data packet, where the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is the first data packet of the N data packets; the preset number of other data packets, or other data packets amounting to a preset data volume, received subsequently belong to the same video frame as the first data packet.
Optionally, the receiving, by the communication interface 1203, N data packets belonging to the same video frame in the first service includes: receiving a first data packet, where the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is the first data packet of the N data packets; other data packets received between the first data packet and a second data packet belong to the same video frame as the first data packet, where the second data packet is the first data packet of the next video frame.
Optionally, the receiving, by the communication interface 1203, N data packets belonging to the same video frame in the first service includes: receiving a third data packet, where the third data packet carries second indication information, and the second indication information is used to indicate that the third data packet is the tail data packet of the N data packets; other data packets received between the tail data packet of the previous video frame and the third data packet belong to the same video frame as the third data packet.
Optionally, a first data packet of the N data packets carries third indication information, where the third indication information is used to indicate a data size of a video frame in the first service; or, a first data packet of the N data packets carries fourth indication information, where the fourth indication information is used to indicate an average data size of data packets included in a video frame in the first service.
Optionally, the extended delay budget of the first service is determined according to one or more of the following parameters: the buffer space of the video receiver, the buffer duration of the video receiver, the decompression algorithm of the video receiver, the decompression parameter of the video receiver, and the video type of the first service.
Further, the embodiments of the present application also provide: an apparatus for performing the methods in the above method embodiments; a computer-readable storage medium comprising a program which, when executed by a processor, performs the methods in the above method embodiments; a computer program product comprising computer program code which, when run on a computer, causes the computer to implement the methods in the above method embodiments; a chip comprising a processor coupled with a memory, the memory being for storing a program or instructions which, when executed by the processor, cause an apparatus to perform the methods in the above method embodiments; and a system comprising at least one of an access network device, a terminal device, a core network element, or an application server performing the above method embodiments.
In the embodiments of the present application, the processor may be a general processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
In the embodiment of the present application, the memory may be a nonvolatile memory, such as a Hard Disk Drive (HDD) or a solid-state drive (SSD), or a volatile memory, for example a random-access memory (RAM). The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, without limitation. The memory in the embodiments of the present application may also be a circuit or any other device capable of performing a storage function, for storing program instructions and/or data.
The method provided by the embodiment of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, a network appliance, a user device, or other programmable apparatus. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)), or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disk (DVD)), or a semiconductor medium (e.g., an SSD), among others.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (14)

1. A method of communication, comprising:
a communication device acquires an extended time delay, wherein the extended time delay is used for time limitation of processing of all data packets of one video frame by the communication device;
the communication device receives a data packet of a first service;
and the communication device processes the data packet according to the extended time delay.
2. The method of claim 1, wherein the extended delay is preconfigured; or
The communication device is an access network device, and the access network device receives first information for indicating the extended delay from a core network element; or
The communication device is a terminal device, and the terminal device receives first information for indicating the extended delay from an access network device.
3. The method of claim 2, wherein the first information is used for indicating a service type of the first service, and wherein the obtaining of the extended delay by the communication device comprises:
the communication device determines the service type of the first service according to the first information;
and the communication device determines the extended time delay according to the corresponding relation between the service type and the extended time delay.
4. A method according to any one of claims 1 to 3, wherein the communication device receives data packets of a first service, comprising: the communication device receives N data packets belonging to the same video frame in the first service, wherein N is an integer greater than 1;
the communication device processes the data packet according to the extended delay, and the processing comprises the following steps: and the communication device determines the time for transmitting or submitting the N data packets according to the data volume of the N data packets and the extended delay of the first service.
5. The method of claim 4, wherein said communication device receiving N packets belonging to the same video frame in said first service comprises:
the communication device receives a first data packet, wherein the first data packet carries first indication information, and the first indication information is used for indicating that the first data packet is a first data packet in the N data packets;
and other data packets received by the communication device in a preset time belong to the same video frame as the first data packet.
6. The method of claim 4, wherein said communication device receiving N packets belonging to the same video frame in said first service comprises:
the communication device receives a first data packet, wherein the first data packet carries first indication information, and the first indication information is used for indicating that the first data packet is a first data packet in the N data packets;
the other data packets of the preset number or the preset data volume received by the communication device belong to the same video frame as the first data packet.
7. The method of claim 4, wherein said communication device receiving N packets belonging to the same video frame in said first service comprises:
the communication device receives a first data packet, wherein the first data packet carries first indication information, and the first indication information is used for indicating that the first data packet is a first data packet in the N data packets;
other data packets received by the communication device between the first data packet and a second data packet belong to the same video frame as the first data packet, wherein the second data packet is the first data packet of the next video frame.
8. The method of claim 4, wherein said communication device receiving N packets belonging to the same video frame in said first service comprises:
the communication device receives a third data packet, wherein the third data packet carries second indication information, and the second indication information is used for indicating that the third data packet is a tail data packet in the N data packets;
the other data packets received by the communication device during the last data packet of the last frame and the third data packet belong to the same video frame as the third data packet.
9. The method according to any one of claims 4 to 8, wherein a first data packet of the N data packets carries third indication information or fourth indication information, the third indication information being used for indicating the data amount of the N data packets, and the fourth indication information being used for indicating the average data amount of the N data packets.
10. The method of any of claims 1 to 9, wherein the extended delay of the first traffic is determined according to one or more of the following parameters: the buffer space of the video receiver, the buffer duration of the video receiver, the decompression algorithm of the video receiver, the decompression parameter of the video receiver, and the video type of the first service.
11. An apparatus comprising means for performing the steps of the method of any one of claims 1 to 10.
12. An apparatus comprising at least one processor and interface circuitry, the at least one processor configured to communicate with other apparatus via the interface circuitry and to perform the method of any of claims 1 to 10.
13. An apparatus comprising a processor for invoking a program stored in a memory to perform the method of any of claims 1 to 10.
14. A computer-readable storage medium, characterized by comprising a program which, when executed by a processor, performs the method of any of claims 1 to 10.
CN202010504771.XA 2020-06-05 2020-06-05 Communication method and device Pending CN113766567A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010504771.XA CN113766567A (en) 2020-06-05 2020-06-05 Communication method and device
PCT/CN2021/092358 WO2021244218A1 (en) 2020-06-05 2021-05-08 Communication method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010504771.XA CN113766567A (en) 2020-06-05 2020-06-05 Communication method and device

Publications (1)

Publication Number Publication Date
CN113766567A true CN113766567A (en) 2021-12-07

Family

ID=78784003

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010504771.XA Pending CN113766567A (en) 2020-06-05 2020-06-05 Communication method and device

Country Status (2)

Country Link
CN (1) CN113766567A (en)
WO (1) WO2021244218A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114726513A (en) * 2022-03-18 2022-07-08 阿里巴巴(中国)有限公司 Data transmission method, apparatus, medium, and product
WO2023109743A1 (en) * 2021-12-17 2023-06-22 华为技术有限公司 Data transmission method and communication apparatus
WO2023173292A1 (en) * 2022-03-15 2023-09-21 Oppo广东移动通信有限公司 Wireless communication method, and devices
WO2023173293A1 (en) * 2022-03-15 2023-09-21 Oppo广东移动通信有限公司 Wireless communication method, and device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117202352A (en) * 2022-05-25 2023-12-08 华为技术有限公司 Communication method and device
CN117279031A (en) * 2022-06-13 2023-12-22 维沃移动通信有限公司 Information processing method and communication device
CN117858154A (en) * 2022-09-30 2024-04-09 ***通信有限公司研究院 Data processing method, device, communication equipment and storage medium
CN118042618A (en) * 2022-11-11 2024-05-14 上海朗帛通信技术有限公司 Method and apparatus in a communication node for wireless communication

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1860884A1 (en) * 2006-05-26 2007-11-28 BRITISH TELECOMMUNICATIONS public limited company Video processing
CN101271720B (en) * 2008-04-22 2011-06-22 中兴通讯股份有限公司 Synchronization process for mobile phone stream media audio and video
CN102497578B (en) * 2011-11-25 2014-05-21 武汉大学 Mobile audio and video real-time communication method in 3G network environment
CN106331820B (en) * 2015-06-29 2020-01-07 成都鼎桥通信技术有限公司 Audio and video synchronization processing method and device
CN110351201B (en) * 2018-04-04 2021-09-14 华为技术有限公司 Data processing method and device

Also Published As

Publication number Publication date
WO2021244218A1 (en) 2021-12-09

Similar Documents

Publication Publication Date Title
CN113766567A (en) Communication method and device
WO2021259112A1 (en) Service transmission method and apparatus
WO2021259129A1 (en) Communication method and communication device
US20190174360A1 (en) Service transmission control method, related device, and communications system
US20230231787A1 (en) Communication method and an apparatus
US20230090232A1 (en) Terminal device and network device
EP2658208B1 (en) Method, device, and system for processing streaming media service data
US20240031870A1 (en) Media data transmission method and communication apparatus
US20230354334A1 (en) Communication method and apparatus
US20230224525A1 (en) Video data transmission method and apparatus
US20230050923A1 (en) Media packet transmission method, apparatus, and system
CN116261173A (en) Communication method and device
US20220361281A1 (en) Method and apparatus for adaptive discontinous reception configuration
CN117882385A (en) System and method for modem power aware augmented reality (XR) and gaming software applications
WO2023231025A1 (en) Wireless communication method and device for extended reality traffic
WO2024011380A1 (en) Wireless communication method and base station for extended reality traffic
WO2024067374A1 (en) Communication method and apparatus
WO2023173292A1 (en) Wireless communication method, and devices
WO2023231026A1 (en) Wireless communication method and device for extended reality traffic
WO2023070392A1 (en) Data transmission method, device, and storage medium
WO2023216986A1 (en) Buffer status report (bsr) indication method, and apparatus
EP4274189A2 (en) Packet validity time enhancement for quality of service flows
WO2023173293A1 (en) Wireless communication method, and device
WO2023108413A1 (en) Method and apparatus of reporting buffer status
WO2024060985A1 (en) Communication method, network device and terminal device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination