WO2022183431A1 - Data processing method and device - Google Patents
Data processing method and device
- Publication number
- WO2022183431A1 (application PCT/CN2021/079055)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- media data
- frame
- data
- buffer
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/24—Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
- H04N21/2401—Monitoring of the client buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/613—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for the control of the source by the destination
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/30—Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/612—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/65—Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/80—Responding to QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44004—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/647—Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
- H04N21/64784—Data processing by the network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/32—Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/34—Flow control; Congestion control ensuring sequence integrity, e.g. using sequence numbers
Definitions
- the present application relates to the field of communications, and in particular, to a data processing method and device.
- A static jitter buffer requires a fixed buffer to be set on the user device, which leaves the interaction delay unguaranteed and degrades the user experience.
- Existing schemes offer poor real-time performance and accuracy, and cannot satisfy the demands of AR/VR media services, which require both high image quality and low-latency interaction. How to optimize the jitter buffer to meet the demands of real-time media services with a large amount of data has therefore become an urgent problem to be solved.
- The present application provides a data processing method that can meet the jitter buffering requirements of large-volume real-time media services during playback, and improve the real-time performance and accuracy of real-time media data transmission.
- A first aspect provides a data processing method, executed in a radio access network (RAN) device, comprising: acquiring first information of first media data, where the first information indicates the size of the first media data; determining a playback strategy for the first media data according to the first information, where the playback strategy indicates a buffer size or a playback rate; and sending the playback strategy to the terminal device (UE).
- The RAN may acquire the first information of the first media data before acquiring the first media data itself, or may acquire the first information together with the first media data; this application does not limit the order.
- The RAN may assist the UE to determine in advance the buffer size used for buffering the first media data, and may assist the UE to decide a playback strategy for media data that is waiting to be played and belongs to the same service as the first media data.
- The playback policy may specify whether the UE performs frame skipping on the media data waiting to be played, or the number of frames to skip. This enables the UE to meet the jitter buffering requirements of large-volume real-time media services during playback. Because the buffer is adjusted in real time according to the data volume, frame loss due to insufficient buffering can be avoided, improving the fluency and accuracy of real-time media data playback and the user's real-time media experience.
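The buffer-sizing and playback-rate decision described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the claimed method: the function name, the resize rule, and the speed-up heuristic are all hypothetical.

```python
# Hypothetical sketch: deriving a playback strategy (buffer size or
# playback rate) from the "first information" that indicates the size
# of incoming media data. The sizing rule is an illustrative assumption.

def decide_playback_strategy(frame_size_bytes: int,
                             current_buffer_bytes: int,
                             max_buffer_bytes: int) -> dict:
    """Return a buffer-size or playback-rate recommendation for the UE."""
    needed = current_buffer_bytes + frame_size_bytes
    if needed <= max_buffer_bytes:
        # Enough headroom: grow the jitter buffer to hold the new frame.
        return {"action": "resize_buffer", "buffer_bytes": needed}
    # Buffer would overflow: speed up playback to drain queued frames.
    overflow_ratio = needed / max_buffer_bytes
    return {"action": "speed_up", "playback_rate": round(overflow_ratio, 2)}
```

In this sketch the RAN (or UE) grows the buffer while space allows, and only raises the playback rate once the indicated data volume would exceed the maximum buffer.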
- The first information is frame type information, or identification information corresponding to the frame type of the first media data together with data volume information corresponding to that frame type.
- The RAN may itself be able to perceive the size of the first media data from the frame type information, or may determine it directly from the identification information corresponding to the frame type and the associated data volume information. On this basis, the RAN can combine the frame type information with the perceived size of the first media data to assist the UE in deciding which frames to skip, monitoring in real time the status of the frames in the UE's buffer and the frame type of the first media data.
- Acquiring the first information of the first media data includes acquiring parameter information of the first media data, where the parameter information includes the first information and one or more of the following: stream description information of the first media data, frame rate (FPS) information of the first media data, buffer status information of the UE, network status information, or a buffer threshold of the first media data, where the buffer threshold indicates the buffer size for media data played by the UE.
- The buffering threshold of the first media data is the amount of media data (the initial buffering threshold) that the UE must buffer before playback begins, in order to guarantee the quality of real-time media data playback; once this amount of data has been reached, the UE starts playing the media data.
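The initial buffering threshold behaviour just described can be illustrated with a small sketch. The class and field names are assumptions for illustration only:

```python
# Illustrative sketch of the initial buffering threshold: the UE buffers
# incoming media until the threshold is reached, then starts playback.

class JitterBuffer:
    def __init__(self, initial_threshold_bytes: int):
        self.threshold = initial_threshold_bytes
        self.buffered = 0
        self.playing = False

    def on_media(self, nbytes: int) -> bool:
        """Buffer incoming media; return True once playback may start."""
        self.buffered += nbytes
        if not self.playing and self.buffered >= self.threshold:
            self.playing = True  # initial threshold reached: start playback
        return self.playing
```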
- With this information, the RAN can further assist the UE to determine the buffer size for buffering the first media data and to decide the playback strategy for media data waiting to be played that belongs to the same service as the first media data, further improving the smoothness and accuracy of large-volume real-time media data playback and the user's real-time media experience.
- Determining a playback strategy for the first media data according to the first information includes determining the playback strategy for the first media data according to the parameter information.
- The buffer status information includes one or more of the following: the buffer size occupied by media data that the UE is waiting to play, the maximum buffer size the UE can use to store the first media data, or frame status information of the media data to be played.
- Sending the playback policy to the terminal device (UE) includes carrying the playback policy information in radio resource control (RRC) information or Packet Data Convergence Protocol (PDCP) information, and the RAN sending the playback policy information to the UE.
- A second aspect provides a data processing method, executed in a radio access network (RAN) device, comprising: acquiring first information of first media data and buffer status information of a terminal device (UE), the first information indicating the size of the first media data; and determining a transmission strategy for the first media data according to the first information and the buffer status information, where the transmission strategy indicates the transmission rate of the first media data and/or the transmission priority of the first media data, or whether to discard the first media data.
- Using the first information and the buffer status information reported by the UE, the RAN can determine whether more transmission resources should be allocated to speed up the transmission of the first media data, and can decide whether to transmit the first media data at high priority. This safeguards the quality of the UE's media data playback, improves the smoothness and accuracy of large-volume real-time media data playback, and improves the user's real-time media experience.
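A simplified sketch of this transmission decision is given below. The thresholds and the keep-keyframes rule are illustrative assumptions, not the claimed strategy:

```python
# Hedged sketch: from the indicated frame size/type and the UE's reported
# buffer status, the RAN picks a transmission priority or drops the frame.

def decide_transmission(frame_size: int, frame_type: str,
                        ue_free_buffer: int, ue_queued_frames: int) -> dict:
    if frame_size > ue_free_buffer:
        # No room at the UE: dropping a non-key frame beats stalling playback.
        if frame_type != "I":
            return {"action": "drop"}
        return {"action": "send", "priority": "high"}  # keep I-frames
    if ue_queued_frames < 2:
        # UE is close to buffer underrun: transmit at high priority.
        return {"action": "send", "priority": "high"}
    return {"action": "send", "priority": "normal"}
```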
- The first information is frame type information, or identification information corresponding to the frame type of the first media data together with data volume information corresponding to that frame type.
- The RAN may itself be able to perceive the size of the first media data, or may determine it directly from the identification information corresponding to the frame type and the associated data volume information, further improving the fluency and accuracy of large-volume real-time media data playback and the user's real-time media experience.
- Acquiring the first information of the first media data and the buffer status information of the terminal device (UE) includes acquiring parameter information of the first media data, where the parameter information includes the first information and the buffer status information of the UE, and further includes one or more of the following: stream description information of the first media data, frame rate (FPS) information of the first media data, a buffer threshold of the first media data (indicating the buffer size for media data played by the UE), network status information, and tolerable delay information, where the tolerable delay information indicates how long the UE will wait for the arrival of the frame following the one currently playing.
- The buffering threshold of the first media data is the amount of media data (the initial buffering threshold) that the UE must buffer before playback begins, in order to guarantee the quality of real-time media data playback; once this amount of data has been reached, the UE starts playing the media data.
- The buffer status information includes one or more of the following: the buffer size occupied by media data that the UE is waiting to play, the maximum buffer size the UE can use to store the first media data, or frame status information of the media data to be played.
- The buffer status information is carried in radio resource control (RRC) information or Packet Data Convergence Protocol (PDCP) information received from the UE.
- The identification information corresponding to the frame type of the first media data is carried in General Packet Radio Service Tunneling Protocol (GTP) information of the first media data.
- A third aspect provides a data processing method, executed in a user plane function (UPF) network element, comprising: receiving first parameter information, where the first parameter information indicates the type of first media data; receiving the first media data; determining first identification information according to the first parameter information, where the first identification information identifies the frame type of the first media data; and sending second media data to the radio access network (RAN) device, where the second media data includes the first identification information and the first media data.
- By identifying the first media data, the UPF helps the RAN to recognize the frame type of the first media data, improving the user's real-time media experience.
- The first parameter information includes: data type information of the first media data, identification information corresponding to the data type of the first media data, or GOP frame sequence information of the first media data together with Real-time Transport Protocol (RTP) information of the first media data.
- The data type may refer to, for example, an intra-coded frame (I-frame), a predicted frame (P-frame), or a bidirectionally predicted frame (B-frame) in a video stream. I-frames are encoded simply with the discrete cosine transform, without motion estimation/compensation; P-frames perform motion estimation/compensation relative to a preceding I-frame or P-frame and then encode the residual data with the discrete cosine transform; B-frames perform motion estimation/compensation like P-frames, but from two reference frames on the time axis. The data type may also refer to other similar types, which this application does not limit.
- The first identification information is carried in information of the General Packet Radio Service Tunneling Protocol (GTP) layer of the second media data.
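A minimal illustration of carrying a frame-type identifier alongside media in a GTP-U-style header is shown below. The 1-byte type code and the byte layout here are hypothetical; a real deployment would follow the 3GPP TS 29.281 extension header rules rather than this simplified packing.

```python
# Simplified, hypothetical packing of a frame-type identifier into a
# GTP-U-like header: flags(1) | msg type 0xFF (G-PDU)(1) | length(2) |
# TEID(4) | frame-type id(1) | payload. Not the real 3GPP wire format.
import struct

FRAME_TYPE_CODE = {"I": 1, "P": 2, "B": 3}  # assumed 1-byte identifier

def pack_media(frame_type: str, teid: int, payload: bytes) -> bytes:
    header = struct.pack("!BBHIB", 0x30, 0xFF,
                         len(payload) + 1, teid,
                         FRAME_TYPE_CODE[frame_type])
    return header + payload

def unpack_frame_type(packet: bytes) -> str:
    code = packet[8]  # byte after the 8-byte mandatory header
    return {v: k for k, v in FRAME_TYPE_CODE.items()}[code]
```

With this marking, a RAN node can read the frame type without inspecting the media payload itself.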
- A fourth aspect provides a data processing method, executed in a terminal device (UE), comprising: sending buffer status information of the UE to the radio access network (RAN), where the buffer status information is used to determine a playback strategy for the UE, the playback strategy indicating a buffer size or playback rate; and receiving the playback strategy from the RAN.
- By sending the buffer status information to the RAN, the UE helps the RAN formulate a strategy for the UE to play media data, improving the user's real-time media experience.
- The buffer status information includes one or more of the following: the buffer size occupied by media data that the UE is waiting to play, the maximum buffer size the UE can use to store the first media data, or frame status information of the media data to be played.
- The buffer status information is carried in radio resource control (RRC) information or Packet Data Convergence Protocol (PDCP) information.
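The UE-side report can be sketched as a simple structure mirroring the fields listed above (occupied buffer, maximum buffer, frame status). The dict encoding is an assumption for illustration; in practice the report would ride in RRC or PDCP signalling:

```python
# Illustrative UE buffer status report; field names mirror the text,
# the encoding itself is an assumption.

def build_buffer_status_report(occupied_bytes: int,
                               max_buffer_bytes: int,
                               frames_waiting: int) -> dict:
    return {
        "occupied_buffer": occupied_bytes,   # media waiting to be played
        "max_buffer": max_buffer_bytes,      # space usable for new media
        "frame_status": {"queued_frames": frames_waiting},
    }
```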
- A fifth aspect provides a data processing method, executed in a first media server, comprising: determining first service information, where the first service information includes at least one of the following: stream description information of the first media data, type information of the first media data, frame rate information of the first media data, GOP frame sequence information of the first media data, data volume information corresponding to the type of the first media data, initial jitter buffer threshold information of the first media data, tolerable delay information of the first media data, or identification information corresponding to the type of the first media data; sending the first service information; and sending the first media data.
- By delivering the first service information, the media server helps the core network device formulate the transmission strategy for the first media data and the UE's playback strategy, improving the user's real-time media experience.
- A sixth aspect provides a data processing method, executed in a terminal device (UE), comprising: acquiring first information of first media data, where the first information indicates the size of the first media data; determining a playback strategy corresponding to the first media data according to the first information, where the playback strategy indicates a buffer size or playback rate; and executing the playback strategy.
- The UE obtains the first information of the first media data before receiving the first media data, so it may determine in advance the buffer size needed to buffer the first media data and decide a playback strategy for media data waiting to be played that belongs to the same service. The playback strategy may specify whether the UE performs frame skipping on the media data waiting to be played, or the number of frames to skip. This allows the UE to meet the jitter buffering requirements of large-volume real-time media services during playback; adjusting the buffer in real time according to the data volume avoids frame loss from insufficient buffering, improving the smoothness and accuracy of real-time media data playback and the user's real-time media experience.
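The frame-skip decision described above can be sketched as follows. The queue model and the skip-oldest-first rule are assumptions for illustration, not the claimed strategy:

```python
# Illustrative UE-side frame-skip decision: when the indicated size of
# incoming media would not fit the free buffer, skip enough of the
# oldest queued frames to make room.

def frames_to_skip(queued_frame_sizes: list, free_bytes: int,
                   incoming_bytes: int) -> int:
    """Return how many queued frames to skip so the incoming media fits."""
    skip = 0
    for size in queued_frame_sizes:   # oldest first
        if incoming_bytes <= free_bytes:
            break
        free_bytes += size            # skipping a frame frees its space
        skip += 1
    return skip
```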
- The first information is frame type information, or identification information corresponding to the frame type of the first media data together with data volume information corresponding to that frame type.
- Obtaining the first information of the first media data includes obtaining parameter information of the first media data, where the parameter information includes the first information and one or more of the following: stream description information of the first media data, frame rate (FPS) information of the first media data, a buffer threshold of the first media data (indicating the buffer size for media data played by the UE), buffer status information of the UE, network status information, GOP frame sequence information of the first media data, tolerable delay information of the first media data (indicating how long the UE will wait for the arrival of the frame following the one currently playing), and first transmission rate information, where the first transmission rate information indicates the network transmission rate between the UE and the radio access network (RAN).
- Determining the playback strategy corresponding to the first media data according to the first information includes determining the playback strategy for the first media data according to the parameter information.
- The identification information corresponding to the frame type of the first media data is carried in information of the General Packet Radio Service Tunneling Protocol (GTP) layer of the first media data.
- The buffer status information includes one or more of the following: the buffer size occupied by media data that the UE is waiting to play, the maximum buffer size the UE can use to store the first media data, or frame status information of the media data to be played.
- A seventh aspect provides a data processing device comprising at least one processor and a communication interface, the at least one processor being configured to invoke a computer program stored in at least one memory so that the device performs the method of the first aspect or any possible implementation of the first aspect.
- An eighth aspect provides a data processing device comprising at least one processor and a communication interface, the at least one processor being configured to invoke a computer program stored in at least one memory so that the device performs the method of the second aspect or any possible implementation of the second aspect.
- A ninth aspect provides a data processing device comprising at least one processor and a communication interface, the at least one processor being configured to invoke a computer program stored in at least one memory so that the device performs the method of the third aspect or any possible implementation of the third aspect.
- A tenth aspect provides a data processing device comprising at least one processor and a communication interface, the at least one processor being configured to invoke a computer program stored in at least one memory so that the device performs the method of the fourth aspect or any possible implementation of the fourth aspect.
- An eleventh aspect provides a data processing device comprising at least one processor and a communication interface, the at least one processor being configured to invoke a computer program stored in at least one memory so that the device performs the method of the fifth aspect or any possible implementation of the fifth aspect.
- A twelfth aspect provides a data processing apparatus comprising at least one processor and a communication interface, the at least one processor being configured to invoke a computer program stored in at least one memory so that the apparatus performs the method of the sixth aspect or any possible implementation of the sixth aspect.
- A thirteenth aspect provides a computer-readable storage medium storing a computer program which, when executed, causes an apparatus to perform the method of any one of the first through sixth aspects, or any possible implementation of those aspects.
- A fourteenth aspect provides a chip system comprising a processor for calling and running a computer program from a memory, so that a communication device installed with the chip system performs the method of any one of the first through sixth aspects, or any possible implementation of those aspects.
- A fifteenth aspect provides a communication system comprising: a network device configured to perform the method of the first, second, third, or fifth aspect, or any possible implementation of those aspects; and a terminal device configured to perform the method of the fourth or sixth aspect, or any possible implementation of those aspects.
- A further aspect provides a computer program product comprising computer program code which, when run by a network device, causes the network device to perform the method of the first, second, third, or fifth aspect, or any possible implementation of those aspects.
- A further aspect provides a computer program product comprising computer program code which, when run by a terminal device, causes the terminal device to perform the method of the fourth or sixth aspect, or any possible implementation of those aspects.
- FIG. 1 shows a schematic diagram of a 5G communication system to which an embodiment of the present application applies.
- FIG. 2 is a schematic flowchart of an example of a data processing method provided by an embodiment of the present application
- FIG. 3 shows an example of a schematic flow of a session establishment method according to an embodiment of the present application
- FIG. 4 is a schematic flowchart of another example of a data processing method according to an embodiment of the present application.
- FIG. 5 is a schematic flowchart of another example of a data processing method according to an embodiment of the present application.
- FIG. 6 is a schematic block diagram of an example of a network device according to an embodiment of the present application.
- FIG. 7 is a schematic block diagram of an example of a terminal device according to an embodiment of the present application.
- FIG. 8 is a schematic block diagram of another example of a network device according to an embodiment of the present application.
- FIG. 9 is a schematic block diagram of another example of a terminal device according to an embodiment of the present application.
- the methods of the embodiments of the present application may be applied to a long term evolution (LTE) system, a long term evolution advanced (LTE-A) system, an enhanced long term evolution (eLTE) system, or the new radio (NR) system of the fifth generation (5G) mobile communication system, and may also be extended to similar wireless communication systems, such as wireless fidelity (WiFi), worldwide interoperability for microwave access (WiMAX), future sixth generation (6G) systems, and cellular systems related to the 3rd generation partnership project (3GPP).
- a network device is a device deployed in a wireless access network to provide a wireless communication function for a terminal device.
- the network device may include various forms of base stations such as macro base stations, micro base stations (also called small cells), relay stations, and access points, or various network element devices in a core network (CN).
- the names of devices with base station functions may vary.
- for example, a network device may be an access point (AP) in a wireless local area network (WLAN), or a base transceiver station (BTS) in a global system for mobile communication (GSM) or code division multiple access (CDMA) system.
- the network device may also be a node B (gNB) in a 5G system or an evolved node B (eNB or eNodeB) in an LTE system.
- the network device may also be a node B (Node B) of the third generation (3G) system; in addition, the network device may also be a relay station, an access point, a vehicle-mounted device, a wearable device, a radio access network ((R)AN) network device in a fifth-generation (5G) communication network, a network device in a future evolved public land mobile network (PLMN), or the like.
- the terminal device in the embodiments of the present application may also be referred to as user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station (MS), a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a terminal device agent, or a terminal device apparatus.
- terminal devices may include various handheld devices with wireless communication capability, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, and may also include subscriber units, cellular phones, smart phones, wireless data cards, personal digital assistant (PDA) computers, tablet computers, wireless modems, handheld devices, laptop computers, machine type communication (MTC) terminals, and stations (ST) in wireless local area networks (WLAN).
- FIG. 1 shows a schematic diagram of a 5G communication system 100 to which an embodiment of the present application is applied.
- the communication system at least includes a terminal device 110, a radio access network ((R)AN) network element 120, a user plane network element 130, an application function network element 140, an access management network element 150, a session management network element 160 and a policy control network element 170.
- a “network element” may also be referred to as an entity, a device, an apparatus, or a module, etc., which is not particularly limited in this application.
- the description of "network element” is omitted in some descriptions.
- the (R)AN network element is abbreviated as (R)AN.
- the "(R)AN" should be understood as the (R)AN network element or (R)AN entity; below, the description of the same or similar situations is omitted.
- the terminal device 110 may refer to the above description about the terminal device, which will not be repeated here.
- for the (R)AN network element 120, reference may be made to the above description about the network device, which is not repeated here.
- the user plane network element 130 can be connected to the same or different data networks, so as to realize the data transmission of the service.
- it can also be used for packet routing and forwarding and user plane data quality of service (quality of service, QoS) processing.
- the user plane network element may be a user plane function (UPF) network element.
- the user plane network element may still be the UPF network element, or may have other names, which are not limited in this application.
- an application function network element (application function, AF) 140: used to realize the information exchange between the external server and the 3GPP network.
- the access management network element 150 is mainly used for mobility management and access management, and can be used to implement functions other than session management among the mobility management entity (MME) functions, for example, functions such as lawful interception, access authorization/authentication, terminal device attachment, mobility management, and the tracking area update process.
- the access management network element may be an access and mobility management function (AMF) network element.
- the access management network element may still be an AMF network element, or may have other names, which are not limited in this application.
- the session management network element 160 is used for session management.
- session management includes the selection and reselection of user plane devices, internet protocol (IP) address allocation for terminal devices, quality of service (QoS) control, and the establishment, modification or release of sessions.
- the session management network element may be a session management function (SMF) network element.
- the session management network element may still be an SMF network element, or may have other names, which are not limited in this application.
- the policy control network element 170 provides a unified policy framework for guiding network behavior, including the functions of policy control and flow-based charging control.
- policy rule information can be provided for control plane function network elements (eg AMF, SMF network elements, etc.) to implement user subscription data management functions, policy control functions, charging policy control functions, QoS control, and the like.
- an application server (AS) 180 is used to determine and send media service data.
- the policy control network element may be a policy and charging rules function (PCRF) network element.
- policy control network element may be a policy control function (PCF) network element.
- the policy control network element may still be the PCF network element, or may have other names, which are not limited in this application.
- the above network elements may be network elements implemented on dedicated hardware, software instances running on dedicated hardware, or virtualized function instances on a virtualization platform.
- the above-mentioned virtualization platform may be a cloud platform.
- the embodiments of the present application may also be applicable to other future-oriented communication technologies.
- the network architecture and service scenarios described in this application are intended to illustrate the technical solutions of this application more clearly and do not constitute a limitation on the technical solutions provided by this application; as the network architecture evolves and new service scenarios appear, the technical solutions provided in this application are also applicable to similar technical problems.
- FIG. 2 is a flowchart of a data processing method 200 provided by an embodiment of the present application. This method can be used in the above-mentioned 5G communication system for AR/VR services and other scenarios involving real-time interaction of large amounts of data.
- the method can include:
- the RAN acquires first information of the first media data, where the first information is used to indicate the size of the first media data.
- the first information includes one or more of the following information:
- frame type, where the frame type may be an intra-frame coded frame (I frame), a predicted frame (P frame), or a bidirectionally predicted frame (B frame) in the video stream, and the I frame is obtained by discrete cosine transform coding;
- the frame type can also be represented by the frame order in the group of pictures (GOP).
- the RAN can record the order in which the media data arrives according to this rule to obtain the frame type; alternatively, the frame can carry a frame sequence number (identification #A), and the frame type corresponding to the position in the GOP is then obtained from the remainder of dividing the frame sequence number by 9. Assuming that the frame sequence number starts from 0, when the remainder is 0 the frame type is I frame, when the remainder is 1 the frame type is B frame, and so on; the other remainders represent the remaining frame types in turn;
- identification information corresponding to the frame type, used to identify different frame types; for example, the 2-bit value "00" represents an I frame, "01" represents a P frame, and "10" represents a B frame. It should be understood that the form of the identification information is not limited in this application. The identification information can be carried in the general packet radio service tunneling protocol (GTP) information of the first media data;
- data volume information corresponding to the frame type, where the data volume refers to the data size of that frame type; for example, a data volume of 5KB for an I frame means that the data size of the I frame is 5KB.
- the data volume can be a statistical value, for example, it can be calculated by taking an average value.
- the data amount of the I frame counted in this way may also be calculated by other methods, which is not limited in this application.
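As a non-normative illustration of the frame sequence number rule and the 2-bit identification information above, the following Python sketch derives a frame type and its identifier from identification #A. The full 9-frame GOP layout is an assumption here: only the I-frame (remainder 0) and B-frame (remainder 1) positions are fixed by the description.

```python
# Illustrative sketch: derive the frame type of a media frame from its
# sequence number (identification #A). The GOP pattern below is assumed;
# only index 0 (I frame) and index 1 (B frame) follow from the text.
GOP_PATTERN = ["I", "B", "B", "P", "B", "B", "P", "B", "B"]  # assumed layout

# 2-bit identification information that can be carried in the GTP information
FRAME_TYPE_BITS = {"I": "00", "P": "01", "B": "10"}

def frame_type(frame_seq: int) -> str:
    """Frame sequence numbers start from 0; the remainder modulo the GOP
    length gives the position in the GOP and hence the frame type."""
    return GOP_PATTERN[frame_seq % len(GOP_PATTERN)]

def frame_type_id(frame_seq: int) -> str:
    """Return the 2-bit identifier for the frame's type."""
    return FRAME_TYPE_BITS[frame_type(frame_seq)]
```

For example, sequence number 9 wraps back to the start of the next GOP and is again treated as an I frame.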
- the RAN acquires parameter information of the first media data, where the parameter information includes the first information, and the parameter information may include one or more of the following information in addition to the first information:
- Stream description information (it can be IP triplet information, which can correspond to the service to which the first media data belongs), which can represent the IP address of the application server, the port number through which the application server sends the media data, and the protocol used to send the media data, etc.
- the media data can be ultra-high-definition video streams, VR video streams, voice data and other media data, and different stream description information corresponds to different media servers;
- frame rate (frames per second, FPS) information, that is, the number of frames of the first media data transmitted per second, the number of frames encoded and output by the encoder per second, or the number of frames played by the player per second;
- jitter buffer initial threshold information, that is, the data size to be buffered before the media player of the UE plays the media data belonging to the service for the first time;
- tolerable delay information, that is, the time that the media player of the UE waits for the arrival of the frame following the currently playing frame. For example, when the UE plays the fifth frame, if it does not receive the sixth frame within 5ms, it skips the sixth frame and plays the seventh frame; the 5ms is the tolerable delay. A frame that arrives later than the tolerable delay no longer needs to be played;
- cache status information, including one or more of the following: the buffer size occupied by the media data that the UE is waiting to play, the maximum buffer that the UE can use to store the first media data, or the frame status information of the media data to be played, where the media data to be played and the first media data belong to the same service and the media data to be played is the media data waiting to be played in the cache of the UE; the cache status information can be carried in radio resource control (RRC) information sent by the UE or in packet data convergence protocol (PDCP) information;
- network status information, where the network status information is the transmission rate between the RAN and the UE.
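To make the parameter set above concrete, the following sketch groups the fields into one structure. All field names, types and sample values are illustrative assumptions, not a 3GPP information element layout.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MediaStreamParameters:
    # stream description: e.g. IP triplet identifying the service
    flow_description: str
    frame_rate_fps: Optional[int] = None            # frames transmitted per second
    jitter_buffer_initial_kb: Optional[int] = None  # data to buffer before first playback
    tolerable_delay_ms: Optional[int] = None        # wait time for the next frame
    frame_type_data_volume_kb: dict = field(default_factory=dict)  # e.g. {"I": 5}

params = MediaStreamParameters(
    flow_description="203.0.113.7:5004/udp",  # hypothetical application server address
    frame_rate_fps=25,
    jitter_buffer_initial_kb=5000,
    tolerable_delay_ms=5,
    frame_type_data_volume_kb={"I": 5, "P": 2, "B": 1},
)
```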
- the RAN determines a play strategy for the first media data according to the first information, where the play strategy is used to indicate a buffer size or a play rate.
- the play policy may be used to indicate the buffer size used by the terminal device to cache the first media data, or to indicate whether the terminal device needs to skip frames when playing the media data to be played and, if frame skipping is required, how many frames or which frames to skip.
- the play policy information may be carried in radio resource control RRC information or packet data convergence protocol PDCP information.
- the RAN may determine the play strategy for the first media data according to the first information in the following ways:
- the RAN determines the play strategy of the UE according to the frame type information of the first media data. For example, if the frame type of the first media data is an I frame, the playback strategy indicates that the buffer size of the UE is 50M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 4 frames may be skipped). (The human eye perceives coherent images at about 24 frames per second; as long as the playback rate is greater than 24 frames per second, the continuity of the image is not affected, and, due to the characteristics of GOP image coding, skipping some frames other than the key frames has no substantial effect on playback.)
- the RAN determines the play strategy of the UE according to the frame type identifier in the GTP layer of the first media data. For example, if the identifier in the GTP layer of the first media data is "00", the frame type corresponding to the identifier is I frame, and the playback strategy indicates that the buffer size of the UE is 50M, and/or a certain number of frames of the remaining types other than the I frame are skipped (for example, 4 frames may be skipped).
- the RAN determines the play strategy of the UE according to the data amount of the first media data.
- for example, if the data volume of the first media data is 5M, the playback strategy indicates that the buffer size of the UE is 50M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 2 frames may be skipped).
- if the data volume of the first media data is 10M, the playback strategy indicates that the buffer size of the UE is 80M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 4 frames may be skipped).
- the RAN may determine the playback strategy for the first media data according to the parameter information in the following ways:
- the RAN determines the play strategy of the UE according to the frame rate information of the first media data.
- for example, if the frame rate of the first media data is 25FPS, the playback strategy indicates that the buffer size of the UE is 50M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 2 frames may be skipped).
- if the frame rate of the first media data is 30FPS, the playback strategy indicates that the buffer size of the UE is 80M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 4 frames may be skipped).
- the RAN determines the play strategy of the UE according to the initial threshold information of the jitter buffer of the first media data.
- for example, if the initial threshold of the jitter buffer of the first media data is 5M, the playback strategy indicates that the buffer size of the UE is 50M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 2 frames may be skipped).
- if the initial threshold of the jitter buffer of the first media data is 10M, the playback strategy indicates that the buffer size of the UE is 80M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 4 frames may be skipped).
- the RAN determines the play strategy of the UE according to the tolerable delay information of the first media data.
- for example, if the tolerable delay of the first media data is 5ms, the playback strategy indicates that the buffer size of the UE is 50M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 2 frames may be skipped).
- if the tolerable delay of the first media data is 2ms, the playback strategy indicates that the buffer size of the UE is 80M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 4 frames may be skipped).
- the RAN determines the play strategy of the UE according to the buffer status information of the UE, for example, according to whether the idle buffer of the UE exceeds 50M. If it exceeds 50M, the playback strategy indicates that the buffer size of UE#A is 60M and no frames are skipped; if it does not exceed 50M, the playback strategy indicates that the buffer size of UE#A is 80M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 2 frames may be skipped).
- the RAN determines the play strategy of the UE according to the network status information of the UE, for example, according to whether the transmission rate between the RAN and the UE exceeds 50Mbps. If it exceeds 50Mbps, the playback strategy indicates that the buffer size of UE#A is 50M and no frames are skipped; if it does not exceed 50Mbps, the playback strategy indicates that the buffer size of UE#A is 80M, and/or a certain number of frames of the remaining types other than the I frame in the playback buffer are skipped (for example, 2 frames may be skipped).
- the RAN can also combine the above manners to determine the play strategy of the UE, for example, determining the play strategy of the UE according to the frame type identifier of the first media data and the data volume corresponding to the frame type; for the manner of determining the play strategy, refer to the first and third manners above. Other combinations are deduced in the same way and are not described again here.
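The threshold examples in the manners above can be sketched as simple decision functions. The 5M/10M, 50M/60M/80M and 2/4-frame values come from the examples in the text, while the function shapes and return format are assumptions for illustration.

```python
def strategy_from_data_volume(data_volume_mb: float) -> dict:
    """Example thresholds from the text: data volume 5M -> 50M buffer and
    skip 2 non-I frames; larger (e.g. 10M) -> 80M buffer and skip 4."""
    if data_volume_mb <= 5:
        return {"buffer_mb": 50, "skip_non_i_frames": 2}
    return {"buffer_mb": 80, "skip_non_i_frames": 4}

def strategy_from_idle_buffer(idle_buffer_mb: float) -> dict:
    """Example thresholds from the text: idle buffer above 50M -> 60M buffer
    and no skipping; otherwise -> 80M buffer and skip 2 non-I frames."""
    if idle_buffer_mb > 50:
        return {"buffer_mb": 60, "skip_non_i_frames": 0}
    return {"buffer_mb": 80, "skip_non_i_frames": 2}
```

Only frames other than I frames are ever candidates for skipping, matching the GOP-coding rationale above.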
- the RAN may acquire the first information of the first media data before acquiring the first media data, or may acquire the first information simultaneously with the first media data, which is not limited in this application.
- the RAN may assist the UE to determine in advance the buffer size used for buffering the first media data, and assist the UE to decide a playback strategy for the media data that is waiting to be played and belongs to the same service as the first media data. This enables the UE to meet the jitter buffering requirements during playback of real-time media services with a large amount of data; adjusting the buffer in real time based on the data volume can avoid frame loss due to insufficient buffering, which improves the fluency and accuracy of playback of real-time media data with a large amount of data and improves the real-time experience of users' media services.
- This application describes the embodiments by taking UE#A, RAN#A, AMF#A, SMF#A, UPF#A, PCF#A, and AF#A as examples of the above network elements.
- FIG. 3 shows an example of a schematic flow of a session establishment method 300 according to an embodiment of the present application.
- AF#A determines information #A, where information #A is used to indicate relevant parameters of the embodiment of the present application.
- the parameter information of information #A includes at least one of the following information: stream description information, frame rate information, jitter buffer initial threshold information, tolerable delay information, frame sequence information in the GOP, frame type and identification #A and their correspondence information, frame type information, or frame type and the data amount information corresponding to the frame type.
- for the description of the above information, reference may be made to the description in method 200; details are not repeated here.
- the specific information #A comes from AS#A, and AS#A interacts with the 3GPP network through AF#A; information #A can be finalized by AF#A, or sent to AF#A after being confirmed by AS#A, which is not limited here.
- the application server AS#A may further determine the media data information #F and send the media data information #F to the UPF #A.
- the media data information #F includes media data #A
- the media data #A may be media data such as video stream or voice data, and may also include identification #A.
- the identification #A of media data #A1 is "00", which represents an I frame
- the identification #A of media data #A2 is "01", which represents a P frame. It should be understood that the type of identification #A is not limited in this application.
- the frame may carry a frame sequence number (identification #A), and the frame type is then obtained from the remainder of dividing the frame sequence number by 9 according to the sequence in the GOP: when the remainder is 0 the frame type is I frame, when the remainder is 1 the frame type is B frame, and the other remainders represent the remaining frame types in turn.
- the media data #A and the identification #A can be sent to the UPF #A at the same time, and the UPF #A can identify and process the media data #A.
- the types of the above media data can be represented not only by "I frame", "B frame" and "P frame", but also by the frame sequence number in a GOP, or by different frame types corresponding to other encoding technologies, such as the P frame, Golden frame and AltRef frame in the VP8/VP9 coding technology; the type of the media data is not limited in this application.
- AF#A sends information #A to PCF#A through the application function network element request AF Request, and PCF#A receives the information #A.
- PCF#A sends response information to AF#A, which is used to indicate that information #A is successfully received, and AF#A receives the response information.
- UE#A sends protocol data unit (PDU) session establishment/modification request information PDU Session Establishment/Modification Request to the AMF side through a non-access stratum (NAS) message, and the AMF side receives the request information.
- SMF#A initiates a session management policy association request to PCF#A.
- for example, triggered by the AMF service-based interface PDU session create/update session management context Namf_PDUSession_Create/UpdateSMContext, SMF#A initiates the session management policy association request to PCF#A.
- PCF#A determines information #B.
- PCF#A determines information #B according to information #A, and the parameter information of this information #B may be part of the parameter information of information #A or all parameter information, which will not be repeated in this application.
- the flow description information may be carried in a policy and charging control (PCC) rule; that is, the PCF determines the service flow template in the PCC rule according to the flow description information.
- the frame type and identifier #A and their correspondence information may also be carried in the PCC rule.
- PCF #A sends information #B to SMF #A, and SMF receives the information #B.
- for example, PCF#A sends information #B through the session management policy association establishment/modification SM Policy Association Establishment/Modification information.
- for example, PCF#A determines the service flow template information (Service Data Flow template) (information #B) according to the flow description information and sends it to SMF#A, so that SMF#A can determine the corresponding packet detection rule (PDR) according to the service flow template information and send it to UPF#A.
- SMF #A sends response information to PCF #A to indicate that the information #B is successfully received, and PCF #A receives the response information.
- SMF#A determines message #C, and sends message #C to UPF#A.
- for example, SMF#A sends information #C to UPF#A through N4 session establishment N4 Session Establishment according to information #B.
- the parameter information of this information #C may be part of the parameter information of information #B or all of the parameter information.
- for example, PCF#A determines the service flow template information (information #B) according to the flow description information and sends it to SMF#A, and SMF#A determines the corresponding packet detection rule (PDR) (information #C) according to the service flow template information; the frame type and identifier #A and their correspondence information may also be carried in the PDR.
- information #C may also include execution rule indication information for instructing UPF#A how to identify the frame type of the media data, for example, identifying the frame type of the media data according to the timestamp and sequence number in the RTP together with the GOP frame order information, identifying the frame type of the media data according to the frame type and identifier #A and their correspondence information, or having UPF#A perform enhanced analysis of the media data packets to determine the frame type of different data.
- the execution rule indication information may also be configured in UPF#A by enhancing UPF#A.
- UPF#A detects the frame type of the received media data.
- UPF #A detects the frame type of the received media data according to information #C, where information #C includes one or more of timestamps and sequence numbers in RTP, GOP frame sequence information or identifier #A.
- SMF#A determines message #D, and sends message #D to AMF#A.
- for example, SMF#A determines information #D according to information #B and sends it through the AMF service-based interface N1N2 message transfer Namf_CommunicationN1N2MessageTransfer.
- the parameter information of information #D may be part of the parameter information of information #B or all of the parameter information, which is not repeated here in this application.
- information #D may also include execution rule indication information, which is used to indicate the manner in which RAN#A identifies the frame type of the media data. This may be identifying the frame type of the media data according to the timestamp and sequence number in the RTP together with the GOP frame sequence information, identifying the frame type of the media data according to the frame type and identifier #A and their correspondence information, or identifying it according to the identifier #H (that is, after identifying which data packets belong to the same frame, the size of the data amount of the frame can be perceived). (In this application, RAN#A identifying the frame type of the media data can be understood as: identifying the type of the media data belonging to the same frame so as to know the size of the frame data according to the type, or directly perceiving the size of the media data belonging to the same frame without identifying the type of the frame data.) It should be understood that the execution rule indication information can also be configured in RAN#A by enhancing RAN#A.
- AMF#A sends response information to SMF#A to indicate successful reception of information #D, and SMF#A receives the response information.
- AMF#A forwards the message #D to RAN#A, and RAN#A receives the message #D.
- for example, AMF#A forwards information #D to RAN#A through the N2 PDU session request N2 PDU Session Request information or through the N2 session management N2 SM information.
- RAN#A detects the extension information of the general packet radio service tunneling protocol (GTP) layer of the media data #A according to information #D, where the extension information of the GTP layer includes the frame type information of the media data #A.
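On the receiving side, reading the frame type from such a GTP-layer identifier could look like the following sketch; the assumption that the 2-bit identifier sits in the low bits of one extension byte is purely illustrative, as the text does not fix a bit layout.

```python
# Mapping of 2-bit identifiers to frame types, from the description above.
BITS_TO_FRAME_TYPE = {0b00: "I", 0b01: "P", 0b10: "B"}

def frame_type_from_gtp_ext(ext_byte: int) -> str:
    """Assumed layout: the identifier occupies the two least
    significant bits of a GTP extension header byte."""
    return BITS_TO_FRAME_TYPE[ext_byte & 0b11]
```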
- RAN#A stores information #D.
- SMF#A determines information #E, and sends information #E to UE#A.
- the SMF#A determines the information #E according to the information #B, and the parameter information of the information #E may be part of the parameter information of the information #B or all the parameter information, which will not be repeated in this application.
- information #E may also include execution rule indication information, which is used to instruct UE#A to make corresponding adjustments to the cache according to the policy information issued by RAN#A. It should be understood that this information can also be configured in UE#A as configuration information, which is not limited in this application.
- UE#A establishes a PDU session with the core network.
- in this way, the relevant coding information of the corresponding media stream from AS#A on the server side, such as the frame rate, the data volume corresponding to the frame type, the tolerable delay, the frame sequence in the GOP, or the initial threshold of the jitter buffer on the UE side, is delivered to UE#A/RAN#A/UPF#A, the necessary nodes in the user plane data transmission process, enabling the parameter transfer needed for the subsequent transmission optimization scheme for real-time transmission of large amounts of data. It should be understood that the above parameters may be issued before the media data transmission or during the media transmission process, which is not limited in this application.
- FIG. 4 is a schematic flowchart of an example of a data processing method 400 according to an embodiment of the present application.
- UE#A establishes a PDU session with the core network.
- For the specific process of establishing the PDU session, refer to the method 300, which will not be repeated in this application.
- AS#A determines the media data information #F, and the content of the media data information #F has been described in the method 300, and will not be repeated here.
- AS#A sends media data information #F to UPF#A, and UPF#A receives the media data information #F.
- UPF #A determines frame type information of media data #A.
- UPF#A identifies the frame type information corresponding to the media service data according to the RTP header information of the media data #A and the frame sequence in the GOP, or identifies the frame type information corresponding to the media service data according to the identifier #A, or determines which media service data belong to the same group of frames according to the RTP header information of the media service data.
- UPF#A identifies the frame type information corresponding to the media service data according to the real-time transport protocol (real-time transport protocol, RTP) header information of the media service data and the frame sequence in the GOP.
- UPF#A determines which data packets of the media service data belong to the same frame (for example, frame #f1) according to the timestamp and sequence number in the RTP header information of the media service data, where the timestamp in the RTP header information represents the time at which the frame data was sampled: data packets with the same timestamp belong to the same frame. UPF#A then determines the type of frame #f1 according to the frame sequence information in the GOP saved during the establishment of the PDU session. For example, if the first frame #f1 is the first I frame of the first GOP of the service stream, the frame type of the data packets belonging to frame #f1 is I frame, and the subsequent frame types are determined from the GOP sequence and the RTP header in the same way.
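- The grouping-and-typing logic described above (packets sharing an RTP timestamp belong to one frame; frame types follow the GOP sequence saved at PDU session establishment) can be sketched as follows. The function name, data shapes, and the example GOP pattern are illustrative assumptions, not part of the patent:

```python
from collections import OrderedDict

# Hypothetical GOP frame-order pattern saved during PDU session setup;
# the pattern "IBBP" is an illustrative example only.
GOP_PATTERN = ["I", "B", "B", "P"]

def group_and_type_frames(packets, gop_pattern=GOP_PATTERN):
    """Group RTP packets that share a timestamp into frames, then assign
    each frame a type by walking the saved GOP sequence.

    `packets` is an iterable of (timestamp, seq_no, payload) tuples;
    packets with equal timestamps were sampled together and therefore
    belong to the same frame (e.g. frame #f1)."""
    frames = OrderedDict()
    for ts, seq, payload in sorted(packets, key=lambda p: (p[0], p[1])):
        frames.setdefault(ts, []).append((seq, payload))
    typed = []
    for i, (ts, pkts) in enumerate(frames.items()):
        # The first frame is taken as the first I frame of the first GOP.
        frame_type = gop_pattern[i % len(gop_pattern)]
        typed.append({"timestamp": ts, "type": frame_type, "packets": pkts})
    return typed
```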
- UPF #A processes the media data #A to determine media data information #I.
- UPF#A identifies the frame type information corresponding to the media data according to the real-time transport protocol (RTP) header information of the media data and the frame sequence in the GOP, or according to the identifier #A, or determines which media data belong to the same group of frames according to the RTP header information of the media data.
- UPF#A identifies the frame type information corresponding to the media data according to the RTP header information of the media data and the frame sequence in the GOP
- the specific manner may be:
- UPF#A determines which data packets of the media data belong to the same frame (e.g., frame #f1) according to the timestamp and sequence number in the RTP header information of the media data, where the timestamp in the RTP header information represents the time at which the frame data was sampled: if data packets have the same timestamp, they belong to the same frame. UPF#A then determines the type of frame #f1 according to the frame sequence information in the GOP saved during the establishment of the PDU session. For example, if the first frame #f1 is the first I frame of the first GOP of the service flow, the frame type of the data packets belonging to frame #f1 is I frame.
- UPF#A adds corresponding identification information #H in the GTP layer, where the identification information #H is used to identify the data packets belonging to the same frame or the frame type of the media data, and determines the media data information #I.
- For example, when UPF#A identifies that the frame type of a group of data packets (for example, p1, p2, and p3) belonging to frame #f1 of the media data is I frame, it adds "00" as the identification information #H, and the media data information #I is determined according to p1, p2, and p3 and the identification information #H of the respective GTP layers.
- UPF#A represents a group of data packets belonging to the same frame with the same identifier according to the RTP header information and the frame sequence in the GOP; for example, "00" indicates an I frame, "01" indicates a P frame, and "10" indicates a B frame.
- UPF#A copies the sequence number in the RTP header information of the media data packet to the GTP layer of the data packet.
- UPF#A adds sequence number information to the data packets in the same frame to ensure that the data packets belonging to the same frame will not be out of sequence during transmission.
- Optionally, the sequence number information added to the data packets by UPF#A is sequenced across frames, that is, the data packets of multiple frames are uniformly sequenced, to ensure that data packets belonging to the same type of frame are not out of order during transmission.
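- As a sketch of the marking step above, the GTP-layer identification information #H ("00"/"01"/"10" per the text) plus an intra-frame sequence number might be attached per packet as follows; the dictionary layout and function name are illustrative assumptions:

```python
# Frame-type identifiers from the text: "00" = I, "01" = P, "10" = B.
FRAME_TYPE_MARK = {"I": "00", "P": "01", "B": "10"}

def mark_gtp(frames):
    """Attach, per packet, a GTP-layer extension carrying (a) the frame-type
    identifier #H shared by all packets of one frame and (b) an intra-frame
    sequence number so packets of the same frame can be kept in order.

    `frames` is a list of {"type": ..., "packets": [(rtp_seq, payload), ...]}."""
    marked = []
    for frame in frames:
        ident = FRAME_TYPE_MARK[frame["type"]]
        for intra_seq, (rtp_seq, payload) in enumerate(frame["packets"]):
            marked.append({
                # The RTP sequence number is copied into the GTP layer,
                # as the text describes, alongside the frame identifier.
                "gtp_ext": {"frame_id": ident, "intra_seq": intra_seq,
                            "rtp_seq": rtp_seq},
                "payload": payload,
            })
    return marked
```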
- UPF#A sends media data information #I to RAN#A.
- UE#A sends jitter buffer status information #J to RAN#A.
- the jitter buffer status information #J may be the buffer size information #J1 of the jitter buffer at time k (the time k here refers to the kth frame interval; it can also be understood that the currently playing frame is the kth frame, and k is an integer), that is, the buffered to-be-played media data of the service, and/or the maximum buffer information #J2 that UE#A can use to store the media data.
- the jitter buffer situation information #J sent by UE#A further includes sequence number information of all frames in the jitter buffer at time k, and the sequence number may be a GOP sequence number or a frame sequence number.
- the jitter buffer situation information #J sent by UE#A further includes type information of all frames in the jitter buffer at time k.
- For example, type information of I frame, B frame, or P frame.
- UE#A may periodically notify RAN#A through radio resource control (RRC) information or the packet data convergence protocol (PDCP) layer.
- RAN#A determines adjustment information #M, where adjustment information #M is used to instruct UE#A to make corresponding adjustments.
- RAN#A determines, according to information #D and/or media data information #I and/or information #J and/or the jitter buffer #B1 of UE#A and/or network status information (the network status information refers to the first transmission rate of the link between RAN#A and UE#A), the buffer adjustment information #M for UE#A at time k+n (n is the offset at time k, and n is an integer). The buffer adjustment information #M is used to indicate the buffer size or playback rate of UE#A.
- the RAN#A can determine the buffer adjustment information #M according to the information #D in the following ways:
- RAN#A determines the buffer size of UE#A at time k+n according to the coding information.
- the encoding information includes at least one of the following: the frame rate of the media data, the frame type of the media data, and the data size of the media data belonging to the same frame, and the media data is the media data arriving at RAN#A at time k. For example, set the buffer size of UE#A at time k+n to 100M for media data with a frame rate of 30FPS, and set the buffer size of UE#A at time k+n to 50M for media data with a frame rate of 25FPS.
- Alternatively, the buffer size of UE#A at time k+n may be set according to the data volume of media data belonging to the same frame. For example, if the data volume of media data belonging to the same frame perceived at time k is 5M, the buffer size of UE#A at time k+n is set to 50M; or, if the data volume of media data belonging to the same frame perceived at time k is 10M, the buffer size of UE#A at time k+n is set to 80M.
- the RAN#A determines the playback rate of UE#A at time k+n according to the coding information.
- the encoding information includes at least one of the following: the frame rate of the media data, the frame type of the media data, and the data size of the media data belonging to the same frame, and the media data is the media data arriving at RAN#A at time k.
- For example, for media data with a frame rate of 30FPS, UE#A at time k+n is set to perform a certain number of frame skips (for example, 4 frames can be skipped) for the remaining types of frames in the playback buffer except the I frame (the human eye recognizes a coherent image at 24 frames per second, so as long as the playback speed is greater than 24 frames per second the coherence of the image is not affected; and due to the characteristics of GOP image encoding, discarding some frames other than key frames has no substantial influence); for media data with a frame rate of 25FPS, UE#A at time k+n is set to perform a certain number of frame skips (for example, 2 frames can be skipped) for the remaining types of frames in the playback buffer except the I frame. Alternatively, the playback rate may be set according to the frame type: for media data whose frame type is I frame, UE#A at time k+n performs a certain number of frame skips (for example, 4 frames can be skipped) for the remaining types of frames in the playback buffer except the I frame.
- For media data whose frame type is P frame, UE#A at time k+n performs a certain number of frame skips (for example, 2 frames can be skipped) for the frames in the playback buffer except the I frame. The playback speed of UE#A at time k+n can also be set according to the perceived data volume of media data belonging to the same frame. For example, if the data volume of media data belonging to the same frame perceived at time k is 5M, UE#A at time k+n is set to perform a certain number of frame skips (for example, 2 frames can be skipped) for the remaining types of frames in the playback buffer except the I frame; or, if the data volume of media data belonging to the same frame perceived at time k is 10M, UE#A at time k+n is set to perform a certain number of frame skips (for example, 4 frames can be skipped) for the remaining types of frames in the playback buffer except the I frame.
- RAN#A determines the buffer size of UE#A at time k+n according to the initial threshold information of the jitter buffer. For example, the buffer size of UE#A at time k+n is set to 20M for media data whose initial jitter buffer threshold is 5M, and set to 50M for media data whose initial jitter buffer threshold is 10M.
- RAN#A determines the playback rate of UE#A at time k+n according to the initial threshold information of the jitter buffer. For example, for media data whose jitter buffer initial threshold is 5M, UE#A at time k+n performs a certain number of frame skips (for example, 2 frames can be skipped) for the remaining types of frames in the playback buffer except the I frame; for media data whose jitter buffer initial threshold is 10M, UE#A at time k+n performs a certain number of frame skips (for example, 4 frames can be skipped) for the remaining types of frames in the playback buffer except the I frame.
- RAN#A determines the buffer size of UE#A at time k+n according to the tolerable delay information. For example: for media data with a tolerable delay of 5ms, the buffer size of UE#A at time k+n is set to 20M, and for media data with a tolerable delay of 2ms, the buffer size of UE#A at time k+n is set to 40M.
- RAN#A determines the playback rate of UE#A at time k+n according to the tolerable delay information, and may refer to the above-mentioned way 1 or way 2, which will not be repeated here.
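- The example thresholds in the ways above (frame rate 30FPS/25FPS, jitter buffer initial threshold 5M/10M, tolerable delay 5ms/2ms) can be collected into a simple lookup sketch; the numeric values are the text's illustrative examples, and the function name and parameter names are assumptions:

```python
def buffer_size_from_coding_info(frame_rate_fps=None,
                                 jitter_threshold_mb=None,
                                 delay_ms=None):
    """Pick UE#A's buffer size (in MB) at time k+n from whichever parameter
    is known, mirroring the example thresholds given in the text."""
    if frame_rate_fps is not None:        # way 1: frame rate of the media data
        return 100 if frame_rate_fps >= 30 else 50
    if jitter_threshold_mb is not None:   # initial threshold of the jitter buffer
        return 50 if jitter_threshold_mb >= 10 else 20
    if delay_ms is not None:              # tolerable delay: tighter delay, larger buffer
        return 40 if delay_ms <= 2 else 20
    raise ValueError("no coding information available")
```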
- the RAN#A can determine the buffer adjustment information #M according to the media data information #I in the following ways:
- RAN#A determines the buffer size of UE#A at time k+n according to the identifier of the frame type of the GTP layer in the media data information #I. For example, if the identifier of the GTP layer in the media data information #I is "00", and the frame type corresponding to the identifier is I frame, the buffer size of UE#A at time k+n is set to 50M.
- RAN#A determines the playback rate of UE#A at time k+n according to the identifier of the frame type of the GTP layer in the media data information #I, and may refer to the above-mentioned way 1 or way 2, which will not be repeated here.
- RAN#A determines the buffer size of UE#A at time k+n according to the data amount of media data information #I. For example, if the data amount of the media data information #I arriving at RAN#A at time k is 1M, the buffer size of UE#A at time k+n is set to 20M.
- RAN#A determines the playback rate of UE#A at time k+n according to the amount of data in media data information #I, and may refer to the above-mentioned way 1 or way 2, which will not be repeated here.
- RAN#A can determine cache adjustment information #M according to cache situation information #J, in the following ways:
- RAN#A determines the buffer size of UE#A at time k+n according to the size of the idle buffer used by UE#A to store media data at time k. For example: according to whether the idle buffer of UE#A exceeds 50M, if it exceeds 50M, the buffer size of UE#A at time k+n is set to 60M; if it does not exceed 50M, the buffer size of UE#A at time k+n is set to 80M.
- RAN#A determines the playback rate of UE#A at time k+n according to the size of the idle buffer used by UE#A to store media data at time k, and may refer to the above-mentioned way 1 or way 2, which will not be repeated here.
- RAN#A determines the buffer size of UE#A at time k+n according to the size of the buffer used by UE#A to store the media data to be played at time k (the media data to be played refers to the media data that has been stored in the buffer of UE#A but has not yet been played). For example: according to whether the buffer used by UE#A to store the media data to be played exceeds 30M, if it exceeds 30M, the buffer size of UE#A at time k+n is set to 70M; if it does not exceed 30M, the buffer size of UE#A at time k+n is set to 50M.
- RAN#A determines the buffer size of UE#A at time k+n according to the first transmission rate at time k. For example: according to whether the first transmission rate exceeds 50Mbps, if it exceeds 50Mbps, set the cache size of UE#A at time k+n to 80M, if it does not exceed 50Mbps, set the cache size of UE#A at time k+n as 60M.
- It should be understood that RAN#A determining, according to other contents of the information #D, or other contents of the media data information #I, or other contents of the buffer status information #J, or the first transmission rate information, the buffer adjustment information #M at time k+n (n is the offset at time k, and n is an integer) also falls within the scope of protection of the present application.
- RAN#A can also combine the above methods to determine the buffer adjustment information #M of UE#A at time k+n. For example, according to the initial threshold information of the jitter buffer and the frame rate information, the buffer size or playback rate of UE#A at time k+n for the media data #A is set: for media data with a frame rate of 25FPS and a jitter buffer initial threshold of 5M, the buffer size of UE#A at time k+n for the media data #A is set to 20M.
- Other manners are deduced in the same way, and details are not described herein again in this application.
- UE#A can also use the above manner or a combination of the above manners to determine the buffer size of UE#A at time k+n, and make corresponding adjustments, which will not be repeated in this application.
- RAN#A determines the jitter buffer #B1 of UE#A at time k according to the media data information #I, calculates the jitter buffer #B2 of UE#A at time k+1 or k+n, and, according to the jitter buffer #B2 and the jitter buffer #B1, possibly in combination with the maximum buffer information #J2, determines the target jitter buffer #B3 or the playback speed of UE#A, and determines the corresponding adjustment information #M.
- the ways to calculate jitter buffer #B2 are as follows:
- RAN#A determines the frame type, or which data packets belong to the same frame, according to the identification information #H in the media data information #I, then determines the jitter buffer #B1 of UE#A at time k, and calculates the jitter buffer #B2 of UE#A at time k+1. The steps are as follows (the time k here refers to the kth frame interval; it can also be understood that the currently playing frame is the kth frame, and k is an integer):
- B(k+1) is the frame set buffered by UE#A at time k+1
- L_B(k+1) is the buffer size occupied by the frame set of UE#A at time k+1 (represented by the length of the frame set, that is, the number of frames in the frame set)
- RAN#A ensures that L B(k+1) does not exceed the maximum buffer value according to the maximum buffer information #J2.
- B (k) is the frame set buffered by UE#A at time k
- L_B(k) is the buffer size occupied by the frame set of UE#A at time k (represented by the length of the frame set, that is, the number of frames in the frame set)
- a_frames is the type of frame (for example, it can be I frame, P frame or B frame)
- V a_frames is the data amount corresponding to the frame type of the frame arriving at RAN#A
- V_k is the data size of the data packets that RAN#A has learned belong to the same frame through the identification information of the GTP layer of the arriving media data packets
- R k is the transmission rate of the link between RAN #A and UE #A at time k
- R k may be the statistical average value of the link between RAN #A and UE #A over a period of time.
- L_B(k) may be the buffer size occupied by the frame set reported by UE#A at time k (represented by the length of the frame set, that is, the number of frames in the frame set); for example, UE#A periodically reports the buffer size occupied by the frame set at time k.
- X_{k,1} is a frame in the buffer of UE#A at time k, and Q1 such frames constitute the frame set B(k).
- X_{k+1,1} is a certain frame in the buffer of UE#A at time k+1, and Q2 such frames constitute the frame set B(k+1).
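- This excerpt does not reproduce the patent's actual formula for L_B(k+1), so the sketch below assumes a simple arrival/playback balance over one frame interval Δt, using the terms defined above (L_B(k), V_a_frames, R_k) and capping the result by the maximum buffer information #J2. The balance equation itself is an assumption:

```python
import math

def jitter_buffer_next(l_bk, v_frame_mb, r_k_mbps, dt_s, l_max):
    """Estimate L_B(k+1), the number of buffered frames one frame interval
    later: frames arriving at link rate R_k (each of size V_a_frames)
    minus the one frame UE#A plays during Δt, capped by the maximum
    buffer #J2 and floored at zero. Assumed balance, not the patent's formula."""
    arrived = math.floor(r_k_mbps * dt_s / v_frame_mb)  # frames received in Δt
    return max(0, min(l_max, l_bk + arrived - 1))       # minus 1 frame played
```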
- RAN#A can also determine target jitter buffer #B3 in combination with maximum buffer information #J2, or determine the playback speed of UE#A, and determine corresponding adjustment information #M.
- the target jitter buffer #B3 is determined according to the size relationship between the jitter buffer #B2 and the jitter buffer #B1, possibly also in combination with the maximum buffer information #J2, and the corresponding adjustment information #M is determined according to the target jitter buffer #B3. Specifically:
- if the jitter buffer #B2 is equal to the jitter buffer #B1, the adjustment information #M is to maintain the jitter buffer #B1 of UE#A;
- if the jitter buffer #B2 is greater than the jitter buffer #B1, the adjustment information #M is to adjust the buffer value of UE#A to the jitter buffer #B2, or the adjustment information #M is to maintain the jitter buffer #B1 of UE#A;
- if the jitter buffer #B2 is smaller than the jitter buffer #B1, the adjustment information #M is to adjust the buffer value of UE#A to the jitter buffer #B2, and the playback speed of UE#A may also be sped up when adjusting the buffer of UE#A.
- the adjustment information #M includes instructing UE#A to discard some frames and not play them, that is, the number of frames skipped (the speed of human eyes recognizing consecutive images is 24 frames per second, as long as the playback speed is greater than 24 frames per second, it will not affect the coherence of the image, and due to the characteristics of GOP image encoding, except for key frames, discarding some other frames has no substantial effect);
- the adjustment information #M is to adjust the buffer value of UE#A to jitter buffer #B5.
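- The mapping from the comparison of the required buffer #B2 against the current buffer #B1 to the adjustment information #M might be sketched as follows. The comparison directions are assumptions (the comparison operators did not survive in this excerpt), and the return structure is illustrative:

```python
def decide_adjustment(b1, b2, max_buffer):
    """Map the #B2-versus-#B1 comparison to adjustment information #M,
    never letting the target exceed the maximum buffer information #J2."""
    target = min(b2, max_buffer)  # cap by maximum buffer #J2
    if target == b1:
        return {"action": "maintain", "buffer": b1}
    if target > b1:
        return {"action": "resize", "buffer": target}
    # target < b1: shrink the buffer; playback may also be sped up
    # (frames other than key frames may be skipped) to drain the excess.
    return {"action": "resize", "buffer": target, "speed_up_playback": True}
```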
- RAN#A may send adjustment information #M to UE#A through RRC information or PDCP layer extension bit information.
- RAN#A determines the frame type, or which data packets belong to the same frame, according to the identification information #H in the media data information #I, then determines the jitter buffer #B1 of UE#A at time k, and calculates the jitter buffer #B2 of UE#A at time k+n. The steps are as follows (the time k here refers to the kth frame interval; it can also be understood that the currently playing frame is the kth frame, k is an integer, and n is an integer):
- RAN#A can know the playback speed of UE#A;
- RAN#A calculates the size of jitter buffer #B2 required by UE#A at the (k+n)th frame interval according to formula (4):
- B(k+n) is the frame set buffered by UE#A at time k+n
- L_B(k+n) is the buffer size occupied by the frame set of UE#A at time k+n (represented by the length of the frame set, that is, the number of frames in the frame set)
- RAN#A ensures that L B(k+n) does not exceed the maximum buffer value according to the maximum buffer information #J2.
- B (k) is the frame set buffered by UE#A at time k
- L_B(k) is the buffer size occupied by the frame set of UE#A at time k (represented by the length of the frame set, that is, the number of frames in the frame set)
- n is an integer greater than or equal to 0
- a_frames is the type of frame (for example, it can be I frame, P frame or B frame)
- V_a_frames is the data amount corresponding to the frame type of the frame arriving at RAN#A
- V k is the data size of these data packets after RAN#A learns which data packets belong to the same frame through the identification information of the GTP layer of the arriving media data packet
- Δt is the time used by UE#A to play one frame, that is, the time taken by AS#A to transmit one frame
- R_k is the transmission rate of the link between RAN#A and UE#A at time k; optionally, R_k may be the statistical average value of the link between RAN#A and UE#A over a period of time.
- L_B(k) may be the buffer size occupied by the frame set reported by UE#A at time k (represented by the length of the frame set, that is, the number of frames in the frame set); for example, UE#A periodically reports the buffer size occupied by the frame set at time k.
- X_{k,1} is a certain frame in the buffer of UE#A at time k, and Q1 such frames constitute the frame set B(k).
- X_{k+n,1} is a certain frame in the buffer of UE#A at time k+n, and Q2 such frames constitute the frame set B(k+n).
- RAN#A can also determine target jitter buffer #B3 in combination with maximum buffer information #J2, or determine the playback speed of UE#A, and determine corresponding adjustment information #M.
- the target jitter buffer #B3 is determined according to the size relationship between the jitter buffer #B2 and the jitter buffer #B1, possibly also in combination with the maximum buffer information #J2, and the corresponding adjustment information #M is determined according to the target jitter buffer #B3. Specifically:
- if the jitter buffer #B2 is equal to the jitter buffer #B1, the adjustment information #M is to maintain the jitter buffer #B1 of UE#A;
- if the jitter buffer #B2 is greater than the jitter buffer #B1, the adjustment information #M is to adjust the buffer value of UE#A to the jitter buffer #B2, or the adjustment information #M is to maintain the jitter buffer #B1 of UE#A;
- if the jitter buffer #B2 is smaller than the jitter buffer #B1, the adjustment information #M is to adjust the buffer value of UE#A to the jitter buffer #B2, and the playback speed of UE#A may also be sped up when adjusting the buffer of UE#A.
- the adjustment information #M includes instructing UE#A to discard some frames and not play them, that is, the number of frames skipped (the speed of human eyes recognizing consecutive images is 24 frames per second, as long as the playback speed is greater than 24 frames per second, it will not affect the coherence of the image, and due to the characteristics of GOP image encoding, except for key frames, discarding some other frames has no substantial effect);
- the adjustment information #M is to adjust the buffer value of UE#A to jitter buffer #B5.
- RAN#A may send adjustment information #M to UE#A through RRC information or PDCP layer extension bit information.
- RAN#A optimizes the transmission of the media data.
- RAN#A makes resource adjustment and/or optimization of transmission speed according to media data information #I and/or according to information #J sent by UE#A, or determines whether the current frame is discarded.
- the RAN#A side optimizes the transmission speed by calculating the required R_k according to the playback threshold requirement of UE#A and/or by comprehensively considering the network conditions, and adjusting the transmission speed or priority of the media data. For example, in order to ensure smooth playback requirements or to cope with currently unstable network conditions, and where the number of frames in the buffer of UE#A is constrained to be not less than 3 frames, the required R_k is calculated according to the condition L_B(k+1) ≥ 3; or, when the number of frames in the buffer of UE#A is smaller than that of other UEs, under unstable network conditions, media data transmission for UE#A is performed preferentially.
- the way of resource adjustment on the RAN#A side is to comprehensively consider the needs of multiple UEs for flexible resource scheduling. For example, there are 100 frames in the buffer of UE#1 and only 1 frame in the buffer of UE#2. Within the time of transmitting a set of frames to UE#2, if the 100 frames in the buffer of UE#1 meet the playback requirements, more resources can be used to speed up the transmission to UE#2, or the media data transmission of UE#2 can be given a high priority.
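- The multi-UE scheduling idea above (serve the UE whose buffer is closest to underrun first) might be sketched as follows; the function, the UE record layout, and the names are illustrative assumptions:

```python
def schedule_priority(ue_buffers):
    """Order UEs for media data transmission so that UEs with the fewest
    buffered frames (closest to playback underrun, like UE#2 with 1 frame
    versus UE#1 with 100) are served first."""
    return sorted(ue_buffers, key=lambda ue: ue["frames_buffered"])
```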
- the way in which the RAN#A side determines whether the currently arriving frame is discarded is: according to the currently arriving frame type or frame sequence number, and/or the playback speed on the UE#A side, and/or the tolerable delay and/or R_k, determine whether the frame following the frame currently arriving at the RAN#A side has already started to play, or whether it is expected that the following frame will already have started playing by the time the currently arriving frame reaches the UE#A side; if so, the media frame is discarded, otherwise the media frame is transmitted.
- the RAN#A side records the frame sequence numbers from the time when the media data starts to be transmitted. Assuming that the playback speed on the UE#A side is 1FPS, according to statistics and calculation the frame sequence number currently played by UE#A is 10; if the frame arriving at the RAN#A side at this time has frame sequence number 9, the frame with frame sequence number 9 is discarded.
- the RAN#A side records the sequence of frames sent in each GOP when it starts transmitting the media data. Assuming that the playback speed on the UE#A side is 1FPS, according to statistics and calculation the frame currently played by UE#A is the P frame in the second GOP, and the frame arriving at the RAN#A side is a B frame in the second GOP. According to the frame sequence (such as IBPBP), this B frame no longer needs to be played, and if it is not used as a reference frame for other frames, it is judged that the B frame is discarded.
- the RAN#A side can also calculate, according to the playback speed on the UE#A side, the frame type currently arriving at RAN#A, the data size of the corresponding frame, and R_k, whether UE#A will have already played the frame following the current frame, or whether the tolerable delay has been exceeded, and judge accordingly whether to discard the frame.
- For example, RAN#A determines according to statistics and calculation that UE#A is currently playing the fifth frame, the tolerable delay is 50ms, the frame type currently arriving at RAN#A is an I frame with a data size of 5KB and frame sequence number 7, and R_k is 2.5KB/s. The transmission time of the current frame is then 2s, while UE#A needs only 1.05s (including the tolerable delay) before playing the 6th frame. Since 1.05s is less than 2s, it can be judged that the frame cannot reach UE#A within the tolerable time, and it can be judged to discard the frame.
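- The discard test in the worked example above (playback 1FPS, tolerable delay 50ms, a 5KB frame at R_k = 2.5KB/s) might be coded as follows; the function and its default parameter values are illustrative, drawn from the example's numbers:

```python
def should_discard(frame_size_kb, r_k_kbps, current_play_idx, frame_idx,
                   playback_fps=1.0, tolerable_delay_s=0.05):
    """Discard an arriving frame when its transmission time exceeds the time
    until UE#A must play it: the frames still to be played before it at the
    current playback speed, plus the tolerable delay."""
    tx_time = frame_size_kb / r_k_kbps               # time to send this frame
    frames_ahead = frame_idx - current_play_idx - 1  # frames played before it
    deadline = frames_ahead / playback_fps + tolerable_delay_s
    return tx_time > deadline
```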
- RAN#A sends adjustment information #M to UE#A, and UE#A receives the adjustment information #M.
- RAN #A determines and sends media data information #N.
- RAN#A determines media data information #N according to media data information #I, and sends the media data information #N to UE#A.
- the RAN #A needs to remove the identification #A to determine the media data information #N.
- Optionally, the sequential transmission of frame types is guaranteed: for example, after the data packets belonging to the I frame type are transmitted, the other subsequent frame types are transmitted, and the transmission is carried out according to the decoding order of the frames. The decoding order of the frames may or may not be consistent with the GOP. Optionally, while the frame types are guaranteed to be transmitted in sequence, it can also be ensured that the data packets of the same frame type are transmitted in sequence; for example, for f1_B and f2_B that belong to the same B frame type, f1_B is transmitted first, and then f2_B is transmitted.
- UE #A makes corresponding adjustments according to the adjustment information #M.
- UE #A determines the target jitter buffer #B4 according to the adjustment information #M, or determines the playback speed accordingly (for example, the number of skipped frames, the sequence number of the specific skipped frame).
- the adjustment information #M indicates that the target jitter buffer #B4 of UE#A is 100M, then UE#A adjusts the buffer to 100M according to the indication to store the media data #N.
- For example, if the adjustment information #M instructs UE#A to skip 4 frames other than the I frame, UE#A skips 4 frames without playing them according to the instruction, and these 4 frames are frames of types other than the I frame.
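- UE#A's frame-skip handling of the adjustment information #M (skip up to the instructed number of non-I frames, always keeping I frames so the GOP stays decodable) might look like this sketch; the function and frame layout are illustrative assumptions:

```python
def apply_skip(frames, skip_count):
    """Drop up to `skip_count` frames of types other than I from the
    playback buffer, as instructed by adjustment information #M."""
    kept, skipped = [], 0
    for f in frames:
        if f["type"] != "I" and skipped < skip_count:
            skipped += 1     # discard this non-key frame without playing it
            continue
        kept.append(f)       # I frames (and any surplus) are always kept
    return kept
```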
- In the above method, RAN#A assists the UE, according to the parameters related to the media data #A, in combination with the frame type of the media data #A and/or the data volume of the media data #A, in determining the buffer size to be used by the UE for buffering the media data #A, and assists the UE in deciding the playback strategy for the media data waiting to be played that belongs to the same service as the media data #A.
- The playback strategy may be whether the UE performs frame-skip processing on the media data waiting to be played during playback, or the number of frames skipped by the UE, enabling the UE to meet the jitter buffering requirements during playback of real-time media services with a large amount of data; adjusting the buffer in real time according to the data volume can avoid frame loss caused by insufficient buffering, ensuring the smoothness and accuracy of real-time media data playback.
- The transmission rate of the media data #A can also be optimized according to the above parameters and/or the media data #A, or, when multiple UEs transmit data, the priority of media data transmission corresponding to the multiple UEs can be adjusted, which improves the real-time experience of users' media services.
- FIG. 5 is a schematic flowchart of an example of a data processing method 500 according to an embodiment of the present application.
- UE#A establishes a PDU session with the core network according to the foregoing method.
- AS#A determines media data information #F.
- the media data information #F includes media data #A, and other contents have been described in the method 300, and will not be repeated here.
- AS#A sends media data information #F to UPF#A, and UPF#A receives the media data information #F.
- the UPF#A determines the frame type information of the media data #A, and for the specific process, refer to the method S404.
- the UPF #A processes the media data #A, and determines the media data information #I. For the specific process, refer to method S405.
- UPF#A sends media data information #I to RAN#A, and RAN#A receives the media data information #I.
- the RAN#A determines the information #T, and the information #T is used to assist the UE#A in determining the play strategy.
- RAN#A determines the information #T according to the media data information #I and/or the data volume and/or the network rate of the media data packets, where the information #T includes: the data volume or frame type of the media data packets sent to UE#A, and/or the minimum frame sequence number, and/or the network rate (the first transmission rate between RAN#A and UE#A).
- RAN#A sends information #T to UE#A, and UE#A receives the information #T.
- RAN#A may send the information #T to UE#A through RRC information or the PDCP layer.
- UE#A determines the play strategy and makes corresponding adjustments.
- the playback strategy includes adjustment of the target jitter buffer size or the playback rate; UE#A sets a buffer for storing the media data according to the target jitter buffer size, or performs frame-skip processing on the media data to be played.
- UE#A calculates the jitter buffer #B2 of UE#A at time k+1 or k+n according to the information #T and/or the jitter buffer #B1 of UE#A at time k; according to jitter buffer #B2 and jitter buffer #B1, optionally in combination with the maximum buffer information #J2 of UE#A, UE#A determines a target jitter buffer #B3 and sets a buffer for storing the media data according to the target jitter buffer #B3, where the jitter buffer #B2 can be calculated in the following ways:
- UE#A determines the jitter buffer #B1 of UE#A at time k, and then calculates the jitter buffer #B2 of UE#A at time k+1 according to the data volume information and the network rate information in the information #T as follows (time k here refers to the k-th frame interval, which can also be understood as the currently playing frame being the k-th frame, where k is an integer):
- B(k+1) is the frame set buffered by UE#A at time k+1
- L_B(k+1) is the buffer size occupied by the frame set of UE#A at time k+1 (represented by the length of the frame set, that is, the number of frames in the frame set)
- UE#A ensures that L_B(k+1) does not exceed the maximum buffer value according to the maximum buffer information #J2.
- B(k) is the frame set buffered by UE#A at time k
- L_B(k) is the buffer size occupied by the frame set of UE#A at time k (represented by the length of the frame set, that is, the number of frames in the frame set)
- a_frames is the frame type (for example, an I frame, a P frame, or a B frame)
- V_a_frames is the data volume corresponding to the frame type of the frame arriving at RAN#A
- V_k is the data volume of the data packets belonging to the same frame, which RAN#A identifies through the identification information in the GTP layer of the arriving media data packets
- X_{k,1} is a frame in the buffer of UE#A at time k; Q1 such frames constitute the frame set B(k).
- X_{k+1,1} is a frame in the buffer of UE#A at time k+1; Q2 such frames constitute the frame set B(k+1).
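The buffer-evolution formula itself is rendered as an image in the original publication and is not reproduced here; the following sketch only illustrates, under stated assumptions, how the quantities defined above could combine: frames arriving during one frame interval are estimated from the link rate R_k, the per-frame data volume, and the frame interval Δt, while playback consumes one frame per interval. This is an illustrative estimate, not the patent's own formula:

```python
def buffered_frames_next(l_bk, v_frame_bits, r_k_bps, delta_t_s, l_max):
    """Estimate L_B(k+1), the number of buffered frames at time k+1.

    Assumption (illustrative only): arrivals per interval are about
    R_k * Δt / V_frame, playback consumes one frame per interval, and
    the result is clamped to the maximum buffer value from the
    maximum buffer information #J2.
    """
    arrived = int(r_k_bps * delta_t_s // v_frame_bits)
    played = 1 if l_bk > 0 else 0
    return min(max(l_bk + arrived - played, 0), l_max)

# 10 frames buffered, 2 Mbit frames, 6 Mbit/s link, 40 ms frame interval:
print(buffered_frames_next(10, 2e6, 6e6, 0.04, 50))  # 9
```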
- UE#A determines target jitter buffer #B3 according to jitter buffer #B2 and jitter buffer #B1, or determines the playback speed of UE#A, and makes corresponding adjustments.
- the target jitter buffer #B3 is determined according to the size relationship between the jitter buffer #B2 and the jitter buffer #B1, specifically:
- if jitter buffer #B2 = jitter buffer #B1, maintain the jitter buffer #B1 of UE#A;
- if jitter buffer #B2 < jitter buffer #B1, adjust the buffer value of UE#A to jitter buffer #B2, or maintain the jitter buffer #B1 of UE#A;
- if the maximum buffer value of the maximum buffer information #J2 is jitter buffer #B5, and jitter buffer #B5 ≥ jitter buffer #B2 > jitter buffer #B1, adjust the buffer value of UE#A to jitter buffer #B2; alternatively, while buffering, the playback speed of UE#A can be sped up, for example by discarding some frames without playing them, that is, frame skipping (the human eye perceives images as coherent at 24 frames/second, so as long as the playback speed is not lower than 24 frames/second the coherence of the image is not affected, and due to the characteristics of GOP image coding, dropping some frames other than key frames has no substantial effect);
- if jitter buffer #B2 > jitter buffer #B5, adjust the buffer value of UE#A to jitter buffer #B5.
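The comparison rules above can be summarized in code. This is a sketch assuming #B1, #B2, and the maximum buffer #B5 are expressed in the same unit (number of frames); the action labels returned are hypothetical names, not terms from the publication:

```python
def target_jitter_buffer(b1, b2, b5):
    """Decide the target jitter buffer #B3 of UE#A.

    b1: current jitter buffer #B1 at time k
    b2: calculated jitter buffer #B2 for time k+1 (or k+n)
    b5: maximum buffer value from maximum buffer information #J2
    Returns (target_buffer, action).
    """
    if b2 == b1:
        return b1, "maintain"                # keep jitter buffer #B1
    if b2 < b1:
        return b2, "shrink-or-maintain"      # adjust to #B2 (or keep #B1)
    if b2 <= b5:                             # b5 >= b2 > b1
        return b2, "grow-or-speed-up"        # adjust to #B2, or skip frames
    return b5, "cap-at-max"                  # b2 > b5: cap at #B5

print(target_jitter_buffer(10, 14, 20))  # (14, 'grow-or-speed-up')
print(target_jitter_buffer(10, 25, 20))  # (20, 'cap-at-max')
```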
- UE#A determines the jitter buffer #B1 of UE#A at time k, and then calculates the jitter buffer #B2 of UE#A at time k+n according to the data volume information and the network rate information in the information #T as follows (time k here refers to the k-th frame interval, which can also be understood as the currently playing frame being the k-th frame, where k is an integer):
- UE#A calculates the size of jitter buffer #B2 required by UE#A at the (k+n)th frame interval according to formula (4):
- B(k+n) is the frame set buffered by UE#A at time k+n
- L_B(k+n) is the buffer size occupied by the frame set of UE#A at time k+n (represented by the length of the frame set, that is, the number of frames in the frame set)
- RAN#A ensures that L_B(k+n) does not exceed the maximum buffer value according to the maximum buffer information #J2.
- B(k) is the frame set buffered by UE#A at time k
- L_B(k) is the buffer size occupied by the frame set of UE#A at time k (represented by the length of the frame set, that is, the number of frames in the frame set)
- n is an integer greater than or equal to 0
- a_frames is the frame type (for example, an I frame, a P frame, or a B frame)
- V_a_frames is the data volume corresponding to the frame type of the frame arriving at RAN#A
- V_k is the data volume of the data packets belonging to the same frame, which RAN#A identifies through the identification information in the GTP layer of the arriving media data packets
- Δt is the time used by UE#A to play one frame, that is, the time taken by AS#A to transmit one frame
- R_k is the transmission rate of the link between RAN#A and UE#A at time k; optionally, R_k can be
- X_{k,1} is a frame in the buffer of UE#A at time k; Q1 such frames constitute the frame set B(k).
- X_{k+n,1} is a frame in the buffer of UE#A at time k+n; Q2 such frames constitute the frame set B(k+n).
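Formula (4) itself appears only as an image in the original publication; as a stand-in, the k+n projection can be sketched by iterating a per-interval estimate n times. The constant-rate assumption and the update rule below are illustrative choices, not the publication's formula:

```python
def buffered_frames_at(l_bk, v_frame_bits, r_bps, delta_t_s, l_max, n):
    """Project L_B(k+n) by iterating a per-interval estimate n times.

    Illustrative only: assumes the link rate R and frame size stay
    constant over the n intervals, one frame is played per interval,
    and the buffer is clamped to the maximum buffer value (#J2).
    """
    l = l_bk
    for _ in range(n):
        arrived = int(r_bps * delta_t_s // v_frame_bits)
        played = 1 if l > 0 else 0
        l = min(max(l + arrived - played, 0), l_max)
    return l

# Draining link: 2 Mbit frames, 1 Mbit/s link, 40 ms interval, 5 intervals
print(buffered_frames_at(10, 2e6, 1e6, 0.04, 50, 5))  # 5
```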
- UE#A determines target jitter buffer #B3 according to jitter buffer #B2 and jitter buffer #B1, or determines the playback speed of UE#A, and makes corresponding adjustments.
- the target jitter buffer #B3 is determined according to the size relationship between the jitter buffer #B2 and the jitter buffer #B1, specifically:
- if jitter buffer #B2 = jitter buffer #B1, maintain the jitter buffer #B1 of UE#A;
- if jitter buffer #B2 < jitter buffer #B1, adjust the buffer value of UE#A to jitter buffer #B2, or maintain the jitter buffer #B1 of UE#A;
- if the maximum buffer value of the maximum buffer information #J2 is jitter buffer #B5, and jitter buffer #B5 ≥ jitter buffer #B2 > jitter buffer #B1, adjust the buffer value of UE#A to jitter buffer #B2; alternatively, while buffering, the playback speed of UE#A can be sped up, for example by discarding some frames without playing them, that is, frame skipping (the human eye perceives images as coherent at 24 frames/second, so as long as the playback speed is not lower than 24 frames/second the coherence of the image is not affected, and due to the characteristics of GOP image coding, dropping some frames other than key frames has no substantial effect);
- if jitter buffer #B2 > jitter buffer #B5, adjust the buffer value of UE#A to jitter buffer #B5.
- RAN#A determines media data information #N and sends it to UE#A.
- RAN#A determines and transmits the media data information #N according to the media data information #I. Specifically, when the identification #A is included in the media data information #I, the identification #A needs to be deleted to obtain the media data information #N, and UE#A receives the media data information #N.
- in-order transmission by frame type is guaranteed; for example, data belonging to the I frame type is transmitted first, and the subsequently decoded frames are transmitted according to the decoding order.
- while in-order transmission across frame types is guaranteed, it can also be ensured that data packets of the same frame type are transmitted in order; for example, for f1_B and f2_B belonging to the same B frame type, f1_B is transmitted first, and then f2_B.
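The ordering guarantee described above amounts to a stable sort of the queued packets: first by the decoding order of the frame each packet belongs to, then by the packet's sequence within its frame. A minimal sketch (the `frame_seq`/`pkt_seq` field names are hypothetical):

```python
def transmission_order(packets):
    """Order packets so that frames go out in decoding order and
    packets of the same frame keep their intra-frame sequence.

    Each packet is a dict with hypothetical fields:
      'frame_seq' - decoding-order index of the frame
      'pkt_seq'   - sequence of the packet within its frame
    """
    return sorted(packets, key=lambda p: (p["frame_seq"], p["pkt_seq"]))

pkts = [
    {"name": "f2_B", "frame_seq": 3, "pkt_seq": 2},
    {"name": "f1_I", "frame_seq": 1, "pkt_seq": 1},
    {"name": "f1_B", "frame_seq": 3, "pkt_seq": 1},
    {"name": "f1_P", "frame_seq": 2, "pkt_seq": 1},
]
print([p["name"] for p in transmission_order(pkts)])
# ['f1_I', 'f1_P', 'f1_B', 'f2_B']
```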
- UE #A optimizes the playback rate of the media data according to the information #T.
- UE#A judges, according to the minimum frame number in the information #T and the maximum frame number that has been played, whether to discard the frame with the minimum frame number in the media data information #N. For example, if the minimum frame number in the information #T is smaller than the maximum frame number already played, the frame with that minimum frame number may be discarded when receiving the media data sent by RAN#A.
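The discard check in the step above reduces to comparing the minimum frame number announced in the information #T with the largest frame number already played; a minimal sketch:

```python
def should_discard(min_frame_in_t, max_played_frame):
    """Return True if the frame with the minimum frame number in
    information #T arrives too late to be useful, i.e. a frame with
    a larger number has already been played.
    """
    return min_frame_in_t < max_played_frame

print(should_discard(17, 20))  # True: frame 17 arrives after frame 20 played
print(should_discard(21, 20))  # False: frame 21 is still playable
```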
- UE#A determines, according to the parameters related to the media data #A, in combination with the frame type of the media data #A and/or the data volume of the media data #A, the size of the buffer to be used for buffering the media data #A, and assists in deciding the playback strategy for the media data waiting to be played that belongs to the same service as the media data #A.
- the playback strategy, for example the number of skipped frames, enables the UE to meet the jitter-buffering requirements when playing real-time media services with a large data volume, and adjusting the buffer in real time according to the data volume can avoid frame loss caused by insufficient buffering.
- this guarantees the smoothness and accuracy of media data playback and improves the real-time experience of users' media services.
- media data in the above embodiments of the present application is not limited to video, and the above method is also applicable to audio services, which will not be repeated in this application.
- FIG. 6 is a schematic block diagram of an example of a network device according to an embodiment of the present application. As shown in FIG. 6, the network device 600 includes:
- a receiving unit 610 configured to acquire first information of the first media data, where the first information is used to indicate the size of the first media data; or,
- or, configured to receive first parameter information, where the first parameter information is used to indicate the type of the first media data; and to receive the first media data;
- a processing unit 620 configured to determine a playback strategy for the first media data according to the first information, where the playback strategy is used to indicate a buffer size or a playback rate; or,
- or, configured to determine a transmission strategy for the first media data, where the transmission strategy is used to indicate the transmission rate of the first media data and/or the transmission priority of the first media data, or whether to discard the first media data;
- a sending unit 630 configured to send the play strategy to the terminal device UE; or,
- or, configured to send second media data to the radio access network RAN device, where the second media data includes the first identification information and the first media data.
- the first information is frame type information, or, identification information corresponding to the frame type of the first media data and data amount information corresponding to the frame type.
- the parameter information further includes one or more of the following:
- the stream description information of the first media data, the frame rate FPS information of the first media data, the buffering threshold of the first media data (where the buffering threshold is used to indicate the buffer size of the media data played by the UE), the buffer status information of the UE, network status information, and tolerable delay information, where the tolerable delay information is used to indicate the time that the UE waits for the arrival of the frame following the currently playing frame.
- a play strategy for the first media data is determined according to the parameter information.
- the play policy information is carried in radio resource control RRC information or packet data convergence protocol PDCP information; the RAN sends the play policy information to the UE.
- the buffer status information includes: the buffer size occupied by the media data waiting to be played by the UE, the maximum buffer information that the UE can use to store the first media data, or the frame status information of the media data to be played.
- the buffer status information is carried in radio resource control RRC information or packet data convergence protocol PDCP information, and is sent by the UE to the RAN.
- the identification information corresponding to the frame type of the first media data is carried in the General Packet Radio Service tunneling protocol GTP information of the first media data.
- the first parameter information includes:
- identification information corresponding to the data type of the first media data; or,
- group of pictures GOP frame sequence information of the first media data and Real-time Transport Protocol RTP information of the first media data.
- the first identification information is carried in information of the General Packet Radio Service Tunneling Protocol (GTP) layer of the second media data.
- FIG. 7 is a schematic block diagram of an example of a terminal device according to an embodiment of the present application. As shown in FIG. 7, the terminal device 700 includes:
- a sending unit 730 configured to send the buffer status information of the UE to the radio access network RAN, where the buffer status information is used for the RAN to determine a play policy for the UE, and the play policy is used to indicate a buffer size or a play rate;
- a receiving unit 710 configured to receive the play policy from the RAN;
- a processing unit 720 configured to execute the play policy.
- FIG. 8 is a schematic block diagram of another example of a network device according to an embodiment of the present application.
- the network device 800 includes a transceiver 810 and a processor 820, and the processor 820 is configured to support the network device to perform the corresponding functions of the network device in the above method.
- the network device may further include a memory 830, where the memory 830 is coupled to the processor 820 and stores necessary program instructions and data of the network device.
- the processor 820 is specifically configured to execute the instructions stored in the memory 830, and when the instructions are executed, the network device executes the method performed by the network device in the foregoing method.
- the network device 600 shown in FIG. 6 may be implemented by the network device 800 shown in FIG. 8 .
- the receiving unit 610 and the transmitting unit 630 shown in FIG. 6 may be implemented by the transceiver 810
- the processing unit 620 may be implemented by the processor 820 .
- FIG. 9 is a schematic block diagram of another example of a terminal device according to an embodiment of the present application.
- the terminal device 900 includes a transceiver 910 and a processor 920, and the processor 920 is configured to support the terminal device to perform the corresponding functions of the terminal device in the above method.
- the terminal device may further include a memory 930, and the memory 930 is configured to be coupled with the processor 920 and store necessary program instructions and data of the terminal device.
- the processor 920 is specifically configured to execute the instructions stored in the memory 930, and when the instructions are executed, the terminal device executes the method performed by the terminal device in the above method.
- the terminal device 700 shown in FIG. 7 may be implemented by the terminal device 900 shown in FIG. 9 .
- the receiving unit 710 and the transmitting unit 730 shown in FIG. 7 may be implemented by the transceiver 910
- the processing unit 720 may be implemented by the processor 920 .
- this application takes network devices and terminal devices as examples to describe the data processing methods and devices in the embodiments of the present application. It should be understood that the data processing method in the embodiment of the present application may also be implemented by a baseband chip, and the baseband chip is used to implement the related operations of the above-mentioned network device or the above-mentioned terminal device in the embodiment of the present application.
- the input/output circuit of the baseband chip can be used to implement the above-mentioned operations related to the transceiver of the network device or the terminal device.
- the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
- the memory in the embodiments of the present application may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
- the non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
- Volatile memory may be random access memory (RAM), which acts as an external cache.
- many forms of RAM are available, for example: static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
- the above embodiments may be implemented in whole or in part by software, hardware, firmware or any other combination.
- the above-described embodiments may be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions. When the computer program instructions are loaded or executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
- the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means (for example, infrared, radio, or microwave).
- the computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, or the like that contains one or more sets of available media.
- the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, digital versatile disc (DVD)), or semiconductor media.
- the semiconductor medium may be a solid state drive.
- the disclosed system, apparatus and method may be implemented in other manners.
- the apparatus embodiments described above are only illustrative.
- the division of the units is only a logical function division. In actual implementation, there may be other division methods.
- multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
- the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
- the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product.
- the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
- the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
Claims (26)
- 1. A data processing method, performed in a radio access network RAN device, comprising: acquiring first information of first media data, where the first information is used to indicate the size of the first media data; determining a playback strategy for the first media data according to the first information, where the playback strategy is used to indicate a buffer size or a playback rate; and sending the playback strategy to a terminal device UE.
- 2. The method according to claim 1, wherein the first information is frame type information, or, identification information corresponding to the frame type of the first media data and data volume information corresponding to the frame type.
- 3. The method according to claim 1 or 2, wherein the acquiring first information of first media data comprises: acquiring parameter information of the first media data, where the parameter information includes the first information and further includes one or more of the following: stream description information of the first media data, frame rate FPS information of the first media data, buffer status information of the UE, network status information, or a buffering threshold of the first media data, where the buffering threshold is used to indicate the buffer size of the media data played by the UE.
- 4. The method according to claim 3, wherein the determining a playback strategy for the first media data according to the first information comprises: determining the playback strategy for the first media data according to the parameter information.
- 5. The method according to claim 3 or 4, wherein the buffer status information includes one or more of the following: the buffer size occupied by the media data waiting to be played by the UE, the maximum buffer information that the UE can use to store the first media data, or frame status information of the media data to be played.
- 6. The method according to any one of claims 1 to 5, wherein the sending the playback strategy to the terminal device UE further comprises: the playback strategy information is carried in radio resource control RRC information or packet data convergence protocol PDCP information; and the RAN sends the playback strategy information to the UE.
- 7. A data processing method, performed in a radio access network RAN device, comprising: acquiring first information of first media data and buffer status information of a terminal device UE, where the first information is used to indicate the size of the first media data; and determining a transmission strategy for the first media data according to the first information and the buffer status information, where the transmission strategy is used to indicate a transmission rate of the first media data and/or a transmission priority of the first media data, or whether to discard the first media data.
- 8. The method according to claim 7, wherein the first information is frame type information, or, identification information corresponding to the frame type of the first media data and data volume information corresponding to the frame type.
- 9. The method according to claim 7 or 8, wherein the acquiring first information of first media data and buffer status information of a terminal device UE comprises: acquiring parameter information of the first media data, where the parameter information includes the first information and the buffer status information of the terminal device UE, and further includes one or more of the following: stream description information of the first media data, frame rate FPS information of the first media data, a buffering threshold of the first media data (where the buffering threshold is used to indicate the buffer size of the media data played by the UE), network status information, and tolerable delay information, where the tolerable delay information is used to indicate the time that the UE waits for the arrival of the frame following the currently playing frame.
- 10. The method according to any one of claims 7 to 9, wherein the buffer status information includes one or more of the following: the buffer size occupied by the media data waiting to be played by the UE, the maximum buffer information that the UE can use to store the first media data, or frame status information of the media data to be played.
- 11. The method according to any one of claims 7 to 10, wherein the buffer status information is carried in radio resource control RRC information or packet data convergence protocol PDCP information received from the UE.
- 12. The method according to claim 2 or 8, wherein the identification information corresponding to the frame type of the first media data is carried in General Packet Radio Service tunneling protocol GTP information of the first media data.
- 13. A data processing method, performed in a user plane function UPF network element, comprising: receiving first parameter information, where the first parameter information is used to indicate the type of first media data; receiving the first media data; determining first identification information according to the first parameter information, where the first identification information is used to identify the frame type of the first media data; and sending second media data to a radio access network RAN device, where the second media data includes the first identification information and the first media data.
- 14. The method according to claim 13, wherein the first parameter information includes: data type information of the first media data, or identification information corresponding to the data type of the first media data, or group of pictures GOP frame sequence information of the first media data and Real-time Transport Protocol RTP information of the first media data.
- 15. The method according to claim 13 or 14, wherein the first identification information is carried in information of a General Packet Radio Service tunneling protocol GTP layer of the second media data.
- 16. A data processing method, performed in a terminal device UE, comprising: sending buffer status information of the UE to a radio access network RAN, where the buffer status information is used for determining a playback strategy for the UE, and the playback strategy is used to indicate a buffer size or a playback rate; and receiving the playback strategy from the RAN.
- 17. The method according to claim 16, wherein the buffer status information includes one or more of the following: the buffer size occupied by the media data waiting to be played by the UE, the maximum buffer information that the UE can use to store the first media data, or frame status information of the media data to be played.
- 18. The method according to claim 16 or 17, wherein the buffer status information is carried in radio resource control RRC information or packet data convergence protocol PDCP information.
- 19. A data processing apparatus, comprising at least one processor and a communication interface, where the at least one processor is configured to invoke a computer program stored in at least one memory, so that the apparatus implements the method according to any one of claims 1 to 6.
- 20. A data processing apparatus, comprising at least one processor and a communication interface, where the at least one processor is configured to invoke a computer program stored in at least one memory, so that the apparatus implements the method according to any one of claims 7 to 12.
- 21. A data processing apparatus, comprising at least one processor and a communication interface, where the at least one processor is configured to invoke a computer program stored in at least one memory, so that the apparatus implements the method according to any one of claims 13 to 15.
- 22. A data processing apparatus, comprising at least one processor and a communication interface, where the at least one processor is configured to invoke a computer program stored in at least one memory, so that the apparatus implements the method according to any one of claims 16 to 18.
- 23. A computer-readable storage medium storing a computer program which, when run, causes an apparatus to perform the method according to any one of claims 1 to 6, or the method according to any one of claims 7 to 12, or the method according to any one of claims 13 to 15, or the method according to any one of claims 16 to 18.
- 24. A chip system, comprising: a processor configured to invoke and run a computer program from a memory, so that a communication apparatus in which the chip system is installed performs the method according to any one of claims 1 to 6, or the method according to any one of claims 7 to 12, or the method according to any one of claims 13 to 15, or the method according to any one of claims 16 to 18.
- 25. A communication system, comprising: a network device configured to perform the method according to any one of claims 1 to 6, or the method according to any one of claims 7 to 12, or the method according to any one of claims 13 to 15; and a terminal device configured to perform the method according to any one of claims 16 to 18.
- 26. A computer program product which, when run on a processor, implements the method according to any one of claims 1 to 18.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180094540.9A CN116918337A (zh) | 2021-03-04 | 2021-03-04 | 数据处理方法和设备 |
PCT/CN2021/079055 WO2022183431A1 (zh) | 2021-03-04 | 2021-03-04 | 数据处理方法和设备 |
EP21928517.8A EP4294023A4 (en) | 2021-03-04 | 2021-03-04 | DATA PROCESSING METHOD AND DEVICE |
US18/459,482 US20230412662A1 (en) | 2021-03-04 | 2023-09-01 | Data processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022183431A1 true WO2022183431A1 (zh) | 2022-09-09 |