CN103686853A - Flow control allocation method and device for HSDPA (High Speed Downlink Packet Access) multi-flow technology - Google Patents

Flow control allocation method and device for HSDPA (High Speed Downlink Packet Access) multi-flow technology

Info

Publication number: CN103686853A (application CN201210338621.1A; also published as CN103686853B)
Authority: CN (China)
Prior art keywords: HS-DSCH, target network element, user cache, user
Legal status: Granted; Expired - Fee Related
Inventors: 陈艳丽, 张瑜
Original and current assignee: ZTE Corp
Original language: Chinese (zh)
Application filed by ZTE Corp; priority to CN201210338621.1A

Landscapes

  • Mobile Radio Communication Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a flow control allocation method and device for the HSDPA (High Speed Downlink Packet Access) multi-flow technique. The method comprises: a radio network controller (RNC) indicates the corresponding user buffer size information to each target network element through the User Buffer Size information element (IE) of an HS-DSCH (High Speed Downlink Shared Channel) data frame or HS-DSCH capacity request frame; or the RNC indicates the corresponding user buffer size information to each target network element through the User Buffer Size IE together with a newly added IE of the HS-DSCH data frame or HS-DSCH capacity request frame. The method achieves flow control allocation for HS-DSCH priority queues in the HSDPA multi-flow technique.

Description

Flow control allocation method and device for HSDPA multi-flow technology
Technical field
The present invention relates to the HSDPA multi-flow technology, and in particular to a flow control allocation method and device for the HSDPA multi-flow technology.
Background
With the rapid development of data services, High-Speed Packet Access (HSPA) technology is being applied ever more widely and is evolving toward multi-antenna, multi-carrier operation. For example, 3GPP Release 7 introduced Multiple-Input Multiple-Output (MIMO) technology, which enables a target network element (which may be a base station (NodeB) and/or a Drift Radio Network Subsystem (DRNS)) to send two transport blocks simultaneously to a user equipment (UE) from the same cell via dual antennas. Subsequently, 3GPP Release 8 introduced Dual Cell High Speed Downlink Packet Access (DC-HSDPA), which enables a target network element to send HSDPA data to a UE simultaneously on two frequencies of two neighboring cells. The introduction of these two technologies has greatly improved cell data throughput.
When a UE is at the edge of two co-frequency cells and undergoing soft or softer handover, the air-interface capability of the serving High-Speed Downlink Shared Channel (HS-DSCH) cell is often limited, while a non-serving HS-DSCH cell in the active set still has available resources. If data could also be sent to the UE from the non-serving cell at the same time, the user experience would be greatly improved, and cell data throughput would rise as well. Therefore, 3GPP Release 11 took up the HSDPA multi-flow technology.
Distinguished by the node at which splitting occurs, the HSDPA multi-flow technology is divided into intra-target-network-element splitting and inter-target-network-element splitting. For intra-target-network-element splitting, the target network element must split user data across multiple cells in real time according to the Channel Quality Indicator (CQI) fed back over the air interface. For inter-target-network-element splitting, 3GPP meetings have determined that user data is split at the Radio Link Control (RLC) layer: the Radio Network Controller (RNC) must periodically send HS-DSCH capacity requests to each target network element and distribute RLC data to the different target network elements according to the capability each reports. Because the load and delay of each target network element differ, inter-target-network-element splitting may cause RLC data with larger sequence numbers to reach the UE before RLC data with smaller sequence numbers, producing the so-called skew (SKEW) phenomenon and degrading data throughput.
In the existing HSDPA single-flow technology, when the RNC sends an HS-DSCH capacity request frame or HS-DSCH data frame to a target network element, the User Buffer Size it carries is the total amount of data pending transmission for the corresponding HS-DSCH priority queue of the user. In the HSDPA multi-flow technology, however, the meaning of User Buffer Size in the HS-DSCH capacity request frame is not yet defined, and there is as yet no solution for how to achieve flow control allocation for HS-DSCH priority queues.
Summary of the invention
In view of the above, the main purpose of the present invention is to provide a flow control allocation method and device for the HSDPA multi-flow technology, so as to achieve flow control allocation for HS-DSCH priority queues in the HSDPA multi-flow technology.
To achieve the above purpose, the technical solution of the present invention is realized as follows:
The present invention provides a flow control allocation method for the HSDPA multi-flow technology, comprising:
a Radio Network Controller (RNC) indicates the corresponding user buffer size information to each target network element through the User Buffer Size information element (IE) of a High-Speed Downlink Shared Channel (HS-DSCH) data frame or an HS-DSCH capacity request frame;
the user buffer size information is: the user buffer size that the corresponding HS-DSCH priority queue expects this target network element to use.
Preferably, before the RNC indicates the corresponding user buffer size information to each target network element, the method further comprises: the RNC determines, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use.
Preferably, when the composite loads of the target network elements are equal, or when the composite loads of the target network elements cannot be obtained, the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use is: the total user buffer size of the corresponding HS-DSCH priority queue divided by the total number of target network elements, multiplied by R;
when the composite loads of the target network elements are unequal, the user buffer size that the corresponding HS-DSCH priority queue expects target network element i to use is:
R × S × (1/f_i) / Σ_{j=1..n} (1/f_j)

where R is the redundancy factor of the user buffer size; S is the total user buffer size of the corresponding HS-DSCH priority queue; f_i is the composite load of target network element i; and n is the total number of target network elements.
Preferably, the method further comprises: according to the user buffer size information provided by the RNC, each target network element allocates resources for the user's corresponding HS-DSCH priority queue and feeds the resource allocation result back to the RNC through an HS-DSCH capacity allocation frame.
Preferably, the method further comprises: according to the resource allocation result returned by each target network element, the RNC delivers the corresponding number of bytes of user data to the user from that target network element through HS-DSCH data frames.
The present invention also provides a flow control allocation method for the HSDPA multi-flow technology, comprising:
a Radio Network Controller (RNC) indicates the corresponding user buffer size information to each target network element through the User Buffer Size IE together with a newly added IE of a High-Speed Downlink Shared Channel (HS-DSCH) data frame or an HS-DSCH capacity request frame;
the user buffer size information is: the total user buffer size of the corresponding HS-DSCH priority queue, together with the proportion, relative to that total, of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use.
Preferably, the total user buffer size of the corresponding HS-DSCH priority queue is indicated through the User Buffer Size IE;
and the proportion, relative to the total user buffer size of the corresponding HS-DSCH priority queue, of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use is indicated through the newly added IE.
Preferably, before the RNC indicates the corresponding user buffer size information to each target network element, the method further comprises: the RNC determines, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the proportion, relative to that total, of the user buffer size that the corresponding HS-DSCH priority queue expects each NodeB to use.
Preferably, when the composite loads of the target network elements are equal, or when the composite loads of the target network elements cannot be obtained, the proportion, relative to the total user buffer size of the corresponding HS-DSCH priority queue, of the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use is: R × (1 / total number of target network elements);
when the composite loads of the target network elements are unequal, the proportion, relative to the total user buffer size of the corresponding HS-DSCH priority queue, of the user buffer size that the corresponding HS-DSCH priority queue expects target network element i to use is:

R × (1/f_i) / Σ_{j=1..n} (1/f_j)

where R is the redundancy factor of the user buffer size; f_i is the composite load of target network element i; and n is the total number of target network elements.
Preferably, the method further comprises: according to the user buffer size information provided by the RNC, each target network element allocates resources for the user's corresponding HS-DSCH priority queue and feeds the resource allocation result back to the RNC through an HS-DSCH capacity allocation frame.
Preferably, the method further comprises: according to the resource allocation result returned by each target network element, the RNC delivers the corresponding number of bytes of user data to the user from that target network element through HS-DSCH data frames.
The present invention also provides a flow control allocation device for the HSDPA multi-flow technology, comprising:
an indicating module, configured to indicate the corresponding user buffer size information to each target network element through the User Buffer Size IE of an HS-DSCH data frame or an HS-DSCH capacity request frame; the user buffer size information being: the user buffer size that the corresponding HS-DSCH priority queue expects this target network element to use.
Preferably, the device further comprises: an analysis module, configured to determine, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use, and to provide it to the indicating module.
The present invention also provides another flow control allocation device for the HSDPA multi-flow technology, comprising:
an indicating module, configured to indicate the corresponding user buffer size information to each target network element through the User Buffer Size IE together with a newly added IE of an HS-DSCH data frame or an HS-DSCH capacity request frame; the user buffer size information being: the total user buffer size of the corresponding HS-DSCH priority queue, together with the proportion, relative to that total, of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use.
Preferably, the indicating module is further configured to indicate the total user buffer size of the corresponding HS-DSCH priority queue through the User Buffer Size IE, and to indicate, through the newly added IE, the proportion, relative to that total, of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use.
Preferably, the device further comprises: an analysis module, configured to determine, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the proportion, relative to that total, of the user buffer size that the corresponding HS-DSCH priority queue expects each NodeB to use, and to provide it to the indicating module; the analysis module is further configured to provide the total user buffer size of the corresponding HS-DSCH priority queue to the indicating module.
In the flow control allocation method and device of the HSDPA multi-flow technology of the present invention, the RNC indicates the corresponding User Buffer Size information to each target network element through HS-DSCH data frames or HS-DSCH capacity request frames. Specifically: either the User Buffer Size IE alone indicates the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use; or the User Buffer Size IE indicates the total user buffer size of the corresponding HS-DSCH priority queue while a newly added IE indicates the proportion, relative to that total, of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use.
With the above flow control allocation method, an HSDPA multi-flow user split between target network elements requests an appropriate transmission capability from each target network element, thereby maximizing the total throughput of the HS-DSCH services under the target network elements and improving the utilization of their air-interface resources.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of an HS-DSCH type 1 data frame;
Fig. 2 is a schematic structural diagram of an HS-DSCH type 2 data frame;
Fig. 3 is a schematic structural diagram of an HS-DSCH capacity request frame;
Fig. 4 is a schematic structural diagram of an HS-DSCH type 1 data frame containing the newly added IE;
Fig. 5 is a schematic structural diagram of an HS-DSCH type 2 data frame containing the newly added IE;
Fig. 6 is a schematic structural diagram of an HS-DSCH capacity request frame containing the newly added IE;
Fig. 7 is the HSDPA multi-flow service topology diagram of the embodiments of the present invention;
Fig. 8 is a flow control allocation diagram of the HSDPA multi-flow technology according to Embodiment One of the present invention;
Fig. 9 is a flow control allocation diagram of the HSDPA multi-flow technology according to Embodiment Two of the present invention;
Fig. 10 is a flow control allocation diagram of the HSDPA multi-flow technology according to Embodiment Three of the present invention;
Fig. 11 is a flow control allocation diagram of the HSDPA multi-flow technology according to Embodiment Four of the present invention;
Fig. 12 is a flow control allocation diagram of the HSDPA multi-flow technology according to Embodiment Five of the present invention;
Fig. 13 is a schematic diagram of the flow control allocation device of the HSDPA multi-flow technology according to an embodiment of the present invention.
Detailed description
The core idea of the flow control allocation method of the HSDPA multi-flow technology of the present invention is: the RNC indicates the corresponding User Buffer Size information to each target network element through HS-DSCH data frames or HS-DSCH capacity request frames. There are mainly the following two modes:
Mode one: the RNC indicates the corresponding user buffer size information to each target network element through the User Buffer Size IE of an HS-DSCH data frame or HS-DSCH capacity request frame; the user buffer size information is: the user buffer size that the corresponding HS-DSCH priority queue expects this target network element to use.
Mode two: the RNC indicates the corresponding user buffer size information to each target network element through the User Buffer Size IE together with a newly added IE of an HS-DSCH data frame or HS-DSCH capacity request frame; the user buffer size information is: the total user buffer size of the corresponding HS-DSCH priority queue, together with the proportion, relative to that total, of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use. (From the total user buffer size and the proportion, the user buffer size that this target network element is expected to use can finally be obtained.)
In mode two, the total user buffer size of the corresponding HS-DSCH priority queue is indicated through the User Buffer Size IE, and the proportion, relative to that total, of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use is indicated through the newly added IE.
Preferably, before the RNC indicates the corresponding user buffer size information to each target network element, the RNC also needs to determine, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element (including its air-interface capability, air-interface quality, Iub transport resource load, the target network element's own resource load, and so on), the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use (mode one), or the proportion, relative to the total, of the user buffer size that the corresponding HS-DSCH priority queue expects each NodeB to use (mode two).
Preferably, for mode one:
when the composite loads of the target network elements are equal, or when the composite loads of the target network elements cannot be obtained, the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use is: the total user buffer size of the corresponding HS-DSCH priority queue divided by the total number of target network elements, multiplied by R;
when the composite loads of the target network elements are unequal, the user buffer size that the corresponding HS-DSCH priority queue expects target network element i to use is:

R × S × (1/f_i) / Σ_{j=1..n} (1/f_j)

where R is the redundancy factor of the user buffer size, with a preferred value of 100%-150%; S is the total user buffer size of the corresponding HS-DSCH priority queue; f_i is the composite load of target network element i; and n is the total number of target network elements.
Preferably, for mode two:
when the composite loads of the target network elements are equal, or when the composite loads of the target network elements cannot be obtained, the proportion, relative to the total user buffer size of the corresponding HS-DSCH priority queue, of the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use is: R × (1 / total number of target network elements);
when the composite loads of the target network elements are unequal, the proportion for target network element i is:

R × (1/f_i) / Σ_{j=1..n} (1/f_j)

where R is the redundancy factor of the user buffer size, with a preferred value of 100%-150%; f_i is the composite load of target network element i; and n is the total number of target network elements.
The above is not the only way to determine, from the composite load of a target network element, the expected user buffer size or proportion; any user buffer size determined from the composite load of a target network element that matches that element's transmission capability is applicable.
It should be noted that the total user buffer size of the corresponding HS-DSCH priority queue mentioned above is the total amount of user data buffered at the RNC, and the user buffer size that the corresponding HS-DSCH priority queue expects a target network element to use is the amount of user data that can be delivered to the user through that target network element.
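As an illustrative sketch only, the determination rules above can be expressed as follows. The unequal-load case uses inverse-load weighting, which is one reading consistent with the equal-load special case (it reduces to S/n × R when all loads match) and with the direction of Embodiment Three below (a heavier-loaded element is expected to use less); the function names are of course not part of the invention.

```python
def expected_buffer_sizes(total_size, loads, redundancy=1.0):
    """Mode one: expected user buffer size (bytes) per target network element.

    total_size -- S, total user buffer size of the HS-DSCH priority queue
    loads      -- composite load f_i of each target network element; pass equal
                  values when the loads are equal or cannot be obtained
    redundancy -- R, redundancy factor (preferred range 1.0-1.5 per the text)
    """
    n = len(loads)
    if len(set(loads)) <= 1:
        # Equal (or unknown) loads: S / n * R for every element.
        return [total_size * redundancy / n] * n
    # Unequal loads: weight each element inversely to its composite load.
    inv = [1.0 / f for f in loads]
    denom = sum(inv)
    return [total_size * redundancy * w / denom for w in inv]

def expected_ratios(loads, redundancy=1.0):
    """Mode two: only the per-element proportion is signalled (newly added IE)."""
    n = len(loads)
    if len(set(loads)) <= 1:
        return [redundancy / n] * n
    inv = [1.0 / f for f in loads]
    denom = sum(inv)
    return [redundancy * w / denom for w in inv]
```

For example, with two equally loaded elements and R = 1, `expected_buffer_sizes(1000, [1.0, 1.0])` yields 500 bytes each, matching the S/n × R rule.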
Further, the HS-DSCH data frames mentioned above include the HS-DSCH type 1 data frame (as shown in Fig. 1) and the HS-DSCH type 2 data frame (as shown in Fig. 2); the payload structure of the HS-DSCH capacity request frame is shown in Fig. 3. When the User Buffer Size information is carried through the newly added IE, the HS-DSCH type 1 data frame is as shown in Fig. 4, the HS-DSCH type 2 data frame as shown in Fig. 5, and the payload structure of the HS-DSCH capacity request frame as shown in Fig. 6.
Further, according to the user buffer size information provided by the RNC, each target network element allocates resources for the user's corresponding HS-DSCH priority queue and feeds the resource allocation result back to the RNC through an HS-DSCH capacity allocation frame. According to the resource allocation result returned by each target network element, the RNC delivers the corresponding number of bytes of user data to the user from that target network element through HS-DSCH data frames.
Here, a target network element of the present invention may be a NodeB and/or a Drift Radio Network Subsystem (DRNS).
With the above flow control allocation method, an HSDPA multi-flow user split between target network elements requests an appropriate transmission capability from each target network element, thereby maximizing the total throughput of the HS-DSCH services under the target network elements and improving the utilization of their air-interface resources.
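The request/grant exchange described above (the RNC signals a User Buffer Size per target network element; each element allocates according to its air-interface capability and feeds the result back in a capacity allocation frame; the RNC then delivers the granted bytes) can be sketched roughly as follows. All names here are hypothetical; the real exchange uses Iub frame protocol messages, not function calls.

```python
def flow_control_round(rnc_buffer, requests, air_capacity):
    """One request/grant round between the RNC and n target network elements.

    rnc_buffer   -- bytes of user data currently buffered at the RNC
    requests     -- User Buffer Size signalled to each element (bytes)
    air_capacity -- what each element can actually serve this round (bytes);
                    a stand-in for its air-interface capability information
    Returns the bytes delivered to the UE through each element.
    """
    # Each element grants at most what was requested and at most its capacity
    # (the grant is what it would report in an HS-DSCH capacity allocation frame).
    grants = [min(req, cap) for req, cap in zip(requests, air_capacity)]
    delivered = []
    for g in grants:
        # The RNC never sends more than it still has buffered.
        d = min(g, rnc_buffer)
        rnc_buffer -= d
        delivered.append(d)
    return delivered
```

For instance, `flow_control_round(1000, [600, 600], [600, 300])` delivers 600 bytes via the first element and 300 via the second: the second element's smaller grant, not the RNC's request, bounds what it carries.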
The technical solution of the present invention is described below through specific embodiments. The HSDPA multi-flow service topology on which the following embodiments are based is shown in Fig. 7, where the target network elements are NodeBs by way of example.
Assume that the applicable scenario of the HSDPA multi-flow service topology of Fig. 7 is as follows:
1. RNC1 is managed by CN1;
2. NodeB1 and NodeB2 are managed by RNC1;
3. CELL1 is managed by NodeB1, and CELL2 by NodeB2; CELL1 and CELL2 have a common coverage area;
4. UE1 resides in the common coverage area of CELL1 and CELL2; a certain service of UE1 is in the HSDPA dual-flow state, and UE1 is an HSDPA multi-flow user split between NodeB1 and NodeB2;
5. the subscribed rate of UE1 is 40 Mbit/s.
Embodiment One
Step 110: RNC1 receives the user data of UE1 from CN1 and puts it into the buffer; the total amount of user data is Y1 bytes.
Step 120: RNC1 monitors that the composite loads of NodeB1 and NodeB2 (the composite load of a NodeB covers the corresponding cell's air-interface resources, Iub transport resources, the NodeB's own resources, and so on) are both good, and that NodeB1 and NodeB2 are both lightly loaded (the composite loads of NodeB1 and NodeB2 can be regarded as equal).
Step 130: RNC1 sends an HS-DSCH data frame to each of NodeB1 and NodeB2, with the User Buffer Size IE in each filled in as Y1*0.5 bytes; or RNC1 sends an HS-DSCH capacity request frame to each of NodeB1 and NodeB2, with the User Buffer Size IE in each filled in as Y1*0.5 bytes.
Step 140: according to the User Buffer Size information provided by RNC1 and its own air-interface capability information, NodeB1 allocates resources for the corresponding HS-DSCH priority queue of UE1, i.e., the user data NodeB1 allocates for the corresponding HS-DSCH priority queue of UE1 is Y1*0.5 bytes, and it feeds this allocation result back to RNC1 through an HS-DSCH capacity allocation frame.
Step 150: according to the User Buffer Size information provided by RNC1 and its own air-interface capability information, NodeB2 allocates resources for the corresponding HS-DSCH priority queue of UE1, i.e., the user data NodeB2 allocates for the corresponding HS-DSCH priority queue of UE1 is Y1*0.5 bytes, and it feeds this allocation result back to RNC1 through an HS-DSCH capacity allocation frame.
Step 160: according to the resource allocation results returned by NodeB1 and NodeB2 for the HS-DSCH priority queue, RNC1 divides the buffered user data into two parts and delivers them to UE1 through HS-DSCH data frames from NodeB1 and NodeB2 respectively. In this embodiment, RNC1 delivers Y1*0.5 bytes of the buffered user data to UE1 through NodeB1 and Y1*0.5 bytes through NodeB2.
Embodiment Two
Step 210: RNC1 receives the user data of UE1 from CN1 and puts it into the buffer; the total amount of user data is Y1 bytes.
Step 220: RNC1 monitors that the composite loads of NodeB1 and NodeB2 (the composite load of a NodeB covers the corresponding cell's air-interface resources, Iub transport resources, the NodeB's own resources, and so on) are both good, and that NodeB1 and NodeB2 are both lightly loaded (the composite loads of NodeB1 and NodeB2 can be regarded as equal).
Step 230: RNC1 sends an HS-DSCH data frame to each of NodeB1 and NodeB2. Considering factors such as possible fluctuations in data volume and NodeB load, RNC1 notifies each of NodeB1 and NodeB2 of a larger User Buffer Size: for example, the User Buffer Size IE in each HS-DSCH data frame is filled in as Y1*0.5*R bytes; or RNC1 sends an HS-DSCH capacity request frame to each of NodeB1 and NodeB2, with the User Buffer Size IE in each filled in as Y1*0.5*R bytes. Here R is the redundancy factor of the User Buffer Size; the RNC determines R according to its own policy, for example according to the traffic characteristics, with R in the range 100%-150%.
Step 240: according to the User Buffer Size information provided by RNC1 and its own air-interface capability information, NodeB1 allocates resources for the corresponding HS-DSCH priority queue of UE1, i.e., the user data NodeB1 allocates for the corresponding HS-DSCH priority queue of UE1 is Y1*0.5*R bytes, and it feeds this allocation result back to RNC1 through an HS-DSCH capacity allocation frame.
Step 250: according to the User Buffer Size information provided by RNC1 and its own air-interface capability information, NodeB2 allocates resources for the corresponding HS-DSCH priority queue of UE1, i.e., the user data NodeB2 allocates for the corresponding HS-DSCH priority queue of UE1 is Y1*0.5*R bytes, and it feeds this allocation result back to RNC1 through an HS-DSCH capacity allocation frame.
Step 260: according to the resource allocation results returned by NodeB1 and NodeB2 for the HS-DSCH priority queue, RNC1 divides the buffered user data into two parts and delivers them to UE1 through HS-DSCH data frames from NodeB1 and NodeB2 respectively. In this embodiment, RNC1 delivers Y1*0.5*R bytes of the buffered user data to UE1 through NodeB1 and Y1*0.5*R bytes through NodeB2.
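The equal-load requests of Embodiments One and Two amount to one small calculation; the sketch below uses an example Y1 value (the text leaves Y1 symbolic) and a sample R within the suggested 100%-150% range.

```python
def per_nodeb_request(total_bytes, n_nodebs, redundancy=1.0):
    """User Buffer Size the RNC fills in per NodeB when composite loads are equal.

    Embodiment One corresponds to redundancy=1.0; Embodiment Two over-requests
    by a redundancy factor R (suggested range 1.0-1.5) to absorb fluctuations
    in data volume and in NodeB load.
    """
    return total_bytes / n_nodebs * redundancy

Y1 = 4_000_000                           # example total buffered data, bytes
print(per_nodeb_request(Y1, 2))          # Embodiment One: 2000000.0
print(per_nodeb_request(Y1, 2, 1.25))    # Embodiment Two: 2500000.0
```

Note the design trade-off the redundancy factor implies: over-requesting lets the NodeBs reserve headroom against bursts, at the cost of the combined requests exceeding the data actually buffered.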
Embodiment tri-
Step 310, RNC1 receives the user data of UE1 and puts into buffering area from CN1, user data total amount is Y1 bytes.
Step 320, RNC1 monitors the synthetic load situation of NodeB1 and the synthetic load situation of NodeB2 (the synthetic load situation of NodeB comprises the situations such as corresponding community interface-free resources, Iub transfer resource, NodeB own resource), wherein, the synthetic load of the relative NodeB2 of NodeB1 weighs 20%.
Step 330, RNC1 sends respectively HS-DSCH Frame to NodeB1 and NodeB2, consider the factors such as possible data volume fluctuation and NodeB load fluctuation, RNC1 is respectively to the larger User Buffer Size of NodeB1 and NodeB2 notice: such as, send to cell User Buffer Size in the HS-DSCH Frame of NodeB1 to fill in Y1*0.4*R bytes, send to the cell User Buffer Size in the HS-DSCH Frame of NodeB2 to fill in Y1*0.6*R bytes; Or RNC1 sends to the cell User Buffer Size in the HS-DSCH capability requests frame of NodeB1 to fill in Y1*0.4*R bytes, sends to the cell User Buffer Size in the HS-DSCH capability requests frame of NodeB2 to fill in Y1*0.6*R bytes; Wherein R is the redundancy factor of User Buffer Size; RNC is according to the definite R of self strategy, such as determining according to traffic performance, such as R scope is 100%-150%.
Step 340, the User Buffer Size information that NodeB1 provides according to RNC1 and the ability information etc. of eating dishes without rice or wine, corresponding HS-DSCH priority query Resources allocation for UE1, be that NodeB1 is that the user data that the corresponding HS-DSCH priority query of UE1 distributes is Y1*0.4*R bytes, and by HS-DSCH capability distribution frame, this allocation result fed back to RNC1.
Step 350, the User Buffer Size information that NodeB2 provides according to RNC1 and the ability information etc. of eating dishes without rice or wine, corresponding HS-DSCH priority query Resources allocation for UE1, be that NodeB2 is that the user data that the corresponding HS-DSCH priority query of UE1 distributes is Y1*0.6*R bytes, and by HS-DSCH capability distribution frame, this allocation result fed back to RNC1.
Step 360, according to the resource allocation results of the HS-DSCH priority queue returned by NodeB1 and NodeB2 respectively, RNC1 divides the buffered user data into two parts and delivers them to UE1 through HS-DSCH data frames from NodeB1 and NodeB2 respectively. In this embodiment, RNC1 delivers Y1*0.4*R bytes of the buffered user data to UE1 through NodeB1, and delivers Y1*0.6*R bytes of the buffered user data to UE1 through NodeB2.
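The load-weighted split in steps 320 to 360 can be sketched in a few lines of Python. This is only an illustrative sketch of the idea, not the claimed implementation: the function name is invented, and the rule that each NodeB's share is proportional to its spare capacity (1 - load) is an assumption chosen so that composite loads differing by 20 percentage points yield the 40%/60% split of this embodiment.

```python
def split_user_buffer(total_bytes, loads, redundancy=1.2):
    """Split a priority queue's buffered user data across NodeBs.

    loads: composite load of each NodeB as a fraction in [0, 1]
    (air-interface, Iub transport and NodeB-internal resources combined).
    Each NodeB's share is proportional to its spare capacity (1 - load)
    and inflated by the redundancy factor R (100%-150%) to absorb
    data-volume and load fluctuations.
    """
    spare = [1.0 - f for f in loads]
    total_spare = sum(spare)
    return [total_bytes * (s / total_spare) * redundancy for s in spare]

# Embodiment Three: NodeB1 is loaded 20 percentage points heavier than
# NodeB2 (e.g. 60% vs 40%), so the notified sizes are Y1*0.4*R and Y1*0.6*R.
Y1 = 10000
alloc = split_user_buffer(Y1, [0.6, 0.4], redundancy=1.2)
# alloc is approximately [4800.0, 7200.0]
```

With equal loads the same function degenerates to an even split, matching the equal-load behaviour described later for Embodiment Five.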
Embodiment Four
Step 410, RNC1 receives the user data of UE1 from CN1 and puts it into the buffer; the total amount of user data is Y1 bytes.
Step 420, RNC1 has not yet obtained the composite load of NodeB1 or the composite load of NodeB2.
Step 430, RNC1 sends an HS-DSCH data frame to NodeB1 and to NodeB2 respectively. Considering factors such as possible data volume fluctuation and NodeB load fluctuation, RNC1 notifies NodeB1 and NodeB2 of a relatively larger User Buffer Size: for example, the User Buffer Size information element in each HS-DSCH data frame is filled with Y1*0.5*R bytes; alternatively, RNC1 sends an HS-DSCH capability request frame to NodeB1 and to NodeB2 respectively, with the User Buffer Size information element in each filled with Y1*0.5*R bytes. Here R is the redundancy factor of the User Buffer Size; the RNC determines R according to its own policy, for example according to the traffic characteristics, with R typically in the range 100%-150%.
In this case, the processing of steps 440-460 is the same as that of steps 240-260.
Embodiment Five
Step 510, RNC1 receives the user data of UE1 from CN1 and puts it into the buffer; the total amount of user data is Y1 bytes.
Step 520, RNC1 monitors that the composite load of NodeB1 and the composite load of NodeB2 (the composite load of a NodeB covers the air-interface resources of the corresponding cell, the Iub transport resources, the NodeB's own resources, and so on) are both good, and that both NodeB1 and NodeB2 are in a lightly loaded state (the composite loads of NodeB1 and NodeB2 can be regarded as equal).
Step 530, RNC1 sends an HS-DSCH data frame to NodeB1 and to NodeB2 respectively, in which the User Buffer Size information element is filled with Y1 bytes and the newly added User Buffer Size Ratio information element is filled with 50, representing 50% (or is filled with 50R, representing 50%*R); alternatively, RNC1 sends an HS-DSCH capability request frame to NodeB1 and to NodeB2 respectively, in which the User Buffer Size information element is filled with Y1 bytes and the newly added User Buffer Size Ratio information element is filled with 50, representing 50%.
Step 540, according to the User Buffer Size information provided by RNC1 and to information such as the air-interface capability, NodeB1 allocates resources for the HS-DSCH priority queue corresponding to UE1; that is, the user data volume that NodeB1 allocates for the HS-DSCH priority queue corresponding to UE1 is Y1 bytes * 50%, and NodeB1 feeds this allocation result back to RNC1 through an HS-DSCH capability allocation frame.
Step 550, according to the User Buffer Size information provided by RNC1 and to information such as the air-interface capability, NodeB2 allocates resources for the HS-DSCH priority queue corresponding to UE1; that is, the user data volume that NodeB2 allocates for the HS-DSCH priority queue corresponding to UE1 is Y1 bytes * 50%, and NodeB2 feeds this allocation result back to RNC1 through an HS-DSCH capability allocation frame.
Step 560, according to the resource allocation results of the HS-DSCH priority queue returned by NodeB1 and NodeB2 respectively, RNC1 divides the buffered user data into two parts and delivers them to UE1 through HS-DSCH data frames from NodeB1 and NodeB2 respectively. In this embodiment, RNC1 delivers 50% of the Y1 bytes of buffered user data to UE1 through NodeB1, and the other 50% to UE1 through NodeB2.
In this embodiment, considering factors such as possible data volume fluctuation and NodeB load fluctuation, RNC1 may also notify NodeB1 and NodeB2 of a larger User Buffer Size; that is, notifying a larger User Buffer Size according to the composite loads of NodeB1 and NodeB2 may also be done in the manner of Embodiment Two, Three or Four.
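The pair of information elements used in mode two (the total buffer size plus a per-NodeB percentage) can be pictured with a small sketch. The class, field and method names below are invented for illustration and are not the actual 3GPP frame-protocol layout; only the semantics follow the embodiment.

```python
from dataclasses import dataclass

@dataclass
class HsDschFrameIEs:
    # User Buffer Size IE: total bytes buffered for the HS-DSCH
    # priority queue (Y1 in the embodiment).
    user_buffer_size: int
    # Newly added User Buffer Size Ratio IE: percentage of the total
    # buffer this NodeB is expected to serve (50 means 50%).
    user_buffer_size_ratio: int

    def expected_bytes(self) -> float:
        """User data this NodeB is expected to allocate for the queue."""
        return self.user_buffer_size * self.user_buffer_size_ratio / 100

# Embodiment Five: both NodeBs are lightly and equally loaded, so the
# same pair of IEs (Y1 bytes, 50) is sent to each of them.
Y1 = 10000
ies = HsDschFrameIEs(user_buffer_size=Y1, user_buffer_size_ratio=50)
# each NodeB allocates Y1 * 50% = 5000 bytes
```

Sending a ratio rather than a precomputed byte count lets the RNC keep one total in the User Buffer Size IE while steering the split through the new IE.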
To implement the above flow control distribution method, the present invention further provides a flow control distribution device for the HSDPA multi-stream technique, which can be applied in an RNC. As shown in Fig. 13, the device comprises an indicating module and an analysis module.
Corresponding to mode one of the above method:
The indicating module is configured to indicate corresponding user buffer size information to each target network element through the user buffer size (User Buffer Size) information element of an HS-DSCH data frame or of an HS-DSCH capability request frame; the user buffer size information is: the user buffer size that the corresponding HS-DSCH priority queue expects this target network element to use.
The analysis module is configured to determine, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use, and to provide it to the indicating module.
Corresponding to mode two of the above method:
The indicating module is configured to indicate corresponding user buffer size information to each target network element through the user buffer size (User Buffer Size) information element and a newly added information element of an HS-DSCH data frame or of an HS-DSCH capability request frame; the user buffer size information is: the total user buffer size of the corresponding HS-DSCH priority queue, and the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use to the total user buffer size of the corresponding HS-DSCH priority queue.
The indicating module is further configured to indicate the total user buffer size of the corresponding HS-DSCH priority queue through the User Buffer Size information element, and to indicate, through the newly added information element, the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use to the total user buffer size of the corresponding HS-DSCH priority queue.
The analysis module is configured to determine, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects each NodeB to use to the total user buffer size of the corresponding HS-DSCH priority queue, and to provide it to the indicating module; it is further configured to provide the total user buffer size of the corresponding HS-DSCH priority queue to the indicating module.
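The division of labour between the two modules might be sketched as follows. Class and method names are assumptions made for this example; the even split multiplied by the redundancy factor R mirrors the equal-load (or unknown-load) case described above, while the spare-capacity weighting for unequal loads is an illustrative choice consistent with Embodiment Three.

```python
class AnalysisModule:
    """Determines the buffer share each target network element is expected to use."""

    def shares(self, total_size, n, loads=None, redundancy=1.2):
        # Composite loads unavailable or all equal: even split, times R.
        if loads is None or len(set(loads)) == 1:
            return [total_size * redundancy / n] * n
        # Unequal composite loads: weight each share by spare capacity.
        spare = [1.0 - f for f in loads]
        return [total_size * s / sum(spare) * redundancy for s in spare]


class IndicatingModule:
    """Writes each share into the User Buffer Size IE of the outgoing frame."""

    def build_frames(self, shares):
        return [{"user_buffer_size": int(s)} for s in shares]


# RNC-side wiring for mode one: the analysis module feeds the indicating
# module, which fills one HS-DSCH data frame per target network element.
analysis, indicating = AnalysisModule(), IndicatingModule()
frames = indicating.build_frames(analysis.shares(total_size=10000, n=2))
# two frames, each carrying 10000 * 1.2 / 2 = 6000 bytes
```

For mode two, the indicating module would instead carry the total size plus the per-element ratio in each frame.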
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (16)

1. A flow control distribution method for the HSDPA multi-stream technique, characterized by comprising:
a radio network controller (RNC) indicating corresponding user buffer size information to each target network element through a user buffer size (User Buffer Size) information element of a high-speed downlink shared channel (HS-DSCH) data frame or of an HS-DSCH capability request frame;
wherein the user buffer size information is: the user buffer size that the corresponding HS-DSCH priority queue expects this target network element to use.
2. The flow control distribution method of the HSDPA multi-stream technique according to claim 1, characterized in that, before the RNC indicates the corresponding user buffer size information to each target network element, the method further comprises: the RNC determining, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use.
3. The flow control distribution method of the HSDPA multi-stream technique according to claim 2, characterized in that:
when the composite loads of the target network elements are equal, or when the composite load of each target network element cannot be obtained, the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use is: the total user buffer size of the corresponding HS-DSCH priority queue, divided by the total number of target network elements and multiplied by R;
when the composite loads of the target network elements are unequal, the user buffer size that the corresponding HS-DSCH priority queue expects this target network element to use is:
Figure FDA00002134855000011
wherein R is the redundancy factor of the user buffer size; S is the total user buffer size of the corresponding HS-DSCH priority queue; f_i is the composite load of this target network element; and n is the total number of target network elements.
4. The flow control distribution method of the HSDPA multi-stream technique according to any one of claims 1 to 3, characterized in that the method further comprises: each target network element allocating resources for the user's corresponding HS-DSCH priority queue according to the user buffer size information provided by the RNC, and feeding the resource allocation result back to the RNC through an HS-DSCH capability allocation frame.
5. The flow control distribution method of the HSDPA multi-stream technique according to claim 4, characterized in that the method further comprises: the RNC delivering, according to the resource allocation results returned by the target network elements, the corresponding bytes of user data to the user from the corresponding target network elements through HS-DSCH data frames.
6. A flow control distribution method for the HSDPA multi-stream technique, characterized by comprising:
a radio network controller (RNC) indicating corresponding user buffer size information to each target network element through a user buffer size (User Buffer Size) information element and a newly added information element of a high-speed downlink shared channel (HS-DSCH) data frame or of an HS-DSCH capability request frame;
wherein the user buffer size information is: the total user buffer size of the corresponding HS-DSCH priority queue, and the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use to the total user buffer size of the corresponding HS-DSCH priority queue.
7. The flow control distribution method of the HSDPA multi-stream technique according to claim 6, characterized in that:
the total user buffer size of the corresponding HS-DSCH priority queue is indicated through the user buffer size information element;
the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use to the total user buffer size of the corresponding HS-DSCH priority queue is indicated through the newly added information element.
8. The flow control distribution method of the HSDPA multi-stream technique according to claim 7, characterized in that, before the RNC indicates the corresponding user buffer size information to each target network element, the method further comprises: the RNC determining, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects each NodeB to use to the total user buffer size of the corresponding HS-DSCH priority queue.
9. The flow control distribution method of the HSDPA multi-stream technique according to claim 8, characterized in that:
when the composite loads of the target network elements are equal, or when the composite load of each target network element cannot be obtained, the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects this target network element to use to the total user buffer size of the corresponding HS-DSCH priority queue is: R × (1 / total number of target network elements);
when the composite loads of the target network elements are unequal, the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects this target network element to use to the total user buffer size of the corresponding HS-DSCH priority queue is:
Figure FDA00002134855000031
wherein R is the redundancy factor of the user buffer size; f_i is the composite load of this target network element; and n is the total number of target network elements.
10. The flow control distribution method of the HSDPA multi-stream technique according to any one of claims 6 to 9, characterized in that the method further comprises: each target network element allocating resources for the user's corresponding HS-DSCH priority queue according to the user buffer size information provided by the RNC, and feeding the resource allocation result back to the RNC through an HS-DSCH capability allocation frame.
11. The flow control distribution method of the HSDPA multi-stream technique according to claim 10, characterized in that the method further comprises: the RNC delivering, according to the resource allocation results returned by the target network elements, the corresponding bytes of user data to the user from the corresponding target network elements through HS-DSCH data frames.
12. A flow control distribution device for the HSDPA multi-stream technique, characterized by comprising:
an indicating module, configured to indicate corresponding user buffer size information to each target network element through a user buffer size (User Buffer Size) information element of an HS-DSCH data frame or of an HS-DSCH capability request frame; wherein the user buffer size information is: the user buffer size that the corresponding HS-DSCH priority queue expects this target network element to use.
13. The flow control distribution device of the HSDPA multi-stream technique according to claim 12, characterized in that the device further comprises: an analysis module, configured to determine, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the user buffer size that the corresponding HS-DSCH priority queue expects each target network element to use, and to provide it to the indicating module.
14. A flow control distribution device for the HSDPA multi-stream technique, characterized by comprising:
an indicating module, configured to indicate corresponding user buffer size information to each target network element through a user buffer size (User Buffer Size) information element and a newly added information element of an HS-DSCH data frame or of an HS-DSCH capability request frame; wherein the user buffer size information is: the total user buffer size of the corresponding HS-DSCH priority queue, and the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use to the total user buffer size of the corresponding HS-DSCH priority queue.
15. The flow control distribution device of the HSDPA multi-stream technique according to claim 14, characterized in that:
the indicating module is further configured to indicate the total user buffer size of the corresponding HS-DSCH priority queue through the User Buffer Size information element, and to indicate, through the newly added information element, the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects this NodeB to use to the total user buffer size of the corresponding HS-DSCH priority queue.
16. The flow control distribution device of the HSDPA multi-stream technique according to claim 15, characterized in that:
the device further comprises: an analysis module, configured to determine, according to the total user buffer size of the corresponding HS-DSCH priority queue and the composite load of each target network element, the ratio of the user buffer size that the corresponding HS-DSCH priority queue expects each NodeB to use to the total user buffer size of the corresponding HS-DSCH priority queue, and to provide it to the indicating module; and further configured to provide the total user buffer size of the corresponding HS-DSCH priority queue to the indicating module.
CN201210338621.1A 2012-09-13 2012-09-13 Flow control distribution method and device for HSDPA multi-stream technology Expired - Fee Related CN103686853B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210338621.1A CN103686853B (en) Flow control distribution method and device for HSDPA multi-stream technology

Publications (2)

Publication Number Publication Date
CN103686853A true CN103686853A (en) 2014-03-26
CN103686853B CN103686853B (en) 2019-07-09

Family

ID=50322833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210338621.1A Expired - Fee Related CN103686853B (en) Flow control distribution method and device for HSDPA multi-stream technology

Country Status (1)

Country Link
CN (1) CN103686853B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060034168A1 (en) * 2004-08-13 2006-02-16 Alcatel Method for data flow control in a mobile communications system
CN1885965A (en) * 2006-07-03 2006-12-27 华为技术有限公司 High-speed downlink packet access service scheduling method
WO2007024167A1 (en) * 2005-08-26 2007-03-01 Telefonaktiebolaget L M Ericsson (Publ) Method and arrangement for flow control in umts using information in ubs field
CN101128015A (en) * 2006-08-15 2008-02-20 大唐移动通信设备有限公司 User access method and applied communication devices for high-speed downlink packet access system
CN101459588A (en) * 2007-12-13 2009-06-17 华为技术有限公司 Method, apparatus and system for flow control

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DENG Wei, ZHANG Guoping: "Admission control strategy for high-speed downlink packet access in WCDMA ***", Modern Electronics Technique *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105323801A (en) * 2014-07-31 2016-02-10 中兴通讯股份有限公司 Information sending method, information sending device, DRNC and SRNC
WO2020187324A1 (en) * 2019-03-20 2020-09-24 华为技术有限公司 Communication method and apparatus
CN111726379A (en) * 2019-03-20 2020-09-29 华为技术有限公司 Communication method and device
CN111726379B (en) * 2019-03-20 2021-11-19 华为技术有限公司 Communication method and device
US11963034B2 (en) 2019-03-20 2024-04-16 Huawei Technologies Co., Ltd. Communication method and apparatus

Also Published As

Publication number Publication date
CN103686853B (en) 2019-07-09


Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190709

Termination date: 20200913