Summary of the Invention
To overcome the above defect of the prior art, the present invention provides a method for processing a downlink physical channel in compressed mode, which sends DTX (discontinuous transmission) during the compression gap of the corresponding Node B frame number (BFN, NodeB Frame Number), so as to reduce the system load and reduce the interference that compressed mode causes to the system.
To solve the above technical problem, the present invention comprises:
A method for processing a downlink physical channel in compressed mode, comprising the following steps:
A. mapping the compression information of a compressed frame of a downlink dedicated physical channel onto the corresponding Node B frame number (BFN);
B. marking the compression gap of said corresponding BFN, and saving the result;
C. when processing the absolute grant channel, the relative grant channel and the acknowledgement indicator channel, the high-speed uplink scheduler queries said saved result and sends DTX during the compression gap of said corresponding BFN.
The described method, wherein step A further comprises: according to the chip offset of said downlink dedicated physical channel, mapping the compression information of said compressed frame onto said corresponding BFN and onto the next frame, BFN+1.
The described method, wherein said compression information comprises the starting gap slot, the ending gap slot and the gap length.
The described method, further comprising the following steps:
D. collecting compressed-mode parameters according to the compressed-mode configuration;
E. judging whether the connection frame is a compressed frame and, if so, marking the compression information of said connection frame;
F. calculating the corresponding BFN of said connection frame according to the frame offset and chip offset of the downlink dedicated physical channel and the current BFN.
The described method, further comprising: when the sum of the chip offset (in slots) of the downlink dedicated physical channel and the starting gap slot is greater than 15, the compression information of said connection frame is mapped only onto the next frame, BFN+1.
The described method, further comprising: when the sum of the chip offset (in slots) of the downlink dedicated physical channel and the starting gap slot is less than 15, and the sum of the starting gap slot and the gap length is less than 15, the compression information of said connection frame is mapped only onto the corresponding BFN.
The described method, further comprising: when the sum of the chip offset (in slots) of the downlink dedicated physical channel and the starting gap slot is less than 15, and the sum of the starting gap slot and the gap length is greater than 15, the compression information of said connection frame is mapped onto both the corresponding BFN and the next frame, BFN+1.
The described method, wherein step B further comprises the following steps:
B1. the user side creates a one-dimensional array in the dedicated logical link, the elements of which record the compressed slots and non-compressed slots, with the value of the BFN modulo 4 used as the array index;
B2. calculating the starting gap slot, the gap length and the ending gap slot of the corresponding BFN, and recording them in the corresponding array element.
The described method, wherein step C further comprises: when processing the relative grant channel and the acknowledgement indicator channel, the high-speed uplink scheduler adopts subframe-signature processing, accumulating all the signatures on the relative grant and acknowledgement indicator channels at the same time instant.
The described method, wherein step B further comprises: the system processes the HSUPA downlink dedicated physical channels according to the compression information saved in the array element of the corresponding BFN, and clears the corresponding array element when processing is finished.
The present invention provides a method for processing a downlink physical channel in compressed mode; by adding one mapping operation to the prior art so that HSUPA sends DTX during the compression gap of the corresponding BFN, the system load is reduced, the interference caused by compressed mode is reduced, and the stability of the system is improved.
Embodiment
The preferred embodiments of the present invention are described in some detail below with reference to the accompanying drawings.
The method for processing a downlink physical channel in compressed mode provided by the present invention reduces the interference of the HSUPA downlink physical channels on the system when the UE is in compressed mode, thereby improving the stability of the system.
The method of the present invention comprises the following steps:
A. collecting compressed-mode parameters according to the compressed-mode information delivered in system signaling;
B. following the existing method of handling DPCH compressed mode, processing the next connection frame number (CFN, Connection Frame Number) one frame in advance, and judging whether the CFN frame is a compressed frame; if so, marking the CFN frame as a compressed frame and recording its starting gap slot Nfirst, ending gap slot Nlast and gap length Nlenth; if not, marking the CFN frame as a non-compressed frame;
C. calculating the corresponding BFN of the CFN frame according to the frame offset FrameOffset and chip offset ChipOffset of the DPCH and the current Node B frame number BFN (NodeB Frame Number);
D. mapping the compression information of the CFN frame, namely Nfirst, Nlast and Nlenth, onto the corresponding BFN; because of the influence of the ChipOffset of the DPCH channel, the compression information of a CFN frame may be mapped onto the corresponding BFN and/or onto BFN+1; the concrete mapping relation is decided according to the ChipOffset of the DPCH channel and the Nfirst and Nlenth recorded in step B;
E. according to the compression information of the currently processed CFN frame recorded in step B, processing all the corresponding BFNs determined in step D, marking the compression gap of each corresponding BFN, and saving the result;
F. when processing the E-AGCH, E-RGCH or E-HICH channel, the HSUPA scheduler sends DTX in the compression gaps of the corresponding BFNs marked in step E.
Step D of the present invention further comprises the following steps:
D1. calculating the sum ChipOffset/2560 + Nfirst for the DPCH channel; if the sum is greater than 15, going to step D2; otherwise, going to step D3;
D2. the compression information of the CFN frame is mapped only onto the corresponding BFN+1;
D3. the compression information of the CFN frame is mapped onto the corresponding BFN; to determine whether it is also mapped onto BFN+1, judge whether ((ChipOffset/2560) + Nfirst) % 15 + Nlenth + (ChipOffset % 2560)/2560 (rounded up to the next whole slot) is greater than 15; if so, the compression information of the CFN frame is also mapped onto the corresponding BFN+1; if not, it is not mapped onto BFN+1.
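The decision in steps D1-D3 can be sketched as a small C helper. This is a minimal illustration, not the patent's actual implementation: the function and constant names (classify_gap, CHIPS_PER_SLOT) are invented for this sketch, and the partial-slot contribution of ChipOffset % 2560 is rounded up to one whole slot, consistent with the worked example later in this description.

```c
#include <assert.h>

/* Hypothetical helper sketching step D: decide onto which Node B frames
 * the compression gap of a CFN frame is mapped. */
enum gap_mapping { MAP_BFN_ONLY, MAP_BFN_AND_NEXT, MAP_NEXT_ONLY };

#define CHIPS_PER_SLOT  2560
#define SLOTS_PER_FRAME 15

static enum gap_mapping classify_gap(int chip_offset, int nfirst, int nlenth)
{
    int slot_offset = chip_offset / CHIPS_PER_SLOT;           /* whole slots */
    int partial     = (chip_offset % CHIPS_PER_SLOT) ? 1 : 0; /* partial slot, rounded up */

    if (slot_offset + nfirst > SLOTS_PER_FRAME)               /* step D1 -> D2 */
        return MAP_NEXT_ONLY;

    /* step D3: does the gap spill past slot 14 of the corresponding BFN? */
    int start = (slot_offset + nfirst) % SLOTS_PER_FRAME;
    if (start + nlenth + partial > SLOTS_PER_FRAME)
        return MAP_BFN_AND_NEXT;

    return MAP_BFN_ONLY;
}
```

With the values of the worked example below (ChipOffset = 11264, Nfirst = 8, Nlenth = 7), the helper returns MAP_BFN_AND_NEXT, matching the mapping onto both BFN and BFN+1.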
Step E of the present invention further comprises the following steps:
E1. if step D determined that the compression information of the CFN frame is mapped only onto the corresponding BFN, go to step E3; if it is mapped onto both the corresponding BFN and BFN+1, go to step E4; if it is mapped only onto the corresponding BFN+1, go to step E5;
E2. create a one-dimensional array in the dedicated logical link (RL) information of the UE instance; the array size is 4, i.e. wSlotInGap[4]; each array element is a WORD16 in which bit 0 represents slot 0 and bit 14 represents slot 14; when a bit is set to 1, the corresponding slot lies in the compression gap, otherwise it does not; when processing a corresponding BFN, use the value of BFN % 4 as the array index and record the compression information of that BFN in the array element;
E3. calculate the compressed-mode information on the current corresponding BFN:
starting gap slot = ((ChipOffset/2560) + Nfirst) % 15
gap length = Nlenth + (ChipOffset % 2560)/2560 (rounded up to the next whole slot)
ending gap slot = starting gap slot + gap length − 1
then, according to step E2, record the compressed-mode information of the corresponding BFN in the array element;
E4. calculate the compressed-mode information on the corresponding BFN and BFN+1:
first, the compressed-mode information on the corresponding BFN:
starting gap slot = ((ChipOffset/2560) + Nfirst) % 15
ending gap slot = 14
gap length = 14 − starting gap slot + 1
then, according to step E2, record the compressed-mode information of the corresponding BFN in the array element;
next, the compressed-mode information on the corresponding BFN+1:
starting gap slot = 0
gap length = Nlenth + (ChipOffset % 2560)/2560 (rounded up) − gap length on the corresponding BFN
ending gap slot = starting gap slot + gap length − 1
then, according to step E2, record the compressed-mode information of the corresponding BFN+1 in the array element;
E5. calculate the compressed-mode information on the current BFN+1, using the same process as step E3.
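Steps E2-E5 can be sketched as follows. The array name wSlotInGap[4] and the WORD16 bitmask layout are taken from step E2 above; the helper names (record_gap, mark_slots) and the treatment of the partial slot are illustrative assumptions, not the patent's actual code.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of steps E2-E5: one 16-bit mask per corresponding BFN
 * (index BFN % 4); bit i set means slot i lies in the compression gap. */
typedef uint16_t WORD16;

#define CHIPS_PER_SLOT  2560
#define SLOTS_PER_FRAME 15

static WORD16 wSlotInGap[4];

static void mark_slots(int bfn, int first, int len)
{
    for (int s = first; s < first + len; s++)
        wSlotInGap[bfn % 4] |= (WORD16)(1u << s);
}

/* Record the gap of one compressed CFN frame on BFN and, if needed, BFN+1. */
static void record_gap(int bfn, int chip_offset, int nfirst, int nlenth)
{
    int shifted = chip_offset / CHIPS_PER_SLOT + nfirst;
    int len     = nlenth + ((chip_offset % CHIPS_PER_SLOT) ? 1 : 0);
    int start   = shifted % SLOTS_PER_FRAME;

    if (shifted > SLOTS_PER_FRAME) {              /* step E5: BFN+1 only */
        mark_slots(bfn + 1, start, len);
    } else if (start + len <= SLOTS_PER_FRAME) {  /* step E3: fits in one frame */
        mark_slots(bfn, start, len);
    } else {                                      /* step E4: split across frames */
        int len_bfn = SLOTS_PER_FRAME - start;    /* slots start..14 on BFN */
        mark_slots(bfn, start, len_bfn);
        mark_slots(bfn + 1, 0, len - len_bfn);    /* slots 0.. on BFN+1 */
    }
}
```

For the worked example below (BFN with ChipOffset = 11264, Nfirst = 8, Nlenth = 7), this marks slots 12-14 on the corresponding BFN and slots 0-4 on BFN+1.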
Step F of the present invention further comprises the following steps:
F1. when processing the E-AGCH, E-RGCH and E-HICH channels, whether the UE uses a 10 ms or a 2 ms TTI, the present invention uniformly adopts subframe processing: when subframe 0 is processed at system slot 0, it covers system slots 2, 3 and 4; when subframe 1 is processed at system slot 3, it covers system slots 5, 6 and 7; and so on;
F2. when processing the E-RGCH and E-HICH channels, the present invention accumulates all the signatures on the E-RGCH/E-HICH channels at the same time instant;
F3. for the UE to correctly decode the absolute grant sent on the E-AGCH channel, it must collect the information of a whole subframe; therefore, if some slots of a subframe lie in the compression gap, the UE would discard the information of that subframe. When the system processes E-AGCH compressed mode, for each subframe it processes the serving radio links of the affected UEs according to the corresponding BFN, the index obtained from BFN % 4, and the slots covered by the current subframe; when the corresponding BFN recorded in step E contains a compression gap and any slot of the subframe falls into the compression gap of the affected UE, the system skips this subframe and sends DTX directly;
F4. for the UE to correctly decode the relative grant and HARQ acknowledgement sent on the E-RGCH and E-HICH channels, the information of only some slots of a subframe is sufficient; therefore, when processing E-RGCH/E-HICH compressed mode, before accumulating all the signatures on the E-RGCH/E-HICH channels at the same time instant, the system first judges whether the radio link of each signature is currently in compressed mode; when the corresponding BFN recorded in step E contains a compression gap and a slot of this signature's subframe falls into that compression gap, the system fills DTX into this slot directly and does not accumulate this signature.
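The per-subframe check in steps F3/F4 reduces to testing whether any of a subframe's slots is flagged in the wSlotInGap masks. The sketch below assumes the subframe-to-slot timing of step F1 (subframe n, processed at system slot 3n, carries slots 3n+2 to 3n+4, possibly wrapping into BFN+1); the function name subframe_in_gap is illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint16_t WORD16;
#define SLOTS_PER_FRAME 15

/* Sketch of steps F3/F4: subframe n carries slots 3n+2 .. 3n+4, wrapping
 * into the next frame. If any of those slots lies in the compression gap
 * recorded for the corresponding BFN, the scheduler sends DTX instead. */
static bool subframe_in_gap(const WORD16 gap[4], int bfn, int subframe)
{
    for (int k = 0; k < 3; k++) {
        int slot  = 3 * subframe + 2 + k;
        int frame = bfn + slot / SLOTS_PER_FRAME;  /* wrap into BFN+1 */
        slot %= SLOTS_PER_FRAME;
        if (gap[frame % 4] & (1u << slot))
            return true;                           /* -> send DTX */
    }
    return false;
}
```

Against the worked example below (slots 12-14 of BFN and slots 0-4 of BFN+1 in the gap), subframes 3 and 4 are flagged for DTX while subframe 2 is not.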
To explain the scheme of the present invention more clearly, as shown in Fig. 2, suppose the relevant information of the DPCH is: FrameOffset = 0, ChipOffset = 11264. The processing flow of the present invention is then as follows.
First, collect the compressed-mode parameters according to the compressed-mode information delivered in system signaling.
Then, following the existing method of handling DPCH compressed mode, judge one frame in advance whether each CFN frame of the DPCH channel is a compressed frame, and record the related information.
Then, calculate the corresponding BFN of the CFN frame according to the FrameOffset and ChipOffset of the DPCH channel and the current Node B frame number BFN.
Then, as shown in Fig. 2, when the CFN frame of the DPCH is a compressed frame, its compression information is Nfirst = 8, Nlast = 14, Nlenth = 7. From the ChipOffset and Nfirst of the DPCH, ChipOffset/2560 + Nfirst = 12 < 15, so the compression information of this CFN frame is mapped onto the corresponding BFN; further, ((ChipOffset/2560) + Nfirst) % 15 + Nlenth + (ChipOffset % 2560)/2560 rounded up = 12 + 7 + 1 = 20 > 15, so the compression information of this CFN frame is also mapped onto BFN+1.
Then, calculate the compressed-mode information on the corresponding BFN:
starting gap slot = ((ChipOffset/2560) + Nfirst) % 15 = 12
ending gap slot = 14
gap length = 14 − starting gap slot + 1 = 3
Using BFN % 4 as the array index, record the compression information of this corresponding BFN in the element of the array wSlotInGap[4] under the RL information of the UE instance.
Calculate the compressed-mode information on the corresponding BFN+1:
starting gap slot = 0
gap length = Nlenth + (ChipOffset % 2560)/2560 rounded up − gap length on the corresponding BFN = 7 + 1 − 3 = 5
ending gap slot = starting gap slot + gap length − 1 = 4
Using (BFN+1) % 4 as the array index, record the compression information of the corresponding BFN+1 in the element of the array wSlotInGap[4] under the RL information of the UE instance.
Then, when performing E-AGCH, E-RGCH and E-HICH processing on the corresponding BFN, it is known in advance that slots 12, 13 and 14 of the UE on this corresponding BFN are in the compression gap, and that slots 0, 1, 2, 3 and 4 of the corresponding BFN+1 are in the compression gap. Therefore, when the system processes the E-AGCH channel: when subframe 3 is processed at system slot 9, corresponding to slots 11, 12 and 13, slots 12 and 13 are known to be in the gap, so this subframe is skipped and DTX is sent directly; when subframe 4 is processed at system slot 12, corresponding to slots 14, 0 and 1, slot 14 is known to be in the gap and slots 0 and 1 of the BFN+1 frame are also in the gap, so this subframe is likewise skipped and DTX is sent directly. For the same reason, when the system processes the E-RGCH/E-HICH channels: at system slot 9 (subframe 3, slots 11, 12 and 13), slots 12 and 13 are in the gap, so the subframe is skipped and DTX is sent directly; at system slot 12 (subframe 4, slots 14, 0 and 1), slot 14 and slots 0 and 1 of the BFN+1 frame are in the gap, so the subframe is skipped and DTX is sent directly.
Then, using BFN % 4 as the array index, clear the corresponding element of the array wSlotInGap[4] under the RL information of the UE instance, and begin the loop again according to the above flow. Through the above processing, the load of the system is effectively reduced and the stability of the system is improved.
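The arithmetic of this example can be re-derived mechanically. The two helpers below are purely illustrative (local names, not from the patent) and only reproduce the gap-start and gap-length computations used above, rounding the partial slot left by ChipOffset % 2560 up to one whole slot.

```c
#include <assert.h>

/* Re-deriving the worked example's numbers
 * (FrameOffset = 0, ChipOffset = 11264, Nfirst = 8, Nlenth = 7). */

/* Start slot of the gap on the corresponding BFN. */
static int example_start_slot(int chip_offset, int nfirst)
{
    return (chip_offset / 2560 + nfirst) % 15;
}

/* Total gap length in downlink slots; a partial slot counts as a whole one. */
static int example_gap_len(int chip_offset, int nlenth)
{
    return nlenth + ((chip_offset % 2560) ? 1 : 0);
}
```

The gap starts at slot 12 of the corresponding BFN and spans 8 slots in total: 3 slots (12-14) on BFN and the remaining 5 slots (0-4) on BFN+1, as in the text above.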
To further illustrate the method of the present invention, as shown in Fig. 3, the present invention comprises the following steps:
Step 301: collecting compressed-mode parameters according to the compressed-mode configuration;
Step 302: processing the next connection frame number (CFN, Connection Frame Number) one frame in advance, and judging whether the CFN frame is a compressed frame; if so, marking the CFN frame as a compressed frame, recording its starting gap slot Nfirst, ending gap slot Nlast and gap length Nlenth, and going to step 303; if not, marking the CFN frame as a non-compressed frame and returning to step 301;
Step 303: calculating the corresponding BFN of the CFN frame according to the frame offset FrameOffset and chip offset ChipOffset of the DPCH and the current Node B frame number BFN (NodeB Frame Number);
Step 304: judging whether ChipOffset/2560 + Nfirst is greater than 15; if so, going to step 305; if not, going to step 307;
Step 305: the compression information of the CFN frame is mapped only onto the corresponding BFN+1;
Step 306: recording the compression information on the corresponding BFN+1, saving the result in the element of wSlotInGap[4] indexed by (BFN+1) % 4, and then going to step 312;
Step 307: judging whether ((ChipOffset/2560) + Nfirst) % 15 + Nlenth + (ChipOffset % 2560)/2560 (rounded up) is greater than 15; if so, going to step 308; if not, going to step 310;
Step 308: the compression information of the CFN frame is mapped onto both the corresponding BFN+1 and the corresponding BFN;
Step 309: recording the compression information on the corresponding BFN+1 and BFN, saving the results in the elements of wSlotInGap[4] indexed by (BFN+1) % 4 and BFN % 4, and then going to step 312;
Step 310: the compression information of the CFN frame is mapped only onto the corresponding BFN;
Step 311: recording the compression information on the corresponding BFN, saving the result in the element of wSlotInGap[4] indexed by BFN % 4, and then going to step 312;
Step 312: processing the HSUPA downlink physical channels according to the information saved in the wSlotInGap[4] element for the current corresponding BFN or BFN+1, and clearing the corresponding element of wSlotInGap[4] when processing is finished;
Step 313: judging whether compressed mode has ended; if so, ending the flow; if not, returning to step 302 to continue the loop. Through the above processing, the transmitting power of the HSUPA downlink physical channels is effectively reduced, more power can be reserved for HSDPA, and the efficiency of HSDPA is improved.
In summary, the method for processing a downlink physical channel in compressed mode provided by the present invention adds one mapping operation to the prior art so that HSUPA sends DTX during the compression gap of the corresponding BFN. That is, when processing the HSUPA downlink physical channels, on the basis of the prior-art handling of DPCH compressed mode, a simple query is added: when a slot of the current subframe falls into a compression gap, the current subframe, or the affected slot within the subframe, is skipped. This effectively reduces the load of the system, avoids sudden jumps of the system transmitting power, reduces extra interference, and improves the stability of the system. In particular, when the HSDPA power algorithm uses dynamic adjustment, the transmitting power of the HSUPA downlink physical channels is effectively reduced, so that more power can be reserved for HSDPA, further improving the efficiency of HSDPA.
It should be understood that the above description of the preferred embodiments, although rather detailed, shall not be regarded as limiting the patent protection scope of the present invention. Under the teaching of the present invention, and without departing from the scope protected by the claims of the present invention, those of ordinary skill in the art may make various modifications such as substitutions and simple combinations, all of which fall within the protection scope of the present invention; the scope claimed by the present invention shall be determined by the appended claims.