Summary of the invention
To solve the above defects of the prior art, the present invention provides a processing method for downlink physical channel compressed mode, which sends DTX in the transmission gaps of the corresponding NodeB frame number (BFN, NodeB Frame Number), thereby reducing the system load and the interference caused by compressed mode.
To solve the above technical problems, the present invention comprises:
A processing method for downlink physical channel compressed mode, comprising the following steps:
A. the downlink dedicated physical channel (DPCH) maps the gap information of a compressed frame onto the corresponding frame number;
B. the transmission gap of said corresponding frame number is marked, and the result is saved;
C. when processing the absolute grant channel (E-AGCH), the relative grant channel (E-RGCH) and the HARQ acknowledgement indicator channel (E-HICH), the HSUPA scheduler queries said saved result and sends a discontinuous transmission (DTX) signal in the transmission gap of said corresponding frame number.
In the described processing method, step A further comprises: according to the chip offset of said downlink dedicated physical channel, the gap information of said compressed frame is mapped onto said corresponding frame number and onto the frame number one greater.
In the described processing method, said gap information comprises the gap start slot, the gap end slot and the gap length.
In the described processing method, the method further comprises the following steps:
D. compressed-mode parameters are collected according to the compressed-mode configuration;
E. whether a connection frame is a compressed frame is judged; if so, the gap information of said connection frame is marked;
F. the corresponding frame number of said connection frame is calculated from the frame offset and chip offset of the downlink dedicated physical channel and the current frame number.
In the described processing method, the method further comprises: when the sum of the chip offset of the downlink dedicated physical channel (expressed in slots) and the gap start slot is greater than 15, the gap information of said connection frame is mapped only onto the frame number one greater than the corresponding frame number.
In the described processing method, the method further comprises: when the sum of the chip offset of the downlink dedicated physical channel (expressed in slots) and the gap start slot is less than 15, and the sum of the gap start slot and the gap length is also less than 15, the gap information of said connection frame is mapped only onto the corresponding frame number.
In the described processing method, the method further comprises: when the sum of the chip offset of the downlink dedicated physical channel (expressed in slots) and the gap start slot is less than 15, and the sum of the gap start slot and the gap length is greater than 15, the gap information of said connection frame is mapped both onto the corresponding frame number and onto the frame number one greater.
In the described processing method, step B further comprises the following steps:
B1. a one-dimensional array is created in the dedicated radio link information of the UE instance; each array element records which slots of a frame are compressed (inside a transmission gap) and which are not, and the value of the frame number modulo 4 is used as the array index;
B2. the gap start slot, gap length and gap end slot of the corresponding frame number are calculated and recorded in said array element.
In the described processing method, step C further comprises: when processing the relative grant channel and the HARQ acknowledgement indicator channel, the HSUPA scheduler adopts per-subframe processing and accumulates, at the same instant, all the signatures carried on the relative grant and HARQ acknowledgement indicator channels.
In the described processing method, step B further comprises: the system processes the HSUPA downlink dedicated physical channels according to the gap information saved in the array element of the corresponding frame number, and clears the corresponding array element when the processing is finished.
The present invention provides a processing method for downlink physical channel compressed mode that adds one mapping step to the prior art, so that HSUPA sends DTX in the transmission gaps of the corresponding frame number (BFN); this reduces the system load, reduces the interference caused by compressed mode, and improves the stability of the system.
Embodiment
The preferred embodiments of the present invention will now be described in some detail with reference to the accompanying drawings.
The processing method for downlink physical channel compressed mode provided by the present invention reduces the interference of the HSUPA downlink physical channels on the system when the UE is in compressed mode, and improves the stability of the system.
The processing method of the present invention comprises the following steps:
A. compressed-mode parameters are collected according to the compressed-mode information delivered in the system signalling;
B. following the existing method of handling DPCH compressed mode, the next CFN (Connection Frame Number) frame is processed one frame in advance, and whether that CFN frame is a compressed frame is judged; if so, the CFN frame is marked as a compressed frame, and its gap start slot Nfirst, gap end slot Nlast and gap length Nlength are marked; if not, the CFN frame is marked as not compressed;
C. the corresponding NodeB frame number BFN (NodeB Frame Number) of the CFN frame is calculated from the frame offset FrameOffset and chip offset ChipOffset of the DPCH and the current frame number;
D. the gap information Nfirst, Nlast and Nlength of the CFN frame on the DPCH is mapped onto the corresponding frame number BFN; because of the ChipOffset of the DPCH channel, the gap information of the CFN frame may be mapped onto BFN, onto BFN+1, or onto both; the concrete mapping is decided from the ChipOffset of the DPCH channel and the Nfirst and Nlength recorded in step B;
E. according to the gap information of the currently processed CFN frame recorded in step B, all the corresponding frame numbers BFN determined in step D are processed, the transmission gaps of each corresponding BFN are marked, and the result is saved;
F. when processing the E-AGCH, E-RGCH or E-HICH channel, the HSUPA scheduler sends DTX in the transmission gaps of the corresponding BFN marked in step E.
Step D of the present invention further comprises the following steps:
D1. whether the sum ChipOffset/2560 + Nfirst of the DPCH channel is greater than 15 is calculated; if so, step D2 is entered; otherwise, step D3 is entered;
D2. the gap information of the CFN frame is mapped only onto the corresponding frame number BFN+1;
D3. the gap information of said CFN frame is mapped onto the corresponding frame number BFN; whether it is also mapped onto BFN+1 is decided by judging whether ((ChipOffset/2560) + Nfirst) % 15 + Nlength + (ChipOffset % 2560)/2560 is greater than 15, the fractional term counting as one slot whenever ChipOffset is not a whole number of slots; if so, the gap information of the CFN frame is also mapped onto the corresponding frame number BFN+1; if not, it is not mapped onto BFN+1.
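Steps D1 to D3 amount to two threshold tests. A minimal Python sketch of the decision follows; it is illustrative only — the function name, the tuple return values and the treatment of the fractional chip offset as one extra slot are assumptions, not the claimed implementation:

```python
CHIPS_PER_SLOT = 2560
SLOTS_PER_FRAME = 15

def gap_target_frames(chip_offset, n_first, n_length):
    """Decide which frame(s) the gap of a compressed CFN frame maps onto:
    'BFN+1' only (D2), 'BFN' only, or both 'BFN' and 'BFN+1' (D3)."""
    slot_shift = chip_offset // CHIPS_PER_SLOT      # whole-slot part of ChipOffset
    if slot_shift + n_first > SLOTS_PER_FRAME:
        return ("BFN+1",)                           # gap falls entirely in the next frame
    # a fractional chip offset pushes the gap end one slot later
    frac = 1 if chip_offset % CHIPS_PER_SLOT else 0
    start = (slot_shift + n_first) % SLOTS_PER_FRAME
    if start + n_length + frac > SLOTS_PER_FRAME:
        return ("BFN", "BFN+1")                     # gap straddles the frame boundary
    return ("BFN",)                                 # gap fits inside the current frame
```

With the Fig. 2 values (ChipOffset = 11264, Nfirst = 8, Nlength = 7) the function returns both frames, matching the worked example later in the description.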
Step E of the present invention further comprises the following steps:
E1. if step D determined that the gap information of the CFN frame is mapped only onto the corresponding frame number BFN, step E3 is entered; if it is mapped onto both BFN and BFN+1, step E4 is entered; if it is mapped only onto BFN+1, step E5 is entered;
E2. in the radio link (RL) information of the UE instance, a one-dimensional array of size 4, wSlotInGap[4], is created; each array element is a WORD16 in which bit 0 represents slot 0 and bit 14 represents slot 14; when a bit is set to 1, the corresponding slot is in a transmission gap, otherwise it is not; when a corresponding frame number BFN is processed, the value of BFN % 4 is used as the array index, and the gap information of that BFN is recorded in the array element;
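The wSlotInGap[4] bookkeeping of step E2 can be sketched as follows; the class and method names are illustrative assumptions, and only the bitmap encoding (frame number modulo 4 as index, one bit per slot) follows the description:

```python
SLOTS_PER_FRAME = 15

class SlotGapTable:
    """Four-entry ring of 16-bit slot bitmaps, indexed by BFN % 4.
    Bit i set means slot i of that frame lies in a transmission gap."""
    def __init__(self):
        self.w_slot_in_gap = [0, 0, 0, 0]   # wSlotInGap[4], one WORD16 each

    def record_gap(self, bfn, first_slot, gap_length):
        # mark slots first_slot .. first_slot + gap_length - 1 as gap slots
        for slot in range(first_slot, first_slot + gap_length):
            self.w_slot_in_gap[bfn % 4] |= 1 << slot

    def slot_in_gap(self, bfn, slot):
        return bool(self.w_slot_in_gap[bfn % 4] & (1 << slot))

    def clear(self, bfn):
        self.w_slot_in_gap[bfn % 4] = 0     # zeroed once the frame is processed
```

The scheduler would query slot_in_gap() while handling the E-AGCH/E-RGCH/E-HICH channels and call clear() when the frame has been processed, as the description requires.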
E3. the compressed-mode information on the current corresponding frame number BFN is calculated:
gap start slot = ((ChipOffset/2560) + Nfirst) % 15
gap length = Nlength + (ChipOffset % 2560)/2560
gap end slot = gap start slot + gap length − 1
then, following step E2, the compressed-mode information of the corresponding BFN is recorded in the array element;
E4. the compressed-mode information on the corresponding frame numbers BFN and BFN+1 is calculated:
first, on the corresponding BFN:
gap start slot = ((ChipOffset/2560) + Nfirst) % 15
gap end slot = 14
gap length = 14 − gap start slot + 1
then, following step E2, the compressed-mode information of the corresponding BFN is recorded in the array element;
next, on the corresponding BFN+1:
gap start slot = 0
gap length = Nlength + (ChipOffset % 2560)/2560 − gap length in BFN
gap end slot = gap start slot + gap length − 1
then, following step E2, the compressed-mode information of the corresponding BFN+1 is recorded in the array element;
E5. the compressed-mode information on the corresponding frame number BFN+1 is calculated by the same process as in step E3.
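The arithmetic of steps E3 and E4 can be sketched as one function that returns the gap parameters per frame. This is an illustrative sketch under assumptions: the names are invented, the fractional chip offset is counted as one extra slot whenever ChipOffset is not a whole number of slots, and for brevity it covers only the cases where the gap begins in BFN (ChipOffset/2560 + Nfirst not greater than 15); the BFN+1-only case of step E5 follows the same arithmetic shifted into the next frame:

```python
CHIPS_PER_SLOT = 2560
SLOTS_PER_FRAME = 15

def map_gap(chip_offset, n_first, n_length):
    """Return {frame_delta: (first_slot, length, last_slot)} for the
    frame(s) the gap occupies; frame_delta 0 is BFN, 1 is BFN+1."""
    frac = 1 if chip_offset % CHIPS_PER_SLOT else 0
    start = (chip_offset // CHIPS_PER_SLOT + n_first) % SLOTS_PER_FRAME
    length = n_length + frac
    if start + length <= SLOTS_PER_FRAME:            # fits in one frame (step E3)
        return {0: (start, length, start + length - 1)}
    len_bfn = SLOTS_PER_FRAME - start                # runs up to slot 14 (step E4)
    len_next = length - len_bfn                      # remainder starts at slot 0 of BFN+1
    return {0: (start, len_bfn, SLOTS_PER_FRAME - 1),
            1: (0, len_next, len_next - 1)}
```

For the Fig. 2 values this yields slots 12 to 14 in BFN and slots 0 to 4 in BFN+1, agreeing with the worked example below.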
Step F of the present invention further comprises the following steps:
F1. when processing the E-AGCH, E-RGCH and E-HICH channels, the present invention adopts a unified per-subframe processing mode, whether the UE uses a 10 ms TTI or a 2 ms TTI: at system slot 0, subframe 0, which occupies slots 2, 3 and 4, is processed; at system slot 3, subframe 1, which occupies slots 5, 6 and 7, is processed; and so on;
F2. when processing the E-RGCH and E-HICH channels, the present invention accumulates, at the same instant, all the signatures on the combined E-RGCH/E-HICH channel;
F3. since a UE must collect the information of a whole subframe to decode correctly the absolute grant sent on the E-AGCH channel, the UE discards the subframe if any of its slots is a transmission gap; therefore, when the system processes E-AGCH compressed mode, it handles each subframe on the serving radio link of the corresponding UE according to the corresponding frame number BFN of the current subframe, the index value obtained from BFN % 4, and the subframe itself; when the corresponding BFN recorded in step E contains a transmission gap and some slot of the subframe falls into a gap of the corresponding UE, the system skips the subframe and directly sends DTX;
F4. since a UE can decode correctly the relative grant and the HARQ response sent on the E-RGCH and E-HICH channels from the information of only some slots of a subframe, when the system processes E-RGCH/E-HICH compressed mode it first judges, before accumulating all the signatures on the E-RGCH/E-HICH channel at the same instant, whether the radio link of each signature is currently in compressed mode; when the corresponding BFN recorded in step E contains a transmission gap and some slot of the subframe of a signature falls into said gap, that slot is filled with DTX and the signature is not accumulated.
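The per-subframe gap test of steps F3 and F4 can be sketched as below, assuming (per step F1) that subframe n occupies slots 3n+2 to 3n+4 and that the gap bitmaps are kept in a four-element array indexed by frame number modulo 4; the function name is an assumption:

```python
SLOTS_PER_FRAME = 15

def subframe_sends_dtx(gap_bitmaps, bfn, subframe):
    """True if any slot of 2 ms subframe `subframe` (0..4) of frame `bfn`
    falls in a transmission gap, in which case the scheduler sends DTX.
    `gap_bitmaps` is the wSlotInGap[4] array indexed by frame number % 4."""
    for slot in range(3 * subframe + 2, 3 * subframe + 5):
        frame = bfn + slot // SLOTS_PER_FRAME    # subframe 4 rolls into BFN+1
        if gap_bitmaps[frame % 4] & (1 << (slot % SLOTS_PER_FRAME)):
            return True
    return False
```

For step F3 a True result skips the whole E-AGCH subframe; for step F4 the same per-slot test would instead suppress accumulation of the affected signature.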
To explain the scheme of the present invention more clearly, assume, as shown in Fig. 2, that the relevant information of the DPCH is FrameOffset = 0 and ChipOffset = 11264. The processing flow of the present invention is then as follows.
First, compressed-mode parameters are collected according to the compressed-mode information delivered in the system signalling.
Then, following the existing method of handling DPCH compressed mode, each CFN frame of the DPCH channel is examined one frame in advance to judge whether it is a compressed frame, and the related information is recorded.
Then, the corresponding frame number BFN of the CFN frame is calculated from the FrameOffset and ChipOffset of the DPCH channel and the current frame number.
Then, as can be seen from Fig. 2, when the CFN frame of the DPCH is a compressed frame, its gap information is Nfirst = 8, Nlast = 14, Nlength = 7. From the ChipOffset and Nfirst of the DPCH, ChipOffset/2560 + Nfirst = 12 < 15, so the gap information of this CFN frame is mapped onto frame number BFN; further, since ((ChipOffset/2560) + Nfirst) % 15 + Nlength + (ChipOffset % 2560)/2560 = 12 + 7 + 1 = 20 > 15, the gap information of this CFN frame is also mapped onto frame number BFN+1.
Then, the compressed-mode information on the corresponding frame number BFN is calculated:
gap start slot = ((ChipOffset/2560) + Nfirst) % 15 = 12
gap end slot = 14
gap length = 14 − gap start slot + 1 = 3
The value of BFN % 4 is used as the array index, and the gap information of this corresponding BFN is recorded in the element of the array wSlotInGap[4] under the RL information of the UE instance.
The compressed-mode information on the corresponding frame number BFN+1 is calculated:
gap start slot = 0
gap length = Nlength + (ChipOffset % 2560)/2560 − gap length in BFN = 5
gap end slot = gap start slot + gap length − 1 = 4
The value of (BFN+1) % 4 is used as the array index, and the gap information of this corresponding BFN+1 is recorded in the element of the array wSlotInGap[4] under the RL information of the UE instance.
Then, when the E-AGCH, E-RGCH and E-HICH channels are processed at the corresponding frame number BFN, it is already known in advance that slots 12, 13 and 14 of the corresponding UE in BFN are transmission gaps, and that slots 0, 1, 2, 3 and 4 of BFN+1 are transmission gaps. Therefore, when the system processes the E-AGCH channel and handles subframe 3 at system slot 9, which corresponds to slots 11, 12 and 13, the subframe is skipped and DTX is sent directly, because slots 12 and 13 are known to be gaps; when the system handles subframe 4 at system slot 12, which corresponds to slots 14, 0 and 1, the subframe is likewise skipped and DTX is sent directly, because slot 14 is a gap and slots 0 and 1 of frame BFN+1 are gaps. For the same reason, when the system processes the E-RGCH/E-HICH channel, subframe 3 at system slot 9 (slots 11, 12 and 13) is skipped and DTX is sent because slots 12 and 13 are gaps, and subframe 4 at system slot 12 (slots 14, 0 and 1) is skipped and DTX is sent because slot 14 and slots 0 and 1 of BFN+1 are gaps.
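The arithmetic of this worked example can be checked with a short sketch (variable names are illustrative, not from the description):

```python
CHIPS_PER_SLOT, SLOTS_PER_FRAME = 2560, 15
chip_offset, n_first, n_length = 11264, 8, 7        # Fig. 2 values, Nlast = 14

slot_shift = chip_offset // CHIPS_PER_SLOT          # 11264 // 2560 = 4
extra = 1 if chip_offset % CHIPS_PER_SLOT else 0    # 1024 leftover chips -> 1 slot
start_bfn = (slot_shift + n_first) % SLOTS_PER_FRAME  # (4 + 8) % 15 = 12

# 12 < 15, so the gap begins in BFN; 12 + 7 + 1 = 20 > 15, so it spills into BFN+1
assert slot_shift + n_first < SLOTS_PER_FRAME
assert start_bfn + n_length + extra > SLOTS_PER_FRAME

len_bfn = SLOTS_PER_FRAME - start_bfn               # slots 12..14 -> length 3
len_next = n_length + extra - len_bfn               # 7 + 1 - 3 = 5, slots 0..4 of BFN+1

print(start_bfn, len_bfn, len_next)                 # prints: 12 3 5
```

The results 12/3 (start and length in BFN) and 5 (length in BFN+1) match the values recorded in wSlotInGap[4] above.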
Then, the value of BFN % 4 is used as the array index, and the corresponding element of the array wSlotInGap[4] under the RL information of the UE instance is cleared; the flow then starts again from the beginning. Through this flow, the load of the system is effectively reduced and the stability of the system is improved.
To elaborate the processing method of the present invention further, as shown in Fig. 3, the present invention comprises the following steps:
Step 301: compressed-mode parameters are collected according to the compressed-mode configuration;
Step 302: the next CFN (Connection Frame Number) frame is processed one frame in advance, and whether the CFN frame is a compressed frame is judged; if so, the CFN frame is marked as a compressed frame, its gap start slot Nfirst, gap end slot Nlast and gap length Nlength are marked, and step 303 is entered; if not, the CFN frame is marked as not compressed and the flow returns to step 301;
Step 303: the corresponding frame number BFN (NodeB Frame Number) of the CFN frame is calculated from the frame offset FrameOffset and chip offset ChipOffset of the DPCH and the current frame number;
Step 304: whether the sum ChipOffset/2560 + Nfirst of the DPCH channel is greater than 15 is judged; if so, step 305 is entered; if not, step 307 is entered;
Step 305: the gap information of the CFN frame is mapped only onto the corresponding frame number BFN+1;
Step 306: the gap information on the corresponding frame number BFN+1 is recorded, and the result is saved in the element of the array wSlotInGap[4] indexed by the value of (BFN+1) % 4; step 312 is then entered;
Step 307: whether ((ChipOffset/2560) + Nfirst) % 15 + Nlength + (ChipOffset % 2560)/2560 is greater than 15 is judged; if so, step 308 is entered; if not, step 310 is entered;
Step 308: the gap information of the CFN frame is mapped onto both the corresponding frame numbers BFN+1 and BFN;
Step 309: the gap information on the corresponding frame numbers BFN+1 and BFN is recorded, and the results are saved in the elements of the array wSlotInGap[4] indexed by the values of (BFN+1) % 4 and BFN % 4; step 312 is then entered;
Step 310: the gap information of the CFN frame is mapped only onto the corresponding frame number BFN;
Step 311: the gap information on the corresponding frame number BFN is recorded, and the result is saved in the element of the array wSlotInGap[4] indexed by the value of BFN % 4; step 312 is then entered;
Step 312: the HSUPA downlink physical channels are processed according to the information saved in the wSlotInGap[4] array element of the current corresponding frame number BFN or BFN+1, and the corresponding element of wSlotInGap[4] is cleared when the processing is finished;
Step 313: whether compressed mode has ended is judged; if so, the flow ends; if not, the flow returns to step 302 and loops. Through this flow, the transmitting power of the HSUPA downlink physical channels is effectively reduced, more power can be reserved for HSDPA, and the efficiency of HSDPA is improved.
In summary, the processing method for downlink physical channel compressed mode provided by the present invention adds one mapping step to the prior art so that HSUPA sends DTX in the transmission gaps of the corresponding frame number BFN; that is, when the HSUPA downlink physical channels are processed, on top of the prior-art handling of DPCH compressed mode, a simple query is added to judge whether any slot of the current subframe falls into a transmission gap; if so, the current subframe, or the slot concerned, is skipped. This effectively reduces the load of the system, avoids sudden rises of the system transmitting power, reduces extra interference, and improves the stability of the system; in particular, when the HSDPA power algorithm uses dynamic adjustment, the transmitting power of the HSUPA downlink physical channels is effectively reduced, more power can be reserved for HSDPA, and the efficiency of HSDPA is further improved.
It should be understood that the above description of the preferred embodiments is rather detailed and shall not therefore be regarded as limiting the scope of patent protection of the present invention. Those of ordinary skill in the art, under the teaching of the present invention and without departing from the scope protected by the claims, may make various modifications such as substitutions and simple combinations, all of which fall within the protection scope of the present invention; the scope claimed by the present invention shall be defined by the appended claims.