CN101383658A - Processing method for downlink physical channel compressed mode - Google Patents

Processing method for downlink physical channel compressed mode

Info

Publication number
CN101383658A
CN101383658A (application CN200810216537; granted as CN101383658B)
Authority
CN
China
Prior art keywords
frame number
system frame
processing method
corresponding system
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008102165376A
Other languages
Chinese (zh)
Other versions
CN101383658B (en)
Inventor
游佳
李涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Li Xin
Li Yawen
Li Yuezhong
Li Yunfang
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN2008102165376A priority Critical patent/CN101383658B/en
Publication of CN101383658A publication Critical patent/CN101383658A/en
Application granted granted Critical
Publication of CN101383658B publication Critical patent/CN101383658B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a processing method for the downlink physical channel compressed mode, comprising the following steps: the downlink dedicated physical channel maps the compression information of a compressed frame onto the corresponding system frame number (BFN); the compressed gaps of the corresponding BFN are marked and the result is saved; when the high-speed uplink scheduler processes the absolute grant channel, the relative grant channel and the acknowledgement indicator channel, it queries the saved result and sends DTX in the compressed gaps of the corresponding BFN. By adding this mapping step to the prior art, the processing method makes HSUPA send DTX in the compressed gaps of the corresponding BFN, which lowers the system load, reduces the interference of the compressed mode on the system, improves system stability, and effectively lowers the transmit power of the HSUPA downlink physical channels, so that more power can be reserved for HSDPA, further improving its service efficiency.

Description

Processing method for a downlink physical channel compressed mode
Technical field
The present invention relates to methods for processing the compressed mode, and in particular to a processing method for the HSUPA downlink physical channel compressed mode.
Background technology
HSUPA (High Speed Uplink Packet Access) is an uplink enhancement technique on the radio side introduced in Release 6 of the 3GPP protocol architecture. By adopting key technologies such as multi-code transmission, HARQ (Hybrid Automatic Repeat reQuest) and fast NodeB-based scheduling, HSUPA raises the maximum uplink throughput of a single cell to 5.76 Mbit/s, greatly enhancing the uplink data-bearing capacity and spectrum efficiency of WCDMA (Wideband Code Division Multiple Access).
HSUPA is similar to HSDPA (High Speed Downlink Packet Access). HSUPA introduces five new physical channels for the E-DCH (Enhanced Dedicated Channel): E-DPDCH (E-DCH Dedicated Physical Data Channel), E-DPCCH (E-DCH Dedicated Physical Control Channel), E-AGCH (E-DCH Absolute Grant Channel), E-RGCH (E-DCH Relative Grant Channel) and E-HICH (E-DCH HARQ Acknowledgement Indicator Channel), together with two new MAC (Media Access Control) entities, MAC-e and MAC-es. The packet scheduling function moves down from the RNC (Radio Network Controller) to the base station (NodeB), realizing fast NodeB-based packet scheduling; through HARQ, the 2 ms short radio frame and multi-code transmission, the uplink data throughput can reach 5.76 Mbit/s, greatly improving the bearing capacity for uplink data services.
Compressed mode is a technique that creates certain idle periods within radio frames; the user terminal (UE) can use these compressed gaps to perform measurements for various purposes, so as to realize inter-system handover, inter-frequency hard handover and handover to time division duplex.
In a WCDMA system, after the RNC receives an event measurement report from the UE and judges that the measurement result satisfies the conditions for starting compressed mode, it sends messages carrying the compressed-mode parameters to the UE and the NodeB through channel reconfiguration commands, and starts the compressed mode.
3GPP TS 25.214 explicitly specifies how the UE receives the HSUPA downlink physical channels when the DPCH (Downlink Dedicated Physical Channel) is in compressed mode. The HSUPA downlink physical channels comprise E-AGCH, E-RGCH and E-HICH, and the protocol stipulates: the UE decodes only those slots of the E-AGCH, E-RGCH and E-HICH channels that do not overlap with a compressed gap on the DPCH; the UE discards the data of any slot on these channels that does overlap with a compressed gap on the DPCH.
Since slot data on E-AGCH, E-RGCH and E-HICH that overlaps with a DPCH downlink compressed gap will be discarded by the UE, the NodeB should send DTX (Discontinuous Transmission) in the overlapping slots; taking E-AGCH as an example, the compressed-mode pattern is shown in Fig. 1. The prior-art scheme processes the compressed mode of the HSUPA downlink physical channels mainly with the normal-frame handling flow. According to section 5.3.2.4 of 3GPP TS 25.211, different signature sequences are used for the three slots of each subframe of the E-RGCH and E-HICH channels, and ACK/NACK or DTX is sent regardless of whether the downlink DPCH is in a compressed gap; according to section 5.3.3.14 of 3GPP TS 25.211, on the E-AGCH channel a 10 ms UE is sent the same AG in every subframe while a 2 ms UE is sent one AG per subframe. The existing scheme adopts this processing mainly because it is simple, and the system does not have to consider implementing the compressed mode on the HSUPA downlink physical channels.
In summary, the prior-art processing method does not send DTX in the slots where DTX should be sent, but transmits normal data instead. This causes extra processing load on the system, for example additional E-AGCH channel coding and additional accumulation of E-RGCH and E-HICH signatures; and because the compressed mode may cause a sudden rise in the signal transmit power, it causes greater interference to the system, for example a larger impact on capacity and coverage. Clearly the prior art awaits further improvement.
Summary of the invention
To overcome the above defects in the prior art, the present invention provides a processing method for the downlink physical channel compressed mode which sends DTX in the compressed gaps of the corresponding system frame number BFN (NodeB Frame Number), so as to reduce the system load and reduce the interference of the compressed mode on the system.
To solve the above technical problems, the present invention comprises:
A processing method for a downlink physical channel compressed mode, comprising the following steps:
A. the downlink dedicated physical channel maps the compression information of a compressed frame onto the corresponding system frame number;
B. marking the compressed gaps of said corresponding system frame number, and saving the result;
C. when processing the absolute grant channel, the relative grant channel and the acknowledgement indicator channel, the high-speed uplink scheduler queries said saved result and sends DTX in the compressed gaps of said corresponding system frame number.
In said processing method, step A further comprises: according to the chip offset of said downlink dedicated physical channel, mapping the compression information of said compressed frame onto said corresponding system frame number and onto the corresponding system frame number plus one.
In said processing method, said compression information comprises the starting gap slot, the ending gap slot and the gap length.
Said processing method further comprises the following steps:
D. collecting the compressed-mode parameters of the compressed mode;
E. judging whether a connection frame is a compressed frame, and if so, marking the compression information of said connection frame;
F. calculating the corresponding system frame number of said connection frame according to the frame offset and chip offset of the downlink dedicated physical channel and the system frame number.
Said processing method further comprises: when the sum of the chip offset (in slots) of the downlink dedicated physical channel and the starting gap slot is greater than fifteen, the compression information of said connection frame is mapped only onto the corresponding system frame number plus one.
Said processing method further comprises: when the sum of the chip offset (in slots) of the downlink dedicated physical channel and the starting gap slot is less than fifteen, and the sum of the starting gap slot and the gap length is less than fifteen, the compression information of said connection frame is mapped only onto the corresponding system frame number.
Said processing method further comprises: when the sum of the chip offset (in slots) of the downlink dedicated physical channel and the starting gap slot is less than fifteen, and the sum of the starting gap slot and the gap length is greater than fifteen, the compression information of said connection frame is mapped both onto the corresponding system frame number and onto the corresponding system frame number plus one.
In said processing method, step B further comprises the following steps:
B1. the user side creates a one-dimensional array in the dedicated logical link information; each array element records which slots are compressed gaps and which are not, and the value of the system frame number modulo 4 is used as the array index;
B2. calculating the starting gap slot, the gap length and the ending gap slot of the corresponding system frame number, and recording them in said array element.
In said processing method, step C further comprises: when processing the relative grant channel and the acknowledgement indicator channel, the high-speed uplink scheduler adopts subframe-signature processing, accumulating the signatures of all subframes on the relative grant channel and the acknowledgement indicator channel at the same moment.
In said processing method, step B further comprises: the system processes the HSUPA downlink dedicated physical channels according to the compression information saved in the array element of the corresponding system frame number, and clears the corresponding array element when the processing is finished.
The invention provides a processing method for the downlink physical channel compressed mode which adds a mapping step to the prior art, making HSUPA send DTX in the compressed gaps of the corresponding system frame number BFN. This reduces the system load, reduces the interference of the compressed mode on the system, and improves system stability.
Description of drawings
Fig. 1 is a schematic diagram of the E-AGCH compressed mode in the prior art;
Fig. 2 is a schematic diagram of the HSUPA downlink physical channel compressed-mode processing of the present invention;
Fig. 3 is a flow chart of the HSUPA downlink physical channel compressed-mode processing of the present invention.
Embodiment
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The processing method for the downlink physical channel compressed mode provided by the invention reduces the interference of the HSUPA downlink physical channels on the system when the UE is in compressed mode, and improves system stability.
The processing method of the present invention comprises the following steps:
A. collecting the compressed-mode parameters according to the compressed-mode information issued in system signaling;
B. following the present method of handling DPCH compressed mode, processing the next CFN (Connection Frame Number) frame one frame in advance and judging whether the CFN frame is a compressed frame; if so, marking the CFN frame as a compressed frame and marking its starting gap slot Nfirst, ending gap slot Nlast and gap length Nlenth; otherwise, marking the CFN frame as not compressed;
C. calculating the corresponding system frame number BFN of the CFN frame according to the frame offset FrameOffset and chip offset ChipOffset of the DPCH and the current system frame number BFN (NodeB Frame Number);
D. the DPCH maps the compression information Nfirst, Nlast and Nlenth of the CFN frame onto the corresponding BFN; influenced by the ChipOffset of the DPCH channel, the compression information of the CFN frame may be mapped onto the corresponding BFN, onto BFN+1, or onto both; the concrete mapping is determined from the ChipOffset of the DPCH channel and the Nfirst and Nlenth recorded in step B;
E. according to the compression information of the currently processed CFN frame recorded in step B, processing all corresponding BFNs determined in step D, marking the compressed gaps of the corresponding BFN, and saving the result;
F. when processing the E-AGCH, E-RGCH or E-HICH channel, the HSUPA scheduler sends DTX in the compressed gaps of the corresponding BFN marked in step E.
Step D of the present invention further comprises the following steps:
D1. calculating whether ChipOffset/2560 + Nfirst of the DPCH channel is greater than 15; if so, entering step D2; otherwise entering step D3;
D2. the compression information of the CFN frame is mapped only onto the corresponding BFN+1;
D3. the compression information of the CFN frame is mapped onto the corresponding BFN; to decide whether it also maps onto BFN+1, judging whether (ChipOffset/2560 + Nfirst) % 15 + Nlenth + ⌈(ChipOffset % 2560)/2560⌉ is greater than 15; if so, the compression information of the CFN frame is also mapped onto the corresponding BFN+1; otherwise it is not mapped onto BFN+1.
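As a rough illustration of the D1 to D3 decision, the sketch below assumes a 15-slot frame and 2560-chip slots (standard WCDMA numbering) and rounds a partial-slot chip offset up to a whole slot, which is consistent with the worked example later in the description; the function name and return values are the editor's own, not part of the patent:

```python
SLOTS_PER_FRAME = 15   # slots per 10 ms radio frame
CHIPS_PER_SLOT = 2560  # chips per slot in WCDMA

def mapping_targets(chip_offset, nfirst, nlenth):
    """Decide which frame(s) the gap of a compressed CFN frame maps onto
    (steps D1-D3): only BFN+1, only BFN, or both."""
    whole_slots = chip_offset // CHIPS_PER_SLOT           # whole-slot part of ChipOffset
    extra = 1 if chip_offset % CHIPS_PER_SLOT else 0      # partial slot, rounded up
    if whole_slots + nfirst > SLOTS_PER_FRAME:            # D1 true -> D2
        return ('BFN+1',)
    start = (whole_slots + nfirst) % SLOTS_PER_FRAME      # D3: gap starts inside BFN
    if start + nlenth + extra > SLOTS_PER_FRAME:          # tail spills past slot 14
        return ('BFN', 'BFN+1')
    return ('BFN',)
```

With the Fig. 2 parameters (ChipOffset = 11264, Nfirst = 8, Nlenth = 7) this returns both frames, matching the example in the description.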
Step E of the present invention further comprises the following steps:
E1. if step D judged that the compression information of the CFN frame is mapped only onto the corresponding BFN, entering step E3; if it is mapped onto both the corresponding BFN and BFN+1, entering step E4; if it is mapped only onto the corresponding BFN+1, entering step E5;
E2. creating a one-dimensional array of size 4, wSlotInGap[4], in the dedicated logical link (RL) information of the UE instance; each array element is a WORD16 in which bit 0 represents slot 0 and bit 14 represents slot 14; when a bit is set to 1 the slot is a compressed gap, otherwise it is not; when processing a corresponding BFN, the value of BFN % 4 is used as the array index and the compression information of the corresponding BFN is recorded in that array element;
E3. calculating the compressed-mode information on the current corresponding BFN:
starting gap slot = (ChipOffset/2560 + Nfirst) % 15
gap length = Nlenth + ⌈(ChipOffset % 2560)/2560⌉
ending gap slot = starting gap slot + gap length - 1
then, according to step E2, recording the compressed-mode information of the corresponding BFN in the array element;
E4. calculating the compressed-mode information on the corresponding BFN and BFN+1:
first, on the corresponding BFN:
starting gap slot = (ChipOffset/2560 + Nfirst) % 15
ending gap slot = 14
gap length = 14 - starting gap slot + 1
then, according to step E2, recording the compressed-mode information of the corresponding BFN in the array element;
next, on the corresponding BFN+1:
starting gap slot = 0
gap length = Nlenth + ⌈(ChipOffset % 2560)/2560⌉ - gap length on the corresponding BFN
ending gap slot = starting gap slot + gap length - 1
then, according to step E2, recording the compressed-mode information of the corresponding BFN+1 in the array element;
E5. calculating the compressed-mode information on the current BFN+1 in the same way as step E3.
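The E2 to E4 bookkeeping can be sketched as follows, again assuming 2560-chip slots and a rounded-up partial slot; w_slot_in_gap mirrors the WORD16 bitmap of step E2, while the helper names are the editor's own:

```python
SLOTS_PER_FRAME = 15
CHIPS_PER_SLOT = 2560

w_slot_in_gap = [0, 0, 0, 0]   # one 16-bit word per BFN % 4; bit i set -> slot i is a gap

def record_gap(bfn, start, length):
    """Step E2: mark slots start .. start+length-1 of frame bfn as compressed gaps."""
    for slot in range(start, start + length):
        w_slot_in_gap[bfn % 4] |= 1 << slot

def map_gap(bfn, chip_offset, nfirst, nlenth):
    """Steps E3/E4 for a gap that starts inside BFN: place it on BFN and,
    if it runs past slot 14, spill the remainder onto the first slots of BFN+1."""
    start = (chip_offset // CHIPS_PER_SLOT + nfirst) % SLOTS_PER_FRAME
    total = nlenth + (1 if chip_offset % CHIPS_PER_SLOT else 0)  # partial slot rounded up
    on_bfn = min(total, SLOTS_PER_FRAME - start)   # slots that fit up to slot 14
    record_gap(bfn, start, on_bfn)
    if total > on_bfn:                             # E4: remainder lands on BFN+1
        record_gap(bfn + 1, 0, total - on_bfn)

map_gap(0, 11264, 8, 7)   # Fig. 2 numbers: slots 12-14 of BFN, slots 0-4 of BFN+1
```

The final call reproduces the Fig. 2 layout: bits 12 to 14 set in element 0, bits 0 to 4 set in element 1.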
Step F of the present invention further comprises the following steps:
F1. when processing the E-AGCH, E-RGCH and E-HICH channels, whether for a 10 ms UE or a 2 ms UE, the present invention uniformly adopts subframe processing: when system slot 0 processes subframe 0 it also covers system slots 2, 3 and 4; system slot 3 processes subframe 1 and also covers system slots 5, 6 and 7; and so on;
F2. when processing the E-RGCH and E-HICH channels, the present invention accumulates all signatures on the E-RGCH/E-HICH (RGHI) channels at the same moment;
F3. because the UE needs to collect the information of a whole subframe to correctly decode the absolute grant sent on the E-AGCH channel, the UE discards a subframe if some of its slots fall in a compressed gap; therefore, when the system processes the E-AGCH compressed mode and handles a certain subframe, it processes the serving radio link of the corresponding UE according to the corresponding BFN of the current subframe, the index value obtained from BFN % 4 and the subframe information; when the corresponding BFN recorded in step E contains a compressed gap and some slot of the subframe falls into the compressed gap of the corresponding UE, the system skips the subframe and directly sends DTX;
F4. because the UE only needs the information of some slots of a subframe to correctly decode the relative grant and the HARQ acknowledgement sent on the E-RGCH and E-HICH channels, when the system processes the E-RGCH and E-HICH compressed mode it first judges, before accumulating all signatures on the RGHI channels at the same moment, whether the radio link of each signature is currently in compressed mode; when the corresponding BFN recorded in step E contains a compressed gap and some slot of the signature's subframe falls into that gap, the slot is directly filled with DTX and the signature is not accumulated.
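A minimal sketch of the F1/F3 decision, under the assumption (taken from step F1 and the Fig. 2 walk-through) that subframe n covers system slots 3n+2, 3n+3 and 3n+4, wrapping into the next frame; the function and variable names are the editor's own:

```python
def subframe_slots(n):
    """Slots covered when processing E-AGCH subframe n (step F1): subframe 0
    covers slots 2-4, subframe 1 slots 5-7, and so on; subframe 4 wraps into
    the next frame. Returns (slot, frame_advance) pairs."""
    return [((3 * n + 2 + k) % 15, (3 * n + 2 + k) // 15) for k in range(3)]

def must_dtx(gap_words, bfn, n):
    """Step F3: the subframe is skipped and DTX sent if any covered slot is a
    compressed gap, looking up the per-frame bitmap indexed by frame number % 4."""
    return any(gap_words[(bfn + adv) % 4] & (1 << slot)
               for slot, adv in subframe_slots(n))

# Example bitmap: gap slots 12-14 on frame index 0, slots 0-4 on frame index 1
example_gaps = [0b111 << 12, 0b11111, 0, 0]
```

With this example bitmap, subframes 3 and 4 of the first frame are sent as DTX while subframe 0 is processed normally, matching the Fig. 2 walk-through.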
To explain the scheme of the present invention more clearly, as shown in Fig. 2, suppose the relevant information of the DPCH is FrameOffset = 0 and ChipOffset = 11264. The processing flow of the present invention is then:
First, collecting the compressed-mode parameters according to the compressed-mode information issued in system signaling;
Then, following the present method of handling DPCH compressed mode, judging one frame in advance whether each CFN frame of the DPCH channel is a compressed frame, and recording the relevant information;
Then, calculating the corresponding system frame number BFN of the CFN frame according to the FrameOffset and ChipOffset of the DPCH channel and the current BFN;
Then, as shown in Fig. 2, when the CFN frame of the DPCH is a compressed frame its compression information is Nfirst = 8, Nlast = 14 and Nlenth = 7; from the ChipOffset and Nfirst of the DPCH, ChipOffset/2560 + Nfirst = 12 < 15, so the compression information of this CFN frame is mapped onto BFN; and since (ChipOffset/2560 + Nfirst) % 15 + Nlenth + ⌈(ChipOffset % 2560)/2560⌉ = 20 > 15, the compression information of this CFN frame is also mapped onto BFN+1;
Then, calculating the compressed-mode information on the corresponding BFN:
starting gap slot = (ChipOffset/2560 + Nfirst) % 15 = 12
ending gap slot = 14
gap length = 14 - 12 + 1 = 3
using the value of BFN % 4 as the array index, the compression information of this corresponding BFN is recorded in the element of the array wSlotInGap[4] under the RL information of the UE instance;
Calculating the compressed-mode information on the corresponding BFN+1:
starting gap slot = 0
gap length = Nlenth + ⌈(ChipOffset % 2560)/2560⌉ - gap length on BFN = 5
ending gap slot = 0 + 5 - 1 = 4
using the value of (BFN+1) % 4 as the array index, the compression information of this corresponding BFN+1 is recorded in the element of the array wSlotInGap[4] under the RL information of the UE instance;
Then, when performing E-AGCH, E-RGCH and E-HICH channel processing for the corresponding BFN, it is known in advance that slots 12, 13 and 14 of the UE on the corresponding BFN are compressed gaps, and that slots 0, 1, 2, 3 and 4 of the corresponding BFN+1 are compressed gaps. Therefore, when the system processes the E-AGCH channel and handles subframe 3 at system slot 9 (corresponding to slots 11, 12 and 13), since slots 12 and 13 are known to be compressed gaps, the subframe is skipped and DTX is sent directly; when the system handles subframe 4 at system slot 12 (corresponding to slots 14, 0 and 1), since slot 14 and slots 0 and 1 of the BFN+1 frame are known to be compressed gaps, the subframe is skipped and DTX is sent directly. For the same reason, when the system processes the E-RGCH/E-HICH channels, subframe 3 at system slot 9 and subframe 4 at system slot 12 are likewise skipped and DTX is sent directly;
Then, using the value of BFN % 4 as the array index, the corresponding element of the array wSlotInGap[4] under the RL information of the UE instance is cleared, and the above flow starts over. Through the above processing flow, the system load is effectively reduced and system stability is improved.
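The arithmetic of the Fig. 2 walk-through can be re-derived directly; the only assumption in this sketch is the rounded-up partial-slot term, which matches the quoted results:

```python
CHIPS_PER_SLOT, SLOTS = 2560, 15
chip_offset, nfirst, nlast, nlenth = 11264, 8, 14, 7   # Fig. 2 DPCH parameters

whole = chip_offset // CHIPS_PER_SLOT              # 4 whole slots of offset
extra = 1 if chip_offset % CHIPS_PER_SLOT else 0   # 1024 chips remain -> one extra slot

start_bfn = (whole + nfirst) % SLOTS               # gap starts at slot 12 of BFN
total = nlenth + extra                             # 8 gap slots in all
len_bfn = SLOTS - start_bfn                        # 3 slots on BFN (12, 13, 14)
len_next = total - len_bfn                         # 5 slots on BFN+1 (0..4)
end_next = len_next - 1                            # ending gap slot 4 on BFN+1

print(start_bfn, len_bfn, len_next, end_next)      # 12 3 5 4
```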
To further set forth the processing method of the present invention, as shown in Fig. 3, the present invention comprises the following steps:
Step 301: collecting and obtaining the compressed-mode parameters of the compressed mode;
Step 302: processing the next CFN (Connection Frame Number) frame one frame in advance and judging whether the CFN frame is a compressed frame; if so, marking the CFN frame as a compressed frame, marking its starting gap slot Nfirst, ending gap slot Nlast and gap length Nlenth, and entering step 303; otherwise marking the CFN frame as not compressed and returning to step 301;
Step 303: calculating the corresponding system frame number BFN of the CFN frame according to the frame offset FrameOffset and chip offset ChipOffset of the DPCH and the current BFN (NodeB Frame Number);
Step 304: judging whether ChipOffset/2560 + Nfirst of the DPCH channel is greater than 15; if so, entering step 305; otherwise entering step 307;
Step 305: the compression information of the CFN frame is mapped only onto the corresponding BFN+1;
Step 306: recording the compression information on the corresponding BFN+1, saving the result in the wSlotInGap[4] array element indexed by (BFN+1) % 4, then entering step 312;
Step 307: judging whether (ChipOffset/2560 + Nfirst) % 15 + Nlenth + ⌈(ChipOffset % 2560)/2560⌉ is greater than 15; if so, entering step 308; otherwise entering step 310;
Step 308: the compression information of the CFN frame is mapped onto both the corresponding BFN+1 and BFN;
Step 309: recording the compression information on the corresponding BFN+1 and BFN, saving the results in the wSlotInGap[4] array elements indexed by (BFN+1) % 4 and BFN % 4, then entering step 312;
Step 310: the compression information of the CFN frame is mapped only onto the corresponding BFN;
Step 311: recording the compression information on the corresponding BFN, saving the result in the wSlotInGap[4] array element indexed by BFN % 4, then entering step 312;
Step 312: processing the HSUPA downlink physical channels according to the information saved in the wSlotInGap[4] array element of the current corresponding BFN or BFN+1, and clearing the corresponding element of the array when the processing is finished;
Step 313: judging whether the compressed mode has ended; if so, ending the flow; otherwise returning to step 302 to continue the loop. Through the above processing flow, the transmit power of the HSUPA downlink physical channels is effectively reduced, more power can be reserved for HSDPA, and the service efficiency of HSDPA is improved.
In summary, the processing method for the downlink physical channel compressed mode provided by the invention adds a mapping step to the prior art so that HSUPA sends DTX in the compressed gaps of the corresponding system frame number BFN. That is, when processing the HSUPA downlink physical channels, on the basis of the prior-art handling of the DPCH compressed mode, a simple query is added: when some slot of the current subframe is judged to fall into a compressed gap, the current subframe, or certain slots of it, is skipped. This effectively reduces the system load, effectively avoids sudden rises of the system transmit power, reduces extra interference and improves system stability. In particular, when the HSDPA power algorithm adopts dynamic adjustment, the transmit power of the HSUPA downlink physical channels is effectively reduced, more power can be reserved for HSDPA, and the service efficiency of HSDPA is further improved.
It should be understood that the above description of the preferred embodiments is rather detailed and should not therefore be regarded as limiting the scope of patent protection of the present invention. Those of ordinary skill in the art may, under the teaching of the present invention and without departing from the scope protected by the claims, make various modifications such as substitutions and simple combinations, all of which fall within the protection scope of the present invention; the scope claimed by the present invention shall be determined by the appended claims.

Claims (10)

1. A processing method for a downlink physical channel compressed mode, comprising the following steps:
A. the downlink dedicated physical channel maps the compression information of a compressed frame onto the corresponding system frame number;
B. marking the compressed gaps of said corresponding system frame number, and saving the result;
C. when processing the absolute grant channel, the relative grant channel and the acknowledgement indicator channel, the high-speed uplink scheduler queries said saved result and sends DTX in the compressed gaps of said corresponding system frame number.
2. The processing method according to claim 1, characterized in that step A further comprises: according to the chip offset of said downlink dedicated physical channel, mapping the compression information of said compressed frame onto said corresponding system frame number and onto the corresponding system frame number plus one.
3. The processing method according to claim 2, characterized in that said compression information comprises the starting gap slot, the ending gap slot and the gap length.
4, processing method according to claim 1 is characterized in that, described processing method is further comprising the steps of:
D, collect compressed-mode parameter according to compact model;
Whether E, judge to connect frame be condensed frame, if, the compressed information of the described connection frame of mark then;
F, the vertical shift according to downward special physical channel, chip offset and System Frame Number calculate the corresponding System Frame Number of described connection frame.
5, processing method according to claim 3, it is characterized in that, described processing method also comprises: the chip offset of downward special physical channel and initial compression clearance sum be greater than ten five o'clock, and the compressed information of described connection frame only is mapped to corresponding System Frame Number and adds on another corresponding System Frame Number of one.
6, processing method according to claim 3, it is characterized in that, described processing method also comprises: the chip offset of downward special physical channel and initial compression clearance sum are less than 15, and initial compression clearance and compression clearance length sum were less than ten five o'clock, and the compressed information of then described connection frame only is mapped on the corresponding System Frame Number.
7, processing method according to claim 3, it is characterized in that, described processing method also comprises: the chip offset of downward special physical channel and initial compression clearance sum are less than 15, and initial compression clearance and compression clearance length sum be greater than 15, and the compressed information of then described connection frame is mapped to respectively on the corresponding System Frame Number with described corresponding System Frame Number and adds on another corresponding System Frame Number of one.
8, processing method according to claim 1 is characterized in that, described step B is further comprising the steps of:
B1, user side are created an one-dimension array in proprietary logical links, described one-dimension array comprises array element, compression time slot and non-compression time slot, with the value of System Frame Number mould 4 as the array index index;
B2, the initial compression time slot that calculates corresponding System Frame Number, compression slot length compress time slot with end, and it are recorded in the described array element.
9, processing method according to claim 1, it is characterized in that, described step C also comprises: the high-speed uplink scheduler is when handling relative authorization channel and answering indication signal channel, adopt the processing mode of subframe signature, synchronization with relative authorization channel and answering indication signal channel on all subframes sign and carry out accumulation process.
10, processing method according to claim 8, it is characterized in that, described step B also comprises: the compressed information that system preserves in the data set element according to corresponding System Frame Number is handled the descending proprietary physical channel of high-speed uplink, and when processing finishes with the zero clearing of corresponding data group element.
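The mapping rule of claims 5–7 and the array of claim 8 can be sketched together. This is a hypothetical illustration, not the claimed implementation: gap records are kept in a 4-entry array indexed by SFN mod 4, and depending on where the gap lands relative to the 15-slot frame boundary, the compressed-frame information is recorded against SFN, SFN + 1, or both. The field names and the exact boundary comparisons are assumptions drawn from the claim text.

```python
# Hypothetical sketch of the claims 5-7 mapping rule using the claim 8
# array: record a transmission gap against SFN and/or SFN + 1, indexed
# by SFN mod 4. Names and boundary handling are assumptions.

SLOTS_PER_FRAME = 15

def map_gap(records, sfn, chip_offset_slots, gap_start, gap_len):
    """Record a transmission gap (start slot, length) for the frame(s)
    it falls into, splitting it at the 15-slot frame boundary."""
    start = chip_offset_slots + gap_start
    if start > SLOTS_PER_FRAME:
        # Claim 5: the whole gap begins in the next frame -> SFN + 1 only.
        records[(sfn + 1) % 4] = (start - SLOTS_PER_FRAME, gap_len)
    elif start + gap_len <= SLOTS_PER_FRAME:
        # Claim 6: the gap fits entirely in this frame -> SFN only.
        records[sfn % 4] = (start, gap_len)
    else:
        # Claim 7: the gap straddles the boundary -> record both halves.
        head = SLOTS_PER_FRAME - start
        records[sfn % 4] = (start, head)
        records[(sfn + 1) % 4] = (0, gap_len - head)

records = [None] * 4              # claim 8: one-dimensional, SFN mod 4
map_gap(records, 6, 4, 9, 5)      # gap starts at slot 13, spills 3 slots
```

With SFN = 6, the gap starts at slot 13 and straddles the boundary, so the head lands in entry 6 mod 4 = 2 and the tail in entry 7 mod 4 = 3, matching the claim 7 case.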
CN2008102165376A 2008-09-24 2008-09-24 Processing method for downlink physical channel compression schema Expired - Fee Related CN101383658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102165376A CN101383658B (en) 2008-09-24 2008-09-24 Processing method for downlink physical channel compression schema

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008102165376A CN101383658B (en) 2008-09-24 2008-09-24 Processing method for downlink physical channel compression schema

Publications (2)

Publication Number Publication Date
CN101383658A true CN101383658A (en) 2009-03-11
CN101383658B CN101383658B (en) 2012-12-05

Family

ID=40463307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102165376A Expired - Fee Related CN101383658B (en) 2008-09-24 2008-09-24 Processing method for downlink physical channel compression schema

Country Status (1)

Country Link
CN (1) CN101383658B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1192513C (en) * 2001-12-12 2005-03-09 华为技术有限公司 Method for controlling downward special physical channel power under compressed mode
CN1185887C (en) * 2001-12-24 2005-01-19 华为技术有限公司 Open-loop power control method in switching process between frequencies
JP2005064751A (en) * 2003-08-08 2005-03-10 Matsushita Electric Ind Co Ltd Mobile station device and receiving method thereof
CN101141744B (en) * 2007-10-19 2010-08-04 华为技术有限公司 Terminal compression mode starting method, wireless network controller and terminal

Also Published As

Publication number Publication date
CN101383658B (en) 2012-12-05

Similar Documents

Publication Publication Date Title
JP4445938B2 (en) Mobile station, mobile communication system, and mobile communication method
CN101176371B (en) Communicating control information in mobile communication system
RU2348107C2 (en) Method for control of transmission speed, mobile station, base radio station and radio network controller
JP3614413B2 (en) Method for transmitting / receiving orthogonal code information assigned to user data in a high-speed data packet access communication system
US8804626B2 (en) Transmission rate control method, mobile station, radio network controller, and radio base station
US7817598B2 (en) Transmission rate control method, mobile station, and radio base station
EP1722522A1 (en) Transmission rate control in an enhanced uplink mobile communication system
KR100788856B1 (en) Transmission rate control method, and mobile station
CN101193438B (en) A method for realizing high-speed downlink packet access
EP1874072B1 (en) Transmission rate control method, mobile station and radio base station
EP1848234B1 (en) Wireless resource management method, wireless base station, and base station controller
KR100937371B1 (en) Transmitting method, receiving method, radio base station, and mobile station
CN100455043C (en) Method for transmitting E-DCH data
CN100558007C (en) Method for transmitting mobile telecommunication network data
CN1921353B (en) User data transmission method, and radio network controller
JP2006033778A (en) Mobile station, base station, controller, and system and method for mobile communication
CN102013963B (en) Data transmission method and device
US8195173B2 (en) Transmission rate control method, mobile station, and radio network controller
CN101383658B (en) Processing method for downlink physical channel compression schema
CN101420752B (en) Channel multiplexing method
CN102638868B (en) Method, device and system for realizing information transmission
EP2793416A1 (en) High speed uplink control channels
CN109314959A (en) The sending method and device of control channel in a kind of multicarrier system
CN101932030A (en) Happy bit inconsistency processing method, device and base station

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170830

Address after: 161000 Qiqihar City, Heilongjiang Province, Tiefeng District, Zhanqian Street joint community group 148

Co-patentee after: Li Yawen

Patentee after: Li Xin

Co-patentee after: Li Yuezhong

Co-patentee after: Li Yunfang

Address before: 518057 Nanshan District Guangdong high tech Industrial Park, South Road, science and technology, ZTE building, Ministry of Justice

Patentee before: ZTE Corp.

CB03 Change of inventor or designer information

Inventor after: Li Xin

Inventor after: Li Yawen

Inventor after: Li Yuezhong

Inventor after: Li Yunfang

Inventor before: You Jia

Inventor before: Li Tao

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121205

Termination date: 20210924