CN101383658B - Processing method for downlink physical channel compression schema - Google Patents

Processing method for downlink physical channel compression schema

Info

Publication number
CN101383658B
CN101383658B (application number CN2008102165376A)
Authority
CN
China
Prior art keywords
frame number
system frame
corresponding system
channel
physical channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2008102165376A
Other languages
Chinese (zh)
Other versions
CN101383658A (en)
Inventor
游佳 (You Jia)
李涛 (Li Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Li Xin
Li Yawen
Li Yuezhong
Li Yunfang
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN2008102165376A priority Critical patent/CN101383658B/en
Publication of CN101383658A publication Critical patent/CN101383658A/en
Application granted granted Critical
Publication of CN101383658B publication Critical patent/CN101383658B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 — Reducing energy consumption in communication networks
    • Y02D30/70 — Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The invention discloses a processing method for a downlink physical channel compressed mode, comprising the following steps: a downlink dedicated physical channel maps the compression information of a compressed frame onto the corresponding Node B frame number (BFN); the compression gap of the corresponding BFN is marked and the result is saved; when the high-speed uplink scheduler processes the absolute grant channel, the relative grant channel and the acknowledgement indicator channel, it queries the saved result and sends DTX in the compression gap of the corresponding BFN. By adding one mapping step to the prior art so that HSUPA sends DTX in the compression gap of the corresponding BFN, the processing method of the invention lowers the system load, reduces the interference of compressed mode on the system, improves the stability of the system, and effectively lowers the transmit power of the HSUPA downlink physical channels, so that more power can be reserved for HSDPA and its utilization is further improved.

Description

A processing method for a downlink physical channel compressed mode
Technical field
The present invention relates to processing methods for compressed mode, and in particular to a processing method for the compressed mode of the HSUPA downlink physical channels.
Background technology
HSUPA (High Speed Uplink Packet Access) is the radio-side uplink enhancement technique introduced in Release 6 of the 3GPP protocol family. By adopting key techniques such as multi-code transmission, HARQ (Hybrid Automatic Repeat reQuest) and fast scheduling based on the Node B, HSUPA raises the maximum uplink throughput of a single cell to 5.76 Mbit/s, greatly enhancing the uplink data service bearing capability and spectral efficiency of WCDMA (Wideband Code Division Multiple Access).
HSUPA is similar to HSDPA (High Speed Downlink Packet Access). HSUPA introduces five new physical channels: the E-DPDCH (E-DCH Dedicated Physical Data Channel), the E-DPCCH (E-DCH Dedicated Physical Control Channel), the E-AGCH (E-DCH Absolute Grant Channel), the E-RGCH (E-DCH Relative Grant Channel) and the E-HICH (E-DCH HARQ Acknowledgement Indicator Channel), where E-DCH denotes the Enhanced Dedicated Channel, as well as two new MAC (Media Access Control) entities, MAC-e and MAC-es. The packet scheduling function is moved down from the RNC (Radio Network Controller) to the base station, the Node B, realizing fast packet scheduling at the Node B; together with HARQ, the 2 ms short radio frame and multi-code transmission, this allows the uplink data throughput to reach 5.76 Mbit/s and greatly improves the bearing capability of uplink data services.
Compressed mode is a technique that creates certain idle periods within a radio frame; the user terminal (UE) can use these compression gaps to perform measurements for various purposes, so as to realize inter-system handover, inter-frequency hard handover and handover to time-division duplex.
In a WCDMA system, after the RNC receives an event measurement report from the UE and judges that the measurement result satisfies the conditions for starting compressed mode, the RNC sends a channel reconfiguration command, carrying the compressed-mode parameters, to the UE and the Node B respectively, and starts compressed mode.
3GPP TS 25.214 specifies explicitly how the UE receives the HSUPA downlink physical channels when the downlink DPCH (Downlink Dedicated Physical Channel) is in compressed mode. The HSUPA downlink physical channels comprise the E-AGCH, E-RGCH and E-HICH, and the protocol defines that the UE only decodes the slot data on the E-AGCH, E-RGCH and E-HICH channels that do not overlap with a compression gap on the DPCH, and discards the slot data on these channels that do overlap with a DPCH compression gap.
Since the slot data on the E-AGCH, E-RGCH and E-HICH that overlap with a downlink DPCH compression gap will be discarded by the UE, the Node B should send DTX (Discontinuous Transmission) in the overlapping slots; taking the E-AGCH as an example, the compressed-mode pattern is shown in Fig. 1. For the compressed-mode handling of the HSUPA downlink physical channels, the prior-art scheme mainly follows the normal-frame processing flow: according to section 5.3.2.4 of 3GPP TS 25.211, the three slots of each E-RGCH and E-HICH subframe use their respective signature sequences, and ACK/NACK or DTX is sent regardless of whether the downlink DPCH is in a compression gap; according to section 5.3.3.14 of 3GPP TS 25.211, for the E-AGCH channel the same absolute grant is sent in every subframe for a 10 ms UE, while one absolute grant is sent per subframe for a 2 ms UE. Existing schemes adopt this processing mainly because it is simple and the system does not have to implement compressed-mode handling for the HSUPA downlink physical channels.
To sum up, the processing method adopted by the prior art does not send DTX in the slots where DTX should be sent but keeps sending normal data, which causes extra processing load on the system, for example the additional coding of the E-AGCH channel and the signature accumulation of the E-RGCH and E-HICH channels. Moreover, because compressed mode may cause a sudden rise in the signal transmit power, it introduces larger interference to the system, so that for example capacity and coverage are noticeably affected. It follows that the prior art still awaits further improvement.
Summary of the invention
To overcome the above defects of the prior art, the present invention provides a processing method for a downlink physical channel compressed mode, which sends DTX in the compression gap of the corresponding Node B frame number BFN (NodeB Frame Number), so as to lower the system load and reduce the interference of compressed mode on the system.
To solve the above technical problem, the present invention includes:
A processing method for a downlink physical channel compressed mode, comprising the following steps:
A. a downlink dedicated physical channel maps the compression information of a compressed frame onto the corresponding BFN;
B. the compression gap of said corresponding BFN is marked, and the result is saved;
C. when processing the absolute grant channel, the relative grant channel and the acknowledgement indicator channel, the high-speed uplink scheduler queries said saved result and sends a discontinuous transmission (DTX) signal in the compression gap of said corresponding BFN.
In the above processing method, step A further comprises: according to the chip offset of said downlink dedicated physical channel, mapping the compression information of said compressed frame onto said corresponding BFN and onto the next BFN (BFN+1).
In the above processing method, said compression information comprises the starting gap slot, the ending gap slot and the gap length.
In the above processing method, the processing method further comprises the following steps:
D. collecting compressed-mode parameters according to the compressed mode;
E. judging whether a connection frame is a compressed frame, and if so, marking the compression information of said connection frame;
F. calculating the BFN corresponding to said connection frame according to the frame offset and chip offset of the downlink dedicated physical channel and the BFN.
In the above processing method, the processing method further comprises: when the chip offset of the downlink dedicated physical channel plus the starting gap slot is greater than 15, the compression information of said connection frame is mapped only onto the next BFN (BFN+1).
In the above processing method, the processing method further comprises: when the chip offset of the downlink dedicated physical channel plus the starting gap slot is less than 15, and the starting gap slot plus the gap length is less than 15, the compression information of said connection frame is mapped only onto the corresponding BFN.
In the above processing method, the processing method further comprises: when the chip offset of the downlink dedicated physical channel plus the starting gap slot is less than 15, and the starting gap slot plus the gap length is greater than 15, the compression information of said connection frame is mapped onto the corresponding BFN and onto the next BFN (BFN+1), respectively.
In the above processing method, step B further comprises the following steps:
B1. the user side creates a one-dimensional array in the dedicated radio link information, the array elements of said one-dimensional array distinguishing compression-gap slots from non-gap slots, with the value of the BFN modulo 4 as the array index;
B2. the starting gap slot, the gap length and the ending gap slot of the corresponding BFN are calculated and recorded in said array element.
In the above processing method, step C further comprises: when processing the relative grant channel and the acknowledgement indicator channel, the high-speed uplink scheduler adopts subframe-signature processing, accumulating at the same time instant the signatures of all subframes on the relative grant channel and the acknowledgement indicator channel.
In the above processing method, step B further comprises: the system processes the HSUPA downlink dedicated physical channels according to the compression information saved in the array element of the corresponding BFN, and clears the corresponding array element when the processing is finished.
The present invention provides a processing method for a downlink physical channel compressed mode; by adding one mapping step to the prior art so that HSUPA sends DTX in the compression gap of the corresponding BFN, the system load is lowered, the interference of compressed mode on the system is reduced, and the stability of the system is improved.
Description of drawings
Fig. 1 is a schematic diagram of the E-AGCH compressed mode in the prior art;
Fig. 2 is a schematic diagram of the HSUPA downlink physical channel compressed-mode processing in the present invention;
Fig. 3 is a flow chart of the HSUPA downlink physical channel compressed-mode processing in the present invention.
Embodiment
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The processing method for a downlink physical channel compressed mode provided by the present invention can reduce the interference of the HSUPA downlink physical channels on the system when the UE is in compressed mode, and improves the stability of the system.
The processing method of the present invention comprises the following steps:
A. collecting the compressed-mode parameters according to the compressed-mode information issued in the system signalling;
B. following the existing handling of DPCH compressed mode, processing the next CFN (Connection Frame Number) frame one frame in advance and judging whether the CFN frame is a compressed frame; if so, the CFN frame is marked as a compressed frame and its starting gap slot Nfirst, ending gap slot Nlast and gap length Nlenth are marked; if not, the CFN frame is marked as a non-compressed frame;
C. calculating the BFN corresponding to the CFN frame according to the frame offset FrameOffset and chip offset ChipOffset of the DPCH and the current Node B frame number BFN (NodeB Frame Number);
D. the DPCH maps the compression information Nfirst, Nlast and Nlenth of the CFN frame onto the corresponding BFN; owing to the influence of the ChipOffset of the DPCH channel, the compression information of the CFN frame may be mapped onto the corresponding BFN and/or BFN+1, the concrete mapping being decided from the ChipOffset of the DPCH channel and the Nfirst and Nlenth recorded in step B;
E. according to the compression information of the currently processed CFN frame recorded in step B, processing all the corresponding BFNs determined in step D, marking the compression gap of each corresponding BFN, and saving the result;
F. when processing the E-AGCH, E-RGCH or E-HICH channel, the HSUPA scheduler sends DTX in the compression gap of the corresponding BFN marked in step E.
Step D of the present invention further comprises the following steps:
D1. calculating ChipOffset/2560 + Nfirst for the DPCH channel; if the sum is greater than 15, go to step D2; otherwise, go to step D3;
D2. the compression information of the CFN frame is mapped only onto the corresponding BFN+1;
D3. the compression information of said CFN frame is mapped onto the corresponding BFN; whether it is also mapped onto BFN+1 is decided by judging whether ((ChipOffset/2560)+Nfirst)%15 + Nlenth + (ChipOffset%2560)/2560 is greater than 15; if so, the compression information of the CFN frame is also mapped onto the corresponding BFN+1; if not, the compression information of the CFN frame is not mapped onto the corresponding BFN+1.
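As an illustration of the mapping decision in steps D1 to D3, the following C sketch reproduces the two threshold checks. It assumes 15 slots of 2560 chips per radio frame and treats the term (ChipOffset%2560)/2560 as contributing one extra slot whenever the chip offset is not slot-aligned, which is how the worked example below (ChipOffset = 11264) arrives at a sum of 20; the function and type names are illustrative, not taken from the patent.

typedef enum {
    MAP_BFN_ONLY,       /* gap lies entirely in the corresponding BFN   */
    MAP_BFN_AND_NEXT,   /* gap starts in BFN and spills over into BFN+1 */
    MAP_NEXT_ONLY       /* gap lies entirely in BFN+1                   */
} gap_mapping_t;

/* Steps D1-D3: decide onto which system frame(s) the CFN gap is mapped. */
static gap_mapping_t map_cfn_gap(int chip_offset, int n_first, int n_length)
{
    int slot_offset  = chip_offset / 2560;             /* whole-slot part of ChipOffset                  */
    int partial_slot = (chip_offset % 2560) ? 1 : 0;   /* assumption: a partial slot offset rounds up    */

    if (slot_offset + n_first > 15)                    /* step D1 -> D2                                  */
        return MAP_NEXT_ONLY;

    /* step D3: the gap starts in BFN; check whether it also reaches BFN+1 */
    int start_in_bfn = (slot_offset + n_first) % 15;
    if (start_in_bfn + n_length + partial_slot > 15)
        return MAP_BFN_AND_NEXT;

    return MAP_BFN_ONLY;
}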
Step E of the present invention further comprises the following steps:
E1. if step D judged that the compression information of the CFN frame is mapped only onto the corresponding BFN, go to step E3; if it is mapped onto the corresponding BFN and BFN+1, go to step E4; if it is mapped only onto the corresponding BFN+1, go to step E5;
E2. a one-dimensional array of size 4, wSlotInGap[4], is created in the dedicated radio link (RL) information of the UE instance; each array element is a WORD16 in which bit 0 represents slot 0, ..., and bit 14 represents slot 14; when a bit is set to 1 the slot is a compression-gap slot, otherwise it is a non-gap slot; when a corresponding BFN is processed, the value of BFN%4 is used as the array index and the compression information of the corresponding BFN is recorded in the array element;
E3. calculating the compressed-mode information on the current corresponding BFN:
starting gap slot = ((ChipOffset/2560) + Nfirst) % 15
gap length = Nlenth + (ChipOffset%2560)/2560
ending gap slot = starting gap slot + gap length - 1
then, according to step E2, the compressed-mode information of the corresponding BFN is recorded in the array element;
E4. calculating the compressed-mode information on the corresponding BFN and BFN+1:
First, the compressed-mode information on the corresponding BFN is calculated:
starting gap slot = ((ChipOffset/2560) + Nfirst) % 15
ending gap slot = 14
gap length = 14 - starting gap slot + 1
then, according to step E2, the compressed-mode information of the corresponding BFN is recorded in the array element;
Next, the compressed-mode information on the corresponding BFN+1 is calculated:
starting gap slot = 0
gap length = Nlenth + (ChipOffset%2560)/2560 - gap length on the corresponding BFN
ending gap slot = starting gap slot + gap length - 1
then, according to step E2, the compressed-mode information of the corresponding BFN+1 is recorded in the array element;
E5. the compressed-mode information on the current BFN+1 is calculated in the same way as in step E3.
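A minimal sketch of the bookkeeping in steps E2 to E5 is given below, building on the mapping decision above. One 16-bit word per radio link, indexed by BFN%4, records the gap slots of that system frame (bit n set means slot n lies inside the gap); wSlotInGap is the name used in the description, while the structure and helper names are illustrative.

#include <stdint.h>

#define SLOTS_PER_FRAME 15

typedef struct {
    uint16_t wSlotInGap[4];    /* per-RL gap bitmap, index = BFN % 4, bit n = slot n */
} rl_gap_info_t;

/* Step E2: mark slots [first .. first+len-1] of system frame bfn as gap slots. */
static void record_gap(rl_gap_info_t *rl, uint32_t bfn, int first, int len)
{
    for (int slot = first; slot < first + len && slot < SLOTS_PER_FRAME; slot++)
        rl->wSlotInGap[bfn % 4] |= (uint16_t)(1u << slot);
}

/* Step E4: the gap spans both BFN and BFN+1. */
static void record_split_gap(rl_gap_info_t *rl, uint32_t bfn,
                             int chip_offset, int n_first, int n_length)
{
    int partial   = (chip_offset % 2560) ? 1 : 0;      /* same rounding assumption as above */
    int start_bfn = (chip_offset / 2560 + n_first) % 15;
    int len_bfn   = 14 - start_bfn + 1;                /* gap runs up to slot 14 of BFN     */
    int len_next  = n_length + partial - len_bfn;      /* remainder lands in BFN+1          */

    record_gap(rl, bfn,     start_bfn, len_bfn);       /* BFN part of step E4               */
    record_gap(rl, bfn + 1, 0,         len_next);      /* BFN+1 part of step E4             */
}

The E3 and E5 cases reduce to a single record_gap() call with the start slot and length computed as in step E3.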
Step F of the present invention further comprises the following steps:
F1. when processing the E-AGCH, E-RGCH and E-HICH channels, regardless of whether a UE uses a 10 ms or a 2 ms TTI, the present invention uniformly adopts subframe-based processing, i.e. subframe 0 is processed at system slot 0 and also covers system slots 2, 3 and 4; subframe 1 is processed at system slot 3 and covers system slots 5, 6 and 7; and so on;
F2. when processing the E-RGCH and E-HICH channels, the present invention accumulates, at the same time instant, all the signatures on the RGHI channel;
F3. since the UE must collect the information of a whole subframe to correctly decode the absolute grant sent on the E-AGCH channel, it discards the whole subframe if some of its slots are gap slots; therefore, when the system handles E-AGCH compressed mode and processes a given subframe, it processes the serving radio link of the corresponding UE according to the corresponding BFN of the current subframe, the index value obtained from BFN%4, and the slots of this subframe; when the corresponding BFN recorded in step E contains a compression gap and any slot of this subframe falls into the compression gap of the corresponding UE, the system skips this subframe and directly sends DTX;
F4. since the UE only needs the information of some slots of a subframe to correctly decode the relative grant and the HARQ acknowledgement sent on the E-RGCH and E-HICH channels, the system, when handling E-RGCH/E-HICH compressed mode, first judges, before accumulating all the signatures on the RGHI channel at the same time instant, whether the RL of each signature is currently in compressed mode; when the corresponding BFN recorded in step E contains a compression gap and a slot of the subframe of this signature falls into said compression gap, that slot is filled with DTX and its signature is not accumulated.
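The query performed by the scheduler in steps F1 to F4 then amounts to testing the saved bitmap for the three slots covered by a subframe; in the embodiment described above, a subframe processed at system slot 3k covers slots 3k+2 to 3k+4 and may therefore wrap into the next system frame. The sketch below, which reuses rl_gap_info_t from the previous sketch, only returns whether DTX should be sent; the transmit and encode hooks that would follow are omitted, and the helper names are illustrative.

#include <stdbool.h>

/* Is slot 'slot' (0..14) of system frame 'bfn' inside a compression gap? */
static bool slot_in_gap(const rl_gap_info_t *rl, uint32_t bfn, int slot)
{
    return (rl->wSlotInGap[bfn % 4] >> slot) & 1u;
}

/*
 * A subframe covers three consecutive slots starting at 'first_slot' of
 * frame 'bfn' and may wrap into bfn+1 (e.g. slots 14, 0 and 1 in the worked
 * example). In the E-AGCH path (step F3) a true result means the whole
 * subframe is skipped and DTX is sent; the E-RGCH/E-HICH path (step F4)
 * instead applies slot_in_gap() per slot and per signature.
 */
static bool subframe_needs_dtx(const rl_gap_info_t *rl, uint32_t bfn, int first_slot)
{
    for (int i = 0; i < 3; i++) {
        int slot = first_slot + i;
        if (slot_in_gap(rl, bfn + slot / 15, slot % 15))
            return true;
    }
    return false;
}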
To explain the solution of the present invention more clearly, as shown in Fig. 2, assume that the relevant information of the DPCH is: FrameOffset = 0, ChipOffset = 11264. The processing flow of the present invention is then as follows.
First, the compressed-mode parameters are collected according to the compressed-mode information issued in the system signalling;
Then, following the existing handling of DPCH compressed mode, each CFN frame of the DPCH channel is judged one frame in advance as to whether it is a compressed frame, and the relevant information is recorded;
Then, the BFN corresponding to the CFN frame is calculated according to the FrameOffset and chip offset ChipOffset of the DPCH channel and the current BFN;
Then, as can be seen from Fig. 2, when the DPCH is a compressed frame at this CFN, its compression information is Nfirst = 8, Nlast = 14, Nlenth = 7. From the ChipOffset and Nfirst of the DPCH, ChipOffset/2560 + Nfirst = 12, which is not greater than 15, so the compression information of this CFN frame is mapped onto the corresponding BFN; further, since ((ChipOffset/2560)+Nfirst)%15 + Nlenth + (ChipOffset%2560)/2560 = 20 > 15, the compression information of this CFN frame is also mapped onto BFN+1;
Then, the compressed-mode information on the corresponding BFN is calculated:
starting gap slot = ((ChipOffset/2560) + Nfirst) % 15 = 12
ending gap slot = 14
gap length = 14 - starting gap slot + 1 = 3
Using the value of BFN%4 as the array index, the compression information of this corresponding BFN is recorded in the wSlotInGap[4] array element under the RL information of the UE instance;
The compressed-mode information on the corresponding BFN+1 is then calculated:
starting gap slot = 0
gap length = Nlenth + (ChipOffset%2560)/2560 - gap length on BFN = 5
ending gap slot = starting gap slot + gap length - 1 = 4
Using the value of (BFN+1)%4 as the array index, the compression information of this corresponding BFN+1 is recorded in the wSlotInGap[4] array element under the RL information of the UE instance;
Then, when the E-AGCH, E-RGCH and E-HICH processing is performed for the corresponding BFN, it is already known that slots 12, 13 and 14 of this corresponding BFN are compression-gap slots of the UE, and that slots 0, 1, 2, 3 and 4 of the corresponding BFN+1 are compression-gap slots. Therefore, when handling the E-AGCH channel, the system processes subframe 3 at system slot 9, corresponding to slots 11, 12 and 13; since slots 12 and 13 are known to be gap slots, this subframe is skipped and DTX is sent directly. When the system processes subframe 4 at system slot 12, corresponding to slots 14, 0 and 1, slot 14 is known to be a gap slot and slots 0 and 1 of the corresponding BFN+1 frame are gap slots, so this subframe is skipped and DTX is sent directly. For the same reason, when handling the E-RGCH/E-HICH (RGHI) channel, the system processes subframe 3 at system slot 9, corresponding to slots 11, 12 and 13; since slots 12 and 13 are known to be gap slots, this subframe is skipped and DTX is sent directly; when the system processes subframe 4 at system slot 12, corresponding to slots 14, 0 and 1, slot 14 is a gap slot and slots 0 and 1 of the corresponding BFN+1 frame are gap slots, so this subframe is skipped and DTX is sent directly;
Then, with the value of BFN%4 as the array index, the corresponding element of the wSlotInGap[4] array under the RL information of the UE instance is cleared, and the above flow begins again in a loop. Through the above processing, the load of the system is effectively reduced and the stability of the system is improved.
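As a quick cross-check of the worked example, the following snippet drives the sketches above (compiled together in one translation unit) with ChipOffset = 11264, Nfirst = 8 and Nlenth = 7 and prints the gap slots; with the stated rounding assumption it reports slots 12 to 14 of BFN and slots 0 to 4 of BFN+1, matching the values derived above. The BFN value itself is arbitrary here.

#include <stdio.h>

int main(void)
{
    rl_gap_info_t rl = { { 0, 0, 0, 0 } };
    uint32_t bfn = 100;                     /* arbitrary example system frame number */

    if (map_cfn_gap(11264, 8, 7) == MAP_BFN_AND_NEXT)
        record_split_gap(&rl, bfn, 11264, 8, 7);

    for (int f = 0; f <= 1; f++)
        for (int s = 0; s < 15; s++)
            if (slot_in_gap(&rl, bfn + f, s))
                printf("BFN+%d: slot %d is a gap slot\n", f, s);

    return 0;
}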
To further elaborate the processing method of the present invention, as shown in Fig. 3, the present invention comprises the following steps:
Step 301: the compressed-mode parameters are collected according to the compressed mode;
Step 302: the next CFN (Connection Frame Number) frame is processed one frame in advance, and whether the CFN frame is a compressed frame is judged; if so, the CFN frame is marked as a compressed frame, its starting gap slot Nfirst, ending gap slot Nlast and gap length Nlenth are marked, and the flow goes to step 303; if not, the CFN frame is marked as a non-compressed frame and the flow returns to step 301;
Step 303: the BFN corresponding to the CFN frame is calculated according to the frame offset FrameOffset and chip offset ChipOffset of the DPCH and the current Node B frame number BFN (NodeB Frame Number);
Step 304: whether ChipOffset/2560 + Nfirst of the DPCH channel is greater than 15 is judged; if so, the flow goes to step 305; if not, it goes to step 307;
Step 305: the compression information of the CFN frame is mapped only onto the corresponding BFN+1;
Step 306: the compression information on the corresponding BFN+1 is recorded, and the result is saved in the wSlotInGap[4] array element whose index is the value of (BFN+1)%4; the flow then goes to step 312;
Step 307: whether ((ChipOffset/2560)+Nfirst)%15 + Nlenth + (ChipOffset%2560)/2560 is greater than 15 is judged; if so, the flow goes to step 308; if not, it goes to step 310;
Step 308: the compression information of the CFN frame is mapped onto the corresponding BFN+1 and BFN;
Step 309: the compression information on the corresponding BFN+1 and BFN is recorded, and the results are saved in the wSlotInGap[4] array elements whose indices are the values of (BFN+1)%4 and BFN%4; the flow then goes to step 312;
Step 310: the compression information of the CFN frame is mapped only onto the corresponding BFN;
Step 311: the compression information on the corresponding BFN is recorded, and the result is saved in the wSlotInGap[4] array element whose index is the value of BFN%4; the flow then goes to step 312;
Step 312: the HSUPA downlink physical channels are processed according to the information saved in the wSlotInGap[4] array element of the current corresponding BFN or BFN+1, and the corresponding element of the wSlotInGap[4] array is cleared when the processing is finished;
Step 313: whether compressed mode has ended is judged; if so, the flow ends; if not, it returns to step 302 and continues the loop. Through the above processing, the transmit power of the HSUPA downlink physical channels is effectively reduced, more power can be reserved for HSDPA, and the utilization of HSDPA is improved.
To sum up, the processing method for a downlink physical channel compressed mode provided by the present invention adds one mapping step to the prior art so that HSUPA sends DTX in the compression gap of the corresponding BFN. That is, when processing the HSUPA downlink physical channels, a simple query is added on top of the prior-art handling of DPCH compressed mode: when it is determined that a slot of the current subframe falls into a compression gap, the current subframe, or certain slots of it, is skipped. This effectively reduces the load of the system, avoids sudden rises of the system transmit power, reduces extra interference and improves the stability of the system. In particular when the HSDPA power algorithm uses dynamic adjustment, effectively reducing the transmit power of the HSUPA downlink physical channels allows more power to be reserved for HSDPA and further improves the utilization of HSDPA.
It should be understood that, although the above description of the preferred embodiments is rather detailed, it shall not be regarded as limiting the patent protection scope of the present invention. Under the teaching of the present invention, and without departing from the scope protected by the claims of the present invention, those of ordinary skill in the art can make various modifications such as substitutions and simple combinations, which all fall within the protection scope of the present invention; the scope claimed by the present invention shall be defined by the appended claims.

Claims (5)

1. A processing method for a downlink physical channel compressed mode, comprising the following steps:
A. a downlink dedicated physical channel maps the compression information of a compressed frame onto the corresponding Node B frame number (BFN); or, according to the chip offset of said downlink dedicated physical channel, the compression information of said compressed frame is mapped onto said corresponding BFN and onto the next BFN (BFN+1);
said compression information comprising the starting gap slot, the ending gap slot and the gap length;
wherein:
when the chip offset of the downlink dedicated physical channel plus the starting gap slot is greater than 15, the compression information of said compressed frame is mapped only onto the next BFN (BFN+1);
when the chip offset of the downlink dedicated physical channel plus the starting gap slot is less than 15, and the starting gap slot plus the gap length is less than 15, the compression information of said compressed frame is mapped only onto the corresponding BFN;
when the chip offset of the downlink dedicated physical channel plus the starting gap slot is less than 15, and the starting gap slot plus the gap length is greater than 15, the compression information of said compressed frame is mapped onto the corresponding BFN and onto the next BFN (BFN+1), respectively;
B. the compression gap of said corresponding BFN is marked, and the result is saved;
C. when processing the absolute grant channel, the relative grant channel and the acknowledgement indicator channel, a high-speed uplink scheduler queries said saved result and sends a discontinuous transmission (DTX) signal in the compression gap of said corresponding BFN.
2. The processing method according to claim 1, characterized in that, before step A, said processing method further comprises the following steps:
D. collecting compressed-mode parameters according to the compressed mode;
E. judging whether a connection frame is a compressed frame, and if so, marking the compression information of said connection frame;
F. calculating the BFN corresponding to said connection frame according to the frame offset and chip offset of the downlink dedicated physical channel and the BFN.
3. The processing method according to claim 1, characterized in that said step B further comprises the following steps:
B1. the user side creates a one-dimensional array in the dedicated radio link information, the array elements of said one-dimensional array distinguishing compression-gap slots from non-gap slots, with the value of the BFN modulo 4 as the array index;
B2. the starting gap slot, the gap length and the ending gap slot of the corresponding BFN are calculated and recorded in said array element.
4. The processing method according to claim 1, characterized in that said step C further comprises: when processing the relative grant channel and the acknowledgement indicator channel, the high-speed uplink scheduler adopts subframe-signature processing, accumulating at the same time instant the signatures of all subframes on the relative grant channel and the acknowledgement indicator channel.
5. The processing method according to claim 3, characterized in that said step B further comprises: the system processes the HSUPA downlink dedicated physical channels according to the compression information saved in the array element of the corresponding BFN, and clears the corresponding array element when the processing is finished.
CN2008102165376A 2008-09-24 2008-09-24 Processing method for downlink physical channel compression schema Expired - Fee Related CN101383658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008102165376A CN101383658B (en) 2008-09-24 2008-09-24 Processing method for downlink physical channel compression schema

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008102165376A CN101383658B (en) 2008-09-24 2008-09-24 Processing method for downlink physical channel compression schema

Publications (2)

Publication Number Publication Date
CN101383658A CN101383658A (en) 2009-03-11
CN101383658B true CN101383658B (en) 2012-12-05

Family

ID=40463307

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008102165376A Expired - Fee Related CN101383658B (en) 2008-09-24 2008-09-24 Processing method for downlink physical channel compression schema

Country Status (1)

Country Link
CN (1) CN101383658B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1426172A (en) * 2001-12-12 2003-06-25 华为技术有限公司 Method for controlling downward special physical channel power under compressed mode
CN1429032A (en) * 2001-12-24 2003-07-09 华为技术有限公司 Open-loop power control method in switching process between frequencies
EP1653638A1 (en) * 2003-08-08 2006-05-03 Matsushita Electric Industrial Co., Ltd. Mobile station apparatus and receiving method
CN101141744A (en) * 2007-10-19 2008-03-12 华为技术有限公司 Terminal compression mode starting method, wireless network controller and terminal

Also Published As

Publication number Publication date
CN101383658A (en) 2009-03-11

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170830

Address after: Group 148, Zhanqian Street joint community, Tiefeng District, Qiqihar City, Heilongjiang Province, 161000

Co-patentee after: Li Yawen

Patentee after: Li Xin

Co-patentee after: Li Yuezhong

Co-patentee after: Li Yunfang

Address before: 518057 Legal Affairs Department, ZTE Building, Keji South Road, Hi-tech Industrial Park, Nanshan District, Guangdong

Patentee before: ZTE Corp.

TR01 Transfer of patent right
CB03 Change of inventor or designer information

Inventor after: Li Xin

Inventor after: Li Yawen

Inventor after: Li Yuezhong

Inventor after: Li Yunfang

Inventor before: You Jia

Inventor before: Li Tao

CB03 Change of inventor or designer information
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121205

Termination date: 20210924

CF01 Termination of patent right due to non-payment of annual fee