CN109725836B - User context compression method and device - Google Patents


Info

Publication number
CN109725836B
CN109725836B (application CN201711030139.0A)
Authority
CN
China
Prior art keywords
node
content
context data
user
contents
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711030139.0A
Other languages
Chinese (zh)
Other versions
CN109725836A (en
Inventor
张凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Potevio Information Technology Co Ltd
Original Assignee
Potevio Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Potevio Information Technology Co Ltd filed Critical Potevio Information Technology Co Ltd
Priority to CN201711030139.0A priority Critical patent/CN109725836B/en
Publication of CN109725836A publication Critical patent/CN109725836A/en
Application granted granted Critical
Publication of CN109725836B publication Critical patent/CN109725836B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The invention provides a user context data compression method and device. The method comprises the following steps: when the LTE base station finds that any context data node of any LTE user generates content, it writes the content into a memory block and records the correspondence between the node's identifier and the memory block's pointer in a context data list of the user maintained by the base station; when idle, the base station polls each context data node of each user and counts how many times each content of each node repeats; at the end of each statistical period, it calculates the repetition degree of each content of each node, places a preset number of the most-repeated contents into a universal configuration template, releases the memory blocks holding matching node contents, and replaces the corresponding pointers in the users' context data lists with pointers into the universal configuration template. The invention improves the compression rate of user context data.

Description

User context compression method and device
Technical Field
The present invention relates to the field of data compression technologies, and in particular, to a method and an apparatus for compressing user context data.
Background
With the wide adoption of Long Term Evolution (LTE) technology in the industry, the wireless private network of the power system has also been upgraded to the LTE standard. Compared with a traditional 2G system, LTE offers a qualitative improvement in transmission bandwidth, transmission delay, the number of online users, and so on. Meanwhile, owing to the service characteristics of the LTE private power network (high user density, small data volume, long scheduling interval), a single device needs to support tens of thousands of simultaneously online users. Maintaining and managing such a volume of user management data (i.e., user context data) poses a significant challenge to base station storage resources. A context compression method suitable for the embedded system of a base station is needed to improve the storage efficiency of the base station system and reduce equipment cost.
To store the contexts of the users it manages, a conventional LTE base station deploys them directly in memory at a fixed length per user. Memory overhead is therefore linear in the number of users and becomes huge as the number of users grows.
Due to the complexity of the LTE protocol stack, the context data of one user generally occupies 5 KByte of storage. In a public network scenario with 1,200 users online in a single cell, the storage overhead is 6 MByte. However, in the power wireless private network environment, a single cell can have more than 20,000 online users given the coverage area of a 230 MHz band cell, and a single six-cell station can serve more than 100,000 users in total. The context storage overhead alone then exceeds 500 MByte, which cannot be deployed on a baseband board that typically has only 512 MByte of memory.
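The storage figures above follow from simple arithmetic; the helper below is only an illustrative back-of-envelope check (decimal megabytes, matching the text's round numbers; the function name is not from the patent):

```python
def context_overhead_mb(users, ctx_kbytes=5):
    """Total context storage for `users` online users, at the 5 KByte
    per-user context quoted above (decimal MB for round numbers)."""
    return users * ctx_kbytes / 1000
```

With 1,200 users this gives 6 MB; with 100,000 users it gives 500 MB, which does not fit a 512 MByte baseband board once the rest of the protocol stack is counted.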
To ensure the deployment of a protocol stack that supports a large user population, the context must be compressed. However, the LTE protocol stack, especially in the TD-LTE (Time-Division LTE) system, is highly time sensitive: one processing cycle must be strictly limited to one TTI (Transmission Time Interval), usually 1 ms, making it a hard real-time system. Commonly used real-time compression algorithms cannot guarantee such strict real-time requirements.
Disclosure of Invention
The invention provides a user context data compression method and device, which are used for improving the compression rate of LTE user context data.
The technical scheme of the invention is realized as follows:
a method of user context data compression, the method comprising:
a Long Term Evolution (LTE) base station finds that any context data node of any LTE user generates content, applies for a memory block according to the length of the node defined in advance, writes the content into the memory block, and records the corresponding relation between the identifier of the node and the pointer of the memory block in a context data list of the user maintained by the LTE base station;
when the LTE base station is idle, the LTE base station polls each context data node of each user and counts the repetition times of each content of each context data node;
when each counting period is finished, according to the counted repetition times of each content of each context data node, calculating the repetition degree of each content of each node, putting a preset number of contents with the highest repetition degree into a universal configuration template, and for the node contents put into the universal configuration template, if the node contents of a user are the same as the node contents put into the universal configuration template, releasing a memory block where the node contents of the user are located, and simultaneously replacing a pointer corresponding to the identifier of the node in a context data list of the user with the pointer of the node contents in the universal configuration template.
The method further comprises: the LTE base station counts the number of calls of each context data node in real time;
and, at the end of each statistical period, calculates the call rate of each context data node.
and the step of placing the preset number of contents with the highest repetition degree into the universal configuration template comprises the following steps:
calculating the weighted sum of the call rate and the repetition degree of each context data node according to the weights assigned in advance to the call rate and the repetition degree;
after the weighted sum of the call rates and the repeatability of all the context data nodes is obtained through calculation, sorting is carried out according to the weighted sum from high to low, and a first preset number of nodes which are sorted in the front are selected;
and for each selected node, selecting a second preset number of contents with the highest repetition degree of the node, and putting the selected second preset number of contents into the universal configuration template.
After selecting the second preset number of contents with the highest repetition degree of the node, before placing the selected second preset number of contents into the universal configuration template, the method further includes:
and for each content in the selected second preset number of contents, if the repetition degree of the content is greater than a preset threshold value, determining to place the content into the universal configuration template, otherwise, determining not to place the content into the universal configuration template.
Initializing a barrel-type DDR memory pool according to the length of each node in a predefined user context data structure and the number of the planned users of the LTE base station, wherein each node of each user corresponds to at least one DDR memory block with the length not less than the length of the node in the barrel-type DDR memory pool;
applying for a memory block according to the predefined length of the node as:
applying for a DDR memory block to a barrel type DDR memory pool according to the length of the node which is defined in advance;
the generic configuration template is located in the M2 cache;
the step of putting the preset number of contents with the highest repetition degree into the universal configuration template comprises the following steps:
for each content in the preset number of contents with the highest repetition degree, applying for a cache block from the M2 cache according to the length of the node where the content is located, and placing the content into the applied cache block.
The method further comprises:
when the content of any context data node of any user changes, if the pointer corresponding to the identifier of the node in the context data list of the user points to the general configuration template, a memory block is applied for the node again, new content is written into the applied memory block, and meanwhile, the pointer corresponding to the identifier of the node in the context data list of the user is replaced by the pointer of the applied memory block.
An apparatus for user context data compression, the apparatus comprising:
a context write module: when finding that any context data node of any LTE user generates content, applying for a memory block according to the predefined length of the node, writing the content into the memory block, and recording the correspondence between the identifier of the node and the pointer of the memory block in a context data list of the user maintained by the module itself;
a statistics module: when the LTE base station is idle, polling each context data node of each user and counting the number of repetitions of each content of each context data node; and, at the end of each statistical period, calculating the repetition degree of each content of each node according to the counted repetition numbers;
a compression module: placing a preset number of the contents with the highest repetition degree into the universal configuration template; and, for node contents placed into the universal configuration template, if a user's node content is the same as a node content placed into the template, releasing the memory block where the user's node content is located and replacing the pointer corresponding to the node's identifier in the user's context data list with the pointer of the node content in the universal configuration template.
The statistical module is further used for carrying out real-time statistics on the calling times of each context data node; and, when each statistical period is over, calculating the call rate of each context data node;
and the compression module puts the preset number of contents with the highest repetition degree into the universal configuration template specifically comprises:
calculating the weighted sum of the call rate and the repetition degree of each context data node according to the weights assigned in advance to the call rate and the repetition degree; after the weighted sums of all context data nodes are obtained, sorting them from high to low and selecting the first preset number of top-ranked nodes; and, for each selected node, selecting the second preset number of contents with the highest repetition degree of that node and placing them into the universal configuration template.
After the compression module selects the second preset number of contents with the highest repetition degree of the node, before placing the selected second preset number of contents into the universal configuration template, the method further includes:
and for each content in the selected second preset number of contents, if the repetition degree of the content is greater than a preset threshold value, determining to place the content into the universal configuration template, otherwise, determining not to place the content into the universal configuration template.
The context write-in module applies for a memory block according to the length of the node defined in advance as follows:
applying for a DDR memory block to a barrel DDR memory pool according to the length of the node which is defined in advance, wherein the barrel DDR memory pool is initialized in advance according to the length of each node in a user context data structure which is defined in advance and the number of the planned users of the LTE base station, and each node of each user at least corresponds to one DDR memory block with the length not less than the length of the node in the barrel DDR memory pool;
and, the generic configuration template is located in the M2 cache; the compression module puts the preset number of contents with the highest repetition degree into the universal configuration template, and comprises the following steps:
for each content in the preset number of contents with the highest repetition degree, applying for a cache block from the M2 cache according to the length of the node where the content is located, and placing the content into the applied cache block.
The context write module is further configured to,
when the content of any context data node of any user changes, if the pointer corresponding to the identifier of the node in the context data list of the user points to the general configuration template, a memory block is applied for the node again, new content is written into the applied memory block, and meanwhile, the pointer corresponding to the identifier of the node in the context data list of the user is replaced by the pointer of the applied memory block.
According to the context data compression method and device, the repetition degree of each content of each context data node is counted, and the content with high repetition degree is stored in the universal configuration template, so that the compression rate of the user context data is improved;
furthermore, the embodiment of the invention combines the calling rate of the context data node with the repeatability of the content to decide which content is put into the universal configuration template, thereby enabling the compression of the user context data to be more suitable for actual needs.
Drawings
FIG. 1 is a flowchart of a method for compressing user context data according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for compressing user context data according to another embodiment of the present invention;
FIG. 3 is a diagram of a user context data structure;
FIG. 4 is a diagram illustrating the circular polling of user context data according to the present invention;
FIG. 5 is an exemplary diagram of user context data compression provided by the present invention;
FIG. 6 is a schematic diagram showing the change of the memory occupied by the user context data with the running time under the two conditions of not using the compression algorithm and adopting the compression algorithm provided by the present invention;
fig. 7 is a schematic structural diagram of an apparatus for compressing user context data according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
The inventor found through analysis that although the number of on-network users of the power wireless private network is huge, the terminal types and carried services are relatively homogeneous, and the terminal capability descriptions, Quality of Service (QoS) descriptions, power control settings, and the like are basically consistent. The similarity of user context content is extremely high, which makes context compression feasible.
In addition, power private network services stay online for long periods, and once a user attaches, the probability of reconfiguring the service type is extremely low. The number of users on the device and their context contents are also extremely stable, so the maintenance cost is low once the contexts have been formed into a compact structure.
In view of the high real-time requirements of the system, the inventors contemplate: a dictionary compression scheme is employed that is extremely efficient in read operations (i.e., decompression).
Fig. 1 is a flowchart of a method for compressing user context data according to an embodiment of the present invention, which includes the following steps:
step 101: the LTE base station finds that any context data node of any LTE user generates content, applies for a memory block according to the length of the node defined in advance, writes the content into the memory block, and records the corresponding relation between the identifier of the node and the pointer of the memory block in a context data list of the user maintained by the LTE base station.
Step 102: when the LTE base station is idle, the LTE base station polls each context data node of each user according to a context data list of each user maintained by the LTE base station, and counts the repetition times of each content of each context data node.
Step 103: when each counting period is finished, according to the counted repetition times of each content of each context data node, calculating the repetition degree of each content of each node, putting a preset number of contents with the highest repetition degree into a universal configuration template, and for the node contents put into the universal configuration template, if the node contents of a user are the same as the node contents put into the universal configuration template, releasing a memory block where the node contents of the user are located, and simultaneously replacing a pointer corresponding to the identifier of the node in a context data list of the user with the pointer of the node contents in the universal configuration template.
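Step 101 can be sketched as follows; the per-node free lists standing in for the memory pool, the dict-based context data list, and the function name are illustrative assumptions, not the patent's data structures:

```python
def on_node_content(user_ctx, free_blocks, node_key, content):
    """Step 101 sketch: take a pre-sized block from the per-node free
    list, write the generated content into it, and record the
    identifier -> memory-block-pointer correspondence in the user's
    context data list."""
    block = free_blocks[node_key].pop()   # block length predefined per node
    block[:len(content)] = content
    user_ctx[node_key] = block            # identifier -> pointer
    return block
```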
Fig. 2 is a flowchart of a method for compressing user context data according to another embodiment of the present invention, which includes the following specific steps:
step 201: according to a predefined user context data structure, a unique key value is respectively allocated to each context data node in the structure on an LTE base station.
The user context data structure is typically a tree structure, as shown in FIG. 3, where K1, K2, K3, … are key values for each context data node to uniquely identify the corresponding node.
Step 202: initializing a barrel DDR (Double Data Rate) memory pool according to the length of each node in a predefined user context Data structure and the number of the planned users of the LTE base station, wherein each node of each user corresponds to at least one DDR memory block with the length not less than the length of the node in the barrel DDR memory pool.
That is, if the number of the planned users of the LTE base station is M and the number of nodes in the user context data structure is N, the barrel-type DDR memory pool at least includes N × M DDR memory blocks, each context data node of each user corresponds to at least one DDR memory block in the barrel-type DDR memory pool, and the length of the DDR memory block is not less than the length of the corresponding node.
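A minimal model of such a barrel-type pool, assuming one free list of fixed-size blocks per node type sized for the planned user count (the class and method names are illustrative):

```python
class BarrelPool:
    """Barrel-type memory pool sketch: for each context-data node, a
    free list of pre-sized blocks, one per planned user (N node types
    x M planned users = N*M blocks in total)."""
    def __init__(self, node_lengths, planned_users):
        self.free = {key: [bytearray(length) for _ in range(planned_users)]
                     for key, length in node_lengths.items()}

    def alloc(self, node_key):
        # every block in a bucket already has the node's predefined length
        return self.free[node_key].pop()

    def release(self, node_key, block):
        self.free[node_key].append(block)
```

Because every bucket holds blocks of exactly the node's length, allocation and release are O(1) list operations with no fragmentation, which suits the 1 ms TTI budget mentioned in the background.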
Step 203: the LTE base station finds that any context data node of any user generates content, applies a DDR memory block to a barrel-type DDR memory pool according to the length of the node, writes the content into the DDR memory block, and records the corresponding relation between the key value of the node and the pointer of the DDR memory block in a context data list of the user maintained by the LTE base station.
Step 204: the LTE base station counts the number of calls of each context data node in real time.
The number of calls of each context data node is the sum of the calls of that node over all users. For example: for node Kn, whenever any user's node Kn is called, the call count of Kn is incremented by 1.
Step 205: when the LTE base station is idle, the LTE base station polls each context data node of each user according to a context data list of each user maintained by the LTE base station, and counts the repetition times of each content of each context data node.
For example: by polling each context data node of each user, we get: for node Kn, its contents are of two kinds: vn _1, Vn _2, wherein the content of node Kn of p users is Vn _1, and the content of node Kn of q users is Vn _2, then: the content Vn _1 of the node Kn has the number of repetitions p, and the content Vn _2 of the node Kn has the number of repetitions q.
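The tally in this example amounts to a frequency count over each node's content values; a hash-based sketch (the list-of-dicts user model is an illustrative stand-in for the context data lists):

```python
from collections import Counter

def count_repetitions(users, node_key):
    """Count how many times each distinct content value of one
    context-data node appears across all users."""
    return Counter(u[node_key] for u in users if node_key in u)
```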
As shown in fig. 4, all users may form a ring, and the LTE base station circularly polls each context data node of each user on the ring, and counts the number of times of repeating various contents of each context data node.
It should be noted that each idle period of the LTE base station is short, and at the beginning of the next idle period, the LTE base station continues to poll the last user to which the LTE base station has polled next to the last idle period.
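Resumable circular polling can be sketched with a saved cursor; `RingPoller` and the per-idle-period `budget` parameter are illustrative assumptions:

```python
class RingPoller:
    """Polls users arranged on a ring, remembering where the previous
    idle period stopped so the next one continues from there."""
    def __init__(self, user_ids):
        self.user_ids = user_ids
        self.cursor = 0

    def poll(self, budget):
        # visit at most `budget` users during this idle period
        visited = []
        for _ in range(budget):
            visited.append(self.user_ids[self.cursor])
            self.cursor = (self.cursor + 1) % len(self.user_ids)
        return visited
```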
Step 206: and when each statistical period is finished, calculating the dispatching rate and the repeatability of each context data node, and calculating the weighted sum of the dispatching rate and the repeatability of each context data node according to the weight values distributed for the dispatching rate and the repeatability in advance.
Wherein:
formula one: the call rate of a context data node = the total number of calls of the node / the statistical time;
formula two defines the repetition degree of each context data node in terms of the highest repetition count of the node's content, where the highest repetition count of a node's content is the repetition count of the content that repeats most often among all contents of the node.
In general, the statistical time is one statistical period. Accordingly, in formula one, "the total number of calls of the node" is the total obtained at the end of the current statistical period minus the total obtained at the end of the previous statistical period;
likewise, "the highest repetition count of the node's content" in formula two is the value obtained at the end of the current statistical period minus the value obtained at the end of the previous statistical period.
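The per-period increments behind the two formulas can be sketched as follows (the function name and the seconds-based period are illustrative; the patent does not spell out formula two's exact normalisation, so only the increment is computed):

```python
def period_stats(calls_now, calls_prev, top_repeats_now, top_repeats_prev,
                 period_seconds):
    """Formula one as an increment over one statistical period, plus the
    corresponding highest-repetition-count increment used by formula two."""
    call_rate = (calls_now - calls_prev) / period_seconds
    repeat_increment = top_repeats_now - top_repeats_prev
    return call_rate, repeat_increment
```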
Step 207: and after the weighted sum of the call rates and the repeatability of all the context data nodes is obtained through calculation, sorting is carried out according to the weighted sum from high to low, and a first preset number of nodes which are sorted in the front are selected.
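Step 207's ranking can be sketched as a weighted-sum sort; the stats layout and weight values below are illustrative assumptions:

```python
def select_top_nodes(stats, w_call, w_repeat, first_preset_number):
    """Rank context-data nodes by w_call * call_rate + w_repeat *
    repetition degree, highest first, and keep the top
    `first_preset_number` nodes."""
    ranked = sorted(stats,
                    key=lambda k: w_call * stats[k]['call_rate'] +
                                  w_repeat * stats[k]['repeat'],
                    reverse=True)
    return ranked[:first_preset_number]
```

The weights let an operator bias template placement toward frequently called nodes or toward highly repetitive ones, as the trailing description notes.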
Step 208: and for each selected node, selecting a second preset number of contents with the highest repetition degree of the node, and for each selected content, if the repetition degree of the content is greater than a preset threshold value, applying for a cache block from the M2 cache according to the length of the node, and placing the content into the cache block.
Further, in this step, for each selected content, if the repetition degree of the content is greater than the preset threshold, first, according to the pointer corresponding to the node where the content is located, it is determined whether the pointer points to the M2 cache, and if so, it is determined that the content has already been stored in the M2 cache and is not further processed; otherwise, the subsequent action of applying for a cache block from the M2 cache according to the length of the node and putting the content into the cache block is executed.
In addition, for each selected content whose repetition degree is greater than the preset threshold, the content is confirmed as belonging in the M2 cache. After all node contents that should be in the M2 cache for this statistical period are determined, any M2 cache block already in the cache but holding content that no longer belongs there is released. For each released M2 cache block, the correspondence between node identifiers and pointers in each user's context data list is searched using the released block's pointer and its node identifier; on a match, a DDR memory block is applied for again for that user's node content according to the node's length, the node content is written into it, and the pointer corresponding to the node's identifier in that user's context data list is replaced with the DDR memory block's pointer.
Step 209: and inquiring the content of the node of each user for the content of the node put into the M2 cache block, if the content of the node of any user is the same as the content of the node put into the M2 cache block, releasing the DDR memory block where the content of the node is positioned, meanwhile, searching the key value of the node in the context data list of the user, and replacing the pointer of the DDR memory block corresponding to the key value of the node with the pointer of the M2 cache block.
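Step 209 can be sketched as follows, modelling each entry in a user's context data list as a (location, content) pair — a simplification invented here for illustration:

```python
def compress_node(user_ctx, node_key, template):
    """If the user's private copy of a node equals the shared template
    copy, repoint the context-list entry at the M2 template so the
    caller can release the private DDR block."""
    location, content = user_ctx[node_key]
    if location == 'DDR' and template.get(node_key) == content:
        user_ctx[node_key] = ('M2', content)  # now a shared reference
        return True   # private DDR block may be released
    return False
```

Reading a compressed node costs only one pointer dereference, which is why the dictionary scheme keeps decompression effectively free.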
As shown in fig. 5, the context data of a certain user is stored at a certain time as follows:
the content V1 of the K1 node is written into the DDR memory block DB 3-1;
the content V2 of the K2 node is written into the DDR memory block DB 1-1;
the content V4 of the K4 node is written into the DDR memory block DB 2-1;
the content V5 of the K5 node is written into the DDR memory block DB 3-3;
the content V6 of the K6 node is written into the M2 cache block T2-4;
At the end of the current statistical period, the content V5 of the K5 node is found to have been transferred to M2 cache block T3-2, so the DDR memory block DB3-3 where V5 was originally located is released.
Further, when the content of any context data node of any user changes, if the pointer corresponding to the key value of the node in the user's context data list points to an M2 cache block, a DDR memory block is applied for the node again from the barrel-type DDR memory pool, the new content is written into the applied DDR memory block, and the pointer corresponding to the key value of the node in the user's context data list is replaced with the pointer of the applied DDR memory block.
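This update rule is essentially copy-on-write; a sketch that models each context-list entry as a (location, content) pair (an illustrative simplification, not the patent's structures):

```python
def write_node(user_ctx, node_key, new_content):
    """Copy-on-write sketch: a node whose entry points into the shared
    M2 template diverges into a fresh private DDR block on any change,
    leaving the template copy untouched."""
    location, _ = user_ctx.get(node_key, ('DDR', None))
    user_ctx[node_key] = ('DDR', new_content)  # always a private block now
    return location == 'M2'  # True when a new DDR block was applied for
```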
The method is particularly suitable for scenes with high user context data repeatability, such as a power wireless private network.
Examples of applications of the invention are given below:
In a power private network, terminals are configured at a narrowband-to-broadband ratio of 8:1, the repetition degree of the user context data is 94%, and 100,000 LTE user terminals access the LTE network within 2 minutes as an impact test.
Fig. 6 is a schematic diagram showing the change of the memory occupied by the user context data with the running time under the two conditions of not using the compression algorithm and adopting the compression algorithm provided by the present invention, wherein the thick black line represents the unused compression algorithm, and the gray line represents the use of the compression algorithm provided by the present invention.
As shown by the thick black line in fig. 6, when no compression algorithm is used and each user's context data is stored directly in memory, the memory overhead is basically linear in the number of users; in the 100,000-user scenario the memory overhead is 510 MB.
As shown by the grey line in fig. 6, after the compression algorithm provided by the present invention is adopted with the statistical period set to 20 s, the peak memory overhead occurs within the first 20 s and reaches 75 MB; as the universal configuration template is refined, the memory overhead drops markedly, to 55 MB in the 100,000-user scenario. The average load of the Digital Signal Processor (DSP) of the LTE base station rises from 58% to 71%; analysis shows the increase is consumed by running the compression algorithm during idle time, i.e. the algorithm costs about 13% of DSP processing capacity. The scheme thus trades time for space.
The invention has the following beneficial technical effects:
by counting the repetition degree of each content of each context data node, the content with high repetition degree is put into a general configuration template for storage, thereby improving the compression rate of the user context data;
furthermore, the calling rate of the context data node is combined with the repeatability of the content to decide which content is put into the universal configuration template, so that the compression of the user context data is more suitable for actual needs;
in addition, repetition degree calculation and statistical operation are carried out when the LTE base station is idle, and the influence on normal service is reduced.
Fig. 7 is a schematic structural diagram of a user context data compression apparatus according to an embodiment of the present invention, where the apparatus is located on an LTE base station, and the apparatus mainly includes: a context write module 71, a statistics module 72, and a compression module 73, wherein:
the context write module 71: when finding that any context data node of any LTE user generates content, applying for a memory block according to the length of the node defined in advance, writing the content into the memory block, and recording the corresponding relation between the identifier of the node and the pointer of the memory block in a context data list of the user maintained by the node.
The statistics module 72: when the LTE base station is idle, polls each context data node of each user according to the context data list of each user maintained by the context write module 71 and counts the number of repetitions of each content of each context data node; at the end of each statistical period, it calculates the repetition degree of each content of each node according to the counted repetition numbers.
The compression module 73: according to the repetition degree of each content of each node calculated by the statistical module 72, a preset number of contents with the highest repetition degree are put into the general configuration template, and for the contents of the node put into the general configuration template, if the contents of the node of a user are the same as the contents of the node put into the general configuration template, the memory block where the contents of the node of the user are located is released, and meanwhile, a pointer corresponding to the identifier of the node in the context data list of the user maintained by the context write-in module 71 is replaced by the pointer of the contents of the node in the general configuration template.
The statistics module 72 is further configured to count the number of calls of each context data node in real time, and, when each statistical period is over, to calculate the call rate of each context data node;
and the compression module 73 putting the preset number of contents with the highest repetition degree into the universal configuration template specifically includes: calculating the weighted sum of the call rate and the repetition degree of each context data node according to the call rate and the repetition degree of each context data node calculated by the statistics module 72 and the weight values distributed to the call rate and the repetition degree in advance; after the weighted sums of all the context data nodes are obtained through calculation, sorting according to the weighted sum from high to low, and selecting a first preset number of nodes which are sorted in the front; and for each selected node, selecting a second preset number of contents with the highest repetition degree of the node, and putting the selected second preset number of contents into the universal configuration template.
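The node-ranking step above reduces to a weighted sum and a sort. A minimal sketch, with hypothetical names and example weights (the patent does not prescribe particular weight values):

```python
def rank_nodes(stats, w_call, w_rep, top_n):
    """stats: node_id -> (call_rate, repetition_degree).
    Returns the top_n node ids by weighted sum, highest first."""
    score = {n: w_call * c + w_rep * r for n, (c, r) in stats.items()}
    return sorted(score, key=score.get, reverse=True)[:top_n]


# Illustrative per-node statistics for one counting period.
stats = {
    "drb_cfg":  (0.9, 0.8),
    "srb_cfg":  (0.2, 0.9),
    "meas_cfg": (0.5, 0.1),
}
top = rank_nodes(stats, w_call=0.6, w_rep=0.4, top_n=2)
# Scores: drb_cfg 0.86, srb_cfg 0.48, meas_cfg 0.34
```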
The compression module 73 is further configured to, after selecting the second preset number of contents with the highest repetition degree of the node and before placing the selected contents into the general configuration template: for each content in the selected second preset number of contents, if the repetition degree of the content is greater than a preset threshold value, determine to place the content into the universal configuration template, and otherwise determine not to place the content into the universal configuration template.
The context write module 71 applies for a memory block according to the predefined length of the node as follows: applying for a DDR memory block to a barrel DDR memory pool according to the length of the node which is defined in advance, wherein the barrel DDR memory pool is initialized in advance according to the length of each node in a user context data structure which is defined in advance and the number of the planned users of the LTE base station, and each node of each user at least corresponds to one DDR memory block with the length not less than the length of the node in the barrel DDR memory pool;
and, the generic configuration template is located in the M2 cache; the compression module 73 puts the preset number of contents with the highest repetition degree into the universal configuration template, which includes: for each content in the preset number of contents with the highest repetition degree, applying for a cache block from the M2 cache according to the length of the node where the content is located, and placing the content into the applied cache block.
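The barrel-type (bucketed) memory pool can be sketched as one free list per node length, pre-populated from the predefined node lengths and the planned user count. This is an illustrative model only: the class name `BucketPool` is hypothetical, and `bytearray` buffers stand in for DDR memory blocks.

```python
class BucketPool:
    """Bucketed pool: one free list per distinct node length, initialized
    so that every node of every planned user has at least one block."""

    def __init__(self, node_lengths, planned_users):
        self.node_lengths = node_lengths  # node_id -> predefined length
        self.free = {ln: [bytearray(ln) for _ in range(planned_users)]
                     for ln in set(node_lengths.values())}

    def alloc(self, node_id):
        # Apply for a block from the bucket matching the node's length.
        return self.free[self.node_lengths[node_id]].pop()

    def release(self, node_id, block):
        # Return the block to its bucket for reuse.
        self.free[self.node_lengths[node_id]].append(block)


pool = BucketPool({"srb_cfg": 16, "drb_cfg": 64}, planned_users=2)
blk = pool.alloc("drb_cfg")
```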
The context write module 71 is further configured to, when the content of any context data node of any user changes, if it is found that the pointer corresponding to the identifier of the node in the context data list of the user points to the general configuration template, reapply for a memory block for the node, write the new content into the applied memory block, and replace the pointer corresponding to the identifier of the node in the context data list of the user with the pointer of the applied memory block.
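This change-handling rule is a copy-on-write: a shared template entry is never modified in place. A minimal sketch under the same dict-based model as above, with the hypothetical helper name `update_node`:

```python
def update_node(ctx, template, node_id, new_content):
    """If the node currently points into the shared template, re-apply for a
    private block before writing, so the template copy stays untouched."""
    if ctx.get(node_id) is template.get(node_id):
        ctx[node_id] = bytearray(new_content)  # new private block
    else:
        ctx[node_id][:] = new_content          # overwrite existing private block


template = {"srb_cfg": bytearray(b"\x01")}
ctx = {"srb_cfg": template["srb_cfg"]}  # node currently shares the template copy
update_node(ctx, template, "srb_cfg", b"\x02")
```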
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A method for compressing user context data, the method comprising:
a Long Term Evolution (LTE) base station finds that any context data node of any LTE user generates content, applies for a memory block according to the length of the node defined in advance, writes the content into the memory block, and records the corresponding relation between the identifier of the node and the pointer of the memory block in a context data list of the user maintained by the LTE base station;
when the LTE base station is idle, the LTE base station polls each context data node of each user and counts the repetition times of each content of each context data node;
when each counting period is finished, according to the counted repetition times of each content of each context data node, calculating the repetition degree of each content of each node, putting a preset number of contents with the highest repetition degree into a universal configuration template, and for the node contents put into the universal configuration template, if the node contents of a user are the same as the node contents put into the universal configuration template, releasing a memory block where the node contents of the user are located, and simultaneously replacing a pointer corresponding to the identifier of the node in a context data list of the user with the pointer of the node contents in the universal configuration template.
2. The method of claim 1, further comprising: the LTE base station counts the calling times of each context data node in real time;
and, at the end of each statistical period, calculating the call rate of each context data node,
and the step of placing the preset number of contents with the highest repetition degree into the universal configuration template comprises the following steps:
calculating the weighted sum of the call rate and the repeatability of each context data node according to the weights distributed for the call rate and the repeatability in advance;
after the weighted sum of the call rates and the repeatability of all the context data nodes is obtained through calculation, sorting is carried out according to the weighted sum from high to low, and a first preset number of nodes which are sorted in the front are selected;
and for each selected node, selecting a second preset number of contents with the highest repetition degree of the node, and putting the selected second preset number of contents into the universal configuration template.
3. The method of claim 2, wherein after selecting the second predetermined number of contents with the highest degree of repetition of the node, before placing the selected second predetermined number of contents into the common configuration template, further comprises:
and for each content in the selected second preset number of contents, if the repetition degree of the content is greater than a preset threshold value, determining to place the content into the universal configuration template, otherwise, determining not to place the content into the universal configuration template.
4. The method according to claim 1, wherein a barrel-type DDR memory pool is initialized according to the length of each node in a predefined user context data structure and the number of planned users of the LTE base station, wherein each node of each user corresponds to at least one DDR memory block with the length not less than the length of the node in the barrel-type DDR memory pool;
applying for a memory block according to the predefined length of the node as:
applying for a DDR memory block to a barrel type DDR memory pool according to the length of the node which is defined in advance;
the generic configuration template is located in the M2 cache;
the step of putting the preset number of contents with the highest repetition degree into the universal configuration template comprises the following steps:
for each content in the preset number of contents with the highest repetition degree, applying for a cache block from the M2 cache according to the length of the node where the content is located, and placing the content into the applied cache block.
5. The method of claim 1, further comprising:
when the content of any context data node of any user changes, if the pointer corresponding to the identifier of the node in the context data list of the user points to the general configuration template, a memory block is applied for the node again, new content is written into the applied memory block, and meanwhile, the pointer corresponding to the identifier of the node in the context data list of the user is replaced by the pointer of the applied memory block.
6. An apparatus for compressing user context data, the apparatus comprising:
a context write module: when finding that any context data node of any LTE user generates content, applying for a memory block according to the length of the node defined in advance, writing the content into the memory block, and recording the corresponding relation between the identifier of the node and the pointer of the memory block in a context data list of the user maintained by the node;
a statistic module: when the LTE base station is idle, polling each context data node of each user, counting the repetition times of each content of each context data node, and, when each counting period is finished, calculating the repetition degree of each content of each node according to the counted repetition times of each content of each context data node;
a compression module: putting a preset number of contents with the highest repetition degree into a universal configuration template according to the repetition degree of each content of each node calculated by the statistic module; and, for the node contents placed into the universal configuration template, if the node content of a user is the same as the node content placed into the universal configuration template, releasing the memory block in which the node content of the user is located, and replacing a pointer corresponding to the identifier of the node in the context data list of the user with the pointer of the node content in the universal configuration template.
7. The apparatus of claim 6, wherein the statistics module is further configured to count the number of calls of each context data node in real time; and, when each statistical period is over, calculating the call rate of each context data node;
and the compression module puts the preset number of contents with the highest repetition degree into the universal configuration template specifically comprises:
calculating the weighted sum of the call rate and the repeatability of each context data node according to the weights distributed for the call rate and the repeatability in advance; after the weighted sum of the call rates and the repeatability of all the context data nodes is obtained through calculation, sorting is carried out according to the weighted sum from high to low, and a first preset number of nodes which are sorted in the front are selected; and for each selected node, selecting a second preset number of contents with the highest repetition degree of the node, and putting the selected second preset number of contents into the universal configuration template.
8. The apparatus of claim 7, wherein the compressing module, after selecting a second predetermined number of contents with the highest repetition degree of the node, before placing the selected second predetermined number of contents into the common configuration template further comprises:
and for each content in the selected second preset number of contents, if the repetition degree of the content is greater than a preset threshold value, determining to place the content into the universal configuration template, otherwise, determining not to place the content into the universal configuration template.
9. The apparatus of claim 6, wherein the context write module applies for a memory block according to a predefined length of the node as:
applying for a DDR memory block to a barrel DDR memory pool according to the length of the node which is defined in advance, wherein the barrel DDR memory pool is initialized in advance according to the length of each node in a user context data structure which is defined in advance and the number of the planned users of the LTE base station, and each node of each user at least corresponds to one DDR memory block with the length not less than the length of the node in the barrel DDR memory pool;
and, the generic configuration template is located in the M2 cache; the compression module puts the preset number of contents with the highest repetition degree into the universal configuration template, and comprises the following steps:
for each content in the preset number of contents with the highest repetition degree, applying for a cache block from the M2 cache according to the length of the node where the content is located, and placing the content into the applied cache block.
10. The apparatus of claim 6, wherein the context write module is further configured to,
when the content of any context data node of any user changes, if the pointer corresponding to the identifier of the node in the context data list of the user points to the general configuration template, a memory block is applied for the node again, new content is written into the applied memory block, and meanwhile, the pointer corresponding to the identifier of the node in the context data list of the user is replaced by the pointer of the applied memory block.
CN201711030139.0A 2017-10-30 2017-10-30 User context compression method and device Active CN109725836B (en)
