CN113194430B - Switch cabinet sensor network data compression method based on periodic transmission model - Google Patents

Switch cabinet sensor network data compression method based on periodic transmission model

Info

Publication number
CN113194430B
CN113194430B
Authority
CN
China
Prior art keywords
vector
reading
elements
vectors
sub
Prior art date
Legal status: Active
Application number
CN202110469096.6A
Other languages
Chinese (zh)
Other versions
CN113194430A
Inventor
任新卓
王丽群
潘黄萍
钟恒强
许琴
诸葛嘉锵
黄娜
Current Assignee
Hangzhou Dianzi University
Hangzhou Power Equipment Manufacturing Co Ltd
Original Assignee
Hangzhou Dianzi University
Hangzhou Power Equipment Manufacturing Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University and Hangzhou Power Equipment Manufacturing Co Ltd
Priority to CN202110469096.6A
Publication of CN113194430A
Application granted
Publication of CN113194430B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/30 - Services specially adapted for particular environments, situations or purposes
    • H04W4/38 - Services specially adapted for particular environments, situations or purposes for collecting sensor information
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/04 - Protocols for data compression, e.g. ROHC
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W28/00 - Network traffic management; Network resource management
    • H04W28/02 - Traffic management, e.g. flow control or congestion control
    • H04W28/06 - Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks

Abstract

The invention discloses a switch cabinet sensor network data compression method based on a periodic transmission model, which comprises the following steps: S1, a sensor node collects the readings in the current period and constructs a reading vector R; S2, a vector R_i is selected from the set of reading vectors to be executed in the order in which they were added, and its elements are processed in one of two ways according to their number; S3, a reading element in the two sub-vectors obtained in the first way of step S2 is regarded as a candidate outlier and it is judged whether it is an outlier: if so, a replacement value is calculated and the reading vector is updated; otherwise, the original value is kept; S4, when the set of reading vectors to be executed is empty, the number of identical elements and the number of non-identical elements in the reading vector are counted and a dictionary is compiled; S5, the dictionary obtained in S4 and R are transmitted to the next sensor node; S6, the next period is entered and steps S1-S5 are repeated. The method can greatly compress data and save energy consumption and storage space.

Description

Switch cabinet sensor network data compression method based on periodic transmission model
Technical Field
The invention relates to the technical field of sensors, in particular to a switch cabinet sensor network data compression method based on a periodic transmission model.
Background
The switch cabinet is one of the key primary devices of a power system, and its operating state has an important influence on the reliability of the whole power system. In recent years, power system accidents caused by switch cabinet faults have occurred frequently, and collecting and monitoring switch cabinet data through a wireless sensor network is an effective way to avoid such accidents. A wireless sensor network has limited resources in terms of energy, storage space, communication bandwidth and processing speed, and how to save these limited resources is one of the popular research directions for wireless sensor networks. The energy consumed by sensor processing is far lower than that consumed by sensor communication, and the sensed information contains a large amount of redundancy, so compressing data before transmitting it is an effective way to save sensor energy; data compression also saves the sensor's storage space. Compared with traditional data compression methods, a wireless sensor network data compression algorithm must have low complexity and a small footprint in order to save storage space. In existing wireless sensor networks, most methods continuously collect, compress and transmit data, which causes a large amount of energy loss and wasted communication resources.
Disclosure of Invention
In order to overcome the above shortcomings of the prior art, the invention provides a switch cabinet sensor network data compression method based on a periodic transmission model. To handle data deviations caused by interference and similar factors, the invention substitutes outliers when processing the data, introduces the Pearson correlation coefficient so that the data keep their time order while being compressed, and finally compiles the processed data into a dictionary to reduce the number of bits and further compress the data.
The technical solution adopted by the invention to solve the above technical problem is as follows:
A switch cabinet sensor network data compression method based on a periodic transmission model comprises the following steps:
Step S1, a sensor node collects the readings in the current period, and all the collected readings construct a reading vector R in time order;
Step S2, a vector R_i is selected from the set of reading vectors to be executed in the order in which they were added, and is processed in one of two ways according to the number of its elements: in the first way, the vector R_i is divided into two sub-vectors with the same number of elements, and the Pearson correlation coefficient of the two sub-vectors and the absolute value of the difference between the element means of the two sub-vectors are calculated; in the second way, the absolute value of the difference between the two elements of the vector is calculated directly;
Step S3, a reading element in the two sub-vectors of the first way of step S2 is regarded as a candidate outlier, and it is judged whether the candidate outlier is an outlier: if so, a replacement value is calculated and the reading vector is updated; otherwise, the original value is kept;
Step S4, when the set of reading vectors to be executed is empty, the number of identical elements and the number of non-identical elements in the reading vector are counted and a dictionary is compiled;
Step S5, the dictionary obtained in step S4 and the reading vector R are transmitted to the next sensor node;
Step S6, the next period is entered and the cycle continues according to steps S1 to S5.
Further, step S1 specifically includes the following:
The sensor node collects the readings in the current period; each time a reading is collected, the reading count is increased by 1, and when τ readings have been obtained a reading vector is constructed in time order:
R = [r_1, r_2, ..., r_τ]
In the above formula, τ represents the total number of readings in the current period, τ = 2^N, N ∈ Z, and Z is the set of integers. R is then added to the set of reading vectors to be executed.
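For concreteness, the following is a minimal sketch of step S1 in Python (illustrative only, not the patented implementation; read_sensor is a hypothetical acquisition callback and N is the exponent that fixes the period length):

```python
# Minimal sketch of step S1 under the assumptions stated above.

def collect_reading_vector(read_sensor, N):
    """Collect tau = 2**N readings in time order and return the reading vector R."""
    tau = 2 ** N
    R = [read_sensor() for _ in range(tau)]   # readings r_1 ... r_tau in time order
    return R

# The vector is then queued for processing, e.g.:
# pending = [collect_reading_vector(read_sensor, N)]   # set of reading vectors to be executed
```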
Further, step S2 specifically includes the following steps:
Step S2.1, a vector R_i is selected from the set of reading vectors to be executed in the order in which they were added, and it is judged whether the number of elements 2n of the vector R_i equals 2: if 2n ≠ 2, step S2.2 is executed; if 2n = 2, step S2.3 is executed.
Step S2.2, the vector R_i is divided into two sub-vectors R_i^1 and R_i^2 with the same number of elements n, and the Pearson correlation coefficient ρ(R_i^1, R_i^2) of the two sub-vectors and the absolute value of the difference between the element means of the two sub-vectors, Δm_i = |mean(R_i^1) - mean(R_i^2)|, are calculated:
ρ(R_i^1, R_i^2) = Σ_{k=1..n} (r_k^1 - mean(R_i^1))(r_k^2 - mean(R_i^2)) / [ sqrt(Σ_{k=1..n} (r_k^1 - mean(R_i^1))^2) · sqrt(Σ_{k=1..n} (r_k^2 - mean(R_i^2))^2) ]
In the above formula, r_k^1 and r_k^2 denote the k-th elements of R_i^1 and R_i^2, and ρ(R_i^1, R_i^2) ∈ [-1, 1] represents the correlation of the data.
Then the magnitude relation between ρ(R_i^1, R_i^2) and t_p and between Δm_i and t_m is judged:
(1) If ρ(R_i^1, R_i^2) ≥ t_p and Δm_i ≤ t_m, R_i^1 and R_i^2 are considered highly positively correlated and their element means similar; here mean(R_i^1) and mean(R_i^2) denote the element means of the two sub-vectors, t_p is the high-correlation threshold, t_p ∈ [-1, 1], and t_m is the mean-closeness threshold, t_m ≥ 0; the higher t_p, the more accurate the data and the lower the compression ratio; the lower t_m, the more accurate the data and the higher the compression ratio.
A vector R_i^new whose element values are the averages of the corresponding element values of the two sub-vectors is then formed:
R_i^new = (R_i^1 + R_i^2) / 2
The values at the corresponding positions of R_i in the reading vector R are updated accordingly; the vector R_i is removed from the set of reading vectors to be executed and R_i^new is added to the set to be executed.
(2) If ρ(R_i^1, R_i^2) < t_p or Δm_i > t_m, R_i^1 and R_i^2 are considered not highly positively correlated or their element means dissimilar.
When the number of elements of R_i^1 and R_i^2 is n = 2 and the absolute value of the difference between the two elements is greater than t_m, step S3 is executed; otherwise R_i^1 and R_i^2 are added, in order, to the set of reading vectors to be executed.
Step S2.3, the absolute value of the difference between the two elements of the vector, d_i = |r_1 - r_2|, is calculated, and the relation between d_i and t_m is judged:
(1) If d_i ≤ t_m, the two element values of R_i are considered close, so that r_1 = r_2 = (r_1 + r_2)/2, and the values at the corresponding positions of R_i in the reading vector R are then updated;
(2) If d_i > t_m, the original values are kept.
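To make step S2.2 concrete, the sketch below (a plain-Python reading of the description, assuming readings are floats; not the patented implementation) computes the Pearson correlation coefficient and the mean difference of the two halves and merges them into the element-wise average when both thresholds are met:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 1.0   # constant halves treated as fully correlated

def split_and_test(Ri, t_p, t_m):
    """Step S2.2: halve R_i, test correlation and mean closeness, merge if both pass."""
    n = len(Ri) // 2
    R1, R2 = Ri[:n], Ri[n:]
    rho = pearson(R1, R2)
    mean_diff = abs(sum(R1) / n - sum(R2) / n)
    if rho >= t_p and mean_diff <= t_m:
        merged = [(a + b) / 2 for a, b in zip(R1, R2)]     # R_i^new: element-wise average
        return merged, None                                # merged vector goes back into the pending set
    return None, (R1, R2)                                  # halves are handled separately (S2/S3)
```

Treating two constant halves as fully correlated is a convention of this sketch; the patent text does not specify the degenerate case.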
Further, step S3 specifically includes the following steps:
Step S3.1, a reading element r_i in the sub-vectors R_i^1 and R_i^2 is regarded as a candidate outlier and a substitute value is set for it.
Step S3.2, the Pearson correlation coefficients ρ_i^1 and ρ_i^2 of R_i^1 and R_i^2 with the vector R_i' are calculated respectively, in the same way as in step S2.2.
Here R_i' is obtained as follows: the remainder of dividing the position in the reading vector R of the first element of R_i by 8 is taken; if the remainder is 1, the 4 elements after the last element of R_i are taken to form the vector R_i'; if the remainder is 5, the 4 elements before the first element of R_i are taken to form the vector R_i' (j = 4, 5; r_l ∈ R_i').
Step S3.3, the magnitude relations of ρ_i^1 and ρ_i^2 with t_p, and of the absolute values of the differences between the element means of the two corresponding sub-vector pairs with t_m, are judged respectively:
(1) If ρ_i^1 ≥ t_p and the corresponding mean difference is not greater than t_m, R_i^1 and R_i' are considered highly correlated with close element means, and the reading element r_i is an outlier; if ρ_i^2 ≥ t_p and the corresponding mean difference is not greater than t_m, R_i^2 and R_i' are considered highly correlated with close element means, and the reading element r_i is an outlier.
Further, if the calculated values for both R_i^1 and R_i^2 satisfy the above conditions, the sub-vector that satisfies them better is taken: the corresponding replacement value is calculated, the sub-vector is updated, and the values at the corresponding positions of R_i and R_i' in the reading vector R are updated.
(2) If at the same time ρ_i^1 < t_p or the corresponding mean difference is greater than t_m, and ρ_i^2 < t_p or the corresponding mean difference is greater than t_m, the reading element r_i is considered not to be an outlier and the original value is kept.
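The outlier test of step S3 can be pictured with the sketch below. It is a rough reading under stated assumptions, because the exact substitution formula is an image in the original: the candidate element is provisionally replaced by the mean of the other elements of its 4-element block, the patched block is compared with the neighbouring window R_i', and the candidate is declared an outlier only if the patched block then matches the local trend. pearson() is the helper from the previous sketch.

```python
def neighbour_window(R, start):
    """Return the 4-element window R_i' next to the 4-element block whose first element
    sits at 1-based position `start` in R; the remainder of start / 8 picks the side."""
    if start % 8 == 1:
        return R[start + 3:start + 7]       # the 4 elements following the block
    if start % 8 == 5:
        return R[start - 5:start - 1]       # the 4 elements preceding the block
    return None

def outlier_check(block, cand_idx, Ri_prime, t_p, t_m):
    """`block` is the 4-element R_i, `cand_idx` the position of the candidate outlier."""
    others = [v for k, v in enumerate(block) if k != cand_idx]
    patched = list(block)
    patched[cand_idx] = sum(others) / len(others)           # assumed substitute: mean of the rest
    rho = pearson(patched, Ri_prime)
    mean_diff = abs(sum(patched) / 4 - sum(Ri_prime) / 4)
    if rho >= t_p and mean_diff <= t_m:
        return patched                                      # candidate judged an outlier; corrected block
    return list(block)                                      # not an outlier: keep the original values
```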
Further, step S4 specifically includes the following:
When the set of reading vectors to be executed is empty, all data processing is finished. The number of identical elements and the number of non-identical elements in the reading vector are counted, the identical elements are arranged from the largest number of occurrences to the smallest, binary indexes are assigned according to the number of non-identical elements and compiled into the dictionary below, and finally the element readings in the reading vector R are replaced by their binary indexes:
Number of non-identical elements n_i | Binary index s_i | Corresponding element readings r_i
1, 2 | 0, 1 | r_1, r_2
3, 4 | 00, 01, 10, 11 | r_1, r_2, r_3, r_4
5, 6, 7, 8 | 000, 001, ..., 111 | r_1, r_2, ..., r_8
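A small sketch of the step-S4 dictionary follows (illustrative only; the exact on-air bit packing is not specified in the text above). Distinct readings are ordered by frequency and each gets a fixed-width binary index, matching the table: 1 or 2 distinct values need 1 bit, 3 or 4 need 2 bits, 5 to 8 need 3 bits.

```python
from collections import Counter

def compile_dictionary(R):
    """Build the binary-index dictionary for the processed reading vector R and encode R with it."""
    counts = Counter(R)                                     # occurrences of each distinct reading
    ordered = [v for v, _ in counts.most_common()]          # most frequent readings first
    width = max(1, (len(ordered) - 1).bit_length())         # index width in bits
    dictionary = {v: format(k, f'0{width}b') for k, v in enumerate(ordered)}
    encoded = [dictionary[v] for v in R]                    # element readings replaced by binary indexes
    return dictionary, encoded

# Example: R = [20.1, 20.1, 20.1, 25.0] -> dictionary {20.1: '0', 25.0: '1'}, encoded ['0', '0', '0', '1']
```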
The invention has the beneficial effects that:
the invention provides a novel data compression method for a switch cabinet sensor network based on a periodic transmission model, introduces the concept of Pearson correlation coefficient, replaces outliers generated by factors such as interference, and finally is compiled into a dictionary, so that data are compressed more greatly, and the energy consumption and the storage space of a wireless sensor network are saved under the condition of keeping a data time sequence.
Drawings
Fig. 1 is a flowchart of a data compression method for a switch cabinet sensor network based on a periodic transmission model according to an embodiment of the present invention.
Detailed Description
In order to facilitate a better understanding of the invention for those skilled in the art, the invention will be described in further detail with reference to the accompanying drawings and specific examples, which are given by way of illustration only and do not limit the scope of the invention.
The method for compressing data of the switch cabinet sensor network based on the periodic transmission model disclosed by this embodiment, as shown in fig. 1, includes the following steps:
Step S1, the sensor node collects the readings in the current period, and all the collected readings construct a reading vector R in time order.
Specifically, each time a reading is collected the reading count is increased by 1, and when τ readings have been obtained the reading vector is constructed in time order:
R = [r_1, r_2, ..., r_τ]
In the above formula, τ represents the total number of readings in the current period, τ = 2^N, N ∈ Z, and Z is the set of integers.
R is then added to the set of reading vectors to be executed.
Step S2, a vector R_i is selected from the set of reading vectors to be executed in the order in which they were added, and is processed in one of two ways according to the number of its elements: in the first way, the vector R_i is divided into two sub-vectors with the same number of elements, and the Pearson correlation coefficient of the two sub-vectors and the absolute value of the difference between the element means of the two sub-vectors are calculated; in the second way, the absolute value of the difference between the two elements of the vector is calculated directly.
Specifically, step S2 includes the following steps:
Step S2.1, a vector R_i is selected from the set of reading vectors to be executed in the order in which they were added, and it is judged whether the number of elements 2n of the vector R_i equals 2: if 2n ≠ 2, step S2.2 is executed; if 2n = 2, step S2.3 is executed.
Step S2.2, the vector R_i is divided into two sub-vectors R_i^1 and R_i^2 with the same number of elements n, and the Pearson correlation coefficient ρ(R_i^1, R_i^2) of the two sub-vectors and the absolute value of the difference between the element means of the two sub-vectors, Δm_i = |mean(R_i^1) - mean(R_i^2)|, are calculated. The Pearson correlation coefficient is calculated as follows:
ρ(R_i^1, R_i^2) = Σ_{k=1..n} (r_k^1 - mean(R_i^1))(r_k^2 - mean(R_i^2)) / [ sqrt(Σ_{k=1..n} (r_k^1 - mean(R_i^1))^2) · sqrt(Σ_{k=1..n} (r_k^2 - mean(R_i^2))^2) ]
In the above formula, r_k^1 and r_k^2 denote the k-th elements of R_i^1 and R_i^2, and ρ(R_i^1, R_i^2) ∈ [-1, 1] represents the correlation of the data.
Then the magnitude relation between ρ(R_i^1, R_i^2) and t_p and between Δm_i and t_m is judged:
(1) If ρ(R_i^1, R_i^2) ≥ t_p and Δm_i ≤ t_m, R_i^1 and R_i^2 are considered highly positively correlated and their element means similar; here mean(R_i^1) and mean(R_i^2) denote the element means of the two sub-vectors, t_p is the high-correlation threshold, t_p ∈ [-1, 1], and t_m is the mean-closeness threshold, t_m ≥ 0. The higher t_p, the more accurate the data and the lower the compression ratio; conversely, the lower t_m, the more accurate the data and the higher the compression ratio. The values of t_p and t_m are adjusted reasonably according to the specific situation.
A vector R_i^new whose element values are the averages of the corresponding element values of the two sub-vectors is then formed:
R_i^new = (R_i^1 + R_i^2) / 2
The values at the corresponding positions of R_i in the reading vector R are updated accordingly; the vector R_i is removed from the set of reading vectors to be executed and R_i^new is added to the set to be executed.
(2) If ρ(R_i^1, R_i^2) < t_p or Δm_i > t_m, R_i^1 and R_i^2 are considered not highly positively correlated or their element means dissimilar.
When the number of elements of R_i^1 and R_i^2 is n = 2 and the absolute value of the difference between the two elements is greater than t_m, step S3 is executed; otherwise R_i^1 and R_i^2 are added, in order, to the set of reading vectors to be executed.
Step S2.3, the absolute value of the difference between the two elements of the vector, d_i = |r_1 - r_2|, is calculated, and the relation between d_i and t_m is judged:
(1) If d_i ≤ t_m, the two element values of R_i are considered close, so that r_1 = r_2 = (r_1 + r_2)/2, and the values at the corresponding positions of R_i in the reading vector R are then updated;
(2) If d_i > t_m, the original values are kept.
Step S3, a reading element in the two sub-vectors of the first way of step S2 is regarded as a candidate outlier, and it is judged whether the candidate outlier is an outlier: if so, a replacement value is calculated and the reading vector is updated; otherwise, the original value is kept.
In this embodiment, step S3 specifically includes the following steps:
Step S3.1, a reading element r_i in the sub-vectors R_i^1 and R_i^2 is regarded as a candidate outlier and a substitute value is set for it.
Step S3.2, the Pearson correlation coefficients ρ_i^1 and ρ_i^2 of R_i^1 and R_i^2 with the vector R_i' are calculated respectively, in the same way as in step S2.2.
Here R_i' is obtained as follows: the remainder of dividing the position in the reading vector R of the first element of R_i by 8 is taken; if the remainder is 1, the 4 elements after the last element of R_i are taken to form the vector R_i'; if the remainder is 5, the 4 elements before the first element of R_i are taken to form the vector R_i' (j = 4, 5; r_l ∈ R_i').
Step S3.3, the magnitude relations of ρ_i^1 and ρ_i^2 with t_p, and of the absolute values of the differences between the element means of the two corresponding sub-vector pairs with t_m, are judged respectively:
(1) If ρ_i^1 ≥ t_p and the corresponding mean difference is not greater than t_m, R_i^1 and R_i' are considered highly correlated with close element means, and the reading element r_i is an outlier; if ρ_i^2 ≥ t_p and the corresponding mean difference is not greater than t_m, R_i^2 and R_i' are considered highly correlated with close element means, and the reading element r_i is an outlier.
Further, if the calculated values for both R_i^1 and R_i^2 satisfy the above conditions, that is, ρ_i^1 ≥ t_p with the corresponding mean difference not greater than t_m and ρ_i^2 ≥ t_p with the corresponding mean difference not greater than t_m, the sub-vector that satisfies the conditions better is taken: the corresponding replacement value is calculated, the sub-vector is updated, and the values at the corresponding positions of R_i and R_i' in the reading vector R are updated.
(2) If at the same time ρ_i^1 < t_p or the corresponding mean difference is greater than t_m, and ρ_i^2 < t_p or the corresponding mean difference is greater than t_m, the reading element r_i is considered not to be an outlier and the original value is kept.
Step S4, when the set of reading vectors to be executed is empty, the number of identical elements and the number of non-identical elements in the reading vector are counted and a dictionary is compiled to reduce the number of bits of the data and further compress it.
In this embodiment, step S4 specifically includes the following:
When the set of reading vectors to be executed is empty, all data processing is finished. The number of identical elements and the number of non-identical elements in the reading vector are counted, the identical elements are arranged from the largest number of occurrences to the smallest, binary indexes are assigned according to the number of non-identical elements and compiled into the dictionary of Table 1, and finally the element readings in the reading vector R are replaced by their binary indexes.
TABLE 1 Compiled dictionary
Number of non-identical elements n_i | Binary index s_i | Corresponding element readings r_i
1, 2 | 0, 1 | r_1, r_2
3, 4 | 00, 01, 10, 11 | r_1, r_2, r_3, r_4
5, 6, 7, 8 | 000, 001, ..., 111 | r_1, r_2, ..., r_8
Step S5, the dictionary obtained in step S4 and the reading vector R are transmitted to the next sensor node.
Step S6, the next period is entered and the cycle continues according to steps S1 to S5.
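Tying the pieces together, the following sketch runs one period (steps S1 to S5) as a FIFO queue over index ranges of R; it reuses split_and_test() and compile_dictionary() from the earlier sketches, transmit() stands in for the unspecified radio send, and the step-S3 outlier handling is only marked, not implemented:

```python
def compress_one_period(R, t_p, t_m, transmit):
    """One transmission period: process the reading vector R, then send dictionary + indexes."""
    pending = [(0, len(R))]                        # set of reading vectors to be executed, as (start, end)
    while pending:                                 # loop until the set is empty (precondition of step S4)
        s, e = pending.pop(0)
        block = R[s:e]
        if e - s == 2:                             # step S2.3: compare the two elements directly
            if abs(block[0] - block[1]) <= t_m:
                R[s] = R[e - 1] = (block[0] + block[1]) / 2
            continue
        merged, _ = split_and_test(block, t_p, t_m)        # step S2.2
        if merged is not None:
            R[s:e] = merged + merged               # simplification: both halves take the averaged values
            pending.append((s, s + len(merged)))   # the averaged half is processed further
        else:
            mid = s + (e - s) // 2
            pending.extend([(s, mid), (mid, e)])   # halves queued in order (step S3 would hook in here)
    dictionary, encoded = compile_dictionary(R)    # step S4: dictionary and index stream
    transmit(dictionary, encoded)                  # step S5: send to the next sensor node
```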
When processing periodic switch cabinet sensing data containing disturbances, this data compression method achieves a good effect: it can compress the switch cabinet sensor network data to 10%-30% of the original size while keeping the data distortion rate within 0.5%-5%, and the larger the total number of readings in a single reading period, the higher the compression rate. Because the method preserves the time order, the trend of the switch cabinet sensor data over time is also preserved to a certain extent. Meanwhile, by reasonably adjusting the two thresholds t_p and t_m according to the specific situation, the compression method can achieve different effects and thus has a certain flexibility.
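As an illustrative calculation with assumed numbers (not taken from the patent): if a period holds τ = 32 sixteen-bit readings (512 bits) and the processing above leaves only 4 distinct values, each reading can be sent as a 2-bit index, i.e. 64 bits plus a 4-entry dictionary of 64 bits, about 25% of the original size, which sits inside the 10%-30% range quoted above.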
The foregoing merely illustrates the principles and preferred embodiments of the invention and many variations and modifications may be made by those skilled in the art in light of the foregoing description, which are within the scope of the invention.

Claims (4)

1. A switch cabinet sensor network data compression method based on a periodic transmission model, characterized by comprising the following steps:
step S1, a sensor node collects the readings in the current period, and all the collected readings construct a reading vector R in time order;
step S2, a vector R_i is selected from the set of reading vectors to be executed in the order in which they were added, and is processed in one of two ways according to the number of its elements: in the first way, the vector R_i is divided into two sub-vectors with the same number of elements, and the Pearson correlation coefficient of the two sub-vectors and the absolute value of the difference between the element means of the two sub-vectors are calculated; in the second way, the absolute value of the difference between the two elements of the vector is calculated directly;
step S2 specifically comprises the following steps:
step S2.1, a vector R_i is selected from the set of reading vectors to be executed in the order in which they were added, and it is judged whether the number of elements 2n of the vector R_i equals 2: if 2n ≠ 2, step S2.2 is executed; if 2n = 2, step S2.3 is executed;
step S2.2, the vector R_i is divided into two sub-vectors R_i^1 and R_i^2 with the same number of elements n, and the Pearson correlation coefficient ρ(R_i^1, R_i^2) of the two sub-vectors and the absolute value of the difference between the element means of the two sub-vectors, Δm_i = |mean(R_i^1) - mean(R_i^2)|, are calculated, where ρ(R_i^1, R_i^2) ∈ [-1, 1] represents the correlation of the data;
then the magnitude relation between ρ(R_i^1, R_i^2) and t_p and between Δm_i and t_m is judged:
(1) if ρ(R_i^1, R_i^2) ≥ t_p and Δm_i ≤ t_m, R_i^1 and R_i^2 are considered highly positively correlated and their element means similar; mean(R_i^1) and mean(R_i^2) denote the element means of the two sub-vectors, t_p is the high-correlation threshold, t_p ∈ [-1, 1], t_m is the mean-closeness threshold, t_m ≥ 0, the higher t_p the more accurate the data and the lower the compression ratio, and the lower t_m the more accurate the data and the higher the compression ratio;
a vector R_i^new whose element values are the averages of the corresponding element values of the two sub-vectors is formed, R_i^new = (R_i^1 + R_i^2)/2; the values at the corresponding positions of R_i in the reading vector R are updated; the vector R_i is removed from the set of reading vectors to be executed and R_i^new is added to the set of reading vectors to be executed;
(2) if ρ(R_i^1, R_i^2) < t_p or Δm_i > t_m, R_i^1 and R_i^2 are considered not highly positively correlated or their element means dissimilar;
when the number of elements of R_i^1 and R_i^2 is n = 2 and the absolute value of the difference between the two elements is greater than t_m, step S3 is executed; otherwise R_i^1 and R_i^2 are added, in order, to the set of reading vectors to be executed;
step S2.3, the absolute value of the difference between the two elements of the vector, d_i = |r_1 - r_2|, is calculated, and the relation between d_i and t_m is judged:
(1) if d_i ≤ t_m, the two element values of R_i are considered close, so that r_1 = r_2 = (r_1 + r_2)/2, and the values at the corresponding positions of R_i in the reading vector R are then updated;
(2) if d_i > t_m, the original values are kept;
step S3, a reading element in the two sub-vectors of the first way of step S2 is regarded as a candidate outlier, and it is judged whether the candidate outlier is an outlier: if so, a replacement value is calculated and the reading vector is updated; otherwise, the original value is kept;
step S4, when the set of reading vectors to be executed is empty, the number of identical elements and the number of non-identical elements in the reading vector are counted and a dictionary is compiled;
step S5, the dictionary obtained in step S4 and the reading vector R are transmitted to the next sensor node;
and step S6, the next period is entered and the cycle continues according to steps S1 to S5.
2. The method according to claim 1, wherein step S1 specifically comprises the following:
the sensor node collects the readings in the current period, the reading count is increased by 1 each time a reading is collected, and when τ readings have been obtained a reading vector is constructed in time order:
R = [r_1, r_2, ..., r_τ]
in the above formula, τ represents the total number of readings in the current period, τ = 2^N, N ∈ Z, and Z is the set of integers;
and R is added to the set of reading vectors to be executed.
3. The method according to claim 2, wherein step S3 specifically comprises the following:
step S3.1, a reading element r_i in the sub-vectors R_i^1 and R_i^2 is regarded as a candidate outlier and a substitute value is set for it;
step S3.2, the Pearson correlation coefficients ρ_i^1 and ρ_i^2 of R_i^1 and R_i^2 with the vector R_i' are calculated respectively;
wherein R_i' is obtained as follows: the remainder of dividing the position in the reading vector R of the first element of R_i by 8 is taken; if the remainder is 1, the 4 elements after the last element of R_i are taken to form the vector R_i'; if the remainder is 5, the 4 elements before the first element of R_i are taken to form the vector R_i' (j = 4, 5; r_l ∈ R_i');
step S3.3, the magnitude relations of ρ_i^1 and ρ_i^2 with t_p, and of the absolute values of the differences between the element means of the two corresponding sub-vector pairs with t_m, are judged respectively:
(1) if ρ_i^1 ≥ t_p and the corresponding mean difference is not greater than t_m, R_i^1 and R_i' are considered highly correlated with close element means, and the reading element r_i is an outlier; if ρ_i^2 ≥ t_p and the corresponding mean difference is not greater than t_m, R_i^2 and R_i' are considered highly correlated with close element means, and the reading element r_i is an outlier;
further, if the calculated values for both R_i^1 and R_i^2 satisfy the above conditions, the sub-vector that satisfies them better is taken: the corresponding replacement value is calculated, the sub-vector is updated, and the values at the corresponding positions of R_i and R_i' in the reading vector R are updated;
(2) if at the same time ρ_i^1 < t_p or the corresponding mean difference is greater than t_m, and ρ_i^2 < t_p or the corresponding mean difference is greater than t_m, the reading element r_i is considered not to be an outlier and the original value is kept.
4. The method according to claim 3, wherein step S4 specifically comprises the following:
when the set of reading vectors to be executed is empty, all data processing is finished; the number of identical elements and the number of non-identical elements in the reading vector are counted, the identical elements are arranged from the largest number of occurrences to the smallest, binary indexes are assigned according to the number of non-identical elements and compiled into the following dictionary, and finally the element readings in the reading vector R are replaced by their binary indexes:
Number of non-identical elements n_i | Binary index s_i | Corresponding element readings r_i
1, 2 | 0, 1 | r_1, r_2
3, 4 | 00, 01, 10, 11 | r_1, r_2, r_3, r_4
5, 6, 7, 8 | 000, 001, ..., 111 | r_1, r_2, ..., r_8
CN202110469096.6A 2021-04-28 2021-04-28 Switch cabinet sensor network data compression method based on periodic transmission model Active CN113194430B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110469096.6A CN113194430B (en) 2021-04-28 2021-04-28 Switch cabinet sensor network data compression method based on periodic transmission model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110469096.6A CN113194430B (en) 2021-04-28 2021-04-28 Switch cabinet sensor network data compression method based on periodic transmission model

Publications (2)

Publication Number Publication Date
CN113194430A CN113194430A (en) 2021-07-30
CN113194430B 2022-11-01

Family

ID=76980099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110469096.6A Active CN113194430B (en) 2021-04-28 2021-04-28 Switch cabinet sensor network data compression method based on periodic transmission model

Country Status (1)

Country Link
CN (1) CN113194430B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108494408A (en) * 2018-03-14 2018-09-04 电子科技大学 While-drilling density logger underground high speed real-time compression method based on Hash dictionary

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011067769A1 (en) * 2009-12-03 2011-06-09 Infogin Ltd. Shared dictionary compression over http proxy
US9864846B2 (en) * 2012-01-31 2018-01-09 Life Technologies Corporation Methods and computer program products for compression of sequencing data
NZ759804A (en) * 2017-10-16 2022-04-29 Illumina Inc Deep learning-based techniques for training deep convolutional neural networks
CN108990108B (en) * 2018-07-10 2021-07-02 西华大学 Self-adaptive real-time spectrum data compression method and system


Also Published As

Publication number Publication date
CN113194430A (en) 2021-07-30

Similar Documents

Publication Publication Date Title
CN112953550B (en) Data compression method, electronic device and storage medium
CN102437854B (en) Industrial real-time data compression method with high compression ratio
CN109597757B (en) Method for measuring similarity between software networks based on multidimensional time series entropy
CN111104241A (en) Server memory anomaly detection method, system and equipment based on self-encoder
CN111062620A (en) Intelligent analysis system and method for electric power charging fairness based on hybrid charging data
CN109656887B (en) Distributed time series mode retrieval method for mass high-speed rail shaft temperature data
CN113194430B (en) Switch cabinet sensor network data compression method based on periodic transmission model
CN106227881A (en) A kind of information processing method and server
CN113724101B (en) Table relation identification method and system, equipment and storage medium
WO2022111095A1 (en) Product recommendation method and apparatus, computer storage medium, and system
CN116992155B (en) User long tail recommendation method and system utilizing NMF with different liveness
CN117290364A (en) Intelligent market investigation data storage method
CN110851708B (en) Negative sample extraction method, device, computer equipment and storage medium
CN111190896B (en) Data processing method, device, storage medium and computer equipment
CN116933216A (en) Management system and method based on flexible load resource aggregation feature analysis
CN102650969A (en) Method and device for obtaining and updating context probability model value of bins
CN104077272A (en) Method and device for compressing dictionary
CN109918564A (en) It is a kind of towards the context autocoding recommended method being cold-started completely and system
CN114968992A (en) Data identification cleaning and compensation method and device, electronic equipment and storage medium
CN112614006A (en) Load prediction method, device, computer readable storage medium and processor
CN112862179A (en) Energy consumption behavior prediction method and device and computer equipment
CN112865898A (en) Antagonistic wireless communication channel model estimation and prediction method
WO2023226831A1 (en) Method and apparatus for determining weight of prediction block of coding unit
CN117294314B (en) Fruit and vegetable can production information data record management method
CN117316333B (en) Inverse synthesis prediction method and device based on general molecular diagram representation learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant