WO2018162046A1 - Protecting kpi during optimization of self-organizing network - Google Patents

Protecting KPI during optimization of self-organizing network

Info

Publication number
WO2018162046A1
Authority
WO
WIPO (PCT)
Prior art keywords
KPI
degradation
SON
optimization
optimization actions
Prior art date
Application number
PCT/EP2017/055351
Other languages
French (fr)
Inventor
Vladimir VERBULSKII
Gary STURGEON
Premnath KANDHASAMY NARAYANAN
Ciaran Murphy
MingXue Wang
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2017/055351
Publication of WO2018162046A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L41/5019 Ensuring fulfilment of SLA
    • H04L41/5025 Ensuring fulfilment of SLA by proactively reacting to service quality change, e.g. by reconfiguration after service quality degradation or upgrade
    • H04L41/5009 Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]

Definitions

  • the present disclosure relates generally to methods of protecting a Key Performance Indicator KPI from degradation during optimization of a communications network by a Self-Organizing Network (SON) controller, and to corresponding programs for computers and corresponding program products, and to corresponding apparatus for protecting the KPI and to SON controllers arranged to co-operate with such apparatus.
  • SON Self-Organizing Network
  • ANR Automated Neighbour Relations
  • PCI Physical Cell Identity
  • CCO Coverage and Capacity Optimization
  • KPIs Key Performance Indicators
  • Conventionally, SON is not a "zero-touch" solution in which the operator launches the desired features and those features optimize the network performance without further human intervention.
  • An aspect of this disclosure provides a method of protecting a first KPI of a communications network from effects of different optimization actions by a SON controller of the communications network, having steps of monitoring the first KPI to detect degradation of the first KPI, and receiving from the SON controller an indication of occurrences of the different optimization actions.
  • the method also involves assessing automatically which of the different optimization actions, if any, prompted the detected degradation of the first KPI, based on the indications of the occurrences, and providing feedback automatically to the SON controller, to cause the SON controller to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation. Any additional optional features can be added, and some are described below and set out in dependent claims.
  • Another aspect of the disclosure provides a computer program having instructions that when executed by processing circuitry cause the processing circuitry to carry out the methods set out above.
  • Another aspect provides a computer program product comprising a computer readable medium having stored on it the above-mentioned computer program.
  • Another aspect provides apparatus for protecting a first KPI of a communications network from effects of different optimization actions by a SON controller of the communications network, the apparatus having a processing circuit and a memory circuit, and the memory circuit having instructions executable by the processing circuit.
  • the processing circuit when executing the instructions is configured to monitor the first KPI to detect degradation of the first KPI, and to receive from the SON controller an indication of occurrences of the different optimization actions.
  • the processing circuit is also configured to assess automatically which of the different optimization actions, if any, prompted the detected degradation of the first KPI, based on the indications of the occurrences, and to provide feedback automatically to the controller, to cause the controller to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation.
  • Another aspect provides a system comprising the apparatus for protecting a KPI as set out above, and an SON controller for carrying out optimization actions on the communications network.
  • the SON controller is connected to the apparatus for protecting the KPI to send the indication of occurrences of optimization actions to the apparatus for protecting the KPI and to receive feedback from said apparatus for protecting the KPI.
  • Another aspect provides a SON controller for controlling optimization actions on a communications network, in cooperation with the apparatus for protecting a first KPI of the communications network from degradation by the optimization actions.
  • the SON controller has a processing circuit and a memory circuit, the memory circuit having instructions executable by the processing circuit. The processing circuit when executing the instructions is configured to initiate optimization actions, and to send to the apparatus for protecting the first KPI, an indication of occurrences of the optimization actions.
  • the processing circuit is also configured to receive feedback from the apparatus, based on which of the optimization actions is assessed to have prompted a degradation in the first KPI, and in response to the feedback, control the optimization actions to ameliorate the detected degradation, to protect the first KPI.
  • Another aspect provides apparatus for protecting a first KPI of a communications network from effects of different optimization actions by a SON controller of the communications network, the apparatus having a monitor unit for monitoring the first KPI to detect degradation of the first KPI, and a receiver for receiving from the SON controller an indication of occurrences of the different optimization actions.
  • the apparatus also has an assessment unit for assessing automatically which of the different optimization actions, if any, prompted the detected degradation of the first KPI, based on the indications of the occurrences, and a feedback unit for providing feedback automatically to the controller, to cause the SON controller to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation.
  • Another aspect provides a SON controller for controlling optimization actions on a communications network, in cooperation with an apparatus for protecting a first KPI of the communications network from degradation by the optimization actions.
  • the SON controller has an optimization unit for initiating optimization actions, and a sender/receiver for sending to the apparatus for protecting the first KPI, an indication of occurrences of the optimization actions, and for receiving feedback from the apparatus, based on which of the optimization actions is assessed to have prompted a degradation in the first KPI.
  • the apparatus also has a control unit for controlling the optimization actions in response to the feedback, to ameliorate the detected degradation, to protect the first KPI.
  • Figure 1 shows an overall system view including embodiments
  • Figures 2 and 3 show steps according to embodiments
  • Figures 4 and 5 show steps and a graph relating to a time variable threshold representative of past behavior
  • Figures 6 to 9 show steps including ways of assessing which optimization prompted the degradation
  • Figures 10 and 11 show ways of ameliorating the optimization which prompted degradation
  • Figure 12 shows steps including detecting degradation based on transience or reliability
  • Figure 13 shows steps in which improvement is also detected and amelioration includes reinforcing that improvement
  • Figure 14 shows steps where the amelioration is based on detection and feedback relating to other NEs
  • Figure 15 shows an overall view of steps of optimization and feedback
  • Figure 16 is a flowchart giving a more detailed view of optimization and feedback
  • Figure 17 shows steps in ordering a sequence of multiple trial reversions
  • Figure 18 shows a time chart of a C-SON example
  • Figure 19 shows a time chart of a D-SON example
  • Figures 20 and 21 show a schematic network view and steps relating to amelioration based on detection and feedback of other NEs and based on service area throughput
  • Figures 22 and 23 show a schematic network view and steps relating to amelioration based on detection and feedback of other NEs, relating to neighbor relation black-/white-listing.
  • Figure 24 shows a schematic view of internal details of apparatus for protecting KPI
  • Figure 25 shows a schematic view of internal details of an SON controller
  • Figure 26 shows a schematic view of internal details of an apparatus for protecting KPI
  • Figure 27 shows a schematic view of internal details of an SON controller.

Detailed Description:
  • references to computer programs or software can encompass any type of programs in any language executable directly or indirectly on processing hardware.
  • references to processors, hardware, processing hardware or circuitry can encompass any kind of logic or analog circuitry, integrated to any degree, and not limited to general purpose processors, digital signal processors, ASICs, FPGAs, discrete components or logic and so on.
  • References to a processor are intended to encompass implementations using multiple processors which may be integrated together, or co-located in the same location such as the rack, same room, same floor, same building, as appropriate, or distributed at different locations for example.
  • Optimization action is intended to encompass for example any kind of alteration of configuration such as a parameter or sequence or instruction or circuitry or relationship between these, or anything that can define how any part of the network such as an NE, physical or virtualized entity, can operate or define their characteristics, relationships or identity, such as for example defining radio output power, radio frequencies, other communications parameters, or physical cell identity PCI or neighbor relations.
  • the configuration can be stored in any format and be located anywhere convenient, either external to the VNF or service area, or internally.
  • VNF if the change is made while that VNF is in use, is intended to encompass anything which could noticeably affect the service to a UE, or risk affecting the service, such as changing a PCI, or changing a radio frequency band or a radio output power, or anything which might cause connection to the UE to be lost, or bandwidth of a connection to be reduced for example.
  • References to KPIs can encompass important values such as Call Drop Rate, DL Throughput or Call Setup Failure Rate, for a given NE or component of an NE for example.
  • Conventionally the thresholds need to be defined and adapted manually; this process is not automated. As most KPIs naturally change value during the day/week/year (showing strong seasonality), absolute thresholds need to be periodically redefined. This manual input means they do not scale well to more complex networks.
  • Use of fixed KPI thresholds means that some Result Output Periods (ROPs) are guarded less well than others. The end result is a lower probability of detecting a degradation in those ROPs.
  • Use of fixed KPI thresholds likewise means that some NEs are guarded less well than others, with a correspondingly lower probability of detecting a degradation for those NEs. Special events are not spotted, and transient KPI degradations due to mass events or external events (when the NEs are overloaded) are treated the same way as real network degradations.
  • Conventionally, there is no automated assessment of how SON activities affect network KPIs that could serve as feedback to SON.
  • As a result, SON is acting blindly, without assessment of the results of its activities.
  • Figure 1: overall system view including embodiments
  • Figure 1 shows a schematic view showing an apparatus 30 for protecting a first KPI and typically many KPIs, from effects of optimization actions.
  • the apparatus is shown connected to a SON controller 20 to receive indications of occurrences of optimizations from the SON controller, and to send feedback to cause the SON controller to ameliorate any degradation on the first KPI.
  • the apparatus for protecting the KPI could be implemented on separate hardware such as a separate server or blade, or could be a subroutine or software module run on the same hardware as the SON controller for example.
  • One possible embodiment is to have the apparatus fully implemented in the Cloud.
  • the apparatus is shown coupled to an NMS 60 for receiving KPI values from network elements NE 50 of the communications network 40.
  • the SON controller is shown coupled to the communications network via the NMS 60 to enable the SON controller to initiate optimization actions.
  • the apparatus is suitable for use with SON controllers which are centralized (C-SON) (as shown) or distributed (D-SON) implementations.
  • C-SON centralized
  • D-SON distributed
  • the SON functionality is typically integrated in the NEs.
  • the optimization actions can for example involve sending or altering a parameter or rule or instruction of an NE, as illustrated, or changing an algorithm which governs the NE behavior, by generating the parameter/rule /instruction for example, or changing a relationship between NEs for example.
  • the apparatus can help to effectively guard network KPIs during optimization actions (e.g. by the SON controller). In some cases, it can provide feedback to cause reversion of the optimization actions such as parameter changes that are assessed as having prompted the degradation of performance. In some cases, the parameter changes that have led to performance improvement are also assessed and reported to the SON controller. This can help enable smarter experience-based amelioration compared to conventional arrangements.
  • degradation of KPIs can be detected based on time variable KPI thresholds representative of their past behavior. The threshold used for detections can be specific to each Network Element, and varied for different periods of time. For this, advanced statistical analysis of historical and current KPI values can be applied.
  • Figure 2 shows a view of a time chart with time flowing down the chart, and showing in a left hand column, some actions of the apparatus for protecting the KPI, and in a right hand column, some actions of the SON controller 20.
  • the apparatus monitors the first KPI to detect degradation. This can involve comparison with a threshold, or by algorithm, or any other way.
  • the apparatus receives indications of occurrences of various different optimization actions from the SON controller.
  • the apparatus automatically assesses which of the optimization actions prompted the degradation detected. This can be implemented in various ways, and some will be described below with reference to subsequent figures.
  • the apparatus provides feedback automatically to the SON to cause it to ameliorate the degradation based on which of the optimization actions prompted the degradation.
  • the actions of the SON controller include step 21 of initiating optimization actions on the communications network, and sending an indication of these actions to the apparatus for protecting the KPI.
  • the SON controller receives feedback from the apparatus relating to ameliorating the degradation.
  • the SON controller ameliorates the degradation based on the feedback of which of the optimization actions prompted the degradation. Again, this can be implemented in various ways and some will be described in more detail below with reference to other figures.
  • An advantage of this feature of figure 2 of assessing which of the optimization actions prompted a particular KPI degradation, is that the feedback to cause an attempted amelioration can be based on that information and thus be more focused or more selective, so that the amelioration can be more likely to succeed or succeed more rapidly than otherwise. This is especially useful in SON situations where often KPIs are dependent in complex ways on various different tunable parameters and unpredictable variables.
  • optimization actions by the SON controller can be planned without having to take all responsibility for this concern, which can help avoid adding further complexity to the SON controller.
  • Figure 3 shows just the steps of the apparatus, without showing the SON controller. So at step 31, the apparatus monitors the first KPI to detect degradation, for example by comparison with a threshold, or by algorithm, or any other way. At step 32, the apparatus receives indications of occurrences of various different optimization actions. At step 33, the apparatus automatically assesses which of the optimization actions prompted the degradation detected. At step 34, the apparatus provides feedback automatically to the SON controller to cause it to ameliorate the degradation based on which of the optimization actions prompted the degradation.
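The four steps of the apparatus can be sketched in code as a minimal loop. This is an illustration only, not the claimed implementation: the function names, the simple threshold comparison in step 31, and the "most recent preceding action" heuristic in step 33 are all assumptions standing in for the more elaborate assessment described later.

```python
# Minimal sketch of steps 31-34 (all names hypothetical).

def monitor(kpi_values, negative_threshold):
    """Step 31: detect degraded ROPs by comparison with a threshold."""
    return [t for t, v in kpi_values.items() if v < negative_threshold]

def assess(degraded_rops, action_log):
    """Step 33: pick the optimization action whose occurrence most recently
    preceded the first degraded ROP (a deliberately crude heuristic)."""
    if not degraded_rops:
        return None
    first = min(degraded_rops)
    earlier = [(t, a) for t, a in action_log if t <= first]
    return max(earlier)[1] if earlier else None

def protect(kpi_values, negative_threshold, action_log):
    """Steps 31-34: monitor, assess, and build feedback for the SON controller."""
    degraded = monitor(kpi_values, negative_threshold)
    culprit = assess(degraded, action_log)  # action_log is step 32's indications
    if culprit is None:
        return None
    return {"action": culprit, "request": "revert"}  # step 34: feedback

kpis = {1: 0.99, 2: 0.98, 3: 0.80, 4: 0.79}      # availability-like KPI per ROP
actions = [(1, "tilt_change"), (2, "power_change")]
print(protect(kpis, 0.90, actions))  # -> {'action': 'power_change', 'request': 'revert'}
```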
  • Amelioration is defined as encompassing reverting the optimization action found to be responsible, or changing other optimization actions known to have equivalent effects to such reverting, or compensating in any way for the detected degradation, including biasing the optimization algorithm being used, or biasing a selection of which of a number of optimization algorithms to use, for example by adjusting a weighting.
  • Optimization actions are defined as encompassing actions for network or network element optimization, for network reconfiguration (for example to incorporate new cells or service areas) or for network self-healing (after a fault or outage) for example.
  • Figure 4 shows steps similar to those of figure 3, and corresponding numerals have been used, but in this case, the step 38 of monitoring the first KPI involves the detecting being based on a first KPI threshold, being a time variable threshold representative of past behavior of the first KPI before the optimization.
  • Figure 5 shows a graph of an example of positive and negative KPI thresholds for a first KPI for a network Element NE1 and a similar KPI for Network Element NE2, with time variability, where time flows from left to right.
  • a top line is a dotted line showing a positive threshold PT for the first KPI for NE1, used for detecting improvement in KPI1.
  • the second from top line is a solid line showing a real KPI value for first KPI.
  • the third from top line is a dashed line showing a negative threshold NT for the first KPI for NE1 , used for detecting degradation in that first KPI.
  • the fourth from top line in figure 5 is a double line showing a positive threshold PT for the KPI for NE2, used for detecting improvement in the KPI.
  • the fifth from top line is a longer dashed line showing a real KPI value for the second KPI.
  • the sixth from top line is a dashed double line showing a negative threshold NT for the KPI for NE2, used for detecting degradation in that KPI.
  • These dynamic thresholds can be generated externally and retrieved by the apparatus or can be generated by the apparatus.
  • the apparatus can use them to analyze SON optimization actions on network elements for degradation, and in some cases to assess both their positive and negative impact on KPIs. This analysis is then fed back into a closed-loop system to guard the KPIs.
  • One type of feedback is "slow feedback" for ensuring that SON actions providing benefit to the network are given greater importance/weight while those providing less benefit or degradation are given less importance/weight in determining the next optimization actions.
  • Another type of feedback is faster, for causing SON actions to be reverted when Negative Threshold (NT) breaches occur (considered as KPI's degradation).
  • NT Negative Threshold
  • Feedback is provided to the SON apparatus about such reversions and degradations to ensure that the actions resulting in the negative impact are given less weight. This can help ensure that SON actions leading to negative impacts on the network are demoted. This in turn can lead to a better trade off between protection of KPIs and better optimization.
  • the feedback may leave these optimizations unchanged and report the positive feedback to the SON apparatus so that in next runs or iterations, the algorithms/rules used to improve certain KPI(s) will have more weight over the whole optimized cluster.
  • the overall result is that the most beneficial actions are prioritized, and preferably prioritized on a per network element basis.
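The two feedback types described above, fast reversion on an NT breach and slow per-action weight adjustment, can be combined in one dispatch function. This sketch is illustrative: the fixed weight step and the specific thresholds are assumptions, not values from the disclosure.

```python
# Sketch of the two feedback types: fast reversion on a Negative Threshold
# (NT) breach, and slow weight adjustment otherwise (hypothetical scheme).

def feedback(kpi_value, nt, pt, action, weights, step=0.1):
    """Return a fast-feedback instruction (or None) plus updated slow weights."""
    w = dict(weights)
    if kpi_value < nt:                      # degradation: revert and demote
        w[action] = max(0.0, w.get(action, 1.0) - step)
        return {"revert": action}, w
    if kpi_value > pt:                      # improvement: promote the action
        w[action] = w.get(action, 1.0) + step
    return None, w                          # no fast feedback needed

instr, w = feedback(0.5, nt=0.8, pt=0.95, action="A1", weights={"A1": 1.0})
print(instr, w)
```

The fast path answers an acute NT breach immediately, while the weights accumulate experience so that actions providing less benefit gradually carry less influence over the next optimization round.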
  • the time variability of the KPI thresholds can enable accounting for seasonal, daily, or more granular variations in KPIs values, by automatically calculating KPI Positive Threshold (PT) and NT based on the history of the KPI values using statistical methods.
  • Thresholds can be recalculated after the end of each ROP to enable them to take into account the most recent data. This can be done at any network element (NE) level (e.g. cell level, base station, neighbour relation etc.) and with a defined time granularity (e.g. 1 hour ROP) given that the historical data has the same or higher granularity.
  • Both thresholds (PT, NT) can be recalculated using the KPI values for the recent ROP. In a case of breaching NT after SON activities, the thresholds may be based on the same historical data set for the next ROP. This can help ensure that the performance of the SON actions are consistently evaluated in a dynamic network environment.
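One simple way to realize such time-variable, per-NE thresholds is to group the historical KPI values by time period (here, hour of day) and set PT/NT a number of standard deviations around the mean. The grouping key and the mean-plus-k-sigma rule are one statistical choice among many; the disclosure leaves the exact method open.

```python
# Sketch: per-NE, time-variable PT/NT from historical KPI values, grouped by
# hour of day to capture daily seasonality (illustrative statistical choice).
from statistics import mean, stdev

def thresholds(history, k=2.0):
    """history: list of (hour_of_day, kpi_value). Returns {hour: (NT, PT)}."""
    by_hour = {}
    for hour, value in history:
        by_hour.setdefault(hour, []).append(value)
    out = {}
    for hour, values in by_hour.items():
        m = mean(values)
        s = stdev(values) if len(values) > 1 else 0.0
        out[hour] = (m - k * s, m + k * s)   # (negative, positive) threshold
    return out

hist = [(9, 0.95), (9, 0.97), (9, 0.96), (18, 0.80), (18, 0.82), (18, 0.81)]
nt, pt = thresholds(hist)[9]
print(round(nt, 3), round(pt, 3))
```

Recalculating after each ROP simply means appending the newest (hour, value) pair to the history and calling the function again, so the thresholds track recent behavior automatically.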
  • Figures 6 to 9: ways of assessing which optimization prompted the degradation
  • Figure 6 shows steps similar to those of figure 3, and corresponding numerals have been used, but in this case, the step 33 of assessing automatically which of the different optimization actions prompted the degradation, comprises assessing 35 based on an expected time delay and an actual time delay between the respective occurrence and the detection of the degradation.
  • An advantage of this is that it is a convenient additional way of assessing which optimization prompted the degradation.
  • the assessment of which of the optimization actions prompted the degradation can also be based on how closely related to a KPI is the respective optimization action. This can encompass a closeness in terms of how many linked events there are in a chain or tree of linked events leading to the degradation, or a predetermined likelihood of causation, for example in terms of whether it relates to a same NE, a downstream NE or neighbouring NE, and if neighbouring, then how close a neighbor in terms of coverage overlap or in terms of handover statistics and so on.
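Combining the two criteria above, delay match and closeness, can be sketched as a scoring function over candidate actions. The particular formula (closeness divided by one plus the delay error) is purely illustrative; any monotone combination would fit the description.

```python
# Sketch: rank candidate optimization actions by how well the actual delay to
# the degradation matches each action's expected delay, weighted by how
# closely the action is related to the KPI (same NE > neighbour NE, etc.).

def rank_candidates(degradation_time, actions):
    """actions: list of dicts with 'name', 'time', 'expected_delay' and
    'closeness' (0..1, 1 = same NE). Higher score = more likely culprit."""
    scored = []
    for a in actions:
        actual_delay = degradation_time - a["time"]
        if actual_delay < 0:          # action happened after the degradation
            continue
        delay_error = abs(actual_delay - a["expected_delay"])
        score = a["closeness"] / (1.0 + delay_error)
        scored.append((score, a["name"]))
    return [name for score, name in sorted(scored, reverse=True)]

acts = [
    {"name": "pci_change", "time": 10, "expected_delay": 2, "closeness": 1.0},
    {"name": "anr_change", "time": 8, "expected_delay": 1, "closeness": 0.5},
]
print(rank_candidates(12, acts))
```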
  • Figure 7 shows steps similar to those of figure 6, and corresponding numerals have been used, but in this case, the step 33 of assessing automatically which of the different optimization actions prompted the degradation, also comprises assessing 36 by causing the controller to selectively make a trial reversion of at least one of the optimization actions, and a step of detecting whether the trial reversion results in reduction of the degradation.
  • An advantage of this is that it can give more certainty in the assessment, though it takes some time. This can lead to more certainty in the amelioration, which in turn can lead to a better trade off between protection of KPIs and better optimization.
  • Figure 8 shows steps similar to those of figures 2 and 6, and corresponding reference numerals have been used.
  • the step of assessing automatically which of the different optimization actions prompted the degradation comprises assessing 41 by causing the SON controller to selectively make a trial reversion by sending feedback.
  • the SON controller makes the trial reversion of the optimization action identified in the feedback.
  • the apparatus detects whether the trial reversion has reduced the degradation, and assesses whether this selected optimization had prompted the degradation based on this detection, and optionally based on other factors as described.
  • Figure 9 shows steps similar to those of figure 3 and figure 7, and corresponding numerals have been used, but in this case, the step 33 of assessing automatically which of the different optimization actions prompted the degradation, comprises a step 37 of, in a case where there are more than one of the optimization actions to be reverted, carrying out respective trial reversions and corresponding detection of reduction in degradation sequentially in order, the order being based on how closely related to the first KPI are the different optimization actions.
  • How closely related can encompass a closeness in terms of how many linked events there are in a chain or tree of linked events leading to the degradation, or a predetermined likelihood of causation, for example in terms of whether it relates to a same NE, a downstream NE or neighbouring NE, and if neighbouring, then how close a neighbor in terms of coverage overlap or in terms of handover statistics and so on.
  • An advantage of this is that it can help speed up the assessment compared to a random order for example, and thus speed up the amelioration. This in turn can lead to a better trade off between protection of KPIs and better optimization.
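The ordered sequence of trial reversions can be sketched as follows. The `revert` callback and the `degradation_gone` re-check stand in for the real SON-controller interface and KPI monitoring; both are hypothetical names for this illustration.

```python
# Sketch of the ordered trial reversions of Figures 9 and 17: try reverting
# candidate actions one at a time, most closely related first, until the
# degradation clears.

def trial_reversions(candidates, revert, degradation_gone):
    """candidates: list of (closeness, action). Returns the action whose
    reversion cleared the degradation, or None if none did."""
    ordered = sorted(candidates, key=lambda c: c[0], reverse=True)
    for closeness, action in ordered:
        revert(action)                 # ask the SON controller for a trial revert
        if degradation_gone():
            return action              # this action had prompted the degradation
    return None

# Toy stand-ins: reverting "power_change" is what actually clears the KPI.
state = {"degraded": True}
def revert(action):
    if action == "power_change":
        state["degraded"] = False
print(trial_reversions([(0.3, "tilt_change"), (0.9, "power_change")],
                       revert, lambda: not state["degraded"]))
```

Because the most closely related action is tried first, the expected number of trials (and hence the time to amelioration) is lower than with a random order, which is exactly the advantage stated above.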
  • Figures 10 and 11: ways of ameliorating the optimization which prompted degradation
  • Figure 10 shows steps similar to those of figure 2, and corresponding reference numerals have been used.
  • the step of providing feedback involves a step 44 by the apparatus of sending an instruction to the SON controller to cause it to at least partially revert the optimization assessed to have prompted the degradation.
  • the SON controller receives the feedback at step 22 and at step 24, it ameliorates the degradation based on the feedback. This amelioration includes reverting, partially or fully, the optimization action assessed to have prompted the degradation.
  • An advantage of this is that it can help provide more rapid amelioration.
  • Figure 11 shows steps similar to those of figure 2, and corresponding reference numerals have been used.
  • the step of providing feedback involves a step 45 by the apparatus of providing feedback to cause the SON controller to ameliorate by biasing how the optimization is determined by the SON controller.
  • the feedback can include an indication of how to bias how the SON controller determines the optimization actions.
  • the sending of the indication in the feedback can involve sending a parameter for use by an optimization algorithm, and/or sending a weighting for use in selecting between different optimization algorithms, for example.
  • the amelioration by biasing can involve the SON controller using the feedback by selecting an optimization algorithm for generating the optimization actions based on the weighting, or by using the parameter sent in the feedback, in an optimization algorithm for example.
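Selecting between optimization algorithms based on the fed-back weighting can be sketched simply. The deterministic "pick the heaviest" policy below is one assumption; a real controller might instead sample proportionally to weight.

```python
# Sketch: using fed-back weightings to bias which optimization algorithm the
# SON controller selects next (hypothetical policy and algorithm names).

def select_algorithm(weights):
    """weights: {algorithm_name: weight}. Returns the highest-weighted one."""
    return max(weights, key=weights.get)

def apply_feedback(weights, algorithm, delta):
    """Bias future selection: positive delta reinforces, negative demotes."""
    updated = dict(weights)
    updated[algorithm] = max(0.0, updated.get(algorithm, 1.0) + delta)
    return updated

w = {"cco": 1.0, "anr": 1.0}
w = apply_feedback(w, "cco", -0.4)     # cco prompted a degradation
print(select_algorithm(w))
```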
  • Figure 12 shows steps similar to those of figure 3, and corresponding numerals have been used, but in this case, step 39 of monitoring the first KPI involves the detecting being based on an assessment of the degradation for transience and/ or reliability.
  • An advantage of this is that it can help avoid inaccurate detections and thus help improve accuracy of assessments of which optimization action prompted the degradation. This in turn can lead to a better trade off between protection of KPIs and better optimization.
  • Figure 13 shows steps similar to those of figure 2, and corresponding reference numerals have been used.
  • the step of monitoring the first KPI is now carried out 51 to detect degradation or improvement.
  • the assessing now involves assessing automatically 53 which of the different optimization actions, if any, prompted the detected improvement of the first KPI, based on the indications of the occurrences.
  • the step of providing the feedback automatically to the controller, additionally 54 causes the SON controller to reinforce the detected improvement, based on which of the optimization actions is assessed to have prompted the improvement. This can be additional to detecting and responding to degradation of the first KPI.
  • An advantage of detecting and responding to improvements is that it can help to improve the optimization more rapidly than only detecting degradations. This in turn can lead to a better trade off between protection of KPIs and better optimization.
  • Figure 14 shows steps similar to those of figure 2, and corresponding reference numerals have been used.
  • the step of providing feedback now has the condition that for the case that the first KPI relates to a first NE of a group of NEs, there is a step 64 of providing feedback to cause the SON controller to ameliorate optimization actions relating to another of the group of NEs.
  • An advantage of this is that it can improve scalability to larger networks or reduce the number of KPIs to be monitored, to reduce the complexity for a given size of network.
  • References to network entities are intended to encompass any type or level of entity, for example from service area, to node to parts of nodes, to other managed or addressable elements or components of nodes, or of network management components, and can also include non physical entities such as relationships between elements, (such as neighbor relations between nodes) or virtualized elements or classes or groups of elements for example.
  • Figure 15: overall view of steps of optimization and feedback
  • Figure 15 shows an overall view of steps of operation of a system according to an embodiment, the system including the apparatus and the SON controller, some of the steps forming a loop.
  • a user enters initial configuration information.
  • Initial configuration information: there are many possible initial settings for the apparatus; some examples are set out here as default parameters, which can be customized by the user, such as:
  • KPIs to monitor: the user can set a list of the KPIs which are most important, like Call Drop Rate, DL Throughput, Call Setup Failure Rate etc.
  • ROP definition (e.g. 1 hour).
  • ROPs to exclude from KPI history: if the user knows about some major network problems (e.g. long power outages) which affected the KPIs, the affected ROPs can be removed, and the algorithm could instead take interpolated values for threshold calculation.
  • Significance criterion: the number of times within a specified period that the KPI value must fall below/jump above the thresholds in order to consider the threshold breach as significant (this is called the first significance criterion), e.g. 2 times within 6 ROPs (the latter is called the monitoring period, or second significance criterion).
  • significance criteria can be used to ensure that a KPI has come back to its pre-optimization values after the NT breach and revert actions (this is not a limitation; different significance criteria could be set for detecting a significant breach and for detecting the return to normal values). This setting is designed to cope with transient KPI behavior.
  • Minimum number of samples for each KPI. This can include how many samples should be collected in each ROP in order to consider the KPI value as reliable in a ROP. This could be set manually as an absolute value or calculated automatically based on the history of the number of samples. If in a specific ROP the number of samples taken to calculate a KPI is less than the Samples Threshold (ST), then even if there is a KPI threshold breach, it is discarded as the reliability criterion is not fulfilled. This is to cope with the "small numbers effect", when for example having one or just a few fails would lead to a very high failure rate due to having too few samples.
  • Mass event or external event related settings: this can be an absolute or automatically calculated threshold for an indicator reflecting NE load (e.g. PRB utilization in an LTE cell). If the load indicator is higher than the threshold, then even if there is a KPI threshold breach, it is discarded, as during this ROP the NE was overloaded due to a mass event (like a music festival for instance).
  • j) Data Reliability Settings: the minimum value for the NE's and its neighbors' availability to consider the KPI values as reliable in a ROP. This is to discard the ROPs with outages.
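The default parameters above can be pictured as a single configuration structure. The following Python sketch is illustrative only; every field name and default value is an assumption made for the example, not taken from this disclosure:

```python
# Illustrative container for the user-configurable monitoring settings.
# All names and defaults here are assumptions for the sketch.
from dataclasses import dataclass, field

@dataclass
class MonitoringSettings:
    kpis: list = field(default_factory=lambda: [
        "call_drop_rate", "dl_throughput", "call_setup_failure_rate"])
    rop_hours: int = 1                 # ROP definition, e.g. 1 hour
    excluded_rops: list = field(default_factory=list)  # ROPs with known outages
    min_breaches: int = 2              # first significance criterion
    monitoring_period_rops: int = 6    # second significance criterion (ROPs)
    samples_threshold: int = 50        # minimum samples per ROP (ST)
    load_threshold: float = 0.9        # e.g. PRB utilization above this marks
                                       # the ROP as a mass-event ROP
    min_availability: float = 0.99     # data reliability setting

settings = MonitoringSettings()
assert settings.min_breaches <= settings.monitoring_period_rops
```

A breach would then only be acted upon when the reliability-related fields (samples threshold, load threshold, minimum availability) are all satisfied for the ROP in question.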
  • the apparatus calculates or retrieves suitable thresholds for use in detecting degradation and/or improvement.
  • the SON initiates or makes some change to the optimization actions or reverts previous optimization actions.
  • the apparatus decides whether the optimization is acceptable or should be reverted and feeds this back to the SON controller. This can involve monitoring the first KPI and using the KPI threshold to determine if the optimization has prompted a degradation which is unacceptable, as described above in relation to other figures.
  • the apparatus recalculates the KPI thresholds used in step IV, to update the thresholds, to adapt them to recent behavior of the KPI, as described above in relation to figure 5 for example.
  • the apparatus optionally reports to the user periodically, and returns to the start of the continuous loop, to step III, to repeat steps III to VI.
  • Figure 16 shows a flow chart showing an example of some of the actions of the apparatus according to an embodiment, in more detail than figure 15.
  • the apparatus first looks through the SON execution history and splits the NEs into two groups: first, NEs which have not been optimized by SON within the effect window; second, the ones which have been optimized by SON within the effect window.
  • the Apparatus simply recalculates at step 300 the NT, PT and reliability criteria (assuming the automated option had been chosen by the User; by User is meant a Mobile Network Operator (MNO) engineer, responsible for the network performance/network optimization).
  • the performance management PM data in the form of values of the first KPI, typically received from the NMS, is compared to previously calculated KPI thresholds at step 210.
  • NT and PT thresholds can be calculated based on historical data and can represent expected normal data boundaries or behaviors. Thresholds can be calculated by combining a number of statistical learning functions in a workflow as described for example in "MingXue Wang and Sidath Handurukande. A Streaming Data Anomaly Detection Analytic Engine for Mobile Network Management, IEEE International Conference on Cloud and Big Data Computing, 2016". Given a sequence of values X as a time series, NT and PT can be calculated based on robust statistics, i.e., median and Median Absolute Deviation (MAD).
  • threshold(X) = median(X) ± 3 * median(|X - median(X)|)
  • For the value at time t, the thresholds can be calculated over a seasonally aligned window of the series, X_t = x_(t-f-w), ..., x_(t-f), ..., x_(t-f+w), where f is the seasonal period and w is the window size.
  • In this way, 7 and 9 o'clock data would also be used for calculating the limits of the 8 o'clock time window; a peak normally happening at 7 o'clock that occurs at 8 o'clock is still considered as normal.
  • PT = threshold(X+), where X+ = {x ∈ X_t : x ≥ median(X_t)}
  • NT = threshold(X-), where X- = {x ∈ X_t : x ≤ median(X_t)}
  • PT and NT change dynamically according to behaviors of the data. As a result, it detects NE performance degradation and improvement according to each NE's own behaviors.
  • Other approaches such as based on ARIMA, Holt-winters, etc. can also be used to calculate PT and NT (see for example "Ajay Mahimkar, Ashwin Lall, Jia Wang, Jun Xu, Jennifer Yates, Qi Zhao, Synergy: Detecting and Diagnosing Correlated Network Anomalies").
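As a minimal sketch of the median/MAD approach, the PT and NT limits could be computed as follows; the function names are illustrative, and the subset construction follows the PT/NT definitions given above:

```python
# Robust PT/NT calculation: threshold(X) = median(X) +/- 3 * MAD(X),
# with PT computed from the upper part of the window and NT from the lower.
from statistics import median

def mad_threshold(xs, sign):
    m = median(xs)
    mad = median(abs(x - m) for x in xs)  # Median Absolute Deviation
    return m + sign * 3 * mad

def pt_nt(window):
    """window: KPI values X_t from the seasonally aligned history window."""
    m = median(window)
    upper = [x for x in window if x >= m]  # the X+ subset
    lower = [x for x in window if x <= m]  # the X- subset
    return mad_threshold(upper, +1), mad_threshold(lower, -1)  # (PT, NT)

pt, nt = pt_nt([5.0, 5.2, 4.9, 5.1, 9.0, 5.0, 4.8])
assert nt < pt
```

Because median and MAD are robust statistics, a single outlying ROP in the history barely moves the limits, unlike mean/standard-deviation control limits.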
  • the monitoring involves detecting if there was a breach of one of the thresholds (NT, PT). If so, at step 230, the reliability criteria are checked. If at least one of the reliability criteria (for example minimum number of samples, mass event flag - NE load indicators, NE availability) is not fulfilled, then the Apparatus doesn't provide any feedback (including reversions) to the SON about the NE, and the method goes to step 300.
  • significance criteria are checked at step 240. This can mean for example determining whether the NE has breached a threshold (either NT or PT) a given number of times within a given number of ROPs. If not (for example if this is the first breach out of minimum two needed within 6 ROPs period), then any optimization on the NE and its neighbors is stopped, by including them into the temporary exclusion list till the end of the monitoring period at step 270.
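The first significance criterion can be sketched as a simple count over the monitoring period; the function name and the boolean-flag representation are assumptions made for the illustration:

```python
# A breach is significant only if it occurred at least `min_breaches` times
# within the last `monitoring_period` ROPs (e.g. 2 times within 6 ROPs).
def is_significant(breach_flags, min_breaches=2, monitoring_period=6):
    """breach_flags: one boolean per ROP, newest last."""
    return sum(breach_flags[-monitoring_period:]) >= min_breaches

# Two breaches within the last six ROPs: significant.
assert is_significant([False, True, False, False, True, False])
# A first, isolated breach: not yet significant.
assert not is_significant([False, False, False, False, False, True])
```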
  • the Apparatus checks at step 250 based on the setting e) described above, whether the breach is a degradation or improvement. If this is an improvement, then the thresholds, both NT, PT and also reliability (in case the User had set the reliability thresholds to be calculated automatically) are recalculated at step 270 taking into account the PM data for the last ROP; SON weights are recalculated at step 280 taking into account the KPI improvement to which a specific SON feature/policy/rule has led. This feature/policy/rule will increase the weight in the next ROP. It can be realized in two ways: either to increase the weight for all the optimized NEs, or to increase it only for NEs with similar KPI values/behavior (for similarity analysis different techniques could be used, e.g. clustering).
  • the optimization changes made to the NE and its neighbors within the effect window are added to the reversion list so that they will be reverted one at a time in sequence in the following order: first the optimizations of the current NE, then those of its neighbors (those for different neighbours being ordered by number of Successful Handovers and/or distance); and the optimizations being ordered from newest to oldest.
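The reversion ordering just described might look like the following sketch; the neighbour record fields and the (timestamp, change) tuples are illustrative assumptions:

```python
# Build the reversion list: the monitored NE's changes first, then each
# neighbour's (neighbours ordered by successful handovers, then distance),
# with every NE's own changes reverted newest-first.
def build_reversion_list(ne_id, neighbours, changes):
    """changes: dict mapping NE id -> list of (timestamp, change) tuples."""
    ordered_ids = [ne_id] + [
        n["id"] for n in sorted(
            neighbours,
            key=lambda n: (-n["successful_handovers"], n["distance_km"]))]
    return [(nid, change)
            for nid in ordered_ids
            for _, change in sorted(changes.get(nid, []), reverse=True)]

changes = {"A": [(1, "tilt+1"), (2, "power-3")],
           "B": [(3, "bw+5")], "C": [(1, "tilt-1")]}
neighbours = [{"id": "B", "successful_handovers": 10, "distance_km": 2.0},
              {"id": "C", "successful_handovers": 30, "distance_km": 5.0}]
# A's newest change first, then C (more handovers), then B.
assert build_reversion_list("A", neighbours, changes) == [
    ("A", "power-3"), ("A", "tilt+1"), ("C", "tilt-1"), ("B", "bw+5")]
```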
  • the SON weights are recalculated at step 280 taking into account the KPI degradation to which a specific SON feature/policy/rule has led. This feature/policy/rule will have increased weight in the next ROP. It can be also realized in two ways as described above.
  • a Feedback report for the SON controller is created at step 290, containing the reversions list and the new weights for features/policies/rules, and is sent to the SON controller.
  • Figure 17 steps in ordering sequence of multiple reversions
  • Figure 17 shows a flow chart of steps by the apparatus to show an example of causing multiple trial reversions of optimization actions where a KPI of an NE has breached its NT. It is assumed that the reversion actions have started.
  • a SON report is checked to see if the requested parameter revert has really been implemented by the SON controller, as sometimes parameter changes, including reversions, can fail for various reasons (e.g. hardware failures). If the revert requested in the last ROP failed, then the User should be informed at step 350 (in a User execution report for example), as human intervention is often required in the case of a parameter change failure.
  • the first part of the significance criteria is checked at step 320: whether the KPI's value has returned to and stayed within the pre-optimization levels (or better) for the minimum required number of ROPs (equal to or less than the monitoring period) after the last revert. If the first significance criterion is met, then, supposing that the revert has been successful, a penalty period is set at step 390 for the NE (or its neighbor) for the changes that led to KPI degradation; also SON weights are recalculated taking into account the KPI degradation to which a specific SON feature/policy/rule has led.
  • the setting k) described above in relation to figure 15 can be used to define how long the penalty period is.
  • a second significance criterion is checked - has the monitoring period finished; if not, then the Apparatus will wait at step 360 for the needed number of ROPs until at least one of the significance criteria is satisfied. If yes, then the Apparatus checks at step 340 if all the reverts assigned to the NE and/or its neighbors have been finished. If there is at least one more to implement, then the next revert should be performed in the next ROP at step 370.
  • At step 380 a deduction is drawn that the degradation was not caused by SON optimization activities, and the NE and/or its neighbors are included back into the optimization list, and the thresholds are recalculated, taking into account the data from the last ROP(s).
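The per-ROP decision logic of figure 17 can be condensed into a sketch like the following; the return strings and parameter names are illustrative assumptions, with the step numbers from the figure noted in comments:

```python
# Condensed per-ROP decision for the trial-reversion loop of figure 17.
def reversion_step(revert_ok, kpi_recovered, monitoring_done, reverts_left):
    if not revert_ok:
        return "inform_user"       # step 350: revert failed, a human is needed
    if kpi_recovered:
        return "set_penalty"       # step 390: revert worked, penalise the change
    if not monitoring_done:
        return "wait"              # step 360: wait further ROPs
    if reverts_left:
        return "next_revert"       # step 370: try the next reversion
    return "not_son_caused"        # step 380: degradation not due to SON

assert reversion_step(True, False, True, True) == "next_revert"
assert reversion_step(True, True, False, True) == "set_penalty"
```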
  • C-SON Centralized SON
  • D-SON Distributed SON
  • RET Remote Electrical Tilt
  • SON algorithms are run in the NEs themselves (e.g. Random Access Channel (RACH) Optimization in the eNodeB).
  • Figure 18 shows a time chart of steps and interactions by different entities in the case of a C-SON implementation of an embodiment. Time flows down the chart. In the left-most column are steps by the network 40. In the next column to the right are steps by an OSS/customer dbase 410. In the next column to the right are steps by the SON controller implemented as part of a C-SON arrangement 420. In the next column are steps by the apparatus 430. In the rightmost column are steps by the user 440. Each of the steps shown will now be described, using the reference numbers shown.
  • the Apparatus should request the history of KPIs (based on initial settings a), b), and c) described above).
  • By SON configuration is meant which features are turned on and which weight each feature/policy/rule has. Based on the weights, the SON can choose which actions to prioritize. This is an important input for the Apparatus, as its feedback to the SON comprises the revised weights. Also the list of optimized NEs is requested, as the thresholds are calculated on an NE+KPI basis.
  • Request historical PM data and current CM. This is sent from the C-SON to the OSS/customer dbase. In the existing solutions the C-SON downloads only the data for the last ROP to calculate its output configuration changes. In embodiments having dynamic KPI thresholds, the KPI data for the requested number of ROPs is requested from the customer OSS/Database and sent to the Apparatus so that the latter can calculate the thresholds for the next ROP based on the historical KPI data.
  • The first step is analyzing the KPI history and defining the most suitable algorithm for calculating the thresholds.
  • a number of methods can be used to calculate the PT and NT (in statistics they are usually referred to as the Upper Control Limit (UCL) and Lower Control Limit (LCL)).
  • Forecasting based and Heuristic limit based approaches can be used for setting the automated dynamic thresholds for each NE and chosen KPIs, and reference is made to the citations referred to above, in relation to figure 16.
  • the Apparatus can use the historical data as training data to choose the most suitable method for each situation.
  • the criterion for defining the most suitable method can be the F1 score.
  • the PT, NT and also ST and load indicator threshold are calculated for the next ROP.
  • SON execution. This step is the SON generating optimization actions and can be implemented using conventional optimization algorithms.
  • the inputs are typically user settings and the network's most recent CM and PM data.
  • the output of the execution can be an execution report, stating the proposed CM changes (which then need to be injected into the Network via the OSS), and logs which describe the execution process - whether any problems occurred, whether any NEs were excluded from the optimization and the reason for it, etc.
  • CM changes implementation. This step is a way of requesting execution of the desired optimization actions, sent from the C-SON to the OSS/customer dbase, and can be the same as conventional methods.
  • CM changes implementation. This step passes on the request from the OSS/customer dbase to the relevant NEs of the network. Again it can be implemented in the same way as conventional methods.
  • CM implementation. This step is a response to the request and is sent from the NEs of the network to the OSS/customer dbase. Again, it can follow conventional practice and can provide feedback on how successful each implementation was and, if any issues occurred during implementation, provide the reasons for the faults.
  • CM implementation In this step the OSS forwards the implementation logs to C-SON, so that C-SON is aware of any faults during the implementation. Again it can follow conventional practice.
  • PM for new ROP This step involves all the PM data being uploaded to the OSS at the end of each ROP for later storage in the Customer Database. Again, it can follow conventional practice.
  • CM and PM for new ROP This step involves the C-SON fetching the CM and PM data for each ROP to use them as the inputs for its activities. Again, it can follow conventional practice.
  • CM and PM for new ROP. This step is the response from the OSS/customer dbase to the C-SON. Both CM and PM data are loaded into the C-SON Database.
  • Analyze Execution Report, re-calculate thresholds and weights, create feedback report. This step by the apparatus can be implemented as described in more detail in Figures 16 and 17 for example.
  • This step of providing feedback to the C-SON from the apparatus can include sending a first output of a list of NEs, for which the parameter change reverts must be performed in the next ROP. All the NEs in this list are temporarily excluded from optimization, because their KPI(s) had breached the NT.
  • a general aim of the apparatus to protect the KPIs is to bring the KPIs to the previous (better) values, going back to the most stable parameter configuration.
  • a second part of the feedback can be execution feedback, which is for example an indication of how to bias the C-SON, such as the new weights assigned to the SON features/policies/rules, based on experience of the current and previous ROPs (for the case where the number of optimization ROPs is more than one).
  • By experience is meant, for example, a comparison of pre-optimization and post-optimization KPI values with the aid of the NT and PT calculated for each ROP.
  • the features/policies/rules which lead to KPI(s) improvement more frequently will receive higher weights compared to the ones that lead to KPI(s) degradation.
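The weight biasing could be sketched as a simple bounded update; the update rule, step size and bounds below are assumptions made for illustration, not taken from this disclosure:

```python
# Illustrative weight update for SON features/policies/rules: features that
# led to improvements gain weight, those that led to degradations lose it.
def update_weights(weights, outcomes, step=0.1, lo=0.0, hi=1.0):
    """outcomes: dict feature -> +1 (improvement) or -1 (degradation)."""
    new = dict(weights)
    for feature, outcome in outcomes.items():
        w = new.get(feature, 0.5) + step * outcome   # assumed default 0.5
        new[feature] = min(hi, max(lo, w))           # clamp to [lo, hi]
    return new

w = update_weights({"ANR": 0.5, "CCO": 0.5}, {"ANR": +1, "CCO": -1})
assert w["ANR"] > w["CCO"]
```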
  • User execution report is sent to the User. It can contain (but is not limited to):
  • CM changes implementation, including reversions. This step can be the same as step 5 described above, but with updated reversion list. The method can continue with a repeat of step 5.1 onwards, in a continuous loop.
  • Figure 19 shows a time chart of steps and interactions by different entities, similar to the chart of figure 18, but in this case for a D-SON implementation of an embodiment.
  • Time flows down the chart. In the left-most column are steps by an NE/D-SON 450.
  • In the next column to the right are steps by an OSS/customer dbase 410. In the next column to the right are steps by the apparatus 430.
  • In the rightmost column are steps by the user 440.
  • the interworking with D-SON is very similar to that of the C-SON example, but still has some differences, mostly related to the topology of the process.
  • Step 14 can be implemented in the same way as step 4 in the C-SON use case.
  • One difference is that the proposed changes are implemented directly in the NE (D-SON features are executed "inside" the NE), so the OSS is not needed for this.
  • Analyze Execution Report re-calculate thresholds & weights. This step by the apparatus includes the detection of degradation, and the assessment of what prompted the degradation, as described above in relation to figures 1 to
  • The feedback report from the Apparatus is forwarded to the NE/D-SON by the OSS/customer DB.
  • Figures 20, 21 amelioration based on other NE and service area throughput
  • Figure 20 shows a schematic view of an example of a 5G Service Area of a 5G network. It shows a cloud symbol representing a service area, in which a user equipment can get mobile services or connectivity services from one or more antennas. High power antennas node A, node B, node C, and node D are shown within the service area. Node D has associated low power antennas node D.1, D.2, and D.3. Node B has associated low power antenna node B.1. Neighbour Relations NR are shown by dotted lines between some of the antennas.
  • an optimization action included a change of a parameter of Node C, for example, for load balancing reasons (handled by the SON controller).
  • Node C got higher throughput (and so the throughput KPI breached its PT within the effect window after the parameter change).
  • neighbouring node A was monitored for throughput and it was found that its KPI degraded (in other words its throughput KPI breached its NT within the effect window after Node C's parameter change).
  • Figure 21 shows some steps by the apparatus according to this embodiment, shown in figure 20.
  • the apparatus monitors the first KPI to detect degradation, in this case a KPI of throughput of node A.
  • the apparatus receives an indication of occurrences of different optimization actions, including a change of parameter at node C.
  • the step of assessing automatically which of the optimization actions prompted the detected degradation in this case determines that the change at node C prompted the degradation at node A.
  • the apparatus provides feedback automatically to the SON controller to cause amelioration of the degradation based on which of the optimization actions prompted the degradation. This amelioration is of optimization actions relating to another of the group of NEs; the other NE of the group is node C in this case. Amelioration in this case is also dependent on the service area throughput:
  • the amelioration is based on the neighbours' KPIs and also on the Service Area KPIs.
  • the amelioration of optimization actions relating to another of the group of NEs has advantages of widening the scope of the KPI protection, to cover a wider range of optimization actions for a given KPI, or to cover a wider range of KPIs for a given optimization action. Therefore, it can improve scalability to larger networks or reduce the number of KPIs to be monitored, to reduce the complexity of the protection scheme, for a given size of network.
  • Figure 22 shows a schematic view of an example of a 5G Service Area of a 5G network. It shows a cloud symbol representing a service area, in which a user equipment can get mobile services or connectivity services from one or more antennas. High power antennas node A, node B, node C, and node D are shown within the service area. There are dotted lines to show neighbour relations NR between the nodes, which NRs can be either black- or white-listed. In this example the optimization action relates to handling of the NRs. There is a SON algorithm that blacklists NRs based on distance, targeting NRs between very distant Nodes (which could previously have been created by ANR).
  • An example optimization action is to set the distance threshold to 8km, meaning that all the NRs between Nodes situated more than 8km away from each other would be blacklisted (HO between them would no longer be allowed).
  • the NR to/from Node A from/to Node C [NR:A-C] got blacklisted because, as shown in the figure, it has a distance of 10km.
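The distance-based blacklisting can be sketched as follows; the node coordinates and the haversine helper are assumptions made for the illustration, not taken from this disclosure:

```python
# Illustrative blacklisting of neighbour relations (NRs) whose end nodes
# are further apart than a distance threshold, as in the 8 km example.
from math import radians, sin, cos, asin, sqrt

def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def blacklist_nrs(positions, nrs, max_km=8.0):
    """Return the NRs (pairs of node ids) longer than max_km."""
    return [nr for nr in nrs
            if distance_km(positions[nr[0]], positions[nr[1]]) > max_km]

# Assumed positions placing A and C roughly 11 km apart.
positions = {"A": (53.27, -9.05), "C": (53.35, -8.95)}
assert blacklist_nrs(positions, [("A", "C")]) == [("A", "C")]
```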
  • Node C's KPIs (e.g. traffic throughput) degrade as a result.
  • the Apparatus detects this and assesses that the blacklisting has prompted the degradation.
  • the blacklisted relation [NR:A-C] gets whitelisted (HO is allowed again).
  • the apparatus monitors the first KPI to detect degradation, in this case, the first KPI relates to a first NE, node C, of a group of nodes (A-D), such as throughput at node C.
  • the apparatus receives an indication of occurrences of different optimization actions including change of NR to blacklist [NR:A-C].
  • it assesses automatically which of the optimization actions prompted the detected degradation, in this case determining that the change to blacklist [NR:A-C] prompted the degradation at node C.
  • the apparatus then provides feedback automatically to the SON controller at step 734 to cause amelioration of the degradation based on which of the actions prompted the degradation.
  • This amelioration of optimization actions relating to another of the group of NEs is by whitelisting [NR:A-C]. Advantages of this example are similar to those discussed above in relation to figures 20 and 21.
  • the amelioration of optimization actions relating to another of the group of NEs has advantages of widening the scope of the KPI protection, to cover a wider range of optimization actions for a given KPI, or to cover a wider range of KPIs for a given optimization action.
  • FIG. 24 schematic view of apparatus for protecting KPI
  • FIG 24 shows a schematic view of a possible implementation of the apparatus for protecting the KPIs.
  • the apparatus includes a processing circuit 180, coupled via a bus to a storage medium in the form of a memory circuit 185 having a stored program 188. Also coupled via the bus to the processing circuit is a receiver/sender circuit 183 having an external path for connection to the SON controller and to parts of the network such as the OSS or NMS.
  • the program can comprise computer code which, when run by the processing circuit can cause the processing circuit to carry out any of the method steps described above in relation to figures 1 to 23 for protecting KPIs.
  • the memory circuit is an example of a computer program product comprising a computer program and a computer readable storage medium on which the computer program is stored.
  • the storage may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
  • Figure 25 schematic view of SON controller
  • FIG. 25 shows a schematic view of a possible implementation of the SON controller.
  • the SON controller includes a processing circuit 520, coupled via a bus to a storage medium in the form of a memory circuit 530 having a stored program 525. Also coupled via the bus to the processing circuit is a receiver/sender circuit 540 having an external path for connection to the apparatus and to parts of the network such as the OSS or NMS for a C-SON implementation. For a D-SON implementation, the external path could be used for coupling to other parts of the NE.
  • the program can comprise computer code which, when run by the processing circuit, can cause the processing circuit to carry out any of the method steps described above for the SON controller in relation to at least figures 2, 8 to 11, 13 to 15, 18 and 19.
  • the memory circuit is an example of a computer program product comprising a computer program and a computer readable storage medium on which the computer program is stored.
  • the storage may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
  • FIG. 26 schematic view of apparatus for protecting KPI
  • FIG 26 shows a schematic view of another possible implementation of the apparatus for protecting the KPIs.
  • the apparatus includes a monitor unit 191, a receiver 193, an assessment unit 195 and a feedback unit 197. These units are coupled together via a bus, and the receiver 193 is coupled to an external path for connection to receive indications from the SON controller and from parts of the network such as the OSS or NMS.
  • the parts of the apparatus can cooperate to carry out any of the method steps described above in relation to figures 1 to 23 for protecting KPIs.
  • the monitor unit can monitor the first KPI to detect degradation, for example by comparison with a threshold, or by algorithm, or any other way.
  • the receiver can receive indications of occurrences of various different optimization actions.
  • the assessment unit can automatically assess which of the optimization actions prompted the degradation detected.
  • the feedback unit can provide feedback automatically to the SON controller to cause it to ameliorate the degradation based on which of the optimization actions prompted the degradation.
  • Each of the units can be implemented using any conventional circuitry or processing hardware and may be integrated or divided in different ways.
  • Figure 27 schematic view of SON controller
  • Figure 27 shows a schematic view of a possible implementation of the SON controller.
  • the SON controller includes an optimization unit 544, a sender/receiver 550, and a control unit 560.
  • the various units are coupled together via a bus.
  • the sender/receiver 550 has an external path for connection to the apparatus and another external path to parts of the network such as the OSS or NMS for a C-SON implementation.
  • the external path could be used for coupling to other parts of the NE.
  • the various units can co-operate to carry out any of the method steps described above for the SON controller in relation to at least figures 2, 8 to 11, 13 to 15, 18 and 19.
  • Each of the units can be implemented using any conventional circuitry or processing hardware and may be integrated or divided in different ways.
  • KPI thresholds can automatically adjust based on fixed historical time periods (e.g. the last 2 weeks) or historical patterns (e.g. the last 10 Monday afternoons) to provide more accurate detection of degradation and hence more accurate feedback for the SON controller.
  • the examples with adaptive thresholds can guard different network elements and different ROPs more equally, so that the KPI protection can be more consistent.
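Selecting history by pattern, such as "the last 10 Monday afternoons", before computing the adaptive thresholds can be sketched as follows; the pattern representation and function name are assumptions for the illustration:

```python
# Illustrative selection of historical KPI samples matching a weekly pattern
# (same weekday and hour), keeping only the most recent matches.
from datetime import datetime

def matching_history(samples, weekday, hour, limit=10):
    """samples: list of (datetime, kpi_value) pairs, oldest first."""
    picked = [(ts, v) for ts, v in samples
              if ts.weekday() == weekday and ts.hour == hour]
    return picked[-limit:]  # the most recent `limit` matching ROPs

sample = [(datetime(2017, 3, 6, 14), 5.0),   # a Monday at 14:00
          (datetime(2017, 3, 7, 14), 6.0)]   # a Tuesday at 14:00
assert matching_history(sample, weekday=0, hour=14) == [sample[0]]
```

The selected values would then be fed into the threshold calculation (e.g. the median/MAD formulas described earlier) in place of a plain fixed-length history.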
  • Some examples classify NEs into groups, and allow optimization actions for NEs to be adapted from the detections, assessments and feedback of other similar NEs that have been optimized. These can thus give higher weight to actions that provided improvement and less weight to those that led to degradation, and widen the use of the feedback to ameliorate optimization actions on other similar NEs of the group.
  • the feedback can also be reported to the User regarding any significant KPI degradations/ improvements on NE level.
  • the examples which adapt their detections using historical data can help improve accuracy of detection as well as helping determine the significance of the KPI threshold breach and whether the breach is transient or valid.
  • the examples of the apparatus described can be used along with any automated optimization SON control process to protect the optimized system from degradation and provide feedback about how effective specific optimization actions are.
  • the apparatus can be implemented in a physical server or node or a virtual (e.g. Cloud) node with software package, running synchronized together with the SON controller, analyzing the network's output KPIs (usually PM files or streams) and giving feedback including commands to the SON controller such as to revert some of the parameter changes and/or to change weights given to how the optimization actions are determined, such as weighting specific SON features/policies/rules, or changing parameters or settings of SON features/policies/rules.
  • a maximum limit could be changed, such as allowing a maximum of 1 degree of uptilt/downtilt to be performed by a RET feature to be changed to allow maximum of 0.5 degrees when degradations due to the RET feature are detected.
  • Another optimization action example is load balancing.
  • This can apply when utilization is below a threshold (for example, in LTE, normally in a serving area or a cell coverage area, the Physical Resource Blocks (PRBs) are not utilized 100% of the time).
  • the SON controller can perform a load balancing between nodes or between service areas and maintain cell availability in a non-congested way.
  • One way of carrying out an optimization action is to have the SON controller cause the OSS to generate a proposed configuration change, for example in the form of a list of parameter changes. These configuration parameters are pushed from an OSS toward an RBS, and then the configuration changes are implemented in the RBS.
  • Another possible optimization action is a change in radio power output. This might be useful to save power consumption, or to increase coverage area, for example to match capacity to demand. Conventionally it requires a restart of an RBS to bring in the new power setting, and so it is usually carried out overnight. Similar considerations apply to a change in bandwidth, for example from 5 MHz to 10 MHz or vice-versa. In the case of 5G it is more likely to be more bandwidth, say 50 MHz to 100 MHz, or a change in band, say from the 700 MHz band to the 2100 AWS (Advanced Wireless Services) band.


Abstract

Protecting a first KPI of a communications network (40) from effects of different optimization actions by a SON controller (20) involves monitoring (31, 631, 731) the first KPI to detect degradation of the first KPI, and assessing automatically (33, 633, 733) which of the different optimization actions prompted the detected degradation. Feedback (34, 634, 734) of this to the SON controller causes it to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation. The detection of the degradation can be based on a time-variable first KPI threshold (38), the time variability being representative of past behaviour of that KPI before the optimization. Such feedback can enable better amelioration and a better trade-off between protecting KPIs and minimizing interference with optimization actions.

Description

PROTECTING KPI DURING OPTIMIZATION OF SELF-ORGANIZING
NETWORK
Technical Field
The present disclosure relates generally to methods of protecting a Key Performance Indicator KPI from degradation during optimization of a communications network by a Self-Organizing Network (SON) controller, and to corresponding programs for computers and corresponding program products, and to corresponding apparatus for protecting the KPI and to SON controllers arranged to co-operate with such apparatus.
Background
It is known to provide communications networks such as cellular networks for providing communication services for user equipments, UE. During the operational phase of a cellular network such as a radio access network, RAN, changes to parameters used for configuration are regularly required for problem resolution or network optimization for example. Self-Organizing Networks (SON) are known and in some cases they have a SON controller responsive to feedback of KPI values to optimize a KPI.
As mobile networks become more and more dense to cope with increasing traffic and new services, the Mobile Network Operators (MNOs) are looking for a way to automate most of the planning, maintenance and optimization processes. That is why the demand for Self-Organizing Network (SON) solutions is constantly growing. SON can include many features, such as Automated Neighbour Relations (ANR), Physical Cell Identity (PCI) Management, Coverage and Capacity Optimization (CCO) to name just a few. They can be designed to improve network Key Performance Indicators (KPIs) and/or reduce the amount of routine work that engineers have to do every day. Conventionally SON is not a "zero touch" solution, where the operator launches the features he wants and those features optimize the network performance without further human intervention. In reality human supervision and intervention are still needed, because in many cases a blindly switched on feature can bring more harm than benefit. In one conventional SON example, there is monitoring of defined network KPIs to detect a breach of a threshold predefined by the operator. This can enable the operator to manually select and revert the changes done by the SON. Thus the operator can take care that SON is not going to degrade the KPIs below a certain value.
Summary
An aspect of this disclosure provides a method of protecting a first KPI of a communications network from effects of different optimization actions by a SON controller of the communications network, having steps of monitoring the first KPI to detect degradation of the first KPI, and receiving from the SON controller an indication of occurrences of the different optimization actions. The method also involves assessing automatically which of the different optimization actions, if any, prompted the detected degradation of the first KPI, based on the indications of the occurrences, and providing feedback automatically to the SON controller, to cause the SON controller to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation. Any additional optional features can be added, and some are described below and set out in dependent claims.
Another aspect of the disclosure provides a computer program having instructions that when executed by processing circuitry cause the processing circuitry to carry out the methods set out above. Another aspect provides a computer program product comprising a computer readable medium having stored on it the above-mentioned computer program. Another aspect provides apparatus for protecting a first KPI of a communications network from effects of different optimization actions by a SON controller of the communications network, the apparatus having a processing circuit and a memory circuit, and the memory circuit having instructions executable by the processing circuit. The processing circuit when executing the instructions is configured to monitor the first KPI to detect degradation of the first KPI, and to receive from the SON controller an indication of occurrences of the different optimization actions. The processing circuit is also configured to assess automatically which of the different optimization actions, if any, prompted the detected degradation of the first KPI, based on the indications of the occurrences, and to provide feedback automatically to the controller, to cause the controller to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation.
Another aspect provides a system comprising the apparatus for protecting a KPI as set out above, and a SON controller for carrying out optimization actions on the communications network. The SON controller is connected to the apparatus for protecting the KPI to send the indication of occurrences of optimization actions to the apparatus for protecting the KPI and to receive feedback from said apparatus for protecting the KPI. Another aspect provides a SON controller for controlling optimization actions on a communications network, in cooperation with the apparatus for protecting a first KPI of the communications network from degradation by the optimization actions. The SON controller has a processing circuit and a memory circuit, the memory circuit having instructions executable by the processing circuit. The processing circuit when executing the instructions is configured to initiate optimization actions, and to send to the apparatus for protecting the first KPI, an indication of occurrences of the optimization actions. The processing circuit is also configured to receive feedback from the apparatus, based on which of the optimization actions is assessed to have prompted a degradation in the first KPI, and in response to the feedback, control the optimization actions to ameliorate the detected degradation, to protect the first KPI. Another aspect provides apparatus for protecting a first KPI of a communications network from effects of different optimization actions by a SON controller of the communications network, the apparatus having a monitor unit for monitoring the first KPI to detect degradation of the first KPI, and a receiver for receiving from the SON controller an indication of occurrences of the different optimization actions.
The apparatus also has an assessment unit for assessing automatically which of the different optimization actions, if any, prompted the detected degradation of the first KPI, based on the indications of the occurrences, and a feedback unit for providing feedback automatically to the controller, to cause the SON controller to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation.
Another aspect provides a SON controller for controlling optimization actions on a communications network, in cooperation with an apparatus for protecting a first KPI of the communications network from degradation by the optimization actions. The SON controller has an optimization unit for initiating optimization actions, and a sender/receiver for sending to the apparatus for protecting the first KPI, an indication of occurrences of the optimization actions, and for receiving feedback from the apparatus, based on which of the optimization actions is assessed to have prompted a degradation in the first KPI. The SON controller also has a control unit for controlling the optimization actions in response to the feedback, to ameliorate the detected degradation, to protect the first KPI. Any additional optional features can be combined with any of the aspects. Other effects and consequences will be apparent to those skilled in the art, especially compared to other prior art. Numerous variations and modifications can be made without departing from the claims of the present invention.
Brief Description of the Drawings:
Embodiments of the invention will now be described, by way of example, with reference to the appended drawings, in which:
Figure 1 shows an overall system view including embodiments,
Figures 2 and 3 show steps according to embodiments,
Figures 4 and 5 show steps and a graph relating to a time variable threshold representative of past behavior,
Figures 6 to 9 show steps including ways of assessing which optimization prompted the degradation,
Figures 10 and 11 show ways of ameliorating the optimization which prompted degradation,
Figure 12 shows steps including detecting degradation based on transience or reliability,
Figure 13 shows steps with detection of improvement and amelioration includes reinforcing improvement,
Figure 14 shows steps where the amelioration is based on detection and feedback relating to other NEs,
Figure 15 shows an overall view of steps of optimization and feedback,
Figure 16 shows a flowchart showing a more detailed view of optimization and feedback,
Figure 17 shows steps in ordering a sequence of multiple trial reversions,
Figure 18 shows a time chart of a C-SON example,
Figure 19 shows a time chart of a D-SON example,
Figures 20 and 21 show a schematic network view and steps relating to amelioration based on detection and feedback of other NEs and based on service area throughput,
Figures 22 and 23 show a schematic network view and steps relating to amelioration based on detection and feedback of other NEs, relating to neighbor relation black-/white-listing,
Figure 24 shows a schematic view of internal details of apparatus for protecting KPI,
Figure 25 shows a schematic view of internal details of an SON controller,
Figure 26 shows a schematic view of internal details of an apparatus for protecting KPI, and
Figure 27 shows a schematic view of internal details of an SON controller.
Detailed Description:
The present invention will be described with respect to particular embodiments and with reference to certain drawings but the scope of the invention is not limited thereto and modifications and other embodiments are intended to be included within the scope of the disclosure. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn to scale for illustrative purposes.
Definitions:
Where the term "comprising" is used in the present description and claims, it does not exclude other elements or steps and should not be interpreted as being restricted to the means listed thereafter. Where an indefinite or definite article is used when referring to a singular noun e.g. "a" or "an", "the", this includes a plural of that noun unless something else is specifically stated.
The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate.
References to computer programs or software can encompass any type of programs in any language executable directly or indirectly on processing hardware.
References to processors, hardware, processing hardware or circuitry can encompass any kind of logic or analog circuitry, integrated to any degree, and not limited to general purpose processors, digital signal processors, ASICs, FPGAs, discrete components or logic and so on. References to a processor are intended to encompass implementations using multiple processors which may be integrated together, or co-located in the same location such as the same rack, same room, same floor, or same building, as appropriate, or distributed at different locations for example.
Optimization action is intended to encompass for example any kind of alteration of configuration such as a parameter or sequence or instruction or circuitry or relationship between these, or anything that can define how any part of the network such as an NE, physical or virtualized entity, can operate or define their characteristics, relationships or identity, such as for example defining radio output power, radio frequencies, other communications parameters, or physical cell identity PCI or neighbor relations. The configuration can be stored in any format and be located anywhere convenient, either external to the VNF or service area, or internally.
References to configuration changes that could negatively impact a service to a UE using a VNF, if the change is made while that VNF is in use, are intended to encompass anything which could noticeably affect the service to a UE, or risk affecting it, such as changing a PCI, changing a radio frequency band or a radio output power, or anything which might cause the connection to the UE to be lost, or the bandwidth of a connection to be reduced, for example.
References to KPIs can encompass important values such as Call Drop Rate, DL Throughput, Call Setup Failure Rate, or any other values which can directly or indirectly indicate or predict any aspect of performance of a node, a link, or of any NE or component of an NE for example.
Abbreviations:
KPI - Key Performance Indicator
SON - Self-Organizing Network
C-SON - Centralized SON
D-SON - Distributed SON
MNO - Mobile Network Operator
NR - Neighbour Relation
ANR - Automated Neighbour Relations
ARIMA - Autoregressive Integrated Moving Average
PCI - Physical Cell Identity
RACH - Random Access Channel
RET - Remote Electrical Tilt
NT - Negative Threshold
PT - Positive Threshold
ST - Samples Threshold
NE - Network Element
ROP - Result Output Period
HO - Handover
UE - User Equipment
CCO - Coverage and Capacity Optimisation
NMS - Network Management System
Introduction
By way of introduction to the embodiments, some issues with conventional SONs will be explained.
a. In detecting degradation, the thresholds need to be defined and adapted manually; this process is not automated. As most KPIs naturally change value during the day/week/year (showing strong seasonality), absolute thresholds need to be periodically re-defined. This manual input means such solutions do not scale up well to more complex networks.
b. Use of fixed KPI thresholds means that some Result Output Periods (ROPs) and some NEs are guarded worse than others, with the end result of a lower probability of detecting a degradation for those ROPs and NEs. Special events are not spotted, and transient KPI degradations due to mass events or external events (when the NEs are overloaded) are treated the same way as real network degradations.
c. Current solutions cannot spot slow KPI degradations, which in many cases are more likely to happen than fast degradations.
d. Current SON solutions do not use information about how previous SON activities affect network KPIs as feedback to the SON. Thus the SON is acting blindly, without assessment of the results of its activities.
Figure 1, overall system view including embodiments
Figure 1 shows a schematic view of an apparatus 30 for protecting a first KPI, and typically many KPIs, from effects of optimization actions. The apparatus is shown connected to a SON controller 20 to receive indications of occurrences of optimizations from the SON controller, and to send feedback to cause the SON controller to ameliorate any degradation of the first KPI. The apparatus for protecting the KPI could be implemented on separate hardware such as a separate server or blade, or could be a subroutine or software module run on the same hardware as the SON controller for example. One possible embodiment is to have the apparatus fully implemented in the Cloud. The apparatus is shown coupled to an NMS 60 for receiving KPI values from network elements NE 50 of the communications network 40. The SON controller is shown coupled to the communications network via the NMS 60 to enable the SON controller to initiate optimization actions. The apparatus is suitable for use with SON controllers in centralized (C-SON) (as shown) or distributed (D-SON) implementations. In the distributed case, the SON functionality is typically integrated in the NEs. The optimization actions can for example involve sending or altering a parameter or rule or instruction of an NE, as illustrated, or changing an algorithm which governs the NE behavior, by generating the parameter/rule/instruction for example, or changing a relationship between NEs for example.
The apparatus can help to effectively guard network KPIs during optimization actions (e.g. by the SON controller). In some cases, it can provide feedback to cause reversion of the optimization actions such as parameter changes that are assessed as having prompted the degradation of performance. In some cases, the parameter changes that have led to performance improvement are also assessed and reported to the SON controller. This can help enable smarter experience-based amelioration compared to conventional arrangements. In some examples described in more detail below, degradation of KPIs can be detected based on time variable KPI thresholds representative of their past behavior. The threshold used for detections can be specific to each Network Element, and varied for different periods of time. For this, advanced statistical analysis of historical and current KPI values can be applied.
Figures 2,3, actions of apparatus, including embodiments
Figure 2 shows a view of a time chart with time flowing down the chart, and showing in a left hand column, some actions of the apparatus for protecting the KPI, and in a right hand column, some actions of the SON controller 20. At step 31, the apparatus monitors the first KPI to detect degradation. This can involve comparison with a threshold, or by algorithm, or any other way. At step 32, the apparatus receives indications of occurrences of various different optimization actions from the SON controller. At step 33, the apparatus automatically assesses which of the optimization actions prompted the degradation detected. This can be implemented in various ways, and some will be described below with reference to subsequent figures. At step 34, the apparatus provides feedback automatically to the SON to cause it to ameliorate the degradation based on which of the optimization actions prompted the degradation. The actions of the SON controller include step 21 of initiating optimization actions on the communications network, and sending an indication of these actions to the apparatus for protecting the KPI. At step 22 the SON controller receives feedback from the apparatus relating to ameliorating the degradation. At step 23, the SON controller ameliorates the degradation based on the feedback of which of the optimization actions prompted the degradation. Again, this can be implemented in various ways and some will be described in more detail below with reference to other figures.
An advantage of this feature of figure 2 of assessing which of the optimization actions prompted a particular KPI degradation, is that the feedback to cause an attempted amelioration can be based on that information and thus be more focused or more selective, so that the amelioration can be more likely to succeed or succeed more rapidly than otherwise. This is especially useful in SON situations where often KPIs are dependent in complex ways on various different tunable parameters and unpredictable variables. By having a separate or delegated function for protection of particular KPIs, optimization actions by the SON controller can be planned without having to take all responsibility for this concern, which can help avoid adding further complexity to the SON controller. This can enable a better trade-off between potentially conflicting aims of protecting the more valuable KPIs from unpredicted degradation, and minimizing interfering with optimization actions, so that more optimization attempts can take place, more rapidly, and in more complex networks with inherently less predictable effects.
Figure 3 shows just the steps of the apparatus, without showing the SON controller. So at step 31, the apparatus monitors the first KPI to detect degradation, for example by comparison with a threshold, or by algorithm, or any other way. At step 32, the apparatus receives indications of occurrences of various different optimization actions. At step 33, the apparatus automatically assesses which of the optimization actions prompted the degradation detected. At step 34, the apparatus provides feedback automatically to the SON controller to cause it to ameliorate the degradation based on which of the optimization actions prompted the degradation.
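The monitor/assess/feedback loop of steps 31 to 34 can be sketched as below. This is only an illustrative sketch, not part of the disclosed implementation; the function and parameter names (for example `protect_kpi` and `send_feedback`) are assumptions introduced for illustration.

```python
def protect_kpi(kpi_value, nt_threshold, recent_actions, send_feedback):
    """One iteration of the KPI-protection loop (cf. steps 31-34).

    kpi_value: latest value of the first KPI (lower is worse here).
    nt_threshold: negative threshold below which degradation is detected.
    recent_actions: list of (action_id, relatedness_score) pairs received
        from the SON controller (cf. step 32).
    send_feedback: callable invoked with the suspect action (cf. step 34).
    """
    # Step 31: detect degradation of the first KPI.
    if kpi_value >= nt_threshold:
        return None  # no degradation detected, nothing to do
    if not recent_actions:
        return None  # degradation detected but no candidate actions
    # Step 33: assess which action most likely prompted the degradation
    # (here, simply the most closely related recent action).
    suspect = max(recent_actions, key=lambda a: a[1])[0]
    # Step 34: feed back to the SON controller to cause amelioration.
    send_feedback(suspect)
    return suspect
```

In use, the SON controller would react to `send_feedback` by reverting or otherwise ameliorating the identified action, as described in the embodiments that follow.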
Amelioration is defined as encompassing reverting the optimization action found to be responsible, or changing other optimization actions known to have equivalent effects to such reverting, or compensating in any way for the detected degradation, including biasing the optimization algorithm being used, or biasing a selection of which of a number of optimization algorithms to use, for example by adjusting a weighting.
Optimization actions are defined as encompassing actions for network or network element optimization, for network reconfiguration (for example to incorporate new cells or service areas) or for network self-healing (after a fault or outage) for example.
Figures 4,5, time variable threshold representative of past behavior
Figure 4 shows steps similar to those of figure 3, and corresponding numerals have been used, but in this case, the step 38 of monitoring the first KPI involves the detecting being based on a first KPI threshold, being a time variable threshold representative of past behavior of the first KPI before the optimization. Figure 5 shows a graph of an example of positive and negative KPI thresholds for a first KPI for a Network Element NE1 and a similar KPI for Network Element NE2, with time variability, where time flows from left to right. Of the six lines shown, a top line is a dotted line showing a positive threshold PT for the first KPI for NE1, used for detecting improvement in KPI1. The second from top line is a solid line showing a real KPI value for the first KPI. The third from top line is a dashed line showing a negative threshold NT for the first KPI for NE1, used for detecting degradation in that first KPI.
The fourth from top line in figure 5 is a double line showing a positive threshold PT for the KPI for NE2, used for detecting improvement in the KPI. The fifth from top line is a longer dashed line showing a real KPI value for the second KPI. The sixth from top line is a dashed double line showing a negative threshold NT for the KPI for NE2, used for detecting degradation in that KPI.
These dynamic thresholds can be generated externally and retrieved by the apparatus, or can be generated by the apparatus. The apparatus can use them to analyze SON optimization actions on a network element for degradation, and in some cases to assess both their positive and negative impact on KPIs. This analysis is then fed back into a closed loop system to guard the KPIs. One type of feedback is "slow feedback" for ensuring that SON actions providing benefit to the network are given greater importance/weight while those providing less benefit or degradation are given less importance/weight in determining the next optimization actions. Another type of feedback is faster, for causing SON actions to be reverted when Negative Threshold (NT) breaches occur (considered as KPI degradation). Feedback is provided to the SON apparatus about such reversions and degradations to ensure that the actions resulting in the negative impact are given less weight. This can help ensure that SON actions leading to negative impacts on the network are demoted. This in turn can lead to a better trade-off between protection of KPIs and better optimization.
When PT breaches are detected, (considered as improvements) after SON activities, the feedback may leave these optimizations unchanged and report the positive feedback to the SON apparatus so that in next runs or iterations, the algorithms/rules used to improve certain KPI(s) will have more weight over the whole optimized cluster. The overall result is that the most beneficial actions are prioritized, and preferably prioritized on a per network element basis.
The time variability of the KPI thresholds can enable accounting for seasonal, daily, or more granular variations in KPI values, by automatically calculating the KPI Positive Threshold (PT) and NT based on the history of the KPI values using statistical methods. Thresholds can be recalculated after the end of each ROP to enable them to take into account the most recent data. This can be done at any network element (NE) level (e.g. cell level, base station, neighbour relation etc.) and with a defined time granularity (e.g. 1 hour ROP) given that the historical data has the same or higher granularity. Both thresholds (PT, NT) can be recalculated using the KPI values for the recent ROP. In a case of breaching NT after SON activities, the thresholds may be based on the same historical data set for the next ROP. This can help ensure that the performance of the SON actions is consistently evaluated in a dynamic network environment.
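One simple way to compute such time variable thresholds can be sketched as follows. The disclosure refers generally to statistical methods (the abbreviations list mentions ARIMA); purely as an illustration, this sketch uses a per-hour-of-day mean plus or minus k standard deviations over the training window, and the function name and signature are assumptions.

```python
import statistics

def thresholds_for_rop(history, hour, k=2.0):
    """Compute (NT, PT) for the ROP at a given hour of day.

    history: list of (hour_of_day, kpi_value) samples from the
        training dataset (e.g. the last 2-3 weeks of ROPs).
    k: number of standard deviations around the seasonal mean.
    Returns (negative_threshold, positive_threshold).
    """
    # Keep only the samples for this hour of day, capturing seasonality.
    values = [v for h, v in history if h == hour]
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return mean - k * sd, mean + k * sd
```

Recalculating after each ROP simply means re-running this on the updated history, so each NE and each time of day gets its own pair of thresholds.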
Figures 6 to 9, ways of assessing which optimization prompted the degradation
Figure 6 shows steps similar to those of figure 3, and corresponding numerals have been used, but in this case, the step 33 of assessing automatically which of the different optimization actions prompted the degradation, comprises assessing 35 based on an expected time delay and an actual time delay between the respective occurrence and the detection of the degradation. An advantage of this is that it is a convenient additional way of assessing which optimization prompted the degradation.
In some cases, the assessment of which of the optimization actions prompted the degradation can also be based on how closely related to a KPI is the respective optimization action. This can encompass a closeness in terms of how many linked events there are in a chain or tree of linked events leading to the degradation, or a predetermined likelihood of causation, for example in terms of whether it relates to a same NE, a downstream NE or neighbouring NE, and if neighbouring, then how close a neighbor in terms of coverage overlap or in terms of handover statistics and so on.
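The delay-based part of this assessment (step 35) can be sketched as a score that is highest when the degradation appears after the expected delay and falls off as the actual delay departs from it. The scoring function and all names here are illustrative assumptions, not the disclosed method.

```python
def delay_score(expected_delay, actual_delay, tolerance=1.0):
    """Score in (0, 1]: 1.0 when the degradation appeared exactly when
    expected after the action, lower the further off the timing is."""
    return 1.0 / (1.0 + abs(actual_delay - expected_delay) / tolerance)

def rank_actions(actions, detection_time):
    """actions: list of (action_id, occurrence_time, expected_delay).
    Returns action ids ordered from most to least likely cause,
    based only on timing (relatedness could be folded in as a weight)."""
    scored = [(delay_score(exp, detection_time - t), aid)
              for aid, t, exp in actions]
    return [aid for _, aid in sorted(scored, reverse=True)]
```

A relatedness factor (same NE, downstream NE, closeness of neighbor) could be multiplied into each score before ranking, combining both criteria described above.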
Figure 7 shows steps similar to those of figure 6, and corresponding numerals have been used, but in this case, the step 33 of assessing automatically which of the different optimization actions prompted the degradation, also comprises assessing 36 by causing the controller to selectively make a trial reversion of at least one of the optimization actions, and a step of detecting whether the trial reversion results in reduction of the degradation. An advantage of this is that it can give more certainty in the assessment, though it takes some time. This can lead to more certainty in the amelioration, which in turn can lead to a better trade off between protection of KPIs and better optimization.
Figure 8 shows steps similar to those of figures 2 and 6, and corresponding reference numerals have been used. The step of assessing automatically which of the different optimization actions prompted the degradation, comprises assessing 41 by causing the SON controller to selectively make a trial reversion by sending feedback. At step 42, the SON controller makes the trial reversion of the optimization action identified in the feedback. At step 43, the apparatus detects whether the trial reversion has reduced the degradation, and assesses whether this selected optimization had prompted the degradation based on this detection, and optionally based on other factors as described.
Figure 9 shows steps similar to those of figure 3 and figure 7, and corresponding numerals have been used, but in this case, the step 33 of assessing automatically which of the different optimization actions prompted the degradation, comprises a step 37 of, in a case where there are more than one of the optimization actions to be reverted, carrying out respective trial reversions and corresponding detection of reduction in degradation sequentially in order, the order being based on how closely related to the first KPI are the different optimization actions. How closely related can encompass a closeness in terms of how many linked events there are in a chain or tree of linked events leading to the degradation, or a predetermined likelihood of causation, for example in terms of whether it relates to a same NE, a downstream NE or neighbouring NE, and if neighbouring, then how close a neighbor in terms of coverage overlap or in terms of handover statistics and so on. An advantage of this is that it can help speed up the assessment compared to a random order for example, and thus speed up the amelioration. This in turn can lead to a better trade off between protection of KPIs and better optimization.
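The sequential trial reversion of step 37 can be sketched as below. The callables `revert` and `degradation_reduced` stand in for interactions with the SON controller and the KPI monitor; all names are illustrative assumptions.

```python
def trial_reversions(actions_by_relatedness, revert, degradation_reduced):
    """Trial-revert candidate actions one at a time, most closely related
    to the first KPI first, stopping at the first reversion that reduces
    the degradation.

    actions_by_relatedness: action ids ordered most closely related first.
    revert: callable that toggles an action (trial reversion / re-apply)
        via the SON controller.
    degradation_reduced: callable returning True if the first KPI
        recovered after the latest trial reversion.
    Returns the action assessed to have prompted the degradation, or None.
    """
    for action in actions_by_relatedness:
        revert(action)  # trial reversion of this candidate
        if degradation_reduced():
            return action  # keep this reversion; assessment complete
        revert(action)  # re-apply the action and try the next candidate
    return None
```

Ordering by relatedness means the most probable culprits are tried first, which is the speed-up over a random order noted above.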
Figures 10, 11, ways of ameliorating the optimization which prompted degradation
Figure 10 shows steps similar to those of figure 2, and corresponding reference numerals have been used. The step of providing feedback involves a step 44 by the apparatus of sending an instruction to the SON controller to cause it to at least partially revert the optimization assessed to have prompted the degradation. The SON controller receives the feedback at step 22 and at step 24, it ameliorates the degradation based on the feedback. This amelioration includes reverting, partially or fully, the optimization action assessed to have prompted the degradation. An advantage of this is that it can help provide more rapid amelioration.
Figure 11 shows steps similar to those of figure 2, and corresponding reference numerals have been used. The step of providing feedback involves a step 45 by the apparatus of providing feedback to cause the SON controller to ameliorate by biasing how the optimization is determined by the SON controller. The feedback can include an indication of how to bias how the SON controller determines the optimization actions. An advantage of this is that it can help improve how the optimization is determined, whether that be selecting better algorithms for the optimization, or altering parameters or rules, or any other way. This can provide slower, longer term improvement in the optimization, compared to reversion of optimization actions, and can be combined with such reversion.
As shown in step 45, the sending of the indication in the feedback can involve sending a parameter for use by an optimization algorithm, and/or sending a weighting for use in selecting between different optimization algorithms, for example. As shown in step 25, the amelioration by biasing can involve the SON controller using the feedback by selecting an optimization algorithm for generating the optimization actions based on the weighting, or by using the parameter sent in the feedback, in an optimization algorithm for example. An advantage of these examples is that they are particularly useful and convenient ways of enabling the biasing of the optimization to improve it based on experience. Again, this can lead to a better trade off between protection of KPIs and better optimization.
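The weighting-based biasing of steps 45 and 25 can be sketched as simple weight bookkeeping over the available optimization algorithms. The multiplicative update and all names here are illustrative assumptions, one of many possible ways to realize the weighting described above.

```python
def update_weight(weights, algorithm, improved, factor=1.2):
    """Slow feedback: promote algorithms whose actions improved the
    first KPI, demote those whose actions degraded it."""
    weights[algorithm] *= factor if improved else 1.0 / factor
    return weights

def pick_algorithm(weights):
    """Select the currently highest-weighted optimization algorithm
    for the next run or iteration."""
    return max(weights, key=weights.get)
```

Over successive runs, the algorithms that have historically benefited the monitored KPIs accumulate higher weights and are selected more often.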
Figure 12, detect degradation based on transience or reliability
Figure 12 shows steps similar to those of figure 3, and corresponding numerals have been used, but in this case, step 39 of monitoring the first KPI involves the detecting being based on an assessment of the degradation for transience and/or reliability. An advantage of this is that it can help avoid inaccurate detections and thus help improve accuracy of assessments of which optimization action prompted the degradation. This in turn can lead to a better trade-off between protection of KPIs and better optimization.
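One simple way to screen out transient breaches is to confirm a degradation only after several consecutive ROPs breach the negative threshold. The abbreviations list includes a Samples Threshold (ST); its use here as a consecutive-breach count is an assumption for illustration only.

```python
def confirmed_degradation(kpi_samples, nt, st=3):
    """Return True only if the last `st` consecutive KPI samples all
    breach the negative threshold NT (lower is worse here), filtering
    out transient dips such as a single overloaded ROP during a mass
    event."""
    if len(kpi_samples) < st:
        return False  # not enough samples yet to confirm reliably
    return all(v < nt for v in kpi_samples[-st:])
```

A single bad ROP then never triggers amelioration on its own, which matches the aim of treating mass-event transients differently from real network degradations.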
Figure 13, detect improvement and amelioration includes reinforcing improvement
Figure 13 shows steps similar to those of figure 2, and corresponding reference numerals have been used. The step of monitoring the first KPI is now carried out 51 to detect degradation or improvement. The assessing now involves assessing automatically 53 which of the different optimization actions, if any, prompted the detected improvement of the first KPI, based on the indications of the occurrences. The step of providing the feedback automatically to the controller, additionally 54 causes the SON controller to reinforce the detected improvement, based on which of the optimization actions is assessed to have prompted the improvement. This can be additional to detecting and responding to degradation of the first KPI. An advantage of detecting and responding to improvements is that it can help to improve the optimization more rapidly than only detecting degradations. This in turn can lead to a better trade off between protection of KPIs and better optimization.
Figure 14, amelioration includes actions relating to other NEs
Figure 14 shows steps similar to those of figure 2, and corresponding reference numerals have been used. The step of providing feedback now has the condition that, for the case that the first KPI relates to a first NE of a group of NEs, there is a step 64 of providing feedback to cause the SON controller to ameliorate optimization actions relating to another of the group of NEs. An advantage of this is that it can improve scalability to larger networks or reduce the number of KPIs to be monitored, to reduce the complexity for a given size of network. There are corresponding steps by the SON controller of receiving the feedback 22 and ameliorating 65 the detected degradation and other optimization actions relating to the other NEs based on the feedback.
References to network entities are intended to encompass any type or level of entity, for example from service area, to node, to parts of nodes, to other managed or addressable elements or components of nodes or of network management components, and can also include non-physical entities such as relationships between elements (such as neighbor relations between nodes), or virtualized elements, or classes or groups of elements, for example.
Figure 15, overall view of steps of optimization and feedback
Figure 15 shows an overall view of steps of operation of a system according to an embodiment, the system including the apparatus and the SON controller, some of the steps forming a loop. At step I, a user enters initial configuration information. There are many possible initial settings for the apparatus; some examples are set out here as default parameters, which can be customized by the user:
a) KPIs to monitor. The user can set a list of KPIs which are most important to him, like Call Drop Rate, DL Throughput, Call Setup Failure Rate etc.
b) ROP definition (e.g. 1 hour).
c) Training Dataset Definition: the period taken as the training dataset (KPI history), in ROPs. For effective thresholds calculation there should be at least 2-3 weeks of data for each KPI. This defined number of ROPs can be used as a constant number of inputs for thresholds calculation, meaning that when data for a new ROP is available, the data for the oldest ROP can be deleted (or another setting can be introduced for this purpose).
d) ROPs to exclude from KPI history. If the user knows about some major network problems (e.g. long power outages), which affected the KPIs, the affected ROPs can be removed and instead the algorithm could take interpolated values for thresholds calculation.
e) Degradation Definition: for each KPI it should be stated what is considered a degradation (e.g. an increase or decrease). The opposite is automatically considered an improvement.
f) Significance criteria: the number of times in a specified period the KPI value must fall below/jump above the thresholds in order for the threshold breach to be considered significant (this is called the first significance criterion), e.g. 2 times within 6 ROPs (the latter is called the monitoring period, or second significance criterion). The same criteria can be used to ensure that a KPI has returned to its pre-optimization values after an NT breach and revert actions (this is not a limitation; different significance criteria could be set for detecting a significant breach and detecting the return to normal values). This setting is designed to cope with transient KPI behavior.
g) The "effect window", if used as a way of assessing which optimization action prompted the degradation: If the Apparatus spots a KPI threshold breach, there are two options.
First, if there were no SON activities on the NE or its neighbours within the "effect window" (set as a number of ROPs, for example), then the spotted anomaly is not assumed to be directly related to SON, and as a consequence there are no parameter changes to revert and no improvement/degradation to feed back to SON.
Second, if there were SON optimization activities on the NE or its neighbours some number of ROPs back, and that number of ROPs is within the "effect window", then a revert and feedback should take place, as it is deduced that the optimization prompted the degradation.
h) Minimum number of samples for each KPI. This can include how many samples should be collected in each ROP in order to consider the KPI value in that ROP as reliable. This could be set manually as an absolute value or calculated automatically based on the history of the number of samples. If in a specific ROP the number of samples taken to calculate a KPI is less than the Samples Threshold (ST), then even if there is a KPI threshold breach, it is discarded as the reliability criterion is not fulfilled. This is to cope with the "small numbers effect", when for example having one or just a few failures would lead to a very high failure rate due to having too few samples.
i) Mass event or external event related settings. This can be an absolute or automatically calculated threshold for an indicator reflecting NE load (e.g. PRB utilization in an LTE cell). If the load indicator is higher than the threshold, then even if there is a KPI threshold breach, it is discarded, as during this ROP the NE was overloaded due to a mass event (like a music festival, for instance).
j) Data Reliability Settings: the minimum value for the NE's and its neighbors' availability to consider the KPI values in a ROP as reliable. This is to discard the ROPs with outages.
k) Penalty period. Once a KPI significantly breaches the NT for a specific cell, this setting specifies how long the SON action prior to the degradation is prohibited for the degraded cell.
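The initial settings a) to k) above can be gathered into a single configuration object. The following sketch is purely illustrative: all field names and default values are assumptions, not part of the described apparatus.

```python
from dataclasses import dataclass, field

# Hypothetical representation of the default parameters a)-k) described above.
@dataclass
class MonitorConfig:
    kpis: list = field(default_factory=lambda: [
        "CallDropRate", "DLThroughput", "CallSetupFailureRate"])  # a)
    rop_hours: int = 1                # b) ROP definition
    training_rops: int = 3 * 7 * 24  # c) ~3 weeks of hourly ROPs
    excluded_rops: list = field(default_factory=list)  # d) known outages
    degradation_is_increase: dict = field(default_factory=lambda: {
        "CallDropRate": True, "DLThroughput": False})  # e)
    min_breaches: int = 2            # f) first significance criterion
    monitoring_period_rops: int = 6  # f) second significance criterion
    effect_window_rops: int = 4      # g) window linking a SON action to a breach
    min_samples: int = 50            # h) Samples Threshold (ST)
    load_threshold: float = 0.9      # i) e.g. PRB utilization cap
    min_availability: float = 0.99   # j) NE availability floor
    penalty_rops: int = 24           # k) penalty period length
```

A user interface could expose these defaults for customization before step II begins.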
At step II of figure 15, the apparatus calculates or retrieves suitable thresholds for use in detecting degradation and/or improvement. At step III, the SON initiates or makes some change to the optimization actions or reverts previous optimization actions. At step IV, the apparatus decides either the optimization is acceptable or should be reverted and feeds this back to the SON controller. This can involve monitoring the first KPI and using the KPI threshold to determine if the optimization has prompted a degradation which is unacceptable, as described above in relation to other figures. At step V, the apparatus recalculates the KPI thresholds used in step IV, to update the thresholds, to adapt them to recent behavior of the KPI, as described above in relation to figure 5 for example. At step VI, the apparatus optionally reports to the user periodically, and returns to the start of the continuous loop, to step III, to repeat steps III to VI.
Figure 16, more detailed view of optimization and feedback
Figure 16 shows a flow chart showing an example of some of the actions of the apparatus according to an embodiment, in more detail than figure 15. At step 200, the apparatus first looks through the SON executions history and splits the NEs into two groups: first, NEs which have not been optimized by SON within the effect window; second, the ones which have been optimized by SON within the effect window. For each NE, if it is in the first group, the Apparatus simply recalculates at step 300 the NT, PT and reliability criteria (assuming the automated option had been chosen by the User; by User is meant a Mobile Network Operator (MNO) engineer responsible for the network performance/network optimization). If the current NE is in the second group, the Apparatus performs the following steps 210 to 280. The performance management (PM) data, in the form of values of the first KPI, typically received from the NMS, is compared to previously calculated KPI thresholds at step 210. The NT and PT thresholds can be calculated based on historical data and can represent expected normal data boundaries or behaviors. Thresholds can be calculated by combining a number of statistical learning functions in a workflow as described for example in "MingXue Wang and Sidath Handurukande. A Streaming Data Anomaly Detection Analytic Engine for Mobile Network Management, IEEE International Conference on Cloud and Big Data Computing, 2016". Given a sequence of values X as a time series, NT and PT can be calculated based on robust statistics, i.e., the median and the Median Absolute Deviation (MAD).
threshold(X) = median(X) ± 3 * median(|X − median(X)|)
To handle seasonal patterns, one can assume a constant periodicity p exists in the series. The historical data for the current time t, X_t, is selected based on the following equation to handle the seasonality for the range functions.
X_t = x_{t−f−w}, x_{t−f−w−1}, ..., x_{t−f}, ..., x_{t−f+w−1}, x_{t−f+w}, where f = 0, p, 2p, 3p, ..., and w defines the window size around the current time, taking account of neighbours' data to increase the statistical sample size and also to relax the periodicity value p. For example, the 7 and 9 o'clock data would also be used for calculating the limits of the 8 o'clock time window; a peak that normally happens at 7 o'clock but occurs at 8 o'clock is still considered normal. As a result, for example, if there is a weekly pattern, only the related time windows of previous Mondays' data will be used to calculate the thresholds for the current Monday's time points.
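The seasonal selection above can be sketched in code. The function below is a hypothetical illustration of the selection equation; the function name and the handling of series boundaries are assumptions not taken from the patent.

```python
def seasonal_history(x, t, p, w):
    """Select historical values around the same time-of-period as index t.

    For each offset f = 0, p, 2p, ... the window x[t-f-w .. t-f+w] is
    included, so neighbouring time slots relax the strict periodicity p
    (e.g. 7 and 9 o'clock data help bound the 8 o'clock window).
    """
    selected = []
    f = 0
    while t - f >= 0:
        lo = max(0, t - f - w)            # clamp at series start
        hi = min(len(x) - 1, t - f + w)   # clamp at series end
        selected.extend(x[lo:hi + 1])
        f += p
    return selected
```

For an hourly series with a daily period p = 24 and window w = 1, the 50th point draws on points 49-51, 25-27 and 1-3.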
However, for network KPI values, the distribution of X_t is in most cases fairly skewed or asymmetric, based on observations of many real-world network datasets. Calculating both NT and PT using the same formula would lead to the threshold being overestimated on the less-skewed side. Hence, it is preferred to divide the original data X_t into two subsets based on the median and to find NT and PT separately.
PT = threshold(X′), where X′ ⊆ X_t and every x in X′ satisfies x ≥ median(X_t)
NT = threshold(X′), where X′ ⊆ X_t and every x in X′ satisfies x ≤ median(X_t)
PT and NT change dynamically according to the behaviors of the data. As a result, the apparatus detects NE performance degradation and improvement according to each NE's own behaviors. Other approaches, such as those based on ARIMA, Holt-Winters, etc. can also be used to calculate PT and NT (see for example "Ajay Mahimkar, Ashwin Lall, Jia Wang, Jun Xu, Jennifer Yates, Qi Zhao, Synergy: Detecting and Diagnosing Correlated Network Anomalies").
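The median/MAD threshold formula and the median-split derivation of NT and PT above can be sketched as follows. This is a minimal illustration of the statistics described, not the engine of the cited paper.

```python
import statistics

def mad_threshold(values):
    """Return (lower, upper) bounds median(X) +/- 3 * MAD, per the formula above."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med - 3 * mad, med + 3 * mad

def nt_pt(values):
    """Split the data at the median and derive NT and PT separately,
    avoiding over-estimation of the threshold on the less-skewed side
    of an asymmetric KPI distribution."""
    med = statistics.median(values)
    upper = [v for v in values if v >= med]
    lower = [v for v in values if v <= med]
    pt = mad_threshold(upper)[1]   # Positive Threshold: upper bound only
    nt = mad_threshold(lower)[0]   # Negative Threshold: lower bound only
    return nt, pt
```

For a skewed KPI such as call drop rate, the NT derived from the lower half stays tight while the PT from the upper half tolerates the long tail.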
At step 220 the monitoring involves detecting whether there was a breach of one of the thresholds (NT, PT). If so, then at step 230 the reliability criteria are checked. If at least one of the reliability criteria (for example minimum number of samples, mass event flag - NE load indicators, NE availability) is not fulfilled, then the Apparatus does not provide any feedback (including reversions) to the SON about the NE, and the method goes to step 300.
If the reliability criteria have been fulfilled at step 230, then the significance criteria are checked at step 240. This can mean for example determining whether the NE has breached a threshold (either NT or PT) a given number of times within a given number of ROPs. If not (for example if this is the first breach out of the minimum two needed within a 6-ROP period), then any optimization on the NE and its neighbors is stopped by including them in the temporary exclusion list until the end of the monitoring period at step 270.
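The reliability check of step 230 and the significance check of step 240 can be sketched as simple predicates. Function names, argument shapes and default values here are illustrative assumptions.

```python
def breach_is_reliable(samples, load, availability,
                       min_samples=50, load_cap=0.9, min_avail=0.99):
    """Step 230 sketch: discard a threshold breach when the ROP data is
    unreliable: too few samples ("small numbers effect"), an overloaded
    NE (mass event), or low NE availability."""
    return (samples >= min_samples
            and load <= load_cap
            and availability >= min_avail)

def breach_is_significant(breach_history, min_breaches=2, period=6):
    """Step 240 sketch: a breach is significant only if breaches occurred
    at least min_breaches times within the last `period` ROPs.
    breach_history is a list of 0/1 flags, one per ROP, newest last."""
    return sum(breach_history[-period:]) >= min_breaches
```

Only breaches that pass both predicates proceed to the degradation/improvement check of step 250.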
If the significance criteria are fulfilled, then the Apparatus checks at step 250, based on setting e) described above, whether the breach is a degradation or an improvement. If it is an improvement, then the thresholds (NT, PT, and also the reliability thresholds, in case the User had set them to be calculated automatically) are recalculated at step 270 taking into account the PM data for the last ROP; the SON weights are recalculated at step 280 taking into account the KPI improvement to which a specific SON feature/policy/rule has led. This feature/policy/rule will have an increased weight in the next ROP. This can be realized in two ways: either increasing the weight for all the optimized NEs, or increasing it only for NEs with similar KPI values/behavior (for similarity analysis different techniques could be used, e.g. clustering).
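The weight recalculation of step 280 can be sketched as below. The patent does not fix a specific update rule, so the additive step used here is an assumption; only the direction (improvement increases a feature's weight, degradation decreases it) follows the text.

```python
def update_weights(weights, outcomes, step=0.1, floor=0.0):
    """Recalculate SON feature/policy/rule weights from ROP experience.

    outcomes maps a feature/policy/rule name to +1 (it led to a KPI
    improvement) or -1 (it led to a KPI degradation). Unknown features
    start from a hypothetical neutral weight of 1.0.
    """
    new = dict(weights)
    for feature, outcome in outcomes.items():
        new[feature] = max(floor, new.get(feature, 1.0) + step * outcome)
    return new
```

The SON controller would then prioritize actions generated by the higher-weighted features/policies/rules in the next ROP.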
If the breach is a KPI degradation (an NT breach), then at step 260 the optimization changes made to the NE and its neighbors within the effect window are added to the reversion list so that they will be reverted one at a time in sequence, in the following order: first the optimizations of the current NE, then those of its neighbors (those for different neighbours being ordered by number of Successful Handovers and/or distance), the optimizations for each NE being ordered from newest to oldest. The SON weights are also recalculated at step 280 taking into account the KPI degradation to which a specific SON feature/policy/rule has led. This feature/policy/rule will have a decreased weight in the next ROP. This can also be realized in the two ways described above.
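The reversion ordering described for step 260 can be sketched as below; the tuple layouts used to carry ROP, handover and distance information are illustrative assumptions.

```python
def order_reversions(own_changes, neighbour_changes):
    """Order trial reversions per step 260: the degraded NE's own changes
    first (newest first), then neighbours' changes, with neighbours
    ranked by Successful Handovers (descending) and distance (ascending),
    and newest changes first within the ranking.

    own_changes: list of (rop, change_id)
    neighbour_changes: list of (successful_ho, distance_km, rop, change_id)
    """
    # Own changes: newest ROP first.
    ordered = [c for _, c in sorted(own_changes, reverse=True)]
    # Neighbour changes: more handovers first, then nearer, then newer.
    for ho, dist, rop, c in sorted(
            neighbour_changes, key=lambda n: (-n[0], n[1], -n[2])):
        ordered.append(c)
    return ordered
```

Each change in the returned list would then be trial-reverted in a successive ROP, as figure 17 describes.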
Once the overall analysis for all the NEs has been done, a Feedback report for the SON controller is created at step 290, containing the reversions list and the new weights for features/policies/rules, and this is sent to the SON controller.
Figure 17, steps in ordering sequence of multiple reversions
Figure 17 shows a flow chart of steps by the apparatus to show an example of causing multiple trial reversions of optimization actions where a KPI of an NE has breached its NT. It is assumed that the reversion actions have started. First of all, at step 310 a SON report is checked to see whether the requested parameter revert has really been implemented by the SON controller, as sometimes parameter changes, including reversions, can fail for various reasons (e.g. hardware failures). If the revert requested in the last ROP failed, then the User should be informed at step 350 (in a User execution report, for example), as in the case of a parameter change failure human intervention is often required.
If the requested parameter revert has been implemented, then the first part of the significance criteria is checked at step 320: whether the KPI's value returned to and stayed within the pre-optimization levels (or better) for the minimum required number of ROPs (equal to or less than the monitoring period) after the last revert. If the first significance criterion is met, then, the revert being deemed successful, a penalty period is set at step 390 for the NE (or its neighbor) for the changes that led to the KPI degradation; also the SON weights are recalculated taking into account the KPI degradation to which a specific SON feature/policy/rule has led. Setting k) described above in relation to figure 15 can be used to define how long the penalty period is.
If the first significance criterion is not met, then at step 330 a second significance criterion is checked: has the monitoring period finished? If not, then the Apparatus will wait at step 360 for the needed number of ROPs until at least one of the significance criteria is satisfied. If yes, then the Apparatus checks at step 340 whether all the reverts assigned to the NE and/or its neighbors have been finished. If there is at least one more to implement, then the next revert should be performed in the next ROP at step 370.
If all the reverts have been finished, then at step 380 a deduction is drawn that the degradation was not caused by SON optimization activities, and the NE and/or its neighbors are included back into the optimization list, and the thresholds are recalculated, taking into account the data from the last ROP(s).
Figure 18, time chart of C-SON example
Both Centralized SON (C-SON) and Distributed SON (D-SON) modes are supported, or both simultaneously (Hybrid SON). The difference between C-SON and D-SON is that in the C-SON case the SON algorithms run on a separate server, external to the cellular network (e.g. a Remote Electrical Tilt (RET) optimization algorithm); in the D-SON case the SON algorithms run in the NEs themselves (e.g. Random Access Channel (RACH) Optimization in the eNodeB).
Figure 18 shows a time chart of steps and interactions by different entities in the case of a C-SON implementation of an embodiment. Time flows down the chart. In the left-most column are steps by the network 40. In the next column to the right are steps by an OSS/customer dbase 410. In the next column to the right are steps by the SON controller implemented as part of a C-SON arrangement 420. In the next column are steps by the apparatus 430. In the right-most column are steps by the user 440. Each of the steps shown will now be described, using the reference numbers shown.
1. Initial configuration by means of the user sending or setting (or resetting) a number of initial configuration settings for use by the apparatus. Examples of these have been described above in relation to figure 15.
2. Request historical PM, current CM and SON configuration data. In order to calculate the thresholds, the Apparatus should request the history of KPIs (based on initial settings a), b), and c) described above). By SON configuration is meant which features are turned on and which weight each feature/policy/rule has. Based on the weights, the SON can choose which actions to prioritize. This is an important input for the Apparatus, as its feedback to the SON is the revised weights. Also the list of optimized NEs is requested, as the thresholds are calculated on a per-NE, per-KPI basis.
2.1. Request: historical PM data and current CM. This is sent from the C-SON to the OSS/customer dbase. In existing solutions the C-SON downloads only the data for the last ROP to calculate its output configuration changes. In embodiments having dynamic KPI thresholds, the KPI data for the requested number of ROPs is requested from the customer OSS/Database and sent to the Apparatus so that the latter can calculate the thresholds for the next ROP based on the historical KPI data.
2.2. Response: historical PM data and current CM. The requested KPIs' history and the latest network configuration are downloaded from the Customer's OSS/Database, depending on where the necessary data is stored, to the C-SON.
2.3. Response: historical PM data, current CM and SON configuration. The requested KPIs' history, the latest network configuration and SON-specific settings are sent as the response to the Apparatus. At this stage the Apparatus can start its KPI analysis.
3. Calculating thresholds. Having the historical data for the KPIs selected by the User, the number of samples on which each KPI value had been calculated, and also the NE load for each ROP, the Apparatus begins the KPI analysis.
The first step is analyzing the KPI history and defining the most suitable algorithm for calculating the thresholds. A number of methods can be used to calculate the PT and NT (in statistics they are usually referred to as the Upper Control Limit (UCL) and Lower Control Limit (LCL)). Forecasting-based and heuristic-limit-based approaches can be used for setting the automated dynamic thresholds for each NE and the chosen KPIs, and reference is made to the citations referred to above in relation to figure 16. The Apparatus can use the historical data as training data to choose the most suitable method for each situation. The criterion for defining the most suitable method can be the F1 score.
After defining the thresholds calculation approach, the PT, NT and also the ST and load indicator threshold (the latter two in case the automated option is chosen by the User) are calculated for the next ROP.
4. SON execution. This step is the SON generating optimization actions and can be implemented using conventional optimization algorithms. The inputs are typically user settings and the network's most recent CM and PM data. The output of the execution can be an execution report, stating the proposed CM changes (which then need to be injected into the Network via the OSS), and logs which describe the execution process: whether any problems occurred, whether any NEs were excluded from the optimization, specifying the reason, etc.
5. Request: CM changes implementation. This step is a way of requesting execution of the desired optimization actions, sent from the C-SON to the OSS/customer dbase, and can be the same as conventional methods.
5.1. Request: CM changes implementation. This step passes on the request from the OSS/customer dbase to the relevant NEs of the network. Again it can be implemented in the same way as conventional methods.
5.2. Response: CM implementation. This step is a response to the request and is sent from the NEs of the network to the OSS/customer dbase. Again, it can follow conventional practice and can provide feedback on how successful each implementation was and, if any issues occurred during implementation, the reasons for the faults.
5.3. Response: CM implementation. In this step the OSS forwards the implementation logs to C-SON, so that C-SON is aware of any faults during the implementation. Again it can follow conventional practice.
6. PM for new ROP. This step involves all the PM data being uploaded to the OSS at the end of each ROP for later storage in the Customer Database. Again, it can follow conventional practice.
7. Request: CM and PM for new ROP. This step involves the C-SON fetching the CM and PM data for each ROP to use them as the inputs for its activities. Again, it can follow conventional practice.
7.1. Response: CM and PM for new ROP. This step is the response from the OSS/customer dbase to the C-SON. Both CM and PM data are loaded into the C-SON Database.
8. Execution Report and CM, PM data for new ROP. In this step the C-SON sends all the information to the Apparatus regarding the previous execution, including for each NE: parameters changed, with old values and new values, and features/policies/rules applied. The CM data is used for keeping track of any CM changes in the network (not all of them may be caused by SON).
9. Analyze Execution Report, re-calculate thresholds and weights, create feedback report. This step by the apparatus can be implemented as described in more detail in figures 16 and 17, for example.
10. Reversion list, execution feedback. This step of providing feedback to the C-SON from the apparatus can include sending a first output of a list of NEs for which the parameter change reverts must be performed in the next ROP. All the NEs in this list are temporarily excluded from optimization, because their KPI(s) had breached the NT. A general aim of the apparatus in protecting the KPIs is to bring the KPIs back to the previous (better) values, going back to the most stable parameter configuration. A second part of the feedback can be execution feedback, which is for example the indication of how to bias the C-SON, for example the new weights assigned to the SON features/policies/rules, based on the experience of the current and previous ROPs (in the case the number of optimization ROPs is more than one). By experience is meant for example a comparison of pre-optimized and post-optimized KPI values with the aid of the NT and PT calculated for each ROP. The features/policies/rules which lead to KPI improvement more frequently will receive higher weights compared to the ones that lead to KPI degradation.
10.1. A User execution report is sent to the User. It can contain (but is not limited to):
a) failed parameter reverts;
b) the list of NEs which got into the temporary exclusion list, with the reason;
c) the list of NEs which breached NT;
d) the list of NEs which breached PT;
e) mass events info.
11. SON execution with updated weights. This is similar to step 4 described above for the C-SON, but with updated weights, an updated temporary exclusion list and an updated reversion list.
12. Request: CM changes implementation, including reversions. This step can be the same as step 5 described above, but with updated reversion list. The method can continue with a repeat of step 5.1 onwards, in a continuous loop.
Figure 19, time chart of D-SON example
Figure 19 shows a time chart of steps and interactions by different entities, similar to the chart of figure 18, but in this case for a D-SON implementation of an embodiment. Time flows down the chart. In the left-most column are steps by an NE/D-SON 450. In the next column to the right are steps by an OSS/customer dbase 410. In the next column to the right are steps by the apparatus 430. In the right-most column are steps by the user 440. The interworking with D-SON is very similar to that of the C-SON example, but still has some differences, mostly related to the topology of the process.
1. Initial parameter configuration/reconfiguration. There is no difference from the C-SON flowchart. The same parameters can be used.
1.1 Initial (re-)configuration; Request: historical PM data, current CM. The settings introduced by the user are passed over to the OSS so that the OSS can implement them in the NE with D-SON functionality. Also at this stage historical PM data and the current CM configuration are requested for thresholds calculation; this request is the same as in the C-SON use case.
1.2 Initial (re-)configuration. The settings from the user are finally passed over and implemented in the NE with D-SON functionality.
1.3 Response: historical PM data, current CM. The requested KPIs' history and the latest network configuration are downloaded from the Customer's OSS/Database, depending on where the necessary data is stored, to the apparatus.
13. Calculating thresholds. This step can be done the same way as step 3 in the C-SON use case.
14. SON execution. This step can be implemented in the same way as step 4 in the C-SON use case. One difference is that the proposed changes are implemented directly in the NE (D-SON features are executed "inside" the NE), so the OSS is not needed for this.
15. Execution report & CM, PM for new ROP. Once a new ROP is finished, the current network configuration (including parameters changed by D-SON) and performance counters are uploaded to the OSS. Later the OSS calculates KPIs from the counters according to pre-defined formulae.
16. Request: Execution Report & CM, PM for new ROP. The Apparatus requests the information received in step 15 from the OSS/Customer DB.
16.1. Response: Execution Report & CM, PM for new ROP. The OSS/Customer DB sends the information received in step 15.
17. Analyze Execution Report, re-calculate thresholds & weights. This step by the apparatus includes the detection of degradation, and the assessment of what prompted the degradation, as described above in relation to figures 1 to 17. It can be done the same way as in the C-SON use case.
18. Reversion list; execution feedback. This step can be done the same way as step 10 in the C-SON use case. The only difference is again in the topology: in order for the Apparatus to reach the D-SON instance in the NE, it sends the information via the OSS.
18.1 Reversion list; execution feedback. The information received by the OSS/customer DB from the Apparatus is forwarded to the NE/D-SON by the OSS/customer DB.
19. User execution report. This step is done the same way as step 10.1 in the C-SON use case.
14. SON execution with updated weights. This is a repeat of step 14 above but for the new ROP and using the feedback from the Apparatus as inputs for SON algorithms. After this the steps 15 onwards can be repeated in a continuous loop.
Figures 20, 21, amelioration based on other NE and service area throughput
Figure 20 shows a schematic view of an example of a 5G Service Area of a 5G network. It shows a cloud symbol representing a service area, in which a user equipment can get mobile services or connectivity services from one or more antennas. High power antennas node A, node B, node C, and node D are shown within the service area. Node D has associated low power antennas node D.1, D.2, and D.3. Node B has associated low power antenna node B.1. Neighbour Relations (NR) are shown by dotted lines between some of the antennas.
In this example an optimization action included a change of a parameter of Node C, for example for load balancing reasons (handled by the SON controller). As a result, Node C got higher throughput (and so the throughput KPI breached its PT within the effect window after the parameter change). Also, a neighbouring node (node A) was monitored for throughput and it was found that its KPI degraded (in other words, its throughput KPI breached its NT within the effect window after Node C's parameter change). Figure 21 shows some steps by the apparatus according to this embodiment shown in figure 20. At step 631 the apparatus monitors the first KPI to detect degradation, in this case a KPI of throughput of node A. This is an example of the first KPI relating to a first NE of a group of NEs. In this case a KPI of the whole Service Area's throughput is also checked.
At step 632, the apparatus receives an indication of occurrences of different optimization actions, including a change of parameter at node C. At step 633, the apparatus assesses automatically which of the optimization actions prompted the detected degradation, in this case determining that the change at node C prompted the degradation at node A. At step 634 the apparatus provides feedback automatically to the SON controller to cause amelioration of the degradation based on which of the optimization actions prompted the degradation. This amelioration is of optimization actions relating to another of the group of NEs; the other one of the group is node C in this case. Amelioration in this case is also dependent on the service area throughput:
-if the throughput KPI for the service area was improved, the change is not reverted, despite the breach of NT for node A;
-otherwise the change should be reverted.
Thus the amelioration is based on the neighbours' KPIs and also on the Service Area KPIs. The amelioration of optimization actions relating to another of the group of NEs has advantages of widening the scope of the KPI protection, to cover a wider range of optimization actions for a given KPI, or to cover a wider range of KPIs for a given optimization action. Therefore, it can improve scalability to larger networks or reduce the number of KPIs to be monitored, to reduce the complexity of the protection scheme, for a given size of network.
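The amelioration decision of figures 20 and 21 reduces to a small predicate; the following is a hedged sketch of the two rules above, with hypothetical argument names.

```python
def should_revert(node_breached_nt, service_area_improved):
    """Decide whether to revert the triggering parameter change:
    a neighbour's NT breach is tolerated when the service-area throughput
    KPI improved overall; otherwise the change is reverted."""
    return node_breached_nt and not service_area_improved
```

In the figure 20 scenario, node A breaching its NT would trigger a revert of Node C's change only if the service-area throughput KPI did not improve.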
Figure 22, 23, amelioration of other NE, neighbor relation black-/white-listing
Figure 22 shows a schematic view of an example of a 5G Service Area of a 5G network. It shows a cloud symbol representing a service area, in which a user equipment can get mobile services or connectivity services from one or more antennas. High power antennas node A, node B, node C, and node D are shown within the service area. There are dotted lines to show neighbour relations (NRs) between the nodes, which NRs can be either black- or white-listed. In this example the optimization action relates to handling of the NRs. There is a SON algorithm that blacklists NRs based on distance, targeting NRs between very distant Nodes (they could have been previously created by ANR). An example optimization action is to set the distance threshold to 8 km, meaning that all the NRs between Nodes situated more than 8 km away from each other would be blacklisted (HO between them would no longer be allowed). Thus the NR to/from Node A from/to Node C [NR:A-C] got blacklisted because, as shown in the figure, it has a distance of 10 km. But shortly after the blacklisting optimization action took place, Node C's KPIs (e.g. traffic throughput) degraded (in other words, the traffic KPI breached the NT within the effect window after the [NR:A-C] parameter change). As a result, the Apparatus detects this and assesses that the blacklisting has prompted the degradation. Thus it sends feedback to cause the SON controller to revert the previous optimization action: the blacklisted relation [NR:A-C] gets whitelisted (HO is allowed again).
This is shown in figure 23, which shows steps of the apparatus. At step 731, the apparatus monitors the first KPI to detect degradation; in this case, the first KPI relates to a first NE, node C, of a group of nodes (A-D), such as throughput at node C. At step 732 the apparatus receives an indication of occurrences of different optimization actions, including the change of NR to blacklist [NR:A-C]. At step 733 it assesses automatically which of the optimization actions prompted the detected degradation, in this case determining that the change to blacklist [NR:A-C] prompted the degradation at node C. The apparatus then provides feedback automatically to the SON controller at step 734 to cause amelioration of the degradation based on which of the actions prompted the degradation. In this case, it causes amelioration of optimization actions relating to another of the group of NEs, by whitelisting [NR:A-C]. Advantages of this example are similar to those discussed above in relation to figures 20 and 21. The amelioration of optimization actions relating to another of the group of NEs has the advantages of widening the scope of the KPI protection, to cover a wider range of optimization actions for a given KPI, or to cover a wider range of KPIs for a given optimization action.
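The distance-based NR blacklisting that triggers this example can be sketched as below; the data shape mapping an NR to its distance is an illustrative assumption.

```python
def blacklist_by_distance(neighbour_relations, max_km=8.0):
    """SON action from figure 22: blacklist neighbour relations between
    nodes farther apart than max_km (handover over them no longer
    allowed); the rest remain whitelisted. Returns (blacklist, whitelist).

    neighbour_relations: dict like {("A", "C"): 10.0} mapping an NR
    to the distance in km between its two nodes.
    """
    black = {nr for nr, d in neighbour_relations.items() if d > max_km}
    white = set(neighbour_relations) - black
    return black, white
```

In the figure 22 scenario, [NR:A-C] at 10 km lands on the blacklist; the later feedback step would move it back to the whitelist after node C's KPI degrades.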
Figure 24 schematic view of apparatus for protecting KPI
Figure 24 shows a schematic view of a possible implementation of the apparatus for protecting the KPIs. The apparatus includes a processing circuit 180, coupled via a bus to a storage medium in the form of a memory circuit 185 having a stored program 188. Also coupled via the bus to the processing circuit is a receiver/sender circuit 183 having an external path for connection to the SON controller and to parts of the network such as the OSS or NMS. The program can comprise computer code which, when run by the processing circuit, can cause the processing circuit to carry out any of the method steps described above in relation to figures 1 to 23 for protecting KPIs. The memory circuit is an example of a computer program product comprising a computer program and a computer readable storage medium on which the computer program is stored. The storage may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
Figure 25 schematic view of SON controller
Figure 25 shows a schematic view of a possible implementation of the SON controller. The SON controller includes a processing circuit 520, coupled via a bus to a storage medium in the form of a memory circuit 530 having a stored program 525. Also coupled via the bus to the processing circuit is a receiver/sender circuit 540 having an external path for connection to the apparatus and to parts of the network such as the OSS or NMS for a C-SON implementation. For a D-SON implementation, the external path could be used for coupling to other parts of the NE. The program can comprise computer code which, when run by the processing circuit, can cause the processing circuit to carry out any of the method steps described above for the SON controller in relation to at least figures 2, 8 to 11, 13 to 15, 18 and 19. The memory circuit is an example of a computer program product comprising a computer program and a computer readable storage medium on which the computer program is stored. The storage may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
Figure 26 schematic view of apparatus for protecting KPI
Figure 26 shows a schematic view of another possible implementation of the apparatus for protecting the KPIs. The apparatus includes a monitor unit 191, a receiver 193, an assessment unit 195 and a feedback unit 197. Each of these units is coupled to the others via a bus, and the receiver 193 is coupled to an external path for connection to receive indications from the SON controller and from parts of the network such as the OSS or NMS. The parts of the apparatus can cooperate to carry out any of the method steps described above in relation to figures 1 to 23 for protecting KPIs. In particular, the monitor unit can monitor the first KPI to detect degradation, for example by comparison with a threshold, or by algorithm, or any other way. The receiver can receive indications of occurrences of various different optimization actions. The assessment unit can automatically assess which of the optimization actions prompted the degradation detected. The feedback unit can provide feedback automatically to the SON controller to cause it to ameliorate the degradation based on which of the optimization actions prompted the degradation. Each of the units can be implemented using any conventional circuitry or processing hardware and may be integrated or divided in different ways.
Figure 27 schematic view of SON controller
Figure 27 shows a schematic view of a possible implementation of the SON controller. The SON controller includes an optimization unit 544, a sender/receiver 550, and a control unit 560. The various units are coupled together via a bus. The sender/receiver 550 has an external path for connection to the apparatus and another external path to parts of the network such as the OSS or NMS for a C-SON implementation. For a D-SON implementation, the external path could be used for coupling to other parts of the NE. The various units can co-operate to carry out any of the method steps described above for the SON controller in relation to at least figures 2, 8 to 11, 13 to 15, 18 and 19. Each of the units can be implemented using any conventional circuitry or processing hardware and may be integrated or divided in different ways.
Other remarks
In some of the examples described above, there can be both fast and slow feedback on whether SON optimization actions have led to KPI changes (both positive and negative). The feedback on SON optimization actions provided by the apparatus can be used not only for ameliorating degradation but in some cases also to promote the SON changes that lead to KPI improvements.
The feedback on SON optimization actions provided by the apparatus can in some cases cause reversion of those SON changes that led to KPI degradations. KPI thresholds can automatically adjust based on fixed historical time periods (e.g. the last 2 weeks) or historical patterns (e.g. the last 10 Monday afternoons) to provide more accurate detection of degradation and hence more accurate feedback to the SON controller.
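A minimal sketch of such an adaptive threshold, assuming the history is a list of KPI samples drawn from the chosen historical window (e.g. the last 10 Monday afternoons) and an illustrative standard-deviation margin:

```python
from statistics import mean, stdev

def adaptive_threshold(history, k=2.0):
    """Degradation threshold derived from the KPI's own past behaviour:
    a new sample below mean - k * stddev of the historical samples is
    flagged as degraded, instead of comparing against a fixed value."""
    return mean(history) - k * stdev(history)

# Throughput samples from the chosen historical window.
threshold = adaptive_threshold([100.0, 102.0, 98.0, 101.0, 99.0])
```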
The examples with adaptive thresholds can guard different network elements and different ROPs more equally, so that the KPI protection can be more consistent. Some examples classify NEs into groups, and allow optimization actions for NEs to be adapted from the detections, assessments and feedback of other similar NEs that have been optimized. These can thus give higher weight to actions that provided improvement and less weight to those that led to degradation, and widen the use of the feedback to ameliorate optimization actions on other similar NEs of the group. The feedback can also be reported to the user regarding any significant KPI degradations/improvements on NE level. The examples which adapt their detections using historical data can help improve accuracy of detection as well as helping determine the significance of the KPI threshold breach and whether the breach is transient or valid.
The examples of the apparatus described can be used along with any automated optimization SON control process to protect the optimized system from degradation and provide feedback about how effective specific optimization actions are. The apparatus can be implemented in a physical server or node, or a virtual (e.g. cloud) node with a software package, running synchronized with the SON controller, analyzing the network's output KPIs (usually PM files or streams) and giving feedback including commands to the SON controller, such as to revert some of the parameter changes and/or to change the weights given to how the optimization actions are determined, such as weighting specific SON features/policies/rules, or changing parameters or settings of SON features/policies/rules. For example, a maximum limit could be changed: a RET feature allowed to perform a maximum of 1 degree of uptilt/downtilt could be restricted to a maximum of 0.5 degrees when degradations due to the RET feature are detected.
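How such feedback might tighten a feature's limit can be sketched as follows; the parameter name ret_max_tilt_deg and the halving policy are hypothetical, chosen only to mirror the 1 degree to 0.5 degrees example above:

```python
def adjust_feature_limit(limits, feature, degraded, factor=0.5, floor=0.0):
    """Feedback to the SON controller: when detected degradations are
    attributed to a feature, tighten the maximum change that feature is
    allowed to make (without going below the floor)."""
    if degraded:
        limits = dict(limits)  # do not mutate the caller's settings
        limits[feature] = max(floor, limits[feature] * factor)
    return limits

# RET feature previously allowed 1 degree of uptilt/downtilt; after
# degradations are attributed to it, only 0.5 degrees is allowed.
new_limits = adjust_feature_limit({"ret_max_tilt_deg": 1.0},
                                  "ret_max_tilt_deg", degraded=True)
```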
Another optimization action example is load balancing. When utilization is below a threshold (for example, in LTE the Physical Resource Blocks, PRBs, of a serving area or cell coverage area are normally not utilized 100% of the time), the SON controller can perform load balancing between nodes or between service areas and maintain cell availability in a non-congested way.
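A simplified sketch of such PRB-utilization-driven balancing, assuming utilization is reported as a fraction per cell and that the high/low thresholds are illustrative values rather than standardized ones:

```python
def rebalance(prb_utilization, high=0.8, low=0.5):
    """Pair each congested cell (PRB utilization above `high`) with an
    under-utilized one (below `low`) as a candidate to take over traffic,
    aiming to keep availability without congestion."""
    hot = sorted((c for c, u in prb_utilization.items() if u > high),
                 key=prb_utilization.get, reverse=True)
    cold = sorted((c for c, u in prb_utilization.items() if u < low),
                  key=prb_utilization.get)
    return list(zip(hot, cold))  # (from_cell, to_cell) suggestions

moves = rebalance({"cell_a": 0.9, "cell_b": 0.3, "cell_c": 0.6})
```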
One way of carrying out an optimization action is to have the SON controller cause the OSS to generate a proposed configuration change, for example in the form of a list of parameter changes. These configuration parameters are pushed from the OSS toward an RBS, and then the configuration changes are implemented in the RBS.
Another possible optimization action is a change in radio power output. This might be useful to save power consumption or to increase coverage area, for example to match capacity to demand. Conventionally it requires a restart of an RBS to bring in the new power setting, and so it is usually carried out overnight. Similar considerations apply to a change in bandwidth, for example from 5 MHz to 10 MHz or vice versa. In the case of 5G it is more likely to be more bandwidth, say 50 MHz to 100 MHz, or a change in band, say from the 700 MHz band to the 2100 MHz AWS (Advanced Wireless Services) band.
Other variations can be envisaged within the claims.

Claims

Claims:
1. A method of protecting a first Key Performance Indicator, KPI, of a communications network from effects of different optimization actions by a Self-Organizing Network, SON, controller of the communications network, the method having steps of:
monitoring the first Key Performance Indicator, KPI, to detect degradation of the first KPI,
receiving from the SON controller an indication of occurrences of the different optimization actions,
assessing automatically which of the different optimization actions, if any, prompted the detected degradation of the first KPI, based on the indications of the occurrences, and
providing feedback automatically to the SON controller, to cause the SON controller to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation.
2. The method of claim 1, the step of detecting the degradation being based on a first KPI threshold, for the first KPI, the first KPI threshold being time variable, with the time variability being representative of past behaviour of that KPI before the optimization.
3. The method of claim 1 or 2, the step of assessing automatically which of the different optimization actions prompted the degradation, comprises assessing based on an expected time delay and an actual time delay between the respective occurrence and the detection of the degradation.
4. The method of any of claims 1 to 3, the step of assessing automatically which of the different optimization actions prompted the degradation, comprises a step of causing the SON controller to selectively make a trial reversion of at least one of the optimization actions, and a step of detecting whether the trial reversion results in reduction of the degradation.
5. The method of claim 4, the step of causing the SON controller to selectively make a trial reversion comprises, in a case where there are more than one of the optimization actions to be reverted, a step of carrying out respective trial reversions and corresponding detection of reduction in degradation sequentially in order, the order being based on how closely related to the first KPI are the different optimization actions.
6. The method of any preceding claim, the step of providing feedback comprising sending an instruction to the SON controller to cause it to at least partially revert the optimization assessed to have prompted the degradation.
7. The method of any preceding claim, the step of providing feedback to cause the SON controller to ameliorate the detected degradation, comprising sending an indication of how to bias how the SON controller determines the optimization actions.
8. The method of claim 7, the step of sending the indication comprising at least one of: sending a parameter for use by an optimization algorithm, and sending a weighting for use in selecting between different optimization algorithms.
9. The method of any preceding claim, the step of monitoring the first KPI to detect degradation, comprises making the detection based on an assessment of the degradation for at least one of: transience and reliability.
10. The method of any preceding claim, having a step of monitoring the first KPI for an improvement and assessing automatically which of the different optimization actions, if any, prompted the detected improvement of the first KPI, based on the indications of the occurrences, and
the step of providing the feedback automatically to the SON controller, additionally causes the controller to reinforce the detected improvement, based on which of the optimization actions is assessed to have prompted the improvement.
11. The method of any preceding claim, where the first KPI relates to a first Network Entity, NE, of a group of NEs, and having a step of providing feedback to cause the SON controller to ameliorate optimization actions relating to another of the group of NEs.
12. A computer program having instructions that when executed by a processing circuit cause the processing circuit to carry out the method of any of claims 1 to 11.
13. A computer program product comprising a computer readable medium having stored on it the computer program of claim 12.
14. Apparatus for protecting a first Key Performance Indicator, KPI, of a communications network from effects of different optimization actions by a Self-Organizing Network, SON, controller of the communications network, the apparatus having a processing circuit and a memory circuit, the memory circuit having instructions executable by the processing circuit, wherein said processing circuit when executing the instructions is configured to:
monitor the first KPI to detect degradation of the first KPI,
receive from the SON controller an indication of occurrences of the different optimization actions,
assess automatically which of the different optimization actions, if any, prompted the detected degradation of the first KPI, based on the indications of the occurrences, and
provide feedback automatically to the controller, to cause the controller to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation.
15. The apparatus of claim 14, the processing circuit also being configured to detect the degradation based on a first KPI threshold, for the first KPI, the first KPI threshold being time variable, with the time variability being representative of past behaviour of that KPI before the optimization.
16. The apparatus of claim 14 or 15, the processing circuit also being configured to assess which of the different optimization actions prompted the degradation, by assessing based on an expected time delay and an actual time delay between the respective occurrence and the detection of the degradation.
17. The apparatus of any of claims 14 to 16, the processing circuit also being configured to assess which of the different optimization actions prompted the degradation, by causing the SON controller to selectively make a trial reversion of at least one of the optimization actions, and by detecting whether the trial reversion results in reduction of the degradation.
18. The apparatus of claim 17, the processing circuit also being configured to cause the SON controller to selectively make a trial reversion by, in a case where there are more than one of the optimization actions to be reverted, carrying out respective trial reversions and corresponding detections of reduction in degradation sequentially in order, the order being based on how closely related to the first KPI are the different optimization actions.
19. The apparatus of any of claims 14 to 18, the processing circuit also being configured to provide the feedback by sending an instruction to the SON controller to cause it to at least partially revert the optimization assessed to have prompted the degradation.
20. The apparatus of any of claims 14 to 19, the processing circuit also being configured to provide the feedback to cause the SON controller to ameliorate the detected degradation, by sending an indication of how to bias how the SON controller determines the optimization actions, based on which of the optimization actions is assessed to have prompted the degradation.
21. The apparatus of claim 20, the processing circuit also being configured to send the indication by at least one of: sending a parameter for use by an optimization algorithm, and sending a weighting for use in selecting between different optimization algorithms.
22. The apparatus of any of claims 14 to 21 , the processing circuit also being configured to monitor the first KPI to detect the degradation based on an assessment of the degradation for at least one of: transience and reliability.
23. The apparatus of any of claims 14 to 22, the processing circuit also being configured to monitor the first KPI for an improvement and to assess automatically which of the different optimization actions, if any, prompted the detected improvement of the first KPI, based on the indications of the occurrences, and the processing circuit also being configured to provide the feedback automatically to the controller, additionally to cause the SON controller to reinforce the detected improvement, based on which of the optimization actions is assessed to have prompted the improvement.
24. The apparatus of any of claims 14 to 23, where the first KPI relates to a first NE of a group of NEs, and the processing circuit also being configured to provide feedback to cause the SON controller to ameliorate optimization actions relating to another of the group of NEs.
25. A system comprising the apparatus for protecting a Key Performance Indicator, KPI, of any of claims 14 to 24 and a Self-Organizing Network, SON, controller for carrying out optimization actions on the communications network, the SON controller being connected to the apparatus for protecting the KPI to send the indication of occurrences of optimization actions to said apparatus for protecting the KPI and to receive feedback from said apparatus for protecting the KPI.
26. A Self-Organizing Network, SON, controller for controlling optimization actions on a communications network, in cooperation with an apparatus for protecting a first Key Performance Indicator, KPI, of the communications network from degradation by the optimization actions, the SON controller having a processing circuit and a memory circuit, the memory circuit having instructions executable by the processing circuit, wherein said processing circuit when executing the instructions is configured to:
initiate optimization actions, and send to the apparatus for protecting the first KPI, an indication of occurrences of the optimization actions,
receive feedback from the apparatus, based on which of the optimization actions is assessed to have prompted a degradation in the first KPI, and in response to the feedback, control the optimization actions to ameliorate the detected degradation, to protect the first KPI.
27. The SON controller of claim 26, the processing circuit also being configured to receive from the apparatus an instruction to make a trial reversion of at least one of the optimization actions, and in response, to initiate such a trial reversion.
28. The SON controller of claim 26 or 27, the processing circuit also being configured to receive from the apparatus an indication of how to bias how the SON controller determines the optimization actions, and in response, to bias the determination of the optimization actions accordingly.
29. The SON controller of claim 28, the indication of bias comprising at least one of: a parameter for use by an optimization algorithm, and a weighting for use in selecting between different optimization algorithms.
30. Apparatus for protecting a first Key Performance Indicator, KPI, of a communications network from effects of different optimization actions by a Self-Organizing Network, SON, controller of the communications network, the apparatus having: a monitor unit for monitoring the first KPI to detect degradation of the first KPI,
a receiver for receiving from the SON controller an indication of occurrences of the different optimization actions,
an assessment unit for assessing automatically which of the different optimization actions, if any, prompted the detected degradation of the first KPI, based on the indications of the occurrences, and
a feedback unit for providing feedback automatically to the controller, to cause the SON controller to ameliorate the detected degradation, to protect the first KPI, based on which of the optimization actions is assessed to have prompted the degradation.
31. A Self-Organizing Network, SON, controller for controlling optimization actions on a communications network, in cooperation with an apparatus for protecting a first Key Performance Indicator, KPI, of the communications network from degradation by the optimization actions, the SON controller having:
an optimization unit for initiating optimization actions,
a sender/receiver for sending to the apparatus for protecting the first KPI, an indication of occurrences of the optimization actions, and for receiving feedback from the apparatus, based on which of the optimization actions is assessed to have prompted a degradation in the first KPI, and
a control unit for controlling the optimization actions in response to the feedback, to ameliorate the detected degradation, to protect the first KPI.
PCT/EP2017/055351 2017-03-07 2017-03-07 Protecting kpi during optimization of self-organizing network WO2018162046A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2017/055351 WO2018162046A1 (en) 2017-03-07 2017-03-07 Protecting kpi during optimization of self-organizing network


Publications (1)

Publication Number Publication Date
WO2018162046A1 true WO2018162046A1 (en) 2018-09-13

Family

ID=58266584

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2017/055351 WO2018162046A1 (en) 2017-03-07 2017-03-07 Protecting kpi during optimization of self-organizing network

Country Status (1)

Country Link
WO (1) WO2018162046A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013144950A1 (en) * 2012-03-25 2013-10-03 Intucell Ltd. System and method for optimizing performance of a communication network
US20160057679A1 (en) * 2014-08-22 2016-02-25 Qualcomm Incorporated Cson-aided small cell load balancing based on backhaul information
US20160248632A1 (en) * 2013-05-27 2016-08-25 Cisco Technology, Inc. Method and system for coordinating cellular networks operation


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LARS CHRISTOPH SCHMELZ ET AL: "A coordination framework for self-organisation in LTE networks", INTEGRATED NETWORK MANAGEMENT (IM), 2011 IFIP/IEEE INTERNATIONAL SYMPOSIUM ON, IEEE, 23 May 2011 (2011-05-23), pages 193 - 200, XP032035479, ISBN: 978-1-4244-9219-0, DOI: 10.1109/INM.2011.5990691 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021137650A1 (en) * 2020-01-03 2021-07-08 Samsung Electronics Co., Ltd. Method and network entity for handling kpi pm data
WO2022053739A1 (en) * 2020-09-09 2022-03-17 Elisa Oyj Evaluating effect of a change made in a communication network
CN116957807A (en) * 2023-09-21 2023-10-27 成都天用唯勤科技股份有限公司 Transaction degradation method and system for high-concurrency multi-dimensional distributed transaction system
CN116957807B (en) * 2023-09-21 2023-12-08 成都天用唯勤科技股份有限公司 Transaction degradation method and system for high-concurrency multi-dimensional distributed transaction system

Similar Documents

Publication Publication Date Title
US11758415B2 (en) Method and apparatus of sharing information related to status
US9860126B2 (en) Method and system for coordinating cellular networks operation
US9451517B2 (en) Method and system for path predictive congestion avoidance
US10834620B2 (en) Quantum intraday alerting based on radio access network outlier analysis
US9503919B2 (en) Wireless communication network using multiple key performance indicators and deviations therefrom
US11582111B2 (en) Master node, a local node and respective methods performed thereby for predicting one or more metrics associated with a communication network
US11523287B2 (en) Machine-learning framework for spectrum allocation
US8600384B1 (en) Optimization of interlayer handovers in multilayer wireless communication networks
CN111466103A (en) Method and system for generation and adaptation of network baselines
WO2018162046A1 (en) Protecting kpi during optimization of self-organizing network
EP2695328B1 (en) Optimization of network configuration
US10225371B2 (en) Method and network components and self-organizing network
US9622094B2 (en) Self-optimizing communication network with criteria class-based functions
Tsvetkov et al. Verification of configuration management changes in self-organizing networks
Frenzel et al. Detection and resolution of ineffective function behavior in self-organizing networks
Frenzel et al. Operational troubleshooting-enabled coordination in self-organizing networks
Ali-Tolppa et al. Network Element Stability Aware Method for Verifying Configuration Changes in Mobile Communication Networks
EP3085149A1 (en) Method for determining system resource scheduling in communication systems
GB2603173A (en) System and method for network traffic analysis
CN115336311A (en) Network automation management method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17710175

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17710175

Country of ref document: EP

Kind code of ref document: A1