CN110582991B - Techniques involving interfaces between next generation node B central units and next generation node B distributed units - Google Patents


Info

Publication number
CN110582991B
Authority
CN
China
Prior art keywords
attribute
gnb
nsd
processor
nfvo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201880027531.6A
Other languages
Chinese (zh)
Other versions
CN110582991A (en)
Inventor
J. Shu
Yizhi Yao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apple Inc
Intel Corp
Original Assignee
Apple Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apple Inc
Publication of CN110582991A
Application granted
Publication of CN110582991B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L41/082 Configuration setting characterised by the conditions triggering a change of settings the condition being updates or upgrades of network functionality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0895 Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/34 Signalling channels for network management communication
    • H04L41/342 Signalling channels for network management communication between virtual entities, e.g. orchestrators, SDN or NFV entities

Abstract

Techniques for employing network function virtualization in conjunction with a split-function gNB (next generation Node B) are discussed. The first set of embodiments discussed herein may facilitate the operation of gNB-CUs and gNB-DUs with respect to reference point Itf-N. The second set of embodiments discussed herein may facilitate techniques for updating latency and bandwidth requirements of the interface between a gNB-CU and a gNB-DU. The various embodiments discussed herein may belong to the first set of embodiments, the second set of embodiments, or both.

Description

Techniques involving interfaces between next generation node B central units and next generation node B distributed units
Citation of related application
The present application claims the benefit of U.S. provisional application No.62/539,925 filed on 8/1/2017 entitled "UPDATING FORWARDING GRAPH FOR RAN VIRTUALIZED NETWORK FUNCTION" and U.S. provisional application No.62/539,936 filed on 8/1/2017 entitled "MANAGING THE INTERFACE BETWEEN NEXT GENERATION NODEB CENTRAL UNIT AND NEXT GENERATION NODEB DISTRIBUTED UNIT OF A NEXT GENERATION RADIO ACCESS NETWORK," the contents of which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates to core network technology of a communication network, and more particularly, to techniques for employing network function virtualization in conjunction with a gNB (next generation Node B) having a functional split between its central and distributed units.
Background
Network Function Virtualization (NFV) involves replacing a physical network node with a Virtual Network Function (VNF), implemented via Virtualized Resources (VR), that performs the same function as the physical node. In 5G, the NG-RAN is separated into a gNB-CU and a gNB-DU, wherein the gNB-CU is a Virtualized Network Function (VNF) deployed in the cloud, while the gNB-DU is implemented in vertical hardware comprising the radio functions for interfacing with User Equipment (UE).
Drawings
Fig. 1 is a diagram illustrating components of a network in accordance with some embodiments.
Fig. 2 is a block diagram illustrating components capable of reading instructions from a machine-readable or computer-readable medium (e.g., a machine-readable storage medium) and performing any one or more of the methods discussed herein, according to some example embodiments.
Fig. 3 is an illustration of an example architecture that facilitates supporting lifecycle management by a 3GPP (third generation partnership project) management system, in accordance with various aspects described herein.
Fig. 4 is a block diagram of a system that facilitates employing network function virtualization in conjunction with a split-function gNB (next generation Node B), which can be employed by a Network Manager (NM), in accordance with various aspects described herein.
Fig. 5 is a block diagram of a system that facilitates employing network function virtualization in conjunction with a split-function gNB that can be employed by a network Element Manager (EM) in accordance with various aspects described herein.
Fig. 6 is a block diagram of a system that facilitates employing network function virtualization in conjunction with a split-function gNB, which can be employed by a Network Function Virtualization (NFV) orchestrator (NFVO), in accordance with various aspects described herein.
Fig. 7 is an illustration of an example architecture including a gNB comprising a virtualized gNB-CU (central unit) and non-virtualized gNB-DUs (distributed units), along with a relationship between the gNB-CU and the CN (core network), in accordance with various aspects discussed herein.
Fig. 8 is a diagram of the structure of an NSD (network service descriptor) in accordance with various aspects discussed herein.
Fig. 9 is a diagram of inclusion/naming and association relationships for a gNB with functional separation in accordance with various aspects discussed herein.
Fig. 10 is a diagram of inheritance relationships for a gNB with functional separation in accordance with various aspects discussed herein.
Fig. 11 is a flow chart illustrating an example method of NSD on-boarding in accordance with various aspects discussed herein.
Fig. 12 is a flowchart illustrating an example method for NSD updating in accordance with various aspects discussed herein.
Detailed Description
The present disclosure will now be described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout, and wherein the illustrated structures and devices are not necessarily drawn to scale. As used herein, the terms "component," "system," "interface," and the like are intended to refer to a computer-related entity, hardware, software (e.g., in execution), and/or firmware. For example, a component can be a processor (e.g., a microprocessor, a controller, or other processing device), a process running on a processor, a controller, an object, an executable, a program, a storage device, a computer, a tablet PC, and/or a user device (e.g., a mobile phone, etc.) having a processing device. By way of illustration, an application running on a server and the server can also be a component. One or more components may reside within a process and a component may be localized on one computer and/or distributed between two or more computers. A set of elements or other collection of components may be described herein, wherein the term "set" may be interpreted as "one or more".
Furthermore, these components can execute from various computer readable storage media having various data structures stored thereon, such as by means of modules. The components may communicate via local and/or remote processes, such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, a distributed system, and/or across a network such as the Internet, a local area network, or a wide area network with other systems by way of the signal).
As another example, a component may be an apparatus having particular functionality provided by a mechanical part operated by an electrical or electronic circuit, where the electrical or electronic circuit may be operated by a software application or a firmware application executed by one or more processors. The one or more processors may be internal or external to the device and may execute at least a portion of the software or firmware application. As yet another example, a component may be a device that provides a specific function through an electronic component (without mechanical parts); the electronic component may include one or more processors therein to execute software and/or firmware that at least partially imparts functionality to the electronic component.
The use of the word exemplary is intended to present concepts in a concrete fashion. As used in this application, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. Furthermore, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Furthermore, to the extent that the terms "comprising," "including," "having," "with," "containing," or variants thereof are used in the description and claims, these terms are intended to be inclusive in a manner similar to the term "comprising."
As used herein, the term "circuitry" may refer to or be part of or include the following: an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable hardware components that provide the described functionality. In some embodiments, the circuitry may be implemented in or the functionality associated with one or more software or firmware modules. In some embodiments, the circuitry may comprise logic that is at least partially operable in hardware.
The embodiments described herein may be implemented as a system using any suitable configuration of hardware and/or software. Fig. 1 illustrates components of a network according to some embodiments. In various aspects, some or all of the components shown in connection with fig. 1 may be implemented as Virtual Network Functions (VNFs) in connection with various aspects described herein. An Evolved Packet Core (EPC) network 100 is shown to include a Home Subscriber Server (HSS) 110, a Mobility Management Entity (MME) 120, a Serving Gateway (SGW) 130, a Packet Data Network (PDN) Gateway (PGW) 140, and a Policy and Charging Rules Function (PCRF) 150.
HSS 110 includes one or more databases for network users that include subscription-related information for supporting network entity handling communication sessions. For example, HSS 110 may provide support for routing/roaming, authentication, authorization, naming/address resolution, location dependencies, and the like. Depending on the number of mobile subscribers, the capacity of the devices, the organization of the network, etc., the EPC network 100 may include one or several HSS 110.
The MME 120 is functionally similar to the control plane of a legacy serving General Packet Radio Service (GPRS) support node (SGSN). MME 120 manages mobility aspects in access (e.g., gateway selection and tracking area list management). EPC network 100 may include one or several MMEs 120.
The SGW 130 terminates the interface towards the Evolved UMTS (Universal Mobile Telecommunications System) Terrestrial Radio Access Network (E-UTRAN) and routes data packets between the E-UTRAN and the EPC network 100. Furthermore, SGW 130 may be a local mobility anchor for inter-eNodeB handover and may also provide anchoring for inter-3GPP mobility. Other responsibilities may include lawful interception, charging, and some policy enforcement.
PGW 140 terminates the SGi interface towards the PDN. PGW 140 routes data packets between EPC network 100 and external networks, and may be a node for policy enforcement and charging data collection. PCRF 150 is the policy and charging control element of EPC network 100. In a non-roaming scenario, there may be a single PCRF associated with an Internet Protocol Connectivity Access Network (IP-CAN) session of a User Equipment (UE) in a Home Public Land Mobile Network (HPLMN). In a roaming scenario with local breakout of traffic, there may be two PCRFs associated with the IP-CAN session of the UE: a home PCRF (H-PCRF) within the HPLMN and a visited PCRF (V-PCRF) within the Visited Public Land Mobile Network (VPLMN). PCRF 150 may be communicatively coupled to an application server (alternatively referred to as an Application Function (AF)). In general, an application server is an element that provides applications using Internet Protocol (IP) bearer resources (e.g., UMTS Packet Service (PS) domain, Long Term Evolution (LTE) PS data service, etc.) with the core network. The application server may signal PCRF 150 to indicate a new service flow and select the appropriate Quality of Service (QoS) and charging parameters. PCRF 150 may provision the rules into a Policy and Charging Enforcement Function (PCEF) (not shown) with the appropriate Traffic Flow Template (TFT) and QoS Class Identifier (QCI), which starts the QoS and charging as specified by the application server.
The components of EPC network 100 may be implemented in one physical node or in separate physical nodes. In some embodiments, Network Function Virtualization (NFV) is utilized to virtualize any or all of the above-described network node functions via executable instructions stored in one or more computer-readable storage media (described in further detail below). A logical instantiation of EPC network 100 may be referred to as a network slice 101. A logical instantiation of a portion of EPC network 100 may be referred to as a network sub-slice 102 (e.g., network sub-slice 102 is shown to include PGW 140 and PCRF 150).
Fig. 2 is a block diagram illustrating components capable of reading instructions from a machine-readable or computer-readable medium (e.g., a machine-readable storage medium) and performing any one or more of the methods discussed herein, according to some example embodiments. Specifically, fig. 2 shows a diagrammatic representation of a hardware resource 200, including one or more processors (or processor cores) 210, one or more memory/storage devices 220, and one or more communication resources 230, each of which are communicatively coupled via a bus 240. For embodiments that utilize node virtualization (e.g., NFV), the hypervisor 202 can be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 200.
Processor 210 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP) (e.g., a baseband processor), an Application Specific Integrated Circuit (ASIC), a Radio Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, processor 212 and processor 214. Memory/storage 220 may include main memory, disk storage, or any suitable combination thereof.
The communication resources 230 may include interconnection and/or network interface components or other suitable devices to communicate with one or more peripheral devices 204 and/or one or more databases 206 via the network 208. For example, the communication resources 230 may include wired communication components (e.g., for coupling via a Universal Serial Bus (USB)), cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components.
The instructions 250 may include software, programs, applications, applets, apps, or other executable code for causing at least any of the processors 210 to perform any one or more of the methods discussed herein. The instructions 250 may reside, completely or partially, within at least one of the processors 210 (e.g., within a cache memory of the processor), the memory/storage devices 220, or any suitable combination thereof. Furthermore, any portion of the instructions 250 may be transferred to the hardware resources 200 from any combination of the peripheral devices 204 and/or the databases 206. Accordingly, the memory of the processors 210, the memory/storage devices 220, the peripheral devices 204, and the databases 206 are examples of computer-readable and machine-readable media.
Referring to FIG. 3, illustrated is an example architecture that facilitates generation and communication of performance measurements related to virtualized resources in accordance with various aspects described herein. The system shown in fig. 3 includes a Network Manager (NM) 302, an Operations Support System (OSS)/Business Support System (BSS) 304, a network Element Manager (EM) 306, a Domain Manager (DM) 308, Network Function Virtualization (NFV) Management and Orchestration (MANO) components (an NFV Orchestrator (NFVO) 310, a VNF Manager (VNFM) 312, and a Virtualized Infrastructure Manager (VIM) 314), a set of Virtualized Network Functions (VNFs) 316_i virtualized via Virtualized Resources (VR) of an NFV Infrastructure (NFVI) 318 (which may include a hypervisor (e.g., hypervisor 202) and hardware resources (e.g., hardware resources 200)), and (optionally) one or more Network Entities (NEs) 320_i that may implement Physical Network Functions (PNFs). The lines between these entities indicate reference points or other communication connections that may facilitate data exchange between them (some of which are labeled, e.g., reference point Itf-N, etc.).
In 5G (fifth generation), the gNB (next generation node B) is separated into a Central Unit (CU) and a Distributed Unit (DU), wherein the gNB-CU may be a Virtualized Network Function (VNF) that may be deployed in the cloud, while the gNB-DU may be implemented in vertical hardware comprising radio functions for interfacing with UEs (user equipments).
Various embodiments discussed herein may facilitate operations of gNB separation between gNB-CUs and gNB-DUs related to NR (New air interface) RANs (radio access networks). The first set of embodiments discussed herein may facilitate operations of a gNB-CU and a gNB-DU in relation to a network resource model. The second set of embodiments discussed herein may facilitate techniques for updating latency and bandwidth requirements of an interface between a gNB-CU and a gNB-DU. The various embodiments discussed herein may belong to a first set of embodiments, a second set of embodiments, or both.
Referring to fig. 4, illustrated is a block diagram of a system 400 that facilitates techniques for employing network function virtualization in conjunction with a split-function gNB (next generation Node B), which can be employed by a Network Manager (NM), in accordance with various aspects described herein. The system 400 may include: one or more processors 410 (which may include one or more of the processors 210, etc.; each processor may include processing circuitry and one or more interfaces for exchanging data with other circuitry, e.g., a memory interface for exchanging data with the memory 430 and a communication circuitry interface for exchanging data with the communication circuitry 420); communication circuitry 420 (which may facilitate communication of data via one or more reference points, networks, etc., and may include the communication resources 230, etc.); and memory 430 (which may include any of a variety of storage media, may store instructions and/or data associated with at least one of the one or more processors 410 or the communication circuitry 420, and may include the memory/storage devices 220 and/or cache memory of the processor 410, etc.). In some aspects, one or more of the processor 410, the communication circuitry 420, and the memory 430 may be included in a single device, while in other aspects they may be included in different devices (e.g., portions of a distributed architecture). As described in more detail below, in various embodiments, the system 400 may be employed by the NM to facilitate implementation of embodiments of the first and/or second sets of embodiments discussed herein.
Referring to fig. 5, illustrated is a block diagram of a system 500 that facilitates techniques for employing network function virtualization in conjunction with a split-function gNB (next generation Node B), which can be employed by a network Element Manager (EM), in accordance with various aspects described herein. The system 500 may include: one or more processors 510 (which may include one or more of the processors 210, etc.; each processor may include processing circuitry and one or more interfaces for exchanging data with other circuitry, e.g., a memory interface for exchanging data with the memory 530 and a communication circuitry interface for exchanging data with the communication circuitry 520); communication circuitry 520 (which may facilitate communication of data via one or more reference points, networks, etc., and may include the communication resources 230, etc.); and memory 530 (which may include any of a variety of storage media, may store instructions and/or data associated with at least one of the one or more processors 510 or the communication circuitry 520, and may include the memory/storage devices 220 and/or cache memory of the processor 510, etc.). In some aspects, one or more of the processor 510, the communication circuitry 520, and the memory 530 may be included in a single device, while in other aspects they may be included in different devices (e.g., portions of a distributed architecture). As described in more detail below, in various embodiments, the EM may employ the system 500 to facilitate implementation of embodiments of the first and/or second sets of embodiments discussed herein.
Referring to fig. 6, illustrated is a block diagram of a system 600 that facilitates techniques for employing network function virtualization in conjunction with a split-function gNB (next generation Node B), which can be employed by a Network Function Virtualization (NFV) Orchestrator (NFVO), in accordance with various aspects described herein. The system 600 may include: one or more processors 610 (which may include one or more of the processors 210, etc.; each processor may include processing circuitry and one or more interfaces for exchanging data with other circuitry, e.g., a memory interface for exchanging data with the memory 630 and a communication circuitry interface for exchanging data with the communication circuitry 620); communication circuitry 620 (which may facilitate communication of data via one or more reference points, networks, etc., and may include the communication resources 230, etc.); and memory 630 (which may include any of a variety of storage media, may store instructions and/or data associated with at least one of the one or more processors 610 or the communication circuitry 620, and may include the memory/storage devices 220 and/or cache memory of the processor 610, etc.). In some aspects, one or more of the processor 610, the communication circuitry 620, and the memory 630 may be included in a single device, while in other aspects they may be included in different devices (e.g., portions of a distributed architecture). As described in more detail below, in various embodiments, the NFVO may employ the system 600 to facilitate implementation of embodiments of the first and/or second sets of embodiments discussed herein.
Managing interfaces between gNB-CU and gNB-DU of NG (Next Generation) -RAN
In the RAN3 (RAN (radio access network) Working Group 3) specification work for Rel-15 (3GPP (Third Generation Partnership Project) Release 15), option 2 (centralized PDCP (Packet Data Convergence Protocol)/RRC (Radio Resource Control) and distributed RLC (Radio Link Control)/MAC (Medium Access Control)/PHY (physical layer)) has been selected for the higher layer functional split between CUs and DUs.
The operator may manage CUs and DUs via reference point Itf-N, e.g., to manage the coverage, capacity, and/or frequency bands of a particular DU, or to manage features of a particular CU (e.g., mobility, SON (self-organizing network)). Managing CUs and DUs via reference point Itf-N should remain valid regardless of whether (1) the CU and DU are provided by the same vendor, and (2) the DU is managed directly by the EM or through the CU.
To support these capabilities, CUs and DUs can be modeled in the network resource model. The first set of embodiments discussed herein may facilitate operation of a gNB that includes a virtualized portion and a non-virtualized portion.
The use cases and capabilities for establishing the relationship between the virtualized and non-virtualized parts of the gNB are addressed in clauses 4.1.1 and 5.1 of draft 3GPP TR (Technical Report) 32.864; however, potential solutions for these use cases are not yet available. Various embodiments of the first set of embodiments may facilitate solutions for the use cases for establishing a relationship between the virtualized and non-virtualized portions of the gNB.
The use cases and capabilities for establishing the relationship between a CN (core network) NF (network function) and a gNB are addressed in clauses 4.1.2 and 5.1 of draft TR 32.864; however, potential solutions for these use cases are not yet available. Various embodiments of the first set of embodiments may facilitate solutions for the use cases for establishing a relationship between a CN NF and a gNB.
The use cases and capabilities for instantiating an NS (network service) comprising a CN VNF and a gNB are addressed in clauses 4.2.2 and 5.2 of draft TR 32.864. Various embodiments of the first set of embodiments may facilitate solutions for these use cases and capabilities for instantiating an NS including a CN VNF and a gNB.
The use cases and capabilities for updating the transport network requirements are addressed in clauses 4.2.3 and 5.2 of draft TR 32.864. Various embodiments of the first set of embodiments may facilitate solutions for these use cases and capabilities for updating the transport network requirements.
Referring to FIG. 7, illustrated is a diagram of an example architecture including a virtualized gNB-CU 720 and non-virtualized gNB-DUs 710_i, together with a relationship between the gNB-CU 720 and the CN 730, in accordance with various aspects discussed herein.
Referring to fig. 8, illustrated is a diagram illustrating the structure of an NSD (network service descriptor) in accordance with various aspects discussed herein.
NRM (network resource model) for gNB with functional separation
To enable the NM (e.g., employing system 400) to manage, through Itf-N, the gNB-CU, the gNB-DU, and the F1 interface between the gNB-CU and the gNB-DU (e.g., to manage the coverage, capacity, or frequency bands of a gNB-DU; specific features of a gNB-CU (e.g., mobility, SON); or the transport network requirements between a gNB-CU and a gNB-DU), the gNB-CU, the gNB-DU, and the endpoints of the F1 interface can be modeled in the network resource model.
Referring to fig. 9, the containment/naming and association relationships for a gNB with functional split are shown in accordance with various aspects discussed herein. The gNB-CU may be modeled as an IOC (Information Object Class) GNbCuFunction (gNB-CU function), the gNB-DU may be modeled as an IOC GNbDuFunction (gNB-DU function), and an endpoint of the F1 interface may be modeled as an IOC EP_F1 (endpoint of F1). IOC GNbCuFunction and IOC GNbDuFunction may be contained in IOC ManagedElement (managed element). IOC GNbCuFunction may contain one or more EP_F1 instances, each of which may be associated with an IOC GNbDuFunction.
Referring to fig. 10, illustrated is a diagram of inheritance relationships for a gNB with functional split in accordance with various aspects discussed herein. To support virtualization of the gNB-CU, IOC GNbCuFunction may inherit from IOC ManagedFunction, which includes VNF-related attributes. Since the gNB-DU portion of the gNB is not virtualized, IOC GNbDuFunction may inherit from IOC Top. IOC EP_F1 may inherit from IOC EP_RP.
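The containment and inheritance relationships above can be sketched as plain Python classes. This is an illustrative sketch only: the class and attribute names mirror the IOCs named in the text (GNbCuFunction, GNbDuFunction, EP_F1, ManagedFunction, Top, EP_RP), but the Python structure, constructors, and the `farEndEntity`-style attribute are assumptions, not a 3GPP-defined API.

```python
class Top:
    """Base IOC; non-virtualized functions inherit from here."""
    def __init__(self, dn):
        self.dn = dn  # distinguished name of the managed object instance

class ManagedFunction(Top):
    """Base IOC carrying VNF-related attributes for virtualized functions."""
    def __init__(self, dn, vnf_instance_id=None):
        super().__init__(dn)
        self.vnf_instance_id = vnf_instance_id

class EP_RP(Top):
    """Base IOC for a reference-point endpoint."""
    def __init__(self, dn, far_end_entity=None):
        super().__init__(dn)
        self.far_end_entity = far_end_entity  # DN of the remote MOI

class EP_F1(EP_RP):
    """Endpoint of the F1 interface between a gNB-CU and a gNB-DU."""

class GNbDuFunction(Top):
    """Non-virtualized gNB-DU; inherits from Top, not ManagedFunction."""

class GNbCuFunction(ManagedFunction):
    """Virtualized gNB-CU; contains one EP_F1 per associated gNB-DU."""
    def __init__(self, dn, vnf_instance_id=None):
        super().__init__(dn, vnf_instance_id)
        self.ep_f1 = []  # contained EP_F1 instances

    def associate_du(self, du):
        # Create a contained EP_F1 whose far end points at the gNB-DU MOI.
        ep = EP_F1(f"{self.dn},EP_F1={len(self.ep_f1) + 1}",
                   far_end_entity=du.dn)
        self.ep_f1.append(ep)
        return ep

# Example: one virtualized CU associated with one non-virtualized DU.
cu = GNbCuFunction("ManagedElement=1,GNbCuFunction=1", vnf_instance_id="vnf-42")
du = GNbDuFunction("ManagedElement=2,GNbDuFunction=1")
ep = cu.associate_du(du)
```

The key design point mirrored here is the inheritance split: only GNbCuFunction carries VNF-related attributes (via ManagedFunction), while GNbDuFunction derives directly from Top.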
Potential solutions for configuration management
Potential solutions for establishing relationships between virtualized and non-virtualized portions of gNB
This section provides a potential solution to support use cases and capabilities for establishing a relationship between virtualized parts of the gNB (gNB-CU) and non-virtualized parts (gNB-DU).
In various embodiments, the NM (e.g., employing system 400) may establish the relationship by: (1) creating (e.g., via processor 410) an MOI (managed object instance) of an endpoint (e.g., EP_F1) of the F1 interface between the gNB-CU and the gNB-DU; and (2) configuring (e.g., via processor 410) the endpoint MOI to be associated with the far-end MOI of the gNB-DU (e.g., GNbDuFunction) or the far-end MOI of the gNB-CU (e.g., GNbCuFunction).
One or more EMs (e.g., employing system 500) may configure the gNB-CU and/or the gNB-DU (e.g., via processor 510) to establish the relationship. Depending on the embodiment, the EM of the gNB-CU and the EM of the gNB-DU may be the same or different.
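The two-step procedure above can be sketched as follows. This is a toy illustration of the create-then-configure pattern: the operation names (`create_moi`, `set_moi_attribute`), the `farEndEntity` attribute name, and the DN formats are assumptions for the sketch, not the exact 3GPP Itf-N IRP operation signatures.

```python
class ItfN:
    """Toy stand-in for the northbound configuration-management interface."""
    def __init__(self):
        self.mois = {}  # DN -> attribute dict

    def create_moi(self, dn, object_class):
        # Step 1: create the managed object instance.
        self.mois[dn] = {"objectClass": object_class}
        return dn

    def set_moi_attribute(self, dn, name, value):
        # Step 2: configure an attribute on an existing MOI.
        self.mois[dn][name] = value

def establish_cu_du_relationship(itf, cu_dn, du_dn):
    # (1) Create the MOI of the EP_F1 endpoint, name-contained under the CU.
    ep_dn = itf.create_moi(f"{cu_dn},EP_F1=1", "EP_F1")
    # (2) Configure the endpoint MOI to point at the far-end gNB-DU MOI.
    itf.set_moi_attribute(ep_dn, "farEndEntity", du_dn)
    return ep_dn

itf = ItfN()
ep_dn = establish_cu_du_relationship(
    itf,
    "ManagedElement=1,GNbCuFunction=1",
    "ManagedElement=2,GNbDuFunction=1",
)
```

The same pattern would apply symmetrically from the gNB-DU side, with the endpoint configured to reference the far-end gNB-CU MOI instead.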
Potential solutions for establishing a relationship between CN NF and gNB
This section provides a potential solution to support use cases and capabilities for establishing a relationship between CN NF and gNB.
In various embodiments, NMs (e.g., employing system 400) may establish relationships by: (1) Creating (e.g., via processor 410) an MOI of an endpoint of a reference point between CN NF and gNB CU; and (2) configuring (e.g., via processor 410) the endpoint MOI to be associated with the MOI of the CN NF.
The EM (e.g., employing system 500) may configure the CU (e.g., via processor 510) to establish a relationship with the CN NF.
In various aspects, the CN NF may be configured to establish the relationship based on the RAN3 definition of the reference point between the CN NF and the gNB.
Potential solutions for instantiating NS including CN VNF and gNB
This section provides a potential solution to support use cases and capabilities for instantiating an NS that includes a CN VNF and a gNB.
In various embodiments, the NM (e.g., employing system 400) may request (e.g., via a request generated by processor 410 and sent through the Os-Ma-NFVO reference point) that the NFVO (e.g., an NFVO employing system 600, which may receive the request via communication circuit 620 and process it via processor 610) create (e.g., via processor 610) an NS identifier for an NSD, which may reference the VNFD (VNF descriptor) of the CN NF, the VNFD of the virtualized portion of the gNB (e.g., the gNB-CU), and/or the PNFD (PNF (physical network function) descriptor) of the non-virtualized portion of the gNB (e.g., the gNB-DU), if any.
Upon successful creation, the NFVO may respond to the NM with the parameter nsInstanceId (NS instance identifier) (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410).
The NM may request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO instantiate the NS identified by the nsInstanceId, with the parameter additionalParamForVnf (additional parameters for the VNF) providing information about the CN NF and the virtualized portion of the gNB, the parameter pnfInfo (PNF information) providing information about the non-virtualized portion of the gNB, and (possibly) other parameters.
The NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to the NM indicating the start of the NS instantiation process.
The NFVO may instantiate (e.g., via the processor 610) an NS containing the CN VNF and the gNB based on the information provided by the NM and one or more of the NSD, VNF package, and information provided in the PNFD.
NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to NM indicating the result of NS instantiation.
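The create-identifier/instantiate exchange above can be sketched as follows. This is a minimal illustration under stated assumptions: the Nfvo class, its method names, the "ns-N" identifier format, and the notification strings are all hypothetical, standing in for the Os-Ma-NFVO operations and lifecycle notifications named in the text.

```python
# Illustrative sketch (hypothetical API) of the Os-Ma-NFVO exchange above:
# the NM asks the NFVO for an NS identifier and then for instantiation.
# Function and field names are assumptions for illustration only.

import itertools

class Nfvo:
    _ids = itertools.count(1)

    def __init__(self, onboarded_nsds):
        self.onboarded_nsds = set(onboarded_nsds)   # NSD ids known to the NFVO
        self.instances = {}                          # nsInstanceId -> state

    def create_ns_identifier(self, nsd_id):
        """Create an NS identifier for an NSD, returned as nsInstanceId."""
        if nsd_id not in self.onboarded_nsds:
            raise ValueError("unknown NSD: %s" % nsd_id)
        ns_instance_id = "ns-%d" % next(self._ids)
        self.instances[ns_instance_id] = "NOT_INSTANTIATED"
        return ns_instance_id

    def instantiate_ns(self, ns_instance_id, additional_param_for_vnf=None,
                       pnf_info=None, notify=print):
        notify("NS lifecycle change: instantiation started")
        # ... NFVO deploys the CN VNF and gNB-CU VNF, integrates the gNB-DU PNF ...
        self.instances[ns_instance_id] = "INSTANTIATED"
        notify("NS lifecycle change: instantiation result = success")

nfvo = Nfvo(onboarded_nsds={"nsd-gnb"})
ns_id = nfvo.create_ns_identifier("nsd-gnb")
nfvo.instantiate_ns(ns_id, pnf_info={"pnfdId": "gnb-du"})
print(nfvo.instances[ns_id])  # -> INSTANTIATED
```

The two lifecycle notifications correspond to the start-of-instantiation and result-of-instantiation notifications the NFVO sends to the NM in the text.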
Potential solutions for updating transport network requirements
Potential solutions for updating latency and bandwidth requirements
This section provides a potential solution to support use cases and capabilities with respect to updating transport network requirements.
This solution can be applied in cases where both latency and bandwidth requirements are updated.
The NM may invoke an update NSD operation (e.g., via processor 410) to request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the NSD, where the parameter nsdInfoId (NSD information identifier) may indicate the NSD to be updated, and the parameter nsd may indicate the updated NSD, which contains the following attributes to be updated:
1) vnffgd (VNFFG (VNF forwarding graph) descriptor), which may include the following attributes:
a) vnffgdId (VNFFGD identifier), which uniquely identifies VNFFGD;
b) vnfdId (VNFD identifier), which identifies the VNFDs of the constituent VNFs that implement the virtualized portion of the gNB;
c) pnfdId (PNFD identifier), which identifies the PNFDs of the constituent PNFs that implement the non-virtualized portion of the gNB;
d) virtualLinkDesc (virtual link descriptor), which may include the following attributes:
i) virtualLinkDf (virtual link deployment flavour), which may include the following attributes:
(1) qoS (quality of service), which may include a latency attribute, which may indicate latency requirements for the F1 interface.
e) cpdPoolId (connection point descriptor pool identifier), which references a pool of descriptors of connection points attached to the constituent VNFs (implementing the virtualized portion of the gNB) and PNFs (implementing the non-virtualized portion of the gNB).
f) nfpd (network forwarding path descriptor) (optional), which specifies a network forwarding path associated with VNFFG.
2) nsDf (NS deployment flavour), which may include the following attributes:
a) virtualLinkProfile (virtual link profile), which may include the following attributes:
i) maxBitrateRequirements, which indicates the maximum bandwidth requirements for the F1 interface;
ii) minBitrateRequirements, which indicates the minimum bandwidth requirements for the F1 interface;
b) nsInstantiationLevel (NS instantiation level), which indicates the nsLevels (NS levels) within the NS deployment flavour, each NsLevel containing the following attributes:
i) virtualLinkToLevelMapping, which may include bitrateRequirements indicating the bandwidth requirements for the F1 interface.
The NFVO may verify (e.g., via processor 610) whether bitrateRequirements is in the range between minBitrateRequirements and maxBitrateRequirements. If verification passes, the NFVO may return to the NM an nsdInfoId (NSD information identifier) indicating a successful NSD update (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410); otherwise, the NFVO may return to the NM an out-of-range error indicating that bitrateRequirements is outside the range between minBitrateRequirements and maxBitrateRequirements (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410).
The NM may invoke an update NS operation (e.g., via processor 410) to request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the latency and bandwidth requirements for the F1 interface with the following parameters: (a) updateType = "AssocNewNsdVersion" ("associate new NSD version"), for associating the NS instance with the updated NSD; and (b) assocNewNsdVersionData (associated new NSD version data), for indicating the NSD to be associated with the NS instance.
NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to NM indicating the start of an NS update procedure.
The NFVO may update the NS (e.g., via processor 610) based on the information provided by the NM and the information provided in the updated NSD.
NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to NM indicating the result of an NS update.
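The two-step update above can be sketched as plain parameter dictionaries. The payload shapes, literal values, and the nested dict layout are illustrative assumptions; only the parameter and attribute names (nsdInfoId, nsDf, updateType, assocNewNsdVersionData, and so on) come from the text.

```python
# Sketch (illustrative payload shapes) of the update NSD / update NS
# requests described above, plus the NFVO-side range check.

# Step 1: update NSD -- the nsDf carries the bandwidth range and per-level
# bitrate; the vnffgd/virtualLinkDesc (not shown) carries the F1 latency.
update_nsd_request = {
    "nsdInfoId": "nsd-info-1",              # NSD to be updated
    "nsd": {
        "nsDf": {
            "virtualLinkProfile": {
                "minBitrateRequirements": 1_000,
                "maxBitrateRequirements": 10_000,
            },
            "nsInstantiationLevel": [
                {"virtualLinkToLevelMapping": {"bitrateRequirements": 5_000}}
            ],
        },
    },
}

# Step 2: update NS -- associate the NS instance with the updated NSD.
update_ns_request = {
    "updateType": "AssocNewNsdVersion",
    "assocNewNsdVersionData": {"newNsdInfoId": "nsd-info-1"},
}

# The NFVO-side verification described in the text: bitrateRequirements
# must lie between minBitrateRequirements and maxBitrateRequirements.
profile = update_nsd_request["nsd"]["nsDf"]["virtualLinkProfile"]
rate = update_nsd_request["nsd"]["nsDf"]["nsInstantiationLevel"][0][
    "virtualLinkToLevelMapping"]["bitrateRequirements"]
in_range = (profile["minBitrateRequirements"] <= rate
            <= profile["maxBitrateRequirements"])
print(in_range)  # -> True
```

If the check fails, the NFVO returns the out-of-range error rather than the nsdInfoId, as described above.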
Potential solutions for updating bandwidth requirements
This section provides a potential solution to support use cases and capabilities with respect to updating transport network requirements.
This solution can be applied to the case of updating only the bandwidth requirements.
The NM may invoke (e.g., via processor 410) an update NSD operation to request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the NSD with the following parameters: the parameter nsdInfoId, indicating the NSD to be updated; and the parameter nsd, indicating the updated NSD, which may include the following attributes to be updated:
1) nsDf, which may include the following attributes:
a) virtualLinkProfile, which may include the following attributes:
i) maxBitrateRequirements, which indicates the maximum bandwidth requirements for the F1 interface;
ii) minBitrateRequirements, which indicates the minimum bandwidth requirements for the F1 interface;
b) nsInstantiationLevel, which may indicate the nsLevels within the NS deployment flavour, where each NsLevel may include the following attributes:
i) virtualLinkToLevelMapping, which may include bitrateRequirements indicating the bandwidth requirements for the F1 interface.
The NFVO may verify (e.g., via processor 610) whether bitrateRequirements is in the range between minBitrateRequirements and maxBitrateRequirements. If verification passes, the NFVO may return to the NM an nsdInfoId indicating a successful NSD update (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410); otherwise, the NFVO may return to the NM an out-of-range error indicating that bitrateRequirements is outside the range between minBitrateRequirements and maxBitrateRequirements (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410).
The NM may invoke an update NS operation (e.g., via processor 410) to request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the bandwidth requirements for the F1 interface with the following parameter: (a) updateType = "ChangeNsDf" ("change NS deployment flavour"), for changing the deployment flavour of the NS instance.
NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to NM indicating the start of an NS update procedure.
The NFVO may update the NS (e.g., via processor 610) based on the information provided by the NM and the information provided in the updated NSD.
NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to NM indicating the result of an NS update.
Potential solutions for updating latency requirements
This section provides a potential solution to support use cases and capabilities with respect to updating transport network requirements.
This solution can be applied to the case of updating only the latency requirements.
The NM may invoke (e.g., via processor 410) an update NSD operation to request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the NSD with the following parameters: the parameter nsdInfoId, indicating the NSD to be updated; and the parameter nsd, indicating the updated NSD, which includes the following attributes to be updated:
1) vnffgd, which may include the following attributes:
a) vnffgdId, which uniquely identifies VNFFGD;
b) vnfdId, which identifies the VNFDs of the constituent VNFs that implement the virtualized portion of the gNB;
c) pnfdId, which identifies the PNFDs of the constituent PNFs that implement the non-virtualized portion of the gNB;
d) virtualLinkDesc, which may include the following attributes:
i) virtualLinkDf, which may include the following attributes:
(1) qoS, which may include a latency attribute indicating latency requirements for the F1 interface.
e) cpdPoolId, which references a pool of descriptors of connection points attached to the constituent VNFs (implementing the virtualized portion of the gNB) and PNFs (implementing the non-virtualized portion of the gNB);
f) nfpd (optional), which specifies a network forwarding path associated with VNFFG.
The NFVO may update the VNFFGD (e.g., via processor 610) and may return to the NM an nsdInfoId indicating the updated NSD (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410).
The NM may invoke an update NS operation (e.g., via processor 410) to request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the latency requirements for the F1 interface with the following parameters: (a) updateType = "UpdateVnffg", for updating the VNFFG (including the VL between the gNB-CU and the gNB-DU) for the NS instance; and (b) updateVnffg, for indicating the updated VNFFGD for the NS instance.
NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to NM indicating the start of an NS update procedure.
The NFVO may update the NS (e.g., via processor 610) based on the information provided by the NM and the information provided in the updated NSD.
NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to NM indicating the result of an NS update.
NSD on-line and update
The second set of embodiments discussed herein may facilitate techniques for updating latency and bandwidth requirements of an interface between a gNB-CU and a gNB-DU.
NG-RAN deployment
The operator may deploy an NG-RAN including a gNB-CU and a gNB-DU as follows: (1) deploy the gNB-DU with RF devices; (2) on-line the NSD (network service descriptor) for the gNB-CU VNF with the latency and bandwidth requirements for the interface between the gNB-CU and the gNB-DU; and (3) instantiate the NS based on the on-lined NSD, which references the VNFD of the gNB-CU and/or the PNFD of the gNB-DU.
Referring to fig. 11, illustrated is a flow chart of an example method of NSD online in accordance with various aspects discussed herein.
At 1101, the NM (e.g., employing system 400) may request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO (e.g., employing system 600) on-line the NSD, with the virtualLinkDesc and nsDf attributes specifying the latency and bandwidth requirements, respectively, for the interface between the gNB-CU and the gNB-DU.
The nsDf attribute may define the bandwidth requirements in terms of minBitrateRequirements, maxBitrateRequirements, and bitrateRequirements, where bitrateRequirements should be in the range between minBitrateRequirements and maxBitrateRequirements for a particular NS deployment flavour. At 1102, the NFVO may verify (e.g., via processor 610) the NS deployment flavour (e.g., indicated via the nsDf attribute) to determine whether bitrateRequirements is in the range between minBitrateRequirements and maxBitrateRequirements.
At 1103, if bitrateRequirements is within the range between minBitrateRequirements and maxBitrateRequirements, the NFVO may on-line the NSD (e.g., via processor 610) and may return the nsdInfoId to the NM (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410).
Otherwise, if bitrateRequirements is not within the range between minBitrateRequirements and maxBitrateRequirements, at 1104 the NFVO may return to the NM an out-of-range error (for the bitrateRequirements parameter) to indicate that the NSD on-line operation failed (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410).
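The NFVO-side check at 1102-1104 can be sketched as a small validation routine. The function name, the return-value shapes, and the "nsd-info-42" identifier are illustrative assumptions; only the attribute names and the range rule come from the text.

```python
# A minimal sketch of the NFVO-side check at 1102-1104 above: verify that
# bitrateRequirements lies between minBitrateRequirements and
# maxBitrateRequirements before on-lining the NSD. The return shapes are
# illustrative assumptions.

def on_line_nsd(ns_df):
    """Return ('nsdInfoId', id) on success, or ('error', reason) if out of range."""
    profile = ns_df["virtualLinkProfile"]
    lo = profile["minBitrateRequirements"]
    hi = profile["maxBitrateRequirements"]
    for level in ns_df["nsInstantiationLevel"]:
        rate = level["virtualLinkToLevelMapping"]["bitrateRequirements"]
        if not (lo <= rate <= hi):
            return ("error",
                    "bitrateRequirements %s out of range [%s, %s]" % (rate, lo, hi))
    return ("nsdInfoId", "nsd-info-42")

ok = on_line_nsd({
    "virtualLinkProfile": {"minBitrateRequirements": 100,
                           "maxBitrateRequirements": 1000},
    "nsInstantiationLevel": [
        {"virtualLinkToLevelMapping": {"bitrateRequirements": 500}}],
})
bad = on_line_nsd({
    "virtualLinkProfile": {"minBitrateRequirements": 100,
                           "maxBitrateRequirements": 1000},
    "nsInstantiationLevel": [
        {"virtualLinkToLevelMapping": {"bitrateRequirements": 5000}}],
})
print(ok[0], bad[0])  # -> nsdInfoId error
```

The success branch corresponds to 1103 (return nsdInfoId) and the failure branch to 1104 (return the out-of-range error).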
Referring to fig. 12, illustrated is a flow chart of an example method for NSD update in accordance with various aspects discussed herein. The operator may employ the method of fig. 12 in various situations to update transport network requirements (e.g., latency and bandwidth) of the NG-RAN.
At 1201, the NM may request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the NSD with the following attributes, which specify the latency and bandwidth requirements for the interface between the gNB-CU and the gNB-DU, respectively: (a) virtualLinkDesc, which may include the latency requirements; (b) vnffgd (VNF forwarding graph descriptor), which may include the virtualLinkDesc; and (c) nsDf, which may include the bandwidth requirements (via the bitrateRequirements parameter) and the range of acceptable bandwidth requirements (e.g., via the minBitrateRequirements and maxBitrateRequirements parameters).
At 1202, the NFVO may verify (e.g., via processor 610) whether the bandwidth requirements are within range. If so, the NFVO may update the NSD (e.g., via processor 610) and may return an nsdInfoId (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) to indicate that the NSD has been successfully updated. Otherwise, the NFVO may return to the NM an out-of-range error (for the bitrateRequirements parameter) (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) to indicate that the NSD update operation failed.
At 1203, if the NSD has been successfully updated, the NM may request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the NS with the successfully updated NSD.
At 1204, the NFVO may return a notification (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) to the NM to indicate that the NS update has started.
At 1205, the NFVO may update the NS (e.g., via processor 610).
At 1206, the NFVO may return a notification (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) to the NM to indicate the result of the NS update.
Life cycle management use case
NSD on-line with NS deployment flavour
Problem(s)
Based on the function separation option, the interface between the gNB-CU and the gNB-DU has specific latency and bandwidth requirements. The NSD information element includes the virtualLinkDesc and nsDf attributes, which are used in the NSD on-line operation to specify the latency and bandwidth requirements, respectively, for the interface between the gNB-CU and the gNB-DU.
Preconditions
(a) The operator decides to on-line the NSD, and (b) the underlying transport network requirements for the selected CU-DU function separation option are known.
Description
The NM requests (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO on-line the NSD, with the virtualLinkDesc and nsDf attributes specifying the latency and bandwidth requirements, respectively, for the interface between the gNB-CU and the gNB-DU.
The nsDf attribute defines the bandwidth requirements in terms of minBitrateRequirements, maxBitrateRequirements, and bitrateRequirements, where bitrateRequirements should be in the range between minBitrateRequirements and maxBitrateRequirements.
If bitrateRequirements is in range, the NFVO may on-line the NSD (e.g., via processor 610) and may return (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) to the NM an nsdInfoId indicating that the NSD on-line operation was successful; otherwise, the NFVO may return an out-of-range error (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) to the NM to indicate that the NSD on-line operation failed.
Post-conditions
The NSD has or has not been successfully on-lined, depending on whether the bandwidth information is valid.
NSD update with NS deployment flavour
Problem(s)
Based on the function separation option, the interface between the gNB-CU and the gNB-DU has specific latency and bandwidth requirements. The NSD information element includes the virtualLinkDesc and nsDf attributes, which may be used in an NSD update operation to change the latency and bandwidth requirements, respectively, for the interface between the gNB-CU and the gNB-DU.
The NSD may be updated only when the bandwidth information contained in the nsDf attribute is valid.
Preconditions
(a) The operator decides to update the NSD, and (b) the underlying transport network requirements for the selected CU-DU function separation option are known.
Description
The NM may request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the NSD, with the virtualLinkDesc and nsDf attributes specifying the latency and bandwidth requirements, respectively, for the interface between the gNB-CU and the gNB-DU.
The nsDf attribute defines the bandwidth requirements in terms of minBitrateRequirements, maxBitrateRequirements, and bitrateRequirements, where bitrateRequirements should be in the range between minBitrateRequirements and maxBitrateRequirements.
If bitrateRequirements is in range, the NFVO may update the NSD (e.g., via processor 610) and may return to the NM an nsdInfoId indicating that the NSD update operation was successful (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410); otherwise, the NFVO may return an out-of-range error (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) to the NM to indicate that the NSD update operation failed.
Post-conditions
The NSD has or has not been successfully updated, depending on whether the bandwidth information is valid.
REQ-VRAN_Mgmt_LCM-CON-x: The 3GPP management system should be able to receive the result of the NSD on-line operation from the ETSI MANO system (e.g., success if the bandwidth parameter is in range, or an error with a reason indicating that the bandwidth parameter is out of range).
REQ-VRAN_Mgmt_LCM-CON-y: The 3GPP management system should be able to receive the result of the NSD update operation from the ETSI MANO system (e.g., success if the bandwidth parameter is in range, or an error with a reason indicating that the bandwidth parameter is out of range).
Potential solution for on-lining the NSD for the gNB
This section provides a potential solution to support use cases and capabilities for on-lining the NSD for the gNB.
This section can be applied to the case where the following items have already been on-lined at the NFVO: (a) the VNF package for the virtualized portion of the gNB; (b) the PNFD of the non-virtualized portion of the gNB; (c) the VNF packages of other VNFs (besides the virtualized portion of the gNB), if they are constituent parts of the NS; and (d) the PNFDs of other PNFs (besides the non-virtualized portion of the gNB), if they are constituent parts of the NS.
The NM may request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO invoke an on-line NSD operation, where the NSD may include the following attributes:
1) A reference to the VNFD of the virtualized portion of the gNB;
2) A reference to the PNFD of the non-virtualized portion of the gNB;
3) References to the VNFDs of other constituent VNFs of the NS (if present);
4) References to the PNFDs of other PNFs (besides the non-virtualized portion of the gNB) that are part of the NS;
5) References to the PNFDs of any other constituent PNFs of the NS (if present);
6) vnffgd, which may include the following attributes:
a) vnffgdId, which uniquely identifies VNFFGD;
b) vnfdId, which identifies the VNFDs of the constituent VNFs that implement the virtualized portion of the gNB;
c) pnfdId, which identifies the PNFDs of the constituent PNFs that implement the non-virtualized portion of the gNB;
d) virtualLinkDesc, which includes the following attributes:
i) virtualLinkDf, which includes the following attributes:
(1) qoS, which includes a latency attribute indicating latency requirements for the F1 interface.
e) cpdPoolId, which references a pool of descriptors of connection points attached to the constituent VNFs (implementing the virtualized portion of the gNB) and PNFs (implementing the non-virtualized portion of the gNB).
f) nfpd (optional), which specifies a network forwarding path associated with VNFFG.
7) nsDf, which includes the following attributes:
a) virtualLinkProfile, which includes the following attributes:
i) maxBitrateRequirements, which indicates the maximum bandwidth requirements for the F1 interface;
ii) minBitrateRequirements, which indicates the minimum bandwidth requirements for the F1 interface;
b) nsInstantiationLevel, which indicates the nsLevels within the NS deployment flavour, each NsLevel including the following attributes:
i) virtualLinkToLevelMapping, which includes bitrateRequirements indicating the bandwidth requirements for the F1 interface.
The NFVO may verify (e.g., via processor 610) whether bitrateRequirements is in the range between minBitrateRequirements and maxBitrateRequirements. If verification passes, the NFVO may return (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) to the NM an nsdInfoId indicating a successful NSD on-line; otherwise, the NFVO may return to the NM an out-of-range error indicating that bitrateRequirements is outside the range between minBitrateRequirements and maxBitrateRequirements (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410).
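The attribute list above can be sketched as a helper that assembles the NSD payload for the on-line operation. The builder function, the identifier strings, and the nested dict layout are illustrative assumptions; only the attribute names (vnffgd, virtualLinkDesc, nsDf, cpdPoolId, and so on) follow the text.

```python
# Sketch (illustrative layout) assembling the on-line NSD request described
# above: VNFD/PNFD references, the vnffgd with the F1 latency, and the nsDf
# with the bandwidth range. Helper and id strings are assumptions.

def build_gnb_nsd(cu_vnfd_id, du_pnfd_id, latency, min_bw, max_bw, bw):
    return {
        "vnfdId": [cu_vnfd_id],        # VNFD of the virtualized part (gNB-CU)
        "pnfdId": [du_pnfd_id],        # PNFD of the non-virtualized part (gNB-DU)
        "vnffgd": {
            "vnffgdId": "vnffgd-1",
            "vnfdId": cu_vnfd_id,
            "pnfdId": du_pnfd_id,
            # virtualLinkDf/qos carries the F1 latency requirement
            "virtualLinkDesc": {"virtualLinkDf": {"qos": {"latency": latency}}},
            "cpdPoolId": "cpd-pool-1",
        },
        "nsDf": {
            "virtualLinkProfile": {
                "minBitrateRequirements": min_bw,
                "maxBitrateRequirements": max_bw,
            },
            "nsInstantiationLevel": [
                {"virtualLinkToLevelMapping": {"bitrateRequirements": bw}}
            ],
        },
    }

nsd = build_gnb_nsd("vnfd-cu", "pnfd-du", "2ms", 100, 1000, 500)
print(nsd["vnffgd"]["virtualLinkDesc"]["virtualLinkDf"]["qos"]["latency"])  # -> 2ms
```

The NFVO would then apply the range check described above to the nsDf portion of this payload before accepting the on-line operation.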
Potential solution for adding VNFFGD for gNB
This section provides a potential solution to support use cases and capabilities for adding VNFFGD for the gNB.
This section may apply to the case where the NSD has already been on-lined but does not yet contain a VNFFGD.
The NM may invoke an update NSD operation (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) to request that the NFVO add the VNFFGD for the gNB, with the following parameters:
1) nsdInfoId, which indicates NSD to be updated;
2) nsd, which indicates the NSD containing the VNFFGD to be added, where the VNFFGD includes the following attributes:
a) vnffgdId, which uniquely identifies VNFFGD;
b) vnfdId, which identifies the VNFDs of the constituent VNFs that implement the virtualized portion of the gNB;
c) pnfdId, which identifies the PNFDs of the constituent PNFs that implement the non-virtualized portion of the gNB;
d) virtualLinkDesc, which includes the following attributes:
i) virtualLinkDf, which includes the following attributes:
(1) qoS, which includes a latency attribute indicating latency requirements for the F1 interface.
e) cpdPoolId, which references a pool of descriptors of connection points attached to the constituent VNFs (implementing the virtualized portion of the gNB) and PNFs (implementing the non-virtualized portion of the gNB).
f) nfpd (optional), which specifies a network forwarding path associated with VNFFG.
The NFVO may update the NSD with the added VNFFGD (e.g., via processor 610), and, upon a successful NSD update, may respond to the NM with the parameter nsdInfoId indicating the updated NSD (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410).
Potential solution for adding VNFFG for gNB
This section provides a potential solution to support use cases and capabilities for adding VNFFG for the gNB.
This section can be applied to the case where the NSD, including the vnffgd and nsDf attributes (which carry the latency and bandwidth information of the virtual link, respectively), has already been on-lined at the NFVO, and an NS based on this NSD has been instantiated.
If the NSD has been on-lined but has no associated nsDf, the NM may request (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) that the NFVO update the NSD to include the following attributes, and, upon a successful NSD update, the NFVO may respond to the NM with the parameter nsdInfoId indicating the updated NSD (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410):
1) nsDf, comprising the following attributes:
a) virtualLinkProfile, which includes the following attributes:
i) maxBitrateRequirements, which indicates the maximum bandwidth requirements for the F1 interface;
ii) minBitrateRequirements, which indicates the minimum bandwidth requirements for the F1 interface;
b) nsInstantiationLevel, which indicates the nsLevels within the NS deployment flavour, each NsLevel including the following attributes:
i) virtualLinkToLevelMapping, which contains bitrateRequirements indicating the bandwidth requirements for the F1 interface.
The NM may invoke an update NS operation (e.g., via a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-NFVO reference point, received via communication circuit 620, and processed by processor 610) to request that the NFVO add a VNFFG connecting the virtualized and non-virtualized portions of the gNB, with the following parameters: (a) updateType = "AddVnffg"; and (b) addVnffg, which includes the vnffgdId and the vnfInstanceId of the virtualized portion of the gNB, to create the VNFFG instance in the NS.
NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to NM indicating the start of an NS update procedure.
The NFVO may update the NS (e.g., via processor 610) based on the information provided by the NM and the information provided in the updated NSD.
NFVO may send (e.g., a notification generated via processor 610, sent via communication circuit 620 through Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) an NS lifecycle change notification to NM indicating the result of an NS update.
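The "add VNFFG" update above can be sketched as follows. The function signature, the NS-instance dict layout, and the identifier strings are illustrative assumptions; only the parameter names (updateType = "AddVnffg", addVnffg, vnffgdId, vnfInstanceId) come from the text.

```python
# Sketch (hypothetical API) of the "add VNFFG" update described above: the
# NM calls update NS with updateType = "AddVnffg", and the NFVO creates a
# VNFFG instance linking the virtualized and non-virtualized gNB parts.

def update_ns(ns_instance, update_type, add_vnffg=None, notify=print):
    notify("NS lifecycle change: update started")
    if update_type == "AddVnffg":
        ns_instance.setdefault("vnffg", []).append({
            "vnffgdId": add_vnffg["vnffgdId"],
            "vnfInstanceId": add_vnffg["vnfInstanceId"],  # gNB-CU VNF instance
        })
    notify("NS lifecycle change: update result = success")
    return ns_instance

ns = update_ns(
    {"nsInstanceId": "ns-1"},
    update_type="AddVnffg",
    add_vnffg={"vnffgdId": "vnffgd-1", "vnfInstanceId": "vnf-cu-1"},
)
print(len(ns["vnffg"]))  # -> 1
```

The two notify calls stand in for the lifecycle change notifications the NFVO sends to the NM at the start and end of the NS update procedure.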
Additional embodiments
Examples herein may include subject matter such as a method; a module for performing the acts or blocks of the method; at least one machine-readable medium comprising executable instructions that, when executed by a machine (e.g., a processor with memory, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), etc.), cause the machine to perform the acts of the method; or an apparatus or system for concurrent communication using a variety of communication techniques, in accordance with the described embodiments and examples.
A first example embodiment (e.g., related to management of Itf-N reference points separate from functionality in conjunction with the gNB) according to the first set of embodiments described herein includes a network manager (NM, e.g., employing system 400) comprising: one or more processors (e.g., processor 410) configured to: send a request to establish a relationship between a gNB-CU (gNB central unit) and a gNB-DU (gNB distributed unit) (e.g., a request generated by processor 410, sent via communication circuitry 420 through an Itf-N reference point, received via communication circuitry 520, and processed by processor 510) to an element manager (EM, e.g., employing system 500); and receive a response from the EM regarding the setup result (e.g., a response generated by processor 510, sent through the Itf-N reference point via communication circuit 520, received via communication circuit 420, and processed by processor 410).
The second example embodiment of the first set of embodiments may include the first example embodiment, wherein the request is made by creating and/or configuring (e.g., via processor 410) a Managed Object Instance (MOI) of an Information Object Class (IOC) that models an endpoint of a reference point between the gNB-CU and the gNB-DU.
A third example embodiment of the first set of embodiments may include any of the first through second example embodiments, wherein the Information Object Class (IOC) is EP_F1.
The fourth example embodiment of the first set of embodiments may include any of the first to third example embodiments, wherein the MOI of the IOC modeling the endpoint is included within the MOI of the IOC modeling one of the gNB-CU or the gNB-DU.
The fifth example embodiment of the first set of embodiments may include any of the first to fourth example embodiments, wherein an MOI of the IOC modeling an endpoint of a reference point between the gNB-CU and the gNB-DU is associated with an MOI of the IOC modeling one of the gNB-DU or the gNB-CU.
A sixth example embodiment of the first set of embodiments may include an EM (e.g., employing system 500) comprising one or more processors (e.g., processor 510) configured to: receive a request from the NM (e.g., employing system 400) to establish a relationship between the gNB-CU and the gNB-DU (e.g., a request generated by processor 410, sent via communication circuit 420 through the Itf-N reference point, received via communication circuit 520, and processed by processor 510); configure the gNB-CU and/or the gNB-DU (e.g., via processor 510) to establish the relationship; and send a response regarding the setup result (e.g., a response generated by processor 510, sent over the Itf-N reference point via communication circuit 520, received via communication circuit 420, and processed by processor 410) to the NM.
A seventh example embodiment of the first set of embodiments includes a Network Manager (NM) (e.g., employing system 400) comprising one or more processors (e.g., processor 410) configured to: send a request to establish a relationship between a gNB-CU (gNB central unit) and a Core Network (CN) Network Function (NF) (e.g., a request generated by processor 410, sent via communication circuit 420 through an Itf-N reference point, received via communication circuit 520, and processed by processor 510) to an Element Manager (EM); and receive a response from the EM regarding the setup result (e.g., a response generated by processor 510, sent through the Itf-N reference point via communication circuit 520, received via communication circuit 420, and processed by processor 410).
An eighth example embodiment of the first set of embodiments may include the seventh example embodiment, wherein the request is made by creating and/or configuring (e.g., via the processor 410) a Managed Object Instance (MOI) of an Information Object Class (IOC) that models an endpoint of a reference point between the gNB-CU and the CN NF.
The ninth example embodiment of the first set of embodiments may include any of the seventh to eighth example embodiments, wherein the MOI of the IOC modeling the endpoint is included within the MOI of the IOC modeling one of the gNB-CU or the CN NF.
The tenth example embodiment of the first set of embodiments may include any of the seventh to ninth example embodiments, wherein the MOI of the IOC modeling an endpoint of a reference point between the gNB-CU and the CN NF is associated with the MOI of the IOC modeling one of the CN NF or the gNB-CU.
An eleventh example embodiment of the first set of embodiments may include an EM (e.g., employing system 500) comprising one or more processors (e.g., processor 510) configured to: receive a request from the NM (e.g., employing system 400) to establish a relationship between the gNB-CU and the CN NF (e.g., a request generated by processor 410, sent via communication circuit 420 through the Itf-N reference point, received via communication circuit 520, and processed by processor 510); configure the gNB-CU and/or the CN NF (e.g., via the processor 510) to establish the relationship; and send a response regarding the setup result (e.g., a response generated by processor 510, sent over the Itf-N reference point via communication circuit 520, received via communication circuit 420, and processed by processor 410) to the NM.
The twelfth example embodiment of the first set of embodiments may include any of the first through eleventh example embodiments, wherein the IOC modeling the reference point between the gNB-CU and the gNB-DU and the IOC modeling the reference point between the gNB-CU and the CN NF inherit from the IOC EP_RP.
A thirteenth example embodiment of the first set of embodiments includes a Network Manager (NM) (e.g., employing system 400) comprising one or more processors (e.g., processor 410) configured to: send a first request (e.g., a request generated via processor 410, sent via communication circuit 420 through an Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) to a network function virtualization orchestrator (NFVO) (e.g., employing system 600) to create an NS identifier for a Network Service Descriptor (NSD) that references one or more of a Virtualized Network Function Descriptor (VNFD) of the CN NF, a VNFD of the virtualized portion of the gNB, and, if present, a Physical Network Function Descriptor (PNFD) of the non-virtualized portion of the gNB; receive the nsInstanceId from the NFVO (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410); and/or send a second request (e.g., a request generated via processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) to the NFVO, with the following parameters, to instantiate the NS identified by the nsInstanceId: a parameter additionalParamForVnf, which provides information for the CN NF and the virtualized portion of the gNB; and, optionally, a parameter pnfInfo, which provides information for the non-virtualized portion of the gNB; receive an NS lifecycle change notification from the NFVO indicating the start of the NS instantiation process (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410); and receive an NS lifecycle change notification from the NFVO indicating the result of the NS instantiation (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410).
A fourteenth example embodiment of the first set of embodiments includes an NFVO (e.g., employing system 600) comprising one or more processors (e.g., processor 610) configured to: receive a first request (e.g., generated by processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) from an NM (e.g., employing system 400) to create an NS identifier for a Network Service Descriptor (NSD) that references one or more of a Virtualized Network Function Descriptor (VNFD) of the CN NF, a VNFD of the virtualized portion of the gNB, and, if present, a Physical Network Function Descriptor (PNFD) of the non-virtualized portion of the gNB; create the NS identifier and save it to the nsInstanceId (e.g., via processor 610); send the nsInstanceId to the NM (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410); and/or receive a second request (e.g., generated by processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) from the NM to instantiate the NS identified by the nsInstanceId, wherein a parameter additionalParamForVnf provides information for the CN NF and the virtualized portion of the gNB, and, optionally, a parameter pnfInfo provides information for the non-virtualized portion of the gNB; send an NS lifecycle change notification to the NM indicating the start of the NS instantiation process (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410); and send an NS lifecycle change notification to the NM indicating the result of the NS instantiation (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410).
A fifteenth example embodiment of the first set of embodiments includes a Network Manager (NM) (e.g., employing system 400) comprising one or more processors (e.g., processor 410) configured to: invoke an update NSD operation (e.g., via a request generated by processor 410, sent via communication circuit 420 through an Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) toward a Network Function Virtualization Orchestrator (NFVO) (e.g., employing system 600) to update the NSD, which includes the following information:
1) vnffgd (VNF forwarding graph descriptor), which may include the following attributes:
a) vnffgdId (VNFFGD identifier), which uniquely identifies VNFFGD;
b) vnfdId (VNFD identifier), which identifies the VNFDs of the VNFs that implement the virtualized portion of the gNB;
c) pnfdId (PNFD identifier), which identifies the PNFDs of the PNFs that implement the non-virtualized portion of the gNB;
d) virtualLinkDesc (virtual link descriptor), which may include the following attributes:
i) virtualLinkDf (virtual link deployment style), which may include the following attributes:
(1) qoS (quality of service), which may include a latency attribute that may indicate latency requirements for the F1 interface.
e) cpdPoolId (connection point descriptor pool identifier), which references a pool of descriptors of connection points attached to the constituent VNFs (implementing the virtualized portion of the gNB) and PNFs (implementing the non-virtualized portion of the gNB);
f) nfpd (network forwarding path descriptor) (optional), which specifies a network forwarding path associated with VNFFG.
2) nsDf (NS deployment style), which may include the following attributes:
a) virtualLinkProfile (virtual link profile), which may include the following attributes:
i) a maxBitrateRequirements attribute indicating the maximum bandwidth requirements for the F1 interface;
ii) a minBitrateRequirements attribute indicating the minimum bandwidth requirements for the F1 interface;
b) an nsInstantiationLevel (NS instantiation level) for indicating the NsLevels (NS levels) within the NS deployment style, each NsLevel containing the following attributes:
i) virtualLinkToLevelMapping, which may include bitRateRequirements indicating the bandwidth requirements for the F1 interface;
receive a response from the NFVO (e.g., generated by processor 610, sent through the Os-Ma-Nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating one of the following: the update NSD operation succeeded, in which case an nsdInfoId is returned, or the update NSD operation failed due to an out-of-range error (of the bitRateRequirements parameter); and/or, if the update NSD operation succeeded, invoke an update NS operation (e.g., via processor 410) to request the NFVO to associate the NS with the newly updated NSD (e.g., where the request may be generated by processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) with: (a) updateType = "AssocNewNsdVersion", for associating the NS instance with the updated NSD; and (b) assocNewNsdVersionData, for indicating the NSD with which the NS instance is to be associated; receive an NS lifecycle change notification from the NFVO (e.g., generated by processor 610, sent through the Os-Ma-Nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating the start of the NS update procedure; and receive an NS lifecycle change notification from the NFVO (e.g., generated by processor 610, sent through the Os-Ma-Nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating the result of the NS update.
A sixteenth example embodiment of the first set of embodiments may include an NFVO (e.g., employing system 600) comprising one or more processors (e.g., processor 610) configured to: receive an update NSD operation from the NM (e.g., a request generated via processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) to update the NSD, which includes the following information:
1) vnffgd (VNF forwarding graph descriptor), which may include the following attributes:
a) vnffgdId (VNFFGD identifier), which uniquely identifies VNFFGD;
b) vnfdId (VNFD identifier), which identifies the VNFDs of the VNFs that implement the virtualized portion of the gNB;
c) pnfdId (PNFD identifier), which identifies the PNFDs of the PNFs that implement the non-virtualized portion of the gNB;
d) virtualLinkDesc (virtual link descriptor), which may include the following attributes:
i) virtualLinkDf (virtual link deployment style), which may include the following attributes:
(1) qoS (quality of service), which may include a latency attribute that may indicate latency requirements for the F1 interface.
e) cpdPoolId (connection point descriptor pool identifier), which references a pool of descriptors of connection points attached to the constituent VNFs (implementing the virtualized portion of the gNB) and PNFs (implementing the non-virtualized portion of the gNB);
f) nfpd (network forwarding path descriptor) (optional), which specifies a network forwarding path associated with VNFFG.
2) nsDf (NS deployment style), which may include the following attributes:
a) virtualLinkProfile (virtual link profile), which may include the following attributes:
i) a maxBitrateRequirements attribute indicating the maximum bandwidth requirements for the F1 interface;
ii) a minBitrateRequirements attribute indicating the minimum bandwidth requirements for the F1 interface;
b) an nsInstantiationLevel (NS instantiation level) for indicating the NsLevels (NS levels) within the NS deployment style, each NsLevel containing the following attributes:
i) virtualLinkToLevelMapping, which may include bitRateRequirements indicating the bandwidth requirements for the F1 interface;
verify (e.g., via the processor 610) whether the bitRateRequirements are in the range between minBitrateRequirements and maxBitrateRequirements; and, if the verification passes, return to the NM (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) an nsdInfoId indicating a successful NSD update; or return to the NM (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) an out-of-range error indicating that the bitRateRequirements are outside the range between minBitrateRequirements and maxBitrateRequirements; and/or receive (e.g., a request generated via processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) an update NS operation with the following parameters to associate the NS with the newly updated NSD: (a) updateType = "AssocNewNsdVersion", for associating the NS instance with the updated NSD; and (b) assocNewNsdVersionData, for indicating the NSD with which the NS instance is to be associated; send an NS lifecycle change notification to the NM (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) indicating the start of the NS update procedure; and send an NS lifecycle change notification to the NM (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) indicating the result of the NS update.
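The NFVO-side range check in this update-NSD flow can be sketched as follows, under assumed dictionary shapes for the NSD. The key names mirror the attribute names in the text; the returned nsdInfoId value is a hypothetical placeholder.

```python
# Minimal sketch of the NFVO verification step: each NS level's
# bitRateRequirements must lie within the [minBitrateRequirements,
# maxBitrateRequirements] range declared by the virtualLinkProfile.
# Dict shapes and the returned nsdInfoId value are illustrative assumptions.

def update_nsd(nsd):
    """Return a dict with an nsdInfoId on success, or an out-of-range error."""
    profile = nsd["nsDf"]["virtualLinkProfile"]
    lo = profile["minBitrateRequirements"]
    hi = profile["maxBitrateRequirements"]
    for level in nsd["nsDf"]["nsInstantiationLevel"]:
        rate = level["virtualLinkToLevelMapping"]["bitRateRequirements"]
        if not (lo <= rate <= hi):
            return {"error": "out-of-range", "bitRateRequirements": rate}
    return {"nsdInfoId": "nsd-info-001"}  # hypothetical identifier

ok = update_nsd({
    "nsDf": {
        "virtualLinkProfile": {"minBitrateRequirements": 100,
                               "maxBitrateRequirements": 1000},
        "nsInstantiationLevel": [
            {"virtualLinkToLevelMapping": {"bitRateRequirements": 500}},
        ],
    },
})
```

A bitRateRequirements value outside the declared range would instead yield the out-of-range error the NM receives in the failure branch above.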
A seventeenth example embodiment of the first set of embodiments includes a network manager (NM, e.g., employing system 400) comprising one or more processors (e.g., processor 410) configured to: invoke an update NSD operation (e.g., via a request generated by processor 410, sent via communication circuit 420 through an Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) toward a network function virtualization orchestrator (NFVO, e.g., employing system 600) to update the NSD, which includes the following information:
1) vnffgd, which may include the following attributes:
a) vnffgdId, which uniquely identifies VNFFGD;
b) vnfdId, which identifies the VNFDs of the VNFs that implement the virtualized portion of the gNB;
c) pnfdId, which identifies the PNFDs of the PNFs that implement the non-virtualized portion of the gNB;
d) virtualLinkDesc, which may include the following attributes:
i) virtualLinkDf, which may include the following attributes:
(1) qoS, which may include a latency attribute indicating latency requirements for the F1 interface.
e) cpdPoolId, which references a pool of descriptors of connection points attached to the constituent VNFs (implementing the virtualized portion of the gNB) and PNFs (implementing the non-virtualized portion of the gNB);
f) nfpd (optional), which specifies a network forwarding path associated with VNFFG.
Receive a response to the update NSD operation from the NFVO (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410); and/or invoke an update NS operation (e.g., a request generated via processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) to request the NFVO to update the VNFFG containing the VL connecting the gNB-CU and the gNB-DU, with the following parameters: (a) updateType = "UpdateVnffg", for updating the VNFFG, including the VL between the gNB-CU and the gNB-DU, for the NS instance; and (b) updateVnffg, for indicating the updated VNFFGD for the NS instance; receive an NS lifecycle change notification from the NFVO (e.g., generated by processor 610, sent through the Os-Ma-Nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating the start of the NS update procedure; and receive an NS lifecycle change notification from the NFVO (e.g., generated by processor 610, sent through the Os-Ma-Nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating the result of the NS update.
An eighteenth example embodiment of the first set of embodiments may include an NFVO (e.g., employing system 600) comprising one or more processors (e.g., processor 610) configured to: receive an update NSD operation from the NM (e.g., a request generated via processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) to update the NSD, which includes the following information:
1) vnffgd, which may include the following attributes:
a) vnffgdId, which uniquely identifies VNFFGD;
b) vnfdId, which identifies the VNFDs of the VNFs that implement the virtualized portion of the gNB;
c) pnfdId, which identifies the PNFDs of the PNFs that implement the non-virtualized portion of the gNB;
d) virtualLinkDesc, which may include the following attributes:
i) virtualLinkDf, which may include the following attributes:
(1) qoS, which may include a latency attribute indicating latency requirements for the F1 interface.
e) cpdPoolId, which references a pool of descriptors of connection points attached to the constituent VNFs (implementing the virtualized portion of the gNB) and PNFs (implementing the non-virtualized portion of the gNB);
f) nfpd (optional), which specifies a network forwarding path associated with VNFFG.
Update the NSD (e.g., via the processor 610); return an nsdInfoId indicating a successful NSD update (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410); and/or receive an update NS operation (e.g., a request generated via processor 410, sent via communication circuitry 420 through the Os-Ma-Nfvo reference point, received via communication circuitry 620, and processed by processor 610) to update the VNFFG containing the VL connecting the gNB-CU and the gNB-DU, with the following parameters: (a) updateType = "UpdateVnffg", for updating the VNFFG, including the VL between the gNB-CU and the gNB-DU, for the NS instance; and (b) updateVnffg, for indicating the updated VNFFGD for the NS instance; send an NS lifecycle change notification to the NM (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) indicating the start of the NS update procedure; and send an NS lifecycle change notification to the NM (e.g., a notification generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) indicating the result of the NS update.
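The data the NM supplies in this UpdateVnffg flow can be sketched as two payloads: the vnffgd portion of the updated NSD (carrying the F1 latency requirement in its QoS attribute) and the update-NS request itself. All concrete identifiers, the latency value, and its unit are illustrative assumptions; the key names follow the attributes listed above.

```python
# Hedged sketch of the vnffgd part of the updated NSD and the corresponding
# UpdateNs request of type "UpdateVnffg". Identifier values and the latency
# value/unit are illustrative assumptions; key names follow the text.
vnffgd = {
    "vnffgdId": "vnffgd-1",
    "vnfdId": ["vnfd-gnb-cu"],   # VNF implementing the virtualized portion of the gNB
    "pnfdId": ["pnfd-gnb-du"],   # PNF implementing the non-virtualized portion of the gNB
    "virtualLinkDesc": {
        "virtualLinkDf": {
            "qoS": {"latency": 1},  # F1 latency requirement (assumed unit: ms)
        },
    },
    "cpdPoolId": "cpd-pool-1",
}

update_ns_request = {
    "updateType": "UpdateVnffg",
    # Points at the updated VNFFGD carrying the VL between gNB-CU and gNB-DU.
    "updateVnffg": {"vnffgdId": vnffgd["vnffgdId"]},
}
```

The NFVO would then answer with the start and result lifecycle change notifications described above.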
A nineteenth example embodiment of the first set of embodiments includes a Network Manager (NM) (e.g., employing system 400) comprising one or more processors (e.g., processor 410) configured to: invoke an update NSD operation (e.g., via a request generated by processor 410, sent via communication circuit 420 through an Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) toward a network function virtualization orchestrator (NFVO, e.g., employing system 600) to update the NSD, which includes the following information:
1) nsDf, which may include the following attributes:
a) virtualLinkProfile, which may include the following attributes:
i) maxBitrateRequirements, which indicates the maximum bandwidth requirements for the F1 interface;
ii) minBitrateRequirements, which indicates the minimum bandwidth requirements for the F1 interface;
b) nsInstantiationLevel, which may indicate the NsLevels within the NS deployment style, where each NsLevel may include the following attributes:
i) virtualLinkToLevelMapping, which may include bitRateRequirements indicating the bandwidth requirements for the F1 interface.
Receive a response from the NFVO (e.g., generated by processor 610, sent through the Os-Ma-Nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating one of the following: the update NSD operation succeeded, in which case an nsdInfoId is returned, or the update NSD operation failed due to an out-of-range error (of the bitRateRequirements parameter); and/or, if the update NSD operation succeeded, invoke an update NS operation (e.g., a request generated via processor 410, sent through the Os-Ma-Nfvo reference point via communication circuit 420, received via communication circuit 620, and processed by processor 610) to request the NFVO to change the NS deployment style with the following parameter: updateType = "ChangeNsDf", for changing the deployment style for the NS instance; receive an NS lifecycle change notification from the NFVO indicating the start of the NS update procedure; and receive an NS lifecycle change notification from the NFVO (e.g., generated by processor 610, sent through the Os-Ma-Nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating the result of the NS update.
A twentieth example embodiment of the first set of embodiments may include an NFVO (e.g., employing system 600) comprising one or more processors (e.g., processor 610) configured to: receive an update NSD operation from the NM (e.g., a request generated via processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) to update the NSD, which includes the following information:
1) nsDf, which may include the following attributes:
a) virtualLinkProfile, which may include the following attributes:
i) maxBitrateRequirements, which indicates the maximum bandwidth requirements for the F1 interface;
ii) minBitrateRequirements, which indicates the minimum bandwidth requirements for the F1 interface;
b) nsInstantiationLevel, which may indicate the NsLevels within the NS deployment style, where each NsLevel may include the following attributes:
i) virtualLinkToLevelMapping, which may include bitRateRequirements indicating the bandwidth requirements for the F1 interface.
Verify (e.g., via the processor 610) whether the bitRateRequirements are in the range between minBitrateRequirements and maxBitrateRequirements; and, if the verification passes, return to the NM (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) an nsdInfoId indicating a successful NSD update; or return to the NM (e.g., a response generated via processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) an out-of-range error indicating that the bitRateRequirements are outside the range between minBitrateRequirements and maxBitrateRequirements; and/or receive (e.g., a request generated via processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) an update NS operation to update the bandwidth requirements for the F1 interface with the following parameter: updateType = "ChangeNsDf", for changing the deployment style for the NS instance; send an NS lifecycle change notification (e.g., generated by processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) to the NM indicating the start of the NS update procedure; and send an NS lifecycle change notification (e.g., generated by processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) to the NM indicating the result of the NS update.
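Once the NSD update has passed the bit-rate range check, the NM's follow-up request in this ChangeNsDf flow can be sketched as a small payload builder. The helper name and the concrete identifier value are assumptions; the updateType value follows the parameter above.

```python
# Sketch of the NM-side update-NS request in the ChangeNsDf flow, assuming
# the NSD update already passed the bit-rate range check. The helper name
# and the ns-instance identifier are illustrative assumptions.

def build_change_ns_df_request(ns_instance_id):
    """Build an update-NS request that changes the NS deployment style."""
    return {
        "nsInstanceId": ns_instance_id,
        "updateType": "ChangeNsDf",  # change the deployment style for the NS instance
    }

request = build_change_ns_df_request("ns-1")
```

As in the other update flows, the NFVO would answer with start and result NS lifecycle change notifications.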
The first example embodiment of the second set of embodiments may include a network function virtualization orchestrator (NFVO, e.g., employing system 600) comprising one or more processors (e.g., processor 610) configured to: receive a request from the NM to on-board an NSD (e.g., a request generated by processor 410, sent through the Os-Ma-Nfvo reference point via communication circuit 420, received via communication circuit 620, and processed by processor 610); if the attributes in the NSD are valid (e.g., as determined by the processor 610), on-board the NSD (e.g., via the processor 610); and send a response (e.g., a response generated by processor 610, sent through the Os-Ma-Nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating the result (e.g., success or failure) of the NSD on-boarding operation to the NM.
A second example embodiment of the second set of embodiments may include an NM (e.g., employing system 400) comprising one or more processors (e.g., processor 410) configured to: send a request to on-board an NSD (e.g., a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-Nfvo reference point, received via communication circuit 620, and processed by processor 610) to the NFVO; and receive a response (e.g., a response generated by processor 610, sent via communication circuit 620 through the Os-Ma-Nfvo reference point, received via communication circuit 420, and processed by processor 410) indicating the result (e.g., success or failure) of the NSD on-boarding operation.
The third example embodiment of the second set of embodiments may include any of the first through second example embodiments, wherein the NSD to be on-boarded may include one or more of the following attributes: (a) a virtual link descriptor comprising latency requirements of the interface between the gNB-CU and the gNB-DU; (b) a VNF forwarding graph descriptor comprising latency requirements of the interface between the gNB-CU and the gNB-DU; and/or (c) an NS deployment style comprising bandwidth requirements of the interface between the gNB-CU and the gNB-DU and a range defined by maximum and minimum bandwidth values.
The fourth example embodiment of the second set of embodiments may include any of the first through third example embodiments, wherein, if the bandwidth requirement is within the range, the NFVO on-boards the NSD and sends a response with an NSD identifier (e.g., generated by processor 610, sent through the Os-Ma-Nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating that the NSD on-boarding succeeded.
The fifth example embodiment of the second set of embodiments may include any of the first through third example embodiments, wherein, if the bandwidth requirement is out of range, the NFVO sends a response (e.g., generated by the processor 610, sent via the communication circuit 620 through the Os-Ma-Nfvo reference point, received via the communication circuit 420, and processed by the processor 410) indicating that the NSD on-boarding failed with an out-of-range error.
A sixth example embodiment of the second set of embodiments may include any of the first through second example embodiments, wherein the NFVO on-boards an NSD containing only virtual link descriptors and/or VNF forwarding graph descriptors, and sends a response with an NSD identifier indicating that the NSD on-boarding succeeded.
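The on-boarding decision in the fourth through sixth example embodiments can be sketched as a single validation function. The dictionary shape and key names (minBandwidth, maxBandwidth, bandwidth) and the returned nsdInfoId value are illustrative assumptions rather than the attribute names above.

```python
# Minimal sketch of the NFVO on-boarding decision: an NSD carrying an NS
# deployment style is on-boarded only if its bandwidth requirement lies
# within the [min, max] range; an NSD with only virtual link and/or VNFFG
# descriptors is on-boarded without a range check. Key names and the
# returned nsdInfoId value are illustrative assumptions.

def on_board_nsd(nsd):
    """Return a dict with an nsdInfoId on success, or an error dict."""
    ns_df = nsd.get("nsDf")
    if ns_df is not None:
        lo, hi = ns_df["minBandwidth"], ns_df["maxBandwidth"]
        if not (lo <= ns_df["bandwidth"] <= hi):
            return {"error": "out-of-range"}
    return {"nsdInfoId": "nsd-info-042"}  # hypothetical identifier

accepted = on_board_nsd(
    {"nsDf": {"minBandwidth": 10, "maxBandwidth": 100, "bandwidth": 50}})
rejected = on_board_nsd(
    {"nsDf": {"minBandwidth": 10, "maxBandwidth": 100, "bandwidth": 500}})
descriptors_only = on_board_nsd({"virtualLinkDesc": {}, "vnffgd": {}})
```

The three calls correspond to the in-range, out-of-range, and descriptors-only cases described in the fourth, fifth, and sixth example embodiments respectively.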
According to a seventh example embodiment of the second set of embodiments, an NFVO (e.g., employing system 600) may be included, comprising one or more processors (e.g., processor 610) configured to: receive from the NM a request to update the NSD (e.g., a request generated by processor 410, sent through the Os-Ma-nfvo reference point via communication circuit 420, received via communication circuit 620, and processed by processor 610); update the NSD (e.g., via the processor 610) if the attributes in the NSD are valid (e.g., as determined by the processor 610); send to the NM a response (e.g., a response generated by processor 610, sent through the Os-Ma-nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating the result (e.g., success or failure) of the NSD update operation; receive from the NM, based on the on-boarded NSD, a request to update the NS (e.g., a request generated by processor 410, sent through the Os-Ma-nfvo reference point via communication circuit 420, received via communication circuit 620, and processed by processor 610); send to the NM a notification indicating the start of the NS update operation (e.g., a notification generated by processor 610, sent through the Os-Ma-nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410); update the NS (e.g., via processor 610) according to the NSD; and send to the NM a notification (e.g., a notification generated by processor 610, sent through the Os-Ma-nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating the result of the NS update operation.
According to an eighth example embodiment of the second set of embodiments, an NM (e.g., employing system 400) may be included, comprising one or more processors (e.g., processor 410) configured to: send to the NFVO a request to update the NSD (e.g., a request generated by processor 410, sent via communication circuit 420 through the Os-Ma-nfvo reference point, received via communication circuit 620, and processed by processor 610); receive from the NFVO a response (e.g., a response generated by processor 610, sent over the Os-Ma-nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410) indicating the result (e.g., success or failure) of the NSD update operation; send to the NFVO, based on the on-boarded NSD, a request to update the NS (e.g., a request generated by processor 410, sent through the Os-Ma-nfvo reference point via communication circuit 420, received via communication circuit 620, and processed by processor 610); receive from the NFVO a notification indicating the start of the NS update operation (e.g., a notification generated by processor 610, sent through the Os-Ma-nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410); and receive from the NFVO a notification indicating the result of the NS update operation (e.g., a notification generated by processor 610, sent through the Os-Ma-nfvo reference point via communication circuit 620, received via communication circuit 420, and processed by processor 410).
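The seventh and eighth example embodiments describe a two-phase exchange: the NM first updates the NSD, then requests an NS update against the on-boarded NSD, receiving a start notification and a result notification. The ordering can be sketched as below; the class, method, and event strings are invented for illustration and are not part of any normative interface.

```python
# Hypothetical sketch of the NM/NFVO exchange over the Os-Ma-nfvo
# reference point described above. Notifications are modeled as
# events appended to a list held by the NM.

class Nfvo:
    def __init__(self):
        self.nsd = None

    def update_nsd(self, nsd):
        # NFVO validates the attributes before updating the NSD
        if "nsDf" not in nsd and "vnffgd" not in nsd:
            return {"result": "failure"}
        self.nsd = nsd
        return {"result": "success", "nsdInfoId": "nsd-001"}

    def update_ns(self, notify):
        # start notification, update according to the NSD, result notification
        notify("NS update started")
        ns = {"applied_nsd": self.nsd}
        notify("NS update result: success")
        return ns

nm_notifications = []
nfvo = Nfvo()
resp = nfvo.update_nsd({"vnffgd": {}, "nsDf": {}})
if resp["result"] == "success":
    nfvo.update_ns(nm_notifications.append)
print(nm_notifications)  # ['NS update started', 'NS update result: success']
```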
According to a ninth example embodiment of the second set of embodiments, any of the seventh through eighth example embodiments may be included, wherein the NSD to be updated may comprise one or more of the following attributes: (a) a virtual link descriptor comprising the latency requirements of the interface between the gNB-CU and the gNB-DU; (b) a VNF forwarding graph descriptor comprising the latency requirements of the interface between the gNB-CU and the gNB-DU; and/or (c) an NS deployment flavour comprising the bandwidth requirements, and a range defined by maximum and minimum bandwidth values, of the interface between the gNB-CU and the gNB-DU.
According to a tenth example embodiment of the second set of embodiments, any of the seventh through ninth example embodiments may be included, wherein the NFVO may send a response (e.g., a response generated by the processor 610, sent through the Os-Ma-NFVO reference point via the communication circuit 620, received via the communication circuit 420, and processed by the processor 410) with an NSD identifier indicating that the NSD update was successful, and may update the NSD (e.g., via the processor 610) if the bandwidth requirement is within range.
According to an eleventh example embodiment of the second set of embodiments, any of the seventh through ninth example embodiments may be included, wherein the NFVO may send a response (e.g., a response generated by the processor 610, sent via the communication circuit 620 through the Os-Ma-nfvo reference point, received via the communication circuit 420, and processed by the processor 410) to indicate, with an out-of-range error, that the NSD update has failed, if the bandwidth requirement is out of the range.
According to a twelfth example embodiment of the second set of embodiments, which may include any of the seventh to ninth example embodiments, wherein if the NSD to be updated does not contain bandwidth requirements, the NFVO may update the virtual link descriptor and/or VNF forwarding graph descriptor (e.g., via processor 610) and may send a response (e.g., a response generated by processor 610, sent via communication circuit 620 through the Os-Ma-NFVO reference point, received via communication circuit 420, and processed by processor 410) with an NSD identifier indicating that the NSD update was successful.
According to a thirteenth example embodiment of the second set of embodiments, any of the seventh through eighth example embodiments may be included, wherein the NSD update may comprise: adding (e.g., via processor 610) a VNF forwarding graph descriptor for connecting the gNB-CU VNF and the gNB-DU.
According to a fourteenth example embodiment of the second set of embodiments, any of the seventh through eighth example embodiments may be included, wherein the NS update may comprise: adding (e.g., via processor 610) the VNF forwarding graph to the NS to connect the gNB-CU VNF and the gNB-DU.
The fifteenth example embodiment of the second set of embodiments may include any of the seventh, eighth, or fourteenth example embodiments, wherein the NS update request includes: a vnffgdId identifying the VNFFGD to be used to create the VNFFG instance; and a vnfInstanceId identifying one or more VNF instances to be included in the VNFFG instance.
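The NS update request of the fourteenth and fifteenth example embodiments can be pictured as a small structure carrying the VNFFGD identifier and the VNF instances to be connected. The dictionary layout below is an illustrative sketch only; the parameter names follow the text above, and the identifier values are invented.

```python
def build_add_vnffg_request(vnffgd_id, vnf_instance_ids):
    """Build an NS update request that adds a VNFFG connecting the
    gNB-CU VNF and the gNB-DU (illustrative structure only)."""
    return {
        "updateType": "AddVnffg",
        "addVnffg": {
            # identifies the VNFFGD used to create the VNFFG instance
            "vnffgdId": vnffgd_id,
            # one or more VNF instances to include in the VNFFG instance
            "vnfInstanceId": list(vnf_instance_ids),
        },
    }

req = build_add_vnffg_request("vnffgd-cu-du", ["gnb-cu-vnf-1"])
print(req["updateType"])  # AddVnffg
```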
The sixteenth example embodiment of the second set of embodiments may include any of the first or thirteenth example embodiments, wherein the bandwidth requirements are defined by a bitrateRequirements attribute for an NsLevel within the NS deployment flavour.
A seventeenth example embodiment of the second set of embodiments may include any of the first or thirteenth example embodiments, wherein the bandwidth requirement range is defined by the maxBitrateRequirements and minBitrateRequirements attributes of the NS deployment flavour.
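The bandwidth attributes of the sixteenth and seventeenth example embodiments nest as follows: per-NsLevel bitrateRequirements sit inside the deployment flavour, while the flavour-level maxBitrateRequirements and minBitrateRequirements define the permitted range. The layout below is an illustrative sketch, not a normative data model, and the numeric values are invented examples.

```python
# Illustrative nsDf structure for the gNB-CU/gNB-DU (F1) virtual link.
ns_df = {
    "virtualLinkProfile": {
        # permitted bandwidth range for the CU-DU interface (example values)
        "maxBitrateRequirements": 20_000,
        "minBitrateRequirements": 1_000,
    },
    "nsInstantiationLevel": [
        # each NsLevel maps the virtual link to a concrete requirement
        {"nsLevelId": "small",
         "virtualLinkToLevelMapping": {"bitrateRequirements": 2_000}},
        {"nsLevelId": "large",
         "virtualLinkToLevelMapping": {"bitrateRequirements": 18_000}},
    ],
}

def level_in_range(ns_df, ns_level_id):
    """Check one NsLevel's bitrateRequirements against the profile range."""
    prof = ns_df["virtualLinkProfile"]
    level = next(l for l in ns_df["nsInstantiationLevel"]
                 if l["nsLevelId"] == ns_level_id)
    bw = level["virtualLinkToLevelMapping"]["bitrateRequirements"]
    return prof["minBitrateRequirements"] <= bw <= prof["maxBitrateRequirements"]

print(level_in_range(ns_df, "small"))  # True
```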
Example 1 is an apparatus configured to be employed within an NM (network manager), comprising: a memory interface; and processing circuitry configured to: invoking an update NSD (NS (network service) descriptor) operation to request an NFVO (NFV (network function virtualization) orchestrator) to update the NSD via a first request including an nsdInfoId (NSD information identifier) parameter indicating the NSD to be updated and an NSD parameter including a vnffgd (VNF (virtual network function) FG (forwarding graph) descriptor) attribute to be updated and an nsDf (NS deployment flavour) attribute to be updated; receiving a response from the NFVO indicating successful update of the NSD, wherein the response includes the nsdInfoId parameter; invoking an update NS operation to request the NFVO to associate an NS with the NSD via a second request, the second request including an updateType parameter equal to AssocNewNsdVersion (associate new NSD version) for associating the NS with the NSD and an assocNewNsdVersionData parameter for indicating the NSD to be associated with the NS; receiving a first NS lifecycle change notification from the NFVO indicating a start of an NS update procedure; receiving a second NS lifecycle change notification from the NFVO indicating a result of the NS update procedure; and sending the nsdInfoId parameter to memory via the memory interface.
Example 2 includes the subject matter of any variation of example 1, wherein the vnffgd parameter includes: a vnffgdId (VNFFGD identifier) attribute that uniquely identifies the VNFFGD; a vnfdId (VNFD (VNF descriptor) identifier) attribute that identifies the VNFDs constituting the VNFs for the virtualized portion of the gNB (next generation node B); a pnfdId (PNFD (PNF (physical network function) descriptor) identifier) attribute that identifies the PNFDs constituting the PNFs for the non-virtualized portion of the gNB; and a cpdPoolId (connection point descriptor pool identifier) attribute that references a pool of descriptors of one or more connection points of the VNFs constituting the virtualized portion of the gNB and of the one or more PNFs for the non-virtualized portion of the gNB.
Example 3 includes the subject matter of any variation of example 2, wherein the vnffgd parameter further includes an nfpd (network forwarding path descriptor) attribute specifying a network forwarding path associated with the VNFFG of the NS.
Example 4 includes the subject matter of any of examples 1-3, wherein the first request includes a virtualLinkDf (virtual link deployment flavour) attribute including a qoS (quality of service) attribute, wherein the qoS attribute includes a latency attribute indicating latency requirements for an F1 interface.
Example 5 includes the subject matter of any variation of any of examples 1-3, wherein the nsDf attribute includes a virtualLinkProfile attribute and an nsInstantiationLevel attribute indicating one or more NsLevels within a deployment flavour of the NS.
Example 6 includes the subject matter of any variation of example 5, wherein the virtualLinkProfile attribute includes a maxBitrateRequirements attribute and a minBitrateRequirements attribute, the maxBitrateRequirements attribute indicating a maximum bandwidth requirement for an interface between a virtualized portion of a gNB (next generation node B) and a non-virtualized portion of the gNB, and the minBitrateRequirements attribute indicating a minimum bandwidth requirement for the interface between the virtualized portion of the gNB and the non-virtualized portion of the gNB.
Example 7 includes the subject matter of any variation of example 5, wherein each of the one or more NsLevel attributes includes a virtualLinkToLevelMapping attribute including a bitrateRequirements attribute indicating a bandwidth requirement for an interface between a virtualized portion of a gNB (next generation node B) and a non-virtualized portion of the gNB.
Example 8 is an apparatus configured to be employed within an NFVO (network function virtualization orchestrator), comprising: a memory interface; and processing circuitry configured to: receiving a first request to update an NSD from an NM (network manager), wherein the first request includes an nsdInfoId (NSD information identifier) parameter indicating the NSD to be updated and an NSD parameter including a vnffgd (VNF (virtual network function) FG (forwarding graph) descriptor) attribute to be updated and an nsDf (NS deployment flavour) attribute to be updated; transmitting a response to the NM indicating successful update of the NSD, wherein the response includes the nsdInfoId parameter; receiving a second request from the NM to associate an NS with the NSD, the second request including an updateType parameter equal to AssocNewNsdVersion (associate new NSD version) for associating the NS with the NSD and an assocNewNsdVersionData parameter for indicating the NSD to be associated with the NS; sending a first NS lifecycle change notification to the NM indicating a start of an NS update procedure; updating the NS based on the information provided by the NM and the information provided in the NSD; sending a second NS lifecycle change notification to the NM indicating a result of the NS update procedure; and sending the nsdInfoId parameter to memory via the memory interface.
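The examples distinguish two updateType values for the update NS operation: AssocNewNsdVersion (associating an NS with a new NSD version, as here) and AddVnffg (adding a VNFFG, as in examples 25 and 28), on which the NFVO dispatches. A hypothetical sketch of that dispatch; the function name and returned dictionaries are invented for illustration.

```python
def handle_update_ns(request):
    """Dispatch an update NS request on its updateType (illustrative)."""
    update_type = request["updateType"]
    if update_type == "AssocNewNsdVersion":
        # associate the NS with the new NSD version
        return {"action": "associate", "nsd": request["assocNewNsdVersionData"]}
    if update_type == "AddVnffg":
        # add the VNFFG connecting the gNB-CU VNF and the gNB-DU
        return {"action": "add_vnffg", "vnffg": request["addVnffg"]}
    raise ValueError(f"unsupported updateType: {update_type}")

print(handle_update_ns({"updateType": "AddVnffg",
                        "addVnffg": {"vnffgdId": "v1"}})["action"])
# add_vnffg
```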
Example 9 includes the subject matter of any variation of example 8, wherein the vnffgd parameter includes: a vnffgdId (VNFFGD identifier) attribute that uniquely identifies the VNFFGD; a vnfdId (VNFD (VNF descriptor) identifier) attribute that identifies the VNFDs constituting the VNFs for the virtualized portion of the gNB (next generation node B); a pnfdId (PNFD (PNF (physical network function) descriptor) identifier) attribute that identifies the PNFDs constituting the PNFs for the non-virtualized portion of the gNB; and a cpdPoolId (connection point descriptor pool identifier) attribute that references a pool of descriptors of one or more connection points of the VNFs constituting the virtualized portion of the gNB and of the one or more PNFs for the non-virtualized portion of the gNB.
Example 10 includes the subject matter of any variation of example 9, wherein the vnffgd parameter further includes an nfpd (network forwarding path descriptor) attribute specifying a network forwarding path associated with the VNFFG of the NS.
Example 11 includes the subject matter of any of examples 8-10, wherein the first request includes a virtualLinkDf (virtual link deployment flavour) attribute including a qoS (quality of service) attribute, wherein the qoS attribute includes a latency attribute indicating latency requirements for an F1 interface.
Example 12 includes the subject matter of any variation of any of examples 8-10, wherein the nsDf attribute includes a virtualLinkProfile attribute and an nsInstantiationLevel attribute indicating one or more NsLevels within a deployment flavour of the NS.
Example 13 includes the subject matter of any variation of example 12, wherein the virtualLinkProfile attribute includes a maxBitrateRequirements attribute and a minBitrateRequirements attribute, the maxBitrateRequirements attribute indicating a maximum bandwidth requirement for an interface between a virtualized portion of a gNB (next generation node B) and a non-virtualized portion of the gNB, and the minBitrateRequirements attribute indicating a minimum bandwidth requirement for the interface between the virtualized portion of the gNB and the non-virtualized portion of the gNB.
Example 14 includes the subject matter of any variation of example 12, wherein each of the one or more NsLevel attributes includes a virtualLinkToLevelMapping attribute including a bitrateRequirements attribute indicating a bandwidth requirement for an interface between a virtualized portion of a gNB (next generation node B) and a non-virtualized portion of the gNB.
Example 15 is an apparatus configured to be employed within an NM (network manager), comprising: a memory interface; and processing circuitry configured to: sending a request to an NFVO (NFV (network function virtualization) orchestrator) invoking an update NSD (NS (network service) descriptor) operation to add a VNFFGD (VNF (virtualized network function) FG (forwarding graph) descriptor) for a gNB (next generation node B), wherein the request includes an nsdInfoId parameter indicating the NSD to be updated and an NSD parameter, wherein the NSD parameter indicates the NSD and includes the VNFFGD to be added; receiving a response from the NFVO indicating successful update of the NSD, wherein the response includes the nsdInfoId parameter; and sending the nsdInfoId parameter to memory via the memory interface.
Example 16 includes the subject matter of any variation of example 15, wherein the NSD parameter includes a Vnffgd (VNFFGD) attribute comprising: a vnffgdId (VNFFGD identifier) attribute that uniquely identifies the VNFFGD; a vnfdId (VNFD (VNF descriptor) identifier) attribute that identifies the VNFDs constituting the VNFs that implement the virtualized portion of the gNB (next generation node B); a pnfdId (PNFD (PNF (physical network function) descriptor) identifier) attribute that identifies the PNFDs that implement the non-virtualized portion of the gNB; and a cpdPoolId (connection point descriptor pool identifier) attribute that references a pool of descriptors of one or more connection points of the VNFs constituting the virtualized portion of the gNB and of the one or more PNFs implementing the non-virtualized portion of the gNB.
Example 17 includes the subject matter of any variation of example 16, wherein the vnffgd parameter further includes an nfpd (network forwarding path descriptor) attribute specifying a network forwarding path associated with the VNFFG of the NS.
Example 18 includes the subject matter of any of examples 15-17, wherein the request includes a virtualLinkDf (virtual link deployment flavour) attribute that includes a qoS (quality of service) attribute, wherein the qoS attribute includes a latency attribute that indicates latency requirements for an F1 interface.
Example 19 includes the subject matter of any variation of any of examples 15-17, wherein the NSD has already been on-boarded.
Example 20 is an apparatus configured to be employed within an NFVO (network function virtualization orchestrator), comprising: a memory interface; and processing circuitry configured to: receiving a request from an NM (network manager) invoking an update NSD (NS (network service) descriptor) operation to add a VNFFGD (VNF (virtualized network function) FG (forwarding graph) descriptor) for a gNB (next generation node B), wherein the request includes an nsdInfoId parameter indicating the NSD to be updated and an NSD parameter, wherein the NSD parameter indicates the NSD and includes the VNFFGD to be added; updating the NSD by adding the VNFFGD for the gNB; transmitting a response to the NM indicating successful update of the NSD, wherein the response includes the nsdInfoId parameter; and sending the nsdInfoId parameter to memory via the memory interface.
Example 21 includes the subject matter of any variation of example 20, wherein the NSD parameter includes a Vnffgd (VNFFGD) attribute comprising: a vnffgdId (VNFFGD identifier) attribute that uniquely identifies the VNFFGD; a vnfdId (VNFD (VNF descriptor) identifier) attribute that identifies the VNFDs constituting the VNFs that implement the virtualized portion of the gNB (next generation node B); a pnfdId (PNFD (PNF (physical network function) descriptor) identifier) attribute that identifies the PNFDs that implement the non-virtualized portion of the gNB; and a cpdPoolId (connection point descriptor pool identifier) attribute that references a pool of descriptors of one or more connection points of the VNFs constituting the virtualized portion of the gNB and of the one or more PNFs implementing the non-virtualized portion of the gNB.
Example 22 includes the subject matter of any variation of example 21, wherein the vnffgd parameter further includes an nfpd (network forwarding path descriptor) attribute specifying a network forwarding path associated with the VNFFG of the NS.
Example 23 includes the subject matter of any of examples 20-22, wherein the request includes a virtualLinkDf (virtual link deployment flavour) attribute including a qoS (quality of service) attribute, wherein the qoS attribute includes a latency attribute indicating latency requirements for the F1 interface.
Example 24 includes the subject matter of any variation of any of examples 20-22, wherein the NSD has already been on-boarded.
Example 25 is an apparatus configured to be employed within an NM (network manager), comprising: a memory interface; and processing circuitry configured to: invoking an update NS (network service) operation for an NS to send a request to an NFVO (NFV (network function virtualization) orchestrator) to add one or more VNFFGs (VNF (virtual network function) FGs (forwarding graphs)) connecting a virtualized portion of a gNB (next generation node B) with a non-virtualized portion of the gNB, wherein the request includes an updateType parameter equal to AddVnffg (add VNFFG) and an addVnffg parameter, wherein the addVnffg parameter includes a vnffgId (VNFFG identifier) parameter of the virtualized portion of the gNB and a vnfInstanceId (VNF instance identifier) parameter of the virtualized portion of the gNB; receiving a first NS lifecycle change notification from the NFVO indicating a start of an update procedure for the NS; receiving a second NS lifecycle change notification from the NFVO indicating a result of the update procedure for the NS; and sending the vnffgId parameter and the vnfInstanceId parameter to a memory via the memory interface.
Example 26 includes the subject matter of any variation of example 25, wherein the NS has been instantiated.
Example 27 includes the subject matter of any of examples 25-26, wherein an NSD (NS descriptor) of the NS has been updated with a vnffgd (VNFFG descriptor) parameter and an nsDf (NS deployment flavour) parameter, wherein the vnffgd parameter includes information about a latency of a virtual link between a virtualized portion of the gNB and a non-virtualized portion of the gNB, and wherein the nsDf parameter includes information about a bandwidth of the virtual link.
Example 28 is an apparatus configured to be employed within an NFVO (network function virtualization orchestrator), comprising: a memory interface; and processing circuitry configured to: receiving a request from an NM (network manager) to add to an NS (network service) one or more VNFFGs (VNF (virtual network function) FGs (forwarding graphs)) connecting a virtualized portion of a gNB (next generation node B) with a non-virtualized portion of the gNB, wherein the request includes an updateType (update type) parameter equal to AddVnffg (add VNFFG) and an addVnffg parameter, wherein the addVnffg parameter includes a vnffgId (VNFFG identifier) parameter of the virtualized portion of the gNB and a vnfInstanceId (VNF instance identifier) parameter of the virtualized portion of the gNB; sending a first NS lifecycle change notification to the NM indicating a start of an update procedure for the NS; updating the NS based on the information provided by the NM and the information provided in the updated NSD; sending a second NS lifecycle change notification to the NM indicating a result of the update procedure for the NS; and sending the vnffgId parameter and the vnfInstanceId parameter to a memory via the memory interface.
Example 29 includes the subject matter of any variation of example 28, wherein the NS has been instantiated.
Example 30 includes the subject matter of any of examples 28-29, wherein an NSD (NS descriptor) of the NS has been updated with a vnffgd (VNFFG descriptor) parameter and an nsDf (NS deployment flavour) parameter, wherein the vnffgd parameter includes information about a latency of a virtual link between a virtualized portion of the gNB and a non-virtualized portion of the gNB, and wherein the nsDf parameter includes information about a bandwidth of the virtual link.
Example 31 is an apparatus configured to be employed within an NM (network manager), comprising: a memory interface; and processing circuitry configured to: sending a first request to an NFVO (network function virtualization orchestrator) to create an NS identifier for an NSD (NS (network service) descriptor) referencing a VNFD (virtual network function (VNF) descriptor) of a CN (core network) NF (network function), a VNFD of a virtualized portion of a gNB (next generation node B), and a PNFD (physical network function (PNF) descriptor) of a non-virtualized portion of the gNB; receiving a response from the NFVO indicating successful creation of the NS identifier, wherein the response includes an nsInstanceId (NS instance identifier) parameter; sending a second request to the NFVO to instantiate the NS identified by the nsInstanceId parameter, wherein the second request includes an additionalParamsForVnf (additional parameters for VNF) parameter that provides information for the CN NF and the virtualized portion of the gNB, and wherein the second request includes a pnfInfo (PNF information) parameter that provides information for the non-virtualized portion of the gNB; receiving a first NS lifecycle change notification from the NFVO indicating a start of an NS instantiation process; receiving a second NS lifecycle change notification from the NFVO indicating a result of the NS instantiation process; and sending the nsInstanceId parameter to a memory via the memory interface.
Example 32 includes the subject matter of any variation of example 31, wherein the second request further includes one or more other parameters.
Example 33 is an apparatus configured to be employed within an NFVO (network function virtualization orchestrator), comprising: a memory interface; and processing circuitry configured to: receiving a first request from an NM (network manager) to create an NS identifier for an NSD (NS (network service) descriptor) referencing a VNFD (virtual network function (VNF) descriptor) of a CN (core network) NF (network function), a VNFD of a virtualized portion of a gNB (next generation node B), and a PNFD (physical network function (PNF) descriptor) of a non-virtualized portion of the gNB; transmitting a response to the NM indicating successful creation of the NS identifier, wherein the response includes an nsInstanceId (NS instance identifier) parameter; receiving a second request from the NM to instantiate the NS identified by the nsInstanceId parameter, wherein the second request includes an additionalParamsForVnf (additional parameters for VNF) parameter providing information for the CN NF and the virtualized portion of the gNB, and wherein the second request includes a pnfInfo (PNF information) parameter providing information for the non-virtualized portion of the gNB; sending a first NS lifecycle change notification to the NM indicating a start of an NS instantiation process; instantiating the NS, including the CN NF, the virtualized portion of the gNB, and the non-virtualized portion of the gNB, based on the information provided by the NM and the information provided in the NSD, the VNF packages, and the PNFD; sending a second NS lifecycle change notification to the NM indicating a result of the NS instantiation process; and sending the nsInstanceId parameter to a memory via the memory interface.
Example 34 includes the subject matter of any variation of example 33, wherein the second request further includes one or more other parameters.
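Examples 31 through 34 describe a two-step flow: NS identifier creation followed by instantiation, with start and result notifications. A minimal end-to-end sketch of the message order; the class, method, and field names here are hypothetical and do not correspond to any normative interface.

```python
class NfvoSketch:
    """Illustrative NFVO handling the two-step NS instantiation above."""
    def __init__(self):
        self.instances = {}

    def create_ns_identifier(self, nsd_id):
        ns_instance_id = f"ns-{len(self.instances) + 1}"
        self.instances[ns_instance_id] = {"nsdId": nsd_id,
                                          "state": "NOT_INSTANTIATED"}
        return {"nsInstanceId": ns_instance_id}

    def instantiate_ns(self, ns_instance_id, additional_params_for_vnf,
                       pnf_info, notify):
        notify("NS instantiation started")
        inst = self.instances[ns_instance_id]
        # instantiate the CN NF and the virtualized gNB part (VNFs), and
        # record the non-virtualized gNB part (PNF) from pnfInfo
        inst.update(state="INSTANTIATED",
                    vnfParams=additional_params_for_vnf,
                    pnfInfo=pnf_info)
        notify("NS instantiation result: success")

events = []
nfvo = NfvoSketch()
ns_id = nfvo.create_ns_identifier("nsd-gnb")["nsInstanceId"]
nfvo.instantiate_ns(ns_id, {"gnb-cu-vnf": {}}, {"gnb-du": {}}, events.append)
print(events)  # ['NS instantiation started', 'NS instantiation result: success']
```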
Example 35 includes an apparatus comprising means for performing any of the described operations of any of the other examples described herein.
Example 36 includes a machine-readable medium having stored thereon instructions that are executable by a processor to perform any of the described operations of any of the other examples described herein.
Example 37 includes an apparatus, comprising: a memory interface; and processing circuitry configured to: any of the described operations of any other examples described herein are performed.
The above description of illustrated embodiments of the disclosure, including what is described in the abstract, is not intended to be exhaustive or to limit the disclosed embodiments to the precise forms disclosed. While specific embodiments and examples are described herein for illustrative purposes, those skilled in the art will recognize that various modifications are possible within the scope of the embodiments and examples.
In this regard, while the disclosed subject matter has been described in connection with various embodiments and corresponding figures, it is to be understood that, where applicable, other similar embodiments may be used, or modifications and additions may be made to the described embodiments, to perform the same, similar, alternate, or substitute function of the disclosed subject matter. Therefore, the disclosed subject matter should not be limited to any single embodiment described herein, but rather should be construed in breadth and scope in accordance with the appended claims.
In particular regard to the various functions performed by the above described components or structures (assemblies, devices, circuits, systems, etc.), the terms (including a reference to a "means/block") used to describe such components are intended to correspond, unless otherwise indicated, to any component or structure which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations. Furthermore, while a particular feature may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.

Claims (20)

1. An apparatus configured to be employed within a NM (network manager), comprising: a memory interface; and
processing circuitry configured to:
invoking an update NSD (NS (network service) descriptor) operation to request an NFVO (NFV (network function virtualization) orchestrator) to update the NSD via a first request including an nsdInfoId (NSD information identifier) parameter indicating the NSD to be updated and an NSD parameter including a vnffgd (VNF (virtual network function) FG (forwarding graph) descriptor) attribute to be updated and an nsDf (NS deployment flavour) attribute to be updated;
receiving a response from the NFVO indicating successful update of the NSD, wherein the response includes the nsdInfoId parameter;
invoking an update NS operation to request the NFVO to associate an NS with the NSD via a second request, the second request including an updateType parameter equal to AssocNewNsdVersion (associate new NSD version) for associating the NS with the NSD and an assocNewNsdVersionData parameter for indicating the NSD to be associated with the NS;
receiving a first NS lifecycle change notification from the NFVO indicating a start of an NS update procedure;
receiving a second NS lifecycle change notification from the NFVO indicating a result of the NS update procedure; and
sending the nsdInfoId parameter to memory via the memory interface,
wherein the first request includes a virtualLinkDf (virtual link deployment flavour) attribute that includes a qoS (quality of service) attribute, wherein the qoS attribute includes a latency attribute that indicates latency requirements for an F1 interface.
2. The apparatus of claim 1, wherein the vnffgd attribute comprises:
vnffgdId (VNFFGD identifier) attribute that uniquely identifies VNFFGD;
a vnfdId (VNFD (VNF descriptor) identifier) attribute that identifies the VNFDs constituting the VNFs for the virtualized portion of the gNB (next generation node B);
a pnfdId (PNFD (PNF (physical network function) descriptor) identifier) attribute that identifies the PNFDs constituting the PNFs for the non-virtualized portion of the gNB; and
a cpdPoolId (connection point descriptor pool identifier) attribute that references a pool of descriptors implementing one or more connection points that make up VNFs of the virtualized portion of the gNB and one or more PNFs implementing the non-virtualized portion of the gNB.
3. The apparatus of claim 2, wherein the vnffgd attribute further comprises:
nfpd (network forwarding path descriptor) attribute that specifies a network forwarding path associated with VNFFG of the NS.
4. The apparatus of any of claims 1-3, wherein the nsDf attribute comprises a virtualLinkProfile attribute and an nsInstantiationLevel attribute indicating one or more NsLevels within a deployment flavour of the NS.
5. The apparatus of claim 4, wherein the virtualLinkProfile attribute comprises a maxBitrateRequirements attribute and a minBitrateRequirements attribute, the maxBitrateRequirements attribute indicating a maximum bandwidth requirement for an interface between a virtualized portion of a gNB (next generation node B) and a non-virtualized portion of the gNB, and the minBitrateRequirements attribute indicating a minimum bandwidth requirement for the interface between the virtualized portion of the gNB and the non-virtualized portion of the gNB.
6. The apparatus of claim 4, wherein each of the one or more NsLevel attributes comprises a virtualLinkToLevelMapping attribute comprising a bitrateRequirements attribute indicating a bandwidth requirement for an interface between a virtualized portion of a gNB (next generation node B) and a non-virtualized portion of the gNB.
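Claims 1-6 describe an NSD whose vnffgd, nsDf, and virtualLinkDf attributes together capture the virtualized and non-virtualized portions of the gNB and the QoS of the virtual link (the F1 interface) between them. The following is a minimal sketch of that attribute hierarchy; the attribute names follow the claims, but the Python class names, field layout, and example values are illustrative assumptions only:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Qos:
    latency_ms: float            # latency requirement for the F1 interface (assumed unit)

@dataclass
class VirtualLinkDf:
    qos: Qos                     # qos attribute carried by the virtualLinkDf attribute

@dataclass
class Vnffgd:
    vnffgd_id: str               # uniquely identifies this VNFFGD
    vnfd_ids: List[str]          # VNFDs of the VNFs (virtualized portion of the gNB)
    pnfd_ids: List[str]          # PNFDs of the PNFs (non-virtualized portion)
    cpd_pool_id: str             # pool of connection point descriptors
    nfpd: List[str] = field(default_factory=list)  # network forwarding paths

@dataclass
class VirtualLinkProfile:
    max_bitrate_requirements: int  # max bandwidth for the F1 interface (bps)
    min_bitrate_requirements: int  # min bandwidth for the F1 interface (bps)

@dataclass
class NsDf:
    virtual_link_profile: VirtualLinkProfile
    ns_instantiation_levels: List[str]  # one or more NsLevels in this flavour

@dataclass
class Nsd:
    nsd_id: str
    vnffgd: Vnffgd
    ns_df: NsDf
    virtual_link_df: VirtualLinkDf

# Example NSD for a gNB with a virtualized CU (VNF) and a physical DU (PNF):
nsd = Nsd(
    nsd_id="nsd-gnb-1",
    vnffgd=Vnffgd("vnffgd-1", ["vnfd-cu"], ["pnfd-du"], "cpd-pool-1"),
    ns_df=NsDf(VirtualLinkProfile(10_000_000_000, 1_000_000_000), ["level-1"]),
    virtual_link_df=VirtualLinkDf(qos=Qos(latency_ms=1.0)),
)
```

Here the maxBitrateRequirements/minBitrateRequirements pair bounds the F1 link bandwidth and the qos latency attribute carries the F1 latency requirement, mirroring claims 1, 5, and 6.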
7. An apparatus configured to be employed within an NFVO (network function virtualization orchestrator), comprising:
a memory interface; and
processing circuitry configured to:
receive a first request to update an NSD from an NM (network manager), wherein the first request includes an nsdInfoId (NSD information identifier) parameter indicating the NSD to be updated and an NSD parameter including a vnffgd (VNF (virtual network function) FG (forwarding graph) descriptor) attribute to be updated and an nsDf (NS deployment flavour) attribute to be updated;
transmit a response to the NM indicating successful update of the NSD, wherein the response includes the nsdInfoId parameter;
receive a second request from the NM to associate an NS with the NSD, the second request including an updateType parameter equal to AssocNewNsdVersion (associate new NSD version) for associating the NS with the NSD and an assocNewNsdVersionData parameter indicating the NSD to be associated with the NS;
send a first NS lifecycle change notification to the NM indicating a start of an NS update procedure;
update the NS based on the information provided by the NM and the information provided in the NSD;
send a second NS lifecycle change notification to the NM indicating a result of the NS update procedure; and
send the nsdInfoId parameter to memory via the memory interface,
wherein the first request includes a virtualLinkDf (virtual link deployment flavour) attribute that includes a qos (quality of service) attribute, wherein the qos attribute includes a latency attribute that indicates a latency requirement for an F1 interface.
8. The apparatus of claim 7, wherein the vnffgd attribute comprises:
a vnffgdId (VNFFGD identifier) attribute that uniquely identifies the VNFFGD;
a vnfdId (VNFD (VNF descriptor) identifier) attribute that identifies the VNFDs constituting the VNFs that implement the virtualized portion of the gNB (next generation node B);
a pnfdId (PNFD (PNF (physical network function) descriptor) identifier) attribute that identifies the PNFDs constituting the PNFs that implement the non-virtualized portion of the gNB; and
a cpdPoolId (connection point descriptor pool identifier) attribute that references a pool of descriptors of one or more connection points of the VNFs implementing the virtualized portion of the gNB and of the one or more PNFs implementing the non-virtualized portion of the gNB.
9. The apparatus of claim 8, wherein the vnffgd attribute further comprises:
an nfpd (network forwarding path descriptor) attribute that specifies a network forwarding path associated with the VNFFG of the NS.
10. The apparatus of any of claims 7-9, wherein the nsDf attribute comprises a virtualLinkProfile attribute and an nsInstantiationLevel attribute indicating one or more NsLevels within a deployment flavour of the NS.
11. The apparatus of claim 10, wherein the virtualLinkProfile attribute comprises a maxBitrateRequirements attribute and a minBitrateRequirements attribute, the maxBitrateRequirements attribute indicating a maximum bandwidth requirement for an interface between a virtualized portion of a gNB (next generation node B) and a non-virtualized portion of the gNB, and the minBitrateRequirements attribute indicating a minimum bandwidth requirement for the interface between the virtualized portion of the gNB and the non-virtualized portion of the gNB.
12. The apparatus of claim 10, wherein each of the one or more NsLevel attributes comprises a virtualLinkToLevelMapping attribute comprising a bitrateRequirements attribute indicating a bandwidth requirement for an interface between a virtualized portion of a gNB (next generation node B) and a non-virtualized portion of the gNB.
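Claims 7-12 recite an NFVO that first updates an on-boarded NSD and then, on a second request with updateType equal to AssocNewNsdVersion, associates a running NS with the new NSD version while emitting start and result lifecycle change notifications. A hedged sketch of that two-step flow follows; the Nfvo class, its method names, and the dictionary layouts are invented for illustration and are not an ETSI-defined API:

```python
class Nfvo:
    """Toy stand-in for the NFVO side of the claimed update procedure."""

    def __init__(self):
        self.nsds = {}            # nsdInfoId -> NSD content
        self.notifications = []   # NS lifecycle change notifications sent to the NM

    def update_nsd(self, nsd_info_id, nsd):
        """Handle the first request (update NSD); the response carries nsdInfoId."""
        self.nsds[nsd_info_id] = nsd
        return nsd_info_id

    def update_ns(self, ns_id, update_type, assoc_new_nsd_version_data):
        """Handle the second request: associate the NS with the updated NSD."""
        assert update_type == "AssocNewNsdVersion"
        self.notifications.append(("start", ns_id))    # first lifecycle notification
        nsd = self.nsds[assoc_new_nsd_version_data]    # NSD to associate with the NS
        result = {"ns_id": ns_id, "nsd": nsd}          # update the NS from the NSD
        self.notifications.append(("result", ns_id))   # second lifecycle notification
        return result

nfvo = Nfvo()
nfvo.update_nsd("nsd-info-1", {"vnffgd": "...", "nsDf": "..."})
out = nfvo.update_ns("ns-1", "AssocNewNsdVersion", "nsd-info-1")
```

The two appended notifications model the claimed "start" and "result" NS lifecycle change notifications that bracket the NS update procedure.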
13. An apparatus configured to be employed within a NM (network manager), comprising:
a memory interface; and
processing circuitry configured to:
send a request to an NFVO (NFV (network function virtualization) orchestrator) invoking an update NSD (NS (network service) descriptor) operation to add a VNFFGD (VNF (virtual network function) FG (forwarding graph) descriptor) for a gNB (next generation node B), wherein the request includes an nsdInfoId parameter indicating the NSD to be updated and an NSD parameter, wherein the NSD parameter indicates the NSD and includes the VNFFGD to be added;
receive a response from the NFVO indicating successful update of the NSD, wherein the response includes the nsdInfoId parameter; and
send the nsdInfoId parameter to memory via the memory interface,
wherein the request includes a virtualLinkDf (virtual link deployment flavour) attribute that includes a qos (quality of service) attribute, wherein the qos attribute includes a latency attribute that indicates a latency requirement for an F1 interface.
14. The apparatus of claim 13, wherein the NSD parameter comprises a vnffgd attribute comprising:
a vnffgdId (VNFFGD identifier) attribute that uniquely identifies the VNFFGD;
a vnfdId (VNFD (VNF descriptor) identifier) attribute that identifies the VNFD constituting the VNF that implements the virtualized portion of the gNB (next generation node B);
a pnfdId (PNFD (PNF (physical network function) descriptor) identifier) attribute that identifies the PNFD constituting the PNF that implements the non-virtualized portion of the gNB; and
a cpdPoolId (connection point descriptor pool identifier) attribute that references a pool of descriptors of one or more connection points of the VNFs implementing the virtualized portion of the gNB and of the one or more PNFs implementing the non-virtualized portion of the gNB.
15. The apparatus of claim 14, wherein the vnffgd attribute further comprises an nfpd (network forwarding path descriptor) attribute that specifies a network forwarding path associated with a VNFFG of the NS.
16. The apparatus of any one of claims 13-15, wherein the NSD has already been on-boarded.
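Claims 13-16 take the NM's perspective: invoke the NFVO's update NSD operation with an nsdInfoId and an NSD parameter carrying the VNFFGD to add, then keep the nsdInfoId returned in the response. The sketch below assumes the NM-to-NFVO interface can be modeled as a simple callable; the function names and dictionary keys are hypothetical:

```python
def nm_add_vnffgd(nfvo_call, nsd_info_id, nsd, vnffgd):
    """NM side: send the update-NSD request and return the acknowledged nsdInfoId."""
    # The NSD parameter indicates the NSD and includes the VNFFGD to be added.
    nsd["vnffgd"] = nsd.get("vnffgd", []) + [vnffgd]
    response = nfvo_call(nsdInfoId=nsd_info_id, nsd=nsd)
    return response["nsdInfoId"]   # value the claim sends to memory

def stub_nfvo(nsdInfoId, nsd):
    """Stand-in NFVO that acknowledges the update with the same nsdInfoId."""
    return {"nsdInfoId": nsdInfoId}

nsd_doc = {}
stored = nm_add_vnffgd(stub_nfvo, "nsd-info-7", nsd_doc, {"vnffgdId": "vnffgd-gnb"})
```

The returned nsdInfoId is what the claimed processing circuitry then writes to memory via the memory interface.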
17. An apparatus configured to be employed within an NFVO (network function virtualization orchestrator), comprising:
a memory interface; and
processing circuitry configured to:
receive, from an NM (network manager), a request invoking an update NSD (NS (network service) descriptor) operation to add a VNFFGD (VNF (virtual network function) FG (forwarding graph) descriptor) for a gNB (next generation node B), wherein the request includes an nsdInfoId parameter indicating the NSD to be updated and an NSD parameter indicating the NSD and including the VNFFGD to be added;
update the NSD by adding the VNFFGD for the gNB;
transmit a response to the NM indicating successful update of the NSD, wherein the response includes the nsdInfoId parameter; and
send the nsdInfoId parameter to memory via the memory interface,
wherein the request includes a virtualLinkDf (virtual link deployment flavour) attribute that includes a qos (quality of service) attribute, wherein the qos attribute includes a latency attribute that indicates a latency requirement for an F1 interface.
18. The apparatus of claim 17, wherein the NSD parameter comprises a vnffgd attribute comprising:
a vnffgdId (VNFFGD identifier) attribute that uniquely identifies the VNFFGD;
a vnfdId (VNFD (VNF descriptor) identifier) attribute that identifies the VNFD constituting the VNF that implements the virtualized portion of the gNB (next generation node B);
a pnfdId (PNFD (PNF (physical network function) descriptor) identifier) attribute that identifies the PNFD constituting the PNF that implements the non-virtualized portion of the gNB; and
a cpdPoolId (connection point descriptor pool identifier) attribute that references a pool of descriptors of one or more connection points of the VNFs implementing the virtualized portion of the gNB and of the one or more PNFs implementing the non-virtualized portion of the gNB.
19. The apparatus of claim 18, wherein the vnffgd attribute further comprises:
an nfpd (network forwarding path descriptor) attribute that specifies a network forwarding path associated with the VNFFG of the NS.
20. The apparatus of any one of claims 17-19, wherein the NSD has already been on-boarded.
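Claims 17-20 are the NFVO-side mirror of claims 13-16: receive the update request, add the VNFFGD for the gNB to the stored NSD, and acknowledge with the nsdInfoId. A minimal illustrative handler follows; the function name and dictionary layout are assumptions, not an ETSI-defined interface:

```python
def nfvo_handle_update_nsd(nsd_store, nsd_info_id, nsd_param):
    """NFVO side: add the VNFFGDs carried in the NSD parameter, then respond."""
    stored = nsd_store.setdefault(nsd_info_id, {"vnffgd": []})
    stored["vnffgd"].extend(nsd_param.get("vnffgd", []))   # add VNFFGD for the gNB
    return {"nsdInfoId": nsd_info_id}                      # response sent to the NM

store = {}
resp = nfvo_handle_update_nsd(
    store, "nsd-info-17", {"vnffgd": [{"vnffgdId": "vnffgd-gnb"}]}
)
```

Extending (rather than replacing) the stored vnffgd list reflects that the operation adds a VNFFGD to an NSD that has already been on-boarded.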
CN201880027531.6A 2017-08-01 2018-07-27 Techniques involving interfaces between next generation node B central units and next generation node B distributed units Active CN110582991B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201762539936P 2017-08-01 2017-08-01
US201762539925P 2017-08-01 2017-08-01
US62/539,936 2017-08-01
US62/539,925 2017-08-01
PCT/US2018/044063 WO2019027827A1 (en) 2017-08-01 2018-07-27 Techniques related to interface between next generation nodeb central unit and next generation nodeb distributed unit

Publications (2)

Publication Number Publication Date
CN110582991A CN110582991A (en) 2019-12-17
CN110582991B true CN110582991B (en) 2023-05-19

Family

ID=63350591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880027531.6A Active CN110582991B (en) 2017-08-01 2018-07-27 Techniques involving interfaces between next generation node B central units and next generation node B distributed units

Country Status (2)

Country Link
CN (1) CN110582991B (en)
WO (1) WO2019027827A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020252052A1 (en) * 2019-06-10 2020-12-17 Apple Inc. End-to-end radio access network (ran) deployment in open ran (o-ran)
CN112152832B (en) * 2019-06-28 2023-01-13 ***通信有限公司研究院 Management object processing method and device, related equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
WO2017119933A1 (en) * 2016-01-08 2017-07-13 Intel IP Corporation Performance monitoring techniques for virtualized resources

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
KR101979362B1 (en) * 2014-01-29 2019-08-28 Huawei Technologies Co., Ltd. Method for upgrading virtualized network function and network function virtualization orchestrator
CN106797323B (en) * 2014-09-25 2021-04-30 Apple Inc. Network function virtualization
EP3249871A4 (en) * 2015-02-16 2018-02-21 Huawei Technologies Co., Ltd. Method and device for updating network service descriptor
CN106533714A (en) * 2015-09-09 2017-03-22 ZTE Corporation Method and device for re-instantiating virtual network function

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
WO2017119933A1 (en) * 2016-01-08 2017-07-13 Intel IP Corporation Performance monitoring techniques for virtualized resources

Also Published As

Publication number Publication date
CN110582991A (en) 2019-12-17
WO2019027827A1 (en) 2019-02-07

Similar Documents

Publication Publication Date Title
US11327787B2 (en) Using a managed object operation to control a lifecycle management operation
CN111480366B (en) Shared PDU session establishment and binding
US10856183B2 (en) Systems and methods for network slice service provisioning
RU2643451C2 (en) System and method for virtualisation of mobile network function
US11909587B2 (en) Management services for 5G networks and network functions
WO2018084975A1 (en) Lifecycle management parameter modeling for virtual network functions
EP3595244B1 (en) Network slice management method, unit and system
US20200145833A1 (en) Method for associating network functions with a network slice instance of a mobile radio communication network
CN111587601A (en) Network slice provisioning and operation
WO2018089634A1 (en) Network slice management
EP3577857B1 (en) Network resource model to support next generation node b
EP3456004B1 (en) Apparatus of performance measurement data subscription for nfv performance management
US20200028938A1 (en) Enhancement of traffic detection and routing in virtualized environment
US11601877B2 (en) Systems and methods for exposing network slices for third party applications
CN110582991B (en) Techniques involving interfaces between next generation node B central units and next generation node B distributed units
WO2017164932A1 (en) Network function virtualization (nfv) performance measurement (pm) threshold monitoring operations
US20220109971A1 (en) Communication method and communications apparatus
Kim et al. GiLAN Roaming: Roam Like at Home in a Multi-Provider NFV Environment
WO2019024981A1 (en) Supporting resource allocation in a radio communication network
KR20230131843A (en) Methods for deploying multi-access edge computing applications
WO2023032102A1 (en) Performance index value calculation system and performance index value calculation method
WO2018014172A1 (en) Business processing method and network equipment in core network
Core Supporting Evolved Packet Core for One Million Mobile Subscribers with Four Intel® Xeon® Processor-Based Servers

Legal Events

Date Code Title Description
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20200413

Address after: California, USA

Applicant after: INTEL Corp.

Address before: California, USA

Applicant before: INTEL IP Corp.

Effective date of registration: 20200413

Address after: California, USA

Applicant after: Apple Inc.

Address before: California, USA

Applicant before: INTEL Corp.

SE01 Entry into force of request for substantive examination
GR01 Patent grant