An Apparatus and Method for Bundling Associated Network Process Flows in Service Aware Networks
I. DESCRIPTION OF THE INVENTION
I.A. Field of the Invention

The present invention relates generally to the monitoring and processing of packets in a full duplex communication system, and more specifically to high-speed digital communication networks transporting packets that may be monitored for associations between process flows, such that a system can either unite or relate such process flows. In particular, in service aware networks (SAN), where a father process flow may create son and grandson process flows, the ability to relate or unite them results in better management of system resources.
I.B. Background of the Invention
The Internet and the World Wide Web (WWW) consist of a large number of computer systems. Most of these systems are designed to route packets of data from a source node to a destination node. The routing is done primarily by sending a basic structure over the network, known as a data packet. Such a data packet typically contains information about the source and destination of the packet and an attached amount of data. The data is also known as the payload of the packet.
Packets are related to each other according to a variety of pre-defined protocols. Generally, communication protocols use full duplex connections to exchange packets. According to a duplex protocol, the same session may include packets moving both upstream and downstream. Additionally, the protocols may comprise an application layer protocol. More details on the application layer are provided in Figure 6 and the associated passages of the present disclosure.

In a policy-based system, packets may be monitored for basic qualities in order to apply certain rules to such packets. Such monitoring is also required to ensure that the packets are processed efficiently. Moreover, as higher network speeds are required, the ability to associate process flows is essential to efficient handling of the stream of packetized data transmitted through the system. These process flows would otherwise be considered independent and be processed on separate packet processors in a multiprocessor system, resulting in a more complex management system, waste of system resources, and an inability to take advantage of the shared information regarding these process flows. It is also important that the network be able to address the issues arising from the association between application-level protocols and lower-level protocols in order to achieve better system performance and management. Such an association may be made by inter-processor communication; however, inter-processor communication is complex and reduces overall performance.

The information related to each packet includes the source and destination Internet Protocol (IP) addresses, the respective source and destination ports, and the protocol type field. Currently this information is used only for the basic association of the packet with a certain queue that may fit the specific classification of that packet.
However, this enables only a few very basic operations on each packet separately. The information is insufficient to associate process flows with each other so that packets can be processed more efficiently as they flow through the network. It also does not allow the application of certain rules for further manipulation or treatment, and it does not capture the association that exists between packets from related process flows.

The ability to use the information in one packet in conjunction with the history of other packets and process flows currently under execution in the system allows for more efficient handling of packets as they flow through the network.
Conventional techniques, such as that provided in U.S. Patent No. 5,606,668 by Shwed et al., merely deal with handling a single process flow on a single-packet basis, determining how each such packet should be handled. In the technique disclosed in Shwed, a packet is either allowed to pass or rejected by the system based on certain predefined security rules, allowing for the implementation of a sophisticated security system.
U.S. Patent No. 4,577,311 by Duquesne et al. discloses a system where the routing from point to point is predetermined for each and every packet. Such an up-front allocation of channels may be efficient in that the user has specific knowledge and early control of the path taken. However, a disadvantage is that when the load increases over the various nodes, over which the user has no control, the performance of the connection between the two selected nodes may deteriorate.
Therefore, even if the system proposed by Duquesne takes into account all the associated process flows of a communication session, the determination of the path is still fixed, and no control that allows for sustaining a desired level of service can be achieved. Similarly, conventional techniques relating to asynchronous transfer mode (ATM) networks do not provide the capabilities required to bundle together different process flows that may be associated in a variety of ways.
It would be advantageous, for example, for the packet processing of a file transfer protocol (FTP) session that the various process flows relating to a specific session have some kind of association between them. In this case there is a control process flow and a data process flow, and while no interaction occurs between the flows, their start and finish points are highly correlated. The ability to successfully identify such process flows, establish the relationship, and send them to the same packet processor results in a higher performance system and a better ability to recover in cases of failure.

Another type of associated process flow occurs in the case of voice over Internet protocol (VoIP), such as the H.323 standard. In this case, three basic process flows can be identified: video, audio and control. However, unlike FTP, in this case there is significant interaction throughout the life span of the protocol. In other words, for the duration of the connection, the video, audio and control process flows have to interact. This is required, for example, to ensure full synchronization between the audio stream and the video stream, so that full lip synchronization is achieved. In such a case it would be advantageous for the packet processor to unite these process flows so that they can share resources, thereby saving system resources and enhancing overall performance and system
efficiency.
H.323 is a well-known VoIP standard. Its components include terminals, gateways, gatekeepers and Multipoint Control Units (MCUs). The terminals provide real-time bidirectional multimedia communication; they support audio and optionally video and data, and can reside stand-alone or on a PC. The gateways connect two dissimilar networks; their protocols support call setup and release, media format conversion, and transferring information between networks. The gatekeeper forms the brain of H.323, providing services such as addressing, authorization, authentication, bandwidth management and accounting. The MCUs provide conferencing between three or more terminals. Further details on H.323 can easily be found on the Internet.
II. SUMMARY OF THE INVENTION
An object of the present invention is to provide a method and apparatus for the bundling of associated process flows. It is another object of the invention to provide for the differentiation between two types of associated process flows: the first, which has an association only in its start and end portions, and the second, which has an ongoing association throughout the existence of the relevant process flows. It is a further object of this invention to provide a method by which an association can be created between a first process flow and a second process flow before all the parameters of said second process flow are known.

To meet the objects of the invention there is provided a network comprising at least one data path for processing one of an upstream and downstream stream of data packets, a classifier capable of classifying the data packets as belonging to a father process flow based on header information associated with each of the data packets, and at least one packet processor capable of processing packets sent from said data path.
Preferably, the packet processor is replaced by a multi-processor system designated to logically process one activity by partitioning the task between the processing units.
Preferably, the classifier is further capable of identifying son process flows based on information contained in the father process flow, wherein the son process flows were created by the father process flow. Still preferably, the classifier is further capable of predicting a tuple for each of the son process flows from the father process flow.
Still preferably, the classifier is further capable of providing a partial tuple for each of the son process flows based on currently available information from the father process flow. Still preferably, at least source port information is missing in the partial tuple.
Still preferably, at least a destination port is missing in the partial tuple.
Still preferably, at least a source IP address is missing in the partial tuple.
Still preferably, at least a destination IP address is missing in the partial tuple.
Still preferably, the classifier is capable of updating the partial tuple intended for the son process flows upon receipt of the first packet of said son process flow.
Still preferably, the data path is capable of discarding the information related to the son process flows.
Preferably, the classifier comprises a content addressable memory (CAM). Still preferably, each CAM cell can be disabled from performing a comparison. Still preferably, each CAM cell can be enabled to perform a comparison.
Still preferably, the CAM contains at least one row of CAM cells. Still preferably, the CAM comprises a first set of rows of CAM cells and a second set of rows of CAM, wherein contents of the CAM cells can be compared against a received tuple and wherein each of the CAM cells can be disabled from performing the comparison.
Still preferably, the received tuple is compared against the first set of CAM cells and if the comparison fails the received tuple is compared against the second set of CAM cells.
Still preferably, the received tuple is compared against the contents of said first and second sets of CAM cells in parallel, and results of the comparison with the second set of CAM cells are ignored if the comparison with the first set of CAM cells is successful.
Still preferably, when the comparison with the second set of CAM cells is successful, an entry in the second set of CAM cells is deleted, information for the son tuple is updated with information from the received tuple, and the updated son tuple information is moved to the classifier for further handling and storage.
Still preferably, the updated son tuple information is moved to the first set of CAM cells.
Still preferably, at least one of the son process flows shares system resources with the father process flow. Still preferably, at least one of the son process flows shares system resources with a son process flow different from said at least one of the son process flows.
Still preferably, the classifier is capable of uniting the father process flow with at least one son process flow under a same process flow ID.
Still preferably, the classifier is further capable of assigning a unique identification number in addition to the process flow ID to the son process flow.
Still preferably, the classifier is further capable of directing the united process flow into a designated packet processor.
Still preferably, the packet processor is capable of distinguishing between the process flows of a united process flow by means of the unique identification number assigned to the son process flows.
Still preferably, the united process flows are capable of sharing the same system resources.
Still preferably, the shared system resource is an area in the system memory. Still preferably, the shared resource is a packet processor.
Preferably, the father process flow may be a protocol defined in any one of the layers three, four, five, six or seven of an OSI communication model.
Still preferably, at least one of the son process flows is a protocol of the third layer of the standard communication model if the father process flow is from the third layer of the communication model.

Still preferably, at least one of the son process flows is a protocol of the third or fourth layer of the standard communication model if the father process flow is from the fourth layer of the communication model.

Still preferably, at least one of the son process flows is a protocol of the third, fourth or fifth layer of the standard communication model if the father process flow is from the fifth layer of the communication model.

Still preferably, at least one of the son process flows is a protocol of the third, fourth, fifth or sixth layer of the standard communication model if the father process flow is from the sixth layer of the communication model.

Still preferably, at least one of the son process flows is a protocol of the third, fourth, fifth, sixth or seventh layer of the standard communication model if the father process flow is from the seventh layer of the communication model.

Another aspect of the present invention is a method of associating process flows in a network, said method comprising processing a stream of data packets and classifying the data packets as belonging to a father process flow based on header information associated with each of the data packets.
Preferably, the method further comprises identifying son process flows based on information contained in the father process flow, wherein the son process flows may be created as a result of the father process flow.
Still preferably, the method further comprises predicting a tuple for each of said son process flows from the father process flow.
Still preferably, the method further comprises providing a partial tuple of the
son process flow based on currently available information from the father process flow.
Still preferably, at least source port information is missing in the partial tuple. Still preferably, at least a destination port is missing in the partial tuple.
Still preferably, at least a source IP address is missing in the partial tuple. Still preferably, at least a destination IP address is missing in the partial tuple.
Still preferably, the method further comprises updating the partial tuple intended for the son process flow upon receipt of the first packet of said son process flow.
Still preferably, the information related to the son process flow can be discarded.
Preferably, the method further comprises comparing a received tuple against contents of a first set of CAM cells; and comparing the received tuple against contents of a second set of CAM cells if said comparing against the first set of CAM cells is unsuccessful.
Still preferably, the method further comprises comparing the received tuple against the contents of the first and second sets of CAM cells in parallel, wherein results of the comparison with the second set of CAM cells are ignored if the comparison with the first set of CAM cells is successful.
Still preferably, when the comparison with the second set of CAM cells is successful, an entry in the second set of CAM cells is deleted, information for the son tuple is updated with information from the received tuple, and the updated son tuple information is moved for further handling and storage.
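The two-set comparison described above can be sketched in software. The following Python sketch (illustrative names and data structures, not the hardware CAM design) models the first set as complete tuples and the second set as predicted son tuples whose disabled cells appear as wildcards:

```python
WILDCARD = None  # models a disabled CAM cell: excluded from the comparison

def matches(entry, tup):
    """Compare a stored (possibly partial) tuple with a received one.
    Wildcard fields behave like disabled CAM cells and always match."""
    return all(e is WILDCARD or e == t for e, t in zip(entry, tup))

def lookup(first_set, second_set, tup):
    """Sequential lookup sketch: try the complete-tuple set first,
    then the predicted (partial) son-tuple set.  On a second-set hit,
    the predicted entry is deleted and the completed tuple is stored."""
    for entry in first_set:
        if matches(entry, tup):
            return "known flow", entry
    for entry in list(second_set):
        if matches(entry, tup):
            second_set.remove(entry)   # delete the predicted entry
            first_set.append(tup)      # store the completed son tuple
            return "son flow completed", tup
    return "new flow", None
```

A tuple matching a second-set entry is thus promoted to the first set with its missing fields filled in from the received packet.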
Still preferably, the method further comprises uniting the father process flow with at least one son process flow under a same process flow ID. Still preferably, the method further comprises assigning a unique identification number in addition to the process flow ID to the son process flow.
Still preferably, the method further comprises directing the united process flow into a designated packet processor.
Still preferably, the method further comprises distinguishing between the process flows of a united process flow by means of the unique identification number assigned to the son process flows.
Preferably, the father process flow may be a protocol defined in any one of the layers three, four, five, six or seven of an OSI communication model.
Still preferably, at least one of the son process flows is a protocol of the third layer of the standard communication model if the father process flow is from the third layer of the communication model.
Still preferably, at least one of the son process flows is a protocol of the third or fourth layer of the standard communication model if the father process flow is from the fourth layer of the communication model. Still preferably, at least one of the son process flows is a protocol of the third, fourth or fifth layer of the standard communication model if the father process flow is from the fifth layer of the communication model.
Still preferably, at least one of the son process flows is a protocol of the third, fourth, fifth or sixth layer of the standard communication model if the father
process flow is from the sixth layer of the communication model.
Still preferably, at least one of the son process flows is a protocol of the third, fourth, fifth, sixth or seventh layer of the standard communication model if the father process flow is from the seventh layer of the communication model.
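The layer constraints enumerated above reduce to a single rule: a son process flow's protocol may come from any layer from the third up to and including the father's layer. A minimal Python encoding of that rule (illustrative only, not part of the claimed apparatus):

```python
def son_layer_allowed(father_layer: int, son_layer: int) -> bool:
    """A son process flow's protocol may be from layer 3 up to (and
    including) the father process flow's layer, for fathers in layers 3-7."""
    return 3 <= son_layer <= father_layer <= 7
```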
III. BRIEF DESCRIPTION OF THE DRAWINGS
The above objectives and advantages of the present invention will become more apparent by describing in detail preferred embodiments thereof with reference to the attached drawings in which:
Figure 1 - is a block diagram of the preferred embodiment of the system.
Figure 2 - is a diagram of the header added to each packet by the data path.
Figure 3 - is a diagram providing the details of the header status format.
Figure 4 - is an example diagram of the communication chart of a static FTP.
Figure 5 - is a diagram of the H.323 voice over internet protocol (VoIP).
Figure 6 - is a diagram of the standard seven layers of the communication model.
Figure 7 - is an example diagram of a communication protocol with one implicit port number.
Figure 8 - is an example diagram of a communication protocol with one implicit port number and one implicit IP address number.
IV. DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The preferred embodiment of the apparatus and method of the present invention is described using the system shown in Figure 1. Streams of packets belonging to one or more process flows are processed using this system. Packets going upstream and downstream are classified in a way that results in the distribution of packet streams to a plurality of Packet Processors (PPs).
The system itself is described in detail in U.S. Patent Application Serial No. 09/541,598, titled "An Apparatus and Method for Wire-Speed Classification and Pre-Processing of Data Packets in a Full Duplex Network" by Michael Ben-Nun, Sagi Ravid and Ofer Weill (hereinafter '598), and U.S. Patent Application Serial No. 09/547,034, titled "A Method and Apparatus for Wire-Speed Application Layer Classification of Data Packets" by Michael Ben-Nun, Sagi Ravid, Itzhaki Barak and Ofer Weill (hereinafter '034), the disclosures of which are incorporated herein by reference. The details of the systems and methods disclosed in the above applications are not discussed herein except where required to explain this invention.
The Data Path (110) extracts the tuple associated with a packet. The tuple comprises five fields: IP source address (32 bits), IP destination address (32 bits), Protocol (8 bits), Source Port (16 bits), and Destination port (16 bits). A unique tuple identifies a process flow. Therefore, a set of packets with the same tuple will belong to a process flow. While the received packets themselves are stored in the Data Path, the corresponding tuple is sent to the Header Processor (120) and to the Classifier (130) for purposes of process flow identification.
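As a rough illustration of the tuple described above, the following Python sketch defines the five fields and extracts them from a packet represented as a dictionary of already-parsed header fields. The dictionary form is an assumption made for illustration; a real data path would read the fields from the raw IP and transport headers:

```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    """The five-field tuple described in the text; field widths per the
    text are 32-bit IP addresses, an 8-bit protocol field, 16-bit ports."""
    src_ip: int     # IP source address (32 bits)
    dst_ip: int     # IP destination address (32 bits)
    protocol: int   # protocol field (8 bits)
    src_port: int   # source port (16 bits)
    dst_port: int   # destination port (16 bits)

def extract_tuple(packet: dict) -> FiveTuple:
    """Extract the flow-identifying tuple from a parsed packet."""
    return FiveTuple(packet["src_ip"], packet["dst_ip"],
                     packet["protocol"], packet["src_port"],
                     packet["dst_port"])

# Packets sharing the same tuple belong to the same process flow:
p1 = {"src_ip": 99, "dst_ip": 10, "protocol": 6,
      "src_port": 20, "dst_port": 1022}
p2 = dict(p1)
assert extract_tuple(p1) == extract_tuple(p2)
```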
The Header Processor is a filter that matches the incoming tuple against a set of rules. If no rule can be applied to the incoming packet, a 'Flow-Kill' command is sent to the Classifier. Such a 'Flow-Kill' command means that a new process flow entry need not be opened for the packet, saving system resources and improving system performance.
The main function of the Classifier is to classify the packet to the proper process flow. If the packet is part of a known process flow, the Classifier returns the process flow information, which includes the Flow ID, the Packet Processor number and other control/status information required for later packet processing. On the other hand, if the packet is the first packet of a new process flow, a new process flow entry is opened. Certain aspects of this operation are described herein.
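The Classifier's behavior on known versus first-seen tuples might be sketched as follows. The flow-information fields and the processor-assignment rule are illustrative assumptions, not the patented design:

```python
import itertools

class Classifier:
    """Sketch: map a tuple to its process flow information, opening a
    new flow entry when the first packet of a new flow arrives."""
    NUM_PACKET_PROCESSORS = 4   # assumed number of PPs, for illustration

    def __init__(self):
        self._flows = {}                 # tuple -> flow information
        self._ids = itertools.count(1)   # Flow ID generator

    def classify(self, tup):
        info = self._flows.get(tup)
        if info is None:                 # first packet of a new flow
            info = {"flow_id": next(self._ids),
                    # illustrative assignment rule: hash the tuple to a PP
                    "packet_processor": hash(tup) % self.NUM_PACKET_PROCESSORS}
            self._flows[tup] = info
        return info
```

A second packet with the same tuple receives the same flow information, while a different tuple opens a new entry with a fresh Flow ID.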
A specific interface between the Data Path and the plurality of Packet Processors (140) provides the required information for further packet processing. Figure 2 describes the header, which is 32 bytes long. This header is attached to every IP packet received from the network that passed the required rule checks. Most of the details of the interface are discussed in '598 and '034.
The Header Status is divided into sub-fields that are shown in Figure 3 and described in detail in '598 and '034. However, it should be noted that a Unite Number is used to distinguish between associated process flows. Such a Unite Number is required because there are cases, such as IP telephony, where a process flow termed a 'father' process flow initiates a new process flow called a 'son' process flow.
In such a case there is a need to propagate the new process flow, termed the 'son' process flow, to the same Packet Processor used for handling the 'father' process flow. Nevertheless, the 'father' process flow must still be distinguished from the new process flows, as indicated by the Unite Number. The Flow ID field provides the unique number of the process flow, designated by the Classifier to all the packets that belong to the same process flow.
As mentioned before, network protocols may be better handled if the system is aware of the relationship between such 'father' and 'son' protocols. A first illustrative example, shown schematically in Figure 4, describes a static file transfer protocol (FTP). FTP is characterized by a control session and a data session. A control session is initiated when a user initiates an FTP session and remains open until the user terminates the FTP session; the control connection established during the control session stays alive for the duration of the control session. A data session, on the other hand, is opened within a control session, with a new data session opened for each file transfer. A data connection established during a data session stays alive only during that data session. To transfer the next file during the same control session, a new data session is initiated within the same control session.
The tuples used for packets in the data session are called data session tuples. Likewise, the tuples used for packets associated with a control session are called control session tuples. In the FTP case shown in Figure 4, the parameters of the communication protocol are known in advance. For example, the client has an IP address of "10" and a data port destination of "1022", as indicated in the message sent to the server, and the server has an IP address of "99". Since this is a standard FTP transmission, the data port address is "20". It is also known that the protocol used is TCP. Therefore, all the entries for the future data session tuple are known in advance, and a data transmission corresponding to a tuple with the following structure can be expected:
Clearly, the above data session tuple was created by a control session tuple, since the data session was opened within the context of a control session. Therefore, the above data session tuple is clearly related to the control session tuple that created it. The control session tuple has the following structure:
In this case, the association between the control and data protocols exists only at the beginning and at the end of the corresponding data session; otherwise no interaction is required, and the expected tuple can be predicted in its entirety. Therefore, the process flow associated with the control session and the process flow associated with the corresponding data session are considered related process flows.
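Using the values given in the text (client IP "10", client data port "1022", server IP "99", server data port 20, protocol TCP), the fully predictable data session tuple can be sketched as below. The tuple direction (server to client) follows standard active-mode FTP and is an assumption of this sketch:

```python
def predict_ftp_data_tuple(client_ip, client_data_port, server_ip):
    """Predict the FTP data-session tuple from control-session information.
    In standard (active) FTP the server sends data from port 20 over TCP,
    so every field of the expected tuple is known in advance."""
    return {"src_ip": server_ip, "dst_ip": client_ip,
            "protocol": "TCP", "src_port": 20,
            "dst_port": client_data_port}

# With the values from the example in the text:
expected = predict_ftp_data_tuple(client_ip="10",
                                  client_data_port=1022,
                                  server_ip="99")
```

Because the tuple is complete, the 'son' process flow for the data session can be opened before its first packet arrives.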
In fact, as early as when an FTP session and the associated control session are initiated, the system can predict that a data session is likely to occur. It can therefore immediately generate a 'son' process flow for the data session. Such a 'son' process flow will have a separate flow ID, and will have the necessary addresses to create the expected data session tuple, as all the elements are known in advance.
By providing this anticipated data session tuple, system performance is enhanced and resources are utilized more effectively and efficiently: the related process flows may be executed on the same packet processor, and the preparatory work is done before receipt of the first data packet. Furthermore, some of the necessary resources are utilized in common, thereby conserving system resources. Even where only one packet processor is used, the capability to logically combine related process flows enhances overall system performance and efficiency.
Unlike the above FTP case, in other cases it may not be easy to predict the expected tuple early in the session. However, a capability to predict the tuple and allocate system resources in anticipation is desirable in order to enhance system performance and efficiency. The second illustrative example, shown in Figure 5, describes the case of a voice over internet protocol (VoIP) where the source port of the data from the server is unknown when the control sessions begin. A hierarchy diagram of this standard protocol is shown in Figure 5. From the diagram it is clear that, as a result of the father protocol H.323, several son, grandson and great-grandson protocols are created, all of which are associated with each other.
The process flows resulting from a hierarchy such as the H.323 hierarchy shown above may be from the third through the seventh layer of the open system interconnection (OSI) standard communication model. The OSI model, which defines the architecture of data communication protocols, is schematically shown in Figure 6. The model was developed by the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT).
Layer 7 of the OSI model is the application layer, the highest layer of the model; it defines the way applications interact with the network. Layer 6, the presentation layer, includes protocols that are part of the operating system; it defines how information is formatted for display or printing, how data is encrypted, and translation between character sets. Layer 5 is the session layer, which coordinates communication between systems, maintaining sessions for as long as needed and performing security, logging and administrative functions. Layer 4, the transport layer, controls the movement of data between systems, defines protocols for structuring messages, and supervises the validity of transmissions by performing error checking. Layer 3 is the network layer, which defines protocols for routing data by opening and maintaining a path on the network between systems to ensure that data arrives at the correct destination node. Layer 2, the data-link layer, defines the rules for sending and receiving information from one node to another between systems. Layer 1 is the physical layer, which governs hardware connections and byte-stream encoding for transmission; it is the only layer that involves a physical transfer of information between network nodes.

It is necessary to ensure consistent performance between the various protocols operating in the various layers of a communication model (for example, the OSI model discussed above). Therefore, it is advantageous to identify and maintain an association between the protocols. In the present system, all packets belonging to the associated process flows are treated consistently. This is achieved by uniting the associated process flows into one process flow.
However, in order to distinguish between the packets belonging to the various process flows, each process flow is assigned a unique Unite Number. This number is used to distinguish packets belonging to one process flow from those of another and to handle them correctly. By uniting the process flows and assigning the united process flow to one packet processor, the overall performance of the system is enhanced: resource allocation is optimized and data can be shared, saving resources such as memory. Alternatively, the united process flows can be assigned to a logical processing unit, which may be composed of a multiplicity of packet processors operating in a multi-processor environment. In such a system, logically uniting the process flows allows for efficient processing of the packets and overall conservation of system resources.
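A minimal sketch of a united process flow follows: one Flow ID and one packet processor are shared by the father and its sons, while a per-member Unite Number tells them apart. The class and field names are illustrative assumptions:

```python
class UnitedFlow:
    """United process flow: a single Flow ID and a single packet
    processor shared by the father and son process flows; a per-member
    Unite Number distinguishes packets of one member from another."""
    def __init__(self, flow_id, packet_processor):
        self.flow_id = flow_id
        self.packet_processor = packet_processor
        self._next_unite = 0
        self.members = {}            # Unite Number -> member description

    def add_member(self, description):
        """Register the father or a son flow; returns its Unite Number,
        which would be stamped into each packet's header."""
        unite = self._next_unite
        self._next_unite += 1
        self.members[unite] = description
        return unite

united = UnitedFlow(flow_id=7, packet_processor=2)
father_unite = united.add_member("control (father) flow")
son_unite = united.add_member("data (son) flow")
assert father_unite != son_unite       # distinguishable within one flow ID
```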
Those skilled in the art can expand this capability to a variety of other applications, such as billing systems, reservation systems and the like. Other implementations could include more specific billing applications, such as refraining from beginning billing until a certain number of son process flows are active, or until the bandwidth requirement is consistent with the quality of service required by the user. A skilled artisan will know that the scope of the disclosed technique extends to process flows and combinations thereof that are more complex than the illustrative examples discussed above. The disclosed technique provides the capability of handling the case where a 'son' protocol associated with a 'son' process flow is expected but not all the details about it are known when the 'father' protocol associated with the 'father' process flow is created. An illustrative example of such a protocol exchange is described in Figure 7. The initial father tuple is defined as follows:
The message contains the destination port for the client, indicated as "1022" and also known are the source and destination IP addresses. Therefore four out of the five elements of the tuple are known. However, the source port to be allotted by the server for this 'son' process flow is unknown. Therefore, the present technique predicts the expected tuple will be as follows:
The "*" denotes an unknown value that will become known later. How this situation is handled is described below. However, it is worthwhile noting that upon receiving a tuple with the following value:
The system will identify it as the expected tuple it had predicted ahead of time for the 'son' process flow. The system will update the missing information and
handle the packet as a process flow associated with the information available earlier to the system.
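The prediction-and-update mechanism for a single unknown field can be sketched as follows; the arriving server source port 3045 is a hypothetical value used only for illustration:

```python
WILDCARD = "*"   # the "don't care" marker used in the text

def predict_son_tuple(src_ip, dst_ip, protocol, dst_port):
    """Four of the five fields are known from the father process flow;
    the server's source port is not, so it is left as a wildcard."""
    return (src_ip, dst_ip, protocol, WILDCARD, dst_port)

def matches_partial(partial, received):
    """A received tuple matches if every known field agrees."""
    return all(p == WILDCARD or p == r for p, r in zip(partial, received))

def complete(partial, received):
    """Fill the wildcard fields from the first matching packet's tuple."""
    return tuple(r if p == WILDCARD else p for p, r in zip(partial, received))

expected = predict_son_tuple("99", "10", "TCP", 1022)
arrived = ("99", "10", "TCP", 3045, 1022)   # 3045: hypothetical server port
if matches_partial(expected, arrived):
    expected = complete(expected, arrived)   # now a full five-field tuple
```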
The illustrative example of Figure 8 shows a case where both the source IP address and the source port are unknown at the time when a tuple for a 'son' process flow is predicted. The initial communication will use the following tuple:
The message may contain a request that the client destination IP address be "90" and the client destination port be "1022". Because both the server source IP address and source port are unknown at this time, the following tuple is predicted:
The "*" again indicates missing information that is currently unpredictable. However, a tuple having these characteristics is likely to occur, and the system can therefore prepare resource allocation for this stream of packets, as well as associate it with the other relevant process flows. By making this prediction, overall system performance and efficiency are increased. Eventually, a packet will arrive from the server having the following tuple format:
The system will identify it as the expected tuple it had predicted ahead of time, update the tuple so that the missing information is filled in, and handle the packet as a process flow associated with the information already available to the system.
Those skilled in the art will recognize that this technique can be extended to additional missing information. For the illustrative example above, with a five-field tuple, it is practical to have up to four missing fields. More generally, the technique can be expanded to a tuple of any N fields, with up to N-1 missing fields, whenever the 'father' process flow is identified as a process flow likely to have 'son' process flows.
Moreover, the hierarchical nature of this structure is not limited to one level: multiple levels of 'father' and 'son' process flows are possible, as is a multiplicity of 'son' process flows for each 'father' process flow. Normally, the classifier (130) of Figure 1 will generate a new process ID from the full tuple information contained in the first packet of a new process flow. However, as mentioned above, this is not always possible, since a 'son' process flow may be expected before its first packet has arrived. In anticipation of the receipt of such a packet, and based on the information already received from the 'father' process flow, the classifier can prepare the expected tuple in advance, filling in the missing portions with wildcards, or "don't care" characters. Later, when the information becomes available, these characters can be replaced by the data received from the anticipated process flow.
A partial tuple is defined as a 'son' tuple with incomplete information. In another aspect of the disclosed technique, the tuples, including partial tuples, are stored in a content addressable memory (CAM) of the classifier. The CAM is discussed in detail in '034. The CAM contains a multiplicity of rows, each containing multiple CAM cells. Each of these cells can be separately enabled or disabled for performing the match function of the CAM. When matching is disabled, the content of the respective CAM cell is treated as a 'wild card' and is therefore ignored. If a match is found, the entry is invalidated, and the full entry is updated in the classifier.
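A minimal software model of the per-cell match-enable behaviour may clarify the idea. This is an illustrative sketch, with assumed field values, and not the hardware design referenced in '034:

```python
class CamRow:
    """Toy model of a CAM row whose cells can individually opt out of matching."""

    def __init__(self, fields):
        # A stored value of None marks a cell whose match function is disabled.
        self.fields = list(fields)
        self.valid = True

    def match(self, key):
        if not self.valid:
            return False
        # Disabled cells behave as 'wild cards' and are ignored in the compare.
        return all(f is None or f == k for f, k in zip(self.fields, key))

# Partial 'son' tuple: the source port (second field) is unknown.
row = CamRow(["10.0.0.1", None, "10.0.0.9", 1022, "tcp"])
hit = row.match(("10.0.0.1", 3500, "10.0.0.9", 1022, "tcp"))   # True
miss = row.match(("10.0.0.1", 3500, "10.0.0.9", 80, "tcp"))    # False

# On a hit, the entry is invalidated so the completed tuple can replace it.
row.valid = False
```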
To improve overall system efficiency, the CAM is separated into two regions. The first region contains full tuples, i.e., tuples for which all the fields are known. The second region stores partial tuples, i.e., tuples for which one or more fields were not known at the time the expected tuple was created. "Wild card" comparison takes place only for the tuples in the second region.
When a tuple is presented to the CAM system, the region containing the full tuples is searched first. Only if no hit is found in that portion of the CAM is the second portion searched for a match. If a match is found in the second portion, the entry is invalidated, and the full tuple, now created by updating the information in the partial tuple, is entered into the first region of the CAM.
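The two-region search order, including invalidation of the partial entry and installation of the completed tuple, can be sketched as a software analogue. The data structures and the process-ID value are assumptions for illustration:

```python
WILDCARD = "*"

class TwoRegionCam:
    """Software analogue of the two-region CAM lookup described above."""

    def __init__(self):
        self.full = {}      # region 1: complete tuple -> process ID
        self.partial = []   # region 2: (partial tuple, process ID) pairs

    def lookup(self, key):
        # Region 1 is searched first and holds only exact matches.
        pid = self.full.get(key)
        if pid is not None:
            return pid
        # Region 2 allows wildcard fields; the first hit wins.
        for i, (pt, pid) in enumerate(self.partial):
            if all(p == WILDCARD or p == k for p, k in zip(pt, key)):
                del self.partial[i]    # invalidate the partial entry...
                self.full[key] = pid   # ...and promote the completed tuple
                return pid
        return None

cam = TwoRegionCam()
cam.partial.append((("10.0.0.1", WILDCARD, "10.0.0.9", 1022, "tcp"), 42))

key = ("10.0.0.1", 3500, "10.0.0.9", 1022, "tcp")
pid = cam.lookup(key)   # matched via the partial region, then promoted
```

After the first lookup, subsequent lookups of the same tuple hit the full-tuple region directly, which is the efficiency gain the two-region split is intended to provide.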
In another embodiment, both regions of the CAM are searched simultaneously; however, the result of a match in the second region is ignored if a match is also found in the first region. This results in improved performance.
The ability to identify the relationship between 'father' and 'son' process flows allows certain shared system resources, such as memory, to be allocated jointly, thereby enhancing overall system efficiency.
Other modifications and variations to the invention will be apparent to those skilled in the art from the foregoing disclosure and teachings. Thus, while only certain embodiments of the invention have been specifically described herein, it will be apparent that numerous modifications may be made thereto without departing from the spirit and scope of the invention.