DYNAMIC DISTRIBUTIONS OF APPLICATIONS AND ASSOCIATED RESOURCE UTILIZATION
TECHNICAL FIELD OF THE INVENTION
This invention relates to Interactive Voice Response (IVR) systems and more particularly to an efficient IVR system having a large number of ports.
BACKGROUND OF THE INVENTION
In the voice processing industry, Interactive Voice Response (IVR) units typically have been implemented on a single processor that is capable of supporting only a finite number of ports. The number of ports is primarily limited by the bandwidth of the processor and the associated storage media. In other words, if the user is playing a unique voice message or other information that is stored in memory, the processor can only support up to a finite number of communication ports before running out of bandwidth. In the past, this problem was solved by building large systems in which a number of individual processors were connected together. For example, many individual processors supporting 72 to 96 ports can be connected together to build very large systems with thousands of ports. However, there are problems with this type of system architecture.
IVR applications, such as recorded voice messages, must be readily accessible to the processors. Therefore, they are usually held by the storage media associated with each individual processor. For instance, in a system having 25 IVR applications and a number of processor nodes, there would need to be copies of each IVR application in every processor node to anticipate incoming calls. This is a very inefficient use of the processors and the storage media, especially in the case of very large service bureau applications which might have thousands of applications running at one time. There are other applications, such as voice recognition, which must be configured separately for each unit in order to operate properly. Accordingly, there is a need for a large port count system, such as an IVR system having thousands of ports, that can efficiently manage the IVR applications, media space and the associated signal processing resources and hardware. This will eliminate the requirement of having all of the IVR applications continuously resident in every system unit or node. Such a system would also preclude having the system resources tied to a particular processor unit at any given time.
The applications used in these large port count systems include IVR applications such as: customized long distance carrier services, 1-800 number call
processing and routing, call directors, banking applications, medical applications and anything that is handled in large volumes at a network level or a large service bureau level where there are many customers calling the system. In addition, there may be associated application media that is used by the individual IVR applications in operation. This application media must also be stored in, or accessible to, each system unit or node.
Today, most processors support 96 to 120 ports. Therefore, to implement a thousand port IVR, a system comprised of 10 individual 96 to 120 port processors would be required. Each individual processor would need its own disk storage media, voice and telephony hardware and copies of the system applications and associated media. These individual units would also require resources such as voice recognition, text to speech and any other resources the application may need at a given time. This implementation of a thousand port IVR would be inefficient and cumbersome. Accordingly, there is a need for an IVR system that has a large number of telecommunication ports and that can efficiently manage the distribution and use of the
IVR applications and the associated application media.
SUMMARY OF THE INVENTION
In the present invention, individual nodes are comprised of a processor, associated memory and voice and telecommunications hardware. The nodes are treated as a set of resources that are managed by a monitoring program which operates as a statistical/predictive demand engine. The demand engine monitors the system's use of the individual IVR applications and estimates the types of resources and applications that will be required to handle future callers. For instance, the demand engine may estimate the number of ports that will be required for peak capacity periods and the various voice recognition resources that will be needed by each port during those periods. The demand engine also selects which processor nodes should run the applications. A resource manager will then assign incoming callers to available processor nodes containing the required IVR application. Usually, the present invention anticipates the number of future callers and downloads the required IVR applications and associated media that are required to handle those calls. Occasionally, the system may fail to predict the needs of a caller. In those cases in which the needs of a caller have not been anticipated and, as a result, a specific IVR application has not been preloaded to a processor node, the present invention provides for real-time copying of the required application and associated media to an available node in order to process the call. In a preferred embodiment, the system monitors 10 to 15 minute periods to anticipate the demand level for future time periods and to determine the applications and associated voice media that will be required by the processor nodes during those periods. The system maintains a single master copy of each IVR application and the associated application media. When it is predicted that an application will be needed in a future time period, the system provides a temporary copy to a processing node in anticipation of the future call. 
The system will provide copies of the IVR applications to as many nodes as required to handle the predicted number of callers. The system also removes the temporary application copies when they are no longer needed by the processor nodes to handle calls. This has the effect of freeing processor node memory and processor capability so that the node can accept other IVR applications to handle
additional future callers requiring different applications. Accordingly, the system provides a much more efficient utilization of the memory and processor capacity of the voice telephony nodes thereby allowing for faster processing of calls.
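The predictive behavior described in the summary above can be illustrated with a short sketch. This is a hypothetical illustration only: the class name, the trailing-average predictor, and the 96-port-per-node figure are assumptions chosen for the example, not details from the specification.

```python
# Hypothetical sketch of a statistical/predictive demand engine: it records
# per-period call counts for each IVR application and estimates how many
# processor nodes should be preloaded for the next period.
from collections import defaultdict

class DemandEngine:
    """Predicts per-application port demand from recent usage samples."""

    def __init__(self, window=3):
        # usage[app] holds the most recent call counts per monitoring period
        # (e.g. 10 to 15 minute intervals, as in the preferred embodiment).
        self.usage = defaultdict(list)
        self.window = window

    def record_period(self, app, calls):
        self.usage[app].append(calls)

    def predict(self, app):
        """Estimate next-period demand as a trailing average (one simple choice)."""
        samples = self.usage[app][-self.window:]
        return sum(samples) / len(samples) if samples else 0

    def nodes_needed(self, app, ports_per_node=96):
        """Number of processor nodes to preload, given an assumed per-node port count."""
        predicted = self.predict(app)
        return -(-int(predicted) // ports_per_node)  # ceiling division

engine = DemandEngine()
for calls in (180, 210, 240):
    engine.record_period("banking", calls)
print(engine.predict("banking"))        # trailing average of the last 3 periods
print(engine.nodes_needed("banking"))   # nodes required at 96 ports each
```

In practice the specification contemplates richer statistics (time-of-day patterns, per-application parameters) than this trailing average, but the shape of the loop is the same: measure, predict, preload.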
Accordingly, it is one object of the present invention to provide an IVR system with a very large number of ports that can provide various IVR applications to callers as required.
It is another object of the present invention to provide a system in which master copies of the IVR applications are stored in a central location from which they can be temporarily provided to one or more nodes in anticipation of the requirements of future callers or to be used by the nodes to process outbound calls.
A feature of the present invention allows for unanticipated IVR applications to be provided to a processor node in response to the specific requirements of a caller. A copy of the unanticipated application is provided in real-time from the central storage location to an available node when no other node has a copy of the application or when all of the nodes having the application are at their maximum capacity.
Another feature of the present invention provides for removing unneeded applications from the nodes when the system predicts that the application will not be required to handle future calls. The memory and processor capability made available by removing unneeded applications are then used to run other applications which are predicted to be needed to handle future calls or which are demanded by future calls.
The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and the specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should
also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which: FIGURE 1 is a block diagram of an IVR system embodying the present invention;
FIGURE 2 is a block diagram of a prior art IVR system;
FIGURE 3 is a block diagram of an alternate embodiment of a prior art IVR system; FIGURE 4 is a block diagram of a high port count prior art IVR system;
FIGURE 5 is a block diagram of a system for monitoring and predicting the use of the system resources of the present invention;
FIGURE 6 is a block diagram of a system for determining the availability of processors in the present invention; and FIGURE 7 is a flow chart of an algorithm for handling calls in the system of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Before describing the operation of the invention, the prior art systems shown in FIGURES 2, 3 and 4 will be discussed.
Referring to FIGURE 2, a typical prior art IVR system is shown as system 20. Storage device 201, typically embodied as a disk drive or some other bulk memory storage device, contains media associated with the IVR applications, such as digitized voice information. Application media delivery device 202 facilitates moving media from storage device 201 to voice/telephony hardware 204. Voice/telephony hardware 204 provides the interface between callers 206 and the IVR applications that are running on system 20. Callers 206 are connected to hardware 204 through switched public network 205.
IVR application 203 is connected to callers 206 and storage media 201 through voice/telephony hardware 204. In this type of system, the IVR applications are somewhat static in that they are typically memory resident in anticipation of incoming phone calls. If a thousand applications had to run on this particular system, then a copy of each application and its associated media would have to be resident on each system node, even though statistically only a small percentage of those applications would be likely to run at any given time. As a result, in a large IVR system comprised of many system 20 type nodes, the nodes would be required to store IVR applications and media that are used infrequently or that, even when needed regularly, serve only a few callers. A large IVR system of this type wastes much of its storage space by holding copies of these rarely used applications.
System 20 also has problems due to the bandwidth of link 210 between storage device 201 and application media delivery device 202. Only a finite amount of application media can be moved across link 210 at one time. This media restriction limits the port count for system 20. If link 210 had an infinite bandwidth and storage device 201 had infinite storage capabilities, then system 20 could facilitate an unlimited number of ports. However, present capabilities typically restrict IVR systems to 96 or 120 ports maximum.
In FIGURE 3, system 30 illustrates another type of architecture for prior art IVR systems. System 30 has application 303 which is linked to application media delivery hardware 302 and voice/telephony hardware 304. Application 303 is connected to callers 306 through voice/telephony hardware 304 over switched public network 305. Media from storage device 301 moves from application media delivery hardware 302 to application 303 across link 310. Like system 20, media distribution is restricted on system 30 due to the bandwidth limitations of link 310. Also, system 30 stores infrequently used IVR applications and media like system 20. As a result, both systems 20 and 30 waste memory space holding rarely used applications and media and both systems are limited in port count.
Turning to FIGURE 4, another prior art IVR system is shown as system 40. Storage device 401 holds all of the media for the IVR applications on the system. Application media is distributed to nodes 402-1 to N across link 410 to application media delivery hardware 404 within each node 402. Link 410 may be a LAN or some kind of data bus. The transferred application media is used in each node by application
403. Voice/telephony hardware 405 links each node to switched public network 406 and to callers 407. In this arrangement, the applications, which typically require significantly less space than their associated media, are distributed to all of the nodes. This is a compromise system that attempts to better utilize the individual processor nodes 402. However, in system 40 there is still a media delivery problem between storage device 401 and nodes 402 due to the bandwidth limitations of link 410. If a high performance disk system is used, system 40 may support about 1500 ports if it handles voice media from a single disk drive. However, system 40 will not meet the requirements of a very large system having tens of thousands of ports. The most significant difference in the systems shown in FIGURES 2, 3 and 4 is the bandwidth limitation. Systems 20 and 30 achieve their bandwidth limit quickly, at about 96 or 120 ports. In system 40, on the other hand, the bandwidth limit is extended to about 1500 ports where it finally reaches the storage capacity limit of disk 401 and the delivery capacity limit of link 410. Ideally, in order to avoid bandwidth limitations during periods of peak system use, an IVR system should dynamically
anticipate resource requirements and move the IVR application and associated media to the processor nodes during lull periods.
One embodiment of an Interactive Voice Response system incorporating the present invention is shown as system 10 in FIGURE 1. Storage device 101 holds system 10's IVR applications and associated media. The applications and application media stored on storage device 101 are master copies and they are the only permanently resident copies of the system's IVR applications. Statistical demand engine 102 is linked to storage device 101 via link 110. Demand engine 102 monitors the historical use of the IVR applications and from historical use data it anticipates future IVR application requirements. Based on the predicted requirements for the system, demand engine 102 proactively provides copies of the applications and their associated media to processor nodes 103-1 to N before they are needed by IVR system 10. In cases where an incoming call requires an IVR application that has not been anticipated and, therefore, is not preloaded in any of processor nodes 103-1 to 103-N, demand engine 102 downloads a copy of the required application and its associated media to an available node in real-time.
Accordingly, demand engine 102 performs both a predictive function to anticipate application demand levels and a real-time correction function to compensate if a demand was not properly anticipated. Demand engine 102 also monitors when an IVR application is no longer needed by a node 103. Determining when an application is no longer needed can be accomplished by a variety of methods, including monitoring actual usage statistics or monitoring the elapsed time since the application was downloaded or last used. The system may also use the time of day to determine whether the application will be required again. As the utilization of the application decreases, demand engine 102 removes copies of the application from selected nodes in order to free memory space in local cache disk 104 and transient application 105. Demand engine 102 can then load copies of other IVR applications and associated media into the available cache 104 and application 105 memory space in anticipation of future callers that will require different applications. Each application can have different parameters for controlling how the application is handled by system 10.
These parameters may be adjusted from time to time either by the system or by a user. For example, demand engine 102 can be set to keep only the lowest possible number of applications at the nodes in order to decrease memory retrieval time.
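The removal criteria mentioned above (usage statistics, elapsed idle time, and time of day) can be combined into a simple eviction check. The thresholds and the requirement that all three signals agree are assumptions for illustration; the specification leaves the exact policy open.

```python
# Illustrative eviction check for a node's cached copy of an application.
# All field names and thresholds are hypothetical.

def should_evict(app_stats, now, hour, idle_limit=900, min_calls=1):
    """Return True when a node's copy of an application can be removed.

    app_stats is a dict with:
      'last_used'         - timestamp of the last call handled
      'calls_this_period' - call count in the current monitoring period
      'busy_hours'        - hours of day in which the app is typically needed
    """
    idle_too_long = (now - app_stats['last_used']) > idle_limit
    underused = app_stats['calls_this_period'] < min_calls
    off_peak = hour not in app_stats['busy_hours']
    # Evict only when all three signals agree, so that a briefly idle but
    # normally busy application is kept in the node's local cache.
    return idle_too_long and underused and off_peak

stats = {'last_used': 0, 'calls_this_period': 0, 'busy_hours': {9, 10, 11}}
print(should_evict(stats, now=1000, hour=3))   # idle, unused, off-peak: evict
print(should_evict(stats, now=1000, hour=9))   # busy hour: keep the copy
```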
If desired, specific application criteria can be stored in database 101 and would be provided to demand engine 102 over link 110. These criteria could be used to vary application parameters. Thus, a user or system administrator could use the criteria to vary the applications that are loaded on the various processor nodes without regard to the current system statistics. For example, a user could instruct demand engine 102 that a certain type of application will be needed during a specified time period. Accordingly, these criteria would cause demand engine 102 to load that type of application in a specified number of nodes. The criteria can be input to system 10 by a keyboard, such as in situations where a user knows that a certain application demand will occur in a future time period. In other circumstances, system 10 may develop criteria on its own. For instance, when a certain application requires an operator, system 10 could set the criteria for that application so that it is limited to only those nodes which have operators available at a particular time.
Voice/telephony hardware 106 in each node 103 provides an interface between transient application 105 and switched public network 107. Callers 108 are connected to system 10 through network 107. In a preferred embodiment, demand engine 102 uses a statistical database, such as data array 50 as shown in FIGURE 5. Data array 50 represents the system's IVR applications on the y-axis and time periods T0 to TN on the x-axis. The time periods are of arbitrary duration, such as an hour, a day, a week, a month or a year, depending upon the type of system, the duration of incoming calls and the rate at which the current IVR applications are changed.
To initialize the system, the user could populate the various time zones with an anticipated usage level for each IVR application. The anticipated use could be derived from historical data or it could be an estimation. In one embodiment, this information will then be updated in real-time as the system is used and as the demand for
applications varies over time. In other embodiments, the data in array 50 may only be inserted by a coordinator or supervisor so that the user can control which applications are readily available to future callers or which applications system 10 can use for outgoing messages. Demand engine 102 would then be able to treat outgoing call applications differently from applications used for incoming calls.
In an adaptive system or in a system with real-time automatic updating, blocks 501 and 502 measure the historical use of the system applications and resources. Block 501 represents a source of historical or real-time usage data for the various port and other resources in system 10. This historical usage is provided to block 502, where statistics are gathered in order to measure the actual resource demand levels. Block 503 then uses the statistical data on actual application demand to predict future application demand levels. Block 504 represents an algorithm that is used to update data array 50 to reflect application use in each time period. The data provided by and the functions performed in blocks 501-504 form a closed loop adaptive system in which system 10 (FIGURE 1) can build an array of actual application use even though array 50 may have been initialized using estimated values.
Depending upon the time frame that is chosen for periods T0 to TN, the system can anticipate not only day-to-day changes, but also weekly changes. For instance, in a seven day cycle having five lull days and two busy days, the system would be able to anticipate the busy days proactively. Therefore, there could be one chart having several different versions of application demand for specific days of the week or times of the year. This capability depends upon the dimensions of the array, and how much data the user desires to maintain on the system. For example, if the system is tracking use over the course of a year and the user wants to anticipate usage during a specific period of time, such as during a holiday, that would require a lot of individual time slots or alternate tables to maintain the year's worth of data. If the system is managed on a narrower level, such as the typical use during a 24-hour day, it could rely on a fast predictor mode to update the histogram in near real-time. The length of the update cycle and the amount of data used to monitor usage are dependent upon the time period that is analyzed, the size of the system and, if desired, the particular application.
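The closed-loop update of data array 50 described in connection with blocks 501 to 504 can be sketched as a table of per-period estimates that is blended with measured usage each cycle. Exponential smoothing and the factor `alpha` are assumptions chosen for the example; the specification only requires that the array converge from initial estimates toward actual use.

```python
# Minimal sketch of updating data array 50: applications on one axis,
# time periods T0..TN on the other, seeded with estimates and corrected
# with measured demand each cycle (blocks 501-504). alpha is hypothetical.

def update_array(array, period, measured, alpha=0.5):
    """Blend measured demand into the stored estimate for one time period.

    array    - dict mapping application name to a list of per-period estimates
    period   - index of the just-completed time period (T0 .. TN)
    measured - dict mapping application name to observed call counts
    """
    for app, observed in measured.items():
        old = array[app][period]
        # Exponential smoothing: repeated cycles move the array from the
        # user's initial estimates toward actual measured usage.
        array[app][period] = (1 - alpha) * old + alpha * observed
    return array

array50 = {"call_director": [100, 400, 50], "banking": [20, 80, 10]}
update_array(array50, period=1, measured={"call_director": 600, "banking": 40})
print(array50["call_director"][1])  # estimate moves from 400 toward 600
```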
FIGURE 6 is a block diagram illustrating a system for assigning incoming calls to specific processor nodes 103. Map 60 tracks processor and port availability. For each processor 103 there is a gauge of available resources 604 that indicates the level of resource use. As shown in map 60, gauge 604 is generally an empty/full measure of resource capacity. In the preferred embodiment, there would be some variable in the system software that reflects the actual usage of each processor resource in real-time. In the case of a real-time system, the goal is to anticipate application demand and to match applications with available processors. Because the applications are distributed dynamically among the processors, a specific application will not always be on the same processor node; therefore, the system must track the particular applications that are assigned to each processor node. This is accomplished by resource management and call assignment device 62.
Resource management and call assignment device 62 first looks at incoming call attributes, such as the originating number, ANI/DNIS or some particular message that is part of the incoming call signal which indicates that the call will require a specific IVR application. Resource management device 62 then selects an available processor node with the required application. Block 61 illustrates a method of maintaining a pool of available processors, wherein there is a list 602 of available processors for each application 601. Device 62 issues a command back to the network or to a proprietary front end switch to direct the call to a particular processor node resource.
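The interaction between device 62 and the per-application processor lists of block 61 can be sketched as follows. The class name, the ANI-to-application map, and the node names are hypothetical; returning `None` stands in for the case where the demand engine must download the application in real time.

```python
# Sketch of resource management and call assignment: a map from incoming
# call attributes (ANI) to applications, and a pool of available nodes
# per application as in block 61. All names are illustrative.

class ResourceManager:
    def __init__(self, ani_map, pools):
        self.ani_map = ani_map   # incoming number -> required application
        self.pools = pools       # application -> nodes with spare capacity

    def assign(self, ani):
        """Pick a node for the call, or None when no mapping or node exists."""
        app = self.ani_map.get(ani)
        if app is None:
            return None            # no mapping: present a selection menu instead
        nodes = self.pools.get(app, [])
        # None here means every node with the application is at capacity,
        # so the demand engine must download a copy to a free node.
        return nodes[0] if nodes else None

mgr = ResourceManager(
    ani_map={"8005551234": "banking"},
    pools={"banking": ["node-3", "node-7"]},
)
print(mgr.assign("8005551234"))  # routes to the first available node
```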
The incoming calls may be routed to general purpose processors wherein a menu is provided and, depending upon the program or application selected from the menu, resource management device 62 switches the call to a processor node loaded with the selected application. If the system is working properly, then histogram 50 will have served to load the required applications into a proper number of processors so that there is sufficient capacity to handle the call promptly using the proper applications. Otherwise, demand engine 102 will download the required application to an available node and the call will be connected to that node.
FIGURE 7 illustrates a program for resource management and call assignment. A call arrives in step 701 and it is presented to the resource manager in step 702. Step 702 is an alerting signal, such as a ring signal along with some information about the call or, in the case of an ISDN call, it could be a call setup message or a Signaling System 7 (SS7) message. In step 703, it is determined whether the call is mapped to a specific application based on the call's ANI/DNIS information. Step 703 uses a map based on ANI to identify a specific application or an originating area. There are two ways of handling incoming calls. If there is a specific mapping, the call is branched to step 706 and the application resource is selected from the available processor list; the call is then routed to the specific port or processor node via a redirect function or switching function. If the system does not know how to handle the call in step 703, an application can be run that presents a selection menu which interacts with the caller and allows the caller to select a specific application. The call is then redirected or switched through the selected application by steps 705 and 706. Finally, the call is completed in step 707 when it is routed to a specific port or processor node.
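The branching of steps 701 to 707 can be condensed into a short routine. The helper structures below stand in for the ANI/DNIS map, the selection menu, and the redirect/switching machinery, and are assumptions rather than elements of the specification.

```python
# Hedged sketch of the FIGURE 7 call-handling flow (steps 701-707).
# Data structures and names are hypothetical.

def handle_call(ani, ani_map, available, menu_choice=None):
    """Return the node a call is routed to, following steps 703-707."""
    app = ani_map.get(ani)                 # step 703: ANI/DNIS mapping
    if app is None:
        app = menu_choice                  # steps 704-705: caller picks from a menu
    nodes = available.get(app, [])
    if not nodes:
        # Unanticipated demand: the demand engine would download the
        # application to a free node in real time before routing.
        return None
    return nodes[0]                        # steps 706-707: route and complete

available = {"banking": ["node-2"], "routing": ["node-5"]}
print(handle_call("8005550000", {"8005550000": "banking"}, available))
print(handle_call("unknown", {}, available, menu_choice="routing"))
```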
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. For example, while the embodiment described is an IVR system for inbound and outbound calling, the inventive concept can be used in any type of communication system, including voice systems, data systems, such as the internet, and wireless systems. The inventive concept disclosed herein can be used in the Advanced Intelligent Network (AIN) telecommunications environment (for example, between the service control point (SCP), intelligent peripheral (IP) or service node (SN)) as set out in Bell Core specification 1129 and available from Bell Core, which specification is hereby incorporated by reference herein. Also, incorporated by reference herein is U.S. Patent No. 5,469,500 to Satter et al., issued November 21, 1995, and owned by a common assignee.
It will also be understood that in some applications the statistical engine of the present invention could, in fact, reside at one or more of the nodes or remote locations.