US20170155610A1 - Processing messages in a data messaging system
- Publication number
- US20170155610A1 (application US14/953,354)
- Authority
- US
- United States
- Prior art keywords
- processing
- message
- messaging system
- data messaging
- node
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/21—Monitoring or handling of messages
- H04L51/226—Delivery according to priorities
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
- H04L47/00—Traffic control in data switching networks
- H04L47/10—Flow control; Congestion control
- H04L47/24—Traffic characterised by specific attributes, e.g. priority or QoS
- H04L47/2416—Real-time traffic
- H04L47/2441—Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
- H04L47/50—Queue scheduling
- H04L47/56—Queue scheduling implementing delay-aware scheduling
Definitions
- the present invention relates to processing messages in a data messaging system.
- a computer-implemented method of processing messages in a data messaging system comprising a plurality of processing nodes, wherein each message processed by the data messaging system has associated with it a priority level.
- the method includes receiving a first message for processing at a processing node of the plurality of processing nodes and determining if the processing node has an associated message staging area. Based on a determination that the processing node has an associated message staging area, the method includes determining if a second message received by the data messaging system has a higher priority value than the first message. Based on a determination that the second message has a higher priority value than the first message, the method includes delaying processing of the first message by the processing node, and processing the second message using the processing node.
- a data messaging system for processing messages having a plurality of processing nodes. Each message that is processed by the data messaging system has associated with it a priority level.
- the data messaging system is configured to receive a first message for processing at a processing node of the plurality of processing nodes and determine if the processing node has an associated message staging area. Based on a determination that the processing node has an associated message staging area, the data messaging system is configured to determine if a second message received by the data messaging system has a higher priority value than the first message. Based on a determination that the second message has a higher priority value than the first message, the data messaging system is configured to delay processing of the first message by the processing node, and process the second message using the processing node.
- a computer program product for processing messages in a data messaging system having a plurality of processing nodes, wherein each message processed by the data messaging system has associated with it a priority level.
- the computer program product includes a computer-readable storage medium having computer-readable program code embodied therewith, the computer-readable program code configured to perform a method.
- the method includes receiving a first message for processing at a processing node of the plurality of processing nodes and determining if the processing node has an associated message staging area. Based on a determination that the processing node has an associated message staging area, the method includes determining if a second message received by the data messaging system has a higher priority value than the first message. Based on a determination that the second message has a higher priority value than the first message, the method includes delaying processing of the first message by the processing node, and processing the second message using the processing node.
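The priority-based ordering described in the claims above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; the `Message` and `Node` classes and the `run` callback are assumed names introduced only for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Message:
    name: str
    priority: int  # higher value = higher priority

@dataclass
class Node:
    name: str
    has_staging_area: bool = False

def process_at_node(node, first, pending, run):
    """Sketch of the claimed ordering: if the node has a staging area
    and a higher-priority message is pending, process that message
    before the delayed first message."""
    if node.has_staging_area:
        higher = [m for m in pending if m.priority > first.priority]
        for m in sorted(higher, key=lambda m: m.priority, reverse=True):
            run(node, m)      # process the higher-priority message first
    run(node, first)          # then (resume and) process the first message

order = []
node = Node("n1", has_staging_area=True)
m1, m2 = Message("m1", priority=1), Message("m2", priority=5)
process_at_node(node, m1, [m2], lambda n, m: order.append(m.name))
print(order)  # ['m2', 'm1']
```

With a staging area present, the lower-priority message m1 is delayed until the higher-priority m2 has been processed.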
- FIG. 1 is a schematic diagram showing a data messaging system in accordance with an embodiment of the invention
- FIGS. 2 a to 2 c are a flowchart showing the operation of the data messaging system of FIG. 1 ;
- FIG. 3 is a flowchart showing the further operation of the data messaging system of FIG. 1 ;
- FIGS. 4 a and 4 b are representations of the processing nodes of the data messaging system of FIG. 1 .
- Data messaging systems provide connectivity and integration between various systems and services. Examples of data messaging systems include IBM Integration Bus (IIB), IBM Websphere MQ, and Java Message Service. Users of such data messaging systems can develop applications to integrate their systems and services, which are sometimes referred to as message flows.
- Message flows may comprise a number of distinct nodes connected together, with the different nodes performing different individual functions as part of the composite application. For example, there could be an XSLT node, a routing node, an MQ output node, etc. Different nodes will exhibit different behaviours: for example, some may be CPU intensive, some disk input/output (I/O) intensive. Such data messaging systems often process messages for which a priority value can be specified.
- FIG. 1 shows a data messaging system in accordance with an embodiment of the invention.
- the data messaging system 1 comprises a processing system 2 , which is in communication with external computer systems 7 and 8 , to provide connectivity and integration between the external computer systems 7 and 8 .
- the processing system 2 may comprise a single computing device or multiple computing devices in connection with each other.
- a data messaging system 1 will commonly be used to provide connectivity and integration between multiple external computing devices, rather than just two.
- the data messaging system 1 further comprises a global store 3 , a statistical data capture device 4 , and a resource modeller 5 , all of which are in communication with the processing system 2 .
- the statistical data capture device 4 and the resource modeller 5 are also in direct communication with each other.
- a message, herein called message 1, enters the message flow (step 101).
- the message may for example be passed by the external computing device 7 to the data messaging system 1 , in particular the processing system 2 thereof, to be processed and passed to the external computing device 8 .
- the ID of the thread by which message 1 is being processed (threadID), the priority level of message 1 (messagePriority), and its position in the message flow are stored in the global store 3 (step 102).
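The per-message record kept in the global store at step 102 might be sketched as below. Only the threadID, messagePriority, and flow-position fields come from the text; the dictionary layout, method names, and locking are assumptions for illustration.

```python
import threading

class GlobalStore:
    """Minimal sketch of the global store 3: one record per in-flight
    message, keyed by message ID. Layout and API are assumed."""
    def __init__(self):
        self._lock = threading.Lock()
        self._records = {}

    def register(self, msg_id, priority, position):
        # Step 102: record thread ID, priority level, and flow position.
        with self._lock:
            self._records[msg_id] = {
                "threadID": threading.get_ident(),
                "messagePriority": priority,
                "position": position,
            }

    def update_position(self, msg_id, position):
        # Updated as the message propagates through the flow.
        with self._lock:
            self._records[msg_id]["position"] = position

    def priorities(self):
        # Used later (step 105) to find higher-priority in-flight messages.
        with self._lock:
            return {k: v["messagePriority"] for k, v in self._records.items()}

store = GlobalStore()
store.register("message1", priority=4, position="input-node")
store.update_position("message1", "node10")
print(store.priorities())  # {'message1': 4}
```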
- the message is propagated to the next node (step 103), which, as message 1 has just entered the message flow, is the initial node 10 of FIG. 4 a.
- the data messaging system 1 then checks if any staging area has been associated with the node 10 (step 104 ).
- steps 103 and 104 are repeated in turn, and message 1 simply propagates to successive nodes.
- as message 1 propagates through the data messaging system 1, its position recorded in the global store 3 is updated, as is its priority if that is modified at any time.
- message 1 may, for example, propagate from node 10, along the middle upper branch of FIG. 4 a that includes node 11, and along the rightmost upper branch of FIG. 4 a that includes node 12, after which it has finished processing and is passed to its destination, the external computing device 8.
- the data messaging system 1 in the initial state operates in much the same way as a conventional data messaging system, other than the data being stored in the global store 3 .
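The propagate-then-check loop of steps 103-104 could be sketched as follows. The linear flow, the store-as-dict, and the `has_staging_area` predicate are simplifying assumptions, not details from the specification.

```python
def propagate(flow, message, store, has_staging_area):
    """Steps 103-104: propagate the message to each successive node,
    update its recorded position in the global store, and check whether
    the node has an associated staging area."""
    trace = []
    for node in flow:
        store[message] = node                      # position update in the store
        if has_staging_area(node):
            trace.append((node, "staging-check"))  # step 105 would follow here
        else:
            trace.append((node, "processed"))      # no staging area: just process
    return trace

store = {}
# In the initial state no node has a staging area, so message 1 simply
# propagates to successive nodes.
trace = propagate(["node10", "node11", "node12"], "message1", store,
                  has_staging_area=lambda n: False)
print(trace[-1])  # ('node12', 'processed')
```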
- statistical data about the processing of the message by the processing nodes can be collected by the statistical data capture device 4 .
- Such data could be captured using the known Accounting and Statistics capture functionality of IIB, for example. It will be appreciated that in alternative embodiments of the invention the statistical data could be captured in various other ways.
- the resource modeller 5 uses the statistical data to build resource models for the processing nodes of, and messages processed by, the data messaging system 1 (step 201 ). It will be appreciated that the resource models can be built using any of various standard statistical techniques.
- the resource models allow the data messaging system 1 to determine the resources that will be used by messages processed by the data messaging system 1 (or an estimation thereof), based on the properties of the message.
- a resource model may allow resources such as the CPU usage, memory usage and/or disk I/O a message will require while being processed, to be determined from the properties of the message such as its size, type, format, structure, schema and/or purpose.
- Statistical data such as the time taken to process a message may also be used.
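One conventional "standard statistical technique" such a resource model could use (an assumption; the specification does not name one) is a least-squares fit of observed CPU cost against a message property such as size:

```python
# Least-squares fit of CPU cost against message size: one way the
# resource modeller 5 could build a model from captured statistics.
# The linear cost model and the sample data are assumptions.
def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Captured statistics: (message size in KB, observed CPU ms) pairs.
sizes = [10, 20, 40, 80]
cpu_ms = [5, 9, 17, 33]
slope, intercept = fit_linear(sizes, cpu_ms)

def predict_cpu_ms(size_kb):
    """Estimate the CPU a message will require from its size."""
    return slope * size_kb + intercept

print(round(predict_cpu_ms(60)))  # 25 (ms) for a 60 KB message
```

The same shape of model could be fitted per resource (memory, disk I/O) and per node, matching the separate per-node models discussed in the text.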
- Different processing nodes can have their own separate resource models, or may share them, depending on whether they share processing resources, amongst other things. (Two processing nodes may share a CPU but use separate memory areas, for example, and in that case may share a CPU resource model but have separate memory resource models.)
- the resource models also allow the data messaging system 1 to identify the processing resources required by the processing node during the processing of messages (step 202 ).
- the data messaging system 1 can then determine if the processing node will (or is likely to) experience limiting processing resources during operation of the data messaging system 1 (step 203 ), for example insufficient CPU availability to process multiple messages at the same time at optimal speed.
- the statistical data capture device 4 can also capture the actual historical use of processing resources by the processing node, time taken for messages to be processed, and so on, during operation of the data messaging system 1 , and so directly identify if the processing node will (or is likely to) experience limiting processing resources (on the basis of whether it has in the past).
- if the data messaging system 1 determines that the processing node will experience limiting processing resources, it associates a staging area with the processing node (step 204). The use of the staging area is described later below. If not, it skips straight to the next step.
- associating a staging area with a processing node may merely involve adding a flag to the processing node so that it is identified by the data messaging system 1 as having a staging area, with the required functionality to allow the use as described below being provided by another component of the data messaging system 1 .
- specific functionality may be added to, or enabled on, the processing node itself.
- the association may be done in other suitable ways. It will be appreciated that the relevance of the association is that a processing node can be identified by the data messaging system 1 as having a staging area associated with it, so that use of the staging area as described later below is possible, however the association is provided.
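The flag-based association described above (steps 202-205) can be sketched as a pass over the nodes; the dict representation and the `is_resource_limited` predicate are assumed stand-ins for the resource-model check.

```python
def associate_staging_areas(nodes, is_resource_limited):
    """Steps 202-205: for each node, decide from its resource model
    whether it will experience limiting resources; if so, flag it as
    having a staging area (the flag variant described in the text)."""
    for node in nodes:
        node["staging_area"] = bool(is_resource_limited(node))
    return nodes

nodes = [{"name": "node10", "cpu_demand": 0.2},
         {"name": "node11a", "cpu_demand": 0.9},
         {"name": "node11c", "cpu_demand": 0.8}]
# Assumed threshold: nodes demanding >70% of available CPU are limited.
flagged = associate_staging_areas(nodes, lambda n: n["cpu_demand"] > 0.7)
print([n["name"] for n in flagged if n["staging_area"]])
# ['node11a', 'node11c']
```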
- the data messaging system 1 checks if there are other processing nodes for it to consider (step 205 ). If so, it repeats the steps of identifying the processing resources required and so on, but for one of the other processing nodes (step 202 and subsequent steps again).
- the process is repeated until all processing nodes have been considered.
- the data messaging system 1 continues to capture statistical data as more messages are processed, and after a sufficient amount of new statistical data has been captured the whole process is repeated using the new statistical data (step 206, and then step 201 and subsequent steps again).
- the process is repeated when a predetermined period of time has passed, is manually triggered, or occurs as part of an established maintenance cycle, for example.
- the processing nodes of the data messaging system 1 are as shown in FIG. 4 b .
- the processing nodes 11 a , 11 b and 11 c have been determined to experience limiting processing resources, for example insufficient CPU availability.
- the staging areas 12 a, 12 b and 12 c have been associated with the processing nodes.
- the message message 1 propagates along successive processing nodes. However, when message 1 is propagated to processing node 11 a, the data messaging system 1 identifies that the staging area 12 a is associated with the processing node 11 a (step 104). (A similar process will occur if message 1 takes the other branch, and so is propagated to processing node 11 b with staging area 12 b.) In some embodiments, the staging area 12 a is provided with, and makes available, data regarding the processing resources that are being heavily utilised ahead.
- the data messaging system 1 determines whether there are any higher priority messages in the data messaging system 1 that require processing (step 105 ). It does this by checking the global store 3 , in which the priority levels of all messages in the data messaging system 1 are stored. In other embodiments, alternative methods are used to determine if there are any higher priority messages, for example the other messages in the data messaging system 1 are directly interrogated. If there are no higher priority messages, message 1 is processed by the processing node 11 a in the usual way, and then propagated to the next processing node (step 103 again).
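The global-store check of step 105 reduces to a priority comparison over the in-flight records; the flat `{message: priority}` mapping below is an assumed simplification of the store.

```python
def higher_priority_pending(priorities, msg_id):
    """Step 105: consult the global store's priority records to find
    any other in-flight message that outranks msg_id."""
    mine = priorities[msg_id]
    return [other for other, p in priorities.items()
            if other != msg_id and p > mine]

# Assumed snapshot of the global store's priority records.
priorities = {"message1": 4, "message2": 7, "message3": 2}
print(higher_priority_pending(priorities, "message1"))  # ['message2']
```

If the returned list is empty, message 1 is processed in the usual way; otherwise the resource checks of steps 107-110 follow.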
- the data messaging system 1 uses the resource models described above to determine the processing resources message 1 will require (step 107 ). The data messaging system 1 then uses the resource models described to determine the processing resources the higher priority messages will require (step 108 ). The data messaging system 1 then determines the processing resources available to the processing node (step 109 ). In embodiments in which the staging area 12 a is provided with data regarding the processing resources that are being heavily utilised ahead, the processing resources available can be determined from this data. The processing resources may include the number of threads available to process different messages.
- the data messaging system 1 uses the determined information to determine if the processing of any of the higher priority messages will be impacted by the processing of the message message 1 (step 110 ), i.e. if message 1 will use processing resources that are required by any of the higher priority messages if they are to be optimally processed (e.g. processed as quickly as possible). If the processing of none of the higher priority messages will be impacted, message 1 is processed by the processing node 11 a in the usual way, and then propagated to the next processing node (step 103 again).
- the determination that the processing of a higher priority message will be impacted could be done in various more or less optimal ways. For example, only the processing resources required by message 1 and the available resources could be considered, with it being assumed that a higher priority message will require at least a certain level of resources. Alternatively, only the processing resources required by the higher priority message and the available resources could be considered, with it being assumed that message 1 will require at least a certain level of resources. Alternatively again, only the processing resources required by message 1 and the processing resources required by the higher priority message could be considered, with it being assumed that there will always be only at most a certain level of available resources.
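The fully-informed variant of the step 110 check (comparing both messages' requirements against what the node has available) might look like the following; the per-resource dict representation is an assumption.

```python
def processing_impacted(required_by_msg, required_by_higher, available):
    """Step 110: the higher-priority message is impacted if, for any
    resource, running both messages together would exceed what the
    node has available. Resource dicts (cpu, memory_mb, ...) are an
    assumed representation."""
    return any(required_by_msg.get(r, 0) + required_by_higher.get(r, 0)
               > available.get(r, 0)
               for r in set(required_by_msg) | set(required_by_higher))

msg1 = {"cpu": 0.6, "memory_mb": 100}   # from message 1's resource model (step 107)
high = {"cpu": 0.7, "memory_mb": 50}    # from the higher-priority model (step 108)
node = {"cpu": 1.0, "memory_mb": 512}   # available to the node (step 109)
print(processing_impacted(msg1, high, node))  # True: CPU would be oversubscribed
```

The simplified variants in the text correspond to replacing one of the three dicts with an assumed constant.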
- the time message 1 should be kept in the staging area is determined (step 111). This will be, for example, the time all the impacted higher priority messages will take to be processed. Alternatively, the time may be the time the first higher priority message will take to be processed, for example.
- the message message 1 is then suspended in the staging area 12 a (step 112 ).
- the thread in which message 1 is being processed could be placed into a temporary sleep mode, for example to reduce contention on CPU resources when higher priority messages are being processed in other threads.
- the thread could switch out its current state to enable the thread itself to be used to process the higher priority message, for example where there is only a single thread available.
- the thread state could be switched back in to allow the processing of message 1 to be completed.
- the current thread could be suspended and a secondary temporary thread created in which the higher priority message is processed. The temporary thread could then be destroyed when complete, and processing of the thread containing message 1 resumed, so ensuring that there are not too many threads actively processing at any particular point in time.
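The temporary-sleep option can be illustrated with standard threading primitives (used here as a stand-in for whatever thread management the messaging engine provides): the thread holding message 1 blocks on a gate while the higher-priority work runs, then resumes.

```python
import threading

def worker(name, gate, log):
    """The thread processing message 1 waits on a gate (the 'temporary
    sleep mode') so a higher-priority message can run uncontended."""
    gate.wait()               # suspended while the gate is closed
    log.append(name)          # resumes processing once released

log = []
gate = threading.Event()      # closed: message 1's thread sleeps
t = threading.Thread(target=worker, args=("message1", gate, log))
t.start()

log.append("high-priority message")  # processed while message 1 sleeps
gate.set()                           # release message 1 from the staging area
t.join()
print(log)  # ['high-priority message', 'message1']
```

The ordering is deterministic because the worker can only append after `gate.set()`, which happens after the higher-priority work completes.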
- the data messaging system is configured to trigger an event when a message is placed on an inbound queue, the event updating a separate part of the global store 3 which stores details of messages waiting to be processed. This part of the global store 3 can then be queried as part of the check whether the processing of any higher priority messages will be impacted (step 110 ), and if it is determined that not enough threads will be available it can be managed as described above.
- the message message 1 remains suspended in the staging area 12 a until the determined time has passed (steps 113 and 114 ). Once the determined time has passed, message 1 is processed by the processing node 11 a in the usual way, and then propagated to the next processing node (step 103 again).
- the step of determining if there are any higher priority messages to be processed (step 105) and subsequent steps are then repeated. This allows message 1 to be suspended to allow further higher priority messages to be processed, even if those higher priority messages were not present in the data messaging system 1 when message 1 was initially propagated to processing node 11 a.
- the data messaging system 1 may impose a limit on the number of times message 1 can be suspended in the staging area, or may always propagate it to the next processing node after it has been suspended (which is equivalent to having a limit of one time).
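Combining the determined staging time (step 111) with the suspension limit described above gives a small decision function; the function name, return shape, and default limit are assumptions for illustration.

```python
def stage_or_process(msg, higher_pending, suspend_count, *,
                     max_suspensions=1, stage_time_for=None):
    """Steps 111-114 with a suspension limit: stage the message only
    while higher-priority work is pending and the limit is not yet
    exhausted. max_suspensions=1 corresponds to the 'always propagate
    after it has been suspended once' variant. Returns (action, delay)."""
    if higher_pending and suspend_count < max_suspensions:
        delay = stage_time_for() if stage_time_for else 0.0
        return "stage", delay
    return "process", 0.0

# First arrival: staged for the time the higher-priority work needs.
print(stage_or_process("message1", True, 0, stage_time_for=lambda: 0.25))
# ('stage', 0.25)
# Second arrival: the limit of one suspension is reached, so it processes.
print(stage_or_process("message1", True, 1))
# ('process', 0.0)
```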
- the data messaging system 1 can check any Service Level Agreement (SLA) policies attached to the message flow and time-out settings, and use these when determining whether to suspend a message in the staging area, and how long for.
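Such an SLA/time-out check might cap the staging delay so the flow's deadline is not breached; the `sla_deadline` and `timeout` parameter names are assumptions, not terms from the specification.

```python
def allowed_stage_time(wanted, elapsed, *, sla_deadline=None, timeout=None):
    """Cap the staging delay of step 111 so that SLA policies and
    time-out settings attached to the message flow are respected.
    All times are in seconds from the message entering the flow."""
    budget = wanted
    for limit in (sla_deadline, timeout):
        if limit is not None:
            # Never stage beyond the time remaining under this limit.
            budget = min(budget, max(0.0, limit - elapsed))
    return budget

# Wants 5 s in the staging area, but only 2 s remain under a 10 s SLA
# after 8 s spent in the flow so far.
print(allowed_stage_time(5.0, elapsed=8.0, sla_deadline=10.0))  # 2.0
```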
- once message 1 has been released from the staging area 12 a and processed by processing node 11 a, it is propagated to the next node (step 103 again). It will then continue to propagate along successive processing nodes in the usual way until it passes to the external computing device 8, with the process of checking for higher priority messages occurring if (and only if) it is propagated to processing node 11 c with staging area 12 c.
- the present invention may be a system, a method, and/or a computer program product.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Abstract
Description
- The present invention relates to processing messages in a data messaging system.
- In accordance with a one embodiment there is provided a computer-implemented method of processing messages in a data messaging system comprising a plurality of processing nodes, wherein each message processed by the data messaging system has associated with it a priority level. The method includes receiving a first message for processing at a processing node of the plurality of processing nodes and determining if the processing node has an associated message staging area. Based on a determination that the processing node has an associated message staging area, the method includes determining if a second message received by the data messaging system has a higher priority value than the first message. Based on a determination that the second message has a higher priority value than the first message, the method includes delaying processing of the first message by the processing node, and processing the second message using the processing node.
- In accordance with another embodiment there is provided a data messaging system for processing messages having a plurality of processing nodes. Each message that is processed by the data messaging system has associated with it a priority level. The data messaging system is configured to receive a first message for processing at a processing node of the plurality of processing nodes and determine if the processing node has an associated message staging area. Based on a determination that the processing node has an associated message staging area, the data messaging system is configured to determine if a second message received by the data messaging system has a higher priority value than the first message. Based on a determination that the second message has a higher priority value than the first message, the data messaging system is configured to delay processing of the first message by the processing node, and process the second message using the processing node.
- In accordance with another embodiment, there is provided a computer program product for processing messages in a data messaging system having a plurality of processing nodes, wherein each message processed by the data messaging system has associated with it a priority level. The computer program product includes a computer-readable storage medium having computer-readable program code embodied therewith, the computer-readable program code configured to perform a method. The method includes receiving a first message for processing at a processing node of the plurality of processing nodes and determining if the processing node has an associated message staging area. Based on a determination that the processing node has an associated message staging area, the method includes determining if a second message received by the data messaging system has a higher priority value than the first message. Based on a determination that the second message has a higher priority value than the first message, the method includes delaying processing of the first message by the processing node, and processing the second message using the processing node.
- It will of course be appreciated that features described in relation to one aspect of the present invention may be incorporated into other aspects of the present invention. For example, the method of the invention may incorporate any of the features described with reference to the computer system of the invention and vice versa.
- Preferred embodiments of the present invention will now be described, by way of example only, with reference to the following drawings in which:
-
FIG. 1 is a schematic diagram showing a data messaging system in accordance with an embodiment of the invention; -
FIGS. 2a to 2c are a flowchart showing the operation of the data messaging system ofFIG. 1 ; -
FIG. 3 is a flowchart showing the further operation of the data messaging system ofFIG. 1 ; and -
FIGS. 4a and 4b are representations of the processing nodes of the data messaging system ofFIG. 1 . - Data messaging systems provide connectivity and integration between various systems and services. Examples of data messaging systems include IBM Integration Bus (IIB), IBM Websphere MQ, and Java Message Service. Users of such data messaging systems can develop applications to integrate their systems and services, which are sometimes referred to as message flows.
- Message flows may comprise a number of distinct nodes connected together, with the different nodes performing different individual functions as part of the composite application. For example, there could be an XSLT node, a routing node, an MQ output node, etc. Different nodes will exhibit different behaviours. For example, some may be CPU intensive, some disk input/output (I/O) intensive etc. Such data messaging system often process messages for which a priority value can be specified.
-
FIG. 1 shows a data messaging system in accordance with an embodiment of the invention. The data messaging system 1 comprises a processing system 2, which is in communication with external computer systems 7 and 8. The processing system 2 may comprise a single computing device or multiple computing devices in connection with each other. It will further be appreciated that a data messaging system 1 will commonly be used to provide connectivity and integration between multiple external computing devices, rather than just two. - The
data messaging system 1 further comprises a global store 3, a statistical data capture device 4, and a resource modeller 5, all of which are in communication with the processing system 2. The statistical data capture device 4 and the resource modeller 5 are also in direct communication with each other. - The operation of the
data messaging system 1 when processing a message, when the data messaging system 1 is in an initial state, is now described with reference to the flowchart of FIG. 2a, and the chart of FIG. 4a, which represents the processing nodes of the data messaging system in the initial state. - As shown in
FIG. 2a, in a first step a message, herein called message1, enters the message flow (step 101). The message may, for example, be passed by the external computing device 7 to the data messaging system 1, in particular the processing system 2 thereof, to be processed and passed to the external computing device 8. The ID of the thread message1 is being processed by (threadID), the priority level of message1 (messagePriority) and its position in the message flow are stored in the global store 3 (step 102). - The message is propagated to the next node (step 103), which as the
message 1 has just entered the message flow is the initial node 10 of FIG. 4a. The data messaging system 1 then checks if any staging area has been associated with the node 10 (step 104). In the present example, as the data messaging system 1 is in an initial state, there are no staging areas associated with any nodes of the data messaging system 1. As a result, the staging-area steps described below are not performed, and the message is simply processed by the node. As the message propagates through the data messaging system 1, its position recorded in the global store 3 is updated, as is its priority if that is modified at any time. - The message message1 may, for example, propagate from
node 10, along the middle upper branch of FIG. 4a that includes node 11, along the rightmost upper branch of FIG. 4a that includes node 12, after which it has finished processing and is passed to its destination, the external computing device 8. - As can be seen, the
data messaging system 1 in the initial state operates in much the same way as a conventional data messaging system, other than the data being stored in the global store 3. While the message is being processed by the data messaging system 1, statistical data about the processing of the message by the processing nodes can be collected by the statistical data capture device 4. Such data could be captured using the known Accounting and Statistics capture functionality of IIB, for example. It will be appreciated that in alternative embodiments of the invention the statistical data could be captured in various other ways. - The use of the captured statistical data by the
data messaging system 1 is now described, with reference to the flowchart of FIG. 3. First, the resource modeller 5 uses the statistical data to build resource models for the processing nodes of, and messages processed by, the data messaging system 1 (step 201). It will be appreciated that the resource models can be built using any of various standard statistical techniques. - The resource models allow the
data messaging system 1 to determine the resources that will be used by messages processed by the data messaging system 1 (or an estimation thereof), based on the properties of the message. For example, a resource model may allow resources such as the CPU usage, memory usage and/or disk I/O a message will require while being processed to be determined from properties of the message such as its size, type, format, structure, schema and/or purpose. Statistical data such as the time taken to process a message may also be used. Different processing nodes can have their own separate resource models, and may or may not do so depending on whether they share processing resources, amongst other things. (Two processing nodes may share a CPU but use separate memory areas, for example, and in that case may share a CPU resource model but have separate memory resource models.) - For each processing node of the
data messaging system 1, the resource models also allow the data messaging system 1 to identify the processing resources required by the processing node during the processing of messages (step 202). The data messaging system 1 can then determine if the processing node will (or is likely to) experience limiting processing resources during operation of the data messaging system 1 (step 203), for example insufficient CPU availability to process multiple messages at the same time at optimal speed. The statistical data capture device 4 can also capture the actual historical use of processing resources by the processing node, time taken for messages to be processed, and so on, during operation of the data messaging system 1, and so directly identify if the processing node will (or is likely to) experience limiting processing resources (on the basis of whether it has in the past). - If the
data messaging system 1 determines that the processing node will experience limiting processing resources, it associates a staging area with the processing node (step 204). The use of the staging area is described below. If not, it skips straight to the next step. - It will be appreciated that associating a staging area with a processing node may merely involve adding a flag to the processing node so that it is identified by the
data messaging system 1 as having a staging area, with the required functionality to allow the use as described below being provided by another component of the data messaging system 1. In an alternative embodiment, specific functionality may be added to, or enabled on, the processing node itself. In other embodiments, the association may be done in other suitable ways. It will be appreciated that the relevance of the association is that a processing node can be identified by the data messaging system 1 as having a staging area associated with it, so that use of the staging area as described below is possible, however the association is provided. - Next, the
data messaging system 1 checks if there are other processing nodes for it to consider (step 205). If so, it repeats the steps of identifying the processing resources required and so on, but for one of the other processing nodes (step 202 and subsequent steps again). - The process is repeated until all processing nodes have been considered. The
data messaging system 1 continues to capture statistical data as more messages are processed, and after a sufficient amount of new statistical data has been captured the whole process is repeated using the new statistical data (step 206, and then step 201 and subsequent steps again). In alternative embodiments the process is repeated when a predetermined period of time has passed, is manually triggered, or occurs as part of an established maintenance cycle, for example. - In the present example, after the process has been completed a first time, the processing nodes of the
data messaging system 1 are as shown in FIG. 4b. The processing nodes 11 a, 11 b and 11 c have had staging areas 12 a, 12 b and 12 c associated with them. - The operation of the
data messaging system 1 when processing a message, when the data messaging system 1 includes staging areas, is now described with reference to the flowcharts of FIGS. 2a to 2c, and the chart of FIG. 4b. - As before, the message message1 propagates along successive processing nodes. However, when message1 is propagated to processing
node 11 a, the data messaging system 1 identifies that the staging area 12 a is associated with the processing node 11 a (step 104). (A similar process will occur if message1 takes the other branch, and so is propagated to processing node 11 b with staging area 12 b.) In some embodiments, the staging area 12 a is provided with, and makes available, data regarding the processing resources that are being heavily utilised ahead. - The
data messaging system 1 then determines whether there are any higher priority messages in the data messaging system 1 that require processing (step 105). It does this by checking the global store 3, in which the priority levels of all messages in the data messaging system 1 are stored. In other embodiments, alternative methods are used to determine if there are any higher priority messages, for example the other messages in the data messaging system 1 are directly interrogated. If there are no higher priority messages, message1 is processed by the processing node 11 a in the usual way, and then propagated to the next processing node (step 103 again). - If, on the other hand, higher priority messages exist, the
data messaging system 1 uses the resource models described above to determine the processing resources message1 will require (step 107). The data messaging system 1 then uses the resource models described above to determine the processing resources the higher priority messages will require (step 108). The data messaging system 1 then determines the processing resources available to the processing node (step 109). In embodiments in which the staging area 12 a is provided with data regarding the processing resources that are being heavily utilised ahead, the processing resources available can be determined from this data. The processing resources may include the number of threads available to process different messages. - The
data messaging system 1 then uses the determined information to determine if the processing of any of the higher priority messages will be impacted by the processing of the message message1 (step 110), i.e. if message1 will use processing resources that are required by any of the higher priority messages if they are to be optimally processed (e.g. processed as quickly as possible). If the processing of none of the higher priority messages will be impacted, message1 is processed by the processing node 11 a in the usual way, and then propagated to the next processing node (step 103 again). - It will be appreciated that in alternative embodiments, the determination that the processing of a higher priority message will be impacted could be done in various more or less optimal ways. For example, only the processing resources required by message1 and the available resources could be considered, with it being assumed that a higher priority message will require at least a certain level of resources. Alternatively, only the processing resources required by the higher priority message and the available resources could be considered, with it being assumed that message1 will require at least a certain level of resources. Alternatively again, only the processing resources required by message1 and the processing resources required by the higher priority message could be considered, with it being assumed that there will always be at most a certain level of available resources.
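One way to realise the impact check of step 110 — an illustrative sketch, since the patent leaves the exact comparison open — is to ask, per resource, whether processing message1 alongside a higher-priority message would exceed what the node has available:

```python
def impacts_higher_priority(msg_needs, higher_needs_list, available):
    """Return True if processing the lower-priority message would deprive
    some higher-priority message of a resource it needs (step 110 sketch).

    msg_needs / available: dicts such as {"cpu": 0.6, "mem_mb": 128}.
    higher_needs_list: one needs-dict per higher-priority message.
    """
    for higher_needs in higher_needs_list:
        for resource, need in msg_needs.items():
            combined = need + higher_needs.get(resource, 0.0)
            if combined > available.get(resource, 0.0):
                return True   # contention: both cannot be optimally processed
    return False

# A node with one CPU's worth of capacity cannot run a 0.6-CPU message
# next to a 0.7-CPU higher-priority message without impact.
impacted = impacts_higher_priority({"cpu": 0.6}, [{"cpu": 0.7}], {"cpu": 1.0})
# impacted is True
```

The simplified variants the text mentions correspond to fixing one of the three inputs to an assumed constant level.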
- If, on the other hand, the processing of one or more of the higher priority messages will be impacted, the time message1 should be kept in the staging area is determined (step 111). This will be, for example, the time all the impacted higher priority messages will take to be processed. Alternatively, the time may be the time the first higher priority message will take to be processed.
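A minimal sketch of step 111, assuming (the patent does not mandate an estimator) that the resource models can supply an estimated processing time per impacted higher-priority message:

```python
def staging_time(estimated_times, wait_for_all=True):
    """Time (seconds) to keep the lower-priority message in the staging area.

    estimated_times: modelled processing times of the impacted
    higher-priority messages, in processing order.
    """
    if not estimated_times:
        return 0.0
    # Either wait for all impacted messages to finish, or only for the
    # first and then re-check for newly arrived higher-priority work.
    return sum(estimated_times) if wait_for_all else estimated_times[0]

# Waiting for all impacted messages: 1.5 s + 2.0 s = 3.5 s.
# Waiting only for the first: 1.5 s, then the check repeats.
```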
- The message message1 is then suspended in the
staging area 12 a (step 112). There are various different ways message1 could be suspended, in alternative embodiments or as alternatives in the same embodiment. The thread in which message1 is being processed could be placed into a temporary sleep mode, for example to reduce contention on CPU resources when higher priority messages are being processed in other threads. Alternatively, the thread could switch out its current state to enable the thread itself to be used to process the higher priority message, for example where there is only a single thread available. Once completed, the thread state could be switched back in to allow the processing of message1 to be completed. Alternatively again, if not enough threads are available the current thread could be suspended and a secondary temporary thread created in which the higher priority message is processed. The temporary thread could then be destroyed when complete, and processing of the thread containing message1 resumed, so ensuring that not too many threads are actively processing at any particular point in time. - In an alternative embodiment, the data messaging system is configured to trigger an event when a message is placed on an inbound queue, the event updating a separate part of the
global store 3 which stores details of messages waiting to be processed. This part of the global store 3 can then be queried as part of the check whether the processing of any higher priority messages will be impacted (step 110), and if it is determined that not enough threads will be available it can be managed as described above. - The message message1 remains suspended in the
staging area 12 a until the determined time has passed (steps 113 and 114). Once the determined time has passed, message1 is processed by the processing node 11 a in the usual way, and then propagated to the next processing node (step 103 again). - In alternative embodiments, and for example if the time determined for message1 to be suspended in the staging area is only the time the first higher priority message will take to be processed, rather than being immediately processed by the
processing node 11 a after the determined time has passed, the step of determining if there are any higher priority messages to be processed (step 105), and the subsequent steps, are repeated. This allows message1 to be suspended to allow further higher priority messages to be processed, even if those higher priority messages were not present in the data messaging system 1 when message1 was initially propagated to processing node 11 a. - To ensure that message1 is not suspended for an excessive amount of time (or forever), the
data messaging system 1 may impose a limit on the number of times message1 can be suspended in the staging area, or may always propagate it to the next processing node after it has been suspended once (which is equivalent to a limit of one). Alternatively or additionally, the data messaging system 1 can check any Service Level Agreement (SLA) policies attached to the message flow, and any time-out settings, and use these when determining whether to suspend a message in the staging area, and for how long. - Once message1 has been released from the
staging area 12 a and processed by processing node 11 a, it is propagated to the next node (step 103 again). It will then continue to propagate along successive processing nodes in the usual way until it passes to the external computing device 8, with the process of checking for higher priority messages occurring if (and only if) it is propagated to processing node 11 c with staging area 12 c. - In this way, more control over message processing is provided, enabling higher priority messages to be processed more rapidly, while not affecting the processing of lower priority messages when there are no higher priority messages that require processing.
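The staging loop described above — servicing higher-priority work, then re-checking, bounded so the delayed message cannot starve — can be sketched as follows; this is a simplification, and `max_suspensions` stands in for the suspension limit or SLA/time-out policy:

```python
def stage_and_process(message, next_higher_priority_batch, process, max_suspensions=3):
    """Process higher-priority batches first, suspending `message` at most
    `max_suspensions` times, then process `message` itself."""
    suspensions = 0
    while suspensions < max_suspensions:
        batch = next_higher_priority_batch()   # query e.g. the global store
        if not batch:
            break                              # nothing more urgent remains
        for higher in batch:
            process(higher)                    # higher-priority messages go first
        suspensions += 1                       # one staging interval consumed
    process(message)                           # the delayed message finally runs
    return suspensions

order = []
waves = [["h1"], ["h2"]]
n = stage_and_process("m1", lambda: waves.pop(0) if waves else [], order.append)
# order is ["h1", "h2", "m1"]; n is 2
```

When more higher-priority waves arrive than the limit allows, the delayed message is still processed after `max_suspensions` intervals, matching the starvation guard described above.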
- While the present invention has been described and illustrated with reference to particular embodiments, it will be appreciated by those of ordinary skill in the art that the invention lends itself to many different variations not specifically illustrated herein.
- The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
- These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Claims (19)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/953,354 US9843550B2 (en) | 2015-11-29 | 2015-11-29 | Processing messages in a data messaging system using constructed resource models |
Publications (2)
Publication Number | Publication Date |
---|---|
US20170155610A1 true US20170155610A1 (en) | 2017-06-01 |
US9843550B2 US9843550B2 (en) | 2017-12-12 |
Family
ID=58777919
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/953,354 Expired - Fee Related US9843550B2 (en) | 2015-11-29 | 2015-11-29 | Processing messages in a data messaging system using constructed resource models |
Country Status (1)
Country | Link |
---|---|
US (1) | US9843550B2 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10983846B2 (en) * | 2018-05-11 | 2021-04-20 | Futurewei Technologies, Inc. | User space pre-emptive real-time scheduler |
US10701534B2 (en) | 2018-07-30 | 2020-06-30 | Nxp B.V. | Message relaying in vehicle-to-vehicle communication system |
Family Cites Families (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5627764A (en) * | 1991-10-04 | 1997-05-06 | Banyan Systems, Inc. | Automatic electronic messaging system with feedback and work flow administration |
US5913921A (en) * | 1996-07-12 | 1999-06-22 | Glenayre Electronics, Inc. | System for communicating information about nodes configuration by generating advertisements having era values for identifying time reference for which the configuration is operative |
US5878351A (en) * | 1996-11-07 | 1999-03-02 | Nokia Mobile Phones Limited | Methods and apparatus for providing delayed transmission of SMS delivery acknowledgement, manual acknowledgement and SMS messages |
US6073142A (en) * | 1997-06-23 | 2000-06-06 | Park City Group | Automated post office based rule analysis of e-mail messages and other data objects for controlled distribution in network environments |
US6658485B1 (en) * | 1998-10-19 | 2003-12-02 | International Business Machines Corporation | Dynamic priority-based scheduling in a message queuing system |
US6952398B1 (en) * | 1999-04-30 | 2005-10-04 | Furrukh Fahim | System and method for optimal allocation of link bandwidth in a communications network for truck routing |
US6771653B1 (en) * | 1999-09-23 | 2004-08-03 | International Business Machines Corporation | Priority queue management system for the transmission of data frames from a node in a network node |
US20020049608A1 (en) * | 2000-03-03 | 2002-04-25 | Hartsell Neal D. | Systems and methods for providing differentiated business services in information management environments |
US20020152305A1 (en) * | 2000-03-03 | 2002-10-17 | Jackson Gregory J. | Systems and methods for resource utilization analysis in information management environments |
US6938024B1 (en) * | 2000-05-04 | 2005-08-30 | Microsoft Corporation | Transmitting information given constrained resources |
US7127486B1 (en) * | 2000-07-24 | 2006-10-24 | Vignette Corporation | Method and system for facilitating marketing dialogues |
US6826153B1 (en) * | 2000-09-13 | 2004-11-30 | Jeffrey Kroon | System and method of increasing the message throughput in a radio network |
AU2002240526A1 (en) * | 2001-02-26 | 2002-09-12 | Eprivacy Group, Inc. | System and method for controlling distribution of network communications |
US7146260B2 (en) * | 2001-04-24 | 2006-12-05 | Medius, Inc. | Method and apparatus for dynamic configuration of multiprocessor system |
US20030067874A1 (en) * | 2001-10-10 | 2003-04-10 | See Michael B. | Central policy based traffic management |
US7852865B2 (en) * | 2002-11-26 | 2010-12-14 | Broadcom Corporation | System and method for preferred service flow of high priority messages |
DE60319753T2 (en) * | 2003-12-17 | 2009-04-02 | Telefonaktiebolaget Lm Ericsson (Publ) | SYSTEM AND METHOD FOR DYNAMICALLY OPTIMIZED MESSAGE PROCESSING |
US7941491B2 (en) * | 2004-06-04 | 2011-05-10 | Messagemind, Inc. | System and method for dynamic adaptive user-based prioritization and display of electronic messages |
US8433768B1 (en) * | 2004-10-14 | 2013-04-30 | Lockheed Martin Corporation | Embedded model interaction within attack projection framework of information system |
ATE439721T1 (en) * | 2004-11-11 | 2009-08-15 | Koninkl Philips Electronics Nv | PROCEDURES FOR QUEUEING AND PACKET ASSEMBLY ON A PRIORITY BASIS |
KR100679858B1 (en) * | 2004-11-25 | 2007-02-07 | 한국전자통신연구원 | Apparatus for forwarding message based on dynamic priority and apparatus for priority adjustment and method for processing dynamic priority message |
JP4667859B2 (en) * | 2004-12-28 | 2011-04-13 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Message processing apparatus, message processing method, and message processing program for agent |
US8201205B2 (en) * | 2005-03-16 | 2012-06-12 | Tvworks, Llc | Upstream bandwidth management methods and apparatus |
US8036372B2 (en) * | 2005-11-30 | 2011-10-11 | Avaya Inc. | Methods and apparatus for dynamically reallocating a preferred request to one or more generic queues |
GB0709527D0 (en) * | 2007-05-18 | 2007-06-27 | Surfcontrol Plc | Electronic messaging system, message processing apparatus and message processing method |
EP2179549B1 (en) * | 2007-08-09 | 2012-03-21 | Markport Limited | Network resource management |
US8539097B2 (en) * | 2007-11-14 | 2013-09-17 | Oracle International Corporation | Intelligent message processing |
US7773519B2 (en) * | 2008-01-10 | 2010-08-10 | Nuova Systems, Inc. | Method and system to manage network traffic congestion |
US8081659B2 (en) * | 2008-07-02 | 2011-12-20 | Cisco Technology, Inc. | Map message expediency monitoring and automatic delay adjustments in M-CMTS |
KR20100073846A (en) * | 2008-12-23 | 2010-07-01 | 한국전자통신연구원 | Data frame transmissing and receiving method in a can protocol |
US8380575B2 (en) * | 2009-12-15 | 2013-02-19 | Trading Technologies International, Inc. | System and methods for risk-based prioritized transaction message flow |
US8923147B2 (en) * | 2011-10-03 | 2014-12-30 | Qualcomm Incorporated | Method and apparatus for filtering and processing received vehicle peer transmissions based on reliability information |
US8824484B2 (en) * | 2012-01-25 | 2014-09-02 | Schneider Electric Industries Sas | System and method for deterministic I/O with ethernet based industrial networks |
US9319254B2 (en) * | 2012-08-03 | 2016-04-19 | Ati Technologies Ulc | Methods and systems for processing network messages in an accelerated processing device |
US20140082215A1 (en) * | 2012-09-19 | 2014-03-20 | Arm Limited | Arbitrating between data paths in a bufferless free flowing interconnect |
US8805320B2 (en) * | 2012-11-28 | 2014-08-12 | At&T Intellectual Property I, Lp | Method and system for message collision avoidance |
US9571384B2 (en) * | 2013-08-30 | 2017-02-14 | Futurewei Technologies, Inc. | Dynamic priority queue mapping for QoS routing in software defined networks |
Also Published As
Publication number | Publication date |
---|---|
US9843550B2 (en) | 2017-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10459832B2 (en) | How to track operator behavior via metadata | |
US10079750B2 (en) | Limiting data output from windowing operations | |
US9954939B2 (en) | Processing a message received by a message server | |
US8997060B2 (en) | Parallel program analysis and branch prediction | |
US10757039B2 (en) | Apparatus and method for routing data in a switch | |
US10608915B2 (en) | Providing dynamic latency in an integration flow | |
US10642802B2 (en) | Identifying an entity associated with an online communication | |
US9665626B1 (en) | Sorted merge of streaming data | |
US9432832B2 (en) | Enabling mobile computing devices to track data usage among mobile computing devices that share a data plan | |
US9843550B2 (en) | Processing messages in a data messaging system using constructed resource models | |
US10235214B2 (en) | Hierarchical process group management | |
US9418201B1 (en) | Integration of functional analysis and common path pessimism removal in static timing analysis | |
US10171313B2 (en) | Managing workload to meet execution criterion in a hybrid cloud environment | |
US9600617B1 (en) | Automated timing analysis | |
US9959133B2 (en) | Identification and removal of zombie virtual machines | |
US20160371171A1 (en) | Stream-based breakpoint for too many tuple creations | |
US20180131756A1 (en) | Method and system for affinity load balancing | |
US9471431B2 (en) | Buffered cloned operators in a streaming application | |
GB2528949A (en) | Providing dynamic latency in an integration flow |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HOSIE, JOHN;ROSS, MARTIN A.;STIRLING, CRAIG H.;AND OTHERS;SIGNING DATES FROM 20151116 TO 20151117;REEL/FRAME:037161/0636 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: DOORDASH, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:057826/0939 Effective date: 20211012 |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20211212 |