US7921410B1 - Analyzing and application or service latency - Google Patents
- Publication number
- US7921410B1 (U.S. application Ser. No. 11/784,611; application number US78461107A)
- Authority
- US
- United States
- Prior art keywords
- latency
- transaction
- data
- normal
- components
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3452—Performance evaluation by statistical analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3409—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
- G06F11/3419—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment by assessing time
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3466—Performance evaluation by tracing or monitoring
- G06F11/3495—Performance evaluation by tracing or monitoring for systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/87—Monitoring of transactions
Definitions
- Monitoring transaction or job latency is one measure for determining the health of an application or service tasked with performing the transaction (or job).
- latency is a time delay between the moment a task is initiated and the moment the same task is completed.
- the task may be a transaction, a job, or a component of such a transaction or job.
- a transaction latency is response time of the transaction, i.e., the time delay between the moment the transaction is initiated by an application (or service) and the moment such a transaction is completed by the application (or service).
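As a sketch (not part of the patent text), the latency definition above can be expressed directly in code; the function and task names are illustrative:

```python
import time

def measure_latency(task):
    """Latency: the time delay between the moment a task is initiated
    and the moment the same task is completed."""
    start = time.monotonic()
    task()                      # run the transaction, job, or component
    return time.monotonic() - start

latency = measure_latency(lambda: sum(range(1000)))
print(latency >= 0.0)  # → True
```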
- FIG. 1 illustrates a block diagram of a system wherein one or more embodiments may be practiced.
- FIG. 3 illustrates a method for monitoring and analyzing a transaction latency, in accordance with one embodiment.
- Information technology (IT) encompasses all forms of technology used to create, store, exchange, and utilize information in its various forms, including but not limited to business data, conversations, still images, motion pictures, and multimedia presentations, and is concerned with the design, development, installation, and implementation of information or computing systems (hardware and software) and software applications.
- IT distributed environments may be employed, for example, by Internet Service Providers (ISP), web merchants, and web search engines to provide IT applications and services to users.
- FIG. 1 illustrates a block diagram of a system 100 for monitoring and analyzing transaction or job latencies of an IT application or service, wherein an embodiment may be practiced.
- various embodiments are discussed herein with reference to an application and a transaction performed by such an application.
- the system 100 is operable to automatically induce a model of normality for a transaction latency, automatically produce a ranked list of components for abnormal occurrences, based on the degree of abnormality of each component, and automatically adapt to changes in the normality model.
- the system 100 may be separate from or incorporated into the distributed system(s) that it monitors.
- the system 100 includes a data collection module 110 and a latency analysis module 120 .
- one or more data collectors are employed for the data collection module 110 .
- a data collector is one or more software programs, software applications or software modules.
- a software program, application, or module includes one or more machine-coded routines, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types.
- the data collector is used to monitor and measure the latency of transactions or jobs that are submitted to an IT application or service as implemented in a distributed system, such as an IT data center or an IT network system. Thus, it monitors the distributed system (not shown) to obtain the latency metrics (data measurements), which include the latency metrics of the individual components that contribute to the total latency of a transaction or job.
- the data collector is operable to measure total response time of a transaction and also break down the total response time into the following components: network time, connection time, server time, and transfer time that correspond to the transaction components.
- Each of the components may include measurable sub-components.
- server time is made up of time spent in the web server, time spent in the application server, and time spent in the database server.
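The component and sub-component breakdown described above might be represented as a nested record, where the total response time is the sum over components; all field names and values here are illustrative, not taken from the patent:

```python
# Hypothetical per-transaction sample: total response time broken into
# network, connection, server, and transfer time; server time is further
# split into web-, application-, and database-server sub-components.
sample = {
    "network_time": 0.012,
    "connection_time": 0.004,
    "server_time": {
        "web_server": 0.020,
        "app_server": 0.055,
        "db_server": 0.030,
    },
    "transfer_time": 0.009,
}

def total_latency(s):
    """Sum all components, recursing into sub-components."""
    total = 0.0
    for v in s.values():
        total += total_latency(v) if isinstance(v, dict) else v
    return total

print(round(total_latency(sample), 3))  # → 0.13
```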
- Examples of possible data collectors include, but are not limited to: the HP Asset and OpenView software from Hewlett-Packard Company of Palo Alto, Calif.; BMC Discovery Express from BMC Software, Inc. of Houston, Tex.; and the data collectors available in the VMware Capacity Planner software and the CDAT software from IBM Corporation of Armonk, N.Y.
- the latency analysis module 120 is also one or more software programs, software applications, or software modules. It is operable, through automation, to statistically characterize normal component latencies of transactions or jobs that are performed by an application/service in a distributed system, to adapt to changes in such characterized normal behavior over time, and to recognize statistically significant changes in component latencies. To that end, the latency analysis module 120 is operable to receive or provide a definition of normality 130 for the latency of some unit of work, such as a transaction or job.
- FIG. 2 illustrates a block diagram of a computerized system 200 that is operable to be used as a platform for implementing the system 100 , or any one of the modules 110 and 120 therein.
- the computer system 200 includes one or more processors, such as processor 202 , providing an execution platform for executing software.
- the computerized system 200 includes one or more single-core or multi-core computer processors, such as processors from Intel, AMD, and Cyrix.
- a computer processor may be a general-purpose processor, such as a central processing unit (CPU) or any other multi-purpose processor or microprocessor.
- a computer processor also may be a special-purpose processor, such as a graphics processing unit (GPU), an audio processor, a digital signal processor, or another processor dedicated for one or more processing purposes. Commands and data from the processor 202 are communicated over a communication bus 204 or through point-to-point links with other components in the computer system 200 .
- the computer system 200 also includes a main memory 206 where software is resident during runtime, and a secondary memory 208 .
- the secondary memory 208 may also be a computer-readable medium (CRM) that may be used to store software programs, applications, or modules that implement the modules 110 and 120 ( FIG. 1 ) and the method 300 ( FIG. 3 , as described below).
- the main memory 206 and secondary memory 208 (and an optional removable storage unit 214 ) each includes, for example, a hard disk drive and/or a removable storage drive 212 representing a floppy diskette drive, a magnetic tape drive, a compact disk drive, etc., or a nonvolatile memory where a copy of the software is stored.
- the secondary memory 208 also includes ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), or any other electronic, optical, magnetic, or other storage or transmission device capable of providing a processor or processing unit with computer-readable instructions.
- the computer system 200 includes a display 220 connected via a display adapter 222 , user interfaces comprising one or more input devices 218 , such as a keyboard, a mouse, a stylus, and the like. However, the input devices 218 and the display 220 are optional.
- a network interface 230 is provided for communicating with other computer systems via, for example, a network.
- FIG. 3 illustrates a flow chart diagram of a method 300 for monitoring and analyzing a latency, or response time, of an IT application transaction, in accordance with one embodiment.
- the method 300 is discussed in the context of the system 100 illustrated in FIG. 1 .
- inputs are collected for the latency monitoring and analysis. The collected inputs include the monitored latency data of a transaction of interest as performed by an application in a distributed system, a definition of normality for the transaction latency, and a latency-ranking policy or rule. Each of these inputs is described below.
- the transfer time indicates the time it takes for data to be transferred to the source of the transaction request as a result of the processing of the transaction.
- the data collected in each sample or trace for each latency component includes a measurement that is collected once per each predefined time interval, an average of multiple measurements collected per each predefined time interval, or any other suitable statistics about the measurement for each latency component per each predefined time interval.
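One of the collection modes described above, averaging multiple measurements per predefined time interval, can be sketched as follows; the interval length and the raw `(timestamp, latency)` data are illustrative assumptions:

```python
def per_interval_average(measurements, interval=60):
    """Aggregate raw (timestamp, latency) measurements into one average
    per predefined time interval (interval length in seconds)."""
    buckets = {}
    for ts, value in measurements:
        buckets.setdefault(int(ts // interval), []).append(value)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

raw = [(0, 0.10), (30, 0.12), (65, 0.20), (90, 0.22)]
averaged = per_interval_average(raw)  # one averaged sample per interval
```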
- the transaction latency L1 may include other latency components, and each latency component may include contributing subcomponents therein.
- the latency analysis module 120 then receives the collected transaction latency data from the data collection module 110 .
- the method 300 continues at 312 , wherein the latency analysis module 120 determines whether the collected transaction latency data is normal or abnormal based on the predefined definition of normality. This determination is made for each collected sample of the transaction latency data.
- if a data sample is determined to be normal, it is added to a training window.
- the latency analysis module 120 proceeds to determine whether there is a sufficient amount of training data (e.g., number of data samples) in the training window to compute statistics about the normality of the latency components in the transaction latency data. Testing for a sufficient amount of training data may thus be delayed until there is abnormal latency data to analyze.
- the sufficiency of the training window may be empirically set by a user based on one or more desired criteria, such as whether the training data in the training window is consistent for normal behavior patterns of each latency component of interest or whether there is enough training data for generating a normal distribution for each latency component.
- the latency analysis module 120 proceeds to statistically compute a normal latency for each latency component of interest in the transaction latency data. In one embodiment, this is achieved by computing a normal distribution of each latency component from the data samples in the training window, i.e., its mean value and standard deviation. The range of normal latency values for each latency component is then based, as desired, on the mean and standard deviation of the normal distribution of that component. For example, in a normal distribution, about 68% of the values lie within one standard deviation of the mean, about 95% within two standard deviations, and about 99.7% within three standard deviations.
- a latency component is considered normal if its value ranges within one, two, or three standard deviations as desired.
- Alternative embodiments are contemplated wherein the range of normal latency values for each latency component is based on any other desired statistics about the normal distribution of the latency component, such as percentiles of the normal distribution, or about any other desired variable, such as time, that is associated with the latency component.
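The mean-and-standard-deviation normality test described above can be sketched as follows, assuming a per-component list of training-window samples; the threshold `k` plays the role of the "one, two, or three standard deviations" choice:

```python
from statistics import mean, stdev

def normal_range(samples, k=3):
    """Range of normal latency values for one component:
    mean +/- k standard deviations of its training-window samples."""
    m, s = mean(samples), stdev(samples)
    return m - k * s, m + k * s

def is_normal(value, samples, k=3):
    lo, hi = normal_range(samples, k)
    return lo <= value <= hi

training = [0.10, 0.11, 0.09, 0.10, 0.12, 0.10, 0.11]
print(is_normal(0.11, training))  # → True (within three standard deviations)
print(is_normal(0.50, training))  # → False (far outside the normal range)
```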
- the data sample collected and determined to be abnormal at 312 is then compared against these statistical computations to rank the latency components in the new data sample based on their degree of abnormality in accordance with the latency-ranking policy collected at 310 .
- the latency components in an abnormal data sample collected for analysis must be of the same respective types as the latency components in the data samples of the training window in order to perform the comparison.
- the degree of abnormality may be set as desired by the user, based on the latency-ranking policy, and depends on the amount or percentage of difference (increase or decrease) from the normal latency calculated at 318.
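One illustrative latency-ranking policy (an assumption, not necessarily the one the patent has in mind) is to score each component by how many standard deviations it departs from its training-window mean, then sort descending:

```python
from statistics import mean, stdev

def rank_components(abnormal_sample, training_window):
    """Rank latency components of an abnormal sample by degree of
    abnormality, scored here as distance from the training-window mean
    in units of standard deviations (one possible ranking policy)."""
    scores = {}
    for name, value in abnormal_sample.items():
        history = [s[name] for s in training_window]
        m, sd = mean(history), stdev(history)
        scores[name] = abs(value - m) / sd if sd else float("inf")
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

window = [{"network": 0.010, "server": 0.10},
          {"network": 0.012, "server": 0.11},
          {"network": 0.011, "server": 0.09}]
ranked = rank_components({"network": 0.011, "server": 0.50}, window)
print(ranked[0][0])  # → server (the most abnormal component)
```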
- the latency analysis module 120 continuously executes the method 300 to receive transaction latency data samples and provide a moving training window at 314 as new data samples are collected and received; the size of the training window may be specified, e.g., by the user.
- each transaction latency data sample includes an indication as to whether it is normal or abnormal based on a determination external to the system 100 .
- the determination of whether each data sample is normal at 312 is then merely based on whether such a data sample carries a normal or abnormal indication, and the alternative embodiment proceeds in accordance with the remainder of the method 300.
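The moving training window described above can be sketched with a bounded queue: normal samples are appended, abnormal ones are excluded, and once the window is full the oldest samples fall out, so the model of normality adapts over time. The window size of 100 is an illustrative assumption:

```python
from collections import deque

# Moving training window: keeps at most the 100 most recent normal samples.
training_window = deque(maxlen=100)

def ingest(sample, is_normal):
    """Add a sample judged normal to the training window;
    abnormal samples are analyzed but not used for training."""
    if is_normal:
        training_window.append(sample)

for i in range(150):
    ingest(i, True)
print(len(training_window))  # → 100 (capped at the window size)
```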
- the methods and systems as described herein are operable to provide automated analysis of transaction or job latencies and specifically pinpoint problematic latency components in each transaction latency, based on the aforementioned component ranking, so that corrective actions may be performed in the monitored distributed system to rectify the problems in the pinpointed latency components.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Quality & Reliability (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Probability & Statistics with Applications (AREA)
- Debugging And Monitoring (AREA)
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/784,611 US7921410B1 (en) | 2007-04-09 | 2007-04-09 | Analyzing and application or service latency |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/784,611 US7921410B1 (en) | 2007-04-09 | 2007-04-09 | Analyzing and application or service latency |
Publications (1)
Publication Number | Publication Date |
---|---|
US7921410B1 true US7921410B1 (en) | 2011-04-05 |
Family
ID=43805955
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/784,611 Active 2030-02-02 US7921410B1 (en) | 2007-04-09 | 2007-04-09 | Analyzing and application or service latency |
Country Status (1)
Country | Link |
---|---|
US (1) | US7921410B1 (en) |
- 2007-04-09: US application US11/784,611 filed; granted as US7921410B1 (status: Active)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6061722A (en) * | 1996-12-23 | 2000-05-09 | T E Network, Inc. | Assessing network performance without interference with normal network operations |
US5872976A (en) * | 1997-04-01 | 1999-02-16 | Landmark Systems Corporation | Client-based system for monitoring the performance of application programs |
US6374371B1 (en) * | 1998-03-18 | 2002-04-16 | Micron Technology, Inc. | Method and apparatus for monitoring component latency drifts |
US20020120727A1 (en) * | 2000-12-21 | 2002-08-29 | Robert Curley | Method and apparatus for providing measurement, and utilization of, network latency in transaction-based protocols |
US20030023716A1 (en) * | 2001-07-25 | 2003-01-30 | Loyd Aaron Joel | Method and device for monitoring the performance of a network |
US20030056200A1 (en) * | 2001-09-19 | 2003-03-20 | Jun Li | Runtime monitoring in component-based systems |
Non-Patent Citations (2)
Title |
---|
Myung-Sup Kim et al., "A Flow-based Method for Abnormal Network Traffic Detection", Apr. 2004. * |
Sujata Banerjee et al., "Network Latency Optimizations in Distributed Database Systems", Feb. 1998. *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9647916B2 (en) | 2012-10-27 | 2017-05-09 | Arris Enterprises, Inc. | Computing and reporting latency in priority queues |
WO2014138894A1 (en) * | 2013-03-15 | 2014-09-18 | Imagine Communications Corp. | Systems and methods for controlling branch latency within computing applications |
US9182949B2 (en) | 2013-03-15 | 2015-11-10 | Imagine Communications Corp. | Systems and methods for controlling branch latency within computing applications |
US10346292B2 (en) * | 2013-11-13 | 2019-07-09 | Microsoft Technology Licensing, Llc | Software component recommendation based on multiple trace runs |
WO2019046996A1 (en) * | 2017-09-05 | 2019-03-14 | Alibaba Group Holding Limited | Java software latency anomaly detection |
US11463361B2 (en) | 2018-09-27 | 2022-10-04 | Hewlett Packard Enterprise Development Lp | Rate adaptive transactions |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8095830B1 (en) | Diagnosis of system health with event logs | |
US8156377B2 (en) | Method and apparatus for determining ranked causal paths for faults in a complex multi-host system with probabilistic inference in a time series | |
US8069370B1 (en) | Fault identification of multi-host complex systems with timesliding window analysis in a time series | |
US20100324869A1 (en) | Modeling a computing entity | |
US8230262B2 (en) | Method and apparatus for dealing with accumulative behavior of some system observations in a time series for Bayesian inference with a static Bayesian network model | |
US8051162B2 (en) | Data assurance in server consolidation | |
US8291263B2 (en) | Methods and apparatus for cross-host diagnosis of complex multi-host systems in a time series with probabilistic inference | |
US7444263B2 (en) | Performance metric collection and automated analysis | |
US7502971B2 (en) | Determining a recurrent problem of a computer resource using signatures | |
US20170104658A1 (en) | Large-scale distributed correlation | |
US8224624B2 (en) | Using application performance signatures for characterizing application updates | |
Jiang et al. | Efficient fault detection and diagnosis in complex software systems with information-theoretic monitoring | |
US20020116441A1 (en) | System and method for automatic workload characterization | |
US20140195860A1 (en) | Early Detection Of Failing Computers | |
US7184935B1 (en) | Determining and annotating a signature of a computer resource | |
WO2008098631A2 (en) | A diagnostic system and method | |
US8250408B1 (en) | System diagnosis | |
US10360140B2 (en) | Production sampling for determining code coverage | |
US20050049901A1 (en) | Methods and systems for model-based management using abstract models | |
US20090307347A1 (en) | Using Transaction Latency Profiles For Characterizing Application Updates | |
US20050107997A1 (en) | System and method for resource usage estimation | |
WO2012142144A2 (en) | Assessing application performance with an operational index | |
US7921410B1 (en) | Analyzing and application or service latency | |
Zheng et al. | Hound: Causal learning for datacenter-scale straggler diagnosis | |
US9397921B2 (en) | Method and system for signal categorization for monitoring and detecting health changes in a database system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SYMONS, JULIE A.;COHEN, IRA;WADE, GERALD T.;AND OTHERS;SIGNING DATES FROM 20070402 TO 20070409;REEL/FRAME:019212/0772 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP, TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.;REEL/FRAME:037079/0001 Effective date: 20151027 |
|
AS | Assignment |
Owner name: ENTIT SOFTWARE LLC, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP;REEL/FRAME:042746/0130 Effective date: 20170405 |
|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., DELAWARE Free format text: SECURITY INTEREST;ASSIGNORS:ENTIT SOFTWARE LLC;ARCSIGHT, LLC;REEL/FRAME:044183/0577 Effective date: 20170901 |
Owner name: JPMORGAN CHASE BANK, N.A., DELAWARE Free format text: SECURITY INTEREST;ASSIGNORS:ATTACHMATE CORPORATION;BORLAND SOFTWARE CORPORATION;NETIQ CORPORATION;AND OTHERS;REEL/FRAME:044183/0718 Effective date: 20170901 |
|
FEPP | Fee payment procedure |
Free format text: 7.5 YR SURCHARGE - LATE PMT W/IN 6 MO, LARGE ENTITY (ORIGINAL EVENT CODE: M1555); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
AS | Assignment |
Owner name: MICRO FOCUS LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:ENTIT SOFTWARE LLC;REEL/FRAME:050004/0001 Effective date: 20190523 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |
|
AS | Assignment |
Owner name: MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC), CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0577;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:063560/0001 Effective date: 20230131 |
Owner name: NETIQ CORPORATION, WASHINGTON Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399 Effective date: 20230131 |
Owner name: MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.), WASHINGTON Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399 Effective date: 20230131 |
Owner name: ATTACHMATE CORPORATION, WASHINGTON Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399 Effective date: 20230131 |
Owner name: SERENA SOFTWARE, INC, CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399 Effective date: 20230131 |
Owner name: MICRO FOCUS (US), INC., MARYLAND Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399 Effective date: 20230131 |
Owner name: BORLAND SOFTWARE CORPORATION, MARYLAND Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399 Effective date: 20230131 |
Owner name: MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC), CALIFORNIA Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399 Effective date: 20230131 |