WO2017148293A1 - Information statistics method and apparatus for a cloud platform-based client application - Google Patents

Information statistics method and apparatus for a cloud platform-based client application

Info

Publication number
WO2017148293A1
Authority
WO
WIPO (PCT)
Prior art keywords
buried point
buried
point information
log
information
Prior art date
Application number
PCT/CN2017/074167
Other languages
English (en)
French (fr)
Inventor
李巨雷
Original Assignee
阿里巴巴集团控股有限公司
李巨雷
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司, 李巨雷 filed Critical 阿里巴巴集团控股有限公司
Priority to JP2018544842A priority Critical patent/JP2019517040A/ja
Priority to EP17759144.3A priority patent/EP3425524A1/en
Publication of WO2017148293A1 publication Critical patent/WO2017148293A1/zh
Priority to US16/119,899 priority patent/US20180365085A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/54 - Interprogram communication
    • G06F 9/542 - Event management; Broadcasting; Multicasting; Notifications
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/30 - Monitoring
    • G06F 11/3003 - Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/302 - Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
    • G06F 11/3065 - Monitoring arrangements determined by the means or processing involved in reporting the monitored data
    • G06F 11/3072 - Monitoring arrangements determined by the means or processing involved in reporting the monitored data where the reporting involves data filtering, e.g. pattern matching, time or event triggered, adaptive or policy-based reporting
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 - File systems; File servers
    • G06F 16/18 - File system types
    • G06F 16/1805 - Append-only file systems, e.g. using logs or journals to store data
    • G06F 16/1815 - Journaling file systems
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/21 - Design, administration or maintenance of databases
    • G06F 16/215 - Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
    • G06F 16/24 - Querying
    • G06F 16/245 - Query processing
    • G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/40 - Data acquisition and logging
    • G06F 2201/00 - Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/86 - Event-based monitoring
    • G06F 2201/865 - Monitoring of software

Definitions

  • The present invention relates to the field of cloud platform technologies, and in particular, to an information statistics method for a cloud platform-based client application and an information statistics apparatus for a cloud platform-based client application.
  • At present, technical solutions for information statistics of client applications usually declare Aspect Oriented Programming (AOP) interception through annotations on class methods or through configuration files.
  • In existing solutions, information of the client application is collected by means of proxy interception; the objects to be intercepted by the proxy are customized in advance for a specific scenario and cannot be adjusted according to actual needs.
  • Moreover, proxy interception can only intercept the execution methods of the client application; parameters or data related to an execution method cannot be obtained by interception. In this case, the interception result obtained by proxy interception cannot be further utilized, and the utilization conversion rate of the proxy interception result is not high.
  • In view of the above problems, embodiments of the present application are provided in order to offer an information statistics method for a cloud platform-based client application that overcomes the above problems or at least partially solves them, and a corresponding information statistics apparatus for a cloud platform-based client application.
  • an information statistics method for a cloud platform-based client application including:
  • listening to the buried point log created by each client application calling the buried point software development kit, and extracting the buried point information from the buried point log, where the buried point log is created according to a predefined buried point attribute, and the client application pre-defines the buried point attribute by calling an application programming interface of the buried point software development kit;
  • the buried point attribute includes a buried point category and/or a buried point keyword of the buried point information
  • the buried point category includes a buried point parent category or a buried point sub category to which the buried point information belongs.
  • the listening to the buried point log created by each client application calling the buried point software development kit includes: listening to a directory of the buried point log and/or a change status of the buried point information;
  • if the directory and/or the buried point information changes, it is determined that new buried point information is added to the buried point log.
  • the extracting the buried point information from the buried point log comprises:
  • Buried point information is extracted incrementally from the buried point log.
  • after the buried point information is extracted from the buried point log, the method further includes:
  • sending the buried point information to message middleware, where the message middleware is configured to transmit the buried point information and to control the transmission speed;
  • before the buried point information is aggregated according to the at least one dimension, the method further includes:
  • reading the buried point information from the message middleware.
  • before the buried point information is aggregated according to the at least one dimension, the method further includes:
  • filtering the buried point information corresponding to the buried point attribute.
  • the aggregating the buried point information according to at least one dimension comprises:
  • the buried point information is aggregated according to an application dimension and/or a time dimension.
  • the query request carries a query parameter input through a query interface
  • the querying and displaying the aggregated buried point information according to the query request includes: querying and displaying the aggregated buried point information according to the query parameter;
  • the query parameter includes at least one of a time period, the buried point attribute, and an application identifier of the client application.
  • the application also discloses an information statistics device for a cloud platform-based client application, including:
  • a buried point log monitoring module, configured to listen to a buried point log created by each client application calling the buried point software development kit;
  • a buried point information extraction module, configured to extract buried point information from the buried point log, where the buried point log is created according to a predefined buried point attribute, and the client application pre-defines the buried point attribute by calling an application programming interface of the buried point software development kit;
  • a burying information aggregation module configured to aggregate the buried point information according to at least one dimension
  • the buried point information display module is configured to query and display the aggregated buried point information according to the query request.
  • the buried point attribute includes a buried point category and/or a buried point keyword of the buried point information
  • the buried point category includes a buried point parent category or a buried point sub category to which the buried point information belongs.
  • the buried point log monitoring module is specifically configured to listen to a directory of the buried point log and/or a change status of the buried point information; if the directory and/or the buried point information changes, it is determined that new buried point information is added to the buried point log.
  • the buried point information reading module is specifically configured to incrementally read out the buried point information from the buried point log.
  • the device further comprises:
  • a buried point information sending module, configured to send the buried point information to message middleware after the buried point information is extracted from the buried point log, where the message middleware is used to transmit the buried point information and to control the transmission speed;
  • the device also includes:
  • a buried point information extraction module, configured to read the buried point information from the message middleware before the buried point information is aggregated according to the at least one dimension.
  • the device further comprises:
  • the burying point information screening module is configured to filter the burying point information corresponding to the burying point attribute before the burying point information is aggregated according to the at least one dimension.
  • the burying point information aggregation module is specifically configured to aggregate the burying point information according to an application dimension and/or a time dimension.
  • the query request carries a query parameter input through a query interface
  • the buried point information display module is specifically configured to query and display the aggregated buried point information according to the query parameter; the query parameter includes at least one of a time period, the buried point attribute, and an application identifier of the client application.
  • The embodiments of the present application are based on a cloud platform: the buried point log created by the client application calling an application programming interface (API) in a buried point software development kit (SDK) is listened to, buried point information is read from the buried point log, and, in response to the user's query request, the buried point information aggregated according to different dimensions is queried and displayed.
  • In the embodiments of the present application, buried points can be customized through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Furthermore, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted.
  • FIG. 1 is a schematic diagram of a flow of information statistics of a client application in the background art
  • FIG. 2 is a flow chart of steps of an embodiment of an information statistics method for a cloud platform-based client application according to the present application
  • FIG. 3 is a schematic structural diagram of a LogAgent in an embodiment of an information statistics method for a cloud platform-based client application according to the present application;
  • FIG. 4 is a schematic structural diagram of an aggregation system in an embodiment of an information statistics method for a cloud platform-based client application according to the present application;
  • FIG. 5 is a schematic diagram of a relationship between a message middleware, a JStorm, and an aggregation system in an embodiment of an information statistical method for a cloud platform-based client application according to the present application;
  • FIG. 6 is a flow chart of steps of an embodiment of an information statistics method for another cloud platform-based client application according to the present application
  • FIG. 7 is a schematic diagram of a system structure of an embodiment of an information statistics method for a cloud platform-based client application according to the present application.
  • FIG. 8 is a structural block diagram of an embodiment of an information statistics apparatus of a cloud platform-based client application according to the present application.
  • One of the core concepts of the embodiments of the present application is to provide an information statistics method for a cloud platform-based client application, which can be applied to a server. The server provides a buried point SDK, and the client application calls an API in the buried point SDK to customize buried points.
  • The server systematically listens to the buried point log, collects and automatically aggregates, in real time, the messages in the buried point log through an intelligent agent, and queries and displays the aggregation results, thereby realizing information collection and statistics for the client application.
  • FIG. 2 a flow chart of steps of an embodiment of an information statistics method for a cloud platform-based client application according to the present application is shown, which may specifically include the following steps:
  • Step 100 Listening to a buried point log created by each client application calling a buried software development kit, and extracting buried point information from the buried point log, where the buried point log is created according to a predefined buried point attribute.
  • the client application pre-defines the buried point attribute by calling an application programming interface of the buried point software development kit.
  • The buried point software development kit (SDK) is provided by the server to the client application, and the buried point log is generated by the buried point SDK.
  • The buried point log is a specific log file used to record the collected buried point information. Specifically, buried point attributes are customized in the buried point SDK; through these custom buried point attributes, the buried point SDK stores, into the buried point log, every buried point message generated during the running of the client application that is related to the custom buried point attributes.
  • the buried point attribute can be pre-defined in the embedded software development kit or customized through the client application.
  • the embedded SDK can be provided to the client application in the form of a jar package.
  • a Java client application needs to depend on the jar package of the buried point SDK to customize buried points.
  • Specifically, the client application can configure the buried point attributes used to create the buried point log by accessing the application programming interface (API) provided in the buried point SDK, so that buried point information is collected according to the buried point attributes. Because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved.
  • the name of the buried point log, the storage path, and the like may also be defined in advance through the SDK.
  • the buried point attribute may include at least one of the buried point category and the buried point keyword.
  • the buried point attribute may include a buried point category.
  • For example, if a client application deployed on the cloud platform needs to perform stress testing and the user wants to know the load on the whole cloud platform system in the application dimension, the custom buried point category may be set to user query;
  • the recorded user queries can then be obtained according to the buried point category, and the maximum QPS (queries per second) of the cloud platform system can further be calculated from those user queries.
  • The buried point categories may also be organized hierarchically, which includes dividing them into a buried point parent category and buried point subcategories to which the buried point information belongs, that is, defining a parent-child relationship between buried point categories
  • so that the collected buried point information can be classified and counted. For example, if multiple execution methods by which a client application calls a database need to be analyzed, a buried point parent category may be defined for the database call, and a buried point subcategory may be defined for each specific execution method.
  • The buried point attribute may include a buried point keyword: the client application defines the buried point keywords to be collected, and the buried point information related to those keywords is then obtained.
  • For example, to analyze multiple operating parameters of a client application, buried point keywords can be defined separately for each operating parameter so that the parameters can be distinguished.
  • Buried point keywords can also be defined in combination with buried point categories. For example, if a client application is deployed on 10 virtual machines in a cloud environment, buried point keywords can be defined to distinguish operating parameters such as the CPU (central processing unit), memory and disk of each virtual machine: a buried point category is defined for each virtual machine, and buried point keywords under that category are defined for the machine's operating parameters.
  • The buried point attribute may further include a call duration of a target object and a call success identifier, which respectively represent how long a call to a target object (for example, an execution method) of the client application takes and whether the target object was called successfully. Information such as the call duration and whether the call succeeded is recorded in the buried point log for analysis.
  • Specifically, the name of the API is logStat, and at least one of a buried point category, a buried point subcategory, a buried point keyword, a call duration, and whether the call succeeded may be defined.
  • When the buried point category and the buried point subcategory are defined, the buried point SDK records the result as logStat(String category, String subCategory); when the buried point category, the buried point subcategory and the call duration are defined, the buried point SDK records the result as logStat(String category, String subCategory, Long responseTime); when the buried point category, the buried point keyword, the call duration and whether the call succeeded are defined, the buried point SDK records the result as logStat(String category, String keyWord, Long responseTime, Boolean success).
  • The collection of parameters related to an execution method is described below as a specific example.
  • Assume that the specific class of the API provided by the SDK is MonitorService and the name of the API is MonitorService logStat, and that the client application contains operations on a MySQL database,
  • with the method names being the insertMethod method and the updateMethod method. The number of times these two methods are called per minute while the application runs now needs to be counted.
  • When the buried point SDK records that a method has been called once, the parent category (category) is set to MySQL, and the subcategories (subCategory) are defined as Insert and Update respectively.
  • Taking the buried point of the insertMethod method as an example:
  • If the call duration does not need to be recorded, the buried point SDK records the result as MonitorService.logStat("MySQL", "Insert"). If the execution time of the insertMethod method needs to be recorded, the recorded end time of the insertMethod method minus its start time is the execution time of the insertMethod method; assuming it is 10 milliseconds, the result recorded by the buried point SDK is MonitorService.logStat("MySQL", "Insert", "10").
  • If whether the call succeeded needs to be recorded, the recorded result MonitorService.logStat("MySQL", "Insert", "10", "true") indicates that the call succeeded,
  • and MonitorService.logStat("MySQL", "Insert", "10", "false") indicates that the call failed. The buried point for updateMethod is used in the same way.
  • Listening to the buried point log may mean monitoring changes of the buried point log in order to discover newly added buried point information.
  • For a buried point log that has a log directory, the directory of the buried point log can be monitored; when the directory changes, it is determined that buried point information has been added. Alternatively, changes of the buried point information itself can be monitored;
  • when the buried point information changes, it is determined that buried point information has been added. The two approaches can also be combined for monitoring.
  • In step 100, a LogAgent (log agent) may be used to listen to the buried point log. The LogAgent can be divided into the following modules: PathWatch (directory listener), used to monitor directory changes of the buried point log; FileWatch (file listener), used to monitor changes of the buried point information in the buried point log; WatchChecker (listening limiter), used to limit the number of buried point information entries being monitored in the buried point log; and LogSeeker (log seeker), used to read the buried point information in the buried point log incrementally.
  • Applied to the present application, the LogAgent is used to listen to the buried point log: the change status of the directory of the buried point log can be monitored through PathWatch, and if the directory changes, new buried point information has been added to the buried point log.
  • Listening to the buried point log includes not only monitoring the directory of the buried point log but also, on the basis of PathWatch monitoring the change status of the directory, monitoring the buried point information in the buried point log through FileWatch; during monitoring, WatchChecker can also be used to limit the number of buried point information entries being monitored. After FileWatch detects that the buried point information in the buried point log has changed, LogSeeker is used to read the buried point information incrementally.
  • A schematic structural diagram of the LogAgent is shown in Figure 3.
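  • The internal implementation of PathWatch and FileWatch is not disclosed in the text; purely as an illustration of the kind of monitoring described, the following sketch uses the standard Java NIO WatchService to watch a hypothetical log directory for created or modified buried point log files.

```java
import java.nio.file.*;

// Illustration only: PathWatch/FileWatch-style monitoring with plain Java NIO.
// The directory path and the handling of events are assumptions.
public class BuriedPointLogWatcher {
    public static void main(String[] args) throws Exception {
        Path logDir = Paths.get("/var/log/buried-point");     // hypothetical log directory
        WatchService watchService = FileSystems.getDefault().newWatchService();
        logDir.register(watchService,
                StandardWatchEventKinds.ENTRY_CREATE,          // a new buried point log appears
                StandardWatchEventKinds.ENTRY_MODIFY);         // new buried point entries appended

        while (true) {
            WatchKey key = watchService.take();                // blocks until something changes
            for (WatchEvent<?> event : key.pollEvents()) {
                Path changed = (Path) event.context();
                // A change means buried point information may have been added;
                // hand the file to an incremental reader (see the next sketch).
                System.out.println(event.kind() + ": " + changed);
            }
            key.reset();
        }
    }
}
```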
  • In step 100, the buried point information is read from the buried point log; specifically, it can be read from the buried point log in an incremental manner.
  • For example, the buried point log is an xx.log file in which each piece of buried point information is arranged sequentially in segments, and each segment of content appended to the buried point log is regarded as one added piece of buried point information. When reading the buried point information, the buried point information that has already been read is ignored and only the newly added buried point information is read; since the entire buried point log does not need to be read, the amount of data to be processed can be reduced.
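  • A minimal sketch of such incremental reading is given below; it assumes one buried point record per appended line and simply remembers the byte offset of the last read, which is one plausible way to realize the LogSeeker behaviour described above, not the disclosed implementation.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

// Sketch of incremental reading: only lines appended since the previous call
// are returned. The offset bookkeeping and the one-record-per-line layout are
// assumptions for illustration.
public class IncrementalLogReader {
    private long lastOffset = 0;

    public List<String> readNewEntries(String logFile) throws IOException {
        List<String> newEntries = new ArrayList<>();
        try (RandomAccessFile raf = new RandomAccessFile(logFile, "r")) {
            raf.seek(lastOffset);                  // skip everything already read
            String line;
            while ((line = raf.readLine()) != null) {
                newEntries.add(line);              // only newly appended buried point records
            }
            lastOffset = raf.getFilePointer();     // remember where to resume next time
        }
        return newEntries;
    }
}
```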
  • In a preferred embodiment of the present application, after the buried point information is read from the buried point log in step 100, the read buried point information may also be sent to message middleware.
  • The message middleware may be MetaQ (a queue-model message middleware) or Kafka (a distributed publish-subscribe messaging system); the message middleware is used to transmit the buried point information, and the message-accumulation capability of the message middleware is used to control the transmission speed of the buried point information.
  • Correspondingly, before the buried point information is processed, it needs to be extracted from the message middleware first.
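  • As one possible illustration of this step, the sketch below forwards buried point records to Kafka, one of the two middleware options named above; the broker address, topic name and record key are assumptions, and a MetaQ-based implementation would look different.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Sketch only: forwards buried point records to message middleware via the
// Kafka producer API. Broker address, topic name and key choice are assumed.
public class BuriedPointForwarder {
    private final KafkaProducer<String, String> producer;

    public BuriedPointForwarder() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // assumed broker address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(props);
    }

    public void forward(String appId, String buriedPointRecord) {
        // Keying by application identifier keeps one application's records together;
        // the topic name "buried-point-info" is hypothetical.
        producer.send(new ProducerRecord<>("buried-point-info", appId, buriedPointRecord));
    }
}
```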
  • Step 102 Aggregate the buried point information according to at least one dimension.
  • After the buried point information is obtained, it may be aggregated according to at least one dimension, so that the collection results of the client application are classified and counted. For example, aggregation is performed on the application dimension and the time dimension, and aggregated information of a client application with a minimum granularity of 1 minute is finally provided, that is, the minimum unit of the time dimension for aggregated information belonging to a client application is 1 minute.
  • An aggregation system for aggregating the buried point information in the buried point log may be deployed on the server side in advance, and the aggregation of the buried point information may be performed in the aggregation system; the structure of the aggregation system is shown in Figure 4.
  • In the aggregation system, a data receiver can be used to receive the information to be aggregated;
  • a data analyzer is used to analyze the aggregated information, which is then stored in a database;
  • the database is used to store the analyzed aggregated information, and a data query interface can be provided for querying the aggregated information.
  • Performing data aggregation with the aggregation system ensures the timeliness and accuracy of data processing.
  • Aggregating the buried point information according to the time dimension means that, according to the time point or time period corresponding to each piece of buried point information, multiple pieces of buried point information belonging to the same time point or the same time period are aggregated together as the aggregated information for that time point or time period. Aggregating the buried point information according to the application dimension means that, according to the identifier of the client application corresponding to each piece of buried point information, multiple pieces of buried point information belonging to the same client application are aggregated together as the aggregated information for that client application.
  • The multiple pieces of buried point information may be buried point information of the same client application or of different client applications.
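  • The following sketch illustrates this kind of aggregation by application dimension and 1-minute time buckets with a simple in-memory map; the record fields and the map-based storage are assumptions, whereas the actual aggregation system described above persists its results in a database.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of aggregation by application dimension and time dimension with a
// 1-minute minimum granularity, as described above. In-memory only; the real
// aggregation system stores results in a database.
public class BuriedPointAggregator {
    // key: appId | category | minute bucket, value: number of buried point records
    private final Map<String, Long> counts = new HashMap<>();

    public void add(String appId, String category, long timestampMillis) {
        long minuteBucket = timestampMillis / 60_000;          // 1-minute granularity
        counts.merge(appId + "|" + category + "|" + minuteBucket, 1L, Long::sum);
    }

    public long countFor(String appId, String category, long minuteBucket) {
        return counts.getOrDefault(appId + "|" + category + "|" + minuteBucket, 0L);
    }
}
```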
  • Besides the actually useful buried point information, the buried point log also contains some redundant information. Therefore, in a preferred embodiment of the present application, the read buried point information may also be cleaned and filtered before step 102: cleaning removes meaningless characters or marks such as separators, and filtering extracts the useful buried point information from the remaining information. Specifically, the filtering may be performed according to at least one of the buried point category and the buried point keyword in the buried point attribute.
  • Specifically, a real-time computing framework can be used to read the buried point information and to clean and filter it.
  • For example, the JStorm real-time computing framework can be used to extract the buried point information corresponding to the buried point attribute.
  • JStorm is a real-time streaming computing framework modeled on Storm. It has made continuous improvements in network input/output, thread model, resource scheduling, availability and stability; compared with Storm, it offers more stable operation, more powerful scheduling and higher execution efficiency.
  • As shown in FIG. 5, a schematic diagram of the relationship among the message middleware, JStorm and the aggregation system is given.
  • JStorm obtains the buried point information from the message middleware; in JStorm the information is cleaned by a Spout and filtered by a Bolt, and the cleaned and filtered buried point information corresponding to the buried point keyword is sent to the aggregation system.
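  • Purely as an illustration of the Spout-to-Bolt cleaning and filtering described above, the following bolt sketch is written against the Apache Storm API, which JStorm mirrors; the package names, the tuple layout and the filtering rule (keeping only one buried point category) are assumptions, not the disclosed topology.

```java
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Sketch of a filtering bolt: drops malformed records and keeps only buried
// point information matching the configured attribute. All field names and
// the separator are assumptions.
public class BuriedPointFilterBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
        String line = tuple.getStringByField("line");
        String[] fields = line.split("\\|");                   // assumed record separator
        if (fields.length < 4) {
            return;                                            // redundant or malformed content is dropped
        }
        String category = fields[0].trim();
        String keyword  = fields[1].trim();
        if (!"MySQL".equals(category)) {
            return;                                            // keep only the configured buried point category
        }
        collector.emit(new Values(category, keyword, fields[2].trim(), fields[3].trim()));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("category", "keyword", "responseTime", "success"));
    }
}
```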
  • Step 104 Query and display the aggregated buried point information according to the query request.
  • The server executing the present application can provide the user with an interface for querying the buried point information; specifically, the interface is provided by the above-described aggregation system.
  • The user can send a query request for the buried point information by accessing the interface; after receiving the query request, the server queries the required buried point information and presents it to the user.
  • Specifically, the user can input, by accessing the interface, query parameters representing the required information; the query request received by the server carries the query parameters, and the corresponding buried point information is further queried according to the query parameters.
  • The query parameters may include at least one of a time period, the buried point attribute, and an application identifier of the client application. Taking query parameters that include a time period, a buried point attribute, and an application identifier as an example, the buried point information of a certain keyword of a certain application within a certain time period can be queried in the aggregation system according to the query parameters.
  • The query results can be displayed in various forms such as charts, documents, and trend graphs. For example, for buried point information aggregated according to the time dimension, a visual graph can be generated and updated according to new buried point data acquired in real time.
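  • A minimal sketch of serving such a query against the aggregation database is shown below; the query parameters (application identifier, buried point category, time period) follow the text, while the table name, column names and the use of JDBC are assumptions.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch only: sums the aggregated call counts for one application, one buried
// point category and a time period. Table and column names are hypothetical.
public class AggregatedInfoQuery {

    public long queryCallCount(Connection db, String appId, String category,
                               long fromMinute, long toMinute) throws SQLException {
        String sql = "SELECT SUM(call_count) FROM buried_point_agg "
                   + "WHERE app_id = ? AND category = ? AND minute_bucket BETWEEN ? AND ?";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setString(1, appId);
            ps.setString(2, category);
            ps.setLong(3, fromMinute);
            ps.setLong(4, toMinute);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getLong(1) : 0L;
            }
        }
    }
}
```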
  • In summary, with the technical solution in the embodiments of the present application, the buried point log created by the client application calling the API in the buried point SDK is listened to, the buried point information is read from the buried point log, and, in response to the user's query request, the buried point information aggregated according to different dimensions is queried and displayed.
  • In the embodiments of the present application, buried points can be customized through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Furthermore, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted.
  • In addition, using message middleware to transmit the buried point information and to control the transmission speed can relieve the pressure on the aggregation system.
  • FIG. 6 a flow chart of steps of an embodiment of an information statistics method of another cloud platform-based client application according to the present application is shown, which may specifically include the following steps:
  • step 200 the client application invokes a buried point SDK provided by the server to customize the buried point.
  • step 202 the buried point SDK creates a buried point log.
  • Step 204 The LogAgent monitors the buried point log, and reads the buried point information in the buried point log in an incremental manner, and sends the buried point information to the message middleware.
  • Step 206 JStorm reads the buried point information from the message middleware, performs real-time calculation on the buried point information, and sends the calculated buried point information to the aggregation system.
  • Step 208 The aggregation system aggregates the buried point information according to the application dimension and/or the time dimension.
  • Step 210 The server queries and displays the aggregated buried point information according to the query request.
  • In summary, the client application in the embodiments of the present application can customize buried points through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Furthermore, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted.
  • FIG. 7 a schematic diagram of a system structure diagram of an embodiment of an information statistics method for a cloud platform-based client application according to the present application is shown.
  • The client application calls the API in the buried point SDK to customize buried points and create a buried point log.
  • The LogAgent reads the buried point information in the buried point log and sends it to the message middleware.
  • JStorm reads the buried point information from the message middleware in real time, cleans and filters it, and sends the cleaned and filtered buried point information to the aggregation system.
  • the aggregation system aggregates the received buried point information according to at least one dimension, and provides a data query function and a data display function externally.
  • In summary, with the technical solution in the embodiments of the present application, the buried point log created by the client application calling the API in the buried point SDK is listened to, the buried point information is read from the buried point log, and, in response to the user's query request, the buried point information aggregated according to different dimensions is queried and displayed.
  • In the embodiments of the present application, buried points can be customized through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Furthermore, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted.
  • FIG. 8 is a structural block diagram of an embodiment of an information statistics apparatus of a cloud platform-based client application according to the present application, which may specifically include the following modules:
  • the buried-point log monitoring module 800 is configured to listen to a buried point log created by each client application calling a buried software development kit;
  • the buried point information extraction module 802 is configured to extract buried point information from the buried point log, where the buried point log is created according to a predefined buried point attribute, and the client application pre-defines the buried point attribute by calling the application programming interface of the buried point software development kit;
  • the burying information aggregation module 804 is configured to aggregate the buried point information according to at least one dimension
  • the buried point information display module 806 is configured to query and display the aggregated buried point information according to the query request.
  • the buried point attribute includes a buried point category and/or a buried point keyword of the buried point information
  • the buried point category includes a buried point parent category or a buried point subcategory to which the buried point information belongs.
  • the buried-point log monitoring module is specifically configured to monitor a directory of the buried-point log and/or a change state of the buried-point information; if the directory and/or the buried If the point information changes, it is determined that the buried point information is added to the buried point log.
  • the buried point information reading module is configured to incrementally read out the buried point information from the buried point log.
  • the device further includes:
  • a buried point information sending module, configured to send the buried point information to message middleware after the buried point information is extracted from the buried point log, where the message middleware is used to transmit the buried point information and to control the transmission speed;
  • the device also includes:
  • a buried point information extraction module, configured to read the buried point information from the message middleware before the buried point information is aggregated according to the at least one dimension.
  • the device further includes:
  • a buried point information screening module, configured to filter the buried point information corresponding to the buried point attribute before the buried point information is aggregated according to the at least one dimension.
  • the burying point information aggregation module is specifically configured to aggregate the buried point information according to an application dimension and/or a time dimension.
  • the query request carries a query parameter input through a query interface
  • the buried point information display module is specifically configured to query and display the aggregated buried point information according to the query parameter; the query parameter includes at least one of a time period, the buried point attribute, and an application identifier of the client application.
  • In summary, with the technical solution in the embodiments of the present application, the buried point log created by the client application calling the API in the buried point SDK is listened to, the buried point information is read from the buried point log, and, in response to the user's query request, the buried point information aggregated according to different dimensions is queried and displayed.
  • In the embodiments of the present application, buried points can be customized through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Furthermore, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted.
  • Since the device embodiment is substantially similar to the method embodiment, its description is relatively simple, and for relevant parts reference can be made to the description of the method embodiment.
  • Those skilled in the art should understand that the embodiments of the present application can be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • the computer device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media includes both permanent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology. The information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information that can be accessed by a computing device.
  • As defined herein, computer readable media does not include transitory computer readable media, such as modulated data signals and carrier waves.
  • Embodiments of the present application are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to the embodiments of the present application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions can be provided to a processor of a general purpose computer, a special purpose computer, an embedded processor or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Fuzzy Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Debugging And Monitoring (AREA)
  • Stored Programmes (AREA)

Abstract

The present application provides an information statistics method and apparatus for a cloud platform-based client application. The method includes: listening to a buried point log created by each client application calling a buried point software development kit, and extracting buried point information from the buried point log, where the buried point log is created according to predefined buried point attributes and the client application pre-defines the buried point attributes by calling an application programming interface of the buried point software development kit; aggregating the buried point information according to at least one dimension; and querying and displaying the aggregated buried point information according to a query request. In the present application, buried points can be customized through the API of the SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. The buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results are classified and counted.

Description

Information statistics method and apparatus for a cloud platform-based client application

Technical Field
The present application relates to the field of cloud platform technologies, and in particular to an information statistics method for a cloud platform-based client application and an information statistics apparatus for a cloud platform-based client application.
Background
At present, technical solutions for information statistics of client applications usually declare Aspect Oriented Programming (AOP) interception through annotations on class methods or through configuration files. A specific flow is shown in Figure 1: the client application configures a proxy through annotations or through the AOP configuration file of Spring (an open-source framework); the proxy intercepts the execution methods of the client application, and after an execution method that is configured to be intercepted is accessed, the accessed execution method is recorded and the numbers of times the execution methods are intercepted are summarized.
In existing solutions that collect client application information by proxy interception, the objects to be intercepted by the proxy are customized in advance for a specific scenario and cannot be adjusted according to actual needs. Moreover, proxy interception can only intercept the execution methods of the client application; parameters or data related to an execution method cannot be obtained by interception. In this case, the interception results obtained by proxy interception cannot be further utilized, and the utilization conversion rate of the proxy interception results is not high.
Summary of the Invention
In view of the above problems, embodiments of the present application are proposed in order to provide an information statistics method for a cloud platform-based client application that overcomes the above problems or at least partially solves them, and a corresponding information statistics apparatus for a cloud platform-based client application.
To solve the above problems, the present application discloses an information statistics method for a cloud platform-based client application, including:
listening to a buried point log created by each client application calling a buried point software development kit, and extracting buried point information from the buried point log, where the buried point log is created according to a predefined buried point attribute, and the client application pre-defines the buried point attribute by calling an application programming interface of the buried point software development kit;
aggregating the buried point information according to at least one dimension; and
querying and displaying the aggregated buried point information according to a query request.
Preferably, the buried point attribute includes a buried point category and/or a buried point keyword of the buried point information, and the buried point category includes a buried point parent category or a buried point subcategory to which the buried point information belongs.
Preferably, the listening to the buried point log created by each client application calling the buried point software development kit includes:
listening to a directory of the buried point log and/or a change status of the buried point information; and
if the directory and/or the buried point information changes, determining that new buried point information is added to the buried point log.
Preferably, the extracting the buried point information from the buried point log includes:
extracting the buried point information from the buried point log in an incremental manner.
Preferably, after the buried point information is extracted from the buried point log, the method further includes:
sending the buried point information to message middleware, where the message middleware is used to transmit the buried point information and to control the transmission speed;
and before the aggregating the buried point information according to the at least one dimension, the method further includes:
reading the buried point information from the message middleware.
Preferably, before the aggregating the buried point information according to the at least one dimension, the method further includes:
filtering the buried point information corresponding to the buried point attribute.
Preferably, the aggregating the buried point information according to at least one dimension includes:
aggregating the buried point information according to an application dimension and/or a time dimension.
Preferably, the query request carries a query parameter input through a query interface;
and the querying and displaying the aggregated buried point information according to the query request includes:
querying and displaying the aggregated buried point information according to the query parameter, where the query parameter includes at least one of a time period, the buried point attribute, and an application identifier of the client application.
The present application further discloses an information statistics apparatus for a cloud platform-based client application, including:
a buried point log monitoring module, configured to listen to a buried point log created by each client application calling a buried point software development kit;
a buried point information extraction module, configured to extract buried point information from the buried point log, where the buried point log is created according to a predefined buried point attribute, and the client application pre-defines the buried point attribute by calling an application programming interface of the buried point software development kit;
a buried point information aggregation module, configured to aggregate the buried point information according to at least one dimension; and
a buried point information display module, configured to query and display the aggregated buried point information according to a query request.
Preferably, the buried point attribute includes a buried point category and/or a buried point keyword of the buried point information, and the buried point category includes a buried point parent category or a buried point subcategory to which the buried point information belongs.
Preferably, the buried point log monitoring module is specifically configured to listen to a directory of the buried point log and/or a change status of the buried point information, and if the directory and/or the buried point information changes, determine that new buried point information is added to the buried point log.
Preferably, the buried point information reading module is specifically configured to read the buried point information from the buried point log in an incremental manner.
Preferably, the apparatus further includes:
a buried point information sending module, configured to send the buried point information to message middleware after the buried point information is extracted from the buried point log, where the message middleware is used to transmit the buried point information and to control the transmission speed;
and the apparatus further includes:
a buried point information extraction module, configured to read the buried point information from the message middleware before the buried point information is aggregated according to the at least one dimension.
Preferably, the apparatus further includes:
a buried point information screening module, configured to filter the buried point information corresponding to the buried point attribute before the buried point information is aggregated according to the at least one dimension.
Preferably, the buried point information aggregation module is specifically configured to aggregate the buried point information according to an application dimension and/or a time dimension.
Preferably, the query request carries a query parameter input through a query interface;
and the buried point information display module is specifically configured to query and display the aggregated buried point information according to the query parameter, where the query parameter includes at least one of a time period, the buried point attribute, and an application identifier of the client application.
The embodiments of the present application have the following advantages:
The embodiments of the present application are based on a cloud platform: the buried point log created by the client application calling an application programming interface (API) in a buried point software development kit (SDK) is listened to, buried point information is read from the buried point log, and, in response to a user's query request, the buried point information aggregated according to different dimensions is queried and displayed. In the embodiments of the present application, buried points can be customized through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Moreover, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted.
Brief Description of the Drawings
Figure 1 is a schematic diagram of an information statistics flow for a client application in the background art;
Figure 2 is a flowchart of the steps of an embodiment of an information statistics method for a cloud platform-based client application according to the present application;
Figure 3 is a schematic structural diagram of a LogAgent in an embodiment of an information statistics method for a cloud platform-based client application according to the present application;
Figure 4 is a schematic structural diagram of an aggregation system in an embodiment of an information statistics method for a cloud platform-based client application according to the present application;
Figure 5 is a schematic diagram of the relationship among the message middleware, JStorm and the aggregation system in an embodiment of an information statistics method for a cloud platform-based client application according to the present application;
Figure 6 is a flowchart of the steps of another embodiment of an information statistics method for a cloud platform-based client application according to the present application;
Figure 7 is a schematic logical diagram of a system structure of an embodiment of an information statistics method for a cloud platform-based client application according to the present application;
Figure 8 is a structural block diagram of an embodiment of an information statistics apparatus for a cloud platform-based client application according to the present application.
Detailed Description
To make the above objects, features and advantages of the present application more comprehensible, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
One of the core concepts of the embodiments of the present application is to provide an information statistics method for a cloud platform-based client application, which can be applied to a server: the server provides a buried point SDK; the client application calls an API in the buried point SDK to customize buried points; the server systematically listens to the buried point log, collects and automatically aggregates, in real time, the messages in the buried point log through an intelligent agent, and queries and displays the aggregation results, thereby realizing information collection and statistics for the client application.
Referring to Figure 2, a flowchart of the steps of an embodiment of an information statistics method for a cloud platform-based client application according to the present application is shown, and the method may specifically include the following steps:
Step 100: listen to a buried point log created by each client application calling a buried point software development kit, and extract buried point information from the buried point log, where the buried point log is created according to a predefined buried point attribute, and the client application pre-defines the buried point attribute by calling an application programming interface of the buried point software development kit.
The buried point software development kit (SDK) is provided by the server to the client application, and the buried point log is generated by the buried point SDK. The buried point log is a specific log file used to record the collected buried point information. Specifically, buried point attributes are customized in the buried point SDK; through these custom buried point attributes, the buried point SDK stores, into the buried point log, every buried point message generated during the running of the client application that is related to the custom buried point attributes.
The buried point attributes may be predefined in the buried point SDK or customized through the client application. For example, the buried point SDK may be provided to the client application as a jar package; a Java client application needs to depend on the jar package of the buried point SDK to customize buried points. Specifically, the client application can configure the buried point attributes used to create the buried point log by accessing the API provided in the buried point SDK, so that buried point information is collected according to the buried point attributes. Because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved.
In a specific implementation, information such as the name and storage path of the buried point log may also be defined in advance through the SDK.
The buried point attribute may include at least one of a buried point category and a buried point keyword.
The buried point attribute may include a buried point category. For example, if a client application deployed on the cloud platform needs to perform stress testing and the user wants to know the load on the whole cloud platform system in the application dimension, the custom buried point category may be set to user query; the recorded user queries can then be obtained according to the buried point category, and the maximum QPS (queries per second) of the cloud platform system can further be calculated from those user queries.
The buried point categories may also be organized hierarchically, which includes dividing them into a buried point parent category (category) and buried point subcategories (subcategory) to which the buried point information belongs; that is, by defining a parent-child relationship between buried point categories, the collected buried point information can be classified and counted. For example, if multiple execution methods by which a client application calls a database need to be analyzed, a buried point parent category may be defined for the database call, and a buried point subcategory may be defined for each specific execution method.
The buried point attribute may include a buried point keyword: the client application defines the buried point keywords to be collected, and the buried point information related to those keywords is further obtained. For example, to analyze multiple operating parameters of a client application, buried point keywords of the SDK buried points can be defined separately so as to distinguish the operating parameters. Buried point keywords can also be defined in combination with buried point categories; for example, if a client application is deployed on 10 virtual machines in a cloud environment, buried point keywords of the SDK buried points can be defined to distinguish operating parameters such as the cpu (central processing unit), memory and disk of each virtual machine. Specifically, a buried point category is defined for each virtual machine, and buried point keywords under that category are defined for the operating parameters of each virtual machine.
The buried point attribute may further include a call duration of a target object and a call success identifier, which respectively represent how long a call to a target object (for example, an execution method) of the client application takes and whether the target object was called successfully, so that information such as the call duration and whether the call succeeded is recorded in the buried point log for analysis.
Specifically, the name of the API is logStat, and at least one of a buried point category, a buried point subcategory, a buried point keyword, a call duration, and whether the call succeeded may be defined. When the buried point category and the buried point subcategory are defined, the buried point SDK records the result as logStat(String category, String subCategory); when the buried point category, the buried point subcategory and the call duration are defined, the buried point SDK records the result as logStat(String category, String subCategory, Long responseTime); when the buried point category, the buried point keyword, the call duration and whether the call succeeded are defined, the buried point SDK records the result as logStat(String category, String keyWord, Long responseTime, Boolean success).
The collection of parameters related to an execution method is described below as a specific example. Assume that the specific class of the API provided by the SDK is MonitorService and the name of the API is MonitorService logStat, that the client application contains operations on a MySQL database, and that the method names are the insertMethod method and the updateMethod method. The number of times these two methods are called per minute while the application runs now needs to be counted; then, when the buried point SDK records that a method has been called once, the parent category (category) is set to MySQL, and the subcategories (subCategory) are defined as Insert and Update respectively. Taking the buried point of the insertMethod method as an example:
If the call duration of the method does not need to be recorded, the buried point SDK records the result as MonitorService.logStat("MySQL", "Insert").
If the execution time of the insertMethod method needs to be recorded, the recorded end time of the insertMethod method minus its start time is the execution time of the insertMethod method; assuming it is 10 milliseconds, the result recorded by the buried point SDK is MonitorService.logStat("MySQL", "Insert", "10").
If whether the call succeeded needs to be recorded, the recorded result MonitorService.logStat("MySQL", "Insert", "10", "true") indicates that the call succeeded, and MonitorService.logStat("MySQL", "Insert", "10", "false") indicates that the call failed. The buried point for updateMethod is used in the same way.
Listening to the buried point log may mean monitoring changes of the buried point log in order to discover newly added buried point information. For a buried point log that has a log directory, the change of the log directory can be monitored, and when the directory changes it is determined that buried point information has been added; or the change of the buried point information can be monitored, and when the buried point information changes it is determined that buried point information has been added; or the two approaches can be combined for monitoring.
In step 100, a LogAgent (log agent) may be used to listen to the buried point log. The LogAgent is used to listen to the buried point log and can specifically be divided into the following modules: PathWatch (directory listener), used to monitor directory changes of the buried point log; FileWatch (file listener), used to monitor changes of the buried point information in the buried point log; WatchChecker (listening limiter), used to limit the number of buried point information entries being monitored in the buried point log; and LogSeeker (log seeker), used to read the buried point information in the buried point log incrementally. Applied to the present application, the LogAgent is used to listen to the buried point log; specifically, the change status of the directory of the buried point log can be monitored through PathWatch, and if the directory changes, it indicates that new buried point information has been added to the buried point log. Listening to the buried point log includes not only monitoring the directory of the buried point log but also, on the basis of PathWatch monitoring the change status of the directory, further monitoring the buried point information in the buried point log through FileWatch; during monitoring, WatchChecker can also be used to limit the number of buried point information entries being monitored. After FileWatch detects that the buried point information in the buried point log has changed, LogSeeker is used to read the buried point information in an incremental manner. A schematic structural diagram of the LogAgent is shown in Figure 3.
In step 100, the buried point information is read from the buried point log; specifically, the buried point information can be read from the buried point log in an incremental manner. For example, the buried point log is an xx.log file in which each piece of buried point information is arranged sequentially in segments, and each segment of content appended to the buried point log is regarded as one added piece of buried point information. When reading the buried point information, the buried point information that has already been read is ignored and only the newly added buried point information is read; since the entire buried point log does not need to be read, the amount of data to be processed can be reduced.
In a preferred embodiment of the present application, after the buried point information is read from the buried point log in step 100, the read buried point information may also be sent to message middleware. The message middleware may be MetaQ (a queue-model message middleware) or Kafka (a distributed publish-subscribe messaging system); the message middleware is used to transmit the buried point information, and the message-accumulation capability of the message middleware is used to control the transmission speed of the buried point information. Correspondingly, before the buried point information is processed, the buried point information needs to be extracted from the message middleware first.
Step 102: aggregate the buried point information according to at least one dimension.
After the buried point information is obtained, the buried point information may be aggregated according to at least one dimension, so that the collection results of the client application are classified and counted. For example, aggregation is performed on the application dimension and the time dimension, and aggregated information of a client application with a minimum granularity of 1 minute is finally provided, that is, the minimum unit of the time dimension for aggregated information belonging to a client application is 1 minute.
An aggregation system for aggregating the buried point information in the buried point log may be deployed on the server side in advance, and the aggregation of the buried point information may be performed in the aggregation system; the structure of the aggregation system is shown in Figure 4. In the aggregation system, a data receiver can be used to receive the information to be aggregated; a data analyzer is used to analyze the aggregated information, which is then stored in a database; the database is used to store the analyzed aggregated information, and a data query interface can also be provided for querying the aggregated information. Performing data aggregation with the aggregation system ensures the timeliness and accuracy of data processing.
Aggregating the buried point information according to the time dimension specifically means that, according to the time point or time period corresponding to each piece of buried point information, multiple pieces of buried point information belonging to the same time point or the same time period are aggregated together as the aggregated information for that time point or time period. Aggregating the buried point information according to the application dimension specifically means that, according to the identifier of the client application corresponding to each piece of buried point information, multiple pieces of buried point information belonging to the same client application are aggregated together as the aggregated information for that client application. The multiple pieces of buried point information may be buried point information of the same client application or of different client applications.
Besides the actually useful buried point information, the buried point log also contains some redundant information. Therefore, in a preferred embodiment of the present application, the read buried point information may also be cleaned and filtered before step 102: cleaning removes meaningless characters or marks such as separators, and filtering extracts the useful buried point information from the remaining information. Specifically, the filtering may be performed according to at least one of the buried point category and the buried point keyword in the buried point attribute.
Specifically, a real-time computing framework can be used to read the buried point information and to clean and filter it. For example, the JStorm real-time computing framework can be adopted, and JStorm is used to extract the buried point information corresponding to the buried point attribute. JStorm is a real-time streaming computing framework modeled on Storm; it has made continuous improvements in network input/output, thread model, resource scheduling, availability and stability, and compared with Storm it offers more stable operation, more powerful scheduling and higher execution efficiency. As shown in Figure 5, a schematic diagram of the relationship among the message middleware, JStorm and the aggregation system is given: JStorm obtains the buried point information from the message middleware; in JStorm the information is cleaned by a Spout and filtered by a Bolt, and the cleaned and filtered buried point information corresponding to the buried point keyword is sent to the aggregation system.
Step 104: query and display the aggregated buried point information according to a query request.
The server executing the present application can provide the user with an interface for querying the buried point information; specifically, the interface may be provided by the above-described aggregation system. The user can send a query request for the buried point information by accessing the interface; after receiving the query request, the server queries the required buried point information and presents it to the user.
Specifically, the user can input, by accessing the interface, query parameters representing the required information; the query request received by the server carries the query parameters, and the corresponding buried point information is further queried according to the query parameters. The query parameters may include at least one of a time period, the buried point attribute, and an application identifier of the client application. Taking query parameters that include a time period, a buried point attribute, and an application identifier as an example, the buried point information of a certain keyword of a certain application within a certain time period can be queried in the aggregation system according to the query parameters. The query results can be displayed in various forms such as charts, documents and trend graphs. For example, for buried point information aggregated according to the time dimension, a visual graph can be generated and updated according to new buried point data acquired in real time.
In summary, with the technical solution in the embodiments of the present application, the buried point log created by the client application calling the API in the buried point SDK is listened to, the buried point information is read from the buried point log, and, in response to the user's query request, the buried point information aggregated according to different dimensions is queried and displayed. In the embodiments of the present application, buried points can be customized through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Moreover, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted. In addition, using message middleware to transmit the buried point information and to control the transmission speed can relieve the pressure on the aggregation system.
Referring to Figure 6, a flowchart of the steps of another embodiment of an information statistics method for a cloud platform-based client application according to the present application is shown, and the method may specifically include the following steps:
Step 200: the client application calls the buried point SDK provided by the server to customize buried points.
Step 202: the buried point SDK creates a buried point log.
Step 204: the LogAgent listens to the buried point log, reads the buried point information in the buried point log in an incremental manner, and sends the buried point information to the message middleware.
Step 206: JStorm reads the buried point information from the message middleware, performs real-time computation on the buried point information, and sends the computed buried point information to the aggregation system.
Step 208: the aggregation system aggregates the buried point information according to the application dimension and/or the time dimension.
Step 210: the server queries and displays the aggregated buried point information according to a query request.
In summary, the client application in the embodiments of the present application can customize buried points through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Moreover, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted.
Referring to Figure 7, a schematic logical diagram of the system structure of an embodiment of an information statistics method for a cloud platform-based client application according to the present application is shown: the client application calls the API in the buried point SDK to customize buried points and create a buried point log; the LogAgent reads the buried point information in the buried point log and sends it to the message middleware; JStorm reads the buried point information from the message middleware in real time, cleans and filters it, and sends the cleaned and filtered buried point information to the aggregation system; the aggregation system aggregates the received buried point information according to at least one dimension, and externally provides a data query function and a data display function.
In summary, with the technical solution in the embodiments of the present application, the buried point log created by the client application calling the API in the buried point SDK is listened to, the buried point information is read from the buried point log, and, in response to the user's query request, the buried point information aggregated according to different dimensions is queried and displayed. In the embodiments of the present application, buried points can be customized through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Moreover, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted.
It should be noted that, for simplicity of description, the method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the embodiments of the present application are not limited by the described order of actions, because according to the embodiments of the present application, some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present application.
Referring to Figure 8, a structural block diagram of an embodiment of an information statistics apparatus for a cloud platform-based client application according to the present application is shown, and the apparatus may specifically include the following modules:
a buried point log monitoring module 800, configured to listen to a buried point log created by each client application calling a buried point software development kit;
a buried point information extraction module 802, configured to extract buried point information from the buried point log, where the buried point log is created according to a predefined buried point attribute, and the client application pre-defines the buried point attribute by calling an application programming interface of the buried point software development kit;
a buried point information aggregation module 804, configured to aggregate the buried point information according to at least one dimension; and
a buried point information display module 806, configured to query and display the aggregated buried point information according to a query request.
In the embodiments of the present application, preferably, the buried point attribute includes a buried point category and/or a buried point keyword of the buried point information, and the buried point category includes a buried point parent category or a buried point subcategory to which the buried point information belongs.
In the embodiments of the present application, preferably, the buried point log monitoring module is specifically configured to listen to a directory of the buried point log and/or a change status of the buried point information, and if the directory and/or the buried point information changes, determine that new buried point information is added to the buried point log.
In the embodiments of the present application, preferably, the buried point information reading module is specifically configured to read the buried point information from the buried point log in an incremental manner.
In the embodiments of the present application, preferably, the apparatus further includes:
a buried point information sending module, configured to send the buried point information to message middleware after the buried point information is extracted from the buried point log, where the message middleware is used to transmit the buried point information and to control the transmission speed;
and the apparatus further includes:
a buried point information extraction module, configured to read the buried point information from the message middleware before the buried point information is aggregated according to the at least one dimension.
In the embodiments of the present application, preferably, the apparatus further includes:
a buried point information screening module, configured to filter the buried point information corresponding to the buried point attribute before the buried point information is aggregated according to the at least one dimension.
In the embodiments of the present application, preferably, the buried point information aggregation module is specifically configured to aggregate the buried point information according to an application dimension and/or a time dimension.
In the embodiments of the present application, preferably, the query request carries a query parameter input through a query interface;
and the buried point information display module is specifically configured to query and display the aggregated buried point information according to the query parameter, where the query parameter includes at least one of a time period, the buried point attribute, and an application identifier of the client application.
In summary, with the technical solution in the embodiments of the present application, the buried point log created by the client application calling the API in the buried point SDK is listened to, the buried point information is read from the buried point log, and, in response to the user's query request, the buried point information aggregated according to different dimensions is queried and displayed. In the embodiments of the present application, buried points can be customized through the API of the buried point SDK; because collection of client application related data can be customized according to user requirements in different application scenarios, the scope of information collection for the client application is expanded and the utilization conversion rate of the collected results is improved. Moreover, the buried point information read from the buried point log is aggregated according to different dimensions, so that the collection results of the client application are classified and counted.
Since the device embodiment is substantially similar to the method embodiment, its description is relatively simple, and for relevant parts reference can be made to the description of the method embodiment.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on the differences from the other embodiments, and for the same or similar parts among the embodiments reference can be made to one another.
Those skilled in the art should understand that the embodiments of the present application can be provided as a method, an apparatus, or a computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
In a typical configuration, the computer device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include non-persistent memory, random access memory (RAM) and/or non-volatile memory in computer readable media, such as read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer readable medium. Computer readable media include persistent and non-persistent, removable and non-removable media, and information storage can be implemented by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory media, such as modulated data signals and carrier waves.
The embodiments of the present application are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to the embodiments of the present application. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, a special purpose computer, an embedded processor or another programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing terminal device, such that a series of operation steps are performed on the computer or other programmable terminal device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable terminal device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the embodiments of the present application have been described, those skilled in the art, once they learn of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present application.
Finally, it should also be noted that, in this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply that any such actual relationship or order exists between these entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article or terminal device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or terminal device that includes the element.
The information statistics method for a cloud platform-based client application and the information statistics apparatus for a cloud platform-based client application provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, changes can be made to the specific implementations and the application scope according to the idea of the present application. In conclusion, the content of this specification should not be construed as limiting the present application.

Claims (16)

  1. An information statistics method for a cloud platform-based client application, comprising:
    listening to a buried point log created by each client application calling a buried point software development kit, and extracting buried point information from the buried point log, wherein the buried point log is created according to a predefined buried point attribute, and the client application pre-defines the buried point attribute by calling an application programming interface of the buried point software development kit;
    aggregating the buried point information according to at least one dimension; and
    querying and displaying the aggregated buried point information according to a query request.
  2. The method according to claim 1, wherein the buried point attribute comprises a buried point category and/or a buried point keyword of the buried point information, and the buried point category comprises a buried point parent category or a buried point subcategory to which the buried point information belongs.
  3. The method according to claim 1, wherein the listening to the buried point log created by each client application calling the buried point software development kit comprises:
    listening to a directory of the buried point log and/or a change status of the buried point information; and
    if the directory and/or the buried point information changes, determining that new buried point information is added to the buried point log.
  4. The method according to claim 1, wherein the extracting the buried point information from the buried point log comprises:
    extracting the buried point information from the buried point log in an incremental manner.
  5. The method according to claim 1, wherein after the buried point information is extracted from the buried point log, the method further comprises:
    sending the buried point information to message middleware, wherein the message middleware is used to transmit the buried point information and to control a transmission speed; and
    before the aggregating the buried point information according to the at least one dimension, the method further comprises:
    reading the buried point information from the message middleware.
  6. The method according to claim 1, wherein before the aggregating the buried point information according to the at least one dimension, the method further comprises:
    filtering the buried point information corresponding to the buried point attribute.
  7. The method according to claim 1, wherein the aggregating the buried point information according to at least one dimension comprises:
    aggregating the buried point information according to an application dimension and/or a time dimension.
  8. The method according to claim 1, wherein the query request carries a query parameter input through a query interface; and
    the querying and displaying the aggregated buried point information according to the query request comprises:
    querying and displaying the aggregated buried point information according to the query parameter, wherein the query parameter comprises at least one of a time period, the buried point attribute, and an application identifier of the client application.
  9. A cloud platform-based information statistics apparatus for a client application, characterized by comprising:
    a buried point log monitoring module, configured to monitor buried point logs created by respective client applications by calling a buried point software development kit;
    a buried point information extraction module, configured to extract buried point information from the buried point logs, wherein the buried point logs are created according to a pre-defined buried point attribute, and the client applications pre-define the buried point attribute by calling an application programming interface of the buried point software development kit;
    a buried point information aggregation module, configured to aggregate the buried point information according to at least one dimension; and
    a buried point information display module, configured to query and display the aggregated buried point information according to a query request.
  10. The apparatus according to claim 9, characterized in that the buried point attribute comprises a buried point category and/or a buried point keyword of the buried point information, and the buried point category comprises a buried point parent category or a buried point subcategory to which the buried point information belongs.
  11. The apparatus according to claim 9, characterized in that the buried point log monitoring module is specifically configured to monitor a change status of a directory of the buried point logs and/or of the buried point information, and, if the directory and/or the buried point information changes, determine that buried point information is newly added to the buried point logs.
  12. The apparatus according to claim 9, characterized in that the buried point information extraction module is specifically configured to extract the buried point information from the buried point logs in an incremental manner.
  13. The apparatus according to claim 9, characterized in that the apparatus further comprises:
    a buried point information sending module, configured to send the buried point information to a message middleware after the buried point information is extracted from the buried point logs, wherein the message middleware is configured to transmit the buried point information and control a transmission speed; and
    the apparatus further comprises:
    a buried point information reading module, configured to read the buried point information from the message middleware before the buried point information is aggregated according to the at least one dimension.
  14. The apparatus according to claim 9, characterized in that the apparatus further comprises:
    a buried point information filtering module, configured to filter the buried point information corresponding to the buried point attribute before the buried point information is aggregated according to the at least one dimension.
  15. The apparatus according to claim 9, characterized in that the buried point information aggregation module is specifically configured to aggregate the buried point information according to an application dimension and/or a time dimension.
  16. The apparatus according to claim 9, characterized in that the query request carries a query parameter input through a query interface; and
    the buried point information display module is specifically configured to query and display the aggregated buried point information according to the query parameter, wherein the query parameter comprises at least one of a time period, the buried point attribute, and an application identifier of the client application.
PCT/CN2017/074167 2016-03-01 2017-02-20 一种基于云平台的客户端应用的信息统计方法和装置 WO2017148293A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2018544842A JP2019517040A (ja) 2016-03-01 2017-02-20 クラウドプラットフォームベースのクライアントアプリケーション情報統計方法および装置
EP17759144.3A EP3425524A1 (en) 2016-03-01 2017-02-20 Cloud platform-based client application data calculation method and device
US16/119,899 US20180365085A1 (en) 2016-03-01 2018-08-31 Method and apparatus for monitoring client applications

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610115058.XA CN107145489B (zh) 2016-03-01 2016-03-01 一种基于云平台的客户端应用的信息统计方法和装置
CN201610115058.X 2016-03-01

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/119,899 Continuation US20180365085A1 (en) 2016-03-01 2018-08-31 Method and apparatus for monitoring client applications

Publications (1)

Publication Number Publication Date
WO2017148293A1 true WO2017148293A1 (zh) 2017-09-08

Family

ID=59743528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/074167 WO2017148293A1 (zh) 2016-03-01 2017-02-20 一种基于云平台的客户端应用的信息统计方法和装置

Country Status (6)

Country Link
US (1) US20180365085A1 (zh)
EP (1) EP3425524A1 (zh)
JP (1) JP2019517040A (zh)
CN (1) CN107145489B (zh)
TW (1) TW201734858A (zh)
WO (1) WO2017148293A1 (zh)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180157700A1 (en) * 2016-12-06 2018-06-07 International Business Machines Corporation Storing and verifying event logs in a blockchain
CN107832784B (zh) * 2017-10-27 2021-04-30 维沃移动通信有限公司 一种图像美化的方法和一种移动终端
CN108551411A (zh) * 2018-04-28 2018-09-18 努比亚技术有限公司 数据采集方法、移动终端及计算机可读存储介质
CN108920355B (zh) * 2018-05-31 2024-02-02 康键信息技术(深圳)有限公司 打点事件信息采集方法、装置、计算机设备和存储介质
CN109189810B (zh) * 2018-08-28 2021-07-02 拉扎斯网络科技(上海)有限公司 查询方法、装置、电子设备及计算机可读存储介质
CN109688207B (zh) * 2018-12-11 2022-06-03 北京云中融信网络科技有限公司 日志传输方法、装置及服务器
CN110442511B (zh) * 2019-06-25 2022-11-18 苏宁云计算有限公司 可视化埋点测试方法及装置
CN110377383B (zh) * 2019-07-02 2023-02-03 上海上湖信息技术有限公司 一种查看应用软件性能参数的方法、装置及存储介质
US11514360B2 (en) * 2019-07-12 2022-11-29 EMC IP Holding Company LLC Method and system for verifying state monitor reliability in hyper-converged infrastructure appliances
CN110489180B (zh) * 2019-08-07 2023-03-28 北京字节跳动网络技术有限公司 一种埋点上报方法、装置、介质和电子设备
CN110737589A (zh) * 2019-09-10 2020-01-31 北京字节跳动网络技术有限公司 一种自动埋点的方法、装置、介质和电子设备
CN110597777A (zh) * 2019-09-18 2019-12-20 金瓜子科技发展(北京)有限公司 一种日志处理方法和装置
CN112632595B (zh) * 2019-09-24 2024-04-09 中国石油化工股份有限公司 一种基于地震资料解释软件的信息搜集方法及***
CN110825711A (zh) * 2019-10-17 2020-02-21 上海易点时空网络有限公司 基于Flume快速分区传输数据的方法以及装置
CN111221717A (zh) * 2020-01-17 2020-06-02 北大方正集团有限公司 用户行为数据处理方法和计算机可读存储介质
CN111563015B (zh) * 2020-04-15 2023-04-21 成都欧珀通信科技有限公司 数据监控方法及装置、计算机可读介质及终端设备
CN111651324B (zh) * 2020-06-02 2023-09-01 上海泛微网络科技股份有限公司 一种日志收集方法、装置
US11579847B2 (en) 2020-06-10 2023-02-14 Snap Inc. Software development kit engagement monitor
US11042465B1 (en) 2020-09-02 2021-06-22 Coupang Corp. Systems and methods for analyzing application loading times
CN112579408A (zh) * 2020-10-29 2021-03-30 上海钱拓网络技术有限公司 一种埋点信息的分类方法
CN112579412A (zh) * 2020-12-10 2021-03-30 上海艾融软件股份有限公司 一种用户行为采集方法、装置、***与介质
US11966323B2 (en) 2021-01-05 2024-04-23 Red Hat, Inc. Troubleshooting software services based on system calls
CN112948226B (zh) * 2021-02-05 2024-04-02 中国建设银行股份有限公司 一种用户画像绘制方法和装置
CN115174226B (zh) * 2022-07-05 2024-05-03 北京鉴微知著智能科技有限公司 基于人工智能和大数据的用户行为预测方法、设备、介质及产品

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104348650A (zh) * 2013-08-05 2015-02-11 腾讯科技(深圳)有限公司 网站的监控方法、业务装置及***
CN104572043A (zh) * 2013-10-16 2015-04-29 阿里巴巴集团控股有限公司 一种对客户端应用的控件进行实时埋点的方法及装置
CN104915296A (zh) * 2015-06-29 2015-09-16 北京金山安全软件有限公司 埋点测试方法、数据的查询方法及装置

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7620715B2 (en) * 2005-06-29 2009-11-17 Tripwire, Inc. Change event correlation
JPWO2010090027A1 (ja) * 2009-02-05 2012-08-09 日本電気株式会社 分散型イベント配信システムにおけるブローカノードおよびイベントトピック制御方法
US20130007769A1 (en) * 2011-06-29 2013-01-03 International Business Machines Corporation Tracking File-Centric Events
US20130084838A1 (en) * 2011-10-03 2013-04-04 Geospatial Holdings, Inc. System, method, and apparatus for viewing underground structures
AU2012370492B2 (en) * 2012-02-21 2016-03-24 Ensighten, Inc. Graphical overlay related to data mining and analytics
CN103631699B (zh) * 2012-08-28 2019-02-12 北京京东尚科信息技术有限公司 日志管理***及日志监控、获取和查询方法
US20140250138A1 (en) * 2013-03-04 2014-09-04 Vonage Network Llc Method and apparatus for optimizing log file filtering
US9602679B2 (en) * 2014-02-27 2017-03-21 Lifeprint Llc Distributed printing social network
US20160042388A1 (en) * 2014-08-07 2016-02-11 Somo Innovations Ltd Tracking and analyzing mobile device activity related to mobile display campaigns
CN104750824A (zh) * 2015-03-31 2015-07-01 北京百度网讯科技有限公司 应用功能数据的处理方法及装置

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104348650A (zh) * 2013-08-05 2015-02-11 腾讯科技(深圳)有限公司 网站的监控方法、业务装置及***
CN104572043A (zh) * 2013-10-16 2015-04-29 阿里巴巴集团控股有限公司 一种对客户端应用的控件进行实时埋点的方法及装置
CN104915296A (zh) * 2015-06-29 2015-09-16 北京金山安全软件有限公司 埋点测试方法、数据的查询方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3425524A4 *

Also Published As

Publication number Publication date
CN107145489A (zh) 2017-09-08
US20180365085A1 (en) 2018-12-20
EP3425524A4 (en) 2019-01-09
CN107145489B (zh) 2020-12-01
JP2019517040A (ja) 2019-06-20
TW201734858A (zh) 2017-10-01
EP3425524A1 (en) 2019-01-09

Similar Documents

Publication Publication Date Title
WO2017148293A1 (zh) 一种基于云平台的客户端应用的信息统计方法和装置
US11386127B1 (en) Low-latency streaming analytics
US11792291B1 (en) Proxying hypertext transfer protocol (HTTP) requests for microservices
US11379475B2 (en) Analyzing tags associated with high-latency and error spans for instrumented software
US11836148B1 (en) Data source correlation user interface
US11829330B2 (en) Log data extraction from data chunks of an isolated execution environment
US11829236B2 (en) Monitoring statuses of monitoring modules of a distributed computing system
US11843528B2 (en) Lower-tier application deployment for higher-tier system
US11615082B1 (en) Using a data store and message queue to ingest data for a data intake and query system
US11966797B2 (en) Indexing data at a data intake and query system based on a node capacity threshold
US11755531B1 (en) System and method for storage of data utilizing a persistent queue
US11436116B1 (en) Recovering pre-indexed data from a shared storage system following a failed indexer
US11609913B1 (en) Reassigning data groups from backup to searching for a processing node
US11663172B2 (en) Cascading payload replication
US11892976B2 (en) Enhanced search performance using data model summaries stored in a remote data store
US20220245091A1 (en) Facilitating generation of data model summaries
CN110968561B (zh) 日志存储方法和分布式***
US11829415B1 (en) Mapping buckets and search peers to a bucket map identifier for searching
US12019634B1 (en) Reassigning a processing node from downloading to searching a data group
US11704285B1 (en) Metrics and log integration

Legal Events

Date Code Title Description
ENP Entry into the national phase — Ref document number: 2018544842; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase — Ref country code: DE
WWE Wipo information: entry into national phase — Ref document number: 2017759144; Country of ref document: EP
ENP Entry into the national phase — Ref document number: 2017759144; Country of ref document: EP; Effective date: 20181001
121 Ep: the epo has been informed by wipo that ep was designated in this application — Ref document number: 17759144; Country of ref document: EP; Kind code of ref document: A1