EP3293641B1 - Data processing method and system


Info

Publication number
EP3293641B1
Authority
EP
European Patent Office
Prior art keywords
key-value pairs
hotspot
Prior art date
Legal status
Active
Application number
EP16789273.6A
Other languages
German (de)
French (fr)
Other versions
EP3293641A1 (en)
EP3293641A4 (en)
Inventor
Min Han
Current Assignee
Advanced New Technologies Co Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to PL16789273T (patent PL3293641T3)
Publication of EP3293641A1
Publication of EP3293641A4
Application granted
Publication of EP3293641B1
Legal status: Active

Classifications

    • G06F16/24539 Query rewriting; Transformation using cached or materialised query results
    • G06F16/2219 Large Object storage; Management thereof
    • G06F16/2272 Indexing structures; Management thereof
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F7/08 Sorting, i.e. grouping record carriers in numerical or other ordered sequence according to the classification of at least some of the information they carry
    • G06N20/00 Machine learning

Definitions

  • the method further includes: when a non-hotspot key-value pair is called, processing the non-hotspot key-value pair by using a reduce function to generate data for calling.
  • the hotspot key-value pairs are pre-processed by the data processing system by using the reduce function, to generate data for calling by the service system.
  • the data processing system processes the key-value pairs by using the reduce function in real time, to generate data for calling by the service system. Therefore, the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system is reduced, execution efficiency of data processing is improved, the time of the service system waiting for a data processing result is reduced, service processing is smooth, and user experience is desirable.
  • a data processing method includes the following steps:
  • the step of selecting a part of key-value pairs as the hotspot key-value pairs is set to be performed after the mapping processing step.
  • the volume of key-value pair data that undergoes reducing processing is reduced, and the problem of a large data volume may be alleviated to some extent. Therefore, the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system is reduced, execution efficiency of data processing is improved, the time of the service system waiting for a data processing result is reduced, service processing is smooth, and user experience is desirable.
  • the present application further provides a data processing system 1, including:
  • the screening module 10 configured to select a part of to-be-processed key-value pairs as hotspot key-value pairs is specifically configured to: randomly select several to-be-processed key-value pairs as hotspot key-value pairs.
  • the screening module 10 configured to select a part of to-be-processed key-value pairs as hotspot key-value pairs is specifically configured to:
  • screening module 10 configured to select a part of mapping key-value pairs as hotspot key-value pairs is further specifically configured to:
  • the screening module 10 configured to select a part of to-be-processed key-value pairs as hotspot key-value pairs is specifically configured to:
  • system further includes a rule optimization module 40, configured to: optimize the screening rule by using a machine learning model.
  • mapping module 20 is configured to: map the non-hotspot key-value pairs to obtain intermediate result key-value pairs corresponding to the non-hotspot key-value pairs.
  • a data processing system 1 includes:
  • the data processing system pre-processes hotspot key-value pairs to facilitate calling by a service system, while non-hotspot key-value pairs are processed only when called by the service system, which reduces the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system, improves execution efficiency of data processing, reduces the time of the service system waiting for a data processing result, makes service processing smooth, and provides desirable user experience.
  • the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may be implemented in the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may be a computer program product implemented on one or more computer usable storage media (including, but not limited to, a magnetic disk memory, a CD-ROM, an optical memory, and the like) including computer usable program code.
  • These computer program instructions may be provided for a general-purpose computer, a special-purpose computer, an embedded processor, or a processor of another programmable numerical processing device to generate a machine, so that the instructions executed by a computer or a processor of another programmable numerical processing device generate an apparatus for implementing a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • These computer program instructions may also be stored in a computer readable memory that can instruct the computer or another programmable numerical processing device to work in a particular manner, such that the instructions stored in the computer readable memory generate an article of manufacture that includes an instruction apparatus.
  • the instruction apparatus implements a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • These computer program instructions may also be loaded onto a computer or another programmable numerical processing device, such that a series of operating steps are performed on the computer or another programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or another programmable device provide steps for implementing a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • a computing device includes one or more processors (CPU), an input/output interface, a network interface, and a memory.
  • the memory may include a volatile memory, a random access memory (RAM) and/or a non-volatile memory or the like in a computer readable medium, for example, a read only memory (ROM) or a flash RAM.
  • the computer readable medium includes non-volatile and volatile media as well as movable and non-movable media, and can implement information storage by means of any method or technology.
  • Information may be a computer readable instruction, a data structure, and a module of a program or other data.
  • a storage medium of a computer includes, for example, but is not limited to, a phase change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), other types of RAMs, a ROM, an electrically erasable programmable read-only memory (EEPROM), a flash memory or other memory technologies, a compact disk read only memory (CD-ROM), a digital versatile disc (DVD) or other optical storages, a cassette tape, a magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, and can be used to store information accessible to the computing device.
  • the computer readable medium does not include transitory media, such as a modulated data signal and a carrier.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may be implemented in the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present application may employ the form of a computer program product implemented on one or more computer usable storage media (including, but not limited to, a magnetic disk memory, a CD-ROM, an optical memory, and the like) including computer usable program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Telephonic Communication Services (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Description

    Technical Field
  • The present application relates to the field of big data technologies, and in particular, to a data processing method and system.
  • Background Art
  • With the development of computer technologies, the volume of data that needs to be processed by a computer has grown so large that a single computer can no longer process large-scale data. Technologies have therefore been developed that combine several computers into a computer cluster to process large-scale data in parallel.
  • A Hadoop distributed cluster system architecture is such a system architecture. A Hadoop system can construct a computer cluster from a large number of inexpensive computers and use this cluster, in place of an expensive high-performance computer, to perform high-speed computation and storage. The Hadoop system mainly includes a distributed file system and a MapReduce system. The distributed file system manages and stores data. The MapReduce system processes data provided by the distributed file system, mainly by: decomposing a to-be-processed data set into a plurality of data blocks; mapping each piece of original key-value pair data in each data block to obtain intermediate result key-value pair data corresponding to each piece of original key-value pair data; and, after intermediate result key-value pair data corresponding to all original key-value pair data is obtained, reducing all intermediate result key-value pair data to obtain the corresponding final result key-value pair data.
  • In the above processing manner, a big task may be divided into a large number of small tasks that are executed by a large number of computers (also referred to as task executors) in a distributed system. In this way, quick processing of mass data can be implemented. This processing manner does not reduce the total computing resources required; rather, it distributes them across a large number of computers, greatly shortening the required processing time. It is suitable for an offline scenario that is insensitive to time. An online service scenario, e.g., an instant messaging scenario, generally requires that mass data processing be accomplished and a result be output within a short time, and is therefore sensitive to time.
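  • A minimal, self-contained sketch of this map/reduce data flow is given below. It is not the Hadoop API but an in-memory Python analogue; the word-count job, the split_into_blocks helper, and the block size are illustrative assumptions.

```python
from collections import defaultdict

def split_into_blocks(records, block_size=3):
    """Decompose the to-be-processed data set into data blocks."""
    for i in range(0, len(records), block_size):
        yield records[i:i + block_size]

def map_fn(key, value):
    """Map one original key-value pair to intermediate result key-value pairs."""
    # Illustrative job: count the words appearing in each record's text value.
    return [(word, 1) for word in value.split()]

def reduce_fn(key, values):
    """Reduce all intermediate values for one key to a final result key-value pair."""
    return (key, sum(values))

def run_job(original_pairs):
    intermediate = defaultdict(list)
    for block in split_into_blocks(original_pairs):
        for key, value in block:                  # mapping phase
            for k, v in map_fn(key, value):
                intermediate[k].append(v)
    # Reducing phase: one final result pair per intermediate key, ordered by key.
    return [reduce_fn(k, vs) for k, vs in sorted(intermediate.items())]

if __name__ == "__main__":
    docs = [("doc1", "pay bill"), ("doc2", "pay order"), ("doc3", "push info")]
    print(run_job(docs))  # [('bill', 1), ('info', 1), ('order', 1), ('pay', 2), ('push', 1)]
```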
  • In the process of implementing the present application, the inventor found that the prior art has at least the following problem:
    In an online service scenario that is sensitive to time, a large number of computer resources are still occupied to process mass data, that is, the volume of processed data remains huge. The process in which the Hadoop system processes data therefore takes a long time, so the service system that calls the Hadoop system waits a long time for a data processing result, execution efficiency is low, and the requirement of smooth services cannot be met, resulting in poor user experience.
  • Therefore, on the basis of research on the existing data processing method, the inventor provides a data processing method and system having high execution efficiency and desirable user experience.
  • US 2012/0304186 A1 describes techniques for scheduling one or more MapReduce jobs in the presence of one or more priority classes. The techniques include obtaining a preferred ordering for one or more MapReduce jobs, wherein the preferred ordering comprises one or more priority classes, prioritizing the one or more priority classes subject to one or more dynamic minimum slot guarantees for each priority class, and iteratively employing a MapReduce scheduler, once per priority class, in priority class order, to optimize performance of the one or more MapReduce jobs.
  • JP 2010 092222 A describes a cache structure that can make effective use of a cache in Map-Reduce processing based on update frequency, and a method for constructing the cache mechanism. In the method, a plurality of data items to be processed are grouped into a plurality of groups based on the update frequency of each data item. The method further includes calculating the group update frequency, which is the update frequency of each of the plurality of groups, based on the update frequencies of the data items constituting each group, generating a partial result of a Map-Reduce processing stage for a group whose update frequency is equal to or less than a threshold, and caching the generated partial result.
  • Summary of the Invention
  • Embodiments of the present application provide a data processing method and apparatus having high execution efficiency and desirable user experience. The data processing method and apparatus are defined by the appended claims.
  • The data processing method and system provided in the embodiments of the present application has at least the following beneficial effects:
    The data processing system pre-processes hotspot key-value pairs to facilitate calling by a service system, while non-hotspot key-value pairs are processed only when called by the service system. This reduces the volume of data that the data processing system, which provides a back-end service for the service system, needs to process in real time, improves the execution efficiency of data processing, reduces the time the service system waits for a data processing result, makes service processing smooth, and provides desirable user experience.
  • Brief Description of the Drawings
  • The accompanying drawings described herein are used to provide further understanding of the present application and constitute a part of the present application. The exemplary embodiments of the present application and the illustrations thereof are used to explain the present application and are not intended to improperly limit the present application. In the accompanying drawings:
    • FIG. 1 is a flowchart of a data processing method according to an embodiment of the present application;
    • FIG. 2 is a flowchart of selecting a part of mapping key-value pairs as hotspot key-value pairs according to an embodiment of the present application; and
    • FIG. 3 is a schematic structural diagram of a data processing system according to an embodiment of the present application.
    Detailed Description
  • To solve the following technical problems in the existing data processing method: long data processing time, low execution efficiency, inability to meet the requirement of smooth services, and poor user experience, embodiments of the present application provide a data processing method and a corresponding system. In the method and the corresponding system, a data processing system pre-processes hotspot key-value pairs to facilitate calling by a service system, while non-hotspot key-value pairs are processed only when called by the service system. This reduces the volume of data that the data processing system, which provides a back-end service for the service system, needs to process in real time, improves the execution efficiency of data processing, reduces the time the service system waits for a data processing result, makes service processing smooth, and provides desirable user experience.
  • To make the objectives, technical solutions, and advantages of the present application more comprehensible, the technical solutions of the present application are described clearly and completely below through specific embodiments of the present application and the corresponding accompanying drawings. Apparently, the described embodiments are merely some, rather than all, of the embodiments of the present application.
  • A Hadoop system may include:
    • a client terminal JobClient configured to submit a Map-Reduce job;
    • a job tracker JobTracker which is a Java process and configured to coordinate running of the whole job;
    • a task tracker TaskTracker which is a Java process and configured to run a task of the job; and
    • a Hadoop Distributed File System (HDFS) configured to share a file related to the job between processes.
  • A job process of the Hadoop system may include:
  • 1. Task submission
  • The client terminal requests a new job code from the job tracker, creates a new job instance, and calls a submitJob function.
  • 2. Task initialization
  • When the submitJob function is called, the job tracker acquires and initializes a task. The job tracker creates a task and assigns a task code.
  • 3. Task assignment
  • The job tracker assigns the task to the task tracker.
  • 4. Task execution
  • After being assigned a task, the task tracker starts to run the task. During mapping, the task tracker calls a map function to process the task, that is, it processes original key-value pairs to generate intermediate result key-value pairs and outputs the intermediate result key-value pairs in order of key values. Then, the task tracker calls a reduce function to process the intermediate result key-value pairs and generate final result key-value pairs.
  • 5. Task end
  • After obtaining a report of the task tracker indicating that all tasks run successfully, the job tracker ends the job.
  • FIG. 1 is a flowchart of a data processing method according to an embodiment of the present application, specifically including the following steps:
    S100: A part of to-be-processed key-value pairs are selected as hotspot key-value pairs according to a screening rule.
  • Data is embodied as an attribute and a numerical value that describe data properties, that is, a commonly described key-value pair. The key-value pair includes a key value representing an attribute and a key value representing attribute content. The attribute content includes, but is not limited to, a list, a hash map, a character string, a numerical value, a Boolean value, an ordered list array, a null value, and the like. For example, {"name": "Wang Xiao'er"} denotes data of a person whose "name" is "Wang Xiao'er".
  • In a specific embodiment, the step of selecting a part of the to-be-processed key-value pairs as hotspot key-value pairs according to a screening rule specifically includes: randomly selecting several to-be-processed key-value pairs as the hotspot key-value pairs. In fact, judging whether a to-be-processed key-value pair is a hotspot key-value pair is a complex process, especially when there are millions or even hundreds of millions of to-be-processed key-value pairs. In the embodiment of the present application, the data processing system randomly selects several to-be-processed key-value pairs as hotspot key-value pairs, thereby simplifying the process of judging whether a to-be-processed key-value pair is a hotspot key-value pair and improving the data processing efficiency of the method.
  • The hotspot key-value pairs are pre-processed for calling by a service system, while non-hotspot key-value pairs are processed only when called by the service system. This reduces the volume of data that the data processing system, which provides a back-end service for the service system, needs to process in real time, improves the execution efficiency of data processing, reduces the time the service system waits for a data processing result, makes service processing smooth, and provides desirable user experience.
  • In another specific embodiment, referring to FIG. 2, the step of selecting a part of to-be-processed key-value pairs as hotspot key-value pairs according to a screening rule specifically includes:
    • S101: A first number of to-be-processed key-value pairs are selected randomly as candidate key-value pairs.
    • S102: A frequency at which each key-value pair among the candidate key-value pairs is called is counted.
    • S103: The candidate key-value pairs are arranged according to the frequencies.
    • S104: A second number of key-value pairs having maximum calling frequencies are selected from the candidate key-value pairs as hotspot key-value pairs.
  • The first number is greater than the second number.
  • In the embodiment of the present application, first, the data processing system randomly selects a first number of to-be-processed key-value pairs as candidate key-value pairs. The first number generally corresponds to a specific service. The numerical value of the first number may be a fixed value set according to historical experience, or may be a value generated and dynamically adjusted by a computer.
  • Then, the data processing system counts the frequency at which each of the candidate key-value pairs is called, and arranges the candidate key-value pairs according to those frequencies. In a specific service activity, a number of key-value pairs generally need to be called to support the service system. In this case, the data processing system tracks and records the frequency at which each key-value pair is called, i.e., the number of times each key-value pair is called within a period of time. The data processing system may further arrange the key-value pairs in descending order of calling frequency.
  • Next, the data processing system selects, from the candidate key-value pairs, a second number of key-value pairs having the highest calling frequencies as hotspot key-value pairs. The first number is greater than the second number. Likewise, the numerical value of the second number may be a fixed value set according to historical experience, or may be a value generated and dynamically adjusted by the data processing system. The frequencies at which the selected hotspot key-value pairs are called are greater than the frequencies at which the other candidate key-value pairs are called. The data processing system pre-processes the hotspot key-value pairs instead of the other candidate key-value pairs, and the probability that the pre-processed key-value pairs are called is greater than the probability that other key-value pairs are called. Therefore, the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system is reduced, execution efficiency of data processing is improved, the time of the service system waiting for a data processing result is reduced, service processing is smooth, and user experience is desirable.
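  • The following sketch illustrates steps S101-S104 under the assumption that calling frequencies are tracked in a counter keyed by attribute name; the sample data, the concrete first and second numbers, and the call_counts structure are illustrative assumptions rather than part of the claimed system.

```python
import random
from collections import Counter

def select_hotspot_pairs(to_be_processed, call_counts, first_number, second_number):
    """S101-S104: sample candidates, rank them by calling frequency, keep the top ones."""
    assert first_number > second_number
    # S101: randomly select a first number of to-be-processed key-value pairs as candidates.
    candidates = random.sample(to_be_processed, min(first_number, len(to_be_processed)))
    # S102/S103: count how often each candidate is called and sort in descending order.
    ranked = sorted(candidates, key=lambda kv: call_counts.get(kv[0], 0), reverse=True)
    # S104: the second number of candidates with the highest calling frequencies are hotspots.
    return ranked[:second_number]

if __name__ == "__main__":
    pairs = [("age", 25), ("name", "Wang Xiao'er"), ("city", "Hangzhou"), ("job", "engineer")]
    calls = Counter({"age": 120, "job": 40, "city": 15, "name": 3})
    print(select_hotspot_pairs(pairs, calls, first_number=4, second_number=2))
    # [('age', 25), ('job', 'engineer')]
```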
  • Further, in the embodiment of the present application, the step of selecting a part of mapping key-value pairs as hotspot key-value pairs further includes:
    • setting a service category condition set of candidate key-value pairs before the step of randomly selecting a first number of to-be-processed key-value pairs as candidate key-value pairs; and
    • selecting to-be-processed key-value pairs meeting the service category condition set.
  • In the embodiment of the present application, the service category condition set may be a fixed set defined according to historical experience, or may be generated and adjusted dynamically. In fact, a key-value pair called by the service system of one service activity generally has specific properties that distinguish it from other service activities. For example, a key-value pair called by a service system for pushing information has properties different from those of a key-value pair called by a service system for payment. The service system for pushing information may be related to a key-value pair indicating the age of a receiver; for example, pushed information about wedding goods is generally junk information for receivers under the age of 16. When the service category condition set of the service system for pushing information includes a key-value pair indicating age, a desirable push effect may be achieved.
  • Therefore, a service category condition set of to-be-processed key-value pairs is set, and the data processing system may filter out a large number of to-be-processed key-value pairs through judgment on the service category condition set, thereby improving the selection precision of hotspot key-value pairs. Therefore, the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system is reduced, execution efficiency of data processing is improved, the time of the service system waiting for a data processing result is reduced, service processing is smooth, and user experience is desirable.
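  • A possible form of this pre-filtering step is sketched below, assuming the service category condition set can be expressed as a set of required attribute names; that representation is an assumption made for illustration only.

```python
def filter_by_service_category(to_be_processed, condition_set):
    """Keep only to-be-processed key-value pairs whose attribute (key) is named in the
    service category condition set, e.g. {"age"} for an information-pushing service."""
    return [kv for kv in to_be_processed if kv[0] in condition_set]

if __name__ == "__main__":
    pairs = [("age", 15), ("age", 27), ("name", "Wang Xiao'er"), ("balance", 100.0)]
    # Hypothetical condition set for a push service that only cares about receiver age.
    print(filter_by_service_category(pairs, condition_set={"age"}))  # [('age', 15), ('age', 27)]
```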
  • In still another specific embodiment of the present application, the step of selecting a part of to-be-processed key-value pairs as hotspot key-value pairs according to a screening rule specifically includes:
    • setting a calling frequency threshold of the hotspot key-value pairs; and
    • when the frequency of a key-value pair being called is greater than the calling frequency threshold, setting the key-value pair as a hotspot key-value pair.
  • In the embodiment of the present application, a calling frequency threshold of key-value pairs is set, and when the frequency at which a key-value pair is called is greater than the calling frequency threshold, the data processing system sets the key-value pair as a hotspot key-value pair. The data processing system pre-processes the hotspot key-value pairs instead of the other key-value pairs, and the probability that the pre-processed key-value pairs are called is greater than the probability that other key-value pairs are called. Therefore, the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system is reduced, execution efficiency of data processing is improved, the time of the service system waiting for a data processing result is reduced, service processing is smooth, and user experience is desirable.
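  • The threshold-based screening rule might be sketched as follows; the ThresholdScreeningRule class, its call counter, and its set of hotspot keys are assumptions made for illustration.

```python
from collections import Counter

class ThresholdScreeningRule:
    """Mark a key-value pair as a hotspot once its calling frequency exceeds a threshold."""

    def __init__(self, calling_frequency_threshold):
        self.threshold = calling_frequency_threshold
        self.call_counts = Counter()
        self.hotspot_keys = set()

    def record_call(self, key):
        self.call_counts[key] += 1
        if self.call_counts[key] > self.threshold:
            self.hotspot_keys.add(key)  # the pair is now treated as a hotspot key-value pair

    def is_hotspot(self, key):
        return key in self.hotspot_keys

if __name__ == "__main__":
    rule = ThresholdScreeningRule(calling_frequency_threshold=2)
    for k in ["age", "age", "age", "name"]:
        rule.record_call(k)
    print(rule.is_hotspot("age"), rule.is_hotspot("name"))  # True False
```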
  • S200: The hotspot key-value pairs are mapped to obtain intermediate result key-value pairs corresponding to the hotspot key-value pairs.
  • In an embodiment provided in the present application, the client terminal JobClient submits a Map-Reduce job to the job tracker, creates a new job instance, and calls a submitJob function. When the submitJob function is called, the job tracker acquires and initializes a task. The job tracker creates a task and assigns a task code. The job tracker then assigns the task to the task tracker. After being assigned a task, the task tracker starts to run the task. During mapping, the task tracker calls a map function to process the task, that is, it processes original key-value pairs to generate intermediate result key-value pairs and outputs the intermediate result key-value pairs in order of key values.
  • S300: The intermediate result key-value pairs are reduced to generate final result key-value pairs for calling.
  • In this step, the task tracker calls a reduce function to process the intermediate result key-value pairs to generate final result key-value pairs. After obtaining a report of the task tracker indicating that all tasks run successfully, the job tracker stores the final result key-value pairs in the HDFS, and ends the job.
  • In the embodiment of the present application, the data processing system pre-processes hotspot key-value pairs to facilitate calling by a service system, while non-hotspot key-value pairs are processed only when called by the service system. This reduces the volume of data that the data processing system, which provides a back-end service for the service system, needs to process in real time, improves the execution efficiency of data processing, reduces the time the service system waits for a data processing result, makes service processing smooth, and provides desirable user experience.
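  • The split between pre-processed hotspot results and on-demand processing of non-hotspot key-value pairs can be sketched as below. The HotspotAwareStore class, its cache layout, and the illustrative reduce function are assumptions, not the patented implementation.

```python
from collections import defaultdict

def reduce_fn(key, values):
    """Illustrative reduce function: aggregate all values recorded for a key."""
    return (key, sum(values))

class HotspotAwareStore:
    def __init__(self, raw_pairs, hotspot_keys):
        self.raw = defaultdict(list)
        for key, value in raw_pairs:
            self.raw[key].append(value)
        # Pre-process hotspot key-value pairs so the service system can call the results directly.
        self.precomputed = {k: reduce_fn(k, self.raw[k]) for k in hotspot_keys if k in self.raw}

    def call(self, key):
        if key in self.precomputed:               # hotspot: already reduced ahead of time
            return self.precomputed[key]
        return reduce_fn(key, self.raw[key])      # non-hotspot: reduced only when called

if __name__ == "__main__":
    pairs = [("age", 1), ("age", 1), ("clicks", 5), ("clicks", 2)]
    store = HotspotAwareStore(pairs, hotspot_keys={"age"})
    print(store.call("age"), store.call("clicks"))  # ('age', 2) ('clicks', 7)
```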
  • In an embodiment provided in the present application, the method further includes:
    optimizing the screening rule by using a machine learning model.
  • The machine learning model relates to artificial intelligence. In the embodiment of the present application, the screening rule is optimized by using a machine learning model. After the data processing system runs for a period of time, the accuracy of judging hotspot key-value pairs and non-hotspot key-value pairs may be significantly improved. Therefore, the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system is reduced, execution efficiency of data processing is improved, the time of the service system waiting for a data processing result is reduced, service processing is smooth, and user experience is desirable.
  • The type of the machine learning model is selected according to the specific service system; optimization of the screening rule by a machine learning model is briefly introduced below.
  • Specifically, the distribution of the frequencies at which the key-value pairs are called versus a single attribute is computed by using a clustering algorithm in the machine learning model.
  • According to this distribution, an interval of key values of the attribute content in which the frequencies at which the key-value pairs are called are not less than a preset frequency threshold is selected.
  • The interval of key values of attribute content is set as a rule condition of the screening rule.
  • Illustration is made by again taking the above service system for pushing information as an example. Assume that statistics show that, in more than a preset proportion (e.g., 50%) of the information-pushing services, a key-value pair indicating the age of a receiver is called. The machine learning model then optimizes the screening rule through a K-means clustering algorithm.
  • Assume that a sample set (key-value pairs indicating ages of receivers and frequencies at which the key-value pairs are called) is classified into m categories (frequency segments), the algorithm is described as follows:
    1. (1) Initial centers (frequencies) of the m categories (frequency segments) are selected properly.
    2. (2) In the kth iteration, distances (frequency differences) from any sample (a key-value pair indicating the age of a receiver and a frequency at which the key-value pair is called) to m centers are obtained, and the sample (a key-value pair indicating the age of a receiver and a frequency at which the key-value pair is called) is classified into a category (frequency segment) where a center having the minimum distance is located.
    3. (3) A central value (frequency) of the category (frequency segment) is updated by using an average method.
    4. (4) For all the m central values (frequencies), if values thereof keep unchanged after being updated by using the iteration method of (2) and (3), the iteration ends; otherwise, the iteration continues.
    5. (5) Initial centers (ages) of n categories (age groups) are selected properly for each category (frequency segment) in m categories (frequency segments).
    6. (6) In the kth iteration, distances (age differences) from any sample (a key-value pair indicating the age of a receiver and a frequency at which the key-value pair is called) to n centers are obtained, and the sample (a key-value pair indicating the age of a receiver and a frequency at which the key-value pair is called) is classified into a category (age group) where a center having the minimum distance is located.
    7. (7) A central value (age) of the category (age group) is updated by using an average method.
    8. (8) For all the n central values (ages), if the values thereof remain unchanged after being updated by using the iteration method of (6) and (7), the iteration ends; otherwise, the iteration continues.
  • By using the algorithm, a clustering rule relating the to-be-processed key-value pairs having high calling frequencies to age may be obtained through calculation. That the age of the receiver falls in a certain category (age group) is used as the rule condition of the screening rule. For example, that the age of the receiver is 12-18 is used as a rule condition for judging that a to-be-processed key-value pair is a hotspot key-value pair. After the screening rule is optimized by using the machine learning model, the service system screens the hotspot key-value pairs from the to-be-processed key-value pairs according to the optimized screening rule.
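  • The following is a minimal, non-limiting sketch in Python of the two-stage clustering described in steps (1) to (8) above. All names (kmeans_1d, samples, m, n) and the sample data are hypothetical illustrations and are not taken from the embodiment; the sketch only shows how an age interval such as 12-18 could be derived as a rule condition under these assumptions.
    import random

    def kmeans_1d(values, k, iterations=100):
        """Cluster scalar values into k categories; steps (1)-(4) / (5)-(8) above."""
        centers = random.sample(values, k)                        # (1)/(5): pick initial centers
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for v in values:                                      # (2)/(6): assign to nearest center
                i = min(range(k), key=lambda j: abs(v - centers[j]))
                clusters[i].append(v)
            new_centers = [sum(c) / len(c) if c else centers[i]   # (3)/(7): update centers by averaging
                           for i, c in enumerate(clusters)]
            if new_centers == centers:                            # (4)/(8): centers unchanged -> stop
                break
            centers = new_centers
        return clusters

    # Hypothetical samples: (age of receiver, frequency at which that key-value pair is called)
    samples = [(15, 90), (17, 85), (13, 80), (25, 40), (30, 35), (45, 5), (60, 3)]

    # Stage 1: cluster calling frequencies into m categories (frequency segments).
    m = 2
    freq_segments = kmeans_1d([f for _, f in samples], m)
    hot_segment = set(max((c for c in freq_segments if c), key=lambda c: sum(c) / len(c)))

    # Stage 2: within the hottest frequency segment, cluster ages into n categories (age groups).
    n = 2
    ages_in_hot_segment = [a for a, f in samples if f in hot_segment]
    age_groups = kmeans_1d(ages_in_hot_segment, n)

    # Each resulting age interval is a candidate rule condition of the screening rule.
    for group in age_groups:
        if group:
            print("candidate rule condition: receiver age in [%d, %d]" % (min(group), max(group)))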
  • In the embodiment provided in the present application, a rule optimization module is further configured to:
    when a key-value pair of an attribute and a key-value pair of another attribute are called by service systems having a same service code, set a union set of intervals of key values of attribute content of the key-value pairs of the two attributes as a rule condition of the screening rule.
  • Assume that the machine learning model further accomplishes optimization on the screening rule in the dimension of professions of receivers after accomplishing the optimization on the screening rule in the dimension of ages of the receivers.
  • The data processing system calculates that a to-be-processed key-value pair indicating that the receiver is in an age group and a to-be-processed key-value pair indicating that the receiver is of a certain profession are highly related to information pushing. For example, one to-be-processed key-value pair indicates that the receiver is in an age group of 20-30, another indicates that the receiver is in the computer industry, and when the service system pushes information, desirable service promotion effects can be achieved for receivers having the features of both dimensions simultaneously. Then, the machine learning model associates the to-be-processed key-value pair indicating that the receiver is in the age group of 20-30 with the to-be-processed key-value pair indicating that the receiver is in the computer industry, to form a hotspot key-value pair data group.
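  • As a minimal sketch under assumed data structures (the dictionaries, field names, and the matches helper below are hypothetical, not structures defined by the embodiment), associating the two attributes and keeping the union of their rule conditions could look as follows in Python:
    # Hypothetical rule conditions for the two attributes called under the same service code.
    age_condition = {"attribute": "receiver_age", "interval": (20, 30)}
    profession_condition = {"attribute": "receiver_profession", "values": {"computer industry"}}

    # The two key-value pairs are associated into one hotspot key-value pair data group,
    # and the screening rule keeps the union of the two rule conditions.
    hotspot_data_group = {"service_code": "push_information",
                          "members": [age_condition, profession_condition]}
    screening_rule = {"conditions": hotspot_data_group["members"]}

    def matches(kv_pair, rule):
        """True if a to-be-processed key-value pair satisfies any condition of the rule."""
        for cond in rule["conditions"]:
            if kv_pair["attribute"] != cond["attribute"]:
                continue
            if "interval" in cond:
                low, high = cond["interval"]
                if low <= kv_pair["content"] <= high:
                    return True
            elif kv_pair["content"] in cond.get("values", ()):
                return True
        return False

    print(matches({"attribute": "receiver_age", "content": 25}, screening_rule))           # True
    print(matches({"attribute": "receiver_profession", "content": "computer industry"},
                  screening_rule))                                                          # True
    print(matches({"attribute": "receiver_age", "content": 55}, screening_rule))            # False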
  • The data processing system further ranks the frequencies at which the hotspot key-value pair data groups are called by using the machine learning model, and classifies the hotspot key-value pair data groups into hotspot data groups and non-hotspot data groups. A dynamic adjustment mode of the hotspot data groups is as follows: a calling frequency threshold of the hotspot data groups is set, and when the frequency at which the key-value pairs in a data group are called is greater than the frequency threshold, the data group is set as a hotspot data group.
  • In the embodiment of the present application, a processing priority value of a data group is set. The priority value is obtained by calculating a weighted sum over the to-be-processed key-value pairs in the data group. The processing priority of the data group is adjusted dynamically according to the priority value. Each time a key-value pair in the data group is called, the priority value of the data group is increased by one unit. When the priority value of a data group exceeds the priority value of the data group immediately before it, the data processing system moves the data group one position ahead. Through optimization of the screening rule by using the machine learning model, the hotspot key-value pairs selected by the data processing system from the to-be-processed key-value pairs are the key-value pairs having the maximum frequencies of being called, and the hotspot data groups formed by associating these key-value pairs are the data groups having the maximum frequencies of being called. Therefore, the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system is reduced, execution efficiency of data processing is improved, the time the service system waits for a data processing result is reduced, service processing is smooth, and user experience is desirable.
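  • A minimal sketch of this dynamic priority adjustment, assuming hypothetical class and method names (DataGroupQueue, record_call) and illustrative priority values, could look as follows; each call adds one unit to the group's priority value, and the group is moved one position ahead once its priority value exceeds that of the group before it:
    class DataGroupQueue:
        """Keeps hotspot data groups ordered by a dynamically adjusted priority value."""

        def __init__(self, groups):
            # Each entry: [group_name, priority_value]; the initial priority value could be
            # a weighted sum over the group's to-be-processed key-value pairs.
            self.entries = [[name, priority] for name, priority in groups]

        def record_call(self, name):
            """A key-value pair in the group was called once: add one unit of priority,
            then move the group one position ahead if it now exceeds its predecessor."""
            for i, entry in enumerate(self.entries):
                if entry[0] == name:
                    entry[1] += 1
                    if i > 0 and entry[1] > self.entries[i - 1][1]:
                        self.entries[i - 1], self.entries[i] = entry, self.entries[i - 1]
                    return

    q = DataGroupQueue([("age_20_30_and_computer", 10), ("age_12_18", 9), ("other", 3)])
    q.record_call("age_12_18")   # 9 -> 10, not yet greater than 10: order unchanged
    q.record_call("age_12_18")   # 10 -> 11, exceeds 10: group moves one position ahead
    print([name for name, _ in q.entries])   # ['age_12_18', 'age_20_30_and_computer', 'other']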
  • In an embodiment provided in the present application, the method further includes:
    when a non-hotspot key-value pair is called, processing the non-hotspot key-value pair by using a reduce function to generate data for calling.
  • In the embodiment of the present application, the hotspot key-value pairs are pre-processed by the data processing system by using the reduce function, to generate data for calling by the service system. When the non-hotspot key-value pairs are called by the service system, the data processing system processes the key-value pairs by using the reduce function in real time, to generate data for calling by the service system. Therefore, the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system is reduced, execution efficiency of data processing is improved, the time of the service system waiting for a data processing result is reduced, service processing is smooth, and user experience is desirable.
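  • The following Python sketch illustrates this split, under the assumption of simplified, word-count-style map and reduce functions (map_pairs, reduce_pairs) and toy data; it is not the embodiment's actual implementation. Hotspot key-value pairs are mapped and reduced ahead of time, while a non-hotspot key-value pair is mapped and reduced only when a call for it arrives:
    from collections import defaultdict

    def map_pairs(pairs):
        """Map to-be-processed key-value pairs to intermediate result key-value pairs."""
        return [(key, 1) for key, _content in pairs]

    def reduce_pairs(intermediate):
        """Reduce intermediate result key-value pairs to final result key-value pairs."""
        result = defaultdict(int)
        for key, value in intermediate:
            result[key] += value
        return dict(result)

    to_be_processed = [("age:15", "..."), ("age:15", "..."), ("age:40", "...")]
    hotspot_keys = {"age:15"}                     # selected according to the screening rule

    # Pre-processing: hotspot key-value pairs are mapped and reduced ahead of time.
    hotspot = [p for p in to_be_processed if p[0] in hotspot_keys]
    precomputed = reduce_pairs(map_pairs(hotspot))

    def handle_call(key):
        """Serve a call from the service system."""
        if key in precomputed:                    # hotspot: return the pre-processed result
            return precomputed[key]
        # non-hotspot: map and reduce in real time, only when actually called
        non_hotspot = [p for p in to_be_processed if p[0] == key]
        return reduce_pairs(map_pairs(non_hotspot)).get(key)

    print(handle_call("age:15"))   # served from the pre-processed final results
    print(handle_call("age:40"))   # reduced on demand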
  • In an alternative manner of the embodiment of the present application, a data processing method includes the following steps:
    • mapping to-be-processed key-value pairs to obtain intermediate result key-value pairs corresponding to the to-be-processed key-value pairs;
    • selecting a part of the intermediate result key-value pairs as hotspot key-value pairs according to a screening rule; and
    • reducing the hotspot key-value pairs to generate final result key-value pairs for calling;
    • wherein the key-value pair includes a key value representing an attribute and a key value representing attribute content.
  • It should be pointed out that the difference from the specific embodiment provided in the foregoing lies in that the step of selecting a part of the key-value pairs as the hotspot key-value pairs is performed after the mapping processing step. In this embodiment of the present application, the volume of key-value pair data entering the reducing processing is reduced, and the problem of an excessively large data volume may be alleviated to some extent. Therefore, the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system is reduced, execution efficiency of data processing is improved, the time the service system waits for a data processing result is reduced, service processing is smooth, and user experience is desirable.
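  • A minimal sketch of this alternative order, again with hypothetical names and toy data, is the following; mapping is applied to all to-be-processed key-value pairs, the screening rule is then applied to the intermediate result key-value pairs, and only the hotspot intermediate results are reduced:
    from collections import defaultdict

    to_be_processed = [("age:15", "..."), ("age:15", "..."), ("age:40", "...")]

    # Step 1: map all to-be-processed key-value pairs to intermediate result key-value pairs.
    intermediate = [(key, 1) for key, _content in to_be_processed]

    # Step 2: select a part of the intermediate result key-value pairs as hotspot
    # key-value pairs according to a screening rule (here: an assumed hotspot key set).
    hotspot_keys = {"age:15"}
    hotspot_intermediate = [kv for kv in intermediate if kv[0] in hotspot_keys]

    # Step 3: reduce only the hotspot intermediate results to final result key-value pairs,
    # so the volume of data entering the reducing step is reduced.
    final = defaultdict(int)
    for key, value in hotspot_intermediate:
        final[key] += value
    print(dict(final))   # {'age:15': 2}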
  • The data processing method according to the embodiments of the present application is described above. Based on the same thought, referring to FIG. 3, the present application further provides a data processing system 1, including:
    • a screening module 10 configured to select a part of to-be-processed key-value pairs as hotspot key-value pairs according to a screening rule;
    • a mapping module 20 configured to map the hotspot key-value pairs to obtain intermediate result key-value pairs corresponding to the hotspot key-value pairs; and
    • a reducing module 30 configured to reduce the intermediate result key-value pairs to generate final result key-value pairs for calling;
    • wherein the key-value pair includes a key value representing an attribute and a key value representing attribute content.
  • Further, the screening module 10 configured to select a part of to-be-processed key-value pairs as hotspot key-value pairs is specifically configured to:
    randomly select several to-be-processed key-value pairs as hotspot key-value pairs.
  • Further, the screening module 10 configured to select a part of to-be-processed key-value pairs as hotspot key-value pairs is specifically configured to (a non-limiting sketch of this procedure follows the list below):
    • randomly select a first number of to-be-processed key-value pairs as candidate key-value pairs;
    • count a frequency at which each key-value pair among the candidate key-value pairs is called;
    • arrange the candidate key-value pairs according to the frequencies; and
    • select a second number of key-value pairs having maximum calling frequencies from the candidate key-value pairs as hotspot key-value pairs;
    • wherein the first number is greater than the second number.
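  • A minimal, non-limiting sketch of this selection procedure (the call log, the concrete numbers, and the helper name select_hotspot_pairs are hypothetical) could look as follows:
    import random
    from collections import Counter

    def select_hotspot_pairs(to_be_processed_keys, call_log, first_number, second_number):
        """Randomly pick a first number of candidate key-value pairs, count how often each
        was called within the period covered by call_log, arrange them by frequency, and
        keep the second number with the maximum calling frequencies (first > second)."""
        candidates = random.sample(to_be_processed_keys, first_number)     # random candidates
        candidate_set = set(candidates)
        frequencies = Counter(k for k in call_log if k in candidate_set)   # count calls
        ranked = sorted(candidates, key=lambda k: frequencies[k], reverse=True)  # arrange
        return ranked[:second_number]                                      # top calling frequencies

    # Hypothetical keys and call log for one period of time.
    keys = ["age:12", "age:15", "age:25", "age:40", "age:60"]
    calls = ["age:15", "age:15", "age:12", "age:25", "age:15", "age:12"]
    print(select_hotspot_pairs(keys, calls, first_number=4, second_number=2))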
  • Further, the screening module 10 configured to select a part of to-be-processed key-value pairs as hotspot key-value pairs is further specifically configured to:
    • set a service category condition set of candidate key-value pairs before the step of randomly selecting a first number of to-be-processed key-value pairs as candidate key-value pairs; and
    • select to-be-processed key-value pairs meeting the service category condition set.
  • Further, the screening module 10 configured to select a part of to-be-processed key-value pairs as hotspot key-value pairs is specifically configured to (a non-limiting sketch follows the list below):
    • set a calling frequency threshold of the hotspot key-value pairs; and
    • when the frequency of a key-value pair being called is greater than the calling frequency threshold, set the key-value pair as a hotspot key-value pair.
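  • A minimal sketch of this threshold-based screening, with hypothetical names and an illustrative threshold value, could look as follows:
    from collections import Counter

    calling_frequency_threshold = 3        # illustrative value, set for the hotspot key-value pairs
    call_counts = Counter()                # frequency of each key-value pair being called
    hotspot_keys = set()

    def on_call(key):
        """Record one call; set the key-value pair as a hotspot once the threshold is exceeded."""
        call_counts[key] += 1
        if call_counts[key] > calling_frequency_threshold:
            hotspot_keys.add(key)

    for key in ["age:15", "age:15", "age:15", "age:15", "age:40"]:
        on_call(key)
    print(hotspot_keys)                    # {'age:15'}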
  • Further, the system further includes a rule optimization module 40, configured to:
    optimize the screening rule by using a machine learning model.
  • Further, the mapping module 20 is configured to:
    map the non-hotspot key-value pairs to obtain intermediate result key-value pairs corresponding to the non-hotspot key-value pairs.
  • Further, a data processing system 1 includes:
    • a mapping module 20 configured to map to-be-processed key-value pairs to obtain intermediate result key-value pairs corresponding to the to-be-processed key-value pairs;
    • a screening module 10 configured to select a part of the intermediate result key-value pairs as hotspot key-value pairs according to a screening rule; and
    • a reducing module 30 configured to reduce the hotspot key-value pairs to generate final result key-value pairs for calling;
    • wherein the key-value pair includes a key value representing an attribute and a key value representing attribute content.
  • In the embodiment of the present application, the data processing system pre-processes hotspot key-value pairs to facilitate calling by a service system, while non-hotspot key-value pairs are processed only when being called by the service system. This reduces the volume of data that needs to be processed in real time by the data processing system providing a back-end service for the service system, improves execution efficiency of data processing, reduces the time the service system waits for a data processing result, and makes service processing smooth and the user experience desirable.
  • Persons skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may be implemented in the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer usable storage media (including, but not limited to, a magnetic disk memory, a CD-ROM, an optical memory, and the like) including computer usable program code.
  • The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that computer program instructions may be used to implement each process and/or block in the flowcharts and/or block diagrams and combinations of processes and/or blocks in the flowcharts and/or block diagrams. These computer program instructions may be provided to a general-purpose computer, a special-purpose computer, an embedded processor, or a processor of another programmable data processing device to generate a machine, so that the instructions executed by the computer or the processor of the other programmable data processing device generate an apparatus for implementing a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • These computer program instructions may also be stored in a computer readable memory that can instruct the computer or another programmable data processing device to work in a particular manner, such that the instructions stored in the computer readable memory generate an article of manufacture that includes an instruction apparatus. The instruction apparatus implements a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operating steps are performed on the computer or the other programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or the other programmable device provide steps for implementing a specified function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • In a typical configuration, a computing device includes one or more processors (CPU), an input/output interface, a network interface, and a memory.
  • The memory may include a volatile memory, a random access memory (RAM) and/or a non-volatile memory or the like in a computer readable medium, for example, a read only memory (ROM) or a flash RAM. The memory is an example of the computer readable medium.
  • The computer readable medium includes non-volatile and volatile media as well as removable and non-removable media, and can implement information storage by means of any method or technology. The information may be a computer readable instruction, a data structure, a module of a program, or other data. Examples of the storage medium of a computer include, but are not limited to, a phase change memory (PRAM), a static random access memory (SRAM), a dynamic random access memory (DRAM), other types of RAMs, a ROM, an electrically erasable programmable read-only memory (EEPROM), a flash memory or other memory technologies, a compact disc read only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a cassette tape, a magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, and the storage medium can be used to store information accessible to the computing device. As defined herein, the computer readable medium does not include transitory media, such as a modulated data signal and a carrier wave.
  • It should be further noted that the terms "include", "comprise", and any other variations thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device including a series of elements not only includes those elements, but also includes other elements not explicitly listed, or further includes elements inherent to the process, method, commodity, or device. Without further limitation, an element defined by "including a/an... " does not exclude the existence of other identical elements in the process, method, commodity, or device including the element.
  • Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may be implemented in the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, the present application may employ the form of a computer program product implemented on one or more computer usable storage media (including, but not limited to, a magnetic disk memory, a CD-ROM, an optical memory, and the like) including computer usable program code.
  • The above descriptions are merely embodiments of the present application, and are not intended to limit the present application. Those skilled in the art may make various modifications and variations to the present application.

Claims (11)

  1. A data processing method, the method comprising:
    pre-processing a plurality of hotspot key-value pairs, comprising:
    selecting (S101) a first plurality of key-value pairs of a to-be-processed plurality of key-value pairs, wherein the first plurality is selected randomly as a candidate set of key-value pairs for determining the plurality of hotspot key-value pairs, wherein a key-value pair comprises a key value representing an attribute and a key value representing attribute content, wherein the first plurality of key-value pairs corresponds to a first service provided by a service system, wherein calls including one or more key-value pairs of the to-be-processed plurality of key-value pairs are received at a data processing system (1) from the service system;
    counting (S102) a frequency at which a key-value pair among the candidate set is called, by the service system, within a period of time;
    arranging (S103) the first plurality of key-value pairs according to the frequency;
    selecting (S104) a second plurality of key-value pairs from the candidate set of key-value pairs as the plurality of hotspot key-value pairs according to a screening rule, wherein the second plurality of key-value pairs is with maximum calling frequencies, and wherein the first plurality is greater than the second plurality of key-value pairs;
    mapping (S200) the plurality of hotspot key-value pairs to obtain intermediate result key-value pairs corresponding to the hotspot key-value pairs; and
    reducing (S300) the intermediate result key-value pairs of the plurality of hotspot key-value pairs to generate final result key-value pairs for calls received from the service system; the method further comprising:
    processing a call from the service system at the data processing system, the call including a key-value pair from the to-be-processed plurality of key-value pairs which is not selected as being part of the plurality of hotspot key-value pairs, comprising:
    mapping the key-value pair to obtain an intermediate result key-value pair corresponding to the key-value pair; and
    reducing the intermediate result key-value pair to generate a final result key-value pair for the calls from the service system.
  2. The method of claim 1, wherein selecting (S104) the second plurality of key-value pairs as the plurality of hotspot key-value pairs according to the screening rule further comprises:
    setting a service category condition set of candidate key-value pairs before the step of randomly selecting the first plurality of key-value pairs of the to-be-processed plurality of key-value pairs as candidate key-value pairs; and
    selecting to-be-processed key-value pairs from the to-be-processed plurality of key-value pairs meeting the service category condition set.
  3. The method of claim 1, wherein selecting (S104) the second plurality of key-value pairs as the plurality of hotspot key-value pairs according to the screening rule comprises:
    setting a calling frequency threshold of the hotspot key-value pairs; and
    when the frequency at which a key-value pair is called is greater than the calling frequency threshold, setting the key-value pair as a hotspot key-value pair.
  4. The method of any one of claims 1 to 3, wherein the method further comprises optimizing the screening rule by using a machine learning model, optionally comprising:
    counting a distribution condition of frequencies of the key-value pairs being called vs. a single attribute by using a clustering algorithm in the machine learning model;
    according to the distribution condition of frequencies of the key-value pairs being called vs. a single attribute, selecting an interval of key values of attribute content at which the frequencies of the key-value pairs being called are not less than a preset frequency threshold; and
    setting the interval of key values of attribute content as a rule condition of the screening rule.
  5. The method of claim 4, wherein the method further comprises:
    when a key-value pair of an attribute and a key-value pair of another attribute are called by service systems having a same service code, setting a union set of intervals of key values of attribute content of the key-value pairs of the two attributes as a rule condition of the screening rule.
  6. A data processing system (1), comprising:
    a screening module (10) configured, in a pre-processing mode, to
    select (S101) a first plurality of key-value pairs of a to-be-processed plurality of key-value pairs, wherein the first plurality is selected randomly as a candidate set of key-value pairs for determining a plurality of hotspot key-value pairs, wherein the first plurality of key-value pairs corresponds to a first service provided by a service system, wherein calls including one or more key-value pairs of the to-be-processed plurality of key-value pairs are received at a data processing system from the service system;
    count (S102) a frequency at which a key-value pair among the candidate set is called within a period of time by the service system;
    arrange (S103) the first plurality of key-value pairs according to the frequency;
    select (S104) a second plurality of key-value pairs from the candidate set of key-value pairs as the plurality of hotspot key-value pairs according to a screening rule, wherein the second plurality of key-value pairs is with maximum calling frequencies;
    a mapping module (20) configured, in the pre-processing mode, to map (S200) the plurality of hotspot key-value pairs to obtain intermediate result key-value pairs corresponding to the hotspot key-value pairs; and
    a reducing module (30) configured, in the pre-processing mode, to reduce (S300) the intermediate result key-value pairs of the plurality of hotspot key-value pairs to generate final result key-value pairs for calls received from the service system;
    wherein the key-value pair comprises a key value representing an attribute and a key value representing attribute content;
    wherein the mapping module (20) is configured, in a processing mode, to process a call from the service system, the call including a key-value pair from the to-be-processed plurality of key-value pairs which is not selected as being part of the plurality of hotspot key-value pairs, and to map the non-hotspot key-value pair to obtain an intermediate result key-value pair corresponding to the key-value pair; and
    wherein the reducing module (30) is configured, in the processing mode, to reduce the intermediate result key-value pair to generate a final result key-value pair for the calls from the service system.
  7. The system (1) of claim 6, wherein the screening module (10) configured to select the plurality of key-value pairs as the plurality of hotspot key-value pairs is further configured to:
    set a service category condition set of candidate key-value pairs before the step of randomly selecting the first plurality of key-value pairs of the to-be-processed plurality of key-value pairs as candidate key-value pairs; and
    select to-be-processed key-value pairs from the to-be-processed plurality of key-value pairs meeting the service category condition set.
  8. The system (1) of claim 6, wherein the screening module (10) configured to select the second plurality of key-value pairs as the plurality of hotspot key-value pairs is configured to:
    set a calling frequency threshold of the hotspot key-value pairs; and
    when the frequency at which a key-value pair is called is greater than the calling frequency threshold, set the key-value pair as a hotspot key-value pair.
  9. The system (1) of any one of claims 6 to 8, wherein the system further comprises a rule optimization module (40) configured to:
    optimize the screening rule by using a machine learning model.
  10. The system (1) of claim 9, wherein the rule optimization module (40) is configured to:
    count a distribution condition of frequencies of the key-value pairs being called vs. a single attribute by using a clustering algorithm in the machine learning model;
    according to the distribution condition of frequencies of the key-value pairs being called vs. a single attribute, select an interval of key values of attribute content at which the frequencies of the key-value pairs being called are not less than a preset frequency threshold; and
    set the interval of key values of attribute content as a rule condition of the screening rule.
  11. The system (1) of claim 10, wherein the rule optimization module (40) is further configured to:
    when a key-value pair of an attribute and a key-value pair of another attribute are called by service systems having a same service code, set a union set of intervals of key values of attribute content of the key-value pairs of the two attributes as a rule condition of the screening rule.
EP16789273.6A 2015-05-04 2016-04-21 Data processing method and system Active EP3293641B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PL16789273T PL3293641T3 (en) 2015-05-04 2016-04-21 Data processing method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510222356.4A CN106202092B (en) 2015-05-04 2015-05-04 Data processing method and system
PCT/CN2016/079812 WO2016177279A1 (en) 2015-05-04 2016-04-21 Data processing method and system

Publications (3)

Publication Number Publication Date
EP3293641A1 EP3293641A1 (en) 2018-03-14
EP3293641A4 EP3293641A4 (en) 2018-10-17
EP3293641B1 true EP3293641B1 (en) 2020-06-17

Family

ID=57218083

Family Applications (1)

Application Number Title Priority Date Filing Date
EP16789273.6A Active EP3293641B1 (en) 2015-05-04 2016-04-21 Data processing method and system

Country Status (9)

Country Link
US (2) US10592491B2 (en)
EP (1) EP3293641B1 (en)
JP (1) JP6779231B2 (en)
KR (1) KR102134952B1 (en)
CN (1) CN106202092B (en)
ES (1) ES2808948T3 (en)
PL (1) PL3293641T3 (en)
SG (1) SG11201708917SA (en)
WO (1) WO2016177279A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106202092B (en) 2015-05-04 2020-03-06 阿里巴巴集团控股有限公司 Data processing method and system
WO2017107118A1 (en) * 2015-12-24 2017-06-29 Intel Corporation Facilitating efficient communication and data processing across clusters of computing machines in heterogeneous computing environment
CN107729353B (en) * 2017-08-30 2020-04-07 第四范式(北京)技术有限公司 Distributed system for performing machine learning and method thereof
US11044091B1 (en) * 2018-03-15 2021-06-22 Secure Channels Inc. System and method for securely transmitting non-pki encrypted messages
CN110347513B (en) * 2019-07-15 2022-05-20 中国工商银行股份有限公司 Hot data batch scheduling method and device
US11804955B1 (en) 2019-09-13 2023-10-31 Chol, Inc. Method and system for modulated waveform encryption
WO2021120140A1 (en) * 2019-12-20 2021-06-24 Intel Corporation Managing runtime apparatus for tiered object memory placement
CN116432903B (en) * 2023-04-01 2024-06-11 国网新疆电力有限公司电力科学研究院 Communication simulation data management system
CN116346827B (en) * 2023-05-30 2023-08-11 中国地质大学(北京) Real-time grouping method and system for inclined data flow

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010092222A (en) * 2008-10-07 2010-04-22 Internatl Business Mach Corp <Ibm> Caching mechanism based on update frequency

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7756919B1 (en) * 2004-06-18 2010-07-13 Google Inc. Large-scale data processing in a distributed and parallel processing enviornment
US8726290B2 (en) * 2008-06-12 2014-05-13 Yahoo! Inc. System and/or method for balancing allocation of data among reduce processes by reallocation
CN101645067A (en) 2008-08-05 2010-02-10 北京大学 Method for predicting hot forum in forum collection
US8370493B2 (en) * 2008-12-12 2013-02-05 Amazon Technologies, Inc. Saving program execution state
EP2325762A1 (en) * 2009-10-27 2011-05-25 Exalead Method and system for processing information of a stream of information
CN102141995B (en) * 2010-01-29 2013-06-12 国际商业机器公司 System and method for simplifying transmission in parallel computing system
CN102236581B (en) * 2010-04-30 2013-08-14 国际商业机器公司 Mapping reduction method and system thereof for data center
CN102314336B (en) * 2010-07-05 2016-04-13 深圳市腾讯计算机***有限公司 A kind of data processing method and system
CN102456031B (en) * 2010-10-26 2016-08-31 腾讯科技(深圳)有限公司 A kind of Map Reduce system and the method processing data stream
JP5552449B2 (en) * 2011-01-31 2014-07-16 日本電信電話株式会社 Data analysis and machine learning processing apparatus, method and program
US20120304186A1 (en) * 2011-05-26 2012-11-29 International Business Machines Corporation Scheduling Mapreduce Jobs in the Presence of Priority Classes
JP5637071B2 (en) * 2011-05-27 2014-12-10 富士通株式会社 Processing program, processing method, and processing apparatus
CN103019614B (en) 2011-09-23 2015-11-25 阿里巴巴集团控股有限公司 Distributed memory system management devices and method
WO2012149776A1 (en) * 2011-09-28 2012-11-08 华为技术有限公司 Method and apparatus for storing data
WO2013051131A1 (en) * 2011-10-06 2013-04-11 富士通株式会社 Data processing method, distributed processing system, and program
TWI461929B (en) 2011-12-09 2014-11-21 Promise Tecnnology Inc Cloud data storage system
JP5919825B2 (en) * 2012-01-05 2016-05-18 富士通株式会社 Data processing method, distributed processing system, and program
US9367601B2 (en) * 2012-03-26 2016-06-14 Duke University Cost-based optimization of configuration parameters and cluster sizing for hadoop
WO2013153620A1 (en) * 2012-04-10 2013-10-17 株式会社日立製作所 Data processing system and data processing method
TWI610166B (en) 2012-06-04 2018-01-01 飛康國際網路科技股份有限公司 Automated disaster recovery and data migration system and method
WO2014020735A1 (en) * 2012-08-02 2014-02-06 富士通株式会社 Data processing method, information processing device, and program
US20150363467A1 (en) 2013-01-31 2015-12-17 Hewlett-Packard Development Company, L.P. Performing an index operation in a mapreduce environment
CN104077297B (en) * 2013-03-27 2017-05-17 日电(中国)有限公司 Query method and query device based on body
CN104142950A (en) 2013-05-10 2014-11-12 中国人民大学 Microblog user classifying method based on keyword extraction and gini coefficient
US9424274B2 (en) * 2013-06-03 2016-08-23 Zettaset, Inc. Management of intermediate data spills during the shuffle phase of a map-reduce job
IN2013MU02918A (en) * 2013-09-10 2015-07-03 Tata Consultancy Services Ltd
CN103838844B (en) * 2014-03-03 2018-01-19 珠海市君天电子科技有限公司 A kind of key-value pair data storage, transmission method and device
CN103995882B (en) 2014-05-28 2017-07-07 南京大学 Probability Mining Frequent Itemsets based on MapReduce
CN104331464A (en) * 2014-10-31 2015-02-04 许继电气股份有限公司 MapReduce-based monitoring data priority pre-fetching processing method
CN104536830A (en) 2015-01-09 2015-04-22 哈尔滨工程大学 KNN text classification method based on MapReduce
CN106202092B (en) 2015-05-04 2020-03-06 阿里巴巴集团控股有限公司 Data processing method and system
CN107193500A (en) 2017-05-26 2017-09-22 郑州云海信息技术有限公司 A kind of distributed file system Bedding storage method and system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010092222A (en) * 2008-10-07 2010-04-22 Internatl Business Mach Corp <Ibm> Caching mechanism based on update frequency

Also Published As

Publication number Publication date
US10872070B2 (en) 2020-12-22
WO2016177279A1 (en) 2016-11-10
CN106202092A (en) 2016-12-07
JP2018515844A (en) 2018-06-14
EP3293641A1 (en) 2018-03-14
ES2808948T3 (en) 2021-03-02
EP3293641A4 (en) 2018-10-17
US20200192882A1 (en) 2020-06-18
US10592491B2 (en) 2020-03-17
CN106202092B (en) 2020-03-06
SG11201708917SA (en) 2017-11-29
KR102134952B1 (en) 2020-07-17
US20180046658A1 (en) 2018-02-15
KR20180002758A (en) 2018-01-08
PL3293641T3 (en) 2021-02-08
JP6779231B2 (en) 2020-11-04

Similar Documents

Publication Publication Date Title
EP3293641B1 (en) Data processing method and system
AU2017202873B2 (en) Efficient query processing using histograms in a columnar database
US11443228B2 (en) Job merging for machine and deep learning hyperparameter tuning
US11132383B2 (en) Techniques for processing database tables using indexes
US10565022B2 (en) Systems for parallel processing of datasets with dynamic skew compensation
US20220092471A1 (en) Optimizations for machine learning data processing pipeline
US10747764B1 (en) Index-based replica scale-out
CN106909942B (en) Subspace clustering method and device for high-dimensionality big data
US11720565B2 (en) Automated query predicate selectivity prediction using machine learning models
US10334011B2 (en) Efficient sorting for a stream processing engine
WO2023093642A1 (en) Monolithic computer application refactoring
Han et al. SlimML: Removing non-critical input data in large-scale iterative machine learning
EP3975075A1 (en) Runtime estimation for machine learning data processing pipeline
US20200364211A1 (en) Predictive database index modification
US11741101B2 (en) Estimating execution time for batch queries
US11436412B2 (en) Predictive event searching utilizing a machine learning model trained using dynamically-generated event tags
US11961046B2 (en) Automatic selection of request handler using trained classification model
US11126623B1 (en) Index-based replica scale-out
US20230177054A1 (en) Reduced latency query processing
JIAQI et al. K Nearest Neighbor Joins for Big Data Processing based on Spark.
WO2022108576A1 (en) Data cataloging based on classification models

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20171204

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20180919

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 17/30 20060101AFI20180913BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190718

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602016038351

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G06F0017300000

Ipc: G06F0016245300

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 16/28 20190101ALI20200311BHEP

Ipc: G06F 16/2453 20190101AFI20200311BHEP

INTG Intention to grant announced

Effective date: 20200327

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602016038351

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1282207

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200715

REG Reference to a national code

Ref country code: FI

Ref legal event code: FGE

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: NO

Ref legal event code: T2

Effective date: 20200617

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200918

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200917

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1282207

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200617

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

RAP2 Party data changed (patent owner data changed or rights of a patent transferred)

Owner name: ADVANCED NEW TECHNOLOGIES CO., LTD.

REG Reference to a national code

Ref country code: NO

Ref legal event code: CHAD

Owner name: ADVANCED NEW TECHNOLOGIES CO., KY

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201019

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

REG Reference to a national code

Ref country code: DE

Ref legal event code: R082

Ref document number: 602016038351

Country of ref document: DE

Representative=s name: FISH & RICHARDSON P.C., DE

Ref country code: DE

Ref legal event code: R081

Ref document number: 602016038351

Country of ref document: DE

Owner name: ADVANCED NEW TECHNOLOGIES CO., LTD., GEORGE TO, KY

Free format text: FORMER OWNER: ALIBABA GROUP HOLDING LIMITED, GEORGE TOWN, GRAND CAYMAN, KY

REG Reference to a national code

Ref country code: NL

Ref legal event code: PD

Owner name: ADVANCED NEW TECHNOLOGIES CO., LTD.; KY

Free format text: DETAILS ASSIGNMENT: CHANGE OF OWNER(S), ASSIGNMENT; FORMER OWNER NAME: ALIBABA GROUP HOLDING LIMITED

Effective date: 20210129

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201017

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2808948

Country of ref document: ES

Kind code of ref document: T3

Effective date: 20210302

REG Reference to a national code

Ref country code: GB

Ref legal event code: 732E

Free format text: REGISTERED BETWEEN 20210218 AND 20210224

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602016038351

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

26N No opposition filed

Effective date: 20210318

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210421

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20210430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210421

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201017

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210430

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: TR

Payment date: 20220411

Year of fee payment: 7

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20160421

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230521

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20230503

Year of fee payment: 8

Ref country code: DE

Payment date: 20230427

Year of fee payment: 8

Ref country code: CH

Payment date: 20230502

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: PL

Payment date: 20230405

Year of fee payment: 8

Ref country code: FI

Payment date: 20230425

Year of fee payment: 8

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20240315

Year of fee payment: 9

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200617

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240229

Year of fee payment: 9

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NO

Payment date: 20240222

Year of fee payment: 9

Ref country code: IT

Payment date: 20240313

Year of fee payment: 9

Ref country code: FR

Payment date: 20240223

Year of fee payment: 9