CN111104401B - System and method for storing data in an integrated structure based on an array and a linked list - Google Patents

System and method for storing data in an integrated structure based on an array and a linked list

Info

Publication number
CN111104401B
Authority
CN
China
Prior art keywords
data
pointers
linked list
input data
determined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911030016.6A
Other languages
Chinese (zh)
Other versions
CN111104401A
Inventor
马赫什·达莫达尔·巴威
苏尼尔·阿南特·普拉尼克
马诺·南比亚尔
斯瓦普尼·罗迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tata Consultancy Services Ltd
Original Assignee
Tata Consultancy Services Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tata Consultancy Services Ltd filed Critical Tata Consultancy Services Ltd
Publication of CN111104401A
Application granted
Publication of CN111104401B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23 Updating
    • G06F16/2379 Updates performed during online database operations; commit processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/901 Indexing; Data structures therefor; Storage structures
    • G06F16/9024 Graphs; Linked lists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22 Indexing; Data structures therefor; Storage structures
    • G06F16/2228 Indexing structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22 Indexing; Data structures therefor; Storage structures
    • G06F16/2282 Tablespace storage structures; Management thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

Data processing and storage are important components of many applications. Conventional data processing and storage systems utilize either a full array structure or a full linked list structure to store data, where the array consumes a large amount of memory while the linked list provides slow processing. Thus, conventional systems and methods cannot optimize both memory consumption and time efficiency. The present disclosure describes an efficient way to store data by creating an array and linked list based integrated structure. The data is stored in the array and linked list based integrated structure using a delta-based mechanism. The delta-based mechanism helps determine where data should be stored in the integrated array and linked list based structure. The present disclosure combines the advantages of both the array and the linked list structures, thereby reducing memory consumption and latency.

Description

System and method for storing data in an integrated structure based on an array and a linked list
Cross-reference to related applications and priorities
The present application claims priority from Indian application number 201821040565, filed in India on October 26, 2018.
Technical Field
The disclosure herein relates generally to the field of data storage and processing, and more particularly, to a system and method for storing data in an array and linked list based integrated structure.
Background
Data storage is a critical part of a data processing system. With the advent and increasing importance of big data and data mining in numerous applications, a large amount of data needs to be collected. Furthermore, many operations need to be performed to meaningfully analyze and present the data. Data storage therefore becomes important for performing operations on the data, and the latency of operations such as searching, insertion, and deletion should be minimized while keeping memory usage appropriate.
Conventional systems use either a completely linked list based structure or a completely array based data structure for storing data. Storing data in a fully array-based data structure requires an array of very large size and results in wasted memory. A completely linked list based structure introduces a time penalty because each insertion requires traversing the linked list to reach the appropriate node.
Disclosure of Invention
Embodiments of the present disclosure present technical improvements as solutions to one or more of the above technical problems recognized by the inventors in conventional systems. For example, in one aspect, there is provided a processor-implemented method comprising: receiving a plurality of input data and corresponding reference data for each of the plurality of input data; creating a data structure for storing the plurality of input data, wherein the data structure is created in at least one form comprising an array structure or a linked list, and wherein the created data structure further comprises a plurality of data pointers defined for each of the plurality of input data; and determining a unique set of data pointers from the plurality of data pointers for each of the plurality of input data, wherein the unique set of data pointers is determined by calculating a range of candidate data pointers that vary around the corresponding reference data. In one embodiment, the range is configurable and determined based on the corresponding reference data. The processor-implemented method further comprises: (i) defining the unique set of data pointers in the array structure, and (ii) defining the remaining data pointers from the plurality of data pointers in the linked list; and determining, using a delta-based mechanism, a data pointer for new input data from one of (i) the unique set of data pointers defined in the array structure or (ii) the remaining data pointers from the plurality of data pointers defined in the linked list, wherein during the determining of the data pointer, a node corresponding to the determined data pointer is created. In one embodiment, the delta-based mechanism determines a difference between the values of the reference data and the new input data, and the difference indicates an index value for determining the data pointer. In one embodiment, the method further comprises forming another linked list from the created node corresponding to the determined data pointer for storing subsequent input data. The processor-implemented method further comprises storing the new input data at an address of one of the array structure or the linked list pointed to by the node associated with the determined data pointer. In one embodiment, the method further comprises dynamically updating at least one of the array structure and the linked list based on incoming input data and the reference data.
In another aspect, a system is provided, comprising: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory through the one or more communication interfaces, wherein the one or more hardware processors are configured, through the instructions, to: receive a plurality of input data and corresponding reference data for each of the plurality of input data; create a data structure for storing the plurality of input data, wherein the data structure is created in at least one form comprising an array structure or a linked list, and wherein the created data structure further comprises a plurality of data pointers defined for each of the plurality of input data; and determine a unique set of data pointers from the plurality of data pointers for each of the plurality of input data, wherein the unique set of data pointers is determined by calculating a range of candidate data pointers that vary around the corresponding reference data. In one embodiment, the range is configurable and determined based on the corresponding reference data. The one or more hardware processors are further configured by the instructions to: (i) define the unique set of data pointers in the array structure, and (ii) define the remaining data pointers from the plurality of data pointers in the linked list; and determine, using a delta-based mechanism, a data pointer for new input data from one of (i) the unique set of data pointers defined in the array structure or (ii) the remaining data pointers from the plurality of data pointers defined in the linked list, wherein during the determining of the data pointer, a node corresponding to the determined data pointer is created. In one embodiment, the delta-based mechanism determines a difference between the values of the reference data and the new input data, and the difference indicates an index value for determining the data pointer. In one embodiment, the one or more hardware processors are further configured by the instructions to form another linked list from the created node corresponding to the determined data pointer for storing subsequent input data. The one or more hardware processors are further configured by the instructions to store the new input data at an address of one of the array structure or the linked list pointed to by the node associated with the determined data pointer. In one embodiment, the one or more hardware processors are further configured, via the instructions, to dynamically update at least one of the array structure and the linked list based on incoming input data and the reference data.
In yet another aspect, one or more non-transitory machine-readable information storage media are provided, comprising one or more instructions that, when executed by one or more hardware processors, cause: receiving a plurality of input data and corresponding reference data for each of the plurality of input data; creating a data structure for storing the plurality of input data, wherein the data structure is created in at least one form comprising an array structure or a linked list, and wherein the created data structure further comprises a plurality of data pointers defined for each of the plurality of input data; and determining a unique set of data pointers from the plurality of data pointers for each of the plurality of input data, wherein the unique set of data pointers is determined by calculating a range of candidate data pointers that vary around the corresponding reference data. In one embodiment, the range is configurable and determined based on the corresponding reference data. The instructions further cause: (i) defining the unique set of data pointers in the array structure, and (ii) defining the remaining data pointers from the plurality of data pointers in the linked list; and determining, using a delta-based mechanism, a data pointer for new input data from one of (i) the unique set of data pointers defined in the array structure or (ii) the remaining data pointers from the plurality of data pointers defined in the linked list, wherein during the determining of the data pointer, a node corresponding to the determined data pointer is created. In one embodiment, the delta-based mechanism determines a difference between the values of the reference data and the new input data, and the difference indicates an index value for determining the data pointer. In one embodiment, the instructions further cause forming another linked list from the created node corresponding to the determined data pointer for storing subsequent input data. The instructions further cause storing the new input data at an address of one of the array structure or the linked list pointed to by the node associated with the determined data pointer. In one embodiment, the instructions further cause dynamically updating at least one of the array structure and the linked list based on incoming input data and the reference data.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the principles of the disclosure:
FIG. 1 illustrates a functional block diagram of a system (device) for storing data in an array and linked list based integrated structure according to some embodiments of the present disclosure.
FIG. 2 illustrates an exemplary flow chart of a processor-implemented method for storing data in an array and linked list based integrated structure, according to some embodiments of the present disclosure.
FIG. 3 illustrates an example of a processor-implemented method for storing data in an array and linked list based integrated structure, according to some embodiments of the present disclosure.
FIG. 4 illustrates an example of a processor-implemented method for storing data in an array and linked list based integrated structure, according to some embodiments of the present disclosure.
FIG. 5 illustrates an example of a processor-implemented method for storing data in an array and linked list based integrated structure, according to some embodiments of the present disclosure.
It will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems and devices embodying the principles of the subject matter. Likewise, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
Detailed Description
Exemplary embodiments are described with reference to the accompanying drawings. In the drawings, the leftmost digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Although examples and features of the disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered exemplary only, with a true scope and spirit being indicated by the following claims.
Embodiments herein provide systems and methods for storing data in an array and linked list based integrated structure. Conventional data processing systems typically store data using either a fully linked list structure or a fully array structure; the present disclosure modifies this typical approach to provide improved memory utilization and time efficiency. The present disclosure provides a method of efficiently storing data in a data processing system. The method provides an array and linked list based integrated structure for storing data using a delta-based mechanism. The array structure stores a plurality of unique data pointers. The plurality of unique data pointers includes all possible data pointers, separated by a predefined step size, within a configurable range around the reference data. The configurable range of the data pointers is determined based on the reference data, which is in turn determined based on system requirements, market conditions, user requirements, and the like. The linked list structure stores the remaining data pointers that do not fall within the configurable range. Data pointers stored in the array structure can be accessed and processed quickly, while data pointers stored in the linked list structure are accessed and processed more slowly but are needed only rarely. The integrated array and linked list based structure thus helps reduce latency and improve memory utilization.
Referring now to the drawings, and more particularly to FIGS. 1 through 5, wherein like reference numerals designate corresponding features throughout the several views, preferred embodiments are shown and described in the context of the following exemplary systems and/or methods.
FIG. 1 illustrates a functional block diagram of a system for storing data in an array and linked list based integrated structure according to some embodiments of the present disclosure. The system 100 includes or otherwise communicates with one or more hardware processors (such as processor 106), an I/O interface 104, at least one memory (such as memory 102), and a data storage module 108. In an embodiment, the data storage module 108 may be implemented as a stand-alone unit in the system 100. In another embodiment, the data storage module 108 may be implemented as a module in the memory 102. The processor 106, the I/O interface 104, and the memory 102 may be coupled by a system bus.
The I/O interface 104 may include various software and hardware interfaces, such as a network interface, a graphical user interface, and the like. The interface 104 may include various software and hardware interfaces, for example, interfaces for one or more peripheral devices (such as a keyboard, mouse, external memory, camera device, and printer). The interface 104 may facilitate a variety of communications within a variety of networks and protocol types, including a variety of communications within a wired network (e.g., a Local Area Network (LAN), cable, etc.) and a wireless network (such as a Wireless Local Area Network (WLAN), cellular network, or satellite). To this end, the interface 104 may include one or more ports for connecting multiple computing systems to each other or to another server computer. The I/O interface 104 may include one or more ports for connecting multiple devices to each other or to another server.
The hardware processor 106 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. The hardware processor 106 is configured to, among other functions, obtain and execute computer-readable instructions stored in the memory 102.
Memory 102 may include any computer-readable medium known in the art including, for example, volatile memory (such as Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM)), and/or non-volatile memory (such as Read Only Memory (ROM), erasable programmable ROM, flash memory, hard disk, optical disk, and magnetic tape). In one embodiment, memory 102 includes a plurality of modules 108 and a repository 110 for storing data processed, received, and generated by one or more modules 108. The modules 108 may include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
The data store 110 includes, among other things, a system database and other data. The other data may include data resulting from the execution of one or more of the modules 108. The system database stores input data and corresponding reference data for the input data, and sets of data pointers, which are generated as a result of execution of one or more of the modules 108, and are stored in an array and linked list based integrated structure. The data stored in the system database may be dynamically updated based on changes in demand associated with different applications.
In an embodiment, the data storage module 108 may be configured to store and process data in an array and linked list based integrated structure. Storing data in an array and linked list based integrated structure may be performed using the methods described in connection with fig. 2-5 and by way of example.
Referring to FIG. 1, FIG. 2 is an exemplary flow chart of a processor-implemented method of storing data in an array and linked list based integrated structure using the data storage module 108 of FIG. 1, according to some embodiments of the present disclosure. In an embodiment of the present disclosure, at step 202 of FIG. 2, the one or more hardware processors 106 receive a plurality of input data and corresponding reference data for each of the plurality of input data. In an embodiment, the input data may include, but is not limited to, a plurality of incoming orders received at different time stamps with respect to one or more trading applications. The one or more trading applications may include securities trading, retail trading, stock trading, and the like. In an embodiment, an incoming order may include, but is not limited to, buy and sell requests, bids, and offers placed at a price value for securities, stocks, futures, options, exchange-traded or other derivatives, currencies, bonds, or other entities (e.g., corn, precious metals, electricity), and/or other items traded at an exchange. In an embodiment, the corresponding reference data for each of the plurality of input data may include, but is not limited to, an initial price for the plurality of incoming orders associated with the one or more trading applications. In an embodiment, the reference data (e.g., the initial price) is configurable and is determined based on an analysis of the previous day's closing price and an analysis of market conditions, wherein the market conditions are determined based on the demand for the traded item, user consumption of the traded item, production or manufacture of the traded item, and availability of the traded item. For example, in a stock trading scenario, the initial stock price of company X may be, but is not limited to, $100. The initial price is assumed to be determined at a particular time stamp. Further, assume that the prices of incoming orders for company X's stock received at different time stamps during the day may be, but are not limited to, between $1 and $1000, such as $80, $120, $280, $320, and so on.
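For illustration only, the inputs of step 202 can be sketched as follows. The snippet is a non-limiting Python sketch: the record fields (timestamp, price, quantity) and the concrete values are hypothetical examples in the spirit of the company X scenario above, not terms of the disclosure.

from dataclasses import dataclass

@dataclass
class Order:
    """One incoming order (input data) received at a given time stamp."""
    timestamp: str   # time at which the order was received
    price: int       # order price in whole dollars (a $1 step size is assumed)
    quantity: int    # number of units requested

# Corresponding reference data: the configurable initial price for the day.
reference_price = 100

# A few incoming orders received at different time stamps during the day.
incoming_orders = [
    Order("09:15", 80, 50),
    Order("11:30", 120, 10),
    Order("14:45", 280, 25),   # falls outside a +/-20% range around $100
]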
Referring again to FIG. 2, at step 204, a data structure is created for storing the plurality of input data using the one or more processors, wherein the data structure is created in at least one form including an array structure or a linked list. In an embodiment, the created data structure further comprises a plurality of data pointers defined for each of the plurality of input data. In an embodiment, the sizes of the array structure and the linked list structure may be configured based on the size of the system memory, up to the maximum memory available to the system. In an embodiment, the data pointers may include, but are not limited to, price pointers for incoming orders related to the one or more trading applications. In another embodiment, the data pointers defined in the created data structure may be associated with a plurality of time stamps, wherein the plurality of time stamps are structured as another linked list (alternatively referred to as a time linked list). In an embodiment, the array structure and the linked list structure hold price pointers for incoming orders related to the one or more trading applications at different time stamps. In other words, the array structure and the linked list structure include a plurality of data pointers (e.g., price pointers) associated with the time linked list, wherein incoming orders of a particular price (e.g., $120) received at different time stamps are stored in the time linked list associated with the corresponding data pointer (e.g., price pointer) defined in the created data structure.
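A non-limiting sketch of the data structure of step 204 is given below, assuming the $1 step size and the ±20% range used in the example of FIG. 3; the container names (array_slots, overflow) are illustrative only. Each price pointer, whether it resides in the array structure or in the linked list, carries its own time linked list of orders.

from collections import deque

reference_price = 100   # reference data (initial price), configurable
band = 0.20             # configurable range around the reference data
step = 1                # predefined step size between candidate price pointers

low = round(reference_price * (1 - band))    # 80
high = round(reference_price * (1 + band))   # 120

# Array structure: one slot per candidate price pointer; each slot holds a
# "time linked list" of the orders received at that price, in arrival order.
array_slots = {price: deque() for price in range(low, high + 1, step)}

# Linked list structure for the remaining data pointers (prices outside the
# range), created lazily as such prices are actually seen.
overflow = {}   # e.g. {25: deque(), 200: deque(), ...}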
Referring back to FIG. 2, as shown at step 206, the one or more processors determine a unique set of data pointers from the plurality of data pointers for each of the plurality of input data. In an embodiment, the unique set of data pointers is determined by calculating a range of candidate (interchangeably referred to as "possible") data pointers that vary around the corresponding reference data. In an embodiment, the range of candidate data pointers is determined based on the corresponding reference data. For example, assume that the price of a stock, security, item, etc. associated with the one or more trading applications varies by no more than ±20% to ±30% of the initial price. In an embodiment, based on this assumption, the range of candidate data pointers may be calculated by finding, but not limited to, a ±20% or ±30% variation from the corresponding reference data. The step of determining a unique set of data pointers from the plurality of data pointers for each of the plurality of input data is further illustrated by the example shown in FIG. 3. As shown in FIG. 3, for corresponding reference data (e.g., an initial price) selected as $100, the unique set of data pointers is determined as the data pointers ranging between $80 and $120. The range (e.g., $80 to $120) is determined by finding a ±20% change from the corresponding reference data (in this case the initial price of $100), where the +20% change gives a value of $120 (the highest range limit) and the -20% change gives a value of $80 (the lowest range limit). In an embodiment, the range is configurable. For example, the range may be set to ±20% or ±30% of the corresponding reference data and may be changed to a different value, such as ±10%, ±40%, or ±50% of the corresponding reference data, and the like. In an embodiment, the data pointers included in the unique set of data pointers are separated by a predefined step size. For example, as shown in FIG. 3, the predefined step size separating the data pointers included in the unique set of data pointers is $1. Thus, as shown in FIG. 3, the unique set of data pointers comprises the values $80, $81, $82, …, $120. In another embodiment, the predefined step size is selected based on the system configuration and the application type. For example, in a trading system application, the step size represents the minimum difference by which a price can change. If the step size is chosen as $1, the price cannot change by a value of less than $1. The step size is kept fixed across a trading organization to maintain consistency of the requested prices. Furthermore, without a predefined step size, a user might ask for a step size (e.g., a price difference) of, for example, $0.0005. Such requests create maintenance and settlement challenges for the trading system, since $0.0005 may not be a valid currency denomination in the market.
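Using the FIG. 3 values, the unique set of candidate data pointers can be enumerated as in the following non-limiting sketch; the function name and parameters are illustrative, and the 20% range and $1 step are simply the configurable values used in that example.

def candidate_pointers(reference, band_pct=0.20, step=1):
    """Enumerate the unique set of candidate (possible) data pointers around
    the reference data, separated by the predefined step size."""
    low = round(reference * (1 - band_pct))    # lowest range limit
    high = round(reference * (1 + band_pct))   # highest range limit
    return list(range(low, high + 1, step))

unique_set = candidate_pointers(100)   # [80, 81, 82, ..., 120]
assert unique_set[0] == 80 and unique_set[-1] == 120 and len(unique_set) == 41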
Further, as shown in step 208 of FIG. 2, the one or more processors (i) define the unique set of data pointers in the array structure, and (ii) define the remaining data pointers from the plurality of data pointers in the linked list. As shown in FIG. 3, the unique set of data pointers defined in the array structure includes the values $80, $81, $82, …, $120, and the remaining data pointers, such as $25, $40, $60, $150, $170, and $200, are defined in the linked list structure.
Referring again to FIG. 2, at step 210, the one or more processors determine, using a delta-based mechanism, a data pointer for new input data from one of (i) the unique set of data pointers defined in the array structure or (ii) the remaining data pointers from the plurality of data pointers defined in the linked list. In an embodiment, when the data pointer is determined, the system creates a node corresponding to the determined data pointer. In an embodiment of the present disclosure, the system 100 utilizes the created node to form another linked list that is used for subsequent data processing and storage. In another embodiment, the other linked list (alternatively referred to as a time linked list) formed from the created node includes one or more nodes arranged in a vertical order. In an embodiment, the one or more nodes included in the time linked list represent the different time stamps at which the plurality of incoming orders were received. In an embodiment, the delta-based mechanism determines the difference between the value of the reference data and the value of the new input data. In an embodiment, the difference indicates an index value for determining the node associated with the data pointer. In another embodiment, the delta-based mechanism facilitates identifying the location for storing the new input data in the created data structure. In other words, for a new incoming order, the delta-based mechanism helps determine where in the created data structure the new incoming order can be placed or stored. The delta-based mechanism may be further understood with reference to FIG. 3. For example, as shown in FIG. 3, for new input data having a received value of $80, the difference (interchangeably referred to as the delta value) between the value of the reference data (in this case $100) and the value of the new input data (in this case $80) is determined. In this case, the difference is calculated as $20 (e.g., $100 - $80 = $20). In an embodiment, the delta value helps identify whether the determined data pointer is present in the array structure or in the linked list. For example, if the delta value falls within the configured variation around the reference data, so that the resulting index lies within the range of candidate (interchangeably referred to as "possible") data pointers (e.g., price pointers), then the determined data pointer is present in the array; otherwise, the determined data pointer is present in the linked list. Further, the delta value is used to determine the index value that indicates the location in the created data structure at which the new incoming order is to be placed or stored. In this case, the index value is calculated as 80 (e.g., index value = 100 - 20 = 80). The index value 80 represents a data pointer defined in the array structure. In the array structure, at location 80, a new node is created corresponding to the data pointer $80. Further, as shown in FIG. 3, another linked list (alternatively referred to as a time linked list), including one or more nodes arranged in vertical order, is created from the created node for placing or storing subsequent input data (e.g., multiple incoming orders received at different time stamps).
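A non-limiting sketch of this delta-based lookup, using the FIG. 3 values, is given below; locate_pointer and its parameters are illustrative names, and the range limits are assumed to have been computed as in step 206.

def locate_pointer(reference, price, low, high):
    """Return the index value for a new order price and whether the
    corresponding data pointer lives in the array or in the linked list."""
    delta = reference - price          # e.g. 100 - 80 = 20
    index = reference - delta          # e.g. 100 - 20 = 80
    in_array = low <= index <= high    # within the candidate pointer range?
    return index, in_array

index, in_array = locate_pointer(reference=100, price=80, low=80, high=120)
assert index == 80 and in_array        # node created at array location 80

index, in_array = locate_pointer(reference=100, price=200, low=80, high=120)
assert index == 200 and not in_array   # pointer $200 belongs to the linked list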
Thus, for this step, the reference data and the difference value (interchangeably referred to as delta value) are used to identify the location where the incoming node is to be placed.
Further, as shown in step 212 of FIG. 2, the new input data is stored at the address of the created node corresponding to the determined data pointer included in one of the array structure or the linked list. For example, as shown in FIG. 3, assume that an incoming order with a price of, for example, $120 is received at time t1 (e.g., 5:00 pm). Using the proposed method, it is determined whether the data pointer (e.g., price pointer) corresponding to the incoming order priced at $120 is present in the array structure. Further, a node corresponding to that data pointer is created in the array structure, where the created node represents the location pointed to by the corresponding data pointer (in this case, the node is created at location $120 in the array structure). The incoming order with a price of, for example, $120 is then placed or stored at the address of the node created in the array structure. Subsequently, another incoming order with a price of $120 is received at time t2 (e.g., 5:10 pm). To store another incoming order at the same price, another linked list (alternatively referred to as a time linked list) is formed from the created node, wherein the other linked list may include one or more nodes arranged in vertical chronological order. The incoming order priced at $120 received at t2 is stored in chronological order in the one or more nodes included in the other linked list. In a similar manner, an incoming order with a price of, for example, $200, which is not identified in the array structure, is stored in one or more nodes contained in another time linked list associated with the linked list structure. In an embodiment, if the data pointer for the new input data is determined from the unique set of data pointers, the new input data is stored in the array structure; otherwise, if the data pointer for the new input data is determined from data pointers not included in the unique set of data pointers, the new input data is stored in the linked list.
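As a usage sketch of step 212 (again with illustrative container names), the snippet below stores the two $120 orders received at t1 and t2 in the time linked list associated with the array slot for pointer $120, and stores the $200 order under the linked list structure.

from collections import deque

array_slots = {price: deque() for price in range(80, 121)}   # candidate pointers
overflow = {}                                                 # remaining pointers

def store(price, timestamp, quantity):
    """Store an incoming order in the time linked list of its price pointer."""
    slot = array_slots.get(price)
    if slot is None:                                   # pointer not in the array
        slot = overflow.setdefault(price, deque())     # use the linked list side
    slot.append((timestamp, quantity))                 # appended in arrival order

store(120, "17:00", 10)   # order received at t1
store(120, "17:10", 5)    # order at the same price received at t2
store(200, "17:20", 7)    # price outside the range, stored in the linked list

assert len(array_slots[120]) == 2 and len(overflow[200]) == 1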
In an embodiment, the one or more processors are further configured to dynamically update at least one of the array structure and the linked list based on incoming input data. For example, as shown in FIG. 3, the created data structure initially provides for storing data at locations 25 through 200. Thus, the data structure shown in FIG. 3 cannot store incoming input data (e.g., incoming orders) priced above $200. However, when the price of incoming input data (e.g., an incoming order) exceeds the size of the data structure, the system dynamically updates the linked list structure by creating a node at the end of the linked list. The created node corresponds to a data pointer that points to the location where the incoming input data (e.g., the incoming order) is to be placed or stored. For example, as shown in FIG. 4, when incoming input data (e.g., an incoming order) with a price of $10000 is received, the linked list structure is updated by creating a node corresponding to the data pointer $10000 at the end of the linked list, and the incoming input data (e.g., the incoming order) with a price of $10000 is stored or placed at that location.
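The situation of FIG. 4 can be sketched with an explicit singly linked node type; the node fields are hypothetical, and the overflow list is assumed, for illustration, to currently end at pointer $200.

class PointerNode:
    """One node of the linked list structure: a price pointer plus the
    time linked list of orders stored at that price."""
    def __init__(self, pointer):
        self.pointer = pointer
        self.orders = []       # time linked list of orders at this price
        self.next = None

head = PointerNode(150)
head.next = PointerNode(200)            # existing tail of the linked list

def append_pointer(head, price):
    """Dynamically grow the linked list by creating a node at its end."""
    tail = head
    while tail.next is not None:        # walk to the current tail
        tail = tail.next
    tail.next = PointerNode(price)
    return tail.next

node = append_pointer(head, 10000)      # structure updated for the new price
node.orders.append(("18:00", 3))        # the $10000 order is stored here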
In an embodiment, the one or more processors are further configured to dynamically update at least one of the array structure and the linked list based on the reference data. In another embodiment, the reference data is configurable and may vary over time. In another embodiment, the reference data may remain constant throughout a day but may change to a different value on another day. In an embodiment, the size of the array structure is dynamically updated as the value of the reference data changes. For example, as shown in FIG. 3, assume that the reference data is $100 and that the candidate (interchangeably referred to as "possible") data pointers (e.g., price pointers) contained in the array structure range from $80 to $120 based on a ±20% variation. As shown in FIG. 5, when the reference data changes to a value of $200, the range of candidate data pointers (e.g., price pointers) included in the array structure, calculated based on the same ±20% variation, changes to $160 to $240. It is therefore apparent that the size of the array is dynamically updated to twice its original size.
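A non-limiting sketch of this resizing step is given below; the helper name is illustrative and the ±20% range is the configurable value used in FIG. 3 and FIG. 5.

def build_array(reference, band_pct=0.20, step=1):
    """Rebuild the array structure for a new value of the reference data."""
    low = round(reference * (1 - band_pct))
    high = round(reference * (1 + band_pct))
    return {price: [] for price in range(low, high + 1, step)}

old_array = build_array(100)   # pointers 80..120 (FIG. 3)
new_array = build_array(200)   # pointers 160..240 (FIG. 5)

assert min(new_array) == 160 and max(new_array) == 240
assert len(new_array) > len(old_array)   # the array roughly doubles in size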
Experimental results:
In an embodiment, a comparative analysis of conventional data storage and processing systems with the proposed system is provided. In an embodiment, a conventional data storage and processing system may provide a fully array-based implementation for storing a plurality of input data. A conventional system that uses a fully array-based structure includes a plurality of data pointers and stores all input data only at addresses of the array structure determined by the corresponding data pointers. However, when a large number of data pointers is defined for storing or placing incoming orders, a fully array-based structure may take up more space and memory than necessary. In an embodiment, it is observed that the price of a stock, security, item, etc. associated with one or more trading applications does not change much over a particular time frame (e.g., a day, a year, etc.) and is limited to a range around the initial price, where the range may be, but is not limited to, ±20% to ±30% of the initial price. In other words, it is observed that incoming orders typically fall within ±20% to ±30% of the initial price, and orders beyond these bounds are rare. Thus, a fully array-based implementation that stores a large number of data pointers to locations for storing incoming orders may result in wasted memory. For example, suppose a fully array-based structure is designed for incoming orders with data pointers between $1 and $10000. In the fully array-based implementation, 10000 locations are provided even when the reference data (e.g., the initial price) is, for example, $200. However, when the reference data is selected as $200, the possible prices of received incoming orders may occupy, but are not limited to, only around 2000-3000 of those locations. Thus, only about 2000-3000 locations actually need to be provided, whereas the fully array-based structure provides 10000 locations, which again results in wasted memory.
In an embodiment, a conventional data storage and processing system may provide a fully linked list based implementation for storing a plurality of input data. In an implementation based entirely on linked lists, reaching any location pointed to by a corresponding data pointer requires traversing the linked list from the first node, which describes the first location pointed to by a data pointer, until the desired node describing the desired location is reached. Thus, if the size of the linked list is N, up to N nodes will be traversed. Traversing N nodes to reach a desired node introduces processing delay and is time consuming. Thus, the time complexity of a linked list of size N is O(N). The time complexity represents the computational complexity that describes the time required for the system to run an algorithm, and is typically estimated by counting the number of basic operations performed by the algorithm. In an embodiment, if the time taken by a computation is proportional to the input size (e.g., N), the complexity of the computation is said to be O(N). The notation O(N) is read as "order of N", which means that the run time grows at most linearly with the input size. In other words, there exists a constant c such that for every input of size N the run time is at most cN. For example, if the time to add one element is constant, or at least bounded by a constant, the time required to add all elements of a list is proportional to the length of the list. In an embodiment, the latency of search/insert/delete operations should be minimal for a data processing system. However, in conventional implementations based entirely on linked lists, there is no provision for reducing this latency. For example, suppose a linked list is created with four nodes describing four locations pointed to by corresponding data pointers, with index values of, for example, 80, 120, 280, and 320. In this case, the linked list would be formed as follows:
Here, a node symbolized "nd" is created for each received input data item (e.g., an incoming order), which may be represented by data A, data B, data C, and data D. As can be seen from the above-described linked list (which is a horizontal linked list), the first node is 80. Now suppose new input data (e.g., a new incoming order), represented as data C, is received. To store or place the new input data, a new node describing the new location pointed to by the corresponding data pointer is created and inserted into the linked list. The index value of the new node is, for example, 280. In this case, in order to insert the node with index value 280 into the linked list, it is necessary to traverse from the node with index value 80, to the node with index value 120, and then to the node with index value 280, at which point the desired node is reached. Upon reaching location 280, the new node with the new input data represented as data C is added and stored in the linked list. Clearly, with a fully linked list based implementation, 3 traversal steps are required in this case, so the time taken is 3 units.
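A non-limiting sketch of this fully linked list based case is given below; counting one step per node visited (80, 120, 280), as in the description above, the operation costs three steps and in general grows linearly with the list length, i.e., O(N). The node fields are illustrative.

class Node:
    def __init__(self, index):
        self.index = index     # location described by the data pointer
        self.data = []         # input data stored at this location
        self.next = None

# Horizontal linked list with four nodes: 80 -> 120 -> 280 -> 320
head = Node(80)
head.next = Node(120)
head.next.next = Node(280)
head.next.next.next = Node(320)

def find(head, index):
    """Traverse from the first node until the desired node is reached."""
    steps, current = 0, head
    while current is not None:
        steps += 1                     # one traversal step per node visited
        if current.index == index:
            return current, steps
        current = current.next
    return None, steps

node, steps = find(head, 280)          # visits 80, 120, 280
node.data.append("data C")             # new input data stored at the desired node
assert steps == 3                      # 3 traversal steps, 3 units of time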
In an embodiment, the present disclosure provides a modification to existing conventional systems and methods that helps optimize memory utilization and reduce latency, thereby providing a time-efficient system. The present disclosure provides an array and linked list based integrated structure for storing input data. The array structure presented in this disclosure includes a plurality of locations pointed to by a plurality of data pointers for storing a plurality of input data, wherein the locations contained in the array structure are provided for the candidate (interchangeably referred to as "possible") price range of the input data (e.g., incoming orders) determined over a particular time range. The range of candidate prices for the input data is determined by calculating the variation of the input data prices from the reference data (e.g., the initial price). It is observed that the prices of all orders received by one or more applications, such as trading system applications, during a day are limited to a range of prices around the initial price, so only input data (e.g., incoming orders) whose prices fall within the determined range of candidate prices is stored in the array structure. Since the array structure is provided only for a certain candidate price range, the size of the array structure can be reduced; the present disclosure thus helps optimize memory by using less of it. In addition, the array structure can be processed quickly, which reduces time complexity. The probability of receiving input data (e.g., an incoming order) whose price exceeds the determined candidate price range is very small; such input data is therefore stored in the linked list structure, which does not take up much space. Because there is little chance of receiving input data (e.g., an incoming order) priced outside the determined candidate price range, the need to search, insert, or store input data in the linked list is minimal, which keeps latency low and preserves time efficiency. The optimization of memory and time achieved by the proposed disclosure may be further explained by way of a non-limiting example. In this example, a data structure is created having four nodes that describe four locations pointed to by respective data pointers, with index values of, for example, 80, 120, 280, and 320. In addition, assume that the reference data (e.g., the initial price) is selected as 200. In this case, the data structure would be formed as follows:
Here, a node is created for each received input data item (e.g., an incoming order), where the received input data may be represented by data A, data B, data C, and data D (as in the example above). As can be seen in the data structure described above, a plurality of data pointers ranging from $100 to $300 is defined in the array structure, because the candidate (interchangeably referred to as "possible") price range around the initial price of the input data is determined by calculating a ±50% variation of the initial price (giving values of $100 to $300). Further, the nodes having index values 80 and 320 are included in the Linked List (LL). Assume that new input data (e.g., a new incoming order), represented as data C, is received (as in the example above). To store or place the new input data, a new node describing the new location pointed to by the corresponding data pointer is created and inserted. The index value of the new node is, for example, 280. In this case, to insert the node with index value 280 into the above data structure, the difference (the delta value), for example (280 - 200) = 80, is calculated, and the data pointer for the corresponding location is determined by shifting +80 positions in the array structure. At location 280, the desired node is identified. Rather than traversing from the node with index value 80 to the node with index value 120 to the node with index value 280, as in the fully linked list based implementation, the proposed disclosure uses the delta-based mechanism to directly identify the node with index value 280 in the array structure and stores or places the new input data at the address of the identified node. The array structure thereby enables fast processing. Upon reaching location 280, the new node with the new input data represented as data C is added and stored in the array structure. Clearly, with the proposed data structure, only one traversal step is required, so the time taken is 1 unit rather than the 3 units required by the conventional fully linked list based structure. The time complexity of the proposed data structure is O(1) for nodes in the array and O(M) for nodes in the linked list, where M represents the number of nodes contained in the linked list. Thus, the proposed data processing and storage system improves the optimization of memory and time by combining the advantages of the array structure, which provides faster processing, and the linked list structure, which provides efficient memory utilization by taking up less memory space.
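The same insertion, sketched for the hybrid structure of this example (reference data 200, ±50% range giving array pointers 100 to 300, with pointers 80 and 320 on the linked list side); the names are illustrative only.

reference = 200
low, high = 100, 300                                 # +/-50% range around the reference data
array_slots = {price: [] for price in range(low, high + 1)}
overflow = {80: [], 320: []}                         # remaining data pointers (LL)

def store(price, data):
    """Place new input data using the delta-based mechanism."""
    delta = price - reference                        # e.g. 280 - 200 = 80
    index = reference + delta                        # shift by the delta -> 280
    if low <= index <= high:
        array_slots[index].append(data)              # one step, no traversal: O(1)
    else:
        overflow.setdefault(index, []).append(data)  # rare case: O(M) linked list
    return index

assert store(280, "data C") == 280                   # stored directly at index 280
assert store(320, "data D") == 320                   # falls on the linked list side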
The steps of the illustrated method 200 are set forth to explain the exemplary embodiments shown, and it is contemplated that ongoing technical development may vary the manner in which certain functions are performed. These examples are presented herein for purposes of illustration and not limitation.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject embodiments is defined by the claims and may include other modifications that will occur to those skilled in the art. Such other modifications are intended to fall within the scope of the claims if they have similar elements that do not differ from the literal language of the claims, or if they include equivalent elements with insubstantial differences from the literal languages of the claims.
It should be understood that the scope of protection extends to such a program and, in addition, to a computer-readable means having a message therein; such computer-readable storage means contain program code means for performing one or more steps of the method when the program runs on a server or mobile device or any suitable programmable device. The hardware device may be any kind of programmable device, including, for example, any kind of computer such as a server or a personal computer, or any combination thereof. The device may also include means that could be, for example, hardware means such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA), or a combination of hardware and software means, such as an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein may be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
Embodiments herein may include both hardware and software elements. Embodiments implemented in software include, but are not limited to, firmware, resident software, microcode, etc. The functions performed by the various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The steps shown are set forth to explain the illustrative embodiments shown, and it is contemplated that ongoing technical development will alter the manner in which certain functions are performed. These examples are presented herein for purposes of illustration and not limitation. In addition, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc. of those described herein) will be apparent to those skilled in the relevant art based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Furthermore, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open ended in that one or more items following any of these words are not meant to be an exhaustive list of these one or more items, or are meant to be limited to only the listed one or more items. It must also be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be used to implement embodiments consistent with the present disclosure. Computer-readable storage media refers to any type of physical memory that can store information or data that is readable by a processor. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the one or more processors to perform steps or stages consistent with embodiments described herein. The term "computer-readable medium" shall be taken to include tangible items and not include carrier waves and transient signals, i.e., be non-transitory. Examples include Random Access Memory (RAM), read Only Memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, magnetic disks, and any other known physical storage medium.
It is intended that the disclosure and examples be considered as illustrative only, with a true scope and spirit of the disclosed embodiments being indicated by the following claims.

Claims (9)

1. A processor-implemented method, comprising:
receiving, by a processor, a plurality of input data and corresponding reference data for each of the plurality of input data, wherein the input data includes a plurality of incoming orders received at different time stamps for a plurality of transaction applications, and the corresponding reference data for each of the plurality of input data includes an initial price for the plurality of incoming orders associated with the plurality of transaction applications;
creating a data structure for storing the plurality of input data, wherein the data structure is created in at least one form including an array structure or a linked list, and wherein the created data structure further includes a plurality of data pointers defined for each of the plurality of input data;
determining a unique set of data pointers from a plurality of data pointers for each of the plurality of input data, wherein the unique set of data pointers is determined by calculating a range of candidate data pointers that vary around the corresponding reference data;
(i) defining the unique set of data pointers in the array structure, and (ii) defining remaining data pointers from the plurality of data pointers in the linked list;
determining, using a delta-based mechanism, a data pointer for new input data from one of: (i) the unique set of data pointers defined in the array structure, or (ii) remaining data pointers from the plurality of data pointers defined in the linked list, wherein during the determining of the data pointer, a node corresponding to the determined data pointer is created, wherein the delta-based mechanism determines a difference between a value of the reference data and a value of the new input data, and wherein the difference indicates an index value for determining the data pointer;
storing the new input data at an address of the created node corresponding to the determined data pointer included in one of the array structure or the linked list; and
dynamically updating at least one of the array structure and the linked list based on the new input data and the reference data.
2. The processor-implemented method of claim 1, wherein the range is configurable and determined based on the corresponding reference data.
3. The processor-implemented method of claim 1, further comprising: forming another linked list from the created node corresponding to the determined data pointer for storing subsequent input data.
4. A processor-implemented system (100), comprising:
a memory (102);
one or more communication interfaces (104); and
one or more hardware processors (106) coupled to the memory through the one or more communication interfaces, wherein the one or more hardware processors are configured to:
receive a plurality of input data and corresponding reference data for each of the plurality of input data, wherein the input data includes a plurality of incoming orders received at different time stamps for a plurality of transaction applications, and the corresponding reference data for each of the plurality of input data includes an initial price for the plurality of incoming orders associated with the plurality of transaction applications;
create a data structure for storing the plurality of input data, wherein the data structure is created in at least one form including an array structure or a linked list, and wherein the created data structure further includes a plurality of data pointers defined for each of the plurality of input data;
determine a unique set of data pointers from the plurality of data pointers for each of the plurality of input data, wherein the unique set of data pointers is determined by calculating a range of candidate data pointers that vary around the corresponding reference data;
(i) Defining the unique set of data pointers in the array structure, (ii) defining remaining data pointers from the plurality of data pointers in the linked list;
using an increment-based mechanism, a data pointer is determined for new input data from one of: (i) The unique set of data pointers defined in the array structure, or (ii) remaining data pointers from the plurality of data pointers defined in the linked list, wherein during determining the data pointers, nodes corresponding to the determined data pointers are created, wherein the delta-based mechanism determines differences between values of the reference data and values of the new input data, and wherein the differences indicate index values for determining the data pointers;
Storing the new input data at an address of a created node corresponding to the determined data pointer included in one of the array structure or linked list; and is also provided with
Dynamically updating at least one of the array structure and the linked list based on the new input data and the reference data.
5. The system of claim 4, wherein the range is configurable and determined based on the corresponding reference data.
6. The system of claim 4, wherein the one or more hardware processors are further configured to form another linked list from the created node corresponding to the determined data pointer for storing subsequent input data.
7. One or more non-transitory machine-readable information storage media comprising one or more instructions that, when executed by one or more hardware processors, cause:
receiving a plurality of input data and corresponding reference data for each of the plurality of input data, wherein the input data comprises a plurality of incoming orders received at different time stamps for a plurality of transaction applications, and the corresponding reference data for each of the plurality of input data comprises an initial price for the plurality of incoming orders associated with the plurality of transaction applications;
creating a data structure for storing the plurality of input data, wherein the data structure is created in at least one form including an array structure or a linked list, and wherein the created data structure further includes a plurality of data pointers defined for each of the plurality of input data;
determining a unique set of data pointers from the plurality of data pointers for each of the plurality of input data, wherein the unique set of data pointers is determined by calculating a range of candidate data pointers that vary around the corresponding reference data;
(i) defining the unique set of data pointers in the array structure, and (ii) defining the remaining data pointers from the plurality of data pointers in the linked list;
determining, using an increment-based mechanism, a data pointer for new input data from one of: (i) the unique set of data pointers defined in the array structure, or (ii) the remaining data pointers from the plurality of data pointers defined in the linked list, wherein during the determining of the data pointer a node corresponding to the determined data pointer is created, wherein the increment-based mechanism determines a difference between a value of the reference data and a value of the new input data, and wherein the difference indicates an index value for determining the data pointer;
storing the new input data at an address of the created node corresponding to the determined data pointer included in one of the array structure or the linked list; and
dynamically updating at least one of the array structure and the linked list based on the new input data and the reference data.
8. The one or more non-transitory machine-readable information storage media of claim 7, wherein the range is configurable and determined based on the corresponding reference data.
9. The one or more non-transitory machine-readable information storage media of claim 7, wherein the one or more instructions, when executed by the one or more hardware processors, further cause forming another linked list from the created node corresponding to the determined data pointer for storing subsequent input data.
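To show how the claimed flow might run end to end, here is a hypothetical driver for the OrderStore sketch given after claim 1 (it assumes that sketch is in scope; the reference price, range and order labels are invented for the example):

#include <iostream>

int main() {
    OrderStore store(/*reference_price=*/100, /*range=*/5);

    store.insert(100, "order-1");   // delta 0   -> array slot (unique data pointer)
    store.insert(103, "order-2");   // delta +3  -> array slot
    store.insert(112, "order-3");   // delta +12 -> outside the range, linked list node

    std::cout << "orders at 103: " << store.locate(103).orders.size() << '\n';  // prints 1
    std::cout << "orders at 112: " << store.locate(112).orders.size() << '\n';  // prints 1
    return 0;
}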
CN201911030016.6A 2018-10-26 2019-10-28 System and method for storing data in an integrated structure based on an array and a linked list Active CN111104401B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN201821040565 2018-10-26
IN201821040565 2018-10-26

Publications (2)

Publication Number Publication Date
CN111104401A CN111104401A (en) 2020-05-05
CN111104401B true CN111104401B (en) 2023-09-22

Family

ID=68382145

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911030016.6A Active CN111104401B (en) 2018-10-26 2019-10-28 System and method for storing data in an integrated structure based on an array and a linked list

Country Status (5)

Country Link
US (1) US11263203B2 (en)
EP (1) EP3644196B1 (en)
CN (1) CN111104401B (en)
AU (1) AU2019253882B2 (en)
DK (1) DK3644196T3 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036851A (en) * 2020-09-29 2020-12-04 Eastcompeace Technology Co., Ltd. Storage device of digital currency and related processing method thereof
CN112711545B (en) * 2021-03-29 2021-07-20 Guangzhou Chenqi Travel Technology Co., Ltd. Data access method based on array linked list type queue structure
US11960483B1 (en) 2021-09-16 2024-04-16 Wells Fargo Bank, N.A. Constant time data structure for single and distributed networks

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5950191A (en) * 1997-05-21 1999-09-07 Oracle Corporation Method and system for accessing an item in a linked list using an auxiliary array
US20020174147A1 (en) 2000-05-19 2002-11-21 Zhi Wang System and method for transcoding information for an audio or limited display user interface
US20030125993A1 (en) * 2001-12-27 2003-07-03 Ho Chi Fai Method and system for event distribution
US7831624B2 (en) * 2005-06-24 2010-11-09 Seagate Technology Llc Skip list with address related table structure
US8032495B2 (en) 2008-06-20 2011-10-04 Perfect Search Corporation Index compression
CN101639769B (en) * 2008-07-30 2013-03-06 International Business Machines Corp. Method and device for splitting and sequencing dataset in multiprocessor system
US8082258B2 (en) * 2009-02-10 2011-12-20 Microsoft Corporation Updating an inverted index in a real time fashion
US10564944B2 (en) * 2010-01-07 2020-02-18 Microsoft Technology Licensing, Llc Efficient immutable syntax representation with incremental change
US9836776B2 (en) * 2014-10-13 2017-12-05 Sap Se Generation and search of multidimensional linked lists and computing array
CN106598570A (en) 2016-11-15 2017-04-26 Integrated Electronic Systems Lab Co., Ltd. Construction and display method of multilevel menu in embedded system
KR20170035879A (en) 2017-03-23 2017-03-31 Park Jong-myung Futures option trading system and method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011079748A1 (en) * 2009-12-31 2011-07-07 Beijing Lenovo Software Ltd. Method and system for creating linked list, method and system for searching data
CN102479210A (en) * 2010-11-30 2012-05-30 Kingdee Software (China) Co., Ltd. Treatment method, device and terminal for adding data into set
CN103023808A (en) * 2012-12-28 2013-04-03 Nanjing University of Posts and Telecommunications Block link list structure based 6lowpan data packet repackaging buffering method
CN107004013A (en) * 2014-11-26 2017-08-01 Microsoft Technology Licensing, LLC System and method for providing distributed tree traversal using hardware based processing
CN106101022A (en) * 2016-06-15 2016-11-09 Zhuhai Maike Intelligent Technology Co., Ltd. A kind of data request processing method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Implementation of incremental updating of association rules based on tree and linked list; Hu Jinmei; Journal of Minjiang University (05); full text *

Also Published As

Publication number Publication date
EP3644196B1 (en) 2022-05-11
US11263203B2 (en) 2022-03-01
AU2019253882B2 (en) 2021-08-12
CN111104401A (en) 2020-05-05
AU2019253882A1 (en) 2020-05-14
DK3644196T3 (en) 2022-08-01
US20200133942A1 (en) 2020-04-30
EP3644196A1 (en) 2020-04-29

Similar Documents

Publication Publication Date Title
CN111104401B (en) System and method for storing data in an integrated structure based on an array and a linked list
US8321865B2 (en) Processing of streaming data with a keyed delay
CN108376364B (en) Payment system account checking method and device and terminal device
CN102129425B (en) The access method of big object set table and device in data warehouse
CN109388657B (en) Data processing method, device, computer equipment and storage medium
EP2858025A1 (en) An order book management device in a hardware platform
CN109951541A (en) A kind of serial number generation method and server
WO2017117216A1 (en) Systems and methods for caching task execution
CN110389812A (en) For managing the method, equipment and computer readable storage medium of virtual machine
CN105446990A (en) Service data processing method and equipment
WO2019153483A1 (en) Service charge determination method and apparatus, and terminal device and medium
CN109919357B (en) Data determination method, device, equipment and medium
CN108595251B (en) Dynamic graph updating method, device, storage engine interface and program medium
CN111753019A (en) Data partitioning method and device applied to data warehouse
CN111415168B (en) Transaction alarm method and device
CN112905677B (en) Data processing method and device, service processing system and computer equipment
CN107943923B (en) Telegram code database construction method, telegram code identification method and device
US11301587B2 (en) Systems and methods for masking and unmasking of sensitive data
CN114116799A (en) Abnormal transaction loop identification method, device, terminal and storage medium
CN111784512A (en) Bank-enterprise reconciliation flow processing method and device and electronic equipment
CN111198900A (en) Data caching method and device for industrial control network, terminal equipment and medium
CN108256989B (en) Data display method and system of fund preparation system
CN115329733B (en) Report form statistical method and device, computer equipment and storage medium
CN110555537A (en) Multi-factor multi-time point correlated prediction
CN116795987A (en) Transaction message processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant