CN114637775A - Query optimization system, method and equipment based on Monte Carlo tree search and reinforcement learning - Google Patents

Info

Publication number
CN114637775A
Authority
CN
China
Prior art keywords
query
node
search
plan
tree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210319758.6A
Other languages
Chinese (zh)
Inventor
王宏志
张恺欣
崔双双
丁小欧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202210319758.6A
Publication of CN114637775A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20: Information retrieval of structured data, e.g. relational data
    • G06F 16/24: Querying
    • G06F 16/245: Query processing
    • G06F 16/2453: Query optimisation
    • G06F 16/24534: Query rewriting; Transformation
    • G06F 16/24542: Plan optimisation
    • G06F 16/24545: Selectivity estimation or determination
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00: Computing arrangements using knowledge-based models
    • G06N 5/01: Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00: Computing arrangements based on specific mathematical models
    • G06N 7/01: Probabilistic graphical models, e.g. probabilistic networks


Abstract

Disclosed are a query optimization system, method and device based on Monte Carlo tree search and reinforcement learning, belonging to the technical field of computers. To solve the problems of weak compatibility and poor stability of the existing NEO query optimization method, the system of the invention adopts the same framework as the NEO query optimization model. A value model unit predicts the cost of a query plan from the plan's features using a value model, which is a neural network model. The input of the value model is a vector tree representing the query plan whose cost is to be estimated; the topology of the vector tree is a binary tree, and the node encodings are concatenated in level-order traversal of the tree. The node features of each node consist of encodings of the node information. A query plan search unit uses a Monte Carlo tree search method to search query plans according to the (query plan -> time cost) prediction, and generates an execution plan from the search space. The invention is mainly used for query optimization in computers.

Description

Query optimization system, method and equipment based on Monte Carlo tree search and reinforcement learning
Technical Field
The invention relates to a query optimization system and a query optimization method, and belongs to the technical field of computers.
Background
Currently, query optimization algorithms fall into two broad categories: traditional query optimization algorithms and AI-based query optimization algorithms. The former typically use cardinality/cost estimation techniques to screen a query plan from several candidate plans, or use predefined rules to optimize an initial query plan. This type of technique is currently the most widely used, but is relatively dated. The latter use AI techniques for cardinality/cost estimation, or directly use machine learning techniques to generate an optimized query plan. Among such methods, the most advanced and representative are the NEO and BAO techniques.
NEO is an end-to-end learning-based query optimization method that can optimize the join order, index selection and physical operator selection in a relational database. It models the query plan generation problem as a Markov model, trains a deep learning model based on a tree convolutional neural network using a reinforcement learning algorithm, and uses the model's output to gradually generate a query plan tree equivalent to the input SQL query. The input of the Markov model (i.e., the "state" in reinforcement learning) is divided into two parts, query-level encoding and plan-level encoding; the encoding schemes are shown in FIGS. 1 and 2. See the paper "Neo: A Learned Query Optimizer", or descriptions in related articles such as https://blog.csdn.net/cuiris/article/details/111466631 and https://zhuanlan.zhihu.com/p/80233443.
The query-level encoding contains the join relationships (red vectors) of the tables in the database and the predicates (blue vectors) used in the joins, while the plan-level encoding contains the structure of the whole query (i.e., the structure of the query plan tree) and the access mode information (index or direct table access) of each operator.
NEO predicts the "value" of a query plan with a tree convolutional neural network, taking the two encodings as input. The value model is trained on the historical query plans stored in the experience pool and their corresponding costs, using MSE as the loss function; the label of a sample is the minimum cost among the different plans of the same query in the experience pool.
As shown in fig. 3, the main flow of NEO is as follows:
1. An initial execution plan, called the bootstrap plan, is generated using a "weak" optimizer, such as that of PostgreSQL.
2. Experience represents a set of (execution plan -> time cost) pairs, which are the training samples of the value model.
3. The featurizer extracts two types of features: query features and execution plan features.
4. The value model is trained with the extracted features, establishing a (query plan -> time cost) prediction model.
5. A query plan search is performed on the model; here a simple greedy search is used to select an execution plan.
The query plan search maintains a min-heap whose elements are sub-query plans, prioritized by the value model's prediction. Each time, the sub-plan with minimum cost at the top of the heap (initially only the root node of the query tree) is taken, all of its legal sub-plans (children in the query plan tree) are expanded, the minimum cost (here, query execution time) of the complete query plans that each expanded sub-plan can produce is predicted, and the expansion with minimum cost is inserted back into the min-heap. This process is repeated until a sub-plan is expanded into a complete query plan.
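The greedy best-first search described above can be sketched in Python (a minimal illustration, not NEO's actual implementation; the plan representation, `expand` rule and `value_model` here are hypothetical stand-ins):

```python
import heapq
import itertools

def greedy_plan_search(root_plan, expand, value_model, is_complete):
    """Best-first search over sub-query plans, prioritized by predicted cost.

    expand      -- maps a sub-plan to its legal child sub-plans
    value_model -- predicts the minimum cost of any complete plan
                   reachable from a sub-plan (a stand-in here)
    is_complete -- tests whether a plan is fully expanded
    """
    tie = itertools.count()   # tie-breaker so the heap never compares plans
    heap = [(value_model(root_plan), next(tie), root_plan)]
    while heap:
        _, _, plan = heapq.heappop(heap)       # cheapest sub-plan on the heap
        if is_complete(plan):
            return plan
        children = expand(plan)
        if not children:
            continue
        # expand all legal sub-plans, keep the one with minimum predicted cost
        best = min(children, key=value_model)
        heapq.heappush(heap, (value_model(best), next(tie), best))
    return None

# Toy usage: a "plan" is a join order (tuple of tables); the stand-in model
# simply prefers longer (more complete) sub-plans.
tables = ("a", "b", "c")
expand = lambda p: [p + (t,) for t in tables if t not in p]
plan = greedy_plan_search((), expand, lambda p: -len(p), lambda p: len(p) == 3)
print(plan)   # ('a', 'b', 'c')
```

Because only the single cheapest child is pushed at each step, this search is greedy: a mispredicted sub-plan cost can never be revisited, which is one motivation for the Monte Carlo tree search used by the invention.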
It therefore has the following problems:
1. The technique supports only a limited subset of relational database operations, specifically only selection, projection, equi-join and aggregate queries.
2. Its performance is unstable: on some query workloads, the query plan optimized by this scheme has a longer execution time, and the problem of a long-tail distribution exists.
Disclosure of Invention
The invention aims to solve the problems of weak compatibility and poor stability of the existing NEO query optimization method.
The query optimization system based on Monte Carlo tree search and reinforcement learning adopts the same framework as the NEO query optimization model, and comprises a value model unit and a query plan search unit;
a value model unit: predicting the cost of a query plan, based on the value model, from the features corresponding to the query plan;
the value model is a neural network model; the input of the value model is a vector tree representing the query plan whose cost is to be estimated; the topology of the vector tree is a binary tree, and the node encodings are concatenated in level-order traversal of the tree; the node features of each node consist of encodings of the node information; the node information comprises the node type, the tables involved in the operation, the index types involved in the operation, the predicates involved in the operation, and the cardinality estimation result of the node;
a query plan search unit: performing query plan search according to the (query plan -> time cost) prediction using a Monte Carlo tree search method, and generating an execution plan from the search space.
Further, the node information is encoded as follows:
the node type is one-hot encoded; the tables involved in the operation are one-hot encoded; the index types involved in the operation are one-hot encoded; the predicates involved in the operation are one-hot encoded; and the cardinality estimation result of the node is an integer value.
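As an illustration of this encoding, the sketch below concatenates one-hot codes with the integer cardinality estimate. The vocabularies of node types, tables, indexes and predicates are hypothetical examples (the real sets come from the database schema and optimizer), and real operations may involve several tables, which would need a multi-hot variant:

```python
import numpy as np

def one_hot(value, vocab):
    """One-hot encode `value` against a fixed vocabulary list."""
    v = np.zeros(len(vocab), dtype=np.float32)
    v[vocab.index(value)] = 1.0
    return v

# Hypothetical vocabularies; the real sets come from the schema and optimizer.
NODE_TYPES = ["null", "seq_scan", "index_scan", "hash_join", "merge_join", "aggregate"]
TABLES     = ["users", "orders", "items"]
INDEXES    = ["none", "btree", "hash"]
PREDICATES = ["none", "users.id=orders.uid", "orders.item=items.id"]

def encode_node(node_type, table, index, predicate, cardinality):
    """Concatenate one-hot codes with the integer cardinality estimate."""
    return np.concatenate([
        one_hot(node_type, NODE_TYPES),
        one_hot(table, TABLES),
        one_hot(index, INDEXES),
        one_hot(predicate, PREDICATES),
        np.array([float(cardinality)], dtype=np.float32),  # integer-valued feature
    ])

vec = encode_node("hash_join", "users", "none", "users.id=orders.uid", 120)
print(vec.shape)  # (16,): 6 + 3 + 3 + 3 + 1
```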
Further, the neural network model used as the value model adopts a structure of multiple stacked tree convolution layers.
Further, when the query plan search unit searches using the Monte Carlo tree search method, the process of generating a better query plan is modeled as a Markov process; the Monte Carlo tree search expands the sub-query plans, the vector tree encodings of semantically consistent sub-query plans serve as tree nodes in the search, and the value model evaluates the cost of a sub-query plan at each search step, thereby realizing the Monte Carlo tree search.
Further, the process by which the query plan search unit searches using the Monte Carlo tree search method comprises the following steps:
(1) selection: starting from the root node R, recursively select the optimal child node using the UCT algorithm until a leaf node L is reached;
(2) expansion: if L is not a terminal node, create one or more child nodes that do not contradict the semantics of the query, and select one of them, C;
(3) simulation: starting from C, randomly select non-contradictory child nodes until the current query plan is completely expressed;
(4) backpropagation: predict the cost of the simulated complete query plan using the value model, and, starting from the leaf node, update the simulation counts and rewards of all its parent nodes in reverse;
(5) repeat steps (1) to (4), continually expanding the sub-query plans, until a complete query plan is generated.
The query optimization method based on Monte Carlo tree search and reinforcement learning comprises the following steps:
the value model serves as a (query plan -> time cost) prediction model, and query plan search is performed on the (query plan -> time cost) prediction model;
the value model is a neural network model; the input of the value model is a vector tree representing the query plan whose cost is to be estimated; the topology of the vector tree is a binary tree, and the node encodings are concatenated in level-order traversal of the tree; the node features of each node consist of encodings of the node information; the node information comprises the node type, the tables involved in the operation, the index types involved in the operation, the predicates involved in the operation, and the cardinality estimation result of the node;
the query plan search uses a Monte Carlo tree search method: the search proceeds according to the (query plan -> time cost) prediction, and an execution plan is generated from the search space.
Further, the training process of the value model comprises the following steps:
S1, construct an experience set, taking the execution plan and time cost corresponding to a query as an experience; the execution plan and time cost of each experience in the experience set serve as a training sample of the value model;
S2, characterize the training samples, i.e., extract features, including query features and execution plan features;
S3, train the value model with the extracted features; the trained value model is the established (query plan -> time cost) prediction model.
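Steps S1 to S3 can be sketched as follows (a toy stand-in: random feature vectors and a linear model trained by gradient descent replace the featurizer and the tree convolution network; only the (query plan -> time cost) regression with an MSE loss follows the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# S1: hypothetical experience set of (execution plan features, measured time cost).
experiences = [(rng.normal(size=8), c) for c in (3.0, 1.5, 4.0, 2.0)]

# S2: featurization; stack the extracted feature vectors into a design matrix.
X = np.stack([feat for feat, _ in experiences])
y = np.array([cost for _, cost in experiences])

# S3: train a stand-in linear value model with MSE loss by gradient descent
# (the invention uses stacked tree convolution layers; this only shows the
# training objective, not the network architecture).
w = np.zeros(8)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
    w -= 0.05 * grad
mse = float(np.mean((X @ w - y) ** 2))
print("training MSE:", round(mse, 6))   # the 4 samples are fit almost exactly
```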
The query optimization device based on Monte Carlo tree search and reinforcement learning is a storage medium, at least one instruction is stored in the storage medium, and the at least one instruction is loaded and executed by a processor to realize the query optimization method based on Monte Carlo tree search and reinforcement learning.
The query optimization device based on Monte Carlo tree search and reinforcement learning comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to realize the query optimization method based on Monte Carlo tree search and reinforcement learning.
Advantageous effects:
By improving the value model used in NEO and improving NEO's greedy-based plan search method, the invention achieves compatibility with all relational operations in a relational database, and delivers higher and more stable performance than NEO.
Drawings
FIG. 1 is a schematic illustration of NEO query level encoding;
FIG. 2 is a schematic illustration of NEO plan level encoding;
FIG. 3 is a schematic diagram of a NEO system framework;
FIG. 4 is an example of query plan vector tree encoding according to the present invention;
FIG. 5 is an example of a value model structure of the present invention;
FIG. 6 is a schematic diagram of a Monte Carlo tree search process.
Detailed Description
The first embodiment is as follows:
the embodiment is a query optimization method based on Monte Carlo tree search and reinforcement learning, and the query optimization process is realized by a query optimization system based on Monte Carlo tree search and reinforcement learning. The invention is a query optimization method based on cost and AI together with NEO, so the framework of the system (the query optimization system based on Monte Carlo tree search and reinforcement learning) of the invention is also shown in FIG. 3, and both the cost of the query plan is predicted by using a pre-trained Value Model (namely Value Model) and the search of the query plan is guided. The difference between them is the network structure of the value model and the query Plan Search (Plan Search) method; namely: the system of the invention has the same framework as the NEO query optimization model, and comprises a value model unit and a query plan search unit;
a value model unit: predicting the cost of a query plan, based on the value model, from the features corresponding to the query plan;
the value model is a neural network model; the input of the value model is a vector tree representing the query plan whose cost is to be estimated; the topology of the vector tree is a binary tree, and the node encodings are concatenated in level-order traversal of the tree; the node features of each node consist of encodings of the node information; the node information comprises the node type, the tables involved in the operation, the index types involved in the operation, the predicates involved in the operation, and the cardinality estimation result of the node;
a query plan search unit: performing query plan search according to the (query plan -> time cost) prediction using a Monte Carlo tree search method, and generating an execution plan from the search space.
The query optimization method based on Monte Carlo tree search and reinforcement learning comprises the following steps:
1. An initial execution plan, called the bootstrap plan, is generated using a "weak" optimizer, such as that of PostgreSQL.
2. Experience represents a set of (execution plan -> time cost) pairs, which are the training samples of the value model.
3. The featurizer extracts features: query features and execution plan features.
4. The neural network provided by the invention is used as the value model; the extracted features are used to train it, establishing a (query plan -> time cost) prediction model.
5. Unlike NEO, a Monte Carlo tree search method is used to generate an execution plan from the search space.
In terms of the value model, since query cardinality and query cost are generally linearly related, and an accurate cardinality estimation result can effectively improve the prediction accuracy of the query cost, a cardinality estimation strategy based on data distribution is introduced into the value model as a feature. That is, when estimating the cost of a given query plan, the Naru scheme, one of the most effective cardinality estimation techniques, is used to estimate the cardinality of each node in the query plan. The estimation results are encoded into the vector tree as input to the value model.
The input of the value model is a vector tree, which represents the query plan whose cost is to be estimated. As shown in FIG. 4, the input of the value model is as follows:
Topology: the node encodings are concatenated in level-order traversal of the tree;
Node features: a one-dimensional vector per node, composed of the encodings of the following information:
node type: one-hot encoding (supporting all physical relational operations, including null nodes);
tables involved in the operation: one-hot encoding;
index types involved in the operation: one-hot encoding;
predicates involved in the operation: one-hot encoding;
cardinality estimation result of the node: an integer value.
through the coding, the query level coding and the plan level coding in the NEO are represented in a unified and combined mode, and the extraction of the characteristics of a machine learning model is facilitated. Due to the change of input, the network architecture of the value model is also different from the NEO, a multi-layer stacked tree convolution layer architecture is adopted, and residual concatenation and batch normalization-full concatenation layer (BN) are introduced for stable training, as shown in fig. 5.
Finally, the prediction objective of the value model also differs from NEO's: NEO predicts the minimum cost of the complete query plans that each expanded sub-plan can produce, whereas the present method simply predicts the cost of the input query plan.
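The level-order flattening of the vector tree can be sketched as follows (the node vectors here are 2-dimensional placeholders rather than the real encodings; a zero vector stands in for the null-node encoding, keeping the binary-tree topology explicit):

```python
from collections import deque
import numpy as np

def flatten_level_order(root):
    """Flatten a binary plan tree into one vector by level-order traversal.
    Each node is (vector, left_child, right_child); a missing child of an
    internal node is emitted as a zero 'null node' encoding."""
    dim = len(root[0])
    out, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        if node is None:
            out.append(np.zeros(dim))     # null-node placeholder
            continue
        vec, left, right = node
        out.append(np.asarray(vec, dtype=float))
        if left is not None or right is not None:
            queue.append(left)
            queue.append(right)
    return np.concatenate(out)

# Toy tree: join(scan A, scan B); each node encoded as a 2-d placeholder.
leaf_a = ([0.0, 1.0], None, None)
leaf_b = ([0.0, 2.0], None, None)
root = ([1.0, 0.0], leaf_a, leaf_b)
print(flatten_level_order(root))  # [1. 0. 0. 1. 0. 2.]
```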
In terms of query plan search, the invention models the process of generating a better query plan as a Markov process, expands the sub-query plans using Monte Carlo Tree Search (MCTS), and employs the value model as the heuristic function of the search. Note: during the search, the query optimizer of a conventional database (e.g., PostgreSQL) determines the search space at each step, i.e., which expandable child nodes are searched, so that the finally generated query plan is semantically consistent with the initial query plan. All current cost-based query optimizers support this function, so it is not described in detail. The vector tree encodings of semantically consistent sub-query plans serve as tree nodes in the Monte Carlo tree search, and the value model evaluates the cost of a sub-query plan at each search step.
As shown in fig. 6, the monte carlo tree search process is as follows:
(1) Selection: starting from the root node R, recursively select the optimal child node using the UCT algorithm until a leaf node L is reached.
(2) Expansion: if L is not a terminal node (i.e., the complete query plan cannot yet be fully expressed), create one or more child nodes that do not contradict the semantics of the query based on the rules, and select one of them, C.
(3) Simulation: starting from C, randomly select non-contradictory child nodes, using the same rules as in step (2), until the current query plan is completely expressed.
(4) Backpropagation: predict the cost of the simulated complete query plan using the value model, and, starting from this leaf node, update the simulation counts and rewards (i.e., average cost) of all its parent nodes in reverse.
(5) Repeat steps (1) to (4), continually expanding the sub-query plans, until a complete query plan is generated.
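The five phases above can be sketched as follows (a minimal, self-contained illustration with a toy join-order space and a stand-in cost function; in the actual system, `expand` is delegated to a conventional optimizer so that only semantically consistent sub-plans are generated, and the trained neural value model replaces the toy `cost`):

```python
import math
import random

random.seed(1)

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.reward_sum = [], 0, 0.0

def uct_select(node, c=1.4):
    """Pick the child maximizing the UCT score; unvisited children first."""
    def score(child):
        if child.visits == 0:
            return float("inf")
        exploit = child.reward_sum / child.visits
        explore = c * math.sqrt(math.log(node.visits) / child.visits)
        return exploit + explore
    return max(node.children, key=score)

def mcts(root_state, expand, is_complete, cost_model, iters=200):
    root = Node(root_state)
    for _ in range(iters):
        # (1) Selection: descend via UCT while the node is already expanded.
        node = root
        while node.children:
            node = uct_select(node)
        # (2) Expansion: create legal children of a non-terminal leaf node.
        if not is_complete(node.state):
            node.children = [Node(s, node) for s in expand(node.state)]
            if node.children:
                node = random.choice(node.children)
        # (3) Simulation: random legal expansions until the plan is complete.
        state = node.state
        while not is_complete(state):
            state = random.choice(expand(state))
        # (4) Backpropagation: negative predicted cost is the reward; update
        # visit counts and reward sums from the leaf up to the root.
        reward = -cost_model(state)
        while node is not None:
            node.visits += 1
            node.reward_sum += reward
            node = node.parent
    # (5) After the iterations, return the most-visited first expansion step.
    return max(root.children, key=lambda ch: ch.visits).state

# Toy usage: pick a 3-table join order; pretend plans ending in "c" are cheap.
tables = ("a", "b", "c")
expand = lambda p: [p + (t,) for t in tables if t not in p]
cost = lambda p: 1.0 if p[-1] == "c" else 10.0
best = mcts((), expand, lambda p: len(p) == 3, cost)
print(best)   # a first step whose completions tend to end in "c"
```

Unlike the greedy NEO search, the visit statistics let the search revisit alternatives whose early cost predictions looked poor.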
The second embodiment is as follows:
the embodiment is a query optimization device based on monte carlo tree search and reinforcement learning, which is a storage medium having at least one instruction stored therein, and the at least one instruction is loaded and executed by a processor to implement the query optimization method based on monte carlo tree search and reinforcement learning according to one of claims 6 to 8.
The storage medium in this embodiment includes, but is not limited to, a USB flash drive, a hard disk, and the like.
The third embodiment is as follows:
the embodiment is a query optimization device based on monte carlo tree search and reinforcement learning, the device comprises a processor and a memory, the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to realize the query optimization method based on monte carlo tree search and reinforcement learning according to one of claims 6 to 8.
The devices in this embodiment include, but are not limited to, mobile terminals, PCs, servers, workstations, and the like.
The present invention is capable of other embodiments and its several details are capable of modifications in various obvious respects, all without departing from the spirit and scope of the present invention.

Claims (10)

1. A query optimization system based on Monte Carlo tree search and reinforcement learning, adopting the same framework as an NEO query optimization model and comprising a value model unit and a query plan search unit, characterized in that:
a value model unit: predicting the cost of a query plan, based on the value model, from the features corresponding to the query plan;
the value model is a neural network model; the input of the value model is a vector tree representing the query plan whose cost is to be estimated; the topology of the vector tree is a binary tree, and the node encodings are concatenated in level-order traversal of the tree; the node features of each node consist of encodings of the node information; the node information comprises the node type, the tables involved in the operation, the index types involved in the operation, the predicates involved in the operation, and the cardinality estimation result of the node;
a query plan search unit: performing query plan search according to the (query plan -> time cost) prediction using a Monte Carlo tree search method, and generating an execution plan from the search space.
2. The system of claim 1, wherein the node information is encoded as follows:
the node type is one-hot encoded; the tables involved in the operation are one-hot encoded; the index types involved in the operation are one-hot encoded; the predicates involved in the operation are one-hot encoded; and the cardinality estimation result of the node is an integer value.
3. The system of claim 2, wherein the neural network model used as the value model comprises a stacked tree convolution unit followed by two batch-normalization-based fully connected layers; the stacked tree convolution unit is composed of a plurality of stacked tree convolution layers.
4. The system according to claim 1, 2 or 3, wherein, when the query plan search unit searches using the Monte Carlo tree search method, the process of generating a better query plan is modeled as a Markov process; the Monte Carlo tree search expands the sub-query plans, the vector tree encodings of semantically consistent sub-query plans serve as tree nodes in the search, and the value model evaluates the cost of the sub-query plans at each search step, thereby realizing the Monte Carlo tree search.
5. The system of claim 4, wherein the query plan search unit searches using the Monte Carlo tree search method through the following steps:
(1) selection: starting from the root node R, recursively select the optimal child node using the UCT algorithm until a leaf node L is reached;
(2) expansion: if L is not a terminal node, create one or more child nodes that do not contradict the semantics of the query, and select one of them, C;
(3) simulation: starting from C, randomly select non-contradictory child nodes until the current query plan is completely expressed;
(4) backpropagation: predict the cost of the simulated complete query plan using the value model, and, starting from the leaf node, update the simulation counts and rewards of all its parent nodes in reverse;
(5) repeat steps (1) to (4), continually expanding the sub-query plans, until a complete query plan is generated.
6. The query optimization method based on Monte Carlo tree search and reinforcement learning is characterized by comprising the following steps of:
the value model serves as a (query plan -> time cost) prediction model, and query plan search is performed on the (query plan -> time cost) prediction model;
the value model is a neural network model; the input of the value model is a vector tree representing the query plan whose cost is to be estimated; the topology of the vector tree is a binary tree, and the node encodings are concatenated in level-order traversal of the tree; the node features of each node consist of encodings of the node information; the node information comprises the node type, the tables involved in the operation, the index types involved in the operation, the predicates involved in the operation, and the cardinality estimation result of the node;
the query plan search uses a Monte Carlo tree search method: the search proceeds according to the (query plan -> time cost) prediction, and an execution plan is generated from the search space.
7. The method of claim 6, wherein the training process of the value model comprises the following steps:
S1, construct an experience set, taking the execution plan and time cost corresponding to a query as an experience; the execution plan and time cost of each experience in the experience set serve as a training sample of the value model;
S2, characterize the training samples, i.e., extract features, including query features and execution plan features;
S3, train the value model with the extracted features; the trained value model is the established (query plan -> time cost) prediction model.
8. The method of claim 7, wherein the search using the Monte Carlo tree search method comprises the following steps:
(1) selection: starting from the root node R, recursively select the optimal child node using the UCT algorithm until a leaf node L is reached;
(2) expansion: if L is not a terminal node, create one or more child nodes that do not contradict the semantics of the query, and select one of them, C;
(3) simulation: starting from C, randomly select non-contradictory child nodes until the current query plan is completely expressed;
(4) backpropagation: predict the cost of the simulated complete query plan using the value model, and, starting from the leaf node, update the simulation counts and rewards of all its parent nodes in reverse;
(5) repeat steps (1) to (4), continually expanding the sub-query plans, until a complete query plan is generated.
9. Query optimization device based on monte carlo tree search and reinforcement learning, characterized in that the device is a storage medium having stored therein at least one instruction, which is loaded and executed by a processor to implement the method for monte carlo tree search and reinforcement learning based query optimization according to one of claims 6 to 8.
10. Query optimization device based on monte carlo tree search and reinforcement learning, characterized in that the device comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the query optimization method based on monte carlo tree search and reinforcement learning according to one of claims 6 to 8.
CN202210319758.6A 2022-03-29 2022-03-29 Query optimization system, method and equipment based on Monte Carlo tree search and reinforcement learning Pending CN114637775A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210319758.6A CN114637775A (en) 2022-03-29 2022-03-29 Query optimization system, method and equipment based on Monte Carlo tree search and reinforcement learning


Publications (1)

Publication Number Publication Date
CN114637775A true CN114637775A (en) 2022-06-17

Family

ID=81951440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210319758.6A Pending CN114637775A (en) 2022-03-29 2022-03-29 Query optimization system, method and equipment based on Monte Carlo tree search and reinforcement learning

Country Status (1)

Country Link
CN (1) CN114637775A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115168408A (en) * 2022-08-16 2022-10-11 北京永洪商智科技有限公司 Query optimization method, device, equipment and storage medium based on reinforcement learning
CN115168408B (en) * 2022-08-16 2024-05-28 北京永洪商智科技有限公司 Query optimization method, device, equipment and storage medium based on reinforcement learning
US20230315702A1 (en) * 2022-03-30 2023-10-05 Microsoft Technology Licensing, Llc Constraint-based index tuning in database management systems utilizing reinforcement learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160103900A1 (en) * 2014-10-08 2016-04-14 University Of Lancaster Data structuring and searching methods and apparatus
CN111611274A (en) * 2020-05-28 2020-09-01 华中科技大学 Database query optimization method and system
CN113515540A (en) * 2021-06-09 2021-10-19 清华大学 Query rewriting method for database
CN114036388A (en) * 2021-11-16 2022-02-11 北京百度网讯科技有限公司 Data processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination