CN107025240A - Caching method and system for ontology queries in a semantic network - Google Patents
Caching method and system for ontology queries in a semantic network Download PDF Info
- Publication number
- CN107025240A (application number CN201610072372.4A)
- Authority
- CN
- China
- Prior art keywords
- caching
- query
- ontology
- composition result
- cached
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/951—Indexing; Web crawling techniques
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention provides a caching method for ontology queries in a semantic network. When an ontology query is executed, the ontology query statement is parsed into a group of database query instructions. An instruction prefix tree is then consulted to determine, in turn, whether the combined query result corresponding to the database query instruction group exists in a first cache and a second cache. If the result is absent from the first cache but present in the second cache, an exchange between the first cache and the second cache is performed so that the combined query result is cached in the first cache. If the result is absent from both caches, it is looked up in the ontology database, and when it is determined to satisfy a caching condition it is cached in the first cache. The invention also provides a corresponding system. The invention achieves a high cache utilization rate and a high cache hit rate, and is characterized by low query latency.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a caching method and system for ontology queries in a semantic network.
Background technology
With the development of the Internet, mining information that is useful and that meets users' actual needs from massive amounts of data has become increasingly important, and has become an urgent problem to be solved.
The semantic web, proposed by Tim Berners-Lee, the father of the World Wide Web, represents resources on the Web in a structured and formalized way so that computers can automatically analyze and reason over these resources and return corresponding results to users. As the core of the semantic web, ontologies describe the semantic information of data. Compared with existing data models, an ontology can describe the semantic structure of more complex objects and can express heterogeneous, distributed, semi-structured Web information resources. Because of the enormous number and scale of ontologies in the semantic web, and the extremely complex relationships between ontologies, how to store ontologies and query them conveniently has become an extremely important problem in the semantic web.
At present, relational databases alleviate the problem of time-consuming and inefficient queries by establishing a query caching mechanism: when a user executes the same query again, the corresponding data can be obtained directly from the cache without reading the data files on disk. However, while this approach is effective for Web applications, it works poorly in the semantic web. Because cache capacity is limited, usable cached data may be overwritten by new data, invalidating the cache and degrading ontology query performance. In particular, parsing a semantic query into database queries may produce a group of interrelated query instructions, in which case a conventional query caching mechanism cannot take effect.
The content of the invention
The technical problem to be solved by the present invention is, in view of the importance of ontology storage in the semantic web and the above-mentioned shortcomings of prior-art semantic networks that perform ontology queries through a relational database with a query caching mechanism, to provide a caching method and system for ontology queries in a semantic network.
The technical solution adopted by the invention to solve the above problem is to provide a caching method for ontology queries in a semantic network, the caching method comprising the following steps:
S1. When an ontology query is executed, parsing the ontology query statement into a database query instruction group, the database query instruction group comprising a plurality of database query instructions;
S2. Determining in turn, according to an instruction prefix tree, whether a combined query result corresponding to the database query instruction group exists in a first cache and a second cache;
S3. When the combined query result is absent from the first cache but present in the second cache, performing an exchange between the first cache and the second cache so that the combined query result is cached in the first cache, and returning the combined query result;
S4. When the combined query result is absent from both the first cache and the second cache, looking it up in an ontology database, and when the combined query result is determined to satisfy a caching condition, caching it in the first cache and returning the combined query result.
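As an illustration of step S1, the sketch below parses a small SPARQL-like statement into a group of database query instructions. The triple-store schema, the instruction format, and the parsing rules are hypothetical assumptions for illustration; the patent does not specify a concrete parser.

```python
# Hypothetical sketch of step S1: each triple pattern in the WHERE clause
# becomes one database query instruction. Schema and format are assumed.

def parse_ontology_query(sparql: str) -> list[str]:
    """Turn each triple pattern of a WHERE clause into one instruction."""
    body = sparql.split("{", 1)[1].rsplit("}", 1)[0]
    instructions = []
    for pattern in body.split("."):
        pattern = pattern.strip()
        if not pattern:
            continue
        s, p, o = pattern.split()  # subject, predicate, object
        instructions.append(f"SELECT FROM triples WHERE s={s} AND p={p} AND o={o}")
    return instructions

query = "SELECT ?x WHERE { ?x rdf:type :Person . ?x :worksFor :ACME }"
group = parse_ontology_query(query)
```

The resulting instruction group, not the original statement, is what steps S2 to S4 look up, which is what relaxes the complete-match requirement described later.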
In the above caching method for ontology queries in a semantic network, after step S2 the method further comprises:
S5. When the combined query result exists in the first cache, returning the combined query result.
In the above caching method for ontology queries in a semantic network, step S3 comprises:
transferring the combined query result from the second cache into the first cache;
depositing into the second cache the cached data displaced from the first cache by the combined query result;
returning the combined query result.
In the above caching method for ontology queries in a semantic network, step S4 further comprises:
depositing into the second cache the first cached data displaced from the first cache by the combined query result, and discarding the second cached data displaced from the second cache by the first cached data.
In the above caching method for ontology queries in a semantic network, step S4 further comprises:
when, upon lookup in the ontology database, the combined query result is determined not to satisfy the caching condition, caching the combined query result directly in the cache of the ontology database.
The invention also provides a caching system for ontology queries in a semantic network, the caching system comprising a parsing module, a first cache, a second cache, and an ontology database, wherein:
the parsing module is configured to parse, when an ontology query is executed, the ontology query statement into a database query instruction group, the database query instruction group comprising a plurality of database query instructions;
the first cache and the second cache are configured to determine in turn, according to an instruction prefix tree, whether a combined query result corresponding to the database query instruction group exists;
the second cache is further configured to, when the combined query result is absent from the first cache but present in the second cache, perform an exchange with the first cache so that the combined query result is cached in the first cache, and return the combined query result;
when the combined query result is absent from both the first cache and the second cache, the ontology database is configured to look it up and, when the combined query result is determined to satisfy a caching condition, cache it in the first cache and return the combined query result.
In the above caching system for ontology queries in a semantic network, the first cache is further configured to return the combined query result when the combined query result exists in the first cache.
In the above caching system for ontology queries in a semantic network, the second cache is further configured to transfer the combined query result from the second cache into the first cache; and the first cache is further configured to deposit into the second cache the first cached data displaced by the combined query result.
In the above caching system for ontology queries in a semantic network, the first cache is further configured to deposit into the second cache the first cached data displaced from the first cache by the combined query result; and the second cache is further configured to discard the second cached data displaced by the first cached data.
In the above caching system for ontology queries in a semantic network, the ontology database is further configured to cache the combined query result directly in the cache of the ontology database when the combined query result does not satisfy the caching condition.
Implementing the caching method and system for ontology queries in a semantic network of the present invention has the following beneficial effects:
First, because a first cache and a second cache are built between the ontology query and the ontology database, the cache granularity of ontology queries is refined from the level of whole ontology query statements down to individual database query instructions, relaxing the requirement for a complete match of the ontology query statement and improving the probability of a cache hit.
Second, because the second cache is independent of the ontology database, the ontology database's own caching mechanism need not be modified, so the system is well compatible with various existing ontology databases and has wide applicability.
Finally, the query results with the highest query cost are preferentially kept in the first cache, while during system operation the ontology database's cache stays fresh in memory and performs well in the short term; together, the two improve cache performance and better balance cost and benefit during querying.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of an embodiment of the caching system for ontology queries in a semantic network according to the present invention.
Fig. 2 is a flowchart of an embodiment of the caching method for ontology queries in a semantic network according to the present invention.
Fig. 3 is a detailed flowchart of the caching method embodiment of Fig. 2.
Embodiment
The present invention builds a first cache and a second cache between the ontology query and the ontology database, refining the cache granularity of ontology queries from whole ontology query statements down to individual database query instructions, thereby relaxing the requirement for a complete match of the ontology query statement and improving the probability of a cache hit. Moreover, because the second cache is independent of the ontology database, the ontology database's own caching mechanism need not be modified, so the invention is well compatible with various existing ontology databases and has wide applicability. Meanwhile, the query results with the highest query cost are preferentially kept in the first cache, while the ontology database's cache stays fresh in memory and performs well in the short term; together, the two improve cache performance and better balance cost and benefit during querying.
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
Fig. 1 is a structural schematic diagram of an embodiment of the caching system for ontology queries in a semantic network according to the present invention. Referring to Fig. 1, the caching system 100 includes a parsing module 110, a first cache 120, a second cache 130, and an ontology database 140. The parsing module 110 is connected to the first cache 120; the first cache 120 and the second cache 130 are interconnected; and the first cache 120 is also connected to the ontology database 140. The first cache 120 and the second cache 130 differ in response speed, the response speed of the first cache 120 being greater than that of the second cache 130. The first cache 120 is a memory cache, which guarantees a high response speed; the second cache 130 is a database cache, which increases cache capacity: it is indexed in memory and stores query statement/query result pairs as key-value data. As can be seen, the ontology database 140 and the second cache 130 are independent of each other, so a higher lookup efficiency can be achieved.
In an embodiment of the present invention, the parsing module 110 is configured to parse, when an ontology query is executed, the ontology query statement into a database query instruction group, wherein the database query instruction group comprises a plurality of database query instructions.
When an ontology query is executed, the first cache 120 and then the second cache 130 determine, according to the instruction prefix tree, whether the combined query result corresponding to the database query instruction group exists. In this embodiment, the cached data in the first cache 120 and the second cache 130 may be the query result of a single database query instruction or the query result of a database query instruction group, and it is managed through the instruction prefix tree. The instruction prefix tree is built from query statement indexes and comprises multiple nodes, each node representing one of the database query instructions parsed, in a certain order, from the ontology query statement; a node records, for the prefix ending at that node, the location of the query result corresponding to a successful prefix match. The query statement index is obtained by encoding the query statement, so that query statements of different lengths yield indexes of consistent form after encoding, which is convenient for constructing the instruction prefix tree. In an embodiment of the invention, if the combined query result for the database query instruction group as a whole is not found in the first cache 120 and the second cache 130, each database query instruction in the group is looked up individually; in that case, the query result corresponding to each database query instruction belongs to the combined query result.
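The instruction prefix tree described above can be sketched as follows: each edge is keyed on one database query instruction, and a node records the location of the cached result for the instruction prefix ending at that node. The field names and the location labels are illustrative assumptions, not the patent's data layout.

```python
# Illustrative sketch of the instruction prefix tree: a trie over
# database query instructions whose nodes record where the cached
# result for their prefix lives (e.g. "cache1" or "cache2").

class PrefixNode:
    def __init__(self):
        self.children = {}           # instruction -> PrefixNode
        self.result_location = None  # set when this prefix has a cached result

class InstructionPrefixTree:
    def __init__(self):
        self.root = PrefixNode()

    def insert(self, instructions, location):
        """Record that the result for this instruction group is cached at `location`."""
        node = self.root
        for ins in instructions:
            node = node.children.setdefault(ins, PrefixNode())
        node.result_location = location

    def lookup(self, instructions):
        """Return the cached-result location for the full group, or None on a miss."""
        node = self.root
        for ins in instructions:
            node = node.children.get(ins)
            if node is None:
                return None
        return node.result_location

tree = InstructionPrefixTree()
tree.insert(["q1", "q2"], "cache1")
```

A lookup walks the group instruction by instruction, so a shared prefix between two instruction groups is traversed only once, which is what makes per-instruction cache granularity cheap to check.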
When the combined query result exists in the first cache 120, the first cache 120 returns the combined query result.
When the combined query result is absent from the first cache 120 but present in the second cache 130, the second cache 130 performs, using a cache replacement algorithm, an exchange with the first cache 120 so that the combined query result is cached in the first cache 120, and returns the combined query result. Specifically, the second cache 130 transfers the combined query result into the first cache 120, and the first cache 120 deposits into the second cache 130 the first cached data displaced by the combined query result. The exchange of cached data between the first cache 120 and the second cache 130 is thus realized.
When the combined query result is absent from both the first cache 120 and the second cache 130, the ontology database 140 determines upon lookup whether the combined query result satisfies the caching condition; if it does, the combined query result is cached in the first cache 120 and returned. Here, the caching condition is whether the query cost of this ontology query is higher than a second preset value; in this embodiment, the query cost is the cost that executing the query directly in the ontology database 140 would incur. Further, the first cache 120 deposits into the second cache 130 the first cached data displaced by the combined query result, and the second cache 130 discards the second cached data displaced by the first cached data. The exchange of cached data between the first cache 120 and the second cache 130 is again realized. As can be seen, only when the combined query result is absent from both the first cache 120 and the second cache 130 is it looked up in the ontology database 140, and only when it satisfies the caching condition is it cached in the first cache 120; that is, the query results with the highest query cost are cached preferentially in the first cache 120, which prevents the same database query instructions from being executed repeatedly in the ontology database 140.
In an embodiment of the present invention, the replacement of cached data in the first cache 120 and the second cache 130 is realized with cache replacement algorithms: the replacement algorithm of the first cache 120 is the LRU algorithm or the ARC algorithm, while in the replacement algorithm of the second cache 130 the query cost is weighted, the weight being determined by the magnitude of the query cost of this ontology query; for example, when the query cost of this ontology query is higher than a first preset value, its weight is correspondingly increased.
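A minimal sketch of the second cache's cost-weighted replacement, under the assumption that costs above the first preset value simply receive a doubled weight (the patent leaves the concrete weighting open): the entry with the smallest weighted cost is the eviction candidate, so expensive queries survive longest.

```python
# Hedged sketch of cost-weighted replacement for the second cache.
# The threshold and the 2x weight are illustrative assumptions.

FIRST_PRESET = 100.0  # hypothetical "first preset value"

def weighted_cost(cost: float) -> float:
    weight = 2.0 if cost > FIRST_PRESET else 1.0  # costlier -> heavier weight
    return cost * weight

def evict_candidate(entries: dict) -> str:
    """entries maps key -> query cost; return the cheapest key to evict."""
    return min(entries, key=lambda k: weighted_cost(entries[k]))

cache2_costs = {"q_cheap": 5.0, "q_mid": 60.0, "q_expensive": 150.0}
victim = evict_candidate(cache2_costs)
```

Doubling the weight above the threshold widens the gap between expensive and cheap entries, biasing eviction toward results that are cheap to recompute.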
Further, the ontology database 140 caches the combined query result directly in its own cache when the combined query result does not satisfy the caching condition. In this way, only query results stored in neither the first cache 120 nor the second cache 130 are cached in the cache of the ontology database 140. The next time the same query statement is issued, by the caching method of embodiments of the invention, the combined query result corresponding to that query instruction is cached in the first cache 120 or the second cache 130 rather than directly in the cache of the ontology database 140, which avoids the same query statement being repeatedly parsed and executed in the ontology database 140.
Fig. 2 is a flowchart of an embodiment of the caching method for ontology queries in a semantic network according to the present invention. Referring to Fig. 2, the method comprises the following steps:
S10. When an ontology query is executed, parsing the ontology query statement into a database query instruction group;
S20. Determining in turn, according to the instruction prefix tree, whether the combined query result corresponding to the database query instruction group exists in the first cache 120 and the second cache 130;
S30. When the combined query result is absent from the first cache 120 but present in the second cache 130, performing an exchange between the first cache 120 and the second cache 130 so that the combined query result is cached in the first cache 120, and returning the combined query result of this ontology query;
S40. When the combined query result is absent from both the first cache 120 and the second cache 130, looking it up in the ontology database 140, and when the combined query result is determined to satisfy the caching condition, caching it in the first cache 120 and returning the combined query result.
The working of the caching method for ontology queries in a semantic network according to the present invention is described in detail below with reference to the drawings, as shown in Fig. 3:
In step S201, when an ontology query is executed, the ontology query statement is parsed into a database query instruction group, the database query instruction group comprising a plurality of database query instructions.
In step S202, it is determined according to the instruction prefix tree whether the combined query result corresponding to the database query instruction group exists in the first cache 120. If it exists in the first cache 120, step S211 is performed and the combined query result is returned. If it does not exist in the first cache 120, step S203 is performed: it is determined according to the instruction prefix tree whether the combined query result corresponding to the database query instruction group exists in the second cache 130. If it exists in the second cache 130, step S209 is performed; if not, step S204 is performed. At this point, the first cache 120 and the second cache 130 have been checked in turn for the combined query result corresponding to the database query instruction group.
In step S204, the combined query result being absent from both the first cache 120 and the second cache 130, it is looked up in the ontology database 140 and it is determined whether it satisfies the caching condition. If it does, step S205 is performed; if it does not, the combined query result is cached directly in the cache of the ontology database 140 and step S211 is performed to return it. In this way, only query results deposited in neither the first cache 120 nor the second cache 130 are cached in the cache of the ontology database 140; the next time the same query statement is issued, by the caching method of embodiments of the invention, its combined query result is cached in the first cache 120 or the second cache 130 rather than directly in the cache of the ontology database 140, which avoids the same query statement being repeatedly parsed and executed in the ontology database 140.
Then, in step S205, the combined query result is cached into the first cache 120; in step S206, the first cached data displaced from the first cache 120 by the combined query result is deposited into the second cache 130; and in step S207, the second cached data displaced from the second cache 130 by that first cached data is discarded. As can be seen, steps S205 to S207 realize the exchange between the first cache 120 and the second cache 130, so that the next time the query statements of the same ontology query are executed, the query result can be returned directly from the cache without being executed in the ontology database 140. Moreover, with the above caching method, the query result of that same ontology query is transferred into the first cache 120, reducing query latency and improving the cache hit rate. In an embodiment of the invention, the caching condition is whether the query cost of this ontology query is higher than a second preset value, the query cost being the cost that executing the query directly in the ontology database 140 would incur. After step S207, step S211 is performed and the combined query result is returned.
In step S209, the combined query result being absent from the first cache 120 but present in the second cache 130, it is transferred from the second cache 130 into the first cache 120; then, in step S210, the first cached data displaced from the first cache 120 by the combined query result is deposited into the second cache 130; then, or simultaneously, step S211 is performed and the combined query result is returned. The exchange between the first cache 120 and the second cache 130 is again realized.
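The overall lookup flow of steps S201 to S211 can be sketched as below. The caches are plain dicts keyed on the instruction group, the caching condition is a simple cost threshold standing in for the second preset value, and the second cache is left unbounded (so the discard of step S207 is omitted); all names are illustrative assumptions rather than the patent's implementation.

```python
# Simplified sketch of the Fig. 3 lookup flow (steps S201-S211).

SECOND_PRESET = 10.0  # hypothetical "second preset value" for the caching condition

def lookup(group, cache1, cache2, ontology_db):
    key = tuple(group)
    if key in cache1:                    # S202 hit -> S211: return directly
        return cache1[key]
    if key in cache2:                    # S203 hit -> S209/S210: promote and swap
        result = cache2.pop(key)
        if cache1:                       # demote one first-cache entry
            old = next(iter(cache1))
            cache2[old] = cache1.pop(old)
        cache1[key] = result
        return result                    # S211
    result, cost = ontology_db[key]      # S204: look up in the ontology database
    if cost > SECOND_PRESET:             # caching condition met -> S205/S206
        if cache1:
            old = next(iter(cache1))
            cache2[old] = cache1.pop(old)
        cache1[key] = result
    return result                        # S211

db = {("i1", "i2"): ("persons", 42.0)}
c1, c2 = {}, {}
r = lookup(["i1", "i2"], c1, c2, db)
```

On the first call the result comes from the database and, its cost exceeding the threshold, lands in the first cache; a repeated call is then served from the cache without touching the database.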
In an embodiment of the present invention, the cached data in the first cache 120 and the second cache 130 is managed through the instruction prefix tree. The instruction prefix tree is built from query statement indexes and comprises multiple nodes, each node representing one of the database query instructions parsed, in a certain order, from the ontology query statement; a node records, for the prefix ending at that node, the location of the query result corresponding to a successful prefix match. The query statement index is obtained by encoding the query statement, so that query statements of different lengths yield indexes of consistent form after encoding, which is convenient for constructing the instruction prefix tree.
In an embodiment of the present invention, the replacement of cached data in the first cache 120 and the second cache 130 is realized with cache replacement algorithms: the replacement algorithm of the first cache 120 is the LRU algorithm or the ARC algorithm, while in the replacement algorithm of the second cache 130 the query cost is weighted, the weight being determined by the magnitude of the query cost of this ontology query; for example, when the query cost of this ontology query is higher than a first preset value, its weight is correspondingly increased.
In summary, the caching method and system for ontology queries in a semantic network according to the present invention achieve a high cache utilization rate and a high cache hit rate, and are characterized by low query latency.
The foregoing is only a preferred embodiment of the present invention, but the scope of protection of the invention is not limited thereto; any change or replacement that a person skilled in the art could readily conceive within the technical scope disclosed by the invention shall fall within the scope of protection of the invention. Therefore, the scope of protection of the invention shall be defined by the scope of the claims.
Claims (10)
1. A caching method for ontology queries in a semantic network, characterized in that the caching method comprises the following steps:
S1. When an ontology query is executed, parsing the ontology query statement into a database query instruction group, the database query instruction group comprising a plurality of database query instructions;
S2. Determining in turn, according to an instruction prefix tree, whether a combined query result corresponding to the database query instruction group exists in a first cache and a second cache;
S3. When the combined query result is absent from the first cache but present in the second cache, performing an exchange between the first cache and the second cache so that the combined query result is cached in the first cache, and returning the combined query result;
S4. When the combined query result is absent from both the first cache and the second cache, looking it up in an ontology database, and when the combined query result is determined to satisfy a caching condition, caching it in the first cache and returning the combined query result.
2. The caching method for ontology queries in a semantic network according to claim 1, characterized in that after step S2 the method further comprises:
S5. When the combined query result exists in the first cache, returning the combined query result.
3. The caching method for ontology queries in a semantic network according to claim 1, characterized in that step S3 comprises:
transferring the combined query result from the second cache into the first cache;
depositing into the second cache the cached data displaced from the first cache by the combined query result;
returning the combined query result.
4. The caching method for ontology queries in a semantic network according to claim 1, characterized in that step S4 further comprises:
depositing into the second cache the first cached data displaced from the first cache by the combined query result, and discarding the second cached data displaced from the second cache by the first cached data.
5. The caching method for ontology queries in a semantic network according to claim 4, characterized in that step S4 further comprises:
when, upon lookup in the ontology database, the combined query result is determined not to satisfy the caching condition, caching the combined query result directly in the cache of the ontology database.
6. A caching system for ontology queries in a semantic network, characterized in that the caching system comprises a parsing module, a first cache, a second cache, and an ontology database, wherein:
the parsing module is configured to parse, when an ontology query is executed, the ontology query statement into a database query instruction group, the database query instruction group comprising a plurality of database query instructions;
the first cache and the second cache are configured to determine in turn, according to an instruction prefix tree, whether a combined query result corresponding to the database query instruction group exists;
the second cache is further configured to, when the combined query result is absent from the first cache but present in the second cache, perform an exchange with the first cache so that the combined query result is cached in the first cache, and return the combined query result;
when the combined query result is absent from both the first cache and the second cache, the ontology database is configured to look it up and, when the combined query result is determined to satisfy a caching condition, cache it in the first cache and return the combined query result.
7. The caching system for ontology queries in a semantic network according to claim 6, characterized in that the first cache is further configured to return the combined query result when the combined query result exists in the first cache.
8. The caching system for ontology queries in a semantic network according to claim 6, characterized in that the second cache is further configured to transfer the combined query result from the second cache into the first cache; and the first cache is further configured to deposit into the second cache the first cached data displaced by the combined query result.
9. The caching system for ontology queries in a semantic network according to claim 6, characterized in that:
the first cache is further configured to store, in the second cache, the first cached data replaced by the query composition result in the first cache; and
the second cache is further configured to discard the second cached data replaced by the first cached data in the second cache.
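The data movement in claims 8 and 9 amounts to a demotion chain: an entry evicted from the first cache is stored in the second cache, and an entry evicted from the second cache is discarded. A minimal sketch, assuming fixed capacities and FIFO victim selection — neither of which is specified by the claims:

```python
from collections import OrderedDict

class TwoLevelCache:
    """Sketch of the replacement in claims 8-9. Capacities, the
    OrderedDict representation, and FIFO victim choice are assumptions."""

    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()   # first cache
        self.l2 = OrderedDict()   # second cache
        self.l1_size, self.l2_size = l1_size, l2_size

    def _insert_l1(self, key, value):
        if len(self.l1) >= self.l1_size:
            old_key, old_val = self.l1.popitem(last=False)  # first-cache victim
            self._insert_l2(old_key, old_val)               # claims 8-9: demote to second cache
        self.l1[key] = value

    def _insert_l2(self, key, value):
        if len(self.l2) >= self.l2_size:
            self.l2.popitem(last=False)   # claim 9: second-cache victim is discarded
        self.l2[key] = value

    def promote(self, key):
        """Claim 8: move a second-cache hit into the first cache."""
        value = self.l2.pop(key)
        self._insert_l1(key, value)
        return value
```

Promoting a second-cache hit and demoting the displaced first-cache entry is what the claims call the "replacement" between the two caches; only the discard at the second level actually loses data.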
10. The caching system for ontology queries in a semantic network according to claim 9, characterized in that the ontology database is further configured to cache the query composition result directly in the ontology database's own cache when the query composition result does not satisfy the caching condition.
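Claim 10 routes results that fail the caching condition into the ontology database's own cache rather than the first cache. The claims do not define the caching condition itself; the sketch below assumes a simple query-frequency threshold, and the class and method names are illustrative:

```python
from collections import Counter

class OntologyDatabase:
    """Sketch of claim 10's caching-condition gate. The patent leaves the
    condition unspecified; a hit-count threshold is assumed here."""

    def __init__(self, threshold=2):
        self.hit_counts = Counter()
        self.threshold = threshold
        self.db_cache = {}        # the ontology database's own cache

    def meets_caching_condition(self, key):
        """Assumed condition: the query has been seen `threshold` times."""
        self.hit_counts[key] += 1
        return self.hit_counts[key] >= self.threshold

    def handle_result(self, key, result, first_cache):
        if self.meets_caching_condition(key):
            first_cache[key] = result     # frequent: promote into the first cache
        else:
            self.db_cache[key] = result   # infrequent: keep in the database's cache
        return result
```

Gating admission this way keeps one-off queries from evicting hot entries in the first cache, which is consistent with the high cache-utilization and hit-rate goals stated in the abstract.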
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610072372.4A CN107025240A (en) | 2016-02-01 | 2016-02-01 | The caching method and system of Ontology Query in a kind of semantic network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610072372.4A CN107025240A (en) | 2016-02-01 | 2016-02-01 | The caching method and system of Ontology Query in a kind of semantic network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107025240A true CN107025240A (en) | 2017-08-08 |
Family
ID=59524967
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610072372.4A Pending CN107025240A (en) | 2016-02-01 | 2016-02-01 | The caching method and system of Ontology Query in a kind of semantic network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107025240A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111221862A (en) * | 2019-12-31 | 2020-06-02 | 五八有限公司 | Request processing method and device |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050138173A1 (en) * | 2003-12-22 | 2005-06-23 | Ha Young G. | Ontology-based service discovery system and method for ad hoc networks |
CN101201842A (en) * | 2007-10-30 | 2008-06-18 | 北京航空航天大学 | Digital museum gridding and construction method thereof |
CN102163195A (en) * | 2010-02-22 | 2011-08-24 | 北京东方通科技股份有限公司 | Query optimization method based on unified view of distributed heterogeneous database |
CN103064960A (en) * | 2012-12-31 | 2013-04-24 | 华为技术有限公司 | Method and equipment for database query |
CN103324724A (en) * | 2013-06-26 | 2013-09-25 | 华为技术有限公司 | Method and device for processing data |
KR101440359B1 (en) * | 2014-02-05 | 2014-09-17 | 고혜경 | OWL-S based Service Discovery Method and System |
CN104050276A (en) * | 2014-06-26 | 2014-09-17 | 北京思特奇信息技术股份有限公司 | Cache processing method and system of distributed database |
CN104750715A (en) * | 2013-12-27 | 2015-07-01 | ***通信集团公司 | Data elimination method, device and system in caching system and related server equipment |
- 2016-02-01: CN application CN201610072372.4A published as CN107025240A (status: Pending)
Non-Patent Citations (1)
Title |
---|
ZHENG Fan: "Research and Design of a Massive Ontology Data Storage Platform", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111221862A (en) * | 2019-12-31 | 2020-06-02 | 五八有限公司 | Request processing method and device |
CN111221862B (en) * | 2019-12-31 | 2023-08-11 | 五八有限公司 | Request processing method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102542052B (en) | Priority hash index | |
US11468027B2 (en) | Method and apparatus for providing efficient indexing and computer program included in computer readable medium therefor | |
US5305389A (en) | Predictive cache system | |
CN104331428B (en) | The storage of a kind of small documents and big file and access method | |
Cambazoglu et al. | Scalability challenges in web search engines | |
CN104794177B (en) | A kind of date storage method and device | |
US9871727B2 (en) | Routing lookup method and device and method for constructing B-tree structure | |
US7672935B2 (en) | Automatic index creation based on unindexed search evaluation | |
CN111966284A (en) | OpenFlow large-scale flow table elastic energy-saving and efficient searching framework and method | |
US20100094870A1 (en) | Method for massively parallel multi-core text indexing | |
US20100228914A1 (en) | Data caching system and method for implementing large capacity cache | |
EP3314464A2 (en) | Storage and retrieval of data from a bit vector search index | |
WO2016209932A1 (en) | Matching documents using a bit vector search index | |
JP3499105B2 (en) | Information search method and information search device | |
WO2016209952A1 (en) | Reducing matching documents for a search query | |
Hwang et al. | Binrank: Scaling dynamic authority-based search using materialized subgraphs | |
CN104424119A (en) | Storage space configuration method and device | |
CN105912696A (en) | DNS (Domain Name System) index creating method and query method based on logarithm merging | |
CN107025240A (en) | The caching method and system of Ontology Query in a kind of semantic network | |
CN113127515A (en) | Power grid-oriented regulation and control data caching method and device, computer equipment and storage medium | |
Li et al. | On mining webclick streams for path traversal patterns | |
CN106649462B (en) | A kind of implementation method for mass data full-text search scene | |
CN107820612A (en) | Bit vector search index | |
CN113722274A (en) | Efficient R-tree index remote sensing data storage model | |
CN103365897A (en) | Fragment caching method supporting Bigtable data model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170808 |