CN104077397A - Response method for distributed big data classification retrieval webpage - Google Patents

Response method for distributed big data classification retrieval webpage

Info

Publication number
CN104077397A
CN104077397A CN201410310820.0A CN201410310820A CN 104077397 A
Authority
CN
China
Prior art keywords
data
index
page
server
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410310820.0A
Other languages
Chinese (zh)
Inventor
唐雪飞
张小盼
楚龙辉
王淋铱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHENGDU COMSYS INFORMATION TECHNOLOGY Co Ltd
Original Assignee
CHENGDU COMSYS INFORMATION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHENGDU COMSYS INFORMATION TECHNOLOGY Co Ltd filed Critical CHENGDU COMSYS INFORMATION TECHNOLOGY Co Ltd
Priority to CN201410310820.0A priority Critical patent/CN104077397A/en
Publication of CN104077397A publication Critical patent/CN104077397A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/951 - Indexing; Web crawling techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/22 - Indexing; Data structures therefor; Storage structures
    • G06F 16/2228 - Indexing structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 - Querying
    • G06F 16/245 - Query processing
    • G06F 16/2458 - Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F 16/2471 - Distributed queries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a response method for a distributed big data classification retrieval webpage, which belongs to the technical field of Web access. The response method comprises the following steps: S1: starting the distributed servers, classifying pages by keyword according to Hash Code, and caching the data retrieved from the database on each server; S2: when a user request arrives, checking its retrieval conditions and selecting the matching server according to the Hash Code of the request keywords; S3: refreshing the data in memory and invalidating the previously cached pages. The response method disclosed by the invention has the beneficial effects that the response speed of big data classification retrieval webpages can be effectively increased, the pressure on the database can be alleviated, and the burden on the servers can be reduced.

Description

A response method for a distributed big data classified retrieval web page
Technical field
The invention belongs to the technical field of Web access, and specifically relates to a response method for a distributed big data classified retrieval web page.
Background art
With the rapid development of cloud computing and the Internet, people are increasingly willing to handle their daily work over the network, and the data of every aspect of social life is becoming more and more concentrated; at the same time, data volumes are growing dramatically, so many systems and platforms have to face massive amounts of data. This places higher requirements on Web applications. In Web applications, massive data mostly needs to be stored in a distributed manner, so as to improve the retrieval speed of the data, respond rapidly to users' requests, and bring users a better experience.
However, if such big data is handled by simple distributed processing, the dispersion of the data across many servers will cause the retrieval speed to decline and thus reduce the response speed, which gives users a very poor experience. If a large number of users retrieve data online at the same time, the servers come under even greater pressure. Therefore, on the basis of distributed technology, classified caching of the data enables fast retrieval and thereby fast response.
The problems of the prior art are summarized as follows:
First, dynamic page caching on the server side does not exploit the characteristics of distributed classified retrieval, namely reading data and caching it in a distributed manner.
Second, in general only static pages can be partially cached; the retrieval of dynamic pages over distributed data storage cannot be handled.
Third, page caching based on a single server caches every page that might possibly be accessed on that server, which takes up a great deal of the server's processing time and storage space.
Summary of the invention
Aiming at the deficiencies of the prior art, the present invention provides a response method for a distributed big data classified retrieval web page, which can effectively improve the response speed of big data classified retrieval pages, alleviate the pressure on the database, and reduce the burden on the servers.
To solve the above problems, the technical solution adopted by the present invention is as follows. A response method for a distributed big data classified retrieval web page comprises the following steps. S1: start the distributed servers, classify pages by keyword according to Hash Code (the hash coding scheme in Java), and cache the data retrieved from the database on each server.
S2: when a user request arrives, first check its retrieval conditions and select the matching server according to the Hash Code of the request keyword; then look in that server's cache for a valid page cache. If one exists, return the valid page directly as the result; if not, produce the valid page cache, return the page obtained as the result, and cache that page.
S3: refresh the data in memory and invalidate the previously cached pages.
Preferably, during classified retrieval, an index is built for the cached data described in S2.
Preferably, the steps for building the index described in S2 are as follows:
S21. Loop through all the cached data in memory, read a data record numbered N, and obtain the value of its classification field, recorded as key;
S22. In the category index table indexTable, find the index indexList corresponding to the classification value key;
S23. Insert a record into that index indexList with the number N.
Preferably, when generating the data cache and building the index for the data, the cached data and the index data need to be locked; within one server, only one thread is allowed to read/write the cache in memory and build the index for the data.
Preferably, the page cache records pages in XML format.
Preferably, a hash table is established in memory; the hash table records whether data exists in the cache and indicates where its cached content is located.
Preferably, the refresh mode of the memory data described in S3 is real-time refresh or periodic refresh.
The beneficial effects of the present invention are as follows. The data to be retrieved is cached in memory in XML form, so data retrieval no longer queries the database but instead queries the XML files in memory. This not only greatly relieves the burden on the database but also speeds up data queries, giving very high system performance. On each server, the results of the various classified retrieval queries are cached, so that each retrieval does not need to run a fresh query over all the in-memory data; the cached result is returned directly, which improves the response speed.
Brief description of the drawings
Fig. 1 is a flowchart of fast-response distributed classified retrieval;
Fig. 2 is a schematic diagram of the data cache in memory;
Fig. 3 shows the structure of the category index table;
Fig. 4 is a flowchart of building an index for the cached classified data;
Fig. 5 is a flowchart of merged retrieval over two categories;
Fig. 6 is a flowchart of obtaining and generating the page cache;
Fig. 7 is the architecture diagram of the whole system.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
A response method for a distributed big data classified retrieval web page, characterized in that it comprises the following steps. S1: start the distributed servers, classify pages by keyword according to Hash Code, and cache the data retrieved from the database on each server.
S2: when a user request arrives, first check its retrieval conditions and select the matching server according to the Hash Code of the request keyword; then look in that server's cache for a valid page cache. If one exists, return the valid page directly as the result; if not, produce the valid page cache, return the page obtained as the result, and cache that page.
S3: refresh the data in memory and invalidate the previously cached pages.
During classified retrieval, an index is built for the cached data described in S2.
The steps for building the index described in S2 are as follows:
S21. Loop through all the cached data in memory, read a data record numbered N, and obtain the value of its classification field, recorded as key;
S22. In the category index table indexTable, find the index indexList corresponding to the classification value key;
S23. Insert a record into that index indexList with the number N.
When generating the data cache and building the index for the data, the cached data and the index data need to be locked; within one server, only one thread is allowed to read/write the cache in memory and build the index for the data.
The page cache records pages in XML format.
A hash table is established in memory; the hash table records whether data exists in the cache and indicates where its cached content is located.
The refresh mode of the memory data described in S3 is real-time refresh or periodic refresh.
Specific embodiment:
Fig. 1 describes how the data cache and the page cache are used to obtain the classified retrieval page requested by the user. The implementation is as follows:
Step 101: the user requests a page of a certain classified retrieval.
Step 102: calculate the hash value from the keyword of the category requested by the user and find the server responsible for that category (a Java sketch of this selection follows step 103).
Step 103: after receiving the request, the server determines whether a cache for this classified retrieval condition exists. If the cache exists, it is returned directly as the result and the request ends; if it does not exist, go to step 104.
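As an illustration of step 102, the sketch below (in Java, since the text identifies Hash Code as the Java hashing scheme) shows one way the classification keyword of a request could be mapped to a server. The ServerSelector class, the serverList field and the masking of the sign bit are assumptions made for illustration, not identifiers prescribed by the patent.

    import java.util.List;

    public class ServerSelector {
        private final List<String> serverList; // addresses of the distributed servers (hypothetical)

        public ServerSelector(List<String> serverList) {
            this.serverList = serverList;
        }

        // Map the classification keyword of a request to one server via its hash code.
        public String selectServer(String categoryKeyword) {
            int hash = categoryKeyword.hashCode();
            int slot = (hash & 0x7fffffff) % serverList.size(); // avoid negative slot numbers
            return serverList.get(slot);
        }
    }

With such a mapping, all requests carrying the same classification keyword land on the same server, so that server's cache can answer repeated requests for that category.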
The XML can be cached in memory or on disk, depending on the number of XML files generated within a certain period of time. If the number of files is not large, the files can be cached directly in memory, which improves read/write speed and responds to user requests more quickly. If the number of files is very large, caching them all in memory would consume a great deal of memory and hurt system performance, so in that case the page files should be cached on disk, while memory still records which page caches have been generated so that a valid disk cache can be found more quickly.
Therefore, no matter whether the page files are cached in memory or on disk, a hash table must be established in memory to locate the content matching the keyword of a user request. The keys of the hash table store the various retrieval conditions; the value field of the hash table stores the location of the generated page cache for the corresponding retrieval condition, or directly stores the generated page content. In this way, the cache in memory or the cached page on disk can be found directly from the hash value.
The caching mode of the pages can be changed through configuration, and the system administrator can adjust it under actual operating conditions so that system performance is at its best.
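A minimal Java sketch of the hash table just described might look as follows; the PageCacheTable and PageCacheEntry names, the two-field entry and the cacheOnDisk switch are illustrative stand-ins for the configurable caching mode, not identifiers taken from the patent.

    import java.util.concurrent.ConcurrentHashMap;

    public class PageCacheTable {
        // Value of the hash table: either the cached page content itself (memory mode)
        // or the path of the XML file cached on disk (disk mode).
        public static class PageCacheEntry {
            final String xmlContent;
            final String diskPath;
            PageCacheEntry(String xmlContent, String diskPath) {
                this.xmlContent = xmlContent;
                this.diskPath = diskPath;
            }
        }

        // Key of the hash table: the retrieval condition of the request.
        private final ConcurrentHashMap<String, PageCacheEntry> table = new ConcurrentHashMap<>();
        private final boolean cacheOnDisk; // set from configuration, as described above

        public PageCacheTable(boolean cacheOnDisk) {
            this.cacheOnDisk = cacheOnDisk;
        }

        public void put(String retrievalCondition, String xml, String diskPath) {
            table.put(retrievalCondition,
                      cacheOnDisk ? new PageCacheEntry(null, diskPath)
                                  : new PageCacheEntry(xml, null));
        }

        public PageCacheEntry lookup(String retrievalCondition) {
            return table.get(retrievalCondition); // null means no valid page cache yet
        }
    }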
Step 104: determine whether the data requested by the user has been loaded into memory. If the requested data is not in memory, go to step 105; if it already exists in memory, go to step 106.
Step 105: load the memory data cache and build the index.
The data to be offered for user retrieval is obtained by querying the database and is then cached. The schematic diagram of the memory data cache structure, shown in Fig. 2, illustrates how the data is stored in memory. The data is stored in a list in units of records, and each record is numbered so that an index can be built conveniently.
Fig. 4 shows a flow for building a category index. Starting from N = 0 in step 401, if N is not less than the number of records in step 402, the flow ends. Step 403 loops through the cached data records loaded in memory and reads record N; step 404 obtains the value of the field corresponding to category X, i.e. the classification value keyX. Step 405 finds the index indexListX of this classification value keyX in the index table indexTableX of category X. The index table can be stored as a hash table; Fig. 3 shows the storage structure of the index table indexTableX of category X. The keys of the hash table store all the possible values of this category, keyX1, keyX2, ..., keyXn; the values of the hash table store the index for the corresponding classification value. Each index is stored as a list that records the numbers of the records whose category-X field equals that classification value. In this way, a hash lookup in the index table indexTableX of category X quickly finds the index indexListX of a classification value keyXn. Then, as shown in step 406, the record number N is inserted into this index, which establishes the category-X index for record N. In the same way, steps 407 to 409 establish the category-Y index. After the indexes for record N have been established, the indexes for record N+1 are established, as in step 410.
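The index-building loop of Fig. 4 can be sketched in Java as follows for a single category; the use of List<Map<String, String>> for the cached records and the method name buildIndex are assumptions chosen only to keep the sketch self-contained.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class CategoryIndexBuilder {
        // dataTable: the cached records, in the numbered list of Fig. 2.
        // categoryField: the name of the classification field of category X.
        // Returns indexTableX: classification value -> list of record numbers.
        public static Map<String, List<Integer>> buildIndex(List<Map<String, String>> dataTable,
                                                            String categoryField) {
            Map<String, List<Integer>> indexTable = new HashMap<>();
            for (int n = 0; n < dataTable.size(); n++) {                   // steps 401-403: loop over records
                String keyX = dataTable.get(n).get(categoryField);         // step 404: classification value
                List<Integer> indexList =                                   // step 405: find (or create) the index
                        indexTable.computeIfAbsent(keyX, k -> new ArrayList<>());
                indexList.add(n);                                           // step 406: insert record number N
            }
            return indexTable;
        }
    }

The same loop is run once per classification field (category X, category Y, ...), just as steps 407-410 repeat the procedure for category Y.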
It should be noted that, when generating the memory data cache and building the index, the memory data cache and the index must be locked: only one thread is allowed to perform the operation of loading the memory data cache and building the index, and other threads are not allowed to read or write the cache at the same time; otherwise problems such as dirty reads and repeated loading will occur.
Step 106: according to the retrieval conditions, retrieve from the memory data cache using the index and obtain the result set.
With reference to the flowchart of merged retrieval over two categories in Fig. 5, the following describes how the index tables are used to retrieve, for the two classified retrieval conditions keyX and keyY, the result set from row startnum (counting from 1) to row endnum. As shown in step 501, the index indexListX is first obtained from the category-X index table indexTableX according to the category-X retrieval condition keyX; this list stores the numbers of all records whose category-X field has the value keyX. As shown in step 502, the index indexListY is then obtained from the category-Y index table indexTableY according to the category-Y retrieval condition keyY; this list stores the numbers of all records whose category-Y field has the value keyY.
Then, as shown in steps 503-512, a loop computes the intersection of indexListX and indexListY, that is, the list of the numbers of the records whose category-X field has the value keyX and whose category-Y field has the value keyY. Because the retrieval results are displayed in pages, only rows startnum to endnum of this intersection are needed; therefore, as shown in step 511, the loop can end as soon as row endnum has been obtained, and the retrieval result is returned. Whenever a record number satisfying the conditions is found, as shown in step 510, the record with that number is fetched from the memory data cache table dataTable and inserted into the retrieval result list. When the loop finishes, the required retrieval result record set has been obtained.
If only the result set of a single category-Z retrieval is needed, the above flow can be simplified: it is enough to take the records listed in rows startnum to endnum of the corresponding index list indexListZ.
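The merged retrieval of Fig. 5 can be sketched in Java as below, reusing the index tables built above; the intersection is taken by putting one index list into a set, and only rows startNum to endNum of the intersection are materialized. Parameter names and the Collections.emptyList() defaults are illustrative choices.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    public class MergedRetrieval {
        public static List<Map<String, String>> retrieve(
                List<Map<String, String>> dataTable,
                Map<String, List<Integer>> indexTableX, String keyX,
                Map<String, List<Integer>> indexTableY, String keyY,
                int startNum, int endNum) {

            List<Integer> listX = indexTableX.getOrDefault(keyX, Collections.emptyList());              // step 501
            Set<Integer> setY = new HashSet<>(indexTableY.getOrDefault(keyY, Collections.emptyList())); // step 502

            List<Map<String, String>> results = new ArrayList<>();
            int matched = 0;
            for (Integer recordNo : listX) {               // steps 503-512: walk the intersection
                if (!setY.contains(recordNo)) continue;    // record not in both index lists
                matched++;
                if (matched < startNum) continue;          // rows before the requested page
                results.add(dataTable.get(recordNo));      // step 510: fetch the record from dataTable
                if (matched >= endNum) break;              // step 511: stop once row endNum is reached
            }
            return results;
        }
    }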
Step 107: generate XML from the record set obtained, cache the XML, and return the XML as the result in response to the user's request; the flow ends.
Using XML as the format of the page cache keeps the byte size of the page cache small, and the cache can be returned directly as the response without further computation. The XML structure of a retrieval result consists mainly of data, curPage, redPerPage, redCount and extraInfo nodes. Each data node describes one record, and its child nodes field1, field2, field3, ..., fieldN describe the values of the fields of that record; curPage describes the current page number; redPerPage describes the number of records per page; redCount describes the total number of records of a retrieval without paging; extraInfo describes the extra information needed when certain pages are displayed.
An XML document of this structure completely describes the retrieval result data, effectively reduces the redundant information used to describe the page presentation, and reduces the space occupied by the cache. The XSL style sheet used to display the XML is declared in the XML as follows:
<?xml-stylesheet type="text/xsl" href="style.xsl"?>
The XML of this structure is sent to the client; the browser first transforms the XML data with the XSL style sheet declared in the XML and then presents the retrieval result to the user, satisfying the user's request.
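A sketch of step 107 in Java is shown below; it assembles the XML structure described above (data, curPage, redPerPage, redCount, extraInfo) together with the xml-stylesheet declaration. The root element name result, the reliance on the record map's iteration order for field1..fieldN, and the omission of character escaping are simplifications made for illustration and are not specified by the patent.

    import java.util.List;
    import java.util.Map;

    public class ResultXmlBuilder {
        public static String build(List<Map<String, String>> records,
                                   int curPage, int redPerPage, int redCount, String extraInfo) {
            StringBuilder xml = new StringBuilder();
            xml.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
            xml.append("<?xml-stylesheet type=\"text/xsl\" href=\"style.xsl\"?>\n");
            xml.append("<result>\n");
            for (Map<String, String> record : records) {
                xml.append("  <data>\n");
                int i = 1;
                for (String value : record.values()) {      // field1, field2, ..., fieldN
                    xml.append("    <field").append(i).append(">")
                       .append(value)
                       .append("</field").append(i).append(">\n");
                    i++;
                }
                xml.append("  </data>\n");
            }
            xml.append("  <curPage>").append(curPage).append("</curPage>\n");
            xml.append("  <redPerPage>").append(redPerPage).append("</redPerPage>\n");
            xml.append("  <redCount>").append(redCount).append("</redCount>\n");
            xml.append("  <extraInfo>").append(extraInfo).append("</extraInfo>\n");
            xml.append("</result>");
            return xml.toString();
        }
    }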
It should be noted that, when generating a page cache, the page cache record must be locked: only one thread is allowed to generate the page cache for a given retrieval condition, and other threads are not allowed to read or write the page cache for that retrieval condition. The detailed flow is shown in Fig. 6.
Step 601: request the page cache for a certain retrieval condition.
Step 602: lock the page cache hash table.
Step 603: determine whether the page cache record for this retrieval condition is locked. If it is locked, go to step 604; if it is not locked, go to step 606.
Step 604: release the lock on the page cache hash table.
Step 605: the thread sleeps and waits to be woken up. After the thread is woken up, go to step 602 and again determine whether the page cache record for this retrieval condition is locked.
Step 606: determine whether the page cache for this retrieval condition has been generated. If it has been generated, go to step 607; if it has not, go to step 609.
Step 607: release the lock on the page cache hash table.
Step 608: read the page cache and return it; the flow ends.
Step 609: lock the page cache record for this retrieval condition.
Step 610: release the lock on the page cache hash table.
Step 611: generate the XML and cache it.
Step 612: lock the page cache hash table.
Step 613: release the lock on the page cache record for this retrieval condition and record that the page cache has been generated.
Step 614: release the lock on the page cache hash table.
Step 615: wake up the other sleeping threads and return the XML; the flow ends.
Obtaining and generating page caches with the above flow not only prevents dirty reads and the repeated generation of page caches; in addition, because a record-level lock is used, page cache requests for other retrieval conditions can still be processed while the XML for one retrieval condition is being generated, which improves the concurrent processing capability of the system and speeds up the response. A Java sketch of this flow follows.
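In the sketch below the page cache hash table itself is used as the monitor (steps 602/610/612/614), a per-condition "being generated" mark plays the role of the record-level lock (steps 603/609/613), and waiting threads sleep and are woken with wait()/notifyAll() (steps 605/615). Note that in Java notifyAll() must be called while still holding the monitor, so steps 614 and 615 are effectively performed in the opposite order; class and method names are illustrative, and generateXml() stands in for steps 104-107.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class PageCacheManager {
        private final Map<String, String> pageCache = new HashMap<>(); // retrieval condition -> cached XML
        private final Set<String> generating = new HashSet<>();        // record-level locks (steps 609/613)

        public String getPage(String condition) throws InterruptedException {
            synchronized (pageCache) {                     // step 602: lock the page cache hash table
                while (generating.contains(condition)) {   // step 603: is the record locked?
                    pageCache.wait();                      // steps 604-605: release the table lock and sleep
                }
                String cached = pageCache.get(condition);  // step 606: has the cache been generated?
                if (cached != null) {
                    return cached;                         // steps 607-608: release the table lock and return
                }
                generating.add(condition);                 // step 609: lock the record for this condition
            }                                              // step 610: release the table lock

            String xml = generateXml(condition);           // step 611: generate the XML and cache it

            synchronized (pageCache) {                     // step 612: lock the table again
                pageCache.put(condition, xml);             // step 613: record the generated page cache
                generating.remove(condition);              //           and release the record lock
                pageCache.notifyAll();                     // step 615: wake the sleeping threads
            }                                              // step 614: release the table lock
            return xml;
        }

        private String generateXml(String condition) {
            return "<result/>";                            // placeholder for index retrieval + XML generation
        }
    }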
In addition, the memory data cache needs to be refreshed in real time or periodically. For data that is relatively fixed and rarely updated, a real-time refresh strategy can be adopted: a refresh operation is performed wherever the program changes the related data, emptying or updating the cached data and pages. For systems whose data is updated relatively frequently, a real-time refresh strategy would not only fail to cache the data and pages effectively but would also increase the overhead of the system, so a periodic refresh should be adopted: a timed task is set up to empty the memory data cache and the page cache at regular intervals.
If the periodic refresh strategy is adopted, the interval of the periodic refresh of the memory data cache can be set through configuration. During system management and maintenance, the system administrator can adjust it according to the performance of the system and the data situation, so as to achieve the best user experience.
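Under the periodic refresh strategy, a timed task along the lines of the following Java sketch could empty both caches at the configured interval; the CacheRefresher class, the Runnable parameters and the use of seconds as the unit are assumptions for illustration.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class CacheRefresher {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        // refreshIntervalSeconds comes from configuration, as described above.
        public void start(long refreshIntervalSeconds,
                          Runnable clearDataCache, Runnable clearPageCache) {
            scheduler.scheduleAtFixedRate(() -> {
                clearDataCache.run();   // empty the memory data cache
                clearPageCache.run();   // empty the page cache
            }, refreshIntervalSeconds, refreshIntervalSeconds, TimeUnit.SECONDS);
        }

        public void stop() {
            scheduler.shutdown();
        }
    }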
Those of ordinary skill in the art will appreciate that the embodiments described here are intended to help the reader understand the implementation of the present invention, and it should be understood that the protection scope of the present invention is not limited to such specific statements and embodiments. In light of the technical teachings disclosed by the present invention, those of ordinary skill in the art can make various other specific variations and combinations that do not depart from the essence of the present invention, and such variations and combinations still fall within the protection scope of the present invention.

Claims (7)

1. A response method for a distributed big data classified retrieval web page, characterized in that it comprises the following steps:
S1: start the distributed servers, classify pages by keyword according to Hash Code, and cache the data retrieved from the database on each server;
S2: when a user request arrives, first check its retrieval conditions and select the matching server according to the Hash Code of the request keyword; then look in that server's cache for a valid page cache; if one exists, return the valid page directly as the result; if not, produce the valid page cache, return the page obtained as the result, and cache that page;
S3: refresh the data in memory and invalidate the previously cached pages.
2. The method according to claim 1, characterized in that, during classified retrieval, an index is built for the cached data described in S2.
3. The method according to claim 2, characterized in that the steps for building the index described in S2 are as follows:
S21. Loop through all the cached data in memory, read a data record numbered N, and obtain the value of its classification field, recorded as key;
S22. In the category index table indexTable, find the index indexList corresponding to the classification value key;
S23. Insert a record into that index indexList with the number N.
4. The method according to claim 3, characterized in that, when generating the data cache and building the index for the data, the cached data and the index data need to be locked, and within one server only one thread is allowed to read/write the cache in memory and build the index for the data.
5. The method according to claim 4, characterized in that the page cache records pages in XML format.
6. The method according to any one of claims 2-4, characterized in that a hash table is established in memory, and the hash table records whether data exists in the cache and indicates where its cached content is located.
7. The method according to claim 1, characterized in that the refresh mode of the memory data described in S3 is real-time refresh or periodic refresh.
CN201410310820.0A 2014-07-01 2014-07-01 Response method for distributed big data classification retrieval webpage Pending CN104077397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410310820.0A CN104077397A (en) 2014-07-01 2014-07-01 Response method for distributed big data classification retrieval webpage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410310820.0A CN104077397A (en) 2014-07-01 2014-07-01 Response method for distributed big data classification retrieval webpage

Publications (1)

Publication Number Publication Date
CN104077397A true CN104077397A (en) 2014-10-01

Family

ID=51598651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410310820.0A Pending CN104077397A (en) 2014-07-01 2014-07-01 Response method for distributed big data classification retrieval webpage

Country Status (1)

Country Link
CN (1) CN104077397A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101154230A (en) * 2006-09-30 2008-04-02 中兴通讯股份有限公司 Responding method for large data volume specified searching web pages
CN101867607A (en) * 2010-05-21 2010-10-20 北京无限立通通讯技术有限责任公司 Distributed data access method, device and system
CN102760137A (en) * 2011-04-27 2012-10-31 上海特易信息科技有限公司 Distributed full-text search method and distributed full-text search system

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105516284A (en) * 2015-12-01 2016-04-20 深圳市华讯方舟软件技术有限公司 Clustered database distributed storage method and device
CN105516284B (en) * 2015-12-01 2019-05-03 深圳市华讯方舟软件技术有限公司 A kind of method and apparatus of Cluster Database distributed storage
CN105683967A (en) * 2016-01-30 2016-06-15 深圳市博信诺达经贸咨询有限公司 Web page grabbing method and web page grabbing system based on big data
WO2017128357A1 (en) * 2016-01-30 2017-08-03 深圳市博信诺达经贸咨询有限公司 Big data-based method and system for webpage crawling
CN105874458A (en) * 2016-03-30 2016-08-17 马岩 Method and system for analyzing network information
WO2017166134A1 (en) * 2016-03-30 2017-10-05 马岩 Method and system for analyzing network information
CN108073521B (en) * 2016-11-11 2021-10-08 深圳市创梦天地科技有限公司 Data deduplication method and system
CN108073521A (en) * 2016-11-11 2018-05-25 深圳市创梦天地科技有限公司 A kind of method and system of data deduplication
CN108241657A (en) * 2016-12-24 2018-07-03 北京亿阳信通科技有限公司 A kind of web data list processing method and processing device
CN108241657B (en) * 2016-12-24 2022-01-07 北京亿阳信通科技有限公司 Web data list processing method and device
CN108733701A (en) * 2017-04-20 2018-11-02 杭州施强教育科技有限公司 A kind of query page buffer control method applied to online education
CN110020270A (en) * 2017-08-01 2019-07-16 上海福网信息科技有限公司 A kind of method that webpage quickly accesses
CN109218395A (en) * 2018-08-01 2019-01-15 阿里巴巴集团控股有限公司 Cache classification, acquisition methods and the device and electronic equipment of the page
CN109684086A (en) * 2018-12-14 2019-04-26 广东亿迅科技有限公司 A kind of distributed caching automatic loading method and device based on AOP
CN110362580B (en) * 2019-07-25 2021-09-24 重庆市筑智建信息技术有限公司 BIM (building information modeling) construction engineering data retrieval optimization classification method and system thereof
CN110362580A (en) * 2019-07-25 2019-10-22 重庆市筑智建信息技术有限公司 BIM (building information modeling) construction engineering data retrieval optimization classification method and system thereof
CN111367952A (en) * 2020-03-02 2020-07-03 中国邮政储蓄银行股份有限公司 Paging query method and system for cache data and computer readable storage medium
CN111367952B (en) * 2020-03-02 2023-08-25 中国邮政储蓄银行股份有限公司 Paging query method, system and computer readable storage medium for cache data
CN113963659A (en) * 2020-07-21 2022-01-21 华为技术有限公司 Adjusting method of display equipment and display equipment

Similar Documents

Publication Publication Date Title
CN104077397A (en) Response method for distributed big data classification retrieval webpage
CN101154230B (en) Responding method for large data volume specified searching web pages
CN107247808B (en) Distributed NewSQL database system and picture data query method
US10664497B2 (en) Hybrid database table stored as both row and column store
CN105630865B (en) N-bit compressed versioned column data array for memory columnar storage
US9465843B2 (en) Hybrid database table stored as both row and column store
US8768927B2 (en) Hybrid database table stored as both row and column store
US20160267132A1 (en) Abstraction layer between a database query engine and a distributed file system
CN107491523B (en) Method and device for storing data object
CN105718455A (en) Data query method and apparatus
US9053153B2 (en) Inter-query parallelization of constraint checking
US20150006466A1 (en) Multiversion concurrency control for columnar database and mixed OLTP/OLAP workload
CN104866434A (en) Multi-application-oriented data storage system and data storage and calling method
CN104123356A (en) Method for increasing webpage response speed under large data volume condition
CN108021717B (en) Method for implementing lightweight embedded file system
US8694508B2 (en) Columnwise storage of point data
US20150363446A1 (en) System and Method for Indexing Streams Containing Unstructured Text Data
US11550485B2 (en) Paging and disk storage for document store
US10482110B2 (en) Columnwise range k-nearest neighbors search queries
CN105915619A (en) Access heat regarded cyber space information service high performance memory caching method
US8321408B1 (en) Quick access to hierarchical data via an ordered flat file
US11151178B2 (en) Self-adapting resource aware phrase indexes
CN107888686B (en) User data validity verification method located at HBase client
US20160154812A1 (en) Hybrid database management system
US20190087440A1 (en) Hierarchical virtual file systems for accessing data sets

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20141001

RJ01 Rejection of invention patent application after publication