CN106648872A - Multi-thread processing method and device and server - Google Patents
- Publication number
- CN106648872A (application CN201611242037.0A)
- Authority
- CN
- China
- Prior art keywords
- sub
- task
- released
- idle storage
- memory space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F9/46—Multiprogramming arrangements
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being the memory
- G06F9/5022—Mechanisms to release resources
- G06F9/5027—Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
Abstract
The invention belongs to the technical field of data processing and discloses a multi-thread processing method. The method includes the steps of: creating a thread pool comprising multiple sub-threads; establishing one or more remote connections; and, after a task queue of the one or more remote connections arrives, having the sub-threads execute the tasks in parallel. The thread pool processes tasks in a centralized, parallel fashion rather than in a one-thread-per-connection mode. This resolves the problem of executing many threads on a processor unit, markedly shortens the processor unit's idle time, and improves its throughput. The invention further discloses a multi-thread processing device and a server.
Description
Technical field
The present invention relates to the technical field of data processing, and more particularly to a method and device for multi-thread processing, and to a server.
Background art
Two programs on a network exchange data over a two-way communication link; each end of such a link is called a socket. Socket communication is widely used throughout software systems, and the communicated data generally requires a self-defined data format. The prevailing communication model assigns one thread to each of multiple connections. When a large number of connection requests arrive (for example, 10,000), 10,000 threads may be needed to handle them; this consumes substantial resources and can cause blocking. Once blocking occurs, large numbers of threads perform context switches frequently, so that too much time is wasted.
Summary of the invention
Embodiments of the present invention provide a method for multi-thread processing that solves the problems brought by the one-thread-per-connection model of multi-thread communication. To give a basic understanding of some aspects of the disclosed embodiments, a brief summary is presented below. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the protection scope of these embodiments. Its sole purpose is to present some concepts in simplified form as a prelude to the detailed description that follows.
One purpose of the embodiments of the present invention is to provide a method for multi-thread processing.
In some illustrative examples, the method for multi-thread processing includes: creating a thread pool comprising multiple sub-threads; establishing one or more remote connections; and, after the task queue of the one or more remote connections arrives, the multiple sub-threads executing tasks in parallel.
In some illustrative examples, the multiple sub-threads executing tasks in parallel includes: each sub-thread taking a different task out of the task queue and executing the task it took out.
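The pull model described above — one shared task queue drained by a fixed set of sub-threads, so that no two threads ever execute the same task — can be sketched as follows. This is a minimal illustration only; the class and method names are assumptions, not taken from the patent.

```python
import queue
import threading

class ThreadPool:
    """Minimal thread pool: N sub-threads pull distinct tasks from one queue."""

    def __init__(self, num_threads):
        self.tasks = queue.Queue()          # shared task queue
        self.workers = [threading.Thread(target=self._worker, daemon=True)
                        for _ in range(num_threads)]
        for w in self.workers:
            w.start()

    def _worker(self):
        while True:
            task = self.tasks.get()         # each get() removes the task, so
            if task is None:                # no two workers run the same one
                break
            task()
            self.tasks.task_done()

    def submit(self, task):
        self.tasks.put(task)

    def join(self):
        self.tasks.join()                   # block until every task is done
```

Because `queue.Queue.get()` atomically removes an item, the "each sub-thread takes a mutually different task" property falls out of the data structure itself, with no extra coordination between workers.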
In some illustrative examples, data received while the multiple sub-threads execute tasks is cached in a single memory space, the memory space being composed of one or more unit memory spaces.
In some illustrative examples, part or all of the memory space is released when a release condition is met. The release condition includes one or more of the following: a) the length of the free storage in the memory space reaches a first threshold; b) the memory space contains free storage whose idle time reaches a second threshold; c) the length of the free storage in the memory space whose idle time reaches the second threshold reaches a third threshold.
In some illustrative examples, part or all of the memory space is released when a release condition is met. The release condition includes one or more of the following: a) the length of the free storage in the memory space reaches a first threshold; b) the memory space contains free storage whose idle time reaches a second threshold; c) the length of the free storage in the memory space whose idle time reaches the second threshold reaches a third threshold. The first threshold and the third threshold are integer multiples of the unit memory space length.
In some illustrative examples, when the release condition is met, all of the free storage is released; or the free storage whose idle time reaches a fourth threshold is released; or a preset length of free storage is released. The length of the released free storage is an integer multiple of the unit memory space length.
In other illustrative examples, the method for multi-thread processing further includes deleting or adding sub-threads according to the number of tasks, including: deleting one or more sub-threads when the number of communication tasks falls below a first threshold, and creating one or more new sub-threads when the number of communication tasks exceeds a second threshold.
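The resize rule above can be sketched as a pure function over the current thread count and task count. The threshold parameter names and the min/max clamps are assumptions added for illustration; the patent only specifies the two comparisons.

```python
def adjust_thread_count(current_threads, task_count,
                        low_threshold, high_threshold,
                        min_threads=1, max_threads=64):
    """Sketch of the dynamic thread-count rule: shrink when tasks fall
    below the low threshold, grow when they exceed the high threshold,
    otherwise leave the pool unchanged."""
    if task_count < low_threshold and current_threads > min_threads:
        return current_threads - 1          # delete one sub-thread
    if task_count > high_threshold and current_threads < max_threads:
        return current_threads + 1          # create one new sub-thread
    return current_threads
```

Adjusting one thread at a time keeps the pool size from oscillating when the task count hovers near a threshold; a real implementation might also dampen with a hysteresis gap between the two thresholds.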
Another purpose of the embodiments of the present invention is to provide a device for multi-thread processing.
In some illustrative examples, the device for multi-thread processing includes: a first unit for creating a thread pool comprising multiple sub-threads; a second unit for establishing one or more remote connections; and a third unit for managing the multiple sub-threads executing tasks in parallel after the task queue of the one or more remote connections arrives.
In some illustrative examples, the third unit includes: a task fetching unit for scheduling each sub-thread to take a different task out of the task queue; and a task executing unit for scheduling each sub-thread to execute the task it took out.
In other illustrative examples, the device for multi-thread processing further includes a storage unit providing a memory space for caching data received while the multiple sub-threads execute tasks, the memory space being composed of one or more unit memory spaces.
In some illustrative examples, part or all of the memory space is released when a release condition is met. The release condition includes one or more of the following: a) the length of the free storage in the memory space reaches a first threshold; b) the memory space contains free storage whose idle time reaches a second threshold; c) the length of the free storage in the memory space whose idle time reaches the second threshold reaches a third threshold.
In some illustrative examples, part or all of the memory space is released when a release condition is met. The release condition includes one or more of the following: a) the length of the free storage in the memory space reaches a first threshold; b) the memory space contains free storage whose idle time reaches a second threshold; c) the length of the free storage in the memory space whose idle time reaches the second threshold reaches a third threshold. The first threshold and the third threshold are integer multiples of the unit memory space length.
In some illustrative examples, when the release condition is met, all of the free storage is released; or the free storage whose idle time reaches a fourth threshold is released; or a preset length of free storage is released. The length of the released free storage is an integer multiple of the unit memory space length.
In some illustrative examples, the first unit is further used to delete or add sub-threads according to the number of tasks, including: deleting one or more sub-threads when the number of communication tasks falls below a first threshold, and creating one or more new sub-threads when the number of communication tasks exceeds a second threshold.
A further object of the embodiments of the present invention is to provide a server that includes the device for multi-thread processing described in any of the above embodiments.
In the technical scheme provided by the embodiments of the present invention, the thread pool processes tasks in a centralized, parallel fashion rather than in a one-thread-per-connection mode. This resolves the problem of executing many threads on a processor unit, can markedly shorten the processor unit's idle time, and increases its throughput.
It should be appreciated that the general description above and the detailed description below are merely exemplary and explanatory, and do not limit the present invention.
Brief description of the drawings
The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the specification, serve to explain the principles of the present invention.
Fig. 1 is a schematic flowchart of a method for multi-thread processing;
Fig. 2 is another schematic flowchart of a method for multi-thread processing;
Fig. 3 is a schematic flowchart of sub-thread processing;
Fig. 4 is a schematic diagram of a device for multi-thread processing.
Detailed description
The following description and drawings illustrate specific embodiments of the invention sufficiently to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process and other changes. The embodiments represent merely possible variations. Unless explicitly required, individual components and functions are optional, and the order of operations may vary. Portions and features of some embodiments may be included in or substituted for those of other embodiments. The scope of the embodiments of the present invention includes the full range of the claims, together with all available equivalents of the claims. Herein, embodiments may be referred to individually or collectively by the term "invention" merely for convenience; if more than one invention is in fact disclosed, this is not meant to automatically limit the scope of the application to any single invention or inventive concept. Herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not require or imply any actual relationship or order between those entities or operations. Moreover, the terms "include" and "comprise", and any other variants thereof, are intended to be non-exclusive, so that a process, method or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method or device that includes that element. Each embodiment herein is described in a progressive manner; each embodiment emphasizes its differences from the other embodiments, and identical or similar parts of the embodiments may be understood with reference to one another. For methods, products and the like disclosed in the embodiments, since they correspond to the method parts disclosed in the embodiments, their description is relatively brief, and the relevant points can be found in the description of the method parts.
Fig. 1 illustrates a schematic flow of an embodiment of a method for multi-thread processing.
In this embodiment, the method for multi-thread processing includes: creating a thread pool comprising multiple sub-threads (step S11); the sub-threads waiting for tasks (step S12); establishing one or more remote connections (step S13); and, after the task queue of the one or more remote connections arrives, the multiple sub-threads executing tasks in parallel (step S14).
The thread pool processes tasks in a centralized, parallel fashion rather than in a one-thread-per-connection mode; this resolves the problem of executing many threads on a processor unit, can markedly shorten the processor unit's idle time, and increases its throughput.
The multiple sub-threads may execute tasks in parallel in many ways. In one optional embodiment, each sub-thread takes a different task out of the task queue and executes the task it took out. More specifically, the sub-threads simultaneously each take one task out of the task queue; the tasks taken out by the sub-threads all differ; and each sub-thread then executes the task it took out.
In another illustrative example, the method for multi-thread processing includes: creating a thread pool comprising multiple sub-threads; establishing one or more remote connections; after the task queue of the one or more remote connections arrives, the multiple sub-threads executing tasks in parallel; and caching data received while the multiple sub-threads execute tasks in a single memory space.
In another illustrative example, the method for multi-thread processing includes: creating a thread pool comprising multiple sub-threads; establishing one or more remote connections; and, after the task queue of the one or more remote connections arrives, the multiple sub-threads executing tasks in parallel. Data received while the multiple sub-threads execute tasks is cached in a single memory space, and part or all of the memory space is released when a release condition is met.
In the present embodiment, memory is released only when the release condition is met, so there is no need to repeatedly request and release memory, which effectively improves system running efficiency. In addition, releasing part or all of the memory space when the release condition is met enables dynamic control of the memory space length, balances the demands of efficiency and ease of use, reduces the amount of unusable memory fragmentation, and improves system running efficiency.
In some alternative embodiments, the release condition includes one or more of the following:
a) the length of the free storage in the memory space reaches a first threshold;
b) the memory space contains free storage whose idle time reaches a second threshold;
c) the length of the free storage in the memory space whose idle time reaches the second threshold reaches a third threshold.
The above situations are exemplary; in practice, these conditions may be used singly or in combination. As can be seen, the choice of release condition must be set flexibly according to the system design or the demands of the application scenario, and cannot be arrived at through common knowledge or conventional techniques alone.
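The three conditions a), b) and c) above can be expressed together as one predicate over the pool's free blocks. The representation of a free block as a `(length, last_used_time)` pair and the parameter names are assumptions for illustration; the patent specifies only the comparisons themselves.

```python
def should_release(free_blocks, now,
                   first_threshold, second_threshold, third_threshold):
    """Evaluate the three example release conditions over a list of
    (length, last_used_time) free blocks; any one being true suffices."""
    total_free = sum(length for length, _ in free_blocks)
    stale = [length for length, t in free_blocks
             if now - t >= second_threshold]          # blocks idle long enough
    cond_a = total_free >= first_threshold            # a) total free length
    cond_b = len(stale) > 0                           # b) any sufficiently idle block
    cond_c = sum(stale) >= third_threshold            # c) total idle-block length
    return cond_a or cond_b or cond_c
```

A system that wants only a subset of the conditions (the patent allows any combination) would simply drop terms from the final `or` expression.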
In another illustrative example, the method for multi-thread processing includes: creating a thread pool comprising multiple sub-threads; establishing one or more remote connections; after the task queue of the one or more remote connections arrives, the multiple sub-threads executing tasks in parallel; and caching data received while the multiple sub-threads execute tasks in a single memory space, part or all of which is released when a release condition is met. When the release condition is met, all of the free storage is released; or the free storage whose idle time reaches a fourth threshold is released; or a preset length of free storage is released.
With the method provided by this embodiment, the length of the memory space can be controlled dynamically and more precisely, further reducing the amount of system memory fragmentation and improving system running efficiency.
In some illustrative examples, releasing the preset length of free storage includes: releasing free storage by a preset fixed length; or obtaining a length value according to a preset rule and releasing free storage by that length value.
As can be seen, the manner of releasing free storage must be set flexibly according to the system design or the demands of the application scenario, and cannot be arrived at through common knowledge or conventional techniques alone.
It should be noted that in all of the above embodiments, the memory space can be implemented in many ways. In one optional implementation, the memory space is composed of one or more unit memory spaces. In this embodiment, the thread pool requests and manages memory using a memory pool technique, so that multiple unit memory spaces of identical size can be allocated; this greatly accelerates memory allocation and release and effectively improves system running efficiency.
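A memory pool built from identical unit blocks, as just described, can be sketched as follows. The class shape and the grow-on-exhaustion behavior are assumptions for illustration; the point of the technique is that allocation and release are constant-time list operations on same-sized blocks.

```python
class MemoryPool:
    """Sketch of a unit-block memory pool: allocation and release are
    O(1) because every block has the same fixed size."""

    def __init__(self, unit_size, num_units):
        self.unit_size = unit_size
        self.free = [bytearray(unit_size) for _ in range(num_units)]

    def allocate(self):
        if not self.free:                    # pool exhausted: grow by one unit
            return bytearray(self.unit_size)
        return self.free.pop()               # reuse an existing unit block

    def release(self, block):
        block[:] = bytes(self.unit_size)     # scrub, then return to free list
        self.free.append(block)
```

Because every block has the same length, there is no best-fit search and no per-allocation size bookkeeping, which is the source of the speed-up the text attributes to the memory pool technique.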
Where the memory space is composed of one or more unit memory spaces, the optional release conditions include one or more of the following: a) the length of the free storage in the memory space reaches a first threshold; b) the memory space contains free storage whose idle time reaches a second threshold; c) the length of the free storage in the memory space whose idle time reaches the second threshold reaches a third threshold. Optionally, the first threshold and/or the third threshold are integer multiples of the unit memory space length.
Where the memory space is composed of one or more unit memory spaces, when the release condition is met, all of the free storage is released; or the free storage whose idle time reaches a fourth threshold is released; or a preset length of free storage is released. Optionally, the length of the released free storage is an integer multiple of the unit memory space length.
When the data volume grows and the existing memory space cannot meet the needs of task processing, the memory space can be extended by merging adjacent free storage in the system. One optional way is to merge two adjacent free storage spaces in the system, thereby extending the memory space available to the thread pool.
When memory is insufficient, the memory space length can be extended to twice its size or more, achieving dynamic control of the memory space length and effectively reducing system memory fragmentation. Merging adjacent free memory spaces achieves dynamic control of the memory block length, balances the demands of ease of use and merging efficiency, and improves system running efficiency.
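Coalescing adjacent free blocks, as described above, amounts to a single pass over the free list sorted by offset. Representing a free block as an `(offset, length)` pair is an assumption for illustration.

```python
def merge_adjacent(free_blocks):
    """Coalesce adjacent free blocks given as (offset, length) pairs,
    so larger contiguous spaces become available to the thread pool."""
    merged = []
    for offset, length in sorted(free_blocks):
        if merged and merged[-1][0] + merged[-1][1] == offset:
            # previous block ends exactly where this one starts: fuse them
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((offset, length))
    return merged
```

Running this whenever blocks are freed keeps the free list short and maximizes the chance that a later, larger request can be satisfied without extending the memory space.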
In all of the above embodiments, when the number of tasks in the task queue changes, the method may further include deleting or adding sub-threads according to the number of tasks, including: deleting one or more sub-threads when the number of tasks falls below a fourth threshold, and creating one or more new sub-threads when the number of tasks exceeds a fifth threshold. Dynamic control of the number of sub-threads is thereby achieved, improving system running efficiency.
Fig. 2 illustrates another schematic flow of an embodiment of a method for multi-thread processing.
In this exemplary embodiment, after the program starts, the main thread starts the socket service (step S21), listens on a designated port (step S22) and receives new connection requests from clients (S23). When a new connection request is received, the new connection is established and added to the thread pool (S24), and data communication with the client is handled by the thread pool (S25). When no new connection request is received, the main thread continues listening on the port.
Fig. 3 illustrates a schematic flow of an embodiment of sub-thread processing.
The thread pool runs several sub-threads (the number is configurable) that execute the data sending and receiving tasks of all client connections in parallel. After starting, a sub-thread waits to receive data (S31). When data arrives, the sub-thread caches it (S32) and determines whether the cached data forms a complete data packet (S33); if a complete packet has been cached, the packet is taken out and parsed (S34), and the data is then processed (S35); otherwise the sub-thread continues receiving data. If no data is received, the sub-thread keeps waiting to receive data.
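Steps S32-S34 — cache received bytes, test for a complete packet, and take complete packets out — can be sketched as follows. The wire format (a 2-byte big-endian length header followed by the payload) is an assumption for illustration; the patent says only that the communicated data uses a self-defined format.

```python
def extract_packets(buffer):
    """Sketch of S32-S34: pull every complete packet out of the cached
    bytes, leaving any trailing partial packet in the buffer.
    Assumed framing: 2-byte big-endian length header, then payload."""
    packets = []
    while len(buffer) >= 2:
        length = int.from_bytes(buffer[:2], "big")
        if len(buffer) < 2 + length:
            break                            # S33: packet incomplete, keep caching
        packets.append(bytes(buffer[2:2 + length]))   # S34: take out the packet
        del buffer[:2 + length]              # consume header + payload
    return packets
```

Because the function leaves incomplete trailing bytes in place, a sub-thread can simply append each `recv()` result to the buffer and call this after every receive, matching the S32 → S33 → S34 loop of Fig. 3.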
Fig. 4 illustrates a schematic embodiment of a device for multi-thread processing.
In this illustrative example, the device for multi-thread processing includes a first unit S01, a second unit S02 and a third unit S03. The first unit S01 is used to create a thread pool comprising multiple sub-threads; the second unit S02 is used to establish one or more remote connections; and the third unit S03 is used to manage the multiple sub-threads executing tasks in parallel after the task queue of the one or more remote connections arrives.
In some illustrative examples, the multiple sub-threads simultaneously each take one task out of the task queue and execute the tasks in parallel.
In some illustrative examples, the device for multi-thread processing further includes a storage unit providing a memory space for caching data received while the multiple sub-threads execute tasks. Optionally, the memory space is composed of one or more unit memory spaces.
In some exemplary embodiments, part or all of the memory space is released when a release condition is met. The release condition includes one or more of the following:
a) the length of the free storage in the memory space reaches a first threshold; b) the memory space contains free storage whose idle time reaches a second threshold; c) the length of the free storage in the memory space whose idle time reaches the second threshold reaches a third threshold.
In some exemplary embodiments, where the memory space is composed of one or more unit memory spaces, part or all of the memory space is released when a release condition is met. The release condition includes one or more of the following:
a) the length of the free storage in the memory space reaches a first threshold; b) the memory space contains free storage whose idle time reaches a second threshold; c) the length of the free storage in the memory space whose idle time reaches the second threshold reaches a third threshold. Optionally, the first threshold and the third threshold are integer multiples of the unit memory space length.
In some exemplary embodiments, when the release condition is met, all of the free storage is released; or the free storage whose idle time reaches a fourth threshold is released; or a preset length of free storage is released.
In some illustrative examples, releasing the preset length of free storage includes: releasing free storage by a preset fixed length; or obtaining a length value according to a preset rule and releasing free storage by that length value.
As can be seen, the manner of releasing free storage must be set flexibly according to the system design or the demands of the application scenario, and cannot be arrived at through common knowledge or conventional techniques alone.
In some exemplary embodiments, where the memory space is composed of one or more unit memory spaces, when the release condition is met, all of the free storage is released; or the free storage whose idle time reaches a fourth threshold is released; or a preset length of free storage is released. Optionally, the length of the released free storage is an integer multiple of the unit memory space length.
In some exemplary embodiments, the first unit S01 is further used to delete or add sub-threads according to the number of tasks, including: deleting one or more sub-threads when the number of communication tasks falls below a first threshold, and creating one or more new sub-threads when the number of communication tasks exceeds a second threshold.
In some exemplary embodiments, a server is also disclosed that includes the device for multi-thread processing described in any of the above embodiments.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, for example a memory including instructions that can be executed by a processor to perform the methods described above. The non-transitory computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic tape, an optical storage device, or the like.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical scheme. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered to exceed the scope of the present invention. Those skilled in the art will also clearly understand that, for convenience and brevity of description, the specific working processes of the systems, devices and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the embodiments disclosed herein, it should be understood that the disclosed methods and products (including but not limited to devices and equipment) can be implemented in other ways. For example, the device embodiments described above are merely schematic. The division into units is only a division of logical functions, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. Furthermore, the couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or units, and may be electrical, mechanical or in other forms. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment's scheme. In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may each exist separately and physically, or two or more units may be integrated into one unit.
It should be appreciated that the present invention is not limited to the flows and structures described above and shown in the drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present invention is limited only by the appended claims.
Claims (13)
1. A multi-thread processing method, characterized by comprising:
creating a thread pool comprising a plurality of sub-threads;
establishing one or more remote connections; and
after a task queue of the one or more remote connections arrives, the plurality of sub-threads executing tasks in parallel.
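As a minimal, non-authoritative sketch of the method of claim 1, the following Python fragment creates a thread pool of sub-threads that execute tasks from a shared queue in parallel. The queue stands in for the task queue arriving over the remote connections, and all identifiers (`create_thread_pool`, `worker`, etc.) are illustrative, not taken from the patent:

```python
import queue
import threading

def create_thread_pool(num_sub_threads, task_queue, results):
    """Create a pool of sub-threads that drain task_queue in parallel."""
    def worker():
        while True:
            task = task_queue.get()       # each sub-thread takes a different task
            if task is None:              # sentinel: no more tasks, stop this thread
                return
            results.append(task())        # execute the task it has taken

    threads = [threading.Thread(target=worker) for _ in range(num_sub_threads)]
    for t in threads:
        t.start()
    return threads

# Simulate a task queue arriving from one or more remote connections.
tasks = queue.Queue()
results = []
pool = create_thread_pool(4, tasks, results)
for i in range(8):
    tasks.put(lambda i=i: i * i)
for _ in pool:
    tasks.put(None)                       # one stop sentinel per sub-thread
for t in pool:
    t.join()
print(sorted(results))                    # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because every `get()` removes a task from the shared queue, each sub-thread necessarily receives a mutually different task, matching the behavior recited in claim 2.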
2. the method for claim 1, it is characterised in that the plurality of sub-line journey executing tasks parallelly, including:Each sub-line
Journey takes out respectively mutually different task from the task queue, and performs taken out task respectively.
3. The method according to claim 1 or 2, characterized in that data received while the plurality of sub-threads execute tasks is cached in a same memory space.
4. The method according to claim 3, characterized in that part or all of the memory space is released when a release condition is met.
5. The method according to claim 4, characterized in that, when the release condition is met, all idle storage space is released; or idle storage space whose idle time reaches a fourth threshold is released; or idle storage space of a preset length is released.
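Claims 3 to 5 describe a shared memory space that caches the data the sub-threads receive, together with three alternative release policies. The sketch below is our illustrative interpretation only: the class and method names are invented, and the concrete policies are assumptions (the "fourth threshold" of claim 5 appears here simply as a parameter):

```python
import time

class MemorySpace:
    """Hypothetical shared cache with the release policies of claim 5."""

    def __init__(self):
        self.blocks = {}          # key -> (data, last_access_time)

    def cache(self, key, data):
        self.blocks[key] = (data, time.monotonic())

    def release_all_idle(self, in_use):
        # Policy 1: release every idle (not-in-use) block.
        self.blocks = {k: v for k, v in self.blocks.items() if k in in_use}

    def release_idle_over(self, threshold_s):
        # Policy 2: release blocks whose idle time reaches the threshold.
        now = time.monotonic()
        self.blocks = {k: (d, t) for k, (d, t) in self.blocks.items()
                       if now - t < threshold_s}

    def release_preset_length(self, n):
        # Policy 3: release a preset number of the longest-idle blocks.
        oldest = sorted(self.blocks, key=lambda k: self.blocks[k][1])[:n]
        for k in oldest:
            del self.blocks[k]

space = MemorySpace()
for k in ("a", "b", "c"):
    space.cache(k, k.upper())
space.release_preset_length(1)    # releases the longest-idle block ("a")
print(sorted(space.blocks))       # ['b', 'c']
```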
6. The method according to any one of claims 1 to 5, characterized by further comprising: deleting or adding sub-threads according to the number of tasks.
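Claim 6 adds or deletes sub-threads according to the number of tasks. The scaling rule sketched below (one sub-thread per 10 pending tasks, clamped to a minimum and maximum) is entirely our assumption, as the patent does not fix a formula:

```python
def target_thread_count(num_tasks, min_threads=1, max_threads=8,
                        tasks_per_thread=10):
    # ceil(num_tasks / tasks_per_thread), clamped to [min_threads, max_threads].
    wanted = -(-num_tasks // tasks_per_thread)
    return max(min_threads, min(max_threads, wanted))

def rescale(current, num_tasks):
    """How many sub-threads to add (positive) or delete (negative)."""
    return target_thread_count(num_tasks) - current

print(target_thread_count(0))    # 1  (never drop below the minimum)
print(target_thread_count(35))   # 4
print(rescale(4, 100))           # 4  (add sub-threads, capped at 8)
print(rescale(8, 5))             # -7 (delete sub-threads down to 1)
```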
7. A multi-thread processing device, characterized by comprising:
a first unit, configured to create a thread pool comprising a plurality of sub-threads;
a second unit, configured to establish one or more remote connections; and
a third unit, configured to, after a task queue of the one or more remote connections arrives, manage the plurality of sub-threads to execute tasks in parallel.
8. The device according to claim 7, characterized in that the third unit comprises:
a task fetching unit, configured to schedule each sub-thread to take a mutually different task from the task queue; and
a task executing unit, configured to schedule each sub-thread to execute the task it has taken.
9. The device according to claim 7 or 8, characterized by further comprising a storage unit providing a memory space for caching data received while the plurality of sub-threads execute tasks.
10. The device according to claim 9, characterized in that part or all of the memory space is released when a release condition is met.
11. The device according to claim 10, characterized in that, when the release condition is met, all idle storage space is released; or idle storage space whose idle time reaches a fourth threshold is released; or idle storage space of a preset length is released.
12. The device according to any one of claims 7 to 11, characterized in that the first unit is further configured to delete or add sub-threads according to the number of tasks.
13. A server, characterized by comprising the multi-thread processing device according to any one of claims 7 to 12.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611242037.0A CN106648872A (en) | 2016-12-29 | 2016-12-29 | Multi-thread processing method and device and server |
PCT/CN2017/119561 WO2018121696A1 (en) | 2016-12-29 | 2017-12-28 | Multi-thread processing method and device, and server |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611242037.0A CN106648872A (en) | 2016-12-29 | 2016-12-29 | Multi-thread processing method and device and server |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106648872A true CN106648872A (en) | 2017-05-10 |
Family
ID=58836273
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611242037.0A Pending CN106648872A (en) | 2016-12-29 | 2016-12-29 | Multi-thread processing method and device and server |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106648872A (en) |
WO (1) | WO2018121696A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117472593B (en) * | 2023-12-27 | 2024-03-22 | 中诚华隆计算机技术有限公司 | Method and system for distributing resources among multiple threads |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050022173A1 (en) * | 2003-05-30 | 2005-01-27 | Codito Technologies Private Limited | Method and system for allocation of special purpose computing resources in a multiprocessor system |
CN101287166A (en) * | 2008-02-22 | 2008-10-15 | 北京航空航天大学 | Short message publishing system and method for auxiliary system of electronic meeting |
CN101957863A (en) * | 2010-10-14 | 2011-01-26 | 广州从兴电子开发有限公司 | Data parallel processing method, device and system |
CN102681889A (en) * | 2012-04-27 | 2012-09-19 | 电子科技大学 | Scheduling method of cloud computing open platform |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106648872A (en) * | 2016-12-29 | 2017-05-10 | 深圳市优必选科技有限公司 | Multi-thread processing method and device and server |
- 2016-12-29: CN CN201611242037.0A patent/CN106648872A/en, active, Pending
- 2017-12-28: WO PCT/CN2017/119561 patent/WO2018121696A1/en, active, Application Filing
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018121696A1 (en) * | 2016-12-29 | 2018-07-05 | 深圳市优必选科技有限公司 | Multi-thread processing method and device, and server |
CN107220033A (en) * | 2017-07-05 | 2017-09-29 | 百度在线网络技术(北京)有限公司 | Method and apparatus for controlling thread pool thread quantity |
CN109660569B (en) * | 2017-10-10 | 2021-10-15 | 武汉斗鱼网络科技有限公司 | Multitask concurrent execution method, storage medium, device and system |
CN109660569A (en) * | 2017-10-10 | 2019-04-19 | 武汉斗鱼网络科技有限公司 | A kind of Multi-task Concurrency executes method, storage medium, equipment and system |
CN109766131A (en) * | 2017-11-06 | 2019-05-17 | 上海宝信软件股份有限公司 | The system and method for the intelligent automatic upgrading of software is realized based on multithreading |
CN108492211A (en) * | 2018-04-04 | 2018-09-04 | 北京科东电力控制***有限责任公司 | Computational methods and device applied to electricity market business platform |
CN108510333A (en) * | 2018-04-27 | 2018-09-07 | 厦门南讯软件科技有限公司 | A kind of more clients integrate the processing method and processing device of fast-aging |
CN109582455B (en) * | 2018-12-03 | 2021-06-18 | 恒生电子股份有限公司 | Multithreading task processing method and device and storage medium |
CN109582455A (en) * | 2018-12-03 | 2019-04-05 | 恒生电子股份有限公司 | Multithreading task processing method, device and storage medium |
CN112578259A (en) * | 2019-09-29 | 2021-03-30 | 北京君正集成电路股份有限公司 | Thread scheduling method with data space setting |
CN112817771A (en) * | 2021-04-15 | 2021-05-18 | 成都四方伟业软件股份有限公司 | Shared multithreading service management method and device |
CN113360266A (en) * | 2021-06-23 | 2021-09-07 | 北京百度网讯科技有限公司 | Task processing method and device |
CN113360266B (en) * | 2021-06-23 | 2022-09-13 | 北京百度网讯科技有限公司 | Task processing method and device |
Also Published As
Publication number | Publication date |
---|---|
WO2018121696A1 (en) | 2018-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106648872A (en) | Multi-thread processing method and device and server | |
US8949847B2 (en) | Apparatus and method for managing resources in cluster computing environment | |
CN109672627A (en) | Method for processing business, platform, equipment and storage medium based on cluster server | |
CN105554102A (en) | Elastic expansion method based on container cluster and application system thereof | |
CN105159775A (en) | Load balancer based management system and management method for cloud computing data center | |
CN106790022B (en) | Communication means and its system based on more inquiry threads | |
CN103412786A (en) | High performance server architecture system and data processing method thereof | |
CN101227416A (en) | Method for distributing link bandwidth in communication network | |
CN106201739A (en) | A kind of remote invocation method of Storm based on Redis | |
CN107733813B (en) | Message forwarding method and device | |
CN104935636A (en) | Network channel acceleration method and system | |
CN113490279B (en) | Network slice configuration method and device | |
CN105553882A (en) | Method for scheduling SDN data plane resources | |
CN108595259B (en) | Memory pool management method based on global management | |
CN101308467A (en) | Task processing method and device | |
CN113672391B (en) | Parallel computing task scheduling method and system based on Kubernetes | |
CN108111578B (en) | Method for accessing power distribution terminal data acquisition platform into terminal equipment based on NIO | |
CN109445931A (en) | A kind of big data resource scheduling system and method | |
CN113890827B (en) | Power communication resource allocation method, device, storage medium and electronic equipment | |
Jiang et al. | Adia: Achieving high link utilization with coflow-aware scheduling in data center networks | |
CN105278873B (en) | A kind of distribution method and device of disk block | |
CN109193653A (en) | A kind of method and device of power distribution | |
CN111245794B (en) | Data transmission method and device | |
CN114675972A (en) | Method and system for flexibly scheduling cloud network resources based on integral algorithm | |
CN113342466A (en) | Kubernetes cloud native container-based variable starting resource limitation method and system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170510 |