CN102970244B - Network message processing method for multi-CPU inter-core load balancing - Google Patents

Network message processing method for multi-CPU inter-core load balancing

Info

Publication number
CN102970244B
CN102970244B CN201210484653.2A
Authority
CN
China
Prior art keywords
cpu
message
core
receiving queue
hash
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210484653.2A
Other languages
Chinese (zh)
Other versions
CN102970244A (en)
Inventor
裴建成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Huanchuang Communication Technology Co Ltd
Original Assignee
Shanghai Huanchuang Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Huanchuang Communication Technology Co Ltd
Priority to CN201210484653.2A
Publication of CN102970244A
Application granted
Publication of CN102970244B
Legal status: Active
Anticipated expiration

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention relates to a network message processing method for multi-CPU inter-core load balancing. The method first designates one CPU core to collect messages from the network card receive queue and distribute them to the message receive queues of the other CPU cores, until a message receive queue reaches its maximum threshold; the other CPU cores collect messages from their respective message receive queues and then perform protocol stack processing on them. Compared with the prior art, the present invention makes full use of CPU resources and achieves automatic load balancing.
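As an illustration of the arrangement described above, the following minimal C sketch models the per-core message receive queues with a maximum threshold. The structure layout and every name in it (core_rx_queue, QUEUE_MAX_THRESHOLD, the slot array) are assumptions made for this sketch only and are not taken from the patent.

    /* Illustrative model of the per-core message receive queues described in
     * the abstract; the structure layout and all names are assumptions made
     * for this sketch, not taken from the patent text. */
    #include <stddef.h>

    #define QUEUE_MAX_THRESHOLD 1024   /* the "maximum threshold" of one queue */
    #define CPU_CORE_NUMBERS    4      /* example total number of CPU cores */

    struct message;                    /* opaque network message */

    struct core_rx_queue {
        struct message *slots[QUEUE_MAX_THRESHOLD];
        size_t head;                   /* next slot the owning core dequeues */
        size_t tail;                   /* next slot the dispatching core fills */
        size_t count;                  /* current number of queued messages */
    };

    /* One queue per CPU core; the dispatching core enqueues into the other
     * cores' queues until a queue's count reaches QUEUE_MAX_THRESHOLD. */
    static struct core_rx_queue rx_queues[CPU_CORE_NUMBERS];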

Description

Network message processing method for multi-CPU inter-core load balancing
Technical field
The present invention relates to a network data processing method, and more particularly to a network message processing method for multi-CPU inter-core load balancing.
Background art
In the prior art, message reception for a single-receive-queue network card chip is usually implemented in a hardware-interrupt-triggered polling mode. Because of the limitation of the single queue, messages are generally delivered to only one CPU core, so the other CPU cores cannot fetch messages from the single receive queue in parallel. When the message load received by the network card exceeds the processing capability of one CPU core, that core becomes busy while all the other CPU cores stay idle. In other words, with a single-receive-queue network card chip, received messages can only be handed to one CPU core for processing, so the available processing capability is not used efficiently.
Summary of the invention
It is an object of the present invention to overcome the above-mentioned drawbacks of the prior art and to provide a network message processing method for multi-CPU inter-core load balancing that makes full use of CPU resources and achieves automatic load balancing.
The object of the present invention can be achieved through the following technical solution:
In the network message processing method for multi-CPU inter-core load balancing, one CPU core is first designated to collect messages from the network card receive queue and distribute them to the message receive queues of the other CPU cores, until a message receive queue reaches its maximum threshold; the other CPU cores collect messages from their respective message receive queues and then perform protocol stack processing on them.
Each CPU core is assigned a corresponding ID, which is an integer in the range [0, CPU_CORE_NUMBERS-1], where CPU_CORE_NUMBERS is the total number of CPU cores.
The network message processing method specifically comprises the following steps:
1) Designate the CPU core whose ID is CURRENT_CPU_ID as the current core, which collects messages from the network card receive queue;
2) The current core collects a message from the network card receive queue and increments the variable recv_packet_count by one, where recv_packet_count denotes the number of messages the current core has collected;
3) Determine whether hash_cpu is greater than or equal to CURRENT_CPU_ID; if so, increment hash_cpu by one and go to step 4); if not, go directly to step 4);
wherein hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1);
4) Determine whether the number of messages in the message receive queue of the CPU core whose ID is hash_cpu has reached the maximum threshold; if so, go to step 5); if not, go to step 6);
5) The current core performs protocol stack processing on the collected message directly, then returns to step 1);
6) The current core sends the collected message to the message receive queue of the CPU core whose ID is hash_cpu and notifies that CPU core to process the message, then returns to step 1).
Both recv_packet_count and hash_cpu are static unsigned integer variables.
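To make the control flow of steps 1) through 6) concrete, the following C sketch shows one possible shape of the dispatch loop. The helper functions nic_rx_poll(), core_queue_len(), core_queue_enqueue(), notify_core() and protocol_stack_process() are hypothetical stand-ins for driver and operating-system primitives that the patent does not name, and the core count and queue threshold are example values.

    /* Minimal sketch of the dispatch loop described in steps 1) to 6).
     * The helper functions declared below are hypothetical stand-ins for
     * driver and operating-system primitives not named in the patent. */
    #include <stddef.h>

    #define CPU_CORE_NUMBERS    4      /* example total number of CPU cores */
    #define CURRENT_CPU_ID      0      /* core designated to poll the NIC queue */
    #define QUEUE_MAX_THRESHOLD 1024   /* maximum depth of a per-core queue */

    struct message;                                 /* opaque network message */
    struct message *nic_rx_poll(void);              /* take one message from the NIC receive queue */
    size_t core_queue_len(unsigned int cpu_id);     /* depth of a core's message receive queue */
    void core_queue_enqueue(unsigned int cpu_id, struct message *m);
    void notify_core(unsigned int cpu_id);          /* e.g. an inter-processor interrupt */
    void protocol_stack_process(struct message *m);

    static unsigned int recv_packet_count;          /* messages collected by the current core */
    static unsigned int hash_cpu;                   /* ID of the core a message is distributed to */

    void dispatch_loop(void)
    {
        for (;;) {
            struct message *m = nic_rx_poll();      /* steps 1) and 2) */
            if (m == NULL)
                continue;
            recv_packet_count++;

            /* step 3): spread messages over the other CPU_CORE_NUMBERS-1 cores,
             * skipping the current core's own ID */
            hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1);
            if (hash_cpu >= CURRENT_CPU_ID)
                hash_cpu++;

            if (core_queue_len(hash_cpu) >= QUEUE_MAX_THRESHOLD) {
                /* step 4) queue full -> step 5): process locally on the current core */
                protocol_stack_process(m);
            } else {
                /* step 4) queue not full -> step 6): hand over and notify the target core */
                core_queue_enqueue(hash_cpu, m);
                notify_core(hash_cpu);
            }
        }
    }

Because hash_cpu ranges over 0 to CPU_CORE_NUMBERS-2 before the adjustment, the increment in step 3) guarantees that the dispatching core never selects itself as a target.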
Compared with the prior art, the present invention has the following advantages:
1) Under a hardware architecture in which a single-queue network card works with a multi-core CPU, the method of the present invention can make full use of every CPU core, thereby reaching the maximum network message processing capability without wasting CPU resources;
2) Automatic pipeline balancing is achieved between the distribution performed by the single core and the protocol stack processing capability of the other cores.
Brief description of the drawings
Fig. 1 is a structural diagram of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the accompanying drawings and a specific embodiment.
Embodiment
As shown in Fig. 1, the network message processing method for multi-CPU inter-core load balancing can be summarized as follows: one CPU core is first designated to collect messages from the network card receive queue and distribute them to the message receive queues of the other CPU cores, until a message receive queue reaches its maximum threshold; the other CPU cores collect messages from their respective message receive queues and then perform protocol stack processing on them. Each CPU core is assigned a corresponding ID, which is an integer in the range [0, CPU_CORE_NUMBERS-1], where CPU_CORE_NUMBERS is the total number of CPU cores.
Define a static unsigned integer variable recv_packet_count to record the number of messages that have been received, and a static unsigned integer variable hash_cpu to record the ID of the CPU core to which a message is to be distributed.
The network message processing method specifically comprises the following steps:
1) Designate the CPU core whose ID is CURRENT_CPU_ID as the current core, which collects messages from the network card receive queue;
2) The current core collects a message from the network card receive queue and increments the variable recv_packet_count by one, where recv_packet_count denotes the number of messages the current core has collected;
3) Determine whether hash_cpu is greater than or equal to CURRENT_CPU_ID; if so, increment hash_cpu by one and go to step 4); if not, go directly to step 4);
wherein hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1), that is, messages are spread evenly by taking the number of received messages modulo the total number of CPU cores minus one;
4) Determine whether the number of messages in the message receive queue of the CPU core whose ID is hash_cpu has reached the maximum threshold; if so, go to step 5); if not, go to step 6);
5) The current core performs protocol stack processing on the collected message directly, which reduces the chance for the current core to fetch further messages from the network card receive queue, relieves the message processing pressure on the other cores, and achieves automatic balancing between distribution and pipelined protocol stack processing; then return to step 1);
6) The current core sends the collected message to the message receive queue of the CPU core whose ID is hash_cpu and notifies that CPU core to process the message; return to step 1) and enter the next round of collecting messages from the network card receive queue.
Under a hardware architecture in which a single-queue network card works with a multi-core CPU, the above network message processing method for multi-CPU inter-core load balancing can make full use of every CPU core, thereby reaching the maximum network message processing capability without wasting CPU resources; automatic pipeline balancing is achieved between the distribution performed by the single core and the protocol stack processing capability of the other cores.
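A small self-contained program can be used to check the core-selection rule numerically. The example below assumes the illustrative values CPU_CORE_NUMBERS = 4 and CURRENT_CPU_ID = 0, which are not taken from the patent, and prints the target core chosen for each of the first twelve messages.

    /* Self-contained check of the core-selection rule, assuming the
     * illustrative values CPU_CORE_NUMBERS = 4 and CURRENT_CPU_ID = 0. */
    #include <stdio.h>

    #define CPU_CORE_NUMBERS 4
    #define CURRENT_CPU_ID   0

    int main(void)
    {
        unsigned int recv_packet_count = 0;

        for (int i = 0; i < 12; i++) {
            recv_packet_count++;
            unsigned int hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1);
            if (hash_cpu >= CURRENT_CPU_ID)     /* skip the dispatching core's own ID */
                hash_cpu++;
            printf("message %2u -> CPU core %u\n", recv_packet_count, hash_cpu);
        }
        return 0;
    }

The output cycles through cores 2, 3, 1, 2, 3, 1, ..., so the three non-dispatching cores receive an equal share of the messages while core 0, which polls the network card, is never chosen as a target.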

Claims (2)

1. A network message processing method for multi-CPU inter-core load balancing, characterized in that the method first designates one CPU core to collect messages from the network card receive queue and distribute them to the message receive queues of the other CPU cores, until a message receive queue reaches its maximum threshold; the other CPU cores collect messages from their respective message receive queues and then perform protocol stack processing on the messages;
each CPU core is assigned a corresponding ID, which is an integer in the range [0, CPU_CORE_NUMBERS-1], where CPU_CORE_NUMBERS is the total number of CPU cores;
the network message processing method specifically comprises the following steps:
1) designating the CPU core whose ID is CURRENT_CPU_ID as the current core, which collects messages from the network card receive queue;
2) the current core collecting a message from the network card receive queue and incrementing the variable recv_packet_count by one, where recv_packet_count denotes the number of messages the current core has collected;
3) determining whether hash_cpu is greater than or equal to CURRENT_CPU_ID; if so, incrementing hash_cpu by one and performing step 4); if not, directly performing step 4);
wherein hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1);
4) determining whether the number of messages in the message receive queue of the CPU core whose ID is hash_cpu has reached the maximum threshold; if so, performing step 5); if not, performing step 6);
5) the current core directly performing protocol stack processing on the collected message, and returning to step 1);
6) the current core sending the collected message to the message receive queue of the CPU core whose ID is hash_cpu, notifying that CPU core to process the message, and returning to step 1).
2. The network message processing method for multi-CPU inter-core load balancing according to claim 1, characterized in that recv_packet_count and hash_cpu are static unsigned integer variables.
CN201210484653.2A 2012-11-23 2012-11-23 Network message processing method for multi-CPU inter-core load balancing Active CN102970244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210484653.2A CN102970244B (en) 2012-11-23 2012-11-23 Network message processing method for multi-CPU inter-core load balancing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210484653.2A CN102970244B (en) 2012-11-23 2012-11-23 Network message processing method for multi-CPU inter-core load balancing

Publications (2)

Publication Number Publication Date
CN102970244A CN102970244A (en) 2013-03-13
CN102970244B true CN102970244B (en) 2018-04-13

Family

ID=47800131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210484653.2A Active CN102970244B (en) 2012-11-23 2012-11-23 Network message processing method for multi-CPU inter-core load balancing

Country Status (1)

Country Link
CN (1) CN102970244B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104639578B (en) * 2013-11-08 2018-05-11 华为技术有限公司 Multi-protocol stack load-balancing method and device
CN104969533B (en) * 2013-12-25 2018-11-06 华为技术有限公司 A kind of data package processing method and device
CN105630731A (en) * 2015-12-24 2016-06-01 曙光信息产业(北京)有限公司 Network card data processing method and device in multi-CPU (Central Processing Unit) environment
CN106533978B (en) * 2016-11-24 2019-09-10 东软集团股份有限公司 A kind of network load balancing method and system
CN109218226A (en) * 2017-07-03 2019-01-15 迈普通信技术股份有限公司 Message processing method and the network equipment
CN107888626B (en) * 2017-12-25 2020-11-06 新华三信息安全技术有限公司 Message detection method and device
CN108259369B (en) * 2018-01-26 2022-04-05 迈普通信技术股份有限公司 Method and device for forwarding data message
CN110166373B (en) * 2019-05-21 2022-12-27 优刻得科技股份有限公司 Method, device, medium and system for sending data from source physical machine to destination physical machine
CN111277514B (en) * 2020-01-21 2023-07-18 新华三技术有限公司合肥分公司 Message queue distribution method, message forwarding method and related devices
CN111314249B (en) * 2020-05-08 2021-04-20 深圳震有科技股份有限公司 Method and server for avoiding data packet loss of 5G data forwarding plane
CN112073332A (en) * 2020-08-10 2020-12-11 烽火通信科技股份有限公司 Message distribution method, multi-core processor and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8312175B2 (en) * 2010-01-21 2012-11-13 Vmware, Inc. Virtual machine access to storage via a multi-queue IO storage adapter with optimized cache affinity and PCPU load balancing

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101599028A (en) * 2009-07-08 2009-12-09 成都市华为赛门铁克科技有限公司 Method and device for URL (uniform resource locator) filtering in a multi-core CPU
CN101877666A (en) * 2009-11-13 2010-11-03 曙光信息产业(北京)有限公司 Method and device for receiving multi-application program message based on zero copy mode
CN101719872A (en) * 2009-12-11 2010-06-02 曙光信息产业(北京)有限公司 Zero-copy mode based method and device for sending and receiving multi-queue messages
CN102004673A (en) * 2010-11-29 2011-04-06 中兴通讯股份有限公司 Processing method and system of multi-core processor load balancing
CN102364455A (en) * 2011-10-31 2012-02-29 杭州华三通信技术有限公司 Balanced share control method and device for virtual central processing units (VCPUs) among cascaded multi-core central processing units (CPUs)
CN102571580A (en) * 2011-12-31 2012-07-11 曙光信息产业股份有限公司 Data receiving method and computer

Also Published As

Publication number Publication date
CN102970244A (en) 2013-03-13

Similar Documents

Publication Publication Date Title
CN102970244B (en) Network message processing method for multi-CPU inter-core load balancing
CN105430030B (en) Parallel-expandable application server based on OSGI technology
CN103929334A (en) Network abnormity notification method and apparatus
CN107204875B (en) Data reporting link monitoring method and device, electronic equipment and storage medium
CN108418743B (en) Chat room message distribution method and device and electronic equipment
CN102868635A (en) Multi-core and multi-thread method and system for preserving order of messages
CN104980515B (en) Message distribution processing method and apparatus in a cloud storage system
CN102185801A (en) Information processing method in instant messaging and instant messaging tool
CN109769029B (en) Communication connection method based on electricity consumption information acquisition system and terminal equipment
CN112383585A (en) Message processing system and method and electronic equipment
CN105554049B (en) Distributed service amount control method and device
CN103870331B (en) Method and electronic device for dynamically allocating processor cores
CN102945185A (en) Task scheduling method and device
CN104899088B (en) Message processing method and device
CN102811127A (en) Acceleration network card for cloud computing application layer
CN103179051B (en) Streaming media retransmission method and system
CN112019589B (en) Multi-level load balancing data packet processing method
CN115278395A (en) Network switching equipment, data stream processing control method and related equipment
CN110347518A (en) Message treatment method and device
CN105264499B (en) Message processing method, device and receiving core in a shared queue
WO2016091003A1 (en) Method for implementing service collaborative scheduling, computing board, and storage medium
CN111148159A (en) Data transmission method, device, equipment and computer readable storage medium
CN105095042B (en) Management information system and its method for processing business
CN112817761A (en) Energy-saving method for enhancing cloud computing environment
CN112988417B (en) Message processing method, device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant