CN106790115A - Nginx upstream proxy service system and implementation method - Google Patents

Nginx upstream proxy service system and implementation method

Info

Publication number
CN106790115A
CN106790115A
Authority
CN
China
Prior art keywords
data
nginx
user
chain
upstream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611223573.6A
Other languages
Chinese (zh)
Other versions
CN106790115B (en)
Inventor
郭春碌
费恩达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Original Assignee
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Topsec Technology Co Ltd, Beijing Topsec Network Security Technology Co Ltd, Beijing Topsec Software Co Ltd filed Critical Beijing Topsec Technology Co Ltd
Priority to CN201611223573.6A priority Critical patent/CN106790115B/en
Publication of CN106790115A publication Critical patent/CN106790115A/en
Application granted granted Critical
Publication of CN106790115B publication Critical patent/CN106790115B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/16Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L69/163In-band adaptation of TCP data exchange; In-band control procedures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer And Data Communications (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses an Nginx upstream proxy service system and an implementation method. The proxy service system includes a control-plane proxy service module and a data-plane Nginx module, and the method includes: establishing a shared memory between the control-plane proxy service module and the data-plane Nginx module; and using the shared memory as the carrier path for TCP-connection payload data, where the TCP-connection payload data includes the user's HTTP request data and the upstream server's HTTP response data. By using shared memory as the carrier path for TCP-connection payload data, the system and method effectively improve the data-forwarding efficiency of the proxy system.

Description

Nginx upstream proxy service system and implementation method
Technical field
The present invention relates to the field of network security, and in particular to an Nginx upstream proxy service system and an implementation method.
Background art
Network security devices of the multi-core era typically distinguish a control plane from a data plane. The control plane is provided to network administrators to manage the device via Telnet, Web, SSH, SNMP, and similar means. The basic task of the data plane is to process and forward the various types of traffic on each port; the concrete execution of functions such as L2/L3 forwarding, ACLs, QoS, multicast, and security protection all belongs to the duties of the data forwarding plane.
Nginx ("engine x") is a high-performance HTTP and reverse-proxy server, as well as an IMAP/POP3/SMTP proxy server. On Linux, Nginx handles TCP requests efficiently using the asynchronous, non-blocking epoll model, making it the first choice for proxy forwarding of HTTP requests on the data plane.
In the prior art, a data-plane Nginx server typically listens on one or several ports to provide Web service, and client connections to Nginx are handled by the data plane's fast protocol stack. Nginx then reverse-proxies to the upstream server over an AF_INET-domain socket, so all upstream and downstream messages between Nginx and the upstream server must pass through the system protocol stack. Alternatively, Nginx reverse-proxies to the upstream server over an AF_LOCAL/AF_UNIX-domain socket; in that case the upstream and downstream messages need not traverse the system protocol stack and are transferred by an IPC (Inter-Process Communication) mechanism.
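The AF_UNIX alternative described above can be illustrated with a minimal sketch (not from the patent; the request and response bytes are placeholders): two connected AF_UNIX stream sockets stand in for the Nginx-to-upstream hop, and the exchange stays inside the host rather than traversing the TCP/IP stack.

```python
import socket
import threading

# Two connected AF_UNIX stream sockets stand in for the Nginx<->upstream hop.
nginx_side, upstream_side = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

def upstream_server():
    # Toy upstream: read the proxied request, answer, and close.
    upstream_side.recv(4096)
    upstream_side.sendall(b"HTTP/1.1 200 OK\r\n\r\nhello")
    upstream_side.close()

worker = threading.Thread(target=upstream_server)
worker.start()

# "Nginx" forwards the client's request over the local (non-TCP/IP) hop.
nginx_side.sendall(b"GET /portal HTTP/1.1\r\nHost: example\r\n\r\n")
chunks = []
while True:
    chunk = nginx_side.recv(4096)
    if not chunk:  # upstream closed its end
        break
    chunks.append(chunk)
response = b"".join(chunks)
worker.join()
```

The patent's point is that even this AF_UNIX variant is not always available to the data-plane Nginx, which motivates the shared-memory carrier path described below in the disclosure.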
The prior art therefore has the following defects:
1. Upstream and downstream messages are processed by the system protocol stack, which degrades forwarding performance.
2. Owing to technical restrictions, in some situations the data-plane Nginx cannot communicate with the control plane over an AF_INET-domain socket.
3. Upstream and downstream messages must be transferred by kernel copies; the repeated copying hurts efficiency, processing depends on kernel scheduling, and filtering control is difficult to apply.
4. Owing to technical restrictions, in some situations the data-plane Nginx cannot communicate with the control plane over an AF_LOCAL/AF_UNIX-domain socket.
Summary of the invention
To overcome the above defects of the prior art, the technical problem to be solved by the present invention is to provide an Nginx upstream proxy service system and an implementation method.
To solve the above technical problem, the present invention provides an implementation method for an Nginx upstream proxy service system. The proxy service system includes a control-plane proxy service module and a data-plane Nginx module, and the method includes:
establishing a shared memory between the control-plane proxy service module and the data-plane Nginx module;
using the shared memory as the carrier path for TCP-connection payload data; the TCP-connection payload data includes the user's HTTP request data and the upstream server's HTTP response data.
To solve the above technical problem, the present invention also provides an Nginx upstream proxy service system, including a setup module, a control-plane proxy service module, and a data-plane Nginx module;
the setup module is configured to establish a shared memory between the control-plane proxy service module and the data-plane Nginx module;
the shared memory serves as the carrier path for TCP-connection payload data; the TCP-connection payload data includes the user's HTTP request data and the upstream server's HTTP response data.
The beneficial effects of the present invention are as follows:
By using shared memory as the carrier path for TCP-connection payload data, the method and system of the present invention effectively improve the data-forwarding efficiency of the proxy system.
Brief description of the drawings
Fig. 1 is a structural diagram of the Nginx upstream proxy service system in an embodiment of the present invention;
Fig. 2 is a structural diagram of the proxy service system on a hardware platform with a multi-core processor in an embodiment of the present invention.
Detailed description
To solve the problems of the prior art, the present invention provides an Nginx upstream proxy service system and an implementation method. The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and do not limit it.
As shown in Fig. 1, in an implementation method for an Nginx upstream proxy service system according to an embodiment of the present invention, the proxy service system includes a control-plane proxy service module and a data-plane Nginx module, and the method includes:
establishing a shared memory between the control-plane proxy service module and the data-plane Nginx module;
using the shared memory as the carrier path for TCP-connection payload data; the TCP-connection payload data includes the user's HTTP request data and the upstream server's HTTP response data.
Specifically, the data-plane Nginx module runs on one or more data-plane processing cores and provides user connection management and proxy data forwarding.
Shared memory: the memory shared by the data-plane Nginx module and the control-plane Proxy program, responsible for caching proxy data. Each client TCP connection corresponds to one storage region in the shared memory.
The control-plane proxy service module is the control-plane Proxy program: it acts as a client of the control-plane Apache, with each TCP connection corresponding to one Proxy-to-Apache connection. It obtains data from the shared memory and sends it to Apache; Apache's response data is returned to the data-plane Nginx through the shared memory.
Control-plane upstream server: provides the Web Portal that users access. In the figures, "data plane" and "data surface" have the same meaning, as do "management plane" and "control plane".
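The per-connection storage region can be pictured with a hypothetical sketch (all names here, including the flag values and slot layout, are assumptions, not the patent's data structures): one slot per client TCP connection, carrying a buffer and a state flag that both planes poll.

```python
from enum import Enum, auto

class SlotState(Enum):
    FREE = auto()             # slot available for a new connection
    NEW_CONNECTION = auto()   # data-plane Nginx claimed the slot
    REQUEST_READY = auto()    # "first new data": HTTP request written
    RESPONSE_READY = auto()   # "second new data": HTTP response written
    UPSTREAM_CLOSED = auto()  # first close-connection flag
    CLIENT_CLOSED = auto()    # second close-connection flag

class CarrierChannel:
    """One storage region in shared memory, keyed by the client's TCP connection."""
    def __init__(self, size=65536):
        self.state = SlotState.FREE
        self.buffer = bytearray(size)
        self.length = 0

    def write(self, payload, new_state):
        # Copy the payload into the region and publish the new state flag.
        self.length = len(payload)
        self.buffer[:self.length] = payload
        self.state = new_state

    def read(self):
        return bytes(self.buffer[:self.length])

# Data-plane Nginx writes a user's request into its carrier channel:
channel = CarrierChannel()
channel.write(b"GET /portal HTTP/1.1\r\n\r\n", SlotState.REQUEST_READY)
```

In a real implementation the region would live in OS shared memory (e.g. mmap or shm) rather than a Python object; the sketch only shows the slot-plus-flag discipline.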
Further, using the shared memory as the carrier path for TCP-connection payload data includes:
when the data-plane Nginx module receives a user Portal request, applying for one carrier channel from the shared memory and marking it as a new connection;
the control-plane proxy service module scanning the shared memory and, upon finding the new-connection mark, creating a local connection to the upstream server.
Specifically, the data-plane Nginx module receiving a user Portal request includes:
the data-plane Nginx module establishing a TCP connection with the user upon receiving the user's TCP request;
and, upon receiving the user's first HTTP request over that TCP connection, parsing the first HTTP request as the user Portal request.
On the basis of the above embodiment, variant embodiments are further proposed. It should be noted that, for brevity, only the differences from the above embodiment are described in each variant embodiment.
In one embodiment of the present invention, after the control-plane proxy service module scans the shared memory, finds the new-connection mark, and creates the local connection to the upstream server, the method further includes:
the data-plane Nginx module receiving the user's HTTP request data, writing the HTTP request data into the carrier channel, and marking it as first new data arrived;
the control-plane proxy service module polling the connection and, upon finding the first-new-data mark, reading the HTTP request data from the carrier channel and sending it to the upstream server; then receiving the upstream server's HTTP response data, writing the HTTP response data into the carrier channel, and marking it as second new data arrived;
the data-plane Nginx module polling the connection and, upon finding the second-new-data mark, reading the HTTP response data from the carrier channel and sending it to the user.
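The polled hand-off in this embodiment can be simulated in a few lines (an illustrative sketch; the flag strings and the dict standing in for the shared-memory slot are assumptions): the data plane writes the request and sets the first-new-data mark, and a control-plane thread polls, "forwards" to a stand-in upstream, writes the response back, and sets the second-new-data mark.

```python
import threading
import time

slot = {"state": "idle", "data": b""}  # stand-in for one carrier channel

def control_plane_proxy():
    # Poll for the "first new data" mark.
    while slot["state"] != "first_new_data":
        time.sleep(0.001)
    _request = slot["data"]
    # Stand-in for the local upstream connection round-trip.
    slot["data"] = b"HTTP/1.1 200 OK\r\n\r\nportal"
    slot["state"] = "second_new_data"

proxy = threading.Thread(target=control_plane_proxy)
proxy.start()

# Data-plane Nginx writes the user's HTTP request into the carrier path,
# then publishes the "first new data" mark.
slot["data"] = b"GET /portal HTTP/1.1\r\n\r\n"
slot["state"] = "first_new_data"

# Data plane polls for the "second new data" mark before replying to the user.
while slot["state"] != "second_new_data":
    time.sleep(0.001)
proxy.join()
```

A real shared-memory implementation would need proper memory ordering between processes; the sketch relies on Python's interpreter lock only to keep the illustration short.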
In another embodiment of the present invention, after the control-plane proxy service module scans the shared memory, finds the new-connection mark, and creates the local connection to the upstream server, the method further includes:
when the control-plane proxy service module detects that the upstream server has closed the local connection, setting a first close-connection flag on the carrier channel;
when the data-plane Nginx module polls the first close-connection flag, closing the socket with the user;
when the data-plane Nginx module detects that the user has closed the TCP connection, setting a second close-connection flag on the carrier channel;
when the control-plane proxy service module polls the second close-connection flag, closing the local connection to the upstream server.
In yet another embodiment of the present invention, as shown in Fig. 2, the proxy service system is arranged on a hardware platform with a multi-core processor;
each core corresponds to one data-plane Nginx module.
Specifically, the method of the present invention uses shared memory as the data transmission channel for TCP-connection data. Its processing comprises three procedures: connection establishment, HTTP request/response proxying, and connection closing.
I. Connection establishment:
The user initiates a TCP request, and the data-plane Nginx server establishes the TCP connection;
the user sends the first HTTP request to the data-plane Nginx;
the data-plane Nginx parses the HTTP request and resolves it as a user Portal request, then applies for one connection resource from the shared memory and marks it as "new connection";
the control-plane Proxy program scans the shared memory and, on finding a "new connection", creates a local connection to the upstream server and sends the HTTP request to the upstream server;
the upstream server processes the HTTP request, produces an HTTP response, and sends it to the Proxy agent;
the Proxy agent receives the HTTP response, writes it into the connection's storage region, and marks it as "new data arrived";
the data-plane Nginx polls the connection by timer, finds the "new data arrived" mark, obtains the data, and sends it to the client.
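The slot-claiming half of the establishment procedure can be sketched as follows (an assumed data layout, not the patent's; in particular the fixed-size pool and the string states are placeholders): the data plane claims a free slot and marks it "new connection", and the control-plane Proxy scans the pool, opening one local upstream connection per newly marked slot.

```python
FREE, NEW_CONNECTION, PROXIED = "free", "new_connection", "proxied"

# Fixed pool of carrier channels in "shared memory" (size assumed).
pool = [{"state": FREE, "upstream": None} for _ in range(4)]

def claim_slot(pool):
    """Data-plane Nginx: take the first free carrier channel for a new Portal request."""
    for index, slot in enumerate(pool):
        if slot["state"] == FREE:
            slot["state"] = NEW_CONNECTION
            return index
    raise RuntimeError("shared memory pool exhausted")

def scan_and_connect(pool):
    """Control-plane Proxy: create one local upstream connection per newly marked slot."""
    created = []
    for index, slot in enumerate(pool):
        if slot["state"] == NEW_CONNECTION:
            slot["upstream"] = "local-conn-%d" % index  # placeholder handle
            slot["state"] = PROXIED
            created.append(index)
    return created

first = claim_slot(pool)
created = scan_and_connect(pool)
```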
II. HTTP request/response proxying:
The user sends an HTTP request over the established TCP connection; the data-plane Nginx parses the HTTP request and performs an integrity check. If the check succeeds, it writes the request into the connection's storage region and marks it as "new data arrived"; if the check fails, it returns an HTTP error response;
the Proxy agent polls the connection at regular intervals, finds the "new data arrived" mark, obtains the data, and sends it to the upstream server;
the upstream server processes the HTTP request, produces an HTTP response, and sends it to the Proxy agent;
the Proxy agent receives the HTTP response, writes it into the connection's storage region, and marks it as "new data arrived";
the data-plane Nginx polls the connection by timer, finds the "new data arrived" mark, obtains the data, and sends it to the client.
III. Connection closing:
When the upstream server closes the local connection, the Proxy program detects it, releases the local connection resource to the upstream server, and sets a "close connection" flag in the connection's storage region; Nginx polls the "close connection" flag, closes the socket with the user, and releases the connection resource in the shared memory;
likewise, when the user closes the TCP connection with Nginx, Nginx detects it, releases the connection resource with the client, and sets a "close connection" flag in the connection's storage region; the Proxy program polls the "close connection" flag, closes the local connection to the upstream server, and releases the connection resource in the shared memory.
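Both teardown directions can be sketched with a pair of flags (the flag names "close_1"/"close_2" and the return strings are assumptions for illustration): each plane releases its own resources, raises a close flag in the slot, and the peer plane frees the slot when it polls the flag.

```python
class Slot:
    """Stand-in for one carrier channel's state word."""
    def __init__(self):
        self.state = "established"

def proxy_on_upstream_close(slot):
    """Control plane: the upstream closed the local connection."""
    slot.state = "close_1"   # first close-connection flag

def nginx_poll(slot):
    """Data plane: on the first close flag, close the user socket and free the slot."""
    if slot.state == "close_1":
        slot.state = "free"
        return "closed user socket"
    return None

def nginx_on_user_close(slot):
    """Data plane: the user closed the TCP connection."""
    slot.state = "close_2"   # second close-connection flag

def proxy_poll(slot):
    """Control plane: on the second close flag, close the local upstream connection."""
    if slot.state == "close_2":
        slot.state = "free"
        return "closed local connection"
    return None

a = Slot()
proxy_on_upstream_close(a)
upstream_result = nginx_poll(a)

b = Slot()
nginx_on_user_close(b)
user_result = proxy_poll(b)
```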
By using shared memory as the carrier path for TCP-connection payload data, the present invention effectively improves the data-forwarding efficiency of the proxy system.
The Proxy agent can be flexibly docked with different upstream servers, such as Apache, Nginx, and Lighttpd.
One data-plane Nginx agent process can be started on each core of the multi-core processor, effectively increasing the number of concurrent connections. The Nginx agent processes run on the data-plane protocol stack, so message processing inherits the performance advantage of the fast protocol stack.
The data plane and the control plane cooperate to provide the Web service that users access, exploiting the control plane's full-featured handling of complex business logic while preserving the data plane's processing efficiency.
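One way to keep each flow on a single per-core agent process is to hash the connection 4-tuple onto a worker, a common dispatch scheme sketched below (the scheme and all names are assumptions; the patent does not specify how connections are assigned to cores).

```python
import os
import zlib

NUM_CORES = os.cpu_count() or 4
# One hypothetical data-plane Nginx agent process per core.
workers = ["nginx-agent-core%d" % c for c in range(NUM_CORES)]

def dispatch(src_ip, src_port, dst_ip, dst_port):
    """Map a TCP 4-tuple to a per-core agent; the same flow always maps to the same core."""
    key = ("%s:%d->%s:%d" % (src_ip, src_port, dst_ip, dst_port)).encode()
    return workers[zlib.crc32(key) % NUM_CORES]

w1 = dispatch("10.0.0.5", 51200, "192.168.1.1", 80)
w2 = dispatch("10.0.0.5", 51200, "192.168.1.1", 80)  # same flow, same worker
```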
The present invention further proposes an Nginx upstream proxy service system.
An Nginx upstream proxy service system according to an embodiment of the present invention includes a setup module, a control-plane proxy service module, and a data-plane Nginx module;
the setup module is configured to establish a shared memory between the control-plane proxy service module and the data-plane Nginx module;
the shared memory serves as the carrier path for TCP-connection payload data; the TCP-connection payload data includes the user's HTTP request data and the upstream server's HTTP response data.
Further, the data-plane Nginx module is configured, upon receiving a user Portal request, to apply for one carrier channel from the shared memory and mark it as a new connection;
the control-plane proxy service module is configured to scan the shared memory and, upon finding the new-connection mark, create a local connection to the upstream server.
Specifically, the data-plane Nginx module receiving a user Portal request includes:
the data-plane Nginx module establishing a TCP connection with the user upon receiving the user's TCP request;
and, upon receiving the user's first HTTP request over that TCP connection, parsing the first HTTP request as the user Portal request.
Further, the data-plane Nginx module is additionally configured to receive the user's HTTP request data, write the HTTP request data into the carrier channel, and mark it as first new data arrived; and to poll the connection and, upon finding the second-new-data mark, read the HTTP response data from the carrier channel and send it to the user;
the control-plane proxy service module is additionally configured to poll the connection and, upon finding the first-new-data mark, read the HTTP request data from the carrier channel and send it to the upstream server; and to receive the upstream server's HTTP response data, write the HTTP response data into the carrier channel, and mark it as second new data arrived.
Further, the control-plane proxy service module is additionally configured, upon detecting that the upstream server has closed the local connection, to set a first close-connection flag on the carrier channel; and, upon polling the second close-connection flag, to close the local connection to the upstream server;
the data-plane Nginx module is additionally configured, upon polling the first close-connection flag, to close the socket with the user; and, upon detecting that the user has closed the TCP connection, to set a second close-connection flag on the carrier channel.
Here, the proxy service system is arranged on a hardware platform with a multi-core processor; each core corresponds to one data-plane Nginx module.
By using shared memory as the carrier path for TCP-connection payload data, the present invention effectively improves the data-forwarding efficiency of the proxy system.
The Proxy agent can be flexibly docked with different upstream servers, such as Apache, Nginx, and Lighttpd.
One data-plane Nginx agent process can be started on each core of the multi-core processor, effectively increasing the number of concurrent connections. The Nginx agent processes run on the data-plane protocol stack, so message processing inherits the performance advantage of the fast protocol stack.
The data plane and the control plane cooperate to provide the Web service that users access, exploiting the control plane's full-featured handling of complex business logic while preserving the data plane's processing efficiency.
Although this application describes particular examples of the invention, those skilled in the art can design variants of the invention without departing from the inventive concept.
Those skilled in the art, inspired by the technical concept of the invention, may also make various improvements to the present invention without departing from its content, and such improvements still fall within the scope and spirit of the invention.

Claims (10)

1. An implementation method for an Nginx upstream proxy service system, characterized in that the proxy service system includes a control-plane proxy service module and a data-plane Nginx module, and the method includes:
establishing a shared memory between the control-plane proxy service module and the data-plane Nginx module;
using the shared memory as the carrier path for TCP-connection payload data; the TCP-connection payload data includes the user's HTTP request data and the upstream server's HTTP response data.
2. The method of claim 1, characterized in that using the shared memory as the carrier path for TCP-connection payload data includes:
when the data-plane Nginx module receives a user Portal request, applying for one carrier channel from the shared memory and marking it as a new connection;
the control-plane proxy service module scanning the shared memory and, upon finding the new-connection mark, creating a local connection to the upstream server.
3. The method of claim 2, characterized in that the data-plane Nginx module receiving the user Portal request includes:
the data-plane Nginx module establishing a TCP connection with the user upon receiving the user's TCP request;
and, upon receiving the user's first HTTP request over the TCP connection, parsing the first HTTP request as the user Portal request.
4. The method of claim 2, characterized in that after the control-plane proxy service module scans the shared memory, finds the new-connection mark, and creates the local connection to the upstream server, the method further includes:
the data-plane Nginx module receiving the user's HTTP request data, writing the HTTP request data into the carrier channel, and marking it as first new data arrived;
the control-plane proxy service module polling the connection and, upon finding the first-new-data mark, reading the HTTP request data from the carrier channel and sending it to the upstream server, then receiving the upstream server's HTTP response data, writing the HTTP response data into the carrier channel, and marking it as second new data arrived;
the data-plane Nginx module polling the connection and, upon finding the second-new-data mark, reading the HTTP response data from the carrier channel and sending it to the user.
5. The method of claim 2, characterized in that after the control-plane proxy service module scans the shared memory, finds the new-connection mark, and creates the local connection to the upstream server, the method further includes:
when the control-plane proxy service module detects that the upstream server has closed the local connection, setting a first close-connection flag on the carrier channel;
when the data-plane Nginx module polls the first close-connection flag, closing the socket with the user;
when the data-plane Nginx module detects that the user has closed the TCP connection, setting a second close-connection flag on the carrier channel;
when the control-plane proxy service module polls the second close-connection flag, closing the local connection to the upstream server.
6. The method of any one of claims 1-5, characterized in that the proxy service system is arranged on a hardware platform with a multi-core processor;
each core corresponds to one data-plane Nginx module.
7. An Nginx upstream proxy service system, characterized in that the proxy service system includes a setup module, a control-plane proxy service module, and a data-plane Nginx module;
the setup module is configured to establish a shared memory between the control-plane proxy service module and the data-plane Nginx module;
the shared memory serves as the carrier path for TCP-connection payload data; the TCP-connection payload data includes the user's HTTP request data and the upstream server's HTTP response data.
8. The system of claim 7, characterized in that the data-plane Nginx module is configured, upon receiving a user Portal request, to apply for one carrier channel from the shared memory and mark it as a new connection;
the control-plane proxy service module is configured to scan the shared memory and, upon finding the new-connection mark, create a local connection to the upstream server.
9. The system of claim 8, characterized in that the data-plane Nginx module is additionally configured to receive the user's HTTP request data, write the HTTP request data into the carrier channel, and mark it as first new data arrived; and to poll the connection and, upon finding the second-new-data mark, read the HTTP response data from the carrier channel and send it to the user;
the control-plane proxy service module is additionally configured to poll the connection and, upon finding the first-new-data mark, read the HTTP request data from the carrier channel and send it to the upstream server; and to receive the upstream server's HTTP response data, write the HTTP response data into the carrier channel, and mark it as second new data arrived.
10. The system of claim 8, characterized in that the control-plane proxy service module is additionally configured, upon detecting that the upstream server has closed the local connection, to set a first close-connection flag on the carrier channel; and, upon polling the second close-connection flag, to close the local connection to the upstream server;
the data-plane Nginx module is additionally configured, upon polling the first close-connection flag, to close the socket with the user; and, upon detecting that the user has closed the TCP connection, to set a second close-connection flag on the carrier channel.
CN201611223573.6A 2016-12-27 2016-12-27 Nginx upstream agent service system and implementation method Active CN106790115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611223573.6A CN106790115B (en) 2016-12-27 2016-12-27 Nginx upstream agent service system and implementation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611223573.6A CN106790115B (en) 2016-12-27 2016-12-27 Nginx upstream agent service system and implementation method

Publications (2)

Publication Number Publication Date
CN106790115A true CN106790115A (en) 2017-05-31
CN106790115B CN106790115B (en) 2019-11-05

Family

ID=58926668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611223573.6A Active CN106790115B (en) 2016-12-27 2016-12-27 Nginx upstream agent service system and implementation method

Country Status (1)

Country Link
CN (1) CN106790115B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101082928A (en) * 2007-06-25 2007-12-05 腾讯科技(深圳)有限公司 Method for accessing database and data-base mapping system
CN101110088A (en) * 2007-04-17 2008-01-23 南京中兴软创科技有限责任公司 Database access interface method based on caching technology
CN103067484A (en) * 2012-12-25 2013-04-24 深圳市天维尔通讯技术有限公司 Method and system upgrading application program automatically
CN103631869A (en) * 2013-11-05 2014-03-12 北京奇虎科技有限公司 Method and device for releasing access pressure of server-side database

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101110088A (en) * 2007-04-17 2008-01-23 南京中兴软创科技有限责任公司 Database access interface method based on caching technology
CN101082928A (en) * 2007-06-25 2007-12-05 腾讯科技(深圳)有限公司 Method for accessing database and data-base mapping system
CN103067484A (en) * 2012-12-25 2013-04-24 深圳市天维尔通讯技术有限公司 Method and system upgrading application program automatically
CN103631869A (en) * 2013-11-05 2014-03-12 北京奇虎科技有限公司 Method and device for releasing access pressure of server-side database

Also Published As

Publication number Publication date
CN106790115B (en) 2019-11-05

Similar Documents

Publication Publication Date Title
CN103404103B (en) System and method for combining an access control system with a traffic management system
CN113347164B (en) Block chain-based distributed consensus system, method, device and storage medium
CN105991412B (en) Information push method and device
CN108886477A (en) A kind of equipment configuration method, device, customer terminal equipment and cloud server
EP2916522B1 (en) File transmission method and system thereof
CN106254379B (en) The processing system and processing method of network security policy
CN109167762B (en) IEC104 message checking method and device
CN106302817A (en) A kind of data/address bus implementation method based on Distributed Message Queue and device
Rajadurai et al. Steady state analysis of batch arrival feedback retrial queue with two phases of service, negative customers, Bernoulli vacation and server breakdown
US20170041265A1 (en) Methods and apparatus to manage message delivery in enterprise network environments
CN112800139A (en) Third-party application data synchronization system based on message queue
US8050199B2 (en) Endpoint registration with local back-off in a call processing system
CN106487598B (en) The more examples of isomery redundancy Snmp agreements realize system and its implementation
Collins et al. Online payments by merely broadcasting messages (extended version)
CN106790115A (en) Nginx upstream agents service system and implementation method
CN102316035A (en) Foreground and background communication and data safety processing method in cluster router system
CN108880866A (en) A kind of network service system
CN112769639A (en) Method and device for parallel issuing configuration information
CN112468549A (en) Method, equipment and storage medium for reverse communication and management of server
CN108243050A (en) A kind of method and apparatus that routing table is configured
CN107454210B (en) Communication method and system
CN110035082A (en) A kind of interchanger admission authentication method, interchanger and system
CN109327437A (en) Concurrent websocket business information processing method and server-side
CN112822080B (en) Bus system based on SOA architecture
CN110138668A (en) Stream description processing method and processing device, network entity and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant