CN112532683A - Edge calculation method and device based on micro-service architecture - Google Patents

Info

Publication number
CN112532683A
CN112532683A (application CN202011191305.7A)
Authority
CN
China
Prior art keywords
service
edge
edge server
api
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011191305.7A
Other languages
Chinese (zh)
Inventor
王弢
莫家钟
杨磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sino Tel Technologies Co ltd
Original Assignee
Beijing Sino Tel Technologies Co ltd
Application filed by Beijing Sino Tel Technologies Co ltd filed Critical Beijing Sino Tel Technologies Co ltd
Priority to CN202011191305.7A
Publication of CN112532683A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004: Server selection for load balancing
    • H04L67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0807: Network architectures or network communication protocols for network security for authentication of entities using tickets, e.g. Kerberos

Landscapes

  • Engineering & Computer Science
  • Computer Hardware Design
  • General Engineering & Computer Science
  • Computer Networks & Wireless Communication
  • Signal Processing
  • Computer Security & Cryptography
  • Computing Systems
  • Stored Programmes

Abstract

The invention provides an edge computing method and device based on a micro-service architecture. The method comprises: receiving a service request from an edge server through an API gateway; determining the service corresponding to the service request; and adjusting the service instances of that service to achieve load balancing among them, where the adjustment comprises adding a service instance corresponding to the service request or closing such a service instance. Based on this scheme, high availability of the system can be ensured. When the load of a service instance is too high because service requests increase, only the processing capacity of that specific service needs to be expanded; requests are balanced across multiple identical service instances by load balancing, so that each service instance runs at maximum efficiency. When the number of service requests falls, redundant service instances can be closed promptly, controlling power consumption and saving cost.

Description

Edge calculation method and device based on micro-service architecture
Technical Field
The invention relates to the technical field of Internet of things architectures, in particular to an edge computing method and device based on a micro-service architecture.
Background
Existing Internet of Things architectures fall roughly into three categories: the first is the traditional hierarchical DCM (Data Communication Module) architecture, the second is an Internet of Things architecture based on SOA (Service-Oriented Architecture), and the third is an Internet of Things architecture combined with cloud computing.
The first, traditional architecture is based on layering, so the roles and functions of the layers are mutually independent; it is easy to develop and highly testable, but hard to deploy and poor in overall flexibility. The second, SOA architecture organizes the dispersed functions of an enterprise application into standards-based, interoperable services, which developers can quickly combine and reuse to meet business needs. The core of SOA is the separation of technology from services to achieve service reuse. Within the SOA approach, to overcome both the monolithic application of the layered mode and the distributed application of the service-oriented architecture, an Internet of Things architecture based on micro-services has evolved; it can be updated more flexibly, scales the Internet of Things, and lets services iterate rapidly. The third category, combined with cloud computing, provides an extensible, high-performance way for Internet of Things applications to transmit data streams to the cloud, while the cloud provides a means of managing applications and data streams. This fused architecture of the Internet of Things and cloud computing mainly solves the compatibility of the various Internet of Things applications with respect to program data flow and service. However, as Internet of Things edge devices multiply rapidly, the data they generate grows explosively; transmitting all data to the cloud for processing can no longer deliver a good user experience, and on this basis an architecture combined with edge computing has evolved.
The SOA architecture is the most representative Internet of Things architecture at the present stage and also the most widely applied. In the traditional SOA development mode, the whole application code base is a single project containing several business function modules. When a user initiates an HTTP request through a browser to call the functions of these modules, the WebUI (Web user interface) placed in front of the platform system displays a static platform interface, while the dynamic data content of the page is produced by the business modules behind it. As shown in fig. 1, the WebUI sits in front of the business function modules of the whole platform system: module A completes 10% of business requests, module B completes 10%, and module C completes 80%, and the business modules interact directly with the back-end data store to read and write business operation data.
Therefore, under the SOA architecture, when the business access demand of users increases and the processing capacity of the back-end platform must be scaled horizontally, a load balancer can be added in front of the business processing platform; using some request load-balancing algorithm, it distributes business requests from users across multiple horizontally scaled instances, spreading the request pressure and increasing business processing capacity. The resulting platform architecture is shown in fig. 2. Although the request pressure is dispersed and the back-end response speed of the business platform increases, during the horizontal scaling of the whole back-end platform only module C is ever under high pressure, while modules A and B carry little business pressure; yet to scale the back-end platform horizontally, all modules must be scaled together, and module C cannot be scaled out on its own. In such a scenario, restarting or re-deploying the system may take a long time; moreover, since the whole project is one unified whole, technology selection is critical, and once the project has started it can essentially be implemented only in a single development language; in addition, server and storage resources are inevitably wasted, which hinders the development of Internet of Things systems.
Thus, there is a need for a better solution to the problems of the prior art.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an edge computing method and device based on a micro-service architecture, and based on this scheme, high availability of the system can be ensured. When the load of a service instance is too high because service requests increase, only the processing capacity of that specific service needs to be expanded; requests are balanced across multiple identical service instances by load balancing, so that each service instance runs at maximum efficiency. When the number of service requests falls, redundant service instances can be closed promptly, controlling power consumption and saving cost.
Specifically, the present invention proposes the following specific examples:
the embodiment of the invention provides an edge computing method based on a micro-service architecture, wherein the micro-service architecture comprises an API gateway and a plurality of independent service instances; each service instance and the edge server are connected with an API gateway; the method comprises the following steps:
receiving a service request of an edge server through the API gateway;
determining a service corresponding to the service request;
adjusting the service instances through the service to achieve load balancing among the service instances; the adjusting includes adding a service instance corresponding to the service request or closing a service instance corresponding to the service request.
In a specific embodiment, before receiving, by the API gateway, a service request of an edge server, the method further includes:
when the API gateway acquires the registration request of the edge server, recording the information of the edge server in a registry, generating a registration key and returning the registration key to the edge server;
and performing identity authentication on the edge server through the API gateway, allocating service-related permissions to the edge server after the authentication passes, generating a token associated with those permissions, and returning the token to the edge server.
In a specific embodiment, after the token associated with the permissions is returned to the edge server, the method further includes:
acquiring a query request of the edge server through the API gateway; the query request comprises information of service;
inquiring based on the inquiry request to obtain an instance table containing all service instances of the service;
determining, by the API gateway, an operating condition of each service instance in the instance table;
after load balancing is carried out through the API gateway, querying a preset database for the API information of the service instance in the best state;
and feeding the API information back to the edge server through the API gateway.
In a specific embodiment, after feeding back the API information to the edge server, the method further includes:
requesting, by the edge server, a service instance based on the token.
In a specific embodiment, the requesting, by the edge server, a service instance based on the token includes:
detecting whether a token exists locally through the edge server;
if not, acquiring a token through the API gateway based on the identity authentication microservice, and storing the acquired token in a cache;
requesting a service instance through the token in the cache.
In a specific embodiment, the method further comprises the following steps:
when the edge server needs to install the data processing application, judging whether the data processing application is installed in other edge servers adjacent to the edge server;
if the judgment result is yes, and the load of another edge server on which the data processing application is installed is smaller than a preset threshold, calling that edge server through RPC (remote procedure call) to use the data processing application.
In a specific embodiment, the method further comprises the following steps:
if the data processing application is not installed in any other edge server adjacent to the edge server, downloading an image file of the data processing application through the API gateway;
and installing the image file at the edge server.
In a specific embodiment, the image file is obtained by splitting an application that meets preset requirements; the preset requirements include that the required data delay is smaller than a preset delay and the data volume is larger than a preset size.
In a specific embodiment, the micro service architecture comprises a physical device layer, an edge computing layer, a support service layer and an intelligent application layer which are connected in sequence;
the physical device layer comprises intelligent devices with data acquisition, data processing and communication capabilities, and resource-constrained devices that can only sense the physical environment and acquire the sensed data.
The edge computing layer is used for acquiring the data of the physical equipment layer and executing equipment registration, data processing or service arrangement based on the acquired data;
the support service layer is used for gathering all services and is provided with an API (Application Programming Interface) management platform, general-purpose micro-services and business micro-services; the API interfaces are arranged in the API management platform;
the front end of the intelligent application layer is used for displaying services on different platforms, and its back end is connected with the support service layer to obtain the service support of that layer.
The embodiment of the invention further provides an edge computing device based on a micro-service architecture, wherein the micro-service architecture comprises an API gateway and a plurality of independent service instances; each service instance and the edge server are connected with the API gateway; the device comprises:
the acquisition module is used for receiving a service request of the edge server through the API gateway;
the determining module is used for determining the service corresponding to the service request;
the load balancing module is used for adjusting the service instances through the service so as to realize load balancing among the service instances; the adjusting includes adding the service instance corresponding to the service request or closing the service instance corresponding to the service request.
Compared with the prior art, the scheme of the application has the following effects:
high availability of the system can be ensured. When the load of a service instance is too high due to the increase of service requests, only the processing capacity of a specific service needs to be expanded, the requests among a plurality of same service instances are balanced through load balancing, and the efficiency of each service instance is guaranteed to be maximized; when the number of service requests is reduced, redundant service instances can be closed in time, power consumption is controlled, and cost is saved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
FIG. 1 is a schematic diagram of a business processing architecture under a conventional SOA architecture;
FIG. 2 is a schematic diagram of a business processing architecture after expansion under an existing SOA architecture;
fig. 3 is a schematic flowchart of an edge computing method based on a micro-service architecture according to an embodiment of the present invention;
fig. 4 is a processing diagram of an edge computing method based on a micro-service architecture according to an embodiment of the present invention;
fig. 5 is a schematic diagram of service processing after expansion under an edge computing method based on a micro-service architecture according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a registration flow of an edge server in an edge computing method based on a micro service architecture according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an information flow service API data request flow under an edge computing method based on a micro service architecture according to an embodiment of the present invention;
fig. 8 is a schematic diagram of an information flow service request flow under an edge computing method based on a micro service architecture according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a top-layer application sinking service request flow under an edge computing method based on a micro-service architecture according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a micro service architecture according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an edge computing device based on a micro service architecture according to an embodiment of the present invention.
Detailed Description
Various embodiments of the present disclosure will be described more fully hereinafter. The present disclosure is capable of various embodiments and of modifications and variations therein. However, it should be understood that: there is no intention to limit the various embodiments of the disclosure to the specific embodiments disclosed herein, but rather, the disclosure is to cover all modifications, equivalents, and/or alternatives falling within the spirit and scope of the various embodiments of the disclosure.
The terminology used in the various embodiments of the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the various embodiments of the present disclosure. As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the present disclosure belong. The terms (such as those defined in commonly used dictionaries) should be interpreted as having a meaning that is consistent with their contextual meaning in the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined in various embodiments of the present disclosure.
Example 1
The embodiment 1 of the invention discloses an edge computing method based on a micro-service architecture, wherein the micro-service architecture comprises an Application Programming Interface (API) gateway and a plurality of independent service instances; each service instance and the edge server are connected with an API gateway; as shown in fig. 3, the method includes:
Step 101, receiving a service request of an edge server through the API gateway;
Step 102, determining a service corresponding to the service request;
Step 103, adjusting the service instances through the service to achieve load balancing among the service instances; the adjusting includes adding a service instance corresponding to the service request or closing a service instance corresponding to the service request.
Specifically, when a service request to be processed is obtained, the type of service requested is determined. Taking fig. 4 or fig. 5 as an example, suppose the corresponding service is service C, executed by service instance C. In that case, the instances of service C need to be adjusted, specifically by adding further instances such as service instance C′ and service instance C″; load balancing is then performed across all instances, including the original instance C and the added C′ and C″. Thus, when the load of some service instance is too high because service requests increase, only the processing capacity of that specific service needs to be expanded, requests are balanced across the multiple identical service instances by load balancing, and each service instance runs at maximum efficiency.
Of course, when the number of service requests is reduced, redundant service instances can be closed in time, power consumption is controlled, and cost is saved.
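The scale-out and scale-in behaviour of Steps 101-103 can be sketched as follows. This is an illustrative Python sketch only: the class and method names, the per-instance capacity constant, and the round-robin policy are assumptions, since the patent does not fix a concrete algorithm.

```python
import itertools


class ServiceScaler:
    """Illustrative per-service instance pool: adds instances when the
    request load rises, closes redundant ones when it falls, and
    round-robins requests across the current instances."""

    def __init__(self, service_name, max_load_per_instance=100):
        self.service_name = service_name
        self.max_load = max_load_per_instance
        self.instances = [f"{service_name}-0"]  # always keep one instance
        self._rr = itertools.count()            # round-robin cursor

    def scale_for_load(self, pending_requests):
        """Add or close instances so capacity matches the pending load."""
        needed = max(1, -(-pending_requests // self.max_load))  # ceil division
        while len(self.instances) < needed:     # horizontal scale-out
            self.instances.append(f"{self.service_name}-{len(self.instances)}")
        while len(self.instances) > needed:     # close redundant instances
            self.instances.pop()
        return list(self.instances)

    def route(self):
        """Round-robin load balancing across the current instances."""
        return self.instances[next(self._rr) % len(self.instances)]
```

Note that only the one overloaded service is scaled; other services (the patent's modules A and B) are untouched, which is the contrast with whole-platform scaling under the SOA architecture of fig. 2.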
In a specific embodiment, before receiving, by the API gateway, a service request of an edge server, the method further includes:
when the API gateway acquires the registration request of the edge server, recording the information of the edge server in a registry, generating a registration key and returning the registration key to the edge server;
and performing identity authentication on the edge server through the API gateway, allocating service-related permissions to the edge server after the authentication passes, and generating a token (i.e., a credential) associated with those permissions and returning it to the edge server.
Specifically, as shown in fig. 6, when the edge server first accesses the API management platform, registration is required, that is, information of the edge server is stored and a key is allocated for a subsequent service request, which includes the following specific processes:
1) The edge server initiates a registration request; 2) the API management platform (specifically, the API gateway) accepts the registration request, records the edge server information in the registry, and generates an access key; 3) the API management platform returns the key to the edge server; 4) the API management platform invokes the identity authentication micro-service, which allocates permissions for the edge server and returns a token; 5) the API management platform returns the access token to the edge server.
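The five-step registration flow above can be sketched as follows. This is an illustrative Python sketch; the `ApiGateway` class, the in-memory registry, and the key/token format are assumptions, not taken from the patent.

```python
import secrets


class ApiGateway:
    """Illustrative gateway: records edge servers in a registry, issues an
    access key at registration, and issues a permission-bearing token after
    identity authentication."""

    def __init__(self):
        self.registry = {}  # edge-server info recorded at registration
        self.tokens = {}    # token -> granted permissions

    def register(self, server_id, info):
        # Steps 1-3: accept the request, record the server, return an access key.
        key = secrets.token_hex(8)
        self.registry[server_id] = {"info": info, "key": key}
        return key

    def authenticate(self, server_id, key, permissions):
        # Steps 4-5: verify identity, allocate permissions, return a token.
        entry = self.registry.get(server_id)
        if entry is None or entry["key"] != key:
            raise PermissionError("identity authentication failed")
        token = secrets.token_hex(8)
        self.tokens[token] = {"server": server_id, "permissions": permissions}
        return token
```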
In a specific embodiment, after returning the token associated with the right back to the edge computing server, the method further includes:
acquiring a query request of the edge server through the API gateway; the query request comprises information of service;
inquiring based on the inquiry request to obtain an instance table containing all service instances of the service;
determining, by the API gateway, an operating condition of each service instance in the instance table;
after load balancing is carried out through the API gateway, querying a preset database for the API information of the service instance in the best state;
and feeding the API information back to the edge server through the API gateway.
Further, as shown in fig. 7, the information flow service orchestrates the API interfaces provided by the upper-layer micro-services to fulfil its own requirements, so interaction between the micro-services and edge computing is realized through calls to those API interfaces. The edge computing layer first initiates a request to obtain API data, and the request obtains the relevant information of the micro-service instance through the API management platform acting as an agent. The specific flow is as follows: 1) the edge server calls the query interface of the API management platform, carrying the name, ID or tag of micro-service A in the request, and must also transmit its own access key; 2) the API management platform looks up the information of the target service instances on the service tree using the information of micro-service A provided by the edge server; 3) the service tree returns to the API management platform a service instance table containing all running instances of micro-service A; 4) since the API management platform needs to load-balance the requests, it traverses the whole service instance table and queries the running condition of each instance through the monitoring service; 5) the monitoring service returns the running information of each service instance to the API management platform; 6) after load balancing, the API management platform looks up the registration information of the optimal instance in the database; 7) the database returns the instance's API information; 8) the API management platform returns the API information of the micro-service instance to the edge server.
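Steps 4-7 of the flow above (traverse the instance table, query each instance's running condition, return the API information of the optimal instance) can be sketched as follows. All names are illustrative; `monitor` and `db` are stand-ins for the monitoring service and the registration database, and "best state" is assumed to mean lowest reported load.

```python
def best_instance_api(instance_table, monitor, db):
    """Traverse the whole service instance table, compare each instance's
    running condition from the monitoring service, and return the API
    information of the least-loaded (best-state) instance."""
    best = min(instance_table, key=lambda inst: monitor[inst]["load"])
    return db[best]["api"]
```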
In addition, after feeding back the API information to the edge server, the method further includes:
requesting, by the edge server, a service instance based on the token.
Further, the "requesting, by the edge server, a service instance based on the token" includes:
detecting whether a token exists locally through the edge server;
if not, acquiring a token through the API gateway based on the identity authentication microservice, and storing the acquired token in a cache;
requesting a service instance through the token in the cache.
Specifically, as shown in fig. 8, after acquiring the data of the service instance's API interface, the edge server may directly request that API interface to obtain the service. However, a token issued by the identity authentication micro-service is required before requesting the API; if no token is present in the cache of the current edge server, one must first be generated by the authentication micro-service.
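The check-cache-then-fetch behaviour above can be sketched as follows. This is an illustrative Python sketch; `fetch_token` stands in for the call through the API gateway to the identity authentication micro-service, and the class name is an assumption.

```python
class EdgeClient:
    """Illustrative edge-server client: reuses a locally cached token and
    only calls the authentication micro-service when no token is cached."""

    def __init__(self, fetch_token):
        self._fetch_token = fetch_token  # gateway call to the auth micro-service
        self._cached_token = None

    def get_token(self):
        # Detect whether a token exists locally; if not, acquire one and
        # store it in the cache for subsequent service-instance requests.
        if self._cached_token is None:
            self._cached_token = self._fetch_token()
        return self._cached_token
```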
Specifically, the method further comprises:
when the edge server needs to install the data processing application, judging whether the data processing application is installed in other edge servers adjacent to the edge server;
if the judgment result is yes, and the load of another edge server on which the data processing application is installed is smaller than a preset threshold, calling that edge server through RPC (remote procedure call) to use the data processing application.
In addition, the method further comprises: if the data processing application is not installed in any other edge server adjacent to the edge server, downloading an image file of the data processing application through the API gateway;
and installing the image file at the edge server.
The image file is obtained by splitting an application that meets preset requirements; the preset requirements include that the required data delay is smaller than a preset delay and the data volume is larger than a preset size.
Specifically, as shown in fig. 9, the top-level application sinking service is a data-processing-related service split out of a top-level application. Such an application has high requirements on data real-time performance and a large data volume; transmitting the data directly from the source to the cloud center demands high bandwidth and incurs large delay. The micro-service architecture supports splitting such applications: the data processing parts are packaged into image files, sunk to the edge computing side, and installed in a container on an edge server to run. Considering the size of the image file and the stability of the network between the micro-service instance and the edge server, when an edge server needs to install and run such a service, it first searches the adjacent edge servers for one that already has the service installed; if such a server exists and its load is not high, the service on that adjacent server is called directly through remote procedure call (RPC); if none exists, the corresponding image file must be downloaded through the API management platform and installed locally.
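The placement decision above can be sketched as follows. This is an illustrative Python sketch; the function name, the neighbour-record layout, and the load threshold are assumptions, and the returned tuple merely labels which branch (neighbour RPC vs. local image install) would be taken.

```python
def place_sunk_service(app, neighbors, load_threshold):
    """Prefer calling, via RPC, an adjacent edge server that already has the
    sunk application installed and is lightly loaded; otherwise fall back to
    downloading the image file through the API management platform and
    installing it in a local container."""
    for neighbor in neighbors:
        if app in neighbor["installed"] and neighbor["load"] < load_threshold:
            return ("rpc", neighbor["id"])   # call the neighbour's instance
    return ("install", app)                  # download image, run locally
```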
In a specific embodiment, as shown in fig. 10, the micro service architecture includes a physical device layer, an edge computing layer, a support service layer, and an intelligent application layer, which are connected in sequence;
the physical device layer comprises intelligent devices with data acquisition, data processing and communication capabilities, and resource-constrained devices that can only sense the physical environment and acquire the sensed data.
The edge computing layer is used for acquiring the data of the physical equipment layer and executing equipment registration, data processing or service arrangement based on the acquired data;
the support service layer is used for gathering all services and is provided with an API (Application Programming Interface) management platform, general-purpose micro-services and business micro-services; the API interfaces are arranged in the API management platform;
the front end of the intelligent application layer is used for displaying services on different platforms, and its back end is connected with the support service layer to obtain the service support of that layer.
Specifically, as shown in fig. 10, the physical device layer is composed of a series of physical devices, including both intelligent devices with data acquisition, data processing and communication capabilities, and common sensors and dedicated devices that can only sense the physical environment and convert it into data; the latter are referred to as resource-constrained devices. Because device capabilities within the physical device layer differ, a virtual device manager is used in the edge computing layer to manage the resource-constrained devices and balance this difference, so that device data can be transmitted to the message queue.
Edge computing layer: in the general sense, the edge computing layer can be divided into edge devices, edge gateways and edge servers. In this patent, the edge devices are assigned to the physical device layer, while the edge gateways and edge servers are integrated together in the edge computing layer, which can therefore provide both device connection and management capabilities and services. Along the data flow, the edge computing layer is divided into three modules: device registration, data processing and service arrangement. Among them:
the device registration module faces to the bottom layer physical device and is used for counting the information of the device accessed to the edge computing layer. There are differences in capabilities between physical devices and thus different device registration methods. And for the intelligent equipment, equipment registration is carried out according to different communication protocols, such as MQTT, Modbus, RS485 and the like. The edge computing layer provides a registration service API, and a user can manually fill in equipment information and preset a control instruction at the same time, so that automatic management of the access equipment is realized. The resource-limited device does not have the capability of actively receiving and sending the instruction, so that the virtual device manager is configured in the device registration module, and a user can act on the resource-limited device in the virtual device manager, transmit the upward acting data and receive the instruction, and acquire and send the downward acting data and the instruction.
The data processing module supports the entire edge computing layer and maintains a message bus covering the whole layer for data circulation. It also implements data acquisition, message queuing, the device registry, log management, and data storage, making it the key module that connects lower-layer device services with upper-layer business services.
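The layer-wide message bus can be pictured as a small publish/subscribe broker that also persists every message, standing in for the data-storage and log-management functions. This is an illustrative sketch; the `MessageBus` class and topic names are assumptions, not part of the patent.

```python
from collections import defaultdict


class MessageBus:
    """Carries device data between the modules of the edge computing layer."""

    def __init__(self):
        self._subscribers = defaultdict(list)
        self.store = []  # stand-in for the data-storage function

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Every message is persisted before being delivered to subscribers
        # such as the device registry or upper-layer business services.
        self.store.append((topic, message))
        for handler in self._subscribers[topic]:
            handler(message)
```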
The service orchestration module manages services on the edge side. The edge computing layer sits between the underlying data sources and the top-level applications; because it is close to the data sources, services such as data processing, data filtering, and data compression can be sunk from the top-level applications into the edge computing layer. The service orchestration module therefore organizes edge-side services into edge-local closed-loop services, top-level application sinking services, and information flow services. Local closed-loop service: acts on devices registered at the edge computing layer; its logic is simple, generally involves no network requests, and configures the running state of the devices according to a state machine. Top-level application sinking service: belongs to part of a top-level application; the user generally does not need to write it, as its image file is downloaded directly from the support service layer and run in a container on the local edge server. Information flow service: fulfills requirements by orchestrating some of the microservices of the support service layer and calling the related service APIs. To support service operation, the service orchestration module also provides functions such as configuration management, service scheduling, and exception handling.
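The local closed-loop service described above can be sketched as a tiny state machine with no network calls: device events simply drive transitions of a locally held running state. The states and events below are illustrative assumptions.

```python
class ClosedLoopService:
    """Edge-local closed-loop service: the running state of a registered
    device is configured purely by a state machine."""

    TRANSITIONS = {
        ("idle", "start"): "running",
        ("running", "stop"): "idle",
        ("running", "overheat"): "cooling",
        ("cooling", "cooled"): "idle",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # Events with no defined transition leave the device state unchanged,
        # keeping the loop closed and local.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```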
Support service layer: the support service layer is a collection of multiple cloud microservices and consists of an API management platform, general-purpose microservices, and business microservices. All API lookup requests and API calls from the edge computing layer pass through the API management platform, which accesses the general-purpose microservices directly for service discovery and service monitoring, obtains the running state of microservice containers, and controls traffic to achieve load balancing. Because the API management platform involves a large amount of database lookup work, the information of the most frequently requested microservices is cached to provide high-quality service. The microservice architecture includes the following general-purpose microservices for managing business microservices:
service registration: the service micro service stores the information of the IP address, the port number, the request specification and the like of the service micro service in a database, and simultaneously exposes the API relevant to the service;
service discovery: the system performs polling on the micro-service examples at intervals, checks which micro-service examples are running, and automatically updates the micro-service examples into a database;
and (3) authorization and authentication: checking the identities of all the micro-service examples, allocating the identities for each micro-service example, and limiting the mutual calling among the micro-service examples by different identities;
configuring a service: managing configuration information for container installation;
log management: recording the operation data of the micro service instance;
service monitoring: and monitoring the running state of each micro-service instance, measuring parameters such as a CPU (Central processing Unit), a memory, the number of tasks and the like, analyzing the load state of the micro-service instance, and providing data for load balancing decision.
A business microservice is a microservice instance registered by a third party on the support service layer. A microservice with higher performance requirements can be split into a core business part and a data processing part: the core business part is registered on the support service layer, while the data processing part is sent to the edge computing layer for accelerated processing.
Intelligent application layer: the intelligent application is the outward expression of the entire Internet-of-Things service. Its front end can be displayed on different platforms such as mobile apps, web pages, and mini-programs, and its back-end service accesses the support service layer to obtain service support.
Therefore, by combining edge computing technology, the scheme of the application enables the Internet of Things to process more data at the edge side; the microservice architecture flexibly extends edge-side services, and the task processing flow of the edge computing server is designed to accommodate changes in edge-side services.
Embodiment 2
To further explain the solution of the present application, embodiment 2 of the present invention further discloses an edge computing device based on a microservice architecture, where the microservice architecture includes an API gateway and multiple independent service instances, and each service instance and the edge server are connected to the API gateway; as shown in fig. 11, the device includes:
an acquisition module 201, configured to receive a service request of an edge server through the API gateway;
a determining module 202, configured to determine a service corresponding to the service request;
a load balancing module 203, configured to adjust the service instances through the service to achieve load balancing among the service instances, where the adjusting includes adding a service instance corresponding to the service request or closing a service instance corresponding to the service request.
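The load balancing module's adjustment can be sketched as a simple rule on the service's average load: add an instance when load is high, close the least-loaded one when load is low. The thresholds and the `LoadBalancingModule` name are illustrative assumptions.

```python
class LoadBalancingModule:
    """Grows or shrinks a service's instance set to balance load."""

    def __init__(self, high=0.8, low=0.2):
        self.high, self.low = high, low

    def adjust(self, instances):
        """instances: mutable list of per-instance load figures in [0, 1].
        Returns the action taken: 'add', 'close', or 'keep'."""
        avg = sum(instances) / len(instances)
        if avg > self.high:
            instances.append(0.0)             # add an instance for this service
            return "add"
        if avg < self.low and len(instances) > 1:
            instances.remove(min(instances))  # close the least-loaded instance
            return "close"
        return "keep"
```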
In a specific embodiment, the device further comprises a registration module configured to: before the service request of the edge server is received through the API gateway, when the API gateway obtains a registration request of the edge server, record the information of the edge server in a registry, generate a registration key, and return the registration key to the edge server;
and perform identity authentication on the edge server through the API gateway, assign service-related permissions to the edge server after the authentication passes, generate a token associated with the permissions, and return the token to the edge server.
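A minimal sketch of this registration-then-authentication flow, assuming an in-memory gateway: the gateway stores the edge server's information with a generated registration key, and later exchanges a matching key for a permission-scoped token. All names are illustrative assumptions.

```python
import secrets


class ApiGateway:
    def __init__(self):
        self.registry = {}   # edge server id -> info + registration key
        self.tokens = {}     # token -> holder and permissions

    def register_edge_server(self, server_id, info):
        # Record the edge server in the registry and generate its key.
        key = secrets.token_hex(16)
        self.registry[server_id] = {"info": info, "key": key}
        return key  # returned to the edge server

    def authenticate(self, server_id, key, permissions):
        # Identity check: the presented key must match the registered one;
        # only then is a token associated with the permissions issued.
        record = self.registry.get(server_id)
        if record is None or record["key"] != key:
            return None
        token = secrets.token_hex(16)
        self.tokens[token] = {"server": server_id, "permissions": permissions}
        return token  # returned to the edge server
```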
In a specific embodiment, the device further comprises an API information acquisition module configured to: after the token associated with the permissions is returned to the edge server, obtain a query request of the edge server through the API gateway, the query request including information of a service;
query, based on the query request, an instance table containing all service instances of the service;
determine, through the API gateway, the operating condition of each service instance in the instance table;
after load balancing is performed through the API gateway, look up, in a preset database, the API information of the service instance in the best state;
and feed the API information back to the edge server through the API gateway.
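The selection step can be sketched as one function over the instance table: keep only running instances and return the API information of the one in the best state, here taken to be the lowest load. The field names are illustrative assumptions.

```python
def best_instance_api(instance_table):
    """instance_table: list of dicts with 'running', 'load', and 'api' keys.
    Returns the API info of the running instance with the lowest load,
    or None if no instance of the service is running."""
    running = [i for i in instance_table if i["running"]]
    if not running:
        return None
    return min(running, key=lambda i: i["load"])["api"]
```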
In a specific embodiment, the device further comprises a request module configured to request a service instance based on the token through the edge server after the API information is fed back to the edge server.
In a specific embodiment, the operation of the request module of "requesting, by the edge server, a service instance based on the token" includes:
detecting, by the edge server, whether a token exists locally;
if not, obtaining a token through the API gateway based on an identity authentication microservice, and storing the obtained token in a cache;
and requesting a service instance with the token in the cache.
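A sketch of this token-caching logic on the edge-server side: a cached token is reused when present, and the identity-authentication microservice is contacted through the gateway only when it is not. The `EdgeServerClient` name and the injected `fetch_token` callable are illustrative assumptions.

```python
class EdgeServerClient:
    def __init__(self, fetch_token):
        self._fetch_token = fetch_token  # call into the API gateway
        self._cache = None
        self.fetches = 0

    def _token(self):
        if self._cache is None:             # no token exists locally
            self._cache = self._fetch_token()
            self.fetches += 1               # one round trip to the gateway
        return self._cache

    def request_service(self, service):
        # Every service request carries the token from the cache.
        return {"service": service, "token": self._token()}
```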
In a specific embodiment, the device further comprises a processing module configured to:
when the edge server needs to install a data processing application, determine whether the data processing application is installed in other edge servers adjacent to the edge server;
if so, and the load of an adjacent edge server on which the data processing application is installed is smaller than a preset threshold, call the edge server whose load is smaller than the preset threshold through RPC to use the installed data processing application.
The processing module is further configured to:
if the data processing application is not installed in any edge server adjacent to the edge server, download an image file of the data processing application through the API gateway;
and install the image file on the edge server.
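The two branches above can be sketched as one decision function: prefer an adjacent edge server that already has the application and is below the load threshold (called through RPC), otherwise download the image file through the gateway and install locally. All names, including the `download_image` callable, are illustrative assumptions.

```python
def acquire_data_processing_app(neighbors, load_threshold, download_image):
    """neighbors: list of dicts with 'name', 'has_app', and 'load' keys.
    Returns ('rpc', neighbor-name) when an adjacent server can serve the
    application, or ('local-install', image) after a local installation."""
    candidates = [n for n in neighbors
                  if n["has_app"] and n["load"] < load_threshold]
    if candidates:
        # Call the least-loaded qualifying neighbor through RPC.
        target = min(candidates, key=lambda n: n["load"])
        return ("rpc", target["name"])
    # No suitable neighbor: download the image file and install it locally.
    return ("local-install", download_image())
```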
In a specific embodiment, the image file is obtained by splitting an application with preset requirements; the preset requirements include that the data delay is smaller than a preset delay and the data volume is larger than a preset size.
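The preset requirements amount to a simple predicate deciding which applications get split and packaged as an edge image. The parameter names and default thresholds below are illustrative assumptions.

```python
def needs_edge_image(required_delay_ms, data_volume_mb,
                     preset_delay_ms=100, preset_size_mb=50):
    """An application qualifies for splitting when its required data delay is
    smaller than the preset delay and its data volume exceeds the preset
    size; its data processing part is then packaged as an image file."""
    return required_delay_ms < preset_delay_ms and data_volume_mb > preset_size_mb
```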
In a specific embodiment, the microservice architecture comprises a physical device layer, an edge computing layer, a support service layer, and an intelligent application layer connected in sequence;
the physical device layer comprises intelligent devices with data acquisition, data processing, and communication capabilities, and resource-constrained devices capable only of sensing the physical environment to obtain sensed data;
the edge computing layer is used for acquiring data from the physical device layer and performing device registration, data processing, or service orchestration based on the acquired data;
the support service layer aggregates all services and is provided with an API (Application Programming Interface) management platform, general-purpose microservices, and business microservices; the API interfaces are arranged in the API management platform;
the front end of the intelligent application layer displays services on different platforms, and its back end connects to the support service layer to obtain service support.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present invention.
Those skilled in the art will appreciate that the modules in the devices of an implementation scenario may be distributed among the devices of that scenario as described, or may be located, with corresponding changes, in one or more devices different from the present implementation scenario. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The above serial numbers are for description only and do not represent the relative merits of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present invention, however, the present invention is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present invention.

Claims (10)

1. An edge computing method based on a microservice architecture, characterized in that the microservice architecture comprises an API gateway and a plurality of independent service instances, each service instance and an edge server being connected to the API gateway; the method comprises the following steps:
receiving a service request of the edge server through the API gateway;
determining a service corresponding to the service request;
and adjusting the service instances through the service to achieve load balancing among the service instances, wherein the adjusting comprises adding a service instance corresponding to the service request or closing a service instance corresponding to the service request.
2. The method of claim 1, further comprising, before receiving the service request of the edge server through the API gateway:
when the API gateway obtains a registration request of the edge server, recording the information of the edge server in a registry, generating a registration key, and returning the registration key to the edge server;
and performing identity authentication on the edge server through the API gateway, assigning service-related permissions to the edge server after the authentication passes, generating a token associated with the permissions, and returning the token to the edge server.
3. The method of claim 2, further comprising, after returning the token associated with the permissions to the edge server:
obtaining a query request of the edge server through the API gateway, the query request comprising information of a service;
querying, based on the query request, an instance table containing all service instances of the service;
determining, through the API gateway, the operating condition of each service instance in the instance table;
after load balancing is performed through the API gateway, looking up, in a preset database, the API information of the service instance in the best state;
and feeding the API information back to the edge server through the API gateway.
4. The method of claim 3, further comprising, after feeding the API information back to the edge server:
requesting, by the edge server, a service instance based on the token.
5. The method of claim 4, wherein the requesting, by the edge server, a service instance based on the token comprises:
detecting, by the edge server, whether a token exists locally;
if not, obtaining a token through the API gateway based on an identity authentication microservice, and storing the obtained token in a cache;
and requesting a service instance with the token in the cache.
6. The method of claim 1, further comprising:
when the edge server needs to install a data processing application, determining whether the data processing application is installed in other edge servers adjacent to the edge server;
if so, and the load of an adjacent edge server on which the data processing application is installed is smaller than a preset threshold, calling the edge server whose load is smaller than the preset threshold through RPC to use the installed data processing application.
7. The method of claim 6, further comprising:
if the data processing application is not installed in any edge server adjacent to the edge server, downloading an image file of the data processing application through the API gateway;
and installing the image file on the edge server.
8. The method of claim 7, wherein the image file is obtained by splitting an application with preset requirements, the preset requirements comprising that the data delay is smaller than a preset delay and the data volume is larger than a preset size.
9. The method of claim 1, wherein the microservice architecture comprises a physical device layer, an edge computing layer, a support service layer, and an intelligent application layer connected in sequence;
the physical device layer comprises intelligent devices with data acquisition, data processing, and communication capabilities, and resource-constrained devices capable only of sensing the physical environment to obtain sensed data;
the edge computing layer is used for acquiring data from the physical device layer and performing device registration, data processing, or service orchestration based on the acquired data;
the support service layer aggregates all services and is provided with an API management platform, general-purpose microservices, and business microservices, the API interfaces being arranged in the API management platform;
and the front end of the intelligent application layer displays services on different platforms, and the back end of the intelligent application layer connects to the support service layer to obtain service support.
10. An edge computing device based on a microservice architecture, characterized in that the microservice architecture comprises an API gateway and a plurality of independent service instances, each service instance and an edge server being connected to the API gateway; the device comprises:
an acquisition module, configured to receive a service request of the edge server through the API gateway;
a determining module, configured to determine a service corresponding to the service request;
and a load balancing module, configured to adjust the service instances through the service to achieve load balancing among the service instances, wherein the adjusting comprises adding a service instance corresponding to the service request or closing a service instance corresponding to the service request.
CN202011191305.7A 2020-10-30 2020-10-30 Edge calculation method and device based on micro-service architecture Pending CN112532683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011191305.7A CN112532683A (en) 2020-10-30 2020-10-30 Edge calculation method and device based on micro-service architecture

Publications (1)

Publication Number Publication Date
CN112532683A true CN112532683A (en) 2021-03-19

Family

ID=74979274

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107205020A (en) * 2017-05-05 2017-09-26 国网福建省电力有限公司 Service load balancing method and system under Service-Oriented Architecture Based
CN108712464A (en) * 2018-04-13 2018-10-26 中国科学院信息工程研究所 A kind of implementation method towards cluster micro services High Availabitity
CN110489139A (en) * 2019-07-03 2019-11-22 平安科技(深圳)有限公司 A kind of real-time data processing method and its relevant device based on micro services
CN111181727A (en) * 2019-12-16 2020-05-19 北京航天智造科技发展有限公司 Open API full life cycle management method based on micro service

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI LINZHE: "Architecture, Challenges and Applications of Edge Computing", Big Data, pages 2 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115150356A (en) * 2021-03-30 2022-10-04 ***通信有限公司研究院 Method and device for calling edge capability by terminal
CN113535396A (en) * 2021-07-14 2021-10-22 的卢技术有限公司 Cloud application service-free architecture implementation system and method
CN113535396B (en) * 2021-07-14 2023-08-15 西藏宁算科技集团有限公司 Cloud application non-service architecture implementation system and method
CN114584544A (en) * 2022-02-25 2022-06-03 煤炭科学技术研究院有限公司 Intelligent cloud box system for coal mine
CN114760060A (en) * 2022-06-15 2022-07-15 杭州天舰信息技术股份有限公司 Service scheduling method for edge computing
WO2024056042A1 (en) * 2022-09-16 2024-03-21 中兴通讯股份有限公司 Load balancing processing method and apparatus, storage medium and electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210319