CN112487218B - Content processing method, system, device, computing equipment and storage medium - Google Patents

Content processing method, system, device, computing equipment and storage medium

Info

Publication number
CN112487218B
CN112487218B CN202011359273.7A
Authority
CN
China
Prior art keywords
request
processing
content data
content
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011359273.7A
Other languages
Chinese (zh)
Other versions
CN112487218A (en)
Inventor
胥昕昂
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011359273.7A
Publication of CN112487218A
Application granted
Publication of CN112487218B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40: Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43: Querying
    • G06F16/435: Filtering based on additional data, e.g. user or group profiles
    • G06F16/438: Presentation of query results
    • G06F16/48: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/483: Retrieval using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure relates to a content processing method, system, apparatus, computing device, and storage medium in the field of cloud computing technology. The method includes obtaining, by an access layer, a first request including content data and a processing command for the content data. The method also includes sending, by the access layer, a second request to a service layer in response to the source of the first request meeting a security criterion, wherein the second request includes the content data and the processing command but does not include the source of the first request. The method further includes processing, by the service layer, the content data according to the processing command.

Description

Content processing method, system, device, computing equipment and storage medium
Technical Field
The disclosure relates to the technical field of cloud computing, and in particular to a content processing method, system, apparatus, computing device, and storage medium.
Background
Content processing may include word processing, video processing, image processing, topic processing, and the like. As the amount of computation grows, implementing all content processing functions in a single module becomes increasingly unwieldy and inefficient. There is therefore a need for a properly decoupled content processing method and system architecture.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
According to one aspect of the present disclosure, a content processing method is disclosed. The method includes obtaining, by an access layer, a first request including content data and a processing command for the content data. The method also includes sending, by the access layer, a second request to a service layer in response to the source of the first request meeting a security criterion. The second request includes the content data and the processing command, and does not include the source of the first request. The method further includes processing, by the service layer, the content data according to the processing command.
In accordance with another aspect of the present disclosure, a content processing system is disclosed. The content processing system includes an access layer. The access layer is configured to obtain a first request including content data and a processing command for the content data, and to send a second request to a service layer in response to the source of the first request meeting a security criterion. The second request includes the content data and the processing command, and does not include the source of the first request. The content processing system also includes the service layer, which is configured to process the content data according to the processing command.
According to yet another aspect of the disclosure, a computing device is disclosed that may include a processor; and a memory storing a program comprising instructions that when executed by the processor cause the processor to perform the content processing method described above.
According to yet another aspect of the present disclosure, a computer-readable storage medium storing a program is disclosed, the program may include instructions that when executed by a processor of a server cause the server to perform the above-described content processing method.
According to yet another aspect of the present disclosure, a computer program product is disclosed comprising computer instructions which, when executed by a processor of a server, cause the server to perform the above-described content processing method.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 is a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a content processing method according to an embodiment of the present disclosure;
FIG. 3 is an example architectural diagram of a content processing system according to an embodiment of the present disclosure;
FIG. 4 (a) is a schematic diagram of concurrent processing of a service layer according to an embodiment of the present disclosure;
FIG. 4 (b) is an example functional schematic of a service layer according to an embodiment of the present disclosure;
FIG. 5 (a) is a flow chart of a content processing method according to another embodiment of the present disclosure;
FIG. 5 (b) is an example architecture diagram of a content processing system according to another embodiment of the present disclosure;
fig. 6 shows a block diagram of a content processing apparatus according to an embodiment of the present disclosure;
fig. 7 illustrates a block diagram of an exemplary server and client that can be used to implement embodiments of the present disclosure.
Detailed Description
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented, in accordance with an embodiment of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable methods of content processing.
In some embodiments, server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof that are executable by one or more processors. A user operating client devices 101, 102, 103, 104, 105, and/or 106 may in turn utilize one or more client applications to interact with server 120 to utilize the services provided by these components. It should be appreciated that a variety of different system configurations are possible, which may differ from system 100. Accordingly, FIG. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may process the content using client devices 101, 102, 103, 104, 105, and/or 106. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that the present disclosure may support any number of client devices.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computing systems, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computing devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., Chrome OS); or include various mobile operating systems such as Microsoft Windows Mobile OS, iOS, Windows Phone, and Android. Portable handheld devices may include cellular telephones, smart phones, tablet computers, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays and other devices. The gaming system may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), and Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that can support data communications using any of a number of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. For example only, the one or more networks 110 may be a Local Area Network (LAN), an Ethernet-based network, a token ring, a Wide Area Network (WAN), the Internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., Bluetooth, Wi-Fi), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, midrange servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices for the server). In various embodiments, server 120 may run one or more services or software applications that provide the functionality described below.
The computing system in server 120 may run one or more operating systems including any of the operating systems described above as well as any commercially available server operating systems. Server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, etc.
In some implementations, server 120 may include one or more applications to analyze and consolidate data feeds and/or event updates received from users of client devices 101, 102, 103, 104, 105, and 106. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The databases 130 may reside in a variety of locations. For example, a database used by the server 120 may be local to the server 120, or may be remote from the server 120 and communicate with the server 120 via a network-based or dedicated connection. The databases 130 may be of different types. In some embodiments, a database used by the server 120 may be a relational database. One or more of these databases may store, update, and retrieve data in response to commands.
In some embodiments, one or more of databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key value stores, object stores, or conventional stores supported by the file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
The flow of an example content processing method 200 according to an embodiment of the present disclosure is described below in connection with fig. 2.
At step S201, the access layer acquires a first request including content data and a processing command for the content data.
At step S202, the access layer transmits a second request to the service layer in response to the source of the first request satisfying the security criterion. The second request includes the content data and the processing command, and does not include the source of the first request.
At step S203, the service layer processes the content data according to the processing command.
According to the method, the access layer can receive data that has undergone preliminary processing, perform a security check on it, and then pass it to the service layer for computation. The access layer need not pass the source of the data to the service layer, and the service layer need not know that source. Functional decoupling among modules is thereby achieved, and the work required for update iterations is reduced. The security criterion may be based on the security requirements of the service and determines whether the source of the first request (e.g., a preceding module such as a front end or presentation layer in the case of an internal request, or a gateway in the case of an external request) is permitted to access the service layer. By splitting off the security-check function, only a small number of interfaces need to be security-checked, which reduces the computing resources required.
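A minimal sketch of this access-layer flow (all names are hypothetical; the disclosure does not prescribe an implementation): the access layer checks the request source against a security criterion, then builds and forwards a second request that deliberately omits the source.

```python
# Hypothetical sketch of the access-layer behaviour described above.
# The allow-list stands in for the "security criterion"; a real system
# would apply service-specific rules.

ALLOWED_SOURCES = {"presentation_layer", "gateway"}

def access_layer(first_request: dict) -> dict:
    """Validate the source, then build the second request without it."""
    if first_request["source"] not in ALLOWED_SOURCES:
        raise PermissionError("source fails the security criterion")
    # The second request carries only the content data and the
    # processing command; the source is dropped on purpose.
    return {
        "content_data": first_request["content_data"],
        "command": first_request["command"],
    }

def service_layer(second_request: dict) -> str:
    """Process content data according to the command (stubbed out)."""
    return f"{second_request['command']}({second_request['content_data']})"

result = service_layer(access_layer({
    "source": "presentation_layer",
    "content_data": "photo.jpg",
    "command": "crop",
}))
print(result)  # crop(photo.jpg)
```

The point of the sketch is that `service_layer` never sees a `source` field, which is the decoupling the method aims at.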
The above method may be applied in content processing scenarios. The content data may include text, picture, audio, or video content, and may include topics, news, subjects, events, and the like. Processing of the content data may include editing, cropping, version replacement, retouching, rendering, augmentation, and so on, for the various content types. The disclosure is not limited thereto: the content data may be any data capable of carrying content and being processed, and the processing command may be any desired way of processing the content data, presented in a corresponding format. The content data may be processed to obtain the desired data.
For example, content processing may include processing event content to surface hot events, using full-network crawling to identify the timeliness, scarcity, and richness of content related to an event. Content processing may include extracting keywords from body text and titles, generating a title from keywords, and generating or searching for copyrighted images, trending topics, encyclopedia entries, background material, and the like based on text content such as body text, titles, and keywords. Content processing may also include detecting whether a picture contains undesirable features such as low definition, solid color, advertising, or mosaics; detecting whether text contains wrongly written characters and whether paragraphs are properly formatted, and optionally correcting them; and detecting whether a video is blurred and optionally processing it, or searching for or generating similar video.
Further, the above method may be combined with artificial intelligence in an intelligent authoring assistant or aided-authoring product to process content data automatically or according to user preference, without requiring the user to manually select processing commands. For example, content processing functionality may surround the different stages of authoring and use artificial intelligence to help authors produce better articles. Intelligent content processing can implement the following functions. Before authoring, displaying hot events and processing event content can guide the author's topic selection, steer the author toward non-homogeneous articles, and increase article readership. During authoring, rich writing material can be provided: related content generated from the body text, title, keywords, and other input of the author is automatically displayed in the corresponding area of the editor, reducing the number of times the author leaves the page to search for material and keeping authoring efficient. After authoring, the quality of the article can be checked, reminding the author to revise and polish at the editing stage and improving the final publication quality.
According to some embodiments, the content data may represent a picture. The corresponding processing command may be at least one of: obtaining keywords related to the picture, cropping the picture, rotating the picture, performing color processing on the picture, obtaining a picture associated with the picture, obtaining a higher-definition version of the picture, and obtaining a copyrighted picture similar to the picture. These are typical processing commands for obtaining more desirable picture content data, and they can significantly improve user experience. Processing commands are not limited to the above; other commands that satisfy user expectations or artificial-intelligence computation metrics are also possible. For example, a processing command may include replacing a person in the picture, adjusting the atmosphere of the picture, and the like.
According to some embodiments, the content data may represent text. The corresponding processing command may be at least one of: performing error correction on the text, obtaining keywords related to the text content, and obtaining topics related to the text content. Text processing is an important component of content processing. These are typical processing commands for obtaining more desirable text content data, and they can significantly improve user experience. Processing commands are not limited to the above; other commands that satisfy user expectations or artificial-intelligence computation metrics are also possible. For example, processing commands may include expanding text, imitative writing, rewriting, and the like.
According to some embodiments, the content data may represent video. The corresponding processing command may be at least one of: obtaining keywords related to the video, cropping the video, rotating the video, color processing the video, obtaining an associated video of the video, and obtaining a higher definition version of the video. Video tends to carry a large amount of content and requires a large amount of processing resources. The processing command aims at the video content data, so that the user experience can be obviously improved.
According to some embodiments, the content data may represent an event. The corresponding processing command may be at least one of: obtaining keywords related to the event, obtaining the scarcity of the event, obtaining the topicality of the event, obtaining pictures related to the event, obtaining videos related to the event, and obtaining articles related to the event. Events are a distinctive kind of content data; the above processing can intelligently extract an event's key points, popularity, value, and so on, which benefits user experience and is particularly useful when a user needs to author content around an event. For example, the keywords may represent the core features of the event; the scarcity may indicate whether the event already has related articles on the network and their click-through rates; the topicality may indicate whether related authoring would bring higher value; and related pictures or videos reduce the manual searching a user must do when authoring around the event, saving resources and improving efficiency.
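The per-content-type commands enumerated in the preceding paragraphs can be pictured as a lookup from a (content type, command) pair to a handler. A hypothetical sketch, with stub handlers and illustrative command names only:

```python
# Hypothetical registry of (content_type, command) -> handler pairs,
# illustrating the commands enumerated above; every handler is a stub.

from typing import Callable, Dict, Tuple

HANDLERS: Dict[Tuple[str, str], Callable[[str], str]] = {
    ("picture", "crop"):     lambda d: f"cropped:{d}",
    ("picture", "keywords"): lambda d: f"keywords:{d}",
    ("text", "correct"):     lambda d: f"corrected:{d}",
    ("text", "topics"):      lambda d: f"topics:{d}",
    ("video", "hd_version"): lambda d: f"hd:{d}",
    ("event", "scarcity"):   lambda d: f"scarcity:{d}",
}

def process(content_type: str, command: str, data: str) -> str:
    """Dispatch content data to the handler for its type and command."""
    try:
        return HANDLERS[(content_type, command)](data)
    except KeyError:
        raise ValueError(f"unsupported command {command!r} for {content_type!r}")

print(process("picture", "crop", "img01"))  # cropped:img01
```

The open-ended "not limited to the above" language maps naturally to adding entries to such a table without touching the dispatch logic.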
If the code for the content processing functions is coupled into a single module, and in particular if traffic-source filtering, service logic and verification, database connections, network calls, and all other functions are completed in the same module, front-end requests reach that module directly and the functions cannot be well separated. Coupling code into one module also slows development iteration: a small change may affect the functions of other modules, so the regression cost of each release test is high. In addition, if a simple retry-polling policy is adopted for long-running interfaces, the strategy wastes time and back-end machine resources, while end-to-end latency must still be reduced to ensure user experience. Based on the above, the present disclosure provides a service architecture that combines product functions with service dependencies, improves code development efficiency, optimizes the user's latency experience, and provides appropriate security verification.
Fig. 3 is an example architecture of a content processing system 300 according to an embodiment of the disclosure.
Content processing system 300 may include an access layer 301 and a service layer 302:
the access layer 301 is configured to obtain a first request. The first request includes content data and a processing command for the content data. The access layer 301 is further configured to send a second request to the service layer 302 in response to the source of the first request satisfying the security criterion. The second request includes content data and a process command, and the second request does not include a source of the first request.
The service layer 302 is configured to process the content data according to the processing command.
Referring again to fig. 3, the access layer 301 may obtain the first request from the presentation layer 311 or from the gateway 321. A request from the presentation layer may represent a request from a local source, an internal network, or the same product line. A request from gateway 321 may represent a request from an external source, such as an external network or another product. In some embodiments, content processing system 300 may include presentation layer 311. The presentation layer is configured to parse a front-end request to obtain the content data and the processing command, and to send the first request to the access layer. The access layer thus receives a pre-processed request, which further decouples the modules and keeps them lightweight. In other examples, there may be no presentation layer, or the presentation-layer functionality described here may be incorporated into the front end; any other way for the access layer to obtain content data and processing commands is applicable here.
For long-running feature computations, the connection between the access layer and the service layer may already have been closed due to a timeout. In that case a producer-consumer model may be adopted: the service layer publishes the computation result (for example, of the corresponding picture) to an asynchronous message queue, the message queue notifies the presentation layer, and the presentation layer, upon receiving the message, puts the result into its cache. When the front end next retries or polls, the presentation layer can check the cached result and respond directly. No additional request to the service layer is needed, reducing network latency and cost.
In some embodiments, the service layer 302 is further configured to create a message queue in response to a timeout of the second request, after processing the content data according to the processing command to generate the result data. Service layer 302 is also configured to send the result data to the presentation layer via the message queue. Thus, for example, under heavy computation or a busy network, if the request times out, a message queue is established and the result data is delivered directly to the presentation layer. Because the service layer need not know the source of the data, it follows simple logic: publish to the message queue on timeout, return directly otherwise. The message queue is established by the service layer after the connection is closed. Since traffic from outside is subject to a per-request upper limit, timeouts do not arise for it and no message queue is needed. Polling of long-running interfaces is therefore unnecessary, saving computing resources and improving stability.
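The timeout path can be sketched with a standard producer-consumer queue (a simplified, hypothetical model; the disclosure's message queue would in practice be an asynchronous messaging service, and all names here are illustrative):

```python
# Hypothetical producer-consumer sketch of the timeout path: on a timed-out
# second request the service layer publishes the result to a message queue;
# the presentation layer consumes it into a cache that later polls can hit.

import queue
import threading

message_queue: "queue.Queue[tuple]" = queue.Queue()
presentation_cache: dict = {}

def service_layer_finish(request_id: str, result: str, timed_out: bool):
    """Return the result directly, or publish it if the connection timed out."""
    if timed_out:
        message_queue.put((request_id, result))  # producer side
        return None
    return result

def presentation_consumer() -> None:
    """Drain the queue into the presentation-layer cache (consumer side)."""
    while True:
        request_id, result = message_queue.get()
        presentation_cache[request_id] = result
        message_queue.task_done()

threading.Thread(target=presentation_consumer, daemon=True).start()

service_layer_finish("req-42", "picture-features", timed_out=True)
message_queue.join()  # wait until the consumer has cached the result
print(presentation_cache["req-42"])  # picture-features
```

Note the producer's logic matches the description above: it needs no knowledge of the request source, only of whether the connection timed out.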
In some embodiments, the method described in connection with fig. 2 may be implemented on the access layer 301 side of fig. 3.
Some alternative embodiments of the content processing method described in fig. 2 will be described below.
According to some embodiments, the first request further includes a service interface identifier, and the access layer sending the second request to the service layer includes the access layer sending the second request to a service interface in the service layer. The service interface identifier indicates the service interface in the service layer that corresponds to the content data and the processing command, for example calling an interface of the picture processing module or an interface of the video processing module. The service interface is obtained from the first request, and the request parameters are forwarded directly to the corresponding interface; no further computation on the request is needed, only forwarding according to routing rules. This achieves a proper distribution of functions, reduces the amount of computation, and further decouples the modules.
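Routing by service interface identifier can be sketched as a pure forwarding table (hypothetical identifiers and module names; the disclosure does not fix a naming scheme):

```python
# Hypothetical routing sketch: the access layer performs no computation on
# the request, it only forwards the parameters to the interface named by
# the service interface identifier in the first request.

SERVICE_INTERFACES = {
    "picture.process": lambda params: f"picture module got {params}",
    "video.process":   lambda params: f"video module got {params}",
}

def route(first_request: dict) -> str:
    """Forward request parameters unchanged to the identified interface."""
    interface = SERVICE_INTERFACES[first_request["interface_id"]]
    return interface(first_request["params"])

print(route({"interface_id": "video.process", "params": {"cmd": "rotate"}}))
# video module got {'cmd': 'rotate'}
```

Because `route` merely looks up and forwards, the access layer stays thin, which is the stated goal of carrying the identifier in the first request.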
The extraction of the service interface identifier may be implemented by the presentation layer. According to the architecture of some embodiments of the present disclosure, the presentation layer is the first module to interact directly with the front end. That is, in addition to parsing the front-end request to obtain its data and processing commands, the presentation layer may also perform preliminary processing on the request parameters from the front end to select the service-layer capability to be invoked, so that the access layer no longer needs to process the request parameters and is only responsible for delivering the data passed in by the presentation layer to the corresponding service layer, based on the capability already selected by the presentation layer.
According to some embodiments, the content data satisfies at least one of the following: the content data is in a format suitable for the processing command, the content data is from a user having processing rights, or the content data is from a logged-in user. This means that the content data has undergone a preliminary data check before being processed by the method 200. Thereby, decoupling of the modules and a lightweight implementation of each functional module can be further achieved.
The above-described process may be implemented by the presentation layer 311 as described in fig. 3. For example, the presentation layer may perform a preliminary check on the content data: determine that the content data is in a format suitable for the processing command (e.g., for a picture cropping command, that the content data is in .JPG or .BMP format), determine that the content data is from a logged-in user (e.g., determine the user login status from the front-end request), or determine that the content data is from a user with processing rights, etc. The presentation layer may also perform validity checking of request parameters, formatting of front-end response fields, data log instrumentation ("dotting"), and the like. Data log dotting refers to recording, for example, the click rate of a certain function in a certain interface or statistics of a certain state. The data log instrumentation may be added at designated locations in the presentation layer. For checks that are strongly coupled to the service, such as user login status and user rights, the data of the corresponding user can be obtained and checked at the presentation layer. Filtering of request parameters — whether they are null, whether they are the required array types, etc. — can also be done at the presentation layer. Of course, the present disclosure is not limited to scenarios in which data is received from the presentation layer 311, nor does it require that a separate presentation-layer module or similar functionality be included in system 300. For example, the above processing may instead be implemented by the front end. Alternatively, in the scenario of receiving the first request from, for example, gateway 321, the interface may be defined in advance, so that a request from the external source must follow certain rules, or the data from the external source must be pre-processed, before reaching the access layer.
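The preliminary checks described above might look like the following sketch; the format whitelist, field names, and return convention are assumptions made for illustration.

```python
import os

# Illustrative per-command format whitelist; the mapping is an assumption.
ALLOWED_FORMATS = {"crop": {".jpg", ".bmp"}}


def precheck(filename, command, user):
    """Presentation-layer sketch of the preliminary checks described above:
    format suitability, login status, and processing rights."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_FORMATS.get(command, set()):
        return False, "unsupported format"
    if not user.get("logged_in"):
        return False, "user not logged in"
    if not user.get("has_rights"):
        return False, "no processing rights"
    return True, "ok"
```

Because these checks run before the first request is sent, the access layer and service layer never see malformed or unauthorized data.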
According to some embodiments, the content data is an intranet-stored version of the content to be processed, or a link to the location in the intranet where that content is stored. Such a locally stored version can improve computational efficiency. This processing may also be implemented, for example, by the presentation layer 311 as described in fig. 3, although, as noted above, the present disclosure is not limited thereto.
In addition, when interacting with front-end requests, there is some personalized field processing that facilitates front-end display and separation, such as the resource types used by the front end to distinguish templates, or the splicing of documents. Since these processes do not involve the field output of the core functions, they may also be done by the presentation layer, or by the presentation layer in conjunction with the front end.
According to some embodiments, before the second request is sent to the service layer, the access layer checks whether a corresponding port of the service layer is available. Sending the second request then includes sending the second request to the service layer in response to the status of the corresponding port being available. In other words, the availability status is checked before the services of the service layer are invoked. Calls to timed-out or faulty service-layer machines can therefore be avoided, improving computational efficiency and stability.
The above procedure may be implemented by, for example, the access layer 301 of fig. 3. The access layer may check the availability status of the machines of the service layer. For example, if connection failures occur more than a certain number of times, the service on that machine is deemed unavailable and subsequent request traffic is distributed to other machines.
The access layer is also able to distribute traffic among the different ports and machines of the service layer. For example, the access layer may distribute traffic among different machines or ports according to a random policy, to avoid excessive traffic on any one machine. According to some embodiments, sending the second request includes sending the second request to the service layer based on a flow control indicator. The flow control indicator may be, for example, a per-machine or per-port request volume or computation volume, or a predetermined allocation policy. The access layer can thereby realize flow control and load balancing. Therefore, on one hand, decoupling of the modules can be achieved; on the other hand, the running stability of the system can be improved, avoiding problems such as traffic overload and the faults it causes.
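The availability check and random distribution policy above can be sketched together; the class name, failure threshold, and port labels are assumptions for illustration only.

```python
import random

FAILURE_LIMIT = 3  # illustrative threshold for marking a port unavailable


class AccessLayerBalancer:
    """Sketch of the availability check plus random traffic distribution
    described above; names are assumptions, not the disclosed implementation."""

    def __init__(self, ports):
        self.failures = {p: 0 for p in ports}

    def record_failure(self, port):
        # Repeated connection failures mark a service-layer port as down.
        self.failures[port] += 1

    def pick_port(self):
        available = [p for p, n in self.failures.items() if n < FAILURE_LIMIT]
        if not available:
            raise RuntimeError("no available service-layer port")
        return random.choice(available)  # random policy avoids hot spots
```

A port that fails repeatedly drops out of the candidate pool, so subsequent second requests are routed only to healthy machines.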
The access layer sits above the service layer and can perform load balancing and route forwarding for all request traffic. In addition to the presentation layer, the access layer can be connected to a gateway. The gateway is used to accept calls from external sources: when the present functionality is invoked from an external source, such as another product line or team, that portion of the traffic enters through the gateway. The gateway can perform permission checks on external traffic sources, distinguish between sources, and apply rate limiting. Because external traffic may carry various network security problems, the gateway needs to control access rights; it must also prevent excessive traffic from overwhelming the service, so sources need to be checked and rate-limited. The access layer is capable of receiving requests from both the presentation layer and the gateway. After receiving a request, the access layer may route it to the same service-layer interface according to the same routing and forwarding rules.
According to some embodiments, the method further comprises, after processing the content data according to the processing command to generate result data, sending, by the service layer to the access layer, the result data as a response to the second request in response to the second request not being timed out; and returning, by the access layer, the result data to the source of the first request. Therefore, the result data can be received directly from the service layer; the service layer still does not need to know the source of the request and only returns the response, reducing its burden. Based on the data-source information stored in the access layer — for example, whether the request originally came from the presentation layer or the gateway — the generated result data is returned, realizing independent modules and a convenient data-return mechanism.
According to some embodiments, the first request is received from the presentation layer, and the method further comprises creating a message queue between the service layer and the presentation layer in response to a timeout of the second request after processing the content data according to the processing command to generate the result data. The result data is then pushed to the message queue by the service layer. This design follows the idea of setting up a message queue if the request times out. Because a request from an external source has a single access upper limit, no timeout occurs for it; when a timeout does happen, the source must be the presentation layer, so the message queue can be established directly between the service layer and the presentation layer without determining the source. Thus, polling of long-running interfaces is no longer required, saving computing resources in each module and increasing computing efficiency.
The service layer is the core of the content computation function. The service layer may connect to a relational database, a full-text search engine, a cache, and so on, or call other third-party services over a network to obtain more data. According to some embodiments, the service layer generates the result data by invoking at least one of: a relational database, a full-text search engine, a cache, or a third-party service. Thus, the required content can be processed accurately. The service layer may provide different responses according to different content types. For example, for picture content, features such as sharpness and information entropy of the picture can be computed and the result for each picture marked according to a threshold; pictures can be cropped to the expected size, then compressed and saved to generate a new link; and a copyright-picture search can query the database for similar pictures based on the picture's labels and provide the original picture to avoid infringement risk. For text content, wrongly written characters in long and short texts can be identified and corrected via prompts, keywords of the text can be extracted to serve as topics or titles, and related content can be searched in a database using those keywords. For video content, video features can be computed and the video checked for blur, so that a video link with higher definition can be generated. For event content, the scarcity of the corresponding event among articles across the whole network can be retrieved, and related articles, dynamics, videos, etc. of the event can be acquired. It will be appreciated that the functionality of the service layer is not limited thereto and can respond to any content data and corresponding processing commands.
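As a minimal sketch of how the service layer might provide a different response per content type, consider the following dispatch table; the handler names and placeholder outputs are assumptions standing in for the real picture and text computations.

```python
# Illustrative per-content-type handlers; the bodies are placeholders
# for the real picture/text/video computations described in the text.
def process_picture(data):
    return {"cropped": data, "sharpness": "marked"}


def process_text(data):
    return {"corrected": data, "keywords": ["topic"]}


HANDLERS = {"picture": process_picture, "text": process_text}


def service_process(content_type, data):
    """Service-layer sketch: select a response according to the content type."""
    return HANDLERS[content_type](data)
```

Adding a new content type (e.g., video or event) would only require registering a new handler, keeping the dispatch logic unchanged.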
According to some embodiments, the service layer is capable of batch processing a plurality of content data and a plurality of processing commands. The concurrent processing capability of the service layer enables large-batch processing of data and increases computing efficiency. The concurrent processing of the service layer is described in more detail below in conjunction with fig. 4(a). For example, the logic of concurrent processing in the service layer may be described as first processing the features in parallel and then processing the contents in parallel. This two-level parallel processing can further increase computational efficiency. According to some embodiments, the batch processing includes: establishing a plurality of coroutines for a plurality of processing features respectively corresponding to the plurality of processing commands, each coroutine corresponding to one of the processing features; processing the plurality of coroutines in parallel; and, within each coroutine, processing the plurality of content data in parallel for the corresponding processing feature.
Here, a processing feature refers to a processing-related attribute. For example, for picture processing, features may include image sharpness, the presence of a two-dimensional code, picture size, and so on. For video, features may include sharpness, length, etc. For text, features may concern content attributes, wrongly written characters, fluency, related topics, and the like. Computing features first and then processing the content implements batch processing more efficiently, in a two-level concurrent manner.
The concurrent processing manner of the service layer is described below with reference to fig. 4 (a) by taking picture calculation as an example. It is to be understood that the concurrent processing manner of the service layer is not limited to the computation of the picture content, and that other types and formats of content processing and computation can be processed according to similar ideas and logic.
The interfaces in the service layer support multi-feature, multi-content batch processing. In the batch processing, a set of features (feature 1, feature 2, ..., feature n) and a set of contents (content 1, content 2, ..., content m) are specified in the input parameters. For example, features 1 to n may correspond to cropping, sharpness processing, acquisition of copyright-similar pictures, extraction of keywords, and so on. Contents 1 to m may be m different pictures, each of which needs to undergo the processing of features 1 to n. Of course, the content is not limited to pictures. Fig. 4(b) shows an example functional schematic of a service layer according to an embodiment of the present disclosure.
In the concurrent processing of multiple features, the coroutines for the individual features are started in turn, and the time offsets between their start points form a loop, referred to herein as the "first loop C1". Once this loop has started all the coroutines, multiple features can be processed simultaneously by multiple processing units. For example, a plurality of processing units simultaneously handle features such as cropping, sharpness processing, acquisition of copyright-similar pictures, and extraction of keywords. Specifically, in the first loop C1, a coroutine is opened for each feature, so that all features can be regarded as starting concurrently at the same point in time. Next, the plurality of contents are processed concurrently: the time offsets at which the individual contents are started form another loop, referred to herein as the "second loop C2". After the second loop has run, the corresponding processing units process the plurality of contents concurrently within each feature. Similarly, in the second loop, within each feature a coroutine is opened for each content in the batch. Thus, the contents within each feature may also be regarded as concurrent. For example, the cropping operation is performed in parallel on pictures 1 to m by a first processing unit, the operation of acquiring copyright-similar pictures is performed in parallel on pictures 1 to m by a second processing unit, and so on, and these processing units themselves also run concurrently.
If the computation fails for a certain content of a certain feature, the corresponding coroutine is closed on its own and its result discarded, so that the processing of the other coroutines is not affected. After the results of all contents of all features have been computed, the results are stored in the corresponding caches of the service layer. Optionally, the results may be formatted according to a specification before being stored in the corresponding cache. The cached computation results can then be reused directly the next time a request for the same content is received. Thereafter, if the access-layer request has not yet closed the connection, the computation result is returned directly over that connection. If the access-layer request has been closed due to a timeout, the computation result may be returned by means of the message queue described above.
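The two-level coroutine scheme and failure isolation described above can be sketched with Python's asyncio; `compute`, `batch`, the failure condition, and the cache keyed by (feature, content) are illustrative stand-ins, not the disclosed implementation.

```python
import asyncio


async def compute(feature, content):
    """Stand-in for one feature/content computation."""
    if content == "bad":
        raise ValueError("computation failed")
    return f"{feature}:{content}"


async def batch(features, contents):
    cache = {}  # stands in for the service-layer result cache

    async def per_content(feature, content):
        try:
            cache[(feature, content)] = await compute(feature, content)
        except ValueError:
            pass  # close this coroutine alone and discard its result

    async def per_feature(feature):
        # Second loop C2: one coroutine per content within the feature.
        await asyncio.gather(*(per_content(feature, c) for c in contents))

    # First loop C1: one coroutine per feature.
    await asyncio.gather(*(per_feature(f) for f in features))
    return cache
```

A failing (feature, content) pair simply leaves no cache entry, while every other combination completes unaffected.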
The flow of an example content processing method 500 according to another embodiment of the present disclosure is described below in conjunction with fig. 5 (a).
At step S501, a front-end request is acquired by the front-end and delivered to the presentation layer. For example, the content to be processed may be sent with the front-end request, or the content to be processed or a link thereto may be included within the front-end request, and the disclosure is not limited to messaging approaches herein.
At step S502, the presentation layer parses the request from the front end to obtain content data and a processing command, and sends a first request to the access layer. The data passed to the access layer by the first request may include content data (e.g., a local link to the saved content data file), processing commands, etc. (e.g., whether to crop or apply sharpness processing). Parsing the request from the front end by the presentation layer may also include preliminary processing of the content data. For example, the preliminary processing may include saving the content to be processed (e.g., a picture file) to an intranet and reading the intranet link of the saved file as the content data. Processing of the content data by the presentation layer may include a fine-grained normative (or validity) check of the request parameters, including checking whether the incoming content is in a legal format (e.g., .jpg or .gif format for pictures), checking the request parameters, and checking whether the user is a legitimate user (whether the user is logged in, user rights, etc.). The data communicated to the access layer by the first request may include a data source. The data source may indicate to the access layer that the request is from the "presentation layer" rather than an external source or external gateway. The data passed to the access layer by the first request may optionally also include a service interface identification characterizing the service interface in the service layer that needs to be invoked. That is, the presentation layer also performs preliminary processing on the request parameters from the front end to select the service-layer capability to be invoked, so that the access layer no longer needs to process the request parameters and is only responsible for delivering the data passed in by the presentation layer to the corresponding service layer, based on the capability the presentation layer has selected.
At step S503, the access layer receives the first request and sends a second request to the service layer when a forwarding indicator is met. The second request may include the content data and a processing command for processing by the service layer. For example, the access layer may forward the content data and processing commands in the request to the corresponding interface of the service layer based on a predetermined routing rule, or based on a service interface identification carried in the request from the presentation layer. There is no need to forward the source information, etc., in the request to the service layer. The forwarding indicator in step S503 may include security check and flow control indicators. The access layer may determine, based on the data source (i.e., the presentation layer, or an external-source gateway, etc.), whether that source has access to the service layer. The access layer may distribute and balance traffic from different sources across different service-layer ports. The access layer may deliver data including the content and processing parameters to the corresponding interfaces of the service layer in response to the security check and flow control indicators being met.
At step S504, the service layer receives the second request, processes the content data based on the processing command, and generates result data. The service layer may process the content data by calling various databases or external functions. The service layer may have a variety of different processing capabilities. For example, as for the picture content data, a cropped picture or a higher definition picture may be generated as the result data; for text content data, text, keywords, related topics, and the like after correction may be generated as result data. It is to be understood that the present disclosure is not so limited.
At step S505, it is determined whether the request times out.
If the request times out, the flow goes to S506, and a message queue is established between the service layer and the presentation layer. The service layer pushes the result data to the message queue. At step S507, the presentation layer reads the result from the message queue and saves it to the cache. Next, at step S508, the presentation layer sends the result in the cache back to the front end in response to receiving another poll from the front end.
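The queue-and-poll path in steps S506 to S508 might be sketched as follows; the single shared queue, the request identifier, and the function names are assumptions made to keep the example self-contained.

```python
import queue

# Hypothetical message queue and presentation-layer cache for one request.
mq = queue.Queue()
cache = {}


def presentation_drain(request_id):
    """Presentation layer (S507): move a queued result, if any, into the cache."""
    try:
        cache[request_id] = mq.get_nowait()
    except queue.Empty:
        pass


def front_end_poll(request_id):
    """Front end polls again (S508); returns None until the result arrives."""
    presentation_drain(request_id)
    return cache.get(request_id)
```

The front end keeps polling the presentation layer's cache rather than the long-running service interface, which is the resource saving the embodiments describe.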
Thereafter, the flow goes to step S509, the data is processed and presented by the front end, and the content processing procedure ends.
If the request does not time out, the flow goes to S510, where the service layer transfers the result data back to the access layer directly within the request, e.g., as a response to the second request. The access layer then returns the result data in combination with the data-source information. For example, the access layer, using the data-source information stored in itself, determines whether the request and corresponding data content originated from the presentation layer or the gateway, and transmits the data back to the corresponding presentation layer or gateway. In the case where data is returned to the presentation layer, the presentation layer returns the data to the front end. The presentation layer here no longer needs to process the data, or it may perform some processing, such as splicing, presentation, or personalization, that does not involve the core functions. If the data is passed back to the gateway, the result data is forwarded by the gateway to the corresponding network device in any manner readily understood by those skilled in the art, and the data flow in the gateway case is omitted here. Thereafter, the flow may proceed to step S509, where the front end processes and presents the data, and the content processing procedure ends.
By decoupling the different functional areas into modules such as the presentation layer and the service layer, each layer is responsible for its own independent functional points, mutual entanglement within a single module is avoided, and each layer can be upgraded independently. The overall flow allows the source and forwarding of traffic to be controlled in a more standardized way, and the abstract interfaces and capabilities of the service layer can be provided externally as a service center. Iteration efficiency is improved, research and development costs are reduced, and interface time consumption is optimized. Fig. 5(b) shows an example architecture diagram of a content processing system according to another embodiment of the present disclosure.
A content processing apparatus 600 according to another aspect of the present disclosure is described below in connection with fig. 6. The content processing apparatus 600 may include an access layer 601 and a service layer 602. The access layer 601 may be configured to obtain a first request comprising content data and a processing command for the content data, and to send a second request to the service layer 602 in response to the source of the first request satisfying a security criterion. The service layer 602 may be configured to process the content data according to the processing command. The second request includes the content data and the processing command, and the second request does not include the source of the first request.
According to another aspect of the disclosure, there is also provided a computing device, which may include: a processor; and a memory storing a program comprising instructions that when executed by the processor cause the processor to perform the content processing method described above.
According to yet another aspect of the present disclosure, there is also provided a computer-readable storage medium storing a program, which may include instructions that when executed by a processor of a server, cause the server to perform the above-described content processing method.
With reference to fig. 7, a block diagram of a computing device 700, which may be a server or client of the present disclosure, will now be described; it is an example of a hardware device that may be applied to aspects of the present disclosure.
Computing device 700 may include elements that are connected to bus 702 (possibly via one or more interfaces) or that communicate with bus 702. For example, computing device 700 may include a bus 702, one or more processors 704, one or more input devices 706, and one or more output devices 708. The one or more processors 704 may be any type of processor and may include, but are not limited to, one or more general-purpose processors and/or one or more special-purpose processors (e.g., special processing chips). The processor 704 may process instructions executed within the computing device 700, including instructions stored in or on memory to display graphical information for a GUI on an external input/output apparatus (such as a display device coupled to an interface). In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple computing devices may be connected, with each device providing part of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 704 is illustrated in fig. 7.
Input device 706 may be any type of device capable of inputting information to computing device 700. The input device 706 may receive entered numeric or character information and generate key signal inputs related to user settings and/or functional control of the content processing computing device and may include, but is not limited to, a mouse, keyboard, touch screen, trackpad, trackball, joystick, microphone, and/or remote control. Output device 708 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer.
Computing device 700 may also include, or be connected to, a non-transitory storage device 710, which may be any storage device that is non-transitory and can enable data storage, and may include, but is not limited to, a magnetic disk drive, an optical storage device, a solid-state memory, a floppy disk, a flexible disk, a hard disk, a magnetic tape or any other magnetic medium, an optical disk or any other optical medium, a ROM (read-only memory), a RAM (random-access memory), a cache memory and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions, and/or code. The non-transitory storage device 710 may be detachable from the interface. The non-transitory storage device 710 may hold data/programs (including instructions)/code/modules (e.g., the access layer 601 and service layer 602 shown in fig. 6) for implementing the methods and steps described above.
Computing device 700 may also include a communication device 712. The communication device 712 may be any type of device or system that enables communication with external devices and/or with a network, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset, such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication device, and/or the like.
Computing device 700 may also include a working memory 714, which may be any type of working memory that may store programs (including instructions) and/or data useful for the operation of processor 704, and may include, but is not limited to, random access memory and/or read-only memory devices.
Software elements (programs) may reside in the working memory 714 including, but not limited to, an operating system 716, one or more application programs 718, drivers, and/or other data and code. Instructions for performing the above-described methods and steps may be included in one or more applications 718, and the above-described methods may be implemented by the instructions of one or more applications 718 being read and executed by the processor 704. Executable code or source code for instructions of software elements (programs) may also be downloaded from a remote location.
It should also be understood that various modifications may be made according to specific requirements. For example, custom hardware may also be used, and/or particular elements may be implemented in hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. For example, some or all of the disclosed methods and apparatus may be implemented by programming hardware (e.g., programmable logic circuits including Field Programmable Gate Arrays (FPGAs) and/or Programmable Logic Arrays (PLAs)) in an assembly language or a hardware programming language such as Verilog, VHDL, or C++, using logic and algorithms according to the present disclosure.
It should also be appreciated that the foregoing method may be implemented by a server-client mode. For example, a client may receive data entered by a user and send the data to a server. The client may also receive data input by the user, perform a part of the foregoing processes, and send the processed data to the server. The server may receive data from the client and perform the aforementioned method or another part of the aforementioned method and return the execution result to the client. The client may receive the result of the execution of the method from the server and may present it to the user, for example, via an output device. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computing devices and having a client-server relationship to each other. The server may be a server of a distributed system or a server that incorporates a blockchain. The server can also be a cloud server, or an intelligent cloud computing server or an intelligent cloud host with artificial intelligence technology.
It should also be appreciated that the components of computing device 700 may be distributed over a network. For example, some processes may be performed using one processor while other processes may be performed by another processor remote from the one processor. Other components of computing device 700 may be similarly distributed. As such, computing device 700 may be interpreted as a distributed computing system that performs processing at multiple locations.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely illustrative embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements. Furthermore, the steps may be performed in an order different from that described in the present disclosure. Further, the various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (17)

1. A content processing method, comprising:
obtaining, by an access layer, a first request comprising content data and a processing command for the content data, wherein the first request is received from a presentation layer;
transmitting, by the access layer, a second request to a service layer in response to a source of the first request meeting a security criterion, wherein the second request includes the content data and the processing command, and the second request does not include the source of the first request; and
processing the content data by the service layer according to the processing command,
wherein the method further comprises:
after processing the content data according to the processing command to generate result data, sending, by the service layer to the access layer, the result data as a response to the second request in response to the second request not having timed out; and
returning, by the access layer, the result data to the source of the first request,
and wherein the method further comprises:
after processing the content data according to the processing command to generate result data, creating a message queue between the service layer and the presentation layer in response to the second request having timed out; and
pushing, by the service layer, the result data to the message queue.
2. The method of claim 1, wherein the first request further includes a service interface identification indicating a service interface in the service layer corresponding to the content data and the processing command, and
wherein sending, by the access layer, the second request to the service layer includes sending, by the access layer, the second request to the service interface in the service layer.
3. The method of claim 1, wherein, prior to sending the second request to the service layer, the access layer checks whether a corresponding port of the service layer is available, and
the second request is sent to the service layer in response to the state of the corresponding port of the service layer being available.
4. The method of claim 1, wherein sending the second request comprises sending the second request to the service layer based on a flow control indicator.
5. The method of claim 1, wherein the content data satisfies at least one of:
the content data is in a format suitable for the processing command,
the content data is from a logged-in user, or
the content data is from a user having processing rights.
6. The method of claim 1, wherein the service layer generates the result data by invoking at least one of: relational databases, full text search engines, caches, third party services.
7. The method of claim 1, wherein the service layer is capable of batch processing a plurality of content data and a plurality of processing commands.
8. The method of claim 7, wherein the batch processing comprises:
establishing a plurality of coroutines respectively for a plurality of processing features corresponding to the plurality of processing commands, each coroutine of the plurality of coroutines corresponding to a respective processing feature of the plurality of processing features;
processing the plurality of coroutines in parallel; and
processing the plurality of content data in parallel for the corresponding processing feature in each of the plurality of coroutines.
9. The method of any of claims 1-8, wherein the content data is an intranet-stored version of content or a link to a location in an intranet where the content to be processed is stored.
10. The method of any of claims 1-8, wherein the content data represents a picture, and the processing command is at least one of: obtaining keywords related to the picture, cropping the picture, rotating the picture, performing color processing on the picture, obtaining an associated picture of the picture, obtaining a higher definition version of the picture, and obtaining a copyrighted picture similar to the picture.
11. The method of any of claims 1-8, wherein the content data represents text and the processing command is at least one of: performing error correction processing on the text, acquiring keywords related to the text content, and acquiring topics related to the text content.
12. The method of any of claims 1-8, wherein the content data represents video and the processing command is at least one of: obtaining keywords related to the video, cropping the video, rotating the video, color processing the video, obtaining an associated video of the video, and obtaining a higher definition version of the video.
13. The method of any of claims 1-8, wherein the content data represents an event, and the processing command is at least one of: acquiring keywords related to the event, acquiring scarcity of the event, acquiring topic of the event, acquiring pictures related to the event, acquiring videos related to the event, and acquiring articles related to the event.
14. A content processing system, comprising:
an access layer for:
obtaining a first request comprising content data and a processing command for the content data, wherein the first request is received from a presentation layer; and
in response to the source of the first request satisfying a security criterion, sending a second request to a service layer, wherein the second request includes the content data and the processing command, and the second request does not include the source of the first request;
a service layer for processing the content data according to the processing command;
a unit for performing the following operations:
after processing the content data according to the processing command to generate result data, sending, by the service layer to the access layer, the result data as a response to the second request in response to the second request not having timed out; and
returning, by the access layer, the result data to the source of the first request; and
a unit for performing the following operations:
after processing the content data according to the processing command to generate result data, creating a message queue between the service layer and the presentation layer in response to the second request having timed out; and
pushing, by the service layer, the result data to the message queue.
15. The content processing system of claim 14, further comprising a presentation layer to:
parsing a front-end request to obtain the content data and the processing command; and
sending the first request to the access layer.
16. A computing device, comprising:
a processor; and
a memory storing a program comprising instructions that when executed by the processor cause the processor to perform the method of any one of claims 1 to 13.
17. A computer readable storage medium storing a program comprising instructions which, when executed by a processor of a server, cause the server to perform the method of any one of claims 1 to 13.
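The batch processing recited in claim 8 — one coroutine per processing feature, the coroutines run in parallel, and each coroutine processes all of the content data for its feature — can be sketched with Python's `asyncio` as follows. This is an assumption-laden illustration: `asyncio` coroutines run concurrently rather than in true parallel, and the two sample processing features (`to_upper`, `word_count`) stand in for whatever processing commands a real service layer would support.

```python
import asyncio

# Sketch of claim 8: one coroutine per processing feature, coroutines run
# concurrently, and each coroutine processes every content item. The
# processing features themselves are illustrative stand-ins.

async def apply_feature(feature, items):
    """One coroutine per processing feature; the items are processed
    concurrently within it (claim 8, step 3)."""
    return await asyncio.gather(*(feature(item) for item in items))

async def batch_process(features, items):
    """Establish one coroutine per feature (step 1) and run them
    concurrently (step 2)."""
    coros = [apply_feature(f, items) for f in features]
    return await asyncio.gather(*coros)

async def to_upper(item):
    return item.upper()

async def word_count(item):
    return len(item.split())
```

For example, `asyncio.run(batch_process([to_upper, word_count], ["a b", "c"]))` yields one result list per feature, each covering every content item.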
CN202011359273.7A 2020-11-27 2020-11-27 Content processing method, system, device, computing equipment and storage medium Active CN112487218B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011359273.7A CN112487218B (en) 2020-11-27 2020-11-27 Content processing method, system, device, computing equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011359273.7A CN112487218B (en) 2020-11-27 2020-11-27 Content processing method, system, device, computing equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112487218A CN112487218A (en) 2021-03-12
CN112487218B true CN112487218B (en) 2023-07-14

Family

ID=74936063

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011359273.7A Active CN112487218B (en) 2020-11-27 2020-11-27 Content processing method, system, device, computing equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112487218B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792267B (en) * 2021-08-09 2023-03-14 中国人民银行数字货币研究所 Method and device for checking digital copyright of card surface picture of payment mechanism

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6336135B1 (en) * 1996-05-24 2002-01-01 International Business Machines Corporation Gateway for converting synchronous client/server protocols into asynchronous messaging protocols and storing session state information at the client
CN105763426A (en) * 2016-04-12 2016-07-13 北京理工大学 Multiprotocol instant messaging system-based Internet of Things business processing system
CN106664514A (en) * 2014-07-18 2017-05-10 康维达无线有限责任公司 Enhanced operations between service layer and management layer in an m2m system by allowing the execution of a plurality of commands on a plurality of devices
CN110311974A (en) * 2019-06-28 2019-10-08 东北大学 A kind of cloud storage service method based on asynchronous message
CN110609506A (en) * 2019-09-30 2019-12-24 重庆元韩汽车技术设计研究院有限公司 Signal conversion system and method for remote control
CN111143087A (en) * 2019-12-18 2020-05-12 中国平安财产保险股份有限公司 Interface calling method and device, storage medium and server

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10303678B2 (en) * 2016-06-29 2019-05-28 International Business Machines Corporation Application resiliency management using a database driver

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6336135B1 (en) * 1996-05-24 2002-01-01 International Business Machines Corporation Gateway for converting synchronous client/server protocols into asynchronous messaging protocols and storing session state information at the client
CN106664514A (en) * 2014-07-18 2017-05-10 康维达无线有限责任公司 Enhanced operations between service layer and management layer in an m2m system by allowing the execution of a plurality of commands on a plurality of devices
CN105763426A (en) * 2016-04-12 2016-07-13 北京理工大学 Multiprotocol instant messaging system-based Internet of Things business processing system
CN110311974A (en) * 2019-06-28 2019-10-08 东北大学 A kind of cloud storage service method based on asynchronous message
CN110609506A (en) * 2019-09-30 2019-12-24 重庆元韩汽车技术设计研究院有限公司 Signal conversion system and method for remote control
CN111143087A (en) * 2019-12-18 2020-05-12 中国平安财产保险股份有限公司 Interface calling method and device, storage medium and server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Implementation of an Agent-based ERP II network data transmission mode; 罗剑 (Luo Jian); 计算机应用与软件 (Computer Applications and Software) (08); full text *

Also Published As

Publication number Publication date
CN112487218A (en) 2021-03-12

Similar Documents

Publication Publication Date Title
US11595477B2 (en) Cloud storage methods and systems
US11907642B2 (en) Enhanced links in curation and collaboration applications
US11252252B2 (en) Installable web applications
US10686788B2 (en) Developer based document collaboration
US11763076B2 (en) Document applet generation
CN110020278B (en) Page data display and provision method, client and server
WO2018077085A1 (en) Application processing method, device and storage medium
CN110971655B (en) Offline client playback and synchronization
US8543972B2 (en) Gateway data distribution engine
JP7397094B2 (en) Resource configuration method, resource configuration device, computer equipment, and computer program
US20220342518A1 (en) Card-based information management method and system
US10599753B1 (en) Document version control in collaborative environment
WO2015035897A1 (en) Search methods, servers, and systems
US11882154B2 (en) Template representation of security resources
WO2020227318A1 (en) Systems and methods for determining whether to modify content
CN112487218B (en) Content processing method, system, device, computing equipment and storage medium
CN112243016A (en) Middleware platform, terminal equipment, 5G artificial intelligence cloud processing system and processing method
CN114385382A (en) Light application access method and device, computer equipment and storage medium
US11228551B1 (en) Multiple gateway message exchange
WO2023239468A1 (en) Cross-application componentized document generation
JP2019220152A (en) Image filtering methods, electronic devices, and recording media
US20220070127A1 (en) Live database records in a chat platform
CN112328140B (en) Image input method, device, equipment and medium thereof
CN111367898B (en) Data processing method, device, system, electronic equipment and storage medium
US10878471B1 (en) Contextual and personalized browsing assistant

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant