CN112182452A - Page component rendering processing method, device, equipment and computer readable medium

Page component rendering processing method, device, equipment and computer readable medium

Info

Publication number
CN112182452A
Authority
CN
China
Prior art keywords
request
instance
preset
network
queue
Prior art date
Legal status
Pending
Application number
CN202011033891.2A
Other languages
Chinese (zh)
Inventor
Wen Huiling (温惠玲)
Current Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Original Assignee
Ping An Property and Casualty Insurance Company of China Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Property and Casualty Insurance Company of China Ltd filed Critical Ping An Property and Casualty Insurance Company of China Ltd
Priority to CN202011033891.2A priority Critical patent/CN112182452A/en
Publication of CN112182452A publication Critical patent/CN112182452A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/957 - Browsing optimisation, e.g. caching or content distillation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present application belongs to the technical field of data processing and provides a page component rendering processing method and apparatus, a computer device, and a computer-readable storage medium. The method includes: responding to a network request corresponding to the rendering of an instance contained in a page component, and adding the network request to a preset instance request queue; processing the network request according to the processing sequence corresponding to the preset instance request queue to obtain request data corresponding to the network request; and returning the request data to the instance to enable the instance to render with the request data. By constructing and using a preset instance request queue shared by all instances of the same component, the method improves the processing efficiency of multiple instances of that component when a component that issues its own internal requests is heavily reused and instantiated within a single page, and improves page rendering efficiency and rendering smoothness.

Description

Page component rendering processing method, device, equipment and computer readable medium
Technical Field
The present application relates to the field of web page development technologies, and in particular, to a page component rendering processing method and apparatus, a computer device, and a computer-readable storage medium.
Background
In Web engineering, componentized development has become an increasingly common practice: pages and functions are designed as components of different granularities and degrees of flexibility that can be reused across many pages, achieving rapid development and code reuse. However, because components are designed along domain boundaries, there are clear boundaries between them; a component can only call global methods, and control cannot be injected into it. As a result, coordination between components is weak, for example for sharing state data between components or coordinating network requests between components.
In conventional practice, the same component is usually reused only a small number of times within a single page; most reuse happens across different pages. Different instances of the same component therefore rarely compete for the network. However, once a single page heavily reuses a component, different instances of that component may compete for network requests, which can cause request congestion. For example, if a page renders a list of components and each component fetches its own detail data over the network, then a list of 100 elements produces the following request pattern: one list request followed by 100 detail requests.
Having each component manage its own network requests conforms to the separation-of-concerns principle: the page only cares about where content is placed, and each placed element fetches its data and processes its content by itself. But if the Web browser limits simultaneous network requests to 6-8, the 100 network requests will be queued to varying degrees; a later instance may even receive its data earlier than an earlier one, and when the data comes back in a burst, a large number of components may render at the same time. In the best case the page state merely confuses the user; in the worst case the page stutters, or even crashes and exits. The common workaround is to extract the shared state data and the network requests back into global management, leaving the component to encapsulate only display and interaction. This, however, creates other problems: the boundaries between components blur, the global code and the components become tightly coupled, the benefits of componentization are lost under long-term maintenance, and the engineering design regresses.
Therefore, under componentized design in Web engineering, if a single page reuses the same component a large number of times, network competition between the different instances causes the page to stall.
Disclosure of Invention
The present application provides a page component rendering processing method and apparatus, a computer device, and a computer-readable storage medium, which can solve the prior-art problem that, when a single page reuses the same component a large number of times, network competition between the different instances causes the page to stall.
In a first aspect, the present application provides a page component rendering processing method, including: responding to a network request corresponding to the rendering of an instance contained in a page component, and adding the network request to a preset instance request queue; processing the network request according to the processing sequence corresponding to the preset instance request queue to obtain request data corresponding to the network request; and returning the request data to the instance to enable the instance to render with the request data.
In a second aspect, the present application further provides a page component rendering processing apparatus, including: an adding unit, configured to respond to a network request corresponding to the rendering of an instance contained in a page component and add the network request to a preset instance request queue; a processing unit, configured to process the network request according to the processing sequence corresponding to the preset instance request queue to obtain request data corresponding to the network request; and a returning unit, configured to return the request data to the instance to enable the instance to render with the request data.
In a third aspect, the present application further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the page component rendering processing method when executing the computer program.
In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to execute the steps of the page component rendering processing method.
The present application provides a page component rendering processing method and apparatus, a computer device, and a computer-readable storage medium. The method includes: responding to a network request corresponding to the rendering of an instance contained in a page component, and adding the network request to a preset instance request queue; processing the network request according to the processing sequence corresponding to the preset instance request queue to obtain request data corresponding to the network request; and returning the request data to the instance to enable the instance to render with the request data. By constructing a preset instance request queue shared by all instances of the same component and processing the network requests according to the processing sequence of that queue, the queue controls the processing order of the network requests issued by multiple instances. When a component that issues its own internal requests is heavily reused within a single page, the method therefore solves the network congestion and page stalling problems without resorting to the traditional fix of separating and abstracting the network requests into the global context, thereby also avoiding components being coupled to global state and inconvenient to reuse across pages or modules; it improves the processing efficiency of multiple instances of the same component and improves page rendering efficiency and rendering smoothness.
Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a page component rendering processing method according to an embodiment of the present application;
fig. 2 is a schematic sub-flow diagram of a page component rendering processing method according to an embodiment of the present application;
fig. 3 is another sub-flow diagram of a page component rendering processing method according to an embodiment of the present application;
FIG. 4 is a schematic block diagram of a page component rendering processing apparatus provided in an embodiment of the present application; and
fig. 5 is a schematic block diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. The described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort fall within the protection scope of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Referring to fig. 1, fig. 1 is a schematic flowchart of a page component rendering processing method according to an embodiment of the present disclosure. As shown in fig. 1, the method comprises the following steps S101-S103:
S101, responding to a network request corresponding to the rendering of an instance contained in a page component, and adding the network request to a preset instance request queue.
Object-oriented programming includes the concepts of a class and an instance. A class is an abstract template, and an instance is a concrete object created from that class; every object created from the same class has the same methods, but each holds its own data. Instantiation refers to the process of creating an object from a class, i.e., turning the abstract class into a concrete object of that class. The result of instantiating a class is an instance: the class itself is static and occupies no process memory at run time, whereas an instance has its own dynamic memory.
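As an illustration of this class/instance relationship, consider the following minimal TypeScript example; the class name DetailCard and its field are purely illustrative and are not taken from the original disclosure:

```typescript
// A class is an abstract template; an instance is a concrete object created from it.
class DetailCard {
  constructor(public itemId: string) {}

  // Every instance shares this method, but each instance holds its own data.
  describe(): string {
    return `DetailCard for ${this.itemId}`;
  }
}

// Instantiation: creating concrete objects (instances) from the class.
const first = new DetailCard("item-1");
const second = new DetailCard("item-2");
console.log(first.describe());  // "DetailCard for item-1"
console.log(second.describe()); // "DetailCard for item-2"
```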
Specifically, when a page is developed with components, a page component may contain instances, and rendering the page involves rendering those instances. If a component is heavily reused within a single page, its different instances may compete for network requests during rendering and cause request congestion. To address this, in the embodiments of the present application, for a component that may have many instances, a preset instance request queue is constructed in advance for that component. The queue holds the network requests issued by the different instances of the same component during rendering. The design encapsulates the network request method and exploits the fact that instances constructed from a class share the class's methods and attributes: the instance request queue is designed on the component's class, so all instances initialized later share the single queue initialized on that class. By abstracting this control into a higher-order component, different instances of components with different functions can all be managed through queues, which improves the efficiency of managing multiple instances of the same component.
When the page is rendered, in response to the network request corresponding to the rendering of an instance contained in the page component, the network request is placed into the preset instance request queue. Because a queue has first-in-first-out semantics, the processing of the instances' network requests is controlled by the order in which the requests arrive. This makes the rendering of multiple instances in the same page component orderly and, especially when the same page component has many instances, avoids the request congestion that can arise when different instances of the same component compete for the network.
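By way of illustration only, a minimal TypeScript sketch of this arrangement might look as follows; the component class ListItem, the endpoint, and all other identifiers are assumptions introduced here for illustration rather than the applicant's actual code. Draining the queue in first-in-first-out order (steps S102 and S103) is sketched in the core code logic example further below.

```typescript
// Sketch: the instance request queue lives on the component class itself (a static
// member), so every instance created from the class appends its render-time network
// request to the same shared queue in arrival order (step S101).
type QueuedTask = () => Promise<void>;

class ListItem {
  // One queue per component class, shared by all instances of that class.
  private static requestQueue: QueuedTask[] = [];

  constructor(private itemId: string) {}

  // Called when this instance renders: instead of firing the request immediately,
  // wrap it as a task and append it to the shared class-level queue.
  requestDetail(): Promise<unknown> {
    return new Promise<unknown>((resolve, reject) => {
      ListItem.requestQueue.push(() =>
        fetch(`/api/detail/${this.itemId}`) // hypothetical endpoint
          .then((res) => res.json())
          .then(resolve, reject)
      );
    });
  }
}
```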
S102, processing the network request according to the processing sequence corresponding to the preset instance request queue to obtain request data corresponding to the network request.
A queue is a first-in-first-out (FIFO) linear list. The end at which insertion is allowed is called the tail of the queue, and the end at which deletion is allowed is called the head of the queue.
Specifically, in the embodiments of the present application, a queue is applied to the network requests issued by the different instances of the same component during rendering, so as to make full use of the queue's first-in-first-out behavior: the queue controls the order in which the requests of the different instances are processed, which prevents the request congestion that arises when, after a component has been heavily reused within a single page, its instances compete for the network during rendering. After the network requests are placed into the preset instance request queue, they are processed according to the processing sequence corresponding to that queue, and the corresponding request data is obtained in the same order; the original global or in-component design and encapsulation of the network requests is left unchanged. Because both the processing of the requests and the return of the request data follow the queue order, the situation where a later instance's data comes back before an earlier instance's, and a concentrated burst of returned data causes a large number of components to render at the same time, is avoided.
S103, returning the request data to the instance to enable the instance to render with the request data.
Specifically, after the network request has been processed according to the processing sequence corresponding to the preset instance request queue and the request data corresponding to the network request has been obtained, the request data is returned to the instance so that the instance renders with it. Because the network requests are processed, and the request data returned, according to the processing sequence of the preset instance request queue, disordered bulk processing of the requests and a concentrated return of the data are avoided; rendering of the instances becomes more orderly and more efficient, and problems such as network congestion caused by many instances of the same component being processed at once in no particular order are prevented. The method can be applied to an existing project without extensive changes, does not conflict with the componentization design philosophy or the separation-of-concerns principle, and preserves the maintainability and extensibility that componentization provides.
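From the instance's point of view, the interaction described in steps S101-S103 might look like the following TypeScript sketch; sharedQueue, mount, and the endpoint are illustrative assumptions used only to show how the returned request data drives the instance's rendering.

```typescript
// Sketch of step S103 from the instance's side: the instance enqueues its request,
// waits until the queue returns its request data in order, then renders with it.
declare const sharedQueue: {
  enqueue<T>(job: () => Promise<T>): Promise<T>;
};

class DetailInstance {
  constructor(private itemId: string, private el: HTMLElement) {}

  async mount(): Promise<void> {
    // The request is queued (S101), processed in queue order (S102),
    // and its request data is returned to this instance (S103).
    const data = await sharedQueue.enqueue(() =>
      fetch(`/api/detail/${this.itemId}`).then((res) => res.json())
    );
    this.render(data);
  }

  private render(data: unknown): void {
    // Placeholder rendering: a real component would update its own view here.
    this.el.textContent = JSON.stringify(data);
  }
}
```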
In the embodiments of the present application, a network request corresponding to the rendering of an instance contained in a page component is responded to by adding the network request to a preset instance request queue; the network request is processed according to the processing sequence corresponding to the preset instance request queue to obtain the corresponding request data; and the request data is returned to the instance so that the instance renders with it. By constructing and using a preset instance request queue shared by all instances of the same component, the network requests of multiple instances are processed according to the processing sequence of that queue, so the queue controls their processing order. When a component that issues its own internal requests is heavily reused within a single page, this solves the network congestion and page stalling problems without resorting to the traditional fix of separating and abstracting the network requests into the global context, thereby also avoiding components being coupled to global state and inconvenient to reuse across pages or modules; it improves the processing efficiency of multiple instances of the same component and improves page rendering efficiency and rendering smoothness.
Referring to fig. 2, fig. 2 is a sub-flow diagram of a page component rendering processing method according to an embodiment of the present application. As shown in fig. 2, in this embodiment, the step of processing the network request according to the processing sequence corresponding to the preset instance request queue to obtain the request data corresponding to the network request includes:
S201, allocating a preset state machine to the network request;
S202, judging whether the state machine is triggered or not;
S203, if the state machine is triggered, processing the network request and obtaining request data corresponding to the network request;
S204, if the state machine is not triggered, continuing to queue in the preset instance request queue and waiting to be triggered.
A state machine, or finite-state machine (FSM), is a mathematical model of a finite number of states together with the transitions and actions between those states. An FSM generally involves several elements: management of states, monitoring of states, triggering of states, and the actions performed after a state is triggered. A finite-state machine has three characteristics: the total number of states is finite; at any moment it is in exactly one state; and under certain conditions it transitions from one state to another.
Specifically, a preset state machine is allocated to each network request added to the preset instance request queue to describe the state of that request; for example, the states may include waiting, processing, and completed. When a request completes, it is deleted from the preset instance request queue and the next network request is triggered, until all network requests in the queue have been processed. Under the control of the preset state machine, finishing one instance's request triggers the start of the next instance's request in the queue. After a state machine is allocated to a network request, whether it has been triggered is determined from its state: in the preset instance request queue, when the previous network request finishes processing, the state machine of the next network request is triggered. If the state machine is triggered, processing of that network request starts and its request data is obtained; after it finishes, the next network request queued behind it is triggered. If the state machine is not triggered, the request keeps waiting in the preset instance request queue until the previous request finishes and it is its turn to be triggered.
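A minimal TypeScript sketch of such a per-request state machine is shown below; the state names and the helper triggerNext are illustrative assumptions rather than the applicant's actual design.

```typescript
// Each queued network request carries a small state machine: it waits in the queue,
// is triggered into the processing state, and on completion is removed from the
// queue, which in turn triggers the request now at the head of the queue.
type RequestState = "waiting" | "processing" | "completed";

interface QueuedRequest {
  state: RequestState;
  run: () => Promise<void>; // the actual network request
}

const requestQueue: QueuedRequest[] = [];

function triggerNext(): void {
  const head = requestQueue[0];
  if (!head || head.state !== "waiting") return; // nothing to trigger yet

  head.state = "processing"; // the state machine is triggered
  const finish = () => {
    head.state = "completed";
    requestQueue.shift(); // delete the finished request from the queue
    triggerNext();        // trigger the request queued behind it
  };
  head.run().then(finish, finish); // a failed request also releases the queue
}
```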
Referring to fig. 3, fig. 3 is a schematic view of another sub-flow of a page component rendering processing method according to an embodiment of the present application. As shown in fig. 3, in this embodiment, before the step of determining whether the state machine is triggered, the method further includes:
S301, counting the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue;
S302, judging whether the number is smaller than a preset number threshold value or not;
S303, if the number is judged to be smaller than the preset number threshold value, executing the step of judging whether the state machine is triggered;
S304, if the number is judged to be larger than or equal to the preset number threshold value, not executing the step of judging whether the state machine is triggered.
Specifically, the preset instance request queue may carry out several network requests at the same time, i.e., execute multiple network requests in parallel, to improve processing efficiency. When the queue runs multiple requests in parallel, the number of network requests whose state machines are in the triggered state is counted within a preset time period and compared with a preset number threshold. If the number is smaller than the threshold, then, under the control of the preset state machine, finishing one instance's request triggers the start of the next instance's request in the queue, so the step of judging whether the state machine of the next network request is triggered is executed. If the number equals or exceeds the threshold, the number of requests that the queue may run in parallel has been reached, finishing a request does not trigger the next one, and the state machine of the next request is not triggered, i.e., the step of judging whether the state machine is triggered is not executed until the count drops below the threshold again. By extending the design and modifying the queue control in this way, several network requests are executed and processed in parallel in the preset instance request queue, which improves processing efficiency; and because the number of requests executed in parallel is bounded, the requests are not processed in a disorderly burst, so network congestion and a concentrated return of request data are still avoided.
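A minimal TypeScript sketch of this concurrency-limited variant follows; the counter, the threshold name MAX_TRIGGERED, and the value 4 are illustrative assumptions only.

```typescript
// Several requests may be in the triggered (in-flight) state at once, but a new
// request is only started while the count of triggered requests stays below a
// preset threshold; when a request finishes, the count drops and the queue is
// re-checked.
const MAX_TRIGGERED = 4; // the preset number threshold (arbitrary example value)
let triggeredCount = 0;  // the preset counter of state machines in the triggered state

const pendingTasks: Array<() => Promise<void>> = [];

function pump(): void {
  while (triggeredCount < MAX_TRIGGERED && pendingTasks.length > 0) {
    const task = pendingTasks.shift()!;
    triggeredCount += 1; // this request's state machine is now triggered
    const release = () => {
      triggeredCount -= 1; // request finished: free a parallel slot
      pump();              // re-check whether the next request may be triggered
    };
    task().then(release, release);
  }
}
```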
In an embodiment, the step of counting the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue includes:
counting the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue through a preset counter; and reading the numerical value corresponding to the preset counter to obtain the number of the network requests corresponding to the state machine in the triggered state in the preset instance request queue.
Specifically, the queue can carry out multiple tasks at the same time with the help of a counter: the preset counter counts the number of network requests whose state machines are in the triggered state in the preset instance request queue, and reading the value of the preset counter yields that number.
In one embodiment, before the step of adding the network request to a preset instance request queue in response to the network request corresponding to the rendering of the instance included in the page component, the method further includes:
judging whether the preset instance request queue is in an execution state or not; if the preset instance request queue is not in an execution state, triggering the preset instance request queue to execute the network request contained in the preset instance request queue; and if the preset instance request queue is in an execution state, executing the step of adding the network request to the preset instance request queue in response to the network request corresponding to the rendering of the instance contained in the page component so as to queue the network request.
Specifically, in response to a network request corresponding to the rendering of an instance contained in the page component, before the network request is added to the preset instance request queue, it is first determined whether the preset instance request queue is in an execution state. If the queue is not executing, it is first triggered to execute the network requests it contains, so that it starts working and enters the state of processing network requests. If the queue is already executing, the network request is added to the preset instance request queue, where it waits its turn and is processed according to the processing sequence of the queue. Core code logic example (non-actual code):
(In the original publication this code listing is reproduced only as figures.)
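A minimal TypeScript sketch of the logic this embodiment describes (check whether the queue is executing, enqueue the request, and start execution when the queue is idle) might look as follows; the class name InstanceRequestQueue and every other identifier are illustrative assumptions, not the applicant's actual code.

```typescript
// Non-actual code: a first-in-first-out instance request queue that processes one
// queued network request at a time and starts itself whenever it is idle.
type Job<T> = () => Promise<T>;

class InstanceRequestQueue {
  private tasks: Array<() => Promise<void>> = [];
  private executing = false;

  // S101: queue the instance's network request; the returned promise resolves with
  // the request data once the queue reaches this request.
  enqueue<T>(job: Job<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      this.tasks.push(() => job().then(resolve, reject));
      if (!this.executing) {
        void this.execute(); // the queue was idle: trigger it to start working
      }
    });
  }

  // S102/S103: process the queued requests strictly in arrival order and hand each
  // result back to the instance that queued it.
  private async execute(): Promise<void> {
    this.executing = true;
    while (this.tasks.length > 0) {
      const task = this.tasks.shift()!;
      await task(); // the next request starts only after this one has finished
    }
    this.executing = false;
  }
}
```

With such a queue shared at the component-class level, each instance simply awaits the promise returned by enqueue, while the queue guarantees that the wrapped network requests run one after another.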
In an actual implementation, the processing may differ according to actual requirements; for example, the above flow also includes steps such as initializing the request queue.
In one embodiment, after the step of returning the request data to the instance to make the instance render with the request data, the method further includes:
deleting the network request from the preset instance request queue;
judging whether the preset instance request queue is empty or not; and if the preset instance request queue is not empty, triggering a next network request corresponding to the network request contained in the preset instance request queue to execute the next network request corresponding to the network request.
Specifically, after the request data is returned to the instance and the instance renders with it, the page rendering corresponding to that instance is done, and the network request corresponding to the rendering of that instance contained in the page component has been fully processed; the network request is therefore deleted from the preset instance request queue so that the queue gradually empties. It is then determined whether the preset instance request queue is empty. If it is empty, all of the instances of the same component contained in the page component, even a very large number of them, have been processed, and the preset instance request queue may be deleted. If it is not empty, the next network request in the preset instance request queue is triggered, so that processing continues with that next network request.
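A minimal sketch of this completion handling in TypeScript, again with illustrative names only:

```typescript
// After an instance has rendered with its request data, its network request is
// removed from the preset instance request queue; if requests remain, the next one
// is triggered, otherwise the now-empty queue can be released.
function onRequestCompleted(queue: Array<() => Promise<void>>): void {
  queue.shift(); // delete the finished network request from the queue
  if (queue.length > 0) {
    // Trigger the next network request; its own handlers deal with any error.
    queue[0]().catch(() => undefined);
  } else {
    // The queue is empty: every instance of the component has been processed,
    // so the preset instance request queue may be deleted or released here.
  }
}
```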
It should be noted that the technical features of the page component rendering processing methods described in the foregoing embodiments may be recombined as needed to obtain combined implementations, all of which fall within the protection scope claimed in the present application.
Referring to fig. 4, fig. 4 is a schematic block diagram of a page component rendering processing apparatus according to an embodiment of the present application. Corresponding to the page component rendering processing method above, an embodiment of the present application further provides a page component rendering processing apparatus. As shown in fig. 4, the apparatus includes units for executing the page component rendering processing method described above and may be configured in a computer device. Specifically, referring to fig. 4, the page component rendering processing apparatus 400 includes an adding unit 401, a processing unit 402, and a returning unit 403.
The adding unit 401 is configured to add a network request corresponding to rendering performed by an instance included in a page component to a preset instance request queue in response to the network request;
a processing unit 402, configured to process the network request according to a processing sequence corresponding to the preset instance request queue to obtain request data corresponding to the network request;
a returning unit 403, configured to return the request data to the instance, so that the instance performs rendering with the request data.
In one embodiment, the processing unit 402 comprises:
the distribution subunit is used for distributing a preset state machine to the network request;
the first judging subunit is used for judging whether the state machine is triggered or not;
and the processing subunit is used for processing the network request and obtaining request data corresponding to the network request if the state machine is triggered.
In an embodiment, the processing unit 402 further comprises:
the first counting subunit is configured to count the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue;
the second judging subunit is used for judging whether the number is smaller than a preset number threshold value;
and the execution subunit is configured to execute the step of determining whether the state machine is triggered if it is determined that the number is smaller than the preset number threshold.
In one embodiment, the first statistical subunit includes:
the second counting subunit is configured to count, by using a preset counter, the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue;
and the reading subunit is configured to read a numerical value corresponding to the preset counter to obtain the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue.
In an embodiment, the page component rendering processing apparatus 400 further includes:
the first judgment unit is used for judging whether the preset instance request queue is in an execution state or not;
the first triggering unit is used for triggering the preset example request queue to execute the network request contained in the preset example request queue if the preset example request queue is not in an execution state;
and the execution unit is used for, if the preset instance request queue is in an execution state, executing the step of adding the network request to the preset instance request queue in response to the network request corresponding to the rendering of the instance contained in the page component, so as to queue the network request.
In an embodiment, the page component rendering processing apparatus 400 further includes:
a deleting unit, configured to delete the network request from the preset instance request queue;
the second judging unit is used for judging whether the preset instance request queue is empty or not;
a second triggering unit, configured to trigger a next network request corresponding to the network request included in the preset instance request queue if the preset instance request queue is not empty, so as to execute the next network request corresponding to the network request.
It should be noted that, as can be clearly understood by those skilled in the art, for the specific implementation processes of the page component rendering processing apparatus and each unit, reference may be made to the corresponding descriptions in the foregoing method embodiments, and for convenience and brevity of description, no further description is provided herein.
Meanwhile, the division of the units in the page component rendering processing apparatus and the way they are connected are only illustrative; in other embodiments the apparatus may be divided into different units as needed, and the units may be connected in a different order or manner to perform all or part of the functions of the apparatus.
The above-described page component rendering processing apparatus may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 5.
Referring to fig. 5, fig. 5 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a computer device such as a desktop computer or a server, or may be a component or part of another device.
Referring to fig. 5, the computer device 500 includes a processor 502, a memory, and a network interface 505 connected by a system bus 501, wherein the memory may include a non-volatile storage medium 503 and an internal memory 504, and the memory may also be a volatile computer-readable storage medium.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, causes the processor 502 to perform one of the above-described page component rendering methods.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 can be enabled to execute a page component rendering processing method as described above.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 5 is a block diagram of only a portion of the configuration associated with the present application and does not constitute a limitation on the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components. For example, in some embodiments, the computer device may include only a memory and a processor; in such embodiments the structures and functions of the memory and the processor are consistent with the embodiment shown in fig. 5 and are not described again here.
Wherein the processor 502 is configured to run the computer program 5032 stored in the memory to implement the following steps: responding to a network request corresponding to the rendering of an instance contained in a page component, and adding the network request to a preset instance request queue; processing the network request according to the processing sequence corresponding to the preset instance request queue to obtain request data corresponding to the network request; returning the request data to the instance to enable the instance to render with the request data.
In an embodiment, when the processor 502 implements the step of processing the network request according to the processing sequence corresponding to the preset instance request queue to obtain the request data corresponding to the network request, the following steps are specifically implemented:
allocating a preset state machine to the network request;
judging whether the state machine is triggered or not;
and if the state machine is triggered, processing the network request and obtaining request data corresponding to the network request.
In an embodiment, before implementing the step of determining whether the state machine is triggered, the processor 502 further implements the following steps:
counting the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue;
judging whether the number is smaller than a preset number threshold value or not;
and if the number is smaller than the preset number threshold value, executing the step of judging whether the state machine is triggered.
In an embodiment, when the processor 502 performs the step of counting the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue, the following steps are specifically performed:
counting the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue through a preset counter;
and reading the numerical value corresponding to the preset counter to obtain the number of the network requests corresponding to the state machine in the triggered state in the preset instance request queue.
In an embodiment, before implementing the step of adding the network request corresponding to rendering in response to the instance included in the page component to a preset instance request queue, the processor 502 further implements the following steps:
judging whether the preset instance request queue is in an execution state or not;
if the preset instance request queue is not in an execution state, triggering the preset instance request queue to execute the network request contained in the preset instance request queue;
and if the preset instance request queue is in an execution state, executing the step of adding the network request to the preset instance request queue in response to the network request corresponding to the rendering of the instance contained in the page component so as to queue the network request.
In an embodiment, after the step of returning the request data to the instance to cause the instance to render with the request data, the processor 502 further performs the following steps:
deleting the network request from the preset instance request queue;
judging whether the preset instance request queue is empty or not;
and if the preset instance request queue is not empty, triggering a next network request corresponding to the network request contained in the preset instance request queue to execute the next network request corresponding to the network request.
It should be understood that in the embodiments of the present application, the processor 502 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the processes in the methods of the above embodiments may be implemented by a computer program, and the computer program may be stored in a computer-readable storage medium. When the computer program is executed by at least one processor in the computer system, the flow steps of the above method embodiments are implemented.
Accordingly, the present application also provides a computer-readable storage medium. The computer readable storage medium may be a non-volatile computer readable storage medium, or a volatile computer readable storage medium, and the computer readable storage medium stores a computer program, and when the computer program is executed by a processor, the computer program causes the processor to execute the steps of the page component rendering processing method described in the above embodiments.
The computer readable storage medium may be an internal storage unit of the aforementioned device, such as a hard disk or a memory of the device. The computer readable storage medium may also be an external storage device of the device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the device. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the apparatus.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, devices and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The storage medium is a physical, non-transitory storage medium, and may be any physical storage medium capable of storing a computer program, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; to clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the application can be combined, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing an electronic device (which may be a personal computer, a terminal, or a network device) to perform all or part of the steps of the method according to the embodiments of the present application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A page component rendering processing method is characterized by comprising the following steps:
responding to a network request corresponding to the rendering of an instance contained in a page component, and adding the network request to a preset instance request queue;
processing the network request according to the processing sequence corresponding to the preset instance request queue to obtain request data corresponding to the network request;
returning the request data to the instance to enable the instance to render with the request data.
2. The page component rendering processing method according to claim 1, wherein the step of processing the network request according to the processing sequence corresponding to the preset instance request queue to obtain the request data corresponding to the network request includes:
allocating a preset state machine to the network request;
judging whether the state machine is triggered or not;
and if the state machine is triggered, processing the network request and obtaining request data corresponding to the network request.
3. The page component rendering processing method according to claim 2, wherein before the step of determining whether the state machine is triggered, the method further comprises:
counting the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue;
judging whether the number is smaller than a preset number threshold value or not;
and if the number is smaller than the preset number threshold value, executing the step of judging whether the state machine is triggered.
4. The page component rendering processing method according to claim 3, wherein the step of counting the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue comprises:
counting the number of network requests corresponding to the state machine in the triggered state in the preset instance request queue through a preset counter;
and reading the numerical value corresponding to the preset counter to obtain the number of the network requests corresponding to the state machine in the triggered state in the preset instance request queue.
5. The page component rendering processing method according to claim 1, wherein before the step of adding the network request to a preset instance request queue in response to a network request corresponding to the rendering of an instance contained in a page component, the method further comprises:
judging whether the preset instance request queue is in an execution state or not;
if the preset instance request queue is not in an execution state, triggering the preset instance request queue to execute the network request contained in the preset instance request queue;
and if the preset instance request queue is in an execution state, executing the step of adding the network request to the preset instance request queue in response to the network request corresponding to the rendering of the instance contained in the page component so as to queue the network request.
6. The page component rendering processing method according to claim 1, wherein after the step of returning the request data to the instance to enable the instance to render with the request data, the method further comprises:
deleting the network request from the preset instance request queue;
judging whether the preset instance request queue is empty or not;
and if the preset instance request queue is not empty, triggering a next network request corresponding to the network request contained in the preset instance request queue to execute the next network request corresponding to the network request.
7. A page component rendering processing apparatus, comprising:
the adding unit is used for responding to a network request corresponding to the rendering of the instance contained in the page component and adding the network request to a preset instance request queue;
the processing unit is used for processing the network request according to the processing sequence corresponding to the preset instance request queue so as to obtain request data corresponding to the network request;
and the return unit is used for returning the request data to the example so as to render the example by adopting the request data.
8. The page component rendering processing apparatus according to claim 7, wherein the processing unit includes:
the distribution subunit is used for distributing a preset state machine to the network request;
the first judging subunit is used for judging whether the state machine is triggered or not;
and the processing subunit is used for processing the network request and obtaining request data corresponding to the network request if the state machine is triggered.
9. A computer device, comprising a memory and a processor coupled to the memory; the memory is used for storing a computer program; the processor is adapted to run the computer program to perform the steps of the method according to any of claims 1-6.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when being executed by a processor, realizes the steps of the method according to any one of claims 1 to 6.
CN202011033891.2A 2020-09-27 2020-09-27 Page component rendering processing method, device, equipment and computer readable medium Pending CN112182452A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011033891.2A CN112182452A (en) 2020-09-27 2020-09-27 Page component rendering processing method, device, equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011033891.2A CN112182452A (en) 2020-09-27 2020-09-27 Page component rendering processing method, device, equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN112182452A true CN112182452A (en) 2021-01-05

Family

ID=73945099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011033891.2A Pending CN112182452A (en) 2020-09-27 2020-09-27 Page component rendering processing method, device, equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN112182452A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114518912A (en) * 2022-02-21 2022-05-20 度小满科技(北京)有限公司 Page loading method, device and equipment and readable storage medium
CN114741147A (en) * 2022-03-30 2022-07-12 阿里巴巴(中国)有限公司 Method for displaying page on mobile terminal and mobile terminal
CN116991506A (en) * 2023-09-28 2023-11-03 腾讯科技(深圳)有限公司 Webpage rendering method and device, terminal and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9626197B1 (en) * 2010-07-30 2017-04-18 Amazon Technologies, Inc. User interface rendering performance
CN108880921A (en) * 2017-05-11 2018-11-23 腾讯科技(北京)有限公司 Webpage monitoring method
CN109726346A (en) * 2018-12-29 2019-05-07 北京创鑫旅程网络技术有限公司 Page assembly processing method and processing device
CN109818826A (en) * 2019-01-11 2019-05-28 西安电子科技大学工程技术研究院有限公司 A kind of network path delay measurement method and its device and clock synchronization system
US10530887B1 (en) * 2016-12-06 2020-01-07 Amazon Technologies, Inc. Pre-caching data for use upon execution of program code

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9626197B1 (en) * 2010-07-30 2017-04-18 Amazon Technologies, Inc. User interface rendering performance
US10530887B1 (en) * 2016-12-06 2020-01-07 Amazon Technologies, Inc. Pre-caching data for use upon execution of program code
CN108880921A (en) * 2017-05-11 2018-11-23 腾讯科技(北京)有限公司 Webpage monitoring method
CN109726346A (en) * 2018-12-29 2019-05-07 北京创鑫旅程网络技术有限公司 Page assembly processing method and processing device
CN109818826A (en) * 2019-01-11 2019-05-28 西安电子科技大学工程技术研究院有限公司 A kind of network path delay measurement method and its device and clock synchronization system

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114518912A (en) * 2022-02-21 2022-05-20 度小满科技(北京)有限公司 Page loading method, device and equipment and readable storage medium
CN114518912B (en) * 2022-02-21 2023-04-25 度小满科技(北京)有限公司 Page loading method, device, equipment and readable storage medium
CN114741147A (en) * 2022-03-30 2022-07-12 阿里巴巴(中国)有限公司 Method for displaying page on mobile terminal and mobile terminal
CN114741147B (en) * 2022-03-30 2023-11-14 阿里巴巴(中国)有限公司 Method for displaying page on mobile terminal and mobile terminal
CN116991506A (en) * 2023-09-28 2023-11-03 腾讯科技(深圳)有限公司 Webpage rendering method and device, terminal and storage medium
CN116991506B (en) * 2023-09-28 2024-04-30 腾讯科技(深圳)有限公司 Webpage rendering method and device, terminal and storage medium

Similar Documents

Publication Publication Date Title
EP3425502B1 (en) Task scheduling method and device
CN112182452A (en) Page component rendering processing method, device, equipment and computer readable medium
US20130031558A1 (en) Scheduling Mapreduce Jobs in the Presence of Priority Classes
CN113377348A (en) Task adjustment method applied to task engine, related device and storage medium
US11023277B2 (en) Scheduling of tasks in a multiprocessor device
CN115562838A (en) Resource scheduling method and device, computer equipment and storage medium
CN109521970B (en) Data processing method and related equipment
CN108958903B (en) Embedded multi-core central processor task scheduling method and device
CN107577962A (en) Method, system and the relevant apparatus that a kind of more algorithms of cipher card perform side by side
US11743200B2 (en) Techniques for improving resource utilization in a microservices architecture via priority queues
CN115981893A (en) Message queue task processing method and device, server and storage medium
CN113220368B (en) Storage client resource isolation method, system, terminal and storage medium
CN113345067B (en) Unified rendering method, device, equipment and engine
CN113296788B (en) Instruction scheduling method, device, equipment and storage medium
CN112988355B (en) Program task scheduling method and device, terminal equipment and readable storage medium
CN109634812A (en) Process CPU usage control method, terminal device and the storage medium of linux system
US9135058B2 (en) Method for managing tasks in a microprocessor or in a microprocessor assembly
CN114741165A (en) Processing method of data processing platform, computer equipment and storage device
CN113961364A (en) Large-scale lock system implementation method and device, storage medium and server
CN113806055A (en) Lightweight task scheduling method, system, device and storage medium
CN108958904B (en) Driver framework of lightweight operating system of embedded multi-core central processing unit
CN108958905B (en) Lightweight operating system of embedded multi-core central processing unit
CN115328528A (en) Flutter engine management method, system, medium and native terminal
JP6962717B2 (en) Information processing equipment, information processing methods and information processing programs
CN117376373B (en) Metadata operation request processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210105