CN116107710A - Method, apparatus, device and medium for processing offline rendering tasks - Google Patents


Info

Publication number
CN116107710A
CN116107710A (application CN202211339002.4A)
Authority
CN
China
Prior art keywords
rendering
queue
task
information
tasks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211339002.4A
Other languages
Chinese (zh)
Inventor
梁阳
张世阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Publication of CN116107710A publication Critical patent/CN116107710A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

According to embodiments of the present disclosure, methods, apparatuses, devices, and media for processing offline rendering tasks are provided. The method for processing an offline rendering task comprises: obtaining queue information of a rendering queue and information of a rendering node that pulls rendering tasks from the rendering queue, wherein the rendering queue is used to carry at least one rendering task submitted by a user. The method further includes evaluating a workload of the rendering queue based on the queue information and the information of the rendering node. The method further includes adjusting, based on the workload, the number of rendering nodes that pull rendering tasks from the rendering queue. In this way, the rendering platform may dynamically adjust rendering resources, thereby reducing the processing latency of rendering tasks and improving the resource utilization of the system.

Description

Method, apparatus, device and medium for processing offline rendering tasks
The present disclosure claims priority to the Chinese invention patent application entitled "Method, apparatus, device, and medium for processing offline rendering tasks", filed with the China National Intellectual Property Administration on August 10, 2022, with application number 202210956659.9.
Technical Field
Example embodiments of the present disclosure relate generally to the field of image processing, and in particular, relate to a method, apparatus, device, and computer-readable storage medium for processing offline rendering tasks.
Background
Rendering refers to the process of generating images from models using software. With the widespread use of computer graphics, both enterprise users and individual users increasingly require rendering. In recent years, with the rapid development of cloud computing, cloud rendering technology has also emerged and attracted widespread attention in the industry.
At present, cloud rendering supports two rendering modes, online rendering and offline rendering, to meet users' different rendering requirements. Online rendering is suitable for scenarios with smaller processing volumes and higher real-time requirements. In contrast, offline rendering is particularly suitable for scenarios with relatively large processing volumes, and is therefore widely used for advertising, video, animation, and non-real-time image processing for individuals and enterprises. Conventional cloud offline rendering technology cannot support the processing of massive rendering tasks, and its rendering resource scheduling lacks flexibility. Therefore, a flexible and efficient mechanism for processing offline rendering tasks needs to be provided, so as to reduce the processing delay of rendering tasks and improve the utilization of rendering resources, thereby supporting massive-scale rendering tasks.
Disclosure of Invention
In a first aspect of the present disclosure, a method of processing an offline rendering task is provided. The method comprises the steps of obtaining queue information of a rendering queue and information of a rendering node for pulling rendering tasks from the rendering queue, wherein the rendering queue is used for carrying at least one rendering task submitted by a user. The method further includes evaluating a workload of the rendering queue based on the queue information and the information of the rendering node. The method further includes adjusting a number of rendering nodes that pull rendering tasks from the rendering queue based on the workload.
In a second aspect of the present disclosure, an apparatus for processing an offline rendering task is provided. The apparatus comprises an information obtaining module configured to obtain queue information of a rendering queue and information of a rendering node that pulls rendering tasks from the rendering queue, wherein the rendering queue is used to carry at least one rendering task submitted by a user. The apparatus further includes an evaluation module configured to evaluate a workload of the rendering queue based on the queue information and the information of the rendering node. The apparatus further includes an adjustment module configured to adjust, based on the workload, the number of rendering nodes that pull rendering tasks from the rendering queue.
In a third aspect of the present disclosure, an electronic device is provided. The apparatus comprises at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by at least one processing unit, cause the apparatus to perform the method of the first aspect.
In a fourth aspect of the present disclosure, a computer-readable storage medium is provided. The medium has stored thereon a computer program that, when executed by a processor, implements the method of the first aspect.
It should be understood that what is described in this summary is not intended to identify key or essential features of the embodiments of the disclosure, nor is it intended to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, wherein like or similar reference numerals denote like or similar elements, in which:
FIG. 1A shows a schematic diagram of an example environment in which embodiments of the present disclosure can be applied;
FIG. 1B illustrates a schematic diagram of another example environment in which embodiments of the present disclosure can be applied;
FIG. 2 illustrates a schematic diagram of a signaling flow for processing offline rendering tasks, according to some embodiments of the present disclosure;
FIG. 3 illustrates a flow chart of a process of handling offline rendering tasks, according to some embodiments of the present disclosure;
FIG. 4 illustrates a block diagram of an apparatus for processing offline rendering tasks, according to some embodiments of the present disclosure; and
fig. 5 illustrates a block diagram of an apparatus capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather, these embodiments are provided so that this disclosure will be more thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and its variants should be understood as open-ended inclusion, i.e., "including, but not limited to." The term "based on" should be understood as "based at least in part on." The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment." The term "some embodiments" should be understood as "at least some embodiments." Other explicit and implicit definitions may also be included below.
It will be appreciated that the data (including but not limited to the data itself, the acquisition or use of the data) involved in the present technical solution should comply with the corresponding legal regulations and the requirements of the relevant regulations.
It will be appreciated that prior to using the technical solutions disclosed in the embodiments of the present disclosure, the user should be informed and authorized of the type, usage range, usage scenario, etc. of the personal information related to the present disclosure in an appropriate manner according to relevant legal regulations.
For example, in response to receiving an active request from a user, a prompt may be sent to the user to explicitly inform the user that the operation it is requesting to perform will require obtaining and using the user's personal information. Thus, the user can autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that executes the operations of the technical solution of the present disclosure.
As an alternative but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user, for example, in a popup window, where the prompt information may be presented as text. In addition, the popup window may carry a selection control with which the user may choose to "agree" or "disagree" to provide personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
In conventional cloud offline rendering technology, a user submits a rendering task to a cloud rendering platform, and the rendering platform processes the submitted task using rendering nodes. Rendering nodes may also sometimes be referred to as rendering resources. However, in conventional offline rendering technology, the rendering nodes are configured statically, so when the volume of rendering tasks grows rapidly, user rendering tasks cannot be processed in time, resulting in high processing delays. Conversely, when the rendering task volume is low, a large number of rendering nodes sit idle or run at low utilization, resulting in wasted resources.
In recent years, users have increasingly demanded offline rendering, and the types of offline rendering tasks have also been increasingly diversified. In addition, the demands of users for scheduling and concurrent execution speeds of rendering tasks are also increasing. In this case, the conventional offline rendering technique has failed to meet the demands of increasing rendering tasks and increasingly diversified rendering types.
Therefore, an efficient and flexible offline rendering task processing mechanism needs to be provided to improve the scheduling of rendering tasks and rendering nodes, so that a rendering platform can support massive-scale rendering tasks, reduce the processing delay of rendering tasks, and improve the resource utilization of the system.
According to some implementations of the present disclosure, a scheme for processing offline rendering tasks is presented. According to the scheme, a rendering task submitted by a user is added to a rendering queue, and the rendering platform can dynamically obtain queue information of the rendering queue and information of the rendering nodes that pull rendering tasks from the rendering queue, evaluate the workload of the rendering queue based on this information, and adjust the number of rendering nodes that pull rendering tasks from the rendering queue.
In this way, the number of rendering nodes that pull rendering tasks from the rendering queue may be dynamically adjusted, thereby reducing processing latency of rendering tasks and improving resource utilization of the system.
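As a non-limiting illustration, the dynamic adjustment described above can be sketched as a simple control loop. All function names and the scaling heuristic below are assumptions for illustration and are not part of the disclosure:

```python
def evaluate_workload(queue_length, insert_rate, node_count, per_node_rate):
    """Net queue growth per second: positive means the queue is falling behind."""
    return insert_rate - node_count * per_node_rate


def adjust_node_count(queue_length, insert_rate, node_count, per_node_rate,
                      min_nodes=1, max_nodes=100):
    """Return a new node count intended to keep the queue from growing.

    This is one possible heuristic, assumed for illustration: scale up when
    there is a backlog or the queue is non-empty, scale down when the queue
    is empty and drain capacity exceeds the arrival rate.
    """
    backlog = evaluate_workload(queue_length, insert_rate, node_count, per_node_rate)
    if backlog > 0 or queue_length > 0:
        # Need enough nodes to absorb both the arrival rate and the backlog.
        needed = (insert_rate + queue_length) / per_node_rate
        node_count = max(node_count, int(needed) + 1)
    elif backlog < 0 and queue_length == 0:
        # Queue is idle and draining capacity exceeds arrivals: scale down.
        needed = insert_rate / per_node_rate
        node_count = max(int(needed) + 1, min_nodes)
    return min(max(node_count, min_nodes), max_nodes)
```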
Example Environment
FIG. 1A illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. The environment 100 relates to a cloud rendering environment that includes N user devices 110-1 … … 110-k, … … 110-N, where N is an integer greater than 1, k=1, 2, … … N, and a rendering platform 120. For ease of discussion, the user devices 110-1 … … 110-k, … … 110-N may be referred to collectively or individually as user devices 110.
In some embodiments, user device 110 may be implemented as a terminal device. The terminal device may be any type of mobile terminal, fixed terminal, or portable terminal, including a mobile handset, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, media computer, multimedia tablet, Personal Communication System (PCS) device, personal navigation device, Personal Digital Assistant (PDA), audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, game device, or any combination of the preceding, including accessories and peripherals for these devices, or any combination thereof. In some embodiments, the terminal device is also capable of supporting any type of interface to the user, such as "wearable" circuitry, etc.
In some embodiments, rendering platform 120 is implemented as a cloud device and provides rendering services to users.
In some embodiments, user device 110 submits a rendering request for one or more rendering tasks to rendering platform 120 using an application such as a browser or client, and the request is sent to rendering platform 120 via a network. In some embodiments, the rendering request is an offline rendering request. Rendering platform 120 processes the offline rendering request using cloud rendering resources. Unlike conventional polling-based approaches to querying rendering results, in some embodiments, after rendering platform 120 processes the rendering request, rendering platform 120 notifies user device 110 of the rendering results via a proactive notification.
In some embodiments, the rendering task of user device 110 is associated with the rendering of a virtual object. As a particular embodiment, the user device 110 may generate an avatar and generate rendering tasks for the avatar, such as rendering a background of the avatar, setting a face shape, hairstyle, clothing, etc. of the avatar.
As yet another particular embodiment, the user device 110 may apply cartoon-style or retro-style processing to the virtual character. As yet another particular embodiment, the user device 110 renders the environment of the virtual character in a different display style, such as a rural style, a retro style, or the like. As yet another particular example, user device 110 may add virtual props, such as fireworks, raindrops, snowflakes, etc., to a virtual character.
In some embodiments, any setting of the user device 110 for the avatar may be accomplished through a cloud rendering task.
Further, different avatar images of different user devices 110 may be synthetically rendered. In a particular implementation, the avatar of the user device 110-1 may be rendered with the avatar of the user device 110-2. In this case, user device 110-1 submits a first rendering task to rendering platform 120 and user device 110-2 submits a second rendering task to rendering platform 120. The cloud rendering platform may perform the first rendering task and the second rendering task to synthetically render the avatar of the user device 110-1 and the avatar of the user device 110-2 and return rendering results to the user devices 110-1 and 110-2.
It should be understood that the particular scenario described above is for illustrative purposes only and should not be construed as limiting the scope of the present disclosure.
In this way, the rendering platform 120 may process the offline rendering request of the user device 110 and timely inform the corresponding user device 110 of the rendering result.
It should be appreciated that although in fig. 1A, the rendering platform 120 is illustrated as one device, in a real application scenario, the rendering platform 120 may include multiple servers, gateways, rendering engines, configuration centers, databases, cluster devices, and any software/hardware network elements/modules related to a cloud rendering service. Embodiments of the disclosure are not limited in this respect.
FIG. 1B illustrates a schematic diagram of another example environment 150 in which embodiments of the present disclosure may be implemented. Specifically, in environment 150, the rendering platform 120 of FIG. 1A is implemented as a plurality of network elements/modules. For ease of discussion, the environment 150 shown in FIG. 1B will be described with reference to the environment 100 of FIG. 1A. As shown in FIG. 1B, environment 150 includes user 151, user device 110-1, and rendering platform 120.
As shown in fig. 1B, rendering platform 120 includes M task schedulers 175-1 … … 175-k, … … 175-M, where M is an integer greater than 1, k=1, 2, … … M. For ease of discussion, task schedulers 175-1 … … 175-k, … … 175-M may be referred to collectively or individually as task scheduler 175. In some embodiments, each task scheduler 175 may maintain at least one rendering queue.
Rendering platform 120 also includes L rendering nodes 185-1 … … 185-k, … … 185-L, where L is an integer greater than 1, k=1, 2, … … L. For ease of discussion, the rendering nodes 185-1 … … 185-k, … … 185-L may be collectively or individually referred to as rendering nodes 185.
In some embodiments, rendering node 185 may include some modules, units, plug-ins, or the like, such that rendering node 185 may handle the corresponding rendering task. In some embodiments, an automation station, sometimes referred to as Automan, is configured in the processor of rendering node 185, which can load different engineering files to invoke rendering plug-ins to process different rendering tasks based on the type of rendering task. Additionally, in some embodiments, rendering node 185 is also configured with at least one rendering plug-in. Additionally, in some embodiments, rendering node 185 may also include third party applications, such as rendering plug-ins provided by third parties, and the like.
Further, as depicted in FIG. 1B, rendering platform 120 also includes resource scheduler 180. In some embodiments, the resource scheduler 180 synchronizes queue information with the task scheduler 175 and information of the rendering node with the rendering node 185. Based on the synchronized queue information and the information of the rendering node, the resource scheduler 180 may dynamically maintain a binding relationship between the rendering node 185 and the task scheduler 175.
Additionally, in some embodiments, rendering platform 120 further includes gateway 155 and task manager 170. Gateway 155 may receive a rendering request for at least one rendering task from user device 110-1. In some embodiments, gateway 155 may send rendering tasks to the corresponding task manager 170 based on the routing table. Task manager 170 may assign a task identification ID to each rendering task and return the task identification to user device 110-1 via gateway 155. In some embodiments, the task manager 170 may also store task information related to the rendering task to the entity database 165 and upload materials to be rendered of the rendering task, etc., to the object storage database 160.
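The submission path described above (assigning a task identification, storing task information, and uploading materials to be rendered) might be sketched as follows. The class and field names are assumptions, and in-memory dictionaries stand in for the entity database 165 and the object storage database 160:

```python
import itertools


class TaskManager:
    """Illustrative sketch of the task manager's submission path (names assumed)."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.entity_db = {}      # stands in for the entity database 165
        self.object_store = {}   # stands in for the object storage database 160

    def submit(self, task_type, payload, materials):
        # Assign a unique task identification (ID) to the rendering task.
        task_id = f"task-{next(self._ids)}"
        # Store task information related to the rendering task.
        self.entity_db[task_id] = {"type": task_type, "status": "PENDING",
                                   "payload": payload}
        # Upload the materials to be rendered.
        self.object_store[task_id] = materials
        # The task ID is returned to the user device via the gateway.
        return task_id
```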
Further, task manager 170 passes the rendering task to task scheduler 175 so that the rendering task may be pulled and processed by rendering node 185.
The task manager 170 may include some modules or units to better implement management of rendering tasks, such as a persistence module that assigns task identifications to rendering tasks, a state migration module that manages rendering task processing states, a file upload module that uploads materials to be rendered to the object store database 160, a task retry module that implements task failure processing, a distribution module for distributing rendering tasks, and a coordinator for coordinating task processing.
Additionally, rendering platform 120 may also include a status message queue 190. In some embodiments, the rendering node 185 pulls and processes rendering tasks from the respective rendering queues of the task scheduler 175, and further publishes status information of the rendering tasks to the status message queue 190. Status information may include, for example, success, failure, processing progress, and failure cause, and is sometimes referred to as a job processing event.
In some embodiments, task manager 170 may subscribe to/consume state information in state message queue 190 and maintain state transitions of rendering tasks based on the state information, such as determining whether the rendering task has completed.
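The state-transition maintenance described above might be sketched as follows. The state names, transition table, and event fields are assumptions for illustration, not part of the disclosure:

```python
# Allowed transitions of a rendering task's state (assumed for illustration).
VALID_TRANSITIONS = {
    "PENDING": {"RUNNING"},
    "RUNNING": {"SUCCEEDED", "FAILED", "RUNNING"},  # progress updates stay RUNNING
}


def apply_event(states, event):
    """Apply one status message to the task-state table; return True if applied.

    `states` maps task id to its current state; `event` is one job
    processing event consumed from the status message queue.
    """
    task_id, new_state = event["task_id"], event["state"]
    current = states.get(task_id, "PENDING")
    if new_state in VALID_TRANSITIONS.get(current, set()):
        states[task_id] = new_state
        return True
    return False  # invalid transition, e.g. updating a finished task
```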
In some embodiments, if task manager 170 determines that the processing of the rendering task has completed, task manager 170 may issue a rendering task completion event, sometimes referred to as a job completion event, to status message queue 190. In some embodiments, rendering task completion events may be subscribed to/consumed by notification center 192 and notified to the corresponding user device 110-1 via business service 193.
In some embodiments, notification center 192 may include a task completion handler to pull and issue task completion events. In addition, a remote procedure call (Remote Procedure Call, RPC) client and a hypertext transfer protocol (Hyper Text Transfer Protocol, HTTP) client may also be included to better enable reading task completion events from the status message queue 190.
Optionally, rendering platform 120 may also include a configuration center 194 to complete the initial configuration and/or subsequent dynamic configuration of rendering node 185.
It should be appreciated that fig. 1A and 1B illustrate only an example offline rendering task processing environment. In other embodiments, the offline rendering task processing environment may include more or fewer network elements/modules. In other words, neither of the network elements/modules shown in fig. 1A and 1B is necessary, which may be omitted according to actual needs. Furthermore, the number and connection relationships of the network elements/modules shown in fig. 1A and 1B are also merely illustrative and schematic. For example, in other embodiments, rendering platform 120 may include more task managers 170, task schedulers 175, and rendering nodes 185, among others. Embodiments of the disclosure are not limited in this respect.
Example procedure
Fig. 2 illustrates a schematic block diagram of a signaling flow 200 for processing offline rendering tasks, according to some embodiments of the present disclosure. For ease of discussion, discussion is with reference to environment 100 of FIG. 1A and environment 150 of FIG. 1B. Signaling flow 200 relates to user device 110-1, gateway 155, task manager 170, task scheduler 175, rendering node 185, resource scheduler 180, and status message queue 190.
It should be appreciated that while signaling flow 200 is discussed with reference to environment 100 of fig. 1A and environment 150 of fig. 1B, as has been discussed in this disclosure, environment 100 and environment 150 are merely example environments of this disclosure, and thus environment 100 and environment 150 should not be construed as limiting the environment in which signaling flow 200 is implemented. Further, while certain operations in signaling flow 200 occur between certain network elements/modules, in other embodiments, the corresponding operations may be implemented by other network elements/modules. Embodiments of the disclosure are not limited in this respect.
In the particular embodiment shown in fig. 2, the rendering tasks to be processed are carried in respective rendering queues, each being uniquely identified by an identification of the rendering queue. Additionally, in some embodiments, the rendering queues are managed using a first-in-first-out mode.
Additionally, in some embodiments, each rendering queue corresponds to a particular rendering task type. In other words, each rendering queue is used to carry rendering tasks of a particular rendering type.
In some embodiments, the resource scheduler 180 obtains (230) queue information for a rendering queue and information for rendering nodes that pull rendering tasks from the rendering queue. As shown in signaling flow 200, resource scheduler 180 obtains (230-1) queue information for the rendering queue from task scheduler 175, while obtaining (230-2) information for the rendering node from rendering node 185.
Next, a process of obtaining queue information of the rendering queue will be discussed first.
In some embodiments, rendering platform 120 may include a plurality of task schedulers 175 (also sometimes referred to as task scheduler instances, such as task schedulers 175-1 through 175-M shown in FIG. 1B), each task scheduler 175 being identified by a task scheduler identification, also sometimes referred to as a task scheduler service name.
Additionally, in some embodiments, each task scheduler 175 maintains multiple rendering queues and periodically synchronizes queue information with resource scheduler 180. In a particular embodiment, the task scheduler 175 invokes an interface of the resource scheduler 180 in a first cycle to provide queue information for the rendering queues it manages to the resource scheduler 180.
Alternatively or additionally, in another embodiment, task scheduler 175 may provide queue information for the rendering queues it manages to resource scheduler 180 in response to detecting a predetermined event. One example of a predetermined event is the detection that the queue length of the rendering queue is greater than a first reporting threshold length. Another example of a predetermined event is detecting that the queue length of the rendering queue is less than a second reporting threshold length.
The queue information may include any parameter used to determine the rendering queue workload. One example of queue information is the current queue length of the rendering queue, sometimes referred to as the queue waiting length. Another example of queue information is the rendering task types supported by the rendering queue. Yet another example of queue information is the average task insertion rate of the rendering queue, which may be an average of the insertion rates over the last N cycles. The queue information may also include other parameters, such as an identification of the rendering queue. It should be appreciated that the above examples are merely illustrative, and that in other embodiments, the queue information may include any other suitable queue parameters. Further, when task scheduler 175 provides queue information, it may combine any of the above example parameters. Embodiments of the disclosure are not limited in this respect.
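For illustration only, the queue information fields described above might be represented as follows, with the average task insertion rate computed over the last N reporting cycles. All field names are assumptions:

```python
from dataclasses import dataclass


@dataclass
class QueueInfo:
    """Queue information a task scheduler might report (field names assumed)."""
    queue_id: str          # identification of the rendering queue
    task_type: str         # rendering task type carried by this queue
    queue_length: int      # current queue (waiting) length
    avg_insert_rate: float  # mean insertion rate over the last N cycles


def average_insert_rate(samples, n):
    """Average of the most recent n per-cycle insertion-rate samples."""
    recent = list(samples)[-n:]
    return sum(recent) / len(recent) if recent else 0.0
```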
Through the above-described process, the resource scheduler 180 may dynamically obtain queue information for each of the rendering queues in the rendering platform 120. Next, the process by which the resource scheduler 180 obtains information of the rendering nodes will be discussed.
In some embodiments, rendering platform 120 includes a plurality of rendering nodes 185, such as rendering nodes 185-1 through 185-L shown in FIG. 1B. Each rendering node 185 is identified by a rendering node identification, sometimes referred to as a rendering node service name. In a particular embodiment, each rendering node 185 invokes an interface of the resource scheduler 180 in a second cycle to provide its own rendering information to the resource scheduler 180. In a particular embodiment, each rendering node 185 sends rendering information to the resource scheduler 180 in the form of a heartbeat.
One example of rendering information is rendering capability, which includes at least one rendering task type supported by the rendering node 185. For example, the rendering node 185 reports a rendering capability list, each item of which corresponds to a rendering task type name/index supported by the rendering node 185. Another example of rendering information is the processing capability of the rendering node 185 for each rendering task type it supports, for example, the processing duration and/or processing rate for a particular type of rendering task, such as the number of rendering tasks of that type processed per second. In a particular embodiment, the processing capability information is a statistical value obtained from historical data. In another particular embodiment, the processing capability information is the processing information from the last time the rendering node 185 processed a rendering task of the particular rendering type. Other examples of rendering information include: a rendering node identification, an Internet Protocol (IP) address, information of the rendering queue from which the rendering node 185 currently pulls rendering tasks, and so on.
It should be appreciated that the above examples are merely illustrative and exemplary, and that in other embodiments, the rendering information may include any other suitable parameters. Further, when rendering node 185 provides rendering information, rendering node 185 may combine any of the example parameters described above. Embodiments of the disclosure are not limited in this respect.
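The heartbeat rendering information described above might be assembled as in the following sketch; all field names are assumptions for illustration:

```python
import time


def build_heartbeat(node_id, ip, capabilities, rates, current_queue):
    """Sketch of the rendering information a node might send each heartbeat.

    `capabilities` is the list of supported rendering task types; `rates`
    maps each type to tasks processed per second (e.g. a statistical value
    obtained from historical data). Field names are assumed, not from the
    patent.
    """
    return {
        "node_id": node_id,            # rendering node identification
        "ip": ip,                      # IP address of the node
        "capabilities": list(capabilities),
        "processing_rates": dict(rates),
        "current_queue": current_queue,  # queue the node currently pulls from
        "timestamp": time.time(),
    }
```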
In this way, the resource scheduler 180 may dynamically obtain rendering information for each rendering node 185 in the rendering platform 120.
Further, the information of the rendering nodes obtained by the resource scheduler 180 includes the number of rendering nodes that currently pull rendering tasks from the rendering queue. For example, as discussed below, the resource scheduler 180 may dynamically provide association information, sometimes referred to as a binding relationship, that may indicate an association between the rendering node 185 and the at least one task scheduler 175. Alternatively or additionally, the association information may indicate an association between the rendering node 185 and at least one rendering queue. Based on the association information, the resource scheduler 180 may obtain the number of rendering nodes currently pulling rendering tasks from the rendering queue.
Alternatively, in other embodiments, the rendering node 185 may report information of a rendering queue in which the rendering node 185 currently pulls a rendering task, and the resource scheduler 180 may also obtain the number of rendering nodes currently pulling a rendering task from the rendering queue based on the reported information.
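As a sketch of the latter approach, the resource scheduler could derive the per-queue node count directly from the queue each node last reported. The heartbeat field name is a hypothetical illustration, not defined by the disclosure.

```python
from collections import Counter

def nodes_per_queue(heartbeats: list[dict]) -> Counter:
    """Count how many rendering nodes currently pull from each rendering
    queue, based on the queue reported in each node's last heartbeat."""
    return Counter(hb["current_queue"] for hb in heartbeats)

counts = nodes_per_queue([
    {"node_id": "n1", "current_queue": "queue-video"},
    {"node_id": "n2", "current_queue": "queue-video"},
    {"node_id": "n3", "current_queue": "queue-image"},
])
# counts["queue-video"] == 2
```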
In this way, the resource scheduler 180 may establish and maintain a panorama of bindings and matches between rendering queues and rendering nodes 185. Next, as shown in signaling 200, resource scheduler 180 may evaluate the workload of the rendering queue based on the obtained queue information and the information of the rendering node (235).
In some embodiments, the resource scheduler 180 determines a queue length of the rendering queue after a predetermined period of time based on the queue information and information of the rendering node.
Alternatively or additionally, in some embodiments, the resource scheduler 180 determines a length change trend of the rendering queue based on the queue information and the information of the rendering node. For example, the resource scheduler 180 may determine whether the length of the rendering queue is increasing or decreasing, and further determine the rate of increase or decrease.
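One simple way to realize both evaluations is a linear projection: the queue grows at the average task insertion rate and drains at the combined processing rate of the nodes bound to it. The following is an illustrative sketch under that assumption; the disclosure does not prescribe this formula.

```python
def evaluate_workload(queue_len: float, insert_rate: float,
                      node_rates: list, horizon: float):
    """Project the queue length after `horizon` seconds and its trend.

    `insert_rate` is the queue's average task insertion rate (tasks/s);
    `node_rates` are the processing rates of the nodes currently pulling
    from this queue, so their sum is the drain rate.
    """
    net_rate = insert_rate - sum(node_rates)
    predicted = max(0.0, queue_len + net_rate * horizon)
    if net_rate > 0:
        trend = "increasing"
    elif net_rate < 0:
        trend = "decreasing"
    else:
        trend = "stable"
    return predicted, trend, abs(net_rate)

# 100 queued tasks, 4 tasks/s arriving, two nodes draining 1.0 + 1.5 tasks/s
predicted, trend, speed = evaluate_workload(100, 4.0, [1.0, 1.5], horizon=60)
# predicted == 190.0, trend == "increasing"
```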
Based on the evaluated workload, the resource scheduler 180 may adjust (240) the number of rendering nodes that pull rendering tasks from the rendering queue.
In some embodiments, if the resource scheduler 180 determines that the queue length of the rendering queue after the predetermined period of time is greater than a first threshold length, it determines that the number of rendering nodes that pull rendering tasks from the rendering queue needs to be increased.
Alternatively or additionally, in some embodiments, if the resource scheduler 180 determines that the length change trend of the rendering queue is an increasing trend and the rate of increase is greater than a first threshold rate, it determines that the number of rendering nodes that pull rendering tasks from the rendering queue needs to be increased.
Accordingly, in some embodiments, if the resource scheduler 180 determines that the queue length of the rendering queue after the predetermined period of time is less than a second threshold length, it determines that the number of rendering nodes that pull rendering tasks from the rendering queue needs to be reduced.
Alternatively or additionally, in some embodiments, if the resource scheduler 180 determines that the length change trend of the rendering queue is a decreasing trend and the rate of decrease is greater than a second threshold rate, it determines that the number of rendering nodes that pull rendering tasks from the rendering queue needs to be reduced.
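The four conditions above can be collected into a single decision function. This is a hedged sketch: the threshold values and parameter names are hypothetical, and the disclosure allows either condition of each pair to be used alone.

```python
def decide_adjustment(predicted_len: float, trend: str, speed: float,
                      len_hi: float, len_lo: float,
                      rate_hi: float, rate_lo: float) -> str:
    """Decide whether the node count for a queue should change.

    Mirrors the conditions above: the projected queue length is compared
    to the first/second threshold lengths, and the trend's rate of change
    to the first/second threshold rates.
    """
    if predicted_len > len_hi or (trend == "increasing" and speed > rate_hi):
        return "increase"
    if predicted_len < len_lo or (trend == "decreasing" and speed > rate_lo):
        return "decrease"
    return "keep"

decision = decide_adjustment(190.0, "increasing", 1.5,
                             len_hi=150, len_lo=10, rate_hi=1.0, rate_lo=1.0)
# decision == "increase"
```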
In some embodiments, the resource scheduler 180 may provide (240-1) association information to the rendering node 185 based on the determined adjustment results, such that the rendering node 185 may pull rendering tasks from the specified rendering queue/task scheduler 175. In some embodiments, the resource scheduler 180 provides the rendering node 185 with an identification of the task scheduler 175 and/or an identification of the rendering queue, such as a rendering queue name/identification list. Additionally, in some embodiments, the identification of the rendering queue may be empty, in which case the task scheduler 175 may determine from which rendering queue the rendering node 185 pulls rendering tasks.
In some embodiments, the resource scheduler 180 may provide the association information to the rendering node 185 after determining that the association needs to be adjusted. Alternatively or additionally, in some embodiments, the resource scheduler 180 may periodically provide the association information to the rendering node 185. For example, the resource scheduler 180 may provide the association information to the rendering node 185 as a response message/return parameter after receiving the rendering information periodically reported by the rendering node 185.
Additionally, to prevent ping-pong effects, when the resource scheduler 180 determines that the number of rendering nodes that pull rendering tasks from the rendering queue needs to be adjusted, it does not trigger the adjustment immediately, but only when a certain trigger condition is satisfied. In a particular embodiment, the resource scheduler 180 increases the number of rendering nodes that pull rendering tasks from the rendering queue only when it has determined, more than a first threshold number of times, that the number of rendering nodes needs to be increased. Accordingly, the resource scheduler 180 reduces the number of rendering nodes that pull rendering tasks from the rendering queue only when it has determined, more than a second threshold number of times, that the number of rendering nodes needs to be reduced.
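A minimal sketch of this anti-ping-pong trigger, assuming the determinations must be consecutive (the class name and threshold value are illustrative only):

```python
class Debouncer:
    """Fire an adjustment only after the same non-neutral decision has
    been reached more than `threshold` consecutive times."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.last = None   # last non-"keep" decision seen
        self.count = 0     # consecutive occurrences of that decision

    def submit(self, decision: str) -> bool:
        if decision == "keep":
            self.last, self.count = None, 0   # neutral decision resets the run
            return False
        if decision == self.last:
            self.count += 1
        else:
            self.last, self.count = decision, 1
        return self.count > self.threshold

deb = Debouncer(threshold=2)
fired = [deb.submit("increase") for _ in range(3)]
# fired == [False, False, True]: only the third consecutive decision triggers
```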
Further, adjusting the number of rendering nodes that pull rendering tasks from the rendering queue may be constrained by certain predetermined conditions. One example constraint is a minimum queue length that triggers an increase in the number of rendering nodes. Another example constraint is a maximum queue length threshold that triggers a reduction in the number of rendering nodes. Other example constraints include an increase step size for the number of rendering nodes, a decrease step size for the number of rendering nodes, a maximum number of rendering nodes that pull rendering tasks from the rendering queue, and a minimum number of rendering nodes that pull rendering tasks from the rendering queue. In this way, scheduling of rendering resources may be implemented more flexibly and reasonably.
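The step-size and min/max constraints can be sketched as a clamp around the decision. This is an illustration under assumed parameter names, not the disclosure's exact rule:

```python
def clamp_adjustment(current: int, decision: str,
                     step_up: int, step_down: int,
                     n_min: int, n_max: int) -> int:
    """Apply the increase/decrease step sizes, bounded by the minimum and
    maximum number of rendering nodes allowed to pull from the queue."""
    if decision == "increase":
        return min(current + step_up, n_max)
    if decision == "decrease":
        return max(current - step_down, n_min)
    return current

# 4 nodes, increase step 2, but capped at 5 nodes -> 5
new_count = clamp_adjustment(4, "increase", step_up=2, step_down=1,
                             n_min=1, n_max=5)
```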
In one particular embodiment, if the queue length of the rendering queue at time T is greater than the minimum queue length that triggers an increase in the number of rendering nodes, and the growth in queue length from time T-1 to time T is greater than the first threshold rate, the number of rendering nodes to be added is determined as follows. First, a candidate value is computed as half the quotient of the difference between the queue length at time T and the queue length at time T-1 and the average task insertion rate. The number of rendering nodes to be added is then the candidate value clamped to be at least 1 and at most both the maximum number of rendering nodes that pull rendering tasks from the rendering queue and the queue length of the rendering queue at time T.
In another particular embodiment, if the queue length of the rendering queue at time T is greater than the minimum queue length that triggers an increase in the number of rendering nodes, and the queue length of the rendering queue at time T is greater than twice the average task insertion rate, then the number of rendering nodes to be added is determined to be the increase step size of the number of rendering nodes.
In another particular embodiment, if the queue length of the rendering queue at time T is less than the maximum queue length threshold that triggers a reduction in the number of rendering nodes, the difference between the current number of rendering nodes and the reduction step size is greater than the minimum number of rendering nodes that pull rendering tasks from the rendering queue, and the task insertion rate is less than the product of that difference and the average task processing rate, then the number of rendering nodes to be reduced is determined to be the reduction step size of the number of rendering nodes.
It should be appreciated that the particular manner in which the number of rendering nodes is adjusted described above is merely exemplary. In other embodiments, any other rule may be employed to adjust the number of rendering nodes. Embodiments of the disclosure are not limited in this respect.
Further, in other embodiments, the operation of adjusting the number of rendering nodes that pull rendering tasks from the rendering queue may be implemented by network elements other than the resource scheduler 180. For example, the resource scheduler 180 determines whether the number of rendering nodes that pull rendering tasks from the rendering queue needs to be adjusted and provides this information to another network element, such as the configuration center 194 in FIG. 1B. The configuration center 194 may generate a corresponding configuration file for the rendering node 185 such that the rendering node 185 may pull rendering tasks from the corresponding task scheduler 175/rendering queue.
In this way, different types of rendering tasks may be added to different rendering queues. Further, the rendering platform 120 may dynamically adjust rendering resources/rendering nodes according to the length of the rendering queue, thereby reducing processing latency of rendering tasks and improving resource efficiency of the system.
According to some implementations of the present disclosure, processing events of rendering tasks may also be managed through the status message queue 190, thereby enabling asynchronous processing of rendering tasks. In signaling 200, rendering node 185 pulls (245) the rendering task from task scheduler 175. In a particular embodiment, the rendering node 185 invokes an interface of the task scheduler 175 to pull rendering tasks. In a particular implementation, the rendering node 185 invokes an interface of the task scheduler 175 based on an association or a configuration file obtained from the configuration center 194. For example, rendering node 185 may provide task scheduler 175 with one or more of the following request parameters: the number of rendering tasks that are expected to be pulled, the identity of the rendering queue, the identity of the task scheduler 175, the rendering capabilities of the rendering node 185, etc. In response, task scheduler 175 may return the rendering task list of the target rendering queue to rendering node 185. Rendering node 185 performs (250) the corresponding rendering task based on the obtained rendering task list.
In some embodiments, the rendering node 185 downloads the corresponding project file according to the rendering task parameters to load the task project and perform the rendering task. Further, the rendering node 185 submits the rendering product file to the object storage database 160 after rendering is completed, and obtains storage information of the rendering product file, such as a uniform resource identifier URI.
In some embodiments, the rendering node 185 publishes (255) status information of the at least one rendering task to the status message queue 190. One example of the status information is processing result information of the rendering task, such as success of rendering task processing together with storage information of the rendering product file, or failure of rendering task processing together with the failure cause. Another example of the status information is processing progress information of the rendering task, such as a processing percentage of the rendering task, a remaining processing time to complete the rendering task, and the like.
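A sketch of such status publication, using an in-process queue as a stand-in for status message queue 190; the message fields, states, and URI are hypothetical illustrations, not defined by the disclosure.

```python
import json
import queue

status_mq = queue.Queue()  # stand-in for status message queue 190

def publish_status(node_id: str, task_id: str, state: str,
                   progress=None, result=None) -> None:
    """Publish a task status event (progress or result) to the queue."""
    status_mq.put(json.dumps({
        "node_id": node_id,
        "task_id": task_id,
        "state": state,        # e.g. "running", "success", "failed"
        "progress": progress,  # percentage complete, for progress events
        "result": result,      # e.g. product file URI, or a failure cause
    }))

publish_status("render-node-1", "task-42", "running", progress=60)
publish_status("render-node-1", "task-42", "success",
               result={"uri": "s3://bucket/out.mp4"})
```

A task manager subscribing to this queue would consume both event kinds and update its task state accordingly, as described below.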
In some embodiments, task manager 170 may subscribe to/consume state information in state message queue 190 and maintain state transitions of rendering tasks based on the state information (such as determining whether the rendering task has ended). As shown in signaling 200, task manager 170 reads (260) the status information from status message queue 190 and determines whether the rendering task has ended, such as determining that the maintained rendering task has failed to process or that the maintained rendering task has been successfully processed.
In some embodiments, if it is determined that the rendering task was successfully processed, the task manager 170 modifies the task state in its own database to processing successful. Optionally, when rendering task processing fails, the task manager 170 may also retry the failed rendering task. Specifically, the task manager 170 determines whether the number of retries of the rendering task exceeds a threshold number of retries. If the threshold number of retries is not exceeded, the rendering task is delivered to the task scheduler 175 again and the number of retries is incremented by one. If the threshold number of retries is exceeded, it is determined that the rendering task has ended and the processing result is failure. Accordingly, the task manager 170 modifies the task state in its own database to processing failed.
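The retry bookkeeping can be sketched as follows. The function and field names are hypothetical, and `redeliver` stands in for handing the task back to the task scheduler 175.

```python
def handle_result(task: dict, success: bool, max_retries: int, redeliver) -> str:
    """Update a task's state from a status message; failed tasks are
    re-delivered to the task scheduler until the retry budget runs out."""
    if success:
        return "success"
    if task["retries"] < max_retries:
        task["retries"] += 1
        redeliver(task)  # hand the task back to the task scheduler
        return "retrying"
    return "failed"

redelivered = []
task = {"id": "task-42", "retries": 0}
states = [handle_result(task, False, max_retries=2,
                        redeliver=redelivered.append)
          for _ in range(3)]
# states == ["retrying", "retrying", "failed"]: two retries, then final failure
```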
In some embodiments, if the task manager 170 determines (265) that the processing of the rendering task has completed, whether as a processing success or failure, the task manager 170 may issue a rendering task completion event, sometimes referred to as a job completion event, to the status message queue 190. For example, the task manager 170 composes a task success/failure message and invokes the corresponding service callback interface to issue the rendering task completion event to the status message queue 190.
In some embodiments, rendering task completion events may be subscribed to/consumed by notification center 192 and notified to the corresponding user device 110-1 via business service 193.
In this way, the user device 110-1 does not need to query the status of the rendering task by polling, which simplifies the processing logic of the user device 110-1.
Further, according to some implementations of the present disclosure, rendering platform 120 may include gateway 155. Gateway 155 may perform authentication of user 151 and distribution of rendering tasks, enabling rendering platform 120 to support massive numbers of rendering tasks.
In some embodiments, gateway 155 is deployed across multiple machine rooms as a stateless service, so that it can be conveniently scaled horizontally and can support routing and distribution of massive numbers of rendering tasks.
As shown in signaling 200, user device 110-1 generates (205) a rendering request that includes an identity of user 151 and may also include identification information of user device 110-1.
In some embodiments, a task protocol may define task parameter descriptions and rendering task types, and the user device 110-1 assembles the rendering task according to the task protocol, encapsulates the identity and the rendering task into a rendering request, and sends the rendering request to the gateway 155. In some embodiments, each rendering task type corresponds to a set of rendering engineering schemes.
Gateway 155 receives (210) the rendering request and authenticates (215) user 151 based on the identity. In some embodiments, gateway 155, upon receiving the rendering request, first parses out the identity and authenticates based on the identity, e.g., determines if the identity is truly valid. If the authentication fails, a message of authentication failure is returned to the user equipment 110-1. If the authentication is successful, distribution of rendering tasks is performed.
Additionally, in some embodiments, the rendering request may also include a further description of the rendering task for which it is requested. In an embodiment, the rendering request further includes rendering type information for the rendering task. Alternatively or additionally, in another embodiment, the rendering request further comprises priority information of the rendering task. Alternatively or additionally, in a further embodiment, the rendering request further comprises rendering processing requirement information of the rendering task. Alternatively or additionally, in other embodiments, the rendering request may also include other parameters such as business body information that initiates the rendering task. It should be appreciated that the above examples are merely illustrative and exemplary, and that in other embodiments, the rendering request may include any suitable parameters related to the distribution of rendering tasks. Embodiments of the disclosure are not limited in this respect.
In some embodiments, gateway 155 sends (220) the rendering task to task manager 170 based on the routing rules (e.g., querying a routing table). Additionally, when the rendering request includes further description for the rendering task, gateway 155 may also send the rendering task to task manager 170 based on the description information. In particular, gateway 155 may send rendering tasks to task manager 170 based on rendering type information, the priority information, or rendering processing requirements information.
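Such rule-based routing may be sketched as an ordered rule table with a shared default, as follows. The predicates, group names, and task fields here are hypothetical examples, not part of the disclosure.

```python
def route_task(task: dict, routes: list) -> str:
    """Pick a task-manager group via ordered routing rules.

    Each rule is a (predicate, manager_group) pair; the first matching
    rule wins, with a shared default group as the fallback.
    """
    for predicate, group in routes:
        if predicate(task):
            return group
    return "tm-shared"

routes = [
    # dedicated managers for heavy or high-priority business entities
    (lambda t: t.get("business") == "big-tenant", "tm-big-tenant"),
    (lambda t: t.get("priority", 0) >= 8, "tm-high-priority"),
    # dedicated managers for high-importance rendering types
    (lambda t: t.get("render_type") == "critical-compose", "tm-critical"),
]

group = route_task({"business": "big-tenant"}, routes)
# group == "tm-big-tenant"
```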
In some embodiments, gateway 155 may deploy a separate set of task management services, i.e., one or more task managers 170, for a business entity that has a large number of rendering tasks or a business entity that has a high priority, to ensure that task processing of other businesses is not affected.
Alternatively or additionally, gateway 155 may deploy a separate set of task management services, i.e., one or more task managers 170, for rendering task types of high importance, to ensure that task processing of other businesses is not affected.
In some embodiments, business entities with a small number of rendering tasks, lower-priority businesses, and rendering task types of lower importance may share a set of task management services, i.e., one or more task managers 170.
In some embodiments, when the task manager 170 receives a rendering task, it may assign a task identification to the rendering task. Further, the task manager 170 may return the task identification to the user device 110-1 via the gateway 155. In some embodiments, the task manager 170 may also store task information related to the rendering task in the entity database 165 and upload the materials to be rendered of the rendering task, etc., to the object storage database 160. Further, the corresponding task identification is also sent to the task scheduler 175. Task scheduler 175 posts the rendering task to the corresponding rendering queue to be pulled by rendering node 185.
With the present disclosure, the rendering platform 120 may dynamically adjust rendering resources, thereby reducing processing latency of rendering tasks and improving resource utilization of the system.
Example method embodiments
Fig. 3 illustrates a flow diagram of a process 300 for handling offline rendering tasks, implemented in accordance with some embodiments of the disclosure. For ease of discussion, discussion is with reference to environment 100 of FIG. 1A and environment 150 of FIG. 1B. Process 300 may be implemented at rendering platform 120.
It should be appreciated that while the method of process 300 is discussed with reference to environment 100 of fig. 1A and environment 150 of fig. 1B, as has been discussed in the present disclosure, environment 100 and environment 150 are merely example environments of the present disclosure, and thus environment 100 and environment 150 should not be construed as limiting the environment in which process 300 is implemented.
At block 310, the resource scheduler 180 obtains queue information for the rendering queue and information for rendering nodes that pull rendering tasks from the rendering queue. The rendering queue is used to carry at least one rendering task submitted by user 151.
At block 320, the resource scheduler 180 evaluates the workload of the rendering queue based on the queue information and the information of the rendering node.
At block 330, the resource scheduler 180 adjusts the number of rendering nodes 185 that pull rendering tasks from the rendering queue based on the workload.
In some embodiments, the queue information includes an identification of the rendering queue.
Alternatively or additionally, in some embodiments, the queue information includes a current queue length of the rendering queue.
Alternatively or additionally, in some embodiments, the queue information includes rendering task types supported by the rendering queue.
Alternatively or additionally, in some embodiments, the queue information includes an average task insertion rate of the rendering queue.
In some embodiments, the information of the rendering nodes includes the number of rendering nodes 185 that are currently pulling rendering tasks from the rendering queue.
Alternatively or additionally, in some embodiments, the information of the rendering nodes includes rendering capabilities of respective ones 185 of the rendering nodes, the rendering capabilities including at least one rendering task type supported by the respective rendering nodes 185.
Alternatively or additionally, in some embodiments, the information of the rendering nodes includes respective processing capabilities of the respective rendering node 185 for the supported at least one rendering task type.
In some embodiments, evaluating the workload of the rendering queue includes: determining, based on the queue information and the information of the rendering node, a queue length of the rendering queue after a predetermined period of time.
Alternatively or additionally, in some embodiments, evaluating the workload of the rendering queue includes: determining, based on the queue information and the information of the rendering node, a length change trend of the rendering queue.
In some embodiments, adjusting the number of rendering nodes 185 that pull rendering tasks from the rendering queue includes: in response to the queue length of the rendering queue after the predetermined period of time being greater than a first threshold length, determining that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be increased.
Alternatively or additionally, in some embodiments, adjusting the number of rendering nodes 185 that pull rendering tasks from the rendering queue includes: in response to the length change trend of the rendering queue being an increasing trend and the rate of increase being greater than a first threshold rate, determining that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be increased.
Alternatively or additionally, in some embodiments, adjusting the number of rendering nodes 185 that pull rendering tasks from the rendering queue includes: in response to the queue length of the rendering queue after the predetermined period of time being less than a second threshold length, determining that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be reduced.
Alternatively or additionally, in some embodiments, adjusting the number of rendering nodes 185 that pull rendering tasks from the rendering queue includes: in response to the length change trend of the rendering queue being a decreasing trend and the rate of decrease being greater than a second threshold rate, determining that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be reduced.
In some embodiments, adjusting the number of rendering nodes 185 that pull rendering tasks from the rendering queue further comprises: in response to determining, more than a first threshold number of times in succession, that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be increased, increasing the number of rendering nodes 185 that pull rendering tasks from the rendering queue.
Alternatively or additionally, in some embodiments, adjusting the number of rendering nodes 185 that pull rendering tasks from the rendering queue further comprises: in response to determining, more than a second threshold number of times in succession, that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be reduced, reducing the number of rendering nodes 185 that pull rendering tasks from the rendering queue.
In some embodiments, the rendering queue is maintained via task scheduler 175, and task scheduler 175 maintains the rendering queue and at least one other rendering queue.
In some embodiments, the process 300 further comprises: the association information is periodically provided to the rendering node 185. In some embodiments, the association information indicates an association between the rendering node 185 and the at least one task scheduler 175. Alternatively or additionally, in some embodiments, the association information indicates an association between the rendering node 185 and at least one rendering queue.
In some embodiments, the process 300 further comprises: the rendering node 185 pulls at least one rendering task based on the association information, and issues status information of the at least one rendering task to the status message queue 190, the status information including at least one of processing result information and processing progress information of the at least one rendering task.
In some embodiments, the process 300 further comprises: based on the status information in the status message queue 190, it is determined whether the processing of the at least one rendering task is completed, and in response to the processing of the at least one rendering task having been completed, a notification is issued to the user 151 submitting the at least one rendering task that the processing of the at least one rendering task has been completed.
In some embodiments, the process 300 further comprises: a rendering request submitted by the user 151 for at least one rendering task is received via the gateway 155, the rendering request comprising an identity of the user 151, and the user 151 is authenticated based on the identity.
In some embodiments, the rendering request further includes at least one of rendering type information of the at least one rendering task, priority information of the at least one rendering task, and rendering processing requirements information of the at least one rendering task, and the process 300 further includes: based on at least one of the preconfigured routing rules, the rendering type information, the priority information, and the rendering processing requirements information, gateway 155 sends at least one rendering task to task manager 170.
In some embodiments, the process 300 further comprises: each of the at least one rendering tasks is assigned a respective task identification by task manager 170.
In some embodiments, at least one rendering task is associated with a rendering task of a virtual object.
In some embodiments, the at least one rendering task is a cloud rendering task.
Fig. 4 illustrates a block diagram of an apparatus 400 for processing offline rendering tasks, according to some embodiments of the present disclosure. The apparatus 400 may be implemented as or included in the rendering platform 120. The various modules/components in apparatus 400 may be implemented in hardware, software, firmware, or any combination thereof.
As shown, the apparatus 400 includes an information obtaining module 410 configured to obtain queue information of a rendering queue and information of a rendering node that pulls a rendering task from the rendering queue by the resource scheduler 180. The rendering queue is used to carry at least one rendering task submitted by user 151. The apparatus 400 further comprises an evaluation module 420 configured to evaluate the workload of the rendering queue based on the queue information and the information of the rendering node by the resource scheduler 180. The apparatus 400 further includes an adjustment module 430 configured to adjust the number of rendering nodes 185 that pull rendering tasks from the rendering queue based on the workload by the resource scheduler 180.
In some embodiments, the queue information includes an identification of the rendering queue.
Alternatively or additionally, in some embodiments, the queue information includes a current queue length of the rendering queue.
Alternatively or additionally, in some embodiments, the queue information includes rendering task types supported by the rendering queue.
Alternatively or additionally, in some embodiments, the queue information includes an average task insertion rate of the rendering queue.
In some embodiments, the information of the rendering nodes includes the number of rendering nodes 185 that are currently pulling rendering tasks from the rendering queue.
Alternatively or additionally, in some embodiments, the information of the rendering nodes includes rendering capabilities of respective ones 185 of the rendering nodes, the rendering capabilities including at least one rendering task type supported by the respective rendering nodes 185.
Alternatively or additionally, in some embodiments, the information of the rendering nodes includes respective processing capabilities of the respective rendering node 185 for the supported at least one rendering task type.
In some embodiments, the evaluation module 420 is further configured to: determine, based on the queue information and the information of the rendering node, a queue length of the rendering queue after a predetermined period of time.
Alternatively or additionally, in some embodiments, the evaluation module 420 is further configured to: determine, based on the queue information and the information of the rendering node, a length change trend of the rendering queue.
In some embodiments, the adjustment module 430 is further configured to: determine, in response to the queue length of the rendering queue after the predetermined period of time being greater than a first threshold length, that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be increased.
Alternatively or additionally, in some embodiments, the adjustment module 430 is further configured to: determine, in response to the length change trend of the rendering queue being an increasing trend and the rate of increase being greater than a first threshold rate, that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be increased.
Alternatively or additionally, in some embodiments, the adjustment module 430 is further configured to: determine, in response to the queue length of the rendering queue after the predetermined period of time being less than a second threshold length, that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be reduced.
Alternatively or additionally, in some embodiments, the adjustment module 430 is further configured to: determine, in response to the length change trend of the rendering queue being a decreasing trend and the rate of decrease being greater than a second threshold rate, that the number of rendering nodes 185 that pull rendering tasks from the rendering queue needs to be reduced.
In some embodiments, the adjustment module 430 is further configured to: in response to the number of consecutive determinations that the number of rendering nodes 185 pulling rendering tasks from the rendering queue needs to be increased exceeding a first threshold number of times, increase the number of rendering nodes 185 that pull rendering tasks from the rendering queue.
Alternatively or additionally, the adjustment module 430 is further configured to: in response to the number of consecutive determinations that the number of rendering nodes 185 pulling rendering tasks from the rendering queue needs to be reduced exceeding a second threshold number of times, reduce the number of rendering nodes 185 that pull rendering tasks from the rendering queue.
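The threshold- and trend-based adjustment with consecutive-determination counting described above can be sketched as follows (class, method, and parameter names are hypothetical; the disclosure does not prescribe this structure). Requiring the same verdict several times in a row before acting dampens oscillation when the queue length hovers around a threshold.

```python
class NodeCountAdjuster:
    """Debounced scaling decision: act only after the same verdict repeats."""

    def __init__(self, max_len: float, min_len: float,
                 up_speed: float, down_speed: float,
                 up_times: int, down_times: int):
        self.max_len, self.min_len = max_len, min_len        # length thresholds
        self.up_speed, self.down_speed = up_speed, down_speed  # trend thresholds
        self.up_times, self.down_times = up_times, down_times  # debounce counts
        self._up_streak = self._down_streak = 0

    def decide(self, predicted_len: float, growth_rate: float) -> int:
        """Return +1 (add a rendering node), -1 (remove one), or 0 (no change)."""
        need_more = (predicted_len > self.max_len
                     or growth_rate > self.up_speed)
        need_fewer = (predicted_len < self.min_len
                      or -growth_rate > self.down_speed)
        self._up_streak = self._up_streak + 1 if need_more else 0
        self._down_streak = self._down_streak + 1 if need_fewer else 0
        if self._up_streak > self.up_times:
            self._up_streak = 0
            return +1
        if self._down_streak > self.down_times:
            self._down_streak = 0
            return -1
        return 0
```

With `up_times=2`, a predicted length above the threshold must be observed three evaluation rounds in a row before a node is added.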
In some embodiments, the rendering queue is maintained via a task scheduler 175, which maintains the rendering queue and at least one other rendering queue.
In some embodiments, the apparatus 400 further comprises: an association information providing module configured to periodically provide association information to the rendering node 185. In some embodiments, the association information indicates an association between the rendering node 185 and at least one task scheduler 175. Alternatively or additionally, in some embodiments, the association information indicates an association between the rendering node 185 and at least one rendering queue.
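A minimal sketch of periodically providing association information to rendering nodes (the callback names and the message shape are assumptions for illustration only; the disclosure does not fix a schedule or format). The first publishing round runs immediately, and subsequent rounds are driven by a timer.

```python
import threading

def start_association_publisher(get_associations, push, interval_s: float = 30.0):
    """Periodically push node -> scheduler/queue associations.

    get_associations() returns a mapping such as
        {"node-1": {"scheduler": "sched-a", "queues": ["q-high", "q-low"]}}
    push(node_id, info) delivers the association info to one rendering node.
    The first round runs immediately; later rounds run on a daemon timer.
    """
    def publish_round():
        for node_id, info in get_associations().items():
            push(node_id, info)
        timer = threading.Timer(interval_s, publish_round)
        timer.daemon = True   # do not keep the process alive for publishing
        timer.start()
    publish_round()
```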
In some embodiments, the apparatus 400 further comprises: a status notification module configured to pull, by the rendering node 185, at least one rendering task based on the association information and to issue status information of the at least one rendering task to the status message queue 190, the status information including at least one of processing result information and processing progress information of the at least one rendering task.
In some embodiments, the apparatus 400 further comprises: a notification module configured to determine, based on the status information in the status message queue 190, whether the processing of the at least one rendering task is completed, and, in response to the processing of the at least one rendering task having completed, issue a notification to the user 151 who submitted the at least one rendering task that the processing has completed.
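The completion check against the status message queue can be sketched as below (the message fields `task_id`, `user`, `progress`, and `result` are hypothetical; the disclosure does not fix a message format). The sketch drains all pending messages and notifies only for tasks that report completion.

```python
import queue

def drain_status_queue(status_q: "queue.Queue[dict]", notify) -> None:
    """Consume all pending status messages; call notify(user, task_id)
    for every rendering task whose processing has completed."""
    while True:
        try:
            msg = status_q.get_nowait()
        except queue.Empty:
            return
        # A task counts as done when it reports full progress or a result.
        if msg.get("progress") == 100 or "result" in msg:
            notify(msg["user"], msg["task_id"])
```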
In some embodiments, the apparatus 400 further comprises: an authentication module configured to receive, via the gateway 155, a rendering request submitted by the user 151 for at least one rendering task, the rendering request including an identity identifier of the user 151, and to authenticate the user 151 based on the identity identifier.
In some embodiments, the rendering request further includes at least one of rendering type information of the at least one rendering task, priority information of the at least one rendering task, and rendering processing requirement information of the at least one rendering task, and the apparatus 400 further includes: a task delivery module configured to send the at least one rendering task to the task manager 170 based on at least one of a pre-configured routing rule, the rendering type information, the priority information, and the rendering processing requirement information.
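Routing by pre-configured rules can be sketched as follows (the rule representation with `None` wildcards and the field names are assumptions for illustration; the disclosure does not prescribe a rule format):

```python
def route_task(task: dict, routing_rules: dict, default_manager: str) -> str:
    """Pick a task manager name from pre-configured routing rules.

    routing_rules maps (render_type, priority) keys to manager names;
    None in either key position acts as a wildcard. Rules are tried in
    insertion order, and the first match wins.
    """
    for (rtype, prio), manager in routing_rules.items():
        type_ok = rtype is None or rtype == task["render_type"]
        prio_ok = prio is None or prio == task["priority"]
        if type_ok and prio_ok:
            return manager
    return default_manager
```

For instance, a rule table may send all video tasks to one manager and all remaining high-priority tasks to another, falling back to a default manager otherwise.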
In some embodiments, the apparatus 400 further comprises: a persistence module configured to assign, by the task manager 170, a respective task identification to each of the at least one rendering task.
In some embodiments, the at least one rendering task is a rendering task associated with a virtual object.
In some embodiments, the at least one rendering task is a cloud rendering task.
Fig. 5 illustrates a block diagram of a computing device/system 500 in which one or more embodiments of the disclosure may be implemented. It should be understood that the computing device/system 500 illustrated in fig. 5 is merely exemplary and should not be construed as limiting the functionality and scope of the embodiments described herein. The computing device/system 500 illustrated in fig. 5 may be used to implement the client node 110 or the service node 120 of fig. 1.
As shown in fig. 5, computing device/system 500 is in the form of a general purpose computing device. Components of computing device/system 500 may include, but are not limited to, one or more processors or processing units 510, memory 520, storage 530, one or more communication units 540, one or more input devices 550, and one or more output devices 560. The processing unit 510 may be a real or virtual processor and is capable of performing various processes according to programs stored in the memory 520. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to increase the parallel processing capabilities of computing device/system 500.
Computing device/system 500 typically includes a number of computer storage media. Such media may be any available media that are accessible by computing device/system 500, including, but not limited to, volatile and non-volatile media, and removable and non-removable media. The memory 520 may be volatile memory (e.g., registers, cache, random access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory), or some combination thereof. Storage device 530 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium that is capable of storing information and/or data and that is accessible within computing device/system 500.
Computing device/system 500 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in fig. 5, a magnetic disk drive for reading from or writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data medium interfaces. Memory 520 may include a computer program product 525 having one or more program modules configured to perform the various methods or acts of the various embodiments of the present disclosure.
Communication unit 540 enables communication with other computing devices via a communication medium. Additionally, the functionality of the components of computing device/system 500 may be implemented as a single computing cluster or as multiple computing machines capable of communicating over a communication connection. Accordingly, the computing device/system 500 may operate in a networked environment using logical connections to one or more other servers, a network Personal Computer (PC), or another network node.
The input device 550 may be one or more input devices, such as a mouse, a keyboard, or a trackball. The output device 560 may be one or more output devices, such as a display, speakers, or a printer. Through communication unit 540, computing device/system 500 may also communicate, as needed, with one or more external devices (not shown), such as storage devices and display devices, with one or more devices that enable a user to interact with computing device/system 500, or with any device (e.g., a network card or a modem) that enables computing device/system 500 to communicate with one or more other computing devices. Such communication may be performed via an input/output (I/O) interface (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium is provided, on which computer-executable instructions or a computer program are stored, wherein the computer-executable instructions or the computer program are executed by a processor to implement the method described above.
According to an exemplary implementation of the present disclosure, there is also provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions that are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices, and computer program products implemented according to the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of implementations of the present disclosure has been provided for illustrative purposes, is not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various implementations described. The terminology used herein was chosen in order to best explain the principles of each implementation, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand each implementation disclosed herein.

Claims (18)

1. A method of processing an offline rendering task, comprising:
obtaining queue information of a rendering queue and information of a rendering node for pulling rendering tasks from the rendering queue, wherein the rendering queue is used for carrying at least one rendering task submitted by a user;
evaluating the workload of the rendering queue based on the queue information and the information of the rendering node; and
based on the workload, the number of rendering nodes that pull rendering tasks from the rendering queue is adjusted.
2. The method of claim 1, wherein the queue information comprises at least one of:
the identity of the rendering queue,
the current queue length of the rendering queue,
the rendering task types supported by the rendering queue, and
the average task insertion rate of the rendering queue.
3. The method of claim 1, wherein the information of the rendering node comprises at least one of:
the number of rendering nodes currently pulling rendering tasks from the rendering queue,
rendering capabilities of respective ones of the rendering nodes, the rendering capabilities including at least one rendering task type supported by the respective rendering node,
respective processing capabilities of the respective rendering node for the supported at least one rendering task type.
4. The method of claim 1, wherein evaluating the workload of the rendering queue comprises:
based on the queue information and the rendering node information, determining at least one of:
a queue length of the rendering queue after a predetermined period of time, and
a length change trend of the rendering queue.
5. The method of claim 4, wherein adjusting the number of rendering nodes that pull rendering tasks from the rendering queue comprises:
determining that the number of rendering nodes that pull rendering tasks from the rendering queue needs to be increased in response to at least one of:
the queue length of the rendering queue after the predetermined period of time is greater than a first threshold length;
the length change trend of the rendering queue is a growing trend and the growing speed is greater than a first threshold speed; and
determining that the number of rendering nodes that pull rendering tasks from the rendering queue needs to be reduced in response to at least one of:
the queue length of the rendering queue after the predetermined period of time is less than a second threshold length;
the length change trend of the rendering queue is a decreasing trend and a decreasing speed is greater than a second threshold speed.
6. The method of claim 5, wherein adjusting the number of rendering nodes that pull rendering tasks from the rendering queue further comprises:
in response to the number of consecutive determinations that the number of rendering nodes that pull rendering tasks from the rendering queue needs to be increased exceeding a first threshold number of times, increasing the number of rendering nodes that pull rendering tasks from the rendering queue; and
in response to the number of consecutive determinations that the number of rendering nodes that pull rendering tasks from the rendering queue needs to be reduced exceeding a second threshold number of times, reducing the number of rendering nodes that pull rendering tasks from the rendering queue.
7. The method of claim 1, wherein the rendering queue is maintained via a task scheduler that maintains the rendering queue and at least one other rendering queue.
8. The method of claim 1, further comprising:
periodically providing association information to the rendering node, the association information indicating:
an association between the rendering node and at least one task scheduler, and/or
an association between the rendering node and at least one rendering queue.
9. The method of claim 8, further comprising:
the rendering node pulls at least one rendering task based on the association information; and
issuing status information of the at least one rendering task to a status message queue, wherein the status information comprises at least one of processing result information and processing progress information of the at least one rendering task.
10. The method of claim 9, further comprising:
determining whether processing of the at least one rendering task is complete based on the status information in the status message queue; and
in response to the processing of the at least one rendering task having completed, a notification is issued to a user submitting the at least one rendering task that the processing of the at least one rendering task has completed.
11. The method of claim 1, further comprising:
receiving, via a gateway, a rendering request submitted by the user for at least one rendering task, wherein the rendering request comprises an identity identifier of the user; and
authenticating the user based on the identity identifier.
12. The method of claim 11, wherein the rendering request further comprises at least one of rendering type information of the at least one rendering task, priority information of the at least one rendering task, and rendering processing requirements information of the at least one rendering task, and wherein the method further comprises:
the gateway sends the at least one rendering task to a corresponding task manager based on at least one of a pre-configured routing rule, the rendering type information, the priority information, and the rendering processing requirement information.
13. The method of claim 12, further comprising:
each of the at least one rendering task is assigned a respective task identification by the task manager.
14. The method of claim 1, wherein the at least one rendering task is a rendering task associated with a virtual object.
15. The method of claim 1, wherein the at least one rendering task is a cloud rendering task.
16. An apparatus for processing offline rendering tasks, comprising:
the information acquisition module is configured to acquire queue information of a rendering queue and information of a rendering node for pulling rendering tasks from the rendering queue, wherein the rendering queue is used for carrying at least one rendering task submitted by a user;
an evaluation module configured to evaluate a workload of the rendering queue based on the queue information and the information of the rendering node; and
an adjustment module configured to adjust a number of rendering nodes that pull rendering tasks from the rendering queue based on the workload.
17. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, which, when executed by the at least one processing unit, cause the electronic device to perform the method of any one of claims 1 to 15.
18. A computer readable storage medium having stored thereon a computer program to be executed by a processor to implement the method of any of claims 1 to 15.
CN202211339002.4A 2022-08-10 2022-10-28 Method, apparatus, device and medium for processing offline rendering tasks Pending CN116107710A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210956659 2022-08-10
CN2022109566599 2022-08-10

Publications (1)

Publication Number Publication Date
CN116107710A 2023-05-12

Family

ID=86262064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211339002.4A Pending CN116107710A (en) 2022-08-10 2022-10-28 Method, apparatus, device and medium for processing offline rendering tasks

Country Status (1)

Country Link
CN (1) CN116107710A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116680086A (en) * 2023-07-25 2023-09-01 联通沃音乐文化有限公司 Scheduling management system based on offline rendering engine
CN116680086B (en) * 2023-07-25 2024-04-02 联通沃音乐文化有限公司 Scheduling management system based on offline rendering engine

Similar Documents

Publication Publication Date Title
US20210075876A1 (en) Push notification delivery system
CN108449410B (en) Message management method, system and related device in cloud platform
JP2018531465A6 (en) System and method for storing message data
JP2018532201A (en) System and method for transferring message data
JP2019535072A (en) System and method for providing messages to multiple subscribers
CN114363407B (en) Message service method and device, readable storage medium and electronic equipment
CN113220435A (en) Task processing method and related product
CN111163130A (en) Network service system and data transmission method thereof
CN111478781B (en) Message broadcasting method and device
CN112634122A (en) Cloud rendering method and system, computer equipment and readable storage medium
CN111200606A (en) Deep learning model task processing method, system, server and storage medium
CN116107710A (en) Method, apparatus, device and medium for processing offline rendering tasks
CN110727507B (en) Message processing method and device, computer equipment and storage medium
CN114610504A (en) Message processing method and device, electronic equipment and storage medium
CN109388501B (en) Communication matching method, device, equipment and medium based on face recognition request
Panarello et al. A big video data transcoding service for social media over federated clouds
CN113946389A (en) Federal learning process execution optimization method, device, storage medium, and program product
CN114064328A (en) Message queue cluster migration method and device
CN113472687A (en) Data processing method and device
CN110716809B (en) Method and device for scheduling cloud resources
CN115391053B (en) Online service method and device based on CPU and GPU hybrid calculation
Song et al. Utility adaptive service brokering mechanism for personal cloud service
US10091068B1 (en) System for modeling distributed systems
CN112565286A (en) Task response method, device, equipment and storage medium based on RPC access
CN112099976A (en) Government affair data exchange task scheduling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination