CN116450353A - Processor core matching method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN116450353A
CN116450353A (application number CN202310421243.1A)
Authority
CN
China
Prior art keywords
processor
core
priority
internal thread
matching
Prior art date
Legal status
Pending
Application number
CN202310421243.1A
Other languages
Chinese (zh)
Inventor
徐士立
陈晶晶
张其田
刘专
洪楷
Current Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Original Assignee
Shenzhen Tencent Network Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Tencent Network Information Technology Co Ltd filed Critical Shenzhen Tencent Network Information Technology Co Ltd
Priority claimed from CN202310421243.1A
Publication of CN116450353A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3058Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations
    • G06F11/3062Monitoring arrangements for monitoring environmental properties or parameters of the computing system or of the computing system component, e.g. monitoring of power, currents, temperature, humidity, position, vibrations where the monitored property is the power consumption
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5018Thread allocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/508Monitor
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Debugging And Monitoring (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a processor core matching method and device, an electronic device, and a storage medium. The method comprises: monitoring the running scene in which a process is currently running; acquiring the time consumed by each internal thread for image frame rendering of the process in the running scene; determining a matching priority between each internal thread and the processor cores of a terminal based on that time consumption, wherein the processor cores comprise at least one high-frequency processor big core and at least one low-frequency processor small core; and determining, based on the matching priority, a matching result between each internal thread and the processor cores, the matching result describing which internal threads of the process are preferentially matched with a processor big core and which are preferentially matched with a processor small core.

Description

Processor core matching method and device, electronic equipment and storage medium
The present application is a divisional application of the application filed on 2021.04.21 with application number 202110431231.8 and entitled "Method, apparatus, electronic device, and storage medium for a process to manage internal threads".
Technical Field
The application relates to the field of intelligent terminals, in particular to a processor core matching method, a device, electronic equipment and a storage medium.
Background
Intelligent terminals, which offer rich human-machine interaction, internet access capability, diverse operating systems, and strong processing capability, generally have chips with multiple processor cores. Currently mainstream chips carry at least one high-frequency processor big core and at least one low-frequency processor small core. A processor big core delivers stronger performance than a processor small core but also incurs higher power consumption. In the prior art, processes do not manage their internal threads rationally, so the performance and power consumption of the processor cores that handle those threads cannot be balanced as a whole.
Disclosure of Invention
An object of the present application is to provide a processor core matching method and apparatus, an electronic device, and a storage medium.
According to an aspect of an embodiment of the present application, a processor core matching method is disclosed, the method comprising:
monitoring the running scene in which a game process is currently running;
acquiring the time consumed by each internal thread for image frame rendering of the game process in the running scene;
determining a matching priority between each internal thread and the processor cores of a terminal based on the time consumed by each internal thread for image frame rendering of the game process in the running scene, wherein the processor cores comprise at least one high-frequency processor big core and at least one low-frequency processor small core;
and determining, based on the matching priority, a matching result between each internal thread and the processor cores, the matching result describing which internal threads of the game process are preferentially matched with a processor big core and which are preferentially matched with a processor small core.
According to an aspect of an embodiment of the present application, there is disclosed a processor core matching apparatus, the apparatus including:
the monitoring module is configured to monitor the running scene in which a game process is currently running;
the acquisition module is configured to acquire the time consumed by each internal thread for image frame rendering of the game process in the running scene;
the determining module is configured to determine a matching priority between each internal thread and the processor cores of the terminal based on the time consumed by each internal thread for image frame rendering of the game process in the running scene, wherein the processor cores comprise at least one high-frequency processor big core and at least one low-frequency processor small core;
and the matching module is configured to determine, based on the matching priority, a matching result between each internal thread and the processor cores, the matching result describing which internal threads of the game process are preferentially matched with a processor big core and which are preferentially matched with a processor small core.
In an exemplary embodiment of the present application, the apparatus is configured to:
deploying the running environment of the terminal inside a virtual machine and running the process in that environment;
and monitoring the terminal's running environment inside the virtual machine to acquire the time consumed by each internal thread for image frame rendering of the game process in the running scene.
In an exemplary embodiment of the present application, the apparatus is configured to:
dividing the internal threads into a first priority, a second priority and a third priority in descending order of the time each internal thread consumes for image frame rendering of the game process in the running scene;
wherein internal threads of the first priority are preferentially matched with processor big cores, internal threads of the third priority are preferentially matched with processor small cores, and internal threads of the second priority are matched with processor big cores only after all internal threads of the first priority have been matched with processor big cores.
In an exemplary embodiment of the present application, the apparatus is configured to:
and fixedly assigning the main thread and the rendering thread to the first priority, and fixedly assigning the data acquisition thread and the data reporting thread to the third priority.
In an exemplary embodiment of the present application, the apparatus is configured to:
and marking the thread name of each internal thread based on the category of its matched processor core, so that the terminal, taking the matching priority as a reference, schedules processor big cores and processor small cores to process each internal thread.
In an exemplary embodiment of the present application, the apparatus is configured to:
acquiring scheduling state information returned by the terminal, the scheduling state information describing which internal threads the processor big cores and processor small cores respectively process;
and adjusting the matching priority based on the scheduling state information.
In an exemplary embodiment of the present application, the apparatus is configured to:
uploading the scheduling state information to a process management end so that the process management end updates the matching priority in response to the scheduling state information;
and synchronizing the local matching priority with the updated matching priority fed back by the process management end.
According to an aspect of an embodiment of the present application, an electronic device is disclosed, including: a memory storing computer readable instructions; a processor that reads the computer readable instructions stored by the memory to perform any of the methods provided in the alternative implementations above.
According to an aspect of embodiments of the present application, a computer program medium is disclosed, having computer readable instructions stored thereon which, when executed by a processor of a computer, cause the computer to perform any of the methods provided in the alternative implementations above.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions are read from the computer-readable storage medium by a processor of a computer device, and executed by the processor, cause the computer device to perform the methods provided in the various alternative implementations described above.
In the embodiments of the application, how a process manages its internal threads is tied to the running scene in which the process is currently running. Specifically, through this method the processor cores that the terminal schedules for processing each internal thread are associated with the time each internal thread consumes for image frame rendering of the process in that scene; managing the internal threads in this way allows the real-time demands of the scene to be met in terms of image frame rendering time, and improves the rationality of thread management.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 illustrates a system architecture diagram for process management of internal threads according to one embodiment of the present application.
Fig. 2 illustrates an interaction flow between a process client and a terminal according to one embodiment of the present application.
FIG. 3 illustrates a flow chart of a method for a process to manage internal threads according to an embodiment of the present application.
Fig. 4 illustrates a scheduling request generation flow diagram according to an embodiment of the present application.
FIG. 5 illustrates a block diagram of an apparatus for a process to manage internal threads, according to an embodiment of the present application.
Fig. 6 shows a hardware diagram of an electronic device according to an embodiment of the application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art. The drawings are merely schematic illustrations of the present application and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more example embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application may be practiced without one or more of the specific details, or with other methods, components, steps, etc. In other instances, well-known structures, methods, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The application provides a method for a process to manage an internal thread. The process in the embodiment of the application adopts the method to interact with the terminal where the process is located, and the terminal dispatches the corresponding processor core to process the internal thread of the process on the basis of the interaction, so that the management of the internal thread is realized.
FIG. 1 illustrates a system architecture diagram for a process to manage internal threads in accordance with one embodiment of the present application.
Referring to fig. 1, in this embodiment, process management mainly involves a process client, a terminal 10, and a process management end 20.
The process client is an application installed in the terminal 10 to provide a corresponding service to the user, and the process runs inside the process client. The process management end 20 is typically a server that controls and manages the processes in process clients. The server may be an independent physical server, a server cluster or distributed system formed from multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms. The terminal 10 may be, but is not limited to, a smart phone, tablet, notebook computer, desktop computer, smart speaker, or smart watch. The terminal 10 and the server may be connected directly or indirectly through wired or wireless communication, and the present application is not limited in this respect.
The process in the process client mainly comprises the following functional modules: the system comprises a thread monitoring module, a data reporting module, a thread matching module and an interaction control module.
The thread monitoring module is mainly used for monitoring the creation and destruction of threads in the process and maintaining a list of currently running threads.
The thread matching module is mainly used for applying the processor core matching logic, according to the matching priority, to the internal threads in the current running thread list, and giving the matching result between the internal threads and the processor cores.
The interaction control module in the process client is mainly used for establishing communication with the terminal. While the process runs, it initiates big core matching requests and small core matching requests to the terminal 10 according to the matching result given by the thread matching module, and obtains the scheduling state information fed back by the terminal 10, which describes which internal threads the processor cores were actually scheduled to handle.
The data reporting module is mainly used for reporting the scheduling state information given by the interaction control module to the process management end 20 and receiving the adjusted matching priority issued by the process management end 20.
The terminal 10 mainly comprises the following functional modules: scheduling module, interactive control module.
The interaction control module of the terminal 10 is mainly used for receiving the large core matching request and the small core matching request sent by the process client and forwarding the received requests to the scheduling module.
The scheduling module is mainly used for scheduling the processor cores according to the received requests.
In one embodiment, the field semantics of the matching result given by the thread matching module are shown in Table 1 below.
TABLE 1 Field semantics of the matching result
It should be noted that the embodiment is only an exemplary illustration, and should not limit the functions and application scope of the present application.
Fig. 2 shows an interaction flow between a process client and a terminal according to an embodiment of the present application.
Referring to fig. 2, in this embodiment, the process client initiates a query request to the terminal to ask whether the terminal supports a big/small core scheduling policy for its processor cores.
The terminal returns the query result to the process client. If the big/small core scheduling policy is not supported, the flow ends. If it is supported, the terminal returns the number of processor big cores and the number of processor small cores to the process client.
Given the numbers of processor big cores and small cores, the process client determines the matching priority between each internal thread and the terminal's processor cores based on the time each internal thread consumes for image frame rendering of the process in the current running scene.
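In the interaction above, the terminal reports its numbers of processor big cores and small cores. The patent does not specify how a terminal derives those counts; a hypothetical heuristic on Linux-based terminals, sketched here purely as an assumption, partitions cores by their maximum frequencies (as exposed, for example, under /sys/devices/system/cpu/cpuN/cpufreq/cpuinfo_max_freq):

```python
def count_big_little(max_freqs_khz):
    """Partition cores into (big, small) counts by maximum frequency:
    cores at the highest frequency tier count as big cores, the rest
    as small cores. This split rule is an illustrative assumption,
    not the patent's method."""
    top = max(max_freqs_khz)
    big = sum(1 for f in max_freqs_khz if f == top)
    return big, len(max_freqs_khz) - big
```

On a typical 4+4 big.LITTLE part, `count_big_little([2800000]*4 + [1800000]*4)` yields `(4, 4)`.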
For an internal thread that is preferentially matched with a processor big core, the process client sends a big core scheduling request to the terminal, requesting that the terminal schedule a processor big core to process that thread. After receiving the big core scheduling request, the terminal returns scheduling state information for the processor big cores to the process client, so that the process client can determine for which internal threads the terminal actually scheduled a processor big core.
For an internal thread that is preferentially matched with a processor small core, the process client sends a small core scheduling request to the terminal, requesting that the terminal schedule a processor small core to process that thread. After receiving the small core scheduling request, the terminal returns scheduling state information for the processor small cores to the process client, so that the process client can determine for which internal threads the terminal actually scheduled a processor small core.
FIG. 3 illustrates a flow chart of a method for a process to manage internal threads provided by an embodiment of the present application, the method comprising:
step S310, monitoring the current running scene of the process;
step S320, obtaining time consumption of each internal thread in the running scene for image frame rendering of the process;
step S330, determining the matching priority of each internal thread and a processor core of the terminal based on time consumption of each internal thread for image frame rendering of a process in an operation scene, wherein the processor core comprises at least one high-frequency processor big core and at least one low-frequency processor small core;
step S340, interacting with the terminal based on the matching priority, so that the terminal, taking the matching priority as a reference, schedules processor big cores and processor small cores to process each internal thread.
In the embodiments of the application, a high-frequency processor big core is a processor core whose higher operating frequency gives it stronger processing capability, and a low-frequency processor small core is a processor core whose lower operating frequency gives it weaker processing capability.
In the embodiments of the application, the process monitors its running scene in real time while it runs. Running scenes are divided mainly according to the running state of the process. For example, during the running of a game process, the state in which the game interface is loading is classed as a first scene, the state of sharing a game interface with fewer than N other game characters as a second scene, and the state of sharing a game interface with N or more other game characters as a third scene, where N is a preset positive number.
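The three-scene division described above can be sketched as a small classifier. The function name, scene labels, and parameters are illustrative only; the patent fixes just the division criteria (the loading state and the character-count threshold N):

```python
def classify_scene(loading: bool, other_characters: int, n: int) -> str:
    """Map the game process state to one of the three running scenes.
    n is the preset positive number N from the text."""
    if loading:
        return "first_scene"    # game interface is loading
    if other_characters < n:
        return "second_scene"   # fewer than N other game characters
    return "third_scene"        # N or more other game characters
```

For example, with N = 5, a loading interface maps to the first scene, two visible characters to the second, and five or more to the third.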
In the embodiments of the application, while the process runs, the time each internal thread consumes for image frame rendering of the process in the current running scene is acquired, and the matching priority between each internal thread and the terminal's processor cores is determined on that basis. The process then interacts with the terminal based on the matching priority; through the interaction the terminal obtains the matching priority and, taking it as a reference, schedules processor big cores and processor small cores to process each internal thread.
In this way, how the process manages its internal threads is tied to its current running scene. Specifically, the method associates the processor cores that the terminal schedules for each internal thread with the time that thread consumes for image frame rendering in the scene, so managing the internal threads can meet the real-time demands of the scene in terms of image frame rendering time, which improves the rationality of thread management.
In one embodiment, the method provided herein is performed by a game process. While the game process runs, the game scene it is currently in is monitored, the time each internal thread consumes for image frame rendering of the game process in that scene is acquired, and the matching priority between each internal thread and the terminal's processor cores is determined on that basis. The process then interacts with the terminal based on the matching priority, so that the terminal, taking the matching priority as a reference, schedules processor big cores and processor small cores to process each internal thread.
In one embodiment, the method provided herein is performed by a drawing process. While the drawing process runs, the drawing scene it is currently in is monitored, the time each internal thread consumes for image frame rendering of the drawing process in that scene is acquired, and the matching priority between each internal thread and the terminal's processor cores is determined on that basis. The process then interacts with the terminal based on the matching priority, so that the terminal, taking the matching priority as a reference, schedules processor big cores and processor small cores to process each internal thread.
In one embodiment, the running environment of the terminal is deployed inside a virtual machine and the process is run in that environment, thereby simulating the behavior of the process when it runs on the terminal, including the time each internal thread consumes for image frame rendering of the process.
The terminal's running environment is then monitored inside the virtual machine to obtain the time each internal thread consumes for image frame rendering of the process in the running scene.
This embodiment has the advantage that simulating the process's behavior on the terminal inside a virtual machine associates the measured per-thread image frame rendering time with the terminal itself, improving the accuracy of the rendering time measurements.
In an embodiment, the internal threads are divided into a first priority, a second priority and a third priority in descending order of the time each thread consumes for image frame rendering of the process in the running scene. Internal threads of the first priority are preferentially matched with processor big cores, internal threads of the third priority are preferentially matched with processor small cores, and internal threads of the second priority are matched with processor big cores only after all internal threads of the first priority have been matched with processor big cores.
Preferentially matching a first-priority internal thread with a processor big core means that, as far as the terminal's big core resources allow, the terminal schedules processor big cores to process first-priority internal threads. Preferentially matching a third-priority internal thread with a processor small core means that, as far as the terminal's small core resources allow, the terminal schedules processor small cores to process third-priority internal threads. Matching second-priority internal threads with processor big cores only after the first priority means that, once the terminal has allocated processor cores for all first-priority internal threads, any idle processor big cores are scheduled, as far as possible, to process second-priority internal threads.
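The tiering and matching rules above can be sketched as follows. The tier sizes and thread names are assumptions for illustration; the patent fixes only the ordering by rendering time and the big-core/small-core preference rules:

```python
def assign_priorities(render_ms, n_first=2, n_third=2):
    """Split internal threads into three priority tiers by per-frame
    rendering time, highest first. The tier sizes (n_first, n_third)
    are illustrative assumptions."""
    ordered = sorted(render_ms, key=render_ms.get, reverse=True)
    first = ordered[:n_first]
    second = ordered[n_first:len(ordered) - n_third]
    third = ordered[len(ordered) - n_third:]
    return first, second, third

def match_cores(first, second, third, n_big):
    """First-priority threads claim big cores; second-priority threads
    take whatever big cores remain; the rest target small cores."""
    big = (first + second)[:n_big]
    small = [t for t in first + second + third if t not in big]
    return big, small
```

With sample rendering times of 9.0 ms for a main thread, 8.5 ms for a rendering thread, 4.0 and 2.0 ms for mid-tier threads, and under 1 ms for data threads, the first tier takes the big cores and, given a third big core, the heaviest second-tier thread takes it too.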
In one embodiment, the main thread and the rendering thread are fixedly assigned the first priority, i.e. they are preferentially matched with processor big cores regardless of which running scene the process is in. The data acquisition thread and the data reporting thread are fixedly assigned the third priority, i.e. they are preferentially matched with processor small cores regardless of which running scene the process is in.
This embodiment has the advantage that the terminal always preferentially schedules processor big cores for the main thread, the rendering thread, and other threads closely tied to the user experience, avoiding harm to the user experience while the process runs; and it always preferentially schedules processor small cores for the data acquisition thread, the data reporting thread, and other threads only loosely tied to the user experience, preventing them from occupying big core resources and further protecting the user experience.
In one embodiment, the process interacts with the terminal by establishing real-time communications with the terminal.
Specifically, in this embodiment, the process establishes real-time communication with the terminal, and sends a scheduling request to the terminal based on the determined matching priority. Therefore, after receiving the scheduling request, the terminal schedules the processor big core and the processor small core for processing each internal thread by taking the matching priority as a reference.
The advantage of this embodiment is that interacting over a real-time communication channel keeps thread management synchronized with the running scene.
Fig. 4 shows a scheduling request generation flow chart of an embodiment of the present application.
Referring to FIG. 4, in this embodiment the process monitors for new threads to identify those created between the last cycle and the current time. For new threads whose priority is dynamically tied to the running scene, the process determines the current running scene, measures how much time each new thread spends on image frame rendering in that scene, derives the thread's matching priority, and thereby decides whether the new thread is preferentially matched with a processor big core or a processor small core at the current time.
The process also traverses the old threads whose priority has already been determined and re-checks the current running scene, re-deciding whether each old thread is preferentially matched with a processor big core or small core at the current time.
Based on this monitoring of new threads and traversal of old threads, the process updates the scheduling list, generates a new scheduling list, and sends the terminal a corresponding scheduling request derived from it.
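The Fig. 4 cycle can be sketched as a single update function. This is a hypothetical illustration only: the `classify` callback, the dictionary shapes, and the idea of sending only changed entries in the request are assumptions, not details from the patent.

```python
def update_schedule(prev_schedule, live_threads, scene, render_cost_ms, classify):
    """prev_schedule: {thread: 'big'|'small'} from the last cycle.
    live_threads: threads alive now (old threads plus newly created ones).
    classify(thread, scene, cost) decides the preferred core class."""
    schedule = {}
    for name in live_threads:
        cost = render_cost_ms.get(name, 0.0)
        # Both newly monitored threads and traversed old threads are
        # (re)classified against the *current* running scene.
        schedule[name] = classify(name, scene, cost)
    # The scheduling request only needs to carry the entries that changed.
    request = {n: c for n, c in schedule.items() if prev_schedule.get(n) != c}
    return schedule, request
```

With a toy classifier that sends any thread spending more than 5 ms on rendering to a big core, a thread that was on a big core last cycle but now renders little (`io` below) is re-matched to a small core, and a newly created thread (`audio`) is classified for the first time; only those two entries appear in the request.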
In one embodiment, the process interacts with the terminal by marking the thread name.
Specifically, in this embodiment, after determining the matching priority the process marks the thread name of each internal thread according to the class of processor core it matches. When the terminal needs to schedule processor cores to process the internal threads, it determines each thread's matched core class by reading the thread name, and then schedules the corresponding processor cores to process their matched threads.
The advantage of this embodiment is that interacting by marking thread names reduces communication costs.
In one embodiment, for internal threads preferentially matched with a processor big core, the process marks the prefix field "Big" on the thread name; for internal threads preferentially matched with a processor small core, it marks the prefix field "Small". When the terminal needs to schedule processor cores to process the internal threads, it reads each thread name, extracts the prefix field, determines from it whether the thread is preferentially matched with a big core or a small core, and preferentially schedules the corresponding core to process the matched thread.
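The prefix convention amounts to a pair of trivial helpers, one on each side of the interaction. The "Big"/"Small" prefix strings come from the embodiment above; the helper names and the string-valued core classes are assumptions for illustration.

```python
def mark_thread_name(name, core_class):
    """Process side: prepend the prefix field for the matched core class."""
    return ("Big" if core_class == "big" else "Small") + name

def preferred_core(marked_name):
    """Terminal side: recover the preference by reading the thread name."""
    if marked_name.startswith("Big"):
        return "big"
    if marked_name.startswith("Small"):
        return "small"
    return None  # unmarked thread: no stated preference
```

So a rendering thread marked for a big core would be named "BigRenderThread", and the terminal recovers "big" from that name without any further communication from the process.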
In one embodiment, the process interacts with the terminal based on the matching priority, so that the terminal schedules the processor big cores and small cores to process each internal thread with the matching priority as a reference. However, due to limited terminal resources (for example, a limited number of processor cores, or other processes in the terminal occupying cores), or due to the terminal's scheduling policy (for example, a policy whose priority outranks the matching priority), the scheduling the terminal actually performs may deviate from the scheduling indicated by the matching priority.
The terminal generates scheduling state information according to the conditions of internal threads processed by the processor cores in actual scheduling, and returns the scheduling state information to the process.
After acquiring the scheduling state information, the process adjusts the matching priority based on it, and then interacts with the terminal using the adjusted matching priority, so that the terminal schedules the processor big cores and small cores to process each internal thread with the adjusted matching priority as its reference.
In one embodiment, the unadjusted matching priority in the current running scene describes: "thread 1, thread 2 and thread 3 each preferentially match a processor big core; thread 4 and thread 5 each preferentially match a processor small core".
After the process interacts with the terminal based on this unadjusted matching priority, the scheduling state information returned by the terminal shows that the terminal actually scheduled big cores to process thread 2 and thread 3, and small cores to process thread 1, thread 4 and thread 5.
Because thread 1 and thread 2 affect the user experience more than thread 3 does, the process adjusts the matching priority in the current running scene so that the adjusted priority describes: "thread 1 and thread 2 each preferentially match a processor big core; thread 3, thread 4 and thread 5 each preferentially match a processor small core". The process then interacts with the terminal based on the adjusted matching priority, so the terminal preferentially allocates big-core resources to thread 1 and thread 2, and small-core resources to thread 3, thread 4 and thread 5.
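The adjustment in this example can be sketched as follows: when the scheduling state information shows the terminal granted fewer big-core slots than were requested, keep the most experience-critical threads on big cores and demote the rest. The `impact` scores and the helper itself are assumptions made here, not part of the patent text.

```python
def adjust(requested_big, granted_big_count, impact):
    """requested_big: threads previously requested onto big cores.
    granted_big_count: big-core slots the terminal actually provided.
    impact: thread -> user-experience impact score (higher = more critical)."""
    ranked = sorted(requested_big, key=lambda t: impact[t], reverse=True)
    big = ranked[:granted_big_count]    # keep the most critical threads
    small = ranked[granted_big_count:]  # demote the rest to small cores
    return big, small
```

Applied to the example above, requesting threads 1-3 onto big cores while the terminal only granted two slots yields the adjusted description: threads 1 and 2 on big cores, thread 3 demoted to the small cores.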
It should be noted that the embodiment is only an exemplary illustration, and should not limit the functions and application scope of the present application.
In one embodiment, the matching priority is issued by the process manager that controls and manages the process.
After the process obtains the scheduling state information returned by the terminal, it uploads that information to the process management end, which updates the matching priority in response. The process management end then feeds the updated matching priority back to the process, and the process synchronizes its local matching priority accordingly.
The advantage of this embodiment is that synchronizing the matching priority with the process management end increases the flexibility of thread management.
FIG. 5 illustrates an apparatus for a process to manage internal threads, according to an embodiment of the present application, the apparatus comprising:
the monitoring module 410 is configured to monitor an operation scene where the process is currently located;
the obtaining module 420 is configured to obtain time consumption of each internal thread in the running scene for image frame rendering of the process;
a determining module 430, configured to determine, based on time consumption of each internal thread for rendering an image frame of a process in the operation scene, a matching priority of each internal thread and a processor core of a terminal, where the processor core includes at least one high-frequency processor big core and at least one low-frequency processor small core;
and the interaction module 440 is configured to interact with the terminal based on the matching priority, so that the terminal can schedule the processor big core and the processor small core for processing the internal threads by taking the matching priority as a reference.
In an exemplary embodiment of the present application, the apparatus is configured to:
deploying the running environment of the terminal in the virtual machine, and running a process in the running environment of the terminal;
and monitoring the running environment of the terminal in the virtual machine to acquire time consumption of each internal thread in the running scene for image frame rendering of the process.
In an exemplary embodiment of the present application, the apparatus is configured to:
dividing each internal thread into a first priority, a second priority and a third priority according to the sequence of time consumption from high to low of each internal thread on image frame rendering of a process in the operation scene, wherein the internal thread of the first priority is preferentially matched with the processor big core, the internal thread of the third priority is preferentially matched with the processor small core, and the internal thread of the second priority is preferentially matched with the processor big core after the internal threads of the first priority are all matched with the processor big core.
In an exemplary embodiment of the present application, the apparatus is configured to:
and establishing real-time communication with the terminal, and sending a scheduling request to the terminal based on the matching priority, so that the terminal schedules the processor big core and the processor small core for processing each internal thread by taking the matching priority as a reference.
In an exemplary embodiment of the present application, the apparatus is configured to:
and marking the thread name of each internal thread based on the category of the matched processor core, and scheduling the processor big core and the processor small core for processing each internal thread by the terminal by taking the matching priority as a reference.
In an exemplary embodiment of the present application, the apparatus is configured to:
acquiring scheduling state information returned by the terminal, wherein the scheduling state information is used for describing internal threads processed by the processor big core and the processor small core respectively;
and adjusting the matching priority based on the scheduling state information.
In an exemplary embodiment of the present application, the apparatus is configured to:
uploading the scheduling state information to a process management end so that the process management end responds to the scheduling state information to update the matching priority;
and synchronizing the matching priority with the updated matching priority fed back by the process management end.
An electronic device 50 according to an embodiment of the present application is described below with reference to fig. 6. The electronic device 50 shown in fig. 6 is merely an example and should not be construed as limiting the functionality and scope of use of the embodiments herein.
As shown in fig. 6, the electronic device 50 is in the form of a general purpose computing device. Components of electronic device 50 may include, but are not limited to: the at least one processing unit 510, the at least one memory unit 520, and a bus 530 connecting the various system components, including the memory unit 520 and the processing unit 510.
Wherein the storage unit stores program code that is executable by the processing unit 510 such that the processing unit 510 performs the steps according to various exemplary embodiments of the present invention described in the description of the exemplary methods described above in this specification. For example, the processing unit 510 may perform the various steps as shown in fig. 3.
The storage unit 520 may include readable media in the form of volatile storage units, such as Random Access Memory (RAM) 5201 and/or cache memory unit 5202, and may further include Read Only Memory (ROM) 5203.
The storage unit 520 may also include a program/utility 5204 having a set (at least one) of program modules 5205, such program modules 5205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 530 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 50 may also communicate with one or more external devices 600 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 50, and/or any device (e.g., router, modem, etc.) that enables the electronic device 50 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 550. An input/output (I/O) interface 550 is connected to the display unit 540. Also, electronic device 50 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 560. As shown, network adapter 560 communicates with other modules of electronic device 50 over bus 530. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 50, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a usb disk, a mobile hard disk, etc.) or on a network, and includes several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present application.
In an exemplary embodiment of the present application, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions, which, when executed by a processor of a computer, cause the computer to perform the method described in the method embodiment section above.
According to an embodiment of the present application, there is also provided a program product for implementing the method in the above method embodiments, which may employ a portable compact disc read only memory (CD-ROM) and comprise program code and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit, in accordance with embodiments of the present application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Furthermore, although the various steps of the methods herein are depicted in the accompanying drawings in a particular order, this is not required to either suggest that the steps must be performed in that particular order, or that all of the illustrated steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.

Claims (10)

1. A method of processor core matching, the method comprising:
monitoring the current running scene of the game process;
acquiring time consumption of each internal thread in the running scene for image frame rendering of a game process;
determining the matching priority of each internal thread and a processor core of a terminal based on the time consumption of each internal thread for image frame rendering of a game process in the operation scene, wherein the processor core comprises at least one high-frequency processor big core and at least one low-frequency processor small core;
and determining a matching result between each internal thread and the processor core based on the matching priority, wherein the matching result is used for describing the internal threads which are preferentially matched with the processor big core in the game process and the internal threads which are preferentially matched with the processor small core in the game process.
2. The method of claim 1, wherein obtaining the time consumed by each internal thread in the running scene for rendering the image frames of the game process comprises:
deploying the running environment of the terminal in the virtual machine, and running a process in the running environment of the terminal;
and monitoring the running environment of the terminal in the virtual machine to acquire time consumption of each internal thread in the running scene for image frame rendering of the game process.
3. The method of claim 1, wherein determining the matching priority of each internal thread to the processor core of the terminal based on the time consuming rendering of the image frames of the game process by each internal thread in the operational scenario comprises: dividing each internal thread into a first priority, a second priority and a third priority according to the sequence of time consumption from high to low of image frame rendering of the game process of each internal thread in the running scene;
determining a matching result between the internal threads and the processor core based on the matching priority, including: the internal threads of the first priority are preferentially matched with the processor big core, the internal threads of the third priority are preferentially matched with the processor small core, and the internal threads of the second priority are preferentially matched with the processor big core after the internal threads of the first priority are matched with the processor big core.
4. A method according to claim 3, characterized in that the method further comprises:
and fixedly dividing a main thread and a rendering thread into the first priority, and fixedly dividing a data acquisition thread and a data reporting thread into the third priority.
5. The method of claim 1, wherein after determining the matching priority of the internal threads to the processor cores of the terminal, the method further comprises:
and marking the thread name of each internal thread based on the category of the matched processor core, and scheduling the processor big core and the processor small core for processing each internal thread by the terminal by taking the matching priority as a reference.
6. The method according to claim 1, wherein the method further comprises:
acquiring scheduling state information returned by the terminal, wherein the scheduling state information is used for describing internal threads processed by the processor big core and the processor small core respectively;
and adjusting the matching priority based on the scheduling state information.
7. The method of claim 6, wherein the method further comprises:
uploading the scheduling state information to a process management end so that the process management end responds to the scheduling state information to update the matching priority;
and synchronizing the matching priority with the updated matching priority fed back by the process management end.
8. A processor core matching apparatus, the apparatus comprising:
the monitoring module is configured to monitor the current running scene of the game process;
the acquisition module is configured to acquire time consumption of each internal thread in the running scene for image frame rendering of the game process;
the determining module is configured to determine the matching priority of each internal thread and the processor core of the terminal based on the time consumption of each internal thread for image frame rendering of the game process in the running scene, wherein the processor core comprises at least one high-frequency processor big core and at least one low-frequency processor small core;
and the matching module is configured to determine a matching result between each internal thread and the processor core based on the matching priority, wherein the matching result is used for describing the internal thread which is preferentially matched with the processor big core in the game process and the internal thread which is preferentially matched with the processor small core in the game process.
9. An electronic device, comprising:
a memory storing computer readable instructions;
a processor reading computer readable instructions stored in a memory to perform the method of any one of claims 1-7.
10. A computer readable storage medium having stored thereon computer readable instructions which, when executed by a processor of a computer, cause the computer to perform the method of any of claims 1-7.
CN202310421243.1A 2021-04-21 2021-04-21 Processor core matching method and device, electronic equipment and storage medium Pending CN116450353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310421243.1A CN116450353A (en) 2021-04-21 2021-04-21 Processor core matching method and device, electronic equipment and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310421243.1A CN116450353A (en) 2021-04-21 2021-04-21 Processor core matching method and device, electronic equipment and storage medium
CN202110431231.8A CN113204425B (en) 2021-04-21 2021-04-21 Method, device, electronic equipment and storage medium for process management internal thread

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202110431231.8A Division CN113204425B (en) 2021-04-21 2021-04-21 Method, device, electronic equipment and storage medium for process management internal thread

Publications (1)

Publication Number Publication Date
CN116450353A true CN116450353A (en) 2023-07-18

Family

ID=77027700

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310421243.1A Pending CN116450353A (en) 2021-04-21 2021-04-21 Processor core matching method and device, electronic equipment and storage medium
CN202110431231.8A Active CN113204425B (en) 2021-04-21 2021-04-21 Method, device, electronic equipment and storage medium for process management internal thread

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202110431231.8A Active CN113204425B (en) 2021-04-21 2021-04-21 Method, device, electronic equipment and storage medium for process management internal thread

Country Status (1)

Country Link
CN (2) CN116450353A (en)


Also Published As

Publication number Publication date
CN113204425A (en) 2021-08-03
CN113204425B (en) 2023-04-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination