CN113132263B - Kernel processor scheduling method, kernel processor scheduling device and storage medium - Google Patents


Info

Publication number
CN113132263B
Authority
CN
China
Prior art keywords
application
core processor
processor
designated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010040360.XA
Other languages
Chinese (zh)
Other versions
CN113132263A (en)
Inventor
刘杨 (Liu Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202010040360.XA priority Critical patent/CN113132263B/en
Publication of CN113132263A publication Critical patent/CN113132263A/en
Application granted granted Critical
Publication of CN113132263B publication Critical patent/CN113132263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/24Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2475Traffic characterised by specific attributes, e.g. priority or QoS for supporting traffic characterised by the type of applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/131Protocols for games, networked simulations or virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer And Data Communications (AREA)

Abstract

The disclosure relates to a core processor scheduling method, a core processor scheduling device and a storage medium. The core processor scheduling method is applied to a terminal that includes a multi-core processor, and comprises the following steps: detecting an application running based on a network connection; and, when a designated application is detected, migrating the soft interrupts of the designated application's network transmission data packets to a designated core processor, where the designated core processor is capable of processing soft-interrupt data throughput greater than a preset throughput threshold. With this method and device, the application can run smoothly and stuttering is avoided.

Description

Kernel processor scheduling method, kernel processor scheduling device and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular to a core processor scheduling method, a core processor scheduling device, and a storage medium.
Background
Currently, there is a class of applications, such as network game applications, that must run in a network environment and place extremely high demands on the network when running on a terminal. When such an application runs on a terminal and the processor's wireless fidelity (Wireless Fidelity, WiFi) data throughput and WiFi data-processing speed cannot meet the application's requirements, the application runs slowly over the network connection and the displayed application picture stutters, which degrades the user experience.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a core processor scheduling method, a core processor scheduling device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a core processor scheduling method applied to a terminal, the terminal including a multi-core processor and having installed on it an application that runs based on a network connection. The core processor scheduling method includes: detecting an application running based on a network connection; and, when the designated application is detected, migrating the soft interrupts of the designated application's network transmission data packets to a designated core processor, where the designated core processor is capable of processing soft-interrupt data throughput greater than a preset throughput threshold.
In an example, detecting an application running based on a network connection includes: acquiring the task process at the top of the currently running task stack; and determining that the designated application is detected when the stack-top task process is a task process of the designated application.
In an example, the task process at the top of the currently running task stack is acquired periodically.
In one example, migrating the soft interrupts of the designated application's network transmission data packets to the designated core processor upon detecting the designated application includes: delivering a message that the designated application is running to the underlying driver when the designated application is detected; and migrating, by the underlying driver, the soft interrupts of the designated application's network transmission data packets to the designated core processor.
In one example, the designated application is a gaming application.
In an example, the multi-core processor includes large core processors and small core processors, and the designated core processor is a designated large core processor.
According to a second aspect of the embodiments of the present disclosure, there is provided a core processor scheduling apparatus applied to a terminal, the terminal including a multi-core processor and having installed on it an application that runs based on a network connection. The core processor scheduling apparatus includes: a detection unit configured to detect an application running based on a network connection; and a processing unit configured to, when the designated application is detected, migrate the soft interrupts of the designated application's network transmission data packets to a designated core processor, where the designated core processor is capable of processing soft-interrupt data throughput greater than a preset throughput threshold.
In an example, the core processor scheduling apparatus further includes: an acquisition unit configured to acquire the task process at the top of the currently running task stack. The detection unit detects an application running based on a network connection as follows: when the stack-top task process is a task process of the designated application, it is determined that the designated application is detected.
In an example, the task process at the top of the currently running task stack is acquired periodically.
In an example, the core processor scheduling apparatus further includes: a delivery unit configured to deliver the message that the designated application is running.
The processing unit migrates the soft interrupts of the designated application's network transmission data packets to the designated core processor as follows: when the detection unit detects the designated application, the delivery unit delivers the message that the designated application is running to the underlying driver; and the underlying driver migrates the soft interrupts of the designated application's network transmission data packets to the designated core processor.
In one example, the designated application is a gaming application.
In an example, the multi-core processor includes large core processors and small core processors, and the designated core processor is a designated large core processor.
According to a third aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by a processor, perform the core processor scheduling method of the first aspect or any example of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a core processor scheduling apparatus including: a memory configured to store instructions; and a processor configured to invoke the instructions to perform the core processor scheduling method of the foregoing first aspect or any example of the first aspect.
The technical solution provided by the embodiments of the present disclosure can have the following beneficial effects: an application running based on a network connection is detected, and when the designated application is detected, the soft interrupts of the designated application's network transmission data packets are migrated to a designated core processor, the designated core processor being capable of processing soft-interrupt data throughput greater than a preset throughput threshold. Because the soft interrupts of the designated application's network transmission data packets are migrated to a core processor with strong processing capability, an application running based on a network connection can run smoothly, improving the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flowchart illustrating a core processor scheduling method according to an exemplary embodiment.
FIG. 2 is a flowchart illustrating a core processor scheduling method according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating a core processor scheduling method according to an exemplary embodiment.
FIG. 4 is a diagram illustrating a core processor scheduling method according to an exemplary embodiment.
FIG. 5 is a graph of the processing latency of a designated application without the core processor scheduling method of embodiments of the present disclosure.
FIG. 6 is a graph of the processing latency of a designated application with the core processor scheduling method of embodiments of the present disclosure applied.
FIG. 7 is a block diagram illustrating a core processor scheduling apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram of an apparatus according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The technical solution of the exemplary embodiments of the present disclosure can be applied to scenarios in which an application using a WiFi connection runs on a multi-core terminal. In the exemplary embodiments described below, the terminal is sometimes also referred to as an intelligent terminal device. The terminal may be a mobile terminal, also referred to as User Equipment (UE), a Mobile Station (MS), etc.; it is a device that provides voice and/or data connectivity to a user, or a chip disposed in such a device, for example a handheld or in-vehicle device with a wireless connection function. Examples of terminals include: a mobile phone, a tablet computer, a notebook computer, a palm computer, a mobile internet device (Mobile Internet Devices, MID), a wearable device, a Virtual Reality (VR) device, an Augmented Reality (AR) device, a wireless terminal in industrial control, a wireless terminal in unmanned operation, a wireless terminal in teleoperation, a wireless terminal in a smart grid, a wireless terminal in transportation security, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like.
At present, as the performance of terminal central processing units (central processing unit, CPU) continues to improve and the number of core processors grows, the heat generation and power consumption of the terminal also rise significantly. To meet the requirements of high CPU performance and low power consumption at the same time, terminal CPU manufacturers have begun to design the core processors of a multi-core CPU as large core processors and small core processors that divide the data-processing work between them. A core processor with high processing capability and high processing speed is called a large core processor, and a core processor with lower processing capability and lower processing speed is called a small core processor.
In the related art, because network transmission data packets are processed in the form of soft interrupts, the soft interrupts for network transmission data packets are run on a large core processor of the CPU only when the traffic exceeds 100M. However, some network-connected applications transmit little traffic while running yet place high real-time requirements on the network environment. When the data of such an application is processed on a small core processor, the application runs slowly and the displayed application picture stutters, which affects the user experience.
For example, network game applications place very high demands on the network environment, but while a network game is running its network transmission data throughput is far below the 100M threshold. Processing the network game on a small core processor therefore causes the game to run unsmoothly and the displayed picture to stutter.
Therefore, how to increase the processing speed of applications that place high demands on the network environment but have small network transmission data throughput, so that such network-connected applications run smoothly, is an urgent problem to be solved.
Fig. 1 is a flowchart illustrating a core processor scheduling method according to an exemplary embodiment. As shown in fig. 1, the method is applied in a terminal including a multi-core processor and includes the following steps.
In step S11, an application running based on the network connection is detected.
The network referred to in the present disclosure may be a data network provided by a mobile operator or a WiFi network; the embodiments of the present disclosure are not limited in this respect. The application based on a network connection may be one running in the foreground of the terminal, such as a network game.
In step S12, when the designated application is detected, the soft interrupts of the designated application's network transmission data packets are migrated to the designated core processor, the designated core processor being capable of processing soft-interrupt data throughput greater than a preset throughput threshold.
The designated application in the present disclosure may be an application of a preset designated type, or a specific designated application within a designated type.
In one embodiment, an application running based on a network connection is detected, and when that application is determined to be the designated application, the soft interrupts of its network transmission data packets are migrated to the designated core processor.
The designated core processor is one of the cores of the multi-core processor and is capable of processing soft-interrupt data throughput greater than a preset throughput threshold.
In one embodiment, the designated core processor is a large core processor among the multi-core processors.
For example, consider an 8-core processor with 4 large core processors and 4 small core processors. The designated core processor is a designated large core processor among the 8 cores.
In the exemplary embodiment of the present disclosure, an application running based on a network connection is detected, and when the designated application is detected, the soft interrupts of the designated application's network transmission data packets are migrated to the designated core processor, where the designated core processor is capable of processing soft-interrupt data throughput greater than a preset throughput threshold. Because the soft interrupts are migrated to a core processor with strong processing capability, the network-connected application can run smoothly, improving the user experience.
FIG. 2 is a flowchart illustrating a method of kernel processor scheduling, according to an example embodiment. As shown in fig. 2, step S11 shown in fig. 1 includes the following steps.
In step S111, the task process at the top of the currently running task stack is acquired.
A stack is a data structure: a special linear list into which elements can be inserted and from which they can be deleted at one end only. It stores data on a first-in, last-out basis; the first element pushed sits at the bottom of the stack, the most recent element is at the top, and data is popped from the top when it needs to be read.
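The first-in, last-out behavior just described can be sketched with a standard stack; the following plain-Java snippet (using `java.util.ArrayDeque`, chosen here only for illustration) shows that the element read from the top is always the one pushed most recently:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    /** Pushes the items in order and returns what is then on top. */
    public static String topAfterPushes(String... items) {
        Deque<String> stack = new ArrayDeque<>();
        for (String item : items) {
            stack.push(item);      // each new element lands on the top
        }
        return stack.peek();       // reading takes the most recent push
    }

    public static void main(String[] args) {
        // The launcher was started first, the game last, so the game is on top.
        System.out.println(topAfterPushes("launcher", "browser", "game")); // prints "game"
    }
}
```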
In the present disclosure, these characteristics of the stack are used to obtain the task process at the top of the currently running task stack (the Running task stack), and the application running in the foreground is determined from that stack-top task process.
For example, in the present disclosure, a running-task standard interface may be used to obtain the task process at the top of the Running task stack, and it can then be judged whether that stack-top task process is a task process of the designated application.
For example, the application running in the foreground may be obtained in the user space of the operating system as follows:
the Running task stack is acquired; if the current Running task stack is not empty, the activity at the top of the stack, i.e. the stack-top task process, is obtained, the package name of that activity is read, and whether the running application is the designated application is then determined from the acquired package name.
By this method, the application running in the foreground can be acquired, and based on the acquired foreground application it can be determined whether that application is the designated application.
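The patent text does not reproduce the user-space code itself, so the following is only a minimal plain-Java sketch of the stack-top check it describes. The class and method names (`ForegroundDetector`, `isDesignatedApp`) and the use of a `Deque` of package names in place of Android's real Running task stack are assumptions made here for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Set;

public class ForegroundDetector {
    /**
     * Models the check described above: if the running task stack is not
     * empty, read the package name of the stack-top task and report whether
     * it belongs to one of the designated applications.
     */
    public static boolean isDesignatedApp(Deque<String> runningTaskStack,
                                          Set<String> designatedPackages) {
        if (runningTaskStack.isEmpty()) {
            return false;          // no foreground task to inspect
        }
        String topPackage = runningTaskStack.peek();
        return designatedPackages.contains(topPackage);
    }

    public static void main(String[] args) {
        Deque<String> stack = new ArrayDeque<>();
        stack.push("com.example.launcher");
        stack.push("com.example.netgame"); // now the foreground task
        Set<String> designated = Set.of("com.example.netgame");
        System.out.println(isDesignatedApp(stack, designated)); // prints "true"
    }
}
```

In a real Android implementation the stack-top package would come from the platform's running-task interface rather than a hand-built `Deque`.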
In step S112, when the stack-top task process is a task process of the designated application, it is determined that the designated application is detected.
In the present disclosure, the task process at the top of the running application stack is obtained, and it is judged whether this task process is a task process of the designated application; if it is, it is determined that the designated application is detected.
In the exemplary embodiment of the present disclosure, the task process at the top of the running task stack is acquired, it is determined whether this task process is a task process of the designated application, and the designated application is determined to be detected when it is. In this way a running designated application can be discovered in time, which in turn guarantees the processing speed of the designated application's subsequent data.
FIG. 3 is a flowchart illustrating a core processor scheduling method according to an exemplary embodiment. As shown in fig. 3, step S12 shown in fig. 1 includes the following steps.
In step S121, upon detection of the designated application, a message that the designated application is running is delivered to the underlying driver.
In the present disclosure, when the designated application is detected, the message that the designated application is running is delivered to the underlying driver, which may be implemented, for example, as follows:
upon detecting an application running on a network connection and determining that it is the designated application, an application-layer event, i.e. the event that the designated application is running, is passed to the underlying driver.
After the underlying driver receives the event passed down by the application layer, it changes its original driving strategy for the designated application according to that event and migrates the soft interrupts of the designated application's network transmission data packets to the designated core processor, i.e. a large core processor with strong processing capability.
In step S122, the soft interrupts of the designated application's network transmission data packets are migrated to the designated core processor by the underlying driver.
In the present disclosure, after receiving the message that the designated application is running, the underlying driver may migrate the soft interrupts of the designated application's network transmission data packets to the designated core processor using an affinity method.
Because affinity is a CPU scheduling attribute, processing can be "migrated" to one core processor or a group of core processors by setting it.
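On Linux, affinity for both processes (`sched_setaffinity`) and interrupt handling (for example the hex bitmask written to files such as `/proc/irq/<n>/smp_affinity`) is expressed as a bitmask with one bit per CPU. The patent does not show the driver code, so the following plain-Java helper is only an illustrative sketch of building such a mask, with the big-core layout taken from fig. 4 (large cores on CPUs 0 to 3):

```java
public class AffinityMask {
    /**
     * Builds the hex CPU bitmask used by Linux affinity interfaces:
     * bit i set means CPU i may run the work.
     */
    public static String cpuMaskHex(int... cpus) {
        long mask = 0L;
        for (int cpu : cpus) {
            mask |= 1L << cpu;     // one bit per core processor
        }
        return Long.toHexString(mask);
    }

    public static void main(String[] args) {
        // Layout as in fig. 4: CPUs 0-3 are the large core processors.
        System.out.println(cpuMaskHex(0, 1, 2, 3)); // prints "f"
    }
}
```

A driver would write such a mask to the relevant affinity interface; the mapping of CPU indices to large cores varies by chipset and is an assumption here.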
In the present disclosure, an application running based on a network connection is detected, and when it is determined that the designated application is detected, the event that the designated application is running is delivered to the underlying driver. The underlying driver receives the event passed by the application layer, changes its original driving strategy for the designated application according to the event, and migrates the soft interrupts of the designated application's network transmission data packets to the designated large core processor through the affinity method.
For example, if the designated application is network game A, the underlying driver receives the event, passed down by the application layer, that network game A is running, and migrates the soft interrupts of the game application's data packets to the designated large core processor through the affinity method.
Scheduling the soft interrupts of the designated application's data packets to the designated large core processor by the affinity method may be accomplished, for example, as follows:
in this method, the currently received data-packet throughput is scored. If the received throughput is low, processing follows the preset application driving strategy; if it is high, or when the designated application is detected running in the foreground, the designated application's processing is migrated to a large core processor.
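The disclosure describes this strategy only in prose. As a sketch under assumed names (`chooseCoreMask`, the threshold value, and the big/little masks are all hypothetical, with the big-core layout again taken from fig. 4), the selection logic could look like:

```java
public class SoftirqPolicy {
    // Hypothetical values: a throughput threshold and big/little core masks.
    static final long THROUGHPUT_THRESHOLD = 100_000_000L; // ~100M, per the related art
    static final String BIG_CORES_MASK = "f";    // CPUs 0-3, as in fig. 4
    static final String DEFAULT_MASK   = "f0";   // remaining cores

    /**
     * Models the strategy described above: migrate the soft interrupts to
     * the large core processors either when throughput is already high or
     * when the designated application is running in the foreground.
     */
    public static String chooseCoreMask(long throughputBps,
                                        boolean designatedAppForeground) {
        if (throughputBps > THROUGHPUT_THRESHOLD || designatedAppForeground) {
            return BIG_CORES_MASK;
        }
        return DEFAULT_MASK;       // keep the preset driving strategy
    }

    public static void main(String[] args) {
        // A network game: low throughput but running in the foreground.
        System.out.println(chooseCoreMask(5_000_000L, true));  // prints "f"
        System.out.println(chooseCoreMask(5_000_000L, false)); // prints "f0"
    }
}
```

The point of the policy is the second condition: a foreground designated application reaches the large cores even though its throughput alone would never cross the threshold.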
FIG. 4 is a schematic diagram showing the allocation of core tasks after the soft interrupts of the designated application's data packets have been scheduled to the designated large core processor by the affinity method. In fig. 4, after this scheduling, the task allocation of each core is inspected with a Snapdragon profiling tool, which shows that the task load after scheduling falls mainly on the large core processors, CPUs 0 to 3.
In the exemplary embodiment of the present disclosure, an application running based on a network connection is detected, and in response to detecting that it is the designated application, the soft interrupts of the designated application's network transmission data packets can be migrated to the designated large core processor using the affinity scheduling attribute. This increases the processing speed of the designated application, and scheduling via the affinity method can also improve the terminal's overall system performance.
To further show that the data-processing capability for the designated application becomes stronger after the core processor scheduling method of the present disclosure is applied, the disclosure presents actual test results.
Fig. 5 is a graph of the average latency of the designated application's data processing without the core processor scheduling method of the embodiments of the present disclosure, and FIG. 6 is the corresponding graph with the method applied.
As can be seen from fig. 5 and fig. 6, after the core processor scheduling method of the present disclosure is applied, the maximum latency of the designated application's data processing is greatly reduced, and the average latency is lower.
Based on the same inventive concept, the present disclosure also provides a core processor scheduling apparatus.
It will be appreciated that, in order to implement the above functions, the core processor scheduling apparatus provided in the embodiments of the present disclosure includes corresponding hardware structures and/or software modules for performing the respective functions. In combination with the example units and algorithm steps disclosed in the embodiments of the present disclosure, the embodiments can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer-software-driven hardware depends on the particular application and the design constraints of the technical solution. Those skilled in the art may use different approaches to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present disclosure.
FIG. 7 is a block diagram illustrating a core processor scheduling apparatus according to an exemplary embodiment. Referring to fig. 7, the apparatus is applied to a terminal that includes a multi-core processor and has installed on it an application running based on a network connection, and includes a detection unit 701 and a processing unit 702.
The detection unit 701 is configured to detect an application running based on a network connection.
The processing unit 702 is configured to, when the designated application is detected, migrate the soft interrupts of the designated application's network transmission data packets to the designated core processor, the designated core processor being capable of processing soft-interrupt data throughput greater than a preset throughput threshold.
In an example, the core processor scheduling apparatus further includes: an acquisition unit 703 configured to acquire the task process at the top of the currently running task stack. The detection unit 701 detects an application running based on a network connection as follows: when the stack-top task process is a task process of the designated application, it is determined that the designated application is detected.
In an example, the task process at the top of the currently running task stack is acquired periodically.
In an example, the core processor scheduling apparatus further includes: a delivery unit 704 configured to deliver the message that the designated application is running. The processing unit 702 migrates the soft interrupts of the designated application's network transmission data packets to the designated core processor as follows: when the detection unit 701 detects the designated application, the delivery unit 704 delivers the message that the designated application is running to the underlying driver; and the underlying driver migrates the soft interrupts of the designated application's network transmission data packets to the designated core processor.
In one example, the designated application is a gaming application.
In an example, the multi-core processor includes large core processors and small core processors, and the designated core processor is a designated large core processor.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the method embodiments and will not be elaborated here.
FIG. 8 is a block diagram illustrating an apparatus 800 for core processor scheduling according to an exemplary embodiment. For example, the apparatus 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 8, apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the apparatus 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and the like. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 806 provides power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with it. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operational mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. These buttons may include, but are not limited to: a home button, volume buttons, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the apparatus 800. For example, the sensor assembly 814 may detect the on/off state of the apparatus 800 and the relative positioning of components, such as the display and keypad of the apparatus 800. The sensor assembly 814 may also detect a change in position of the apparatus 800 or of one of its components, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the apparatus 800 and other devices. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements, for performing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including instructions executable by processor 820 of apparatus 800 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
It is further understood that the term "plurality" in this disclosure means two or more, and other quantifiers are to be construed similarly. The term "and/or" describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean that A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the objects it connects. The singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It is further understood that the terms "first," "second," and the like are used to describe various information, but such information should not be limited to these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the expressions "first", "second", etc. may be used entirely interchangeably. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It will be further understood that although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A core processor scheduling method, applied to a terminal, the terminal comprising a multi-core processor, the method comprising:
acquiring the top-of-stack task process of the currently running task stack;
when the top-of-stack task process is a task process of a designated application, determining that the designated application is detected, so as to detect an application running over a network connection, wherein the soft-interrupt data throughput of the network transport packets of the designated application is less than a preset throughput threshold;
when the designated application is detected, migrating the soft interrupts of the network transport packets of the designated application to a designated core processor, wherein the designated core processor is capable of processing a soft-interrupt data throughput greater than the preset throughput threshold;
wherein the multi-core processor comprises big core processors and little core processors; and
the designated core processor is a designated big core processor.
2. The core processor scheduling method of claim 1, wherein the top-of-stack task process of the currently running task stack is acquired periodically at timed intervals.
3. The core processor scheduling method of any one of claims 1-2, wherein migrating the soft interrupts of the network transport packets of the designated application to the designated core processor when the designated application is detected comprises:
when the designated application is detected, transmitting a message indicating that the designated application is running to an underlying driver; and
migrating, by the underlying driver, the soft interrupts of the network transport packets of the designated application to the designated core processor.
4. The core processor scheduling method of claim 3, wherein the designated application is a game application.
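Claims 2 and 3 together describe a timed polling loop that hands the migration off to an underlying driver. The sketch below simulates that flow; the driver class, app name, and core ID are hypothetical stand-ins, since the patent does not specify a concrete driver interface.

```python
import time

DESIGNATED_APPS = {"com.example.game"}  # hypothetical designated application
BIG_CORE = 4                            # hypothetical designated big core

class SimulatedDriver:
    """Stand-in for the underlying driver that performs the migration."""
    def __init__(self) -> None:
        self.softirq_core = 0  # network soft interrupts start on a little core

    def on_designated_app_running(self, app: str) -> None:
        # Per claim 3, the underlying driver itself migrates the soft
        # interrupts of the app's network transport packets to the big core.
        self.softirq_core = BIG_CORE

def poll_top_task(get_top_task, driver: SimulatedDriver,
                  interval_s: float = 0.0) -> None:
    """One timed-acquisition cycle (claim 2): read the top-of-stack task
    and, if it is a designated application, message the driver (claim 3)."""
    top_task = get_top_task()
    if top_task in DESIGNATED_APPS:
        driver.on_designated_app_running(top_task)
    time.sleep(interval_s)  # timed interval between polls

driver = SimulatedDriver()
poll_top_task(lambda: "com.example.game", driver)
print(driver.softirq_core)  # → 4
```

Routing the migration through the driver rather than performing it in the polling code keeps the policy (which app, which core) in user space while the mechanism stays in the kernel, which matches the layered structure the claims describe.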
5. A core processor scheduling apparatus, applied to a terminal, the terminal comprising a multi-core processor and having installed thereon an application running over a network connection, the apparatus comprising:
an acquisition unit, configured to acquire the top-of-stack task process of the currently running task stack;
a detection unit, configured to detect an application running over a network connection;
the detection unit detects an application running over a network connection as follows:
when the top-of-stack task process is a task process of a designated application, determining that the designated application is detected, wherein the soft-interrupt data throughput of the network transport packets of the designated application is less than a preset throughput threshold;
a processing unit, configured to migrate the soft interrupts of the network transport packets of the designated application to a designated core processor when the designated application is detected, wherein the designated core processor is capable of processing a soft-interrupt data throughput greater than the preset throughput threshold;
wherein the multi-core processor comprises big core processors and little core processors; and
the designated core processor is a designated big core processor.
6. The core processor scheduling apparatus of claim 5, wherein the top-of-stack task process of the currently running task stack is acquired periodically at timed intervals.
7. The core processor scheduling apparatus of any one of claims 5-6, further comprising: a transmission unit, configured to transmit a message indicating that the designated application is running;
wherein the processing unit migrates the soft interrupts of the network transport packets of the designated application to the designated core processor as follows:
when the detection unit detects the designated application, the transmission unit transmits a message indicating that the designated application is running to an underlying driver; and
the underlying driver migrates the soft interrupts of the network transport packets of the designated application to the designated core processor.
8. The core processor scheduling apparatus of claim 7, wherein the designated application is a game application.
9. A core processor scheduling apparatus, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the core processor scheduling method of any one of claims 1-4.
10. A non-transitory computer-readable storage medium storing computer-executable instructions which, when executed by a processor, perform the core processor scheduling method of any one of claims 1-4.
CN202010040360.XA 2020-01-15 2020-01-15 Kernel processor scheduling method, kernel processor scheduling device and storage medium Active CN113132263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010040360.XA CN113132263B (en) 2020-01-15 2020-01-15 Kernel processor scheduling method, kernel processor scheduling device and storage medium


Publications (2)

Publication Number Publication Date
CN113132263A CN113132263A (en) 2021-07-16
CN113132263B true CN113132263B (en) 2024-02-13

Family

ID=76771210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010040360.XA Active CN113132263B (en) 2020-01-15 2020-01-15 Kernel processor scheduling method, kernel processor scheduling device and storage medium

Country Status (1)

Country Link
CN (1) CN113132263B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114356445B (en) * 2021-12-28 2023-09-29 山东华芯半导体有限公司 Multi-core chip starting method based on large and small core architecture

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117291A (en) * 2018-08-27 2019-01-01 惠州Tcl移动通信有限公司 Data dispatch processing method, device and computer equipment based on multi-core processor
CN109726135A (en) * 2019-01-25 2019-05-07 杭州嘉楠耘智信息科技有限公司 Multi-core debugging method and device and computer readable storage medium
CN110347508A (en) * 2019-07-02 2019-10-18 Oppo广东移动通信有限公司 Thread distribution method, device, equipment and the readable storage medium storing program for executing of application program
CN110462590A (en) * 2017-03-31 2019-11-15 高通股份有限公司 For based on central processing unit power characteristic come the system and method for dispatcher software task

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10540300B2 (en) * 2017-02-16 2020-01-21 Qualcomm Incorporated Optimizing network driver performance and power consumption in multi-core processor-based systems



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant