CN115878330A - Thread operation control method and system - Google Patents
Thread operation control method and system
- Publication number
- CN115878330A (application number CN202310077120.0A)
- Authority
- CN
- China
- Prior art keywords
- thread
- semantic feature
- matrix
- description
- global
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application relates to the technical field of thread operation control, and in particular discloses a thread operation control method and system. The method first obtains a description of each thread to be assigned, where the description is a scheduling context matched to the sporadic task parameters associated with that thread. Semantic feature information within the thread descriptions, together with the feature distribution information among the threads, is then mined by deep learning to obtain a topological global thread description semantic feature matrix. Each row vector of this matrix is passed through a classifier to obtain a plurality of probability values, and finally the priority of each thread to be assigned is determined by ranking these probability values. In this way, thread priorities are allocated reasonably and adaptively based on the feature distribution among the threads, the allocated threads are adapted to the tasks to be processed, and processing efficiency and effect are improved.
Description
Technical Field
The present disclosure relates to the field of thread operation control technologies, and in particular, to a thread operation control method and system.
Background
Assigning priorities to threads is a user-level policy. One approach is simply to use rate-monotonic scheduling, where priorities are assigned to threads according to their periods, and threads use scheduling contexts that match their sporadic task parameters. Each thread in the system is isolated in time, because the kernel does not allow it to exceed the processing-time reservation indicated by its scheduling context.
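The rate-monotonic policy described above can be sketched in a few lines. This is a minimal illustration (the function name is hypothetical, not part of the claimed method): shorter periods receive higher priority, with 0 as the highest rank.

```python
def rate_monotonic_priorities(periods):
    """Rate-monotonic assignment: the thread with the shortest period
    gets the highest priority (rank 0), the longest period the lowest."""
    order = sorted(range(len(periods)), key=lambda i: periods[i])
    priorities = [0] * len(periods)
    for rank, thread_index in enumerate(order):
        priorities[thread_index] = rank
    return priorities
```

For example, threads with periods 100, 10, and 50 ticks would receive priority ranks 2, 0, and 1 respectively.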
However, the system may offer more options than simple rate-monotonic fixed-priority scheduling while preserving the policy-freedom and minimality design principles. A reservation is merely a potential right to processing time at a particular priority; in effect it represents an upper bound on the processing time of a particular thread. If higher-priority reservations consume all available CPU time, a low-priority thread cannot be guaranteed to run. A thread with a lower-priority reservation will, however, run within the system's margin time, which arises whenever threads do not use their full reservations. It is therefore desirable to use a high priority range for rate-monotonic threads, while best-effort and rate-limited threads run at lower priorities. In practice, however, because processing tasks differ, a fixed priority-allocation scheme is difficult to tune to the desired effect, leading to low thread running speed and insufficient margin time.
Therefore, an optimized thread run control scheme is desired.
Disclosure of Invention
The present application is proposed to solve the above technical problems. The embodiments of the application provide a thread operation control method and system. The method first obtains a description of each thread to be assigned, where the description is a scheduling context matched to the sporadic task parameters associated with that thread; it then mines the semantic feature information in the thread descriptions and the feature distribution information among the threads by deep learning to obtain a topological global thread description semantic feature matrix; each row vector of this matrix is then passed through a classifier to obtain a plurality of probability values; and finally the priority of each thread to be assigned is determined by ranking these probability values. Thread priorities are thus allocated reasonably and adaptively based on the feature distribution among the threads, so that the allocated threads are adapted to the tasks to be processed, improving processing efficiency and effect.
According to an aspect of the present application, there is provided a method for controlling the running of threads, including: obtaining a description of each thread to be assigned, where the description of a thread to be assigned is a scheduling context matched to the sporadic task parameters associated with that thread; passing the description of each thread to be assigned through a context encoder containing an embedding layer to obtain a plurality of thread description semantic feature vectors; calculating the Euclidean distance between every two of the thread description semantic feature vectors to obtain a distance topology matrix; passing the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a distance topology feature matrix; arranging the thread description semantic feature vectors two-dimensionally to obtain a global thread description semantic feature matrix; passing the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model to obtain a topological global thread description semantic feature matrix; performing small-scale feature correlation expression strengthening on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix; passing each row vector in the optimized topological global thread description semantic feature matrix through a classifier to obtain a plurality of probability values; and determining the priority of each thread to be assigned based on the ranking of the probability values.
In the above method for controlling the running of threads, passing the descriptions of the threads to be assigned through a context encoder containing an embedding layer to obtain a plurality of thread description semantic feature vectors includes: using the embedding layer of the context encoder to convert the description of each thread to be assigned into embedding vectors, so as to obtain a sequence of embedding vectors corresponding to each description; performing global context-based semantic encoding on each sequence of embedding vectors using the transformer-based Bert model of the context encoder to obtain a plurality of feature vectors corresponding to each description; and concatenating the feature vectors corresponding to each description to obtain the plurality of thread description semantic feature vectors.
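A minimal sketch of the embedding-layer step, assuming a toy vocabulary and a random embedding table (both illustrative; in the claimed method the table is learned, and the embedded sequence would then pass through the encoder's transformer-based Bert model, omitted here):

```python
import numpy as np

def embed_description(tokens, vocab, embedding_table):
    """Embedding-layer step: map each token of a thread description
    to its dense embedding vector via table lookup."""
    ids = [vocab[tok] for tok in tokens]
    return embedding_table[ids]              # shape: (sequence_length, embedding_dim)

rng = np.random.default_rng(seed=0)
vocab = {"sporadic": 0, "task": 1, "period": 2, "budget": 3}   # hypothetical vocabulary
table = rng.normal(size=(len(vocab), 8))                       # learned in practice
sequence = embed_description(["sporadic", "task", "budget"], vocab, table)
```

The resulting sequence of embedding vectors is what the Bert model would consume to produce the context-aware feature vectors.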
In the above method for controlling the running of threads, calculating the Euclidean distance between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors to obtain a distance topology matrix includes: calculating the Euclidean distances according to the following formula to obtain a plurality of Euclidean distances:

$$d(V_i, V_j) = \sqrt{\sum_{k}\left(v_{i,k} - v_{j,k}\right)^2}$$

where $V_i$ and $V_j$ represent any two thread description semantic feature vectors among the plurality of thread description semantic feature vectors, $d(V_i, V_j)$ represents the Euclidean distance between them, and $v_{i,k}$ and $v_{j,k}$ represent the feature values at each position $k$ of the two vectors.
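The pairwise Euclidean distance computation can be sketched with numpy broadcasting (the function name is hypothetical):

```python
import numpy as np

def distance_topology_matrix(feature_vectors):
    """Pairwise Euclidean distances between thread description
    semantic feature vectors, arranged as a symmetric matrix."""
    V = np.asarray(feature_vectors, dtype=float)
    diff = V[:, None, :] - V[None, :, :]     # broadcast over all vector pairs
    return np.sqrt((diff ** 2).sum(axis=-1))
```

The resulting matrix is symmetric with a zero diagonal, and entry (i, j) expresses the spatial-topology distance between the semantic features of threads i and j.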
In the above method for controlling the running of threads, passing the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a distance topology feature matrix includes: each layer of the convolutional neural network model performs the following operations on its input data during forward propagation: performing convolution processing based on a two-dimensional convolution kernel on the input data using the convolution unit of the layer to obtain a convolution feature map; pooling the convolution feature map along the channel dimension using the pooling unit of the layer to obtain a pooled feature map; and applying a nonlinear activation to the feature values at each position of the pooled feature map using the activation unit of the layer to obtain an activated feature map; where the input of the first layer of the convolutional neural network model is the distance topology matrix, and the output of the last layer is the distance topology feature matrix.
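A minimal sketch of one such layer, assuming "valid" convolution and mean pooling along the channel dimension (the patent does not fix the padding mode or the pooling operator, so these are illustrative choices):

```python
import numpy as np

def conv2d(x, kernel):
    """'Valid' 2D cross-correlation of a single-channel map with one kernel."""
    H, W = x.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * kernel).sum()
    return out

def cnn_layer(x, kernels):
    """One layer as in the claim: convolution with 2D kernels, pooling
    along the channel dimension, then nonlinear (ReLU) activation."""
    feature_maps = np.stack([conv2d(x, k) for k in kernels])  # (channels, H', W')
    pooled = feature_maps.mean(axis=0)                        # channel-dim pooling
    return np.maximum(pooled, 0.0)                            # activation unit
```

Stacking several such layers, with the distance topology matrix as the first input, yields the distance topology feature matrix.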
In the above method for controlling the operation of a thread, the passing the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model to obtain a topology global thread description semantic feature matrix includes: the graph neural network processes the global thread description semantic feature matrix and the distance topological feature matrix through learnable neural network parameters to obtain the topological global thread description semantic feature matrix containing Euclidean distance topological features and semantic understanding feature information described by each thread.
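One common way to realize such a graph neural network step is a GCN-style update, sketched below with the distance topology feature matrix in the role of edge weights and the global thread description semantic feature matrix as node features. This is an illustrative instantiation under those assumptions, not necessarily the exact claimed model:

```python
import numpy as np

def graph_layer(node_features, edge_weights, W):
    """One graph-convolution step: aggregate each node's features over its
    weighted edges (with self-loops, row-normalized), then apply a learnable
    projection W and a ReLU nonlinearity."""
    A_hat = edge_weights + np.eye(edge_weights.shape[0])   # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))               # row-normalize adjacency
    return np.maximum(D_inv @ A_hat @ node_features @ W, 0.0)
```

The output plays the role of the topological global thread description semantic feature matrix: each row fuses a thread's own semantic features with those of topologically close threads.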
In the above method for controlling the running of threads, performing small-scale feature correlation expression strengthening on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix includes: calculating a small-scale local derivation matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix; and using the small-scale local derivation matrix as a weighting feature matrix to multiply the topological global thread description semantic feature matrix position-wise, so as to obtain the optimized topological global thread description semantic feature matrix.
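The position-wise weighting step can be sketched as follows. Since the patent's derivation formula for the weighting matrix survives only as an image in the source, the sigmoid-of-difference weight below is a purely hypothetical stand-in; only the Hadamard multiplication itself is as stated in the claim:

```python
import numpy as np

def optimize_topology_matrix(topo_M, global_M):
    """Derive a position-wise weighting matrix from the two feature matrices,
    then Hadamard-multiply it into topo_M, as the claim describes.
    The weight (sigmoid of the element-wise difference) is a hypothetical
    stand-in for the patent's small-scale local derivation formula."""
    weight = 1.0 / (1.0 + np.exp(-(topo_M - global_M)))  # hypothetical weight
    return topo_M * weight                               # position-wise product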
In the above method for controlling the running of threads, calculating the small-scale local derivation matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix includes: calculating the small-scale local derivation matrix position by position according to the following formula [formula rendered only as an image in the source], where $m_{(i,j)}$, $m'_{(i,j)}$ and $w_{(i,j)}$ are the feature values at the $(i,j)$-th position of the topological global thread description semantic feature matrix, the global thread description semantic feature matrix, and the small-scale local derivation matrix, respectively.
In the above method for controlling the running of threads, passing each row vector in the optimized topological global thread description semantic feature matrix through a classifier to obtain a plurality of probability values includes: processing each row vector with the classifier according to the following formula to obtain the probability values:

$$p = \mathrm{softmax}\{(W_n, B_n) : \cdots : (W_1, B_1) \mid V\}$$

where $V$ represents each row vector in the optimized topological global thread description semantic feature matrix, $W_1$ to $W_n$ are the weight matrices of the fully connected layers of the classifier, $B_1$ to $B_n$ are the bias vectors of those layers, and $p$ represents the corresponding one of the plurality of probability values.
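The classification step can be sketched as a small fully connected stack with a softmax head (the two-layer depth and layer sizes are illustrative; the patent only specifies fully connected layers followed by softmax):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a vector of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_rows(feature_matrix, W1, b1, W2, b2):
    """Pass each row vector through two fully connected layers and softmax,
    returning one probability value (positive class) per thread description."""
    probabilities = []
    for v in feature_matrix:
        hidden = np.maximum(W1 @ v + b1, 0.0)     # fully connected + ReLU
        probabilities.append(softmax(W2 @ hidden + b2)[1])
    return np.array(probabilities)
```

Thread priorities then follow the descending order of these probabilities, e.g. `np.argsort(-probabilities)`.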
According to another aspect of the present application, there is provided a thread running control system, including: a description acquisition module for obtaining a description of each thread to be assigned, the description being a scheduling context matched to the sporadic task parameters associated with that thread; a context encoding module for passing the description of each thread to be assigned through a context encoder containing an embedding layer to obtain a plurality of thread description semantic feature vectors; a Euclidean distance calculation module for calculating the Euclidean distance between every two of the thread description semantic feature vectors to obtain a distance topology matrix; a convolutional encoding module for passing the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a distance topology feature matrix; a two-dimensional arrangement module for arranging the thread description semantic feature vectors two-dimensionally to obtain a global thread description semantic feature matrix; a graph neural encoding module for passing the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model to obtain a topological global thread description semantic feature matrix; a matrix optimization module for performing small-scale feature correlation expression strengthening on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix; a probability value acquisition module for passing each row vector in the optimized topological global thread description semantic feature matrix through a classifier to obtain a plurality of probability values; and a priority determination module for determining the priority of each thread to be assigned based on the ranking of the probability values.
In the above thread operation control system, the matrix optimization module includes: the small-scale local derivative matrix acquisition unit is used for calculating a small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix; and the point multiplication unit is used for performing point multiplication on the topology global thread description semantic feature matrix according to positions by taking the small-scale local derivative matrix as a weighting feature matrix to obtain the optimized topology global thread description semantic feature matrix.
Compared with the prior art, the thread operation control method and system provided by the present application first obtain a description of each thread to be assigned, the description being a scheduling context matched to the sporadic task parameters associated with that thread; then mine the semantic feature information in the thread descriptions and the feature distribution information among the threads by deep learning to obtain a topological global thread description semantic feature matrix; then pass each row vector of this matrix through a classifier to obtain a plurality of probability values; and finally determine the priority of each thread to be assigned based on the ranking of the probability values. Thread priorities are thus allocated reasonably and adaptively based on the feature distribution among the threads, so that the allocated threads are adapted to the tasks to be processed, improving processing efficiency and effect.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a flowchart of a method for controlling the operation of a thread and a system thereof according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a model architecture of a method for controlling the operation of threads and a system thereof according to an embodiment of the present disclosure.
Fig. 3 is a flowchart of a method for controlling the running of threads and a system thereof according to an embodiment of the present application, in which descriptions of the threads to be assigned are respectively passed through a context encoder including an embedded layer to obtain a plurality of thread description semantic feature vectors.
Fig. 4 is a flowchart for performing small-scale feature correlation expression enhancement on the topology global thread description semantic feature matrix to obtain an optimized topology global thread description semantic feature matrix in the thread operation control method and the system thereof according to the embodiment of the present application.
Fig. 5 is a schematic block diagram illustrating a method for controlling the operation of a thread and a system thereof according to an embodiment of the present disclosure.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments of the present application, and it should be understood that the present application is not limited to the example embodiments described herein.
Scene overview: as described above, the system can provide more options than simple rate-monotonic fixed-priority scheduling while preserving the policy-freedom and minimality design principles. A reservation is merely a potential right to processing time at a particular priority; in effect it represents an upper bound on the processing time of a particular thread. If higher-priority reservations consume all available CPU time, a low-priority thread cannot be guaranteed to run. A thread with a lower-priority reservation will, however, run within the system's margin time, which arises whenever threads do not use their full reservations. It is therefore desirable to use a high priority range for rate-monotonic threads, while best-effort and rate-limited threads run at lower priorities. In practice, however, because processing tasks differ, a fixed priority-allocation scheme is difficult to tune to the desired effect, leading to low thread running speed and insufficient margin time. Therefore, an optimized thread running control scheme is desired.
Accordingly, when threads actually run, because the tasks to be processed differ, handling them with fixed-priority threads leads to low running speed, insufficient margin time, and poor processing effect. In the technical solution of the present application, it is therefore desirable to adaptively allocate priorities to the threads according to the feature distribution among them, so that the allocated threads are adapted to the tasks to be processed. Extracting the feature distribution information among threads requires a sufficient and accurate semantic understanding of each thread's description, which here is a scheduling context matched to the sporadic task parameters associated with the thread. However, because the useful information is buried in the semantics of the descriptions, it is difficult to acquire, which in turn makes the feature distribution among threads difficult to extract. The practical difficulty therefore lies in how to mine the semantic feature information in the thread descriptions and the feature distribution information among the threads, so as to allocate thread priorities reasonably, adapt the allocated threads to the tasks to be processed, and improve processing efficiency and effect.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks also exhibit a level close to or even exceeding that of humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
The deep learning and the development of the neural network provide a new solution idea and scheme for mining semantic feature information in the description of the threads and feature distribution information among the threads.
Specifically, in the technical solution of the present application, first, descriptions of each thread to be assigned are obtained, where the descriptions of the thread to be assigned are scheduling contexts matched with sporadic task parameters related to the thread to be assigned. Then, considering that the description of each thread to be assigned is composed of a plurality of words and each word has semantic association of context, in order to accurately perform semantic understanding of the description of each thread to be assigned and more accurately perform thread priority allocation, in the technical solution of the present application, the description of each thread to be assigned is further encoded in a context encoder including an embedding layer, so as to extract global context high-dimensional semantic feature information of the description of each thread to be assigned, respectively, thereby obtaining a plurality of thread description semantic feature vectors.
Then, for semantic understanding feature information of the descriptions of the threads to be assigned, in order to be able to mine feature distribution among the threads to be assigned to determine priority, in the technical solution of the present application, an euclidean distance between every two thread description semantic feature vectors in the thread description semantic feature vectors is further calculated to express spatial topology distribution information among context semantic understanding features described by the threads, thereby obtaining a distance topology matrix. And then, further performing feature mining on the obtained distance topological matrix in a convolutional neural network model serving as a feature extractor to extract spatial topological correlation features among the semantic features described by the threads so as to obtain the distance topological feature matrix.
Furthermore, each thread description semantic feature vector in the thread description semantic feature vectors is used as feature representation of a node, the distance topological feature matrix is used as feature representation of an edge between the nodes, and a global thread description semantic feature matrix obtained by two-dimensional arrangement of the thread description semantic feature vectors and the distance topological feature matrix are used for obtaining a topological global thread description semantic feature matrix through a graph neural network. Specifically, the graph neural network performs graph structure data coding on the global thread description semantic feature matrix and the distance topological feature matrix through learnable neural network parameters to obtain the topological global thread description semantic feature matrix containing irregular distance topological features and semantic understanding feature information described by each thread.
Then, the row vectors in the topology global thread description semantic feature matrix are respectively processed by a classifier to obtain a plurality of probability values. That is to say, each row vector in the topology global thread description semantic feature matrix is classified as a classification feature vector by a classifier to obtain a probability value for representing the description of each thread to be assigned, and the priority of each thread to be assigned is determined based on the ordering of the probability values. Therefore, the reasonable distribution of the thread priority can be adaptively carried out on the basis of the characteristic distribution among the threads, so that the distributed threads are adaptive to the tasks to be processed, and the processing efficiency and effect are improved.
Particularly, in the technical solution of the present application, here, the topology global thread description semantic feature matrix is obtained by passing the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model, so that the topology global thread description semantic feature matrix can express the associated expression of the context semantic features described by each thread to be assigned under the semantic similarity topology of each thread to be assigned. However, since each global thread description semantic feature vector of the global thread description semantic feature matrix is a small-scale context semantic coding representation of a description of a thread to be assigned, it is still desirable to improve the small-scale feature correlation expression of the topology global thread description semantic feature matrix relative to the global thread description semantic feature matrix, so as to improve the expression effect of the small-scale context coding semantics of the description of the topology global thread description semantic feature matrix on each thread to be assigned.
Thus, the small-scale local derivation matrix is calculated between the topological global thread description semantic feature matrix, denoted $M_1$, and the global thread description semantic feature matrix, denoted $M_2$, and used as a weighting feature matrix, where $m_{1,(i,j)}$, $m_{2,(i,j)}$ and $w_{(i,j)}$ are the feature values at the $(i,j)$-th position of $M_1$, $M_2$ and the small-scale local derivation matrix, respectively.

Here, by computing the small-scale local derivation matrix between $M_1$ and $M_2$, the position-by-position point regression between the feature matrices can mimic the physical properties of mutual expression between data sequences based on a geometric approximation of corresponding positions, thereby enhancing the local nonlinear dependence across feature-domain positions. Weighting the feature values of $M_1$ by position-wise multiplication with the small-scale local derivation matrix therefore improves the expression effect of the small-scale context-coding semantics of each thread description within $M_1$, and thereby improves the accuracy of the classification results obtained when its row vectors pass through the classifier. In this way, thread priorities can be allocated reasonably and adaptively based on the feature distribution among the threads, so that the allocated threads are adapted to the tasks to be processed, improving processing efficiency and effect.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
An exemplary method: fig. 1 is a flowchart of a method for controlling the operation of a thread according to an embodiment of the present application. As shown in fig. 1, the method for controlling the operation of a thread according to the embodiment of the present application includes: S110, obtaining the description of each thread to be assigned, wherein the description of a thread to be assigned is a scheduling context matched with the sporadic task parameters related to that thread; S120, passing the description of each thread to be assigned through a context encoder comprising an embedding layer to obtain a plurality of thread description semantic feature vectors; S130, calculating the Euclidean distance between every two of the plurality of thread description semantic feature vectors to obtain a distance topology matrix; S140, passing the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a distance topology feature matrix; S150, two-dimensionally arranging the plurality of thread description semantic feature vectors to obtain a global thread description semantic feature matrix; S160, passing the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model to obtain a topology global thread description semantic feature matrix; S170, based on the global thread description semantic feature matrix, performing small-scale feature correlation expression strengthening on the topology global thread description semantic feature matrix to obtain an optimized topology global thread description semantic feature matrix; S180, passing each row vector in the optimized topology global thread description semantic feature matrix through a classifier to obtain a plurality of probability values; and S190, determining the priority of each thread to be assigned based on the ranking of the probability values.
Fig. 2 is a schematic diagram of a model architecture of a thread operation control method according to an embodiment of the present application. As shown in fig. 2, in the method for controlling the running of threads according to the embodiment of the present application, first, descriptions of threads to be assigned are obtained, and the descriptions of the threads to be assigned are respectively passed through a context encoder including an embedded layer to obtain a plurality of thread description semantic feature vectors. Then, calculating Euclidean distance between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a distance topology matrix, and enabling the distance topology matrix to pass through a convolutional neural network model serving as a feature extractor to obtain the distance topology feature matrix. And meanwhile, the thread description semantic feature vectors are arranged in two dimensions to obtain a global thread description semantic feature matrix. And then, passing the global thread description semantic feature matrix and the distance topological feature matrix through a graph neural network model to obtain a topological global thread description semantic feature matrix. And then, based on the global thread description semantic feature matrix, carrying out small-scale feature correlation expression reinforcement on the topology global thread description semantic feature matrix to obtain an optimized topology global thread description semantic feature matrix. And finally, respectively enabling each row vector in the optimized topology global thread description semantic feature matrix to pass through a classifier to obtain a plurality of probability values, and determining the priority of each thread to be assigned based on the sequencing of the probability values.
In step S110 of the embodiment of the present application, the description of each thread to be assigned is obtained, where the description of a thread to be assigned is a scheduling context matched with the sporadic task parameters related to that thread. As described above, when threads actually run, the tasks to be processed differ, so threads with fixed priorities may process their tasks at a low running rate with insufficient allowance time, making it difficult to achieve a good processing effect. Therefore, in the technical solution of the present application, it is desirable to adaptively allocate priorities to the respective threads based on the feature distribution among the threads, so that the allocated threads are adapted to the tasks to be processed. Extracting the feature distribution information among threads requires a sufficient and accurate semantic understanding of the descriptions of the threads, each of which is a scheduling context matched with the sporadic task parameters related to the thread. However, because the useful semantic information in the thread descriptions is difficult to extract, it is also difficult to extract the feature distribution information between threads. Therefore, the difficulty in practical application lies in how to mine the semantic feature information in the thread descriptions and the feature distribution information among the threads, so as to reasonably allocate thread priorities such that the allocated threads are adapted to the tasks to be processed, improving processing efficiency and effect. The development of deep learning and neural networks provides new ideas and schemes for mining the semantic feature information in the thread descriptions and the feature distribution information among the threads.
In a specific example of the present application, descriptions of each thread to be assigned during thread running are obtained from a system, and the descriptions of the thread to be assigned are scheduling contexts matched with sporadic task parameters related to the thread to be assigned.
In step S120 of the embodiment of the present application, the descriptions of the respective threads to be assigned are respectively passed through a context encoder including an embedded layer to obtain a plurality of thread description semantic feature vectors. It should be understood that, in view of that the description of each thread to be assigned is composed of a plurality of words, and each word has semantic association with a context, in order to accurately perform semantic understanding of the description of each thread to be assigned so as to more accurately perform thread priority allocation, in the technical solution of the present application, the description of each thread to be assigned is further encoded in a context encoder including an embedding layer, so as to respectively extract global context-based high-dimensional semantic feature information of the description of each thread to be assigned, thereby obtaining a plurality of thread description semantic feature vectors.
Fig. 3 is a flowchart of passing the description of each thread to be assigned through a context encoder comprising an embedding layer to obtain a plurality of thread description semantic feature vectors in the thread operation control method and system according to the embodiment of the present application. In a specific example of the present application, passing the description of each thread to be assigned through a context encoder comprising an embedding layer to obtain a plurality of thread description semantic feature vectors includes: S210, converting the description of each thread to be assigned into embedding vectors using the embedding layer of the context encoder to obtain a sequence of embedding vectors corresponding to the description of each thread to be assigned; S220, performing global-context-based semantic encoding on the sequence of embedding vectors corresponding to the description of each thread to be assigned using the transformer-based Bert model of the context encoder to obtain a plurality of feature vectors corresponding to the description of each thread to be assigned; and S230, cascading the plurality of feature vectors corresponding to the description of each thread to be assigned to obtain the plurality of thread description semantic feature vectors.
More specifically, in the embodiment of the present application, the context encoder is a transformer-based Bert model, where the Bert model is capable of performing context semantic coding based on the global input sequence on each input quantity in the input sequence by means of the intrinsic mask structure of the transformer. That is, the transformer-based Bert model can extract a globally based feature representation of each input quantity in the input sequence. More specifically, in the technical solution of the present application, taking the description of one thread to be assigned as an example: first, the embedding layer of the context encoder is used to convert the description of the thread to be assigned into embedding vectors to obtain a sequence of embedding vectors, where the embedding layer converts the text description into a digital description that can be recognized by a computer. Then, global-context-based semantic encoding is performed on the sequence of embedding vectors using the transformer-based Bert model to obtain a plurality of feature vectors. It should be understood that each of the plurality of feature vectors represents, for one word, a global context deep implicit feature based on the overall sequence of the description of the thread to be assigned; one feature vector corresponds to one word. The feature vectors are then cascaded to obtain one thread description semantic feature vector; that is, in a high-dimensional feature space, the high-dimensional feature representations corresponding to the words are losslessly fused to obtain a high-dimensional feature representation of the whole description sequence of the thread to be assigned. Here, each thread description semantic feature vector corresponds to the global-context-based high-dimensional semantic feature information of the description of one thread to be assigned.
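The embed-then-cascade step described above can be sketched in a few lines. This is a toy stand-in, not the patent's actual encoder: the vocabulary, embedding table, and padding length are invented for illustration, and a real implementation would run the embedded sequence through a transformer (Bert) encoder rather than cascading the raw embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and embedding table standing in for the trained embedding layer.
VOCAB = {"sporadic": 0, "deadline": 1, "period": 2, "io": 3, "compute": 4}
EMBED_DIM = 8
embedding = rng.normal(size=(len(VOCAB), EMBED_DIM))

def encode_description(words, max_len=4):
    """Map one thread description to a fixed-length semantic feature vector.

    Each word is converted to an embedding vector; the per-word vectors are
    then cascaded (concatenated) into one vector, padding with zeros. The
    transformer encoding between these two steps is omitted in this sketch.
    """
    vecs = [embedding[VOCAB[w]] for w in words[:max_len]]
    while len(vecs) < max_len:
        vecs.append(np.zeros(EMBED_DIM))   # pad short descriptions
    return np.concatenate(vecs)            # cascade into one feature vector

v = encode_description(["sporadic", "deadline"])
print(v.shape)  # (32,)
```

One such vector is produced per thread description, giving the plurality of thread description semantic feature vectors used in the following steps.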
In step S130 of the embodiment of the present application, a euclidean distance between every two thread description semantic feature vectors in the thread description semantic feature vectors is calculated to obtain a distance topology matrix. It should be understood that if there is a particularly urgent thread, the similarity between the description of this particularly urgent thread and the descriptions of other threads to be assigned is necessarily much less than the similarity between the descriptions of other threads to be assigned, considering that there is a difference between the descriptions of the respective threads to be assigned but it is not too large. Therefore, the spatial topological distribution information among the context semantic understanding features described by the threads can be introduced to improve the accuracy of the priority ordering. Specifically, in the technical solution of the present application, for semantic understanding feature information of descriptions of each thread to be assigned, in order to be able to mine feature distribution among the threads to be assigned so as to determine a priority, in the technical solution of the present application, a euclidean distance between every two thread description semantic feature vectors in the thread description semantic feature vectors is further calculated so as to represent spatial topology distribution information among context semantic understanding features described by each thread, thereby obtaining a distance topology matrix.
In a specific example of the present application, calculating the Euclidean distance between every two of the plurality of thread description semantic feature vectors to obtain a distance topology matrix includes: calculating the Euclidean distances between every two thread description semantic feature vectors according to the following formula to obtain a plurality of Euclidean distances:

$$d(V_a, V_b) = \sqrt{\sum_{k} \left( v_{a,k} - v_{b,k} \right)^2}$$

wherein $V_a$ and $V_b$ respectively represent any two thread description semantic feature vectors among the plurality of thread description semantic feature vectors, $d(V_a, V_b)$ represents the Euclidean distance between the two, and $v_{a,k}$ and $v_{b,k}$ respectively represent the feature values of each position of the two thread description semantic feature vectors.
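As a minimal sketch of step S130, the pairwise Euclidean distances can be computed with NumPy broadcasting; the toy feature vectors below are illustrative only.

```python
import numpy as np

def distance_topology_matrix(feature_vectors):
    """Pairwise Euclidean distances between thread description vectors.

    feature_vectors: (n_threads, dim) array. Returns an (n, n) symmetric
    matrix whose (i, j) entry is ||v_i - v_j||_2, i.e. the distance
    topology matrix of the method.
    """
    diff = feature_vectors[:, None, :] - feature_vectors[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

vecs = np.array([[0.0, 0.0],
                 [3.0, 4.0],
                 [0.0, 1.0]])
D = distance_topology_matrix(vecs)
print(D[0, 1])  # 5.0
```

The matrix is symmetric with a zero diagonal, matching the "every two vectors" formulation of the step.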
In step S140 of the embodiment of the present application, the distance topology matrix is passed through a convolutional neural network model as a feature extractor to obtain a distance topology feature matrix. Namely, a convolutional neural network model which has excellent performance in implicit associated feature extraction and serves as a feature extractor is used for carrying out feature mining on the distance topology matrix so as to extract associated features of all positions in the distance topology matrix, namely spatial topology distribution information among context semantic understanding features described by all threads, and therefore the distance topology feature matrix is obtained.
In a specific example of the present application, passing the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain the distance topology feature matrix includes: performing, by each layer of the convolutional neural network model in its forward transmission, the following operations on the input data: performing convolution processing based on a two-dimensional convolution kernel on the input data using the convolution unit of the layer to obtain a convolution feature map; pooling the convolution feature map along the channel dimension using the pooling unit of the layer to obtain a pooled feature map; and performing nonlinear activation on the feature values of each position in the pooled feature map using the activation unit of the layer to obtain an activation feature map; wherein the input of the first layer of the convolutional neural network model is the distance topology matrix, and the output of the last layer of the convolutional neural network model is the distance topology feature matrix.
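A single layer of such a feature extractor can be sketched as a plain 2-D convolution followed by a ReLU activation. This is a from-scratch NumPy illustration with invented random input and kernel, not the patent's trained network, and the channel-dimension pooling is omitted for this single-channel toy case.

```python
import numpy as np

def conv2d(x, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * kernel).sum()
    return out

def cnn_layer(x, kernel):
    """One feature-extractor layer: convolution unit -> activation unit."""
    return np.maximum(conv2d(x, kernel), 0.0)   # ReLU activation

rng = np.random.default_rng(1)
distance_matrix = rng.random((6, 6))            # toy 6x6 distance topology matrix
feat = cnn_layer(distance_matrix, rng.normal(size=(3, 3)))
print(feat.shape)  # (4, 4)
```

Stacking several such layers, with the distance topology matrix as the first layer's input, yields the distance topology feature matrix.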
In step S150 of the embodiment of the present application, the multiple thread description semantic feature vectors are two-dimensionally arranged to obtain a global thread description semantic feature matrix. It should be understood that the plurality of thread description semantic feature vectors represent global context-based high-dimensional semantic feature information of the descriptions of the respective threads to be assigned, but the priority of the respective threads to be assigned should be ordered based on global features, so that the plurality of thread description semantic feature vectors are two-dimensionally arranged to obtain a global thread description semantic feature matrix, that is, the global context-based high-dimensional semantic feature information of the descriptions of the respective threads to be assigned is losslessly fused into one feature matrix.
In step S160 of the embodiment of the present application, the global thread description semantic feature matrix and the distance topological feature matrix are passed through a graph neural network model to obtain a topology global thread description semantic feature matrix. That is, each thread description semantic feature vector among the plurality of thread description semantic feature vectors is used as the feature representation of a node, the distance topological feature matrix is used as the feature representation of the edges between the nodes, and the global thread description semantic feature matrix obtained by two-dimensionally arranging the thread description semantic feature vectors and the distance topological feature matrix are passed through the graph neural network to obtain the topology global thread description semantic feature matrix. Specifically, the graph neural network performs graph-structure data encoding on the global thread description semantic feature matrix and the distance topological feature matrix through learnable neural network parameters to obtain the topology global thread description semantic feature matrix containing the irregular distance topological features and the semantic understanding feature information of each thread description.
In a specific example of the present application, the passing the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model to obtain a topology global thread description semantic feature matrix includes: the graph neural network processes the global thread description semantic feature matrix and the distance topological feature matrix through learnable neural network parameters to obtain the topological global thread description semantic feature matrix containing Euclidean distance topological features and semantic understanding feature information described by each thread.
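One graph-convolution step of the kind described can be sketched as neighbour aggregation weighted by the edge-feature matrix followed by a learnable linear map. The row normalisation, matrix sizes, and random parameters below are illustrative assumptions, not the patent's specific graph neural network.

```python
import numpy as np

def gcn_layer(node_features, edge_weights, W):
    """One graph-convolution step.

    node_features: (n, d)  -- rows of the global thread description matrix
    edge_weights:  (n, n)  -- distance topology feature matrix used as
                              edge representations between thread nodes
    W:             (d, d') -- learnable neural network parameters
    """
    # Row-normalise so each node averages over its neighbours.
    deg = edge_weights.sum(axis=1, keepdims=True) + 1e-8
    a_hat = edge_weights / deg
    return np.maximum(a_hat @ node_features @ W, 0.0)   # aggregate, map, ReLU

rng = np.random.default_rng(2)
H = rng.normal(size=(5, 8))        # 5 threads, 8-dim semantic features
A = rng.random((5, 5))             # toy distance-derived edge features
H_topo = gcn_layer(H, A, rng.normal(size=(8, 8)))
print(H_topo.shape)  # (5, 8)
```

The output plays the role of the topology global thread description semantic feature matrix: each row is a thread's semantic feature vector updated by its neighbours under the distance topology.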
In step S170 of the embodiment of the present application, based on the global thread description semantic feature matrix, small-scale feature correlation expression strengthening is performed on the topology global thread description semantic feature matrix to obtain an optimized topology global thread description semantic feature matrix. It should be understood that, in the technical solution of the present application, the topology global thread description semantic feature matrix is obtained by passing the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model, so that the topology global thread description semantic feature matrix expresses the association of the context semantic features of the description of each thread to be assigned under the semantic-similarity topology of the threads to be assigned. However, since each global thread description semantic feature vector of the global thread description semantic feature matrix is a small-scale context semantic coding representation of the description of one thread to be assigned, it is still desirable to strengthen the small-scale feature correlation expression of the topology global thread description semantic feature matrix relative to the global thread description semantic feature matrix, so as to improve the expression effect of the topology global thread description semantic feature matrix on the small-scale context coding semantics of the description of each thread to be assigned. Thus, a small-scale local derivation matrix between the topology global thread description semantic feature matrix, for example denoted as $M_1$, and the global thread description semantic feature matrix, for example denoted as $M_2$, is computed and used as a weighted feature matrix.
Fig. 4 is a flowchart for performing small-scale feature correlation expression enhancement on the topology global thread description semantic feature matrix to obtain an optimized topology global thread description semantic feature matrix in the thread operation control method and the system thereof according to the embodiment of the present application. As shown in fig. 4, in a specific example of the present application, the performing small-scale feature correlation expression enhancement on the topology global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topology global thread description semantic feature matrix includes: s310, calculating a small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix; and S320, taking the small-scale local derivative matrix as a weighted feature matrix to multiply the topology global thread description semantic feature matrix according to position points to obtain the optimized topology global thread description semantic feature matrix.
In a specific example of the present application, the calculating a small-scale local derivation matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix includes:
calculating the small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix according to the following formula; wherein the formula is:
wherein $m_{1,(i,j)}$, $m_{2,(i,j)}$ and $m'_{(i,j)}$ are respectively the feature values at the $(i,j)$-th position of the topology global thread description semantic feature matrix, the global thread description semantic feature matrix, and the small-scale local derivation matrix.
Here, by computing the small-scale local derivation features between the topology global thread description semantic feature matrix $M_1$ and the global thread description semantic feature matrix $M_2$, the geometric approximation of the corresponding positions between the two matrices can mimic the physical mechanism of mutual expression between data sequences, enhancing the local nonlinear dependence across feature-domain positions through a position-wise point regression between the feature matrices. Thus, using the small-scale local derivation matrix $M'$ as a weighting matrix and performing a position-wise point multiplication with the topology global thread description semantic feature matrix $M_1$ to weight its feature values can improve the expression effect of the topology global thread description semantic feature matrix on the small-scale context coding semantics of the description of each thread to be assigned, thereby improving the accuracy of the classification results obtained when its row vectors are passed through the classifier. In this way, thread priorities can be reasonably and adaptively assigned based on the feature distribution among the threads, so that the assigned threads are adapted to the tasks to be processed, improving processing efficiency and effect.
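The weighting step itself can be sketched as a position-wise (Hadamard) product. The patent's exact small-scale local derivation formula is not reproduced here, so the `toy_derive` function below is an explicitly hypothetical placeholder; only the weighting mechanism follows the text.

```python
import numpy as np

def optimise_topology_matrix(M1, M2, derive):
    """Apply the small-scale local derivation matrix as a position-wise weight.

    M1: topology global thread description semantic feature matrix
    M2: global thread description semantic feature matrix
    derive: position-wise function producing the weight matrix from M1 and M2
    """
    W = derive(M1, M2)   # small-scale local derivation (weighted feature) matrix
    return M1 * W        # position-wise point multiplication (feature weighting)

def toy_derive(a, b):
    # Hypothetical stand-in that emphasises positions where the two matrices
    # agree locally. This is NOT the patent's derivation formula.
    return np.exp(-np.abs(a - b))

rng = np.random.default_rng(3)
M1 = rng.normal(size=(4, 6))
M2 = rng.normal(size=(4, 6))
M_opt = optimise_topology_matrix(M1, M2, toy_derive)
print(M_opt.shape)  # (4, 6)
```

Each row of the optimized matrix then goes on to the classifier in step S180.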
In step S180 of the embodiment of the present application, each row vector in the optimized topology global thread description semantic feature matrix is respectively passed through a classifier to obtain a plurality of probability values. That is to say, each row vector in the optimized topology global thread description semantic feature matrix is classified as a classification feature vector by the classifier to obtain a probability value representing the description of each thread to be assigned.
In a specific example of the present application, passing each row vector in the optimized topology global thread description semantic feature matrix through a classifier to obtain a plurality of probability values includes: processing each row vector in the optimized topology global thread description semantic feature matrix with the classifier according to the following formula to obtain the plurality of probability values; wherein the formula is:

$$p = \operatorname{softmax}\{(W_n, B_n) : \cdots : (W_1, B_1) \mid X\}$$

wherein $X$ represents each row vector in the optimized topology global thread description semantic feature matrix, $W_1$ to $W_n$ are the weight matrices of the fully connected layers of the classifier, $B_1$ to $B_n$ represent the bias vectors of the fully connected layers of the classifier, and $p$ represents each of the plurality of probability values.
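The classifier step can be sketched as a stack of fully connected layers followed by a softmax. Layer sizes and random parameters are invented for illustration; a real classifier would be trained on labelled thread descriptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                       # numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify_row(x, layers):
    """Pass one row vector through fully connected layers, then softmax.

    layers: list of (W, b) pairs, one per fully connected layer.
    Returns the probability of the positive ('assign high priority') class.
    """
    h = x
    for W, b in layers[:-1]:
        h = np.maximum(W @ h + b, 0.0)    # hidden layers with ReLU
    W, b = layers[-1]
    return softmax(W @ h + b)[1]

rng = np.random.default_rng(4)
layers = [(rng.normal(size=(16, 8)), rng.normal(size=16)),
          (rng.normal(size=(2, 16)), rng.normal(size=2))]
p = classify_row(rng.normal(size=8), layers)
print(0.0 <= p <= 1.0)  # True
```

Running this per row of the optimized matrix yields one probability value per thread to be assigned.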
In step S190 of the embodiment of the present application, the priority of each thread to be assigned is determined based on the ranking of the probability values. Therefore, the reasonable distribution of the thread priority can be adaptively carried out based on the characteristic distribution among the threads, so that the distributed threads are adaptive to the tasks to be processed, and the processing efficiency and effect are improved.
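Step S190 amounts to ranking the probability values and mapping rank to priority; the convention below that rank 0 is the most urgent priority is an assumption for illustration.

```python
import numpy as np

def assign_priorities(probability_values):
    """Rank threads by classifier probability.

    The thread with the highest probability value receives priority 0
    (most urgent), the next highest receives 1, and so on.
    """
    order = np.argsort(probability_values)[::-1]   # indices, descending prob.
    priorities = np.empty(len(order), dtype=int)
    priorities[order] = np.arange(len(order))
    return priorities

probs = np.array([0.2, 0.9, 0.5])
print(assign_priorities(probs))  # [2 0 1]
```

Here thread 1 (probability 0.9) receives the highest priority, matching the ranking-based determination in the claim.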
In summary, according to the method for controlling the running of the threads in the embodiment of the present application, descriptions of the threads to be assigned are first obtained, where the descriptions of the threads to be assigned are scheduling contexts matched with sporadic task parameters related to the threads to be assigned; then, mining semantic feature information in the description of the threads and feature distribution information among the threads through a deep learning technology to obtain a topology global thread description semantic feature matrix; then, each row vector in the topology global thread description semantic feature matrix is respectively subjected to a classifier to obtain a plurality of probability values, and finally, the priority of each thread to be assigned is determined based on the sequencing of the probability values, so that the reasonable distribution of the thread priority is adaptively carried out based on the feature distribution among the threads, the distributed threads are adaptive to the tasks to be processed, and the processing efficiency and effect are improved.
An exemplary system: FIG. 5 is a block diagram illustrating a system for controlling the operation of threads according to an embodiment of the present disclosure. As shown in fig. 5, the system 100 for controlling the operation of threads according to the embodiment of the present application includes: a description obtaining module 110, configured to obtain descriptions of threads to be assigned, where the descriptions of the threads to be assigned are scheduling contexts matched with the sporadic task parameters related to the threads to be assigned; a context encoding module 120, configured to pass descriptions of the threads to be assigned through a context encoder including an embedded layer, respectively, to obtain a plurality of thread description semantic feature vectors; a euclidean distance calculating module 130, configured to calculate a euclidean distance between every two thread description semantic feature vectors in the thread description semantic feature vectors to obtain a distance topology matrix; a convolutional coding module 140, configured to pass the distance topology matrix through a convolutional neural network model as a feature extractor to obtain a distance topology feature matrix; a two-dimensional arrangement module 150, configured to perform two-dimensional arrangement on the multiple thread description semantic feature vectors to obtain a global thread description semantic feature matrix; the graph neural coding module 160 is configured to pass the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model to obtain a topology global thread description semantic feature matrix; the matrix optimization module 170 is configured to perform small-scale feature correlation expression enhancement on the topology global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topology global thread 
description semantic feature matrix; a probability value obtaining module 180, configured to pass each row vector in the optimized topology global thread description semantic feature matrix through a classifier respectively to obtain multiple probability values; and a priority determination module 190, configured to determine a priority of each thread to be assigned based on the ranking of the plurality of probability values.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the operation control system of the thread described above have been described in detail in the above description of the operation control method of the thread with reference to fig. 1 to 4, and thus, a repetitive description thereof will be omitted.
An exemplary electronic device: next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 6.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the application.
As shown in fig. 6, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 13 may include, for example, a keyboard, a mouse, and the like.
The output device 14 can output various information to the outside, including a plurality of probability values, priorities of respective threads to be assigned, and the like. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Exemplary computer program products and computer-readable storage media: in addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps of the method of operation control of a thread according to various embodiments of the present application described in the "exemplary methods" section of this specification above.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++ or the like, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the steps of the method of controlling the operation of a thread according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present application in conjunction with specific embodiments. It should be noted, however, that the advantages and effects mentioned in the present application are merely examples, not limitations, and should not be considered essential to the various embodiments of the present application. Furthermore, the specific details disclosed above are provided only for purposes of illustration and ease of understanding, and are not intended to limit the application to those details.
Claims (10)
1. A method for controlling the operation of a thread, comprising: obtaining a description of each thread to be assigned, wherein the description of a thread to be assigned is a scheduling context matched with the sporadic task parameters associated with that thread; passing the description of each thread to be assigned through a context encoder comprising an embedding layer to obtain a plurality of thread description semantic feature vectors; calculating the Euclidean distance between every two of the plurality of thread description semantic feature vectors to obtain a distance topology matrix; passing the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a distance topology feature matrix; two-dimensionally arranging the plurality of thread description semantic feature vectors to obtain a global thread description semantic feature matrix; passing the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model to obtain a topological global thread description semantic feature matrix; performing small-scale feature correlation expression strengthening on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix; passing each row vector in the optimized topological global thread description semantic feature matrix through a classifier to obtain a plurality of probability values; and determining the priority of each thread to be assigned based on the ranking of the plurality of probability values.
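Claim 1 ends by ranking the classifier's probability values to assign thread priorities. The following minimal NumPy sketch illustrates only that final ranking step; the function name and toy probability values are invented for illustration and are not part of the patent:

```python
import numpy as np

def assign_priorities(prob_values):
    """Rank threads by classifier probability: the thread with the
    highest probability value receives priority 0 (highest)."""
    order = np.argsort(prob_values)[::-1]      # indices in descending probability
    priorities = np.empty_like(order)
    priorities[order] = np.arange(len(order))  # rank position for each thread
    return priorities

# Three hypothetical threads with classifier outputs 0.2, 0.9, 0.5:
print(assign_priorities(np.array([0.2, 0.9, 0.5])).tolist())  # [2, 0, 1]
```

Here the second thread (probability 0.9) is assigned the highest priority, matching the claim's "ranking of the probability values".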
2. The method for controlling the operation of a thread according to claim 1, wherein the passing of the description of each thread to be assigned through a context encoder comprising an embedding layer to obtain a plurality of thread description semantic feature vectors comprises: converting the description of each thread to be assigned into embedding vectors using the embedding layer of the context encoder to obtain a sequence of embedding vectors corresponding to the description of each thread to be assigned; performing global context-based semantic encoding on the sequence of embedding vectors corresponding to the description of each thread to be assigned using the transformer-based Bert model of the context encoder to obtain a plurality of feature vectors corresponding to the description of each thread to be assigned; and concatenating the plurality of feature vectors corresponding to the description of each thread to be assigned to obtain the plurality of thread description semantic feature vectors.
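The encoder of claim 2 embeds each token of a thread description and then pools the embedded sequence into one semantic feature vector. The sketch below is a toy stand-in: a random embedding table and mean pooling replace the transformer-based Bert model, and the vocabulary size, embedding dimension, and token ids are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMBED_DIM = 1000, 16
embedding = rng.normal(size=(VOCAB, EMBED_DIM))  # toy stand-in for the embedding layer

def encode_description(token_ids):
    """Toy context encoder: embed each token of a thread description,
    then pool the sequence into one semantic feature vector.  A real
    implementation would run the embedded sequence through the
    transformer-based Bert model before pooling."""
    seq = embedding[np.asarray(token_ids)]  # (sequence_len, EMBED_DIM)
    return seq.mean(axis=0)                 # (EMBED_DIM,)

# Encode three hypothetical thread descriptions of different lengths.
feature_vectors = [encode_description(ids)
                   for ids in ([1, 5, 9], [2, 5], [7, 8, 9, 10])]
print(len(feature_vectors), feature_vectors[0].shape)  # 3 (16,)
```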
3. The method according to claim 2, wherein the calculating of the Euclidean distance between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors to obtain a distance topology matrix comprises: calculating the Euclidean distances between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors according to the following formula to obtain a plurality of Euclidean distances:

d(V_a, V_b) = sqrt( Σ_i (v_a,i − v_b,i)² )

wherein V_a and V_b respectively represent any two thread description semantic feature vectors in the plurality of thread description semantic feature vectors, d(V_a, V_b) represents the Euclidean distance between every two of the plurality of thread description semantic feature vectors, and v_a,i and v_b,i respectively represent the feature values at each position of the two thread description semantic feature vectors.
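The pairwise Euclidean distances of claim 3 can be collected into the distance topology matrix with a few lines of NumPy. This is a minimal sketch; the function name and the toy 4-dimensional vectors are invented for illustration:

```python
import numpy as np

def distance_topology_matrix(vectors):
    """Pairwise Euclidean distances between thread description semantic
    feature vectors (one vector per row):
        d(V_a, V_b) = sqrt(sum_i (v_a,i - v_b,i)**2)
    """
    V = np.asarray(vectors, dtype=float)
    # Broadcast (n, 1, d) against (1, n, d) to get all pairwise differences.
    diff = V[:, None, :] - V[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Three toy 4-dimensional feature vectors.
D = distance_topology_matrix([[0, 0, 0, 0],
                              [3, 4, 0, 0],
                              [0, 0, 0, 5]])
print(D[0, 1])  # 5.0
```

The resulting matrix is symmetric with a zero diagonal, which is what makes it usable as an adjacency-like input to the downstream feature extractor.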
4. The method for controlling the operation of a thread according to claim 3, wherein the passing of the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a distance topology feature matrix comprises: performing, by each layer of the convolutional neural network model, the following operations on input data during forward propagation through that layer: performing convolution processing based on a two-dimensional convolution kernel on the input data using the convolution unit of each layer of the convolutional neural network model to obtain a convolution feature map; performing pooling processing along the channel dimension on the convolution feature map using the pooling unit of each layer of the convolutional neural network model to obtain a pooled feature map; and performing nonlinear activation on the feature values at each position in the pooled feature map using the activation unit of each layer of the convolutional neural network model to obtain an activated feature map; wherein the input to the first layer of the convolutional neural network model is the distance topology matrix, and the output of the last layer of the convolutional neural network model is the distance topology feature matrix.
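One layer of claim 4 (convolution with a 2-D kernel, pooling along the channel dimension, nonlinear activation) can be sketched as follows. This is a toy stand-in, not the patent's network: the kernel count, matrix size, and ReLU choice are assumptions made for illustration:

```python
import numpy as np

def conv2d(x, kernels):
    """Valid 2-D convolution (cross-correlation, as is conventional in
    CNNs): x is (H, W), kernels is (C, kh, kw); returns (C, H-kh+1, W-kw+1)."""
    C, kh, kw = kernels.shape
    H, W = x.shape
    out = np.empty((C, H - kh + 1, W - kw + 1))
    for c in range(C):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[c, i, j] = (x[i:i + kh, j:j + kw] * kernels[c]).sum()
    return out

def cnn_layer(x, kernels):
    fmap = conv2d(x, kernels)       # convolution unit: 2-D kernels
    pooled = fmap.mean(axis=0)      # pooling unit: pool along the channel dimension
    return np.maximum(pooled, 0.0)  # activation unit: ReLU nonlinearity (assumed)

rng = np.random.default_rng(1)
distance_matrix = rng.random((8, 8))  # toy distance topology matrix
out = cnn_layer(distance_matrix, rng.normal(size=(4, 3, 3)))
print(out.shape)  # (6, 6)
```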
5. The method for controlling the operation of a thread according to claim 4, wherein the passing of the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model to obtain a topological global thread description semantic feature matrix comprises: processing, by the graph neural network, the global thread description semantic feature matrix and the distance topology feature matrix through learnable neural network parameters to obtain the topological global thread description semantic feature matrix, which contains both the Euclidean-distance topology features and the semantic understanding feature information of each thread description.
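Claim 5 fuses node features (the global thread description semantic feature matrix) with graph structure (the distance topology feature matrix) through learnable parameters. A minimal graph-convolution step in that spirit is sketched below; the normalization scheme, self-loops, and ReLU are common GNN conventions assumed here, not details taken from the patent:

```python
import numpy as np

def gnn_layer(features, adjacency, W):
    """One graph-convolution step: aggregate each node's neighbourhood
    via the (distance-derived) adjacency matrix, then apply the
    learnable weight matrix W and a ReLU nonlinearity."""
    A = adjacency + np.eye(len(adjacency))          # add self-loops (assumed)
    deg = A.sum(axis=1, keepdims=True)
    return np.maximum((A / deg) @ features @ W, 0)  # row-normalised aggregation

rng = np.random.default_rng(4)
X = rng.random((5, 16))                   # toy global semantic feature matrix
A = rng.random((5, 5)); A = (A + A.T) / 2 # toy symmetric topology scores
out = gnn_layer(X, A, rng.normal(size=(16, 16)))
print(out.shape)  # (5, 16) — topological global semantic feature matrix
```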
6. The method for controlling the operation of a thread according to claim 5, wherein the performing of small-scale feature correlation expression strengthening on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix comprises: calculating a small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix; and multiplying, position by position, the topological global thread description semantic feature matrix by the small-scale local derivative matrix serving as a weighted feature matrix to obtain the optimized topological global thread description semantic feature matrix.
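The weighting step of claim 6 is a position-wise (Hadamard) product of the topological feature matrix with a derivative-based weight matrix. The patent's exact small-scale local derivative formula is not reproduced in this text, so the sketch below substitutes a simple elementwise difference quotient as a clearly labeled stand-in:

```python
import numpy as np

def enhance(topo, global_):
    """Position-wise weighting in the spirit of claim 6.  The weight
    matrix here is a stand-in elementwise difference quotient, NOT the
    patent's small-scale local derivative formula (which this text
    does not reproduce)."""
    weight = (topo - global_) / (np.abs(global_) + 1e-6)  # stand-in derivative
    return weight * topo  # multiply by position point (Hadamard product)

rng = np.random.default_rng(2)
F = rng.random((4, 16))  # toy topological global semantic feature matrix
G = rng.random((4, 16))  # toy global semantic feature matrix
optimized = enhance(F, G)
print(optimized.shape)  # (4, 16)
```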
7. The method according to claim 6, wherein the calculating of the small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix comprises: calculating the small-scale local derivative matrix between the topological global thread description semantic feature matrix and the global thread description semantic feature matrix according to the following formula; wherein the formula is:
8. The method for controlling the operation of a thread according to claim 7, wherein the passing of each row vector in the optimized topology global thread description semantic feature matrix through a classifier to obtain a plurality of probability values comprises: processing each row vector in the optimized topology global thread description semantic feature matrix with the classifier according to the following formula to obtain the plurality of probability values; wherein the formula is:

O = softmax{(W_n, B_n) : … : (W_1, B_1) | V}

wherein V represents each row vector in the optimized topology global thread description semantic feature matrix, W_1 to W_n are the weight matrices of the fully connected layers of the classifier, B_1 to B_n are the bias vectors of the fully connected layers of the classifier, and O represents each of the plurality of probability values.
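The classifier of claim 8 chains fully connected layers (W_i, B_i) and applies a softmax to each row vector. A minimal sketch with two toy layers follows; the layer sizes and random parameters are invented for illustration:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(row, layers):
    """Pass one row vector of the optimized feature matrix through
    cascaded fully connected layers (W_i, B_i), then softmax, as in
    claim 8's O = softmax{(W_n, B_n) : ... : (W_1, B_1) | V}."""
    v = row
    for W, b in layers:
        v = W @ v + b
    return softmax(v)

rng = np.random.default_rng(3)
layers = [(rng.normal(size=(8, 16)), rng.normal(size=8)),   # W_1, B_1
          (rng.normal(size=(2, 8)), rng.normal(size=2))]    # W_2, B_2
p = classify(rng.normal(size=16), layers)
print(round(float(p.sum()), 6))  # 1.0 — outputs form a probability distribution
```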
9. A system for controlling the operation of a thread, comprising: a description acquisition module for obtaining a description of each thread to be assigned, wherein the description of a thread to be assigned is a scheduling context matched with the sporadic task parameters associated with that thread; a context encoding module for passing the description of each thread to be assigned through a context encoder comprising an embedding layer to obtain a plurality of thread description semantic feature vectors; a Euclidean distance calculation module for calculating the Euclidean distance between every two of the plurality of thread description semantic feature vectors to obtain a distance topology matrix; a convolutional encoding module for passing the distance topology matrix through a convolutional neural network model serving as a feature extractor to obtain a distance topology feature matrix; a two-dimensional arrangement module for two-dimensionally arranging the plurality of thread description semantic feature vectors to obtain a global thread description semantic feature matrix; a graph neural encoding module for passing the global thread description semantic feature matrix and the distance topology feature matrix through a graph neural network model to obtain a topological global thread description semantic feature matrix; a matrix optimization module for performing small-scale feature correlation expression strengthening on the topological global thread description semantic feature matrix based on the global thread description semantic feature matrix to obtain an optimized topological global thread description semantic feature matrix; a probability value acquisition module for passing each row vector in the optimized topological global thread description semantic feature matrix through a classifier to obtain a plurality of probability values; and a priority determination module for determining the priority of each thread to be assigned based on the ranking of the plurality of probability values.
10. The thread operation control system according to claim 9, wherein the Euclidean distance calculation module is configured to: calculate the Euclidean distances between every two thread description semantic feature vectors in the plurality of thread description semantic feature vectors according to the following formula to obtain a plurality of Euclidean distances:

d(V_a, V_b) = sqrt( Σ_i (v_a,i − v_b,i)² )

wherein V_a and V_b respectively represent any two thread description semantic feature vectors in the plurality of thread description semantic feature vectors, d(V_a, V_b) represents the Euclidean distance between every two of the plurality of thread description semantic feature vectors, and v_a,i and v_b,i respectively represent the feature values at each position of the two thread description semantic feature vectors.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310077120.0A CN115878330B (en) | 2023-02-08 | 2023-02-08 | Thread operation control method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310077120.0A CN115878330B (en) | 2023-02-08 | 2023-02-08 | Thread operation control method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115878330A true CN115878330A (en) | 2023-03-31 |
CN115878330B CN115878330B (en) | 2023-05-30 |
Family
ID=85760855
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310077120.0A Active CN115878330B (en) | 2023-02-08 | 2023-02-08 | Thread operation control method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115878330B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116663568A (en) * | 2023-07-31 | 2023-08-29 | 腾云创威信息科技(威海)有限公司 | Critical task identification system and method based on priority |
CN116957304A (en) * | 2023-09-20 | 2023-10-27 | 飞客工场科技(北京)有限公司 | Unmanned aerial vehicle group collaborative task allocation method and system |
CN118069331A (en) * | 2024-04-24 | 2024-05-24 | 湖北华中电力科技开发有限责任公司 | Intelligent acquisition task scheduling method and device based on digital twinning |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180308196A1 (en) * | 2017-04-21 | 2018-10-25 | Intel Corporation | Dynamic thread execution arbitration |
CN109144716A (en) * | 2017-06-28 | 2019-01-04 | 中兴通讯股份有限公司 | Operating system dispatching method and device, equipment based on machine learning |
CN109886407A (en) * | 2019-02-27 | 2019-06-14 | 上海商汤智能科技有限公司 | Data processing method, device, electronic equipment and computer readable storage medium |
CN113269323A (en) * | 2020-02-17 | 2021-08-17 | 北京达佳互联信息技术有限公司 | Data processing method, processing device, electronic equipment and storage medium |
CN114741186A (en) * | 2022-03-28 | 2022-07-12 | 慧之安信息技术股份有限公司 | Thread pool adaptive capacity adjustment method and device based on deep learning |
CN115373813A (en) * | 2022-04-03 | 2022-11-22 | 福建福清核电有限公司 | Scheduling method and system based on GPU virtualization in cloud computing environment and electronic equipment |
Non-Patent Citations (3)
Title |
---|
HAOYU ZHU: "A Forward-wave Neural Network for Solving the Priority Shortest Path Problem", EITCE '21: Proceedings of the 2021 5th International Conference on Electronic Information Technology and Computer Engineering, pages 877-881 *
ZHOU Changbao: "Research on Cluster Scheduling Configuration Optimization Based on Deep Reinforcement Learning", Wanfang dissertation *
WANG Lei; LIU Daofu; CHEN Yunji; CHEN Tianshi; LI Ling: "A Survey of Shared Resource Allocation and Scheduling Strategies for Chip Multiprocessors", Journal of Computer Research and Development, no. 10, pages 186-201 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116663568A (en) * | 2023-07-31 | 2023-08-29 | 腾云创威信息科技(威海)有限公司 | Critical task identification system and method based on priority |
CN116663568B (en) * | 2023-07-31 | 2023-11-17 | 腾云创威信息科技(威海)有限公司 | Critical task identification system and method based on priority |
CN116957304A (en) * | 2023-09-20 | 2023-10-27 | 飞客工场科技(北京)有限公司 | Unmanned aerial vehicle group collaborative task allocation method and system |
CN116957304B (en) * | 2023-09-20 | 2023-12-26 | 飞客工场科技(北京)有限公司 | Unmanned aerial vehicle group collaborative task allocation method and system |
CN118069331A (en) * | 2024-04-24 | 2024-05-24 | 湖北华中电力科技开发有限责任公司 | Intelligent acquisition task scheduling method and device based on digital twinning |
Also Published As
Publication number | Publication date |
---|---|
CN115878330B (en) | 2023-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115203380B (en) | Text processing system and method based on multi-mode data fusion | |
CN110032633B (en) | Multi-turn dialogue processing method, device and equipment | |
KR102434726B1 (en) | Treatment method and device | |
CN115878330A (en) | Thread operation control method and system | |
CN111105029B (en) | Neural network generation method, generation device and electronic equipment | |
CN109766557B (en) | Emotion analysis method and device, storage medium and terminal equipment | |
CN116415654A (en) | Data processing method and related equipment | |
CN115796173A (en) | Data processing method and system for supervision submission requirements | |
CN114676234A (en) | Model training method and related equipment | |
CN115759658B (en) | Enterprise energy consumption data management system suitable for smart city | |
CN111651573B (en) | Intelligent customer service dialogue reply generation method and device and electronic equipment | |
CN115827257B (en) | CPU capacity prediction method and system for processor system | |
CN115373813A (en) | Scheduling method and system based on GPU virtualization in cloud computing environment and electronic equipment | |
US20240135174A1 (en) | Data processing method, and neural network model training method and apparatus | |
CN116151604A (en) | Office system flow analysis system and method under web environment | |
CN111027681B (en) | Time sequence data processing model training method, data processing method, device and storage medium | |
CN115118675A (en) | Method and system for accelerating data stream transmission based on intelligent network card equipment | |
CN110913229B (en) | RNN-based decoder hidden state determination method, device and storage medium | |
CN117746186A (en) | Training method of low-rank adaptive model, text image generation method and system | |
CN117744855A (en) | Load prediction system and method based on machine learning | |
CN113111971A (en) | Intelligent processing method and device for classification model, electronic equipment and medium | |
CN110555099B (en) | Computer-implemented method and apparatus for language processing using neural networks | |
CN110851600A (en) | Text data processing method and device based on deep learning | |
CN116739219A (en) | Melt blown cloth production management system and method thereof | |
WO2020005599A1 (en) | Trend prediction based on neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||