CN111897639B - Image augmentation method, image augmentation device, computer device, and storage medium - Google Patents


Info

Publication number
CN111897639B
CN111897639B (application CN202010744544.4A)
Authority
CN
China
Prior art keywords
gpu
image
augmentation
task
augmented
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010744544.4A
Other languages
Chinese (zh)
Other versions
CN111897639A (en)
Inventor
叶明�
陈欣
张国辉
宋晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202010744544.4A priority Critical patent/CN111897639B/en
Priority to PCT/CN2020/111789 priority patent/WO2021139177A1/en
Publication of CN111897639A publication Critical patent/CN111897639A/en
Application granted granted Critical
Publication of CN111897639B publication Critical patent/CN111897639B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/547Remote procedure calls [RPC]; Web services

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses an image augmentation method, an image augmentation device, a computer device, and a storage medium. The method belongs to the technical field of artificial intelligence and comprises the following steps: receiving an image augmentation request; judging whether the image augmentation task set contains both GPU augmentation tasks and non-GPU augmentation tasks; if so, obtaining a sample image from the sample library and distributing it to a preset CPU; storing the intermediate processing data sent by the CPU into an intermediate data set; acquiring intermediate processing data from the intermediate data set and distributing it to a GPU; and storing the first augmented image data sent by the GPU into a preset augmented data set and sending the augmented data set to a model training server. According to the embodiment of the invention, the GPU augmentation tasks in the image augmentation request can be distributed to the GPU for processing, while the non-GPU augmentation tasks that the GPU cannot process are handed to the CPU for processing, so that the image augmentation server has strong applicability and maximizes operational efficiency.

Description

Image augmentation method, image augmentation device, computer device, and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an image augmentation method, an image augmentation device, computer equipment and a storage medium.
Background
Image augmentation refers to expanding the size of a training data set by making a series of random changes to training images so as to produce similar but different training samples. Image augmentation is currently an important link in artificial intelligence model training: by applying various augmentations to sample images, the final model accuracy can be significantly improved, and the model's generalization capability across different business scenarios can be effectively enhanced.
Currently, image augmentation can be performed by CPU (Central Processing Unit) or GPU (Graphics Processing Unit) computation. The CPU can handle most image augmentation tasks, but its processing speed is slow. The GPU processes quickly, but can handle only a small portion of image augmentation tasks and therefore has poor applicability.
Disclosure of Invention
The embodiment of the invention provides an image augmentation method, an image augmentation device, a computer device, and a storage medium, aiming to solve the prior-art problems that CPU image augmentation is slow and GPU image augmentation has poor applicability.
In a first aspect, an embodiment of the present invention provides an image augmentation method, including:
receiving an image augmentation request sent by a model training server, wherein the image augmentation request comprises an image augmentation task set, and the image augmentation task set comprises at least one image augmentation task;
judging whether the image augmentation task set simultaneously contains GPU augmentation tasks and non-GPU augmentation tasks;
if the image augmentation task set simultaneously comprises a GPU augmentation task and a non-GPU augmentation task, obtaining a sample image from a preset sample library, and distributing the sample image to a preset CPU (central processing unit) so that the CPU executes the non-GPU augmentation task on the sample image;
storing intermediate processing data sent by the CPU into a preset intermediate data set, wherein the intermediate processing data are obtained after the CPU executes the non-GPU augmentation task on the sample image;
acquiring intermediate processing data from the intermediate data set, and distributing the intermediate processing data to a preset GPU so that the GPU executes the GPU augmentation task on the intermediate processing data;
and storing the first augmented image data sent by the GPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the first augmented image data is obtained after the GPU executes the GPU augmented task on the intermediate processing data.
In a second aspect, an embodiment of the present invention further provides an image augmenting apparatus, including:
a receiving unit, configured to receive an image augmentation request sent by a model training server, wherein the image augmentation request comprises an image augmentation task set, and the image augmentation task set comprises at least one image augmentation task;
the first judgment unit is used for judging whether the image augmentation task set simultaneously contains a GPU augmentation task and a non-GPU augmentation task;
the first distribution unit is used for acquiring a sample image from a preset sample library and distributing the sample image to a preset CPU (central processing unit) if the image augmentation task set simultaneously contains a GPU augmentation task and a non-GPU augmentation task, so that the CPU executes the non-GPU augmentation task on the sample image;
a first storage unit, configured to store intermediate processing data sent by the CPU into a preset intermediate data set, where the intermediate processing data is obtained after the CPU executes the non-GPU augmentation task on the sample image;
the second distribution unit is used for acquiring intermediate processing data from the intermediate data set and distributing the intermediate processing data to a preset GPU so that the GPU executes the GPU augmentation task on the intermediate processing data;
and the first transmitting unit is used for storing the first augmented image data transmitted by the GPU into a preset augmented data set and transmitting the augmented data set to a model training server, wherein the first augmented image data is obtained after the GPU executes the GPU augmented task on the intermediate processing data.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the above method when executing the computer program.
In a fourth aspect, the present invention also provides a computer-readable storage medium, which stores a computer program, and the computer program can implement the above method when being executed by a processor.
The embodiment of the invention provides an image augmentation method, an image augmentation device, computer equipment and a storage medium. Wherein the method comprises the following steps: receiving an image augmentation request sent by a model training server, wherein the image augmentation request comprises an image augmentation task set, and the image augmentation task set comprises at least one image augmentation task; judging whether the image augmentation task set simultaneously contains GPU augmentation tasks and non-GPU augmentation tasks; if the image augmentation task set simultaneously comprises a GPU augmentation task and a non-GPU augmentation task, acquiring a sample image from a preset sample library, and distributing the sample image to a preset CPU so that the CPU executes the non-GPU augmentation task on the sample image; storing intermediate processing data sent by the CPU into a preset intermediate data set, wherein the intermediate processing data are obtained after the CPU executes the non-GPU augmentation task on the sample image; acquiring intermediate processing data from the intermediate data set, and distributing the intermediate processing data to a preset GPU so that the GPU executes the GPU augmentation task on the intermediate processing data; and storing the first augmented image data sent by the GPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the first augmented image data is obtained after the GPU executes the GPU augmented task on the intermediate processing data. According to the embodiment of the invention, the GPU augmentation tasks in the image augmentation request can be distributed to the GPU for processing, and the non-GPU augmentation tasks which cannot be processed by the GPU are handed to the CPU for processing, so that the image augmentation server has strong applicability and can improve the operation efficiency to the greatest extent.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of an image augmentation method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of an image augmentation method according to an embodiment of the present invention;
FIG. 3 is a schematic sub-flow chart of an image augmentation method according to an embodiment of the present invention;
FIG. 4 is a sub-flowchart of an image augmentation method according to an embodiment of the present invention;
FIG. 5 is a sub-flowchart of an image augmentation method according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating an image augmentation method according to another embodiment of the present invention;
FIG. 7 is a block diagram illustrating an image augmentation apparatus according to an embodiment of the present invention;
fig. 8 is a schematic block diagram of a first sending unit of an image augmentation apparatus according to an embodiment of the present invention;
fig. 9 is a schematic block diagram of a first distribution unit of an image augmentation apparatus according to an embodiment of the present invention;
fig. 10 is a schematic block diagram of a second distribution unit of an image augmentation apparatus according to an embodiment of the present invention;
FIG. 11 is a block diagram of an image augmenting device according to another embodiment of the invention;
fig. 12 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination", or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [described condition or event]", or "in response to detecting [described condition or event]".
Referring to fig. 1 and fig. 2, fig. 1 is a schematic view of an application scenario of an image augmentation method according to an embodiment of the present invention. Fig. 2 is a schematic flowchart of an image augmentation method according to an embodiment of the present invention. The image augmentation method is applied to the image augmentation server 1, and the image augmentation server 1 responds to an image augmentation request sent by the model training server 2 to process an image augmentation task.
Fig. 2 is a schematic flow chart of an image augmentation method according to an embodiment of the present invention. As shown, the method includes the following steps S1-S6.
S1, receiving an image augmentation request sent by a model training server, wherein the image augmentation request comprises an image augmentation task set, and the image augmentation task set comprises at least one image augmentation task.
In a specific implementation, the model training server refers to a server that performs a model training task. In the embodiment of the invention, the model training task and the image augmentation task are deployed on different servers to be executed.
The image augmentation server receives the image augmentation request sent by the model training server. The image augmentation server is a server that performs image augmentation tasks. The image augmentation request comprises an image augmentation task set, which comprises at least one image augmentation task. Image augmentation tasks may include, for example, flipping, cropping, color changing (adjusting brightness, contrast, saturation, and hue), superimposing, and combining of images. At present, all image augmentation tasks can be handled by the CPU, while only a small portion (e.g., scaling and cropping) can be processed by the GPU. In the embodiment of the invention, an image augmentation task that can be processed by the GPU is defined as a GPU augmentation task, and an image augmentation task that cannot be processed by the GPU is defined as a non-GPU augmentation task.
Further, the image augmentation request includes the storage location of the sample images and parameter settings for image augmentation. For example, the probability with which an augmentation is applied may be selected between 0.0 and 1.0, and each augmentation may carry configurable parameters (e.g., brightness = [0.8, 1.2] in brightness-contrast augmentation, meaning the brightness factor is drawn at random between 0.8 and 1.2 on each augmentation).
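Such a request might be represented as in the sketch below. All field names here are illustrative assumptions; the patent does not specify a concrete wire format for the request.

```python
import random

# Hypothetical structure of an image augmentation request; every field
# name is an illustrative assumption, not taken from the patent.
augmentation_request = {
    "sample_library_path": "/data/samples",   # storage location of the sample images
    "tasks": [
        {"name": "flip", "probability": 0.5},  # applied with probability 0.5
        {"name": "brightness_contrast",
         "probability": 1.0,
         "brightness": [0.8, 1.2]},            # factor drawn from [0.8, 1.2]
    ],
}

def sample_brightness(task):
    """Draw a random brightness factor from the task's configured range."""
    lo, hi = task["brightness"]
    return random.uniform(lo, hi)
```

On each augmentation pass, `sample_brightness` yields a fresh factor inside the configured range, matching the "dynamic at random between 0.8 and 1.2" behavior described above.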
And S2, judging whether the image augmentation task set simultaneously contains GPU augmentation tasks and non-GPU augmentation tasks.
In specific implementation, whether the image augmentation task set simultaneously contains a GPU augmentation task and a non-GPU augmentation task is judged. It should be noted that the GPU augmented task and the non-GPU augmented task are preset by the user according to the processing capability of the GPU.
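The judgment in step S2 can be sketched as follows. Which tasks count as GPU augmentation tasks is configured by the user in advance; the concrete set below is an assumption for illustration only.

```python
# Tasks the GPU is assumed able to process (user-configured in practice).
GPU_TASKS = {"scale", "crop"}

def classify_task_set(tasks):
    """Sketch of step S2: return 'mixed', 'gpu_only', or 'cpu_only'."""
    has_gpu = any(t in GPU_TASKS for t in tasks)
    has_cpu = any(t not in GPU_TASKS for t in tasks)
    if has_gpu and has_cpu:
        return "mixed"          # both kinds present: CPU->GPU pipeline (S3-S6)
    return "gpu_only" if has_gpu else "cpu_only"
```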
And S3, if the image augmentation task set simultaneously contains a GPU augmentation task and a non-GPU augmentation task, obtaining a sample image from a preset sample library, and distributing the sample image to a preset CPU so that the CPU executes the non-GPU augmentation task on the sample image.
In specific implementation, if the image augmentation task set simultaneously contains a GPU augmentation task and a non-GPU augmentation task, a sample image is obtained from a preset sample library and is distributed to a preset CPU, and the CPU executes the non-GPU augmentation task on the sample image.
The image augmentation request indicates a storage location of the sample library, from which the image augmentation server can obtain the sample library.
Referring to FIG. 3, in one embodiment, the CPU includes a plurality of CPU execution threads, and the number of CPU execution threads is determined according to the number of CPU cores. In this embodiment, step S3 specifically includes the following steps S31-S32.
And S31, acquiring a CPU execution thread in an idle state in the CPU as a target CPU execution thread.
In specific implementation, a CPU execution thread in an idle state in the CPU is acquired as a target CPU execution thread.
It should be noted that after the CPU execution thread obtains a task, it will mark itself as a working state, read a corresponding sample image, and perform augmentation processing on the sample image. And after the task processing is finished, marking the self as an idle state.
S32, distributing the sample image to the target CPU execution thread.
In a specific implementation, the image augmentation server allocates a sample image to the target CPU execution thread. Namely, sample images are distributed to all CPU execution threads in an idle state in real time, and the processing efficiency can be effectively improved.
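The idle/working bookkeeping of steps S31-S32 can be modelled as below. This is a minimal sketch: a real pool would be sized by the CPU core count and run the augmentation work inside each thread, while `dispatch` here only models claiming an idle thread.

```python
import threading

class Worker:
    """Models one execution thread's idle/working state flag."""

    def __init__(self):
        self._idle = True
        self._lock = threading.Lock()

    def try_acquire(self):
        """Atomically claim this worker if idle, marking it as working."""
        with self._lock:
            if self._idle:
                self._idle = False
                return True
            return False

    def release(self):
        """Mark the worker idle again once its task is finished."""
        with self._lock:
            self._idle = True

def dispatch(workers):
    """Step S31/S32: return the first idle worker, or None if all busy."""
    for w in workers:
        if w.try_acquire():
            return w
    return None
```

The same bookkeeping applies to the GPU execution threads in steps S41-S42 described later.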
And S4, storing the intermediate processing data sent by the CPU into a preset intermediate data set, wherein the intermediate processing data are obtained after the CPU executes the non-GPU augmentation task on the sample image.
In specific implementation, the CPU executes the non-GPU augmentation task on the sample image to obtain intermediate processing data. And storing the intermediate processing data into a preset intermediate data set. The intermediate processing data in the intermediate data set also needs to be further subjected to image augmentation processing by the GPU.
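The "preset intermediate data set" can be modelled as a thread-safe queue sitting between the CPU stage (S4) and the GPU stage (S5). The queue representation is an assumption for illustration; the patent only calls it a preset intermediate data set.

```python
import queue

# Buffer between the CPU stage and the GPU stage.
intermediate_data_set = queue.Queue()

def store_intermediate(image_id, data):
    """Step S4: store the CPU's partial result for later GPU processing."""
    intermediate_data_set.put((image_id, data))

def fetch_intermediate():
    """Step S5: take one partial result to hand to an idle GPU thread."""
    return intermediate_data_set.get()
```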
And S5, acquiring intermediate processing data from the intermediate data set, and distributing the intermediate processing data to a preset GPU so that the GPU executes the GPU augmentation task on the intermediate processing data.
In specific implementation, the image augmentation server acquires intermediate processing data from the intermediate data set and distributes the intermediate processing data to a preset GPU. Executing, by the GPU, the GPU augmentation task on the intermediate processing data.
Referring to fig. 4, in an embodiment, the GPU includes a plurality of GPU execution threads, and the number of GPU execution threads is determined according to the number of GPU cores. Step S5 specifically includes the following steps S41-S42.
S41, acquiring the GPU execution thread in an idle state in the GPU as a target GPU execution thread.
In specific implementation, a GPU execution thread in an idle state in the GPU is obtained and used as a target GPU execution thread.
It should be noted that after the GPU execution thread obtains the task, it will mark itself as the working state, read the corresponding intermediate processing data, and further perform the augmentation processing on the intermediate processing data. After the task processing is completed, the GPU marks itself as an idle state.
And S42, distributing intermediate processing data to the target GPU execution thread.
In a specific implementation, the image augmentation server allocates intermediate processing data to the target GPU execution thread. In other words, intermediate processing data is distributed in real time to all GPU execution threads in the idle state, which can effectively improve processing efficiency.
And S6, storing the first augmented image data sent by the GPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the first augmented image data is obtained after the GPU executes the GPU augmented task on the intermediate processing data.
In specific implementation, the GPU performs the GPU augmentation task on the intermediate processing data to obtain the first augmented image data. The first augmented image data is the result of the image augmentation server executing all image augmentation tasks in the image augmentation task set on the sample image. The image augmentation server sends the augmented data set to the model training server, which uses the first augmented image data in the augmented data set to train the model.
Referring to fig. 5, in an embodiment, the step S6 specifically includes the following steps: S51-S53.
And S51, judging whether the model training server is deployed locally.
In specific implementation, whether the model training server is deployed locally is judged. Namely, whether the model training server and the image augmentation server are in the same local area network or not is judged.
And S52, if the model training server is deployed locally, the augmented data set is sent to the model training server in a memory sharing mode.
In a specific implementation, if the model training server is deployed locally, the augmented data set is sent to the model training server in a memory sharing manner.
Shared memory is an inter-process communication method in Unix. It is usually used for communication between multiple processes of one program, but in fact information can also be transferred between multiple programs through shared memory.
And S53, if the model training server is not deployed locally, transmitting the augmented data set to the model training server in an RPC transmission or HTTP transmission mode.
In specific implementation, if the model training server is not deployed locally, the augmented data set is sent to the model training server via RPC transmission or HTTP transmission. RPC (Remote Procedure Call) is a protocol for requesting services from a program on a remote computer over a network without knowledge of the underlying network technology. HTTP (HyperText Transfer Protocol) is one of the most widely used transport protocols on the Internet; it transfers data on top of the TCP/IP communication protocol.
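The transport decision of steps S51-S53 amounts to the branch sketched below. The locality flag and transport names are placeholders; a real implementation would use an actual shared-memory segment, an RPC stub, or an HTTP client.

```python
def choose_transport(training_server):
    """Sketch of S51-S53: pick how to send the augmented data set."""
    if training_server.get("is_local"):
        return "shared_memory"                        # S52: same host / LAN
    return training_server.get("transport", "http")   # S53: remote, RPC or HTTP
```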
By applying the technical scheme of the embodiment, the GPU augmentation tasks in the image augmentation requests can be distributed to the GPU for processing, and the non-GPU augmentation tasks which cannot be processed by the GPU are handed to the CPU for processing, so that the image augmentation server has strong applicability and can improve the operation efficiency to the maximum extent. Meanwhile, data augmentation and model training are processed on different servers respectively, and the data augmentation and the model training are not interfered with each other, so that the respective operation performance is kept, and the expandability is greatly improved. Furthermore, the CPU image augmentation and the GPU image augmentation are unified, and convenience in use is provided for users.
Fig. 6 is a flowchart illustrating an image augmentation method according to another embodiment of the present invention. As shown in fig. 6, the image augmentation method of the present embodiment includes steps S61 to S611. Steps S61 to S66 are similar to steps S1 to S6 in the above embodiments, and are not described herein again. The added steps S67 to S611 in the present embodiment are explained in detail below.
S67, if the image augmentation task set does not simultaneously contain the GPU augmentation task and the non-GPU augmentation task, judging whether the image augmentation task set only contains the non-GPU augmentation task.
In specific implementation, if the image augmentation task set does not contain both GPU augmentation tasks and non-GPU augmentation tasks, it is determined whether the set contains only non-GPU augmentation tasks. If both kinds are not present at the same time, the tasks are either all GPU augmentation tasks or all non-GPU augmentation tasks.
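Taken together, the routing across steps S3, S68, and S610 splits the task set into a CPU part and a GPU part and emits the processing stages in order. The sketch below assumes a user-configured set of GPU-capable tasks, as in step S2.

```python
def plan_pipeline(tasks, gpu_tasks):
    """Route a task set: CPU only, GPU only, or the mixed CPU->GPU pipeline."""
    cpu_part = [t for t in tasks if t not in gpu_tasks]
    gpu_part = [t for t in tasks if t in gpu_tasks]
    stages = []
    if cpu_part:
        stages.append(("cpu", cpu_part))   # non-GPU tasks run on the CPU first
    if gpu_part:
        stages.append(("gpu", gpu_part))   # GPU tasks finish the pipeline
    return stages
```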
And S68, if the image augmentation task set only contains the non-GPU augmentation tasks, acquiring sample images from a preset sample library, and distributing the sample images to a preset CPU (central processing unit) so that the CPU executes the non-GPU augmentation tasks on the sample images.
In specific implementation, if the image augmentation task set only contains non-GPU augmentation tasks, a sample image is obtained from a preset sample library, and the sample image is distributed to a preset CPU, so that the CPU executes the non-GPU augmentation tasks on the sample image.
The image augmentation request indicates a storage location of the sample library, from which the image augmentation server can obtain the sample library. When the image augmentation task set only contains non-GPU augmentation tasks, all the non-GPU augmentation tasks need to be executed by the CPU.
And S69, storing second augmented image data sent by the CPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the second augmented image data is obtained after the CPU executes the non-GPU augmented task on the sample image.
In specific implementation, since the image augmentation task set only includes non-GPU augmentation tasks, the CPU executes all the non-GPU augmentation tasks on the sample image to obtain second augmented image data. The second augmented image data is the final processing result of the sample image.
And the image augmentation server stores the second augmented image data into a preset augmented data set and sends the augmented data set to the model training server.
The specific sending method of the augmented data set is described in detail in steps S51 to S53 provided in the above embodiments, and is not described herein again.
S610, if the image augmentation task set only contains GPU augmentation tasks, obtaining sample images from a preset sample library, and distributing the sample images to a preset GPU, so that the GPU augmentation tasks are executed on the sample images by the GPU.
In specific implementation, if the image augmentation task set only contains a GPU augmentation task, a sample image is obtained from a preset sample library, and the sample image is distributed to a preset GPU, so that the GPU executes the GPU augmentation task on the sample image.
The image augmentation request indicates a storage location of the sample library, from which the image augmentation server can obtain the sample library. When the image augmentation task set only comprises GPU augmentation tasks, all the GPU augmentation tasks can be executed by the GPU, so that the operation speed is improved.
S611, storing third augmented image data sent by the GPU in a preset augmented data set, and sending the augmented data set to a model training server, where the third augmented image data is obtained after the GPU executes the GPU augmented task on the sample image.
In specific implementation, since the image augmentation task set only includes the GPU augmentation tasks, the GPU executes all the GPU augmentation tasks on the sample image to obtain the third augmented image data. The third augmented image data is the final processing result of the sample image.
And the image augmentation server stores the third augmented image data into a preset augmented data set and sends the augmented data set to the model training server.
The specific sending method of the augmented data set is described in detail in steps S51-S53 provided in the above embodiments, and will not be described herein again.
Fig. 7 is a schematic block diagram of an image augmenting device 60 according to an embodiment of the present invention. As shown in fig. 7, the present invention further provides an image augmentation device 60 corresponding to the above image augmentation method. The image augmentation apparatus 60 includes a unit for executing the image augmentation method described above, and the image augmentation apparatus 60 may be configured in a server. Specifically, referring to fig. 7, the image augmenting apparatus 60 includes a receiving unit 61, a first determining unit 62, a first distributing unit 63, a first storing unit 64, a second distributing unit 65, and a first transmitting unit 66.
A receiving unit 61, configured to receive an image augmentation request sent by a model training server, where the image augmentation request includes an image augmentation task set, and the image augmentation task set includes at least one image augmentation task;
a first determining unit 62, configured to determine whether the image augmentation task set includes both a GPU augmentation task and a non-GPU augmentation task;
a first distribution unit 63, configured to, if the image augmentation task set includes both a GPU augmentation task and a non-GPU augmentation task, obtain a sample image from a preset sample library, and distribute the sample image to a preset CPU, so that the CPU executes the non-GPU augmentation task on the sample image;
a first storage unit 64, configured to store intermediate processing data sent by the CPU into a preset intermediate data set, where the intermediate processing data is obtained after the CPU executes the non-GPU augmentation task on the sample image;
a second distribution unit 65, configured to obtain intermediate processing data from the intermediate data set, and distribute the intermediate processing data to a preset GPU, so that the GPU executes the GPU augmentation task on the intermediate processing data;
a first sending unit 66, configured to store the first augmented image data sent by the GPU in a preset augmented data set, and send the augmented data set to a model training server, where the first augmented image data is obtained after the GPU executes the GPU augmented task on the intermediate processing data.
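The cooperation of units 61-66 can be summarized as a two-stage pipeline: the CPU runs the non-GPU augmentation tasks, the results are buffered in an intermediate data set, and the GPU then runs the GPU augmentation tasks on that intermediate data. A minimal sketch with illustrative stand-in executors (the function names are assumptions, not taken from this disclosure):

```python
def apply_tasks(image, tasks):
    """Stand-in executor: apply augmentation task functions in order."""
    for task in tasks:
        image = task(image)
    return image

def run_mixed_pipeline(sample_library, non_gpu_tasks, gpu_tasks,
                       cpu_execute=apply_tasks, gpu_execute=apply_tasks):
    """Mixed branch: CPU stage first, then GPU stage on the intermediate data."""
    # Stage 1: the CPU executes the non-GPU tasks; the results form the
    # intermediate data set (cf. first storage unit 64).
    intermediate_data_set = [cpu_execute(img, non_gpu_tasks)
                             for img in sample_library]
    # Stage 2: the GPU executes the GPU tasks on the intermediate data,
    # yielding the first augmented image data (cf. first sending unit 66).
    return [gpu_execute(inter, gpu_tasks) for inter in intermediate_data_set]

out = run_mixed_pipeline(
    sample_library=[[1, 2, 3]],
    non_gpu_tasks=[lambda img: img[::-1]],          # e.g. a flip on the CPU
    gpu_tasks=[lambda img: [p + 10 for p in img]],  # e.g. brightness on the GPU
)
print(out)  # → [[13, 12, 11]]
```

Splitting the work this way keeps branch-heavy or memory-bound operations on the CPU while the GPU handles the data-parallel tasks.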
In one embodiment, as shown in fig. 8, the first sending unit 66 includes a second determining unit 661, a second sending unit 662 and a third sending unit 663.
A second determining unit 661, configured to determine whether the model training server is deployed locally.
A second sending unit 662, configured to send the augmented data set to the model training server in a shared memory manner if the model training server is deployed locally.
A third sending unit 663, configured to send the augmented data set to the model training server in an RPC transmission or HTTP transmission manner if the model training server is not deployed locally.
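The decision made by units 661-663 is a simple transport switch: shared memory when the model training server is co-located with the image augmentation server, RPC or HTTP transmission otherwise. A minimal sketch (the transport callables are assumptions; the actual shared-memory and RPC mechanisms are not specified here):

```python
def send_augmented_data(augmented_data_set, trainer_is_local,
                        send_via_shared_memory, send_via_rpc):
    """Pick the transport for the augmented data set (units 661-663)."""
    if trainer_is_local:
        # Local deployment: shared memory avoids serialization overhead.
        return send_via_shared_memory(augmented_data_set)
    # Remote deployment: fall back to RPC (or HTTP) transmission.
    return send_via_rpc(augmented_data_set)

# Toy transports that just tag the payload.
local_result = send_augmented_data([1, 2], True,
                                   lambda d: ("shm", d), lambda d: ("rpc", d))
remote_result = send_augmented_data([1, 2], False,
                                    lambda d: ("shm", d), lambda d: ("rpc", d))
print(local_result, remote_result)  # → ('shm', [1, 2]) ('rpc', [1, 2])
```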
In one embodiment, the CPU includes a plurality of CPU execution threads, and as shown in fig. 9, the first distributing unit 63 includes a first obtaining unit 631 and a first allocating unit 632.
A first obtaining unit 631, configured to obtain a CPU execution thread in an idle state in the CPU as a target CPU execution thread;
a first allocation unit 632 for allocating sample images to the target CPU execution threads.
In one embodiment, the GPU includes a plurality of GPU execution threads, and as shown in fig. 10, the second distributing unit 65 includes a second obtaining unit 651 and a second allocating unit 652.
A second obtaining unit 651, configured to obtain a GPU execution thread in an idle state in the GPU as a target GPU execution thread;
a second allocating unit 652, configured to allocate intermediate processing data to the target GPU execution thread.
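Units 631/632 and 651/652 share the same selection logic: scan the execution threads for one in an idle state and assign the work to it. A minimal sketch (the "idle"/"busy" state strings and the queueing fallback are illustrative assumptions):

```python
def pick_idle_thread(thread_states):
    """Return the index of the first execution thread reported idle,
    or None when all threads are busy (the caller would then wait)."""
    for index, state in enumerate(thread_states):
        if state == "idle":
            return index
    return None

def assign_work(thread_states, work_queues, item):
    """Assign one work item (a sample image or intermediate data)
    to the target execution thread's queue, if any thread is idle."""
    target = pick_idle_thread(thread_states)
    if target is not None:
        work_queues[target].append(item)
    return target

states = ["busy", "idle", "busy"]
queues = [[], [], []]
print(assign_work(states, queues, "sample_image_0"))  # → 1
```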
Fig. 11 is a schematic block diagram of an image augmentation apparatus 60 according to another embodiment of the present invention. As shown in fig. 11, the image augmentation apparatus 60 of this embodiment builds on the above-described embodiment by adding a third determining unit 67, a third distributing unit 68, a fourth sending unit 69, a fourth distributing unit 610, and a fifth sending unit 611.
A third determining unit 67, configured to determine whether the image augmentation task set only includes the non-GPU augmentation task if the image augmentation task set does not include both the GPU augmentation task and the non-GPU augmentation task.
And a third distributing unit 68, configured to, if the image augmentation task set only includes a non-GPU augmentation task, obtain a sample image from a preset sample library, and distribute the sample image to a preset CPU, so that the CPU executes the non-GPU augmentation task on the sample image.
A fourth sending unit 69, configured to store the second augmented image data sent by the CPU in a preset augmented data set, and send the augmented data set to a model training server, where the second augmented image data is obtained after the CPU executes the non-GPU augmented task on the sample image.
A fourth distribution unit 610, configured to, if the image augmentation task set only includes a GPU augmentation task, obtain a sample image from a preset sample library, and distribute the sample image to a preset GPU, so that the GPU executes the GPU augmentation task on the sample image.
A fifth sending unit 611, configured to store the third augmented image data sent by the GPU in a preset augmented data set, and send the augmented data set to a model training server, where the third augmented image data is obtained after the GPU executes the GPU augmented task on the sample image.
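Taken together, the first and third determining units implement a three-way classification of the image augmentation task set. A minimal sketch of that decision (the predicate `is_gpu_task` and the branch labels are illustrative):

```python
def classify_task_set(image_augmentation_tasks, is_gpu_task):
    """Classify the task set into the three branches the method handles.
    The request guarantees at least one task, so the empty case is ignored."""
    has_gpu = any(is_gpu_task(t) for t in image_augmentation_tasks)
    has_non_gpu = any(not is_gpu_task(t) for t in image_augmentation_tasks)
    if has_gpu and has_non_gpu:
        return "mixed"          # CPU stage then GPU stage (units 61-66)
    if has_non_gpu:
        return "non_gpu_only"   # CPU alone (units 68, 69)
    return "gpu_only"           # GPU alone (units 610, 611)

is_gpu = lambda task: task.startswith("gpu_")
print(classify_task_set(["gpu_rotate", "cpu_crop"], is_gpu))  # → mixed
print(classify_task_set(["cpu_crop"], is_gpu))                # → non_gpu_only
print(classify_task_set(["gpu_rotate"], is_gpu))              # → gpu_only
```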
It should be noted that, as can be clearly understood by those skilled in the art, the detailed implementation process of the image augmenting device 60 and each unit may refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, no further description is provided herein.
The image augmenting means 60 described above may be implemented in the form of a computer program that can be run on a computer device as shown in fig. 12.
Referring to fig. 12, fig. 12 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a terminal or a server, where the terminal may be an electronic device with a communication function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, and a wearable device. The server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 12, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032, when executed, causes the processor 502 to perform an image augmentation method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 can be caused to execute an image augmentation method.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 12 is a block diagram of only the portion of the configuration relevant to the present application and does not constitute a limitation on the computer device 500 to which the present application is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Wherein the processor 502 is configured to run the computer program 5032 stored in the memory to perform the steps of:
receiving an image augmentation request sent by a model training server, wherein the image augmentation request comprises an image augmentation task set, and the image augmentation task set comprises at least one image augmentation task;
judging whether the image augmentation task set simultaneously contains GPU augmentation tasks and non-GPU augmentation tasks;
if the image augmentation task set simultaneously comprises a GPU augmentation task and a non-GPU augmentation task, obtaining a sample image from a preset sample library, and distributing the sample image to a preset CPU (central processing unit) so that the CPU executes the non-GPU augmentation task on the sample image;
storing intermediate processing data sent by the CPU into a preset intermediate data set, wherein the intermediate processing data are obtained after the CPU executes the non-GPU augmentation task on the sample image;
acquiring intermediate processing data from the intermediate data set, and distributing the intermediate processing data to a preset GPU so that the GPU executes the GPU augmentation task on the intermediate processing data;
and storing the first augmented image data sent by the GPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the first augmented image data is obtained after the GPU executes the GPU augmented task on the intermediate processing data.
In an embodiment, when the step of sending the augmented data set to the model training server is implemented, the processor 502 specifically implements the following steps:
judging whether the model training server is deployed locally;
if the model training server is deployed locally, the augmented data set is sent to the model training server in a memory sharing mode;
and if the model training server is not deployed locally, the augmented data set is sent to the model training server in an RPC transmission or HTTP transmission mode.
In an embodiment, the CPU includes a plurality of CPU execution threads, and when the processor 502 implements the step of distributing the sample image to a preset CPU, the following steps are implemented:
acquiring a CPU execution thread in an idle state in the CPUs as a target CPU execution thread;
sample images are assigned to the target CPU execution threads.
In an embodiment, the GPU includes a plurality of GPU execution threads, and when the processor 502 implements the step of distributing the intermediate processing data to the preset GPU, the following steps are implemented:
acquiring a GPU execution thread in an idle state in the GPU as a target GPU execution thread;
and distributing intermediate processing data to the target GPU execution thread.
In one embodiment, processor 502 further implements the steps of:
if the image augmentation task set does not simultaneously contain the GPU augmentation task and the non-GPU augmentation task, judging whether the image augmentation task set only contains the non-GPU augmentation task;
if the image augmentation task set only contains non-GPU augmentation tasks, obtaining sample images from a preset sample library, and distributing the sample images to a preset CPU (central processing unit), so that the CPU executes the non-GPU augmentation tasks on the sample images;
storing second augmented image data sent by the CPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the second augmented image data is obtained after the CPU executes the non-GPU augmented task on the sample image;
if the image augmentation task set only contains GPU augmentation tasks, obtaining sample images from a preset sample library, and distributing the sample images to a preset GPU, so that the GPU executes the GPU augmentation task on the sample images;
and storing third augmented image data sent by the GPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the third augmented image data is obtained after the GPU executes the GPU augmented task on the sample image.
It should be understood that, in the embodiment of the present application, the processor 502 may be a Central Processing Unit (CPU), and the processor 502 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program may be stored in a storage medium that is computer-readable. The computer program is executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program. The computer program, when executed by a processor, causes the processor to perform the steps of:
receiving an image augmentation request sent by a model training server, wherein the image augmentation request comprises an image augmentation task set, and the image augmentation task set comprises at least one image augmentation task;
judging whether the image augmentation task set simultaneously contains GPU augmentation tasks and non-GPU augmentation tasks;
if the image augmentation task set simultaneously comprises a GPU augmentation task and a non-GPU augmentation task, acquiring a sample image from a preset sample library, and distributing the sample image to a preset CPU so that the CPU executes the non-GPU augmentation task on the sample image;
storing intermediate processing data sent by the CPU into a preset intermediate data set, wherein the intermediate processing data are obtained after the CPU executes the non-GPU augmentation task on the sample image;
acquiring intermediate processing data from the intermediate data set, and distributing the intermediate processing data to a preset GPU so that the GPU executes the GPU augmentation task on the intermediate processing data;
and storing the first augmented image data sent by the GPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the first augmented image data is obtained after the GPU executes the GPU augmented task on the intermediate processing data.
In an embodiment, when the processor executes the computer program to implement the step of sending the augmented data set to the model training server, the following steps are specifically implemented:
judging whether the model training server is deployed locally;
if the model training server is deployed locally, the augmented data set is sent to the model training server in a memory sharing mode;
and if the model training server is not deployed locally, the augmented data set is sent to the model training server in an RPC transmission or HTTP transmission mode.
In an embodiment, the CPU includes a plurality of CPU execution threads, and when the processor executes the computer program to implement the step of distributing the sample image to a preset CPU, the following steps are specifically implemented:
acquiring a CPU execution thread in an idle state in the CPUs as a target CPU execution thread;
assigning a sample image to the target CPU execution thread.
In an embodiment, the GPU includes a plurality of GPU execution threads, and when the processor executes the computer program to implement the step of distributing the intermediate processing data to the preset GPU, the following steps are specifically implemented:
acquiring a GPU execution thread in an idle state in the GPU as a target GPU execution thread;
and allocating intermediate processing data to the target GPU execution thread.
In an embodiment, the processor, in executing the computer program, further implements the steps of:
if the image augmentation task set does not simultaneously contain the GPU augmentation task and the non-GPU augmentation task, judging whether the image augmentation task set only contains the non-GPU augmentation task;
if the image augmentation task set only contains non-GPU augmentation tasks, obtaining sample images from a preset sample library, and distributing the sample images to a preset CPU, so that the CPU executes the non-GPU augmentation tasks on the sample images;
storing second augmented image data sent by the CPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the second augmented image data is obtained after the CPU executes the non-GPU augmented task on the sample image;
if the image augmentation task set only contains GPU augmentation tasks, obtaining sample images from a preset sample library, and distributing the sample images to a preset GPU, so that the GPU executes the GPU augmentation task on the sample images;
and storing third augmented image data sent by the GPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the third augmented image data is obtained after the GPU executes the GPU augmented task on the sample image.
The storage medium is a physical, non-transitory storage medium, and may be any of various physical storage media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two, and that the components and steps of the examples have been described above in general functional terms to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, while the invention has been described with respect to the above-described embodiments, it will be understood that the invention is not limited thereto but may be embodied with various modifications and changes.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. An image augmentation method, comprising:
receiving an image augmentation request sent by a model training server, wherein the image augmentation request comprises an image augmentation task set, and the image augmentation task set comprises at least one image augmentation task;
judging whether the image augmentation task set simultaneously contains GPU augmentation tasks and non-GPU augmentation tasks;
if the image augmentation task set simultaneously comprises a GPU augmentation task and a non-GPU augmentation task, obtaining a sample image from a preset sample library, and distributing the sample image to a preset CPU (central processing unit) so that the CPU executes the non-GPU augmentation task on the sample image;
storing intermediate processing data sent by the CPU into a preset intermediate data set, wherein the intermediate processing data are obtained after the CPU executes the non-GPU augmentation task on the sample image;
acquiring intermediate processing data from the intermediate data set, and distributing the intermediate processing data to a preset GPU so that the GPU executes the GPU augmentation task on the intermediate processing data;
storing first augmented image data sent by the GPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the first augmented image data is obtained after the GPU executes the GPU augmented task on the intermediate processing data;
if the image augmentation task set does not simultaneously contain the GPU augmentation task and the non-GPU augmentation task, judging whether the image augmentation task set only contains the non-GPU augmentation task;
if the image augmentation task set only contains non-GPU augmentation tasks, obtaining sample images from a preset sample library, and distributing the sample images to a preset CPU, so that the CPU executes the non-GPU augmentation tasks on the sample images, wherein the image augmentation request comprises a storage location of the sample library, and the sample library is obtained based on the storage location;
storing second augmented image data sent by the CPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the second augmented image data is obtained after the CPU executes the non-GPU augmented task on the sample image;
if the image augmentation task set only contains GPU augmentation tasks, obtaining sample images from a preset sample library, and distributing the sample images to a preset GPU, so that the GPU executes the GPU augmentation task on the sample images;
and storing third augmented image data sent by the GPU into a preset augmented data set, and sending the augmented data set to a model training server, wherein the third augmented image data is obtained after the GPU executes the GPU augmented task on the sample image.
2. The image augmentation method of claim 1, wherein sending the augmented data set to a model training server comprises:
judging whether the model training server is deployed locally;
and if the model training server is deployed locally, the augmented data set is sent to the model training server in a memory sharing mode.
3. The image augmentation method of claim 2, wherein the sending the augmented data set to a model training server, further comprises:
and if the model training server is not deployed locally, the augmented data set is sent to the model training server in an RPC transmission or HTTP transmission mode.
4. The image augmentation method of claim 1, wherein the CPU comprises a plurality of CPU execution threads, and wherein distributing the sample image to a preset CPU comprises:
acquiring a CPU execution thread in an idle state in the CPUs as a target CPU execution thread;
assigning a sample image to the target CPU execution thread.
5. The method according to claim 1, wherein the GPU includes a plurality of GPU execution threads, and the distributing the intermediate processing data to a predetermined GPU includes:
acquiring a GPU execution thread in an idle state in the GPU as a target GPU execution thread;
and allocating intermediate processing data to the target GPU execution thread.
6. An image augmenting apparatus, comprising:
a receiving unit, configured to receive an image augmentation request sent by a model training server, wherein the image augmentation request comprises an image augmentation task set, and the image augmentation task set comprises at least one image augmentation task;
the first judgment unit is used for judging whether the image augmentation task set simultaneously contains a GPU augmentation task and a non-GPU augmentation task;
the first distribution unit is used for acquiring a sample image from a preset sample library and distributing the sample image to a preset CPU (central processing unit) if the image augmentation task set simultaneously contains a GPU augmentation task and a non-GPU augmentation task, so that the CPU executes the non-GPU augmentation task on the sample image;
a first storage unit, configured to store intermediate processing data sent by the CPU into a preset intermediate data set, where the intermediate processing data is obtained after the CPU executes the non-GPU augmentation task on the sample image;
the second distribution unit is used for acquiring intermediate processing data from the intermediate data set and distributing the intermediate processing data to a preset GPU so that the GPU executes the GPU augmentation task on the intermediate processing data;
a first sending unit, configured to store first augmented image data sent by the GPU in a preset augmented data set, and send the augmented data set to a model training server, where the first augmented image data is obtained by the GPU after executing the GPU augmented task on the intermediate processing data;
a third determining unit, configured to determine whether the image augmentation task set only includes a non-GPU augmentation task if the image augmentation task set does not include both the GPU augmentation task and the non-GPU augmentation task;
a third distributing unit, configured to, if the image augmentation task set only includes a non-GPU augmentation task, obtain a sample image from a preset sample library, and distribute the sample image to a preset CPU, so that the CPU executes the non-GPU augmentation task on the sample image, where the image augmentation request includes a storage location of the sample library, and the sample library is obtained based on the storage location;
a fourth sending unit, configured to store second augmented image data sent by the CPU in a preset augmented data set, and send the augmented data set to a model training server, where the second augmented image data is obtained after the CPU executes the non-GPU augmented task on the sample image;
a fourth distribution unit, configured to, if the image augmentation task set only includes a GPU augmentation task, obtain a sample image from a preset sample library, and distribute the sample image to a preset GPU, so that the GPU executes the GPU augmentation task on the sample image;
and a fifth sending unit, configured to store third augmented image data sent by the GPU in a preset augmented data set, and send the augmented data set to a model training server, where the third augmented image data is obtained after the GPU executes the GPU augmented task on the sample image.
7. A computer device, characterized in that the computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the method according to any one of claims 1-5.
8. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when being executed by a processor, is adapted to carry out the method according to any one of claims 1-5.
CN202010744544.4A 2020-07-29 2020-07-29 Image augmentation method, image augmentation device, computer device, and storage medium Active CN111897639B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010744544.4A CN111897639B (en) 2020-07-29 2020-07-29 Image augmentation method, image augmentation device, computer device, and storage medium
PCT/CN2020/111789 WO2021139177A1 (en) 2020-07-29 2020-08-27 Image augmentation method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010744544.4A CN111897639B (en) 2020-07-29 2020-07-29 Image augmentation method, image augmentation device, computer device, and storage medium

Publications (2)

Publication Number Publication Date
CN111897639A CN111897639A (en) 2020-11-06
CN111897639B true CN111897639B (en) 2022-12-27

Family

ID=73182520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010744544.4A Active CN111897639B (en) 2020-07-29 2020-07-29 Image augmentation method, image augmentation device, computer device, and storage medium

Country Status (2)

Country Link
CN (1) CN111897639B (en)
WO (1) WO2021139177A1 (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018176000A1 (en) 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US10671349B2 (en) 2017-07-24 2020-06-02 Tesla, Inc. Accelerated mathematical engine
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11215999B2 (en) 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11361457B2 (en) 2018-07-20 2022-06-14 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
EP3864573A1 (en) 2018-10-11 2021-08-18 Tesla, Inc. Systems and methods for training machine models with augmented data
US11196678B2 (en) 2018-10-25 2021-12-07 Tesla, Inc. QOS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US10997461B2 (en) 2019-02-01 2021-05-04 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11150664B2 (en) 2019-02-01 2021-10-19 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US10956755B2 (en) 2019-02-19 2021-03-23 Tesla, Inc. Estimating object properties using visual image data
CN112506483B (en) * 2020-12-04 2024-04-05 北京五八信息技术有限公司 Data augmentation method, device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105900064A (en) * 2014-11-19 2016-08-24 华为技术有限公司 Method and apparatus for scheduling data flow task
CN107135257A (en) * 2017-04-28 2017-09-05 东方网力科技股份有限公司 Method, node and system for distributing tasks in a node cluster
CN110489223A (en) * 2019-08-26 2019-11-22 北京邮电大学 Task scheduling method and apparatus in a heterogeneous cluster, and electronic device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296616B (en) * 2016-08-18 2019-01-29 中国航空工业集团公司洛阳电光设备研究所 Infrared image detail enhancement method and infrared image detail enhancement device
CN107688495B (en) * 2017-06-22 2020-11-03 平安科技(深圳)有限公司 Method and apparatus for scheduling processors
WO2018234869A2 (en) * 2017-06-22 2018-12-27 Banuba Limited Improving operation of computing devices by dynamically adaptive distribution of workload between central processing unit(s) and graphics processing unit(s), and computer systems and computer-implemented methods in accordance with thereof
CN108710897A (en) * 2018-04-24 2018-10-26 江苏科海智能***有限公司 Remote online general object detection system based on SSD-T
CN108921070B (en) * 2018-06-22 2021-06-22 北京旷视科技有限公司 Image processing method, model training method and corresponding device
CN109886859B (en) * 2019-01-30 2023-06-13 上海赜睿信息科技有限公司 Data processing method, system, electronic device and computer readable storage medium
CN109933429A (en) * 2019-03-05 2019-06-25 北京达佳互联信息技术有限公司 Data processing method, device, electronic equipment and storage medium
CN110992241A (en) * 2019-11-21 2020-04-10 支付宝(杭州)信息技术有限公司 Heterogeneous embedded system and method for accelerating neural network target detection
CN111144494A (en) * 2019-12-27 2020-05-12 睿魔智能科技(深圳)有限公司 Object detection model training method, object detection device, object detection equipment and object detection medium

Also Published As

Publication number Publication date
CN111897639A (en) 2020-11-06
WO2021139177A1 (en) 2021-07-15

Similar Documents

Publication Publication Date Title
CN111897639B (en) Image augmentation method, image augmentation device, computer device, and storage medium
WO2018119602A1 (en) Rendering method and device
CN112784989B (en) Inference system, inference method, electronic device, and computer storage medium
US8195882B2 (en) Shader complex with distributed level one cache system and centralized level two cache
US10474490B2 (en) Early virtualization context switch for virtualized accelerated processing device
US20150133214A1 (en) Video encoding based on areas of interest
CN111629212B (en) Method and device for transcoding video
CN116382880B (en) Task execution method, device, processor, electronic equipment and storage medium
CN111476851A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112473126A (en) Scene blanking processing method and device, electronic equipment and medium
WO2014039457A1 (en) Protocol for communications between platforms and image devices
CN117058288A (en) Graphics processor, multi-core graphics processing system, electronic device, and apparatus
CN114077489A (en) Model loading method and related device
CN111629211B (en) Method and device for transcoding video
CN112882826B (en) Resource cooperative scheduling method and device
CN113536168A (en) Component processing method and device
CN116402673A (en) Data processing method, system, computing device and storage medium
US20230205608A1 (en) Hardware supported split barrier
WO2022133954A1 (en) Image rendering method, apparatus and system, and computer-readable storage medium
US20210089423A1 (en) Flexible multi-user graphics architecture
CN111724262B (en) Application server follow-up package query system and working method thereof
CN113157415A (en) Render-farm rendering method and device, electronic equipment and storage medium
US8978042B2 (en) Method and system for maintaining game functionality for a plurality of game instances running on a computer system
CN112615928B (en) Data processing method, device and storage medium
CN113535378A (en) Resource allocation method, storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant