CN115713189A - Work order allocation correction method, equipment and system

Publication number: CN115713189A
Application number: CN202110962939.6A
Authority: CN (China)
Prior art keywords: sample, work order, order data, samples, marked
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 周逸凡, 谢奕, 曹高雄, 李萍, 蔡蔓菁
Assignee (current and original): Huawei Cloud Computing Technologies Co Ltd
Priority and filing date: 2021-08-20
Application filed by Huawei Cloud Computing Technologies Co Ltd

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a work order allocation correction method, equipment and system. The system comprises: an allocation module, configured to determine, according to acquired work order data, a committing department corresponding to the work order data; a committing module, configured to selectively invoke an expert correction module according to the work order data and the committing department corresponding to the work order data; the expert correction module, configured to, when invoked, correct the correspondence between the work order data and the committing department to obtain a corrected seed sample; and an error data mining module, configured to mine error samples from the labeled samples according to the corrected seed sample. The expert correction module is further configured to correct the error samples. The system can find error samples in large-scale historical samples and correct them, thereby improving the accuracy of the algorithm model.

Description

Work order allocation correction method, equipment and system
Technical Field
The application relates to the field of artificial intelligence, in particular to a work order allocation correction method, equipment and system.
Background
A work order is usually a task, piece of work, or request submitted to a service provider, an enterprise, or a department when a customer or an employee of the enterprise encounters a problem with a product or service. After receiving the work order, the service provider, enterprise, or department arranges for a corresponding employee or department (the committing department) to resolve it. Traditionally, a received work order is assigned to the corresponding committing department for handling by an experienced dispatcher. However, as the number of work orders increases dramatically, this manual handling method is not only inefficient but also prone to errors. With the development of machine learning and natural language processing techniques, more and more work orders are dispatched through an algorithm model. However, the algorithm model needs to be trained on historical data, and if the training data contains erroneous data, training reinforces those errors, so the original mistakes are perpetuated. Specifically, as shown in fig. 1, the prior art provides a labeling method, which includes:
S101: The user manually labels the first work order data to obtain the committing department corresponding to the first work order data. The first work order data is a part of the data extracted from the work order data set.
S102: The user takes the first work order data and the corresponding committing department as labeled samples.
S103: The user trains the algorithm model with the labeled samples to obtain a trained algorithm model.
S104: The user takes the second work order data as unlabeled data. The second work order data is another part or all of the remaining data in the work order data set.
S105: The user inputs the second work order data into the trained algorithm model to obtain the committing department corresponding to the second work order data, and takes this committing department as the label result corresponding to the unlabeled data.
S106: The user takes the unlabeled data and the corresponding label result as labeled samples, and returns to step S103 until all the data in the work order data set are labeled.
With this scheme, as shown in fig. 2, only a small portion of the work order data needs to be labeled manually to obtain labeled samples, and the algorithm model is first trained on this small set of labeled samples. The trained algorithm model is then used to label the unlabeled data, producing label results for it. The unlabeled data together with its label results become new labeled samples, and after several rounds the algorithm model reaches a higher precision and can label the work order data well. However, this scheme cannot correct erroneous samples among the labeled samples; the algorithm model fits those erroneous samples, errors accumulate continuously, and the accuracy of the algorithm model keeps decreasing.
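The iterative loop of fig. 1 and fig. 2 can be summarized in the following sketch. It is an illustrative reconstruction only: the classifier interface, the helper names, and the fixed batch split are assumptions rather than part of the prior-art scheme, but it shows where mispredicted labels re-enter the training set and accumulate.

```python
# Minimal sketch of the prior-art self-training loop (S101-S106).
# The classifier interface, helper names, and batch split are assumptions.
def self_training_loop(work_orders, manual_labeler, make_model, batch_size):
    # S101/S102: a small portion is labeled by hand to form the initial samples.
    seed = work_orders[:batch_size]
    labeled = [(wo, manual_labeler(wo)) for wo in seed]
    remaining = work_orders[batch_size:]

    model = make_model()
    while remaining:
        # S103: train on everything labeled so far.
        model.fit([wo for wo, _ in labeled], [dept for _, dept in labeled])

        # S104/S105: predict committing departments for the next unlabeled batch.
        batch, remaining = remaining[:batch_size], remaining[batch_size:]
        predicted = model.predict(batch)

        # S106: predictions become "labels" -- wrong predictions are never
        # revisited, which is exactly how errors accumulate over rounds.
        labeled.extend(zip(batch, predicted))
    return model
```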
Disclosure of Invention
In order to solve the above problems, the present application provides a work order allocation correction method, device, and system, which can correct an erroneous sample, thereby improving the accuracy of an algorithm model.
In a first aspect, a work order allocation correction system is provided, which includes:
an allocation module, configured to determine, according to acquired first work order data, a committing department corresponding to the first work order data;
a committing module, configured to selectively invoke an expert correction module according to the first work order data and the committing department corresponding to the first work order data;
the expert correction module, configured to, when invoked, correct the correspondence between the first work order data and the committing department to obtain a corrected seed sample;
an error data mining module, configured to mine error samples from the labeled samples according to the corrected seed sample;
and the expert correction module is further configured to correct the error samples to obtain corrected samples.
In some possible designs, the allocation module is specifically configured to determine, according to the first work order data, the committing department corresponding to the first work order data through an algorithm model.
In some possible designs, the corrected samples are used to iteratively train the algorithm model.
In some possible designs, the corrected seed samples are used to iteratively train the algorithm model.
In some possible designs, the error data mining module is configured to mine error samples from the labeled samples according to the corrected seed samples and a matching rule.
In some possible designs, the matching rules include one or more of:
if the similarity between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples is greater than a threshold, and the predicted committing department corresponding to the corrected seed sample is not the same as the predicted committing department corresponding to the first labeled sample, determining that the first labeled sample is an error sample;
if the predicted committing department corresponding to the corrected seed sample is the same as the predicted committing department corresponding to a first labeled sample in the labeled samples, and the similarity between the work order data of the corrected seed sample and the work order data of the first labeled sample is smaller than a threshold, determining that the first labeled sample is an error sample;
and if the keyword matching between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples succeeds, and the predicted committing department corresponding to the corrected seed sample is not the same as the predicted committing department corresponding to the first labeled sample, determining that the first labeled sample is an error sample.
In some possible designs, the matching rules are set by the user.
In a second aspect, a work order allocation correction method is provided, and the method includes:
determining, according to acquired first work order data, a committing department corresponding to the first work order data;
in the case that the correspondence between the first work order data and the committing department corresponding to the first work order data is incorrect, correcting the correspondence between the first work order data and the committing department to obtain a corrected seed sample;
mining error samples from the labeled samples according to the corrected seed sample;
and correcting the error samples to obtain corrected samples.
In some possible designs, determining, according to the acquired first work order data, the committing department corresponding to the first work order data includes:
determining, through an algorithm model, the committing department corresponding to the first work order data according to the acquired first work order data.
In some possible designs, the corrected samples are used to iteratively train the algorithm model.
In some possible designs, the corrected seed samples are used to iteratively train the algorithm model.
In some possible designs, mining error samples from the labeled samples according to the corrected seed sample includes: mining error samples from the labeled samples according to the corrected seed sample and a matching rule.
In some possible designs, the matching rules include one or more of:
if the similarity between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples is greater than a threshold, and the predicted committing department corresponding to the corrected seed sample is different from the predicted committing department corresponding to the first labeled sample, determining that the first labeled sample is an error sample;
if the predicted committing department corresponding to the corrected seed sample is the same as the predicted committing department corresponding to a first labeled sample in the labeled samples, and the similarity between the work order data of the corrected seed sample and the work order data of the first labeled sample is smaller than a threshold, determining that the first labeled sample is an error sample;
and if the keyword matching between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples succeeds, and the predicted committing department corresponding to the corrected seed sample is different from the predicted committing department corresponding to the first labeled sample, determining that the first labeled sample is an error sample.
In some possible designs, the matching rules are set by the user.
In a third aspect, a computer device is provided, comprising a processor and a memory, the processor being configured to execute instructions stored in the memory to perform the method according to any implementation of the second aspect.
In a fourth aspect, a computer-readable storage medium is provided, comprising computer program instructions which, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method according to any implementation of the second aspect.
In a fifth aspect, a computer program product is provided, comprising instructions which, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method according to any implementation of the second aspect.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
FIG. 1 is a schematic diagram of a labeling method to which the present application relates;
FIG. 2 is a schematic diagram of a loop of the labeling method shown in FIG. 1 in which unlabeled data is changed into labeled data;
FIG. 3 is a schematic structural diagram of a work order allocation correction system provided in the present application;
FIG. 4 is a schematic diagram of a computer cluster provided herein;
FIG. 5 is a schematic diagram of a computer device provided in an embodiment of the present application;
FIG. 6 is a block diagram of a general purpose processor provided herein;
FIG. 7 is a schematic structural diagram of an AI chip provided by the present application;
FIG. 8 is a schematic flowchart of a work order allocation correction method provided in the present application;
FIG. 9 is a schematic illustration of a work order allocation interface provided herein;
FIG. 10 is a schematic structural diagram of the allocation module provided in the present application;
FIG. 11 is a schematic illustration of a work order processing interface provided herein;
FIG. 12 is a schematic illustration of a work order allocation interface provided herein;
FIG. 13 is a schematic illustration of a work order revision interface provided herein.
Detailed Description
Referring to fig. 3, fig. 3 is a schematic structural diagram of the work order allocation correction system provided in the present application. As shown in fig. 3, the work order allocation correction system provided in this embodiment includes: an allocation module 111, a committing module 112, an expert correction module 113, and an error data mining module 114. Wherein,
the allocation module 111 is configured to acquire first work order data and determine, according to the first work order data, a committing department corresponding to the first work order data;
the committing module 112, corresponding to the committing department, is configured to determine whether the correspondence between the first work order data and the committing department is correct;
the expert correction module 113 is configured to, when the correspondence between the first work order data and the committing department is incorrect, correct the correspondence to obtain a corrected seed sample;
the error data mining module 114 is configured to mine error samples from the labeled samples according to the corrected seed sample;
and the expert correction module 113 is further configured to correct the error samples to obtain corrected samples.
In a specific embodiment, the allocation module 111, the committing module 112, the expert correction module 113, and the error data mining module 114 are all deployed in a computer cluster.
In another specific embodiment, the allocation module 111 and the error data mining module 114 may be deployed in the same or different computer clusters, and the committing module 112 and the expert correction module 113 may be deployed in different terminal devices.
A computer cluster refers to a group of computers working loosely or tightly together, typically to perform large jobs. Deploying a cluster usually improves overall performance through concurrency and is more cost-effective than a single computer of comparable speed or availability. The computing devices are interconnected via a network, and each computing device runs its own instance of an operating system. In most cases each computing device uses the same hardware and the same operating system, although in some cases different operating systems may be used on different hardware.
Fig. 4 is a schematic diagram of a computer cluster 210 provided in this embodiment. As shown in FIG. 4, computer cluster 210 includes a plurality of computing devices, such as 250A, 250B, 250C, 250D, and 250E. These computing devices provide computing resources. A single computing device may contain multiple processors or processor cores, each of which may be a computing resource, so one physical computing device can provide multiple computing resources. Computing devices 250A, 250B, 250C, 250D, and 250E are interconnected via network 212. A scheduler 260 is also connected to network 212. In operation, scheduler 260 controls the execution of jobs submitted to computer cluster 210, such as a job that determines the committing departments corresponding to work order data or a job that mines error samples from the labeled samples according to the corrected seed samples, for example by splitting a job into multiple tasks executed on different computing devices.
Jobs may be submitted to the computer cluster 210 from any suitable source. This embodiment does not limit the location from which a job is submitted, nor the specific mechanism by which a user submits a job. In FIG. 4, for example, a user 232 may submit a job 236 from enterprise 230 to computer cluster 210. Specifically, in this example, user 232 operates client computer 234 to submit job 236 to computer cluster 210. In this example, enterprise 230 is connected to computer cluster 210 through network 220, which may be the Internet or another network. Thus, a user may submit a job to the computer cluster 210 from a remote location. A job is usually a large job requiring many computing resources to be processed in parallel; this embodiment does not limit the nature or number of jobs. A job may contain multiple computing tasks that can be assigned to multiple computing resources for execution. Most tasks are executed concurrently or in parallel, while some tasks depend on data generated by other tasks. This embodiment does not limit the number of tasks or how many of them can be executed in parallel.
In a computer cluster, there are typically multiple jobs waiting to be executed simultaneously. If too many computing resources are allocated to a single job, the performance of other jobs may suffer. A scheduler 260 is therefore provided in the computer cluster; the scheduler 260 can control the computing resources allocated to the jobs to be executed so that high-priority jobs are executed first. Scheduler 260 may monitor the execution status of a job and change the resources allocated to it according to a policy. For example, the scheduler 260 may obtain the policy by accessing information in the database 262. In addition to the policies, the database 262 stores profiles of each executing job and each job waiting to be executed, which may indicate information such as the processing time of the jobs.
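As an illustration of how a job such as the error-sample mining job might be split into tasks for different computing devices, consider the sketch below. The shard granularity, the use of a local process pool as a stand-in for the cluster scheduler, and all names are assumptions for illustration, not part of the disclosed system.

```python
# Illustrative sketch: splitting the error-sample mining job into per-shard
# tasks that a cluster scheduler could dispatch to different computing devices.
# Shard size and function names are assumptions, not the patented design.
from concurrent.futures import ProcessPoolExecutor


def mine_shard(seed_samples, labeled_shard, match_rule):
    """One task: apply the matching rule to one shard of labeled samples."""
    return [s for s in labeled_shard
            if any(match_rule(seed, s) for seed in seed_samples)]


def mine_errors_distributed(seed_samples, labeled_samples, match_rule,
                            shard_size=10_000, workers=8):
    shards = [labeled_samples[i:i + shard_size]
              for i in range(0, len(labeled_samples), shard_size)]
    errors = []
    # A real cluster scheduler would place these tasks on different devices;
    # a local process pool (picklable arguments required) stands in for it here.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for shard_errors in pool.map(mine_shard,
                                     [seed_samples] * len(shards),
                                     shards,
                                     [match_rule] * len(shards)):
            errors.extend(shard_errors)
    return errors
```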
Referring to fig. 5, fig. 5 is a schematic diagram of a computer device according to an embodiment of the present application. As shown in fig. 5, the computer device includes a processor 302 and a memory 305. The processor 302 is coupled to the memory 305 via a double data rate (DDR) bus 303. Different memories 305 may communicate with the processor 302 over different data buses, so the DDR bus 303 may be replaced by other types of data buses; the bus type is not limited in the embodiments of the present application. In addition, the computer device includes various I/O devices 307 that the processor 302 can access over the PCIe bus 301.
The processor 302 is the computational and control core of the computer device. One or more processor cores 304 may be included in the processor 302. The processor 302 may be a very-large-scale integrated circuit. An operating system and other software programs are installed on the processor 302 so that it can access the memory 305 and various PCIe devices. The core 304 in the processor 302 may be, for example, a central processing unit (CPU) or another application-specific integrated circuit (ASIC). The processor 302 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, and so on. In practice, the computer device may also comprise a plurality of processors, which may be homogeneous or heterogeneous. For example, the plurality of processors may be a plurality of CPUs, or a combination such as CPU + GPU, CPU + DSP, CPU + ASIC, CPU + FPGA, or CPU + AI chip. Here, the CPU may be configured to acquire work order data and trigger the AI chip to determine, according to the work order data, the committing department corresponding to the work order data; and to acquire the corrected seed sample and mine error samples from the labeled samples according to the corrected seed sample.
A memory controller is the bus circuit controller that controls the memory 305 within the computer device and manages and schedules data transfers between the memory 305 and the cores 304. Data is exchanged between the memory 305 and the cores 304 through the memory controller. The memory controller may be a separate chip coupled to the core 304 via the system bus. Those skilled in the art will appreciate that the memory controller may be integrated into the processor 302, built into the northbridge, or be an independent memory controller chip; the specific location and form of the memory controller are not limited in the embodiments of the present invention. In practice, the memory controller may control the logic necessary to write data to or read data from the memory 305. The memory controller may be the memory controller in a processor system such as a general-purpose processor, a special-purpose accelerator, a GPU, an FPGA, or an embedded processor.
Memory 305 is the main memory of the computer device. Memory 305 is typically used to store the running software of the operating system, input and output data, and information exchanged with external memory. In order to increase the access speed of the processor 302, the memory 305 needs to offer fast access. In a conventional computer system architecture, dynamic random access memory (DRAM) is usually used as the memory 305. The processor 302 can access the memory 305 at high speed through the memory controller and perform read and write operations on any memory location in the memory 305. In addition to DRAM, the memory 305 may be another random access memory such as static random access memory (SRAM), or a read-only memory (ROM) such as programmable read-only memory (PROM) or erasable programmable read-only memory (EPROM). The number and type of the memories 305 are not limited in this embodiment. In addition, the memory 305 may be configured with a power-protection function, meaning that the data stored in the memory is not lost when the system is powered off and powered on again; a memory 305 with this property is a non-volatile memory. The memory 305 may store the implementation code of the allocation module and the error data mining module. Optionally, the memory 305 may further store the implementation code of the committing module, the expert correction module, and the expert labeling module.
An input/output (I/O) device 307 is hardware that can transfer data, and may also be understood as a device connected to an I/O interface. Common I/O devices include network cards, printers, keyboards, and mice. All external memories can also serve as I/O devices, such as hard disks, floppy disks, and optical disks. The processor 302 may access the various I/O devices 307 through the PCIe bus 301. It should be noted that the PCIe bus 301 is only an example and may be replaced by other buses, such as a unified bus (UB).
A baseboard management controller (BMC) 306 may upgrade the firmware of the device, manage its operating state, and clear faults. The processor 302 may access the baseboard management controller 306 through a PCIe bus, USB, I2C, or the like. The baseboard management controller 306 may also be connected to at least one sensor and acquires status data of the computer device through the sensors, the status data including temperature data, current data, voltage data, and the like. The type of status data is not specifically limited in this application. The baseboard management controller 306 communicates with the processor 302 via a PCIe bus or another type of bus, for example to pass the acquired status data to the processor 302 for processing. The baseboard management controller 306 may also maintain the program code in the memory, including upgrades or restores, and may control power circuits or clock circuits within the computer device. In summary, the baseboard management controller 306 can manage the computer device in the above manner. However, the baseboard management controller 306 is only an optional device; in some embodiments, the processor 302 may communicate directly with the sensors, thereby directly managing and maintaining the computer device.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a general-purpose processor provided in the present application. As shown in fig. 6, the general-purpose processor of this embodiment is configured to execute the functions of acquiring the work order data, obtaining the corrected seed sample, and mining error samples from the labeled samples according to the corrected seed sample, and includes: a memory address register 401, a memory data register 402, a program counter 403, an instruction register 404, an instruction decoder 405, an operation controller 407, a calculation unit 408, a general-purpose register set 409, an accumulator 410, a status register 411, a timing circuit 412, and a processor bus 420. The processor bus 420 may be a data bus, a power bus, a control bus, a status signal bus, or the like.
The processing unit 40 processes instructions and data stored in the memory 440. In some embodiments, the instructions may include one or more instruction formats. An instruction format may define various fields (number of bits, location of bits, etc.) to specify the operation to be performed and the operands on which it is to be performed. Some instruction formats may be further defined by instruction templates (or subformats).
In the processing unit 40, the memory address register 401 stores the address in the memory 440 that the processing unit 40 is currently accessing. The memory data register 402 holds the data or instructions being read from or written to that address, compensating for the difference in operating speed between the processor and the memory.
The timing circuit 412 provides a time reference for each component from a fixed clock, and the processing unit 40 executes an instruction in one instruction cycle. The program counter 403 stores the address of the next instruction; when instructions are executed sequentially, the program counter 403 is automatically incremented by the byte length of one instruction after each instruction is fetched. When a branch instruction is encountered, the program counter 403 obtains the address of the next instruction from the address code field of the branch instruction. The instruction register 404 holds the currently executing instruction. An instruction includes two fields, an opcode portion and an address code portion; the opcode portion is decoded by the instruction decoder 405 to generate the control potentials for the operation required by the instruction. The operation controller 407 generates various operation control signals according to the control potential signals output by the instruction decoder and the timing signals generated by the timing circuit 412, so as to control the rest of the processing unit 40 to fetch and execute instructions.
A microinstruction is the smallest unit of instruction execution in the processor; an instruction may be a single microinstruction or may be composed of multiple microinstructions. An instruction composed of multiple microinstructions, referred to as a complex instruction, may be decoded by the instruction decoder 405 using a variety of mechanisms, including but not limited to lookup tables, hardware implementations, programmable logic arrays (PLAs), and microcode read-only memories (ROMs). In one embodiment, the microinstruction sequence corresponding to a complex instruction may be stored in the microcode ROM 406; during decoding, the instruction decoder 405 queries the microcode ROM 406 for the opcodes and address codes of the microinstructions that make up the complex instruction, and decodes the opcode portion of each microinstruction in turn to generate the control potentials for the operations it requires.
The operation controller 407 has a plurality of buffers and can send decoded instructions to the respective reservation stations according to instruction type. After analyzing the state of the hardware circuits of the calculation units and whether each instruction can be executed ahead of order, it dispatches the instructions that can be executed to the corresponding calculation unit 408. During this process the instruction stream is reordered so that the instruction pipeline can progress and be scheduled smoothly. For example, for integer-computation instructions, the operation controller 407 may use an integer reservation station to hold the instructions and assign them to the integer calculation unit; for floating-point instructions, it may use a floating-point reservation station to hold the instructions and assign them to the floating-point calculation unit.
The general-purpose register set 409 stores the data corresponding to the address code of an instruction. The calculation unit 408 receives operation control signals from the operation controller 407 and performs calculations on the data stored in the general-purpose register set 409, including arithmetic operations (basic operations such as addition, subtraction, multiplication, and division, and their extensions) and logical operations (such as shifting, logical tests, or comparison of two values). Temporary variables generated during the calculation are stored in the accumulator 410, and generated state information, such as the carry flag (C), overflow flag (O), zero flag (Z), negative flag (N), and sign flag (S), is stored in the status register 411. The status register (program status word register) also stores information such as interrupts and the operating state of the computing device, so that the processing unit 40 can know the machine state and the program state in time.
The calculation unit 408 includes various circuit blocks that execute different instructions. For example, the integer calculation unit 4081 and the floating-point calculation unit 4082 perform arithmetic and logical operations on integers and floating-point numbers, respectively.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an AI chip provided in the present application. As shown in fig. 7, the AI chip of this embodiment is configured to execute the AI task corresponding to the function of determining, according to the work order data, the committing department corresponding to the work order data. It includes: an AI core 510, an AI CPU 520, a system buffer/memory 530, and a DDR 506. The AI CPU 520 is configured to receive an AI task and call the AI core to execute the task. Where there are multiple AI cores 510, the AI CPU 520 also undertakes scheduling tasks. The AI CPU 520 can be implemented by an ARM processor, which is small, low-power, uses a 32-bit reduced instruction set, and offers simple and flexible addressing; in some embodiments the AI CPU 520 may also be implemented by other processors. The AI core 510 provides the neural network models involved in AI tasks and performs the corresponding operations. The system buffer/memory 530 mainly refers to an L2 buffer or L2 cache and temporarily stores input data, intermediate results, or final results passing through the AI chip. The DDR 506 is an off-chip memory, which may alternatively be a high bandwidth memory (HBM) or another off-chip memory. The DDR 506 sits between the AI chip and the external memory, overcoming the access speed limitation when the shared memory of the computing resources is read and written. The I/O devices 550 included in the AI chip are mainly peripherals such as a network interface card. In some application scenarios, data needs to be encoded or decoded, so the AI chip may further include a codec 540 for encoding or decoding data.
Fig. 7 also shows the internal structure of the AI core. The AI core 510 includes a load/store unit (LSU) 501, a cube calculation unit 502, a scalar calculation unit 504, a vector calculation unit 503, and a buffer 505. The load/store unit 501 loads data to be processed and stores processed data; it also manages reads and writes of internal data between the different buffers in the AI core and performs some format conversion operations. The cube calculation unit 502 provides the core computing power for matrix multiplication. The scalar calculation unit 504 is a single instruction stream, single data stream (SISD) processor that processes only one piece of data (usually an integer or floating-point number) at a time. The vector calculation unit 503, also called an array processor, can directly operate on a set of arrays or vectors. There may be one or more buffers 505; only one is shown in fig. 7. The buffer 505 mainly refers to an L1 buffer and temporarily stores data that the AI core 510 needs to use repeatedly, reducing reads and writes over the bus; in addition, some data format conversion functions require the source data to be located in the buffer 505. Because the buffer 505 is located in the AI core, it shortens the distance between the cube calculation unit and the memory and reduces accesses to the DDR 506.
For example, when the AI CPU 520 loads the data to be processed by the AI task into the memory 506, the LSU 501 in the AI core 510 reads the data from the system buffer/memory 530 and sends it to one or more calculation units for computation. After the result is obtained, the LSU 501 writes the result back to the memory 506, and the network interface card sends the inference result to the host.
The present application provides a work order allocation correction method in combination with the work order allocation correction system shown in fig. 3. As shown in fig. 8, the work order allocation correction method provided by the present application includes the following steps:
S201: The allocation module acquires the work order data.
In one embodiment, the work order data may be submitted by grid personnel, or through government hotlines, WeChat, or government mailboxes.
S202: The allocation module determines, according to the acquired first work order data, the committing department corresponding to the first work order data.
In a specific embodiment, the prediction may be user-triggered. For example, the user may trigger the prediction on the work order allocation interface shown in FIG. 9. The work order allocation interface may include: a title display field for the interface, a display field for displaying one or more items of work order data, a one-key allocation button for triggering the prediction, and so on. Specifically, after acquiring a plurality of work order data items, including hotline complaint work order_1, mailbox complaint work order_2, WeChat report work order_3, …, hotline complaint work order_N, the allocation module displays them in the work order allocation interface shown in FIG. 9. When the user clicks the "one-key allocation" button, the allocation module is triggered to determine the predicted committing department corresponding to each item of work order data. It should be understood that the work order allocation interface shown in FIG. 9 is only a specific example; in other embodiments, the interface may include more or fewer controls, and the text on the controls and their arrangement may differ. The interfaces in the following examples are similar.
In a specific embodiment, the allocation module may be a trained algorithm model, such as a machine learning model, a neural network, or a random forest. For example, the allocation module may be expressed as:
y = f(x)
where y is the predicted committing department, x is the acquired work order data, and f() is the mapping between the acquired work order data and the predicted committing department; f() can be obtained by training on labeled samples. The allocation module can be trained with the labeled samples in a database, where each labeled sample comprises historical work order data and the committing department corresponding to that historical work order data. The allocation module can be trained in the following two ways: (1) training the algorithm model on a large amount of historical work order data and the corresponding committing departments; (2) training the algorithm model on a small amount of historical work order data and the corresponding committing departments, inputting new work order data into the allocation module for prediction to obtain the committing departments corresponding to the new work order data, training the algorithm model with the new work order data and the corresponding committing departments, and repeating this several times to obtain the trained algorithm model.
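A minimal sketch of training mode (1) is given below, assuming a text classifier built with scikit-learn on TF-IDF features; the library, the feature extraction, and the variable names are illustrative assumptions and not part of the disclosure.

```python
# Illustrative sketch of training mode (1): fit f() on labeled samples
# (historical work order text -> committing department). The use of
# scikit-learn, TF-IDF features, and these names are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_allocation_model(labeled_samples):
    """labeled_samples: list of (work_order_text, committing_department)."""
    texts = [text for text, _ in labeled_samples]
    departments = [dept for _, dept in labeled_samples]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, departments)          # learn the mapping y = f(x)
    return model

# Usage: predicted_department = model.predict(["water outage on Elm Street"])[0]
```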
In a more specific embodiment, taking the allocation module as a deep neural network as an example, as shown in fig. 10, the allocation module may include an input layer, a plurality of hidden layers, and an output layer.
Input layer:
Assume that the input of the input layer is the acquired work order data and that the output equals the input, i.e., no processing is performed on the input. This assumption is made for convenience of presentation; in practical applications, the input layer may perform normalization and other processing, which is not limited here.
Hidden layers:
Denote the output of the input layer by S and take it as the input of the hidden layers. Assume there are L (L ≥ 2) hidden layers in total and let Z_l denote the output of the l-th layer, with Z_1 = S when l = 1, where 1 ≤ l ≤ L. The relationship between the l-th layer and the (l+1)-th layer is then:
a_{l+1} = W_l Z_l + b_l
Z_{l+1} = f_{l+1}(a_{l+1})
where W_l is the weight vector of the l-th layer, b_l is the bias vector of the l-th layer, a_{l+1} is the intermediate vector of the (l+1)-th layer, f_{l+1} is the activation (excitation) function of the (l+1)-th layer, and Z_{l+1} is the hidden-layer output of the (l+1)-th layer. The activation function may be any one of the sigmoid function, the hyperbolic tangent function, the ReLU function, the ELU (Exponential Linear Units) function, and the like.
Output layer:
Assume the output of the L-th layer is Z_L = (z_1, z_2, …, z_C). The committing department to which the acquired work order data belongs can then be calculated by the softmax function:
p_j = e^{z_j} / (Σ_{k=1}^{C} e^{z_k})
where p_j is the probability of the j-th committing department, e is the base of the natural logarithm, z_j is the j-th element of the output Z_L of the L-th layer, and z_k is the k-th element of the output Z_L of the L-th layer. It should be understood that the above example uses the softmax function; in practical applications a logistic function or the like may also be used, which is not limited here.
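The forward pass described above can be sketched as follows. The use of NumPy, ReLU as the activation function, and the shapes of the inputs are assumptions for illustration; only the layer recurrence and the softmax output follow the formulas above.

```python
# Illustrative sketch of the forward pass: hidden layers
# Z_{l+1} = f_{l+1}(W_l Z_l + b_l) followed by a softmax output layer.
# NumPy, ReLU, and the input encoding are assumptions.
import numpy as np

def forward(work_order_features, weights, biases):
    """weights/biases: lists with one (W_l, b_l) pair per hidden layer plus
    one pair for the output layer; returns per-department probabilities p_j."""
    z = work_order_features                    # Z_1 = S, the input-layer output
    for W, b in zip(weights[:-1], biases[:-1]):
        a = W @ z + b                          # a_{l+1} = W_l Z_l + b_l
        z = np.maximum(a, 0.0)                 # Z_{l+1} = f_{l+1}(a_{l+1}), ReLU here
    logits = weights[-1] @ z + biases[-1]      # Z_L = (z_1, ..., z_C)
    exp = np.exp(logits - logits.max())        # softmax, shifted for stability
    return exp / exp.sum()                     # p_j = e^{z_j} / sum_k e^{z_k}
```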
In a specific embodiment, the allocation module determining, according to the acquired first work order data, the committing department corresponding to the first work order data includes: the allocation module determining the committing department corresponding to the first work order data according to the acquired first work order data, the feedback opinions of the committing department, and the handling status of the work order.
S203: The allocation module sends the predicted committing department to the committing module. Accordingly, the committing module receives the predicted committing department sent by the allocation module.
S204: The committing module determines whether the predicted committing department is correct.
In a specific embodiment, after the committing module receives the predicted committing department, a staff member of the committing department may operate on the work order processing interface shown in FIG. 11. The work order processing interface comprises a title display field for the interface, a display control for displaying the work order data currently being processed, a "processing complete" button for confirming that the prediction for the work order data is correct, an "error feedback" button for indicating that the prediction is wrong, and so on. If the predicted committing department is correct, the staff member can click the "processing complete" button and the procedure proceeds to step S205; if the predicted committing department is wrong, the staff member can click the "error feedback" button and the procedure proceeds to step S206.
S205: The committing module adds the acquired work order data and the committing department corresponding to it as a new sample to the database in which the labeled samples are located. The newly added sample is used for training the algorithm model.
S206: The committing module pushes the acquired work order data and the predicted committing department to the expert correction module as an error sample. Accordingly, the expert correction module receives the error sample.
S207: The expert correction module corrects the error sample to obtain a corrected seed sample.
In a specific embodiment, the correction of the error sample by the expert correction module may be triggered by an expert. For example, the expert may operate on the work order allocation interface shown in FIG. 12 to trigger the correction of the error sample. As shown in FIG. 12, the work order allocation interface may include: a title display field for the interface, a display control for displaying the error sample currently being processed, and a "manually select allocation department" button for triggering the correction of the error sample. When the expert clicks the "manually select allocation department" button, the error sample displayed by the display control can be corrected.
S208: The expert correction module sends the corrected seed sample to the error data mining module. Accordingly, the error data mining module receives the corrected seed sample sent by the expert correction module.
S209: The error data mining module mines error samples from the labeled samples according to the corrected seed sample and the matching rules.
In a specific embodiment, there may be one or more matching rules. For example:
A matching rule may be: if the similarity between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples is greater than a first threshold, but the predicted committing department of the corrected seed sample is not the same as the predicted committing department of the first labeled sample, the first labeled sample is determined to be an error sample;
and if the similarity between the work order data of the corrected seed sample and the work order data of the first labeled sample is greater than the first threshold, but the predicted committing department of the corrected seed sample is the same as that of the first labeled sample, no processing is performed, or the first labeled sample is determined to be a correct sample or a pending sample.
A matching rule may be: if the predicted committing department of the corrected seed sample is the same as the predicted committing department of a first labeled sample in the labeled samples, but the similarity between the work order data of the corrected seed sample and the work order data of the first labeled sample is smaller than the first threshold, the first labeled sample is determined to be an error sample;
and if the predicted committing department of the corrected seed sample is the same as that of the first labeled sample, but the similarity between their work order data is greater than or equal to the first threshold, no processing is performed, or the first labeled sample is determined to be a correct sample or a pending sample.
A matching rule may be: if the keyword matching between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples succeeds, but the predicted committing department of the corrected seed sample is different from the predicted committing department of the first labeled sample, the first labeled sample is determined to be an error sample;
and if the keyword matching between the work order data of the corrected seed sample and the work order data of the first labeled sample succeeds, but their predicted committing departments are the same, no processing is performed, or the first labeled sample is determined to be a correct sample or a pending sample.
A matching rule may be: if the predicted committing department of the corrected seed sample is the same as that of the first labeled sample in the labeled samples and the keywords of their work order data match, no processing is performed.
In a specific embodiment, the matching rules may be set by the user, and the user may modify them as needed. For example, at first use, the user may set the matching rule to: if the similarity between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples is greater than the first threshold, but the predicted committing department of the corrected seed sample is not the same as that of the first labeled sample, the first labeled sample is determined to be an error sample. In subsequent use, if this matching rule is not efficient enough, it may be changed to: if the keyword matching between the work order data of the corrected seed sample and the work order data of a first labeled sample succeeds, but the predicted committing department of the corrected seed sample is different from that of the first labeled sample, the first labeled sample is determined to be an error sample.
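A sketch of how such matching rules might be evaluated is shown below. The cosine-similarity measure over TF-IDF vectors, the keyword test, and the function and attribute names are assumptions; only the rule structure (similarity or keyword match combined with the predicted committing departments) follows the description above.

```python
# Illustrative sketch of the matching rules in S209. The similarity measure,
# keyword test, and names are assumptions; only the rule structure follows
# the description above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity(text_a, text_b):
    vec = TfidfVectorizer().fit([text_a, text_b])
    m = vec.transform([text_a, text_b])
    return cosine_similarity(m[0], m[1])[0, 0]

def keywords_match(text_a, text_b, keywords):
    return any(k in text_a and k in text_b for k in keywords)

def is_error_sample(seed, labeled, keywords, first_threshold=0.8):
    """seed/labeled: objects with .text and .department attributes."""
    sim = similarity(seed.text, labeled.text)
    same_dept = seed.department == labeled.department
    # Rule 1: very similar work orders but different departments -> error.
    if sim > first_threshold and not same_dept:
        return True
    # Rule 2: same department but the work orders are dissimilar -> error.
    if same_dept and sim < first_threshold:
        return True
    # Rule 3: shared keywords but different departments -> error.
    if keywords_match(seed.text, labeled.text, keywords) and not same_dept:
        return True
    return False
```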
S210: The error data mining module sends the error samples to the expert correction module. Accordingly, the expert correction module receives the error samples sent by the error data mining module.
S211: The expert correction module revises the error samples to obtain corrected samples.
In a specific embodiment, the revision of the error samples by the expert correction module may be triggered by an expert. For example, the expert may trigger the revision on the work order revision interface shown in FIG. 13. As shown in FIG. 13, the work order revision interface may include: a title display field for the interface, a display control for displaying the error sample currently being processed, and a "go to label" button for triggering the correction of the error sample. When the expert clicks the "go to label" button, the error sample displayed by the display control can be revised.
S212: The expert correction module adds the corrected samples to the database in which the labeled samples are located. The corrected samples are used for training the algorithm model.
In a specific embodiment, both the new sample and the corrected samples are newly added samples. The work order allocation correction system can continue training the algorithm model with one or both of the newly added samples and the corrected samples to improve the accuracy of the algorithm model. Alternatively, the work order allocation correction system can retrain the algorithm model on the newly added samples, the corrected samples, and the original labeled samples together to improve its accuracy.
In the above scheme, all the error samples are sent to the expert correction module in step S210. In other embodiments, only some of the error samples may be sent to the expert correction module: for example, error samples whose work order data has a similarity to the work order data of the corrected seed sample greater than a second threshold may be corrected automatically using the label of the corrected seed sample, while the remaining error samples are sent to the expert correction module. The first threshold and the second threshold may be set according to the accuracy requirement of the algorithm model; specifically, the higher the required accuracy, the higher the first and second thresholds may be set. The second threshold may be greater than or equal to the first threshold.
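A short sketch of this partial automatic correction is shown below; it reuses the hypothetical similarity helper from the earlier sketch, and the threshold value and names are assumptions.

```python
# Illustrative sketch of routing error samples (step S210 variant): samples
# very close to the corrected seed sample inherit its label automatically,
# the rest go to the expert correction module. Names and the threshold are
# assumptions; `similarity` is the hypothetical helper sketched earlier.
def route_error_samples(seed, error_samples, second_threshold=0.9):
    auto_corrected, for_expert = [], []
    for sample in error_samples:
        if similarity(seed.text, sample.text) > second_threshold:
            sample.department = seed.department   # take the seed sample's label
            auto_corrected.append(sample)
        else:
            for_expert.append(sample)             # needs expert correction
    return auto_corrected, for_expert
```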
In this scheme, error samples are mined according to the corrected seed sample, the mined error samples are corrected to obtain corrected samples, and the algorithm model is retrained with the newly added samples and the corrected samples, thereby improving the accuracy of the algorithm model.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that includes one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.

Claims (17)

1. A work order allocation correction system, comprising:
an allocation module, configured to determine, according to acquired first work order data, a committing department corresponding to the first work order data;
a committing module, configured to selectively invoke an expert correction module according to the first work order data and the committing department corresponding to the first work order data;
the expert correction module, configured to, when invoked, correct the correspondence between the first work order data and the committing department to obtain a corrected seed sample;
an error data mining module, configured to mine error samples from the labeled samples according to the corrected seed sample;
and the expert correction module is further configured to correct the error samples to obtain corrected samples.
2. The system of claim 1, wherein the allocation module is specifically configured to determine, according to the first work order data, the committing department corresponding to the first work order data through an algorithm model.
3. The system of claim 2, wherein the corrected samples are used to iteratively train the algorithm model.
4. The system of claim 2, wherein the corrected seed samples are used to iteratively train the algorithm model.
5. The system according to any one of claims 1 to 4, wherein the error data mining module is configured to mine error samples from the labeled samples according to the corrected seed sample and a matching rule.
6. The system of claim 5, wherein the matching rules comprise one or more of:
if the similarity between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples is greater than a threshold, and the predicted committing department corresponding to the corrected seed sample is different from the predicted committing department corresponding to the first labeled sample, determining that the first labeled sample is an error sample;
if the predicted committing department corresponding to the corrected seed sample is the same as the predicted committing department corresponding to a first labeled sample in the labeled samples, and the similarity between the work order data of the corrected seed sample and the work order data of the first labeled sample is smaller than a threshold, determining that the first labeled sample is an error sample;
and if the keyword matching between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples succeeds, and the predicted committing department corresponding to the corrected seed sample is not the same as the predicted committing department corresponding to the first labeled sample, determining that the first labeled sample is an error sample.
7. The system of claim 5, wherein the matching rule is set by a user.
8. A work order allocation correction method, comprising:
determining, according to acquired first work order data, a committing department corresponding to the first work order data;
when the correspondence between the first work order data and the committing department corresponding to the first work order data is incorrect, correcting the correspondence between the first work order data and the committing department to obtain a corrected seed sample;
mining error samples from labeled samples according to the corrected seed sample; and
correcting the error samples to obtain corrected samples.
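Wiring the pieces sketched above into the four steps of claim 8 could look as follows; every name remains an illustrative assumption rather than part of the claimed method.

```python
# End-to-end sketch of the method in claim 8, reusing the earlier sketches.
def correct_work_order_allocation(work_order_text, model, expert_review,
                                  matcher, labeled_samples):
    # Step 1: determine the committing department for the work order.
    predicted = model.predict([work_order_text])[0]
    # Step 2: if the assignment is incorrect, an expert corrects it
    # (the rejection signal is assumed to come from the handling department).
    corrected = expert_review(work_order_text, predicted)
    seed = (work_order_text, corrected)
    # Step 3: mine error samples from the labeled history using the seed.
    errors = [s for s in labeled_samples if matcher(seed, s)]
    # Step 4: correct the mined error samples.
    corrected_samples = [(t, expert_review(t, d)) for t, d in errors]
    return seed, corrected_samples
```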
9. The method of claim 8, wherein the determining, according to the acquired first work order data, a committing department corresponding to the first work order data comprises:
determining, according to the acquired first work order data, the committing department corresponding to the first work order data by using an algorithm model.
10. The method of claim 9, wherein the corrected samples are used to iteratively train the algorithm model.
11. The method of claim 9, wherein the corrected seed sample is used to iteratively train the algorithm model.
12. The method according to any one of claims 8 to 11, wherein the mining error samples from the labeled samples according to the corrected seed sample comprises:
mining the error samples from the labeled samples according to the corrected seed sample and a matching rule.
13. The method of claim 12, wherein the matching rule comprises one or more of the following:
if a similarity between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples is greater than a threshold, and the predicted committing department corresponding to the corrected seed sample is different from the predicted committing department corresponding to the first labeled sample, determining that the first labeled sample is an error sample;
if the predicted committing department corresponding to the corrected seed sample is the same as the predicted committing department corresponding to a first labeled sample in the labeled samples, and the similarity between the work order data of the corrected seed sample and the work order data of the first labeled sample is smaller than a threshold, determining that the first labeled sample is an error sample; and
if keyword matching between the work order data of the corrected seed sample and the work order data of a first labeled sample in the labeled samples succeeds, and the predicted committing department corresponding to the corrected seed sample is different from the predicted committing department corresponding to the first labeled sample, determining that the first labeled sample is an error sample.
14. The method of claim 12, wherein the matching rule is set by a user.
15. A computer device, comprising a processor and a memory, wherein the processor is configured to execute instructions stored in the memory to perform the method according to any one of claims 8 to 14.
16. A computer-readable storage medium, comprising computer program instructions that, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method according to any one of claims 8 to 14.
17. A computer program product comprising instructions which, when executed by a cluster of computing devices, cause the cluster of computing devices to perform the method according to any one of claims 8 to 14.
CN202110962939.6A 2021-08-20 2021-08-20 Work order allocation correction method, equipment and system Pending CN115713189A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110962939.6A CN115713189A (en) 2021-08-20 2021-08-20 Work order allocation correction method, equipment and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110962939.6A CN115713189A (en) 2021-08-20 2021-08-20 Work order allocation correction method, equipment and system

Publications (1)

Publication Number Publication Date
CN115713189A true CN115713189A (en) 2023-02-24

Family

ID=85230207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110962939.6A Pending CN115713189A (en) 2021-08-20 2021-08-20 Work order allocation correction method, equipment and system

Country Status (1)

Country Link
CN (1) CN115713189A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116777148A (en) * 2023-05-31 2023-09-19 江苏瑞德信息产业有限公司 Intelligent distribution processing system for service work orders based on data analysis
CN116777148B (en) * 2023-05-31 2023-12-05 江苏瑞德信息产业有限公司 Intelligent distribution processing system for service work orders based on data analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination