CN111950830B - Task allocation method and device - Google Patents

Task allocation method and device

Info

Publication number
CN111950830B
Authority
CN
China
Prior art keywords
bill
task
processed
pool
processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910412005.8A
Other languages
Chinese (zh)
Other versions
CN111950830A (en)
Inventor
王奔
牟宗祺
王伟楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingbangda Trade Co Ltd
Beijing Jingdong Zhenshi Information Technology Co Ltd
Original Assignee
Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Zhenshi Information Technology Co Ltd filed Critical Beijing Jingdong Zhenshi Information Technology Co Ltd
Priority to CN201910412005.8A priority Critical patent/CN111950830B/en
Publication of CN111950830A publication Critical patent/CN111950830A/en
Application granted granted Critical
Publication of CN111950830B publication Critical patent/CN111950830B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/08Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/087Inventory or stock management, e.g. order filling, procurement or balancing against orders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0633Lists, e.g. purchase orders, compilation or processing
    • G06Q30/0635Processing of requisition or of purchase orders

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Marketing (AREA)
  • Accounting & Taxation (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)

Abstract

The invention discloses a task allocation method and device, and relates to the field of computer technology. One embodiment of the method comprises the following steps: generating at least one level of document pool for the documents to be processed, there being at least one document pool at each level; determining a task generation strategy for a first processing link according to the processing capability of a second processing link located downstream of the first processing link, acquiring documents to be processed from the document pools according to the task generation strategy, and combining the acquired documents into a document processing task; and distributing the generated document processing task to the first processing link for processing. With this embodiment, task allocation can be performed automatically without relying on manual experience, making task allocation systematic, saving labor cost, avoiding the instability of manual operation, improving the accuracy of task allocation and operation, and achieving dynamic balance between upstream and downstream links, so that task backlog in a single link no longer affects task-processing timeliness or warehouse production timeliness, and the user experience is good.

Description

Task allocation method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a task allocation method and apparatus.
Background
In the WMS (warehouse management system) of a large-scale B2C (Business-to-Customer) operation, after orders are downloaded from the fulfillment system to the warehouse system, they are usually distributed manually and combined into tasks according to the on-site production rhythm, so that the work saturation of pickers and re-checkers is controlled through task distribution, thereby controlling the production rhythm and production efficiency of the warehouse. Task allocation is a very important link in warehouse production, but because it must take into account many environmental factors, such as the number of special orders, the number of orders spanning storage areas, the re-check backlog, and the picking backlog, the prior art usually relies on staff to analyze these factors and allocate tasks manually based on experience.
In the process of implementing the present invention, the inventor finds that at least the following problems exist in the prior art:
task allocation depends heavily on the personal experience of the allocator; when the position becomes vacant, the training cost is high and a new person cannot take over in a short time;
task allocation accuracy is low: single-storage-area and cross-storage-area orders cannot be accurately combined and allocated, so task processing efficiency is low, a certain operation error rate exists, and once a task has been formed, a reverse downgrade scheme is essentially infeasible;
task allocation cannot dynamically balance the upstream and downstream links of task processing, so a task backlog easily forms in a given link, which affects task processing efficiency and warehouse production efficiency and leads to a high customer complaint rate.
Disclosure of Invention
In view of the above, the embodiments of the invention provide a task allocation method and a task allocation device, which can perform task allocation automatically without relying on manual experience, making task allocation systematic, saving labor cost, avoiding the instability of manual operation, allocating accurately according to whether orders span a single or multiple storage areas, improving the accuracy of task allocation and operation, achieving dynamic balance between the upstream and downstream links of task processing, avoiding the impact of a backlog in a single link on task-processing timeliness and warehouse production timeliness, and providing a good user experience.
To achieve the above object, according to an aspect of an embodiment of the present invention, there is provided a task allocation method.
A task allocation method, comprising: generating at least one level of document pool for the documents to be processed, there being at least one document pool at each level; determining a task generation strategy for a first processing link according to the processing capability of a second processing link located downstream of the first processing link, acquiring documents to be processed from the document pools according to the task generation strategy, and combining the acquired documents into a document processing task; and distributing the generated document processing task to the first processing link for processing.
Optionally, before generating the document pools of at least one level for the documents to be processed, the method includes: screening the documents to be processed from the documents of the current batch according to configured environmental factors, where the environmental factors include the document type and the upper limit on the number of articles in a single document.
Optionally, each document to be processed includes information on the number and the locations of its articles, and generating the document pools of at least one level for the documents to be processed includes: obtaining article location distribution information for each document to be processed according to the location information of the articles in that document; and generating document pools of at least one level according to the article quantity information and the article location distribution information of each document to be processed, where the total number of articles in each document pool is greater than the article upper limit of a document processing task, the level of each document pool is related to the article location distribution count of that pool, and the article location distribution count of a pool is determined from the article location distribution information of the documents it contains.
Optionally, the processing capability of the second processing link is determined according to the current task backlog of the second processing link, the current tasks of the second processing link having been generated by combining documents of previous batches, a previous batch being a batch earlier than the batch to which the documents to be processed belong. Determining the task generation strategy of the first processing link according to the processing capability of the second processing link located downstream of the first processing link, acquiring documents to be processed from the document pools according to the task generation strategy, and combining the acquired documents into document processing tasks comprises: when the task backlog is greater than a preset threshold, selecting documents to be processed from the document pools of the relevant levels one by one in descending order of article location distribution count, combining the selected documents into document processing tasks, and stopping the selection once the number of generated document processing tasks is greater than or equal to a task count threshold; and when the task backlog is less than or equal to the preset threshold, selecting documents to be processed from the document pools of the relevant levels one by one in ascending order of article location distribution count, combining the selected documents into document processing tasks, and stopping the selection once the number of generated document processing tasks is greater than or equal to the task count threshold.
According to another aspect of the embodiment of the invention, a task allocation device is provided.
A task allocation device, comprising: a document pool generation module, configured to generate at least one level of document pool for the documents to be processed, there being at least one document pool at each level; a task generation module, configured to determine a task generation strategy for a first processing link according to the processing capability of a second processing link located downstream of the first processing link, acquire documents to be processed from the document pools according to the task generation strategy, and combine the acquired documents into a document processing task; and a task distribution module, configured to distribute the generated document processing task to the first processing link for processing.
Optionally, the device further includes a document screening module configured to: screen the documents to be processed from the documents of the current batch according to configured environmental factors, where the environmental factors include the document type and the upper limit on the number of articles in a single document.
Optionally, each document to be processed includes information on the number and the locations of its articles, and the document pool generation module is further configured to: obtain article location distribution information for each document to be processed according to the location information of the articles in that document; and generate document pools of at least one level according to the article quantity information and the article location distribution information of each document to be processed, where the total number of articles in each document pool is greater than the article upper limit of a document processing task, the level of each document pool is related to the article location distribution count of that pool, and the article location distribution count of a pool is determined from the article location distribution information of the documents it contains.
Optionally, the processing capability of the second processing link is determined according to the current task backlog of the second processing link, the current tasks of the second processing link having been generated by combining documents of previous batches, a previous batch being a batch earlier than the batch to which the documents to be processed belong, and the task generation module is further configured to: when the task backlog is greater than a preset threshold, select documents to be processed from the document pools of the relevant levels one by one in descending order of article location distribution count, combine the selected documents into document processing tasks, and stop the selection once the number of generated document processing tasks is greater than or equal to a task count threshold; and when the task backlog is less than or equal to the preset threshold, select documents to be processed from the document pools of the relevant levels one by one in ascending order of article location distribution count, combine the selected documents into document processing tasks, and stop the selection once the number of generated document processing tasks is greater than or equal to the task count threshold.
According to yet another aspect of an embodiment of the present invention, an electronic device is provided.
An electronic device, comprising: one or more processors; and a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the task allocation method provided by the present invention.
According to yet another aspect of an embodiment of the present invention, a computer-readable medium is provided.
A computer readable medium having stored thereon a computer program which when executed by a processor implements the task allocation method provided by the present invention.
One embodiment of the above invention has the following advantages or beneficial effects: at least one level of document pool is generated for the documents to be processed, there being at least one document pool at each level; a task generation strategy for a first processing link is determined according to the processing capability of a second processing link located downstream of the first processing link, documents to be processed are acquired from the generated document pools according to the task generation strategy, and the acquired documents are combined into document processing tasks; and the generated document processing tasks are distributed to the first processing link for processing. Task allocation can thus be performed automatically without relying on manual experience, making task allocation systematic, saving labor cost, avoiding the instability of manual operation, improving the accuracy of task allocation and operation by allocating accurately according to whether orders span a single or multiple storage areas, achieving dynamic balance between the upstream and downstream links of task processing, avoiding the impact of a backlog in a single link on task-processing timeliness and warehouse production timeliness, and providing a good user experience.
Further effects of the above optional implementations are described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of a task allocation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a warehouse production flow in accordance with an embodiment of the present invention;
FIG. 3 is a diagram showing the comparison of data index values of a collection sheet according to the prior art and an embodiment of the present invention;
FIG. 4 is a schematic diagram of the main modules of a task assigning device according to an embodiment of the present invention;
FIG. 5 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
Fig. 6 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Those skilled in the art will appreciate that embodiments of the invention may be implemented as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the following forms, namely: complete hardware, complete software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
Fig. 1 is a schematic diagram of main steps of a task allocation method according to an embodiment of the present invention.
As shown in fig. 1, the task allocation method according to the embodiment of the present invention mainly includes the following steps S101 to S103.
Step S101: generating at least one level of document pool for the documents to be processed, there being at least one document pool at each level.
Before step S101, the documents to be processed may be screened from the documents of the current batch according to configured environmental factors, where the environmental factors may include the document type and the upper limit on the number of articles in a single document.
Taking the documents of the current batch being orders as an example, the document types may include ordinary customer orders, personalized orders, time-limited orders, key-account orders, and the like; the upper limit on the number of articles in a single document may be an upper threshold on the number of SKUs (stock keeping units) in an order; and other environmental factors may be configured according to business requirements. The document type determines which orders need automatic task allocation, and the upper limit on the number of articles in a single document determines how many SKUs an order may contain and still use the automatic task allocation function. These environmental factors are configured in advance by the server responsible for task allocation; for example, during the configuration phase, one or more order types, the SKU upper threshold, and so on may be configured according to business needs.
The environmental factors may also include some basic parameters for task allocation. Taking task allocation in an e-commerce warehouse as an example, they may further include the picker count, the picking-container order upper limit, the collection-sheet review upper-limit threshold, and the like, where the picker count is the number of people handling picking tasks in the picking link, the picking-container order upper limit may be used to limit the number of commodity SKUs in a picking container, and the collection-sheet review upper-limit threshold is the maximum number of tasks the review link may have backlogged. These environmental factors also need to be configured in advance for use in the subsequent steps of task allocation. The picking link and the review link are processing links of the e-commerce warehouse; the tasks of each processing link are generated by combining orders, and processing a task of a given link means performing that link's processing, such as picking or re-checking, on the orders in the task.
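To make the configuration phase concrete, a minimal sketch of how these environmental factors might be held is shown below. The structure and field names (EnvironmentalFactors, sku_upper_limit, review_backlog_limit, and the default values) are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EnvironmentalFactors:
    """Hypothetical container for the pre-configured task-allocation parameters."""
    # Document (order) types that participate in automatic task allocation.
    order_types: List[str] = field(
        default_factory=lambda: ["ordinary", "personalized", "time_limited", "key_account"])
    # Upper limit on the number of SKUs in a single document.
    sku_upper_limit: int = 50
    # Number of pickers handling picking tasks in the picking link.
    picker_count: int = 20
    # Upper limit on the number of commodity SKUs a picking container may hold.
    container_sku_limit: int = 50
    # Maximum number of tasks the review (re-check) link may have backlogged.
    review_backlog_limit: int = 100

config = EnvironmentalFactors()  # configured once, before task allocation begins
```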
The document to be processed may include information on the number and location of the items.
Step S101 may specifically include: obtaining article location distribution information for each document to be processed according to the location information of the articles in that document; and generating document pools of at least one level according to the article quantity information and the article location distribution information of each document to be processed, where the total number of articles in each document pool is greater than the article upper limit of a document processing task, the level of each document pool is related to the article location distribution count of that pool, and the article location distribution count of a pool is determined from the article location distribution information of the documents it contains.
Taking the documents to be processed as orders awaiting automatic task allocation as an example, the location information of the commodities in each order is acquired, such as the storage area, lane, and storage slot in which each commodity is stored, where each storage area contains several lanes and each lane contains several storage slots. For each pending order, the commodity location distribution information of the order can be obtained from the storage slots of its commodities, i.e., exactly which storage areas, lanes, and slots the commodities in the order are distributed over. One or more levels of order pool are then generated according to the number of commodities in each order and the commodity location distribution information, where an order pool corresponds to a document pool and its orders are used to generate document processing tasks, such as picking tasks.
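The sketch below illustrates how the commodity location distribution of one pending order could be derived from the storage area / lane / slot of its items. The Item and Order structures and their field names are assumptions made only for this illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    sku: str
    zone: str   # storage area
    lane: str   # roadway
    slot: str   # storage location

@dataclass
class Order:
    order_id: str
    items: List[Item]

def location_distribution(order: Order) -> dict:
    """Summarise where the order's items are stored: distinct zones, lanes and slots."""
    zones = {it.zone for it in order.items}
    lanes = {(it.zone, it.lane) for it in order.items}
    slots = {(it.zone, it.lane, it.slot) for it in order.items}
    return {"zones": zones, "lanes": lanes, "slots": slots, "item_count": len(order.items)}

order = Order("SO-001", [Item("sku-1", "Z1", "L3", "S17"), Item("sku-2", "Z1", "L3", "S20")])
print(location_distribution(order))  # 1 zone, 1 lane, 2 slots -> a fairly concentrated order
```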
Which level of order pool a given order goes into is related to the number of commodities in the order and to the order's commodity location distribution information. For example, if one or more single-item orders (i.e., orders each containing only one type of commodity) have their commodities located in the same storage area, lane, and storage slot, and the total number of commodities of these single-item orders is greater than the collection-sheet commodity upper limit, then an S-level order pool is generated for these orders. A collection sheet is obtained by combining orders, and one collection sheet corresponds to one task.
Taking a collection sheet as a picking task as an example, the collection-sheet commodity upper limit is also the commodity upper limit of the picking task, which corresponds to the article upper limit of the document processing task. When picking tasks are generated, each collection sheet combines orders up to the collection-sheet commodity upper limit, which ensures high picking efficiency. Because the total number of commodities in the S-level order pool is greater than the commodity upper limit of a picking task, at least one picking task can be generated by combining orders in the S-level order pool; the commodities of such a picking task are concentrated in the same storage area, lane, and storage slot, so the picking efficiency is extremely high. The picking efficiency and order grouping (i.e., collection-sheet composition) effect of the S-level order pool may be reflected by a number of metrics, which may include:
a/d: the order storage area ratio reflects the single-effect macro index of the order pool, and the greater the order pool, the better the order effect;
a/c: order roadway ratio, reflecting single-effect microscopic indexes of an order pool, wherein the larger the order pool, the better the order effect;
b/d: the commodity storage area ratio reflects the macroscopic index of the picking efficiency, and the higher the picking rate is, the higher the picking rate is;
b/c: the commodity roadway ratio reflects microscopic indexes of the picking efficiency, and the larger the picking rate is, the higher the picking rate is;
where a, b, c, and d are calculation factors, representing respectively:
factor a: the total number of orders in the collection sheet;
factor b: the total number of commodities in the collection sheet;
factor c: the number of lanes over which the order SKUs in the collection sheet are distributed;
factor d: the number of storage areas spanned within the collection sheet.
For an S-level order pool, because the orders in it are concentrated in the same storage slot (they are single-slot orders), the commodity-to-storage-area ratio b/d and the commodity-to-lane ratio b/c are the largest, and the picking efficiency is extremely high.
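To make the four ratios concrete, the following sketch computes a/d, a/c, b/d, and b/c from the factors defined above; the CollectionSheetStats structure and the sample values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CollectionSheetStats:
    total_orders: int   # factor a: total number of orders in the collection sheet
    total_items: int    # factor b: total number of commodities in the collection sheet
    lane_count: int     # factor c: number of lanes the sheet's SKUs are distributed over
    zone_count: int     # factor d: number of storage areas spanned by the sheet

def grouping_metrics(s: CollectionSheetStats) -> dict:
    """The larger each ratio, the better the grouping effect / picking efficiency."""
    return {
        "order_zone_ratio (a/d)": s.total_orders / s.zone_count,
        "order_lane_ratio (a/c)": s.total_orders / s.lane_count,
        "item_zone_ratio (b/d)": s.total_items / s.zone_count,
        "item_lane_ratio (b/c)": s.total_items / s.lane_count,
    }

# Example: a saturated sheet picked entirely from one zone and one lane (S-level style).
print(grouping_metrics(CollectionSheetStats(total_orders=50, total_items=50,
                                            lane_count=1, zone_count=1)))
```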
Still taking the orders in the order pools as the source for generating picking tasks: for orders that cannot form an S-level order pool, order pools of other levels are generated according to the number of commodities and the commodity location distribution information. Specifically:
For a number of orders that cannot form an S-level order pool but whose commodities are located in the same single storage area, and whose total commodity count is greater than the commodity upper limit of a picking task, an A1-level order pool can be generated. Orders in an A1-level order pool are, for example, multi-item orders (i.e., orders containing more than one commodity) located in the same storage area, or single-item orders located in the same storage area but not in the same storage slot.
For a number of orders that cannot form an S-level or A1-level order pool but whose commodities are located in the same two storage areas, and whose total commodity count is greater than the commodity upper limit of a picking task, an A2-level order pool can be generated.
For a number of orders that cannot form an S-level, A1-level, or A2-level order pool but whose commodities are located in the same three storage areas, and whose total commodity count is greater than the commodity upper limit of a picking task, a B1-level order pool can be generated.
For a number of orders that cannot form an S-level, A1-level, A2-level, or B1-level order pool but whose commodities are located in the same four or more storage areas, and whose total commodity count is greater than the commodity upper limit of a picking task, a B2-level order pool can be generated.
The orders that span the most storage areas and lanes can be extracted from the B2-level order pool to generate a snail-level order pool; combining such orders into a collection sheet (one picking task) yields extremely low picking efficiency.
The above is an example of generating order pools from the S level down to the snail level according to the commodity quantity information and the commodity location distribution information of the orders to be processed. The level of each order pool is related to the article location distribution count of that pool. For example, the commodities of an S-level order pool are located in the same storage area, lane, and storage slot, so its article location distribution count equals 1. The commodities of an A1-level order pool are located in the same storage area, in the same or different lanes, and in different storage slots; its article location distribution count is: storage area count equal to 1, lane count greater than or equal to 1, slot count greater than 1. The commodities of an A2-level order pool are located in two storage areas, in the same or different lanes, and in different storage slots within each area; its article location distribution count is: storage area count equal to 2, lane count greater than 2, slot count greater than 2. The article location distribution counts of the order pools of the other levels follow the same pattern and are not listed one by one. The article location distribution count of an order pool is determined from the commodity location distribution information of the orders in it, i.e., from the storage areas, lanes, and slots over which the commodities in the pool are distributed as a whole, and the pool's level is determined accordingly.
In addition, the above indicators of picking efficiency and grouping effect for the S-level order pool apply equally to the order pools of the other levels. More specifically, the picking efficiency and indicators of each level of order pool are as follows:
the S-level order pool generates picking tasks from single-slot orders, so the picking efficiency is extremely high, and the commodity-to-storage-area ratio b/d and the commodity-to-lane ratio b/c are the largest;
the A1-level order pool satisfies picking by a single picker within a single storage area; picking tasks generated from it have high picking efficiency, with the largest b/d and a large b/c;
the A2-level order pool satisfies picking across few storage areas; picking tasks generated from it have fairly high picking efficiency, with the next-largest b/d and a fairly large b/c;
the B1-level order pool spans three picking areas; picking tasks generated from it have average picking efficiency, with a smaller b/d and a smaller b/c;
the B2-level order pool spans four or more picking areas; picking tasks generated from it have low picking efficiency, with a low b/d and a small b/c;
the snail-level order pool consists of the extracted orders that span the most storage areas and lanes, with the largest article location distribution count; picking tasks generated from it have extremely low picking efficiency, with the smallest b/d and b/c.
Comparing the article location distribution counts of the order pools of each level, from largest to smallest: snail-level order pool > B2-level order pool > B1-level order pool > A2-level order pool > A1-level order pool > S-level order pool. The article location distribution counts are compared in the order of storage area count, then lane count, then slot count; that is, between order pools of different levels, the pool with more storage areas has the larger article location distribution count; if the storage area counts are the same, the pool with more lanes has the larger count; and if the lane counts are also the same, the pool with more storage slots has the larger count. In short, the article location distribution count reflects how scattered the commodity locations of the pool are as a whole: the larger the count, the more scattered the commodities, the lower the picking efficiency, and the longer the picking time.
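The sketch below illustrates, under the assumptions of the earlier sketches, how the location footprint of a group of orders could be mapped to a pool level and how pools compare by article location distribution count (storage areas first, then lanes, then slots). The check that the group's total commodity count exceeds the task commodity upper limit, and the later extraction of snail-level orders from the B2 level, are omitted for brevity.

```python
def pool_level(zones: int, lanes: int, slots: int, single_item: bool) -> str:
    """Classify the location footprint of a group of orders into a pool level."""
    if single_item and slots == 1:
        return "S"     # single-item orders concentrated in one storage slot
    if zones == 1:
        return "A1"    # one storage area, possibly several lanes / slots
    if zones == 2:
        return "A2"
    if zones == 3:
        return "B1"
    return "B2"        # four or more storage areas (the most scattered of these
                       # would later be pulled out into the snail-level pool)

def distribution_key(zones: int, lanes: int, slots: int) -> tuple:
    """Article location distribution count: compare zones first, then lanes, then slots."""
    return (zones, lanes, slots)

print(pool_level(zones=1, lanes=1, slots=1, single_item=True))   # -> "S"
print(pool_level(zones=2, lanes=3, slots=6, single_item=False))  # -> "A2"
# Sorting pools by distribution_key in descending order yields:
# snail > B2 > B1 > A2 > A1 > S
```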
Step S102: determining a task generation strategy for the first processing link according to the processing capability of the second processing link located downstream of the first processing link, acquiring documents to be processed from the generated document pools according to the task generation strategy, and combining the acquired documents into a document processing task.
Task processing may include multiple processing links; the first processing link and the second processing link may be any two of them, with the second processing link located downstream of the first. The documents of each batch are processed following the sequence of the processing links. The processing capability of the second processing link is an indicator of its current processing pressure and may specifically be determined from its current task backlog, where the current tasks of the second processing link were generated by combining documents of previous batches, a previous batch being a batch earlier than the batch to which the documents to be processed belong.
Step S102 specifically comprises: when the task backlog is greater than a preset threshold, selecting documents to be processed from the document pools of the relevant levels one by one in descending order of article location distribution count, combining the selected documents into document processing tasks, and stopping the selection once the number of generated document processing tasks is greater than or equal to a task count threshold;
and when the task backlog is less than or equal to the preset threshold, selecting documents to be processed from the document pools of the relevant levels one by one in ascending order of article location distribution count, combining the selected documents into document processing tasks, and stopping the selection once the number of generated document processing tasks is greater than or equal to the task count threshold.
The preset threshold is the task backlog upper-limit threshold of the second processing link; the task count threshold may be the number of people processing tasks in the first processing link, or may be set to another value as needed. Both the preset threshold and the task count threshold may be configured in advance as environmental factors before step S101.
Taking an e-commerce warehouse as an example, the processing links may include picking, review (re-checking), and others. In the picking link, warehouse staff take picking tasks generated by the task-allocation server; a picking task may take the form of a picking sheet, according to which commodities are taken from the storage slots of the shelves and gathered. The review link is the processing link downstream of picking; it mainly separates the picked commodities and sorts them by order for handover to the packing staff.
The processing capability of the review link can be determined from its current task backlog. The batch of orders currently used to generate picking tasks is called the current batch, so the current tasks of the review link were generated by combining orders of batches earlier than the current batch.
When the current task backlog of the review link is greater than the configured review backlog upper-limit threshold, the task processing pressure of the review link is high; picking tasks should then be generated preferentially from the order pools with low picking efficiency. This mode may be called the slow-pick mode, and it prevents the review backlog and pressure from growing further. Conversely, when the current task backlog of the review link is less than or equal to the configured threshold, the task processing pressure of the review link is low; picking tasks should be generated preferentially from the order pools with high picking efficiency. This mode may be called the fast-pick mode, and it moves tasks from the picking link to the review link as quickly as possible.
Based on the above analysis, when the task backlog is greater than the review backlog upper-limit threshold, orders are selected from the order pools of the relevant levels one by one in descending order of article location distribution count, i.e., in the order snail-level pool > B2-level pool > B1-level pool > A2-level pool > A1-level pool > S-level pool, to generate picking tasks, so that the picking link enters the slow-pick mode. When selecting orders from the pools, orders are first taken from the snail-level pool in this sequence and combined into collection sheets, each collection sheet corresponding to one picking task; if the number of picking tasks is still below the task count threshold, orders are then taken from the B2-level pool, and so on; once the total number of generated picking tasks is greater than or equal to the task count threshold, no more orders are selected. The task count threshold may specifically be set to the picker count: the pickers gradually consume the picking tasks, and not distributing tasks in excess keeps as many orders as possible in the order pools, which allows an optimal allocation result to be obtained.
When the task backlog is less than or equal to the review backlog upper-limit threshold, orders are selected from the order pools of the relevant levels one by one in ascending order of article location distribution count, i.e., in the order S-level pool > A1-level pool > A2-level pool > B1-level pool > B2-level pool > snail-level pool, to generate picking tasks, so that the picking link enters the fast-pick mode. Likewise, orders are first taken from the S-level pool to generate picking tasks, then level by level, until the total number of generated picking tasks is greater than or equal to the task count threshold.
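A minimal sketch of the fast-pick / slow-pick selection described above: when the review backlog exceeds its threshold, orders are drawn from the pools in descending order of article location distribution count (snail first); otherwise in ascending order (S first), and selection stops once the number of generated picking tasks reaches the task count threshold (for example, the picker count). For simplicity each pending order is represented only by its commodity count; the function and parameter names are assumptions, not part of the patent.

```python
from typing import Dict, List

LEVELS_DESC = ["snail", "B2", "B1", "A2", "A1", "S"]   # most -> least scattered

def generate_pick_tasks(pools: Dict[str, List[int]],
                        review_backlog: int,
                        backlog_limit: int,
                        task_count_limit: int,
                        sheet_item_limit: int,
                        force_flush: bool = False) -> List[List[int]]:
    """Combine pending orders into collection sheets (one sheet = one picking task)."""
    # Slow pick: review backlog too high -> draw from the least efficient pools first.
    # Fast pick: backlog acceptable       -> draw from the most efficient pools first.
    level_order = LEVELS_DESC if review_backlog > backlog_limit else list(reversed(LEVELS_DESC))

    tasks, sheet, items_in_sheet = [], [], 0
    for level in level_order:
        for order_items in pools.get(level, []):
            if len(tasks) >= task_count_limit:      # enough tasks for the pickers: stop selecting
                return tasks
            sheet.append(order_items)
            items_in_sheet += order_items
            if items_in_sheet >= sheet_item_limit:  # collection sheet is saturated
                tasks.append(sheet)
                sheet, items_in_sheet = [], 0
    if sheet and force_flush:                       # e.g. shipment deadline reached
        tasks.append(sheet)
    return tasks

# Fast-pick example: low backlog, 3 pickers, sheets of up to 50 items.
pools = {"S": [50, 50], "A1": [20, 30, 25], "B2": [10]}
print(generate_pick_tasks(pools, review_backlog=5, backlog_limit=100,
                          task_count_limit=3, sheet_item_limit=50))
```

In this example, two saturated sheets come straight from the S-level pool and a third is completed from the A1-level pool before the picker limit stops further selection.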
In the picking-task generation described above, each collection sheet is filled up to the collection-sheet commodity upper limit, which ensures high picking efficiency. It should be noted, however, that the orders of a batch generally have a corresponding shipment deadline, and by that deadline all orders of the batch must have been picked. Therefore, when the shipment deadline is reached, collection sheets are generated even if the total number of commodities of the remaining grouped orders has not reached the collection-sheet commodity upper limit, and automatic task allocation is forced so that the commodities in those orders are shipped before the deadline.
Step S103: distributing the generated document processing task to the first processing link for processing.
Because the document processing tasks of the first processing link are generated according to the processing capability of the second processing link, the production rhythm of the first processing link can be controlled by the processing capability of the second processing link. In the e-commerce warehouse example above, the picking rhythm can be controlled by switching between fast and slow picking according to the task backlog of the review link, achieving dynamic balance between the upstream and downstream links of task processing, avoiding the impact of a review-link backlog on task-processing timeliness and warehouse production timeliness, and providing a good user experience.
Fig. 2 is a schematic diagram of a warehouse production flow according to an embodiment of the invention.
As shown in fig. 2, the warehouse production process according to the embodiment of the present invention includes:
a warehouse production system (referred to below as the system) receives orders, such as the orders of a batch;
the system is initialized, which mainly consists of initializing configurations such as the various environmental factors, including the order types, the upper limit on the number of articles in a single order, the picker count, the picking-container order upper limit, the collection-sheet review upper-limit threshold, and the like;
the system performs locating, i.e., determining the storage locations of the commodities in each order, such as the storage area, lane, and storage slot where each ordered commodity is stored;
the system performs task allocation automatically. The system selects the orders that need automatic task allocation according to the configuration of the initialization step and generates order pools of at least one level for them; the pools can generally be divided into S level, A level (subdivided into A1 and A2), B level (subdivided into B1 and B2), and snail level. Exactly which level or levels of pool are generated is determined by the storage locations and the quantity of the commodities in each order: the commodity location distribution information of each order is determined from the storage locations of its commodities, and the order pools are generated according to the commodity quantity information and the commodity location distribution information of each order. The task generation strategy of the picking link is then determined according to the review backlog fed back by the review-and-packing link; this strategy determines which level of order pool is used preferentially to obtain orders for generating picking tasks. The number of generated picking tasks does not exceed the number of pickers, i.e., once the number of generated picking tasks is greater than or equal to the picker count, no further picking tasks are generated, so that as many orders as possible remain in the order pools and the optimal allocation result can be computed subsequently. Finally, the generated picking tasks are issued to the picking link;
in the picking link, pickers pick the ordered commodities in each picking task one by one and, after the task is completed, hand them over to the review-and-packing link;
in the review-and-packing link, review staff separate the picked commodities and sort them by order, then hand them to the packing staff, who pack the ordered commodities.
In the above flow, the data indices after orders are grouped into collection sheets according to the order pools are expressed in the following aspects: the collection-sheet saturation percentage, the total commodity count / total storage area count ("/" denotes division), and the collection-sheet picking difficulty structure. The saturation of a collection sheet refers to the number of commodities in it; the more commodities, the higher the picking efficiency. The collection-sheet saturation percentage is the ratio of the number of saturated collection sheets to the total number of collection sheets generated from a batch of orders, where a saturated collection sheet is one whose commodity count has reached the preset collection-sheet commodity upper limit. The total commodity count / total storage area count is a macroscopic indicator of picking efficiency. In this embodiment, the picking difficulty structure of a collection sheet reflects the source distribution of the order picks in the sheet, that is, the storage location distribution of the ordered commodities, for example their storage area distribution, lane distribution, and slot distribution.
FIG. 3 compares the collection-sheet data index values of the prior art with those of an embodiment of the present invention. Based on the data indices after orders are grouped according to the order pools, the index values obtained from one month of actual order data in a certain warehouse (using manual task allocation) are called the actual values; the index values obtained with the method of the embodiment of the present invention, which optimizes the prior art by generating order pools of each level and automatically allocating tasks by grouping orders according to pool level and the task generation strategy, are called the optimized values.
In FIG. 3 (a), the upper limit on the number of commodities in a single collection sheet is set to 50, and after the optimization of the embodiment of the invention the number of commodities per collection sheet is kept as close to 50 as possible. Compared with the prior art, the overall distribution of commodity counts per collection sheet changes: the average commodity saturation percentage of the collection sheets rises from 30.9% to 42.1% after optimization, where the average commodity saturation percentage equals the sum of the per-sheet commodity percentages divided by the total number of collection sheets, and the commodity percentage of one collection sheet equals the number of ordered commodities assigned to that sheet divided by the preset collection-sheet commodity upper limit. Compared with the prior art, more collection sheets reach saturation and become saturated collection sheets, and the collection-sheet saturation percentage rises from 40% (actual value) to 79% (optimized value).
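A small sketch of the two saturation indices compared here, assuming each collection sheet is summarised by its commodity count and that the common commodity upper limit is 50; the function name and sample data are illustrative only.

```python
from typing import List

def saturation_indices(sheet_item_counts: List[int], sheet_item_limit: int = 50) -> dict:
    """Compute the collection-sheet saturation percentage and the average saturation."""
    total = len(sheet_item_counts)
    saturated = sum(1 for n in sheet_item_counts if n >= sheet_item_limit)
    avg_fill = sum(n / sheet_item_limit for n in sheet_item_counts) / total
    return {
        "saturated_sheet_percentage": 100.0 * saturated / total,   # e.g. 40% -> 79% after optimisation
        "average_saturation_percentage": 100.0 * avg_fill,         # e.g. 30.9% -> 42.1%
    }

print(saturation_indices([50, 50, 35, 20]))  # 50% saturated sheets, 77.5% average saturation
```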
In FIG. 3 (b), the histograms compare the prior art (current situation) with the optimized data index values of embodiments of the invention. On the basis of the total commodity count / total storage area count index, several further indices are added: total order count / total storage area count, total order count / total lane count, total order count / total slot count, total commodity count / total lane count, and total commodity count / total slot count (in these indices, "/" denotes division). Each group of histograms compares the current situation (prior art) with embodiments v1.0, v1.1, and v1.2 of the invention; from left to right, the bars in each group of FIG. 3 (b) correspond to the current situation, v1.0, v1.1, and v1.2. Here, v1.0 optimizes only by storage area, i.e., only the storage area is considered when generating the order pools, and the fast-pick mode is applied, preferentially picking orders whose pool locations are concentrated; v1.1 optimizes by storage area, lane, and slot, i.e., when generating the order pools not only the storage area but also the lane and slot are considered and combined to ensure optimal picking efficiency, with the fast-pick mode applied; v1.2 optimizes by storage area, lane, and slot but applies the slow-pick mode, i.e., orders in pools with scattered picking locations are preferred, for example orders are first taken from the snail-level pool to generate the collection sheets with the lowest picking efficiency. According to the histograms in FIG. 3 (b), or the comparison results in Table 1, the total commodity count / total storage area count of embodiment v1.1 rises from 17.2% to 30% compared with the prior art, an improvement of 74%; the cross-storage-area situation of the collection sheets is effectively improved, and the larger this index value, the higher the grouping and picking efficiency. Comparing v1.1 and v1.0, both fast-pick modes, v1.1 improves on v1.0 in each of the index values total order count / total lane count, total order count / total slot count, total commodity count / total lane count, and total commodity count / total slot count. In addition, the index values of v1.2, which applies the slow-pick mode, are generally lower than those of the prior art, achieving the goal of reducing picking efficiency and picking speed. The analysis above uses only individual indices as examples; the comparison results for the other data index values in FIG. 3 (b) can be analyzed in the same way and are not discussed one by one.
TABLE 1
FIG. 3 (c) shows a comparison of the collection-sheet lane-count distribution and slot-count distribution between the prior art and the embodiment of the present invention, which is a concrete expression of the comparison between the actual and optimized values of the collection-sheet picking difficulty structure index. As can be seen from FIG. 3 (c), the lane counts and slot counts of the collection sheets produced by manual task allocation are irregular (because manual task allocation can only group orders in the sequence in which they were placed, without forming order pools), whereas after the optimization of the embodiment of the invention the lane counts and slot counts of the collection sheets as a whole follow an ordered, small-to-large distribution. Therefore, when generating picking tasks, the fast-pick and slow-pick modes can be switched flexibly according to the review speed of the review link, and orders can be selected from the pools as needed, achieving dynamic balance between the processing speeds of the picking link and the review link.
The embodiment of the invention introduces the concept of grading the order pools: orders are automatically combined into collection sheets in a specific order (ascending or descending) of pool level, dynamic calculation is performed automatically according to the processing capability of the review link, and the fast-pick and slow-pick modes are switched flexibly. The comparison of the optimized data indices with those of the prior art intuitively shows the advantages of the automatic task allocation of the embodiment of the invention over the manual task allocation of the prior art: the embodiment can balance the production rhythm of the warehouse and dynamically balance the rhythm of each link of warehouse production.
Fig. 4 is a schematic diagram of main modules of a task assigning apparatus according to an embodiment of the present invention.
The task allocation device 400 according to the embodiment of the present invention mainly includes: a document pool generation module 401, a task generation module 402, and a task allocation module 403.
The document pool generation module 401 is configured to generate at least one level of document pool for the documents to be processed, there being at least one document pool at each level.
The document to be processed may include information on the number and location of the items.
The document pool generation module 401 may specifically be configured to: obtain article location distribution information for each document to be processed according to the location information of the articles in that document; and generate document pools of at least one level according to the article quantity information and the article location distribution information of each document to be processed, where the total number of articles in each document pool is greater than the article upper limit of a document processing task, the level of each document pool is related to the article location distribution count of that pool, and the article location distribution count of a pool is determined from the article location distribution information of the documents it contains.
The task allocation device 400 may further include a document screening module configured to screen the documents to be processed from the documents of the current batch according to configured environmental factors, where the environmental factors may include the document type and the upper limit number of articles in a single document.
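A screening step matching this description might look as follows; it assumes each document record carries a doc_type field in addition to the item_count field used in the sketch above, and the parameter names for the configured environmental factors are hypothetical.

```python
def screen_documents(current_batch, allowed_types, max_items_per_document):
    """Keep only documents whose type is allowed and whose article count
    does not exceed the configured per-document upper limit."""
    return [
        doc for doc in current_batch
        if doc.doc_type in allowed_types
        and doc.item_count <= max_items_per_document
    ]
```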
The task generation module 402 is configured to determine a task generation strategy of the first processing link according to the processing capability of a second processing link located downstream of the first processing link, acquire documents to be processed from the generated document pools according to the task generation strategy, and generate a document processing task by combining the acquired documents to be processed.
The processing capability of the second processing link is determined according to the backlog quantity of the current tasks of the second processing link; the current tasks of the second processing link are generated by combining documents of a past batch, and the past batch is a batch earlier than the batch to which the documents to be processed belong.
The task generation module 402 may be specifically configured to: when the task backlog quantity is larger than a preset threshold, select the documents to be processed in the document pools of the relevant levels one by one in descending order of article position distribution number, generate document processing tasks by combining the selected documents to be processed, and stop selecting documents to be processed once the number of generated document processing tasks is larger than or equal to a task quantity threshold; and when the task backlog quantity is smaller than or equal to the preset threshold, select the documents to be processed in the document pools of the relevant levels one by one in ascending order of article position distribution number, generate document processing tasks by combining the selected documents to be processed, and stop selecting documents to be processed once the number of generated document processing tasks is larger than or equal to the task quantity threshold.
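Continuing the earlier sketches, the backlog-driven strategy of the task generation module 402 could be expressed roughly as below; the threshold parameters and the simple greedy packing of documents into tasks are assumptions made for illustration, not the definitive implementation.

```python
def generate_tasks(pools, backlog, backlog_threshold,
                   task_count_threshold, task_item_limit):
    """Combine pooled documents into document processing tasks.

    When the downstream backlog exceeds the preset threshold, walk the pools
    from the largest position-distribution number down ("slow pick");
    otherwise walk them from the smallest up ("quick pick"). Stop once the
    number of generated tasks reaches the task quantity threshold.
    """
    levels = sorted(pools, reverse=(backlog > backlog_threshold))

    tasks, current, current_items = [], [], 0
    for level in levels:
        for doc in pools[level]:
            if current and current_items + doc.item_count > task_item_limit:
                tasks.append(current)
                if len(tasks) >= task_count_threshold:
                    return tasks           # stop selecting documents to be processed
                current, current_items = [], 0
            current.append(doc)
            current_items += doc.item_count
    if current:
        tasks.append(current)
    return tasks
```

Sorting the levels in descending order when the rechecking link is congested deliberately feeds the picking link scattered, slower-to-pick tasks, while the ascending order produces the concentrated "quick pick" tasks described above.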
The task allocation module 403 is configured to allocate the generated document processing task to the first processing link for processing.
In addition, the specific implementation of the task allocation device in the embodiment of the present invention has been described in detail in the task allocation method described above, so that the description is not repeated here.
Fig. 5 illustrates an exemplary system architecture 500 to which the task allocation method or task allocation device of embodiments of the present invention may be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 is used as a medium to provide communication links between the terminal devices 501, 502, 503 and the server 505. The network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 505 via the network 504 using the terminal devices 501, 502, 503 to receive or send messages or the like. Various communication client applications may be installed on the terminal devices 501, 502, 503, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 501, 502, 503 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 505 may be a server providing various services, for example a background management server (by way of example only) that provides support for shopping websites browsed by users with the terminal devices 501, 502, 503. The background management server may analyze and otherwise process the received data, such as a product information query request, and feed back the processing result (for example, target push information or product information, again by way of example only) to the terminal device.
It should be noted that, the task allocation method provided in the embodiment of the present invention is generally executed by the server 505, and accordingly, the task allocation device is generally disposed in the server 505.
It should be understood that the number of terminal devices, networks and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 6, there is illustrated a schematic diagram of a computer system 600 suitable for use in implementing a terminal device or server in accordance with an embodiment of the present application. The terminal device or server shown in fig. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse and the like; an output portion 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. The above-described functions defined in the system of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 601.
The computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, which may, for example, be described as: a processor comprising a document pool generation module, a task generation module and a task allocation module. The names of these modules do not in some cases constitute a limitation on the modules themselves; for example, the document pool generation module may also be described as "a module for generating document pools of at least one level for documents to be processed".
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: generate document pools of at least one level for documents to be processed, the number of document pools in each level being at least one; determine a task generation strategy of a first processing link according to the processing capability of a second processing link located downstream of the first processing link, acquire documents to be processed from the document pools according to the task generation strategy, and generate a document processing task by combining the acquired documents to be processed; and allocate the generated document processing task to the first processing link for processing.
According to the technical solution of the embodiment of the present invention, document pools of at least one level are generated for the documents to be processed, the number of document pools in each level being at least one; a task generation strategy of a first processing link is determined according to the processing capability of a second processing link located downstream of the first processing link, documents to be processed are acquired from the generated document pools according to the task generation strategy, and a document processing task is generated by combining the acquired documents to be processed; and the generated document processing task is allocated to the first processing link for processing. Task allocation can thus be performed automatically without relying on manual experience, so that task allocation is systematized, labor cost is saved and the instability of manual operation is avoided; allocating accurately according to the span of single or multiple storage areas improves the accuracy of task allocation and operation; and dynamic balance between the upstream and downstream links of task processing is achieved, avoiding the situation in which a backlog of tasks in one link affects the timeliness of task processing and of warehouse production, so that user experience is good.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. A method of task allocation, comprising:
generating document pools of at least one level for documents to be processed, wherein the number of document pools in each level is at least one;
determining a task generation strategy of a first processing link according to the processing capability of a second processing link located downstream of the first processing link, acquiring documents to be processed from the document pools according to the task generation strategy, and generating a document processing task by combining the acquired documents to be processed; and
allocating the generated document processing task to the first processing link for processing;
wherein the processing capability of the second processing link is determined according to the current task backlog quantity of the second processing link, the current tasks of the second processing link are generated by combining documents of a past batch, and the past batch is a batch earlier than the batch corresponding to the documents to be processed; and
wherein determining the task generation strategy of the first processing link according to the processing capability of the second processing link located downstream of the first processing link, acquiring documents to be processed from the document pools according to the task generation strategy, and generating the document processing task by combining the acquired documents to be processed comprises: when the task backlog quantity is larger than a preset threshold, selecting the documents to be processed in the document pools of the relevant levels one by one in descending order of article position distribution number, generating the document processing tasks by combining the selected documents to be processed, and stopping selecting documents to be processed once the number of generated document processing tasks is larger than or equal to a task quantity threshold; and when the task backlog quantity is smaller than or equal to the preset threshold, selecting the documents to be processed in the document pools of the relevant levels one by one in ascending order of article position distribution number, generating the document processing tasks by combining the selected documents to be processed, and stopping selecting documents to be processed once the number of generated document processing tasks is larger than or equal to the task quantity threshold.
2. The method of claim 1, further comprising, before generating the document pools of at least one level for the documents to be processed:
screening the documents to be processed from the documents of the current batch according to configured environmental factors, wherein the environmental factors comprise the document type and the upper limit number of articles in a single document.
3. The method of claim 1, wherein the documents to be processed include quantity information and location information of articles, and
generating the document pools of at least one level for the documents to be processed comprises:
obtaining article position distribution information of each document to be processed according to the location information of the articles in each document to be processed; and
generating document pools of at least one level according to the quantity information of the articles in each document to be processed and the article position distribution information of each document to be processed, wherein the total number of articles in each document pool is larger than the upper limit number of articles of the document processing task, the level of each document pool is related to the article position distribution number of the document pool, and the article position distribution number of a document pool is determined according to the article position distribution information of each document to be processed in the document pool.
4. A task assigning apparatus, comprising:
a document pool generation module, configured to generate document pools of at least one level for documents to be processed, wherein the number of document pools in each level is at least one;
a task generation module, configured to determine a task generation strategy of a first processing link according to the processing capability of a second processing link located downstream of the first processing link, acquire documents to be processed from the document pools according to the task generation strategy, and generate a document processing task by combining the acquired documents to be processed; and
a task allocation module, configured to allocate the generated document processing task to the first processing link for processing;
wherein the processing capability of the second processing link is determined according to the current task backlog quantity of the second processing link, the current tasks of the second processing link are generated by combining documents of a past batch, and the past batch is a batch earlier than the batch corresponding to the documents to be processed; and
wherein the task generation module is further configured to: when the task backlog quantity is larger than a preset threshold, select the documents to be processed in the document pools of the relevant levels one by one in descending order of article position distribution number, generate the document processing tasks by combining the selected documents to be processed, and stop selecting documents to be processed once the number of generated document processing tasks is larger than or equal to a task quantity threshold; and when the task backlog quantity is smaller than or equal to the preset threshold, select the documents to be processed in the document pools of the relevant levels one by one in ascending order of article position distribution number, generate the document processing tasks by combining the selected documents to be processed, and stop selecting documents to be processed once the number of generated document processing tasks is larger than or equal to the task quantity threshold.
5. The apparatus of claim 4, further comprising a document screening module configured to:
screen the documents to be processed from the documents of the current batch according to configured environmental factors, wherein the environmental factors comprise the document type and the upper limit number of articles in a single document.
6. The apparatus of claim 4, wherein the documents to be processed include quantity information and location information of articles, and
the document pool generation module is further configured to:
obtain article position distribution information of each document to be processed according to the location information of the articles in each document to be processed; and
generate document pools of at least one level according to the quantity information of the articles in each document to be processed and the article position distribution information of each document to be processed, wherein the total number of articles in each document pool is larger than the upper limit number of articles of the document processing task, the level of each document pool is related to the article position distribution number of the document pool, and the article position distribution number of a document pool is determined according to the article position distribution information of each document to be processed in the document pool.
7. An electronic device, comprising:
one or more processors;
A memory for storing one or more programs,
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-3.
8. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any of claims 1-3.
CN201910412005.8A 2019-05-17 2019-05-17 Task allocation method and device Active CN111950830B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910412005.8A CN111950830B (en) 2019-05-17 2019-05-17 Task allocation method and device

Publications (2)

Publication Number Publication Date
CN111950830A CN111950830A (en) 2020-11-17
CN111950830B true CN111950830B (en) 2024-05-21

Family

ID=73336710

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910412005.8A Active CN111950830B (en) 2019-05-17 2019-05-17 Task allocation method and device

Country Status (1)

Country Link
CN (1) CN111950830B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149925A (en) * 2019-06-26 2020-12-29 北京京东振世信息技术有限公司 Warehousing task automatic allocation method and device, and warehousing management method and system
CN113762872A (en) * 2021-03-01 2021-12-07 北京京东振世信息技术有限公司 Order combining method and device
CN113283738A (en) * 2021-05-20 2021-08-20 北京京东振世信息技术有限公司 Object positioning method and device, warehousing system, medium, equipment and product

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210269244A1 (en) * 2018-06-25 2021-09-02 Robert D. Ahmann Automated warehouse system and method for optimized batch picking

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000044649A1 (en) * 1999-01-27 2000-08-03 St. Onge Company Automated order pick process
WO2005113389A1 (en) * 2004-05-19 2005-12-01 SSI Schäfer PEEM GmbH Commissioning method and commissioning device
CN103440566A (en) * 2013-08-27 2013-12-11 北京京东尚科信息技术有限公司 Method and device for generating order picking collection lists and method for optimizing order picking route
CN104881769A (en) * 2015-05-27 2015-09-02 北京京东尚科信息技术有限公司 Order processing method and device
CN105353745A (en) * 2015-11-24 2016-02-24 四川长虹电器股份有限公司 Multi-stage series-parallel production line distributed collaborative decision-making control system and method
WO2018064937A1 (en) * 2016-10-08 2018-04-12 阿里巴巴集团控股有限公司 Order information processing method, apparatus and system
CN108205739A (en) * 2016-12-20 2018-06-26 北京京东尚科信息技术有限公司 Gather single group construction method and system
CN106980949A (en) * 2017-03-22 2017-07-25 湖北九州通达科技开发有限公司 The scattered commodity of Medical Logistics check sorting method and system
CN106809586A (en) * 2017-03-28 2017-06-09 北京京东尚科信息技术有限公司 Method and apparatus for determining picking path
CN107392543A (en) * 2017-07-28 2017-11-24 网易无尾熊(杭州)科技有限公司 Information processing method, data handling system, medium and computing device
CN109658018A (en) * 2017-10-10 2019-04-19 北京京东尚科信息技术有限公司 The method and apparatus for improving storage outbound efficiency
CN109658027A (en) * 2018-12-17 2019-04-19 北京极智嘉科技有限公司 A kind of processing method of order taking responsibility, device, server and medium
CN109670708A (en) * 2018-12-19 2019-04-23 中山乐心电子有限公司 The distribution system and method for picking business of storing in a warehouse

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"A电商企业仓储中心订单拣选作业优化研究";吴楠;《中国优秀硕士学位论文全文数据库 经济与管理科学辑》(第2017(02)期);第J145-755页 *
陈方宇.《多区块仓库下订单分批与拣选路线规划研究》.北京:知识产权出版社,2017,(第1版),第1-13页. *

Also Published As

Publication number Publication date
CN111950830A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN110472899B (en) Method and device for distributing articles out of warehouse
US20170206481A1 (en) Order sourcing with asynchronous communication and using optimization for large sourcing networks
CN110348771B (en) Method and device for order grouping of orders
CN111950830B (en) Task allocation method and device
CN109961247B (en) Method and device for generating article storage information
CN110826831A (en) Method and device for restocking a picking zone of a warehouse
CN109961306B (en) Method and device for distributing inventory of articles
CN110889656B (en) Warehouse rule configuration method and device
CN111507651A (en) Order data processing method and device applied to man-machine mixed warehouse
WO2021143510A1 (en) Task determination method and device
CN114118888A (en) Order ex-warehouse method and device
CN115660380B (en) Order processing method and device for picking goods to person
CN113495779A (en) Task scheduling method and device and task execution system
CN111260270A (en) Method and device for improving order processing efficiency of store
CN111260288B (en) Order management method, device, medium and electronic equipment
CN111798167B (en) Warehouse replenishment method and device
CN111461583B (en) Inventory checking method and device
CN112784212B (en) Inventory optimization method and device
CN113222490A (en) Inventory allocation method and device
CN112446652A (en) Method and device for processing task set
CN112446658A (en) Method and device for shunting and shelving storage articles
CN111260271A (en) Method and device for shortening order picking time of store
CN113379173B (en) Method and device for marking warehouse goods with labels
CN110689291A (en) Method and device for determining warehousing and boxing amount
CN116629495A (en) Method and device for outputting information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20210302

Address after: 6 / F, 76 Zhichun Road, Haidian District, Beijing 100086

Applicant after: Beijing Jingdong Zhenshi Information Technology Co.,Ltd.

Address before: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant before: Beijing Jingbangda Trading Co.,Ltd.

Effective date of registration: 20210302

Address after: 101, 1st floor, building 2, yard 20, Suzhou street, Haidian District, Beijing 100080

Applicant after: Beijing Jingbangda Trading Co.,Ltd.

Address before: 100086 8th Floor, 76 Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING JINGDONG SHANGKE INFORMATION TECHNOLOGY Co.,Ltd.

Applicant before: BEIJING JINGDONG CENTURY TRADING Co.,Ltd.

TA01 Transfer of patent application right
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant