Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Those skilled in the art will appreciate that embodiments of the invention may be implemented as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the following forms, namely: complete hardware, complete software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
Fig. 1 is a schematic diagram of main steps of a task allocation method according to an embodiment of the present invention.
As shown in fig. 1, the task allocation method according to the embodiment of the present invention mainly includes the following steps S101 to S103.
Step S101: generating at least one level bill pool for the bill to be processed, wherein the number of each level bill pool is at least one.
Before step S101, documents to be processed may be screened from the documents in the current batch according to the configured environmental factor, where the environmental factor may include a document type and an upper limit number of articles in a single document.
Taking the current batch of bills as an example, the bill types can comprise common customer orders, personalized orders, aging orders, large customer orders and the like, the upper limit number of the articles in a single bill can be the upper limit threshold value of SKU (stock quantity unit) in the order, and other environmental factors can be configured according to business requirements. The document type determines how many orders need automatic task allocation, and the upper limit number of items in a single document determines how many SKUs within the order can use the automatic task allocation function. These environmental factors are pre-configured by the server responsible for task allocation prior to task allocation, e.g., during the configuration phase, one or more order types, SKU upper threshold, etc., may be configured according to business needs.
The environmental factors may also include some basic parameters for task allocation, taking task allocation in an e-commerce warehouse as an example, the environmental factors may also include a pick number, a pick container order upper limit, a collection list review upper limit threshold, and the like, where the pick number is the number of people handling the pick task in the pick link, the pick container order upper limit may be used to limit the number of commodity SKUs of the pick container, and the collection list review upper limit threshold is the maximum number of tasks that the review loop can backlog. These environmental factors also need to be preconfigured for use in the subsequent steps of task allocation. The order picking link and the rechecking link are processing links of the electronic commerce warehouse, the tasks of the processing links are generated by combining orders, and the tasks of a certain processing link are processed corresponding to the processing links, such as order picking processing, rechecking processing and the like, for the orders in the task.
The document to be processed may include information on the number and location of the items.
Step S101 may specifically include: obtaining article position distribution information of each document to be processed according to the position information of the articles in each document to be processed; generating at least one level bill pool according to the quantity information of the articles in each bill to be processed and the article position distribution information of each bill to be processed, wherein the total number of the articles in each bill pool is larger than the article upper limit number of the bill processing task, the level of each bill pool is related to the article position distribution number of the bill pool, and the article position distribution number of one bill pool is determined according to the article position distribution information of each bill to be processed in the bill pool.
Taking a document to be processed as an order to be processed for automatic task allocation as an example, acquiring position information of commodities in the order, such as information of storage areas, roadways, storage positions and the like of the commodity storage, wherein each storage area comprises a plurality of roadways, and each roadway comprises a plurality of storage positions. For each order to be processed, the commodity position distribution information of each order can be obtained according to the storage positions of commodities, for example, a plurality of commodities in a certain order are specifically distributed in which storage areas, roadways and storage positions. And generating an order pool of one or more levels according to the number of commodities in each order and the commodity position distribution information, wherein the order pool is equivalent to a bill pool, and the orders are used for generating bill processing tasks, such as a picking task.
The order pool that a certain order uses to generate a specific level is related to the number of goods in the order and the goods position distribution information of the order. For example, if one or more individual orders (i.e., an order includes only one type of merchandise) have merchandise locations distributed in the same storage area, lane, or location, and the total number of merchandise for the individual orders is greater than the upper limit of the number of aggregate orders, then the individual orders are generated into a class S order pool. Wherein, the collection list is obtained by order combination, and one collection list is a task.
Taking a collection list as an example of a picking task, the upper limit of the number of collection lists is also the upper limit of the goods of the picking task, which corresponds to the upper limit of the goods of the bill processing task. When the order picking task is generated, each collection list combines orders according to the upper limit of the number of the collection list, so that higher order picking efficiency can be ensured. Because the total number of the commodities in the S-level order pool is larger than the upper limit number of the commodities of the order picking task, at least one order picking task can be generated by combining the orders in the S-level order pool, the commodities of the order picking task are concentrated in the same storage area, roadway and storage position, and the order picking efficiency is extremely high. The pick efficiency and order group (i.e., composition aggregate) effect of the S-level order pool may be reflected by a number of metrics, which may include:
a/d: the order storage area ratio reflects the single-effect macro index of the order pool, and the greater the order pool, the better the order effect;
a/c: order roadway ratio, reflecting single-effect microscopic indexes of an order pool, wherein the larger the order pool, the better the order effect;
b/d: the commodity storage area ratio reflects the macroscopic index of the picking efficiency, and the higher the picking rate is, the higher the picking rate is;
b/c: the commodity roadway ratio reflects microscopic indexes of the picking efficiency, and the larger the picking rate is, the higher the picking rate is;
wherein a, b, c, d are calculation factors, respectively representing:
Calculating a factor a: representing the total number of orders in the aggregate sheet;
Calculating a factor b: representing the number of the commodities in the collection list;
Calculating a factor c: the number of lanes representing the distribution of order SKUs in the aggregate sheet;
Calculating a factor d: representing the number of bits spanned within the aggregate sheet.
For an S-level order pool, because orders in the S-level order pool are concentrated in the same storage position and are single-storage-position orders, the commodity storage area ratio b/d and the commodity roadway ratio b/c are the largest, and the picking efficiency is extremely high.
Still taking the order in the order pool as an example for generating the order picking task, for the order which can not generate the S-level order pool, generating the order pools of other levels according to the number of commodities and commodity position distribution information. Specifically:
For a number of orders for which a class S order pool cannot be generated, but the positions of the goods are distributed in the same storage area, and the total number of orders is greater than the upper limit number of goods for the picking task, a class A1 order pool can be generated for the number of orders. Orders in an A1 class pool of orders are, for example, multiple orders distributed in the same storage area (i.e., one order includes more than one commodity), or are, for example, single orders distributed in the same storage area but not in the same storage location.
For a number of orders for which a class S, class A1 order pool cannot be generated, but the commodity locations are distributed in the same two storage areas, and the total number is greater than the upper limit number of commodities for the picking task, a class A2 order pool can be generated for the number of orders.
For a plurality of orders for which the S-level, A1-level and A2-level order pools cannot be generated, but the commodity positions are distributed in the same three storage areas, and the total number of the orders is larger than the commodity upper limit number of the picking task, the B1-level order pools can be generated for the plurality of orders.
For a plurality of orders for which the order pools of the S level, the A1 level, the A2 level and the B1 level cannot be generated, but the commodity positions are distributed in four or more storage areas which are the same, and the total number of the orders is larger than the commodity upper limit number of the picking task, the order pool of the B2 level can be generated for the orders.
Orders that span the storage area and are most in roadway can be extracted from the B2 order pool, and a snail order pool is generated and represents that the orders are combined into a collection order (one order picking task), so that the order picking efficiency is extremely low.
The above is an example of generating an order pool from the S level to the snail level according to the quantity information of the commodities in the order to be processed and the commodity position distribution information of the order to be processed. The level of each order pool is related to the distribution number of the article positions of the order pool, for example, the positions of the articles in the S-level order pool are distributed in the same storage area, roadway and storage position, and the distribution number of the article positions is equal to 1; the commodity positions of the A1-level order pool are distributed in the same storage area, the same or different roadways and different storage positions, wherein the distribution number of the commodity positions is that the storage area number is equal to 1, the roadway number is equal to or greater than 1, and the storage number is greater than 1; the commodity positions of the A2-level order pool are distributed in two storage areas, the same or different roadways and different storage positions in each storage area, wherein the distribution number of the commodity positions is that the number of the storage areas is equal to 2, the number of the roadways is greater than 2, and the number of the storage positions is greater than 2; the number of distribution of the positions of the articles in the order pool of other levels is not necessarily listed. The distribution number of the article positions of an order pool is determined according to the commodity position distribution information of each order in the order pool, namely, the characteristics of the distribution number of the article positions of the order pool can be determined according to the information of storage areas, roadways and storage positions in which the commodity positions in the order pool are generally distributed, so that the level of the order pool is determined.
In addition, the above indicators of the order picking efficiency and the order grouping effect of the S-level order pool are applicable to order pools of other levels. The order picking efficiency of each level order pool and each index are more specifically as follows:
the S-level order pool is used for generating a picking task for a single storage order, so that the picking efficiency is extremely high, and the commodity storage area ratio b/d and the commodity roadway ratio b/c are the largest;
The A1-level order pool meets the requirement of picking a goods in a single storage area of a pickers, and a goods picking task is generated by using the order pool, so that the goods picking efficiency is higher, the b/d is maximum, and the b/c is larger;
The A2-level order pool meets the requirement of picking in less storage areas of pickers, and the pickers are used for generating picking tasks, so that the picking efficiency is high, the b/d times are high, and the b/c is large;
the B1 order pool spans three picking areas, and the picking tasks are generated by using the order pool, so that the picking efficiency is general, the B/d times are smaller, and the B/c is smaller;
The B2-level order pool spans four or more picking areas, and is used for generating picking tasks, so that the picking efficiency is low, the B/d times are low, and the B/c is small;
The snail order pool is an extracted order which spans the storage area and has the largest roadway, the distribution number of the positions of the articles is largest, the snail order pool is used for generating a picking task, the picking efficiency is extremely low, and the calculation factors b/d and b/c are smallest.
Comparing the distribution numbers of the positions of the articles according to the order pools of each level, wherein the distribution numbers of the positions of the articles are from large to small: snail level order pool > B2 level order pool > B1 level order pool > A2 level order pool > A1 level order pool > S level order pool, wherein the distribution numbers of the positions of the articles are compared according to the sequence of the storage area number, the roadway number and the storage position number, namely, the order pools of different levels are as follows: if the storage area number is large, the distribution number of the article positions is large, if the storage area number is the same, the distribution number of the article positions of the order pool with large roadway number is large, and if the roadway number is also the same, the distribution number of the article positions of the order pool with large storage number is large, in a word, the distribution degree of the overall article positions of the order pool is reflected by the distribution number of the article positions, and the order pool with larger article position distribution number has more distributed commodity, lower picking efficiency and longer picking time.
Step S102: and determining a task generation strategy of the first processing link according to the processibility of the second processing link positioned at the downstream of the first processing link, acquiring a document to be processed from the generated document pool according to the task generation strategy, and generating a document processing task according to the acquired document combination to be processed.
The task processing process may include a plurality of processing links, the first processing link and the second processing link may be any two of the processing links, and the second processing link is a processing link located downstream of the first processing link. And processing the documents of each batch according to the sequence of each processing link in the task processing process. The processibility of the second processing link is an index for reflecting the current processing pressure of the second processing link, and can be specifically determined according to the backlog quantity of the current task of the second processing link, wherein the current task of the second processing link is generated according to the combination of past batch receipts, and the past batch is the batch before the batch corresponding to the receipts to be processed.
Step S102, specifically, under the condition that the number of task backlog is greater than a preset threshold, selecting the to-be-processed bills in the related-level bill pool one by one according to the sequence of the distribution number of the article positions from large to small, so as to generate bill processing tasks according to the selected to-be-processed bill combination, and stopping selecting the to-be-processed bills until the number of the generated bill processing tasks is greater than or equal to the threshold of the number of the tasks;
And under the condition that the number of the backlog of the tasks is smaller than or equal to a preset threshold value, selecting the to-be-processed bills in the bill pool of the related level one by one according to the order of the distribution number of the article positions from small to large, generating the bill processing tasks according to the selected to-be-processed bill combination, and stopping selecting the to-be-processed bills until the number of the generated bill processing tasks is larger than or equal to the threshold value of the number of the tasks.
The preset threshold, that is, the task volume upper limit threshold of the second processing link, the task number threshold may be the number of people processing the task in the first processing link, or may be set to other values according to the need, and both the preset threshold and the task number threshold may be preconfigured as environmental factors before step S101.
Taking an e-commerce warehouse as an example, the processing links can comprise a plurality of links such as picking and rechecking, wherein in the picking link, a warehouse personnel takes a picking task generated by a server for distributing tasks, the picking task can be specifically in the form of a picking bill, and commodities are taken out from a storage position of a goods shelf according to the picking bill and are collected; the rechecking link is a subsequent processing link of the picking link, and mainly separates the picked commodities, and arranges the commodities according to order dimensions for delivery to packing staff.
The processing capacity of the review link can be determined according to the current task backlog quantity of the review link, and the batch of the order currently used for generating the picking task is called a current batch, so that the current task of the review link is generated according to the order combination of the batches before the current batch.
When the current task backlog quantity of the rechecking link is larger than the configured rechecking link task backlog upper limit threshold value, the task processing pressure of the rechecking link is large, the picking task needs to be generated preferentially according to the order pool with low picking efficiency when the picking task is generated, and the picking mode under the condition can be called as a slow picking mode so as to avoid more backlog and higher pressure of the rechecking link task. Otherwise, when the current task backlog quantity of the rechecking link is smaller than or equal to the configured rechecking link task backlog upper limit threshold value, the task processing pressure of the rechecking link is small, and when the picking task is generated, the picking task is required to be generated preferentially according to the order pool with high picking efficiency, and the picking mode under the condition can be called as a quick picking mode, so that the task is transferred from the picking link to the rechecking link as soon as possible.
Based on the analysis, under the condition that the task backlog quantity is larger than the task backlog upper limit threshold of the rechecking link, the orders in the order pools of the related levels are selected one by one according to the order of the distribution quantity of the object positions from large to small, namely, the orders are selected according to the order of the snail order pool > B2 order pool > B1 order pool > A2 order pool > A1 order pool > S order pool to generate a picking task, so that the picking link enters a slow picking mode. And when selecting orders from the order pools of all levels, firstly selecting orders from the snail order pools according to the sequence, and forming a collection list, wherein each collection list corresponds to one order picking task, if the number of the order picking tasks is smaller than the task number threshold, selecting orders from the B2 order pool to generate order picking tasks, and when the total number of the generated order picking tasks is larger than or equal to the task number threshold, not selecting orders from the order pools. The task number threshold may be specifically set as the pick number, and the pick person gradually consumes the pick task, but does not distribute the pick task excessively, so that the maximum number of orders in the order pool may be ensured, so as to obtain an optimal task distribution result.
Under the condition that the task backlog quantity is smaller than or equal to the task backlog upper limit threshold value of the rechecking link, selecting orders in the order pools of related levels one by one according to the order of the distribution quantity of the object positions from small to large, namely selecting orders according to the order of the S-level order pool > A1-level order pool > A2-level order pool > B1-level order pool > B2-level order pool > snail-level order pool to generate a picking task, so that the picking link enters a quick picking mode. Likewise, the order is first selected from the S-level order pool to generate order picking tasks, and the order is selected from the step-by-step level until the total number of the generated order picking tasks is greater than or equal to the task number threshold.
In the above-described order picking task generation, each collection sheet is subjected to order picking task generation according to the upper limit of the number of collection sheets, so that a high picking efficiency can be ensured, but it should be noted that, for an order of a lot, there is generally a corresponding shipment deadline, and when the shipment deadline is reached, all orders of the lot must be taken out. Therefore, when the delivery deadline is reached, if the total number of commodities after some orders are assembled does not reach the upper limit of the number of the assembled commodities, the assembled commodities are also generated, and automatic task allocation is forced to ensure that the commodities in the orders are delivered before the delivery deadline.
Step S103: and distributing the generated bill processing task to a first processing link for processing.
The bill processing task of the first processing link is generated according to the processibility of the second processing link, so that the production rhythm of the first processing link can be controlled according to the processibility of the second processing link. By combining the above examples of task allocation to the e-commerce warehouse, the production rhythm of picking can be controlled through fast and slow picking in the picking link according to the task backlog condition of the rechecking link, the dynamic balance of the upstream link and the downstream link of task processing is realized, the influence on the task processing time efficiency achievement and the warehouse production time efficiency caused by the task backlog of the rechecking link is avoided, and the user experience is good.
Fig. 2 is a schematic diagram of a warehouse production flow according to an embodiment of the invention.
As shown in fig. 2, the warehouse production process according to the embodiment of the present invention includes:
a warehouse production system (abbreviated as system) receives orders, such as an order for a lot;
Initializing a system, wherein the system mainly comprises initializing some configurations, such as configuring various environmental factors, including order types, upper limit numbers of articles in single orders, picking numbers, upper limit numbers of picking container orders, upper limit thresholds of aggregate single review products and the like;
The system performs positioning, namely positioning storage positions of commodities in each order, such as storage areas, roadways, storage positions and the like where the commodities in the order are stored;
The system automatically performs task allocation, wherein the system can select orders needing to perform automatic allocation tasks according to configuration of an initialization step, generate at least one level of order pools for the orders performing the automatic allocation tasks, generally the order pools can be divided into an S level, an A level (which is divided into an A1 level and an A2 level), a B level (which is divided into an B1 level and an B2 level), a snail level, specifically which level or levels of order pools are generated is/are specifically determined according to storage positions and quantity of commodities in each order, specifically, commodity position distribution information of each order can be determined according to storage positions of commodities in each order, and the order pools are generated according to quantity information and commodity position distribution information of commodities in each order. And then determining a task generation strategy of the picking link according to the rechecking backlog quantity fed back by the rechecking packing link, wherein the task generation strategy determines which level of order pool is preferentially used for acquiring orders to generate picking tasks, the number of the generated picking tasks does not exceed the number of pickers, namely, when the number of the generated picking tasks is greater than or equal to the number of pickers, the picking tasks are not continuously generated any more so as to ensure the maximum number of orders in the order pool, and the optimal allocation result is calculated in the follow-up process. Finally, issuing the generated picking task to a picking link;
in the picking link, a picker picks the order commodities in the picking task one by one according to the picking task, and the order commodities are sent to the rechecking and packaging link after the task processing is completed;
in the rechecking and packaging link, rechecking personnel separate the selected commodities, sort the commodities according to order dimensions, and then deliver the sorted commodities to packaging personnel, and the packaging personnel package the ordered commodities.
In the above flow, the data index after the order is assembled according to the order pool is expressed in the following aspects: aggregate single saturation percentage, total number of items/total number of reserves ("/" indicates division), aggregate single pick difficulty structure. Wherein, the saturation degree of the collection list refers to the quantity of commodities in one collection list, and the more the commodities are, the higher the picking efficiency is. The saturation percentage of the collection list refers to the ratio of the number of the saturation collection list to the total number of the collection list in the collection list generated according to a batch of orders, wherein the saturation collection list refers to the collection list of which the commodity number in the collection list reaches the preset upper limit of the commodity number of the collection list; the total commodity number/total storage area number is a macroscopic index showing the picking efficiency; the pick-up difficulty structure is related to the pick-up difficulty structure. In this embodiment, the pick structure of the aggregate sheet reflects the source distribution of order picks in the aggregate sheet, that is, the storage location distribution of the ordered commodities, for example, the storage area distribution, the roadway distribution, the storage location distribution, and the like.
FIG. 3 is a diagram showing the comparison of the index values of data in a collection sheet according to the prior art and an embodiment of the present invention. Based on the data index embodiment after the order is assembled according to the order pool, the data index value obtained according to the actual order data (adopting manual task allocation) in a certain warehouse for one month is called as a data index actual value, and is called as an actual value for short; the method according to the embodiment of the invention optimizes the prior art, and comprises the steps of generating order pools of all levels, carrying out automatic task allocation on order groups according to a task generation strategy according to the levels of the order pools, and obtaining the data index value based on the embodiment of the invention, namely a data index optimized value, namely an optimized value for short.
In fig. 3 (a), the upper limit of the number of commodities in a single bill is set to be 50, and the number of commodities in the bill is controlled to be 50 as much as possible through optimization in the embodiment of the invention, compared with the prior art, the distribution of the number of commodities in the whole bill is changed in the embodiment of the invention, compared with the prior art, the saturation percentage of the average commodity in the bill is improved from 30.9% to 42.1% after optimization, the saturation percentage of the average commodity in the bill is equal to the ratio of the sum of the number percentages in each bill to the total number of the bill, and the number percentage of the commodity in one bill is equal to the ratio of the number of ordered commodities distributed to the bill to the upper limit of the preset number of the bill; compared with the prior art, the embodiment of the invention has the advantages that more collection sheets reach saturation and become saturated collection sheets, and the saturation percentage of the collection sheets is improved from 40% (actual value) to 79% (optimized value).
In fig. 3 (b), the histogram shows the comparison of the prior art (state of the art) with the optimized data index value implemented by the present invention. Wherein, on the basis of the data index of the total commodity number/total storage area number, several data indexes are also expanded, which are respectively: total number of orders/total number of reserves, total number of orders/total number of lanes, total number of orders/total number of storage bits, total number of commodities/total number of storage lanes, total number of commodities/total number of storage bits (in the above data index, "/" indicates division). Each set of histograms includes a comparison of the current situation (prior art) with the present invention of examples v1.0, v1.1, v1.2, and each set of histograms in fig. 3 (b) corresponds to the current situation, v1.0, v1.1, v1.2, respectively, from left to right. Wherein v1.0 is optimizing only the storage area, i.e. only the storage area is considered when generating the order pool, and a "quick pick" mode is implemented, i.e. orders in the order pool are distributed in a concentrated priority pick position; v1.1 is to optimize the storage area, the tunnel and the storage position, namely, when an order pool is generated, not only the storage area but also the tunnel and the storage position are considered, the storage area, the tunnel and the storage position are synthesized to ensure optimal picking efficiency, and the mode of 'quick picking' is implemented, so that compared with v1.0, the picking efficiency is reduced; v1.2 is to optimize the reserves, lanes, storage and to implement a "slow pick" mode, i.e. to prioritize orders in an order pool with a distributed picking position, e.g. to prioritize order picking from a snail order pool to generate aggregate orders with the lowest pick efficiency. According to the histogram in fig. 3 (b) or the comparison result in table 1, the total commodity number/total storage area number in the embodiment v1.1 of the present invention is improved from 17.2% to 30% compared with the prior art, and is improved by 74%, the cross storage area number of the collection sheet is effectively improved, and the larger the index value of the total commodity number/total storage area number is, the higher the group sheet efficiency and the pick efficiency are. The two comprehensive comparisons V1.1 and V1.0 are both quick picking modules, but the V1.1 and V1.0 are improved on each index value of total order number/total lane number, total order number/total storage number, total commodity number/total storage lane number and total commodity number/total storage number. In addition, v1.2 carries out "slow picking" mode each data index value and also generally reduces than prior art, can reach the purpose that reduces picking efficiency and picking speed. The above analysis was performed using only the comparison result between the individual indexes as an example, and the analysis concept of the comparison result between the other data index values of fig. 3 (b) is similar, and the analysis is not performed one by one.
TABLE 1
Fig. 3 (c) shows a comparison result of the aggregate single lane number distribution and the storage bit number distribution in the prior art and the embodiment of the present invention, which is a specific expression of comparing the actual value and the optimized value of the aggregate single pick difficulty structure data index. As can be seen from fig. 3 (c), the number of lanes and the storage position of the aggregate list allocated by the manual task are irregular (because the manual task allocation can only form an order pool and group the order according to the order placing order), and after the optimization of the embodiment of the invention, the overall number of lanes and the storage position of the aggregate list are distributed in a small-to-large distribution manner, so that when the picking task is generated, the modes of 'quick picking' and 'slow picking' can be flexibly switched according to the rechecking speed of the rechecking link, and the order is selected from the order pool as required, so that the dynamic balance of the processing speeds of the picking link and the rechecking link is achieved.
According to the embodiment of the invention, an order pool grading concept is introduced, orders are automatically combined into an aggregate order according to a specific order (sequence or reverse order) of an order pool level, dynamic calculation is automatically carried out according to the processing capacity of a review link, quick picking and slow picking modes are flexibly switched, and optimized data indexes of the prior art and the embodiment of the invention are compared, so that various advantages of the automatic task allocation of the embodiment of the invention compared with the manual task allocation scheme of the prior art are intuitively presented, the embodiment of the invention can balance the production rhythm of a warehouse, and the rhythm of each link of the warehouse production is dynamically balanced.
Fig. 4 is a schematic diagram of main modules of a task assigning apparatus according to an embodiment of the present invention.
The task allocation device 400 according to the embodiment of the present invention mainly includes: document pool generation module 401, task generation module 402, task allocation module 403.
The document pool generating module 401 is configured to generate at least one level document pool for a document to be processed, where the number of document pools in each level is at least one.
The document to be processed may include information on the number and location of the items.
The document pool generation module 401 may specifically be configured to: obtaining article position distribution information of each document to be processed according to the position information of the articles in each document to be processed; generating at least one level bill pool according to the quantity information of the articles in each bill to be processed and the article position distribution information of each bill to be processed, wherein the total number of the articles in each bill pool is larger than the article upper limit number of the bill processing task, the level of each bill pool is related to the article position distribution number of the bill pool, and the article position distribution number of one bill pool is determined according to the article position distribution information of each bill to be processed in the bill pool.
The task allocation device 400 may further include a document filtering module for: and screening the documents to be processed from the documents in the current batch according to the configured environmental factors, wherein the environmental factors can comprise the types of the documents and the upper limit number of the articles in the single document.
The task generating module 402 is configured to determine a task generating policy of the first processing link according to a processible capability of the second processing link located downstream of the first processing link, acquire a document to be processed from the generated document pool according to the task generating policy, and generate a document processing task according to the acquired document to be processed combination.
The processibility of the second processing link is determined according to the backlog quantity of the current task of the second processing link, the current task of the second processing link is generated according to the combination of past batch receipts, and the past batch is the batch before the batch corresponding to the receipts to be processed.
The task generation module 402 may be specifically configured to: under the condition that the number of task backlog is larger than a preset threshold, selecting the to-be-processed bills in the related-level bill pool one by one according to the sequence of the distribution number of the article positions from large to small, generating bill processing tasks according to the selected to-be-processed bill combination, and stopping selecting the to-be-processed bills until the number of the generated bill processing tasks is larger than or equal to the threshold of the number of the tasks; and under the condition that the number of the backlog of the tasks is smaller than or equal to a preset threshold value, selecting the to-be-processed bills in the bill pool of the related level one by one according to the order of the distribution number of the article positions from small to large, generating bill processing tasks according to the selected to-be-processed bill combination, and stopping selecting the to-be-processed bills until the number of the generated bill processing tasks is larger than or equal to the threshold value of the number of the tasks.
The task allocation module 403 is configured to allocate the generated document processing task to the first processing link for processing.
In addition, the specific implementation of the task allocation device in the embodiment of the present invention has been described in detail in the task allocation method described above, so that the description is not repeated here.
Fig. 5 illustrates an exemplary system architecture 500 to which the task allocation method or task allocation device of embodiments of the present invention may be applied.
As shown in fig. 5, the system architecture 500 may include terminal devices 501, 502, 503, a network 504, and a server 505. The network 504 is used as a medium to provide communication links between the terminal devices 501, 502, 503 and the server 505. The network 504 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 505 via the network 504 using the terminal devices 501, 502, 503 to receive or send messages or the like. Various communication client applications may be installed on the terminal devices 501, 502, 503, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 501, 502, 503 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 505 may be a server providing various services, such as a background management server (by way of example only) providing support for shopping-type websites browsed by users using the terminal devices 501, 502, 503. The background management server may analyze and process the received data such as the product information query request, and feedback the processing result (e.g., the target push information, the product information—only an example) to the terminal device.
It should be noted that, the task allocation method provided in the embodiment of the present invention is generally executed by the server 505, and accordingly, the task allocation device is generally disposed in the server 505.
It should be understood that the number of terminal devices, networks and servers in fig. 5 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 6, there is illustrated a schematic diagram of a computer system 600 suitable for use in implementing a terminal device or server in accordance with an embodiment of the present application. The terminal device or server shown in fig. 6 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, mouse, etc.; an output portion 607 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card, a modem, or the like. The communication section 609 performs communication processing via a network such as the internet. The drive 610 is also connected to the I/O interface 605 as needed. Removable media 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on drive 610 so that a computer program read therefrom is installed as needed into storage section 608.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. The above-described functions defined in the system of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 601.
The computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example, as: a processor comprises a bill pool generation module, a task generation module and a task allocation module. The names of these modules do not constitute a limitation on the module itself in some cases, and for example, the document pool generation module may also be described as "a module for generating at least one hierarchical level of document pools for documents to be processed".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be present alone without being fitted into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to include: generating at least one level bill pool for the bill to be processed, wherein the number of each level bill pool is at least one; determining a task generation strategy of a first processing link according to the processibility of a second processing link positioned at the downstream of the first processing link, acquiring a document to be processed from the document pool according to the task generation strategy, and generating a document processing task according to the acquired document combination to be processed; and distributing the generated bill processing task to the first processing link for processing.
According to the technical scheme of the embodiment of the invention, the bill to be processed generates at least one level bill pool, and the number of each level bill pool is at least one; determining a task generation strategy of a first processing link according to the processibility of a second processing link positioned at the downstream of the first processing link, acquiring a document to be processed from a generated document pool according to the task generation strategy, and generating a document processing task according to the acquired document combination to be processed; and distributing the generated bill processing task to a first processing link for processing. The task allocation can be automatically carried out without relying on manual experience, so that the task allocation is systemized, the labor cost is saved, the instability of manual operation is avoided, the task allocation and operation accuracy is improved by accurately allocating according to the span of single/multiple storage areas, the dynamic balance of the upstream and downstream ring nodes of the task processing is realized, the influence on the time efficiency of the task processing and the time efficiency of warehouse production due to the backlog of a certain link task is avoided, and the user experience is good.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.