AU2017272252A1 - Resource evaluation for complex task execution - Google Patents

Resource evaluation for complex task execution Download PDF

Info

Publication number
AU2017272252A1
Authority
AU
Australia
Prior art keywords
resource
task
assessment
dimensional
dimension
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2017272252A
Inventor
Kumar Abhinav
Alpana DUBEY
Sakshi Jain
Alex Kass
Manish Mehta
Gurdeep Singh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Accenture Global Solutions Ltd
Original Assignee
Accenture Global Solutions Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2016247051A external-priority patent/AU2016247051A1/en
Application filed by Accenture Global Solutions Ltd filed Critical Accenture Global Solutions Ltd
Priority to AU2017272252A priority Critical patent/AU2017272252A1/en
Publication of AU2017272252A1 publication Critical patent/AU2017272252A1/en
Abandoned legal-status Critical Current

Links

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A system for facilitating selection of a resource for a task, the system including a communication interface, a memory including a multi-dimensional assessment framework including assessment dimensions including at least a resource dimension and a task dimension, wherein the task dimension includes a resource experience dimension relating to resource experience regarding similar tasks, dimensional metrics assigned to the assessment dimensions, and dimensional weights for the assessment dimensions, resource analysis circuitry configured to obtain, from a task controller device, task characteristics specified by a task controller for a task posted by the task controller device to a resource data platform, the posted task including a text narrative describing the task, connect to the resource data platform through the communication interface, obtain resource characteristics from the resource data platform that characterize two or more available resources that may be selected for the posted task, each resource characteristic including a text narrative describing the available resource, determine a resource assessment for each available resource by analyzing the text narrative of the posted task and each available resource to determine matching characteristics, calculating a task similarity score reflecting a level of similarity between a previous task completed by each resource and the posted task and by executing resource assessment according to the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework, including the task similarity score, and the dimensional weights, and store characteristics of available resources and resource assessments in separate database tables, and machine interface generation circuitry configured to generate a machine interface including a resource review interface that provides a visualization of available resources and associated resource assessments in a display that provides a side-by-side comparison 
of the available resources and associated resource assessments, the display including a customization section which enables a user to adjust one or more resource assessment thresholds, and an analysis section which enables a user to adjust one or more task requirement thresholds, wherein only the resources that satisfy the one or more resource assessment and task requirement thresholds are displayed, deliver the machine interface to the task controller device through the communication interface, and receive, from the task controller device through the machine interface, a task controller selection of one of the displayed available resources.

Description

TECHNICAL FIELD [0002] This application relates to a technical evaluation of resources to carry out complex tasks.
BACKGROUND OF THE INVENTION [0003] The global proliferation of high speed communication networks has created unprecedented opportunities for geographically distributed resource identification, evaluation, selection, and allocation. However, while the opportunities exist and continue to grow, the realization of those opportunities has fallen behind. In part, this is due to the enormous technical challenges of finding the resources, evaluating the resources, and determining how to allocate the resources to achieve the highest likelihood of successfully completing the task.
2017272252 07 Dec 2017
SUMMARY OF THE INVENTION [0004] In one aspect, the present invention provides a system for facilitating selection of a resource for a task, the system including a communication interface, a memory including a multi-dimensional assessment framework including assessment dimensions including at least a resource dimension and a task dimension, wherein the task dimension includes a resource experience dimension relating to resource experience regarding similar tasks, dimensional metrics assigned to the assessment dimensions, and dimensional weights for the assessment dimensions, resource analysis circuitry configured to obtain, from a task controller device, task characteristics specified by a task controller for a task posted by the task controller device to a resource data platform, the posted task including a text narrative describing the task, connect to the resource data platform through the communication interface, obtain resource characteristics from the resource data platform that characterize two or more available resources that may be selected for the posted task, each resource characteristic including a text narrative describing the available resource, determine a resource assessment for each available resource by analyzing the text narrative of the posted task and each available resource to determine matching characteristics, calculating a task similarity score reflecting a level of similarity between a previous task completed by each resource and the posted task and by executing resource assessment according to the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework, including the task similarity score, and the dimensional weights, and store characteristics of available resources and resource assessments in separate database tables; and machine interface generation circuitry configured to generate a machine interface including a resource review interface that provides a visualization of available resources and 
associated resource assessments in a display that provides a side-by-side comparison of the available resources and associated resource assessments, the display including a customization section which enables a user to adjust one or more resource assessment thresholds, and an analysis section which enables a user to adjust one or more task requirement thresholds, wherein only the resources that satisfy the one or more resource assessment and task requirement thresholds are displayed, deliver the machine interface to the task controller device through the communication interface, and receive, from the task controller device through the machine interface, a task controller selection of one of the displayed available resources.
[0005] In another aspect, the present invention provides a computer-implemented method for facilitating selection of a resource for a task, the method including establishing in memory a multi-dimensional assessment framework including assessment dimensions including at least a resource dimension and a task dimension, wherein the task dimension includes at least a resource experience dimension relating to resource experience regarding similar tasks, dimensional metrics assigned to the assessment dimensions, and dimensional weights for the assessment dimensions, executing resource analysis circuitry that obtains, from a task controller device, task characteristics specified by a task controller for a task posted by the task controller device to a resource data platform, the posted task including a text narrative describing the task, connects to the resource data platform through a communication interface, obtains resource characteristics from the resource data platform that characterize two or more available resources that may be selected for the posted task, each resource characteristic including a text narrative describing the available resource, determines a resource assessment for each available resource by analyzing the text narrative of the posted task and each available resource to determine matching characteristics, calculating a task similarity score reflecting a level of similarity between a previous task completed by each resource and the posted task, and by executing resource assessment according to the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework including the task similarity score, and the dimensional weights, and stores characteristics of available resources and resource assessments in separate database tables, and executing machine interface generation circuitry to generate a machine interface including a resource review interface that provides a visualization of available resources 
and associated resource assessments in a display that provides a side-by-side comparison of the available resources and associated resource assessments, the display including a customization section which enables a user to adjust one or more resource assessment thresholds, and an analysis section which enables a user to adjust one or more task requirement thresholds, wherein only the resources that satisfy the one or more resource assessment and task requirement thresholds are displayed, and deliver the machine interface to the task controller device through the communication interface, and receive, from the task controller device through the machine interface, a task controller selection of one of the displayed available resources.
[0006] In yet another aspect, the present invention provides a system for facilitating
selection of a resource for a task, the system including a communication interface, a memory including a multi-dimensional assessment framework including assessment dimensions and dimensional metrics including a resource dimension configured to evaluate resource specific characteristics, the resource dimension defining a past rating metric and an experience metric, a task dimension configured to evaluate resource-to-task compatibility, the task dimension defining an availability metric and a skill fitness metric, a controller dimension configured to evaluate resource-to-task controller compatibility, the controller dimension defining a cultural match metric and a task controller collaboration metric, a team dimension configured to evaluate resource-to-team compatibility, the team dimension defining a team collaboration metric and a time-zone match metric, and a goal dimension configured to evaluate resource goals, the goal dimension defining a skill opportunity metric, and dimensional weights for the assessment dimensions, including a resource dimensional weight for the resource dimension, a task dimensional weight for the task dimension, a controller dimensional weight for the controller dimension, a team dimensional weight for the team dimension, and a goal dimensional weight for the goal dimension, resource analysis circuitry configured to obtain, from a task controller device, task characteristics specified by a task controller for a task posted by the task controller device to a resource data platform, the posted task including a text narrative describing the task, connect to the resource data platform through the communication interface, obtain resource characteristics from the resource data platform that characterize two or more available resources that may be selected for the posted task, each resource characteristic including a text narrative describing the available resource, and determine a resource assessment for each 
available resource by analyzing the text narrative of the posted task and each available resource to determine matching characteristics, calculating a task similarity score using the past rating metric, the task similarity score reflecting a similarity between a previous task completed by each resource and the posted task, and by executing resource assessment according to the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework, including the task similarity score and the dimensional weights, and store characteristics of available resources and resource assessments in separate database tables; and machine interface generation circuitry configured to generate machine interfaces including a resource assessment result interface including assessment results for each of the two or more available resources, a resource detail interface for a selected resource among the two or more available resources, the resource detail interface including a skill fitness section, a similar task
analysis section, an ongoing tasks analysis section, a recent reviews analysis section, a prior task analysis section, and a summary analysis section, and a resource review interface that provides a visualization of available resources and associated resource assessments in a display that provides a side-by-side comparison of the available resources and associated resource assessments, the display including a customization section which enables a user to adjust one or more resource assessment thresholds, and an analysis section which enables a user to adjust one or more task requirement thresholds, wherein only the resources that satisfy the one or more resource assessment and task requirement thresholds are displayed, deliver the machine interfaces to the task controller device through the communication interface, and receive, from the task controller device through one or more of the machine interfaces, a task controller selection of one of the displayed available resources.
BRIEF DESCRIPTION OF THE DRAWINGS [0007] Figure 1 shows a global network architecture.
[0008] Figure 2 shows an example implementation of a resource analysis system.
[0009] Figure 3 shows one example of a multi-dimensional analysis framework that the resource analysis system may implement.
[0010] Figure 4 shows another example of a multi-dimensional analysis framework that the resource analysis system may implement.
[0011] Figure 5 shows another example of a multi-dimensional analysis framework that the resource analysis system may implement.
[0012] Figure 6 shows the resource analysis system in communication with sources of resource data, and with platforms that consume resource analysis services.
[0013] Figure 7 shows an example potential resource interface.
[0014] Figure 8 shows an example resource analysis result interface.
[0015] Figure 9 shows an example resource detail interface.
[0016] Figure 10 shows additional detail from the resource detail interface.
[0017] Figure 11 shows an example similar task analysis interface.
[0018] Figure 12 shows an example ongoing tasks analysis interface.
[0019] Figure 13 shows an example recent reviews analysis interface.
[0020] Figure 14 shows an example prior task analysis interface.
[0021] Figure 15 shows an example summary analysis interface.
[0022] Figure 16 shows an example resource comparison interface.
[0023] Figure 17 shows an example of process flow for resource analysis.
[0024] Figure 18 shows another example of process flow for resource analysis when integrated with a hybrid sourcing platform.
DETAILED DESCRIPTION OF THE EMBODIMENT(S) OF THE INVENTION [0025] Finding, evaluating, and applying the right set of resources to a complex task is a key to successful task execution and completion. The dynamic resource analysis machine (DRAM) described below implements a technical multi-dimensional assessment system for resources, performs complex assessments of the resources, and defines and generates improved machine interfaces that deliver the assessments, e.g., for consideration and possible selection of a resource. The DRAM may perform the complex assessments on a wide range of resource types for any type of task. A few examples of resource types include: software programs; trained and untrained machine learning models; artificial intelligence engines; robots; machines; tools; mechanical, chemical, and electrical equipment; database models; individual workers with specific skills; and mechanical, chemical, or electrical components. A few examples of tasks include: deploying cloud infrastructure; building a web site; building an oil rig; performing legal services; creating a new software application; architecting and building an office, factory, or school; or designing, simulating, prototyping, and manufacturing high performance analog or digital circuitry.
[0026] The DRAM may be implemented using any set of dimensions and metrics organized into a multi-dimensional analysis framework that is suitable for the resources in question. For instance, a tailored set of analysis dimensions may be present for chemical or electrical component selection, and those dimensions may be very different from the dimensions in place for assessing workers. As one specific example, while the DRAM may measure time-zone match, education, compensation goals, or cultural matches for workers, the DRAM may instead measure component tolerance, number of suppliers, cost of materials handling, or other metrics for chemical, electrical, or other resources. For purposes of explanation, the discussion below concerns one possible set of dimensions for worker assessment, but the dimensions and metrics may change to any degree needed to suit the resources and tasks in question.
[0027] Figures 1 and 2 provide an example context for the discussion below of the technical solutions in the DRAM. The examples in Figures 1 and 2 show one of many possible different implementation contexts. In that respect, the technical solutions are not limited in their application to the architectures and systems shown in Figures 1 and 2, but are applicable to many other system implementations, architectures, and connectivity.
[0028] Figure 1 shows a global network architecture 100. Connected through the global network architecture 100 are geographically distributed data platforms 102, 104, 106, and 108. The data platforms 102 - 108 provide resource characteristic data on any number or type of available resources.
[0029] Throughout the global network architecture 100 are networks, e.g., the network 110. The networks provide connectivity between the data platforms 102 - 108 and the DRAM 112. The networks 110 may include private and public networks defined over any pre-determined and possibly dynamic internet protocol (IP) address ranges.
[0030] The DRAM 112 performs complex technical resource assessments. As an overview, the DRAM 112 may include communication interfaces 114, assessment engines 116, and machine interfaces 118. The communication interfaces 114 connect the DRAM 112 to the networks 110 and the data platforms 102 - 108, and facilitate data exchange 152, including exchanging resource characteristic data, and the delivery of machine interfaces (which may include Graphical User Interfaces (GUIs)) for improved interaction with the DRAM 112 regarding the assessments.
[0031] Figure 2 shows an example implementation 200 of the DRAM 112. The DRAM 112 includes communication interfaces 202, system circuitry 204, input/output (I/O) interfaces 206, and display circuitry 208 that generates machine interfaces 210 locally or for remote display, e.g., in a web browser running on a local or remote machine. The machine interfaces 210 and the I/O interfaces 206 may include GUIs, touch sensitive displays, voice or facial recognition inputs, buttons, switches, speakers and other user interface elements. Additional examples of the I/O interfaces 206 include microphones, video and still image cameras, headset and microphone input/output jacks, Universal Serial Bus (USB) connectors, memory card slots, and other types of inputs. The I/O interfaces 206 may further include magnetic or optical media interfaces (e.g., a CDROM or DVD drive), serial and parallel bus interfaces, and keyboard and mouse interfaces.
[0032] The communication interfaces 202 may include wireless transmitters and receivers (transceivers) 212 and any antennas 214 used by the transmit and receive circuitry of the transceivers 212. The transceivers 212 and antennas 214 may support WiFi network communications, for instance, under any version of IEEE 802.11, e.g., 802.11n or 802.11ac. The communication interfaces 202 may also include wireline transceivers 216. The wireline transceivers 216 may provide physical layer interfaces for
any of a wide range of communication protocols, such as any type of Ethernet, data over cable service interface specification (DOCSIS), digital subscriber line (DSL), Synchronous Optical Network (SONET), or other protocol.
[0033] The system circuitry 204 may include hardware, software, firmware, or other circuitry in any combination. The system circuitry 204 may be implemented, for example, with one or more systems on a chip (SoC), application specific integrated circuits (ASIC), microprocessors, discrete analog and digital circuits, and other circuitry. The system circuitry 204 is part of the implementation of any desired functionality in the DRAM 112, including the assessment engines 116. As just one example, the system circuitry 204 may include one or more instruction processors 218 and memories 220. The memory 220 stores, for example, control instructions 222 and an operating system 224. In one implementation, the processor 218 executes the control instructions 222 and the operating system 224 to carry out any desired functionality for the DRAM 112. The control parameters 226 provide and specify configuration and operating options for the control instructions 222, operating system 224, and other functionality of the DRAM 112.
[0034] The DRAM 112 may include technical data table structures 232 hosted on volume storage devices, e.g., hard disk drives (HDDs) and solid state disk drives (SSDs). The storage devices may define and store database table structures that the control instructions 222 access, e.g., through a database control system, to perform the functionality implemented in the control instructions 222.
[0035] In the example shown in Figure 2, the databases store resource characteristic data 228, completed resource assessments 230, and other data elements supporting the multi-dimensional analysis described below. Any of the databases may be part of a single database structure, and, more generally, may be implemented as data stores logically or physically in many different ways. As one example, the data table structures 232 may be database tables storing records that the control instructions 222 read, write, delete, and modify in connection with performing the multi-dimensional processing noted below.
[0036] In one implementation, the control instructions 222 include resource assessment instructions 234. The resource assessment instructions 234 execute resource assessment according to a multi-dimensional assessment framework 236 and according to pre-determined fixed or dynamic framework parameters 238 (e.g., weighting factors). Further, the control instructions 222 include machine interface generation instructions 240
that generate machine interfaces, including GUIs, that achieve improved interaction with the DRAM 112 regarding the assessments.
[0037] The data table structures 232, resource assessment instructions 234, multi-dimensional assessment framework 236, framework parameters 238, and machine interface generation instructions 240 improve the functioning of the underlying computer hardware itself. That is, these features (among others described below) are specific improvements in the way that the underlying system operates. The improvements facilitate more efficient, accurate, and precise execution of complex resource analysis.
[0038] Figures 3 - 5 show example implementations 300, 400, 500 of the multi-dimensional assessment framework 236. The multi-dimensional assessment framework 236 may take a very wide range of implementations, including additional, fewer, or different dimensions and dimensional characteristics to evaluate. The example implementation 300, for instance, includes a resource dimension 302, a task dimension 304, a controller dimension 306, a team dimension 308, and a goal dimension 310.
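As a hypothetical sketch of how dimensional metrics and dimensional weights might combine into a single resource assessment, the following Python fragment computes a weighted average over the five example dimensions of implementation 300. The function name, dimension scores, and weights are illustrative assumptions, not part of the described system.

```python
# Hypothetical illustration only: combine per-dimension scores (each assumed
# normalized to [0, 1]) into one assessment using the dimensional weights.
def overall_assessment(dimension_scores, dimension_weights):
    """Weighted average of per-dimension scores."""
    total_weight = sum(dimension_weights[d] for d in dimension_scores)
    weighted = sum(dimension_scores[d] * dimension_weights[d]
                   for d in dimension_scores)
    return weighted / total_weight

# Example values for the five dimensions (illustrative only).
scores = {"resource": 0.8, "task": 0.9, "controller": 0.7,
          "team": 0.6, "goal": 0.5}
weights = {"resource": 2.0, "task": 3.0, "controller": 1.0,
           "team": 1.0, "goal": 1.0}
assessment = overall_assessment(scores, weights)
```

Raising, e.g., the task dimensional weight relative to the others shifts the resulting ranking toward resources with strong task compatibility, mirroring the role of the framework parameters 238.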
[0039] The resource dimension 302 includes resource specific characteristics 312. The resource specific characteristics 312 typically provide data on the individual performance of the resource itself. Examples are given below in Table 1.
Table 1
Resource Dimension
Dimensional Characteristics Examples
Past Rating numerical rating on prior tasks completed more than a threshold amount of time ago
Recent Rating numerical rating on prior tasks completed less than a threshold amount of time ago
Hours of Experience length of time the resource has been working on tasks
Education specific skills, degrees, training, or other capability indicators possessed by the resource
Work Experience specific types and characteristics of tasks previously performed; specific roles filled by the resource
Billed Assignments the number of prior tasks for which the resource submitted an invoice
Cost purchase and maintenance costs, rental rates, billing rates
[0040] The task dimension 304 includes resource-task compatibility characteristics 314. The resource-task compatibility characteristics 314 typically provide data on how well the resource is suited for the posted task. Examples are given below in Table 2.
Table 2
Task Dimension
Dimensional Examples
Characteristics
Availability whether or not a resource is available for the posted task; the DRAM 112 may derive this metric from information about other tasks (e.g., duration, start, and termination data) to which the resource is assigned
Skill Fitness data reflecting whether the resource has the ability to perform a particular task; e.g., whether the skills or capabilities possessed by the resource match task requirement descriptors; the DRAM 112 may take this data from the profile description of the resource, may extract the data from descriptions of
prior tasks completed, or may obtain the data from other sources. The Skill Fitness score may also take into account resource test scores, e.g., if the resource has been measured against any given test of skills.
Similar Task Experience The prior experience of a resource demonstrates knowledge, skills and ability to do various types of tasks. Also, resources may be better adapted to working on tasks similar to prior tasks. As such, the DRAM 112 may evaluate resource experience on similar tasks when making its evaluation. Further details are provided below.
Profile Overview This metric provides an assessment of resource competency to perform a task based on a profile overview of the resource. The resource profile may include a description about the resource, prior tasks in academia or industry, and other information. In that regard, the resource profile may demonstrate work history and how likely the resource is to fit the task requirements. The DRAM 112 may implement a content matching approach such as that described in Modern Information Retrieval, Volume 463, ACM Press, New York, 1999 to find similarity between the resource profile and the task description.
[0041] As one example, the DRAM 112 may determine the Skill Fitness metric according to:
Skill_Fitness(St, Sw) = Match(St, Sw) / |St| + Σs (Ts × Tp)
where St is the set of skills required for a task, Sw is the set of skills possessed by the resource, Ts is the Test score, and Tp is the Test percentile for a given skill test s. Match(St, Sw) computes the number of matched skills between St and Sw and |St| is the total number of skills required by the task.
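A minimal Python sketch of the Skill Fitness metric follows, assuming the match-ratio-plus-test-term reading of the formula (matched required skills over total required skills, plus a test score times test percentile contribution per skill test, with scores and percentiles normalized to [0, 1]). The function name and example skill sets are illustrative.

```python
# Sketch of the Skill Fitness metric: matched required skills over total
# required skills, plus an assumed test term (score x percentile per test).
def skill_fitness(required_skills, resource_skills, test_results=()):
    match = len(set(required_skills) & set(resource_skills))
    fitness = match / len(set(required_skills))
    for score, percentile in test_results:  # Ts and Tp, both in [0, 1]
        fitness += score * percentile
    return fitness

st = {"python", "sql", "nlp"}   # skills required by the task (examples)
sw = {"python", "sql", "java"}  # skills possessed by the resource (examples)
base = skill_fitness(st, sw)                     # 2 of 3 required skills match
with_test = skill_fitness(st, sw, [(0.9, 0.8)])  # plus one test result
```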
[0042] As one example, the DRAM 112 may determine the Similar Task Experience metric in the following manner. The DRAM represents a task 't' as a tuple <Tt, Dt, St, Dut, Rt>, where Tt is the title of the task, Dt is the description of the task, St is the set of skills required for the task, Dut is the duration of the task, and Rt is the rating received by the resource on task completion. Let tp = <Ttp, Dtp, Stp, Dutp> be the posted task and th = <Tth, Dth, Sth, Duth, Rth> (th ∈ Th, where Th is the set of tasks completed by the resource in the past) be a past task performed by the resource.
[0043] Experience on a similar task does not necessarily ensure better performance by the resource. Therefore, to evaluate similarity between two tasks, the DRAM 112 may use a rating Rt. In that regard, the DRAM 112 may carry out the processing for calculating task similarity described in Algorithm 1. Note that the DRAM 112 evaluates both structured (e.g., explicitly defined task categories) and unstructured information (e.g., free form resource profile text descriptions) about the tasks to identify similar tasks, thus capturing task similarity with more accuracy.
Algorithm 1
Task Similarity
Input: tp and Th
Output: Task similarity score Ts
∀ th ∈ Th, compute the similarity between th and tp by following the steps below:
1. Tokenize the title and description of tasks tp and th
2. Remove the stop words (most common words) and perform stemming
3. Calculate the TF-IDF (term frequency-inverse document frequency) weight for each token
4. Compute the cosine similarity between the titles of tp and th, cos_sim(Tth, Ttp), and between the descriptions of tp and th, cos_sim(Dth, Dtp)
5. Calculate the skill similarity as the overlap between the required skill sets:
Skill_similarity(th, tp) = |Sth ∩ Stp| / |Stp|
6. Compute the similarity between the durations of tp and th as:
Duration_sim(th, tp) = 1 if Duth and Dutp match (e.g., within a tolerance); 0 otherwise
7. Calculate the Task similarity score Ts as:
Ts = (Skill_similarity(th, tp) + cos_sim(Tth, Ttp) + cos_sim(Dth, Dtp) + Duration_sim(th, tp)) × Rt
[0044] The controller dimension 306 includes resource-task controller compatibility characteristics 316. The resource-task controller compatibility characteristics 316 typically provide data on how compatible the resource will be with the task controller (e.g., an individual who posted the task). For this compatibility measure, the DRAM 112 may measure parameters such as, e.g., tasks completed by the resource for the task controller and overlapping working hours between the task controller and the resource. In one implementation, the DRAM 112 evaluates the metrics shown in Table 3 to measure this compatibility.
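The steps of Algorithm 1 can be sketched in Python as follows. This is a simplified, hypothetical rendering: tokenization and the stop-word list are minimal, no stemming is applied, plain term frequencies stand in for the TF-IDF weights of step 3, and the duration tolerance is an assumed parameter rather than something the source specifies.

```python
import math
from collections import Counter

# Simplified stand-ins for steps 1-3: tokenize, drop a few stop words
# (no stemming here), and use raw term frequencies instead of TF-IDF.
STOP_WORDS = {"a", "an", "the", "of", "for", "and", "to", "in", "on"}

def tokenize(text):
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def cos_sim(text_a, text_b):
    # Step 4: cosine similarity between term-frequency vectors.
    va, vb = Counter(tokenize(text_a)), Counter(tokenize(text_b))
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def skill_similarity(skills_h, skills_p):
    # Step 5: overlap between past-task and posted-task skill sets.
    return len(set(skills_h) & set(skills_p)) / len(set(skills_p))

def duration_sim(dur_h, dur_p, tolerance=0.5):
    # Step 6: assumed rule - durations "match" within a relative tolerance.
    return 1.0 if abs(dur_h - dur_p) <= tolerance * dur_p else 0.0

def task_similarity(th, tp):
    # Step 7: combine the components and scale by the rating Rt the
    # resource received on the past task.
    return (skill_similarity(th["skills"], tp["skills"])
            + cos_sim(th["title"], tp["title"])
            + cos_sim(th["description"], tp["description"])
            + duration_sim(th["duration"], tp["duration"])) * th["rating"]
```

Applying task_similarity to every th in Th and taking, e.g., the maximum or the mean would yield the resource's similar-task-experience evidence for the posted task tp.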
Table 3
Controller Dimension
Dimensional Characteristics Examples
Cultural Match The DRAM 112 may define cultural matching of a resource to a task controller in terms of values, norms, assumptions, interaction with resources, and the like. The DRAM 112 may evaluate the cultural match when, e.g., resources belong to different geographies and use different languages in order to perform a posted task. This metric assesses how well a task controller of one country (Ctp) collaborates with a resource from another country (Cw). This metric may also capture the preference of a task controller to work with a resource in a particular country. The DRAM 112 may evaluate, e.g., the Jaccard similarity to measure the association between the two:
Cultural_Match(Ctp, Cw) = |Ctp ∩ Cw| / |Ctp ∪ Cw|
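The Jaccard-similarity computation named for the Cultural Match metric can be sketched as follows; the attribute sets (language, time zone, working norms) are hypothetical examples of what Ctp and Cw might contain.

```python
# Jaccard similarity between controller and resource attribute sets;
# an empty union yields 0.0 by assumption.
def cultural_match(controller_attrs, resource_attrs):
    a, b = set(controller_attrs), set(resource_attrs)
    return len(a & b) / len(a | b) if (a | b) else 0.0

ctp = {"english", "utc+10", "agile"}  # task controller attributes (examples)
cw = {"english", "utc+8", "agile"}    # resource attributes (examples)
match = cultural_match(ctp, cw)       # 2 shared of 4 distinct attributes
```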
Task Controller Collaboration The DRAM 112 may determine the extent to which the resource has collaborated with the task controller (tp) from past assignments of the resource (cw). If a resource has collaborated well with a task controller in the past, the task controller may feel more confident selecting the resource again. The DRAM 112 may evaluate task controller collaboration according to, e.g.: Collaboration_Score (tp, cw) = (Σ_{i=1..N} Rating_i) / N
where N is the total number of tasks for which cw has collaborated with tp and Rating_i is the feedback score given to cw by tp upon task completion.
Similar Task Controller Experience A resource may not have worked directly with the task controller before, but may have worked with similar task controllers. Using this metric, the DRAM 112 identifies similarities between the task controller of the posted task, tp_p, and the task controllers with which the resource has worked in the past, TP_h. That is, this metric measures how well the resource has worked with similar task controllers in the past. The DRAM 112 may define the task controller tp as a tuple <feedback score, hiring rate, past hires, total hours worked, total assignments>. The DRAM 112 may use a measure such as Euclidean Distance (ED) to evaluate the similarity, e.g.: TaskPoster_Similarity (tp_p, TP_h) = max_{tp_h ∈ TP_h} (ED (tp_h, tp_p)) The DRAM 112 may also evaluate cosine similarity to determine similarity between the task controller of the present task and the other task controllers that the resource has worked for:
Input: c_p, C_M // c_p is the information about the task controller and C_M is the list of task controllers who previously worked with the resource. Output: Similar controller experience score. Steps: Step 1: Compute the cosine similarity between c_p and each controller c_m in C_M. The similarity may be computed based on feedback score, hiring rate, past hires, total hours worked, and total assignments. Step 2: Determine the similar controller experience score as follows: Similar_controller_experience_score = max_{c_m ∈ C_M} (cosine_similarity (c_p, c_m)) The above metric measures how well the resource has worked with similar task controllers in the past. The metric helps the DRAM 112 understand and build confidence in the task controller for the resource, e.g., when the task controller does not have prior experience working with the resource.
Timezone Match Resources from different geographies and time zones come together to perform a task. The DRAM 112 may measure compatibility of resources from different time zones with that of the task controller. For instance, the DRAM 112 may measure how many working hours of the task controller overlap with the working hours of the resource.
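As an illustration, the Jaccard-based cultural match and the collaboration score of Table 3 might be sketched as follows. Function and parameter names are illustrative, and the collaboration score is shown as a simple average of past feedback scores, which is one plausible reading of the metric:

```python
def cultural_match(controller_attrs, resource_attrs):
    """Jaccard similarity between two cultural-attribute sets
    (e.g., languages, norms). Attribute contents are hypothetical."""
    a, b = set(controller_attrs), set(resource_attrs)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

def collaboration_score(ratings):
    """Average of the feedback scores the task controller gave the
    resource on their N past tasks together."""
    return sum(ratings) / len(ratings) if ratings else 0.0
```

For instance, `cultural_match({"en", "formal"}, {"en", "informal"})` evaluates to 1/3, since one of three distinct attributes is shared.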
[0045] The team dimension 308 includes resource-team compatibility characteristics 318. The resource-team compatibility characteristics 318 typically provide data on how compatible the resource will be with a team of other resources selected for the task. In one implementation, the DRAM 112 evaluates the metrics shown in Table 4 to measure this compatibility.
Table 4 Team Dimension
Dimensional Characteristics Examples
Team Collaboration The DRAM 112 may measure compatibility via the level of collaboration among the task team members. High collaboration among team members may lead to better performance of the resource. The DRAM 112 may evaluate the team collaboration score based on the collaboration of the resource with team members, if they have worked together on any task in the past. The DRAM 112 may evaluate resource-team compatibility according to, e.g.: Team_Collaboration (w, team) = (1/n) Σ_{wi ∈ team} [(1/N) Σ_{j=1..N} Rating_ij]
where n is the total number of resources that are part of the team, N is the total number of tasks in which resource w has collaborated with wi (wi ∈ team), and Rating_ij is the feedback score received by resource w in collaboration with wi on task tj.
Timezone Match In a globally distributed task, resources from different time zones work together on interdependent tasks. Hence, the DRAM 112 may measure the compatibility of the time zone of one resource with other team members the resource will collaborate with to complete the task. Team members having overlapping time zones are likely to collaborate better.
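The Team Collaboration metric of Table 4 might be sketched as a nested average, e.g., averaging the resource's feedback scores per teammate and then averaging across the team. The data shape is an assumption for illustration:

```python
def team_collaboration(ratings_by_member):
    """ratings_by_member: {member_id: [feedback scores the resource
    received on past tasks shared with that member]}.
    Averages per member first, then across the team."""
    per_member = [sum(r) / len(r) for r in ratings_by_member.values() if r]
    return sum(per_member) / len(per_member) if per_member else 0.0
```

Members with no shared task history contribute nothing here; a production system would have to decide how to treat them (e.g., a neutral prior).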
[0046] The goal dimension 310 captures intrinsic and extrinsic forces which drive a resource to accomplish a task. Those forces may vary depending on the type of resource, and may include, as examples, the opportunity to learn new skills or domains, career growth, compensation, and other forces. The DRAM 112 determines whether the attributes of a task fulfill a goal or match a motivation of a resource. Table 5 provides some example characteristics the DRAM 112 may analyze in this regard.
Table 5 Goal Dimension
Dimensional Characteristics Examples
Important Skill Opportunity Resources may have goals to learn what they deem important skills. This metric captures whether the task gives a resource an opportunity to learn those skills. In one implementation, the DRAM 112 determines whether the task provides an opportunity to learn skills that are in high demand in the marketplace and are high paying. The DRAM 112 may follow, e.g., Algorithm 2 to identify important skills.
Compensation Goals The DRAM 112 may evaluate whether the resource may, e.g., obtain a better monetary payoff for a task, increase its skill level by performing the task (e.g., from Beginner to Intermediate, or from Intermediate to Expert), or meet other compensation goals. The DRAM 112 may evaluate this metric with reference to, e.g., the expected compensation (e.g., payoff) for each skill required by the task, and the expectation expressed by the resource for that skill. For instance, the DRAM 112 may define an expected payoff as the difference between the average pay offered for the skill level required for the task and the average compensation expected by the resource for those skills at the level possessed by the resource. This differential may drive a further determination of monetary goals for the resource as shown below: ExpectedPayDiff (Skill, lt, lw) = AveragePay (Skill, lt) - AveragePay (Skill, lw) MonetaryMotivation = Average (ExpectedPayDiff (Skill, lt, lw)), Skill ∈ S
where lt is the skill level required for the task and lw is the skill level possessed by the resource.
Algorithm 2
Important Skill Identification
Input: All task data Tm from the marketplace, skills required for the posted task Stp, and skillset of the resource Sw
Output: GoalSkill Opportunity GS
1. Rank all the skills listed in tasks Tm based on the percentage of tasks requiring them, the average pay given for tasks requiring those skills, and the percentage of resources possessing those skills. Order the skills in descending order along these measures to identify the goal skills.
2. Calculate the percentile score for each skill based on their rank.
Percentile (Skill) = (Ns - Rs) / Ns, where Ns is the total number of skills available in a field and Rs is the rank of a particular skill.
3. Calculate the GoalSkill opportunity for the resource as:
GS = Σ_{Skill ∈ Stp ∧ Skill ∉ Sw} Percentile (Skill)

[0047] Analysis using Random Forest provides indicators of the importance of the metrics discussed above. Table 6, below, shows one analysis in terms of information gain for ranking the importance of each dimension in the multi-dimensional analysis. That is, Table 6 shows the top metrics for making a likely successful resource selection decision, with the metrics ranked in decreasing order of information gain. The DRAM 112 may, for instance, give greater weight to these metrics than to others when determining an overall evaluation for a potential resource.
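Algorithm 2 might be sketched as follows, assuming the ranking from step 1 has already been computed and is supplied best-first (rank 1 = most important). Names are illustrative:

```python
def goal_skill_opportunity(ranked_skills, task_skills, resource_skills):
    """Sums Percentile(Skill) = (Ns - Rs) / Ns over the task's required
    skills that the resource does not yet possess (step 3 of Algorithm 2).

    ranked_skills: all marketplace skills, ordered best-first.
    """
    ns = len(ranked_skills)
    rank = {skill: i + 1 for i, skill in enumerate(ranked_skills)}
    gs = 0.0
    for skill in task_skills:
        if skill not in resource_skills and skill in rank:
            gs += (ns - rank[skill]) / ns  # percentile score for this skill
    return gs
```

With a four-skill marketplace ranking, a task requiring the top-ranked skill that the resource lacks contributes (4 - 1) / 4 = 0.75 to GS.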
Table 6
Dimensional Importance
Metric Information Gain
Similar Task Experience 0.96
Skill Fitness 0.675
Cultural Match 0.616
Billed Assignments 0.612
Billing Rate 0.602
Task Controller Collaboration 0.552
[0048] Figure 6 shows a system environment 600 in which the DRAM 112 communicates with sources 602, 604 of resource data, and with platforms 606 that consume resource analysis services. The sources 602, 604 may provide any of the data described above to the DRAM 112 for analysis, e.g., identifications of available resources, resource profiles, reviews, test scores, prior task experience, cost, time zone, and other data. The sources 602 are external to the DRAM 112, and in this example include crowd sourcing systems Upwork 608 and Applause 610. The source 604 is internal to the organization hosting the DRAM 112, and may be an internal talent database or knowledge exchange, as just two examples. In other implementations, the source 604 and the DRAM 112 are present in or implemented by physically separate systems. That is, the source 604 and DRAM 112 may be independent, with, for instance, the source 604 internal to an enterprise, and the enterprise connecting to the DRAM 112 for performing the resource analysis.
[0049] The platforms 606 connect to the DRAM 112, e.g., via resource assessment machine interfaces 210, application programming interface (API) calls defined by the DRAM 112, and the like. The platforms 606 may access the DRAM 112 as a service for resource evaluation, e.g., to find resources for a testing environment 612 that performs testing tasks (e.g., software testing), or for a development environment 614 that performs development tasks (e.g., software development). The DRAM 112 receives task posting data from any source, performs the requested analyses on available resources against the task postings, and returns responses with evaluations to the platforms 606. In that regard, the DRAM 112 may render and deliver any number of predefined machine interfaces 210 to the platforms 606, e.g., as GUIs in a web browser. A few examples of the machine interfaces 210 follow.
[0050] Figure 7 shows an example resource review interface 700. The resource review interface 700 shows a resource section 704, a task section 706, and a customization section 708. The resource section 704 shows potential resources for a task, e.g., the potential resources 750, 752, whose profile details were retrieved from, e.g., the sources 602, 604. The resource section 704 also provides an overview of specific analysis dimensions 710 applicable to each resource, e.g., the task compatibility dimension, resource characteristic (e.g., personal performance) dimension, task controller compatibility dimension, team compatibility dimension, and goal dimension. The task section 706 provides a description of the task that the task controller has posted. The
customization section 708 provides preference inputs for adjusting the processing of the DRAM 112.
[0051] More specifically, the customization section 708 provides GUI elements that the operator may modify to adjust the DRAM 112 processing. In this example, the customization section 708 includes a score control 712, which eliminates from the display the resources falling below the score threshold; a task compatibility control 714, for setting an analysis weight for the task dimension 304; a task controller compatibility control 716, for setting an analysis weight for the controller dimension 306; a team compatibility control 718 for setting an analysis weight for the team dimension 308; a resource characteristic control 720, for setting an analysis weight for the resource dimension 302; and a goal control 722, for setting an analysis weight for the goal dimension 310. The analysis weights may be pre-determined or have default values, and the operator may adjust the analysis weights for any particular task.
[0052] The customization section 708 also includes an analysis section 724. In the analysis section 724, a duration control 726 allows the operator to specify task duration. In addition, a budget control 728 allows the operator to specify a task budget. The DRAM 112 evaluates these factors and others when assessing resource availability and cost.
[0053] Figures 8-16 show additional examples of the machine interfaces 210 that the DRAM 112 may generate. The machine interfaces 210 facilitate improved interaction with the DRAM 112, including more efficient understanding and review of each resource and the evaluation of each resource. The machine interfaces 210 may vary widely, and any particular implementation may include additional, fewer, and different interfaces.
[0054] Figure 8 shows an example resource analysis result interface 800. The resource analysis result interface 800 displays result rows (e.g., rows 802, 804). Each result row may include a resource identifier 806; overall analysis result 808, e.g., a numerical score determined responsive to the weights set in the customization section 708; and dimensional analysis results 810. The dimensional analysis results 810 may include any specific results the DRAM 112 has determined along any dimension or dimensional characteristic. Examples in Figure 8 include: whether the resource is available, task compatibility, the skill fitment of the resource to the task, the completion percentage of prior tasks taken by the resource, prior rating, cultural similarity, and task controller compatibility. A framework visualization section 812 provides a visualization of the
particular multi-dimensional assessment framework 236 that the DRAM 112 is analyzing for the resources.
[0055] Figure 9 shows an example resource detail interface 900. The resource detail interface 900 includes a resource details section 902, and a categorized details section 904. The resource details section 902, in this example, includes a profile summary section 906, a derived metrics section 908, as well as profile visualizations, e.g., the visualizations 910 and 912. The resource details section 902 may include profile text, educational highlights, career highlights, or other details.
[0056] The categorized details section 904 may provide a categorized tabbed display of resource profile details that lead to, e.g., derived metrics as well as resource data received from the sources 102 - 106. That resource data may include profile summary data, displayed in the profile summary section 906. The derived metrics section 908 may display selected metrics that the DRAM 112 has derived starting from the resource data obtained from the data platforms 102 - 108. Examples of derived data include the determinations of metrics along the multi-dimensional framework, such as overall score, ranking, task similarity score, task controller similarity score, and the like. The resource detail interface 900 may include any number or type of visualizations 910 and 912 to provide, e.g., a graphical interpretation of resource characteristics.
[0057] Figure 10 shows additional detail 1000 from the resource detail interface 900. In this example, the skill section 1002 shows the resource skills that match to the skills required by the task controller in the task that the task controller posted. The skill section 1002 shows those matching skills with a match indicator, in this case highlighting. The skill section 1002 renders skill selectors, such as check boxes 1004, to allow the operator to make selections of skills. The visualization 910 provides a graphical representation of the tasks taken by the resource under each of the selected skills.
[0058] Figure 11 shows additional detail from the resource detail interface 900 in the form of an example similar task analysis interface 1100. The similar task analysis interface 1100 includes a similar task section 1102. The interface 1100 provides, in the similar task section 1102, narratives, skills, dates, feedback, comments, and other details concerning tasks that the DRAM 112 has identified as similar to the posted task and performed by the resource under evaluation.
[0059] Figure 12 shows additional detail from the resource detail interface 900 in the form of an example ongoing tasks analysis interface 1200. The interface 1200 includes an ongoing task section 1202. The interface 1200 provides, in the ongoing task section 1202, narratives concerning ongoing tasks handled by the resource, including required skills and other ongoing task descriptors.
[0060] Figure 13 shows additional detail from the resource detail interface 900 in the form of an example recent reviews analysis interface 1300. The interface 1300 includes a recent reviews section 1302. The interface 1300 provides, in the recent reviews section 1302, details concerning evaluations of recent tasks taken by the resource. The evaluations may include details such as task title, review comments, scores or ratings, inception and completion dates, and the like. The interface 1300 may also include visualizations of review data, such as the time history min/max rating visualization 1304.
[0061] Figure 14 shows additional detail from the resource detail interface 900 in the form of an example prior task analysis interface 1400. The interface 1400 includes a past tasks section 1402. The interface 1400 provides, in the past tasks section 1402, details concerning tasks previously performed by the resource. The past task details may include, as just a few examples, narratives describing the past task, dates worked, and skills required, learned, or improved.
[0062] Figure 15 shows additional detail from the resource detail interface 900 in the form of an example summary analysis interface 1500. The interface 1500 includes a summary section 1502. The summary section 1502 may provide details on resource characteristics, including how well the resource matches to a particular new task along any dimensions or metrics. For instance, the summary section 1502 may include feedback scores, availability scores, deadline scores, collaboration scores, skill scores, and quality scores. Other types of summary information may be provided, including applicable dates, minimum and maximum scores, averages, and the like.
[0063] Figure 16 shows an example resource comparison interface 1600. A resource identification section 1602 identifies each resource being compared. The interface 1600 renders any number or type of displays (e.g., the visualizations 1604, 1606, 1608, 1610) that provide a side-by-side comparison of each resource along any specific selected dimension or metric within a dimension.
[0064] Figure 17 shows an example of a process flow 1700 for resource analysis. In this example, the task controller posts a task description to a sourcing platform (e.g., one of the data platforms 102 - 108) (1). The task description includes data characterizing the task, as examples: a text narrative describing the task, required skills, optional skills, skills that will be learned, skill levels required, compensation, start date, end date, location, and team composition characteristics. Resources indicate their availability for tasks, e.g., by transmitting availability notifications to any one or more of the data platforms 102 - 108 (2). Any source of resource data, including the resource itself, may provide data characterizing the resources to any one or more of the data platforms 102 - 108 (3). The resource characteristics may include, as examples: skills known, skill levels, skill evaluation scores, experience (e.g., prior task descriptions), resource location, prior task locations, resource goals, availability, education, prior ratings, and cultural characteristics.
[0065] The task controller may set weighting preferences for the DRAM 112 to use in determining assessments (4). The weighting preferences may include default weights to use unless changed for a particular assessment. That is, the task controller may also set specific weighting preferences for the DRAM 112 to apply for any given assessment.
[0066] The DRAM 112 executes the assessment on the resources with respect to the posted task, and delivers the assessments to the task controller via the machine interfaces 210 (5). In response, the task controller may review and consider the resource assessments, interacting with the machine interfaces to do so. The task controller may then make resource selections for the task (6), and transmit the resource selections to the data platforms 102 - 108. If the resource will take on the task, then the resource may indicate acceptance of the task to the data platforms 102 - 108 (7).
[0067] The DRAM 112 may execute information retrieval and text mining operations, as examples, to match resources to tasks and determine the assessments. The DRAM 112 may apply these techniques when analyzing, e.g., text narratives of task descriptions and resource descriptions to find matching characteristics. Figure 17 shows some examples of processing that the DRAM 112 may perform. For instance, the DRAM 112 may obtain content descriptions such as task descriptions and resource profile narratives (1701), and tokenize the descriptions (1702). The DRAM 112 optionally performs stop word removal (1704), e.g., to eliminate words present on a pre-defined stop word list that
have little or no value in matching resource characteristics to task characteristics. The DRAM 112 may also execute term frequency-inverse document frequency (TF-IDF) as part of ascertaining how important a word is within the content descriptions (1706). To measure similarity, the DRAM 112 employs any desired distance measure between two documents 'a' and 'b' represented in vector space, such as the Cosine similarity measure (1708):
a · b = ||a|| ||b|| cos θ, so cos θ = (a · b) / (||a|| ||b||)

[0068] Figure 18 shows another example of process flow 1800 for resource analysis when integrated with a hybrid sourcing platform. Figure 18 extends the example of Figure 17. In Figure 18, an intermediate, hybrid resource data platform 1802 is implemented between the task controller and the external data platforms 102 - 108. The hybrid resource data platform 1802 may represent, for instance, a private company internal system that initially receives task postings for review and consideration by company personnel. The hybrid resource data platform 1802 may determine when and whether to pass the task postings to the external data platforms 102 - 108, e.g., when the company desires to extend the resource search outside of the company. In that regard, the hybrid resource data platform 1802 receives resource characteristics from the external data platforms 102 - 108.
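A minimal, library-free sketch of the similarity pipeline of [0067] (tokenization, stop word removal, TF-IDF weighting, cosine similarity) follows. The stop word list and the smoothed IDF formula are illustrative choices, not the patent's exact scheme:

```python
import math
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of"}  # illustrative list

def tokenize(text):
    """Lowercase whitespace tokenization with stop word removal."""
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def tfidf_vectors(docs):
    """Returns one {term: tf*idf} dict per document."""
    toks = [Counter(tokenize(d)) for d in docs]
    n = len(docs)
    df = Counter()
    for t in toks:
        df.update(t.keys())
    # smoothed idf; the exact weighting scheme is an implementation choice
    idf = {w: math.log((1 + n) / (1 + df[w])) + 1 for w in df}
    return [{w: c * idf[w] for w, c in t.items()} for t in toks]

def cosine(a, b):
    """cos θ = (a · b) / (||a|| ||b||) over sparse term-weight dicts."""
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Two task narratives that share some but not all terms yield a cosine strictly between 0 and 1, which is the raw signal fed into the task similarity score.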
[0069] In the example shown in Figure 18, the DRAM 112 is implemented as part of the hybrid resource data platform 1802. The hybrid resource data platform 1802 executes the DRAM functionality to determine and report resource assessments to the task controller. Selections of resources (or offers made to resources) for the task flow first to the hybrid resource data platform 1802, and possibly to the external data platforms 102 - 108, e.g., when resources external to the company have been selected or offered a task.
Further Examples
[0070] Many variations of the DRAM implementation are possible. The DRAM 112 may include modules for determining similarity between tasks based on task features such as task type, duration, skills required, and so on. The DRAM 112 may also include modules for computing similarity between task controllers. The similarity computation modules are used for recommending resources for tasks. The DRAM 112 may also include modules for sentiment analysis of the textual feedback given by the task controller for the resources. The sentiment analysis identifies, e.g., whether the task controller is satisfied with the completed task. It takes as input the textual feedback and outputs sentiment details. Some examples of sentiments are: positive, negative, and neutral. Furthermore, the sentiment analysis module may categorize the textual feedback based on any number of pre-defined aspects, such as Skill, Quality, Communication, Collaboration, Deadline, and Availability, based on the defined rules. Note that the DRAM 112 may determine metrics pertaining to any pre-defined set of dimensions, e.g., as shown in Figures 3 - 5.
Combined assessment score
[0071] In some implementations, the DRAM 112 determines a combined assessment score for the resource under assessment. In that respect, the DRAM 112 may implement a machine learning module trained to learn the weightings of each metric to arrive at the final assessment score for each resource, as just one example. Expressed another way, the DRAM 112 optionally combines determined metrics to arrive at a final assessment score for each resource. Each metric may be given a weight to arrive at the final score. For example, the equation below combines Availability, Skill Fitness, Experience, and Profile metrics to arrive at a final score for the task dimension 304 according to the dimensional component weights dcw1, dcw2, dcw3, and dcw4:
FinalScore = dcw1 * Availability + dcw2 * SkillFit + dcw3 * Experience + dcw4 * Profile;
[0072] More generally, the DRAM 112 may assess a resource by combining any selected dimensions using pre-determined dimensional weights to arrive at a final
assessment score, across any number of included dimensions, e.g., for the framework 300:
ResourceScore = dw1 * ResourceDimension + dw2 * TaskDimension + dw3 * ControllerDimension + dw4 * TeamDimension + dw5 * GoalDimension;
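For instance, the weighted combination of dimension scores might be sketched as follows. The dimension keys correspond to dw1..dw5 and are illustrative:

```python
def resource_score(dim_scores, dim_weights):
    """Weighted sum of per-dimension assessment scores.

    dim_scores / dim_weights: dicts keyed by dimension name, e.g.
    'resource', 'task', 'controller', 'team', 'goal' (illustrative keys).
    """
    return sum(dim_weights[d] * s for d, s in dim_scores.items())

scores = {"resource": 0.8, "task": 0.9, "controller": 0.6,
          "team": 0.7, "goal": 0.5}
weights = {"resource": 0.3, "task": 0.3, "controller": 0.2,
           "team": 0.1, "goal": 0.1}
```

With these hypothetical inputs the combined score is 0.3*0.8 + 0.3*0.9 + 0.2*0.6 + 0.1*0.7 + 0.1*0.5 = 0.75.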
[0073] The DRAM 112 may determine or select weights (including setting default weights) using machine learning algorithms or linear or logistic regression techniques. As one example, the DRAM 112 may optimize using the following equation with a specific optimization objective:
Y = w1·f1 + w2·f2 + ... + wn·fn
in which wi represent the weights, fi the dimension/metric scores, and Y the observed value, e.g., selected or not selected for the task.
[0074] The machine learning module in the DRAM 112 may, for instance, learn the importance of each metric from the historical data about tasks and resources. The DRAM 112 may employ an online learning algorithm and the weights may be refined with each new data sample.
An example machine learning implementation follows:
Input: Set of tasks and assessment metrics for all the resources who worked on / applied for those tasks.
Output: Weights / importance of each metric that models a successful resource selection behavior.
Execution
Step 1: Prepare the training data as follows:
[0075] For each task that has more than one potential resource, create a data point for each resource with the following attributes:
a) all the evaluation metrics for the resource, task, and task controller
b) indication of whether the resource was hired for the task.
Step 2: Apply a machine learning algorithm on the training data.
Step 3: Return the weights identified by the algorithm.
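The weight-learning steps above might be sketched with plain logistic regression trained by gradient descent, which is one of the techniques the text names. The learning rate and epoch count are arbitrary illustrative values:

```python
import math

def learn_metric_weights(X, y, lr=0.1, epochs=500):
    """Learn one weight per assessment metric from hiring history.

    X: rows of metric scores, one row per (task, resource) data point.
    y: 1 if that resource was hired for the task, else 0.
    Returns the learned weight vector (higher weight = more important).
    """
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted hire probability
            err = p - yi
            b -= lr * err
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
    return w
```

On a toy history where hires correlate with the first metric, the learned weight for that metric exceeds the second metric's weight, mirroring the "importance of each metric" the text describes. An online variant would simply apply the inner update once per newly arriving data point.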
Customizing assessment model
[0076] The DRAM 112 also accepts and responds to modifications of the weights supplied by, e.g., task controllers. For instance, a task controller may prefer to select a resource that the task controller has worked with in the past and with whom the task controller had a good experience. Hence, the DRAM 112 allows the task controller to override the default weightings to express these preferences.
What-if analysis
[0077] The DRAM 112 may perform what-if analyses to help assess resources if, e.g., the pre-defined scores and metrics are not sufficient to make a selection decision. In one implementation, the DRAM 112 predicts the likelihood of task completion and the quality of the completed task if done by a particular resource. The DRAM 112 may execute the analysis on the historical data about tasks and resources, using a machine learning module which trains models for task completion and task quality. The trained model may predict task completion and the quality of the completed task.
[0078] The what-if analyses allow the task controller to vary, e.g., the duration and budget for the task to see anticipated outcomes. The task controller may then use the anticipated outcomes to negotiate (if applicable) with the resources to reach agreement on a set of task characteristics (e.g., duration and budget) that achieve a beneficial outcome. The DRAM 112 responds by assessing and displaying the task completion probability and quality of completed task for all the applicants. The DRAM 112 may estimate and report task completion probability assuming any given resource is assigned to a task. For any or all of the potential resources, the DRAM 112 may determine this probability. The system operator, through an operator GUI, can vary the task duration,
budget, and skills, and the system displays the revised task completion probability for each resource based on a machine learning model trained from the past data. The DRAM 112 may also predict the quality of a completed task. The system operator, through an operator GUI, can vary the task duration, budget, and skills. The DRAM 112 responds by determining the quality of the completed task for each resource by executing, e.g., a machine learning model trained on the prior data.
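As one hedged illustration of the what-if prediction, the trained model could be approximated by a simple nearest-neighbor estimate over historical (duration, budget, completed) records; the actual DRAM 112 model could be any trained classifier, and the feature set here is deliberately reduced:

```python
def completion_probability(history, duration, budget, k=3):
    """Estimate task completion probability for hypothetical task terms.

    history: list of (duration, budget, completed) tuples for a resource.
    Returns the completion fraction among the k most similar past tasks.
    """
    by_dist = sorted(
        history,
        key=lambda h: (h[0] - duration) ** 2 + (h[1] - budget) ** 2)
    nearest = by_dist[:k]
    return sum(1 for _, _, done in nearest if done) / len(nearest)
```

Sweeping `duration` and `budget` over a grid and re-evaluating this estimate reproduces, in miniature, the operator's what-if exploration described above.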
Evaluating new resources
[0079] Note that the DRAM 112 may obtain and analyze information available from sources other than the data platforms 102 - 108 in its assessments. Examples of additional information include resource public profiles on professional networks, publicly available references, and descriptions of and feedback on tasks done during their academic training or in an industry.
Assessment as a service
[0080] A local or cloud based hosting platform may offer the DRAM 112 as a service subscribed to or paid for by any third party platform.
[0081] Table 7 below provides further examples of metrics the DRAM 112 may assess in any particular implementation, including examples of how the DRAM 112 may derive metrics starting with certain data.
Table 7: Example metrics for assessing resources
Metric Description How to Assess
Availability Whether or not the resource is available for the given duration Available working hours / day for a resource is determined based on the number of tasks the resource is doing and the estimated completion time for each task.
For example, suppose a resource is working on two tasks that each need to be completed in 20 days and are estimated to take 40 hours. On average, the resource then spends 2 hours per day on each task, so the remaining time available is 8 - (2 + 2) = 4 hours.
Skill Fitness Matching score between the required skills and the resource's skills % skill matched: Number of skills that are required by the posted task and also possessed by the resource / Total number of skills required by the task.
Rating How the resource has been rated in the past. Average Rating: Average resource rating as given by others in the past.
Goals What goals the resource has specified for itself in information available in the resource profile. Task-goal match: Percentage of skills that are required by the task and also aspired to as 'must have skills' by the resource / Total number of skills required by the task
Task Controller Collaboration How well the resource has collaborated with the task controller in the past Task completion rate for the task controller: 100 * Total tasks completed by the resource for the task controller / Total tasks completed by the resource in the past
Sentiment Perform sentiment analysis on the written comments for the resource to score it on aspects such as: 'motivated', 'works well with others', 'follows well', 'timeliness'.
How well the resource has collaborated with similar task controllers in the past Task completion rate for similar task controllers: 100 * Total number of tasks completed by the resource for similar task controllers / Total tasks completed successfully. The similarity score may be defined based on geography, total tasks posted, total tasks completed, and other factors.
Cultural / Geographical Overlapping work hours Overlapping work hours: number of overlapping work hours for the resource and the task controller.
Whether there is a restriction on geography when choosing a resource. Geographical match: Boolean value indicating whether the resource's geography matches the requirement specified by the task controller.
Cost of hiring (fixed time): how much the resource is likely to charge for the task. Cost of similar task: how much the resource has quoted in the past for similar tasks.
Cost of hiring (hourly): how much the resource charges per hour. Hourly rate: the charge rate for the resource.
Similar task experience: how many similar tasks the resource has performed in the past and what the scores for those tasks were. Similar task performance: 100 * number of similar tasks performed by the resource in the past / total number of tasks performed by the resource in the past. Similarity is defined based on task attributes.
Task completion: how many tasks the resource has successfully completed. Task completion rate: 100 * number of tasks successfully completed / total number of tasks undertaken.
Resource score: summary of the resource's prior scores.
Average score: average of all the scores given to the resource on a selected set of (e.g., all) prior tasks.
Max score: maximum score the resource has obtained so far.
Min score: minimum score the resource has obtained so far.
Recency of max score: date on which the resource obtained the maximum score.
Average recent score: average rating of the resource over the last year.
Resume score: assessment based on the resource's resume.
Educational background: rating based on educational degrees.
Diversity of experience: number of skills in which the resource has experience.
Awards and recognition: score based on awards and recognition.
Cost of onboarding: may include the cost of training, knowledge transfer, etc. for onboarding a crowd resource. This metric may assess skill gaps, domain knowledge, and prior experience of the resource.
Team collaboration: captures how well the crowd resource fits with the other team members. % tasks worked with a team member in the past: percentage of tasks on which the crowd resource has worked with any of the team members in the past.
Time zone compatibility: how well the resource's time zone matches those of the other team resources. Time zone gap: min, max, and average time zone difference with the rest of the team members.
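Several of the dimensional metrics above reduce to simple rate or ratio computations. The following sketch illustrates three of them under stated assumptions; the function names are hypothetical, and the 8-hour workday follows the availability example given earlier, not a requirement of the specification.

```python
def available_hours_per_day(tasks, workday_hours=8):
    """Availability: each task is (days_to_deadline, estimated_hours).

    A task estimated at 40 hours with a 20-day deadline consumes, on
    average, 40 / 20 = 2 hours per day.
    """
    committed = sum(hours / days for days, hours in tasks)
    return max(workday_hours - committed, 0)

def skill_match_pct(required, possessed):
    """% skill matched: skills both required and possessed / skills required."""
    required = set(required)
    return 100 * len(required & set(possessed)) / len(required)

def task_completion_rate(completed, undertaken):
    """Task completion rate: 100 * tasks successfully completed / tasks undertaken."""
    return 100 * completed / undertaken

# Two tasks, each due in 20 days and estimated at 40 hours, leave
# 8 - (2 + 2) = 4 available hours per day:
print(available_hours_per_day([(20, 40), (20, 40)]))  # 4.0
```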
Determining text similarity

[0082] In order to find the similarity between text features, such as the task title, task description, resource profile overview, and resource experience details, the DRAM 112 may implement a TF-IDF approach:
TF-IDF weight = (1 + log(term_frequency)) * log(N / document_frequency)

[0083] In the TF-IDF approach, 'term_frequency' represents the number of times a search term, such as a word in a task title or task description, appears in a document, such as a resource description. In this particular example, the approach uses a log-scaled metric, 1 + log(term_frequency), for the frequency component, although other frequency metrics may be used. Note that the TF-IDF approach includes an inverse document frequency adjustment, log(N / document_frequency). This adjustment reduces the weight score in log relation to the number of documents, out of a pre-defined collection of N documents, that include the search term. As a result, common terms that appear in all documents (e.g., 'the', 'an') contribute little weight (because log(1) is 0) and are effectively filtered out of the TF-IDF calculation.
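As a concrete illustration of the weighting described in paragraph [0083], the following sketch computes the TF-IDF weight for a term. The tokenization and the small document collection are assumptions made for the example, not part of the specification.

```python
import math
from collections import Counter

def tf_idf_weight(term, document_tokens, collection):
    """TF-IDF weight = (1 + log(term_frequency)) * log(N / document_frequency)."""
    tf = Counter(document_tokens)[term]
    if tf == 0:
        return 0.0  # term absent from the document
    df = sum(1 for doc in collection if term in doc)  # document_frequency
    return (1 + math.log(tf)) * math.log(len(collection) / df)

docs = [
    ["the", "python", "developer"],
    ["the", "java", "developer"],
    ["the", "python", "python", "tester"],
]
# 'the' appears in all N = 3 documents, so log(3/3) = 0 filters it out:
print(tf_idf_weight("the", docs[0], docs))     # 0.0
print(tf_idf_weight("python", docs[2], docs))  # (1 + log 2) * log(3/2)
```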
[0084] The methods, devices, processing, frameworks, circuitry, and logic described above may be implemented in many different ways and in many different combinations of hardware and software. For example, all or parts of the implementations may be circuitry that includes an instruction processor, such as a Central Processing Unit (CPU), microcontroller, or a microprocessor; or as an Application Specific Integrated Circuit (ASIC), Programmable Logic Device (PLD), or Field Programmable Gate Array (FPGA); or as circuitry that includes discrete logic or other circuit components, including analog circuit components, digital circuit components or both; or any combination thereof. The circuitry may include discrete interconnected hardware components or may be combined on a single integrated circuit die, distributed among multiple integrated circuit dies, or implemented in a Multiple Chip Module (MCM) of multiple integrated circuit dies in a common package, as examples.
[0085] Accordingly, the circuitry may store or access instructions for execution, or may implement its functionality in hardware alone. The instructions may be stored in a tangible storage medium that is other than a transitory signal, such as a flash memory, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM); or on a magnetic or optical disc, such as a Compact Disc Read Only Memory (CDROM), Hard Disk Drive (HDD), or other magnetic or optical disk; or in or on another machine-readable medium. A product, such as a computer program
product, may include a storage medium and instructions stored in or on the medium, and the instructions when executed by the circuitry in a device may cause the device to implement any of the processing described above or illustrated in the drawings.
[0086] The implementations may be distributed. For instance, the circuitry may include multiple distinct system components, such as multiple processors and memories, and may span multiple distributed processing systems. Parameters, databases, and other data structures may be separately stored and controlled, may be incorporated into a single memory or database, may be logically and physically organized in many different ways, and may be implemented in many different ways. Example implementations include linked lists, program variables, hash tables, arrays, records (e.g., database records), objects, and implicit storage mechanisms. Instructions may form parts (e.g., subroutines or other code sections) of a single program, may form multiple separate programs, may be distributed across multiple memories and processors, and may be implemented in many different ways. Example implementations include stand-alone programs, and as part of a library, such as a shared library like a Dynamic Link Library (DLL). The library, for example, may contain shared data and one or more shared programs that include instructions that perform any of the processing described above or illustrated in the drawings, when executed by the circuitry.
[0087] Various implementations have been specifically described. However, many other implementations are also possible.
[0088] Throughout this specification and the claims which follow, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” and “comprising”, will be understood to mean the inclusion of a stated feature or step, or group of features or steps, but not the exclusion of any other feature or step, or group of features or steps.
[0089] The reference to any prior art in this specification is not, and should not be taken as, an acknowledgement or any suggestion that the prior art forms part of the common general knowledge.

Claims (17)

The claims defining the invention are as follows:
Figure 1
Team_Collaboration(w, team) = (1/n) * Σ_{w_i ∈ team} [ (1/N) * Σ_{j=1}^{N} Rating_j^{w_i} ]

where n is the total number of resources that are part of the team, N is the total number of tasks in which resource w has collaborated with w_i (w_i ∈ team), and Rating_j^{w_i} is a feedback score received by resource w in collaboration with w_i on task t_j; the task controller collaboration metric includes:

Collaboration_Score(tp, cw) = (1/N) * Σ_{j=1}^{N} Rating_j

where N is a total number of tasks for which cw has collaborated with tp and Rating_j is a feedback score given to cw by tp upon task completion;

the cultural match metric includes:

Cultural_Match(C_tp, C_w) = |C_tp ∩ C_w| / |C_tp ∪ C_w|

where C_tp represents a task controller of one country in collaboration with a resource from another country C_w; and the skill fitness metric includes:

Skill_Fitness(S_t, S_w) = (Match(S_t, S_w) / |S_t|) * T_s * T_p

where S_t is a set of skills required for a task, S_w is a set of skills possessed by the resource, T_s is a test score and T_p is a test percentile, Match(S_t, S_w) computes a number of matched skills between S_t and S_w, and |S_t| is a total number of skills required by the task.
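One plausible reading of the collaboration and skill fitness metrics defined above can be sketched as follows. This is an illustrative interpretation rather than the claimed implementation; all function names are hypothetical, and the Skill_Fitness form (matched-skill ratio scaled by test score and test percentile) is reconstructed from the textual definitions.

```python
def team_collaboration(ratings_by_member):
    """Mean, over team members w_i, of the mean feedback score that
    resource w received when collaborating with w_i."""
    per_member = [sum(r) / len(r) for r in ratings_by_member]
    return sum(per_member) / len(per_member)

def collaboration_score(ratings):
    """Mean feedback score over the N tasks cw completed for controller tp."""
    return sum(ratings) / len(ratings)

def skill_fitness(required, possessed, test_score, test_percentile):
    """One plausible reading: (Match(St, Sw) / |St|) * Ts * Tp."""
    matched = len(set(required) & set(possessed))
    return (matched / len(required)) * test_score * test_percentile
```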
1a. A system including:
a communication interface;
a memory including:
a multi-dimensional assessment framework including: assessment dimensions; and dimensional metrics assigned to the assessment dimensions; and dimensional weights for the assessment dimensions; resource analysis circuitry configured to:
obtain task characteristics specified by a task controller for a posted task;
connect to a resource data platform through the communication interface;
obtain resource characteristics from the resource data platform that characterize an available resource that may be selected for the posted task; and determine a resource assessment for the available resource according to:
the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework; and the dimensional weights; and machine interface generation circuitry configured to:
generate a machine interface including a visualization of the resource assessment; and deliver the machine interface to the task controller through the communication interface.
2a. A method including:
establishing in memory:
a multi-dimensional assessment framework including: assessment dimensions; and
dimensional metrics assigned to the assessment dimensions; and dimensional weights for the assessment dimensions; executing resource analysis circuitry that:
obtains task characteristics specified by a task controller for a posted task;
connects to a resource data platform through a communication interface;
obtains resource characteristics from the resource data platform that characterize an available resource that may be selected for the posted task; and determines a resource assessment for the available resource according to:
the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework; and the dimensional weights; and executing machine interface generation circuitry to:
generate a machine interface including a visualization of the resource assessment; and deliver the machine interface to the task controller through the communication interface.
3a. A system including:
a communication interface; a memory including:
a multi-dimensional assessment framework including assessment dimensions and dimensional metrics including:
a resource dimension configured to evaluate resource specific characteristics, the resource dimension defining:
a past rating metric; and an experience metric;
a task dimension configured to evaluate resource-to-task compatibility, the task dimension defining:
an availability metric; and a skill fitness metric;
a controller dimension configured to evaluate resource-to-task controller compatibility, the controller dimension defining:
a cultural match metric; and a task controller collaboration metric;
a team dimension configured to evaluate resource-to-team compatibility, the team dimension defining:
a team collaboration metric; and a timezone match metric; and a goal dimension configured to evaluate resource goals, the goal dimension defining:
a skill opportunity metric; and dimensional weights for the assessment dimensions, including:
a resource dimensional weight for the resource dimension;
a task dimensional weight for the task dimension; a controller dimensional weight for the controller dimension;
a team dimensional weight for the team dimension; and a goal dimensional weight for the goal dimension; resource analysis circuitry configured to:
obtain task characteristics specified by a task controller for a posted task;
connect to a resource data platform through the communication interface;
obtain resource characteristics from the resource data platform that characterize an available resource that may be selected for the posted task; and determine a resource assessment for the available resource according to:
the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework; and the dimensional weights; and
machine interface generation circuitry configured to: generate machine interfaces including:
a resource assessment result interface including assessment results for multiple of the available resources;
a resource detail interface for a selected resource among the available resources, the resource detail interface including:
a skill fitness section; a similar task analysis section; an ongoing tasks analysis section; a recent reviews analysis section; a prior task analysis section; and a summary analysis section; and
a resource comparison interface presenting a side-by-side comparison of the multiple available resources; and deliver the machine interfaces to the task controller through the communication interface.
1. A system for facilitating selection of a resource for a task, the system including: a communication interface;
a memory including:
a multi-dimensional assessment framework including:
assessment dimensions including at least a resource dimension and a task dimension, wherein the task dimension includes a resource experience dimension relating to resource experience regarding similar tasks;
dimensional metrics assigned to the assessment dimensions; and dimensional weights for the assessment dimensions; resource analysis circuitry configured to:
obtain, from a task controller device, task characteristics specified by a task controller for a task posted by the task controller device to a resource data platform, the posted task including a text narrative describing the task;
connect to the resource data platform through the communication interface;
obtain resource characteristics from the resource data platform that characterize two or more available resources that may be selected for the posted task, each resource characteristic including a text narrative describing the available resource;
determine a resource assessment for each available resource by analyzing the text narrative of the posted task and each available resource to determine matching characteristics, calculating a task similarity score reflecting a level of similarity between a previous task completed by each resource and the posted task and by executing resource assessment according to:
the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework, including the task similarity score; and the dimensional weights; and store characteristics of available resources and resource assessments in separate database tables; and machine interface generation circuitry configured to:
generate a machine interface including a resource review interface that provides a visualization of available resources and associated resource assessments in a display that provides a side-by-side comparison of the available resources and associated resource assessments, the display including:
a customization section which enables a user to adjust one or more resource assessment thresholds, and an analysis section which enables a user to adjust one or more task requirement thresholds, wherein only the resources that satisfy the one or more resource assessment and task requirement thresholds are displayed; deliver the machine interface to the task controller device through the communication interface; and receive, from the task controller device through the machine interface, a task controller selection of one of the displayed available resources.
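The resource assessment recited above combines dimensional metric scores under the dimensional weights. A minimal sketch of one such weighted combination follows; the dimension names and numeric values are examples only, not taken from the claims.

```python
def resource_assessment(dimension_scores, dimension_weights):
    """Weighted average of per-dimension scores, normalized by total weight."""
    total_weight = sum(dimension_weights.values())
    weighted = sum(dimension_scores[d] * w for d, w in dimension_weights.items())
    return weighted / total_weight

weights = {"resource": 0.3, "task": 0.4, "controller": 0.1, "team": 0.1, "goal": 0.1}
scores = {"resource": 80, "task": 90, "controller": 70, "team": 60, "goal": 75}
print(resource_assessment(scores, weights))  # ≈ 80.5
```

A preference control of the kind claim 2 contemplates would simply change an entry in the weights mapping before recomputing the assessment.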
Figure 2
2. A system according to claim 1, where:
the machine interface generation circuitry is further configured to:
generate a customization graphical user interface including a preference control that changes at least one of the dimensional weights.
Figure 3
3. A system according to either claim 1 or claim 2, further including:
a hybrid data platform that hosts the resource analysis circuitry and the posted task internal to a pre-defined organization; and where the resource analysis circuitry is further configured to:
determine whether to transmit the posted task to an external data platform outside of the pre-defined organization.
Figure 4
4. A system according to claim 3, where:
the resource data platform is internal to the pre-defined organization.
Figure 5
5. A system according to claim 4, where:
the resource data platform is external to the pre-defined organization; and
the resource analysis circuitry is further configured to:
obtain at least some of the resource characteristics from the resource data platform after transmitting the posted task to the external data platform.
Figure 6 (component labels: Processor(s), Memory, Operating System)
6. A system according to any one of the preceding claims, where: the assessment dimensions include a controller dimension.
Figure 7
7. A system according to any one of the preceding claims, where: the assessment dimensions include a goal dimension.
Figure 8
8. A computer-implemented method for facilitating selection of a resource for a task, the method including:
establishing in memory:
a multi-dimensional assessment framework including: assessment dimensions including at least a resource dimension and a task dimension, wherein the task dimension includes at least a resource experience dimension relating to resource experience regarding similar tasks;
dimensional metrics assigned to the assessment dimensions; and dimensional weights for the assessment dimensions; executing resource analysis circuitry that:
obtains, from a task controller device, task characteristics specified by a task controller for a task posted by the task controller device to a resource data platform, the posted task including a text narrative describing the task;
connects to the resource data platform through a communication interface;
obtains resource characteristics from the resource data platform that characterize two or more available resources that may be selected for the posted task, each resource characteristic including a text narrative describing the available resource;
determines a resource assessment for each available resource by analyzing the text narrative of the posted task and each available resource to determine matching characteristics, calculating
a task similarity score reflecting a level of similarity between a previous task completed by each resource and the posted task, and by executing resource assessment according to:
the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework including the task similarity score; and the dimensional weights; and stores characteristics of available resources and resource assessments in separate database tables; and executing machine interface generation circuitry to:
generate a machine interface including a resource review interface that provides a visualization of available resources and associated resource assessments in a display that provides a side-by-side comparison of the available resources and associated resource assessments, the display including:
a customization section which enables a user to adjust one or more resource assessment thresholds, and an analysis section which enables a user to adjust one or more task requirement thresholds, wherein only the resources that satisfy the one or more resource assessment and task requirement thresholds are displayed; and deliver the machine interface to the task controller device through the communication interface; and receive, from the task controller device through the machine interface, a task controller selection of one of the displayed available resources.
Figure 9
9. A method according to claim 8, further including:
generating a customization graphical user interface including a preference control that changes at least one of the dimensional weights.
Figure 10
10. A method according to either claim 8 or claim 9, further including:
hosting, in a hybrid data platform internal to a pre-defined organization, the resource analysis circuitry and the posted task; and
determining whether to transmit the posted task to an external data platform outside of the pre-defined organization.
Figure 11
11. A method according to claim 10, where:
the resource data platform is internal to the pre-defined organization.
Figure 12
12. A method according to claim 11, where:
the resource data platform is external to the pre-defined organization; and further including:
obtaining at least some of the resource characteristics from the resource data platform after transmitting the posted task to the external data platform.
Figure 13
13. A method according to any one of claims 8 to 12, where:
establishing assessment dimensions includes establishing a task controller dimension.
Figure 14
14. A method according to any one of claims 8 to 13, where:
establishing assessment dimensions includes establishing a goal dimension.
Figure 15
15. A system for facilitating selection of a resource for a task, the system including: a communication interface;
a memory including:
a multi-dimensional assessment framework including assessment dimensions and dimensional metrics including:
a resource dimension configured to evaluate resource specific characteristics, the resource dimension defining:
a past rating metric; and an experience metric;
a task dimension configured to evaluate resource-to-task compatibility, the task dimension defining:
an availability metric; and a skill fitness metric;
a controller dimension configured to evaluate resource-to-task controller compatibility, the controller dimension defining:
a cultural match metric; and a task controller collaboration metric;
a team dimension configured to evaluate resource-to-team compatibility, the team dimension defining:
a team collaboration metric; and a timezone match metric; and a goal dimension configured to evaluate resource goals, the goal dimension defining:
a skill opportunity metric; and dimensional weights for the assessment dimensions, including:
a resource dimensional weight for the resource dimension;
a task dimensional weight for the task dimension; a controller dimensional weight for the controller dimension;
a team dimensional weight for the team dimension; and a goal dimensional weight for the goal dimension; resource analysis circuitry configured to:
obtain, from a task controller device, task characteristics specified by a task controller for a task posted by the task controller device to a resource data platform, the posted task including a text narrative describing the task;
connect to the resource data platform through the communication interface;
obtain resource characteristics from the resource data platform that characterize two or more available resources that may be selected for the posted task, each resource characteristic including a text narrative describing the available resource; and determine a resource assessment for each available resource by analyzing the text narrative of the posted task and each available resource to determine matching characteristics, calculating a task similarity score using the past rating metric, the task similarity score
reflecting a similarity between a previous task completed by each resource and the posted task, and by executing resource assessment according to:
the assessment dimensions and dimensional metrics in the multi-dimensional assessment framework, including the task similarity score; and the dimensional weights; and store characteristics of available resources and resource assessments in separate database tables; and machine interface generation circuitry configured to: generate machine interfaces including:
a resource assessment result interface including assessment results for each of the two or more available resources;
a resource detail interface for a selected resource among the two or more available resources, the resource detail interface including:
a skill fitness section; a similar task analysis section; an ongoing tasks analysis section; a recent reviews analysis section; a prior task analysis section; and a summary analysis section; and a resource review interface that provides a visualization of available resources and associated resource assessments in a display that provides a side-by-side comparison of the available resources and associated resource assessments, the display including:
a customization section which enables a user to adjust one or more resource assessment thresholds, and an analysis section which enables a user to adjust one or more task requirement thresholds, wherein only the resources that satisfy the one or more resource assessment and task requirement thresholds are displayed; deliver the machine interfaces to the task controller device through the communication interface; and receive, from the task controller device through one or more of the machine interfaces, a task controller selection of the displayed
Figure 16
available resources.
16. A system according to claim 15, where: the team collaboration metric includes:
Figure 17
Figure 18
AU2017272252A 2016-06-08 2017-12-07 Resource evaluation for complex task execution Abandoned AU2017272252A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2017272252A AU2017272252A1 (en) 2016-06-08 2017-12-07 Resource evaluation for complex task execution

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IN201641019688 2016-09-10
AU2016247051A AU2016247051A1 (en) 2016-06-08 2016-10-18 Resource evaluation for complex task execution
AU2017272252A AU2017272252A1 (en) 2016-06-08 2017-12-07 Resource evaluation for complex task execution

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU2016247051A Division AU2016247051A1 (en) 2016-06-08 2016-10-18 Resource evaluation for complex task execution

Publications (1)

Publication Number Publication Date
AU2017272252A1 true AU2017272252A1 (en) 2018-01-04

Family

ID=60788033

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2017272252A Abandoned AU2017272252A1 (en) 2016-06-08 2017-12-07 Resource evaluation for complex task execution

Country Status (1)

Country Link
AU (1) AU2017272252A1 (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783532A (en) * 2018-12-12 2019-05-21 航天信息股份有限公司 Food/pharmaceutical analysis method and system based on micro services framework
US20210142801A1 (en) * 2019-11-13 2021-05-13 Ford Global Technologies, Llc Determining a controller for a controlled system
CN111290785A (en) * 2020-03-06 2020-06-16 北京百度网讯科技有限公司 Method and device for evaluating deep learning framework system compatibility, electronic equipment and storage medium
CN114490094A (en) * 2022-04-18 2022-05-13 北京麟卓信息科技有限公司 GPU (graphics processing Unit) video memory allocation method and system based on machine learning
CN114490094B (en) * 2022-04-18 2022-07-12 北京麟卓信息科技有限公司 GPU (graphics processing Unit) video memory allocation method and system based on machine learning


Legal Events

Date Code Title Description
MK5 Application lapsed section 142(2)(e) - patent request and compl. specification not accepted