WO2022081930A1 - Automated device data retrieval and analysis platform - Google Patents

Automated device data retrieval and analysis platform

Info

Publication number
WO2022081930A1
WO2022081930A1 (PCT/US2021/055107)
Authority
WO
WIPO (PCT)
Prior art keywords
application
data
instance
user
voice
Prior art date
Application number
PCT/US2021/055107
Other languages
French (fr)
Inventor
Jakub Burgis
Raoul Johnson
Eric Nunes
Meethill Vijay YADAV
Michael Weideman
Blake Morgan BUTLER
Original Assignee
Paypal, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/070,737 external-priority patent/US11277437B1/en
Priority claimed from US17/070,755 external-priority patent/US11700250B2/en
Priority claimed from US17/070,714 external-priority patent/US11610206B2/en
Priority claimed from US17/070,682 external-priority patent/US20220114593A1/en
Application filed by Paypal, Inc. filed Critical Paypal, Inc.
Publication of WO2022081930A1 publication Critical patent/WO2022081930A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/55Detecting local intrusion or implementing counter-measures
    • G06F21/552Detecting local intrusion or implementing counter-measures involving long-term monitoring or reporting

Definitions

  • the present application generally relates to transaction processing systems, and more particularly to an automated device data retrieval and analysis platform, according to various implementations.
  • Electronic service providers may provide an online marketplace environment for users, which may be used to buy and sell goods with other entities. Some of these services may be used maliciously by fraudulent users, such as by taking over a merchant account and taking control of payments made for goods sold, which poses a significant risk to these service providers. Since a service provider may process thousands (or more) of transactions daily, it may be difficult to review them without a large review and compliance team.
  • FIG. 1 illustrates a block diagram of a networked system suitable for implementing the processes described herein, according to an implementation of the present disclosure
  • FIG. 2 illustrates a block diagram of an event detection server, according to an implementation of the present disclosure
  • FIG. 3 is a flowchart of an example process of actionable insight analysis into user interaction data, according to an implementation of the present disclosure
  • FIG. 4 is a flowchart of an example process of probabilistic anomaly detection and mediation, according to an implementation of the present disclosure
  • FIG. 5 illustrates a block diagram of a networked system for automated device data retrieval and analysis, according to an implementation of the present disclosure
  • FIG. 6 conceptually illustrates an exemplary workflow of the automated device data retrieval and analysis of FIG. 5, according to an implementation of the present disclosure
  • FIG. 7 is a flowchart of an example process of automated device data retrieval and analysis, according to an implementation of the present disclosure
  • FIG. 8 conceptually illustrates an exemplary workflow of a voice vector framework for authenticating user interactions, according to an implementation of the present disclosure
  • FIG. 9 is a flowchart of an example process of a voice vector framework for verifying user interactions, according to an implementation of the present disclosure
  • FIG. 10 is a flowchart of an example process of a voice vector framework for detecting malicious activity in user interactions, according to an implementation of the present disclosure.
  • FIG. 11 is a block diagram of a computer system suitable for implementing one or more components in FIG. 1 and FIG. 2, according to an implementation.
  • An online service provider that provides electronic services may offer numerous avenues for users to interact with the online service provider.
  • users may interact with the online service provider by accessing a website or a mobile application associated with the online service provider.
  • the user may communicate with the online service provider by calling a hotline associated with the online service provider, sending emails to a designated email address associated with the online service provider, and/or chatting with an agent (or a chatbot) of the online service provider via a chat application.
  • These various avenues provide convenient access to the online service provider, but also offer opportunities for malicious users to perform fraudulent access of data and/or service of the online service provider.
  • various aspects of the disclosure, as discussed in detail below, provide frameworks and platforms for efficiently detecting malicious activities during interactions with the online service provider and for performing actions to reduce losses caused by the malicious activities.
  • the streaming device data may be associated with device attributes associated with devices that have attempted connection with an online service provider (e.g., an online server).
  • the devices may attempt to access electronic services offered by the online service provider (e.g., logging in to a user account with the online service provider, accessing data from the online service provider, initiating an electronic transaction such as an electronic payment transaction through a user account with the online service provider, etc.)
  • For example, when looking through events that contain different descriptors for mobile devices, it is difficult to determine which log entries represent prohibited transactions (e.g., fraudulent/malicious/falsified data).
  • an anomaly detection system provides an anomaly detection framework that identifies anomalies during interactions of the online service provider with a user device in real time based on device attributes associated with the user device.
  • the anomaly detection system may leverage the likelihood of a given combination of device attributes in a dataset.
  • the anomaly detection system may automatically take one or more actions, including but not limited to, alerting an agent, providing a differed experience for a user of the user device, re-securing a user account that is being accessed by the user device, etc.
  • the anomaly detection system may provide additional restrictions to data and/or services that can be accessed by the user device.
  • Devices of the same type may share similar qualities (e.g., attributes).
  • Devices of a particular type can look similar to each other when viewed via a log file. This similarity also extends to software platforms. For example, if a smartphone of a particular brand and a particular model version (e.g., Apple® iPhone 8) has a specific screen height and width, then every smartphone of the same model version that connects to a service provider system should have features that align with others using that same model version.
  • if a device's reported attributes do not align with others of the same model version, the anomaly detection system may determine that an anomaly is present and should be investigated.
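  • As a minimal sketch of this consistency check (in Python, with a hypothetical expected-profile table; the model identifier, attribute names, and values are illustrative, not taken from the application):

    # Flag a connection whose reported attributes disagree with the known
    # baseline for its claimed device model (profile table is hypothetical).
    EXPECTED_PROFILES = {
        "iPhone8,1": {"screen_width": 750, "screen_height": 1334},
    }

    def is_consistent(reported: dict) -> bool:
        profile = EXPECTED_PROFILES.get(reported.get("model"))
        if profile is None:
            return True  # unknown model: no baseline to compare against
        return all(reported.get(k) == v for k, v in profile.items())

    # A device claiming to be an iPhone 8 while reporting a desktop-sized
    # screen fails the check and can be routed for investigation.
    print(is_consistent({"model": "iPhone8,1",
                         "screen_width": 1920, "screen_height": 1080}))  # False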
  • the anomaly detection system may isolate anomalies by holding some device attribute values constant and generating machine learning-trained statistical models around the remaining device attributes.
  • the anomaly detection system may identify outliers and anomalies in real time over streaming device data, and automatically take a remedial action based on desired outcomes.
  • the anomaly detection system can acquire a set of data, plot that data, and attempt to visualize what is different among the rendered data.
  • the anomaly detection system of some embodiments leverages machine learning to identify fraudulent and/or suspicious devices attempting to connect with the service provider system in real time via their device attributes in an effort to automatically invoke a prohibited transaction.
  • the anomaly detection framework disclosed herein is more cost efficient than existing anomaly detection systems and flexible enough to be useful for service providers and/or users attempting to defend systems from bad actors.
  • a research individual is attempting to identify malicious activity on an online payment processing system.
  • the research individual generates logs that cover a specified time-period of suspected prohibited activity.
  • the log contains thousands, if not millions, of lines of data.
  • the research individual first tries to plot features to find correlations and analyze the data.
  • the research individual plots histograms and attempts clustering techniques, but to no avail.
  • the research individual leverages the anomaly detection system to isolate device features and generate multi-dimensional probability maps of the remaining device features. These probability maps help to identify outliers within the data.
  • the research individual now has a much smaller subset of data from which to identify the malicious activity within the system.
  • devices of a certain type can have certain features that are static (e.g. their model number).
  • the anomaly detection system may use that information to hold that value constant and generate a dynamic density map of interactions with the remaining device features by leveraging a subset of last-known instances of that device. Based on the density map, the subject technology can identify and detect which device-to-system connections have a minimal (or low) probability of occurrence given the other constant values; a sketch of this scoring follows the list of example actions below.
  • connection attempt can be automatically directed to a secondary flow for further analysis
  • connection attempt can be stepped up with additional user authentication prompts
  • agent can be alerted to an anomalous connection
  • security checks can be automatically performed to validate the authenticity of the connection (e.g., vendor identifier (VID) lookup, account review, etc.).
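  • A sketch of the density-map scoring described above, assuming numeric device features and a kernel density estimate as one possible machine learning-trained statistical model (the synthetic history, the 1% cutoff, and the routing action are illustrative assumptions):

    import numpy as np
    from scipy.stats import gaussian_kde

    # Hold the device model constant; fit a density model over the remaining
    # numeric device features from last-known instances of that model.
    rng = np.random.default_rng(0)
    history = rng.normal([750.0, 1334.0], [5.0, 5.0], size=(500, 2))
    kde = gaussian_kde(history.T)

    # Score incoming connections; route low-probability ones for review.
    incoming = np.array([[751.0, 1333.0], [1920.0, 1080.0]])
    low_density = np.quantile(kde(history.T), 0.01)  # assumed 1% cutoff
    for features, score in zip(incoming, kde(incoming.T)):
        if score < low_density:
            print("anomalous connection -> secondary flow / step-up:", features)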
  • the online service provider that provides electronic transaction services may receive a large number of communication attempts from legitimate users and malicious users, including emails that are transmitted to a dedicated e-mail address associated with the online service provider, phone calls received via a support hotline associated with the online service provider, chat sessions initiated via a website of the online service provider, etc.
  • These e-mails, calls, and chat interactions with the online service provider can be collectively referred to as customer inputs.
  • Many of the customer inputs may be from legitimate users who are requesting help for resolving issues related to their user accounts (e.g., disputing a transaction, issues with logging into an account, etc.). However, at least some of the customer inputs are not related to legitimate users’ concerns (e.g., subscriptions, advertisements, etc.), and some may even include malicious content (e.g., malware attachments, phishing emails or phone calls, etc.).
  • an analysis system may be configured to automatically classify and analyze customer inputs (e.g., email, phone, chat, etc.) for producing actionable insights related to fraud campaigns, phishing attacks, malware distribution, or product issues.
  • the analysis system may classify customer inputs (e.g., customer complaints and/or issues reported by e-mails, voice calls, and/or chat sessions) into respective categories for automated analysis and remedial action.
  • the analysis system may classify a customer input into one or more fraud categories based on matching the customer input to one or more known (or stored) fraud patterns.
  • the analysis system may also identify new fraud campaigns not yet known.
  • the analysis system may detect common victim patterns in user accounts that have been exposed to fraud. Based on the detected patterns, the analysis system may invoke risk rules to prevent fraudulent activities associated with the user account. For example, the analysis system may detect that a customer input is associated with a phishing scheme.
  • the analysis system may then forward the customer input to a phishing submodule for analysis and action.
  • the analysis system may detect whether a customer input (e.g., an email) includes an attachment (e.g., a data file) that is malicious.
  • the analysis system may then generate a signature for the attachment and may block the customer input from being presented in an email inbox.
  • the analysis system may classify the customer input into one of multiple categories.
  • Example categories may include: (1) fraud complaints, (2) phishing, (3) malware, (4) other, and/or (5) noise.
  • the analysis system may include additional categories for classification to accommodate larger datasets of customer input.
  • the analysis system may include or use a machine learning model, such as support vector machines (SVMs) or random forest classifiers, trained on previously labeled customer input for each category.
  • the labeled dataset can be gathered from customer support agents.
  • the analysis system may utilize different representations, including bag-of-words, term frequency-inverse document frequency (TF-IDF), and document-to-vector representation (Doc2Vec), which uses a deep learning approach.
  • the analysis system may extract topics using Latent Dirichlet Allocation (LDA), and select the best performing feature extraction model to extract features from the customer inputs.
  • the analysis system may also augment the model with information regarding the caller/sender of the customer input. Once a customer input is classified in one of these categories, further action can be taken on each interaction.
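  • A minimal sketch of such a classifier (the category labels follow the examples above; the training snippets and the choice of a linear SVM over TF-IDF features are illustrative assumptions):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Tiny illustrative training set; in practice the labeled data would be
    # gathered from customer support agents, as noted above.
    texts = [
        "please verify your account at this link",  # phishing
        "I was charged twice for one purchase",     # fraud complaint
        "see the attached invoice program",         # malware
        "unsubscribe me from this newsletter",      # noise
    ]
    labels = ["phishing", "fraud", "malware", "noise"]

    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(texts, labels)
    print(clf.predict(["strange invoice attachment in my inbox"]))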
  • the analysis system may cluster the customer input with other similar reported fraud activities (e.g., other customer inputs that have been classified as the same category). For example, a user may contact the online service provider to complain about receiving invoices related to renewing a website domain from a domain provider.
  • the analysis system may cluster such a customer input with other related complaints over invoices associated with renewing website domains.
  • the analysis system may implement a probabilistic clustering technique, such as Gaussian mixture model (GMM). GMM can assign a probability to each fraud complaint based on which cluster is assigned to the fraud complaint.
  • if a fraud complaint does not match any existing cluster well, a new cluster can be created. Once a cluster reaches a certain number of complaints, a report generation model can be triggered to generate a report.
  • the analysis system may identify specific information from the complaints, such as email addresses of the customers or any transaction details, and derive patterns from the complaints that belong to the same cluster, such as country of origin of the customers, customer age range, network addresses used for the transaction, transaction amount that was charged, a description of the service if any, and so on. This information can be added to the report with all the insights gathered and forwarded to an agent device for further analysis and action. As such, the analysis system may increase the efficiency of handling customer inputs (e.g., legitimate customer inputs may be forwarded to the right personnel or chatbots to handle), while malicious customer inputs are properly classified and forwarded to different modules for further analysis and actions.
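  • A sketch of the GMM clustering and report trigger, assuming complaints have already been converted into feature vectors (the synthetic vectors, two-component model, and report threshold are illustrative):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    complaints = np.vstack([rng.normal(0.0, 1.0, (40, 8)),   # campaign A
                            rng.normal(5.0, 1.0, (25, 8))])  # campaign B

    gmm = GaussianMixture(n_components=2, random_state=0).fit(complaints)
    assignments = gmm.predict(complaints)

    REPORT_THRESHOLD = 20  # trigger a report once a cluster is this large
    for cluster in range(gmm.n_components):
        size = int((assignments == cluster).sum())
        if size >= REPORT_THRESHOLD:
            print(f"cluster {cluster}: {size} complaints -> generate report")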
  • the online service provider may receive user complaints in the form of calls, e-mails, and/or chat regarding invoices sent to them to pay for a website domain renewal by a web hosting company.
  • these complaints can be grouped together based on the content of the e-mail, the amount requested, and the hosting provider.
  • the analysis system may be triggered to generate a report.
  • the analysis system may analyze the grouped customer inputs (e.g., within the same cluster) to derive patterns.
  • the analysis system may determine that a majority of the customers associated with the grouped customer inputs fall in a specified age band (e.g., older than 50 years of age) and are all known to reside in the United States.
  • the invoices are determined to originate from accounts registered in Russia where the hosting provider has no place of business or business presence.
  • the analysis system may determine a risk level based on the derived patterns, and may alert a risk team to investigate the complaints in more detail when the risk exceeds a threshold.
  • the analysis system may also perform actions such as restricting access to the user accounts associated with the complaints when the risk level exceeds the threshold.
  • a browser analysis system of the online service provider may employ anti-fraud defense techniques to generate fingerprints for various types of web applications (e.g., browsers or other web clients) connecting to the online service provider.
  • the browser analysis system may transmit code to the user device and cause the user device to execute the code.
  • the code when executed by the web browser, is configured to examine webpage attributes (e.g., document object model (DOM) attributes) and/or values of a webpage being loaded and presented on the user device.
  • the webpage attributes and/or values may enable the browser analysis system to a) track a unique user over time, and b) determine an anomaly, such as whether the web application is providing inaccurate information of its actual identity through malicious manipulation of the web browser attributes and values.
  • Devices of a particular type can look similar to each other when viewed via a log file. This similarity also extends to software platforms. For example, if a smartphone of a particular brand and a particular model version (e.g., Apple® iPhone 8) has a specific screen height and width, then every smartphone of the same model version that connects to a service provider system should have features that align with others using that same model version.
  • the online platform may determine that an anomaly is present and should be investigated.
  • the browser analysis system provides in-depth and automatic testing of current and pre-release web applications with the intent of generating a comprehensive database of their behaviors, features, and functionality in order to provide additional capabilities for detection and mitigation of malicious actors using modified or spoofed client devices.
  • the browser analysis system may also allow baselining for statistical modeling of browser traffic and can generate alert notifications of new functionality that can be implemented into anti-fraud defense mechanisms.
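  • A minimal server-side sketch of checking attributes reported by the in-page code against the fingerprint database built from such automated testing (the fingerprint keys, attribute names, and baseline values are hypothetical):

    FINGERPRINTS = {  # (browser, version) -> expected attribute values
        ("Chrome", "117"): {"max_touch_points": 0, "webdriver": False},
    }

    def browser_anomalies(claimed: tuple, reported: dict) -> list:
        baseline = FINGERPRINTS.get(claimed, {})
        return [attr for attr, expected in baseline.items()
                if reported.get(attr) != expected]

    # A client claiming desktop Chrome 117 while exposing webdriver=True
    # (typical of automation frameworks) is flagged for mitigation.
    print(browser_anomalies(("Chrome", "117"),
                            {"max_touch_points": 0, "webdriver": True}))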
  • Fraudulent calls are often made to customer service agents with the goal of lifting restrictions on an account or performing account takeover using password reset initiated by the customer service agent.
  • Social engineering techniques are often employed where the caller provides enough information to convince the agent to perform the desired action on an account that is not theirs. Additionally, these types of attacks occur on a scale where the same person may be calling for multiple accounts.
  • a voice authentication system may detect fraudulent calls based on analyzing voice characteristics of the callers.
  • the voice authentication system may authenticate a caller as a legitimate user associated with a user account of the online service provider and also determine if the caller has called before on multiple other user accounts of the online service provider.
  • the voice authentication system may provide real-time feedback by alerting an automated chat module (e.g., a chatbot) or customer service agents of the online service provider of such suspicious activity.
  • the voice authentication system may leverage various aspects of speech recognition and voice identification technology as well as intent identification on the incoming customer call.
  • the voice authentication system may provide a framework for a two-stage procedure that first verifies the identity of a caller and second checks whether the same caller has previously called with the same intention on a different account not belonging to him or her. This framework can identify fraudsters, generate a voice blacklist, and alert customer service agents in real time to mitigate any security lapse.
  • the voice authentication system may generate multiple machine learning-based voice models that represent variations of voices associated with different user accounts of the online service provider.
  • the voice authentication system may generate multiple generic voice models, where each of the generic voice models may correspond to a particular cross-section of demographics. Examples of the demographics may include gender, country of origin (e.g., accent), age, among others.
  • An example cross-section can be a young male (e.g., younger than 30 years old) from France, or an elderly female (e.g., older than 60 years old) from New York.
  • the voice authentication system may generate the machine learning-based generic voice models based on different combinations of a particular subset of the demographics attributes, such as gender, age, and accent.
  • the machine learning-based voice models may be configured to identify age groups in a binary manner such as old/young, or categorically by dividing the different ages into different age groups.
  • the voice authentication system may select, for each user account, a particular machine learning-based generic voice model based on voice characteristics of a user of the user account.
  • the voice authentication system may select, for a user account associated with a female user who is 26 years old from France, a generic voice model corresponding to a female- young-French voice.
  • the voice authentication system may select, for a user account associated with a male user who is 65 years old from England, a generic voice model corresponding to a male-old-English voice.
  • the generic voice model can be used by the voice authentication system to generate multiple different voice models specific to each of the user accounts.
  • the voice authentication system may train, for a particular user account, the generic voice model using different audio files associated with the particular user account to generate different voice models corresponding to different call intentions.
  • the audio files associated with the user account may be obtained based on historical calls as previously identified by customer service agents (e.g., verifying that the caller was, in fact, the rightful owner of the user account).
  • the voice authentication system may identify different audio files associated with a user account corresponding to different call intentions.
  • the online service provider may determine one or more call intentions of the callers based on the type of services provided by the online service provider.
  • Common call intentions may include “password reset,” “payment transaction,” “payment dispute,” or other types of intentions.
  • the voice authentication system may categorize the audio files associated with the user account based on the different call intentions. The voice authentication system may then extract keywords from the audio files that correspond to each of the call intentions. For example, for the call intention of “password reset,” the voice authentication system may extract, from the audio files corresponding to the “password reset” call intention, phrases such as “password reset,” “resetting my password,” “password resetting,” etc. These audio files with the extracted keywords can be grouped together for training a “password reset” model for the user account.
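  • A minimal sketch of grouping transcribed calls by call intention via keyword matching (the phrase lists, and any intents beyond those named above, are illustrative):

    from typing import Optional

    INTENT_KEYWORDS = {
        "password reset": ["password reset", "resetting my password",
                           "password resetting"],
        "payment dispute": ["dispute", "chargeback"],
    }

    def classify_intent(transcript: str) -> Optional[str]:
        text = transcript.lower()
        for intent, phrases in INTENT_KEYWORDS.items():
            if any(phrase in text for phrase in phrases):
                return intent
        return None  # no known intent matched

    print(classify_intent("Hi, I need help resetting my password"))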
  • the voice authentication system may generate and train a voice model, for the user account and a corresponding call intention, based on the extracted keywords from the audio files using the generic voice model.
  • the voice authentication system may generate, for each user account based on the corresponding generic voice models, multiple voice models for the different call intentions. Since each of the voice models is trained with the same phrase (or multiple similar phrases), any variation within each of the resulting voice models can be due to audio quality and patterns of speech.
  • Pre-processing can be performed on the audio files (having the extracted keyword) to normalize the variation and eliminate noise.
  • the result can be a set of trained voice models whose variation would be the result of the differences in speech patterns from one group to the other.
  • the machine learning-based voice models can be periodically updated using previously saved and tagged data.
  • the voice authentication system may determine whether the caller is associated with the user account based on the generated voice models associated with the user account.
  • the incoming call may include voice data associated with phrases and utterances made by the caller.
  • the voice authentication system may identify a call intent of the incoming call based on the voice data. For example, the voice authentication system may extract one or more keywords from the voice data, and classify the call as one of the multiple call intentions based on the extracted keywords.
  • when the extracted keywords include a phrase that is linked to one of the voice models (e.g., the phrase “resetting my password”), the voice authentication system may classify the call as the call intention associated with that voice model (e.g., the “password reset” call intention). The voice authentication system may then select the voice model that corresponds to the extracted keyword(s) to authenticate the caller.
  • the voice authentication system may determine whether the caller is a legitimate user of the user account using a voice model associated with the user account and corresponding to the call intention.
  • the voice authentication system or the voice model may generate an “accuracy” or “confidence” score.
  • the voice model may output a value (e.g., a confidence score) indicating how close the audio sample is to a voice of the same phrase generated by the model.
  • the voice authentication system and/or the voice models may determine the confidence score based on voice vectors associated with the voice data of the caller. For example, for each voice model generated for the user account, the voice authentication system may generate a signature in the form of a vector that represents the attributes of the voice represented by the machine learning-based voice model. The attributes can be generated using a combination of selective phrases and individual words. Within the voice authentication system that records the calls, the phrases and words can be segmented and stored as tagged data. For example, someone saying “password reset” can be a key phrase that is captured. Using methods such as hidden Markov models, Gaussian mixture models, and deep learning models, among others, the voice authentication system may generate a vector representation and use this vector for comparison.
  • the vector comparison could be accomplished using a vector similarity metric.
  • An example of such a metric would be a multi-dimensional relative distance metric, such as leveraging an n-dimensional Euclidean distance calculation to generate a per-dimension distance as well as an overall distance.
  • the technique for measuring the distance would depend on the underlying technique used to generate the voice representation. By examining the relative distance of an incoming sample to existing ones, the voice authentication system would generate a confidence score.
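  • A sketch of this comparison, using an n-dimensional Euclidean distance mapped to a confidence score by exponential decay (the decay mapping, scale, threshold, and vector values are illustrative choices; as noted above, the actual metric would depend on how the voice representation is generated):

    import numpy as np

    def confidence(sample_vec: np.ndarray, model_vec: np.ndarray,
                   scale: float = 1.0) -> float:
        per_dim = np.abs(sample_vec - model_vec)  # per-dimension distance
        overall = float(np.linalg.norm(per_dim))  # overall distance
        return float(np.exp(-overall / scale))    # map to (0, 1]

    enrolled = np.array([0.20, 0.80, 0.10, 0.50])  # model's voice vector
    caller = np.array([0.25, 0.78, 0.12, 0.55])    # incoming call's vector
    score = confidence(caller, enrolled)
    print(score, "match" if score > 0.9 else "no match")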
  • the voice authentication system may generate a voice vector based on the voice data of the incoming call.
  • the voice authentication system may then compare the voice vector associated with the voice data of the caller with the voice vector generated for the voice model using the techniques disclosed herein.
  • the voice authentication system and/or the voice model may determine a confidence score based on a similarity between the two voice vectors. If the confidence score is above a threshold, then the voice authentication system may consider the voice a match to the model, and may determine that the caller is a legitimate user of the user account.
  • the voice authentication system may determine that the caller is the legitimate user of the user account when the confidence score is above a threshold, and may determine that the caller is not the legitimate user of the user account when the confidence score is below the threshold.
  • the confidence score can be leveraged as a way to gauge any drift that has occurred in an individual’s speech over time as a way of improving the model for the user account.
  • the voice data (e.g., the voice vector) may be classified into one of the generic voice models by comparing the voice data against each of the generic voice models.
  • the voice authentication system may use the voice data to determine whether the caller has previously called for another user account.
  • the voice authentication system may compare voice data (and/or the voice vector determined based on the voice data) against different voice models associated with different user accounts. For example, the voice authentication system may first compare the voice data (and/or the voice vector) against the different generic voice models to determine which generic voice model matches the caller. The voice authentication system may determine a confidence score for each generic voice model indicating how well the voice data of the caller matches the generic voice model. This confidence score can determine which model more closely represents the caller.
  • a young male with a French accent would yield a high confidence score on the male-young-French-accent voice model but yield a much lower confidence score on the female-old-Russian model.
  • the voice authentication system may alert another module of the online service provider (e.g., a chatbot) or the customer service agent accordingly.
  • the voice authentication system may have a set of such models for every important intent for which customers frequently call the online service provider, or for which fraudsters have historically called the online service provider.
  • the system may create a voice blacklist of fraudsters that can be shared.
  • the voice authentication system may facilitate detecting malicious activity relating to unverified user accounts.
  • a “Know your customer” (KYC) restriction can be placed on a user account when malicious activity associated with the user account has been detected (e.g., a fraudulent user has called to perform a transaction on that user account, etc.), which limits the number of activities that can be done on the user account, including restricted withdrawals of any funds from the user account.
  • Such accounts are often created for fraud purposes and once money is contributed into such accounts, customer service calls are made to lift the restrictions to allow transfer of money outside the user account.
  • a single customer can make multiple calls for multiple accounts.
  • the system may compare the voice vector with the trained machine learning-based models and detect that the same caller has called previously with the same intention for multiple accounts.
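  • A minimal sketch of that cross-account check (the stored vectors, distance threshold, and account identifiers are hypothetical):

    import numpy as np

    stored = {  # (account_id, intent) -> previously observed voice vector
        ("acct-1", "password reset"): np.array([0.20, 0.80, 0.10]),
        ("acct-2", "password reset"): np.array([0.21, 0.79, 0.11]),
    }

    def matching_accounts(caller_vec, intent, threshold=0.1):
        return [acct for (acct, i), vec in stored.items()
                if i == intent and np.linalg.norm(caller_vec - vec) < threshold]

    hits = matching_accounts(np.array([0.20, 0.80, 0.10]), "password reset")
    if len(hits) > 1:
        print("same voice, same intent, multiple accounts:", hits,
              "-> alert agent / add to blacklist")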
  • the customer service agent can prompt the caller for more authentication information and place the caller on hold until more verification can be completed.
  • FIG. 1 is a block diagram of a networked system 100 suitable for implementing the anomaly detection system, the analysis system, the browser analysis system, and the voice authentication system described herein.
  • system 100 may comprise or implement a plurality of devices, servers, and/or software components that operate to perform various methodologies in accordance with the described implementations.
  • Exemplary devices and servers may include device, stand-alone, and enterprise-class servers, operating an OS such as a MICROSOFT® OS, a UNIX® OS, a LINUX® OS, or another suitable device- and/or server-based OS. It can be appreciated that the devices and/or servers illustrated in FIG. 1 may be deployed in other ways, and that the operations performed and/or services provided by such devices and/or servers may be combined or separated for a given implementation.
  • System 100 includes a service provider server 110 and an agent device 120 in communication over a network 140.
  • Agent device 120 may be utilized to provide training data, view flagged transactions, and process additional transaction data to identify transactions indicating a prohibited transaction including potential fraud.
  • an agent may process and review the data with service provider server 110, where service provider server 110 may generate a machine learning model based on iteratively training using the training data, and further process the transaction data using the model to flag further transactions. Additionally, service provider server 110 may be used to output narratives for flagged transactions based on feature analysis that caused the machine learning engine to perform a classification.
  • Agent device 120 and service provider server 110 may each include one or more processors, memories, and other appropriate components for executing instructions such as program code and/or data stored on one or more computer readable mediums to implement the various applications, data, and steps described herein.
  • instructions may be stored in one or more computer readable media such as memories or data storage devices internal and/or external to various components of system 100, and/or accessible over network 140.
  • Agent device 120 may be implemented as a communication device that may utilize appropriate hardware and software configured for wired and/or wireless communication with service provider server 110.
  • agent device 120 may be implemented as a personal computer (PC), a smart phone, laptop/tablet computer, wristwatch with appropriate computer hardware resources, eyeglasses with appropriate computer hardware (e.g. GOOGLE GLASS ®), other type of wearable computing device, implantable communication devices, and/or other types of computing devices capable of transmitting and/or receiving data, such as an IPAD® from APPLE®.
  • Agent device 120 includes an alert review application 122, a report review application 124, a database 126, and a network interface component 128.
  • Alert review application 122 may correspond to executable processes, procedures, and/or applications with associated hardware.
  • agent device 120 may include additional or different modules having specialized hardware and/or software as required.
  • Report review application 124 may correspond to one or more processes to execute software modules and associated components of agent device 120 to provide features, services, and other operations associated with training a machine learning, deep learning, or other artificial intelligence (AI) model, as well as using the model for detection of prohibited transactions in transaction data sets.
  • report review application 124 may correspond to specialized hardware and/or software utilized by a user of agent device 120 that may be used to provide training and transaction data, as well as review results of a supervised machine learning engine having a model trained for fraudulent pattern recognition and narrative text output.
  • report review application 124 may be used to first provide training data and/or sets of data to service provider server 110 that includes transaction data sets for transaction processed by a financial entity, such as a bank or financial institution, payment service provider, or other transaction processor.
  • Service provider server 110 may utilize features within the data sets to classify the transactions according to one or more classifiers, which may flag one or more transactions as potentially prohibited based on laws, rules, or regulations.
  • the data sets may be annotated, and flagged transactions may be displayed through report review application 124.
  • An agent may identify any false positives in the flagging of transactions as potentially prohibited, which may be provided back to service provider server 110 for retraining (e.g., iteratively and/or continuously training) of the machine learning model.
  • the flagged transactions may include a narrative displayable through report review application 124, such as a textual description of the reason for flagging the transaction(s) by the model.
  • Agent device 120 may further be used to view the results of the model processing other transaction data sets, such as for other transactions processed by one or more entities.
  • Agent device 120 may further include database 126 stored on a transitory and/or non-transitory memory of agent device 120, which may store various applications and data and be utilized during execution of various modules of agent device 120.
  • Database 126 may include, for example, identifiers such as operating system registry entries, cookies associated with alert review application 122 and/or other applications 112, identifiers associated with hardware of agent device 120, or other appropriate identifiers, such as identifiers used for payment/user/device authentication or identification, which may be communicated as identifying the user/agent device 120 to service provider server 110.
  • Database 126 may further include any transaction data sets used for training and/or processing with a machine learning model generated by service provider server 110.
  • Agent device 120 includes at least one network interface component 128 adapted to communicate with service provider server 110.
  • network interface component 128 may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices.
  • Service provider server 110 may be maintained, for example, by an online service provider, which may provide identification of prohibited transactions, such as fraudulent transactions, in transaction data sets processed by a financial or transaction processing entity (including service provider server 110) using a machine learning or other Al model.
  • service provider server 110 includes one or more processing applications which may be configured to interact with agent device 120 to train and utilize the model for prohibited transaction identification.
  • service provider server 110 may be provided by PAYPAL®, Inc. of San Jose, CA, USA. However, in other implementations, service provider server 110 may be maintained by or include another type of service provider.
  • Service provider server 110 of FIG. 1 includes an event detection server 150, a transaction processing server 160, and a network interface component 170.
  • Event detection server 150, transaction processing server 160, and other applications 134 may correspond to executable processes, procedures, and/or applications with associated hardware.
  • service provider server 110 may include additional or different modules having specialized hardware and/or software as required.
  • the event detection server 150 may implement the functionalities of the anomaly detection system, the analysis system, the browser analysis system, and the voice authentication system described herein.
  • the event detection server 150 may also be associated with specialized hardware of service provider server 110 to provide a framework to train a machine learning model for one or more prediction engines that can detect prohibited transactions, such as fraud, by probabilistically detecting anomalies in transaction datasets, which will be described in more detail in FIGS. 2 and 4.
  • the event detection server 150 may provide a framework for an analysis platform for actionable insight into user interaction data, which will be described in more detail in FIGS. 2 and 3.
  • the event detection server 150 may provide a platform for automated device data retrieval and analysis, which will be described in more detail in FIGS. 5-7.
  • the event detection server 150 may provide a voice vector framework for authenticating user interactions, which will be described in more detail in FIGS. 8-10.
  • service provider server 110 includes at least one network interface component 170 adapted to communicate with agent device 120 and/or other entities over network 140.
  • network interface component 170 may comprise a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency (RF), and infrared (IR) communication devices.
  • Network 140 may be implemented as a single network or a combination of multiple networks.
  • network 140 may include the Internet or one or more intranets, landline networks, wireless networks, and/or other appropriate types of networks.
  • network 140 may correspond to small scale communication networks, such as a private or local area network, or a larger scale network, such as a wide area network or the Internet, accessible by the various components of system 100.
  • the customer device 130, in various implementations, may be implemented as a communication device using any appropriate combination of hardware and/or software configured for wired and/or wireless communication over the network 140.
  • the user device may be implemented as a personal computer (PC), a smart phone, a smart phone with additional hardware such as NFC chips, BLE hardware, etc., a wearable device with similar hardware configurations such as a gaming device or a virtual reality headset (or one that talks to a smart phone with unique hardware configurations and running appropriate software), a laptop computer, and/or other types of computing devices capable of transmitting and/or receiving data.
  • the customer device 130 may install and execute a client-side service application 132 received from the transaction processing server 110 to facilitate one or more transaction processes (e.g., point-of-sale transactions).
  • the client-side service application 132 may allow a user to send payment transaction requests to the transaction processing server 110, which includes communication of data or information needed to complete the request, such as funding source information.
  • the customer device 130 may include one or more browser applications that may be used, for example, to provide a convenient interface to permit a user to browse information available over network 140.
  • the one or more browser applications may be implemented as a web browser configured to view information available over the Internet, such as a user account for online shopping and/or merchant sites for viewing and purchasing goods and/or services.
  • the customer device 130 may include other applications as may be desired in one or more implementations of the present disclosure to provide additional features available to the user.
  • the other applications may include security applications for implementing server-side security features, programmatic client applications for interfacing with appropriate APIs over network 140, or othertypes of applications.
  • the other applications may also include email, texting, voice and IM applications that allow a user to send and receive emails, calls, texts, and other notifications through network 140.
  • the other applications may include financial applications, such as banking, online payments, money transfer, or other applications associated with transaction processing server 110.
  • the other applications may include a software program, such as a graphical user interface (GUI), executable by a processor that is configured to interface with a user.
  • the customer device 130 may further include cache 134 stored to a transitory and/or non-transitory memory of customer device 130, which may store various applications and data and be utilized during execution of various modules of customer device 130.
  • cache 134 may include, for example, identifiers such as operating system registry entries, cookies associated with the one or more browser applications and/or the other applications, identifiers associated with hardware of customer device 130, or other appropriate identifiers, such as identifiers used for payment/user/device authentication or identification, which may be communicated as identifying customer device 130 to the service provider server 110.
  • the cache 134 may store device attributes of the customer device 130, including, but not limited to, application information associated with various applications on the customer device 130, version information of the various applications, cookie information, a language used on the customer device 130, screen attributes such as resolution, size, etc., and other information associated with the customer device 130.
  • account information and/or digital wallet information may be stored to cache 134 for use by the customer device 130.
  • the cache 134 may include at least one user identifier, which may be implemented, for example, as operating system registry entries, cookies, identifiers associated with hardware of the customer device 130 (e.g., a media access control (MAC) address), or various other appropriate identifiers.
  • the user identifier may include one or more attributes related to the user of the customer device 130, such as personal information related to the user (e.g., one or more user names, passwords, photograph images, biometric IDs, addresses, phone numbers, social security number, etc.) and banking information and/or funding sources (e.g., one or more banking institutions, credit card issuers, user account numbers, security data and information, etc.).
  • the user identifier may be passed with a user login request to the transaction processing server 110 via the network 140, and the user identifier may be used by the service provider server 110 to associate the user with a particular user account maintained by the transaction processing server 160.
  • Customer device 130 includes at least one network interface component 170 adapted to communicate with the service provider server 110 and/or the transaction processing server 160.
  • network interface component 170 may include a modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices.
  • Even though only one customer device 130 is shown in FIG. 1, it has been contemplated that one or more user devices (each similar to customer device 130) may be communicatively coupled with the service provider server 110 via the network 140 within the networked system 100.
  • FIG. 2 illustrates a block diagram of the event detection server 150, according to an implementation of the present disclosure.
  • the event detection server 150 includes a feature extraction module 210, a classification module 230, a first pass fraud detection module 240, a data profile module 250, a report generation module 260, and a second pass fraud detection module 270.
  • the network interface component 125 includes API 202.
  • the API 202 is coupled to the event detection server 150 with bidirectional signal paths to the second pass fraud detection module 270.
  • the feature extraction module 210 includes a text preprocessing module 211, a feature extraction engine 212, an e-mail features repository 213, a voice recognition module 214, a text feature extraction engine 215, a transcript features repository 216, an audio feature extraction engine 217, a voice vectors repository 218, an email training dataset 219 and a voice training dataset 220.
  • the feature extraction module 210 may extract features from the communication.
  • the extracted features may include textual data features extracted from a text portion of the communication and/or audio data features extracted from an audio portion of the communication.
  • the feature extraction engine 212 of the feature extraction module 210 can generate textual data features with the textual data.
  • the textual data refers to text extracted from electronic document items received by the text preprocessing module 211.
  • the textual data features can be stored in the e-mail features repository 213.
  • the voice recognition module 214 can process the audio properties of a received audio file (e.g., voicemail) to produce a textual transcript of the received audio file.
  • the voice recognition module 214 may utilize standard speech processing techniques to translate the audio to text.
  • the text feature extraction engine 215 can generate textual data features with the textual transcript associated with the received audio file.
  • the textual data features from the text feature extraction engine 215 can be stored in the transcript features repository 216.
  • the voice recognition module 214 may provide a processed speech data file that includes speech properties of the received audio file.
  • the audio feature extraction engine 217 may generate audio data features with the processed speech data file.
  • the audio feature extraction engine 217 may generate voice vectors with the audio data features, where each portion (or field) of respective voice vectors corresponds to a different combination of audio data features. For example, each combination of audio data features may represent word and/or phrases of a specific spoken language.
  • the voice vectors can be stored in the voice vectors repository 218.
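  • A minimal sketch of assembling a voice vector whose fields correspond to different combinations of audio data features (the feature names and the pairwise-combination scheme are illustrative assumptions):

    import numpy as np

    AUDIO_FEATURES = ["pitch_mean", "pitch_var", "energy", "speech_rate"]

    def voice_vector(features: dict) -> np.ndarray:
        base = [features[name] for name in AUDIO_FEATURES]
        # pairwise products stand in for "combinations" of features
        combos = [features[a] * features[b]
                  for i, a in enumerate(AUDIO_FEATURES)
                  for b in AUDIO_FEATURES[i + 1:]]
        return np.array(base + combos)

    vec = voice_vector({"pitch_mean": 180.0, "pitch_var": 22.5,
                        "energy": 0.7, "speech_rate": 3.2})
    print(vec.shape)  # (10,): 4 base features + 6 pairwise combinations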
  • the feature extraction module 210 can map (or transform) the textual data features and the audio data features into respective ones of different feature representations.
  • the preprocessing module 231 may be tasked to perform the feature representation mapping in lieu of the feature extraction module 210.
  • the classification module 230 includes a preprocessing module 231, a classifier 232, a fraud class module 233, a voice scoring module 234, a phishing class module 235, a malware class module 236, an other class module 237, and a noise class module 238.
  • the classification module 230 may determine an intent of the communication based on the extracted features, and may select a machine learning-trained classifier to classify the communication in one or more of communication categories.
  • the first pass fraud detection module 240 includes a clustering engine 242 and fraud type clusters 244, 246, and 248.
  • the first pass fraud detection module 240 may use the clustering engine 242 to assign the communication to one of the clusters (e.g., one of the fraud type clusters 244, 246, and 248) of past communications based on the intent and the categories associated with the communication. Embodiments are discussed with reference to FIG. 3.
  • the data profile module 250 includes a controller 252, a retrieval engine 254 and a data profile repository 256.
  • the second pass fraud detection module 270 includes an anomaly detection module 280, a feedback module 272, and a remedial action module 274.
  • the second pass fraud detection module 270 may derive a unique pattern of activity from the communication and may detect one or more anomalies in the communication based on the unique pattern of activity.
  • FIG. 3 is a flowchart of an example process 300 of actionable insight analysis into user interaction data, according to an implementation of the present disclosure.
  • One or more of the steps 302-314 of process 300 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 302-314.
  • Some examples of computing devices, such as computer system 1100 of FIG. 11 may include non-transitory, tangible, machine readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 300.
  • the process 300 includes a number of enumerated steps, but aspects of the process 300 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
  • the process 300 begins at step 302, where the event detection server 150 receives user interaction data (e.g., e-mails, an audio clip of a phone call, etc.) associated with an interaction between the customer device 130 and a service provider server (e.g., the transaction processing server 160).
  • the text preprocessing module 211 may access, through the API 202, textual data associated with a first type of interaction from a first data structure in a data repository communicably coupled to the service provider server 110.
  • the text preprocessing module 211 may prepare the received textual data for processing by the feature extraction engine 212, such as removing any unnecessary terms and/or characters that help improve the extraction performance.
  • the voice recognition module 214 may access, through the API 202, audio data associated with a second type of interaction from a second data structure in the data repository.
  • the feature extraction module 210 may group the textual data with the audio data into the user interaction data to combine the first type of interaction with the second type of interaction.
  • the textual data and the audio data may correspond to different portions of the user interaction data.
  • the data repository may be remote and/or external to the service provider server 110.
  • the data repository may be internal to the service provider server 110.
• the data repository is, or includes at least a portion of, the data profile repository 256.
  • the feature extraction module 210 of the event detection server 150 processes the user interaction data by extracting one or more features from the user interaction data.
• the extracted features include textual data features and audio data features as described herein.
  • the feature extraction module 210 can select one of multiple different feature extraction algorithms based on a comparison of performance metrics between each of the different feature extraction algorithms.
• the different feature extraction algorithms include, among others, bag-of-words, term frequency-inverse document frequency (TF-IDF), Doc2Vec, and Latent Dirichlet Allocation (LDA). As such, the feature extraction module 210 can apply the selected feature extraction algorithm to the user interaction data.
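• As a non-limiting illustration only (this is not the claimed implementation), the select-and-apply step can be sketched as scoring each candidate extractor and applying the winner; the sample texts and placeholder scores below are invented:

```python
# Minimal sketch of selecting a feature extraction algorithm by comparing
# performance metrics, then applying it. Placeholder scores stand in for
# real validation metrics; sample_texts is invented data.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

sample_texts = [
    "please verify my account payment",
    "urgent: confirm your password at this link",
]

candidates = {
    "bag_of_words": CountVectorizer(),
    "tf_idf": TfidfVectorizer(),
}

def downstream_score(name):
    # Placeholder metric; in practice this would be, e.g., a classifier F1
    # measured on labeled interaction data for each extractor.
    return {"bag_of_words": 0.81, "tf_idf": 0.87}[name]

best_name = max(candidates, key=downstream_score)
features = candidates[best_name].fit_transform(sample_texts)
print(best_name, features.shape)
```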
  • the classifier 232 of the event detection server 150 classifies the extracted features into one of many actionable insight categories with a machine learning-trained classifier.
  • a first actionable insight category may correspond to a fraudulent activity implemented with the fraud class module 233
  • a second actionable insight category may correspond to a phishing activity implemented with the phishing class module 235
  • a third actionable insight category may correspond to a malware activity implemented with the malware class module 236
  • a fourth actionable insight category may correspond to a noise activity implemented with the noise class module 238, and
  • a fifth actionable insight category may correspond to other (or miscellaneous) activities implemented with the other class module 237.
  • the output of the fraud class module 233 is fed to the first pass fraud detection module 240.
  • the output of the phishing class module 235 may include a notification transmitted to a phishing detection system (not shown), indicating that the user interaction may correspond to a phishing scheme.
  • the phishing detection system can analyze and report phishing network links (e.g., URLs) extracted from phishing emails.
  • the output of the malware class module 236 includes signaling that is sent to a malware detection system (not shown).
  • the signaling may be first sent to a virus analysis process to identify whether the manipulated file has been previously identified as malicious and to acquire an associated signature that can be used for blocking. If the virus analysis process does not return a result, then the malware detection system can receive customer input with malware attachments for download into an isolated virtual environment.
• the customer input with malware attachments may be analyzed for static (code-based) and dynamic (interaction with the sandbox system) behavior to identify the impact and report the findings to the agent device 120.
• the output of the other class module 237 includes signaling that is sent to the agent device 120.
• Some examples of the other category may include customer complaints and/or emails about mobile applications not working properly, queries for information regarding products and/or services associated with the service provider server 110, or the like.
  • the classifier 232 may identify customer complaints regarding usage of products and/or services associated with the service provider server 110 that can serve as feedback for use by the transaction processing server 160 to identify any underlying issues in the services and/or products.
  • the output of the noise class module 238 includes signaling that is purged.
  • Some examples of noise may include any data that does not relate to the service associated with the service provider server 110, advertisements, subscription emails and the like. Purging the noise activity can help increase the efficiency and accuracy of the system. For purposes of brevity, the remainder of the steps in the process 300 will be discussed in reference to the fraudulent activity category, but can also apply to other categories.
  • the classifier 232 determines an intent of the interaction from the feature representations using the machine learning-trained classifier.
  • the intent corresponds to one of the different actionable insight categories.
  • the classification module 230 selects one of the machine learning-trained classifiers implemented by the classifier 232 based on a comparison of performance metrics between each of the machine learning-trained classifiers. In some aspects of determining the intent of the interaction, the classifier 232 determines the intent of the interaction using the selected one of the machine learning-trained classifiers.
  • the feature extraction module 210 accesses user interaction datasets associated with respective ones of interactions between different communication devices (including the customer device 130) and the service provider server 110.
  • the classifier 232 can then determine an intent of each of the interactions from extracted features associated with each of the interactions using the machine learning-trained classifier.
  • the classifier 232 can classify each of the interactions as a respective category of the actionable insight categories based at least in part on the intent of that interaction.
  • the classification module 230 may include one or more processors adapted to generate multiple machine learning-based networks based on the actionable insight categories.
  • the machine learning-based networks correspond to the respective actionable insight categories.
  • the one or more processors in the classification module 230 may be adapted to train each of the machine learning-based networks with a respective training dataset to form different machine learning-trained classifiers.
  • the respective training dataset facilitates supervised learning by including labeled interaction data indicating what information pertains to which of the actionable insight categories.
  • the classification module 230 can select the machine learning-trained classifier from the machine learning-trained classifiers.
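• A hedged sketch of this train-and-select step, assuming scikit-learn and invented labeled interaction data, might look like the following (the candidate models and labels are illustrative, not the disclosed classifiers):

```python
# Train several candidate classifiers on labeled interaction features and
# keep the best performer by cross-validated accuracy. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

texts = ["seller account takeover refund scam",
         "verify your password at this link",
         "monthly newsletter digest",
         "unauthorized payment dispute urgent",
         "click to claim your prize login",
         "weekly promo subscription email"]
labels = ["fraud", "phishing", "noise", "fraud", "phishing", "noise"]

X = TfidfVectorizer().fit_transform(texts)
candidates = {"logreg": LogisticRegression(max_iter=1000),
              "nb": MultinomialNB()}
scores = {name: cross_val_score(clf, X, labels, cv=2).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)
classifier = candidates[best].fit(X, labels)   # selected ML-trained classifier
print(best, scores[best])
```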
  • the clustering engine 242 of the first pass fraud detection module 240 generates multiple clusters based on the extracted features using one or more clustering algorithms.
  • the clusters may include cluster 244 (depicted as “fraud type cluster 1”), cluster 246 (depicted as “fraud type cluster 2”), and cluster 248 (depicted as “fraud type cluster N”).
  • the one or more clustering algorithms include, among others, K- means, Gaussian mixture model (GMM), and hierarchical.
  • the clustering engine 242 maps the interaction to a first cluster of the different first clusters based at least in part on the determined intent.
  • each of the first clusters corresponds to a particular type of activity in a first actionable insight category of the different actionable insight categories.
• the first actionable insight category corresponds to a fraudulent activity classification.
  • the particular type of activity may correspond to one of multiple types of fraudulent activity (e.g., fraud type cluster 1, fraud type cluster 2, fraud type cluster N).
  • the clustering engine 242 determines whether the interaction maps to at least one of the first clusters and generates a new cluster to be included in the first clusters when the interaction is determined to not map to the at least one of the first clusters.
  • the new cluster corresponds to a new type of event in the first actionable insight category.
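• One way to picture this map-or-create logic is a minimal sketch with synthetic feature vectors and an arbitrary distance threshold:

```python
# Assign a new communication's feature vector to the nearest fraud-type
# cluster, or open a new cluster when it is too far from every centroid.
import numpy as np
from sklearn.cluster import KMeans

past_features = np.random.RandomState(0).rand(30, 4)   # stand-in history
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit(past_features)

new_comm = np.random.RandomState(1).rand(1, 4)
dists = np.linalg.norm(clusters.cluster_centers_ - new_comm, axis=1)
NEW_CLUSTER_THRESHOLD = 0.9                              # illustrative value
if dists.min() <= NEW_CLUSTER_THRESHOLD:
    print("mapped to fraud type cluster", int(dists.argmin()))
else:
    print("no match: generate a new cluster for this type of event")
```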
  • the event detection server 150 detects one or more anomalies in at least one of the clusters through an anomaly detection operation.
  • the event detection server 150 may utilize the second pass fraud detection module 270 to generate second clusters within the at least one of the first clusters using one or more clustering algorithms.
  • each of the second clusters corresponds to a unique pattern of activity associated with the particular type of activity in the first actionable insight category.
  • the second pass fraud detection module 270 may utilize the cluster engine 286 to generate the second clusters.
  • the second pass fraud detection module 270 may determine whether the unique pattern of activity includes the one or more anomalies for the particular type of activity.
  • the second pass fraud detection module 270 determines that the particular type of activity represents malicious activity when the unique pattern of activity is determined to include the one or more anomalies.
  • the event detection server 150 issues a remedial action for the interaction by further authenticating the customer device 130 based on the one or more anomalies detected in the at least one of the clusters.
  • the event detection server 150 issues the remedial action based on the mapping of the interaction to the first cluster.
  • the event detection server 150 issues the remedial action using the remedial action module 274.
  • the event detection server 150 issues the remedial action using the report generation module 260.
  • the event detection server 150 provides, through the API 202 to a second communication device (e.g., the agent device 120) associated with the service provider server 110, an indication of the one or more anomalies.
  • the event detection server 150 generates an alert notification that identifies the malicious activity using the remedial action module 274.
  • the remedial action module 274 sends, through the API 202 to the agent device 120 over the network 140, the alert notification.
  • the remedial action module 274 conditionally sends the alert notification.
  • the remedial action module 274 may determine whether a threshold number of anomalies is detected in the first cluster.
  • the remedial action module 274 generates a report indicating the one or more anomalies when the threshold number of anomalies is detected. In an aspect, the detected number of anomalies exceeds the threshold number of anomalies.
  • FIG. 4 is a flowchart of an example process 400 of probabilistic anomaly detection and mediation, according to an implementation of the present disclosure.
• One or more of the steps 402-410 of process 400 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 402-410.
• Some examples of computing devices, such as computer system 1100, may include non-transitory, tangible, machine-readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 400.
  • the process 400 includes a number of enumerated steps, but aspects of the process 400 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
  • the process 400 begins at step 402, where the feature preprocessing module 281 of the anomaly detection module 280 receives device data of a user device (e.g., the customer device 130) during a communication session between the user device and a service provider server 110.
  • a user may communicate with the service provider server 110 (e.g., accessing a website associated with the service provider server 110, initiating an electronic transaction with the service provider server 110, etc.).
  • the feature preprocessing module 281 may obtain device data associated with the user device.
• the device data may include device attributes that, in combination, are unique to a type of device (e.g., a manufacturer, a model, etc.) associated with the customer device 130.
• the combination of the device attributes may be used by the feature preprocessing module 281 to verify whether the user device is of a particular type (e.g., a particular manufacturer, a particular model, etc.) that the user device purports to be.
• each of the different device attributes is marked with a different descriptor and a corresponding attribute value that are specific in identifying the device type of the customer device 130.
  • the different device attributes may include, among others, a pixel depth attribute, version information of a particular application (e.g., a web browser) of the customer device 130, an identifier, a list of plug-ins installed on the customer device 130, a font used in the customer device 130, a language used in the customer device 130, screen attributes such as a width, a height, a resolution, and other attributes.
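• By way of a hypothetical record only (none of these field names is a normative schema from the disclosure), the device data received at step 402 might resemble:

```python
# Hypothetical device-data record; every field name and value is invented.
device_data = {
    "pixel_depth": 24,
    "browser_version": "103.0",
    "device_id": "abc-123",                      # placeholder identifier
    "plugins": ["pdf-viewer", "widevine"],
    "font": "Roboto",
    "language": "en-US",
    "screen": {"width": 1080, "height": 2340, "resolution": "FHD+"},
}

# A combination of attributes serves as a fingerprint for the purported
# device type (e.g., a particular manufacturer and model).
fingerprint = (device_data["pixel_depth"],
               device_data["browser_version"],
               device_data["screen"]["width"])
print(fingerprint)
```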
  • the feature preprocessing module 281 may receive the device data of the customer device 130 in response to occurrence of an event associated with the customer device 130.
  • the event occurrence may include a user interaction between the customer device 130 and the service provider server 110, such as a connection attempt to the service provider server 110.
  • the event may identify a device type of the customer device 130 (e.g., a particular manufacturer, a particular model, etc.) and a type of the user interaction (e.g., connection attempt).
• the feature preprocessing module 281 may receive first logs indicating a first number of events associated with user devices (e.g., other customer devices 130d) that occur in a first window of time.
  • each event of the first number of events may include first device data corresponding to the first window of time for a user device of the user devices.
  • the feature preprocessing module 281 also may receive second logs indicating a second number of events associated with the user devices that occur in a second window of time different from the first window of time (e.g., occurring at a later time).
  • each event of the second number of events may include second device data corresponding to the second window of time for a user device of the user devices.
  • the feature preprocessing module 281 may receive user queries.
  • each of the user queries may include a user interaction between a user account of a service (e.g., electronic payment processing) and the service provider server 110 associated with the service.
  • the feature preprocessing module 281 may perform an extraction operation, where the feature preprocessing module 281 may extract metadata of user interactions from the user queries.
  • the metadata includes the device data of the customer device 130.
  • the feature combination module 282 of the anomaly detection module 280 can select a combination of features from the different features.
  • the combination of features includes features having a variance of expected values that exceeds a threshold variance.
• the feature combination module 282 selects multiple device attribute combinations from the different device attributes that represent features of interest. In some aspects, each of the multiple device attribute combinations corresponds to a different combination among a subset of the different device attributes.
  • the anomaly detection module 280 selects a prediction engine to process the device data and generate a vector of likelihood scores for different device attribute combinations.
• the anomaly detection module 280 may select a prediction engine from different prediction engines to process the device data, where the selected prediction engine corresponds to one of different non-parametric statistical models.
  • the anomaly detection module 280 may generate a vector of likelihood scores for different device attribute combinations with different non-parametric statistical models.
  • the non-parametric estimation module 283 may generate a histogram for each of the different device attribute combinations of the particular device type and estimate a probability density function distribution that best fits the histogram.
  • the histogram includes a two-dimensional relationship between first attribute values of a first device attribute against second attribute values of a second device attribute in a given device attribute combination of the device attribute combinations.
  • the histogram can indicate relationships between N device attributes in N-dimensional space in a given device attribute combination of the device attribute combinations, where N is a positive integer.
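• A compact sketch of this estimation step, assuming SciPy and synthetic attribute values, could pair a 2-D histogram with a kernel density estimate:

```python
# Build a 2-D histogram over a pair of device attributes and fit a kernel
# density estimate; the density then scores how likely a claimed attribute
# combination is. Attribute values are synthetic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
pixel_depth = rng.choice([16, 24, 32], size=500, p=[0.1, 0.7, 0.2])
screen_width = rng.normal(1080, 50, size=500)

hist, xedges, yedges = np.histogram2d(pixel_depth, screen_width, bins=(3, 10))
kde = gaussian_kde(np.vstack([pixel_depth, screen_width]))

print(kde([[24], [1080]]))   # common combination -> relatively high density
print(kde([[16], [4000]]))   # unusual combination -> near-zero density
```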
• in an offline operation, the anomaly detection module 280 generates a first number of non-parametric statistical models with first predetermined thresholds based on a first window of time, and generates a second number of non-parametric statistical models with second predetermined thresholds based on a second window of time.
  • the second window of time is subsequent to the first window of time.
  • at least one of the first predetermined thresholds is different from at least one of the second predetermined thresholds.
• the non-parametric statistical models include machine learning-based models.
• the anomaly detection module 280 can train the machine learning-based models with a training dataset that indicates user interactions captured within one or more predetermined windows of time and a mapping of the user interactions to predetermined probability density function distributions. The training of the machine learning-based models can occur in an offline mode of the anomaly detection module 280.
  • the anomaly detection module 280 processes device attributes from the device data with the machine learning-based models.
  • the anomaly detection module 280 can generate a likelihood prediction for each of the device attributes with the different machine learning-based models.
  • each of the machine learning-based models includes one or more executable programs and/or models configured to initially process one or more training data sets having transactions processed by an entity, including service provider server 110.
• the transactions in the training datasets 219, 220 may include legitimate transactions and malicious and/or fraudulent transactions, such as those transactions prohibited under money laundering laws, rules, and regulations when entities engage in illegal and/or malicious behavior.
  • the training data set may include labeled and/or unlabeled data, which may include classifications of valid transactions and prohibited transactions (e.g., “no fraud” or “potential or detected fraud,” respectively).
• classifiers may be established by the agent or entity processing the data, or may be determined based on outlier transactions or transactions having features indicating prohibited conduct or behavior.
  • the classifiers may be built and trained so that classifications may be assigned to particular data points (e.g., transactions) within each of the training datasets 219, 220.
• the training datasets 219, 220 may include different features, such as a platform for the transaction (e.g., mobile, web, etc.), an account number, a transaction identifier (ID), a transaction type (e.g., payment, gambling, etc.), an encrypted transaction ID, a parent transaction ID, a created and/or updated date, a US dollar equivalent amount (e.g., where credits and sent payments may be in a negative format), a local currency amount and/or code, a billing and/or shipping address, a funding source and/or backup funding source, a bank account number, a bank hash-based message authentication code (HMAC), a card number and/or hash, a card BIN HMAC, a card issuer, a balance and/or impact on a balance due to the transaction, a transaction status and/or items within the transaction, notes and/or subject lines within messages for the transaction, automated clearinghouse return codes, an ID on another marketplace or platform, a counterparty name, a counterparty account number,
  • Other exemplary features and/or categories of features in the training datasets 219, 220 that may be important to training the values and weights of a machine learning model may include risk rules regarding flagging of transactions as incorrect descriptions or messages, complaints and flags by other parties within transactions, gambling activities including fantasy sports, specific country accounts and transaction activities from countries marked as high risk for fraud, a same or similar account owner for a sender and receiver in a transaction, counterfeit flagged accounts, volume of payments in a high risk transaction corridor or category, a spike in activity or transaction value after a dormant or inactive period, a number of transactions and total amount (including if the transactions were cross-border transactions), a previous account takeover flag, a malicious seller flag, an account restriction due to previous malicious use or rule violation, a cross-border payment from a device using in-person payment instrument processing (e.g., through processing a payment card EMV chip or magnetic stripe to provide the payment), a check deposit amount and transfer of deposited funds, a deposit and withdrawal/transfer of all or
  • the machine learning-based model may utilize a supervised machine learning algorithm, function, or technique that utilizes continuous and/or iterative learning to generate the model.
• the machine learning-based model may be implemented as a deep learning network, including a convolutional neural network, a recurrent neural network, or a deep neural network.
  • the anomaly detection module 280 may utilize feedback and annotations or labeling from the agent device 120 to iteratively train the model.
• transactions in the training data set and/or other data sets may be flagged using the machine learning technique to identify prohibited transactions, where the agent device 120 may send an indication that the flagged transactions were not actually prohibited (e.g., not indicative of or including fraud).
  • Identification of these false positives may be used to retrain the machine learning-based model in a continuous and/or iterative process so that false positives may be reduced and/or eliminated, and the machine learning-based model may more accurately predict and detect fraud or other prohibited transactions.
  • the machine learning-based model is trained for detection of prohibited transactions, as well as review of results from the machine learning-based model that has been modeled for prohibited transaction detection.
• Support vector machines (SVMs) may be used to implement machine learning. An SVM training algorithm — which may be a non-probabilistic binary linear classifier — may build a model that predicts whether a new example falls into one category or another.
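• A toy illustration of that SVM point, with invented transaction features (amount, count of prior flags):

```python
# Linear, non-probabilistic binary classifier separating "valid" from
# "prohibited" transactions. Features and labels are invented.
from sklearn.svm import LinearSVC

X = [[10, 0], [25, 0], [15, 1], [900, 4], [750, 3], [1200, 5]]
y = [0, 0, 0, 1, 1, 1]          # 0 = valid, 1 = prohibited

model = LinearSVC().fit(X, y)
print(model.predict([[20, 0], [1000, 4]]))   # expected: [0 1]
```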
  • Bayesian networks may be used to implement machine learning.
• a Bayesian network is an acyclic probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). The Bayesian network can represent the probabilistic relationship between one variable and another.
• the anomaly detection module 280 determines whether at least one likelihood score in the vector of likelihood scores fails to satisfy one or more predetermined thresholds. For example, the anomaly detection module 280 may use the threshold comparator module 284 to compare each of the vector of likelihood scores against the one or more predetermined thresholds to determine whether at least one likelihood score fails the one or more predetermined thresholds. The anomaly detection module 280 may also determine that the at least one likelihood score corresponds to a particular device attribute combination.
• Next, at step 408, the anomaly detection module 280 detects an anomaly in the user interaction based on the at least one likelihood score failing to satisfy the one or more predetermined thresholds.
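• The comparison can be pictured as a minimal sketch over an invented vector of likelihood scores and an arbitrary threshold:

```python
# Flag any likelihood score that falls below its predetermined threshold;
# the failing score identifies the suspect device attribute combination.
likelihood_scores = {"depth+width": 0.92,
                     "version+fonts": 0.04,
                     "language+plugins": 0.88}
THRESHOLD = 0.10                                  # illustrative value

failing = {combo: s for combo, s in likelihood_scores.items() if s < THRESHOLD}
if failing:
    print("anomaly candidates:", failing)        # -> {'version+fonts': 0.04}
```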
  • the anomaly detection module 280 may select an anomaly detection engine from different anomaly detection engines to detect an anomaly in the user interaction based on a result of the comparison operation that indicates at least one likelihood score in the vector of likelihood scores does not satisfy the one or more predetermined thresholds.
  • the probability density function module 285 may process device attributes from the device data with a probability density function algorithm. The probability density function module 285 can generate a probability density distribution with the probability density function algorithm. In some aspects, the probability density distribution includes a two-dimensional relationship between different likelihood values against different device attribute values for a given device attribute combination from the different device attribute combinations.
  • the anomaly detection engine can generate a likelihood metric indicating that a corresponding attribute value associated with the user interaction is an actual occurrence.
  • the probability density function module 285 may obtain a user profile that indicates a user behavior associated with one or more of the user device or a user account of a service, and the probability density function module 285 may determine a likelihood metric indicating that a corresponding attribute value associated with the user interaction is an actual occurrence, based on the user profile.
  • the cluster engine 286 of the anomaly detection module 280 can apply a clustering algorithm to a vector of likelihood scores to generate one or more clusters. This may be an alternative track to the use of the non-parametric estimation module 283 and the probability density function module 285.
  • the outlier detection module 289 of the anomaly detection module 280 can detect a data point corresponding to at least one likelihood score of the vector of likelihood scores that is outside of the one or more clusters.
  • the outlier detection module 289 can apply a Euclidean multi-dimensional distance calculation to determine a distance from the one or more clusters in N-dimensional space, where N is a positive integer.
• the outlier detection module 289 determines whether the data point is a member of the one or more clusters or an outlier based on the distance. In some aspects, the outlier detection module 289 determines that the data point is an outlier based on the distance being greater than a threshold distance. In this respect, the outlier corresponds to the anomaly in the user interaction.
• In still other implementations, the singular value decomposition engine 287 of the anomaly detection module 280 can apply a singular value decomposition algorithm to a vector of likelihood scores to determine one or more principal components in the vector of likelihood scores in terms of eigenvalues and eigenvectors.
• each of the one or more principal components identifies which of the device attributes has the highest variance based at least on a ranking of the eigenvectors by order of variance.
• the principal component analysis engine 288 of the anomaly detection module 280 then applies a projection operation by matrix multiplication to project new attribute values in a direction of a principal component through an eigenvector that corresponds to the principal component.
• the principal component analysis engine 288 can generate a cluster in an eigenspace with the projected new attribute values.
• the outlier detection module 289 can then detect a data point corresponding to at least one projected new attribute value that is outside of the cluster.
  • the outlier detection module 289 applies a Euclidean multi-dimensional distance calculation to determine a distance from the cluster and determines whether the data point is a member of the cluster or an outlier based on the distance. In some aspects, the outlier detection module 289 determines that the data point is an outlier based on the distance being greater than a threshold distance. In this respect, the outlier corresponds to the anomaly in the user interaction.
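• As an illustrative sketch only (NumPy, synthetic likelihood vectors, arbitrary threshold), the SVD/eigenspace track could be approximated as:

```python
# Project likelihood vectors onto the top principal directions, then call a
# new point an outlier when its distance from the projected cluster is large.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(0.8, 0.05, size=(50, 5))     # historical likelihood vectors
centered = scores - scores.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:2]                               # top-2 principal directions

projected = centered @ components.T               # cluster in eigenspace
new_vec = np.array([0.1, 0.9, 0.2, 0.8, 0.1])
new_point = (new_vec - scores.mean(axis=0)) @ components.T

dist = np.linalg.norm(new_point - projected.mean(axis=0))
print("outlier" if dist > 3 * projected.std() else "member of cluster")
```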
  • the remedial action module 274 of the second pass fraud detection module 270 issues a remedial action to the customer device 130 through the API 202 in response to detecting the anomaly in the user interaction.
  • the remedial action module 274 may further authenticate the customer device 130 when the anomaly in the user interaction is detected.
• the remedial action module 274 issues, through the API 202, a first remedial action that denies the customer device 130 access to the service when the at least one likelihood score is less than a first predetermined threshold.
• the remedial action module 274 also may issue a second remedial action that prompts the customer device 130 to provide additional authentication information when the at least one likelihood score is greater than the first predetermined threshold and less than a second predetermined threshold.
• the event detection server 150 may grant the customer device 130 access to the service when each likelihood score in the vector of likelihood scores is greater than the second predetermined threshold.
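• The tiered response can be summarized in a short hedged sketch; the threshold values are placeholders, not disclosed parameters:

```python
# Deny below the first threshold, request step-up authentication between the
# two thresholds, and grant access when every score clears the second.
def remedial_action(scores, t1=0.05, t2=0.30):
    low = min(scores)
    if low < t1:
        return "deny access to the service"
    if low < t2:
        return "prompt for additional authentication information"
    return "grant access to the service"

print(remedial_action([0.92, 0.04, 0.88]))   # -> deny
print(remedial_action([0.92, 0.20, 0.88]))   # -> step-up authentication
print(remedial_action([0.92, 0.75, 0.88]))   # -> grant
```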
  • the remedial action module 274 sends one or more messages to the customer device 130 as a next remedial action to prompt the customer device 130 to supply further authentication information, when an anomaly in the user interaction is detected.
• the feedback module 272 of the second pass fraud detection module 270 may receive user feedback in response to the one or more messages.
  • the feedback module 272 may aggregate the feedback and/or post-process the user feedback to a suitable format.
• the feedback module 272 can provide the user feedback to the non-parametric statistical models in the non-parametric estimation module 283.
  • the user feedback indicates updated authentication information of the customer device 130.
  • the anomaly detection module 280 may adjust the one or more predetermined thresholds based on the user feedback.
  • FIG. 5 illustrates a block diagram of a data profile module 250 for browser data retrieval and analysis, according to an implementation of the present disclosure.
  • the data profile module 250 represents a novel framework for in-depth and automatic testing of current and prerelease web browser applications with the intent of generating a comprehensive database of their behaviors, features, and functionality in order to provide additional capabilities for detection and mitigation of malicious actors using modified or spoofed client devices.
  • the data profile module 250 may allow baselining for statistical modeling of browser traffic and can generate alert notifications of new functionality that can be implemented into anti-fraud defense mechanisms.
• the data profile module 250 includes a retrieval engine 254 that is adapted to fetch device data, such as web browser information from multiple data sources 510.
• the data sources 510 may include browser repositories 510a-510c storing information associated with different types of browsers (e.g., different manufacturers, different models, different versions which include current and also pre-release versions and models, etc.) and a fraud tool repository 510d storing information associated with known tools used by malicious users for launching attacks on web servers.
• The data profile module 250, knowledgeable about different vendors’ browser release channels and their typical download URLs, regularly fetches new versions. This may be in response to a new release, or may be part of a regular daily process. These may be precompiled binary downloads, or the data profile module 250 itself may be able to compile instances of the browser from retrieved source code.
• the data profile module 250 may retrieve or otherwise obtain executable code associated with different browser applications (e.g., from different manufacturers, different models, different versions, etc.). For example, the data profile module 250 may obtain a Google Chrome browser version 2.1, a Google Chrome browser version 3.0, an Apple Safari browser version 11.0, an Apple Safari browser version 12.0, a Microsoft Edge browser version 3.2, etc., and store the different browser applications in the data profile repository 256.
  • the controller 252 of the data profile module 250 can deploy a browser instance 530 (associated with any one of the browser applications stored in the data profile repository 256) in a testing environment 502.
  • the testing environment 502 may include a virtual machine or a container using virtualization technologies.
  • the controller 252 can deploy the browser instance 530 in a virtual machine. The intent is to have an entirely ‘clean’ instance of the browser with no customizations or user data that may change its behavior from the pre-set defaults.
• This automation process to deploy the browser instance 530 can target several different methods, for example: (1) deployment of the browser instance 530 to a new virtual machine, using a fully virtualized environment, (2) deployment of the web browser instance 530 to a segregated container, (3) automation of the web browser instance 530 on a hardware device, for example, using a remote automation process to remotely control the web browser on a device (e.g., a smart phone such as an ANDROID™ phone), and (4) automation of the web browser instance 530 on an emulated device, for example, using the web browser instance 530 (e.g., SAFARI™, etc.) within an emulator.
  • the browser instance 530 of the fetched web browser is invoked in the testing environment 502.
• the browser instance 530 can be controlled and run through a series of interactions with a web server 540.
• the browser instance 530 can interact with the web server 540 via the virtual machine.
  • the controller 252 can inspect the data that the web browser instance 530 sends for anti-fraud purposes, and can run programming code (e.g., JavaScript fingerprinting code, etc.) in an attempt to identify users over time for anti-fraud purposes.
• the data profile module 250 can detect spoofed web browsers running on the customer device 130 used for abuse and financial fraud based on comparing the device data obtained from the web browsers running on the customer device 130 and the data obtained through the automated analysis process via the interactions between the browser instance 530 and the web server 540 as described herein.
• These features may include network/communication details, such as: (1) the HTTP headers the browser sends in a brand new unmodified install, including (i) the values that are provided, (ii) the order the headers are sent in, and (iii) which headers are sent under what conditions; (2) when connecting to a secure site, the TLS ciphers that the web browser supports, and the preference order in which they are requested; and (3) the protocols the web browser supports (e.g., HTTP 1.0, 1.1, QUIC, 2.0, 3.0, etc.) and the supported communication methods (e.g., Websockets, WebRTC, etc.)
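• A minimal server-side capture of item (1), assuming Python's standard http.server (TLS cipher preferences would be observed at the TLS layer and are not shown):

```python
# Log the raw HTTP request headers in the exact order the browser sent them,
# so header values and ordering can be profiled per browser build.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HeaderLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        print(list(self.headers.items()))   # values in wire order
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), HeaderLogger).serve_forever()
```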
• the programming code (e.g., the JavaScript fingerprinting code) can identify device features, such as: (1) the document object model (DOM) processes and attributes the web browser claims to support; (2) the order in which the web browser lists these attributes, when requested; (3) an indication of whether the aforementioned processes appear to be “native” values generated by the browser software, or overwritten by the user; (4) the browser-specific processes and attributes it exposes (e.g., FIREFOX™ exposes FIREFOX™-specific CSS attributes with the “-moz-” prefix); and (5) how the web browser handles and responds to error conditions, including those deliberately invoked (e.g., when executing JavaScript code to evaluate the mathematical expression “ThisIsNotANumber / 0”, CHROME™ returns the error message “Uncaught ReferenceError: ThisIsNotANumber is not defined” whereas SAFARI™ returns the error message “ReferenceError: Can’t find variable ThisIsNotANumber”).
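• A hedged sketch of such client-side interrogation, assuming the selenium package and a locally installed Chrome/chromedriver (not the disclosed fingerprinting code):

```python
# Drive a clean browser instance, enumerate the DOM attributes it exposes,
# and record its wording for a deliberately invoked error condition.
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")

dom_attrs = driver.execute_script("return Object.getOwnPropertyNames(window);")
error_text = driver.execute_script("""
    try { return String(ThisIsNotANumber / 0); }
    catch (e) { return e.message; }  // browser-specific error wording
""")
print(len(dom_attrs), error_text)
driver.quit()
```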
• the data profile module 250 can thus record both client-side and server-side details.
  • a remote process can be invoked to serve as an automation tooling mechanism to drive the web browser instance 530 through a suite of tests and actions.
  • the browser instance 530 can record the web browser traffic to the web server 540 as well as the return signaling from the web server 540.
  • the web server 540 can record its outgoing web browser traffic to the browser instance 530.
  • the recorded client-side data and server-side data can be stored in a data structure to maintain a database of web browser behavior, attributes, functionality, etc.
  • the data profile module 250 can be used for alerting of new or forthcoming attributes or behaviors in web browsers.
  • a currently -released version of a particular web browser may expose a particular number of JavaScript DOM attributes (e.g., 100), as determined through the automated deployment process described herein.
• the data profile module 250 can download the latest Beta version and observe that the number of JavaScript DOM attributes exposed by the latest Beta version is different from the current version (e.g., 103 attributes).
• the data profile module 250 can alert a developer or analyst, giving them the name and location of the additional attribute(s) that is not exposed by the current version, its default value, and how it behaves in certain forced-error conditions.
• the data profile module 250 may also determine if the additional attribute(s) is useful for anti-fraud purposes, and implement the additional attribute(s) within a fingerprinting mechanism such that when the new version of the particular web browser is released, the data profile module 250 can be updated with new defensive capabilities.
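• The version-diff alert reduces to a set comparison; in sketch form, with invented attribute names:

```python
# Compare the attribute set recorded for the current release against a
# fetched Beta build and alert on anything added or removed.
current = {"navigator.userAgent", "navigator.plugins", "window.chrome"}
beta = current | {"navigator.gpu", "window.sharedStorage"}   # invented names

added = sorted(beta - current)
removed = sorted(current - beta)
if added or removed:
    print("alert analyst:", {"added": added, "removed": removed})
```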
• the data profile module 250 can be used for alerting of changed or removed attributes or behaviors. Similarly to the first use case, if the data profile module 250 determines that a useful feature used by an anti-fraud system is removed in an upcoming version, or that it now returns a different result or behaves in a different way than previously observed, the data profile module 250 can determine mitigations or alternative approaches (e.g., updating the fraud determination process to reflect the changes) instead of taking a reactive action when data quality suddenly drops or fraud increases.
• the data profile module 250 can baseline the features.
  • the data profile module 250 may perform risk scoring based on technical data that the web browser sends to an online service provider (e.g., service provider server 110).
  • the technical data may indicate that the web browser supports a specified number of encryption ciphers and indicates a prioritized order for the ciphers. This type of information may be part of the browser’s internal operations and is not typically modifiable by the user.
• the data profile module 250 can, over time, observe typical patterns of data associated with a particular browser version.
• the data profile module 250 can determine “known good” baseline profiles that can be used to accelerate this learning process. This known good data can then be sent to an analyst (e.g., to the agent device 120) for implementation in a rule or model, can be fed automatically into a model for use in training, or can be fed into a live risk assessment for immediate use.
  • the data profile module 250 can be used for analyzing criminal fraud and/or abuse tools. For example, malicious attackers may invoke anti-detect processes to submit spoofed browser data when interacting with the online service provider. The spoofed browser data is used to hide the fact that these malicious attackers are using such a tool, instead pretending to be a normal web browser. In other examples, the malicious attackers may spoof certain attributes in an attempt to appear like a particular customer who they know uses a certain browser profile. It is common for these types of tools to be built on top of existing web browser processes with additional modifications made. The data profile module 250 may load copies of criminal fraud tools and profile them in depth, comparing results to a legitimate version of the web browser application in order to determine the exact changes made by the malicious attacker, which can then be used to implement anti-fraud defenses against the tool.
  • the data profile module 250 can be used for detection of mismatching features.
  • the data profile module 250 may detect valid but mismatching feature sets compared to the claimed agent device. For example, a client (e.g., the customer device 130) that claims to run a non-desktop version of a select web browser but has the feature set and attributes of a desktop version of another select web browser.
• the data profile module 250 may determine that the client is involved in malicious activities with the service provider server 110 when such a mismatch is detected.
• the data profile module 250 can send precisely targeted anti-fraud challenges to suspect devices. For example, existing anti-fraud defenses may examine particular DOM values or execute particular code statements in an attempt to fingerprint the client device.
• the data profile module 250 may deploy highly targeted fingerprinting code. For example, if the data profile module 250 can determine that a particular version of FIREFOX™ has one hundred unique attributes, the data profile module 250 may prompt each client for three different attributes (out of the one hundred unique attributes), thus yielding over 20 million potential combinations of tests. This can significantly increase the complexity for an adversary to gain a complete understanding of the tests that the data profile module 250 can run.
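• In sketch form (with a stand-in attribute catalog), issuing such a per-client challenge is simply a random draw from the profiled attribute set:

```python
# Select a small random subset of the attributes profiled for the claimed
# browser version, so no single client reveals the full test suite.
import random

known_attrs = [f"attr_{i}" for i in range(100)]   # stand-in for profiled attrs
challenge = random.sample(known_attrs, k=3)
print("interrogate client for:", challenge)
```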
  • FIG. 6 conceptually illustrates an exemplary workflow 600 of the automated device data retrieval and analysis of FIG. 5, according to an implementation of the present disclosure.
  • the block diagram of FIG. 6 will be described in reference to the processes of FIG. 7.
  • FIG. 7 is a flowchart of an example process 700 of automated device data retrieval and analysis, according to an implementation of the present disclosure.
  • One or more of the steps 702-714 of process 700 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 702-714.
• Some examples of computing devices, such as computer system 1100, may include non-transitory, tangible, machine-readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 700.
  • the process 700 includes a number of enumerated steps, but aspects of the process 700 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
  • the process 700 begins at step 702, where the retrieval engine 254 of the data profile module 250 receives, over the network 140, device data associated with an application over a communication channel from one or more data sources.
  • the retrieval engine 254 may receive device data associated with multiple applications (e.g., different manufacturers, different models, different versions, etc.) from the one or more data sources.
• the retrieval engine 254 can receive first device data that corresponds to a first version of different versions for the at least one application, for example, from one of the browser repositories 510a-c.
• the retrieval engine 254 can receive the first device data and second device data that corresponds to a second version of the different versions that is at least in part different from the first version, for example, from another one of the browser repositories 510a-c.
  • the retrieval engine 254 is communicably coupled to the API 202.
  • the controller 252 of the data profile module 250 invokes an instance (e.g., the browser instance 530) of the application in a remote processing environment (e.g., the testing environment 502) using the device data.
  • the remote processing environment is remote to the customer device 130.
  • the instance of a web browser is invoked and controlled by an automation process executable in the remote processing environment.
  • the controller 252 deploys the at least one application to a virtual machine of different virtual machines using a virtualized environment. In other aspects of invoking the instance of the at least one application in the remote processing environment, the controller 252 deploys the at least one application to a segregated container accessible to the controller of the service provider server 110. In still other aspects of invoking the instance of the at least one application in the remote processing environment, the controller 252 establishes a connection with the instance of the at least one application executed on a remote communication device. In this respect, the controller 252 can establish remote control of the instance of the at least one application through the connection to the remote communication device.
  • the controller 252 can establish a connection with the instance of the at least one application executed on a remote emulation device. In this respect, the controller 252 can establish remote control of the instance of the at least one application through the connection to the remote emulation device.
  • the controller 252 causes the instance of the application to interact with a web server over the network.
  • the controller 252 can send, to a remote processing environment, a control message that causes the instance of the at least one application to interact with the web server 540 over the network 140.
  • the controller 252 sends, to the remote processing environment, a first control message that prompts the instance of the at least one application to send a request to the web server 540 for returning a process script that is executable on the remote processing environment.
  • the first control message may include an instruction to the web browser to request “Test 1.”
  • the instance of the web browser transmits a request for “Test 1.”
  • the web server 540 transmits interrogation scripts and web page documents to the web browser in response to the request.
• the web server 540 responds to the request by transmitting a return message containing script code for executing “Test 1.”
  • the controller 252 can access traffic data associated with one or more interactions between the instance of the application and the web server.
  • the traffic data indicates a behavior of application data associated with the at least one application based on the one or more interactions with the web server 540.
  • the controller 252 can acquire first traffic data transmitted from the instance of the at least one application to the web server.
  • the first traffic data includes one or more of (1) behavioral patterns associated with the at least one application in the one or more interactions over a predetermined period of time, (2) multiple attributes of the at least one application that is set for the one or more interactions, or (3) multiple functionality parameters of the at least one application that is activated in the one or more interactions.
  • the controller 252 executes the process script on the remote processing environment with the instance 530 of the at least one application.
  • the controller 252 can interrogate the at least one application with the executed process script.
  • the interrogating includes the one or more interactions between the instance 530 and the web server 540.
  • the controller 252 can then log results of the interrogating in a data structure of the data profile repository 256.
  • the controller 252 can record client-side activity data (e.g., occurring at the browser instance 530), such as DOM attributes and values and/or generated error messages, among others.
  • the controller 252 can record data sent from the browser instance 530 to the web server 540, such as supported TLS ciphers, HTTP request headers, among others.
  • the controller 252 can invoke a predetermined error condition in the one or more interactions.
  • the controller 252 can log a response of the browser instance 530 of the at least one application based on the predetermined error condition.
  • the controller 252 determines one or more features of the application in a native state from the traffic data.
  • the traffic data may indicate the behavior of the application.
• the controller 252 may determine the one or more features of the application based on the behavior of the application.
• values in the native state may correspond to default values (or reset values) set for attributes of the at least one application.
  • the controller 252 generates a data profile of the application that indicates the one or more features in the native state.
  • the controller 252 can generate a data structure that includes application data that corresponds to the first version.
  • the application data indicates one or more of (1) behavioral patterns associated with the first version over a predetermined period of time, (2) multiple attributes of the at least one application that correspond to the first version, or (3) multiple functionality parameters of the at least one application that correspond to the first version.
• the controller 252 can generate a first data structure that comprises first application data that corresponds to the first version and a second data structure that comprises second application data that corresponds to the second version.
  • the first data structure and the second data structure are prioritized based on a first weighting associated with the first application data and a second weighting associated with the second application data. In some aspects, the first weighting and the second weighting are based at least in part on respective lifecycles of the first application data and the second application data.
• malware activity performed by a user device may be detected based on the data profile.
  • the controller 252 provides the traffic data and the data profile to the data profile repository 256 accessible to one or more remote devices associated with a service.
  • the remote devices may detect malicious activity performed by the customer device 130 by comparing device data associated with the customer device 130 and the data profile.
• the remote device may compare the device data received from the customer device 130 (which may include data associated with interactions between the customer device 130 and the service provider server 110) against the data profile associated with an application of the particular type (e.g., a particular manufacturer, a particular model, a particular version, etc.). When the device data deviates from the data profile by more than a threshold, the remote device may determine that the customer device 130 is used to conduct malicious activity (e.g., account take-over, etc.). In some implementations, the controller 252 generates a notification that indicates the behavior of the application data associated with the at least one application.
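• A minimal sketch of that comparison, with invented profile fields and an arbitrary deviation threshold:

```python
# Count observed device attributes that deviate from the "known good" data
# profile for the claimed browser build; flag the device past a threshold.
profile  = {"pixel_depth": 24, "tls_cipher_count": 17, "dom_attr_count": 100}
observed = {"pixel_depth": 24, "tls_cipher_count": 9,  "dom_attr_count": 73}

deviations = [k for k in profile if profile[k] != observed.get(k)]
DEVIATION_THRESHOLD = 1                            # illustrative value
if len(deviations) > DEVIATION_THRESHOLD:
    print("possible spoofed client; mismatched:", deviations)
```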
  • the controller 252 may provide, through the API 202 over the network 140, the notification to the customer device 130.
  • the controller 252 provides the data profile from the data profile repository 256 to a remote engine associated with the service that performs one or more detection operations with the data profile and detects potential malicious activity associated with the at least one application from the one or more detection operations.
  • the stored data can be used to enhance anti-fraud capabilities.
  • FIG. 8 conceptually illustrates an exemplary workflow 800 of a voice vector framework for authenticating user interactions, according to an implementation of the present disclosure.
  • FIG. 8 illustrates actions that authenticate a caller who calls the service provider server 110 in association with a user account with the service provider server 110.
  • the caller may be calling to perform a transaction in association with the user account (e.g., resetting a password, performing a payment transaction, etc.).
  • the workflow 800 may include actions for authenticating the caller for accessing the user account (or for performing the transaction for the user account) based on voice data associated with the caller.
• the voice vector framework first authenticates the caller as a legitimate customer and then checks if the same caller has called before with similar intentions on multiple accounts.
• the framework can provide real-time feedback by alerting customer service agents of such suspicious activity.
  • the framework can leverage various aspects of speech recognition and voice identification technology as well as intent identification on the incoming customer call.
• the framework includes a process of generating a suite of machine learning-based models that encompasses the variations that are representative of an entity’s customers.
• the framework provides a two-stage procedure that first verifies the identity of a caller and second checks whether the same caller has previously called with the same intention on a different account not belonging to him or her. This framework can identify fraudsters, generate a voice blacklist, and alert customer service agents in real time to mitigate any security lapse.
  • FIG. 9 is a flowchart of an example process 900 of a voice vector framework for verifying user interactions, according to an implementation of the present disclosure.
• One or more of the steps 902-916 of process 900 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 902-916.
• Some examples of computing devices, such as computer system 1100, may include non-transitory, tangible, machine-readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 900.
  • the process 900 includes a number of enumerated steps, but aspects of the process 900 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
  • the process 900 begins at step 902, where the feature extraction module 210 receives voice data associated with a voice communication between a user of a user device and a service provider server.
  • For example, when a caller uses a device (e.g., the customer device 130) to make a phone call to an agent (e.g., a human agent, a robot, etc.) of the service provider server 110, the voice vector framework, at action 810, may receive the voice data associated with the phone call.
  • the feature extraction module 210 extracts user attributes from the voice data.
  • the feature extraction module 210 can extract words and/or phrases from the voice transcript.
  • the controller 252 obtains user account information from a user account associated with the voice communication. For example, based on the extracted user attributes, the controller 252 may determine that the voice communication is regarding a user account with the service provider server 110 (e.g., disputing a transaction associated with the user account, resetting the user account, etc.).
  • the classification module 230 can select a classifier that corresponds to a select combination of features based on the user account information. For example, at action 822, the classification module 230 can retrieve a trained model based on the account information of the caller generated by the feature extraction module 210. In some examples, the trained model may be a gender-age-accent model. In some aspects of selecting the classifier that corresponds to the predetermined combination of features, the classification module 230 can select a select number of classifiers that correspond to different user attribute combinations based on different attributes in one or more of the user attributes or the user account information.
  • the classification module 230 applies the classifier to the user attributes and/or the voice vectors.
  • the audio feature extraction engine 217, in coordination with the classifier 232, can generate a voice vector comprising multiple scores based on the applied classifier. For example, at action 820, the audio feature extraction engine 217 can generate the voice vector with a representation of words and/or phrases as audio attributes extracted from the audio call. In some aspects, each of the scores in the voice vector indicates a likelihood that an attribute of the different user attributes corresponds to an attribute of the select combination of features. In some aspects of generating the voice vector, the audio feature extraction engine 217 can mask one or more of the scores based at least in part on one or more attributes in the user account information.
  • the audio feature extraction engine 217, in coordination with the classifier 232, can generate the voice vector with the masked one or more of the scores.
  • the masked one or more of the scores are excluded from the comparing operation in step 914.
  • the audio feature extraction engine 217 can apply a weighting to one or more of the scores based at least in part on one or more attributes in the user account information.
  • the audio feature extraction engine 217 in coordination with the classifier 232, can generate the voice vector with the weighting applied to the one or more of the scores.
  • the voice scoring module 234 may compare the voice vector to a baseline vector that corresponds to a predetermined combination of features (e.g., the selected combination of features). In some aspects of comparing the voice vector to the baseline vector, the voice scoring module 234 can determine a distance between one or more of the scores in the voice vector and corresponding values in the baseline vector. In this respect, the voice scoring module 234 can determine whether the distance between each of the one or more of the scores and the corresponding values exceeds a predetermined distance threshold. In some aspects of comparing the voice vector to the baseline vector, the voice scoring module 234 can determine whether a difference between the voice vector and the baseline vector is within a predetermined tolerance threshold.
  • the voice scoring module 234 can determine that the user device is verified when the difference between the voice vector and the baseline vector is determined to be within the predetermined tolerance threshold. In other aspects of comparing the voice vector to the baseline vector, the voice scoring module 234 can obtain multiple voice vectors from user account information associated with respective ones of multiple different user accounts, and the voice scoring module 234 can compare the voice vector to each of the voice vectors.
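A minimal sketch of the scoring, masking, weighting, and baseline comparison described above might look as follows, assuming NumPy; the threshold value and the NaN-masking convention are illustrative choices, not the disclosed design.

```python
# Illustrative voice-vector construction and baseline comparison.
import numpy as np

DISTANCE_THRESHOLD = 0.3  # assumed per-score tolerance


def build_voice_vector(scores, mask_idx=None, weights=None):
    """scores: per-attribute likelihoods from the applied classifier."""
    v = np.asarray(scores, dtype=float)
    if weights is not None:                 # account-driven weighting
        v = v * np.asarray(weights, dtype=float)
    if mask_idx is not None:                # account-driven masking
        v[np.asarray(mask_idx, dtype=int)] = np.nan
    return v


def is_verified(voice_vector, baseline):
    """Verified when every unmasked score is within the threshold."""
    baseline = np.asarray(baseline, dtype=float)
    keep = ~np.isnan(voice_vector)          # masked scores are excluded
    return bool(np.all(np.abs(voice_vector[keep] - baseline[keep])
                       <= DISTANCE_THRESHOLD))


vec = build_voice_vector([0.8, 0.4, 0.9], mask_idx=[1])  # mask the 2nd score
print(is_verified(vec, [0.7, 0.0, 0.85]))  # True: unmasked scores are close
```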
  • the report generation module 260 may send a notification to the agent device 120 associated with the service provider server 110.
  • the report generation module 260 can provide an indication of the caller verification to the agent device 120.
  • the notification includes an indication of whether the user device is verified based on the comparing.
  • the report generation module 260 generates the notification with the indication that indicates that the user device is verified when the voice scoring module 234 determines that the distance between each of the one or more of the scores and the corresponding values does not exceed the predetermined distance threshold.
  • the report generation module 260 generates the notification with the indication that indicates that the user device is not verified when the voice scoring module 234 determines that the distance between the one or more of the scores and the corresponding values exceeds the predetermined distance threshold.
  • the notification prompts the agent device 120 to send one or more queries to the customer device 130 for additional verification information. For example, at action 826, if the caller is verified, a notification can be sent to the agent device 120. Otherwise, an alert is generated and sent to the agent device 120 for additional verification.
  • FIG. 10 is a flowchart of an example process 1000 of a voice vector framework for detecting malicious activity in user interactions, according to an implementation of the present disclosure.
  • One or more of the steps 1002-1020 of process 1000 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 1002-1020.
  • Some examples of computing devices, such as computer system 1100, may include non-transitory, tangible, machine-readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 1000.
  • the process 1000 includes a number of enumerated steps, but aspects of the process 1000 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
  • the process 1000 begins at step 1002, where the feature extraction module 210 receives voice data associated with a voice communication between a user of a user device and a service provider server.
  • the voice communication may be from a phone call from a user of the customer device 130 and an agent (e.g., a human agent, a robot) associated with the service provider server 110.
  • the phone call may be regarding a user account (e.g., disputing a transaction associated with the user account, resetting a password for the user account, etc.).
  • the feature extraction module 210 can extract user attributes from the voice data. For example, at action 810, the feature extraction module 210 can extract words and/or phrases from the voice transcript.
  • the classification module 230 determines an intent of the voice communication based on the user attributes. For example, at action 832, the classification module 230 may determine that the intent of the voice call is a request to reset a password for a user account. Other example intents may include account verification, lift restriction from a user account, among others.
  • the classification module 230 may select one or more of several machine learning-trained classifiers that correspond to the determined intent. As discussed, various voice models corresponding to different intents have been trained using historic voice data associated with one or more user accounts. After a call has been classified as one of the different intents, the voice data associated with the call may be used to train the voice model corresponding to the classified intent.
  • the classification module 230 retrieves trained models based on the intent. In some implementations, the classification module 230 may obtain user account information from a user account associated with the customer device 130, and the classification module 230 selects a machine learning-trained classifier that corresponds to a select combination of features based on the user account information.
  • the classifier 232 applies each of the one or more machine learning-trained classifiers to the user attributes.
  • the classifier 232 applies a selected machine learning-trained classifier to the different user attributes.
  • the audio feature extraction engine 217 in coordination with the classifier 232, generates voice vectors for the one or more machine learning-trained classifiers.
  • the audio feature extraction engine 217 may generate the voice vectors based on a baseline voice vector associated with each of the one or more of the machine learning-trained classifiers.
  • the audio feature extraction engine 217, in coordination with the classifier 232 generates a voice vector that includes scores based on the applied machine learning-trained classifier.
  • each of the scores in the voice vector indicates a likelihood that an attribute of the different user attributes corresponds to an attribute of a predetermined combination of features.
  • the classifier 232 can select a machine learning-trained classifier that corresponds to a voice vector having the highest aggregate score. For example, the classifier 232 may compare the scores associated with the different voice vectors of the different machine learning-trained classifiers. The score may represent a similarity between the voice vector and the user attributes extracted from the voice data of the voice communication. Thus, the higher the score, the more similar the voice in the voice communication is to the voice vector. For example, at action 838, the classifier 232 identifies which voice model the caller matches, yielding a matched model.
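One way to realize the per-intent selection described above is sketched below; the models_by_intent mapping and the lambda scorers are hypothetical stand-ins for the machine learning-trained classifiers, and the mean is just one plausible aggregate.

```python
# Illustrative selection of the matched model by highest aggregate score.
def select_matched_model(intent, user_attributes, models_by_intent):
    best = (None, None, float("-inf"))  # (model name, voice vector, score)
    for name, scorer in models_by_intent[intent].items():
        vector = scorer(user_attributes)         # per-attribute scores
        aggregate = sum(vector) / len(vector)    # aggregate similarity
        if aggregate > best[2]:
            best = (name, vector, aggregate)
    return best


models_by_intent = {
    "password_reset": {
        "voice_a": lambda attrs: [0.9, 0.8],   # close match
        "voice_b": lambda attrs: [0.2, 0.3],   # poor match
    }
}
name, vector, score = select_matched_model(
    "password_reset", {"accent": "unknown"}, models_by_intent)
print(name, score)  # voice_a 0.85
```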
  • the fraud class module 233 in coordination with the voice scoring module 234, may determine a voice signature associated with the selected one of the machine learning-trained classifiers.
  • the fraud class module 233 can determine whether the customer device 130 interacted with the service provider server 110 in a prior interaction associated with a different user account based on the voice signature. For example, at action 840, the fraud class module 233 can retrieve account information for which the same caller called for review in a prior occasion. If a match is found, the fraud class module 233, in coordination with the report generation module 260, can alert an agent associated with the agent device 120, indicating that the current caller called previously for a different user account. Otherwise, the fraud class module 233 does not generate or send a notification to the agent device 120.
  • the fraud class module 233 can access user account information in a data repository communicably coupled to the service provider server 110 and compare the voice signature to a historical voice signature associated with the accessed user account information that is stored in a data structure of the data repository.
  • the fraud class module 233 can determine that a distance between the voice signature and the historical voice signature is within a predetermined tolerance threshold. In this regard, the fraud class module 233 determines that the customer device 130 interacted with the service provider server 110 in the prior interaction.
  • the fraud class module 233 can obtain different historical voice signatures from user account information associated with respective ones of different user accounts, and the fraud class module 233 compares the voice signature to each of the historical voice signatures.
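The signature lookup across accounts described in the preceding aspects might be sketched as follows; the repository layout, the Euclidean distance, and the tolerance value are assumptions for illustration.

```python
# Illustrative historical-signature lookup across user accounts.
import math

TOLERANCE = 0.4  # assumed signature-distance tolerance


def prior_accounts_for_voice(signature, repository, current_account):
    """Accounts other than the current one whose stored historical voice
    signature falls within tolerance of this caller's signature."""
    return [account_id
            for account_id, historical in repository.items()
            if account_id != current_account
            and math.dist(signature, historical) <= TOLERANCE]


repository = {"acct-7": [0.1, 0.9, 0.4], "acct-9": [0.8, 0.2, 0.5]}
hits = prior_accounts_for_voice([0.12, 0.88, 0.41], repository, "acct-1")
print(hits)  # ['acct-7'] -> alert the agent device
```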
  • the report generation module 260 can send a notification to a communication device associated with the service provider server 110 (e.g., the agent device 120).
  • the notification may include an indication of whether the user device interacted with the service provider server in the prior interaction.
  • the report generation module 260 sends a notification of detected fraudulent callers to the agent device 120.
  • a method comprises obtaining, by one or more hardware processors, user interaction data corresponding to a communication from a user to a service provider server via a first communication device, wherein the communication is regarding a user account with the service provider server; extracting, by the one or more hardware processors from the user interaction data, a plurality of features comprising one or more of textual data features or audio data features; determining, by the one or more hardware processors, an intent of the communication using a machine learning-trained classifier based on the plurality of features; grouping, by the one or more hardware processors, the communication with a plurality of past communications that is associated with the intent; deriving, by the one or more hardware processors and from the communication and the plurality of past communications, one or more common attributes; determining, by the one or more hardware processors, a risk level for the communication based on the one or more common attributes; and restricting, by the one or more hardware processors, access to one or more services of the service provider server through the user
  • the first communication device was authenticated to access the user account via a first authentication process
  • the method further comprises analyzing the plurality of past communications; detecting one or more anomalies within the plurality of past communications based on the analyzing; and in response to detecting the one or more anomalies, performing a second authentication process for the first communication device for accessing the user account; 2) the detecting the one or more anomalies comprises generating a plurality of clusters within the plurality of previous communications using one or more clustering algorithms, wherein each of the plurality of clusters corresponds to a unique pattern of activity; and determining whether the unique pattern of activity corresponds to a malicious activity; 3) the providing the indication comprises determining whether a threshold number of anomalies is detected in the communication and the plurality of past communications; generating a report indicating the one or more anomalies when the threshold number of anomalies is detected; and transmitting the report to a second communication device; 4) the obtaining the user interaction data comprises obtaining textual data associated with a
  • the method further comprising generating a plurality of machine learning-based networks corresponding to a plurality of actionable insight categories; 8) the method above further comprising training each of the plurality of machine learning-based networks with a respective training dataset to form a plurality of machine learning-trained classifiers, the respective training dataset comprising labeled interaction data indicating what information pertains to which of the plurality of actionable insight categories; and selecting, from the plurality of machine learning-trained classifiers, the machine learning-trained classifier for determining the intent; 9) the method further comprising determining a plurality of clusters of communications, wherein each cluster in the plurality of clusters of communications corresponds to a type of event; determining that the communication is not mapped to any one of a plurality of clusters of communications; determining a particular event type corresponding to the communication; and generating a particular cluster for the particular event type; and/or 10) the method further comprising accessing a plurality of user interaction datasets associated with respective ones of a plurality of interactions between
  • a system comprises a non-transitory memory; and one or more hardware processors coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising in response to detecting a communication from a first communication device to a service provider server, obtaining user interaction data associated with the communication, wherein the first communication device is authenticated to access the service provider server via a first authentication process; extracting a plurality of features from the user interaction data, wherein the plurality of extracted features comprises textual data features and audio data features; classifying the plurality of extracted features into an actionable insight category from a plurality of actionable insight categories using a classifier, wherein the actionable insight category corresponds to fraudulent activity; generating a plurality of clusters with the plurality of extracted features using one or more clustering algorithms; detecting one or more anomalies in at least one of the plurality of clusters through an anomaly detection operation; and performing a second authentication process for further authenticating the first communication device based on the
  • the operations further comprise determining whether a threshold number of anomalies is detected in the at least one of the plurality of clusters; generating a report indicating the one or more anomalies when the threshold number of anomalies is detected; and sending, through the application programming interface to the second communication device, the report; 2) the classifying the plurality of extracted features comprises determining an intent of the interaction from the plurality of extracted features using the classifier, wherein the intent corresponds to the actionable insight category; 3) the operations further comprise selecting one of a plurality of machine learning-trained classifiers based on a comparison of performance metrics between each of the plurality of machine learning-trained classifiers, wherein the determining the intent of the interaction comprises determining the intent of the interaction using the selected one of the plurality of machine learning-trained classifiers; and/or 4) the operations further comprise generating a plurality of machine learning-based networks based on the plurality of actionable insight categories, wherein the plurality of machine learning-based networks corresponds to
  • a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising accessing user interaction data associated with an interaction between a first communication device and a service provider server associated with a service; generating, using a feature extraction engine of the service provider server, a plurality of feature representations of the user interaction data, the plurality of feature representations respectively corresponding to extracted features that comprise one or more of textual data features or audio data features; determining an intent of the interaction from the plurality of feature representations using a classifier, wherein the intent corresponds to one of a plurality of actionable insight categories; mapping the interaction to a first cluster of a plurality of first clusters based at least in part on the intent, wherein each of the plurality of first clusters corresponds to a particular type of activity in a first actionable insight category of the plurality of actionable insight categories; and issuing a remedial action for the interaction based on the mapping of the interaction to the first cluster, the
  • the operations further comprise applying the plurality of first clusters through an anomaly detection operation; detecting whether one or more anomalies are present in at least one of the plurality of first clusters based on the anomaly detection operation; issuing, through an application programming interface to the first communication device, a request that further authenticates the communication device when the one or more anomalies are detected in the at least one of the plurality of first clusters; and providing, through the application programming interface to a second communication device associated with the service provider server, an indication of the one or more anomalies; 2) the providing the indication comprises determining whether a threshold number of anomalies is detected in the first cluster; generating a report indicating the one or more anomalies when the threshold number of anomalies is detected; and sending, through the application programming interface to the second communication device, the report; and/or 3) the operations further comprise generating a plurality of machine learning-based networks based on the plurality of actionable insight categories, wherein the plurality of machine learning-based networks corresponds to respective ones of the
  • a system comprises a non-transitory memory; and one or more hardware processors coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising receiving device data of a user device during a communication session between the user device and a service provider server, wherein the device data represents a plurality of device attributes associated with the user device, and wherein the user device is authenticated through a first authentication process for the communication session; determining, from the plurality of device attributes, a plurality of device attribute combinations; generating a vector of likelihood scores for the plurality of device attribute combinations based on a plurality of statistical models; determining, from the plurality of device attribute combinations, at least one device attribute combination that does not satisfy one or more predetermined thresholds based on the generated vector of likelihood scores; detecting an anomaly in the communication session based on the determining that the at least one device attribute combination does not satisfy the one or more predetermined thresholds; and in response to detecting the anomaly, performing a second
  • the operations further comprise denying the user device access to a service of the service provider server in response to determining that the vector of likelihood scores is less than a threshold; 2) the second authentication process comprises prompting the user device to provide additional authentication information; 3) the operations further comprise granting the user device with access to a service of the service provider server when the user device is authenticated through the second authentication process; 4) the operations further comprise detecting that the user device has established the communication session with the service provider server, wherein the device data is received in response to the detecting that the user device has established the communication session, and wherein the communication session indicates a device type of the user device; 5) the operations further comprise receiving a first plurality of logs indicating a first plurality of events associated with a plurality of user devices that occur in a first window of time, wherein each event of the first plurality of events comprises first device data corresponding to the first window of time for a user device of the plurality of user devices; and receiving a second plurality of logs
  • a method comprises in response to detecting an interaction between a user device and a service provider server, obtaining, by one or more hardware processors, a plurality of features associated with the user device; determining, by the one or more hardware processors, that a combination of features from the plurality of features has a variance of expected values exceeding a threshold variance; generating, by the one or more hardware processors, a prediction indicating a likelihood that the interaction is associated with a fraudulent activity based on the combination of features; and restricting, by the one or more hardware processors, the user device to one or more services of the service provider server based on the prediction.
  • the plurality of features corresponds to a plurality of device attributes for a particular device type
  • the method further comprises selecting, from the plurality of device attributes, a plurality of device attribute combinations, wherein each of the plurality of device attribute combinations corresponds to a different subset in the plurality of device attributes; generating a histogram for each of the plurality of device attribute combinations; and estimating a probability density function distribution that best fits the histogram for each of the plurality of device attribute combinations, wherein the histogram for each of the plurality of device attribute combinations comprises a two-dimensional relationship between first attribute values of a first device attribute against second attribute values of a second device attribute in the corresponding device attribute combination; and/or 2) the histogram indicates a plurality of relationships among a plurality of device attributes in a multi-dimensional space for the corresponding device attribute combination.
  • a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising receiving device data of a user device in response to detecting an interaction between the user device and a service provider server associated with a service; selecting a device attribute combination from a plurality of device attributes included in the device data with a feature combination engine, wherein the device attribute combination comprises device attributes having a variance of expected values that exceeds a threshold variance; selecting a prediction engine of a plurality of prediction engines to process the device attribute combination with a corresponding anomaly detection model of a plurality of anomaly detection models and to generate a prediction indicating a likelihood that the device attribute combination represents an anomaly corresponding to fraudulent activity; and issuing a remedial action to the user device through an application programming interface with a remedial action engine based on the prediction for applying the remedial action on the user interaction.
  • the selecting the prediction engine comprises processing a plurality of device attributes from the device data with a probability density function algorithm; and generating a probability density distribution with the probability density function algorithm, wherein the probability density distribution comprises a two-dimensional relationship between a plurality of likelihood values against different device attribute values for a given device attribute combination of a plurality of device attribute combinations; 2) the selecting the prediction engine comprises, for each device attribute combination in the plurality of device attribute combinations, generating a likelihood metric indicating that a corresponding attribute value associated with the user interaction is an actual occurrence; 3) the operations further comprise obtaining a user profile that indicates a user behavior associated with one or more of the user device or a user account of a service; and determining a likelihood metric indicating that a corresponding attribute value associated with the user interaction is an actual occurrence, based on the user profile; 4) the selecting the prediction engine comprises applying a clustering algorithm to a vector of likelihood scores to generate one or more clusters; detecting a
  • a method comprises receiving, by one or more hardware processors, voice data associated with a voice communication between a user of a user device and a service provider server; determining, by the one or more hardware processors, that the voice communication is associated with a user account of the user with the service provider server; extracting, by the one or more hardware processors, a plurality of user attributes from the voice data; determining, from the plurality of user attributes, a particular user attribute combination; selecting from a plurality of voice classifiers, a particular classifier corresponding to the particular user attribute combination, wherein the voice vector is generated using the particular classifier; generating, by the one or more hardware processors, a voice vector for the user based on the particular classifier; determining a difference between the voice vector and a baseline vector that corresponds to the user account; determining that the user is a malicious user when the difference exceeds a threshold; and in response to determining that the user is a malicious user, restricting the user from accessing the user account during the voice
  • the comparing the voice vector to the baseline vector comprises determining a distance between one or more values in the voice vector and corresponding values in the baseline vector; and determining whether the distance between each of the one or more values and the corresponding values exceeds a predetermined distance threshold; 2) the method further comprising generating a notification indicating that the user is not authenticated to access the user account; and transmitting the notification to a communication device associated with the service provider server; 3) the notification prompts the communication device to prompt the user device for additional verification information; 4) the particular user attribute combination comprises a gender-age-accent combination; 5) the generating the voice vector comprises masking one or more values associated with the plurality of user attributes based at least in part on one or more characteristics of the user account; and generating the voice vector based on the masked one or more values, wherein the masked one or more values are excluded from the determining the difference; 6) the generating the voice vector comprises applying corresponding weights to one or more values associated with the pluralit
  • the method further comprising obtaining a plurality of baseline vectors associated with a plurality of the user accounts; and comparing the voice vector to each of the plurality of baseline vectors; 9) the user account is a first user account, wherein the method further comprises determining that the user has previously called on a second user account based on the comparing the voice vector to each of the plurality of baseline vectors, wherein the determining that the user is a malicious user is further based on determining that the user has previously called on the second user account; 10) the method further comprising determining, from the plurality of user attributes, a plurality of user attribute combinations; and generating a plurality of machine learning-based networks based on the plurality of user attribute combinations, wherein the plurality of machine learning-based networks corresponds to respective ones of the plurality of user attribute combinations; and/or 11) the method further comprising training each of the plurality of machine learning-based networks with a training dataset to form a plurality of machine learning-trained classifiers, the training dataset comprising attribute data
  • a system comprises a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to execute instructions from the non-transitory memory to cause the system to perform operations comprising receiving user interaction data associated with an interaction between a user device and a service provider server associated with a service, the user interaction data comprising audio data associated with the interaction; extracting, using a feature extraction engine, a plurality of user attributes from the audio data; determining an intent of the interaction from the plurality of user attributes; selecting one or more of a plurality of machine learning-trained classifiers that correspond to the determined intent; applying each of the one or more of the plurality of machine learning-trained classifiers to the plurality of user attributes; generating a plurality of voice vectors with a voice vector associated with each of the one or more of the plurality of machine learning-trained classifiers; selecting one of the plurality of machine learning-trained classifiers that corresponds to one of the plurality of voice vectors having a
  • the determining whether the user device interacted with the service provider server comprises accessing user account information in a data repository communicably coupled to the service provider server; comparing the voice signature to a historical voice signature associated with the accessed user account information that is stored in a data structure of the data repository; determining that a distance between the voice signature and the historical voice signature is within a predetermined tolerance threshold; and determining that the user device interacted with the service provider server in the prior interaction; 2) the comparing the voice signature to the historical voice signature comprises obtaining a plurality of historical voice signatures from user account information associated with respective ones of a plurality of user accounts; and comparing the voice signature to each of the plurality of historical voice signatures; and/or 3) the operations further comprise obtaining user account information from a user account associated with the user device; determining, from the plurality of user attributes, a particular user attribute combination based on the user account information; selecting, from the plurality of machine learning-trained classifiers, a first machine
  • a non-transitory machine-readable medium having instructions stored thereon, the instructions executable to cause performance of operations comprising receiving user interaction data associated with an interaction between a user device and a service provider server associated with a service, the user interaction data comprising audio data associated with the interaction; extracting, using a feature extraction engine, a plurality of user attributes from the audio data; obtaining user account information from a user account associated with the user device; selecting a machine learning-trained classifier that corresponds to a select combination of features based on the user account information; applying the machine learning-trained classifier to the plurality of user attributes; generating a voice vector comprising a plurality of scores based on the applied machine learning-trained classifier, wherein each of the plurality of scores in the voice vector indicates a likelihood that an attribute of the plurality of user attributes corresponds to an attribute of a predetermined combination of features; comparing the voice vector to a baseline vector that corresponds to a predetermined combination of features; and sending a notification to
  • the operations further comprise determining an intent of the interaction from the plurality of user attributes; selecting one or more of a plurality of machine learning-trained classifiers that correspond to the determined intent; generating a plurality of voice vectors with a voice vector associated with each of the one or more of the plurality of machine learning-trained classifiers; selecting a plurality of machine learning-trained classifiers that correspond to the determined intent; selecting one of the plurality of machine learning-trained classifiers that corresponds to one of the plurality of voice vectors having a highest aggregate score; determining a voice signature that corresponds to the selected one of the plurality of machine learning-trained classifiers; determining whether the user device interacted with the service provider server in a prior interaction based on the voice signature; and sending a notification to a communication device associated with the service provider server, the notification comprising an indication of whether the user device interacted with the service provider server in the prior interaction; 2) the operations further comprise determining a plurality of
  • FIG. 11 is a block diagram of a computer system suitable for implementing one or more components in FIGS. 1 and 2, according to an implementation.
  • the communication device may comprise a personal computing device (e.g., smartphone, a computing tablet, a personal computer, laptop, a wearable computing device such as glasses or a watch, Bluetooth device, key FOB, badge, etc.) capable of communicating with the network.
  • the service provider may utilize a network computing device (e.g., a network server) capable of communicating with the network. It should be appreciated that each of the devices utilized by users and service providers may be implemented as computer system 1100 in a manner as follows.
  • Computer system 1100 includes a bus 1102 or other communication mechanism for communicating information data, signals, and information between various components of computer system 1100.
  • Components include an input/output (I/O) component 1104 that processes a user action, such as selecting keys from a keypad/keyboard, selecting one or more buttons, images, or links, and/or moving one or more images, etc., and sends a corresponding signal to bus 1102.
  • I/O component 1104 may also include an output component, such as a display 1111 and a cursor control 1113 (such as a keyboard, keypad, mouse, etc.).
  • An optional audio input/output component 1105 may also be included to allow a user to use voice for inputting information by converting audio signals.
  • Audio I/O component 1105 may allow the user to hear audio.
  • a transceiver or network interface 1106 transmits and receives signals between computer system 1100 and other devices, such as another communication device, service device, or a service provider server via network 140. In one implementation, the transmission is wireless, although other transmission mediums and methods may also be suitable.
  • One or more processors 1112 which can be a micro-controller, digital signal processor (DSP), or other processing component, processes these various signals, such as for display on computer system 1100 or transmission to other devices via a communication link 1118. Processor(s) 1112 may also control transmission of information, such as cookies or IP addresses, to other devices.
  • Components of computer system 1100 also include a system memory component 1114 (e.g., RAM), a static storage component 1116 (e.g., ROM), and/or a disk drive 1117.
  • Computer system 1100 performs specific operations by processor(s) 1112 and other components by executing one or more sequences of instructions contained in system memory component 1114.
  • Logic may be encoded in a computer readable medium, which may refer to any medium that participates in providing instructions to processor(s) 1112 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
  • non-volatile media includes optical or magnetic disks
  • volatile media includes dynamic memory, such as system memory component 1114
  • transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise bus 1102.
  • the logic is encoded in non-transitory computer readable medium.
  • transmission media may take the form of acoustic or light waves, such as those generated during radio wave, optical, and infrared data communications.
  • Some common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EEPROM, FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer is adapted to read.
  • execution of instruction sequences to practice the present disclosure may be performed by computer system 1100.
  • a plurality of computer systems 1100 coupled by communication link 1118 to the network (e.g., a LAN, WLAN, PSTN, and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks) may perform instruction sequences to practice the present disclosure in coordination with one another.
  • various implementations provided by the present disclosure may be implemented using hardware, software, or combinations of hardware and software. Also, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components that include software, hardware, and/or both without departing from the spirit of the present disclosure. Where applicable, the various hardware components and/or software components set forth herein may be separated into subcomponents that include software, hardware, or both without departing from the scope of the present disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components and vice-versa.
  • Software in accordance with the present disclosure, such as program code and/or data, may be stored on one or more computer readable mediums. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein.
  • the various features and steps described herein may be implemented as systems that include one or more memories storing various information described herein and one or more processors coupled to the one or more memories and a network, wherein the one or more processors are operable to perform steps as described herein, as non-transitory machine-readable medium that includes a plurality of machine-readable instructions which, when executed by one or more processors, are adapted to cause the one or more processors to perform a method that includes steps described herein, and methods performed by one or more devices, such as a hardware processor, user device, server, and other devices described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

There are provided systems and methods for an automated device data retrieval and analysis platform. A service provider server invokes an instance of an application in a remote processing environment using device data associated with the application and sends a control message that prompts the instance to send a request to a web server for a process script that invokes a process executable in the remote processing environment. The service provider server obtains traffic data and a behavior of application data based on an interaction between the instance and the web server, and determines features of the application in a native state from the behavior of the application data. The server generates a data profile of the application that indicates the features in the native state and provides the data profile to a remote engine to detect potential malicious activity associated with the application from the detection operation.

Description

AUTOMATED DEVICE DATA RETRIEVAL AND ANALYSIS PLATFORM
TECHNICAL FIELD
[0001] The present application generally relates to transaction processing systems and more particularly to an automated device data retrieval and analysis platform, according to various implementations.
BACKGROUND
[0002] Electronic service providers may provide an online marketplace environment for users, which may be used to buy and sell goods with other entities. Some of these services may be used maliciously by fraudulent users, such as overtaking a merchant account and taking control of payments made for goods sold, which poses a significant risk to these service providers. Since the service providers may process thousands (or more) transactions daily, it may be difficult to review without a large review and compliance team.
[0003] Moreover, tactics in performing prohibited transactions electronically are ever-evolving and becoming more sophisticated. Electronic service providers need to keep pace with the fraudulent users in providing security measures, such as accurately evaluating risk (e.g., detecting prohibited transactions) in real-time. In this regard, computer models are often utilized to assist in evaluating risks of electronic transactions.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 illustrates a block diagram of a networked system suitable for implementing the processes described herein, according to an implementation of the present disclosure;
[0005] FIG. 2 illustrates a block diagram of an event detection server, according to an implementation of the present disclosure;
[0006] FIG. 3 is a flowchart of an example process of actionable insight analysis into user interaction data, according to an implementation of the present disclosure;
[0007] FIG. 4 is a flowchart of an example process of probabilistic anomaly detection and mediation, according to an implementation of the present disclosure;
[0008] FIG. 5 illustrates a block diagram of a networked system for automated device data retrieval and analysis, according to an implementation of the present disclosure;
[0009] FIG. 6 conceptually illustrates an exemplary workflow of the automated device data retrieval and analysis of FIG. 5, according to an implementation of the present disclosure;
[0010] FIG. 7 is a flowchart of an example process of automated device data retrieval and analysis, according to an implementation of the present disclosure;
[0011] FIG. 8 conceptually illustrates an exemplary workflow of a voice vector framework for authenticating user interactions, according to an implementation of the present disclosure;
[0012] FIG. 9 is a flowchart of an example process of a voice vector framework for verifying user interactions, according to an implementation of the present disclosure;
[0013] FIG. 10 is a flowchart of an example process of a voice vector framework for detecting malicious activity in user interactions, according to an implementation of the present disclosure; and
[0014] FIG. 11 is a block diagram of a computer system suitable for implementing one or more components in FIG. 1 and FIG. 2, according to an implementation.
[0015] Implementations of the present disclosure and their advantages are best understood by referring to the detailed description that follows. It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein showings therein are for purposes of illustrating implementations of the present disclosure and not for purposes of limiting the same.
DETAILED DESCRIPTION
[0016] An online service provider that provides electronic services, such as electronic content access, electronic transactions, etc. may offer numerous avenues for users to interact with the online service provider. For example, users may interact with the online service provider by accessing a website or a mobile application associated with the online service provider. In another example, the user may communicate with the online service provider by calling a hotline associated with the online service provider, sending emails to a designated email address associated with the online service provider, and/or chatting with an agent (or a chatbot) of the online service provider via a chat application. These various avenues provide convenient access to the online service provider, but also offer opportunities for malicious users to perform fraudulent access of data and/or services of the online service provider. Thus, various aspects of the disclosure, as discussed in detail below, provide frameworks and platforms for efficiently detecting malicious activities during the interactions with the online service provider and performing actions to reduce losses caused by the malicious activities.
[0017] Probabilistic Anomaly Detection in Streaming Device Data
[0018] It is often difficult to identify unusual data when viewing streaming device data (e.g., reviewing logs attempting to find fraudulent information). The streaming device data may be associated with device attributes associated with devices that have attempted connection with an online service provider (e.g., an online server). The devices may attempt to access electronic services offered by the online service provider (e.g., logging in to a user account with the online service provider, accessing data from the online service provider, initiating an electronic transaction such as an electronic payment transaction through a user account with the online service provider, etc.). When reviewing the data manually, it is not easy or intuitive to identify anomalies. For example, when looking through events that contain different descriptors for mobile devices, it is difficult to determine which log entries represent prohibited transactions (e.g., fraudulent/malicious/falsified data).
[0019] Thus, according to an aspect of the disclosure, an anomaly detection system provides an anomaly detection framework that identifies anomalies during interactions of the online service provider with a user device in real time based on device attributes associated with the user device. In some embodiments, the anomaly detection system may leverage the likelihood of a given combination of device attributes in a dataset. Upon detecting an anomaly during the interactions with the user device, the anomaly detection system may automatically take one or more actions, including but not limited to, alerting an agent, providing a different experience for a user of the user device, re-securing a user account that is being accessed by the user device, etc. For example, the anomaly detection system may provide additional restrictions to data and/or services that can be accessed by the user device.
[0020] Devices of the same type (e.g., same manufacturers, same models, etc.) may share similar qualities (e.g., attributes). Devices of a particular type can look similar to each other when viewed via a log file. This similarity also extends to software platforms. For example, if a smartphone of a particular brand and a particular model version (e.g., Apple® iPhone 8) has a specific screen height and width, then every smartphone of the same model version that connects to a service provider system should have features that align with others using that same model version. If, however, the anomaly detection system detects that a user device attempting to connect to the service provider system (e.g., to access the website of the service provider system, to access a user account with the service provider system, to perform an electronic transaction through a user account with the service provider system, etc.) has features that differ greatly from those of its same type, the anomaly detection system may determine that an anomaly is present and should be investigated. By leveraging a statistical analysis technique with machine learning, such as a probability density function (PDF), among others, the anomaly detection system may isolate anomalies by holding some device attribute values constant and generating machine learning-trained statistical models around the remaining device attributes. The anomaly detection system may identify outliers and anomalies in real time over streaming device data, and automatically take a remedial action based on desired outcomes. A minimal sketch of this approach follows below.
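As a rough sketch, assuming SciPy's kernel density estimator as one possible PDF model: the device type is held constant while a density is fit over the remaining numeric attributes, and a low density for a streaming observation flags an anomaly. The threshold and the synthetic history are illustrative, not the disclosed models.

```python
# Illustrative density-based anomaly check for one fixed device type.
import numpy as np
from scipy.stats import gaussian_kde

LIKELIHOOD_THRESHOLD = 1e-4  # assumed cutoff for "anomalous"

# Attribute history (screen height, width) previously seen for one device
# type; jittered synthetic data stands in for real connection logs.
rng = np.random.default_rng(0)
history = rng.normal(loc=[1334.0, 750.0], scale=[2.0, 1.0], size=(200, 2))

kde = gaussian_kde(history.T)  # statistical model for this device type


def is_anomalous(observed):
    """Low density under the fitted PDF suggests a falsified device."""
    point = np.asarray(observed, dtype=float).reshape(-1, 1)
    return float(kde(point)[0]) < LIKELIHOOD_THRESHOLD


print(is_anomalous([1334, 750]))  # False: typical for this device type
print(is_anomalous([900, 480]))   # True: deviates from the device type
```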
[0021] Existing approaches in anomaly detection can acquire a set of data, plot that data, and attempt to visualize what is different among the rendered data. In contrast, the anomaly detection system of some embodiments leverages machine learning to identify fraudulent and/or suspicious devices attempting to connect with the service provider system in real time via their device attributes in an effort to automatically invoke a prohibited transaction. The anomaly detection framework disclosed herein is more cost efficient than existing anomaly detection systems and flexible enough to be useful for service providers and/or users attempting to defend systems from bad actors.
[0022] In a first use case example, a research individual is attempting to identify malicious activity on an online payment processing system. The research individual generates logs that cover a specified time-period of suspected prohibited activity. The log contains thousands, if not millions, of lines of data. The research individual first tries to plot features to find correlations and analyze the data. The research individual plots histograms and attempts clustering techniques, but to no avail. The research individual then leverages the anomaly detection system to isolate device features and generate multi-dimensional probability maps of the remaining device features. These probability maps help to identify outliers within the data. The research individual now has a much smaller subset of data from which to identify the malicious activity within the system.
[0023] In a second use case example, devices of a certain type can have certain features that are static (e.g., their model number). As such, the anomaly detection system may use that information to hold that value constant and generate a dynamic density map of interactions with the remaining device features by leveraging a subset of last-known instances of that device. Based on the density map, the subject technology can identify and detect which device-to-system connections have a minimal (or low) probability of occurrence given the other constant values. Those connections can then be acted upon with the following outcomes: (1) the connection attempt can be automatically directed to a secondary flow for further analysis, (2) the connection attempt can be stepped up with additional user authentication prompts, (3) an agent can be alerted to an anomalous connection, and/or (4) additional security checks can be automatically performed to validate the authenticity of the connection (e.g., vendor identifier (VID) lookup, account review, etc.).
[0024] Analysis Platform for Actionable Insight into User Interaction Data
[0025] The online service provider that provides electronic transaction services may receive a large number of communication attempts from legitimate users and malicious users, which include emails that are transmitted to a dedicated e-mail address associated with the online service provider, phone calls received via a support hotline associated with the online service provider, chat sessions initiated via a website of the online service provider, etc. These e-mails, calls, and chat interactions with the online service provider can be collectively referred to as customer inputs. Many of the customer inputs may be from legitimate users who are requesting help for resolving issues related to their user accounts (e.g., disputing a transaction, issues with logging into an account, etc.). However, at least some of the customer inputs are not related to legitimate users’ concerns (e.g., subscriptions, advertisements, etc.), and some may even include malicious content (e.g., malware attachments, phishing emails or phone calls, etc.).
[0026] Manually processing such a high volume of customer inputs is not feasible in terms of both resources and time. Thus, according to another aspect of the disclosure, an analysis system may be configured to automatically classify and analyze customer inputs (e.g., email, phone, chat, etc.) to produce actionable insights related to fraud campaigns, phishing attacks, malware distribution, or product issues.
[0027] In some embodiments, the analysis system may classify customer inputs (e.g., customer complaints and/or issues reported by e-mails, voice calls, and/or chat sessions) into respective categories for automated analysis and remedial action. The analysis system may classify a customer input into one or more fraud categories based on matching the customer input to one or more known (or stored) fraud patterns. The analysis system may also identify new fraud campaigns not yet known. In some embodiments, the analysis system may detect common victim patterns in user accounts that have been exposed to fraud. Based on the detected patterns, the analysis system may invoke risk rules to prevent fraudulent activities associated with the user account. For example, the analysis system may detect that a customer input is associated with a phishing scheme. The analysis system may then forward the customer input to a phishing submodule for analysis and action. In another example, the analysis system may detect whether a customer input (e.g., an email) includes an attachment (e.g., a data file) that is malicious. The analysis system may then generate a signature for the attachment and may block the customer input from being presented in an email inbox.
[0028] In some embodiments, upon receiving a customer input, the analysis system may classify the customer input into one of multiple categories. Example categories may include: (1) fraud complaints, (2) phishing, (3) malware, (4) other, and/or (5) noise. The analysis system may include additional categories for classification to accommodate larger datasets of customer input. For classification, the analysis system may include or use a machine learning model, such as support vector machines (SVMs) or random forest classifiers, trained on previously labeled customer inputs for each category. The labeled dataset can be gathered from customer support agents. For feature extraction, the analysis system may utilize different representations, including bag-of-words, term frequency-inverse document frequency (TF-IDF), and document-to-vector representation (Doc2Vec), which uses a deep learning approach. The analysis system may extract topics using Latent Dirichlet Allocation (LDA), and select the best performing feature extraction model to extract features from the customer inputs. In some embodiments, the analysis system may also augment the model with information regarding the caller/sender of the customer input. Once a customer input is classified in one of these categories, further action can be taken on each interaction.
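As one hedged illustration of this classification step, the sketch below pairs a TF-IDF representation with a linear SVM, one of the model families named above; the category labels and toy training examples are hypothetical stand-ins for agent-labeled customer inputs.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Toy agent-labeled customer inputs, one per category.
labeled_inputs = [
    ("I was charged for a domain renewal I never ordered", "fraud"),
    ("Please verify your account at this link immediately", "phishing"),
    ("See the attached invoice executable", "malware"),
    ("How do I update my shipping address?", "other"),
    ("Weekly newsletter: top deals this week", "noise"),
]
texts, labels = zip(*labeled_inputs)

# TF-IDF features (unigrams and bigrams) feeding a linear SVM.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
classifier.fit(texts, labels)

print(classifier.predict(["invoice for renewing my website domain I never bought"]))
```

A deployed system would train on a far larger labeled corpus and could swap in Doc2Vec or LDA features where those outperform TF-IDF on held-out data.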
[0029] When the customer input is classified as a fraud complaint, the analysis system may cluster the customer input with other similar reported fraud activities (e.g., other customer inputs that have been classified as the same category). For example, a user may contact the online service provider to complain about receiving invoices related to renewing a website domain from a domain provider. When the analysis system classifies such a customer input as a fraud complaint, the analysis system may cluster such a customer input with other related complaints over invoices associated with renewing website domains. In some embodiments, since certain complaints may relate to multiple types of fraud, the analysis system may implement a probabilistic clustering technique, such as a Gaussian mixture model (GMM). The GMM can assign each fraud complaint a probability of membership in each cluster. If a complaint does not fall into any one of the existing clusters, a new cluster can be created. Once a cluster reaches a certain number of complaints, a report generation model can be triggered to generate a report. The analysis system may identify specific information from the complaints, such as email addresses of the customers or any transaction details, and derive patterns from the complaints that belong to the same cluster, such as country of origin of the customers, customer age range, network addresses used for the transaction, the transaction amount that was charged, a description of the service if any, and so on. This information can be added to the report with all the insights gathered and forwarded to an agent device for further analysis and action. As such, the analysis system may increase the efficiency of handling customer inputs (e.g., legitimate customer inputs may be forwarded to the right personnel or chatbots to handle), while malicious customer inputs are properly classified and forwarded to different modules for further analysis and actions.
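A minimal sketch of this probabilistic clustering step follows, assuming scikit-learn's GaussianMixture; the membership and report thresholds, the complaint feature vectors, and the report stub are illustrative assumptions, not values from the disclosure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

MEMBERSHIP_THRESHOLD = 0.6  # below this, the complaint fits no existing cluster
REPORT_THRESHOLD = 3        # complaints per cluster before a report fires (toy value)

def generate_report(cluster_id):
    # Placeholder: derive shared patterns (country of origin, age range,
    # transaction amounts) and forward the report to an agent device.
    print(f"report triggered for cluster {cluster_id}")

def assign_complaint(gmm, complaint_vec, cluster_counts):
    probs = gmm.predict_proba(complaint_vec.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    if probs[best] < MEMBERSHIP_THRESHOLD:
        # No existing cluster fits well: seed a new cluster id.
        best = max(list(cluster_counts) + [gmm.n_components - 1]) + 1
    cluster_counts[best] = cluster_counts.get(best, 0) + 1
    if cluster_counts[best] == REPORT_THRESHOLD:
        generate_report(best)
    return best

# Toy complaint feature vectors (e.g., embedded text plus transaction amount).
rng = np.random.default_rng(0)
history = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(5, 1, (20, 4))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(history)
counts = {}
for _ in range(3):
    assign_complaint(gmm, rng.normal(5, 1, 4), counts)  # third call fires a report
```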
[0030] In an example use case, the online service provider may receive user complaints in the form of calls, e-mails, and/or chats regarding invoices sent by a web hosting company for a website domain renewal. With a clustering technique, these complaints can be grouped together based on the content of the e-mail, the amount requested, and the hosting provider. Once a predetermined number of complaints (e.g., exceeding a threshold) is identified, the analysis system may be triggered to generate a report. The analysis system may analyze the grouped customer inputs (e.g., within the same cluster) to derive patterns. For example, the analysis system may determine that a majority of the customers associated with the grouped customer inputs fall in a specified age band (e.g., older than 50 years of age) and are all known to reside in the United States. The invoices are determined to originate from accounts registered in Russia, where the hosting provider has no place of business or business presence. The analysis system may determine a risk level based on the derived patterns, and may alert a risk team to investigate the complaints in more detail when the risk level exceeds a threshold. In some embodiments, the analysis system may also perform actions such as restricting access to the user accounts associated with the complaints when the risk level exceeds the threshold.
[0031] Automated Device Data Retrieval and Analysis Platform
[0032] According to another aspect of the disclosure, a browser analysis system of the online service provider may employ anti-fraud defense techniques to generate fingerprints for various types of web applications (e.g., browsers or other web clients) connecting to the online service provider. In some embodiments, upon detecting an attempt by a web application of a user device to connect to the online service provider, the browser analysis system may transmit code to the user device and cause the user device to execute the code. The code, when executed by the web browser, is configured to examine webpage attributes (e.g., document object model (DOM) attributes) and/or values of a webpage being loaded and presented on the user device. The webpage attributes and/or values may enable the browser analysis system to a) track a unique user over time, and b) determine an anomaly, such as whether the web application is providing inaccurate information about its actual identity through malicious manipulation of the web browser attributes and values.
[0033] Determining which attributes, behaviors, values, etc., are relevant for detecting anomalies is a time-consuming and daunting task that requires skilled developers to manually examine each individual type of web application. Since the attributes, behaviors, and/or values that are relevant for detecting anomalies for one type of web application may not be relevant for another type, the skilled developers may be required to perform this manual examination for each type and each version of web application, and may have to continue to perform manual examinations when a new version of a browser is released. Otherwise, the existing device data can lose effectiveness over time as the web browser changes and evolves.

[0034] Devices of the same type (e.g., same manufacturers, same models, etc.) may share similar qualities (e.g., attributes). Devices of a particular type can look similar to each other when viewed via a log file. This similarity also extends to software platforms. For example, if a smartphone of a particular brand and a particular model version (e.g., Apple® iPhone 8) has a specific screen height and width, then every smartphone of the same model version that connects to a service provider system should have features that align with others using that same model version. If, however, the online platform detects that a user device attempting to connect to the service provider system (e.g., to access the website of the service provider system, to access a user account with the service provider system, to perform an electronic transaction through a user account with the service provider system, etc.) has features that differ greatly from those of its same type, the online platform may determine that an anomaly is present and should be investigated.
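The same-type consistency check of paragraph [0034] can be illustrated with a simple profile comparison; the profile contents and attribute names below are assumptions for illustration, not values from the disclosure.

```python
KNOWN_PROFILES = {
    # claimed model -> attribute values every genuine instance should report
    "iPhone 8": {"screen_width": 375, "screen_height": 667, "max_touch_points": 5},
}

def attribute_mismatches(claimed_model, reported_attributes):
    """Return the attributes whose reported values differ from the
    known profile for the claimed device model."""
    expected = KNOWN_PROFILES.get(claimed_model, {})
    return [attr for attr, value in expected.items()
            if reported_attributes.get(attr) != value]

# A connection claiming to be an iPhone 8 but reporting desktop-like values.
reported = {"screen_width": 1920, "screen_height": 1080, "max_touch_points": 0}
mismatches = attribute_mismatches("iPhone 8", reported)
if mismatches:
    print("anomaly: investigate connection, attributes differ:", mismatches)
```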
[0035] In some embodiments, the browser analysis system provides in-depth and automatic testing of current and pre-release web applications with the intent of generating a comprehensive database of their behaviors, features, and functionality in order to provide additional capabilities for detection and mitigation of malicious actors using modified or spoofed client devices. The browser analysis system may also allow baselining for statistical modeling of browser traffic and can generate alert notifications of new functionality that can be implemented into anti-fraud defense mechanisms.
[0036] Voice Vector Framework for Authenticating User Interactions
[0037] Fraudulent calls are often made to customer service agents with the goal of lifting restrictions on an account or performing account takeover using password reset initiated by the customer service agent. Social engineering techniques are often employed where the caller provides enough information to convince the agent to perform the desired action on an account that is not theirs. Additionally, these types of attacks occur on a scale where the same person may be calling for multiple accounts.
[0038] As such, in another aspect of the disclosure, a voice authentication system may detect fraudulent calls based on analyzing voice characteristics of the callers. In some embodiments, using various voice analysis techniques disclosed herein, the voice authentication system may authenticate a caller as a legitimate user associated with a user account of the online service provider and also determine if the caller has called before on multiple other user accounts of the online service provider. When the voice authentication system has determined that the caller is suspicious (e.g., that the caller is not a legitimate user associated with the user account and/or that the caller is linked to previous call(s) associated with other user accounts), the voice authentication system may provide real-time feedback by alerting an automated chat module (e.g., a chatbot) or customer service agents of the online service provider of such suspicious activity. In some embodiments, the voice authentication system may leverage various aspects of speech recognition and voice identification technology as well as intent identification on the incoming customer call. The voice authentication system may provide a framework for a two-stage procedure that first verifies the identity of a caller and second checks whether the same caller has previously called with the same intention on a different account not belonging to him or her. This framework can identify fraudsters, generate a voice blacklist, and alert customer service agents in real time to mitigate any security lapse.
[0039] In order to facilitate analyses of voice characteristics of incoming callers, the voice authentication system may generate multiple machine learning-based voice models that represent variations of voices associated with different user accounts of the online service provider. In some embodiments, the voice authentication system may generate multiple generic voice models, where each of the generic voice models may correspond to a particular cross-section of demographics. Examples of the demographics may include gender, country of origin (e.g., accent), and age, among others. An example cross-section can be a young male (e.g., younger than 30 years old) from France, or an elderly female (e.g., older than 60 years old) from New York. There can be a number of generic voice models generated to represent individual attributes and selected cross-sections that together encompass the widest possible group of individuals. In some embodiments, the voice authentication system may generate the machine learning-based generic voice models based on different combinations of a particular subset of the demographic attributes, such as gender, age, and accent. The machine learning-based voice models may be configured to identify age groups in a binary manner, such as old/young, or categorically by dividing the different ages into different age groups.
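One way to organize such a bank of generic voice models is one model per demographic cross-section, as sketched below; the demographic values and the Gaussian-mixture placeholder are illustrative assumptions rather than the disclosed model type.

```python
from itertools import product
from sklearn.mixture import GaussianMixture

GENDERS = ["male", "female"]
AGE_GROUPS = ["young", "old"]            # binary age split, as described above
ACCENTS = ["French", "English", "Russian"]

def build_generic_models():
    # One model per cross-section; each would be trained offline on audio
    # features from speakers in that cross-section (placeholders here).
    return {key: GaussianMixture(n_components=8)
            for key in product(GENDERS, AGE_GROUPS, ACCENTS)}

def select_model(models, gender, age_group, accent):
    # E.g., a 26-year-old female user from France maps to the
    # female-young-French generic voice model.
    return models[(gender, age_group, accent)]

models = build_generic_models()
generic = select_model(models, "female", "young", "French")
```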
[0040] The voice authentication system may select, for each user account, a particular machine learning-based generic voice model based on voice characteristics of a user of the user account. Thus, the voice authentication system may select, for a user account associated with a female user who is 26 years old from France, a generic voice model corresponding to a female-young-French voice. In another example, the voice authentication system may select, for a user account associated with a male user who is 65 years old from England, a generic voice model corresponding to a male-old-English voice.
[0041] The generic voice model can be used by the voice authentication system to generate multiple different voice models specific to each of the user accounts. For example, the voice authentication system may train, for a particular user account, the generic voice model using different audio files associated with the particular user account to generate different voice models corresponding to different call intentions. The audio files associated with the user account may be obtained based on historical calls as previously identified by customer service agents (e.g., verifying that the caller was, in fact, the rightful owner of the user account). In some embodiments, the voice authentication system may identify different audio files associated with a user account corresponding to different call intentions. In general, the online service provider may determine one or more call intentions of the callers based on the type of services provided by the online service provider. Common call intentions may include “password reset,” “payment transaction,” “payment dispute,” or other types of intentions. The voice authentication system may categorize the audio files associated with the user account based on the different call intentions. The voice authentication system may then extract keywords from the audio files that correspond to each of the call intentions. For example, for the call intention of “password reset,” the voice authentication system may extract, from the audio files corresponding to the “password reset” call intention, phrases such as “password reset,” “resetting my password,” “password resetting,” etc. These audio files with the extracted keywords can be grouped together for training a “password reset” model for the user account.
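A minimal sketch of grouping a user account's historical call transcripts by call intention via extracted keywords follows; the intent names and keyword lists are hypothetical examples of the phrases described above.

```python
INTENT_KEYWORDS = {
    "password_reset": ["password reset", "resetting my password", "password resetting"],
    "payment_dispute": ["dispute", "chargeback"],
}

def group_transcripts_by_intent(transcripts):
    # Each group can then train an intent-specific voice model for the account.
    groups = {intent: [] for intent in INTENT_KEYWORDS}
    for transcript in transcripts:
        text = transcript.lower()
        for intent, phrases in INTENT_KEYWORDS.items():
            if any(phrase in text for phrase in phrases):
                groups[intent].append(transcript)
    return groups

calls = ["Hi, I need help resetting my password", "I want to dispute a charge"]
print(group_transcripts_by_intent(calls))
```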
[0042] The voice authentication system may generate and train a voice model, for the user account and a corresponding call intention, based on the extracted keywords from the audio files using the generic voice model. Thus, the voice authentication system may generate, for each user account based on the corresponding generic voice models, multiple voice models for the different call intentions. Since each of the voice models is trained with the same phrase (or multiple similar phrases), any variation within each of the resulting voice models can be due to audio quality and patterns of speech. Pre-processing can be performed on the audio files (having the extracted keywords) to normalize the variation and eliminate noise. The result can be a set of trained voice models whose variation would be the result of the differences in speech patterns from one group to the other. The machine learning-based voice models can be periodically updated using previously saved and tagged data.
[0043] In the first stage of the framework, when the voice authentication system detects an incoming call inquiring about a user account with the online service provider, the voice authentication system may determine whether the caller is associated with the user account based on the generated voice models associated with the user account. The incoming call may include voice data associated with phrases and utterances made by the caller. In some embodiments, the voice authentication system may identify a call intent of the incoming call based on the voice data. For example, the voice authentication system may extract one or more keywords from the voice data, and classify the call as one of the multiple call intentions based on the extracted keywords. In one example, if the voice authentication system detects a phrase that is linked to one of the voice models (e.g., the phrase “resetting my password”) in the voice data, the voice authentication system may classify the call as the call intention associated with the voice model (e.g., the “password reset” call intention). The voice authentication system may then select the voice model that corresponds to the extracted keyword(s) to authenticate the caller.
[0044] In some embodiments, upon classifying the call as a particular call intention, the voice authentication system may determine whether the caller is a legitimate user of the user account using a voice model associated with the user account and corresponding to the call intention. When comparing the caller’s audio sample (e.g., the keyword(s) extracted from the voice data) against the selected voice model, the voice authentication system or the voice model may generate an “accuracy” or “confidence” score. In some embodiments, the voice model may output a value (e.g., a confidence score) indicating how close the audio sample is to a voice of the same phrase generated by the model.
[0045] In some embodiments, the voice authentication system and/or the voice models may determine the confidence score based on voice vectors associated with the voice data of the caller. For example, for each voice model generated for the user account, the voice authentication system may generate a signature in the form of a vector that represents the attributes of the voice represented by the machine learning-based voice model. The attributes can be generated using a combination of selective phrases and individual words. Within the voice authentication system that records the calls, the phrases and words can be segmented and stored as tagged data. For example, someone saying “password reset” can be a key phrase that is captured. Using methods such as hidden Markov models, Gaussian mixture models, and deep learning models, among others, the voice authentication system may generate a vector representation and use this vector for comparison. The vector comparison could be accomplished using a vector similarity metric. An example of such a metric would be a multi-dimensional relative distance metric, such as leveraging an n-dimensional Euclidean distance calculation to generate a per-dimension distance as well as an overall distance. The technique for measuring the distance would depend on the underlying technique used to generate the voice representation. By examining the relative distance of an incoming sample to existing ones, the voice authentication system would generate a confidence score.
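The n-dimensional Euclidean comparison described above might look like the following sketch; the mapping from distance to a confidence score in (0, 1] and the match threshold are illustrative assumptions, since the disclosure leaves the scoring function open.

```python
import numpy as np

def confidence_score(incoming, stored):
    """Per-dimension distances plus the overall n-dimensional Euclidean
    distance, mapped to a score where identical vectors score 1.0."""
    per_dimension = np.abs(incoming - stored)       # usable for per-dimension analysis
    overall = float(np.linalg.norm(per_dimension))  # overall Euclidean distance
    return 1.0 / (1.0 + overall)

MATCH_THRESHOLD = 0.8  # assumed value; tuned per deployment in practice

# Incoming voice vector for "password reset" versus the stored model vector.
incoming = np.array([0.12, 0.80, 0.45, 0.33])
stored = np.array([0.10, 0.82, 0.44, 0.35])
score = confidence_score(incoming, stored)
print(score, score >= MATCH_THRESHOLD)  # high score -> treat as a match
```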
[0046] In one example, upon detecting the incoming call, the voice authentication system may generate a voice vector based on the voice data of the incoming call. The voice authentication system may then compare the voice vector associated with the voice data of the caller with the voice vector generated for the voice model using the techniques disclosed herein. The voice authentication system and/or the voice model may determine a confidence score based on a similarity between the two voice vectors. If the confidence score is above a threshold, the voice authentication system may consider the voice a match to the model, and may determine that the caller is a legitimate user of the user account; if the confidence score is below the threshold, the voice authentication system may determine that the caller is not the legitimate user of the user account. Once the caller has been verified, the confidence score can be leveraged as a way to gauge any drift that has occurred in an individual’s speech over time, as a way of improving the model for the user account. Furthermore, regardless of whether the caller is determined to be a legitimate user of the user account, the voice data (e.g., the voice vector) may be stored in association with the user account to indicate that such a caller has called regarding the user account. In addition, if the caller is determined not to be a legitimate user of the user account, the caller’s voice data may be classified into one of the generic voice models by comparing the voice data against each of the generic voice models.
[0047] In the second stage of the framework, the voice authentication system may use the voice data to determine whether the caller has previously called for another user account. In some embodiments, the voice authentication system may compare the voice data (and/or the voice vector determined based on the voice data) against different voice models associated with different user accounts. For example, the voice authentication system may first compare the voice data (and/or the voice vector) against the different generic voice models to determine which generic voice model matches the caller. The voice authentication system may determine a confidence score for each generic voice model indicating how well the voice data of the caller matches the generic voice model. This confidence score can determine which model more closely represents the caller.

[0048] For example, a young male with a French accent would yield a high confidence score on the male-young-French-accent voice model but a much lower confidence score on the female-old-Russian model. Once the caller’s voice matches one of the generic voice models based on a certain threshold, the individual voice vectors from the account-specific voice models that were derived from the matched generic voice model, along with the voice vectors associated with the matched generic voice model itself, can be used to determine whether the voice signature matches any specific previous caller. When the voice authentication system determines that the voice of the caller matches one or more previous callers who called on one or more other user accounts, the voice authentication system may alert another module of the online service provider (e.g., a chatbot) or the customer service agent accordingly. The voice authentication system may have a set of such models for every important intent for which customers frequently call the online service provider or for which fraudsters have historically called. Once the system identifies voice vectors that have called multiple times for different accounts, the system may create a voice blacklist of fraudsters that can be shared.
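A hedged sketch of this second stage follows: the incoming voice vector is compared against vectors from prior calls on other accounts with the same intent, and a caller matched across enough distinct accounts is added to the blacklist. The similarity threshold, account limit, and history format are assumptions for illustration.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85
REPEAT_ACCOUNT_LIMIT = 2  # distinct other accounts before the caller is blacklisted

def find_cross_account_matches(voice_vec, intent, current_account, call_history):
    """call_history: list of (account_id, intent, voice_vector) tuples
    from prior calls; return the other accounts this voice matched."""
    matched_accounts = set()
    for account_id, past_intent, past_vec in call_history:
        if account_id == current_account or past_intent != intent:
            continue
        score = 1.0 / (1.0 + float(np.linalg.norm(voice_vec - past_vec)))
        if score >= SIMILARITY_THRESHOLD:
            matched_accounts.add(account_id)
    return matched_accounts

def should_blacklist(matched_accounts):
    # Same voice, same intent, multiple other accounts: alert the agent.
    return len(matched_accounts) >= REPEAT_ACCOUNT_LIMIT

history = [("acct_A", "lift_kyc", np.zeros(4)), ("acct_B", "lift_kyc", np.zeros(4))]
matches = find_cross_account_matches(np.zeros(4), "lift_kyc", "acct_C", history)
print(should_blacklist(matches))  # True: same voice on two other accounts
```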
[0049] In an example use case, the voice authentication system may facilitate detecting malicious activity relating to unverified user accounts. For example, a “Know Your Customer (KYC)” restriction can be placed on a user account when malicious activity associated with the user account has been detected (e.g., a fraudulent user has called to perform a transaction on that user account, etc.), which limits the activities that can be performed on the user account, including restricting withdrawals of any funds from the user account. Such accounts are often created for fraud purposes, and once money is contributed into such accounts, customer service calls are made to lift the restrictions to allow transfer of money outside the user account. A single customer can make multiple calls for multiple accounts. Based on an intention of “lifting the KYC flag,” the system may compare the voice vector with the trained machine learning-based models and detect that the same caller has called previously with the same intention for multiple accounts. The customer service agent can prompt the caller for more authentication information and place the caller on hold until more verification can be completed.
[0050] FIG. 1 is a block diagram of a networked system 100 suitable for implementing the anomaly detection system, the analysis system, the browser analysis system, and the voice authentication system described herein. As shown, system 100 may comprise or implement a plurality of devices, servers, and/or software components that operate to perform various methodologies in accordance with the described implementations. Exemplary devices and servers may include device, stand-alone, and enterprise-class servers, operating an OS such as a MICROSOFT® OS, a UNIX® OS, a LINUX® OS, or another suitable device and/or server-based OS. It can be appreciated that the devices and/or servers illustrated in FIG. 1 may be deployed in other ways and that the operations performed, and/or the services provided by such devices and/or servers may be combined or separated for a given implementation and may be performed by a greater number or fewer number of devices and/or servers. One or more devices and/or servers may be operated and/or maintained by the same or different entity.

[0051] System 100 includes a service provider server 110 and an agent device 120 in communication over a network 140. Agent device 120 may be utilized to provide training data, view flagged transactions, and process additional transaction data to identify transactions indicating a prohibited transaction, including potential fraud. In this regard, an agent may process and review the data with service provider server 110, where service provider server 110 may generate a machine learning model based on iteratively training using the training data, and further process the transaction data using the model to flag further transactions. Additionally, service provider server 110 may be used to output narratives for flagged transactions based on the feature analysis that caused the machine learning engine to perform a classification.
Agent device 120 and service provider server 110 may each include one or more processors, memories, and other appropriate components for executing instructions such as program code and/or data stored on one or more computer readable mediums to implement the various applications, data, and steps described herein. For example, such instructions may be stored in one or more computer readable media such as memories or data storage devices internal and/or external to various components of system 100, and/or accessible over network 140.
[0053] Agent device 120 may be implemented as a communication device that may utilize appropriate hardware and software configured for wired and/or wireless communication with service provider server 110. For example, in one implementation, agent device 120 may be implemented as a personal computer (PC), a smart phone, a laptop/tablet computer, a wristwatch with appropriate computer hardware resources, eyeglasses with appropriate computer hardware (e.g., GOOGLE GLASS®), another type of wearable computing device, an implantable communication device, and/or other types of computing devices capable of transmitting and/or receiving data, such as an IPAD® from APPLE®. Although only one device is shown, a plurality of devices may function similarly and/or be connected to provide the functionalities described herein.
[0054] Agent device 120 includes an alert review application 122, a report review application 124, a database 126, and a network interface component 128. Alert review application 122 may correspond to executable processes, procedures, and/or applications with associated hardware. In other implementations, agent device 120 may include additional or different modules having specialized hardware and/or software as required.
[0055] Report review application 124 may correspond to one or more processes to execute software modules and associated components of agent device 120 to provide features, services, and other operations associated with training a machine learning, deep learning, or other artificial intelligence (AI) model, as well as using the model for detection of prohibited transactions in transaction data sets. In this regard, report review application 124 may correspond to specialized hardware and/or software utilized by a user of agent device 120 that may be used to provide training and transaction data, as well as review results of a supervised machine learning engine having a model trained for fraudulent pattern recognition and narrative text output. For example, report review application 124 may be used to first provide training data and/or sets of data to service provider server 110 that include transaction data sets for transactions processed by a financial entity, such as a bank or financial institution, payment service provider, or other transaction processor.
[0056] Service provider server 110 may utilize features within the data sets to classify the transactions according to one or more classifiers, which may flag one or more transactions as potentially prohibited based on laws, rules, or regulations. The data sets may be annotated, and flagged transactions may be displayed through report review application 124. An agent may identify any false positives in the flagging of transactions as potentially prohibited, which may be provided back to service provider server 110 for retraining (e.g., iteratively and/or continuously training) the machine learning model. The flagged transactions may include a narrative displayable through report review application 124, such as a textual description of the reason for flagging the transaction(s) by the model. After training, agent device 120 may further be used to view the results of the model processing other transaction data sets, such as for other transactions processed by one or more entities.

[0057] Agent device 120 may further include database 126 stored on a transitory and/or non-transitory memory of agent device 120, which may store various applications and data and be utilized during execution of various modules of agent device 120. Database 126 may include, for example, identifiers such as operating system registry entries, cookies associated with alert review application 122 and/or other applications 112, identifiers associated with hardware of agent device 120, or other appropriate identifiers, such as identifiers used for payment/user/device authentication or identification, which may be communicated as identifying the user/agent device 120 to service provider server 110. Database 126 may further include any transaction data sets used for training and/or processing with a machine learning model generated by service provider server 110.
[0058] Agent device 120 includes at least one network interface component 128 adapted to communicate with service provider server 110. In various implementations, network interface component 128 may include a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices.
[0059] Service provider server 110 may be maintained, for example, by an online service provider, which may provide identification of prohibited transactions, such as fraudulent transactions, in transaction data sets processed by a financial or transaction processing entity (including service provider server 110) using a machine learning or other AI model. In this regard, service provider server 110 includes one or more processing applications which may be configured to interact with agent device 120 to train and utilize the model for prohibited transaction identification. In one example, service provider server 110 may be provided by PAYPAL®, Inc. of San Jose, CA, USA. However, in other implementations, service provider server 110 may be maintained by or include another type of service provider.
[0060] Service provider server 110 of FIG. 1 includes an event detection server 150, a transaction processing server 160, and a network interface component 170. Event detection server 150, transaction processing server 160, and other applications 134 may correspond to executable processes, procedures, and/or applications with associated hardware. In other implementations, service provider server 110 may include additional or different modules having specialized hardware and/or software as required.

[0061] In some embodiments, the event detection server 150 may implement the functionalities of the anomaly detection system, the analysis system, the browser analysis system, and the voice authentication system described herein. The event detection server 150 may also be associated with specialized hardware of service provider server 110 to provide a framework to train a machine learning model for one or more prediction engines that can detect prohibited transactions, such as fraud, by probabilistically detecting anomalies in transaction datasets, which will be described in more detail in FIGS. 2 and 4. In one or more implementations, the event detection server 150 may provide a framework for an analysis platform for actionable insight into user interaction data, which will be described in more detail in FIGS. 2 and 3. In other implementations, the event detection server 150 may provide a platform for automated device data retrieval and analysis, which will be described in more detail in FIGS. 5-7. In still other implementations, the event detection server 150 may provide a voice vector framework for authenticating user interactions, which will be described in more detail in FIGS. 8-10.
[0062] In various implementations, service provider server 110 includes at least one network interface component 170 adapted to communicate with agent device 120 and/or other entities over network 140. In various implementations, network interface component 170 may comprise a DSL (e.g., Digital Subscriber Line) modem, a PSTN (Public Switched Telephone Network) modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency (RF), and infrared (IR) communication devices.
[0063] Network 140 may be implemented as a single network or a combination of multiple networks. For example, in various implementations, network 140 may include the Internet or one or more intranets, landline networks, wireless networks, and/or other appropriate types of networks. Thus, network 140 may correspond to small scale communication networks, such as a private or local area network, or a larger scale network, such as a wide area network or the Internet, accessible by the various components of system 100.
[0064] The customer device 130, in various implementations, may be implemented as a communication device using any appropriate combination of hardware and/or software configured for wired and/or wireless communication over the network 140. For example, in one implementation, the user device may be implemented as a personal computer (PC), a smart phone, a smart phone with additional hardware such as NFC chips or BLE hardware, a wearable device with similar hardware configurations such as a gaming device or a virtual reality headset, a device that communicates with a smart phone with unique hardware configurations and running appropriate software, a laptop computer, and/or other types of computing devices capable of transmitting and/or receiving data.
[0065] The customer device 130 may install and execute a client-side service application 132 received from the transaction processing server 110 to facilitate one or more transaction processes (e.g., point-of-sale transactions). The client-side service application 132 may allow a user to send payment transaction requests to the transaction processing server 110, which includes communication of data or information needed to complete the request, such as funding source information. The customer device 130 may include one or more browser applications that may be used, for example, to provide a convenient interface to permit a user to browse information available over network 140. For example, in one implementation, the one or more browser applications may be implemented as a web browser configured to view information available over the Internet, such as a user account for online shopping and/or merchant sites for viewing and purchasing goods and/or services.
[0066] The customer device 130, in various implementations, may include other applications as may be desired in one or more implementations of the present disclosure to provide additional features available to the user. For example, the other applications may include security applications for implementing server-side security features, programmatic client applications for interfacing with appropriate APIs over network 140, or other types of applications. The other applications may also include email, texting, voice, and IM applications that allow a user to send and receive emails, calls, texts, and other notifications through network 140. In various implementations, the other applications may include financial applications, such as banking, online payments, money transfer, or other applications associated with transaction processing server 110. The other applications may include a software program, such as a graphical user interface (GUI), executable by a processor that is configured to interface with a user.
[0067] The customer device 130 may further include cache 134 stored to a transitory and/or non-transitory memory of customer device 130, which may store various applications and data and be utilized during execution of various modules of customer device 130. Thus, cache 134 may include, for example, identifiers such as operating system registry entries, cookies associated with the one or more browser applications and/or the other applications, identifiers associated with hardware of customer device 130, or other appropriate identifiers, such as identifiers used for payment/user/device authentication or identification, which may be communicated as identifying customer device 130 to the service provider server 110. For example, the cache 134 may store device attributes of the customer device 130, including, but not limited to, application information associated with various applications on the customer device 130, version information of the various applications, cookie information, a language used on the customer device 130, screen attributes such as resolution, size, etc., and other information associated with the customer device 130. In various implementations, account information and/or digital wallet information may be stored to cache 134 for use by the customer device 130.
[0068] The cache 134, in one implementation, may include at least one user identifier, which may be implemented, for example, as operating system registry entries, cookies, identifiers associated with hardware of the customer device 130 (e.g., a media access control (MAC) address), or various other appropriate identifiers. The user identifier may include one or more attributes related to the user of the customer device 130, such as personal information related to the user (e.g., one or more user names, passwords, photograph images, biometric IDs, addresses, phone numbers, social security number, etc.) and banking information and/or funding sources (e.g., one or more banking institutions, credit card issuers, user account numbers, security data and information, etc.). In various implementations, the user identifier may be passed with a user login request to the transaction processing server 110 via the network 140, and the user identifier may be used by the service provider server 110 to associate the user with a particular user account maintained by the transaction processing server 160.
[0069] Customer device 130 includes at least one network interface component 170 adapted to communicate with the service provider server 110 and/or the transaction processing server 160. In various implementations, network interface component 170 may include a modem, an Ethernet device, a broadband device, a satellite device and/or various other types of wired and/or wireless network communication devices including microwave, radio frequency, infrared, Bluetooth, and near field communication devices.
[0070] Even though only one customer device 130 is shown in FIG. 1, it has been contemplated that one or more user devices (each similar to customer device 130) may be communicatively coupled with the service provider server 110 via the network 140 within the networked system 100.
[0071] FIG. 2 illustrates a block diagram of the event detection server 150, according to an implementation of the present disclosure. The event detection server 150 includes a feature extraction module 210, a classification module 230, a first pass fraud detection module 240, a data profile module 250, a report generation module 260, and a second pass fraud detection module 270. In various aspects, the network interface component 125 includes API 202. The API 202 is coupled to the event detection server 150 with bidirectional signal paths to the second pass fraud detection module 270. The feature extraction module 210 includes a text preprocessing module 211, a feature extraction engine 212, an e-mail features repository 213, a voice recognition module 214, a text feature extraction engine 215, a transcript features repository 216, an audio feature extraction engine 217, a voice vectors repository 218, an email training dataset 219 and a voice training dataset 220. In some embodiments, when the event detection server 150 detects a communication (e.g., an email, a call, a chat) from a user device to the service provider server 110, the feature extraction module 210 may extract features from the communication. The extracted features may include textual data features extracted from a text portion of the communication and/or audio data features extracted from an audio portion of the communication.
[0072] For example, the feature extraction engine 212 of the feature extraction module 210 can generate textual data features from the textual data. In some examples, the textual data refers to text extracted from electronic document items received by the text preprocessing module 211. The textual data features can be stored in the e-mail features repository 213. In some aspects, the voice recognition module 214 can process the audio properties of a received audio file (e.g., a voicemail) to produce a textual transcript of the received audio file. The voice recognition module 214 may utilize standard speech processing techniques to translate the audio to text. The text feature extraction engine 215 can generate textual data features from the textual transcript associated with the received audio file. The textual data features from the text feature extraction engine 215 can be stored in the transcript features repository 216. The voice recognition module 214 may provide a processed speech data file that includes speech properties of the received audio file. The audio feature extraction engine 217 may generate audio data features from the processed speech data file. The audio feature extraction engine 217 may generate voice vectors with the audio data features, where each portion (or field) of the respective voice vectors corresponds to a different combination of audio data features. For example, each combination of audio data features may represent words and/or phrases of a specific spoken language. The voice vectors can be stored in the voice vectors repository 218. In some aspects, the feature extraction module 210 can map (or transform) the textual data features and the audio data features into respective ones of different feature representations. In other aspects, the preprocessing module 231 may be tasked to perform the feature representation mapping in lieu of the feature extraction module 210.
[0073] The classification module 230 includes a preprocessing module 231, a classifier 232, a fraud class module 233, a voice scoring module 234, a phishing class module 235, a malware class module 236, an other class module 237, and a noise class module 238. In some embodiments, the classification module 230 may determine an intent of the communication based on the extracted features, and may select a machine learning-trained classifier to classify the communication into one or more communication categories.
[0074] The first pass fraud detection module 240 includes a clustering engine 242 and fraud type clusters 244, 246, and 248. In some embodiments, the first pass fraud detection module 240 may use the clustering engine 242 to assign the communication to one of the clusters (e.g., one of the fraud type clusters 244, 246, and 248) of past communications based on the intent and the categories associated with the communication. Embodiments are discussed with reference to FIG. 3.
[0075] The data profile module 250 includes a controller 252, a retrieval engine 254, and a data profile repository 256. The second pass fraud detection module 270 includes an anomaly detection module 280, a feedback module 272, and a remedial action module 274. In some embodiments, the second pass fraud detection module 270 may derive a unique pattern of activity from the communication and may detect one or more anomalies in the communication based on the unique pattern of activity. For purposes of brevity and efficient explanation of the functional aspects associated with the components described in FIG. 2, the block diagram of FIG. 2 will be described in reference to the processes of FIGS. 3 and 4.
[0076] FIG. 3 is a flowchart of an example process 300 of actionable insight analysis into user interaction data, according to an implementation of the present disclosure. One or more of the steps 302-314 of process 300 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 302-314. Some examples of computing devices, such as computer system 1100 of FIG. 11, may include non-transitory, tangible, machine-readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 300. As illustrated, the process 300 includes a number of enumerated steps, but aspects of the process 300 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
[0077] The process 300 begins at step 302, where the event detection server 150 receives user interaction data (e.g., e-mails, an audio clip of a phone call, etc.) associated with an interaction between the customer device 130 and a service provider server (e.g., the transaction processing server 160). In some aspects of receiving the user interaction data, the text preprocessing module 211 may access, through the API 202, textual data associated with a first type of interaction from a first data structure in a data repository communicably coupled to the service provider server 110. In some examples, the text preprocessing module 211 may prepare the received textual data for processing by the feature extraction engine 212, such as by removing any unnecessary terms and/or characters to help improve the extraction performance. In one or more aspects, the voice recognition module 214 may access, through the API 202, audio data associated with a second type of interaction from a second data structure in the data repository. In some aspects, the feature extraction module 210 may group the textual data with the audio data into the user interaction data to combine the first type of interaction with the second type of interaction. In this respect, the textual data and the audio data may correspond to different portions of the user interaction data. In some aspects, the data repository may be remote and/or external to the service provider server 110. In other aspects, the data repository may be internal to the service provider server 110. For example, the data repository is, or includes at least a portion of, the data profile repository 256.
[0078] Next, at step 304, the feature extraction module 210 of the event detection server 150 processes the user interaction data by extracting one or more features from the user interaction data. In some aspects, the extracted features include textual data features and audio data features as described herein. In various aspects, the feature extraction module 210 can select one of multiple different feature extraction algorithms based on a comparison of performance metrics between each of the different feature extraction algorithms. In some examples, the different feature extraction algorithms include, among others, bag-of-words, term frequency-inverse document frequency (TF-IDF), Doc2Vec, and Latent Dirichlet Allocation (LDA). As such, the feature extraction module 210 can apply the selected feature extraction algorithm to the user interaction data.
[0079] Subsequently, at step 306, the classifier 232 of the event detection server 150 classifies the extracted features into one of many actionable insight categories with a machine learning-trained classifier. For example, a first actionable insight category may correspond to a fraudulent activity implemented with the fraud class module 233, a second actionable insight category may correspond to a phishing activity implemented with the phishing class module 235, a third actionable insight category may correspond to a malware activity implemented with the malware class module 236, a fourth actionable insight category may correspond to a noise activity implemented with the noise class module 238, and a fifth actionable insight category may correspond to other (or miscellaneous) activities implemented with the other class module 237. The output of the fraud class module 233 is fed to the first pass fraud detection module 240. The output of the phishing class module 235 may include a notification transmitted to a phishing detection system (not shown), indicating that the user interaction may correspond to a phishing scheme.
[0080] The phishing detection system can analyze and report phishing network links (e.g., URLs) extracted from phishing emails. The output of the malware class module 236 includes signaling that is sent to a malware detection system (not shown). The signaling may be first sent to a virus analysis process to identify whether the manipulated file has been previously identified as malicious and to acquire an associated signature that can be used for blocking. If the virus analysis process does not return a result, then the malware detection system can receive the customer input with malware attachments for download into an isolated virtual environment. The customer input with malware attachments may be analyzed statically (code-based) and dynamically (interaction with the sandbox system) to identify the impact and report the findings to the agent device 120. The output of the other class module 237 includes signaling that is sent to the agent device 120.

[0081] Some examples of the other category may include customer complaints and/or emails about mobile applications not working properly, queries for information regarding products and/or services associated with the service provider server 110, or the like. In the other category, the classifier 232 may identify customer complaints regarding usage of products and/or services associated with the service provider server 110 that can serve as feedback for use by the transaction processing server 160 to identify any underlying issues in the services and/or products. The output of the noise class module 238 includes signaling that is purged. Some examples of noise may include any data that does not relate to the service associated with the service provider server 110, advertisements, subscription emails, and the like. Purging the noise activity can help increase the efficiency and accuracy of the system. For purposes of brevity, the remainder of the steps in the process 300 will be discussed in reference to the fraudulent activity category, but they can also apply to other categories.
[0082] In some implementations, the classifier 232 determines an intent of the interaction from the feature representations using the machine learning-trained classifier. In some aspects, the intent corresponds to one of the different actionable insight categories. In some implementations, the classification module 230 selects one of the machine learning-trained classifiers implemented by the classifier 232 based on a comparison of performance metrics between each of the machine learning-trained classifiers. In some aspects of determining the intent of the interaction, the classifier 232 determines the intent of the interaction using the selected one of the machine learning-trained classifiers.
[0083] In some implementations, the feature extraction module 210 accesses user interaction datasets associated with respective ones of interactions between different communication devices (including the customer device 130) and the service provider server 110. The classifier 232 can then determine an intent of each of the interactions from extracted features associated with each of the interactions using the machine learning-trained classifier. In turn, the classifier 232 can classify each of the interactions as a respective category of the actionable insight categories based at least in part on the intent of that interaction.
[0084] In an offline operation, the classification module 230 may include one or more processors adapted to generate multiple machine learning-based networks based on the actionable insight categories. In some aspects, the machine learning-based networks correspond to the respective actionable insight categories. The one or more processors in the classification module 230 may be adapted to train each of the machine learning-based networks with a respective training dataset to form different machine learning-trained classifiers. In some aspects, the respective training dataset facilitates supervised learning by including labeled interaction data indicating what information pertains to which of the actionable insight categories. In some aspects of determining the intent of the interaction using the machine learning-trained classifier, the classification module 230 can select the machine learning-trained classifier from the machine learning-trained classifiers.
[0085] Next, at step 308, the clustering engine 242 of the first pass fraud detection module 240 generates multiple clusters based on the extracted features using one or more clustering algorithms. For example, the clusters may include cluster 244 (depicted as “fraud type cluster 1”), cluster 246 (depicted as “fraud type cluster 2”), and cluster 248 (depicted as “fraud type cluster N”). In some examples, the one or more clustering algorithms include, among others, K-means, Gaussian mixture model (GMM), and hierarchical clustering. In one or more implementations, the clustering engine 242 maps the interaction to a first cluster of the different first clusters based at least in part on the determined intent. In some aspects, each of the first clusters corresponds to a particular type of activity in a first actionable insight category of the different actionable insight categories. In some examples, the first actionable insight category corresponds to a fraudulent activity classification. In some examples, the particular type of activity may correspond to one of multiple types of fraudulent activity (e.g., fraud type cluster 1, fraud type cluster 2, fraud type cluster N). In some implementations, the clustering engine 242 determines whether the interaction maps to at least one of the first clusters and generates a new cluster to be included in the first clusters when the interaction is determined to not map to the at least one of the first clusters. In some aspects, the new cluster corresponds to a new type of event in the first actionable insight category.
[0086] Subsequently, at step 310, the event detection server 150 detects one or more anomalies in at least one of the clusters through an anomaly detection operation. In some aspects of detecting whether the one or more anomalies are present, the event detection server 150 may utilize the second pass fraud detection module 270 to generate second clusters within the at least one of the first clusters using one or more clustering algorithms. In some aspects, each of the second clusters corresponds to a unique pattern of activity associated with the particular type of activity in the first actionable insight category. In some implementations, the second pass fraud detection module 270 may utilize the cluster engine 286 to generate the second clusters. The second pass fraud detection module 270 may determine whether the unique pattern of activity includes the one or more anomalies for the particular type of activity. In some aspects, the second pass fraud detection module 270 determines that the particular type of activity represents malicious activity when the unique pattern of activity is determined to include the one or more anomalies.
[0087] Next, at step 312, the event detection server 150 issues a remedial action for the interaction by further authenticating the customer device 130 based on the one or more anomalies detected in the at least one of the clusters. In some aspects, the event detection server 150 issues the remedial action based on the mapping of the interaction to the first cluster. In some implementations, the event detection server 150 issues the remedial action using the remedial action module 274. In other implementations, the event detection server 150 issues the remedial action using the report generation module 260.
[0088] Subsequently, at step 314, the event detection server 150 provides, through the API 202 to a second communication device (e.g., the agent device 120) associated with the service provider server 110, an indication of the one or more anomalies. In some aspects, the event detection server 150 generates an alert notification that identifies the malicious activity using the remedial action module 274. In some aspects, in providing the indication, the remedial action module 274 sends, through the API 202 to the agent device 120 over the network 140, the alert notification. In some implementations, the remedial action module 274 conditionally sends the alert notification. For example, the remedial action module 274 may determine whether a threshold number of anomalies is detected in the first cluster. The remedial action module 274 generates a report indicating the one or more anomalies when the threshold number of anomalies is detected. In an aspect, the detected number of anomalies exceeds the threshold number of anomalies.
[0089] FIG. 4 is a flowchart of an example process 400 of probabilistic anomaly detection and mediation, according to an implementation of the present disclosure. One or more of the steps 402-410 of process 400 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 402-410. Some examples of computing devices, such as computer system 1100, may include non-transitory, tangible, machine-readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 400. As illustrated, the process 400 includes a number of enumerated steps, but aspects of the process 400 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
[0090] The process 400 begins at step 402, where the feature preprocessing module 281 of the anomaly detection module 280 receives device data of a user device (e.g., the customer device 130) during a communication session between the user device and a service provider server 110. For example, a user, through the user device, may communicate with the service provider server 110 (e.g., accessing a website associated with the service provider server 110, initiating an electronic transaction with the service provider server 110, etc.). Upon detecting the communication session between the user device and the service provider server 110, the feature preprocessing module 281 may obtain device data associated with the user device. In some aspects, the device data may include device attributes that, in combination, are unique to a type of device (e.g., a manufacturer, a model, etc.) associated with the customer device 130. Thus, the combination of the device attributes may be used by the feature preprocessing module 281 to verify whether the user device is of a particular type (e.g., a particular manufacturer, a particular model, etc.) that the user device purports to be. In an aspect, each device attribute in the different device attributes is marked with a different descriptor and a corresponding attribute value that are specific in identifying the device type of the customer device 130. For example, the different device attributes may include, among others, a pixel depth attribute, version information of a particular application (e.g., a web browser) of the customer device 130, an identifier, a list of plug-ins installed on the customer device 130, a font used in the customer device 130, a language used in the customer device 130, screen attributes such as a width, a height, a resolution, and other attributes.
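An illustrative, hypothetical device-data record of this kind, with each attribute carrying a descriptor and a value, checked against an expected profile for the purported device type:

```python
# Hypothetical device-data record; descriptors and values are illustrative.
device_data = {
    "pixel_depth": 24,
    "browser_version": "96.0.4664.110",
    "plugins": ["pdf-viewer"],
    "font": "Segoe UI",
    "language": "en-US",
    "screen_width": 1920,
    "screen_height": 1080,
}

# Expected attribute values for the device type the client claims to be.
expected_profile = {"pixel_depth": 24, "language": "en-US", "font": "Segoe UI"}

# Mismatching values are evidence the device is not what it purports to be.
mismatches = {k: device_data.get(k) for k, v in expected_profile.items()
              if device_data.get(k) != v}
is_consistent = not mismatches
```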
[0091] In some aspects of receiving the device data, the feature preprocessing module 281 may receive the device data of the customer device 130 in response to the occurrence of an event associated with the customer device 130. For example, the event occurrence may include a user interaction between the customer device 130 and the service provider server 110, such as a connection attempt to the service provider server 110. In some aspects, the event may identify a device type of the customer device 130 (e.g., a particular manufacturer, a particular model, etc.) and a type of the user interaction (e.g., connection attempt).
[0092] In other aspects of receiving the device data, the feature preprocessing module 281 may receive first logs indicating a first number of events associated with user devices (e.g., other customer devices 130d) that occur in a first window of time. In some aspects, each event of the first number of events may include first device data corresponding to the first window of time for a user device of the user devices. The feature preprocessing module 281 also may receive second logs indicating a second number of events associated with the user devices that occur in a second window of time different from the first window of time (e.g., occurring at a later time). In some aspects, each event of the second number of events may include second device data corresponding to the second window of time for a user device of the user devices.
[0093] In still other aspects of receiving the device data, the feature preprocessing module 281 may receive user queries. In some aspects, each of the user queries may include a user interaction between a user account of a service (e.g., electronic payment processing) and the service provider server 110 associated with the service. The feature preprocessing module 281 may perform an extraction operation, where the feature preprocessing module 281 may extract metadata of user interactions from the user queries. In some aspects, the metadata includes the device data of the customer device 130.
[0094] In some implementations, the feature combination module 282 of the anomaly detection module 280 can select a combination of features from the different features. In some aspects, the combination of features includes features having a variance of expected values that exceeds a threshold variance. In some aspects of selecting the combination of features, the feature combination module 282 selects multiple device attribute combinations from the different device attributes that represent features of interest. In some aspects, each of the multiple device attribute combinations corresponds to a different combination among a subset of the different device attributes.
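A minimal sketch of this selection, assuming numpy and illustrative attribute data: attributes whose observed variance exceeds a threshold variance become features of interest, and pairwise combinations of the survivors form the device attribute combinations:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
# Synthetic observations; columns are device attributes with differing spread.
attribute_values = rng.normal(scale=[0.1, 2.0, 1.5, 0.05], size=(1000, 4))
names = ["pixel_depth", "screen_width", "screen_height", "color_depth"]

threshold_variance = 0.5               # illustrative threshold
variances = attribute_values.var(axis=0)
features_of_interest = [n for n, v in zip(names, variances)
                        if v > threshold_variance]

# Each combination is a different subset of the device attributes.
attribute_combinations = list(combinations(features_of_interest, 2))
```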
[0095] Next, at step 404, the anomaly detection module 280 selects a prediction engine to process the device data and generate a vector of likelihood scores for different device attribute combinations. For example, the anomaly detection module 280 may select a prediction engine from different prediction engines to process the device data, where the selected prediction engine corresponds to a non-parametric statistical model of different non-parametric statistical models. The anomaly detection module 280 may generate a vector of likelihood scores for different device attribute combinations with different non-parametric statistical models. In some aspects, the non-parametric estimation module 283 may generate a histogram for each of the different device attribute combinations of the particular device type and estimate a probability density function distribution that best fits the histogram. In some aspects, the histogram includes a two-dimensional relationship between first attribute values of a first device attribute against second attribute values of a second device attribute in a given device attribute combination of the device attribute combinations. In other aspects, the histogram can indicate relationships between N device attributes in N-dimensional space in a given device attribute combination of the device attribute combinations, where N is a positive integer.
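A minimal sketch of the non-parametric estimation, assuming numpy/scipy and synthetic observations; scipy's gaussian_kde stands in here for whatever density estimator best fits the histogram:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Historical (width, height) observations for one purported device type.
observed = np.vstack([rng.normal(1920, 5, 5000), rng.normal(1080, 5, 5000)])

# Two-dimensional histogram of the attribute pair, then a density estimate.
hist, xedges, yedges = np.histogram2d(observed[0], observed[1], bins=50)
density = gaussian_kde(observed)

# Likelihood score for a new device's attribute combination; a vector of such
# scores (one per combination) feeds the threshold check in step 406.
likelihood = float(density(np.array([[1024.0], [768.0]])))
```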
[0096] In some implementations, in an offline operation, the anomaly detection module 280 generates a first number of non-parametric statistical models with first predetermined thresholds based on a first window of time, and the anomaly detection module 280 generates a second number of non-parametric statistical models with second predetermined thresholds based on a second window of time. In some aspects, the second window of time is subsequent to the first window of time. In one or more aspects, at least one of the first predetermined thresholds is different from at least one of the second predetermined thresholds.
[0097] In some aspects, the non-parametric statistical models include machine learning-based models. In one or more implementations, the anomaly detection module 280 can train the machine learning-based models with a training dataset that indicates user interactions captured within one or more predetermined windows of time and a mapping of the user interactions to predetermined probability density function distributions. The training of the machine learning-based models can occur in an offline mode of the anomaly detection module 280. In some aspects of selecting the prediction engine of the prediction engines, the anomaly detection module 280 processes device attributes from the device data with the machine learning-based models. In turn, the anomaly detection module 280 can generate a likelihood prediction for each of the device attributes with the different machine learning-based models.
[0098] In some aspects, each of the machine learning-based models includes one or more executable programs and/or models configured to initially process one or more training datasets having transactions processed by an entity, including service provider server 110. The transactions in the training datasets 219, 220 may include legitimate transactions and malicious and/or fraudulent transactions, such as those transactions prohibited due to money laundering laws, rules, and regulations when entities engage in illegal and/or malicious behavior. The training dataset may include labeled and/or unlabeled data, which may include classifications of valid transactions and prohibited transactions (e.g., “no fraud” or “potential or detected fraud,” respectively). These may be labeled by a human operator, such as an agent that reviews transactions for prohibited activity, such as fraud and the like, for reporting to a regulatory agency, body, or entity. Thus, one or more classifiers may be established by the agent or entity processing the data, or may be determined based on outlier transactions or transactions having features indicating prohibited conduct or behavior. Thus, the classifiers may be built and trained so that classifications may be assigned to particular data points (e.g., transactions) within each of the training datasets 219, 220.
[0099] The training datasets 219, 220 may include different features, such as a platform for the transaction (e.g., mobile, web, etc.), an account number, a transaction identifier (ID), a transaction type (e.g., payment, gambling, etc.), an encrypted transaction ID, a parent transaction ID, a created and/or updated date, a US dollar equivalent amount (e.g., where credits and sent payments may be in a negative format), a local currency amount and/or code, a billing and/or shipping address, a funding source and/or backup funding source, a bank account number, a bank hash-based message authentication code (HMAC), a card number and/or hash, a card bin HMAC, a card issuer, a balance and/or impact on a balance due to the transaction, a transaction status and/or items within the transaction, notes and/or subject lines within messages for the transaction, an automated clearinghouse return code, an ID on another marketplace or platform, a counterparty name, a counterparty account number, a counterparty account type, a counterparty country code, a counterparty email, a counterparty transaction ID, a counterparty ID on a marketplace or platform, a counterparty account status, a referring URL, an IP address, whether the transaction was successful, and a date (e.g., month/year) of the transaction.
[0100] Other exemplary features and/or categories of features in the training datasets 219, 220 that may be important to training the values and weights of a machine learning model may include risk rules regarding flagging of transactions as having incorrect descriptions or messages, complaints and flags by other parties within transactions, gambling activities including fantasy sports, specific country accounts and transaction activities from countries marked as high risk for fraud, a same or similar account owner for a sender and receiver in a transaction, counterfeit flagged accounts, volume of payments in a high risk transaction corridor or category, a spike in activity or transaction value after a dormant or inactive period, a number of transactions and total amount (including if the transactions were cross-border transactions), a previous account takeover flag, a malicious seller flag, an account restriction due to previous malicious use or rule violation, a cross-border payment from a device using in-person payment instrument processing (e.g., through processing a payment card EMV chip or magnetic stripe to provide the payment), a check deposit amount and transfer of deposited funds, a deposit and withdrawal/transfer of all or a substantial portion of the deposit within a time period, a gift card usage and withdrawal/transfer of such funds, a premier account usage and activity/inactivity, and/or a number of transactions between the same parties.
[0101] When generating machine learning engine 132, the features in the training datasets 219, 220 may be used to generate different layers of a machine learning model used to detect the prohibited transactions, which may include different nodes, values, weights, and the like. The machine learning-based model may utilize a supervised machine learning algorithm, function, or technique that utilizes continuous and/or iterative learning to generate the model. In some implementations, the machine learning-based model may be implemented as a deep learning network, including a convolutional neural network, a recurrent neural network, or a deep neural network. When training the model, the anomaly detection module 280 may utilize feedback and annotations or labeling from the agent device 120 to iteratively train the model. For example, transactions in the training dataset and/or other datasets may be flagged using the machine learning technique to identify prohibited transactions, where the agent device 120 may send an indication that the flagged transactions were not actually prohibited (e.g., not indicative of or including fraud). Identification of these false positives may be used to retrain the machine learning-based model in a continuous and/or iterative process so that false positives may be reduced and/or eliminated, and the machine learning-based model may more accurately predict and detect fraud or other prohibited transactions. Thus, the machine learning-based model is trained for detection of prohibited transactions, as well as review of results from the machine learning-based model that has been modeled for prohibited transaction detection.
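A hedged sketch of this feedback loop, assuming scikit-learn and synthetic data: transactions flagged by the model that the agent marks as false positives are relabeled and the model is refit, reducing future false positives. Names and data are illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 10))
# Synthetic labels: 1 = prohibited transaction, 0 = legitimate.
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 1.2).astype(int)

model = GradientBoostingClassifier().fit(X, y)

def retrain_with_feedback(model, X, y, false_positive_idx):
    """Relabel agent-confirmed false positives and refit the model."""
    y = y.copy()
    y[false_positive_idx] = 0        # agent says: not actually prohibited
    return model.fit(X, y), y

flagged = np.where(model.predict(X) == 1)[0]
# Suppose the agent device reports the first few flags were legitimate:
model, y = retrain_with_feedback(model, X, y, flagged[:5])
```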
[0102] Although the above discussions pertain to an artificial neural network as an example of machine learning, it is understood that other types of machine learning methods may also be suitable to implement the various aspects of the present disclosure. For example, support vector machines (SVMs) may be used to implement machine learning. SVMs are a set of related supervised learning methods used for classification and regression. An SVM training algorithm — which may be a non-probabilistic binary linear classifier — may build a model that predicts whether a new example falls into one category or another. As another example, Bayesian networks may be used to implement machine learning. A Bayesian network is an acyclic probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). The Bayesian network can represent the probabilistic relationship between one variable and another variable. Other types of machine learning algorithms are not discussed in detail herein for reasons of simplicity.
[0103] Subsequently, at step 406, the anomaly detection module 280 determines whether at least one likelihood score in the vector of likelihood scores fails to satisfy one or more predetermined thresholds. For example, the anomaly detection module 280 may use the threshold comparator module 284 to compare each likelihood score in the vector of likelihood scores against the one or more predetermined thresholds to determine whether at least one likelihood score fails to satisfy the one or more predetermined thresholds. The anomaly detection module 280 may also determine that the at least one likelihood score corresponds to a particular device attribute combination.

[0104] Next, at step 408, the anomaly detection module 280 detects an anomaly in the user interaction based on the at least one likelihood score failing to satisfy the one or more predetermined thresholds. For example, the anomaly detection module 280 may select an anomaly detection engine from different anomaly detection engines to detect an anomaly in the user interaction based on a result of the comparison operation that indicates at least one likelihood score in the vector of likelihood scores does not satisfy the one or more predetermined thresholds. In some embodiments, the probability density function module 285 may process device attributes from the device data with a probability density function algorithm. The probability density function module 285 can generate a probability density distribution with the probability density function algorithm. In some aspects, the probability density distribution includes a two-dimensional relationship between different likelihood values against different device attribute values for a given device attribute combination from the different device attribute combinations. For example, for each device attribute combination in the different device attribute combinations, the anomaly detection engine can generate a likelihood metric indicating that a corresponding attribute value associated with the user interaction is an actual occurrence. In other aspects, the probability density function module 285 may obtain a user profile that indicates a user behavior associated with one or more of the user device or a user account of a service, and the probability density function module 285 may determine a likelihood metric indicating that a corresponding attribute value associated with the user interaction is an actual occurrence, based on the user profile.
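A minimal sketch of the threshold comparison in steps 406-408, with illustrative scores and thresholds (a per-combination threshold map is an assumption, not the platform's actual structure):

```python
# One likelihood score per device attribute combination (illustrative).
likelihood_scores = {
    ("screen_width", "screen_height"): 0.91,
    ("browser_version", "pixel_depth"): 0.02,
    ("language", "font"): 0.74,
}
thresholds = {combo: 0.05 for combo in likelihood_scores}

# Combinations whose score fails to satisfy the predetermined threshold.
failing = {combo: s for combo, s in likelihood_scores.items()
           if s < thresholds[combo]}
anomaly_detected = bool(failing)   # here: browser_version/pixel_depth fails
```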
[0105] In other implementations, the cluster engine 286 of the anomaly detection module 280 can apply a clustering algorithm to a vector of likelihood scores to generate one or more clusters. This may be an alternative track to the use of the non-parametric estimation module 283 and the probability density function module 285. The outlier detection module 289 of the anomaly detection module 280 can detect a data point corresponding to at least one likelihood score of the vector of likelihood scores that is outside of the one or more clusters. The outlier detection module 289 can apply a Euclidean multi-dimensional distance calculation to determine a distance from the one or more clusters in N-dimensional space, where N is a positive integer. The outlier detection module 289 determines whether the data point is a member of the one or more clusters or an outlier based on the distance. In some aspects, the outlier detection module 289 determines that the data point is an outlier based on the distance being greater than a threshold distance. In this respect, the outlier corresponds to the anomaly in the user interaction.

[0106] In still other implementations, the singular value decomposition engine 287 of the anomaly detection module 280 can apply a singular value decomposition algorithm to a vector of likelihood scores to determine one or more principal components in the vector of likelihood scores in terms of eigenvalues and eigenvectors. In some aspects, each of the one or more principal components identifies which of the device attributes has a highest variance based at least on a ranking of the eigenvectors by order of variance. The principal component analysis engine 288 of the anomaly detection module 280 then applies a projection operation by matrix multiplication to project new attribute values in a direction of a principal component through an eigenvector that corresponds to the principal component. In turn, the principal component analysis engine 288 can generate a cluster in an eigenspace with the projected new attribute values. The outlier detection module 289 can then detect a data point corresponding to at least one projected new attribute value that is outside of the cluster. The outlier detection module 289 applies a Euclidean multi-dimensional distance calculation to determine a distance from the cluster and determines whether the data point is a member of the cluster or an outlier based on the distance. In some aspects, the outlier detection module 289 determines that the data point is an outlier based on the distance being greater than a threshold distance. In this respect, the outlier corresponds to the anomaly in the user interaction.
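A hedged sketch of the singular-value-decomposition track, assuming numpy and synthetic likelihood vectors: rank eigenvectors by variance, project onto the leading principal component, and flag points whose distance from the projected cluster's centroid exceeds a threshold. The data and the threshold distance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
V = rng.normal(size=(200, 6))                 # historical likelihood vectors
V_centered = V - V.mean(axis=0)

# Eigenvectors ranked by variance (singular values in descending order).
_, singular_values, Vt = np.linalg.svd(V_centered, full_matrices=False)
principal_axis = Vt[0]                        # direction of highest variance

projected = V_centered @ principal_axis       # cluster in eigenspace
centroid = projected.mean()

def is_outlier(new_vector, threshold_distance=3.0):
    """Project a new value onto the principal component and apply the
    Euclidean distance test against the cluster centroid."""
    p = (new_vector - V.mean(axis=0)) @ principal_axis
    return abs(p - centroid) > threshold_distance

anomaly = is_outlier(rng.normal(size=6) + 10.0)   # probe a far-off point
```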
[0107] Subsequently, at step 410, the remedial action module 274 of the second pass fraud detection module 270 issues a remedial action to the customer device 130 through the API 202 in response to detecting the anomaly in the user interaction. For example, the remedial action module 274 may further authenticate the customer device 130 when the anomaly in the user interaction is detected.
[0108] In some aspects, in issuing the remedial action, the remedial action module 274 issues, through the API 202, a first remedial action that denies the customer device 130 access to the service when the at least one likelihood score is less than a first predetermined threshold. In some aspects, the remedial action module 274 also may issue a second remedial action that prompts the customer device 130 to provide additional authentication information when the at least one likelihood score is greater than the first predetermined threshold and less than a second predetermined threshold. In other aspects, the event detection server 150 may grant the customer device 130 access to the service when each likelihood score in the vector of likelihood scores is greater than the second predetermined threshold.
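This tiered logic reduces to a small decision function; the threshold values below are illustrative:

```python
def remedial_action(scores, first_threshold=0.05, second_threshold=0.20):
    """Deny below the first threshold, step up authentication between the
    two, grant access only when every score clears the second threshold."""
    worst = min(scores)
    if worst < first_threshold:
        return "deny_access"
    if worst < second_threshold:
        return "request_additional_authentication"
    return "grant_access"

assert remedial_action([0.9, 0.01]) == "deny_access"
assert remedial_action([0.9, 0.12]) == "request_additional_authentication"
assert remedial_action([0.9, 0.55]) == "grant_access"
```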
[0109] In some implementations, the remedial action module 274 sends one or more messages to the customer device 130 as a next remedial action to prompt the customer device 130 to supply further authentication information when an anomaly in the user interaction is detected. In turn, the feedback module 272 of the second pass fraud detection module 270 may receive user feedback in response to the one or more messages. The feedback module 272 may aggregate the feedback and/or post-process the user feedback to a suitable format. The feedback module 272 can provide the user feedback to the non-parametric statistical models in the non-parametric estimation module 283. In some aspects, the user feedback indicates updated authentication information of the customer device 130. In turn, the anomaly detection module 280 may adjust the one or more predetermined thresholds based on the user feedback.
[0110] FIG. 5 illustrates a block diagram of a data profile module 250 for browser data retrieval and analysis, according to an implementation of the present disclosure. The data profile module 250 represents a novel framework for in-depth and automatic testing of current and pre-release web browser applications with the intent of generating a comprehensive database of their behaviors, features, and functionality in order to provide additional capabilities for detection and mitigation of malicious actors using modified or spoofed client devices. The data profile module 250 may allow baselining for statistical modeling of browser traffic and can generate alert notifications of new functionality that can be implemented into anti-fraud defense mechanisms.

[0111] The data profile module 250 includes a retrieval engine 254 that is adapted to fetch device data, such as web browser information, from multiple data sources 510. The data sources 510 may include browser repositories 510a-510c storing information associated with different types of browsers (e.g., different manufacturers, different models, different versions, which include current and also pre-release versions and models, etc.) and a fraud tool repository 510d storing information associated with known tools used by malicious users for launching attacks on web servers. The data profile module 250, knowledgeable about different vendors’ browser release channels and their typical download URLs, regularly fetches new versions. This may be in response to a new release, or may be part of a regular daily process. These may be precompiled binary downloads, or the data profile module 250 itself may be able to compile instances of the browser from retrieved source code. Thus, the data profile module 250 may retrieve or otherwise obtain executable code associated with different browser applications (e.g., from different manufacturers, different models, different versions, etc.). For example, the data profile module 250 may obtain a Google Chrome browser version 2.1, a Google Chrome browser version 3.0, an Apple Safari browser version 11.0, an Apple Safari browser version 12.0, a Microsoft Edge browser version 3.2, etc., and store the different browser applications in the data profile repository 256.
[0112] Subsequently, the controller 252 of the data profile module 250 can deploy a browser instance 530 (associated with any one of the browser applications stored in the data profile repository 256) in a testing environment 502. In some embodiments, the testing environment 502 may include a virtual machine or a container using virtualization technologies. For example, the controller 252 can deploy the browser instance 530 in a virtual machine. The intent is to have an entirely ‘clean’ instance of the browser with no customizations or user data that may change its behavior from the pre-set defaults. This automation process to deploy the browser instance 530 can target several different methods, for example: (1) deployment of the browser instance 530 to a new virtual machine, using a fully virtualized environment, (2) deployment of the web browser instance 530 to a segregated container, (3) automation of the web browser instance 530 on a hardware device, for example, using a remote automation process to remotely control the web browser on a device (e.g., a smart phone such as an ANDROID™ phone), and (4) automation of the web browser instance 530 on an emulated device, for example, using the web browser instance 530 (e.g., SAFARI™, etc.) within an emulator.
[0113] In some implementations, the browser instance 530 of the fetched web browser is invoked in the testing environment 502. Using a browser automation process, the browser instance 530 can be controlled and run through a series of interactions with a web server 540. For example, the browser instance 530 can interact with the web server 540 via the virtual machine. The controller 252 can inspect the data that the web browser instance 530 sends for anti-fraud purposes, and can run programming code (e.g., JavaScript fingerprinting code, etc.) in an attempt to identify users over time for anti-fraud purposes. Most importantly, the data profile module 250 can detect spoofed web browsers running on the customer device 130 used for abuse and financial fraud based on comparing the device data obtained from the web browsers running on the customer device 130 and the data obtained through the automated analysis process via the interactions between the browser instance 530 and the web server 540 as described herein. These features may include network/communication details, such as: (1) the HTTP headers the browser sends in a brand new, unmodified install, including (i) the values that are provided, (ii) the order the headers are sent in, and (iii) which headers are sent under what conditions; (2) when connecting to a secure site, the TLS ciphers that the web browser supports, and in what preference order they are requested; and (3) the protocols the web browser supports (e.g., HTTP 1.0, 1.1, QUIC, 2.0, 3.0, etc.) and the supported communication methods (e.g., WebSockets, WebRTC, etc.).
[0114] The programming code (e.g., the JavaScript fingerprinting code) that is used for fingerprinting can identify device features, such as: (1) the document object model (DOM) processes and attributes the web browser claims to support; (2) the order in which the web browser lists these attributes, when requested; (3) an indication of whether the aforementioned processes appear to be “native” values generated by the browser software, or overwritten by the user; (4) the browser-specific processes and attributes it exposes (e.g., FIREFOX™ exposes FIREFOX™-specific CSS attributes with the “-moz-” prefix); and (5) how the web browser handles and responds to error conditions, including those deliberately invoked (e.g., when executing JavaScript code to evaluate the mathematical expression “ThisIsNotANumber / 0”, CHROME™ returns the error message “Uncaught ReferenceError: ThisIsNotANumber is not defined” whereas SAFARI™ returns the error message “ReferenceError: Can’t find variable ThisIsNotANumber”).
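A hedged sketch of such an interrogation, assuming the Selenium automation library driving a headless Chrome instance; the URL is a placeholder for the controlled web server 540, and the collected fields are illustrative:

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")       # no UI; fresh default profile
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/")       # stand-in for the test server

    # Attributes the browser exposes on navigator, in the order it lists them.
    navigator_attrs = driver.execute_script(
        "return Object.getOwnPropertyNames(Object.getPrototypeOf(navigator));")

    # Forced-error fingerprint: engines word this error message differently.
    error_message = driver.execute_script(
        "try { ThisIsNotANumber / 0 } catch (e) { return e.message; }")

    profile = {
        "attr_count": len(navigator_attrs),
        "attr_order": navigator_attrs,
        "error_style": error_message,
    }
finally:
    driver.quit()
```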
[0115] By controlling the browser’s actions and having the ability to serve content to it and record its responses and behaviors, the data profile module 250 can thus record both client-side and server-side details. For example, a remote process can be invoked to serve as an automation tooling mechanism to drive the web browser instance 530 through a suite of tests and actions. The browser instance 530 can record the web browser traffic to the web server 540 as well as the return signaling from the web server 540. The web server 540 can record its outgoing web browser traffic to the browser instance 530. The recorded client-side data and server-side data can be stored in a data structure to maintain a database of web browser behavior, attributes, functionality, etc.
[0116] There are several use cases to which the data profile module 250 is applicable. For example, the data profile module 250 can be used for alerting of new or forthcoming attributes or behaviors in web browsers. For example, a currently-released version of a particular web browser may expose a particular number of JavaScript DOM attributes (e.g., 100), as determined through the automated deployment process described herein. Through automation, the data profile module 250 can download the latest Beta version and observe that the number of JavaScript DOM attributes exposed by the latest Beta version is different from the current version (e.g., 103 attributes). The data profile module 250 can alert a developer or analyst, giving them the name and location of the additional attribute(s) that is not exposed by the current version, its default value, and how it behaves in certain forced-error conditions. The data profile module 250 may also determine if the additional attribute(s) is useful for anti-fraud purposes, and implement the additional attribute(s) within a fingerprinting mechanism such that when the new version of the particular web browser is released, the data profile module 250 can be updated with new defensive capabilities.
[0117] In another use case, the data profile module 250 can be used for alerting of changed or removed attributes or behaviors. Similar to the first use case, if the data profile module 250 determines that a useful feature used by an anti-fraud system is removed in an upcoming version, or that it now returns a different result or behaves in a different way than previously observed, the data profile module 250 can determine mitigations or alternative approaches (e.g., updating the fraud determination process to reflect the changes) instead of taking a reactive action when data quality suddenly drops or fraud increases.
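Both alerting use cases reduce to diffing the attribute sets recorded for two builds; a minimal sketch with illustrative attribute names:

```python
# Attribute sets recorded for the current release and the Beta build
# (illustrative values; real sets come from the interrogation step).
current_attrs = {"userAgent", "language", "plugins", "hardwareConcurrency"}
beta_attrs = current_attrs | {"userAgentData", "gpu", "deviceMemory"}

added = sorted(beta_attrs - current_attrs)
removed = sorted(current_attrs - beta_attrs)

if added or removed:
    print(f"Alert: {len(added)} new attribute(s): {added}")
    print(f"       {len(removed)} removed attribute(s): {removed}")
```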
[0118] In another use case, the data profile module 250 can baseline the features. The data profile module 250 may perform risk scoring based on technical data that the web browser sends to an online service provider (e.g., service provider server 110). For example, the technical data may indicate that the web browser supports a specified number of encryption ciphers and indicates a prioritized order for the ciphers. This type of information may be part of the browser’s internal operations and is not typically modifiable by the user. The data profile module 250 can, over time, observe typical patterns of data associated with a particular browser version. The data profile module 250 can determine “known good” baseline profiles that can be used to accelerate this learning process. This known good data can then be sent to an analyst (e.g., to the agent device 120) for implementation in a rule or model, can be fed automatically into a model for use in training, or can be fed into a live risk assessment for immediate use.
[0119] In still another use case, the data profile module 250 can be used for analyzing criminal fraud and/or abuse tools. For example, malicious attackers may invoke anti-detect processes to submit spoofed browser data when interacting with the online service provider. The spoofed browser data is used to hide the fact that these malicious attackers are using such a tool, instead pretending to be a normal web browser. In other examples, the malicious attackers may spoof certain attributes in an attempt to appear like a particular customer who they know uses a certain browser profile. It is common for these types of tools to be built on top of existing web browser processes with additional modifications made. The data profile module 250 may load copies of criminal fraud tools and profile them in depth, comparing results to a legitimate version of the web browser application in order to determine the exact changes made by the malicious attacker, which can then be used to implement anti-fraud defenses against the tool.
[0120] In yet another use case, the data profile module 250 can be used for detection of mismatching features. The data profile module 250 may detect valid but mismatching feature sets compared to the claimed agent device. For example, a client (e.g., the customer device 130) may claim to run a non-desktop version of a select web browser but have the feature set and attributes of a desktop version of another select web browser. The data profile module 250 may determine that the client is involved in malicious activities with the service provider server 110 when such a mismatch is detected.

[0121] In still yet another use case, the data profile module 250 can send precisely targeted anti-fraud challenges to suspect devices. For example, existing anti-fraud defenses may examine particular DOM values or execute particular code statements in an attempt to fingerprint the client device. These defenses are commonly designed to work on as many device types and browser versions as possible, to ensure effective collection from the widest possible audience. With granular visibility into all browser versions, the data profile module 250 may deploy highly targeted fingerprinting code. For example, if the data profile module 250 determines that a particular version of FIREFOX™ has one hundred unique attributes, the data profile module 250 may prompt each client for three different attributes (out of the one hundred unique attributes), thus yielding over 160,000 potential combinations of tests. This can significantly increase the complexity for an adversary to gain a complete understanding of the tests that the data profile module 250 can run.
[0122] FIG. 6 conceptually illustrates an exemplary workflow 600 of the automated device data retrieval and analysis of FIG. 5, according to an implementation of the present disclosure. For purposes of brevity and efficient explanation of the functional aspects associated with the components described in FIG. 6, the block diagram of FIG. 6 will be described in reference to the processes of FIG. 7.
[0123] FIG. 7 is a flowchart of an example process 700 of automated device data retrieval and analysis, according to an implementation of the present disclosure. One or more of the steps 702-714 of process 700 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 702-714. Some examples of computing devices, such as computer system 1100, may include non-transitory, tangible, machine-readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 700. As illustrated, the process 700 includes a number of enumerated steps, but aspects of the process 700 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
[0124] The process 700 begins at step 702, where the retrieval engine 254 of the data profile module 250 receives, over the network 140, device data associated with an application over a communication channel from one or more data sources. In some aspects, the retrieval engine 254 may receive device data associated with multiple applications (e.g., different manufacturers, different models, different versions, etc.) from the one or more data sources. For example, the retrieval engine 254 can receive first device data that corresponds to a first version of different versions for the at least one application, for example, from one of the browser repositories 510a-c. In other aspects, the retrieval engine 254 can receive the first device data and second device data that corresponds to a second version of the different versions that is at least in part different from the first version, for example, from another one of the browser repositories 510a-c. In various implementations, the retrieval engine 254 is communicably coupled to the API 202.

[0125] Next, at step 704, the controller 252 of the data profile module 250 invokes an instance (e.g., the browser instance 530) of the application in a remote processing environment (e.g., the testing environment 502) using the device data. The remote processing environment is remote to the customer device 130. For example, at action 610, the instance of a web browser is invoked and controlled by an automation process executable in the remote processing environment. In some aspects of invoking the instance of the at least one application in the remote processing environment, the controller 252 deploys the at least one application to a virtual machine of different virtual machines using a virtualized environment. In other aspects of invoking the instance of the at least one application in the remote processing environment, the controller 252 deploys the at least one application to a segregated container accessible to the controller of the service provider server 110. In still other aspects of invoking the instance of the at least one application in the remote processing environment, the controller 252 establishes a connection with the instance of the at least one application executed on a remote communication device. In this respect, the controller 252 can establish remote control of the instance of the at least one application through the connection to the remote communication device. In yet other aspects of invoking the instance of the at least one application in the remote processing environment, the controller 252 can establish a connection with the instance of the at least one application executed on a remote emulation device. In this respect, the controller 252 can establish remote control of the instance of the at least one application through the connection to the remote emulation device.
[0126] Subsequently, at step 706, the controller 252 causes the instance of the application to interact with a web server over the network. For example, the controller 252 can send, to the remote processing environment, a control message that causes the instance of the at least one application to interact with the web server 540 over the network 140. In one or more implementations, the controller 252 sends, to the remote processing environment, a first control message that prompts the instance of the at least one application to send a request to the web server 540 for returning a process script that is executable on the remote processing environment. For example, at action 620, the first control message may include an instruction to the web browser to request “Test 1.” Thereafter, at action 622, the instance of the web browser transmits a request for “Test 1.” At action 624, the web server 540 transmits interrogation scripts and web page documents to the web browser in response to the request. For example, at action 626, the web server 540 responds to the request by transmitting a return message containing script code for executing “Test 1.”
[0127] Next, at step 708, the controller 252 can access traffic data associated with one or more interactions between the instance of the application and the web server. In some aspects, the traffic data indicates a behavior of application data associated with the at least one application based on the one or more interactions with the web server 540. In some aspects of accessing the traffic data, the controller 252 can acquire first traffic data transmitted from the instance of the at least one application to the web server. In some aspects, the first traffic data includes one or more of (1) behavioral patterns associated with the at least one application in the one or more interactions over a predetermined period of time, (2) multiple attributes of the at least one application that are set for the one or more interactions, or (3) multiple functionality parameters of the at least one application that are activated in the one or more interactions.
[0128] In some aspects of accessing the traffic data, the controller 252 executes the process script on the remote processing environment with the instance 530 of the at least one application. The controller 252 can interrogate the at least one application with the executed process script. In some aspects, the interrogating includes the one or more interactions between the instance 530 and the web server 540. The controller 252 can then log results of the interrogating in a data structure of the data profile repository 256. For example, at action 632, the controller 252 can record client-side activity data (e.g., occurring at the browser instance 530), such as DOM attributes and values and/or generated error messages, among others. At action 634, the controller 252 can record data sent from the browser instance 530 to the web server 540, such as supported TLS ciphers, HTTP request headers, among others. In some aspects of interrogating the at least one application with the executed process script, the controller 252 can invoke a predetermined error condition in the one or more interactions. As such, the controller 252 can log a response of the browser instance 530 of the at least one application based on the predetermined error condition.
[0129] Subsequently, at step 710, the controller 252 determines one or more features of the application in a native state from the traffic data. As discussed above, the traffic data may indicate the behavior of the application. Thus, the controller 252 may determine the one or more features of the application based on the behavior of the application. In some examples, values in the native state may correspond to default values (or reset values) set for attributes of the at least one application.
[0130] Next, at step 712, the controller 252 generates a data profile of the application that indicates the one or more features in the native state. In some aspects of generating the data profile, the controller 252 can generate a data structure that includes application data that corresponds to the first version. In some aspects, the application data indicates one or more of (1) behavioral patterns associated with the first version over a predetermined period of time, (2) multiple attributes of the at least one application that correspond to the first version, or (3) multiple functionality parameters of the at least one application that correspond to the first version. In some aspects of generating the data profile, the controller 252 can generate a first data structure that comprises first application data that corresponds to the first version and a second data structure that comprises second application data that corresponds to the second version. In some aspects, the first data structure and the second data structure are prioritized based on a first weighting associated with the first application data and a second weighting associated with the second application data. In some aspects, the first weighting and the second weighting are based at least in part on respective lifecycles of the first application data and the second application data.
[0131] Subsequently, at step 714, malicious activity performed by a user device may be detected based on the data profile. For example, the controller 252 provides the traffic data and the data profile to the data profile repository 256 accessible to one or more remote devices associated with a service. The remote devices may detect malicious activity performed by the customer device 130 by comparing device data associated with the customer device 130 and the data profile. For example, when the customer device 130 indicates that the application that is in communication with the service provider server 110 is of a particular type (e.g., a particular manufacturer, a particular model, a particular version, etc.), the remote device may compare the device data received from the customer device 130 (which may include data associated with interactions between the customer device 130 and the service provider server 110) against the data profile associated with an application of the particular type. When the device data deviates from the data profile by more than a threshold, the remote device may determine that the customer device 130 is used to conduct malicious activity (e.g., account take-over, etc.). In some implementations, the controller 252 generates a notification that indicates the behavior of the application data associated with the at least one application. The controller 252 may provide, through the API 202 over the network 140, the notification to the customer device 130. In various aspects, the controller 252 provides the data profile from the data profile repository 256 to a remote engine associated with the service that performs one or more detection operations with the data profile and detects potential malicious activity associated with the at least one application from the one or more detection operations. For example, at action 640, the stored data can be used to enhance anti-fraud capabilities.
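A minimal sketch of this comparison, with a hypothetical deviation measure and threshold: the fraction of profile attributes the client fails to match determines whether it is flagged.

```python
def deviation(reported, profile):
    """Fraction of profile attributes the reported device data fails to match."""
    keys = set(profile)
    mismatched = sum(1 for k in keys if reported.get(k) != profile[k])
    return mismatched / len(keys)

# Illustrative profile for the claimed application type vs. reported data.
profile = {"attr_count": 103, "supports_webrtc": True, "error_style": "chrome"}
reported = {"attr_count": 100, "supports_webrtc": True, "error_style": "firefox"}

THRESHOLD = 0.3                       # illustrative deviation tolerance
is_suspicious = deviation(reported, profile) > THRESHOLD   # True here (2 of 3 off)
```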
[0132] FIG. 8 conceptually illustrates an exemplary workflow 800 of a voice vector framework for authenticating user interactions, according to an implementation of the present disclosure. For purposes of brevity and efficient explanation of the functional aspects associated with the components described in FIG. 8, actions of the workflow diagram of FIG. 8 will be described in reference to the processes of FIGS. 9 and 10. In general, FIG. 8 illustrates actions that authenticate a caller who calls the service provider server 110 in association with a user account with the service provider server 110. For example, the caller may be calling to perform a transaction in association with the user account (e.g., resetting a password, performing a payment transaction, etc.). The workflow 800 may include actions for authenticating the caller for accessing the user account (or for performing the transaction for the user account) based on voice data associated with the caller.
[0133] In some embodiments, according to the workflow 800, the voice vector framework first authenticates the caller as a legitimate customer and then checks if the same caller has called before with similar intentions on multiple accounts. The framework can provide real-time feedback by alerting customer service agents of such suspicious activity. The framework can leverage various aspects of speech recognition and voice identification technology as well as intent identification on the incoming customer call. The framework includes a process of generating a suite of machine learning-based models that encompasses the variations that are representative of an entity’s customers. The framework provides a two-stage procedure that first verifies the identity of a caller and second checks if the same caller has previously called with the same intention on a different account not belonging to him or her. This framework can identify fraudsters, generate a voice blacklist, and alert customer service agents in real time to mitigate any security lapse.
[0134] FIG. 9 is a flowchart of an example process 900 of a voice vector framework for verifying user interactions, according to an implementation of the present disclosure. One or more of the steps 902-916 of process 900 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 902-916. Some examples of computing devices, such as computer system 1100, may include non-transitory, tangible, machine-readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 900. As illustrated, the process 900 includes a number of enumerated steps, but aspects of the process 900 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
[0135] The process 900 begins at step 902, where the feature extraction module 210 receives voice data associated with a voice communication between a user of a user device and a service provider server. For example, when a caller uses a device (e.g., the customer device 130) to make a phone call to an agent (e.g., a human agent, a robot, etc.) of the service provider server 110, the voice vector framework, at action 810, may receive the voice data associated with the phone call.
[0136] Next, at step 904, the feature extraction module 210 extracts user attributes from the voice data. For example, the feature extraction module 210 can extract words and/or phrases from the voice transcript.
[0137] Subsequently, at step 906, the controller 252 obtains user account information from a user account associated with the voice communication. For example, based on the extracted user attributes, the controller 252 may determine that the voice communication is regarding a user account with the service provider server 110 (e.g., disputing a transaction associated with the user account, resetting the user account, etc.).
[0138] Next, at step 908, the classification module 230 can select a classifier that corresponds to a select combination of features based on the user account information. For example, at action 822, the classification module 230 can retrieve a trained model based on the account information of the caller generated by the feature extraction module 210. In some examples, the trained model may be a gender-age-accent model. In some aspects of selecting the classifier that corresponds to the predetermined combination of features, the classification module 230 can select a select number of classifiers that correspond to different user attribute combinations based on different attributes in one or more of the user attributes or the user account information.
[0139] Subsequently, at step 910, the classification module 230 applies the classifier to the user attributes and/or the voice vectors.
[0140] Next, at step 912, the audio feature extraction engine 217, in coordination with the classifier 232, can generate a voice vector comprising multiple scores based on the applied classifier. For example, at action 820, the audio feature extraction engine 217 can generate the voice vector with a representation of words and/or phrases as audio attributes extracted from the audio call. In some aspects, each of the scores in the voice vector indicates a likelihood that an attribute of the different user attributes corresponds to an attribute of the select combination of features. In some aspects of generating the voice vector, the audio feature extraction engine 217 can mask one or more of the scores based at least in part on one or more attributes in the user account information. In this regard, the audio feature extraction engine 217, in coordination with the classifier 232, can generate the voice vector with the masked one or more of the scores. In some aspects, the masked one or more of the scores are excluded from the comparing operation in step 914. In other aspects of generating the voice vector, the audio feature extraction engine 217 can apply a weighting to one or more of the scores based at least in part on one or more attributes in the user account information. In this regard, the audio feature extraction engine 217, in coordination with the classifier 232, can generate the voice vector with the weighting applied to the one or more of the scores.
[0141] Subsequently, at step 914, the voice scoring module 234 may compare the voice vector to a baseline vector that corresponds to a predetermined combination of features (e.g., the selected combination of features). In some aspects of comparing the voice vector to the baseline vector, the voice scoring module 234 can determine a distance between one or more of the scores in the voice vector and corresponding values in the baseline vector. In this respect, the voice scoring module 234 can determine whether the distance between each of the one or more of the scores and the corresponding values exceeds a predetermined distance threshold. In some aspects of comparing the voice vector to the baseline vector, the voice scoring module 234 can determine whether a difference between the voice vector and the baseline vector is within a predetermined tolerance threshold. In this regard, the voice scoring module 234 can determine that the user device is verified based on the difference between the voice vector and the baseline vector being determined to be within the predetermined tolerance threshold. In other aspects of comparing the voice vector to the baseline vector, the voice scoring module 234 can obtain multiple voice vectors from user account information associated with respective ones of multiple different user accounts, and the voice scoring module 234 can compare the voice vector to each of the voice vectors.
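A minimal sketch of steps 912 and 914 together, assuming numpy; the scores, mask, weighting, and distance threshold are all illustrative values rather than the framework's actual parameters:

```python
import numpy as np

# Per-attribute likelihood scores (e.g., gender/age/accent/phrase likeness).
voice_vector = np.array([0.92, 0.85, 0.40, 0.77])
baseline = np.array([0.95, 0.80, 0.90, 0.75])     # from the user account
mask = np.array([1, 1, 0, 1], dtype=bool)         # third score masked out
weights = np.array([1.0, 0.5, 1.0, 1.0])          # account-derived weighting

distance_threshold = 0.15
distances = np.abs(voice_vector - baseline) * weights

# Verified only if every unmasked per-score distance is within the threshold.
verified = bool(np.all(distances[mask] <= distance_threshold))
```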
[0142] Next, at step 916, the report generation module 260 may send a notification to the agent device 120 associated with the service provider server 110. For example, at action 824, the report generation module 260 can provide an indication of the caller verification to the agent device 120. In some aspects, the notification includes an indication of whether the user device is verified based on the comparing. In some aspects, the report generation module 260 generates the notification with the indication that indicates that the user device is verified when the voice scoring module 234 determines that the distance between each of the one or more of the scores and the corresponding values does not exceed the predetermined distance threshold. In other aspects, the report generation module 260 generates the notification with the indication that indicates that the user device is not verified when the voice scoring module 234 determines that the distance between the one or more of the scores and the corresponding values exceeds the predetermined distance threshold. In this regard, the notification prompts the agent device 120 to send one or more queries to the customer device 130 for additional verification information. For example, at action 826, if the caller is verified, a notification can be sent to the agent device 120. Otherwise, an alert is generated and sent to the agent device 120 for additional verification.
[0143] FIG. 10 is a flowchart of an example process 1000 of a voice vector framework for detecting malicious activity in user interactions, according to an implementation of the present disclosure. One or more of the steps 1002-1020 of process 1000 may be implemented, at least in part, in the form of executable code stored on non-transitory, tangible, machine-readable media that when run by one or more processors may cause the one or more processors to perform one or more of the steps 1002-1020. Some examples of computing devices, such as computer system 1100, may include non-transitory, tangible, machine-readable media that include executable code that when run by one or more processors (e.g., processor 1112) may cause the one or more processors to perform the steps of process 1000. As illustrated, the process 1000 includes a number of enumerated steps, but aspects of the process 1000 may include additional steps before, after, and in between the enumerated steps. In some aspects, one or more of the enumerated steps may be omitted or performed in a different order.
[0144] The process 1000 begins at step 1002, where the feature extraction module 210 receives voice data associated with a voice communication between a user of a user device and a service provider server. For example, the voice communication may be a phone call between a user of the customer device 130 and an agent (e.g., a human agent, a robot) associated with the service provider server 110. The phone call may be regarding a user account (e.g., disputing a transaction associated with the user account, resetting a password for the user account, etc.).
[0145] Next, at step 1004, the feature extraction module 210 can extract user attributes from the voice data. For example, at action 810, the feature extraction module 210 can extract words and/or phrases from the voice transcript.
[0146] Subsequently, at step 1006, the classification module 230 determines an intent of the voice communication based on the user attributes. For example, at action 832, the classification module 230 may determine that the intent of the voice call is a request to reset a password for a user account. Other example intents may include account verification, lifting a restriction from a user account, among others.
[0147] Next, at step 1008, the classification module 230 may select one or more of several machine learning-trained classifiers that correspond to the determined intent. As discussed, various voice models corresponding to different intents have been trained using historic voice data associated with one or more user accounts. After a call has been classified as one of the different intents, the voice data associated with the call may be used to train the voice model corresponding to the classified intent. At action 836, the classification module 230 retrieves trained models based on the intent. In some implementations, the classification module 230 may obtain user account information from a user account associated with the customer device 130, and the classification module 230 selects a machine learning-trained classifier that corresponds to a select combination of features based on the user account information.
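As a purely illustrative sketch of retrieving trained models by intent, one might maintain an intent-keyed lookup such as the following; the intent labels and paths below are assumptions.

    # Hypothetical intent-to-model lookup for step 1008.
    INTENT_MODELS = {
        "password_reset": ["models/voice_password_reset.pkl"],
        "account_verification": ["models/voice_account_verification.pkl"],
        "lift_restriction": ["models/voice_lift_restriction.pkl"],
    }

    def models_for_intent(intent):
        """Return the trained voice model(s) registered for a call intent."""
        return INTENT_MODELS.get(intent, [])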
[0148] Next, at step 1010, the classifier 232 applies each of the one or more machine learning-trained classifiers to the user attributes. In some aspects, the classifier 232 applies a selected machine learning-trained classifier to the different user attributes.
[0149] Subsequently, at step 1012, the audio feature extraction engine 217, in coordination with the classifier 232, generates voice vectors for the one or more machine learning-trained classifiers. For example, the audio feature extraction engine 217 may generate the voice vectors based on a baseline voice vector associated with each of the one or more of the machine learning-trained classifiers. In some aspects, the audio feature extraction engine 217, in coordination with the classifier 232, generates a voice vector that includes scores based on the applied machine learning-trained classifier. In some aspects, each of the scores in the voice vector indicates a likelihood that an attribute of the different user attributes corresponds to an attribute of a predetermined combination of features.
[0150] Next, at step 1014, the classifier 232 can select a machine learning-trained classifier that corresponds to a voice vector having the highest aggregate score. For example, the classifier 232 may compare the scores associated with the different voice vectors of the different machine learning-trained classifiers. The score may represent a similarity between the voice vector and the user attributes extracted from the voice data of the voice communication. Thus, the higher the score, the more similar the voice in the voice communication is to the voice vector. For example, at action 838, the classifier 232 identifies which voice the caller matches in identifying a matched model.
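One hedged way to realize steps 1010 through 1014 is sketched below: each candidate classifier produces a score vector, and the classifier whose vector has the highest aggregate (summed) score is kept. The callable-based interface is an assumption for illustration.

    import numpy as np

    def best_classifier(classifiers, user_attrs):
        """Apply every candidate classifier; keep the highest aggregate score.

        `classifiers` maps a model name to a callable returning a score vector.
        """
        vectors = {name: np.asarray(clf(user_attrs), dtype=float)
                   for name, clf in classifiers.items()}
        winner = max(vectors, key=lambda name: vectors[name].sum())
        return winner, vectors[winner]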
[0151] Subsequently, at step 1016, the fraud class module 233, in coordination with the voice scoring module 234, may determine a voice signature associated with the selected one of the machine learning-trained classifiers.
[0152] Next, at step 1018, the fraud class module 233 can determine whether the customer device 130 interacted with the service provider server 110 in a prior interaction associated with a different user account based on the voice signature. For example, at action 840, the fraud class module 233 can retrieve account information for which the same caller called for review on a prior occasion. If a match is found, the fraud class module 233, in coordination with the report generation module 260, can alert an agent associated with the agent device 120, indicating that the current caller called previously for a different user account. Otherwise, the fraud class module 233 may not generate or send a notification to the agent device 120. In some aspects of determining whether the customer device 130 interacted with the service provider server 110, the fraud class module 233 can access user account information in a data repository communicably coupled to the service provider server 110 and compare the voice signature to a historical voice signature associated with the accessed user account information that is stored in a data structure of the data repository. The fraud class module 233 can determine that a distance between the voice signature and the historical voice signature is within a predetermined tolerance threshold. In this regard, the fraud class module 233 determines that the customer device 130 interacted with the service provider server 110 in the prior interaction. In some aspects of comparing the voice signature to the historical voice signature, the fraud class module 233 can obtain different historical voice signatures from user account information associated with respective ones of different user accounts, and the fraud class module 233 compares the voice signature to each of the historical voice signatures.
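A minimal sketch of this prior-interaction check follows, assuming stored signatures are numeric vectors keyed by account identifier (both assumptions):

    import numpy as np

    def match_prior_caller(signature, history, tolerance=0.3):
        """Return account ids whose stored voice signature is within tolerance."""
        sig = np.asarray(signature, dtype=float)
        return [acct for acct, stored in history.items()
                if np.linalg.norm(sig - np.asarray(stored, dtype=float)) <= tolerance]

Any returned account identifier other than the current caller's account would trigger the alert described at action 840.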
[0153] Subsequently, at step 1020, the report generation module 260 can send a notification to a communication device associated with the service provider server 110 (e.g., the agent device 120). In some aspects, the notification may include an indication of whether the user device interacted with the service provider server in the prior interaction. For example, at action 834, the report generation module 260 sends a notification of detected fraudulent callers to the agent device 120.
[0154] In one embodiment, a method comprises obtaining, by one or more hardware processors, user interaction data corresponding to a communication from a user to a service provider server via a first communication device, wherein the communication is regarding a user account with the service provider server; extracting, by the one or more hardware processors from the user interaction data, a plurality of features comprising one or more of textual data features or audio data features; determining, by the one or more hardware processors, an intent of the communication using a machine learning-trained classifier based on the plurality of features; grouping, by the one or more hardware processors, the communication with a plurality of past communications that is associated with the intent; deriving, by the one or more hardware processors and from the communication and the plurality of past communications, one or more common attributes; determining, by the one or more hardware processors, a risk level for the communication based on the one or more common attributes; and restricting, by the one or more hardware processors, access to one or more services of the service provider server through the user account.
[0155] In various embodiments of the above method, 1) the first communication device was authenticated to access the user account via a first authentication process, and where the method further comprises analyzing the plurality of past communications; detecting one or more anomalies within the plurality of past communications based on the analyzing; and in response to detecting the one or more anomalies, performing a second authentication process for the first communication device for accessing the user account; 2) the detecting the one or more anomalies comprises generating a plurality of clusters within the plurality of previous communications using one or more clustering algorithms, wherein each of the plurality of clusters corresponds to a unique pattern of activity; and determining whether the unique pattern of activity corresponds to a malicious activity; 3) the providing the indication comprises determining whether a threshold number of anomalies is detected in the communication and the plurality of past communications; generating a report indicating the one or more anomalies when the threshold number of anomalies is detected; and transmitting the report to a second communication device; 4) the obtaining the user interaction data comprises obtaining textual data associated with a first type of interaction from a first data structure in a data repository communicably coupled to the service provider server; obtaining audio data associated with a second type of interaction from a second data structure in the data repository; and combining the textual data with the audio data to generate the user interaction data, wherein the textual data and the audio data correspond to different portions of the user interaction data; 5) the user interaction data comprises textual data and audio data, and wherein the extracting the plurality of features comprises selecting one of a plurality of feature extraction algorithms based on a comparison of performance metrics between each of the plurality of feature extraction algorithms; applying the selected feature extraction algorithm to the user interaction data; extracting a plurality of textual data features from the textual data; and extracting a plurality of audio data features from the audio data; 6) the method further comprising selecting one of a plurality of machine learning-trained classifiers based on a comparison of performance metrics between each of the plurality of machine learning-trained classifiers, wherein the determining the intent of the interaction comprises determining the intent of the interaction using the selected one of the plurality of machine learning-trained classifiers;
7) the method further comprising generating a plurality of machine learning-based networks corresponding to a plurality of actionable insight categories; 8) the method above further comprising training each of the plurality of machine learning-based networks with a respective training dataset to form a plurality of machine learning-trained classifiers, the respective training dataset comprising labeled interaction data indicating what information pertains to which of the plurality of actionable insight categories; and selecting, from the plurality of machine learning-trained classifiers, the machine learning-trained classifier for determining the intent; 9) the method further comprising determining a plurality of clusters of communications, wherein each cluster in the plurality of clusters of communications corresponds to a type of event; determining that the communication is not mapped to any one of a plurality of clusters of communications; determining a particular event type corresponding to the communication; and generating a particular cluster for the particular event type; and/or 10) the method further comprising accessing a plurality of user interaction datasets associated with respective ones of a plurality of interactions between a plurality of communication devices and the service provider server; determining a corresponding intent of each of the plurality of interactions from extracted features associated with each of the plurality of interactions using the machine learning-trained classifier; and classifying each of the plurality of interactions as a respective category of the plurality of actionable insight categories based at least in part on the corresponding intent of the interaction. [0156] In another embodiment, a system comprises a non-transitory memory; and one or more hardware processors coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising in response to detecting a communication from a first communication device to a service provider server, obtaining user interaction data associated with the communication, wherein the first communication device is authenticated to access the service provider server via a first authentication process; extracting a plurality of features from the user interaction data, wherein the plurality of extracted features comprises textual data features and audio data features; classifying the plurality of extracted features into an actionable insight category from a plurality of actionable insight categories using a classifier, wherein the actionable insight category corresponds to fraudulent activity; generating a plurality of clusters with the plurality of extracted features using one or more clustering algorithms; detecting one or more anomalies in at least one of the plurality of clusters through an anomaly detection operation; and performing a second authentication process for further authenticating the first communication device based on the one or more anomalies detected in the at least one of the plurality of clusters.
[0157] In various embodiments of the above system, 1) the operations further comprise determining whether a threshold number of anomalies is detected in the at least one of the plurality of clusters; generating a report indicating the one or more anomalies when the threshold number of anomalies is detected; and sending, through the application programming interface to the second communication device, the report; 2) the classifying the plurality of extracted features comprises determining an intent of the interaction from the plurality of extracted features using the classifier, wherein the intent corresponds to the actionable insight category; 3) the operations further comprise selecting one of a plurality of machine learning-trained classifiers based on a comparison of performance metrics between each of the plurality of machine learning-trained classifiers, wherein the determining the intent of the interaction comprises determining the intent of the interaction using the selected one of the plurality of machine learning-trained classifiers; and/or 4) the operations further comprise generating a plurality of machine learning-based networks based on the plurality of actionable insight categories, wherein the plurality of machine learning-based networks corresponds to respective ones of the plurality of actionable insight categories; and training each of the plurality of machine learning-based networks with a respective training dataset to form a plurality of machine learning-trained classifiers, the respective training dataset comprising labeled interaction data indicating what information pertains to which of the plurality of actionable insight categories, wherein the classifying the plurality of extracted features comprises selecting the classifier from the plurality of machine learning-trained classifiers.
[0158] In another embodiment, a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising accessing user interaction data associated with an interaction between a first communication device and a service provider server associated with a service; generating, using a feature extraction engine of the service provider server, a plurality of feature representations of the user interaction data, the plurality of feature representations respectively corresponding to extracted features that comprise one or more of textual data features or audio data features; determining an intent of the interaction from the plurality of feature representations using a classifier, wherein the intent corresponds to one of a plurality of actionable insight categories; mapping the interaction to a first cluster of a plurality of first clusters based at least in part on the intent, wherein each of the plurality of first clusters corresponds to a particular type of activity in a first actionable insight category of the plurality of actionable insight categories; and issuing a remedial action for the interaction based on the mapping of the interaction to the first cluster, the remedial action being associated with the particular type of activity in the first actionable insight category.
[0159] In various embodiments of the above medium, 1) the operations further comprise applying the plurality of first clusters through an anomaly detection operation; detecting whether one or more anomalies are present in at least one of the plurality of first clusters based on the anomaly detection operation; issuing, through an application programming interface to the first communication device, a request that further authenticates the communication device when the one or more anomalies are detected in the at least one of the plurality of first clusters; and providing, through the application programming interface to a second communication device associated with the service provider server, an indication of the one or more anomalies; 2) the providing the indication comprises determining whether a threshold number of anomalies is detected in the first cluster; generating a report indicating the one or more anomalies when the threshold number of anomalies is detected; and sending, through the application programming interface to the second communication device, the report; and/or 3) the operations further comprise generating a plurality of machine learning-based networks based on the plurality of actionable insight categories, wherein the plurality of machine learning-based networks corresponds to respective ones of the plurality of actionable insight categories; and training each of the plurality of machine learning-based networks with a respective training dataset to form a plurality of machine learning-trained classifiers, the respective training dataset comprising labeled interaction data indicating what information pertains to which of the plurality of actionable insight categories, wherein the determining the intent of the interaction using the classifier comprises selecting the classifier from the plurality of machine learning-trained classifiers.
[0160] In another embodiment, a system comprises a non-transitory memory; and one or more hardware processors coupled with the non-transitory memory and configured to read instructions from the non-transitory memory to cause the system to perform operations comprising receiving device data of a user device during a communication session between the user device and a service provider server, wherein the device data represents a plurality of device attributes associated with the user device, and wherein the user device is authenticated through a first authentication process for the communication session; determining, from the plurality of device attributes, a plurality of device attribute combinations; generating a vector of likelihood scores for the plurality of device attribute combinations based on a plurality of statistical models; determining, from the plurality of device attribute combinations, at least one device attribute combination that does not satisfy one or more predetermined thresholds based on the generated vector of likelihood scores; detecting an anomaly in the communication session based on the determining that the at least one device attribute combination does not satisfy the one or more predetermined thresholds; and in response to detecting the anomaly, performing a second authentication process for further authenticating the user device for the communication session. [0161] In various embodiments of the above system, 1) the operations further comprise denying the user device access to a service of the service provider server in response to determining that the vector of likelihood scores is less than a threshold; 2) the second authentication process comprises prompting the user device to provide additional authentication information; 3) the operations further comprise granting the user device access to a service of the service provider server when the user device is authenticated through the second authentication process; 4) the operations further comprise detecting that the user device has established the communication session with the service provider server, wherein the device data is received in response to the detecting that the user device has established the communication session, and wherein the communication session indicates a device type of the user device; 5) the operations further comprise receiving a first plurality of logs indicating a first plurality of events associated with a plurality of user devices that occur in a first window of time, wherein each event of the first plurality of events comprises first device data corresponding to the first window of time for a user device of the plurality of user devices; and receiving a second plurality of logs indicating a second plurality of events associated with the plurality of user devices that occur in a second window of time different from the first window of time, wherein each event of the second plurality of events comprises second device data corresponding to the second window of time for a user device of the plurality of user devices; 6) the operations further comprise generating a first plurality of statistical models with a plurality of first predetermined thresholds based on a first window of time; and generating a second plurality of statistical models with a plurality of second predetermined thresholds based on a second window of time, wherein at least one of the plurality of first predetermined thresholds is different from at least one of the plurality of second predetermined thresholds; 7) the
operations further comprise receiving updated authentication data from the user device during the second authentication process; providing the updated authentication data to the plurality of statistical models; and adjusting the one or more predetermined thresholds based on the updated authentication data; 8) the plurality of device attributes is associated with a device type of the user device, wherein each device attribute in the plurality of device attributes is marked with a different descriptor and a corresponding attribute value that are specific in identifying the device type of the user device; 9) the operations further comprise receiving, from the user device, a plurality of user queries via the communication session, wherein each of the plurality of user queries comprises a user interaction related to a user account with the service provider server; and extracting, from the plurality of user queries, the device data of the user device; 10) the plurality of statistical models comprises a plurality of machine learning-based models, and wherein the operations further comprise training the plurality of machine learning-based models with a training dataset that comprises a plurality of user interactions captured within one or more predetermined windows of time and a mapping to a plurality of predetermined probability density function distributions; processing the plurality of device attributes with the plurality of machine learning-based models; and generating a likelihood prediction for each of the plurality of device attributes with the plurality of machine learning-based models.
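As a hedged sketch of the likelihood scoring in the system above, each device attribute combination could be scored against a fitted statistical model and flagged when its likelihood falls below a threshold. The dictionary interface and the use of scipy-style frozen distributions with a pdf method are assumptions, not the disclosed design.

    def find_anomalous_combos(observed, models, threshold=0.01):
        """Flag attribute combinations whose modeled likelihood is too low.

        `observed` maps a combination key to its observed value; `models`
        maps the same key to a fitted density exposing .pdf(), as
        scipy.stats frozen distributions do.
        """
        scores = {key: float(models[key].pdf(val)) for key, val in observed.items()}
        anomalies = {key: s for key, s in scores.items() if s < threshold}
        return anomalies, scores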
[0162] In another embodiment, a method comprises in response to detecting an interaction between a user device and a service provider server, obtaining, by one or more hardware processors, a plurality of features associated with the user device; determining, by the one or more hardware processors, that a combination of features from the plurality of features has a variance of expected values exceeding a threshold variance; generating, by the one or more hardware processors, a prediction indicating a likelihood that the interaction is associated with a fraudulent activity based on the combination of features; and restricting, by the one or more hardware processors, the user device to one or more services of the service provider server based on the prediction. [0163] In various embodiments of the above method, 1) the plurality of features corresponds to a plurality of device attributes for a particular device type, wherein the method further comprises selecting, from the plurality of device attributes, a plurality of device attribute combinations, wherein each of the plurality of device attribute combinations corresponds to a different subset in the plurality of device attributes; generating a histogram for each of the plurality of device attribute combinations; and estimating a probability density function distribution that best fits the histogram for each of the plurality of device attribute combinations, wherein the histogram for each of the plurality of device attribute combinations comprises a two-dimensional relationship between first attribute values of a first device attribute against second attribute values of a second device attribute in the corresponding device attribute combination; and/or 2) the histogram indicates a plurality of relationships among a plurality of device attributes in a multi-dimensional space for the corresponding device attribute combination.
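A non-authoritative sketch of the histogram-and-density-fit step for one two-dimensional attribute pair follows. A Gaussian kernel density estimate stands in here for "the probability density function distribution that best fits the histogram", which is a simplifying assumption; a real system might compare several candidate fits.

    import numpy as np
    from scipy import stats

    def fit_combo_density(attr_a, attr_b, bins=20):
        """Histogram one attribute pair and fit a 2-D density estimate."""
        hist, x_edges, y_edges = np.histogram2d(attr_a, attr_b, bins=bins)
        kde = stats.gaussian_kde(np.vstack([attr_a, attr_b]))  # density fit
        return hist, (x_edges, y_edges), kde

    # Low kde([[a], [b]]) values mark attribute pairs that rarely co-occur.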
[0164] In another embodiment, a non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising receiving device data of a user device in response to detecting an interaction between the user device and a service provider server associated with a service; selecting a device attribute combination from a plurality of device attributes included in the device data with a feature combination engine, wherein the device attribute combination comprises device attributes having a variance of expected values that exceeds a threshold variance; selecting a prediction engine of a plurality of prediction engines to process the device attribute combination with a corresponding anomaly detection model of a plurality of anomaly detection models and to generate a prediction indicating a likelihood that the device attribute combination represents an anomaly corresponding to fraudulent activity; and issuing a remedial action to the user device through an application programming interface with a remedial action engine based on the prediction for applying the remedial action on the user interaction.
[0165] In various embodiments of the above medium, 1) the selecting the prediction engine comprises processing a plurality of device attributes from the device data with a probability density function algorithm; and generating a probability density distribution with the probability density function algorithm, wherein the probability density distribution comprises a two-dimensional relationship between a plurality of likelihood values against different device attribute values for a given device attribute combination of a plurality of device attribute combinations; 2) the selecting the prediction engine comprises, for each device attribute combination in the plurality of device attribute combinations, generating a likelihood metric indicating that a corresponding attribute value associated with the user interaction is an actual occurrence; 3) the operations further comprise obtaining a user profile that indicates a user behavior associated with one or more of the user device or a user account of a service; and determining a likelihood metric indicating that a corresponding attribute value associated with the user interaction is an actual occurrence, based on the user profile; 4) the selecting the prediction engine comprises applying a clustering algorithm to a vector of likelihood scores to generate one or more clusters; detecting a data point corresponding to at least one likelihood score of the vector of likelihood scores that is outside of the one or more clusters; applying a Euclidean multi-dimensional distance calculation to determine a distance from the one or more clusters in N-dimensional space, where N is a positive integer; determining whether the data point is a member of the one or more clusters or an outlier based on the distance; and determining that the data point is an outlier based on the distance being greater than a threshold distance, wherein the outlier corresponds to the anomaly in the user interaction; and/or 5) the selecting the prediction engine comprises applying a singular value decomposition algorithm to a vector of likelihood scores to determine one or more principal components in the vector of likelihood scores in terms of eigenvalues and eigenvectors, wherein each of the one or more principal components identifies which of the plurality of device attributes has a highest variance based at least on a ranking of the eigenvectors by order of variance; applying a projection operation by matrix multiplication to project new attribute values in a direction of a principal component through an eigenvector that corresponds to the principal component; generating a cluster in an eigenspace with the projected new attribute values; detecting a data point corresponding to at least one projected new attribute value that is outside of the cluster; applying a Euclidean multi-dimensional distance calculation to determine a distance from the cluster; determining whether the data point is a member of the cluster or an outlier based on the distance; and determining that the data point is an outlier based on the distance being greater than a threshold distance, wherein the outlier corresponds to the anomaly in the user interaction.
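For item 5) above, a compact numpy sketch of the singular value decomposition path follows: historical likelihood vectors are projected onto the top-variance eigenvectors, and a new point is declared an outlier when its Euclidean distance from the cluster centroid in eigenspace exceeds a threshold. The matrix shapes and the threshold value are assumptions for illustration.

    import numpy as np

    def svd_outliers(scores, new_points, k=2, dist_threshold=3.0):
        """Project onto top-k principal directions; flag distant points.

        `scores` is an (n_samples, n_features) matrix of likelihood vectors.
        """
        mean = scores.mean(axis=0)
        _, _, vt = np.linalg.svd(scores - mean, full_matrices=False)
        basis = vt[:k].T                      # eigenvectors ranked by variance
        cluster = (scores - mean) @ basis     # historical points in eigenspace
        centroid = cluster.mean(axis=0)
        proj = (new_points - mean) @ basis    # project new attribute values
        dists = np.linalg.norm(proj - centroid, axis=1)  # Euclidean distance
        return dists > dist_threshold         # True marks an outlier/anomaly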
[0166] In another embodiment, a method comprises receiving, by one or more hardware processors, voice data associated with a voice communication between a user of a user device and a service provider server; determining, by the one or more hardware processors, that the voice communication is associated with a user account of the user with the service provider server; extracting, by the one or more hardware processors, a plurality of user attributes from the voice data; determining, from the plurality of user attributes, a particular user attribute combination; selecting, from a plurality of voice classifiers, a particular classifier corresponding to the particular user attribute combination, wherein the voice vector is generated using the particular classifier; generating, by the one or more hardware processors, a voice vector for the user based on the particular classifier; determining a difference between the voice vector and a baseline vector that corresponds to the user account; determining that the user is a malicious user when the difference exceeds a threshold; and in response to determining that the user is a malicious user, restricting the user from accessing the user account during the voice communication.
[0167] In various embodiments of the above method, 1) the comparing the voice vector to the baseline vector comprises determining a distance between one or more values in the voice vector and corresponding values in the baseline vector; and determining whether the distance between each of the one or more values and the corresponding values exceeds a predetermined distance threshold; 2) the method further comprising generating a notification indicating that the user is not authenticated to access the user account; and transmitting the notification to a communication device associated with the service provider server; 3) the notification prompts the communication device to prompt the user device for additional verification information; 4) the particular user attribute combination comprises a gender-age-accent combination; 5) the generating the voice vector comprises masking one or more values associated with the plurality of user attributes based at least in part on one or more characteristics of the user account; and generating the voice vector based on the masked one or more values, wherein the masked one or more values are excluded from the determining the difference; 6) the generating the voice vector comprises applying corresponding weights to one or more values associated with the plurality of user attributes based at least in part on one or more characteristics of the user account; and generating the voice vector based on the weighted one or more values; 7) the method further comprising determining the threshold based on one or more characteristics of the user account;
8) the method further comprising obtaining a plurality of baseline vectors associated with a plurality of the user accounts; and comparing the voice vector to each of the plurality of baseline vectors; 9) the user account is a first user account, wherein the method further comprises determining that the user has previously called on a second user account based on the comparing the voice vector to each of the plurality of baseline vectors, wherein the determining that the user is a malicious user is further based on determining that the user has previously called on the second user account; 10) the method further comprising determining, from the plurality of user attributes, a plurality of user attribute combinations; and generating a plurality of machine learning-based networks based on the plurality of user attribute combinations, wherein the plurality of machine learning-based networks corresponds to respective ones of the plurality of user attribute combinations; and/or 11) the method further comprising training each of the plurality of machine learning-based networks with a training dataset to form a plurality of machine learning-trained classifiers, the training dataset comprising attribute data that corresponds to one of the plurality of user attribute combinations, wherein the selecting the classifier comprises selecting a machine learning-trained classifier from the plurality of machine learning-trained classifiers.
[0168] In another embodiment, a system comprises a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to execute instructions from the non-transitory memory to cause the system to perform operations comprising receiving user interaction data associated with an interaction between a user device and a service provider server associated with a service, the user interaction data comprising audio data associated with the interaction; extracting, using a feature extraction engine, a plurality of user attributes from the audio data; determining an intent of the interaction from the plurality of user attributes; selecting one or more of a plurality of machine learning-trained classifiers that correspond to the determined intent; applying each of the one or more of the plurality of machine learning-trained classifiers to the plurality of user attributes; generating a plurality of voice vectors with a voice vector associated with each of the one or more of the plurality of machine learning-trained classifiers; selecting one of the plurality of machine learning-trained classifiers that corresponds to one of the plurality of voice vectors having a highest aggregate score; determining a voice signature associated with the selected one of the plurality of machine learning-trained classifiers; determining whether the user device interacted with the service provider server in a prior interaction based on the voice signature; and sending a notification to a communication device associated with the service provider server, the notification comprising an indication of whether the user device interacted with the service provider server in the prior interaction.
[0169] In various embodiments of the above system, 1) the determining whether the user device interacted with the service provider server comprises accessing user account information in a data repository communicably coupled to the service provider server; comparing the voice signature to a historical voice signature associated with the accessed user account information that is stored in a data structure of the data repository; determining that a distance between the voice signature and the historical voice signature is within a predetermined tolerance threshold; and determining that the user device interacted with the service provider server in the prior interaction; 2) the comparing the voice signature to the historical voice signature comprises obtaining a plurality of historical voice signatures from user account information associated with respective ones of a plurality of user accounts; and comparing the voice signature to each of the plurality of historical voice signatures; and/or 3) the operations further comprise obtaining user account information from a user account associated with the user device; determining, from the plurality of user attributes, a particular user attribute combination based on the user account information; selecting, from the plurality of machine learning-trained classifiers, a first machine learning-trained classifier that corresponds to the particular user attribute combination; applying the first machine learning-trained classifier to the plurality of user attributes; generating a voice vector comprising a plurality of scores based on the applied machine learning-trained classifier, wherein each of the plurality of scores in the voice vector indicates a likelihood that an attribute of the plurality of user attributes corresponds to an attribute of a predetermined combination of features; comparing the voice vector to a baseline vector that corresponds to a predetermined combination of features; and sending a second notification to the communication device associated with the service provider server, the second notification comprising an indication of whether the user device is verified based on the comparing.
[0170] In another embodiment, a non-transitory machine-readable medium having instructions stored thereon, the instructions executable to cause performance of operations comprising receiving user interaction data associated with an interaction between a user device and a service provider server associated with a service, the user interaction data comprising audio data associated with the interaction; extracting, using a feature extraction engine, a plurality of user attributes from the audio data; obtaining user account information from a user account associated with the user device; selecting a machine learning-trained classifier that corresponds to a select combination of features based on the user account information; applying the machine learning-trained classifier to the plurality of user attributes; generating a voice vector comprising a plurality of scores based on the applied machine learning-trained classifier, wherein each of the plurality of scores in the voice vector indicates a likelihood that an attribute of the plurality of user attributes corresponds to an attribute of a predetermined combination of features; comparing the voice vector to a baseline vector that corresponds to a predetermined combination of features; and sending a notification to a communication device associated with the service provider server, the notification comprising an indication of whether the user device is verified based on the comparing.
[0171] In various embodiments of the above medium, 1) the operations further comprise determining an intent of the interaction from the plurality of user attributes; selecting one or more of a plurality of machine learning-trained classifiers that correspond to the determined intent; generating a plurality of voice vectors with a voice vector associated with each of the one or more of the plurality of machine learning-trained classifiers; selecting a plurality of machine learning-trained classifiers that correspond to the determined intent; selecting one of the plurality of machine learning-trained classifiers that corresponds to one of the plurality of voice vectors having a highest aggregate score; determining a voice signature that corresponds to the selected one of the plurality of machine learning-trained classifiers; determining whether the user device interacted with the service provider server in a prior interaction based on the voice signature; and sending a notification to a communication device associated with the service provider server, the notification comprising an indication of whether the user device interacted with the service provider server in the prior interaction; 2) the operations further comprise determining a plurality of user attribute combinations; and generating a plurality of machine learning-based networks based on the plurality of user attribute combinations, wherein the plurality of machine learning-based networks corresponds to respective ones of the plurality of user attribute combinations; and/or 3) the operations further comprise training each of the plurality of machine learning-based networks with a training dataset to form a plurality of machine learning-trained classifiers, the training dataset comprising attribute data that corresponds to one of the plurality of user attribute combinations, wherein the selecting the machine learning-trained classifier comprises selecting the machine learning-trained classifier from the plurality of machine learning-trained classifiers. [0172] FIG. 11 is a block diagram of a computer system suitable for implementing one or more components in FIGS. 1 and 2, according to an implementation. In various implementations, the communication device may comprise a personal computing device (e.g., smartphone, a computing tablet, a personal computer, laptop, a wearable computing device such as glasses or a watch, Bluetooth device, key FOB, badge, etc.) capable of communicating with the network. The service provider may utilize a network computing device (e.g., a network server) capable of communicating with the network. It should be appreciated that each of the devices utilized by users and service providers may be implemented as computer system 1100 in a manner as follows.
[0173] Computer system 1100 includes a bus 1102 or other communication mechanism for communicating information data, signals, and information between various components of computer system 1100. Components include an input/output (I/O) component 1104 that processes a user action, such as selecting keys from a keypad/keyboard, selecting one or more buttons, image, or links, and/or moving one or more images, etc., and sends a corresponding signal to bus 1102. I/O component 1104 may also include an output component, such as a display 1111 and a cursor control 1113 (such as a keyboard, keypad, mouse, etc.). An optional audio input/output component 1105 may also be included to allow a user to use voice for inputting information by converting audio signals. Audio I/O component 1105 may allow the user to hear audio. A transceiver or network interface 1106 transmits and receives signals between computer system 1100 and other devices, such as another communication device, service device, or a service provider server via network 140. In one implementation, the transmission is wireless, although other transmission mediums and methods may also be suitable. One or more processors 1112, which can be a micro-controller, digital signal processor (DSP), or other processing component, processes these various signals, such as for display on computer system 1100 or transmission to other devices via a communication link 1118. Processor(s) 1112 may also control transmission of information, such as cookies or IP addresses, to other devices. [0174] Components of computer system 1100 also include a system memory component 1114 (e.g., RAM), a static storage component 1116 (e.g., ROM), and/or a disk drive 1117. Computer system 1100 performs specific operations by processor(s) 1112 and other components by executing one or more sequences of instructions contained in system memory component 1114. Logic may be encoded in a computer readable medium, which may refer to any medium that participates in providing instructions to processor(s) 1112 for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. In various implementations, non-volatile media includes optical or magnetic disks, volatile media includes dynamic memory, such as system memory component 1114, and transmission media includes coaxial cables, copper wire, and fiber optics, including wires that comprise bus 1102. In one implementation, the logic is encoded in non-transitory computer readable medium. In one example, transmission media may take the form of acoustic or light waves, such as those generated during radio wave, optical, and infrared data communications. [0175] Some common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EEPROM, FLASH-EEPROM, any other memory chip or cartridge, or any other medium from which a computer is adapted to read.
[0176] In various implementations of the present disclosure, execution of instruction sequences to practice the present disclosure may be performed by computer system 1100. In various other implementations of the present disclosure, a plurality of computer systems 1100 coupled by communication link 1118 to the network (e.g., such as a LAN, WLAN, PSTN, and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks) may perform instruction sequences to practice the present disclosure in coordination with one another.
[0177] Where applicable, various implementations provided by the present disclosure may be implemented using hardware, software, or combinations of hardware and software. Also, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components that include software, hardware, and/or both without departing from the spirit of the present disclosure. Where applicable, the various hardware components and/or software components set forth herein may be separated into subcomponents that include software, hardware, or both without departing from the scope of the present disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components and vice-versa.
[0178] Software, in accordance with the present disclosure, such as program code and/or data, may be stored on one or more computer readable mediums. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein.
[0179] The various features and steps described herein may be implemented as systems that include one or more memories storing various information described herein and one or more processors coupled to the one or more memories and a network, wherein the one or more processors are operable to perform steps as described herein, as non-transitory machine-readable medium that includes a plurality of machine-readable instructions which, when executed by one or more processors, are adapted to cause the one or more processors to perform a method that includes steps described herein, and methods performed by one or more devices, such as a hardware processor, user device, server, and other devices described herein.
[0180] The foregoing disclosure is not intended to limit the present disclosure to the precise forms or particular fields of use disclosed. As such, it is contemplated that various alternate implementations and/or modifications to the present disclosure, whether explicitly described or implied herein, are possible in light of the disclosure. Having thus described implementations of the present disclosure, persons of ordinary skill in the art will recognize that changes may be made in form and detail without departing from the scope of the present disclosure. Thus, the present disclosure is limited only by the claims.

Claims

WHAT IS CLAIMED IS:
1. A method, comprising: determining, by one or more hardware processors, a communication between a first instance of an application of a user device and a service provider server over a network; obtaining, by the one or more hardware processors, device data associated with the first instance of the application of the user device; accessing, by the one or more hardware processors, a stored profile of the application, wherein the stored profile is generated by : invoking a second instance of the application in a remote processing environment remote from the user device; causing the second instance of the application to interact with a web server over the network; accessing traffic data associated with one or more interactions between the second instance of the application and the web server, wherein the traffic data indicates a behavior of the application based on the one or more interactions with the web server; and deriving, for the profile of the application, one or more features of the application in a native state from the traffic data; comparing the device data against the stored profile of the application; determining that the communication corresponds to a malicious activity based on the comparing; and in response to determining that the communication corresponds to the malicious activity, restricting the user device from accessing a service of the service provider server.
2. The method of claim 1, further comprising: transmitting, to a device associated with the service provider server, a notification indicating that the communication corresponds to the malicious activity.
3. The method of claim 1, wherein the accessing the traffic data comprises obtaining first data transmitted from the second instance of the application to the web server, wherein the first data comprises at least one of (1) behavioral patterns associated with the application in the one or more interactions over a predetermined period of time, (2) a plurality of attributes of the application that is set for the one or more interactions, or (3) a plurality of functionality parameters of the application that is activated in the one or more interactions.
4. The method of claim 1, further comprising determining, based on the device data, that the first instance of the application is associated with a first version of a plurality of versions of the application.
5. The method of claim 4, wherein the stored profile is generated further by generating a data structure comprising application data that corresponds to the first version, wherein the application data indicates one or more of (1) behavioral patterns associated with the first version over a predetermined period of time, (2) a plurality of attributes of the application that correspond to the first version, or (3) a plurality of functionality parameters of the application that correspond to the first version.
6. The method of claim 4, further comprising determining, based on the device data, that the first instance of the application is also associated with a second version of the plurality of versions of the application, wherein the stored profile is generated further by generating a first data structure comprising first application data that corresponds to the first version and generating a second data structure comprising second application data that corresponds to the second version.
7. The method of claim 6, wherein the first data structure and the second data structure are prioritized based on a first weight associated with the first application data and a second weight associated with the second application data, and wherein the first weight and the second weight are determined based at least in part on respective lifecycles of the first application data and the second application data.
8. The method of claim 1, wherein the causing the second instance of the application to interact with the web server comprises sending, to the second instance of the application, a first control message that prompts the second instance of the application to send a request to the web server for returning a process script that is executable on the remote processing environment.
9. The method of claim 8, further comprising: interrogating the second instance of the application based on executing the process script, wherein the interrogating includes the one or more interactions between the second instance of the application and the web server; and logging results of the interrogating in a data structure of a data repository.
10. The method of claim 9, wherein the interrogating the second instance of the application comprises: invoking a predetermined error condition in the one or more interactions; and logging a response of the second instance of the application based on the predetermined error condition.
11. The method of claim 1, wherein the invoking the second instance of the application in the remote processing environment comprises deploying the application to a virtual machine of a plurality of virtual machines using a virtualized environment.
12. The method of claim 1, wherein the invoking the second instance of the application in the remote processing environment comprises deploying the application to a segregated container accessible to the controller of the service provider server.
13. The method of claim 1, wherein the invoking the second instance of the application in the remote processing environment comprises: establishing a connection with the second instance of the application executed on a remote communication device; and establishing a remote control of the second instance of the application through the connection to the remote communication device.
14. The method of claim 1, wherein the invoking the second instance of the application in the remote processing environment comprises: deploying the second instance of the application on a remote emulation device; and establishing a remote control of the second instance of the application through a connection with the remote emulation device.
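For the remote-device and remote-emulation variants of claims 13 and 14, the controller needs a connection over which it can drive the instance. The sketch below uses a raw TCP socket and a newline-delimited command protocol invented for illustration; the claims do not prescribe any particular transport or protocol:

```python
import socket

def remote_control(host: str, port: int, commands: list[str]) -> list[str]:
    """Establish a connection to the app instance on a remote (or emulated)
    device and drive it by sending commands, collecting each reply."""
    replies = []
    with socket.create_connection((host, port), timeout=10) as conn:
        for cmd in commands:
            conn.sendall(cmd.encode() + b"\n")
            replies.append(conn.recv(4096).decode().strip())
    return replies
```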
15. The method of claim 1, further comprising: determining that a difference between the device data and the stored profile exceeds a threshold, wherein the communication is determined to correspond to the malicious activity based on the difference exceeding the threshold.
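The threshold test of claim 15 reduces to a distance between the observed device data and the stored profile; the per-feature mismatch ratio below is just one illustrative metric, and the 0.3 threshold is an arbitrary stand-in:

```python
def profile_difference(observed: dict, stored: dict) -> float:
    """Fraction of features whose observed value deviates from the stored
    profile: 0.0 means identical, 1.0 means every feature differs."""
    keys = set(observed) | set(stored)
    mismatches = sum(1 for k in keys if observed.get(k) != stored.get(k))
    return mismatches / max(len(keys), 1)

def exceeds_threshold(observed: dict, stored: dict, threshold: float = 0.3) -> bool:
    """Flag the communication as corresponding to malicious activity."""
    return profile_difference(observed, stored) > threshold
```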
16. A system, comprising: a non-transitory memory; and one or more hardware processors coupled to the non-transitory memory and configured to execute instructions from the non-transitory memory to cause the system to perform operations comprising: invoking, by a controller of a service provider server, an instance of at least one application in a remote processing environment using device data associated with the at least one application; transmitting, by the controller to the remote processing environment, a control message that prompts the instance of the at least one application to send one or more requests to a web server over a network for a process script that invokes one or more processes executable in the remote processing environment; obtaining traffic data associated with one or more interactions between the instance of the at least one application and the web server, the traffic data indicating a behavior of application data associated with the at least one application based on the one or more interactions with the web server; determining one or more features of the at least one application in a native state from the behavior of the application data; generating a data profile of the at least one application that indicates the one or more features in the native state; and providing the data profile to a remote engine associated with the service that performs one or more detection operations with the data profile and detects potential malicious activity associated with the at least one application from the one or more detection operations.
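The profile-generation step of claim 16 (deriving native-state features from the observed behavior of the application data) might be outlined as follows; the specific features summarized here are hypothetical choices for the example:

```python
def build_data_profile(traffic_records: list[dict]) -> dict:
    """Summarize observed interactions into a native-state data profile:
    which endpoints the instance touched and which headers it set."""
    endpoint_counts: dict = {}
    header_names: set = set()
    for rec in traffic_records:
        endpoint_counts[rec["path"]] = endpoint_counts.get(rec["path"], 0) + 1
        header_names.update(rec.get("headers", {}))
    return {
        "endpoint_frequencies": endpoint_counts,  # behavioral features
        "header_names": sorted(header_names),     # attribute features
        "interaction_count": len(traffic_records),
    }
```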
17. The system of claim 16, wherein the operations further comprise: acquiring, using a retrieval engine of the service provider server through an application programming interface over a network, the device data associated with a plurality of versions of the at least one application over a communication channel with a plurality of data sources.
18. The system of claim 16, wherein the operations further comprise: providing the traffic data and the data profile to a data repository accessible to one or more remote devices associated with the service that detect malicious activity in the at least one application using a comparison between the data profile and user data.
19. The system of claim 16, wherein the operations further comprise: generating a notification that indicates the behavior of the application data; and providing, through an application programming interface over a network, the notification to a communication device associated with a user.
20. A non-transitory machine-readable medium having stored thereon machine-readable instructions executable to cause a machine to perform operations comprising: receiving, using a retrieval engine of a service provider server through an application programming interface over a network, device data associated with at least one application over a communication channel with one or more data sources; invoking, by a controller of the service provider server, an instance of the at least one application in a remote processing environment using the device data; transmitting, by the controller to the remote processing environment, a control message that causes the instance of the at least one application to interact with a web server over the network; obtaining traffic data associated with one or more interactions between the instance of the at least one application and the web server, the traffic data indicating a behavior of application data associated with the at least one application based on the one or more interactions with the web server; generating a data profile of the at least one application that indicates one or more features of the at least one application in a native state from the behavior of the application data; and providing the traffic data and the data profile to a data repository accessible to one or more remote devices associated with a service that detects malicious activity in the at least one application using a comparison between the data profile and user data.
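Tying claim 20 together end to end, here is a controller-side sketch that reuses the illustrative helpers above (deploy_to_container, interrogate, build_data_profile); the file name, container name, and teardown behavior are assumptions for the example, not mandated by the claims:

```python
import json
import subprocess

def run_analysis_pipeline(app_image: str, web_server_url: str) -> dict:
    """Deploy the app in isolation, interrogate it against the web server,
    build its data profile, and return the profile for the data repository."""
    container_id = deploy_to_container(app_image, name="app-under-test")
    try:
        interrogate(web_server_url, results_path="traffic.json")
        with open("traffic.json") as fh:
            traffic = json.load(fh)
        return build_data_profile(traffic)
    finally:
        # tear the segregated container down regardless of outcome
        subprocess.run(["docker", "stop", container_id], check=False)
```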
PCT/US2021/055107 2020-10-14 2021-10-14 Automated device data retrieval and analysis platform WO2022081930A1 (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US17/070,682 2020-10-14 2020-10-14 Probabilistic anomaly detection in streaming device data (US20220114593A1)
US17/070,714 2020-10-14 2020-10-14 Analysis platform for actionable insight into user interaction data (US11610206B2)
US17/070,737 2020-10-14 2020-10-14 Automated device data retrieval and analysis platform (US11277437B1)
US17/070,755 2020-10-14 2020-10-14 Voice vector framework for authenticating user interactions (US11700250B2)

Publications (1)

Publication Number Publication Date
WO2022081930A1 2022-04-21

Family

ID=81208648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/055107 WO2022081930A1 (en) 2020-10-14 2021-10-14 Automated device data retrieval and analysis platform

Country Status (1)

Country Link
WO (1) WO2022081930A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160226872A1 (en) * 2013-02-22 2016-08-04 Duo Security, Inc. System and method for verifying status of an authentication device
US20190297096A1 (en) * 2015-04-30 2019-09-26 Amazon Technologies, Inc. Threat detection and mitigation in a virtualized computing environment
US20200153855A1 (en) * 2016-02-26 2020-05-14 Oracle International Corporation Techniques for discovering and managing security of applications
US20190098037A1 (en) * 2017-09-28 2019-03-28 Oracle International Corporation Cloud-based threat detection

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230289168A1 (en) * 2022-03-09 2023-09-14 Dell Products L.P. Method and system for performing an application upgrade based on user behavior
US11893376B2 (en) * 2022-03-09 2024-02-06 Dell Products L.P. Method and system for performing an application upgrade based on user behavior

Similar Documents

Publication Publication Date Title
US11677781B2 (en) Automated device data retrieval and analysis platform
US20220114593A1 (en) Probabilistic anomaly detection in streaming device data
US11610206B2 (en) Analysis platform for actionable insight into user interaction data
US20200366671A1 (en) Identity verification and management system
US10977654B2 (en) Machine learning engine for fraud detection during cross-location online transaction processing
US11080637B2 (en) Universal model scoring engine
US11700250B2 (en) Voice vector framework for authenticating user interactions
US11308497B2 (en) Detecting fraud using machine-learning
US20220237603A1 (en) Computer system security via device network parameters
US11785030B2 (en) Identifying data processing timeouts in live risk analysis systems
US11115406B2 (en) System for security analysis and authentication
Arora et al. Facilitating user authorization from imbalanced data logs of credit cards using artificial intelligence
US20210398128A1 (en) Velocity system for fraud and data protection for sensitive data
US20220027428A1 (en) Security system for adaptive targeted multi-attribute based identification of online malicious electronic content
US20220051125A1 (en) Intelligent clustering of account communities for account feature adjustment
US11321449B2 (en) System for security analysis and authentication across downstream applications
WO2023129886A1 (en) Fraud detection using aggregate fraud score for confidence of liveness/similarity decisions of live and identity document photos
WO2022081930A1 (en) Automated device data retrieval and analysis platform
Chen et al. Fraud analysis and detection for real-time messaging communications on social networks
US20220164477A1 (en) Detecting leakage of personal information in computing code configurations
US20230403268A1 (en) Reducing false positives in entity matching based on image-linking graphs
US20240111892A1 (en) Systems and methods for facilitating on-demand artificial intelligence models for sanitizing sensitive data
US20240111891A1 (en) Systems and methods for sanitizing sensitive data and preventing data leakage using on-demand artificial intelligence models
US20240111890A1 (en) Systems and methods for sanitizing sensitive data and preventing data leakage from mobile devices
US20230199082A1 (en) Real-time electronic service processing adjustments

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21881147; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: PCT application non-entry in European phase (Ref document number: 21881147; Country of ref document: EP; Kind code of ref document: A1)