CN115687751A - Method and system for selecting user for target terminal - Google Patents

Method and system for selecting user for target terminal

Info

Publication number
CN115687751A
Authority
CN
China
Prior art keywords
user
scene
features
target
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211096056.2A
Other languages
Chinese (zh)
Inventor
刘贺
李树泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The method and system for selecting users for a target terminal provided by this specification obtain a candidate user set, where the candidate user set includes user information of a plurality of candidate users; input the user information of the plurality of candidate users into a ranking model to obtain a ranking sequence of the plurality of candidate users, where the ranking model includes a generic ranking network and a proprietary ranking network fused layer by layer; and select target users from the ranking sequence and transmit them to the target terminal. In this scheme, because the generic ranking network and the proprietary ranking network in the ranking model are fused layer by layer, the commonality shared by all scenes is captured while the uniqueness of each scene is characterized, so that the accuracy of user selection can be improved when selecting users for a target terminal.

Description

Method and system for selecting user for target terminal
Technical Field
The present disclosure relates to the field of user selection, and more particularly, to a method and system for selecting a user for a target terminal.
Background
In recent years, with the rapid development of Internet technology, IoT intelligent devices have been applied more and more widely in the field of face recognition. In the face recognition process, elastic recognition is generally adopted, which is divided into two stages: end-side recognition and cloud-side recognition. End-side recognition does not need to interact with the cloud side, so it can greatly improve recognition efficiency; therefore, as many users as possible need to be routed to the end-side recognition link. To build the end-side recognition link, users with high coverage need to be selected for the terminal. The field of face recognition may include multiple industry scenarios, such as a college scenario, an enterprise scenario, a bus trip scenario, and so on. Existing methods for selecting users usually select users under different industry scenarios through a single ranking model, or develop a separate ranking model for each industry scenario.
In the research and practice of the prior art, the inventors found that different industry scenarios have their own scene uniqueness, and using the same ranking model for prediction across multiple industry scenarios leads to lower user coverage in those scenarios. In addition, as the granularity of industry scenarios becomes increasingly fine, training and maintaining multiple sets of ranking models requires a large amount of resources, and for long-tail small scenarios with little accumulated data it is difficult for a ranking model to achieve a good fitting effect, so the precision of the ranking model is low and the accuracy of selecting users for a target terminal is low.
Therefore, it is desirable to provide a method and system for selecting a user for a target terminal with higher accuracy.
Disclosure of Invention
The present specification provides a method and system for selecting a user for a target terminal with higher accuracy.
In a first aspect, the present specification provides a method for selecting a user for a target terminal, comprising: acquiring a candidate user set, wherein the candidate user set comprises user information of a plurality of candidate users; inputting the user information of the plurality of candidate users into a ranking model to obtain a ranking sequence of the plurality of candidate users, wherein the ranking model comprises a generic ranking network and a proprietary ranking network fused layer by layer; and selecting a plurality of target users from the ranking sequence and transmitting the target users to the target terminal.
In some embodiments, the ranking model comprises the generic ranking network and the proprietary ranking network corresponding to each preset scenario, the generic ranking network comprises at least one generic ranking network layer, the proprietary ranking network comprises at least one proprietary ranking network layer, and the generic ranking network and the proprietary ranking network have the same number of layers and dimensions.
In some embodiments, that the ranking model includes a generic ranking network and a proprietary ranking network fused layer by layer includes: each generic ranking network layer outputs its output data to the output of the corresponding proprietary ranking network layer for data fusion.
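For illustration only, the following minimal sketch (PyTorch-style Python; all class names, dimensions, and the use of simple addition for fusion are assumptions, not the patent's reference implementation) shows one way such layer-by-layer fusion of a generic and a proprietary ranking network could look:

```python
import torch
import torch.nn as nn

class FusedRankingLayers(nn.Module):
    """Illustrative sketch: generic and scene-specific (proprietary) ranking
    layers with the same depth and width, fused layer by layer by adding each
    generic layer's output to the output of the corresponding proprietary layer."""
    def __init__(self, in_dim: int, hidden_dim: int, num_layers: int = 2):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.generic = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)])
        self.proprietary = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)])

    def forward(self, user_feat: torch.Tensor, scene_feat: torch.Tensor):
        # assumes user and scene features share the same input dimension
        g, s = user_feat, scene_feat
        for g_layer, s_layer in zip(self.generic, self.proprietary):
            g_out = torch.relu(g_layer(g))
            # layer-by-layer fusion: the generic layer's output flows into the
            # output end of the corresponding proprietary layer
            s_out = torch.relu(s_layer(s)) + g_out
            g, s = g_out, s_out
        return g, s  # generic ranking features, scene ranking features
```

In this sketch the generic branch is unaffected by the proprietary branch, while each proprietary layer receives the corresponding generic layer's output, matching the "output end" fusion described above.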
In some embodiments, the ranking model is trained by: acquiring training data of a preset ranking model, wherein the training data comprises a user data sample of each user in a user set corresponding to a terminal, and the preset ranking model comprises a preset generic ranking network and a preset proprietary ranking network corresponding to each preset scene; extracting user features and scene features corresponding to each preset scene from the user data samples; extracting generic ranking features from the user features by using the preset generic ranking network, and extracting scene ranking features from the scene features by using the preset proprietary ranking network; and fusing the generic ranking features and the scene ranking features to obtain a current scene ranking feature corresponding to each preset scene, and training the preset ranking model to convergence based on the generic ranking features and the current scene ranking features to obtain the ranking model.
In some embodiments, the obtaining training data of the preset ranking model includes: selecting a user set corresponding to the terminal from a preset user set; acquiring a historical user data set of each user in the user set; and generating a user data sample of each user based on the historical user data set, and taking the user data sample of each user as the training data.
In some embodiments, the selecting a user set corresponding to the terminal from a preset user set includes: selecting at least one target user subset from the preset user set, wherein each target user subset comprises at least one user corresponding to a preset position; matching users in the target user subset with the terminal based on the terminal information of the terminal; and selecting the user matched with the terminal from the target user subset to obtain a user set corresponding to the terminal.
In some embodiments, said generating a user data sample for said each user based on said set of historical user data comprises: screening out historical user data before a preset historical moment from the historical user data set to obtain first historical user data; screening out historical user data in a target time range from the historical user data set to obtain second historical user data, wherein the target time range comprises a preset time range after the preset historical time; and adding the identification tag corresponding to each preset scene in the first historical user data based on the second historical user data to obtain a user data sample of each user.
In some embodiments, the adding, based on the second historical user data, the identification tag corresponding to each preset scenario in the first historical user data to obtain a user data sample of each user includes: identifying historical identification information of the user at the terminal in the second historical user data; determining an identification tag corresponding to each preset scene based on the historical identification information; adding the identification tag to the first historical user data to obtain a candidate user data sample set; and selecting a user data sample for the user from the set of candidate user data samples.
In some embodiments, the set of candidate user data samples includes user data positive samples and user data negative samples, and the selecting a user data sample for the user from the set of candidate user data samples comprises: acquiring the number of user data positive samples in the candidate user data sample set to obtain a positive sample number; determining a target negative sample number for the user data samples based on the positive sample number and a preset sample proportion; randomly sampling target user data negative samples from the user data negative samples based on the target negative sample number; and taking the user data positive samples and the target user data negative samples as the user data samples of the user.
In some embodiments, said extracting, from the user data sample, the user feature and the scene feature corresponding to each preset scene includes: extracting initial user features and initial scene features corresponding to each preset scene from the user data sample, wherein the initial user features comprise discrete user features and dense user features, and the initial scene features comprise discrete scene features and dense scene features; extracting user text features from the discrete user features, and extracting scene text features from the discrete scene features; and fusing the user text features and the dense user features to obtain user features, and fusing the scene text features and the dense scene features to obtain scene features corresponding to each preset scene.
In some embodiments, the extracting the generic ranking features from the user features using the preset generic ranking network comprises: determining a target generic ranking network layer in the at least one preset generic ranking network layer; performing multi-dimensional feature extraction on the user features by using the generic ranking sub-networks in the target generic ranking network layer to obtain a generic ranking sub-feature corresponding to each generic ranking sub-network; and fusing the generic ranking sub-features to obtain the generic ranking features.
In some embodiments, the fusing the generic ranking features and the scene ranking features to obtain a current scene ranking feature corresponding to each preset scene comprises: determining a scene ranking weight corresponding to each preset scene based on the scene features corresponding to each preset scene; respectively weighting the generic ranking features and the scene ranking features based on the scene ranking weight to obtain weighted generic ranking features and weighted scene ranking features; and fusing the weighted generic ranking features and the weighted scene ranking features to obtain the current scene ranking feature corresponding to each preset scene.
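As an illustration of the scene-weighted fusion just described, the sketch below (names and dimensions are assumptions; the actual gating network is only shown schematically in FIG. 4) derives two weights from the scene features and uses them to combine the generic and scene ranking features:

```python
import torch.nn as nn

class SceneGate(nn.Module):
    """Illustrative gating sketch: derive per-scene fusion weights from the
    scene features, then weight and sum the generic and scene ranking features."""
    def __init__(self, scene_dim: int):
        super().__init__()
        # two weights: one for the generic branch, one for the scene branch
        self.gate = nn.Sequential(nn.Linear(scene_dim, 2), nn.Softmax(dim=-1))

    def forward(self, scene_feat, generic_rank_feat, scene_rank_feat):
        w = self.gate(scene_feat)                      # shape: (batch, 2)
        w_generic, w_scene = w[..., :1], w[..., 1:]
        # weighted fusion -> current scene ranking feature for this scene
        return w_generic * generic_rank_feat + w_scene * scene_rank_feat
```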
In some embodiments, the training the preset ranking model to convergence based on the generic ranking features and the current scene ranking features to obtain the ranking model comprises: fusing the generic ranking features and the user features to obtain current generic ranking features; taking the current generic ranking features as the user features and taking the current scene ranking features as the scene features; returning to the step of extracting the generic ranking features from the user features by using the preset generic ranking network and extracting the scene ranking features from the scene features by using the preset proprietary ranking network, until the number of fusions reaches a preset number, to obtain target generic ranking features and a target scene ranking feature corresponding to each preset scene; and training the preset ranking model to convergence based on the target generic ranking features and the target scene ranking features to obtain the ranking model.
In some embodiments, the fusing the generic ranking features and the user features to obtain current generic ranking features comprises: determining, based on the user features, a generic ranking weight for the generic ranking sub-feature of each dimension in the generic ranking features; weighting the generic ranking sub-features based on the generic ranking weights to obtain weighted generic ranking sub-features; and fusing the weighted generic ranking sub-features to obtain the current generic ranking features.
In some embodiments, the training the preset ranking model to convergence based on the target generic ranking features and the target scene ranking features to obtain the ranking model comprises: determining generic scene loss information corresponding to the user data samples based on the target generic ranking features; determining proprietary scene loss information corresponding to each preset scene based on the target scene ranking features; fusing the generic scene loss information and the proprietary scene loss information to obtain target loss information of the preset ranking model; and training the preset ranking model to convergence based on the target loss information to obtain the ranking model.
In some embodiments, the determining the generic scene loss information corresponding to the user data samples based on the target generic ranking features comprises: adjusting the target generic ranking features through a preset activation function to obtain adjusted generic ranking features; predicting the identification information of the user at the terminal based on the adjusted generic ranking features to obtain first predicted identification information; and determining the generic scene loss information corresponding to the user data samples based on the first predicted identification information and the identification tags in the user data samples.
In some embodiments, the determining the proprietary scene loss information corresponding to each preset scene based on the target scene ranking features comprises: adjusting the target scene ranking features through the preset activation function to obtain adjusted scene ranking features; fusing the adjusted scene ranking features and the adjusted generic ranking features to obtain fused scene ranking features; predicting, based on the fused scene ranking features, the identification information of the user at the terminal under each preset scene to obtain second predicted identification information; and determining the proprietary scene loss information corresponding to each preset scene based on the second predicted identification information and the identification tags of the user data samples.
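A minimal sketch of the loss fusion described in the preceding embodiments, assuming binary "identified / not identified" labels and simple summation of the per-scene losses (both assumptions):

```python
import torch.nn.functional as F

def total_ranking_loss(generic_logit, scene_logits, generic_label, scene_labels):
    """Illustrative loss-fusion sketch: one generic-scene loss plus one
    proprietary loss per preset scene, fused into the target loss used to
    train the preset ranking model. Labels are assumed to be float tensors."""
    loss = F.binary_cross_entropy_with_logits(generic_logit, generic_label)
    for scene, logit in scene_logits.items():
        loss = loss + F.binary_cross_entropy_with_logits(logit, scene_labels[scene])
    return loss
```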
In some embodiments, the obtaining the set of candidate users comprises: acquiring a user selection request aiming at the target terminal; acquiring a target user set corresponding to the target terminal based on the user selection request; and cleaning the target user set to obtain a candidate user set.
In some embodiments, the inputting the user information of the plurality of candidate users into a ranking model to obtain a ranking sequence of the plurality of candidate users comprises: inputting the user information of the plurality of candidate users into the ranking model based on the user selection request to obtain ranking information of the plurality of candidate users; and ranking the plurality of candidate users based on the ranking information to obtain the ranking sequence of the plurality of candidate users.
In some embodiments, the inputting the user information of the plurality of candidate users into the ranking model based on the user selection request to obtain the ranking information of the plurality of candidate users comprises: when the user selection request does not include scene information, inputting the user information of the plurality of candidate users into the ranking model to obtain generic ranking information and scene ranking information corresponding to each preset scene; and using the generic ranking information and the scene ranking information as the ranking information of the plurality of candidate users.
In some embodiments, the inputting the user information of the plurality of candidate users into the ranking model based on the user selection request to obtain the ranking information of the plurality of candidate users comprises: when the user selection request includes scene information, extracting a target scene from the scene information; when the preset scenes include the target scene, inputting the user information of the plurality of candidate users into the ranking model to obtain target scene ranking information corresponding to the target scene; and taking the target scene ranking information as the ranking information of the plurality of candidate users.
In some embodiments, when the user selection request includes scene information, after the target scene is extracted from the scene information, the method further comprises: when the preset scenes do not include the target scene, inputting the user information of the plurality of candidate users into the ranking model to obtain the generic ranking information; and using the generic ranking information as the ranking information of the plurality of candidate users.
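The three preceding embodiments together describe a routing rule at inference time. A hypothetical sketch (all field and variable names are assumptions) could look like:

```python
def ranking_info_for_request(request, model_outputs, preset_scenes):
    """Illustrative routing sketch: pick which ranking scores to use depending
    on whether the user selection request carries scene information."""
    scene = request.get("scene")            # None when no scene information
    if scene is None:
        # no scene info: return generic scores plus every preset scene's scores
        return {"generic": model_outputs["generic"], **model_outputs["scenes"]}
    if scene in preset_scenes:
        # known scene: use the target scene's ranking information only
        return {scene: model_outputs["scenes"][scene]}
    # unknown scene: fall back to the generic ranking information
    return {"generic": model_outputs["generic"]}
```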
In some embodiments, the selecting a plurality of target users from the ranking sequence and transmitting them to the target terminal comprises: selecting the plurality of target users from the ranking sequence; acquiring target facial features of the plurality of target users; and sending the plurality of target users and the target facial features to the target terminal so that the target terminal performs face recognition based on the target facial features.
In some embodiments, the selecting the plurality of target users from the ranking sequence comprises: classifying the candidate users in the ranking sequence to obtain user groups of each type; and selecting a target user group from the user groups, and taking the users in the target user group as the plurality of target users.
In a second aspect, the present specification also provides a system for selecting a user for a target terminal, comprising: at least one storage medium storing at least one instruction set for performing a user selection for a target terminal; and at least one processor communicatively coupled to the at least one storage medium, wherein when the system for selecting a user for a target terminal is operating, the at least one processor reads the at least one instruction set and performs the method for selecting a user for a target terminal according to the first aspect of the specification.
According to the above technical solution, the method and system for selecting users for a target terminal provided by this specification acquire a candidate user set, where the candidate user set includes user information of a plurality of candidate users; input the user information of the plurality of candidate users into a ranking model to obtain a ranking sequence of the plurality of candidate users, where the ranking model includes a generic ranking network and a proprietary ranking network fused layer by layer; and select target users from the ranking sequence and transmit them to the target terminal. Because the generic ranking network and the proprietary ranking network in the ranking model are fused layer by layer, the uniqueness of each scene is characterized while the commonality shared by all scenes is captured; the scene commonality improves the fitting effect in long-tail small scenes and reduces the risk of overfitting in large scenes, so the coverage rate of each scene can be improved and the accuracy of user selection can be improved when selecting users for a target terminal.
Additional features of the method and system for selecting a user for a target terminal provided by the present description will be set forth in part in the description which follows. The following description will be readily apparent to those of ordinary skill in the art upon review. The inventive aspects of the method and system for selecting a user for a target terminal provided by the present specification can be fully explained by the practice or use of the methods, apparatus, and combinations described in the detailed examples below.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 illustrates an application scenario diagram of a system for selecting a user for a target terminal according to an embodiment of the present specification;
FIG. 2 illustrates a hardware block diagram of a computing device provided in accordance with an embodiment of the present description;
FIG. 3 illustrates a flowchart for training a pre-set ranking model provided in accordance with an embodiment of the present description;
FIG. 4 is a schematic structural diagram illustrating a gating network provided in accordance with an embodiment of the present disclosure;
FIG. 5 illustrates an architecture diagram for training a preset ranking model provided in accordance with an embodiment of the present description; and
Fig. 6 shows a flowchart of a method for selecting a user for a target terminal according to an embodiment of the present specification.
Detailed Description
The following description is presented to enable any person skilled in the art to make and use the present disclosure, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present description. Thus, the present description is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. For example, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," "includes," and/or "including," when used in this specification, are intended to specify the presence of stated integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features of the present specification, as well as the operation and function of the related elements of structure and the combination of parts and economies of manufacture, may be significantly improved upon consideration of the following description. Reference is made to the accompanying drawings, all of which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the specification. It should also be understood that the figures are not drawn to scale.
The flow diagrams used in this specification illustrate the operation of system implementations according to some embodiments of the specification. It should be clearly understood that the operations of the flow diagrams may be performed out of order. Rather, the operations may be performed in reverse order or simultaneously. In addition, one or more other operations may be added to the flowchart. One or more operations may be removed from the flowchart.
Before describing the specific embodiments of the present specification, the following description will be made for the application scenarios of the present specification:
When an IoT intelligent device is applied to a face recognition scenario, it usually adopts elastic recognition in the face recognition process, and elastic recognition is divided into two stages: end-side recognition and cloud-side recognition. Since end-side recognition does not need to interact with the cloud side, it can greatly improve recognition efficiency; therefore, as many users as possible need to be routed to the end-side recognition link. If the users who will perform face recognition at the terminal in the future can be predicted, the cloud side can send the facial features or other recognition features of those users to the end side in advance, so that face recognition can be performed directly on the end side, which can greatly improve the efficiency and accuracy of face recognition.
For convenience of description, the present specification will make the following explanations on terms that will appear in the following description:
IoT intelligent device: an intelligent device applied to the Internet of Things, which can also be understood as an Internet of Things device; in this solution it can be understood as any terminal device capable of performing face recognition.
User selection: also called crowd selection, which can be understood as screening users with high coverage for one or more terminals from the full user base. In this solution, it means predicting, from the user set obtained after user recall for each terminal, the users who are likely to perform face recognition at the terminal in the future.
It should be noted that the face recognition scenario is only one of a plurality of usage scenarios provided in this description. The method and system for selecting a user for a target terminal described in this description can be applied not only to the face recognition scenario but to all scenarios in which users are selected, for example, an access control identity verification scenario, a face-brushing payment scenario, and the like. It should be understood by those skilled in the art that applying the method and system for selecting a user for a target terminal described in this specification to other usage scenarios is also within the scope of this specification.
Fig. 1 illustrates an application scenario diagram of a system 001 for selecting a user for a target terminal according to an embodiment of the present specification. The system 001 for selecting a user for a target terminal (hereinafter, referred to as the system 001) may be applied to user selection in any scenario, for example, user selection in a face recognition scenario, user selection in a face-brushing payment scenario, user selection in a bus trip scenario, and the like, as shown in fig. 1, the system 001 may include a target user 100, a client 200, a server 300, and a network 400.
The target user 100 may be a user who triggers selection of a user for the target terminal, and the target user 100 may perform a user selection operation at the client 200.
The client 200 may be a device that selects users corresponding to the target terminal in response to the user selection operation of the target user 100. In some embodiments, the method of selecting a user for a target terminal may be performed on the client 200. At this time, the client 200 may store data or instructions for performing the method of selecting a user for a target terminal described in the present specification, and may execute or be used to execute the data or instructions. In some embodiments, the client 200 may include a hardware device having a data information processing function and a program necessary for driving the hardware device to operate. As shown in fig. 1, the client 200 may be communicatively coupled to the server 300. In some embodiments, the server 300 may be communicatively coupled to a plurality of clients 200. In some embodiments, the client 200 may interact with the server 300 over the network 400 to receive or send messages or the like, such as to receive or send user information. In some embodiments, the client 200 may include a mobile device, a tablet computer, a laptop computer, a built-in device of a motor vehicle, or the like, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart television, a desktop computer, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, a navigation device, and the like, or any combination thereof. In some embodiments, the virtual reality device or augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device or the augmented reality device may include *** glasses, head mounted displays, VRs, and the like. In some embodiments, the built-in devices in the motor vehicle may include an on-board computer, an on-board television, and the like. In some embodiments, the client 200 may include a data collection device for collecting user data or user information to obtain a set of candidate users. In some embodiments, the client 200 may be a device with positioning technology for locating the position of the client 200.
In some embodiments, the client 200 may have one or more applications (APPs) installed. The APP can provide the target user 100 with the ability and an interface to interact with the outside world over the network 400. The APP includes but is not limited to: web browser APPs, search APPs, chat APPs, shopping APPs, video APPs, financial APPs, instant messaging tools, mailbox clients, social platform software, and the like. In some embodiments, a target APP may be installed on the client 200. The target APP can acquire user data or user information of the candidate users for the client 200, so as to obtain a candidate user set. In some embodiments, the target user 100 may also trigger a user selection request through the target APP. The target APP may perform the method for selecting a user for a target terminal described in this specification in response to the user selection request. The method for selecting a user for a target terminal will be described in detail later.
The server 300 may be a server that provides various services, such as a backend server that provides support for a collection of candidate users collected on the client 200. In some embodiments, the method of selecting a user for a target terminal may be performed on the server 300. At this time, the server 300 may store data or instructions for performing the method of selecting a user for a target terminal described in the present specification, and may execute or be used to execute the data or instructions. In some embodiments, the server 300 may include a hardware device having a data information processing function and a program necessary for driving the hardware device to operate. The server 300 may be communicatively coupled to a plurality of clients 200 and receive data transmitted by the clients 200.
Network 400 is the medium used to provide communication connections between clients 200 and server 300. The network 400 may facilitate the exchange of information or data. As shown in fig. 1, the client 200 and the server 300 may be connected to a network 400 and transmit information or data to each other through the network 400. In some embodiments, the network 400 may be any type of wired or wireless network, as well as combinations thereof. For example, network 400 may include a cable network, a wireline network, a fiber optic network, a telecommunications network, an intranet, the Internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Wide Area Network (WAN), the Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like. In some embodiments, network 400 may include one or more network access points. For example, network 400 may include a wired or wireless network access point, such as a base station or an internet exchange point, through which one or more components of client 200 and server 300 may connect to network 400 to exchange data or information.
It should be understood that the number of clients 200, servers 300, and networks 400 in fig. 1 is merely illustrative. There may be any number of clients 200, servers 300, and networks 400, as desired for an implementation.
It should be noted that the method for selecting a user for a target terminal may be completely executed on the client 200, may be completely executed on the server 300, may be partially executed on the client 200, and may be partially executed on the server 300.
FIG. 2 illustrates a hardware block diagram of a computing device 600 provided in accordance with an embodiment of the present description. The computing device 600 may perform the method of selecting a user for a target terminal described herein; the method is described elsewhere in this specification. The computing device 600 may be the client 200 when the method of selecting a user for a target terminal is performed on the client 200. When the method of selecting a user for a target terminal is performed on the server 300, the computing device 600 may be the server 300. When the method of selecting a user for a target terminal is performed partly on the client 200 and partly on the server 300, the computing device 600 may be both the client 200 and the server 300.
As shown in fig. 2, computing device 600 may include at least one storage medium 630 and at least one processor 620. In some embodiments, computing device 600 may also include a communication port 650 and an internal communication bus 610. Meanwhile, computing device 600 may also include I/O components 660.
Internal communication bus 610 may connect various system components including storage medium 630, processor 620 and communication port 650.
I/O components 660 support input/output between computing device 600 and other components.
Communication port 650 provides for data communication between computing device 600 and the outside world, for example, communication port 650 may provide for data communication between computing device 600 and network 400. The communication port 650 may be a wired communication port or a wireless communication port.
The storage medium 630 may include a data storage device. The data storage device may be a non-transitory storage medium or a transitory storage medium. For example, the data storage device may include one or more of a disk 632, a read only memory medium (ROM) 634, or a random access memory medium (RAM) 636. The storage medium 630 also includes at least one set of instructions stored in the data storage device. The instructions are computer program code that may include programs, routines, objects, components, data structures, procedures, modules, etc. that perform the methods provided herein for selecting a user for a target terminal.
The at least one processor 620 may be communicatively coupled to at least one storage medium 630 and a communication port 650 via an internal communication bus 610. The at least one processor 620 is configured to execute the at least one instruction set. When the computing device 600 is run, the at least one processor 620 reads the at least one instruction set and, as directed by the at least one instruction set, performs the methods provided herein for selecting a user for a target terminal. The processor 620 may perform all the steps involved in the method of selecting a user for a target terminal. The processor 620 may be in the form of one or more processors, and in some embodiments, the processor 620 may include one or more hardware processors, such as microcontrollers, microprocessors, reduced instruction set computers (RISCs), application-specific integrated circuits (ASICs), application-specific instruction set processors (ASIPs), central processing units (CPUs), graphics processing units (GPUs), physics processing units (PPUs), microcontroller units, digital signal processors (DSPs), field programmable gate arrays (FPGAs), advanced RISC machines (ARMs), programmable logic devices (PLDs), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof. For illustrative purposes only, only one processor 620 is depicted in the computing device 600 in this description. It should be noted, however, that the computing device 600 may also include multiple processors, and thus, the operations and/or method steps disclosed in this specification may be performed by one processor, as described herein, or by a combination of multiple processors. For example, if in this description the processor 620 of the computing device 600 performs steps A and B, it should be understood that steps A and B may also be performed jointly or separately by two different processors 620 (e.g., a first processor performing step A and a second processor performing step B, or both the first and second processors jointly performing steps A and B).
Fig. 3 shows a flowchart of training a preset ranking model in a method P100 for selecting a user for a target terminal according to an embodiment of the present specification. As before, the computing device 600 may perform the method P100 of the present specification of selecting a user for a target terminal. In particular, the processor 620 may read a set of instructions stored in its local storage medium and then execute the method P100 of the present specification for selecting a user for a target terminal, as specified by the set of instructions. As shown in fig. 3, method P100 may include:
S110: Acquiring training data of a preset ranking model.
The training data includes a user data sample of each user in a user set corresponding to the terminal. The user data sample may include historical user data of each user in the terminal and an identification tag corresponding to each preset scenario.
The preset ranking model may be a preset model for ranking users, and the preset ranking model may include a preset generic ranking network and a preset proprietary ranking network corresponding to each preset scene. The preset generic ranking network can be understood as a preset network for performing ranking prediction for users in a general scene; it may include at least one preset generic ranking network layer, each preset generic ranking network layer may include a plurality of parallel generic ranking sub-networks, and each generic ranking sub-network may be a shared expert network or a fully connected layer. The preset proprietary ranking network may include at least one preset proprietary ranking network layer, each preset proprietary ranking network layer may include a plurality of parallel proprietary ranking sub-networks, and each proprietary ranking sub-network may be a private expert network or a fully connected layer. The preset generic ranking network has the same number of network layers and the same dimensions as the preset proprietary ranking network.
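Purely as an illustration of this structure, the following sketch (PyTorch-style; the number of experts and dimensions are assumptions) shows one ranking network layer built from several parallel fully connected sub-networks; the same class could be instantiated for the shared (generic) experts and for the private (proprietary) experts:

```python
import torch
import torch.nn as nn

class ExpertLayer(nn.Module):
    """Illustrative sketch of one preset ranking network layer: several
    parallel sub-networks ('experts'), each a single fully connected layer."""
    def __init__(self, in_dim: int, out_dim: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(num_experts)])

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        # each expert extracts its own view of the input features in parallel
        return [torch.relu(expert(x)) for expert in self.experts]
```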
The method for acquiring the training data of the preset ranking model may be various, and specifically may be as follows:
for example, the processor 620 may select a user set corresponding to the terminal from a preset user set, obtain a historical user data set of each user in the user set, generate a user data sample of each user based on the historical user data set, and use the user data sample of each user as training data, which may specifically be as follows:
and S111, selecting a user set corresponding to the terminal from the preset user set.
The preset user set may be understood as a full user set. The set of users may include a set of terminal-related users recalled from the full set of users. The manner of selecting the user set corresponding to the terminal from the preset user set may be various, and specifically, the manner may be as follows:
for another example, the processor 620 may select at least one target user subset from the preset user set, match the users in the target user subset with the terminal based on the terminal information of the terminal, and select the user matched with the terminal from the target user subset to obtain the user set corresponding to the terminal.
Each target user subset comprises at least one user corresponding to a preset position. For example, the processor 620 may obtain location information of each user in the preset user set, and perform multi-way recall on the users in the preset user set using the preset positions as anchor points, so as to obtain at least one target user subset.
The preset position can be preset position information used as a point location fence, and the type of the preset position can be various. For example, the preset location may include location information reported by the user, location information in a payment record of the user, user resident location information, a receiving address of the user or a network address of the user, and the like. The location information here may be understood as a specific location (point location), or may be understood as a specific location range or a pre-divided area of a land.
After selecting at least one target user subset, matching users in the target user subset with the terminal based on the terminal information of the terminal. By terminal information is understood attribute information of the terminal, which may include deployment time, deployment location, device type, or other attribute information of the terminal. Based on the terminal information of the terminal, there may be various ways to match the users in the target user subset with the terminal, for example, the processor 620 may match the deployment location of the terminal with a preset location, or may also match the deployment location of the terminal with the location information of each user in the target user subset.
And selecting a user matched with the terminal or a target user subset from the target user subset, so as to obtain a user set corresponding to the terminal.
S112: and acquiring a historical user data set of each user in the user set.
The historical user data set comprises user data collected before the current time or in a preset historical time range.
The method for acquiring the historical user data set of each user in the user set may be various, and specifically may be as follows:
for example, the processor 620 may screen out, from the full user data set, user data of each user before the current time to obtain a historical user data set of the user, or may screen out, from the full user data set, a candidate user data set of each user, and screen out, from the candidate user data set, user data corresponding to a preset historical time range to obtain a historical user data set of the user.
S113: and generating a user data sample of each user based on the historical user data set, and taking the user data sample of each user as training data.
Based on the historical user data set, there may be multiple ways to generate the user data sample of each user, which may specifically be as follows:
for example, the processor 620 may screen out historical user data before a preset historical time from the historical user data set to obtain first historical user data, screen out historical user data in a target time range from the historical user data set to obtain second historical user data, and add an identification tag corresponding to each preset scene to the first historical user data based on the second historical user data to obtain a user data sample of each user.
The preset historical time may be a preset moment in the past and may be any historical moment before the current time. Taking a preset historical time of April 15 at 0:00 as an example, the first historical user data may be the data before April 15 at 0:00 in the historical user data set. The target time range includes a preset time range after the preset historical time; for example, when the preset historical time is April 15 at 0:00 and the preset time range is one day, the second historical user data may be the user data for April 15 in the historical user data set. By comparison, the second historical user data can be regarded as future data relative to the first historical user data, so the identification tag corresponding to the first historical user data can be determined through the second historical user data. Accordingly, there may be multiple ways of adding the identification tag corresponding to each preset scene to the first historical user data based on the second historical user data to obtain the user data sample of each user. For example, the processor 620 may identify, in the second historical user data, the historical identification information of the user at the terminal, determine the identification tag corresponding to each preset scene based on the historical identification information, add the identification tag to the first historical user data to obtain a candidate user data sample set, and select the user data sample of the user from the candidate user data sample set.
The historical identification information can be understood as information indicating whether the user performed face recognition at the terminal within the target time range. The identification tag may be tag information indicating whether the user completed identification at the terminal in each preset scene. For example, the processor 620 extracts an identification result corresponding to each preset scene from the historical identification information, and determines the identification tag corresponding to each preset scene based on the identification result, where the type of the identification tag may include identification completed and identification not completed.
After the identification tag corresponding to each preset scene is determined, the identification tag may be added to the first historical user data to obtain a candidate user data sample set, and the identification tag adding manner may be various, for example, the processor 620 may add the identification tag that has been identified to the first historical user data to obtain a user data positive sample, add the identification tag that has not been identified to the first historical user data to obtain a user data negative sample, and use the user data positive sample and the user data negative sample as the candidate user data sample set.
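A hypothetical sketch of this labeling step, assuming the second historical user data is a list of recognition records with "scene" and "recognized" fields (all field and function names are assumptions):

```python
def build_candidate_samples(first_data, second_data, preset_scenes):
    """Illustrative labeling sketch: use the 'future' window (second historical
    user data) to decide, per preset scene, whether the user completed
    identification at the terminal, and attach that tag to the 'past'
    features (first historical user data)."""
    samples = []
    for scene in preset_scenes:
        recognized = any(
            rec["scene"] == scene and rec["recognized"] for rec in second_data)
        samples.append({
            "features": first_data,           # features built from past data only
            "scene": scene,
            "label": 1 if recognized else 0,   # 1 = identification completed
        })
    return samples
```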
After the set of candidate user data samples is obtained, the user data sample of the user may be selected from the set of candidate user data samples. The candidate user data sample set may include a user data positive sample and a user data negative sample, and the user data sample may be selected in various manners, for example, the processor 620 may obtain the number of the user data positive samples in the candidate user data sample set, obtain the number of the positive samples, determine the number of the target negative samples in the user data sample based on the number of the positive samples and a preset sample ratio, randomly sample the target user data negative sample in the user data negative sample based on the number of the target negative samples, and use the user data positive sample and the target user data negative sample as the user data sample of the user.
A user data positive sample can be regarded as a candidate user data sample in which the recalled user completed identification within the target time range after the preset historical time, and a user data negative sample can be regarded as a candidate user data sample in which the recalled user did not complete identification within the target time range after the preset historical time. A candidate user data sample includes user data and the preset position corresponding to the user data. In practical applications, the positive and negative samples are often severely imbalanced, so random undersampling can be performed on the user data negative samples, keeping the quantity ratio of user data positive samples to user data negative samples at a preset proportion, which can be set according to the practical application, for example 1:2 or another ratio.
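A minimal sketch of this random undersampling, assuming a 1:2 positive-to-negative ratio (the ratio, function name, and fixed seed are assumptions):

```python
import random

def balance_samples(positives, negatives, neg_per_pos=2, seed=0):
    """Illustrative undersampling sketch: keep all positive samples and
    randomly draw negatives so the positive:negative ratio matches the
    preset proportion (e.g. 1:2)."""
    rng = random.Random(seed)
    target_neg = min(len(negatives), len(positives) * neg_per_pos)
    sampled_negatives = rng.sample(negatives, target_neg)
    return positives + sampled_negatives
```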
After the user data samples for each user are generated, the user data samples for each user may be used as training data for the ranking model.
S120: and extracting user characteristics and scene characteristics corresponding to each preset scene from the user data samples.
The user features may include user attribute features and user behavior features. The user attribute features are used to characterize basic attribute information of the user, and may include, for example, age, gender, or hobbies. The user behavior features may include application features and location features. The application features are used to characterize the behavior of the user in a specific application program and, to some extent, represent the user's trust in and dependence on that application. The types of application features may be various and may include, for example, the number of face swipes by the user in the last X days, the number of transactions or the activity of the user in the last X days, and so on. The location features are used to characterize the user's behavior when using the application within a specific location (point location). The types of location features may be various and may include, for example, the number of face swipes, the number of transactions, or the activity of the user in the last X days at the location, etc.
The preset scene may be a preset vertical industry scenario. The scene types may be various and may include, for example, college group dining, enterprise group dining, K12 group dining, access control identity verification, bus trips, or other scenarios in which face recognition can be applied. The scene features characterize the behavior of the user in a specific scene and represent the user's behavior pattern in that scene. The types of scene features may be various and may include, for example, the number of times a user swipes a face in a college scene in the last X days at a preset position, the number of transactions of a user in an enterprise scene in the last X days at a preset position, and so on.
The method for extracting the user characteristics and the scene characteristics corresponding to each preset scene from the user data sample may be various, and specifically may be as follows:
for example, the processor 620 may extract initial user features and initial scene features corresponding to each preset scene from the user data sample, where the initial user features include discrete user features and dense user features, the initial scene features include discrete scene features and dense scene features, extract user text features from the discrete user features, extract scene text features from the discrete scene features, fuse the user text features with the dense user features to obtain user features, and fuse the scene text features with the dense scene features to obtain scene features corresponding to each preset scene feature.
The extracted initial user features and initial scene features may be statistical features or sequence features, and the statistical features may be understood as feature information obtained in a statistical manner, such as face brushing times, transaction times, and the like; the sequence feature may be feature information included in a sequence based on the user position, and may include sequence information of user face brushing in the terminal under preset position information, for example, sequence information of user a brushing face for the first time, brushing face for the second time, and brushing face for the nth time in terminal C in position B. Therefore, there are various ways to extract the initial user features and the initial scene features, for example, when the initial user features and the initial scene features are statistical features, the processor 620 may extract user information and scene information from the user data samples, count the initial user features in the user information through the user feature extraction network, and count the initial scene features in the scene information through the scene feature extraction network; when the initial user features and the initial scene features are sequence features, multidimensional user sub-features can be extracted through a user extraction network, multidimensional scene sub-features can be extracted through the scene feature extraction network, target user sub-features are extracted from the user sub-features through an attention network, the target user sub-features are fused to obtain the initial user features, target scene sub-features are extracted from the scene sub-features through the attention network, and the target scene sub-features are fused to obtain the initial scene features.
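For the sequence-feature case, the attention step described above could be sketched as follows (dimensions and the weighted-sum pooling are assumptions, not the patent's reference attention network):

```python
import torch
import torch.nn as nn

class AttentionPooling(nn.Module):
    """Illustrative sketch: score each sub-feature in a behavior sequence and
    fuse the weighted sub-features into a single initial feature vector."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, sub_feats: torch.Tensor) -> torch.Tensor:
        # sub_feats: (batch, seq_len, dim), e.g. one row per face-swipe event
        weights = torch.softmax(self.score(sub_feats), dim=1)
        return (weights * sub_feats).sum(dim=1)   # fused initial feature
```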
The initial user features comprise discrete user features and dense user features, and the initial scene features comprise discrete scene features and dense scene features. The dense user features can be understood as dense features in the user features, the discrete user features can be understood as discrete features in the user features, the dense scene features can be understood as dense features in the scene features, and the discrete scene features can be understood as discrete features in the scene features. The dense feature may be a feature that mostly takes on a value other than zero. The discrete feature may be feature information whose feature value is not continuously divisible, and may generally include features described by a natural number, an integer, or a count unit of a user, such as the number of employees, the number of factories, the number of machines, and the age.
The user text features can be understood as word embedding vector features extracted from discrete user features. There are various ways to extract the user text features from the discrete user features, for example, the processor 620 may directly input the discrete user features into a word embedding layer (embedding layer), obtain intermediate layer embedding vector features, and use the intermediate layer embedding vector features as the user text features. Scene text features may be understood as word-embedded vector features extracted in discrete scene features. The manner of extracting the scene text features from the discrete scene features is the same as the manner of extracting the user text features, which is described above in detail, and is not described here any more.
After the user text features and the scene text features are extracted, the user text features can be fused with the dense user features, and the scene text features can be fused with the dense scene features, in various ways. For example, the processor 620 may directly splice (concat) the user text features with the dense user features to obtain the user features, and splice the scene text features with the dense scene features to obtain the scene features. Alternatively, the processor 620 may obtain user weights, weight the user text features and the dense user features respectively based on the user weights, and fuse the weighted user text features and the weighted dense user features to obtain the user features; and likewise obtain scene weights, weight the scene text features and the dense scene features respectively based on the scene weights, and fuse them to obtain the scene features.
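A minimal sketch of the splice (concat) variant of this fusion is shown below; the vocabulary size, embedding dimension, and dense dimension are assumptions chosen only for illustration.

import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    # Embeds discrete feature ids and splices (concat) them with dense features.
    def __init__(self, vocab_size=1000, embed_dim=8, dense_dim=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)   # word-embedding layer

    def forward(self, discrete_ids, dense_feats):
        text_feats = self.embedding(discrete_ids).flatten(1)   # text (embedding vector) features
        return torch.cat([text_feats, dense_feats], dim=-1)    # splice (concat) fusion

fuse = FeatureFusion()
user_features = fuse(torch.randint(0, 1000, (4, 3)), torch.randn(4, 4))  # -> [4, 3*8 + 4]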
According to the scheme, each preset scene (specific scene) can be independently depicted through the scene characteristics, and the scene characteristics can be different, so that the uniqueness of each preset scene can be more fully learned, the flexibility of each preset scene is increased, and the coverage rate of each preset scene is improved.
S130: and extracting general sequencing characteristics from the user characteristics by adopting a preset general sequencing network, and extracting scene sequencing characteristics from the scene characteristics by adopting a preset special sequencing network.
The general ordering feature can be understood as feature information for ordering the users in a general scene, and the scene ordering feature can be understood as feature information for ordering the users in a special scene.
The method for extracting the universal sorting features from the user features by adopting the preset universal sorting network can be various, and specifically can be as follows:
for example, the processor 620 may determine a target universal sorting network layer from the at least one preset universal sorting network layer, perform multi-dimensional feature extraction on the user features by using a universal sorting sub-network in the target universal sorting network layer to obtain a universal sorting sub-feature corresponding to each universal sorting sub-network, and fuse the universal sorting sub-features to obtain a universal sorting feature.
For example, the processor 620 may determine the target universal ranking network layer based on the number of times the universal ranking features have been extracted: when the universal ranking features are extracted for the first time, the target universal ranking network layer may be determined to be the first layer; for the second time, the second layer; and for the Nth time, the Nth layer. The number of universal ranking network layers can be set according to the practical application and can be any number of layers.
After the target universal sequencing network layer is determined, a universal sequencing sub-network in the target universal sequencing network layer can be adopted to perform multi-dimensional feature extraction on the user features. The general ranking sub-network is essentially a full connection layer, so that there are various ways of performing multi-dimensional feature extraction on the user features, for example, the processor 620 may perform feature extraction on the user features by using the full connection layers in the general ranking sub-networks, so as to obtain the general ranking sub-features output by each general ranking sub-network.
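To make the parallel structure concrete, the following sketch shows one ranking network layer as a set of parallel fully connected sub-networks, each producing one sub-feature; the sub-network count, dimensions, and relu activation are assumptions for illustration only.

import torch
import torch.nn as nn

class GenericRankingLayer(nn.Module):
    # One ranking network layer: several parallel fully connected sub-networks,
    # each producing one ranking sub-feature from the same input feature.
    def __init__(self, in_dim=32, out_dim=16, num_subnets=4):
        super().__init__()
        self.subnets = nn.ModuleList(nn.Linear(in_dim, out_dim) for _ in range(num_subnets))

    def forward(self, feat):
        return [torch.relu(net(feat)) for net in self.subnets]  # one sub-feature per sub-network

layer = GenericRankingLayer()
sub_features = layer(torch.randn(4, 32))   # list of 4 sub-features, each of shape [4, 16]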
After the general sorting sub-features are extracted, the general sorting sub-features can be fused to obtain the general sorting features. There may be multiple fusion modes: for example, the processor 620 may directly combine the general sorting sub-features to obtain the general sorting features, or may directly splice the general sorting sub-features to obtain the general sorting features, or may obtain weights of the general sorting sub-features, weight the general sorting sub-features based on the weights, and fuse the weighted general sorting sub-features to obtain the general sorting features.
The method for extracting the scene sequencing feature from the scene features by using the preset proprietary sequencing network is the same as the method for extracting the general sequencing feature, and is not described in detail herein.
S140: and fusing the general sorting features and the scene sorting features to obtain current scene sorting features corresponding to each preset scene, and converging the preset sorting model based on the general sorting features and the current scene sorting features to obtain the sorting model.
The method for fusing the general ordering feature and the scene ordering feature may be various, and specifically may be as follows:
for example, the processor 620 may determine a scene ranking weight corresponding to each preset scene based on the scene features corresponding to each preset scene, weight the general ranking features and the scene ranking features respectively based on the scene ranking weight to obtain weighted general ranking features and weighted scene ranking features, and fuse the weighted general ranking features and the weighted scene ranking features to obtain the current scene ranking features corresponding to each preset scene.
The scene ranking weight may be understood as weight information between the scene ranking features and the general ranking features in the preset scene. Based on the scene features corresponding to each preset scene, there may be a variety of ways to determine the scene ranking weight corresponding to each preset scene. For example, the processor 620 may extract the scene ranking weight from the scene features through a fully connected layer and a softmax activation function: the scene features may be input to the fully connected layer, and the output of the fully connected layer is activated through the softmax activation function so as to output an m-dimensional weight vector, where m is the total number of the general ranking sub-networks and the proprietary ranking sub-networks, and the m-dimensional weight vector is used as the scene ranking weight.
After the scene ranking weight is determined, the general ranking feature and the scene ranking feature may be weighted respectively based on the scene ranking weight, and the weighting manner may be multiple, for example, the processor 620 may extract weights corresponding to features output by m ranking networks (the general ranking sub-network and the proprietary ranking sub-network) from the m-dimensional weight vector, and weight the general ranking sub-feature in the general ranking feature and the scene ranking sub-feature in the scene ranking feature respectively based on the weights, so as to obtain the weighted general ranking feature and the weighted scene ranking feature.
After the general sorting features and the scene sorting features are weighted respectively, the weighted general sorting features and the weighted scene sorting features may be fused in a variety of ways. For example, the processor 620 may directly sum the weighted general sorting features and the weighted scene sorting features to obtain the current scene sorting features corresponding to each preset scene, or may splice the weighted general sorting features and the weighted scene sorting features to obtain the current scene sorting features corresponding to each preset scene.
When the general ordering feature and the scene ordering feature are fused, a gating network may be used for the fusion, and the network structure of the gating network may be as shown in fig. 4. The gating network performs weighted fusion on the feature vectors (vector_1, ..., vector_m) output by multiple (denoted as m) ranking sub-networks (general ranking sub-networks and/or proprietary ranking sub-networks). The process of generating the fusion weight may include: an input vector (denoted as the selector) corresponding to the ranking network is input to a fully connected layer, and an m-dimensional weight vector is then output through a softmax activation function, that is, the fusion weight vector corresponding to the input vector; this fusion weight vector may be the scene ranking weight. The feature vector (vector_n) output by a ranking sub-network is multiplied by the weight at the corresponding position (n) to obtain the weighted output of that ranking sub-network, and the weighted outputs of the m ranking sub-networks are then summed element-wise by corresponding position to obtain the output of the gating network for this fusion. It should be noted that when the general ordering feature and the scene ordering feature are fused for the first time, the input vector (selector) may be the scene feature, and in the nth (n > 1) fusion, the input vector (selector) may be the previous output of the gating network.
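A minimal sketch of such a gating fusion is given below: the selector is mapped by a fully connected layer and softmax to an m-dimensional weight vector, which weights and sums the m sub-network outputs. The dimensions and expert count are assumptions for illustration only.

import torch
import torch.nn as nn

class GatingNetwork(nn.Module):
    # Maps the selector to an m-dimensional softmax weight vector, then
    # computes the weighted sum of the m sub-network output vectors.
    def __init__(self, selector_dim, num_experts):
        super().__init__()
        self.fc = nn.Linear(selector_dim, num_experts)

    def forward(self, selector, expert_outputs):            # expert_outputs: list of m [batch, d]
        w = torch.softmax(self.fc(selector), dim=-1)        # fusion weight vector, [batch, m]
        stacked = torch.stack(expert_outputs, dim=1)        # [batch, m, d]
        return (w.unsqueeze(-1) * stacked).sum(dim=1)       # element-wise weighted sum -> [batch, d]

gate = GatingNetwork(selector_dim=24, num_experts=8)
fused = gate(torch.randn(4, 24), [torch.randn(4, 16) for _ in range(8)])   # -> [4, 16]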
After the general sorting features and the scene sorting features are fused, the preset sorting model can be converged based on the current scene sorting features and the general sorting features obtained through fusion, and the convergence mode may be various. For example, the processor 620 may fuse the general sorting features and the user features to obtain current general sorting features, take the current general sorting features as the user features and the current scene sorting features as the scene features, and return to the step of extracting the general sorting features from the user features by using the preset general sorting network and extracting the scene sorting features from the scene features by using the preset proprietary sorting network, until the number of fusions reaches a preset number, thereby obtaining the target general sorting features and the target scene sorting features corresponding to each preset scene, and then converge the preset sorting model based on the target general sorting features and the target scene sorting features to obtain the sorting model.
For example, the processor 620 may determine a general ranking weight of a general ranking sub-feature of each dimension in the general ranking feature based on the user feature, weight the general ranking sub-feature based on the general ranking weight to obtain a weighted general ranking feature, and fuse the weighted general ranking sub-features to obtain the current general ranking feature.
The generic ranking weight may be understood as a ranking weight used for feature fusion in the generic ranking network. The essence of fusing the general ordering features and the user features is that the general ordering weight is determined based on the user features, and the general ordering sub-features in the general ordering features are then fused based on the general ordering weight, so that the current general ordering features are obtained. When the general ordering sub-features are fused, a gating network can be adopted for the fusion. Different from the fusion of the scene sorting features and the general sorting features, in the first fusion process the input vector (selector) of the gating network is the user feature, and the features to be weighted are the general sorting sub-features. In the subsequent fusions of the general sorting sub-features, the feature output by the previous gating network is used as the input vector (selector) of the gating network.
After the current scene sequencing feature and the current general sequencing feature are obtained, the current general sequencing feature can be used as a user feature, the current scene sequencing feature is used as a scene feature, the steps of extracting the general sequencing feature from the user feature by adopting a preset general sequencing network and extracting the scene sequencing feature from the scene feature by adopting a preset special sequencing network are returned to be executed until the fusion times reach the preset times, and the target general sequencing feature output by the gating network and the target scene sequencing feature corresponding to each preset scene are obtained.
The preset number may be the preset number of fusions of the gating network, and it is related to the number of network layers of the general sequencing network and the number of network layers of the proprietary sequencing network. The features output by each layer of network need to be fused by a gating network, and the fusion here can include two types: one is feature fusion in the general sequencing network layer, and the other is feature fusion in the proprietary sequencing network layer corresponding to each preset scene; that is, after each network layer's output, one gating network needs to be connected. The features output by the last general sequencing network layer are fused through the gating network to obtain the target general sequencing features, and the features output by the last proprietary sequencing network layer are fused through the gating network to obtain the target scene sequencing features corresponding to each preset scene.
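As an illustration only, the layer-by-layer fusion described above could be organized along the lines of the following Python sketch, which composes per-layer sub-networks and gating networks such as those sketched earlier; the function name, the argument layout, and the choice of the previous gate output as both the next layer's input and its selector are assumptions about one possible arrangement, not the exact patented flow.

def fuse_layer_by_layer(user_feat, scene_feat, generic_layers, scene_layers, gates_g, gates_s):
    g_in, s_in = user_feat, scene_feat
    for gen_layer, sc_layer, gate_g, gate_s in zip(generic_layers, scene_layers, gates_g, gates_s):
        gen_subs = gen_layer(g_in)                 # generic ranking sub-features of this layer
        sc_subs = sc_layer(s_in)                   # scene ranking sub-features of this layer
        g_out = gate_g(g_in, gen_subs)             # gate over the generic sub-features
        s_out = gate_s(s_in, gen_subs + sc_subs)   # gate over generic + scene sub-features
        g_in, s_in = g_out, s_out                  # outputs feed the next layer as input/selector
    return g_in, s_in                              # target generic / target scene ranking features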
The general sequencing network and the special sequencing network are fused layer by layer through the gate control network, so that the universality of all scenes can be increased while the uniqueness of each preset scene is depicted, the fitting effect can be improved by means of the scene universality in the long-tail small scenes, the risk of overfitting can be reduced by means of the scene universality in the large scenes, and the coverage rate of each preset scene is improved.
After the target general ordering features and the target scene ordering features are obtained, the preset ordering model may be converged based on the target general ordering features and the target scene ordering features, and the convergence manner may be various. For example, the processor 620 may determine general scene loss information corresponding to the user data samples based on the target general ordering features, determine proprietary scene loss information corresponding to each preset scene based on the target scene ordering features, fuse the general scene loss information and the proprietary scene loss information to obtain target loss information of the preset ordering model, and converge the preset ordering model based on the target loss information to obtain the ordering model, which may specifically be as follows:
(1) And determining the general scene loss information corresponding to the user data sample based on the target general sequencing characteristics.
Wherein, the general scene loss information can be understood as the loss information of the user data sample in the general scene. Based on the target general ordering feature, there may be multiple ways of determining the general scene loss information corresponding to the user data sample, which may specifically be as follows:
for example, the processor 620 may adjust the target general ordering feature by a preset activation function to obtain an adjusted general ordering feature, predict identification information of the user at the terminal based on the adjusted general ordering feature to obtain first predicted identification information, and determine general scene loss information corresponding to the user data sample based on the first predicted identification information and the identification tag in the user data sample.
For example, the processor 620 adjusts the target general ranking feature by using one or more fully connected network layers with the preset activation function, so as to obtain the adjusted general ranking feature. The number of fully connected network layers may be set according to the practical application, and may be, for example, 2 or any other number.
The preset activation function may be of various types, and for example, may include relu (an activation function) or other activation functions that can adjust the target general ordering feature.
After the target general ranking feature is adjusted, the identification information of the user at the terminal can be predicted based on the adjusted general ranking feature, and first predicted identification information is obtained. The identification information may be understood as a ranking score or the like representing the recognition probability of face brushing or face recognition performed by the user at the terminal. The first predicted identification information can be understood as the identification information predicted in the general scene. The manner of predicting the identification information of the user at the terminal may be various. For example, the processor 620 may predict the identification information corresponding to the adjusted general ranking feature through a nonlinear function (sigmoid) to obtain the first predicted identification information, or may output the first predicted identification information based on the adjusted general ranking feature through another nonlinear function.
After the first predicted identification information is obtained, there may be various ways to determine the general scene loss information corresponding to the user data samples based on the first predicted identification information and the identification tag in the user data sample. For example, the processor 620 may determine, based on the first predicted identification information, the recognition probability of face brushing or face recognition performed by the user at the terminal in the general scene, and determine the general scene loss information (Loss_s) corresponding to the user data samples through cross entropy loss based on the recognition probability and the identification tag in the user data sample.
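A minimal sketch of this loss computation, assuming a two-layer relu adjustment, a sigmoid output, and binary cross entropy (the layer sizes and stand-in data below are made up for illustration):

import torch
import torch.nn as nn

adjust = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))  # relu-adjusted ranking feature
criterion = nn.BCEWithLogitsLoss()              # sigmoid prediction + cross entropy in one step

target_generic_features = torch.randn(4, 16)    # one row per user data sample (stand-in values)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])     # identification tags from the samples
loss_s = criterion(adjust(target_generic_features).squeeze(-1), labels)  # general scene loss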
(2) And determining the special scene loss information corresponding to each preset scene based on the sequencing characteristics of the target scenes.
The proprietary scene loss information can be understood as the loss information of the user data sample in a preset scene. The method for determining the proprietary scene loss information corresponding to each preset scene may be various, and specifically may be as follows:
for example, the processor 620 may adjust the target scene ranking features through the preset activation function to obtain adjusted scene ranking features, fuse the adjusted scene ranking features and the adjusted general ranking features to obtain fused scene ranking features, predict the identification information of the user at the terminal in each preset scene based on the fused scene ranking features to obtain second predicted identification information, and determine the proprietary scene loss information corresponding to each preset scene based on the second predicted identification information and the identification tag of the user data sample. Alternatively, the processor 620 may adjust the target scene ranking features through the preset activation function to obtain the adjusted scene ranking features, predict the identification information of the user at the terminal in each preset scene directly based on the adjusted scene ranking features to obtain the second predicted identification information, and determine the proprietary scene loss information corresponding to each preset scene based on the second predicted identification information and the identification tag of the user data sample.
The manner of determining the proprietary scene loss information corresponding to each preset scene is similar to the manner of determining the general scene loss information, which is described in detail above and is not repeated here.
In this embodiment, the first predicted identification information and the second predicted identification information may be output through task towers. Taking the number of preset scenes as n as an example, the number of task towers can be (n + 1): each preset scene corresponds to one task tower (a specific scene task tower), and the general scene corresponds to one task tower (a general scene task tower). The structure of a task tower can comprise a preset number of fully connected network layers with an activation function (relu), followed by a nonlinear function (sigmoid).
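For instance, with three preset scenes the (n + 1) tower layout could be sketched as follows; the layer widths and depth are illustrative assumptions.

import torch.nn as nn

def make_task_tower(in_dim=16, hidden_dim=16):
    # A small relu fully connected network followed by a sigmoid output.
    return nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                         nn.Linear(hidden_dim, 1), nn.Sigmoid())

n_scenes = 3
towers = nn.ModuleList(make_task_tower() for _ in range(n_scenes + 1))  # n scene towers + 1 generic tower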
(3) And fusing the general scene loss information and the special scene loss information to obtain target loss information of the preset sequencing model.
The target loss information may be understood as loss information for updating a network parameter of the preset ranking model.
The mode of fusing the general scene loss information and the special scene loss information may be various, and specifically may be as follows:
for example, the processor 620 adds the general scene loss information and the specific scene loss information corresponding to each preset scene, so as to obtain the target loss information, as shown in formula (1):
Loss = Loss_t1 + Loss_t2 + ... + Loss_tn + Loss_s    (1)
wherein Loss is the target loss information, Loss_t1 is the proprietary scene loss information corresponding to the first preset scene, Loss_t2 is the proprietary scene loss information corresponding to the second preset scene, Loss_tn is the proprietary scene loss information corresponding to the nth preset scene, n is the number of preset scenes, and Loss_s is the general scene loss information.
In some embodiments, the processor 620 may further obtain the loss weight, respectively weight the general scene loss information and the proprietary scene loss information based on the loss weight, and fuse the weighted general scene loss information and the weighted proprietary scene loss information to obtain the target loss information.
(4) And converging the preset sequencing model based on the target loss information to obtain a sequencing model.
For example, the processor 620 may update the network parameters of the preset ranking model based on the target loss information by using a gradient descent algorithm, so as to obtain the ranking model, or may update the network parameters of the preset ranking model based on the target loss information by using another network parameter update algorithm, so as to obtain the ranking model.
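By way of illustration only, one convergence step driven by the target loss might look like the following sketch; the single-layer stand-in model, the SGD optimizer, and the learning rate are assumptions, not the actual preset ranking model or update algorithm.

import torch
import torch.nn as nn

model = nn.Linear(16, 1)                                    # stand-in for the preset ranking model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)    # gradient descent optimizer (assumed)

samples = torch.randn(4, 16)                                # stand-in user data samples
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])         # identification tags
target_loss = nn.functional.binary_cross_entropy_with_logits(model(samples), labels)

optimizer.zero_grad()
target_loss.backward()        # back-propagate the target loss information
optimizer.step()              # one gradient-descent update of the network parameters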
Taking as an example a preset sequencing model that comprises preset proprietary sequencing networks corresponding to two preset scenes and a universal sequencing network, with initial user features comprising user attribute features, application features, and position features, the architecture for training the preset sequencing model can be as shown in fig. 5. It can be seen that the preset sequencing model is a multi-task learning model, and in the training process it can be divided into two parts: the first is the universal sequencing network link, and the second is the proprietary sequencing network link, which may specifically be as follows:
(a) Universal sequencing network link
For example, the processor 620 may input the intermediate-layer embedding vector feature (denoted as E-shared) generated from the user attribute feature, the application feature, and the location feature, as the user feature, to the first universal sorting network layer of the universal sorting network. The plurality of parallel general sorting sub-networks in the first general sorting network layer output general sorting sub-features, which are fused through a gating network (G) to output the current general sorting features. The current general sorting features are then input into the second general sorting network layer, and the features output by the second general sorting network layer are fused again through the gating network (G), until the outputs of the last general sorting network layer are fused through the gating network to obtain the target general sorting features output by the preset general sorting network. The target general sorting features are then input into the general scene task tower, which outputs the first predicted identification information.
(b) Proprietary sequencing network links
For example, the processor 620 may input, as the scene feature, the intermediate-layer embedding vector feature (denoted as E-task) generated from the initial scene feature corresponding to a preset scene to the first proprietary sequencing network layer of the proprietary sequencing network corresponding to that preset scene. The plurality of parallel proprietary sequencing sub-networks in the first proprietary sequencing network layer output scene sequencing sub-features. These scene sequencing sub-features are fused, through a gating network (G), with the general sequencing sub-features output by the plurality of parallel general sequencing sub-networks in the first general sequencing network layer, and the current scene sequencing features corresponding to the preset scene are output. The current scene sequencing features are then input into the second proprietary sequencing network layer, and the scene sequencing sub-features output by the second proprietary sequencing network layer are fused with the general sequencing sub-features output by the second general sequencing network layer through the gating network again, until the scene sequencing sub-features output by the last proprietary sequencing network layer and the general sequencing sub-features output by the last general sequencing network layer are fused through the gating network, thereby obtaining the target scene sequencing features corresponding to the preset scene. The target scene sequencing features are input into the specific scene task tower corresponding to the preset scene, which outputs the second predicted identification information.
After the preset ranking model is trained to obtain the ranking model, a user may be selected for the target terminal based on the ranking model, and fig. 6 shows a flowchart of a method P100 for selecting a user for the target terminal according to an embodiment of the present specification. As before, the computing device 600 may perform the method P100 of the present specification of selecting a user for a target terminal. In particular, the processor 620 may read a set of instructions stored in its local storage medium and then execute the method P100 of the present specification for selecting a user for a target terminal, as specified by the set of instructions. As shown in fig. 6, method P100 may further include:
S150: and acquiring a candidate user set.
Wherein the candidate user set comprises user information of a plurality of candidate users. The user information may include user attribute information, specific behavior information of the user, information of the user in a specific scene, and the like.
The method for acquiring the candidate user set may be various, and specifically may be as follows:
for example, the processor 620 may obtain a user selection request for the target terminal, obtain a target user set corresponding to the target terminal based on the user selection request, and perform cleansing on the target user set to obtain a candidate user set.
The target terminal may be a terminal that needs to be selected by a user, and the target terminal may be an IoT smart device, or any device that can be used for face recognition, and so on.
The user selection request may be understood as a request for selecting a user for the target terminal, and the user selection request may be triggered and sent by the target user 100, or may be sent by the target terminal. For example, the processor 620 may obtain a current preset user set based on the user selection request, and use the current preset user set as a target user set corresponding to the target terminal, or may also obtain current terminal information of the target terminal based on the user selection request, and recall at least one user related to the target terminal in the full user set based on the current terminal information, so as to obtain the target user set corresponding to the target terminal.
The manner of recalling at least one user related to the target terminal from the full user set is similar to the manner of selecting the user set corresponding to the terminal from the preset user set, which is described above in detail and is not repeated here.
After the target user set corresponding to the target terminal is obtained, the target user set can be cleaned, and a candidate user set is obtained. For example, the processor 620 may obtain a preset user blacklist, filter users in the target user set based on the preset user blacklist, and obtain user information of the filtered users, thereby obtaining a candidate user set, or may also input the target user set to a preset cleaning model, output remaining cleaned users, and obtain user information of the remaining users, thereby obtaining a candidate user set.
S160: and inputting the user information of the candidate users into the ranking model to obtain a ranking sequence of the candidate users.
The sequencing model comprises a general sequencing network and a proprietary sequencing network which are fused layer by layer, the proprietary sequencing network comprises a proprietary sequencing network corresponding to each preset scene, the general sequencing network comprises at least one general sequencing network layer, the proprietary sequencing network comprises at least one proprietary sequencing network layer, and the number of layers and the dimensionality of the general sequencing network are the same as those of the proprietary sequencing network. Each generic-sequencing network layer comprises a plurality of parallel generic-sequencing subnetworks, and each proprietary-sequencing network comprises a plurality of parallel proprietary-sequencing subnetworks.
The sequencing model comprises a general sequencing network and a proprietary sequencing network which are fused layer by layer, and the layer-by-layer fusion can be understood as each general sequencing network layer outputting its output data to the output end of the corresponding proprietary sequencing network layer for data fusion. The output data may include the general sequencing features corresponding to each general sequencing network layer, and the general sequencing features may include the general sequencing sub-features output by the plurality of parallel general sequencing sub-networks in that general sequencing network layer. The output end of the proprietary sequencing network layer may output scene sequencing features, which may comprise the scene sequencing sub-features output by the plurality of parallel proprietary sequencing sub-networks in that proprietary sequencing network layer. In addition, there are various ways to perform data fusion at the output end of the proprietary sequencing network layer. For example, the processor 620 may fuse the general sequencing features and the scene sequencing features corresponding to the current layer (target layer) through the gating network (G) to obtain the current scene sequencing features corresponding to each layer. The current scene sequencing features can then be used as the scene features of the current layer and input to the next proprietary sequencing network layer, so as to obtain the scene sequencing features output by the next proprietary sequencing network layer; and the general sequencing features are input, as the user features of the current layer, into the next general sequencing network layer to obtain the general sequencing features output by the next general sequencing network layer, until the general sequencing features output by the last general sequencing network layer and the scene sequencing features output by the last proprietary sequencing network layer are fused to obtain the target scene sequencing features output by the proprietary sequencing network.
The ranking sequence may be understood as a sequence obtained by ranking the candidate users. The ranking sequence may be ordered according to a particular ranking rule, e.g., it may be sorted in reverse or forward order based on a ranking score, etc.
The method for obtaining the ranking sequence of the multiple candidate users by inputting the user information of the multiple candidate users into the ranking model may be various, and specifically may be as follows:
for example, the processor 620 may input the user information of the plurality of candidate users into the ranking model based on the user selection request to obtain ranking information of the plurality of candidate users, and rank the plurality of candidate users based on the ranking information to obtain the ranking sequence of the plurality of candidate users.
The ranking information may be understood as the information required for ranking the candidate users, and it may be of various types, for example, a ranking score, a ranking probability, or a ranking rank. For example, when the user selection request does not include scene information, the processor 620 inputs the user information of the plurality of candidate users into the ranking model to obtain general ranking information and scene ranking information corresponding to each preset scene, and uses the general ranking information and the scene ranking information as the ranking information of the plurality of candidate users; when the user selection request includes scene information, a target scene is extracted from the scene information, and when the preset scenes include the target scene, the user information of the plurality of candidate users is input into the ranking model to obtain target scene ranking information corresponding to the target scene, which is taken as the ranking information of the candidate users.
The scene information can represent the scene specified when selecting users for the target terminal. When the user selection request does not include scene information, that is, no scene is designated when selecting users for the target terminal, the user information of the plurality of candidate users may be input into the ranking model; the general ranking network and the general scene task tower in the ranking model are used to output the general ranking information, the proprietary ranking network corresponding to each preset scene and its specific scene task tower are used to output the scene ranking information corresponding to each preset scene, and the general ranking information and the scene ranking information corresponding to each preset scene are used as the ranking information of the plurality of candidate users.
In one embodiment, when the user selection request includes context information, it indicates that a context is specified when the user is selected for the target terminal. At this time, the processor 620 may extract the target scene from the scene information, and the target scene may be the designated scene. When the preset scene comprises a target scene, namely the appointed scene is a preset scene, user information of a plurality of candidate users is only required to be input into the sequencing model, a universal sequencing sub-network in the sequencing model is adopted to extract a plurality of layers of universal sequencing features from the user information, a special sequencing network corresponding to the target scene in the sequencing model is adopted to extract scene sequencing features from the user information, the universal sequencing features and the scene sequencing features are fused layer by layer, finally, the target scene sequencing features corresponding to the target scene are output, then, the target scene sequencing features are input into a specific scene task tower corresponding to the target scene, the target scene sequencing information corresponding to the target scene is output, and the target scene sequencing information is used as the sequencing information of the candidate users.
It should be noted that, when a scene is specified and the specified scene is an existing preset scene, only the generic ranking network in the ranking model and the proprietary ranking network corresponding to that existing scene need to be used, and the target scene ranking information in the existing scene can be output.
In an embodiment, after the target scene is extracted from the scene information, but the target scene is not an existing scene, that is, when each preset scene does not include the target scene, the processor 620 may input the user information of the multiple candidate users to the ranking model to obtain the general ranking information, and use the general ranking information as the ranking information of the multiple candidate users.
When each preset scene does not include a target scene, the processor 620 inputs the candidate users into the ranking model, extracts user features from the user information by using the ranking model, inputs the user features into the general ranking network, outputs the target general ranking features through the general ranking network, inputs the target general ranking features into the general scene task tower, and outputs the general ranking information as the ranking information of the candidate users.
It should be noted that, in this scheme, for a newly added scene, the general ranking information output by the general ranking network and the general scene task tower in the ranking model can be directly used as the ranking information of the candidate users, so that the problem that a newly added scene has no historical data and cannot be ranked (the new-scene cold start problem) is solved, and the new scene can be brought online quickly.
After the ranking information of the multiple candidate users is obtained, the multiple candidate users can be ranked based on the ranking information to obtain the ranking sequence of the multiple candidate users. For example, taking the ranking information being the ranking score of each candidate user as an example, the processor 620 may rank the candidate users in reverse or forward order based on the ranking score to obtain the ranking sequence of the candidate users; or, taking the ranking information being the ranking probability as an example, the processor 620 may determine the ranking rank of each candidate user based on the ranking probability, and rank the candidate users based on the ranking rank to obtain the ranking sequence of the candidate users.
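As a trivial sketch of turning per-candidate ranking scores into a ranking sequence (the scores and candidate identifiers below are made up):

# Illustrative only: ranking scores and candidate ids are made-up examples.
ranking_info = {"user_a": 0.92, "user_b": 0.35, "user_c": 0.78}
ranking_sequence = sorted(ranking_info, key=ranking_info.get, reverse=True)  # descending by score
print(ranking_sequence)   # ['user_a', 'user_c', 'user_b']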
S170: and selecting a plurality of target users from the sequencing sequence and transmitting the target users to the target terminal.
The target users may include the users selected for the target terminal, and the facial features of the target users are used for face brushing or face recognition at the end side of the target terminal.
The manner of selecting a plurality of target users from the sorting sequence and transmitting the target users to the target terminal may be various, and specifically, the manner may be as follows:
for example, the processor 620 may select a plurality of target users from the sorted sequence, obtain target facial features of the plurality of target users, and send the plurality of target users and the target facial features to the target terminal, so that the target terminal performs facial recognition based on the target facial features.
For example, the processor 620 may determine the sorting manner of the sorting sequence: when the sorting manner is forward sorting, the Top N users in the sorting sequence may be screened out as the plurality of target users; when the sorting manner is reverse sorting, a plurality of users within a preset range at the tail of the sorting sequence may be screened out as the target users; or the candidate users in the sorting sequence may be grouped to obtain user groups of each type, a target user group is selected from the user groups, and the users in the target user group are taken as the target users.
For example, the processor 620 may classify the sorted sequence according to a preset sorting interval, for example, the top 10 users in the sorted sequence are divided into a user group, and the users from 11 th to 20 th are divided into another user group, until all the candidate users in the sorted sequence are classified, so as to obtain each type of user group. It should be noted that the number of users in each type of user group may be the same or different.
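A small sketch of this grouping by a preset ranking interval (the interval of 10 follows the example above; the candidate ids are made up):

def split_into_groups(ranking_sequence, interval=10):
    # Cut the ranked candidates into consecutive groups of `interval` users.
    return [ranking_sequence[i:i + interval] for i in range(0, len(ranking_sequence), interval)]

groups = split_into_groups([f"user_{i}" for i in range(25)])   # -> groups of 10, 10 and 5 users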
After the candidate users are classified, a target user group can be selected from the user groups, and there are various ways to select the target user group, for example, the processor 620 may select a user group of a target type from the user groups to obtain the target user group, or may select the target user group from the user groups based on ranking of users in the user group in a ranking sequence.
After the target user group is selected, users in the target user group may be used as target users, and the target users may be all users in the target user group or some users in the target user group.
After the plurality of target users are selected, the target facial features of the plurality of target users may be obtained, and there may be various ways to obtain the target facial features. For example, the processor 620 may obtain a preset facial feature set and select the facial features corresponding to the plurality of target users from the preset facial feature set to obtain the target facial features; or may obtain a facial image of each of the plurality of target users and extract facial features from the facial images, so as to obtain the target facial features of the plurality of target users; or may obtain the facial features of the plurality of target users, obtain attribute information of the local facial features of the target terminal, and select the target facial features from the facial features based on the attribute information, where the target facial features are those facial features that are not yet present among the local facial features of the target terminal.
After the target facial features are obtained, the multiple target users and the target facial features may be sent to the target terminal in multiple manners, for example, the processor 620 may directly send the target facial features of the multiple target users to the target terminal, or may splice user information of the multiple target users with the target facial features and send the spliced facial features to the target terminal.
In some embodiments, after receiving a plurality of target users and target facial features, the target terminal may perform facial recognition based on the target facial features, and the facial recognition may be performed in various manners, for example, the target terminal updates a local facial feature set based on the target facial features to obtain a target facial feature set. After receiving the face recognition request, the target face image can be obtained, face features are extracted from the target face image, the face features are matched with the target face features in the target face feature set, and a user corresponding to the target face features which are successfully matched is used as a recognition result of the target face image.
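As a rough sketch of such end-side matching (the cosine similarity measure, the 0.8 threshold, and the 128-dimensional features are assumptions for illustration, not the matching method prescribed by this specification):

import numpy as np

def match_face(query_feat, target_feature_set, threshold=0.8):
    # Compare the extracted face feature with every target face feature and
    # return the best match above the threshold, or None if nothing matches.
    best_user, best_sim = None, threshold
    for user_id, feat in target_feature_set.items():
        sim = float(np.dot(query_feat, feat) /
                    (np.linalg.norm(query_feat) * np.linalg.norm(feat)))
        if sim > best_sim:
            best_user, best_sim = user_id, sim
    return best_user

target_feature_set = {"user_a": np.random.rand(128), "user_b": np.random.rand(128)}
result = match_face(np.random.rand(128), target_feature_set)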
To sum up, the method P100 and the system 001 for selecting a user for a target terminal, provided by the present specification, acquire a candidate user set, where the candidate user set includes user information of a plurality of candidate users, input the user information of the plurality of candidate users into a ranking model, obtain a ranking sequence of the plurality of candidate users, where the ranking model includes a general ranking network and a proprietary ranking network that are merged layer by layer, and select a plurality of target users from the ranking sequence and transmit the target users to the target terminal; in addition, the general sequencing network and the special sequencing network in the sequencing model are fused layer by layer, so that the uniqueness of each scene is depicted, the universality of all scenes is increased, the fitting effect can be improved by means of the scene universality in a long-tail small scene, the risk of overfitting is reduced by means of the scene universality in a large scene, and the coverage rate of each scene can be improved, so that the accuracy of user selection can be improved when a user is selected for a target terminal.
Another aspect of the present description provides a non-transitory storage medium having stored thereon at least one set of executable instructions for performing user selection for a target terminal. When executed by a processor, the executable instructions direct the processor to perform the steps of the method P100 of selecting a user for a target terminal described herein. In some possible implementations, various aspects of the description may also be implemented in the form of a program product including program code. When the program product is run on the computing device 600, the program code is adapted to cause the computing device 600 to perform the steps of the method P100 of selecting a user for a target terminal as described herein. A program product for implementing the above-described methods may employ a portable compact disc read only memory (CD-ROM) including program code and may be run on the computing device 600. However, the program product of this description is not limited in this respect, as a readable storage medium can be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out operations for this specification may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on computing device 600, partly on computing device 600, as a stand-alone software package, partly on computing device 600 and partly on a remote computing device, or entirely on the remote computing device.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
In conclusion, after reading this detailed disclosure, those skilled in the art will appreciate that the foregoing detailed disclosure may be presented by way of example only, and may not be limiting. Those skilled in the art will appreciate that the present specification contemplates various reasonable variations, enhancements and modifications to the embodiments, even though not explicitly described herein. Such alterations, improvements, and modifications are intended to be suggested by this specification, and are within the spirit and scope of the exemplary embodiments of this specification.
Furthermore, certain terminology has been used in this specification to describe embodiments of the specification. For example, "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment" or "one embodiment" or "an alternative embodiment" in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the specification.
It should be appreciated that in the foregoing description of embodiments of the specification, various features are grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the specification, for the purpose of aiding in the understanding of one feature. This is not to be taken as an admission that any of the above-described features are required in combination, and it is fully possible for a person skilled in the art, on reading this description, to identify some of the devices as single embodiments. That is, embodiments in this specification may also be understood as an integration of a plurality of sub-embodiments. And each sub-embodiment described herein is equally applicable in less than all features of a single foregoing disclosed embodiment.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, documents, and the like, cited herein is hereby incorporated by reference, except for any prosecution history associated therewith, any prosecution history that is inconsistent with or conflicts with this document, or any prosecution history that may have a limiting effect on the broadest scope of the claims now or later associated with this document. For example, if there is any inconsistency or conflict between the description, definition, and/or use of a term associated with any of the incorporated material and the description, definition, and/or use of that term associated with this document, the term in this document shall prevail.
Finally, it should be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the present specification. Other modified embodiments are also within the scope of this specification. Accordingly, the embodiments disclosed herein are to be considered in all respects as illustrative and not restrictive. Those skilled in the art may implement the applications in this specification in alternative configurations according to the embodiments in this specification. Accordingly, embodiments of the present description are not limited to the embodiments described with particularity in the application.

Claims (25)

1. A method of selecting a user for a target terminal, comprising:
acquiring a candidate user set, wherein the candidate user set comprises user information of a plurality of candidate users;
inputting the user information of the candidate users into a sequencing model to obtain a sequencing sequence of the candidate users, wherein the sequencing model comprises a general sequencing network and a special sequencing network which are fused layer by layer; and
and selecting a plurality of target users from the sequencing sequence and transmitting the target users to the target terminal.
2. The method of claim 1, wherein the ranking model comprises the generic ranking network and the proprietary ranking network corresponding to each preset scenario, the generic ranking network comprises at least one generic ranking network layer, the proprietary ranking network comprises at least one proprietary ranking network layer, and the generic ranking network and the proprietary ranking network have the same number of layers and dimensions.
3. The method of selecting users for target terminals of claim 2 wherein the ranking model comprises a layer-by-layer fused generic ranking network and proprietary ranking network comprising:
and each layer of the universal sequencing network layer outputs the output data thereof to the output end of the corresponding special sequencing network layer for data fusion.
4. The method of claim 1, wherein the ranking model is trained by:
acquiring training data of a preset sequencing model, wherein the training data comprises a user data sample of each user in a user set corresponding to a terminal, and the preset sequencing model comprises a preset general sequencing network and a preset special sequencing network corresponding to each preset scene;
extracting user characteristics and scene characteristics corresponding to each preset scene from the user data samples;
extracting general sorting features from the user features by adopting the preset general sorting network, and extracting scene sorting features from the scene features by adopting the preset special sorting network; and
and fusing the general sorting features and the scene sorting features to obtain current scene sorting features corresponding to each preset scene, and converging the preset sorting model based on the general sorting features and the current scene sorting features to obtain the sorting model.
5. The method of claim 4, wherein the obtaining training data of a preset ranking model comprises:
selecting a user set corresponding to the terminal from a preset user set;
acquiring a historical user data set of each user in the user set; and
and generating a user data sample of each user based on the historical user data set, and taking the user data sample of each user as the training data.
6. The method of claim 5, wherein the selecting the user set corresponding to the terminal from a preset user set comprises:
selecting at least one target user subset from the preset user set, wherein each target user subset comprises at least one user corresponding to a preset position;
matching users in the target user subset with the terminal based on the terminal information of the terminal; and
and selecting the user matched with the terminal from the target user subset to obtain a user set corresponding to the terminal.
7. The method of claim 5, wherein the generating the user data sample for each user based on the historical user data set comprises:
screening out historical user data before a preset historical moment from the historical user data set to obtain first historical user data;
screening out historical user data in a target time range from the historical user data set to obtain second historical user data, wherein the target time range comprises a preset time range after the preset historical time; and
and adding the identification tag corresponding to each preset scene in the first historical user data based on the second historical user data to obtain a user data sample of each user.
8. The method of claim 7, wherein the adding an identification tag corresponding to each preset scenario to the first historical user data based on the second historical user data to obtain a user data sample of each user comprises:
identifying historical identification information of the user at the terminal in the second historical user data;
determining an identification tag corresponding to each preset scene based on the historical identification information;
adding the identification tag to the first historical user data to obtain a candidate user data sample set; and
selecting a user data sample for the user from the set of candidate user data samples.
9. The method of selecting a user for a target terminal of claim 8, wherein the set of candidate user data samples comprises user data positive samples and user data negative samples, and
the selecting a user data sample for the user from the set of candidate user data samples comprises:
counting the user data positive samples in the candidate user data sample set to obtain the number of positive samples;
determining the number of target negative samples in the user data samples based on the number of positive samples and a preset sample proportion;
randomly sampling target user data negative samples from the user data negative samples based on the target negative sample number; and
taking the user data positive samples and the target user data negative samples as the user data sample of the user.
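A minimal sketch of the ratio-based negative sampling in claim 9, assuming in-memory sample lists and a placeholder ratio of one positive to four negatives:

```python
import random

def sample_negatives(positives: list, negatives: list, neg_per_pos: int = 4,
                     seed: int = 42) -> list:
    """Keeps all positives and randomly draws negatives so that the ratio matches
    the preset sample proportion."""
    target_neg = min(len(negatives), len(positives) * neg_per_pos)
    rng = random.Random(seed)
    return positives + rng.sample(negatives, target_neg)
```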
10. The method of claim 4, wherein the extracting user features and scene features corresponding to each preset scene from the user data samples comprises:
extracting initial user features and initial scene features corresponding to each preset scene from the user data sample, wherein the initial user features comprise discrete user features and dense user features, and the initial scene features comprise discrete scene features and dense scene features;
extracting user text features from the discrete user features, and extracting scene text features from the discrete scene features; and
fusing the user text features and the dense user features to obtain the user features, and fusing the scene text features and the dense scene features to obtain the scene features corresponding to each preset scene.
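A sketch of the feature construction in claim 10, assuming hypothetical vocabulary sizes and embedding dimensions; embedding lookups stand in for the text-feature extraction, and concatenation stands in for the unspecified fusion:

```python
import torch
import torch.nn as nn

class FeatureEncoder(nn.Module):
    """Embeds discrete (categorical) features and concatenates them with dense features."""
    def __init__(self, vocab_sizes, embed_dim=8, dense_dim=16):
        super().__init__()
        self.embeddings = nn.ModuleList(
            [nn.Embedding(v, embed_dim) for v in vocab_sizes])
        self.out_dim = embed_dim * len(vocab_sizes) + dense_dim

    def forward(self, discrete_ids, dense):
        # discrete_ids: (batch, num_discrete_fields), dense: (batch, dense_dim)
        embedded = [emb(discrete_ids[:, i]) for i, emb in enumerate(self.embeddings)]
        return torch.cat(embedded + [dense], dim=-1)
```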
11. The method of claim 4, wherein the preset generic ranking network comprises at least one preset generic ranking network layer, each preset generic ranking network layer comprising a plurality of parallel generic ranking sub-networks, and
the extracting generic ranking features from the user features by using the preset generic ranking network comprises:
determining a target generic ranking network layer in the at least one preset generic ranking network layer;
performing multi-dimensional feature extraction on the user features by using the generic ranking sub-networks in the target generic ranking network layer to obtain a generic ranking sub-feature corresponding to each generic ranking sub-network; and
fusing the generic ranking sub-features to obtain the generic ranking features.
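A sketch of one generic ranking network layer built from parallel sub-networks, as in claim 11, with hypothetical widths; mean pooling stands in for the unspecified fusion of the sub-features:

```python
import torch
import torch.nn as nn

class ParallelSubNetworkLayer(nn.Module):
    """Several parallel sub-networks extract features along different dimensions;
    their outputs are fused into a single generic ranking feature."""
    def __init__(self, in_dim=64, hidden=64, num_subnets=4):
        super().__init__()
        self.subnets = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
             for _ in range(num_subnets)])

    def forward(self, user_feat):
        sub_features = torch.stack([net(user_feat) for net in self.subnets], dim=1)
        return sub_features.mean(dim=1)  # fusion: simple average for illustration
```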
12. The method of claim 4, wherein the fusing the generic ranking features and the scene ranking features to obtain the current scene ranking features corresponding to each preset scene comprises:
determining a scene ranking weight corresponding to each preset scene based on the scene features corresponding to that preset scene;
weighting the generic ranking features and the scene ranking features respectively based on the scene ranking weight to obtain weighted generic ranking features and weighted scene ranking features; and
fusing the weighted generic ranking features and the weighted scene ranking features to obtain the current scene ranking features corresponding to each preset scene.
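A sketch of the scene-conditioned weighting in claim 12, assuming hypothetical dimensions; a two-way softmax gate derived from the scene features splits the contribution between the generic branch and the scene branch:

```python
import torch
import torch.nn as nn

class SceneGate(nn.Module):
    """Derives two weights from the scene features and uses them to mix the
    generic ranking features with the scene ranking features."""
    def __init__(self, scene_dim=64):
        super().__init__()
        self.gate = nn.Linear(scene_dim, 2)

    def forward(self, generic_feat, scene_feat):
        w = torch.softmax(self.gate(scene_feat), dim=-1)  # (batch, 2)
        return w[:, :1] * generic_feat + w[:, 1:] * scene_feat
```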
13. The method of claim 4, wherein the converging the preset ranking model based on the generic ranking features and the current scene ranking features to obtain the ranking model comprises:
fusing the generic ranking features and the user features to obtain current generic ranking features;
taking the current generic ranking features as the user features and the current scene ranking features as the scene features;
returning to the step of extracting generic ranking features from the user features by using the preset generic ranking network and extracting scene ranking features from the scene features by using the preset proprietary ranking network, until the number of fusions reaches a preset number, so as to obtain target generic ranking features and target scene ranking features corresponding to each preset scene; and
converging the preset ranking model based on the target generic ranking features and the target scene ranking features to obtain the ranking model.
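A self-contained sketch of the iterative extraction-and-fusion loop in claim 13, with hypothetical layer sizes, a placeholder number of fusion rounds, and softmax gating standing in for the unspecified fusion operators:

```python
import torch
import torch.nn as nn

class IterativeFusionModel(nn.Module):
    """Repeats feature extraction and fusion a preset number of times, then keeps
    the final generic and per-scene features for computing the training losses."""
    def __init__(self, dim=64, num_rounds=3, num_scenes=2):
        super().__init__()
        self.generic = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_rounds)])
        self.scene = nn.ModuleList(
            [nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_rounds)])
             for _ in range(num_scenes)])
        self.gates = nn.ModuleList([nn.Linear(dim, 2) for _ in range(num_scenes)])

    def forward(self, user_feat, scene_feats):
        # scene_feats: list with one tensor per preset scene
        for r, g_layer in enumerate(self.generic):
            generic_out = torch.relu(g_layer(user_feat))
            new_scene_feats = []
            for i, feats in enumerate(scene_feats):
                scene_out = torch.relu(self.scene[i][r](feats))
                w = torch.softmax(self.gates[i](feats), dim=-1)
                new_scene_feats.append(w[:, :1] * generic_out + w[:, 1:] * scene_out)
            # fuse the generic ranking features back into the user features
            user_feat = user_feat + generic_out
            scene_feats = new_scene_feats
        return generic_out, scene_feats
```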
14. The method of claim 13, wherein the fusing the generic ranking features and the user features to obtain current generic ranking features comprises:
determining a generic ranking weight for the generic ranking sub-feature of each dimension in the generic ranking features based on the user features;
weighting the generic ranking sub-features based on the generic ranking weights to obtain weighted generic ranking sub-features; and
fusing the weighted generic ranking sub-features to obtain the current generic ranking features.
15. The method of claim 13, wherein the converging the preset ranking model based on the target generic ranking features and the target scene ranking features to obtain the ranking model comprises:
determining generic scene loss information corresponding to the user data samples based on the target generic ranking features;
determining proprietary scene loss information corresponding to each preset scene based on the target scene ranking features;
fusing the generic scene loss information and the proprietary scene loss information to obtain target loss information of the preset ranking model; and
converging the preset ranking model based on the target loss information to obtain the ranking model.
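A sketch of fusing the generic loss with the per-scene losses into one training objective, as in claim 15, assuming binary identification labels, sigmoid/logit outputs, and a placeholder weight on the per-scene terms:

```python
import torch.nn.functional as F

def total_loss(generic_logits, scene_logits, generic_labels, scene_labels,
               scene_weight=1.0):
    """generic_logits: (batch,); scene_logits / scene_labels: dicts keyed by scene name.
    Returns the fused target loss used to converge the preset ranking model."""
    loss = F.binary_cross_entropy_with_logits(generic_logits, generic_labels.float())
    for scene, logits in scene_logits.items():
        loss = loss + scene_weight * F.binary_cross_entropy_with_logits(
            logits, scene_labels[scene].float())
    return loss
```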
16. The method of claim 15, wherein the determining the generic scene loss information corresponding to the user data samples based on the target generic ranking features comprises:
adjusting the target generic ranking features through a preset activation function to obtain adjusted generic ranking features;
predicting identification information of the user at the terminal based on the adjusted generic ranking features to obtain first predicted identification information; and
determining the generic scene loss information corresponding to the user data samples based on the first predicted identification information and the identification tags in the user data samples.
17. The method of claim 16, wherein the determining the proprietary scene loss information corresponding to each preset scene based on the target scene ranking features comprises:
adjusting the target scene ranking features through the preset activation function to obtain adjusted scene ranking features;
fusing the adjusted scene ranking features and the adjusted generic ranking features to obtain fused scene ranking features;
predicting identification information of the user at the terminal in each preset scene based on the fused scene ranking features to obtain second predicted identification information; and
determining the proprietary scene loss information corresponding to each preset scene based on the second predicted identification information and the identification tags of the user data samples.
18. The method of claim 1, wherein the obtaining a set of candidate users comprises:
acquiring a user selection request aiming at the target terminal;
acquiring a target user set corresponding to the target terminal based on the user selection request; and
cleaning the target user set to obtain the candidate user set.
19. The method of claim 18, wherein the inputting the user information of the plurality of candidate users into a ranking model to obtain a ranking sequence of the plurality of candidate users comprises:
inputting the user information of the plurality of candidate users into the ranking model based on the user selection request to obtain ranking information of the plurality of candidate users; and
ranking the plurality of candidate users based on the ranking information to obtain the ranking sequence of the plurality of candidate users.
20. The method of claim 19, wherein the inputting the user information of the plurality of candidate users into the ranking model based on the user selection request to obtain the ranking information of the plurality of candidate users comprises:
when the user selection request does not comprise scene information, inputting the user information of the plurality of candidate users into the ranking model to obtain generic ranking information and scene ranking information corresponding to each preset scene; and
taking the generic ranking information and the scene ranking information as the ranking information of the plurality of candidate users.
21. The method of claim 19, wherein the inputting the user information of the plurality of candidate users into the ranking model based on the user selection request to obtain the ranking information of the plurality of candidate users comprises:
when the user selection request comprises scene information, extracting a target scene from the scene information;
when the preset scenes comprise the target scene, inputting the user information of the plurality of candidate users into the ranking model to obtain target scene ranking information corresponding to the target scene; and
taking the target scene ranking information as the ranking information of the plurality of candidate users.
22. The method of claim 21, wherein when the user selection request comprises scene information, after the extracting a target scene from the scene information, the method further comprises:
when the preset scenes do not comprise the target scene, inputting the user information of the plurality of candidate users into the ranking model to obtain generic ranking information; and
taking the generic ranking information as the ranking information of the plurality of candidate users.
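The request routing in claims 20 to 22 can be sketched as below; model.generic_scores and model.scene_scores are hypothetical helpers wrapping the generic branch and a per-scene branch of the ranking model:

```python
def ranking_info(request: dict, candidates: list, preset_scenes: set, model) -> dict:
    """Routes the user selection request to the appropriate branch and returns
    ranking information keyed by 'generic' or by scene name."""
    scene = request.get("scene")
    if scene is None:
        # no scene information: return generic plus per-scene ranking information
        info = {"generic": model.generic_scores(candidates)}
        info.update({s: model.scene_scores(candidates, s) for s in preset_scenes})
    elif scene in preset_scenes:
        # known scene: use the proprietary branch for that scene only
        info = {scene: model.scene_scores(candidates, scene)}
    else:
        # unknown scene: fall back to the generic ranking information
        info = {"generic": model.generic_scores(candidates)}
    return info
```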
23. The method of claim 1, wherein the selecting a plurality of target users from the ranking sequence and transmitting them to the target terminal comprises:
selecting a plurality of target users from the ranking sequence;
acquiring target facial features of the plurality of target users; and
sending the plurality of target users and the target facial features to the target terminal, so that the target terminal performs facial recognition based on the target facial features.
24. The method of selecting users for target terminals of claim 23, wherein the selecting a plurality of target users from the ranking sequence comprises:
classifying the candidate users in the ranking sequence to obtain user groups of each type; and
selecting a target user group from the user groups, and taking the users in the target user group as the plurality of target users.
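A sketch of the group-based selection in claim 24, assuming a hypothetical group_of function that assigns each ranked candidate to a type (for example, by score band or user segment):

```python
from collections import defaultdict

def select_target_users(ranked_users: list, group_of, target_group: str) -> list:
    """Partitions the ranked candidates into groups and returns the chosen group,
    preserving the ranking order within it."""
    groups = defaultdict(list)
    for user in ranked_users:
        groups[group_of(user)].append(user)
    return groups.get(target_group, [])
```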
25. A system for selecting a user for a target terminal, comprising:
at least one storage medium storing at least one instruction set for performing a user selection for a target terminal; and
at least one processor communicatively coupled to the at least one storage medium,
wherein when the system for selecting a user for a target terminal is run, the at least one processor reads the at least one instruction set and performs the method for selecting a user for a target terminal of any one of claims 1-24 in accordance with an indication of the at least one instruction set.
CN202211096056.2A (priority date 2022-09-08, filing date 2022-09-08) Method and system for selecting user for target terminal, status: Pending, publication: CN115687751A

Priority Applications (1)

Application Number: CN202211096056.2A
Priority Date: 2022-09-08
Filing Date: 2022-09-08
Title: Method and system for selecting user for target terminal

Applications Claiming Priority (1)

Application Number: CN202211096056.2A
Priority Date: 2022-09-08
Filing Date: 2022-09-08
Title: Method and system for selecting user for target terminal

Publications (1)

Publication Number: CN115687751A
Publication Date: 2023-02-03

Family

ID: 85063425

Family Applications (1)

Application Number: CN202211096056.2A
Title: Method and system for selecting user for target terminal
Status: Pending
Priority Date: 2022-09-08
Filing Date: 2022-09-08

Country Status (1)

Country: CN
Link: CN115687751A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination