CN114996348A - User portrait generation method and device, electronic equipment and storage medium


Info

Publication number
CN114996348A
Authority
CN
China
Prior art keywords
user
target
portrait
time
target user
Legal status
Pending
Application number
CN202210731109.7A
Other languages
Chinese (zh)
Inventor
梁伟
卢毅
李馨迟
Current Assignee
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Application filed by China Telecom Corp Ltd

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 — Information retrieval of structured data, e.g. relational data
    • G06F 16/26 — Visual data mining; Browsing structured data
    • G06F 16/22 — Indexing; Data structures therefor; Storage structures
    • G06F 16/2228 — Indexing structures
    • G06F 16/2255 — Hash tables
    • G06F 16/27 — Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor


Abstract

An embodiment of the present disclosure provides a user portrait generation method and apparatus, an electronic device, and a storage medium. A target portrait dimension corresponding to a target user is determined based on user information of the target user; an initial user portrait of the target user in the target portrait dimension is generated based on user data of the target user in that dimension; a time weight of the initial user portrait is calculated based on the duration of the target time period corresponding to the user data of the target user and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period; and the calculated time weight, together with the initial user portrait, is determined to be the final user portrait of the target user. On this basis, the time weight of the initial user portrait can represent the importance of the target user's characteristics in the target portrait dimension; that is, a user portrait with temporal characteristics can be generated, improving the effectiveness of the user portrait.

Description

User portrait generation method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a user portrait generation method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of Internet technology, more and more applications (e.g., social applications, distributed finance applications, metaverse applications, game applications, etc.) use blockchain technology. When a user uses the functions provided by such an application, the application stores the user's data on the blockchain; for example, a social application stores the user's personal information and social information on the blockchain. As a result, a large amount of user data accumulates on the blockchain.
In the related art, when providing users with services such as communication, social networking, online shopping, news, and entertainment, some enterprises collect large amounts of user data to portray users accurately as user portraits, and then provide services based on those portraits.
However, a user portrait corresponding to a period of time is generated from user data over that period and can only describe the user's characteristics during that period. Because user characteristics change over time, such a portrait cannot represent how the importance of the user's characteristics varies with time; it lacks temporal characteristics, so the effectiveness of user portraits generated in the related art is low.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a user portrait generation method and apparatus, an electronic device, and a storage medium capable of generating a user portrait with temporal characteristics, so as to improve the effectiveness of the user portrait. The specific technical solution is as follows:
In a first aspect, to achieve the above object, an embodiment of the present disclosure provides a user portrait generation method, including:
determining a portrait dimension corresponding to a target user as a target portrait dimension based on user information of the target user;
generating a user portrait of the target user in the target portrait dimension as an initial user portrait based on user data of the target user in the target portrait dimension;
calculating a time weight of the initial user portrait based on the duration of a target time period corresponding to the user data of the target user and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period;
and determining the calculated time weight and the initial user portrait as the final user portrait of the target user, namely the target user portrait.
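To make the core step concrete: the claim names only two durations as inputs and does not publish a formula, so the following Python sketch assumes, purely for illustration, that the time weight is the ratio of the user's active span (first to last behavior) to the length of the target time period. The function name and the clamping to [0, 1] are likewise assumptions.

```python
from datetime import datetime

def time_weight(period_start: datetime, period_end: datetime,
                first_behavior: datetime, last_behavior: datetime) -> float:
    """Hypothetical time weight: the fraction of the target time period
    spanned by the user's behavior. The patent names the two durations
    as inputs but does not publish the formula; this ratio is an
    illustrative assumption."""
    period = (period_end - period_start).total_seconds()
    active = (last_behavior - first_behavior).total_seconds()
    if period <= 0:
        raise ValueError("target time period must have positive duration")
    return max(0.0, min(1.0, active / period))

# Behavior spanning 18 of 30 days yields a weight of 0.6.
w = time_weight(datetime(2022, 6, 1), datetime(2022, 7, 1),
                datetime(2022, 6, 3), datetime(2022, 6, 21))
print(round(w, 2))  # 0.6
```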
In some embodiments, calculating the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period includes:
determining whether a user portrait of the target user in the target portrait dimension was generated before the initial user portrait;
if no user portrait of the target user in the target portrait dimension was generated before the initial user portrait, calculating the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period;
if a user portrait of the target user in the target portrait dimension was generated before the initial user portrait, obtaining the time weight of each previously generated user portrait of the target user in the target portrait dimension; determining, according to the order of the generation times of those user portraits, the time weight located at the inflection point of the variation trend of the time weights, as a target time weight; and calculating the time weight of the initial user portrait based on the target time weight, the number of time weights of the user portraits, the duration of the target time period corresponding to the user data of the target user, and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period.
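One plausible reading of the inflection point is the position where the sign of successive differences between the time weights changes. A minimal Python sketch under that assumption, with the weights ordered by portrait generation time; the fallback for a strictly monotonic sequence is also an assumption:

```python
def target_time_weight(weights: list[float]) -> float:
    """Return the time weight at the inflection point of the trend,
    i.e. the first position where the sign of successive differences
    changes. Falling back to the last weight for a monotonic sequence
    is an assumption; the patent does not specify that case."""
    for i in range(1, len(weights) - 1):
        prev_diff = weights[i] - weights[i - 1]
        next_diff = weights[i + 1] - weights[i]
        if prev_diff * next_diff < 0:  # trend flips at index i
            return weights[i]
    return weights[-1]

# Weights ordered by portrait generation time: rise, then fall.
print(target_time_weight([0.2, 0.4, 0.7, 0.5, 0.3]))  # 0.7
```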
In some embodiments, calculating the time weight of the initial user portrait based on the target time weight, the number of time weights of the user portraits, the duration of the target time period corresponding to the user data of the target user, and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period includes:
calculating a reference time weight based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period, and the time weight of the first user portrait in the order of the generation times of the user portraits;
if the reference time weight is not less than a third numerical value and the time weights from the first user portrait's time weight to the target time weight, in the order of the generation times of the user portraits, show an ascending trend, calculating the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment the user behavior first occurs and the moment it last occurs within the target time period, the number of user portraits, and the number of time weights from the first user portrait's time weight to the target time weight in that order;
if the reference time weight is not less than the third numerical value and the time weights from the first user portrait's time weight to the target time weight, in that order, show a descending trend, calculating the time weight of the initial user portrait based on the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period, the duration of the target time period corresponding to the user data of the target user, and the difference with the largest absolute value among the differences between adjacent time weights of the user portraits;
if the reference time weight is less than the third numerical value and the time weights from the first user portrait's time weight to the target time weight, in that order, show a descending trend, calculating the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment the user behavior first occurs and the moment it last occurs within the target time period, the number of user portraits, and the number of time weights from the first user portrait's time weight to the target time weight in that order;
if the reference time weight is less than the third numerical value and the time weights from the first user portrait's time weight to the target time weight, in that order, show an ascending trend, calculating the time weight of the initial user portrait based on the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period, the duration of the target time period corresponding to the user data of the target user, and the difference with the largest absolute value among the differences between adjacent time weights of the user portraits.
In some embodiments, after determining the calculated time weight and the initial user portrait to be the final user portrait of the target user, namely the target user portrait, the method further includes:
after receiving a usage request for the target user portrait, extracting the user identification of the requesting user carried in the usage request and extracting the usage summary carried in the usage request, the usage summary representing the usage scenario in which the requesting user intends to use the target user portrait;
determining whether the requesting user has the usage right of the target user portrait based on the user identification, the usage summary, and the user information of the target user;
if the requesting user does not have the usage right of the target user portrait, sending an alarm message to the electronic device used by the target user to notify the target user of the current request to use the target user portrait;
if the requesting user has the usage right of the target user portrait, sending the target user portrait to the electronic device used by the requesting user.
In some embodiments, the user information of the target user includes an Internet Protocol (IP) address of the electronic device used by the target user;
and determining whether the requesting user has the usage right of the target user portrait based on the user identification, the usage summary, and the user information of the target user includes:
sending an inquiry message for the target user portrait, carrying the user identification and the usage summary, to the electronic device used by the target user according to the IP address of that device;
upon receiving a confirm-authorization message sent by the electronic device used by the target user, determining that the requesting user has the usage right of the target user portrait;
and upon receiving a cancel-authorization message sent by the electronic device used by the target user, determining that the requesting user does not have the usage right of the target user portrait.
In some embodiments, the user information of the target user includes an authorization list and an authorization summary of the target user for the target user portrait, where the authorization list includes the user identifications of users authorized by the target user to use the target user portrait, and the authorization summary represents the usage scenarios in which the target user authorizes use of the target user portrait;
and determining whether the requesting user has the usage right of the target user portrait based on the user identification, the usage summary, and the user information of the target user includes:
judging whether the authorization list contains the user identification;
if the authorization list does not contain the user identification, determining that the requesting user does not have the usage right of the target user portrait;
if the authorization list contains the user identification, calculating a difference value between the usage summary and the authorization summary; if the difference value is greater than a preset threshold, determining that the requesting user does not have the usage right of the target user portrait; and if the difference value is not greater than the preset threshold, determining that the requesting user has the usage right of the target user portrait.
In some embodiments, calculating the difference value between the usage summary and the authorization summary includes:
extracting consecutive character strings of a first preset length from the usage summary to obtain the character strings contained in the usage summary;
for each extracted character string, if the authorization summary contains a character string identical to that character string, determining the matching degree corresponding to that character string to be a first numerical value;
if the authorization summary does not contain a character string identical to that character string, extracting consecutive character strings of a second preset length from that character string to obtain the substrings contained in it; for each such substring, if the authorization summary does not contain a character string identical to the substring, determining the matching degree corresponding to the substring to be a second numerical value; if the authorization summary contains a character string identical to the substring, calculating the matching degree corresponding to the substring based on the number of characters in the substring, the number of characters in the authorization summary, and the number of occurrences in the authorization summary of the character string identical to the substring; then calculating the sum of the matching degrees of the substrings and the ratio of that sum to the number of substrings, obtaining the matching degree corresponding to the character string;
and calculating the difference value between the usage summary and the authorization summary based on the matching degree corresponding to each character string contained in the usage summary and the number of character strings contained in the usage summary.
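A minimal sketch of one way to realize this matching scheme, treating the usage and authorization summaries as plain strings. The string lengths, the first and second numerical values, the contained-substring formula, and the final difference formula (one minus the average matching degree) are all assumptions; the embodiment only names the quantities involved:

```python
def match_degree(s: str, auth: str, sub_len: int,
                 full_match: float = 1.0, no_match: float = 0.0) -> float:
    """Matching degree of one extracted string against the authorization
    summary (a sketch; the 'first value', 'second value' and the
    contained-substring formula are assumptions)."""
    if s in auth:
        return full_match                      # first numerical value
    subs = [s[i:i + sub_len] for i in range(len(s) - sub_len + 1)]
    degrees = []
    for sub in subs:
        if sub not in auth:
            degrees.append(no_match)           # second numerical value
        else:
            # Assumed formula: weigh by substring length, summary length,
            # and occurrence count of the substring in the summary.
            degrees.append(len(sub) * auth.count(sub) / len(auth))
    return sum(degrees) / len(subs)

def difference(usage: str, auth: str, str_len: int = 4, sub_len: int = 2) -> float:
    """Difference between usage summary and authorization summary:
    one minus the average matching degree over all extracted strings
    (the aggregation rule is an assumption)."""
    strings = [usage[i:i + str_len] for i in range(len(usage) - str_len + 1)]
    avg = sum(match_degree(s, auth, sub_len) for s in strings) / len(strings)
    return 1.0 - avg

print(difference("ad-targeting analytics", "analytics for ads"))
```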
In some embodiments, after determining the calculated time weight and the initial user portrait to be the final user portrait of the target user, namely the target user portrait, the method further includes:
generating a Decentralized Identifier (DID) of the target user as a target DID according to a preset DID generation rule and the user information of the target user;
generating a user identification of the target user as a target user identification based on the generation time of a designated user portrait of the target user, the number of the target user, and the target DID;
and recording the target user identification and the target user portrait in correspondence.
In some embodiments, generating the user identification of the target user as the target user identification based on the generation time of the designated user portrait of the target user, the number of the target user, and the target DID includes:
hashing the generation time of the designated user portrait of the target user to obtain a hash value of that generation time, and hashing the number of the target user to obtain a hash value of that number;
splicing the hash value of the generation time of the designated user portrait and the hash value of the number of the target user to obtain a hash value string;
and generating the user identification of the target user as the target user identification based on the hash value string and the target DID.
In some embodiments, generating the user identification of the target user as the target user identification based on the hash value string and the target DID includes:
if the number of characters in the hash value string is not greater than the number of characters in the target DID: for each character in the hash value string, determining its position according to the high-order-to-low-order arrangement of the characters in the hash value string; determining the character at the same position in the target DID according to the high-order-to-low-order arrangement of the characters in the target DID, obtaining the character in the target DID corresponding to that character; and calculating the remainder of that character with respect to its corresponding character in the target DID, obtaining the user identification of the target user as the target user identification;
if the number of characters in the hash value string is greater than the number of characters in the target DID: according to the high-order-to-low-order arrangement of the characters in the hash value string, determining the characters that have a character at the corresponding position in the target DID as first characters, and determining the remaining characters in the hash value string as second characters; counting the number of occurrences of each character in the target DID; for each first character, determining its position in the hash value string according to the high-order-to-low-order arrangement, determining the character at the same position in the target DID according to the high-order-to-low-order arrangement of the characters in the target DID to obtain its corresponding character, and calculating the remainder of the first character with respect to its corresponding character as a first remainder; for each second character, determining its position in the hash value string according to the low-order-to-high-order arrangement, determining the character at the same position in the result of sorting the characters of the target DID by occurrence count from high to low to obtain its corresponding character, and calculating the remainder of the second character with respect to its corresponding character as a second remainder; and generating the user identification of the target user containing the first remainders and the second remainders as the target user identification.
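For the simpler branch (hash value string no longer than the target DID), the scheme can be sketched in Python as follows. The hash function, the truncation to eight hex characters per input, and the reading of "remainder of the character" as a modulo over character codes are all assumptions:

```python
import hashlib

def user_identifier(gen_time: str, user_number: str, did: str) -> str:
    """Sketch of the simpler branch, where the hash value string is no
    longer than the target DID: hash and splice the two inputs, then
    take each character's code modulo the code of the DID character at
    the same high-order-to-low-order position (an interpretation)."""
    h1 = hashlib.sha256(gen_time.encode()).hexdigest()[:8]
    h2 = hashlib.sha256(user_number.encode()).hexdigest()[:8]
    hash_string = h1 + h2  # spliced hash value string (16 characters)
    assert len(hash_string) <= len(did), "longer strings need the second branch"
    # Pair characters position by position, high order to low order.
    return "".join(str(ord(c) % ord(d)) for c, d in zip(hash_string, did))

print(user_identifier("2022-06-24T10:00:00", "user-0042",
                      "did:example:9f8e7d6c5b4a39281706f5e4"))
```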
In some embodiments, recording the target user identification and the target user portrait in correspondence includes:
judging whether the correspondence between user identifications and user nodes stored in the portrait node contains the target user identification, where the portrait node is the head node of a preset user blockchain, the user nodes are the non-head nodes of the user blockchain, and each user node stores the user information of the corresponding user;
if the correspondence contains the target user identification, determining the user node corresponding to the target user identification to obtain the user node of the target user, creating a new linked list node after the last linked list node of the portrait blockchain whose head node is the user node of the target user, and storing the target user portrait to the newly created linked list node;
if the correspondence does not contain the target user identification, creating a new user node after the last user node of the user blockchain as the user node of the target user, recording the target user identification and the user node of the target user in the correspondence, and creating a portrait blockchain with the user node of the target user as its head node, the newly created portrait blockchain containing one newly created linked list node besides the head node; and storing the target user portrait to the newly created linked list node.
In some embodiments, storing the target user portrait to the newly created linked list node includes:
generating a two-dimensional array containing the target user portrait and the generation time of the target user portrait, and storing the two-dimensional array to the newly created linked list node.
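The storage layout described above can be sketched in plain Python, ignoring the cryptographic machinery of an actual blockchain: a portrait (head) node indexes user nodes, and each user node heads a chain of linked list nodes, each holding a two-element record of a portrait and its generation time. All class and field names are assumptions:

```python
class PortraitListNode:
    """Linked list node of a user's portrait chain; holds a two-element
    record of [portrait, generation_time]."""
    def __init__(self, portrait, generated_at):
        self.record = [portrait, generated_at]
        self.next = None

class UserNode:
    """Non-head node of the user blockchain: stores the user's
    information and heads that user's portrait chain."""
    def __init__(self, user_info):
        self.user_info = user_info
        self.portrait_head = None  # first PortraitListNode, if any
        self.next = None           # next UserNode in the user chain

class PortraitNode:
    """Head node of the user blockchain: keeps the correspondence
    between user identifications and user nodes."""
    def __init__(self):
        self.index = {}  # user identification -> UserNode

    def store(self, user_id, portrait, generated_at, user_info=None):
        node = self.index.get(user_id)
        if node is None:  # unknown user: create and record a user node
            node = UserNode(user_info)
            self.index[user_id] = node
        new_node = PortraitListNode(portrait, generated_at)
        if node.portrait_head is None:
            node.portrait_head = new_node
        else:  # append after the last linked list node of the chain
            current = node.portrait_head
            while current.next:
                current = current.next
            current.next = new_node

chain = PortraitNode()
chain.store("uid-001", {"dimension": "social", "weight": 0.6}, "2022-06-24")
```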
In some embodiments, before sending the target user portrait to the electronic device used by the requesting user, the method further includes:
determining the user node corresponding to the target user identification in the correspondence between user identifications and user nodes recorded by the portrait node, to obtain the user node of the target user;
determining the linked list node corresponding to the target user portrait in the correspondence between user portraits and linked list nodes recorded by the user node of the target user;
and obtaining the target user portrait from the determined linked list node.
In a second aspect, to achieve the above object, an embodiment of the present disclosure provides a user portrait generation apparatus, including:
a portrait dimension determining module, configured to determine a portrait dimension corresponding to a target user as a target portrait dimension based on user information of the target user;
an initial user portrait generation module, configured to generate a user portrait of the target user in the target portrait dimension as an initial user portrait based on user data of the target user in the target portrait dimension;
a time weight calculation module, configured to calculate a time weight of the initial user portrait based on the duration of a target time period corresponding to the user data of the target user and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period;
and a target user portrait generation module, configured to determine the calculated time weight and the initial user portrait as the final user portrait of the target user, namely the target user portrait.
In some embodiments, the time weight calculation module is specifically configured to determine whether a user portrait of the target user in the target portrait dimension was generated before the initial user portrait;
if no user portrait of the target user in the target portrait dimension was generated before the initial user portrait, calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period;
if a user portrait of the target user in the target portrait dimension was generated before the initial user portrait, obtain the time weight of each previously generated user portrait of the target user in the target portrait dimension; determine, according to the order of the generation times of those user portraits, the time weight located at the inflection point of the variation trend of the time weights, as a target time weight; and calculate the time weight of the initial user portrait based on the target time weight, the number of time weights of the user portraits, the duration of the target time period corresponding to the user data of the target user, and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period.
In some embodiments, the time weight calculation module is specifically configured to calculate a reference time weight based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period, and the time weight of the first user portrait in the order of the generation times of the user portraits;
if the reference time weight is not less than a third numerical value and the time weights from the first user portrait's time weight to the target time weight, in the order of the generation times of the user portraits, show an ascending trend, calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment the user behavior first occurs and the moment it last occurs within the target time period, the number of user portraits, and the number of time weights from the first user portrait's time weight to the target time weight in that order;
if the reference time weight is not less than the third numerical value and the time weights from the first user portrait's time weight to the target time weight, in that order, show a descending trend, calculate the time weight of the initial user portrait based on the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period, the duration of the target time period corresponding to the user data of the target user, and the difference with the largest absolute value among the differences between adjacent time weights of the user portraits;
if the reference time weight is less than the third numerical value and the time weights from the first user portrait's time weight to the target time weight, in that order, show a descending trend, calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment the user behavior first occurs and the moment it last occurs within the target time period, the number of user portraits, and the number of time weights from the first user portrait's time weight to the target time weight in that order;
if the reference time weight is less than the third numerical value and the time weights from the first user portrait's time weight to the target time weight, in that order, show an ascending trend, calculate the time weight of the initial user portrait based on the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period, the duration of the target time period corresponding to the user data of the target user, and the difference with the largest absolute value among the differences between adjacent time weights of the user portraits.
In some embodiments, the apparatus further comprises:
an extracting module, configured to, after the target user portrait generation module determines the calculated time weight and the initial user portrait to be the final user portrait of the target user, extract, upon receiving a usage request for the target user portrait, the user identification of the requesting user carried in the usage request and the usage summary carried in the usage request, the usage summary representing the usage scenario in which the requesting user intends to use the target user portrait;
a usage right judging module, configured to judge whether the requesting user has the usage right of the target user portrait based on the user identification, the usage summary, and the user information of the target user;
an alarm message sending module, configured to, if the requesting user does not have the usage right of the target user portrait, send an alarm message to the electronic device used by the target user to notify the target user of the current request to use the target user portrait;
and a user portrait sending module, configured to, if the requesting user has the usage right of the target user portrait, send the target user portrait to the electronic device used by the requesting user.
In some embodiments, the user information of the target user includes an Internet Protocol (IP) address of the electronic device used by the target user;
and the usage right judging module is specifically configured to send an inquiry message for the target user portrait, carrying the user identification and the usage summary, to the electronic device used by the target user according to the IP address of that device;
upon receiving a confirm-authorization message sent by the electronic device used by the target user, determine that the requesting user has the usage right of the target user portrait;
and upon receiving a cancel-authorization message sent by the electronic device used by the target user, determine that the requesting user does not have the usage right of the target user portrait.
In some embodiments, the user information of the target user includes an authorization list and an authorization summary of the target user for the target user portrait, where the authorization list includes the user identifications of users authorized by the target user to use the target user portrait, and the authorization summary represents the usage scenarios in which the target user authorizes use of the target user portrait;
and the usage right judging module is specifically configured to judge whether the authorization list contains the user identification;
if the authorization list does not contain the user identification, determine that the requesting user does not have the usage right of the target user portrait;
if the authorization list contains the user identification, calculate a difference value between the usage summary and the authorization summary; if the difference value is greater than a preset threshold, determine that the requesting user does not have the usage right of the target user portrait; and if the difference value is not greater than the preset threshold, determine that the requesting user has the usage right of the target user portrait.
In some embodiments, the usage right judging module is specifically configured to extract consecutive character strings of a first preset length from the usage summary to obtain the character strings contained in the usage summary;
for each extracted character string, if the authorization summary contains a character string identical to that character string, determine the matching degree corresponding to that character string to be a first numerical value;
if the authorization summary does not contain a character string identical to that character string, extract consecutive character strings of a second preset length from that character string to obtain the substrings contained in it; for each such substring, if the authorization summary does not contain a character string identical to the substring, determine the matching degree corresponding to the substring to be a second numerical value; if the authorization summary contains a character string identical to the substring, calculate the matching degree corresponding to the substring based on the number of characters in the substring, the number of characters in the authorization summary, and the number of occurrences in the authorization summary of the character string identical to the substring; then calculate the sum of the matching degrees of the substrings and the ratio of that sum to the number of substrings, obtaining the matching degree corresponding to the character string;
and calculate the difference value between the usage summary and the authorization summary based on the matching degree corresponding to each character string contained in the usage summary and the number of character strings contained in the usage summary.
In some embodiments, the apparatus further comprises:
a DID generation module, configured to, after the target user portrait generation module determines the calculated time weight and the initial user portrait to be the final user portrait of the target user, generate a DID of the target user as a target DID according to a preset DID generation rule and the user information of the target user;
a user identification generation module, configured to generate a user identification of the target user as a target user identification based on the generation time of a designated user portrait of the target user, the number of the target user, and the target DID;
and a recording module, configured to record the target user identification and the target user portrait in correspondence.
In some embodiments, the user identification generation module is specifically configured to hash the generation time of the designated user portrait of the target user to obtain a hash value of that generation time, and hash the number of the target user to obtain a hash value of that number;
splice the hash value of the generation time of the designated user portrait and the hash value of the number of the target user to obtain a hash value string;
and generate the user identification of the target user as the target user identification based on the hash value string and the target DID.
In some embodiments, the user identification generation module is specifically configured to, if the number of characters in the hash value string is not greater than the number of characters in the target DID: for each character in the hash value string, determine its position according to the high-order-to-low-order arrangement of the characters in the hash value string; determine the character at the same position in the target DID according to the high-order-to-low-order arrangement of the characters in the target DID, obtaining the character in the target DID corresponding to that character; and calculate the remainder of that character with respect to its corresponding character in the target DID, obtaining the user identification of the target user as the target user identification;
if the number of characters in the hash value string is greater than the number of characters in the target DID: according to the high-order-to-low-order arrangement of the characters in the hash value string, determine the characters that have a character at the corresponding position in the target DID as first characters, and determine the remaining characters in the hash value string as second characters; count the number of occurrences of each character in the target DID; for each first character, determine its position in the hash value string according to the high-order-to-low-order arrangement, determine the character at the same position in the target DID according to the high-order-to-low-order arrangement of the characters in the target DID to obtain its corresponding character, and calculate the remainder of the first character with respect to its corresponding character as a first remainder; for each second character, determine its position in the hash value string according to the low-order-to-high-order arrangement, determine the character at the same position in the result of sorting the characters of the target DID by occurrence count from high to low to obtain its corresponding character, and calculate the remainder of the second character with respect to its corresponding character as a second remainder; and generate the user identification of the target user containing the first remainders and the second remainders as the target user identification.
In some embodiments, the recording module is specifically configured to judge whether the correspondence between user identifications and user nodes stored in the portrait node contains the target user identification, where the portrait node is the head node of a preset user blockchain, the user nodes are the non-head nodes of the user blockchain, and each user node stores the user information of the corresponding user;
if the correspondence contains the target user identification, determine the user node corresponding to the target user identification to obtain the user node of the target user, create a new linked list node after the last linked list node of the portrait blockchain whose head node is the user node of the target user, and store the target user portrait to the newly created linked list node;
if the correspondence does not contain the target user identification, create a new user node after the last user node of the user blockchain as the user node of the target user, record the target user identification and the user node of the target user in the correspondence, and create a portrait blockchain with the user node of the target user as its head node, the newly created portrait blockchain containing one newly created linked list node besides the head node; and store the target user portrait to the newly created linked list node.
In some embodiments, the recording module is specifically configured to generate a two-dimensional array containing the target user portrait and the generation time of the target user portrait, and store the two-dimensional array to the newly created linked list node.
In some embodiments, the apparatus further comprises:
a user node determining module, configured to, before the user portrait sending module sends the target user portrait to the electronic device used by the requesting user, determine the user node corresponding to the target user identification in the correspondence between user identifications and user nodes recorded by the portrait node, obtaining the user node of the target user;
a linked list node determining module, configured to determine the linked list node corresponding to the target user portrait in the correspondence between user portraits and linked list nodes recorded by the user node of the target user;
and a user portrait acquisition module, configured to obtain the target user portrait from the determined linked list node.
An embodiment of the present disclosure further provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the steps of any of the above user portrait generation methods when executing the program stored in the memory.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of any of the above user portrait generation methods.
An embodiment of the present disclosure further provides a computer program product containing instructions that, when run on a computer, cause the computer to perform any of the above user portrait generation methods.
The user portrait generation method provided by the embodiments of the present disclosure determines a portrait dimension corresponding to a target user as the target portrait dimension based on user information of the target user; generates a user portrait of the target user in the target portrait dimension as an initial user portrait based on user data of the target user in that dimension; calculates a time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period; and determines the calculated time weight and the initial user portrait as the final user portrait of the target user, namely the target user portrait.
Based on the above processing, the time weight of the initial user portrait can represent the importance of the user data of the target portrait dimension within the target time period, that is, the importance of the target user's characteristics in the target portrait dimension. The time weights of user portraits generated for the target user at different times can then represent how the importance of the target user's characteristics in the target portrait dimension changes over time; that is, a user portrait with temporal characteristics can be generated, improving the effectiveness of the user portrait.
Of course, implementing any product or method of the present disclosure does not necessarily require achieving all of the above advantages at the same time.
Drawings
To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of the present disclosure; those skilled in the art can obtain other drawings from them.
FIG. 1 is a flowchart of a user portrait generation method according to an embodiment of the present disclosure;
FIG. 2a is a schematic diagram of user portrait generation according to an embodiment of the present disclosure;
FIG. 2b is a schematic diagram of a user portrait according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another user portrait generation method according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of another user portrait generation method according to an embodiment of the present disclosure;
FIG. 5 is a flowchart of another user portrait generation method according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of another user portrait generation method according to an embodiment of the present disclosure;
FIG. 7 is a flowchart of another user portrait generation method according to an embodiment of the present disclosure;
FIG. 8 is a flowchart of another user portrait generation method according to an embodiment of the present disclosure;
FIG. 9 is a flowchart of another user portrait generation method according to an embodiment of the present disclosure;
FIG. 10 is a flowchart of another user portrait generation method according to an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a blockchain according to an embodiment of the present disclosure;
FIG. 12 is a flowchart of another user portrait generation method according to an embodiment of the present disclosure;
FIG. 13 is a flowchart of a user portrait management method according to an embodiment of the present disclosure;
FIG. 14 is a flowchart of another user portrait management method according to an embodiment of the present disclosure;
FIG. 15 is a flowchart of another user portrait management method according to an embodiment of the present disclosure;
FIG. 16 is a structural block diagram of a user portrait generation apparatus according to an embodiment of the present disclosure;
FIG. 17 is a structural block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. The described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure fall within the protection scope of the present disclosure.
In the related art, when services such as communication, social networking, online shopping, news, and entertainment are provided for users, some enterprises collect large amounts of user data to portray users accurately as user portraits and provide services based on those portraits. However, a user portrait corresponding to a period of time is generated from user data over that period and can only describe the user's characteristics during that period. Because user characteristics change over time, such a portrait cannot represent how the importance of the user's characteristics varies with time; it lacks temporal characteristics, so the effectiveness of user portraits generated in the related art is low.
To solve the above problem, refer to FIG. 1, which is a flowchart of a user portrait generation method provided by an embodiment of the present disclosure. The method is applied to an electronic device and may include the following steps:
S101: determine a portrait dimension corresponding to a target user as the target portrait dimension based on user information of the target user.
S102: generate a user portrait of the target user in the target portrait dimension as an initial user portrait based on user data of the target user in the target portrait dimension.
S103: calculate a time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user and the duration between the moment the target user's user behavior first occurs and the moment it last occurs within the target time period.
S104: determine the calculated time weight and the initial user portrait as the final user portrait of the target user, namely the target user portrait.
With the user portrait generation method provided by the embodiments of the present disclosure, the time weight of the initial user portrait can represent the importance of the user data of the target portrait dimension within the target time period, that is, the importance of the target user's characteristics in the target portrait dimension. The time weights of user portraits generated for the target user at different times can then represent how the importance of the target user's characteristics in the target portrait dimension changes over time; that is, a user portrait with temporal characteristics can be generated, improving the effectiveness of the user portrait.
In step S101, the target user is any user for whom a user portrait needs to be generated, and may be an individual user. The user information of the target user includes basic information such as the target user's name, gender, age, and occupation. The electronic device acquires this basic information and determines the target user's category from it. For example, if the basic information includes gender: male and age: 40, the target user category is determined to be middle-aged male; if it includes gender: female and age: 23, the target user category is determined to be young female.
Then, the electronic device obtains the portrait dimension corresponding to the target user category as the target portrait dimension. The portrait dimension corresponding to a user category is determined based on the user data of the users belonging to that category. A portrait dimension represents a category of application in which a user behaves; for example, portrait dimensions may include a financial dimension, a social dimension, a collection dimension, a metaverse dimension, a game dimension, and the like.
Illustratively, suppose the user category is young males and includes user 1, user 2, user 3, and user 4. The user data of user 1 includes socially relevant user data; the user data of user 2 includes socially relevant user data and metaverse-related user data; the user data of user 3 includes game-related user data and socially relevant user data; and the user data of user 4 includes game-related user data and socially relevant user data.
The electronic device clusters the user data of the users in the category and obtains the portrait dimensions corresponding to that data: the social dimension, the game dimension, and the metaverse dimension. Because each user has little metaverse-related user data, the electronic device determines that the portrait dimensions corresponding to the user category include the social dimension and the game dimension.
For step S102, since the various applications used by a user may store user data on a blockchain, the electronic device may obtain the user data of each user from mainstream public chains, which may include Ethereum, Solana, BSC (Binance Smart Chain), Polygon, and the like; the data relates to distributed finance applications, NFT (Non-Fungible Token) digital collection applications, metaverse applications, and so on. The electronic device may then store the acquired user data in a preset database as a data wide table, that is, a data table in which the indexes, dimensions, and attributes related to a business topic are associated together.
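A minimal sketch of such a data wide table using pandas, with one row per user and illustrative (assumed) column names mixing user attributes and per-dimension indexes:

```python
import pandas as pd

# Hypothetical wide table: one row per user, with indexes, dimensions,
# and attributes related to the business topics associated together.
wide = pd.DataFrame([
    {"user_id": "u1", "age": 40, "gender": "M",
     "defi_tx_count": 12, "nft_purchases": 3, "social_posts": 58},
    {"user_id": "u2", "age": 23, "gender": "F",
     "defi_tx_count": 0, "nft_purchases": 9, "social_posts": 210},
]).set_index("user_id")

# Select a target user's data in one portrait dimension (the column
# grouping into dimensions is an assumption for illustration).
print(wide.loc["u2", ["nft_purchases"]])
```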
The electronic device obtains the user data of the target user in the target portrait dimension from the preset database, and may process the obtained user data in one of the following ways to obtain a feature vector of the target user in the target portrait dimension (which may be referred to as a first feature vector).
Mode 1: the electronic device encodes the acquired user data according to a preset encoding mode to obtain the first feature vector. The preset encoding mode may be one-hot encoding or embedding (word embedding) encoding.
Mode 2: the electronic device calculates a data value of each acquired piece of user data in the target portrait dimension. The data value of a piece of user data in a portrait dimension may be its TF-IDF (Term Frequency–Inverse Document Frequency) value in that dimension; the electronic device then generates a feature vector containing the data value of each piece of user data of the target user, obtaining the first feature vector.
Illustratively, the user data of the target user is the set of financial products purchased by the target user within one month: product A, product B, and product C. The electronic device calculates the TF-IDF value of product A in the user data of the financial dimension to obtain data value a, the TF-IDF value of product B to obtain data value b, and the TF-IDF value of product C to obtain data value c. Further, the electronic device determines the first feature vector as [a, b, c].
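A minimal Python sketch of Mode 2; the exact TF and IDF variants are assumptions (the patent only names TF-IDF), and the corpus of other users' purchase lists is hypothetical:

```python
import math

def tf_idf_vector(user_items, corpus):
    """user_items: the target user's items (e.g. purchased products) in one
    portrait dimension; corpus: one item list per user, supplying the
    document-frequency statistics. Returns the first feature vector."""
    vector = []
    for item in user_items:
        tf = user_items.count(item) / len(user_items)   # term frequency
        df = sum(1 for doc in corpus if item in doc)    # document frequency
        idf = math.log(len(corpus) / (1 + df))          # smoothed IDF (assumed variant)
        vector.append(tf * idf)
    return vector

corpus = [["A", "B", "C"], ["A", "C"], ["B"], ["D"]]
print(tf_idf_vector(["A", "B", "C"], corpus))  # the [a, b, c] of the example
```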
Further, the electronic device generates a user portrait of the target user in the target portrait dimension based on the determined first feature vector and a preset user behavior analysis algorithm, obtaining the initial user portrait.
For example, referring to fig. 2a, fig. 2a is a schematic diagram of generating a user representation according to an embodiment of the present disclosure.
The preset user behavior analysis algorithm includes: supervised learning analysis algorithms, such as regression analysis algorithms and CNN (Convolutional Neural Network) deep learning algorithms; unsupervised learning analysis algorithms, such as cluster analysis algorithms; and adversarial learning analysis algorithms, such as GAN (Generative Adversarial Network) prediction algorithms.
When the preset user behavior analysis algorithm is a cluster analysis algorithm, the electronic device may acquire the feature vectors, in the target portrait dimension, corresponding to preset user portraits; the electronic device then calculates the similarity between the feature vector of each preset user portrait and the first feature vector, and determines the preset user portrait with the highest calculated similarity as the initial user portrait of the target user in the target portrait dimension.
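Concretely, this selection amounts to a nearest-neighbor search over the preset portraits. A minimal Python sketch, with cosine similarity as an assumed similarity measure and hypothetical preset portrait labels:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def initial_portrait(first_vector, preset_portraits):
    """preset_portraits: preset portrait label -> feature vector in the
    target portrait dimension. Returns the most similar preset portrait."""
    return max(preset_portraits,
               key=lambda label: cosine(first_vector, preset_portraits[label]))

presets = {"liquidity provider": [0.9, 0.1, 0.0],
           "common trader": [0.2, 0.7, 0.1]}
print(initial_portrait([0.8, 0.2, 0.0], presets))  # liquidity provider
```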
When the preset user behavior analysis algorithm is a supervised learning analysis algorithm, the electronic device may input the first feature vector into a pre-trained classification network model (e.g., a CNN model) to obtain, for each preset user portrait, the probability output by the model that the user portrait of the target user in the target portrait dimension is that preset user portrait; the electronic device may then determine the preset user portrait with the highest probability as the initial user portrait of the target user in the target portrait dimension. The classification network model is trained with sample feature vectors of sample users in the target portrait dimension and the sample portraits of those users in the target portrait dimension.
For example, referring to fig. 2b, fig. 2b is a schematic diagram of a user portrait provided by an embodiment of the present disclosure.
The user representation of the user Alice includes: user representations of financial dimensions, such as common traders, liquidity providers, market makers, and the like.
User portraits of the social dimension, e.g., DAO (Decentralized Autonomous Organization) participants, StepN (an application built on the Solana blockchain) participants, etc.
User portraits of the collection dimension, such as an Ant digital collectible owner, a digital collection "diamond hands" holder, etc.
User portraits of the metaverse dimension, such as metaverse elementary analysts, Roblox (an application that provides social and gaming functions) deep participants, Sandbox (a blockchain-based gaming platform) land builders, etc.
User portraits of the game dimension, such as DCL (a blockchain-based gaming application) players, beginning entrants to a certain GameFi (blockchain game finance) application, etc.
In step S103, the initial user portrait is generated based on the user data of the target user within a period of time; it can therefore only represent the user characteristics of the target user within that period, and cannot represent how the importance of the user characteristics of the target user in the target portrait dimension changes over time. That is, the initial user portrait has no time characteristic.
The target time period is the time period corresponding to the user data acquired when generating the initial user portrait. For example, the target portrait dimension is the financial dimension; when generating the initial user portrait, the electronic device acquires the financial products purchased by the target user from May 1 to May 31, so the target time period is May 1 to May 31 and its duration is 31 days.
The time when the target user first purchases a financial product within the target time period is May 10, and the time of the last purchase is May 15, so the duration between the time when the target user's user behavior first occurs and the time when it last occurs within the target time period is 5 days.
The electronic device calculates the time weight of the initial user portrait based on the duration of the target time period (which may be referred to as a first duration) corresponding to the user data of the target user and the duration (which may be referred to as a second duration) between the time when the user behavior of the target user occurs for the first time and the time when the user behavior occurs for the last time in the target time period.
In one implementation, the electronic device may directly calculate a ratio of the second duration to the first duration as a temporal weight of the initial user representation.
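A minimal Python sketch of this direct ratio (formula (1)); the inclusive day count, which makes May 1 to May 31 span 31 days as in the example above, and the year used are assumptions:

```python
from datetime import date

def time_weight(period_start, period_end, first_behavior, last_behavior):
    """Formula (1): Q = ΔT / T."""
    T = (period_end - period_start).days + 1          # inclusive: May 1-31 is 31 days
    delta_T = (last_behavior - first_behavior).days   # May 10 to May 15 is 5 days
    return delta_T / T

print(time_weight(date(2022, 5, 1), date(2022, 5, 31),
                  date(2022, 5, 10), date(2022, 5, 15)))  # 5/31 ≈ 0.16
```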
In another implementation, on the basis of fig. 1, referring to fig. 3, step S103 may include the following steps:
S1031: determine whether a user portrait of the target user in the target portrait dimension was generated before generating the initial portrait; if not, execute step S1032; if so, execute step S1033.
S1032: calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user and the duration between the time when the target user's user behavior first occurs and the time when it last occurs within the target time period.
S1033: acquire the time weights of the generated user portraits of the target user in the target portrait dimension.
S1034: determine, as the target time weight, the time weight at the inflection point position in the variation trend of the time weights of the user portraits, taken in the chronological order of their generation times.
S1035: calculate the time weight of the initial user portrait based on the target time weight, the number of time weights of the user portraits, the duration of the target time period corresponding to the user data of the target user, and the duration between the time when the target user's user behavior first occurs and the time when it last occurs within the target time period.
The initial user representation is generated based on user data of the target user within a target time period, and the electronic device determines whether the electronic device has generated a user representation of the target user in a target representation dimension based on user data within other time periods prior to generating the initial representation.
If a user representation of the target user in the target representation dimension is not generated before the initial representation is generated, the electronic device calculates a time weight of the initial user representation based on a duration of a target time period corresponding to user data of the target user and a duration between a time at which the user behavior of the target user first occurs and a time at which the user behavior of the target user last occurs within the target time period. If a user representation of a target user in the target representation dimension has been generated prior to generating the initial representation, the electronic device calculates a time weight for the initial user representation based on a trend of a change in a time weight for each user representation of the target user in the target representation dimension that has been generated, a duration of a target time period corresponding to user data for the target user, and a duration between a time at which user behavior first occurs and a time at which user behavior last occurs for the target user within the target time period.
If a user representation of a target user in the target representation dimension is not generated prior to generating the initial representation, the electronic device calculates a temporal weight of the initial user representation based on equation (1) below.
Q = ΔT / T    (1)
Q represents the temporal weight of the initial user representation; Δ T represents a time length between the time when the user behavior of the target user occurs for the first time and the time when the user behavior occurs for the last time in the target time period, and T represents a time length of the target time period corresponding to the user data of the target user.
If a user representation of a target user in a target representation dimension has been generated prior to generating an initial representation, the electronic device may obtain a temporal weight for each user representation of the target user in the target representation dimension that has been generated, e.g., the electronic device may obtain a temporal weight for a preset number of user representations that are closest to the current time. Then, the electronic device sorts the time weights of the user portraits according to the sequence of the generation time of the user portraits. Further, the electronic device determines, as the target time weight, a time weight at an inflection point position in a variation trend of the time weight of each user portrait based on the ranking result.
Illustratively, the electronic device sorts the time weights of the user portraits in the chronological order of their generation times, and the sorting result is: q1, q2, q3, q4. If q1 ≥ q2 and q2 ≥ q3 but q3 < q4, then q3 is the time weight at the inflection point position in the variation trend, i.e., the target time weight (which may be denoted qe).
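A minimal Python sketch of the inflection-point selection of step S1034; the fallback to the last weight when the sequence never changes direction is an assumption, since the text does not spell that case out:

```python
def target_time_weight(weights):
    """weights: time weights of the generated user portraits, oldest first.
    Returns the weight at the first inflection point of the trend."""
    if len(weights) < 3:
        return weights[-1]
    descending = weights[1] <= weights[0]
    for prev, cur in zip(weights[1:], weights[2:]):
        if (cur > prev) if descending else (cur < prev):
            return prev                        # the trend turned here
    return weights[-1]                         # no turn: assumed fallback

print(target_time_weight([0.8, 0.6, 0.4, 0.7]))  # 0.4, i.e. q3 in the example
```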
Furthermore, the electronic device calculates the time weight of the initial user portrait based on the target time weight, the number of time weights of each user portrait, the duration of the target time period corresponding to the user data of the target user, and the duration between the time when the target user first generates the user behavior and the time when the target user last generates the user behavior in the target time period.
In some embodiments, step S1035 may include the steps of:
Step 1: calculate a reference time weight based on the duration of the target time period corresponding to the user data of the target user, the duration between the time when the target user's user behavior first occurs and the time when it last occurs within the target time period, and the time weight of the first user portrait in the chronological order of the generation times of the user portraits.
Step 2: if the reference time weight is not less than a third value, and the time weights from the first user portrait to the target time weight, in the chronological order of generation times, show an ascending trend, calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the first and last occurrence of the user behavior within the target time period, the number of user portraits, and the number of time weights from the time weight of the first user portrait to the target time weight.
Step 3: if the reference time weight is not less than the third value, and the time weights from the first user portrait to the target time weight show a descending trend, calculate the time weight of the initial user portrait based on the duration between the first and last occurrence of the user behavior within the target time period, the duration of the target time period corresponding to the user data of the target user, and the difference with the largest absolute value among the differences of adjacent time weights of the user portraits.
Step 4: if the reference time weight is less than the third value, and the time weights from the first user portrait to the target time weight show a descending trend, calculate the time weight of the initial user portrait based on the same quantities as in step 2.
Step 5: if the reference time weight is less than the third value, and the time weights from the first user portrait to the target time weight show an ascending trend, calculate the time weight of the initial user portrait based on the same quantities as in step 3.
After acquiring the generated user portraits of the target user in the target portrait dimension, the electronic device calculates the reference time weight according to the following formula (2), based on the duration of the target time period corresponding to the user data of the target user, the duration between the time when the target user's user behavior first occurs and the time when it last occurs within the target time period, and the time weight of the first user portrait in the chronological order of generation times.
Δq = ΔT / T − q1    (2)
Δ q represents a reference time weight; Δ T represents a time length between a time when the user behavior of the target user occurs for the first time and a time when the user behavior occurs for the last time in the target time period, T represents a time length of the target time period corresponding to the user data of the target user, q represents a time length of the target time period corresponding to the user data of the target user, and q represents a time length of the target time period 1 The time weight of the first user representation in each user representation is expressed in the chronological order of the generation time of each user representation.
Furthermore, the electronic device calculates the time weight of the initial user portrait based on whether the reference time weight is smaller than the third numerical value and the change trend from the time weight of the first user portrait in each user portrait to the target time weight according to the sequence of the generation time of each user portrait. The third value may be 0.
If the reference time weight is not less than the third value and the time weights from the first user portrait to the target time weight, in the chronological order of generation times, show an ascending trend, this indicates that the user characteristics of the target user in the target portrait dimension maintain an ascending trend at the current time; that is, the time weight of the initial user portrait is greater than the time weight of the most recent user portrait in the target portrait dimension, showing that the user characteristics can still represent the target user and that the target user has a strong preference for the target portrait dimension. The electronic device calculates the time weight of the initial user portrait based on the following formula (3).
Q = ΔT / T + (m / k) · Δq    (3)
Q represents the temporal weight of the initial user representation; Δ T represents a time length between a time when the target user first generates the user behavior and a time when the target user last generates the user behavior in the target time period, and T represents a time length of the target time period corresponding to the user data of the target user; m represents the number of time weights between the time weight of the first user portrait in each user portrait and the target time weight according to the sequence of the generation time of each user portrait; k represents the number of user representations; Δ q denotes a reference time weight.
If the reference time weight is not less than the third value and the time weights from the first user portrait to the target time weight, in the chronological order of generation times, show a descending trend, this indicates that the importance of the user characteristics of the target user in the target portrait dimension has turned upward again at the current time, i.e., the time weight has increased, and the electronic device calculates the time weight of the initial user portrait based on the following formula (4).
Q = ΔT / T + Δq − A    (4)
Q represents the time weight of the initial user portrait; ΔT represents the duration between the time when the target user's user behavior first occurs and the time when it last occurs within the target time period; T represents the duration of the target time period corresponding to the user data of the target user; Δq represents the reference time weight; A represents the difference with the largest absolute value among the differences of adjacent time weights of the user portraits.
If the reference time weight is less than the third value and the time weights from the first user portrait to the target time weight, in the chronological order of generation times, show a descending trend, this indicates that the user characteristics of the target user in the target portrait dimension maintain a descending trend at the current time; that is, the time weight of the initial user portrait is not greater than the time weight of the most recent user portrait in the target portrait dimension, showing that the target user's preference for the target portrait dimension has weakened. The electronic device calculates the time weight of the initial user portrait based on formula (3).
If the reference time weight is less than the third value and the time weights from the first user portrait to the target time weight, in the chronological order of generation times, show an ascending trend, this indicates that the time weights of the target user's user portraits have turned from an ascending trend to a descending trend at the current time, i.e., the time weight of the user characteristics in the target portrait dimension has become smaller, and the electronic device calculates the time weight of the initial user portrait based on formula (4).
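Putting steps S1032-S1035 together, the following Python sketch selects between the formulas as rendered above; since the original formula images are not legible, the exact expressions used for (2), (3), and (4) are assumptions consistent with the variable lists and the four cases just described:

```python
def initial_portrait_time_weight(T, delta_T, weights):
    def target_time_weight(ws):                # inflection point, as sketched above
        if len(ws) < 3:
            return ws[-1]
        descending = ws[1] <= ws[0]
        for prev, cur in zip(ws[1:], ws[2:]):
            if (cur > prev) if descending else (cur < prev):
                return prev
        return ws[-1]

    ratio = delta_T / T                        # formula (1)
    if not weights:                            # no earlier portraits: step S1032
        return ratio
    q1, qe = weights[0], target_time_weight(weights)
    delta_q = ratio - q1                       # formula (2), reconstructed
    m = weights.index(qe) + 1                  # weights from q1 through the target weight
    k = len(weights)
    ascending = qe >= q1                       # trend from q1 to the target weight
    if (delta_q >= 0) == ascending:            # steps 2 and 4 of S1035
        return ratio + (m / k) * delta_q       # formula (3), reconstructed
    diffs = [b - a for a, b in zip(weights, weights[1:])]
    A = max(diffs, key=abs)                    # adjacent difference of largest magnitude
    return ratio + delta_q - A                 # formula (4), reconstructed

print(initial_portrait_time_weight(31, 20, [0.2, 0.3, 0.5, 0.4]))  # ≈ 0.98
```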
In step S104, the time weight of the initial user portrait may represent the importance of the user data of the target portrait dimension within the target time period, that is, the importance of the user characteristics of the target user in the target portrait dimension; and the time weights of the user portraits of the target user generated at different times may represent how that importance changes over time.
The electronic device further determines the calculated temporal weight and the initial user representation as a final target user representation of the target user.
In some embodiments, on the basis of fig. 1, referring to fig. 4, after step S104, the method may further include the steps of:
S105: after receiving a use request for the target user portrait, extract the user identifier of the requesting user carried in the use request, and extract the usage abstract carried in the use request.
The usage abstract represents the usage scenario for which the requesting user acquires the target user portrait.
S106: based on the user identifier, the usage abstract, and the user information of the target user, determine whether the user has the right to use the target user portrait; if not, execute step S107; if so, execute step S108.
S107: send an alarm message to the electronic device used by the target user, to notify the target user of this user's request to use the target user portrait.
S108: transmit the target user portrait to the electronic device used by the user.
The user is any user currently requesting use of a target user representation, and the user may be an enterprise user or an individual user, for example, a financial enterprise requesting a user representation of a target user in a financial dimension to provide targeted financial services to the user based on the obtained user representation, or a gaming enterprise requesting a user representation of a target user in a gaming dimension to provide targeted gaming services to the user based on the obtained user representation.
The user sends, to the electronic device, a use request for the target user portrait of the target user; the use request carries the user identifier of the requesting user, the usage abstract, and the user identifier of the target user (i.e., the target user identifier). The user identifier of a user may be the name of the user, a number assigned to the user, a user identifier generated based on the DID of the user, and the like.
After receiving the use request, the electronic device extracts the user identifier, the use abstract and the target user identifier carried in the use request. The usage summary represents a usage scenario of the user for obtaining the target user representation, for example, obtaining the target user representation provides financial services for the user.
The electronic device locally records the correspondence between a user identifier set and a user information set, and determines the user information corresponding to the target user identifier to obtain the user information of the target user (which may be referred to as target user information). The user identifier set contains at least one user identifier; the user information set contains at least one piece of user information. The user identifiers in the user identifier set correspond one to one to the user information in the user information set; for example, the correspondence includes: user identifier A corresponds to user information A, user identifier B corresponds to user information B, and user identifier C corresponds to user information C.
Furthermore, the electronic device determines whether the user has the right of use of the target user representation based on the user identifier, the usage abstract and the target user information, and processes the user according to the determination result.
The electronic device (which may be referred to as a first electronic device) may determine whether the user has the right to use the target user representation based on the following.
Mode 1:
the target user information further includes: an IP (Internet Protocol) address of an electronic device (which may be referred to as a second electronic device) used by the target user.
Accordingly, step S106 may include the steps of:
Step 1: send a query message for the target user portrait to the electronic device used by the target user, according to the IP address of that electronic device.
The query message carries the user identifier and the usage abstract.
Step 2: when receiving a confirmation authorization message sent by the electronic device used by the target user, determine that the user has the right to use the target user portrait.
Step 3: when receiving a cancel-authorization message sent by the electronic device used by the target user, determine that the user does not have the right to use the target user portrait.
The first electronic equipment sends a query message aiming at the portrait of the target user to the second electronic equipment according to the IP address of the second electronic equipment used by the target user, wherein the query message carries a user identification and a use abstract. The second electronic device may be a terminal, a server, or the like.
The target user determines, based on the user identifier and the usage abstract carried in the query message, whether to authorize the user to use the target user portrait. If the target user decides to authorize the use, the target user inputs an authorization confirmation instruction into the second electronic device, and the second electronic device, upon receiving the instruction, sends a confirmation authorization message to the first electronic device. The first electronic device, upon receiving the confirmation authorization message, determines that the user has the right to use the target user portrait.
If the target user decides not to authorize the use, the target user may input a cancel-authorization instruction into the second electronic device, and the second electronic device, upon receiving the instruction, sends a cancel-authorization message to the first electronic device. The first electronic device, upon receiving the cancel-authorization message, determines that the user does not have the right to use the target user portrait. Alternatively, if the target user decides not to authorize the use, the target user may simply take no action; if the first electronic device does not receive a confirmation authorization message within a preset time length, it determines that the user does not have the right to use the target user portrait.
Mode 2:
The target user information further includes: the authorization list and the authorization abstract of the target user for the target user portrait.
When generating the user portrait of the user, the electronic device may prompt the user to authorize another user to use the user portrait of the user to obtain an authorized list of the user, and may prompt the user to clarify a use scene of the user portrait to obtain an authorized abstract of the user. The electronic device then locally records the user's authorization list and authorization digest for the user representation.
The electronic device acquires the locally recorded authorization list and authorization abstract of the target user for the target user portrait.
The electronic device judges whether the authorization list of the target user portrait contains the user identifier. If it does not, the electronic device determines that the user does not have the right to use the target user portrait; if it does, the electronic device judges whether the usage abstract is the same as the authorization abstract. If the usage abstract is the same as the authorization abstract, the electronic device determines that the user has the right to use the target user portrait; if it is not, the electronic device determines that the user does not.
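A minimal Python sketch of Mode 2, assuming the authorization list is a plain collection of user identifiers and that "the same" means exact string equality:

```python
def has_usage_right(user_id, usage_abstract, auth_list, auth_abstract):
    """Mode 2: list membership plus an exact match of the two abstracts."""
    return user_id in auth_list and usage_abstract == auth_abstract

print(has_usage_right("enterprise-42", "provide financial services",
                      {"enterprise-42"}, "provide financial services"))  # True
```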
Mode 3:
The target user information further includes: the authorization list and the authorization abstract of the target user for the target user portrait. The authorization list contains the user identifiers of the users authorized by the target user to use the target user portrait; the authorization abstract represents the usage scenarios for which the target user authorizes use of the target user portrait.
Accordingly, on the basis of fig. 4, referring to fig. 5, step S106 may include the following steps:
S1061: judge whether the authorization list contains the user identifier; if not, execute step S1062; if so, execute step S1063.
S1062: determine that the user does not have the right to use the target user portrait.
S1063: calculate the difference value between the usage abstract and the authorization abstract.
S1064: if the difference value is greater than a preset threshold, determine that the user does not have the right to use the target user portrait.
S1065: if the difference value is not greater than the preset threshold, determine that the user has the right to use the target user portrait.
In order to determine more accurately whether the user has the right to use the target user portrait, the electronic device calculates the difference value between the usage abstract and the authorization abstract in the case that the authorization list of the target user portrait contains the user identifier. For example, the electronic device may perform word segmentation on the usage abstract and generate a feature vector of the usage abstract based on the segmentation result, and likewise perform word segmentation on the authorization abstract and generate its feature vector. Then, the electronic device calculates the similarity between the two feature vectors and subtracts the similarity from 1 to obtain the difference value between the usage abstract and the authorization abstract.
The electronic device determines that the user does not have the right to use the target user portrait if the calculated difference value is greater than the preset threshold, and determines that the user has the right to use the target user portrait if the calculated difference value is not greater than the preset threshold.
The preset threshold may be set by a technician according to experience, for example, the preset threshold may be 0.6, or the preset threshold may also be 0.5, but is not limited thereto. Alternatively, the preset threshold may be learned from sample data.
In some embodiments, on the basis of fig. 5, referring to fig. 6, step S1063 may include the steps of:
S10631: extract consecutive character strings of a first preset length from the usage abstract to obtain the character strings contained in the usage abstract.
S10632: for each extracted character string, if the authorized abstract contains a character string identical to it, determine the matching degree corresponding to the character string to be a first numerical value.
S10633: if the authorized abstract does not contain a character string identical to it, extract consecutive character strings of a second preset length from the character string to obtain the substrings contained in the character string.
S10634: for each substring contained in the character string, if the authorized abstract does not contain a character string identical to the substring, determine the matching degree corresponding to the substring to be a second numerical value.
S10635: if the authorized abstract contains a character string identical to the substring, calculate the matching degree corresponding to the substring based on the number of characters contained in the substring, the number of characters contained in the character string, the number of characters contained in the authorized abstract, and the number of occurrences in the authorized abstract of the character string identical to the substring.
S10636: calculate the sum of the matching degrees corresponding to the substrings contained in the character string, and calculate the ratio of the sum to the number of substrings contained in the character string to obtain the matching degree corresponding to the character string.
S10637: calculate the difference value between the usage abstract and the authorized abstract based on the matching degree corresponding to each character string contained in the usage abstract and the number of character strings contained in the usage abstract.
The electronic device can extract the character strings from the usage abstract based on N-grams. An N-gram is a way, in NLP (Natural Language Processing) algorithms, of extracting sequences of N items, which may be letters or words, from a given piece of text. When N = 1, it is called a unigram; when N = 2, a bigram; when N = 3, a trigram; and so on. N is the first preset length in the embodiments of the present disclosure.
Taking trigrams (i.e., N = 3) as an example, the electronic device extracts the 1st through 3rd characters to obtain a character string, extracts the 2nd through 4th characters to obtain a character string, extracts the 3rd through 5th characters to obtain a character string, and so on, until the (n−2)th through nth characters are extracted to obtain a character string, thereby obtaining the character strings contained in the usage abstract; n represents the number of characters contained in the usage abstract.
For each character string contained in the usage abstract, the electronic device determines whether the authorized abstract contains an identical character string, and if so, determines the matching degree corresponding to the character string to be the first numerical value; the first numerical value may be 1.
If the authorized abstract does not contain an identical character string, the electronic device extracts consecutive character strings of a second preset length from the character string to obtain the substrings contained in it; the second preset length is smaller than the first preset length.
Illustratively, if the character string is abc and the second preset length is 1, the substrings contained in the character string are: a, b, c. If the second preset length is 2, the substrings contained in the character string are ab and bc; note that ac is not a substring here because its characters are not consecutive.
For each substring included in the character string, in a case that the authorized digest does not include a character string identical to the substring, the electronic device determines that the matching degree corresponding to the substring is a second numerical value, which may be a smaller numerical value, for example, the second numerical value is 0.
When the authorized digest includes a character string identical to the substring, the electronic device calculates a matching degree corresponding to the substring based on the following formula (5).
d = (x / y) · (c · x / p)    (5)
d represents the matching degree corresponding to the sub-character string; x represents the number of characters contained in the substring; y represents the number of characters contained in the character string; p represents the number of characters contained in the authorization digest; c represents the number of occurrences of the same string as the substring in the authorized digest.
For each character string contained in the usage abstract, after the matching degree corresponding to each sub-character string contained in the character string is obtained through calculation, the electronic equipment calculates the sum of the matching degrees corresponding to each sub-character string contained in the character string, and calculates the ratio of the sum to the number of each sub-character string contained in the character string, so that the matching degree corresponding to the character string is obtained.
Further, after calculating the matching degree corresponding to each character string included in the usage summary, the electronic device calculates a difference value between the usage summary and the authorization summary based on the following formula (6).
B = 1 − sum(d_i) / s    (6)
B represents the difference value between the usage abstract and the authorization abstract; sum represents a summation function; d_i represents the matching degree corresponding to the i-th character string contained in the usage abstract; sum(d_i) represents the sum of the matching degrees corresponding to the character strings contained in the usage abstract; s represents the number of character strings contained in the usage abstract.
The matching degree corresponding to a substring represents the degree of match between the substring and the authorized abstract, and the matching degree corresponding to a character string represents the degree of match between the character string and the authorized abstract. Correspondingly, the difference value between the usage abstract and the authorized abstract, calculated from these matching degrees, represents the degree of difference between the two abstracts. The lower this degree of difference, the smaller the gap between the usage scenario represented by the usage abstract and the usage scenario represented by the authorized abstract; that is, the greater the probability that the requested usage scenario is an authorized one, and the greater the probability that the user has the right to use the target user portrait.
Therefore, when the difference value between the usage abstract and the authorization abstract is larger than the preset threshold value, the electronic equipment determines that the user does not have the usage right of the target user portrait, and when the difference value between the usage abstract and the authorization abstract is not larger than the preset threshold value, the electronic equipment determines that the user has the usage right of the target user portrait.
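A minimal Python sketch of steps S10631-S10637; since the original formula images are not legible, the expressions used here for the matching degree d (formula (5)) and the difference value B (formula (6)) are the reconstructions given above and should be read as assumptions, as is the non-overlapping counting of occurrences:

```python
def ngrams(text, n):
    """Consecutive substrings of length n (the extraction of S10631/S10633)."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def match_degree(gram, auth, sub_len):
    """Matching degree of one usage-abstract string against the authorized abstract."""
    if gram in auth:
        return 1.0                                   # first numerical value
    total = 0.0
    subs = ngrams(gram, sub_len)
    for sub in subs:
        c = auth.count(sub)                          # occurrences in the authorized abstract
        if c == 0:
            total += 0.0                             # second numerical value
        else:
            x, y, p = len(sub), len(gram), len(auth)
            total += (x / y) * (c * x / p)           # formula (5), reconstructed
    return total / len(subs)

def difference_value(usage, auth, n=3, sub_len=2):
    grams = ngrams(usage, n)
    return 1.0 - sum(match_degree(g, auth, sub_len) for g in grams) / len(grams)  # formula (6)

print(difference_value("provide financial services", "financial services marketing"))
```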
In the case that the user does not have the right to use the target user portrait, in order to avoid infringing the rights and privacy of the target user and to improve the security of the user portrait, the electronic device (i.e., the first electronic device) determines the target user information corresponding to the target user identifier to obtain the target user information of the target user to whom the target user portrait belongs; the target user information may include the name of the target user, the IP address of the electronic device (i.e., the second electronic device) used by the target user, and the like. Then, the first electronic device sends an alarm message to the second electronic device based on its IP address, to notify the target user of this user's request to use the target user portrait. The alarm message may carry the user identifier, the usage abstract, an identifier of the target user portrait, and an identifier indicating that the user does not have the right to use the target user portrait.
In the case that the user has the right to use the target user portrait, the first electronic device transmits the target user portrait to the electronic device used by the user (which may be referred to as a third electronic device); the third electronic device may be a terminal, a server, or the like. When sending the target user portrait to the third electronic device, the first electronic device may also send a reminder message to the second electronic device, to notify the target user of this user's use of the target user portrait. The reminder message may carry the user identifier, the usage abstract, an identifier of the target user portrait, and an identifier indicating that the user has the right to use the target user portrait.
Based on the above processing, it can be judged whether the user has the right to use the target user portrait; and in the case that the user does not, an alarm message is sent to the electronic device used by the target user to notify the target user of this user's request to use the target user portrait.
After generating the target user representation, the electronic device may further obtain a user identification of the target user (i.e., a target user identification), and record the target user identification and the target user representation correspondingly.
In one implementation, the electronic device may directly obtain a number that is pre-assigned to the target user as the target user identifier.
In another implementation, on the basis of fig. 4, referring to fig. 7, after step S104, the method may further include the steps of:
S109: generate the DID of the target user as the target DID according to a preset DID generation rule and the user information of the target user.
S110: generate a user identifier of the target user as the target user identifier based on the generation time of the designated user portrait of the target user, the number of the target user, and the target DID.
S111: correspondingly record the target user identifier and the target user portrait.
The user information of the target user includes: the name, age, gender, occupation of the target user, and the number assigned to the target user, etc.
The electronic device generates a DID document of the target user according to a preset DID (Decentralized Identifier) generation rule and the user information of the target user. The preset DID generation rule may be the DID document generation method specified by the W3C (World Wide Web Consortium).
The DID document is a JSON-LD object (a method for representing and transmitting interlinked data based on JSON) and comprises six parts: the DID identifier, a set of cryptographic materials (such as public keys), a set of cryptographic protocols, a set of service endpoints, a timestamp, and an optional JSON-LD signature proving that the DID document is legitimate.
The electronic device acquires a DID identifier in a DID document as a target DID of a target user.
If no user portrait of the target user in the target portrait dimension was generated before the target user portrait, the designated user portrait is the target user portrait itself. If user portraits of the target user in the target portrait dimension had been generated before the target user portrait, the designated user portrait may be any one of the generated user portraits, for example, the user portrait generated earliest.
The electronic device correspondingly records the user identifier and the user portraits of each user; and when generating the user identifier of each user, the electronic device correspondingly records the user identifier and the user information of each user, thereby obtaining the correspondence between the user identifier set and the user information set. The user portraits and the user information can thus be associated through the user identifier, which improves the security of the user portraits.
In some embodiments, on the basis of fig. 7, referring to fig. 8, step S110 may include the steps of:
S1101: perform hash processing on the generation time of the designated user portrait of the target user to obtain a hash value of the generation time, and perform hash processing on the number of the target user to obtain a hash value of the number.
S1102: splice the hash value of the generation time of the designated user portrait with the hash value of the number of the target user to obtain a hash value string.
S1103: generate a user identifier of the target user as the target user identifier based on the hash value string and the target DID.
The electronic equipment acquires the generation time of the designated user portrait of the target user and the number of the target user, carries out hash processing on the generation time of the designated user portrait to obtain a hash value of the generation time of the designated user portrait, and carries out hash processing on the number of the target user to obtain a hash value of the number of the target user. And the electronic equipment splices the hash value of the generation time of the appointed user portrait and the hash value of the number of the target user to obtain a hash value string.
In one implementation, the electronic device may splice the obtained hash value string and the target DID, and use the splicing result as the target user identifier. Or, when the hash value string and the target DID have the same number of characters, the electronic device may calculate a weighted sum of each character in the hash value string and a corresponding character in the target DID to obtain the target user identifier.
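A minimal Python sketch of the direct splicing variant; the concrete hash function (SHA-256 over UTF-8 text, hex-encoded) is an assumption, as the patent does not name one:

```python
import hashlib

def target_user_identifier(generation_time, user_number, target_did):
    """Hash the generation time and the user number, splice the two hash
    values into a hash value string, then splice it with the target DID."""
    h_time = hashlib.sha256(generation_time.encode()).hexdigest()
    h_number = hashlib.sha256(user_number.encode()).hexdigest()
    return h_time + h_number + target_did

print(target_user_identifier("2022-05-31 00:00:00", "10086",
                             "did:example:123456789abcdef"))
```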
In another implementation, on the basis of fig. 8 and referring to fig. 9, step S1103 may include the following steps:
S11031: if the number of characters contained in the hash value string is not greater than the number of characters contained in the target DID, determine the position of each character in the hash value string according to the order of the characters from high order to low order.
S11032: determine the character at the same position in the target DID, according to the order of the characters in the target DID from high order to low order, to obtain the character in the target DID corresponding to each character; calculate the remainder of each character and its corresponding character in the target DID to obtain the user identifier of the target user as the target user identifier.
S11033: if the number of characters contained in the hash value string is greater than the number of characters contained in the target DID, determine, in the order of the characters in the hash value string from high order to low order, the characters that have a character at the corresponding position in the target DID as first characters, and determine the other characters in the hash value string as second characters.
S11034: count the number of occurrences of each character in the target DID.
S11035: for each first character, determine its position in the hash value string in the order of the characters from high order to low order; determine the character at the same position in the target DID, in the order of the characters in the target DID from high order to low order, to obtain the character in the target DID corresponding to the first character; calculate the remainder of the first character and its corresponding character as a first remainder.
S11036: for each second character, determine its position in the hash value string in the order of the characters from low order to high order; determine the character at the same position in the sorting result obtained by arranging the characters of the target DID in descending order of occurrence count, to obtain the character in the target DID corresponding to the second character; calculate the remainder of the second character and its corresponding character to obtain a second remainder.
S11037: generate the user identifier of the target user containing the first remainders and the second remainders as the target user identifier.
Illustratively, if the hash value string is [0, 2, 5, 3, 2, 2] and the target DID is [0, 1, 3, 5, 3, 3], the hash value string contains the same number of characters as the target DID. The electronic device, in order from high order to low order, calculates the remainder of the first character in the hash value string (i.e., 0) and the first character in the target DID (i.e., 0) as 0, calculates the remainder of the second character in the hash value string (i.e., 2) and the second character in the target DID (i.e., 1) as 0, and so on, until the remainder of the sixth character in the hash value string (i.e., 2) and the sixth character in the target DID (i.e., 3) is calculated as 2, obtaining the target user identifier: [0, 0, 2, 3, 2, 2].
If the hash value string is [0, 2, 5, 3, 2] and the target DID is [0, 1, 3, 5, 3, 3], the hash value string contains fewer characters than the target DID. The electronic device, in order from high order to low order, calculates the remainder of the first character in the hash value string (i.e., 0) and the first character in the target DID (i.e., 0) as 0, calculates the remainder of the second character in the hash value string (i.e., 2) and the second character in the target DID (i.e., 1) as 0, and so on, until the remainder of the fifth character in the hash value string (i.e., 2) and the fifth character in the target DID (i.e., 3) is calculated as 2, obtaining the target user identifier: [0, 0, 2, 3, 2].
If the hash value string is [0, 2, 5, 3, 2, 6, 9, 7] and the target DID is [1, 1, 3, 5, 3, 3], the hash value string contains more characters than the target DID. The electronic device determines, in order from high order to low order, the characters of the hash value string that have a character at the corresponding position in the target DID: 0, 2, 5, 3, 2, 6; that is, the first characters are 0, 2, 5, 3, 2, 6. The electronic device determines that the other characters in the hash value string are 9 and 7; that is, the second characters are 9 and 7.
Then, the electronic device calculates the remainder of the first first character (i.e., 0) and the first character in the target DID (i.e., 1) as 0, calculates the remainder of the second first character (i.e., 2) and the second character in the target DID (i.e., 1) as 0, and so on, until the remainder of the sixth first character (i.e., 6) and the sixth character in the target DID (i.e., 3) is calculated as 0, obtaining the first remainders: 0, 0, 2, 3, 2, 0.
The electronic device determines that the number of occurrences of 3 in the target DID is 3, of 1 is 2, and of 5 is 1, and arranges the characters in descending order of occurrence count to obtain the sorting result: 3, 1, 5. In the order of the characters in the hash value string from low order to high order, the first second character is 7; the character at the same position in the sorting result is 3, and the remainder of 7 and 3 is calculated as 1. The next second character is 9; the character at the same position in the sorting result is 1, and the remainder of 9 and 1 is calculated as 0. The second remainders are thus: 1, 0.
Then, the electronic device splices the first remainders and the second remainders to obtain the target user identifier: [0, 0, 2, 3, 2, 0, 1, 0].
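A minimal Python sketch of steps S11031-S11037 over digit sequences, reproducing the worked examples above; taking the remainder of 0 and 0 as 0 follows the first example, and wrapping around the frequency-sorted DID characters when there are more second characters than DID characters is an assumption:

```python
from collections import Counter

def remainder_identifier(hash_digits, did_digits):
    def rem(a, b):
        return a % b if b else 0                   # remainder of 0 and 0 taken as 0

    k = min(len(hash_digits), len(did_digits))
    first = [rem(a, b) for a, b in zip(hash_digits, did_digits)]  # first remainders
    if len(hash_digits) <= len(did_digits):
        return first
    # Second characters: the extra digits, taken from low order to high order,
    # matched against the DID digits sorted by descending occurrence count.
    by_freq = [d for d, _ in Counter(did_digits).most_common()]
    extras = hash_digits[k:][::-1]
    second = [rem(a, by_freq[i % len(by_freq)]) for i, a in enumerate(extras)]
    return first + second

print(remainder_identifier([0, 2, 5, 3, 2, 2], [0, 1, 3, 5, 3, 3]))
# [0, 0, 2, 3, 2, 2]
print(remainder_identifier([0, 2, 5, 3, 2, 6, 9, 7], [1, 1, 3, 5, 3, 3]))
# [0, 0, 2, 3, 2, 0, 1, 0]
```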
Based on the above processing, the target DID of the target user can be generated. The target DID is independent of any centralized registry, identity provider, or certificate authority; it is a globally unique identifier with the characteristics of global uniqueness, high resolvability, and cryptographic verifiability. The security of the target user identifier generated based on the target DID is therefore high, which further improves the security of the user portrait.
In one implementation, the electronic device may directly store the target user identifier and the target user representation in a predetermined database. In addition, the electronic equipment also records the corresponding relation between the target user identification and the target user information, and can associate the target user information with the target user portrait through the target user identification, so that the safety of the user portrait is improved.
In another implementation, to improve user representation security, the electronic device may store the target user representation to a predetermined blockchain. Accordingly, on the basis of fig. 7, referring to fig. 10, step S111 may include the steps of:
S1111: judging whether the correspondence between user identifiers and user nodes stored in the portrait node contains the target user identifier; if so, executing step S1112, and if not, executing step S1113.
The portrait node is a head node of a preset user block chain; the user node is a non-head node of the user block chain; one user node is used for storing user information of a corresponding user.
S1112: determining a user node corresponding to the target user identifier to obtain a user node of the target user; and establishing a linked list node after the last linked list node of the portrait block chain with the user node of the target user as the head node, and storing the portrait of the target user to the established linked list node.
S1113: a user node is newly established after the last user node of the user block chain and is used as the user node of the target user, and the target user identification and the user node of the target user are correspondingly recorded in the corresponding relation; establishing an portrait block chain by taking a user node of a target user as a head node; the newly-built portrait block chain comprises a newly-built linked list node except a head node; and storing the target user portrait to the newly-established linked list node.
The electronic device is provided with a preset block chain, which comprises a user block chain and portrait block chains. The head node of the user block chain is the portrait node, and the portrait node records the correspondence between user identifiers and user nodes. The non-head nodes of the user block chain are user nodes, and each user node records the user information of the corresponding user and the correspondence between that user's user portraits and linked list nodes.
Each portrait block chain corresponds to one user: the head node of a user's portrait block chain is that user's user node, the non-head nodes of the portrait block chain are linked list nodes, and each linked list node is used for storing a user portrait of the user.
Exemplarily, referring to fig. 11, fig. 11 is a schematic structural diagram of a block chain according to an embodiment of the present disclosure. The user block chain is: portrait node - user node 1 - user node 2 - user node 3, where user node 1 is the user node of user 1, user node 2 is the user node of user 2, and user node 3 is the user node of user 3.
The portrait block chains are: the portrait block chain corresponding to user 1, namely user node 1 - linked list node 1 - linked list node 2 - linked list node 3; the portrait block chain corresponding to user 2, namely user node 2 - linked list node 4; and the portrait block chain corresponding to user 3, namely user node 3 - linked list node 5 - linked list node 6.
After generating the target user portrait, the electronic device judges whether the correspondence between user identifiers and user nodes stored in the portrait node contains the target user identifier. If the correspondence stored in the portrait node contains the target user identifier, a user portrait of the target user has been generated before, that is, the portrait block chain corresponding to the target user already exists. The electronic device determines, in the correspondence, the user node corresponding to the target user identifier, that is, the user node of the target user; the portrait block chain whose head node is the user node of the target user is the portrait block chain corresponding to the target user.
The electronic device creates a linked list node after the last linked list node of the portrait block chain whose head node is the user node of the target user, and stores the target user portrait to the created linked list node. The electronic device may also record, in the user node of the target user, the correspondence between the target user portrait and the newly created linked list node.
If the correspondence stored in the portrait node does not contain the target user identifier, a user portrait of the target user has not been generated before, that is, the user node of the target user does not yet exist. The electronic device creates a new user node after the last user node of the user block chain as the user node of the target user, and records the target user identifier and the user node of the target user correspondingly in the correspondence. The correspondence between user identifiers and user nodes may also represent the correspondence between the user identifier set and the user information set.
Then, the electronic device creates a portrait block chain with the user node of the target user as the head node. The created portrait block chain comprises the head node (namely the user node of the target user) and a newly created linked list node, and the electronic device stores the target user portrait in the newly created linked list node. The electronic device may also record, in the user node of the target user, the correspondence between the target user portrait and the newly created linked list node.
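Steps S1111 to S1113 can be sketched with plain linked structures. The following Python model is a simplified illustration rather than an on-chain implementation; the class names, the dict used for the identifier-to-node correspondence, and the (generation time, portrait) payload are assumptions of the sketch.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ListNode:
    payload: tuple                              # e.g. (generation_time, portrait)
    next: Optional["ListNode"] = None

@dataclass
class UserNode:
    user_info: dict
    portrait_head: Optional[ListNode] = None    # this user's portrait block chain
    next: Optional["UserNode"] = None

@dataclass
class PortraitNode:
    # Head node of the user block chain: user identifier -> user node.
    id_to_node: dict = field(default_factory=dict)
    last_user: Optional[UserNode] = None

def store_portrait(head: PortraitNode, user_id: str,
                   user_info: dict, payload: tuple) -> None:
    node = head.id_to_node.get(user_id)
    if node is None:
        # S1113: append a new user node after the last user node, record it
        # in the correspondence, and start a portrait chain with one node.
        node = UserNode(user_info=user_info)
        if head.last_user is not None:
            head.last_user.next = node
        head.last_user = node
        head.id_to_node[user_id] = node
        node.portrait_head = ListNode(payload)
        return
    # S1112: append a linked list node after the last node of the
    # existing portrait block chain of the target user.
    tail = node.portrait_head
    while tail.next is not None:
        tail = tail.next
    tail.next = ListNode(payload)
```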
In some embodiments, the step of the electronic device storing the target user representation to the newly created linked list node includes: generating a two-dimensional array comprising the target user portrait and the generation time of the target user portrait, and storing the two-dimensional array to the newly created linked list node.
That is, the electronic device generates a two-dimensional array in which one dimension is the generation time of the target user portrait and the other dimension is the target user portrait itself, and then stores the two-dimensional array to the newly created linked list node.
Based on the above processing, the user portrait of the target user is associated in the portrait block chain with the user node that uniquely represents the user. The user node gives the portrait the identity information of the corresponding user, that is, the user identifier of the user, so the identity information of the user can be associated with the user portrait, and ownership of the user portrait is bound to the user based on that identity information. Then, when the user portrait is to be used, the user can be informed of the intended use through this association, and the user portrait is allowed to be used only after the user authorizes the use, which guarantees that the rights and privacy of the user are not violated and improves the security of the user portrait.
In some embodiments, the electronic device may also retrieve the target user representation prior to sending the target user representation to the electronic device used by the user.
In one implementation, if the electronic device directly stores the target user identifier and the target user portrait in a preset database, the electronic device directly obtains the target user portrait corresponding to the target user identifier from the preset database.
In another implementation, on the basis of fig. 10, referring to fig. 12, before step S108, the method may further include the steps of:
S112: determining, in the correspondence between user identifiers and user nodes recorded by the portrait node, the user node corresponding to the target user identifier, to obtain the user node of the target user.
S113: determining, in the correspondence between user portraits and linked list nodes recorded by the user node of the target user, the linked list node corresponding to the target user portrait.
S114: acquiring the target user portrait from the determined linked list node.
If the electronic device has stored the target user portrait to the portrait block chain corresponding to the target user, the electronic device determines, in the correspondence between user identifiers and user nodes recorded by the portrait node, the user node corresponding to the target user identifier, to obtain the user node of the target user.
Then, the electronic device may determine the portrait block chain whose head node is the user node of the target user, traverse the portrait block chain to find the linked list node storing the target user portrait, and acquire the target user portrait from that linked list node.
Alternatively, the electronic device determines, in the correspondence between user portraits and linked list nodes recorded by the user node of the target user, the linked list node corresponding to the target user portrait, and acquires the target user portrait from the determined linked list node.
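Continuing the illustrative model from the sketch above, steps S112 to S114 reduce to a lookup followed by a traversal; keying the linked list nodes by generation time is again an assumption of the sketch.

```python
def get_portrait(head: PortraitNode, user_id: str,
                 generation_time) -> Optional[tuple]:
    # S112: resolve the target user identifier to the target user's user node.
    node = head.id_to_node.get(user_id)
    if node is None:
        return None
    # S113/S114: walk the portrait block chain to the linked list node that
    # stores the wanted portrait, then return its payload.
    cur = node.portrait_head
    while cur is not None:
        if cur.payload[0] == generation_time:
            return cur.payload
        cur = cur.next
    return None
```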
Referring to FIG. 13, FIG. 13 is a flowchart illustrating another method for managing a user representation according to an embodiment of the present disclosure.
Step 1, user data is obtained.
Since the various application programs used by the user can store user data in the block chain, the electronic device can acquire the user data of each user from mainstream public chains, and then store the acquired user data in a preset database in the form of a wide data table. Subsequently, the electronic device acquires the user data of the target user from the preset database.
Step 2, determining the portrait dimension.
The electronic equipment determines a target user category of the target user based on the basic information of the target user, and obtains a target portrait dimension corresponding to the target user category.
Step 3, forming a user portrait according to the portrait dimension.
The electronic equipment processes user data of the target user in the target portrait dimension to generate a target user portrait of the target user in the target portrait dimension.
Step 4, storing the user portrait in the portrait block chain, and configuring the identity of the user for the user portrait, so that the user can enjoy the right to commercial use of the user portrait based on the identity.
The electronic device stores the target user portrait to the portrait block chain whose head node is the user node of the target user, and records the correspondence between the target user identifier and the user node of the target user. In this way, the target user identifier can be associated with the portrait block chain storing the user portraits of the target user, that is, the target user portrait is associated with the identity information of the target user (which can be the target user identifier), so that the target user enjoys the right to commercial use of the target user portrait based on that identity information.
Step 5, determining the ownership of the user portrait through the user identity, and then using the user portrait based on that ownership.
Upon receiving a request from a user to use the target user portrait of the target user, the identity information (i.e., the target user identifier) of the target user who enjoys ownership of the target user portrait is determined, and based on the identity information of the target user, it is determined whether the user has the right of use of the target user portrait. When the user does not have the right of use of the target user portrait, an alarm message is sent to the electronic device used by the target user. When the user has the right of use of the target user portrait, the target user portrait is sent to the electronic device used by the user.
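A minimal sketch of this dispatch logic in Python, with the right-of-use check and the messaging channels abstracted behind callables; the function and parameter names are illustrative assumptions, and the concrete checks are those detailed elsewhere in this disclosure:

```python
from typing import Callable

def handle_use_request(requester_id: str, usage_summary: str,
                       target_user_id: str,
                       get_portrait: Callable[[str], object],
                       has_right: Callable[[str, str, str], bool],
                       alarm_target_user: Callable[[str], None],
                       send_to_requester: Callable[[object], None]) -> None:
    """Dispatch a use request for the target user portrait."""
    if not has_right(requester_id, usage_summary, target_user_id):
        # No right of use: alarm the target user about the attempted use.
        alarm_target_user(f"user {requester_id} requested use of your "
                          f"portrait: {usage_summary}")
        return
    # Authorized: send the portrait to the device used by the requester.
    send_to_requester(get_portrait(target_user_id))
```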
Based on the above processing, the identity information of the user can be associated with the user portrait, and the ownership of the user portrait is bound to the user based on that identity information. Then, when the user portrait is to be used, the user can be informed of the intended use through this association, and the user portrait is allowed to be used only after the user authorizes the use, which guarantees that the rights and privacy of the user are not infringed and improves the security of the user portrait.
Referring to FIG. 14, FIG. 14 is a flowchart illustrating another method for managing a user representation according to an embodiment of the disclosure.
Step 1, user data is obtained.
User data can be stored in the block chain by the various application programs used by the user, so the electronic device can acquire the user data of each user from mainstream public chains and then store the acquired user data in a preset database in the form of a wide data table. Subsequently, the electronic device acquires the user data of the target user from the preset database.
Step 2, determining the portrait dimension according to the user data.
The electronic equipment determines a target user category of the target user based on the basic information of the target user, and obtains a target portrait dimension corresponding to the target user category. The target representation dimension is determined based on user data of each user included in the target user category.
Step 3, forming the user portrait according to the portrait dimension, the user data and the time attribute.
The electronic device processes the user data of the target user in the target portrait dimension to generate an initial user portrait of the target user in the target portrait dimension. It then calculates the time weight (i.e., the time attribute) of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user and the duration between the moment when the target user first generates a user behavior and the moment when the target user last generates a user behavior in the target time period, and determines the calculated time weight and the initial user portrait as the final target user portrait of the target user.
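The disclosure does not fix a formula for the time weight at this step; one simple instantiation consistent with the description, shown purely as an assumed illustration, is the ratio of the first-to-last behavior span to the full target time period:

```python
from datetime import datetime

def time_weight(period_start: datetime, period_end: datetime,
                first_behavior: datetime, last_behavior: datetime) -> float:
    """One possible time weight: the fraction of the target time period
    spanned by the target user's first-to-last behavior (an assumption;
    the disclosure only names the two durations as inputs)."""
    period = (period_end - period_start).total_seconds()
    active = (last_behavior - first_behavior).total_seconds()
    return active / period if period > 0 else 0.0
```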
Step 4, storing the user portrait in a portrait block chain, and configuring the identity of the user for the user portrait, so that the user can enjoy the right to commercial use of the user portrait based on the identity.
The electronic device stores the target user portrait to the portrait block chain whose head node is the user node of the target user, and records the correspondence between the target user identifier and the user node of the target user. In this way, the target user identifier can be associated with the portrait block chain storing the user portraits of the target user, that is, the target user portrait is associated with the identity information of the target user (which can be the target user identifier), so that the target user enjoys the right to commercial use of the target user portrait based on that identity information.
Step 5, using the user portrait through the user identity.
Upon receiving a request from a user to use the target user portrait of the target user, the identity information of the target user who enjoys ownership of the target user portrait is determined, and based on the identity information of the target user, it is determined whether the user has the right of use of the target user portrait. When the user does not have the right of use of the target user portrait, an alarm message is sent to the electronic device used by the target user. When the user has the right of use of the target user portrait, the target user portrait is sent to the electronic device used by the user.
Based on the above processing, the time weight of the user portrait in each portrait dimension of the user can be determined. The time weight describes how the importance degree of the user features of the user in that portrait dimension varies over time, so user personalized differences caused by time characteristics are reflected in the process of generating the user portrait, and the user portrait is bound to the identity information of the user by building the portrait block chain. When the user portrait is to be used, the user is informed of the intended use through the identity information of the user, and the user portrait is allowed to be used only after the user authorizes the use, which guarantees that the rights and privacy of the user are not infringed and improves the security of the user portrait.
Referring to FIG. 15, FIG. 15 is a flowchart illustrating another method for managing a user representation according to an embodiment of the present disclosure.
Step 1, user data is obtained.
Since the various application programs used by the user can store user data in the block chain, the electronic device can acquire the user data of each user from mainstream public chains, and then store the acquired user data in a preset database in the form of a wide data table. Subsequently, the electronic device acquires the user data of the target user from the preset database.
Step 2, determining the portrait dimension.
The electronic equipment determines a target user category of the target user based on the basic information of the target user, and obtains a target portrait dimension corresponding to the target user category. The target representation dimension is determined based on user data of each user included in the target user category.
Step 3, forming a user portrait according to the portrait dimension.
The electronic equipment processes user data of the target user in the target portrait dimension to generate a target user portrait of the target user in the target portrait dimension.
Step 4, configuring a digital identity for the user portrait based on the DID, so that the user can enjoy the right to commercial use of the user portrait based on the digital identity.
The electronic device generates the target DID of the target user, generates the target user identifier based on the target DID, and takes the target user identifier as the digital identity of the target user. It then associates the digital identity of the target user with the target user portrait, so that the target user can enjoy the right to commercial use of the target user portrait based on the digital identity.
Step 5, using the user portrait through the DID.
Upon receiving a request from a user to use the target user portrait of the target user, the digital identity (i.e., the target user identifier) of the target user who enjoys ownership of the target user portrait is determined, and based on the digital identity of the target user, it is determined whether the user has the right of use of the target user portrait; that is, whether the user has the right of use of the target user portrait is determined based on the target user information corresponding to the target user identifier. When the user does not have the right of use of the target user portrait, an alarm message is sent to the electronic device used by the target user. When the user has the right of use of the target user portrait, the target user portrait is sent to the electronic device used by the user.
Based on the above processing, the DID of the user can be generated, the user identifier of the user can be generated based on the DID, and the owner of the user portrait can be determined through the user identifier. When the user portrait is to be used, the identity information of the user who enjoys ownership of the user portrait is determined through the user identifier, the user is informed of the intended use through that identity information, and the user portrait is allowed to be used only after the user authorizes the use, which guarantees that the rights and privacy of the user are not infringed and improves the security of the user portrait.
Corresponding to the method embodiment of fig. 1, referring to fig. 16, fig. 16 is a block diagram of a user representation generating apparatus provided in an embodiment of the present disclosure, where the apparatus includes:
the portrait dimension determining module 1601 is configured to determine, based on user information of a target user, a portrait dimension corresponding to the target user as a target portrait dimension;
an initial user representation generation module 1602, configured to generate a user representation of the target user in the target representation dimension as an initial user representation based on user data of the target user in the target representation dimension;
a time weight calculating module 1603, configured to calculate a time weight of the initial user portrait based on a duration of a target time period corresponding to the user data of the target user and a duration between a time when the target user first generates a user behavior and a time when the target user last generates the user behavior in the target time period;
a target user representation generation module 1604, configured to determine the calculated time weight and the initial user representation as a final user representation of the target user, to serve as a target user representation.
In some embodiments, the temporal weight calculation module 1603 is specifically configured to determine whether a user representation of the target user in the target representation dimension has been generated prior to generating the initial representation;
if the user representation of the target user in the target representation dimension is not generated before the initial representation is generated, calculating the time weight of the initial user representation based on the duration of a target time period corresponding to the user data of the target user and the duration between the moment when the user behavior occurs for the first time and the moment when the user behavior occurs for the last time in the target time period;
if a user representation of the target user in the target representation dimension has been generated prior to generating the initial representation, obtaining a temporal weight for each user representation of the target user in the target representation dimension that has been generated; determining the time weight at the position of an inflection point in the variation trend of the time weight of each user portrait according to the sequence of the generation time of each user portrait, and taking the time weight as a target time weight; and calculating the time weight of the initial user portrait based on the target time weight, the number of the time weights of the user portraits, the duration of a target time period corresponding to the user data of the target user, and the duration between the moment when the target user firstly generates the user behavior and the moment when the target user finally generates the user behavior in the target time period.
In some embodiments, the time weight calculating module 1603 is specifically configured to calculate a reference time weight based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment when the target user first generates a user behavior and the moment when the target user last generates a user behavior in the target time period, and the time weight of the first user portrait in the order of the generation times of the generated user portraits;
if the reference time weight is not smaller than a third numerical value and, in the order of the generation times of the user portraits, the time weights from the time weight of the first user portrait to the target time weight show an ascending trend, calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment when the user behavior first occurs and the moment when the user behavior last occurs in the target time period, the number of the user portraits, and the number of time weights from the time weight of the first user portrait to the target time weight;
if the reference time weight is not smaller than the third numerical value and, in the order of the generation times of the user portraits, the time weights from the time weight of the first user portrait to the target time weight show a descending trend, calculate the time weight of the initial user portrait based on the duration between the moment when the target user first generates a user behavior and the moment when the user behavior last occurs in the target time period, the duration of the target time period corresponding to the user data of the target user, and the difference with the largest absolute value among the differences between adjacent time weights of the user portraits;
if the reference time weight is smaller than the third numerical value and, in the order of the generation times of the user portraits, the time weights from the time weight of the first user portrait to the target time weight show a descending trend, calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment when the user behavior first occurs and the moment when the user behavior last occurs in the target time period, the number of the user portraits, and the number of time weights from the time weight of the first user portrait to the target time weight;
if the reference time weight is smaller than the third numerical value and, in the order of the generation times of the user portraits, the time weights from the time weight of the first user portrait to the target time weight show an ascending trend, calculate the time weight of the initial user portrait based on the duration between the moment when the target user first generates a user behavior and the moment when the target user last generates a user behavior in the target time period, the duration of the target time period corresponding to the user data of the target user, and the difference with the largest absolute value among the differences between adjacent time weights of the user portraits.
In some embodiments, the apparatus further comprises:
an extracting module, configured to, after the target user representation generation module 1604 determines the calculated time weight and the initial user representation as the final user representation of the target user (i.e., the target user representation), extract, after a usage request for the target user representation is received, the user identifier of the requesting user carried in the usage request, and extract the usage abstract carried in the usage request; wherein the usage abstract represents the usage scenario for which the user acquires the target user portrait;
a usage right judging module for judging whether the user has the usage right of the target user representation based on the user identification, the usage abstract and the user information of the target user;
the warning message sending module is used for sending a warning message to the electronic equipment used by the target user to remind the target user of the current request using behavior of the user aiming at the target user portrait if the user does not have the right of using the target user portrait;
and the user portrait sending module is used for sending the target user portrait to the electronic equipment used by the user if the user has the right of using the target user portrait.
In some embodiments, the user information of the target user comprises: an Internet Protocol (IP) address of the electronic device used by the target user;
the right-of-use judging module is specifically configured to send an inquiry message for the target user portrait to the electronic device used by the target user according to the IP address of the electronic device used by the target user; wherein, the query message carries the user identifier and the usage summary;
upon receiving a confirmation authorization message sent by an electronic device used by the target user, determining that the user has access to the target user representation;
upon receiving a de-authorization message sent by an electronic device used by the target user, determining that the user does not have access to the target user representation.
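A minimal sketch of this inquiry flow in Python, with the transport to the target user's device abstracted behind a callable; the message fields and reply encoding are illustrative assumptions:

```python
from typing import Callable

def authorize_by_inquiry(target_ip: str, requester_id: str,
                         usage_summary: str,
                         send_query: Callable[[str, dict], str]) -> bool:
    """Send a query message carrying the requester's user identifier and
    usage abstract to the target user's device at target_ip, then interpret
    the reply: "confirm" stands for a confirmation authorization message,
    anything else for a de-authorization message (assumed encoding)."""
    reply = send_query(target_ip, {"user_id": requester_id,
                                   "usage_summary": usage_summary})
    return reply == "confirm"
```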
In some embodiments, the user information of the target user includes: the target user aims at an authorization list and an authorization abstract of the target user representation; wherein the authorization list includes user identifications of users authorized by the target user to use the target user representation; the authorization digest represents a usage scenario in which the target user authorizes use of the target user representation;
the right-of-use judging module is specifically configured to judge whether the authorization list includes the user identifier;
determining that said user does not have access to said target user representation if said user identifier is not included in said authorization list;
if the authorization list contains the user identification, calculating a difference value between the use abstract and the authorization abstract; if the difference value is larger than a preset threshold value, determining that the user does not have the right of use of the target user portrait; and if the difference value is not larger than the preset threshold value, determining that the user has the right of use of the target user portrait.
In some embodiments, the usage right determining module is specifically configured to extract continuous character strings with a first preset length from the usage abstract to obtain each character string included in the usage abstract;
for each extracted character string, if the authorized abstract contains the character string same as the character string, determining the matching degree corresponding to the character string as a first numerical value;
if the authorized abstract does not contain the character string which is the same as the character string, extracting continuous character strings with a second preset length from the character string to obtain each sub-character string contained in the character string; for each substring contained in the character string, if the authorized abstract does not contain the character string same as the substring, determining that the matching degree corresponding to the substring is a second numerical value; if the authorized abstract contains the character string which is the same as the sub-character string, calculating the matching degree corresponding to the sub-character string based on the number of the characters contained in the sub-character string, the number of the characters contained in the authorized abstract and the occurrence frequency of the character string which is the same as the sub-character string in the authorized abstract; calculating a sum of matching degrees corresponding to each sub-character string contained in the character string, and calculating a ratio of the sum to the number of each sub-character string contained in the character string to obtain the matching degree corresponding to the character string;
and calculating a difference value between the usage abstract and the authorization abstract based on the matching degree corresponding to each character string contained in the usage abstract and the number of the character strings contained in the usage abstract.
In some embodiments, the apparatus further comprises:
a DID generation module, configured to, after the target user representation generation module 1604 determines the calculated time weight and the initial user representation as the final user representation of the target user (i.e., the target user representation), generate the DID of the target user as a target DID according to a preset distributed identity identifier (DID) generation rule and the user information of the target user;
the user identification generation module is used for generating the user identification of the target user as the target user identification based on the generation time of the appointed user portrait of the target user, the number of the target user and the target DID;
and the recording module is used for correspondingly recording the target user identification and the target user portrait.
In some embodiments, the user identifier generating module is specifically configured to perform hash processing on the generation time of the designated user portrait of the target user to obtain a hash value of the generation time of the designated user portrait, and perform hash processing on the number of the target user to obtain a hash value of the number of the target user;
splicing the hash value of the generation time of the designated user portrait and the hash value of the number of the target user to obtain a hash value string;
and generating the user identification of the target user as the target user identification based on the hash value string and the target DID.
In some embodiments, the user identifier generating module is specifically configured to, if the number of characters included in the hash value string is not greater than the number of characters included in the target DID, determine, for each character in the hash value string, a position of the character in the hash value string according to an arrangement order of the characters included in the hash value string from a high order to a low order; determining characters at the same positions as the characters in the target DID according to the arrangement sequence of the characters from high position to low position contained in the target DID, and obtaining the characters corresponding to the characters in the target DID; calculating the remainder of the character and the corresponding character in the target DID to obtain the user identifier of the target user as the target user identifier;
if the number of the characters contained in the hash value string is larger than that of the characters contained in the target DID, determining the characters with characters at corresponding positions in the target DID as first characters according to the arrangement sequence of the characters contained in the hash value string from high order to low order, and determining other characters except the first characters in the hash value string as second characters; counting the occurrence frequency of each character in the target DID; for each first character, determining the position of the first character in the hash value string according to the arrangement sequence of the characters contained in the hash value string from high order to low order; determining characters in the target DID at the same positions as the first character according to the arrangement sequence of the characters contained in the target DID from high position to low position, and obtaining the corresponding characters of the first character in the target DID; calculating the remainder of the first character and the corresponding character in the target DID as a first remainder; for each second character, determining the position of the second character in the hash value string according to the arrangement sequence of the characters contained in the hash value string from the lower position to the upper position; determining characters at the same positions as the second character in the corresponding sorting result according to the arrangement sequence of the occurrence times of the characters contained in the target DID from high to low to obtain the characters corresponding to the second character in the target DID; calculating the remainder of the second character and the corresponding character in the target DID to obtain a second remainder; and generating the user identifier of the target user containing the first remainder and the second remainder as a target user identifier.
In some embodiments, the recording module is specifically configured to determine whether the target user identifier is included in a correspondence between a user identifier stored in the representation node and the user node; the portrait node is a head node of a preset user block chain; the user node is a non-head node of the user block chain; a user node for storing user information of a corresponding user;
if the corresponding relation comprises the target user identification, determining a user node corresponding to the target user identification to obtain the user node of the target user; establishing a chain table node after the last chain table node of the portrait block chain taking the user node of the target user as a head node, and storing the portrait of the target user to the established chain table node;
if the corresponding relation does not contain the target user identification, a user node is newly established behind the last user node of the user block chain and is used as the user node of the target user, and the target user identification and the user node of the target user are correspondingly recorded in the corresponding relation; a portrait block chain is established by taking the user node of the target user as the head node; the newly-built portrait block chain comprises a newly-built linked list node except the head node; and the target user portrait is stored to the newly-built linked list node.
In some embodiments, the recording module is specifically configured to generate a two-dimensional array including the target user representation and a generation time of the target user representation, and store the two-dimensional array to the newly-created linked list node.
In some embodiments, the apparatus further comprises:
a user node determining module, configured to determine, before the user portrait sending module sends the target user portrait to the electronic device used by the user, the user node corresponding to the target user identifier in the correspondence between user identifiers and user nodes recorded by the portrait node, so as to obtain the user node of the target user;
a linked list node determining module, configured to determine the linked list node corresponding to the target user portrait in the correspondence between user portraits and linked list nodes recorded by the user node of the target user;
and the user portrait acquisition module is used for acquiring the target user portrait from the determined linked list nodes.
Based on the user portrait generation device provided by the embodiments of the present disclosure, the time weight of the initial user portrait can represent the importance degree of the user data of the target portrait dimension in the target time period, that is, the importance degree of the user features of the target user in the target portrait dimension. Further, the time weights of the user portraits of the target user generated at different times can represent the change over time of the importance degree of the user features of the target user in the target portrait dimension. That is, a user portrait with time characteristics can be generated, and the effectiveness of the user portrait can be improved.
The embodiment of the present disclosure also provides an electronic device, as shown in fig. 17, comprising a processor 1701, a communication interface 1702, a memory 1703 and a communication bus 1704, where the processor 1701, the communication interface 1702 and the memory 1703 communicate with each other through the communication bus 1704;
the memory 1703 is configured to store a computer program;
the processor 1701 is configured to implement the steps of the user portrait generation method described in any of the above embodiments when executing the program stored in the memory 1703.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment provided by the present disclosure, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above-mentioned user representation generation methods.
In yet another embodiment provided by the present disclosure, a computer program product containing instructions is also provided, which when run on a computer, causes the computer to perform any of the user representation generation methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions described in accordance with the embodiments of the disclosure are generated, in whole or in part, when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another; for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It should be noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, the electronic device, the computer-readable storage medium, and the computer program product embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and in the relevant places, reference may be made to the partial description of the method embodiments.
The above description is only a preferred embodiment of the present disclosure, and is not intended to limit the scope of the present disclosure. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present disclosure are included in the scope of protection of the present disclosure.

Claims (28)

1. A method of user representation generation, the method comprising:
determining an portrait dimension corresponding to a target user as a target portrait dimension based on user information of the target user;
generating a user representation of the target user in the target representation dimension as an initial user representation based on user data of the target user in the target representation dimension;
calculating the time weight of the initial user portrait based on the duration of a target time period corresponding to the user data of the target user and the duration between the moment when the target user firstly generates the user behavior and the moment when the target user finally generates the user behavior in the target time period;
and determining the calculated time weight and the initial user portrait as a final user portrait of the target user to serve as a target user portrait.
2. The method of claim 1, wherein calculating the temporal weight of the initial user representation based on a duration of a target time period corresponding to the user data of the target user and a duration between a time at which user behavior first occurs and a time at which user behavior last occurs for the target user within the target time period comprises:
determining whether a user representation of the target user in the target representation dimension has been generated prior to generating the initial representation;
if the user representation of the target user in the target representation dimension is not generated before the initial representation is generated, calculating a time weight of the initial user representation based on a duration of a target time period corresponding to user data of the target user and a duration between a time when user behavior of the target user occurs for the first time and a time when user behavior occurs for the last time in the target time period;
if a user representation of the target user in the target representation dimension has been generated prior to generating the initial representation, obtaining a temporal weight of each user representation of the target user in the target representation dimension that has been generated; determining the time weight at the position of an inflection point in the variation trend of the time weight of each user portrait according to the sequence of the generation time of each user portrait, and taking the time weight as a target time weight; and calculating the time weight of the initial user portrait based on the target time weight, the number of the time weights of the user portraits, the duration of a target time period corresponding to the user data of the target user, and the duration between the first time when the user behavior occurs and the last time when the user behavior occurs in the target time period.
3. The method of claim 2, wherein calculating the time weight for the initial user profile based on the target time weight, the number of time weights for each user profile, the duration of a target time period corresponding to the user data for the target user, and the duration between the time at which the user behavior first occurred and the time at which the user behavior last occurred for the target user within the target time period comprises:
calculating a reference time weight based on the duration of a target time period corresponding to the user data of the target user, the duration between the moment when the target user first generates a user behavior and the moment when the target user last generates a user behavior in the target time period, and the time weight of the first user portrait in the order of the generation times of the user portraits;
if the reference time weight is not smaller than a third numerical value and, in the order of the generation times of the user portraits, the time weights from the time weight of the first user portrait to the target time weight show an ascending trend, calculating the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment when the user behavior first occurs and the moment when the user behavior last occurs in the target time period, the number of the user portraits, and the number of time weights from the time weight of the first user portrait to the target time weight;
if the reference time weight is not smaller than the third numerical value and, in the order of the generation times of the user portraits, the time weights from the time weight of the first user portrait to the target time weight show a descending trend, calculating the time weight of the initial user portrait based on the duration between the moment when the target user first generates a user behavior and the moment when the user behavior last occurs in the target time period, the duration of the target time period corresponding to the user data of the target user, and the difference with the largest absolute value among the differences between adjacent time weights of the user portraits;
if the reference time weight is smaller than the third numerical value and, in the order of the generation times of the user portraits, the time weights from the time weight of the first user portrait to the target time weight show a descending trend, calculating the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment when the user behavior first occurs and the moment when the user behavior last occurs in the target time period, the number of the user portraits, and the number of time weights from the time weight of the first user portrait to the target time weight;
if the reference time weight is smaller than the third numerical value and, in the order of the generation times of the user portraits, the time weights from the time weight of the first user portrait to the target time weight show an ascending trend, calculating the time weight of the initial user portrait based on the duration between the moment when the target user first generates a user behavior and the moment when the target user last generates a user behavior in the target time period, the duration of the target time period corresponding to the user data of the target user, and the difference with the largest absolute value among the differences between adjacent time weights of the user portraits.
4. The method of claim 1, wherein after said determining that the calculated temporal weight and the initial user representation are a final user representation of the target user as a target user representation, the method further comprises:
after receiving a use request aiming at the target user portrait, extracting a user identifier of a user carried in the use request as a user identifier and extracting a use abstract carried in the use request; wherein, the usage abstract represents a usage scene of the user for acquiring the target user portrait;
determining whether the user has the right of use of the target user representation based on the user identifier, the usage summary and the user information of the target user;
if the user does not have the right of use of the target user portrait, sending an alarm message to the electronic equipment used by the target user to remind the target user of the current request use behavior of the user for the target user portrait;
if the user has the right to use the target user representation, the target user representation is sent to the electronic device used by the user.
5. The method of claim 4, wherein the user information of the target user comprises: an Internet Protocol (IP) address of the electronic device used by the target user;
the determining whether the user has the right of use of the target user representation based on the user identifier, the usage summary, and the user information of the target user comprises:
sending an inquiry message aiming at the portrait of the target user to the electronic equipment used by the target user according to the IP address of the electronic equipment used by the target user; wherein, the query message carries the user identifier and the usage summary;
upon receiving a confirmation authorization message sent by an electronic device used by the target user, determining that the user has access to the target user representation;
upon receiving a de-authorization message sent by an electronic device used by the target user, determining that the user does not have access to the target user representation.
6. The method of claim 4, wherein the user information of the target user comprises: the target user aims at an authorization list and an authorization abstract of the target user representation; wherein the authorization list includes user identifications of users authorized by the target user to use the target user representation; the authorization digest represents a usage scenario in which the target user authorizes use of the target user representation;
the determining whether the user has the right of use of the target user representation based on the user identifier, the usage summary, and the user information of the target user comprises:
judging whether the authorization list contains the user identification;
determining that said user does not have access to said target user representation if said user identifier is not included in said authorization list;
if the authorization list contains the user identification, calculating a difference value between the use abstract and the authorization abstract; if the difference value is larger than a preset threshold value, determining that the user does not have the right of use of the target user portrait; and if the difference value is not larger than the preset threshold value, determining that the user has the right of use of the target user portrait.
7. The method of claim 6, wherein the calculating a difference value between the usage digest and the authorization digest comprises:
extracting consecutive character strings of a first preset length from the usage digest to obtain the character strings contained in the usage digest;
for each extracted character string, if the authorization digest contains a character string identical to that character string, determining that the matching degree corresponding to that character string is a first numerical value;
if the authorization digest does not contain a character string identical to that character string, extracting consecutive character strings of a second preset length from that character string to obtain the sub-character strings it contains; for each such sub-character string, if the authorization digest does not contain a character string identical to the sub-character string, determining that the matching degree corresponding to the sub-character string is a second numerical value; if the authorization digest does contain a character string identical to the sub-character string, calculating the matching degree corresponding to the sub-character string based on the number of characters in the sub-character string, the number of characters in the authorization digest, and the number of occurrences of that sub-character string in the authorization digest; then calculating the sum of the matching degrees of the sub-character strings contained in the character string, and taking the ratio of that sum to the number of sub-character strings as the matching degree corresponding to the character string;
and calculating the difference value between the usage digest and the authorization digest based on the matching degree corresponding to each character string contained in the usage digest and the number of character strings contained in the usage digest.
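For illustration, a minimal Python sketch of this digest comparison (reused by claim 20 below). The claim names the inputs but not the formulas, so the per-substring weighting, the final "1 minus mean match" difference, and the preset lengths are all assumptions:

```python
def ngrams(text: str, n: int) -> list[str]:
    """All consecutive character strings of length n."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def substring_match(sub: str, auth: str) -> float:
    # Assumed formula built from the three inputs the claim names: the
    # sub-string's length, the digest's length, and the occurrence count.
    return (len(sub) * auth.count(sub)) / len(auth)

def string_match(s: str, auth: str, second_len: int,
                 first_value: float = 1.0, second_value: float = 0.0) -> float:
    if s in auth:
        return first_value               # exact hit: first numerical value
    subs = ngrams(s, second_len)
    if not subs:
        return second_value
    total = sum(substring_match(sub, auth) if sub in auth else second_value
                for sub in subs)
    return total / len(subs)             # mean over the sub-character strings

def difference_value(usage: str, auth: str,
                     first_len: int = 4, second_len: int = 2) -> float:
    strings = ngrams(usage, first_len)
    if not strings:
        return 1.0                       # assumption: empty digest = fully different
    mean_match = sum(string_match(s, auth, second_len)
                     for s in strings) / len(strings)
    return 1.0 - mean_match              # assumption: difference = 1 - mean match
```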
8. The method of claim 4, wherein after the determining the calculated time weight and the initial user portrait as the final user portrait of the target user, as the target user portrait, the method further comprises:
generating a DID of the target user as a target DID according to a preset DID generation rule and the user information of the target user;
generating a user identifier of the target user as a target user identifier based on the generation time of a designated user portrait of the target user, the number of the target user, and the target DID;
and correspondingly recording the target user identifier and the target user portrait.
9. The method of claim 8, wherein the generating a user identifier of the target user as a target user identifier based on the generation time of the designated user portrait of the target user, the number of the target user, and the target DID comprises:
performing hash processing on the generation time of the designated user portrait of the target user to obtain a hash value of the generation time of the designated user portrait, and performing hash processing on the number of the target user to obtain a hash value of the number of the target user;
splicing the hash value of the generation time of the designated user portrait and the hash value of the number of the target user to obtain a hash value string;
and generating the user identifier of the target user as the target user identifier based on the hash value string and the target DID.
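For illustration, a minimal sketch of this hash-and-splice step (mirrored by claim 22). The claim does not name the hash function or encoding, so SHA-256 and hexadecimal are assumptions, and the sample inputs are placeholders:

```python
import hashlib

def sha256_hex(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Placeholder inputs: the generation time of the designated user portrait
# and the target user's number.
generation_time = "2022-06-24 10:15:00"
user_number = "200001"

# Hash each input, then splice the two hash values into one string.
hash_value_string = sha256_hex(generation_time) + sha256_hex(user_number)
```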
10. The method according to claim 9, wherein the generating the user identifier of the target user as a target user identifier based on the hash value string and the target DID comprises:
if the number of characters contained in the hash value string is not greater than the number of characters contained in the target DID: for each character in the hash value string, determining the position of the character in the hash value string according to the high-order-to-low-order arrangement of the characters in the hash value string; determining the character at the same position in the target DID according to the high-order-to-low-order arrangement of the characters in the target DID, to obtain the character in the target DID corresponding to that character; and calculating the remainder of that character with respect to its corresponding character in the target DID, to obtain the user identifier of the target user as the target user identifier;
if the number of characters contained in the hash value string is greater than the number of characters contained in the target DID: determining the characters that have characters at corresponding positions in the target DID, taken in the high-order-to-low-order arrangement of the characters in the hash value string, as first characters, and determining the remaining characters in the hash value string as second characters; counting the number of occurrences of each character in the target DID; for each first character, determining its position in the hash value string according to the high-order-to-low-order arrangement, determining the character at the same position in the target DID according to the target DID's high-order-to-low-order arrangement to obtain its corresponding character, and calculating the remainder of the first character with respect to that character as a first remainder; for each second character, determining its position in the hash value string according to the low-order-to-high-order arrangement, determining the character at the same position in the result of sorting the target DID's characters by their occurrence counts from high to low to obtain its corresponding character, and calculating the remainder of the second character with respect to that character as a second remainder; and generating the user identifier of the target user containing the first remainders and the second remainders as the target user identifier.
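For illustration, a hypothetical sketch of the two cases above (mirrored by claim 23). The claim does not define how a character becomes a number, how a zero divisor is avoided, or how leftover characters are paired when they outnumber the distinct DID characters, so those choices are assumptions:

```python
from collections import Counter

def derive_user_identifier(hash_string: str, did: str) -> str:
    def rem(a: str, b: str) -> int:
        # Assumed numeric mapping: code points; max() avoids a zero divisor.
        return ord(a) % max(ord(b), 1)

    if len(hash_string) <= len(did):
        # Case 1: pair characters at the same positions, high order to
        # low order, and keep each remainder.
        remainders = [rem(h, d) for h, d in zip(hash_string, did)]
    else:
        # Case 2: the first len(did) characters have positional partners
        # in the DID (the "first characters")...
        first, second = hash_string[:len(did)], hash_string[len(did):]
        remainders = [rem(h, d) for h, d in zip(first, did)]
        # ...while the remaining "second characters", read low order to
        # high order, are paired against the DID's characters sorted by
        # occurrence count, most frequent first.
        ranked = [c for c, _ in Counter(did).most_common()]
        remainders += [rem(h, ranked[i % len(ranked)])   # wraparound: assumption
                       for i, h in enumerate(reversed(second))]
    return "".join(f"{r:02x}" for r in remainders)
```

With the hash value string from the claim-9 sketch and a target DID, `derive_user_identifier(hash_value_string, did)` would yield the target user identifier under these assumptions.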
11. The method of claim 8, wherein the correspondingly recording the target user identifier and the target user portrait comprises:
judging whether the correspondence between user identifiers and user nodes stored in the portrait node contains the target user identifier; wherein the portrait node is the head node of a preset user blockchain; a user node is a non-head node of the user blockchain; and each user node stores the user information of its corresponding user;
if the correspondence contains the target user identifier, determining the user node corresponding to the target user identifier to obtain the user node of the target user; creating a new linked list node after the last linked list node of the portrait blockchain whose head node is the user node of the target user, and storing the target user portrait in the newly created linked list node;
if the correspondence does not contain the target user identifier, creating a new user node after the last user node of the user blockchain as the user node of the target user, and correspondingly recording the target user identifier and the user node of the target user in the correspondence; creating a portrait blockchain with the user node of the target user as its head node, the newly created portrait blockchain containing one newly created linked list node besides the head node; and storing the target user portrait in the newly created linked list node.
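For illustration, a hypothetical sketch of this two-level layout (mirrored by claim 24), using the claim-12 two-dimensional [portrait, generation time] record as the stored payload. Class and field names are illustrative, not taken from the claims:

```python
class ChainNode:
    """A linked list node; also serves as a chain's head node."""
    def __init__(self, payload=None):
        self.payload = payload
        self.next = None

def append_node(head: ChainNode, node: ChainNode) -> None:
    """Attach node after the last node of the chain starting at head."""
    tail = head
    while tail.next:
        tail = tail.next
    tail.next = node

class UserNode(ChainNode):
    """Non-head node of the user chain; stores one user's information."""
    def __init__(self, user_info):
        super().__init__(user_info)
        self.portrait_head = ChainNode()   # head of this user's portrait chain

class PortraitNode(ChainNode):
    """Head node of the user chain; keeps the identifier-to-node correspondence."""
    def __init__(self):
        super().__init__()
        self.correspondence = {}           # user identifier -> UserNode

def record_portrait(head: PortraitNode, user_id: str,
                    portrait, generation_time: str) -> None:
    user_node = head.correspondence.get(user_id)
    if user_node is None:
        # Unknown identifier: create a user node at the tail of the user
        # chain and record the correspondence.
        user_node = UserNode({"id": user_id})
        append_node(head, user_node)
        head.correspondence[user_id] = user_node
    # Store the portrait in a new linked list node at the tail of the
    # user's portrait chain, as a [portrait, generation time] pair.
    append_node(user_node.portrait_head, ChainNode([portrait, generation_time]))
```

A single call covers both branches of the claim: an absent identifier creates the user node and its portrait chain; an existing one only appends a linked list node.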
12. The method of claim 11, wherein the storing the target user portrait in the newly created linked list node comprises:
generating a two-dimensional array comprising the target user portrait and the generation time of the target user portrait, and storing the two-dimensional array in the newly created linked list node.
13. The method of claim 11, wherein before the sending the target user portrait to the electronic device used by the user, the method further comprises:
determining the user node corresponding to the target user identifier in the correspondence between user identifiers and user nodes recorded by the portrait node, to obtain the user node of the target user;
determining the linked list node corresponding to the target user portrait in the correspondence between user portraits and linked list nodes recorded by the user node of the target user;
and obtaining the target user portrait from the determined linked list node.
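A matching retrieval sketch for this claim, reusing the hypothetical structures from the storage sketch after claim 11; looking the linked list node up by generation time is an assumption, since the claim only names the portrait-to-node correspondence:

```python
def fetch_portrait(head: "PortraitNode", user_id: str, generation_time: str):
    user_node = head.correspondence.get(user_id)   # user node of the target user
    if user_node is None:
        return None
    node = user_node.portrait_head.next
    while node is not None:
        portrait, recorded_time = node.payload
        if recorded_time == generation_time:       # matching linked list node
            return portrait
        node = node.next
    return None
```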
14. A user representation generation apparatus, the apparatus comprising:
the portrait dimension determining module is used for determining a portrait dimension corresponding to a target user as a target portrait dimension based on user information of the target user;
an initial user representation generation module for generating a user representation of the target user in the target representation dimension as an initial user representation based on user data of the target user in the target representation dimension;
a time weight calculation module, configured to calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user and the duration between the moment the target user first generates user behavior and the moment the user behavior last occurs within the target time period;
and a target user portrait generation module, configured to determine the calculated time weight and the initial user portrait as the final user portrait of the target user, as the target user portrait.
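For illustration, a minimal sketch of the time weight named here; the claim names the two durations but not the formula, so the ratio below is an assumption:

```python
# Assumed formula: the share of the target time period spanned by the
# user's observed behavior.
def time_weight(period_length_s: float, first_behavior_s: float,
                last_behavior_s: float) -> float:
    active_span = last_behavior_s - first_behavior_s
    return active_span / period_length_s if period_length_s > 0 else 0.0
```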
15. The apparatus of claim 14, wherein the time weight calculation module is configured to judge whether a user portrait of the target user in the target portrait dimension was generated before the initial user portrait;
if no user portrait of the target user in the target portrait dimension was generated before the initial user portrait, calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user and the duration between the moment the target user first generates user behavior and the moment the user behavior last occurs within the target time period;
if user portraits of the target user in the target portrait dimension were generated before the initial user portrait, obtain the time weight of each previously generated user portrait; determine, among the time weights ordered by the portraits' generation times, the time weight at the inflection point of their variation trend as a target time weight; and calculate the time weight of the initial user portrait based on the target time weight, the number of time weights, the duration of the target time period corresponding to the user data of the target user, and the duration between the moment the target user first generates user behavior and the moment the user behavior last occurs within the target time period.
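For illustration, a sketch of the target-time-weight selection in this claim: scan the historical weights in generation-time order and return the one at the inflection point of the trend. The fallback when no inflection exists is an assumption:

```python
def target_time_weight(weights: list[float]) -> float:
    """weights: historical time weights ordered by portrait generation time."""
    for i in range(1, len(weights) - 1):
        rise_then_fall = weights[i - 1] < weights[i] > weights[i + 1]
        fall_then_rise = weights[i - 1] > weights[i] < weights[i + 1]
        if rise_then_fall or fall_then_rise:
            return weights[i]          # weight at the inflection point
    return weights[-1]                 # no inflection: assume the latest weight
```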
16. The apparatus according to claim 15, wherein the time weight calculation module is specifically configured to calculate a reference time weight based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment the target user first generates user behavior and the moment the user behavior last occurs within the target time period, and the time weight of the first user portrait in the generation-time ordering of the user portraits;
if the reference time weight is not less than a third numerical value and the time weights from that of the first user portrait to the target time weight show an ascending trend in generation-time order, calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment the user behavior first occurs and the moment it last occurs within the target time period, the number of user portraits, and the number of time weights from that of the first user portrait up to the target time weight;
if the reference time weight is not less than the third numerical value and the time weights from that of the first user portrait to the target time weight show a descending trend in generation-time order, calculate the time weight of the initial user portrait based on the duration between the moment the target user first generates user behavior and the moment the user behavior last occurs within the target time period, the duration of the target time period corresponding to the user data of the target user, and, among the differences of adjacent time weights of the user portraits, the difference with the largest absolute value;
if the reference time weight is less than the third numerical value and the time weights from that of the first user portrait to the target time weight show a descending trend in generation-time order, calculate the time weight of the initial user portrait based on the duration of the target time period corresponding to the user data of the target user, the duration between the moment the user behavior first occurs and the moment it last occurs within the target time period, the number of user portraits, and the number of time weights from that of the first user portrait up to the target time weight;
if the reference time weight is less than the third numerical value and the time weights from that of the first user portrait to the target time weight show an ascending trend in generation-time order, calculate the time weight of the initial user portrait based on the duration between the moment the target user first generates user behavior and the moment the user behavior last occurs within the target time period, the duration of the target time period corresponding to the user data of the target user, and, among the differences of adjacent time weights of the user portraits, the difference with the largest absolute value.
17. The apparatus of claim 14, further comprising:
an extraction module, configured to, after the target user portrait generation module determines the calculated time weight and the initial user portrait as the final user portrait of the target user (the target user portrait), extract, upon receiving a usage request for the target user portrait, the user identifier of the user carried in the usage request and the usage digest carried in the usage request; wherein the usage digest represents the usage scenario in which the user seeks to obtain the target user portrait;
a usage right judging module, configured to judge whether the user has the right to use the target user portrait based on the user identifier, the usage digest, and the user information of the target user;
a warning message sending module, configured to, if the user does not have the right to use the target user portrait, send a warning message to the electronic device used by the target user to notify the target user of the user's current request to use the target user portrait;
and a user portrait sending module, configured to, if the user has the right to use the target user portrait, send the target user portrait to the electronic device used by the user.
18. The apparatus of claim 17, wherein the user information of the target user comprises: an Internet Protocol (IP) address of the electronic device used by the target user;
the usage right judging module is specifically configured to send an inquiry message for the target user portrait to the electronic device used by the target user according to the IP address of that device, wherein the inquiry message carries the user identifier and the usage digest;
determine, if a confirmation authorization message sent by the electronic device used by the target user is received, that the user has the right to use the target user portrait;
and determine, if a de-authorization message sent by the electronic device used by the target user is received, that the user does not have the right to use the target user portrait.
19. The apparatus of claim 17, wherein the user information of the target user comprises: an authorization list and an authorization digest of the target user for the target user portrait; wherein the authorization list contains the user identifiers of the users authorized by the target user to use the target user portrait; the authorization digest represents the usage scenario in which the target user authorizes use of the target user portrait;
the usage right judging module is specifically configured to judge whether the authorization list contains the user identifier;
determine, if the authorization list does not contain the user identifier, that the user does not have the right to use the target user portrait;
and, if the authorization list contains the user identifier, calculate a difference value between the usage digest and the authorization digest; if the difference value is greater than a preset threshold, determine that the user does not have the right to use the target user portrait; and if the difference value is not greater than the preset threshold, determine that the user has the right to use the target user portrait.
20. The apparatus according to claim 19, wherein the usage right judging module is specifically configured to extract consecutive character strings of a first preset length from the usage digest to obtain the character strings contained in the usage digest;
for each extracted character string, if the authorization digest contains a character string identical to that character string, determine that the matching degree corresponding to that character string is a first numerical value;
if the authorization digest does not contain a character string identical to that character string, extract consecutive character strings of a second preset length from that character string to obtain the sub-character strings it contains; for each such sub-character string, if the authorization digest does not contain a character string identical to the sub-character string, determine that the matching degree corresponding to the sub-character string is a second numerical value; if the authorization digest does contain a character string identical to the sub-character string, calculate the matching degree corresponding to the sub-character string based on the number of characters in the sub-character string, the number of characters in the authorization digest, and the number of occurrences of that sub-character string in the authorization digest; then calculate the sum of the matching degrees of the sub-character strings contained in the character string, and take the ratio of that sum to the number of sub-character strings as the matching degree corresponding to the character string;
and calculate the difference value between the usage digest and the authorization digest based on the matching degree corresponding to each character string contained in the usage digest and the number of character strings contained in the usage digest.
21. The apparatus of claim 17, further comprising:
a DID generation module, configured to, after the target user portrait generation module determines the calculated time weight and the initial user portrait as the final user portrait of the target user, generate a DID of the target user as a target DID according to a preset DID generation rule and the user information of the target user;
a user identifier generation module, configured to generate a user identifier of the target user as a target user identifier based on the generation time of the designated user portrait of the target user, the number of the target user, and the target DID;
and a recording module, configured to correspondingly record the target user identifier and the target user portrait.
22. The apparatus of claim 21, wherein the user identifier generation module is specifically configured to perform a hash process on a generation time of a specified user representation of the target user to obtain a hash value of the generation time of the specified user representation, and perform a hash process on a number of the target user to obtain a hash value of the number of the target user;
splicing the hash value of the generation time of the designated user portrait and the hash value of the number of the target user to obtain a hash value string;
and generate the user identifier of the target user as the target user identifier based on the hash value string and the target DID.
23. The apparatus according to claim 22, wherein the user identifier generation module is specifically configured to, if the number of characters contained in the hash value string is not greater than the number of characters contained in the target DID: for each character in the hash value string, determine the position of the character in the hash value string according to the high-order-to-low-order arrangement of the characters in the hash value string; determine the character at the same position in the target DID according to the high-order-to-low-order arrangement of the characters in the target DID, to obtain the character in the target DID corresponding to that character; and calculate the remainder of that character with respect to its corresponding character in the target DID, to obtain the user identifier of the target user as the target user identifier;
if the number of characters contained in the hash value string is greater than the number of characters contained in the target DID: determine the characters that have characters at corresponding positions in the target DID, taken in the high-order-to-low-order arrangement of the characters in the hash value string, as first characters, and determine the remaining characters in the hash value string as second characters; count the number of occurrences of each character in the target DID; for each first character, determine its position in the hash value string according to the high-order-to-low-order arrangement, determine the character at the same position in the target DID according to the target DID's high-order-to-low-order arrangement to obtain its corresponding character, and calculate the remainder of the first character with respect to that character as a first remainder; for each second character, determine its position in the hash value string according to the low-order-to-high-order arrangement, determine the character at the same position in the result of sorting the target DID's characters by their occurrence counts from high to low to obtain its corresponding character, and calculate the remainder of the second character with respect to that character as a second remainder; and generate the user identifier of the target user containing the first remainders and the second remainders as the target user identifier.
24. The apparatus of claim 21, wherein the recording module is configured to judge whether the correspondence between user identifiers and user nodes stored in the portrait node contains the target user identifier; wherein the portrait node is the head node of a preset user blockchain; a user node is a non-head node of the user blockchain; and each user node stores the user information of its corresponding user;
if the correspondence contains the target user identifier, determine the user node corresponding to the target user identifier to obtain the user node of the target user; create a new linked list node after the last linked list node of the portrait blockchain whose head node is the user node of the target user, and store the target user portrait in the newly created linked list node;
if the correspondence does not contain the target user identifier, create a new user node after the last user node of the user blockchain as the user node of the target user, and correspondingly record the target user identifier and the user node of the target user in the correspondence; create a portrait blockchain with the user node of the target user as its head node, the newly created portrait blockchain containing one newly created linked list node besides the head node; and store the target user portrait in the newly created linked list node.
25. The apparatus of claim 24, wherein the recording module is further configured to generate a two-dimensional array comprising the target user portrait and the generation time of the target user portrait, and store the two-dimensional array in the newly created linked list node.
26. The apparatus of claim 24, further comprising:
a user node determining module, configured to determine, before the user portrait sending module sends the target user portrait to the electronic device used by the user, the user node corresponding to the target user identifier in the correspondence between user identifiers and user nodes recorded by the portrait node, to obtain the user node of the target user;
a linked list node determining module, configured to determine the linked list node corresponding to the target user portrait in the correspondence between user portraits and linked list nodes recorded by the user node of the target user;
and a user portrait obtaining module, configured to obtain the target user portrait from the determined linked list node.
27. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
and the processor is configured to implement the method steps of any one of claims 1 to 13 when executing the program stored in the memory.
28. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the method steps of any one of claims 1 to 13.
CN202210731109.7A 2022-06-24 2022-06-24 User portrait generation method and device, electronic equipment and storage medium Pending CN114996348A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210731109.7A CN114996348A (en) 2022-06-24 2022-06-24 User portrait generation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210731109.7A CN114996348A (en) 2022-06-24 2022-06-24 User portrait generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114996348A true CN114996348A (en) 2022-09-02

Family

ID=83037408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210731109.7A Pending CN114996348A (en) 2022-06-24 2022-06-24 User portrait generation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114996348A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117437091A (en) * 2023-12-21 2024-01-23 南京市文化投资控股集团有限责任公司 Operation interaction management system and method for meta-universe scene
CN117437091B (en) * 2023-12-21 2024-02-23 南京市文化投资控股集团有限责任公司 Operation interaction management system and method for meta-universe scene
CN117875501A (en) * 2024-01-12 2024-04-12 深圳振华数据信息技术有限公司 Social media user behavior prediction system and method based on big data
CN117875501B (en) * 2024-01-12 2024-07-02 深圳振华数据信息技术有限公司 Social media user behavior prediction system and method based on big data

Similar Documents

Publication Publication Date Title
Gordon et al. Jury learning: Integrating dissenting voices into machine learning models
US11783126B2 (en) Enabling chatbots by detecting and supporting affective argumentation
CN110598016B (en) Method, device, equipment and medium for recommending multimedia information
CN102737333B (en) For calculating user and the offer order engine to the coupling of small segmentation
CN114996348A (en) User portrait generation method and device, electronic equipment and storage medium
CN105590055A (en) Method and apparatus for identifying trustworthy user behavior in network interaction system
CN107077486A (en) Affective Evaluation system and method
US10795899B2 (en) Data discovery solution for data curation
CN113628049B (en) Conflict arbitration method of blockchain intelligent contracts based on group intelligence
CN111652622A (en) Risk website identification method and device and electronic equipment
Manoharan et al. An Intelligent Fuzzy Rule‐Based Personalized News Recommendation Using Social Media Mining
CN114819967A (en) Data processing method and device, electronic equipment and computer readable storage medium
CN111639696B (en) User classification method and device
Livraga et al. Data confidentiality and information credibility in on-line ecosystems
CN111552865A (en) User interest portrait method and related equipment
Dunna et al. Paying Attention to the Algorithm Behind the Curtain: Bringing Transparency to YouTube's Demonetization Algorithms
Huang et al. Neural explicit factor model based on item features for recommendation systems
CN109543094B (en) Privacy protection content recommendation method based on matrix decomposition
CN116159310A (en) Data processing method, device, electronic equipment and storage medium
CN114996347A (en) User portrait management method and device, electronic equipment and storage medium
CN114969197A (en) User portrait management method and device, electronic equipment and storage medium
Chen et al. Social-network-assisted task recommendation algorithm in mobile crowd sensing
Poniszewska-Marańda et al. Analyzing user profiles with the use of social API
CN113259150B (en) Data processing method, system and storage medium
CN113032625B (en) Video sharing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination