CN111143663B - Information pushing method and device - Google Patents

Information pushing method and device

Info

Publication number
CN111143663B
CN111143663B (application CN201811302008.8A)
Authority
CN
China
Prior art keywords
user
information
product
terminal
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811302008.8A
Other languages
Chinese (zh)
Other versions
CN111143663A (en)
Inventor
王金燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201811302008.8A priority Critical patent/CN111143663B/en
Publication of CN111143663A publication Critical patent/CN111143663A/en
Application granted granted Critical
Publication of CN111143663B publication Critical patent/CN111143663B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/55Push-based network services

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the present application disclose an information pushing method and apparatus. One embodiment of the method comprises: in response to receiving, from a terminal, a search expression that includes a product keyword, generating a return result and sending the return result to the terminal, the return result including prompt information for prompting the user of the terminal to perform an upload operation for reference information; in response to receiving the reference information sent by the terminal, determining, based on the reference information, product information associated with the product keyword for recommendation to the user, the reference information including a facial image of the user; and sending the product information associated with the product keyword to the terminal so that the product information is presented to the user at the terminal. By interacting with the user, the method and apparatus obtain the user's feature information, determine product information for products suited to the user, and push that information to the user, thereby improving the accuracy of information pushing.

Description

Information pushing method and device
Technical Field
The present application relates to the field of computers, in particular to the field of the Internet, and more particularly to an information pushing method and apparatus.
Background
Users often need to search with a search engine, for example to obtain information about a certain type of product from a certain brand. At present, after a user enters a search expression, information about all products associated with the product keywords in that search expression is returned to the user. The user therefore cannot tell which of those products are suitable for his or her own use.
Disclosure of Invention
The embodiment of the application provides an information pushing method and device.
In a first aspect, an embodiment of the present application provides an information pushing method, the method comprising: in response to receiving, from a terminal, a search expression including a product keyword, generating a return result and sending the return result to the terminal, the return result including prompt information for prompting the user of the terminal to perform an upload operation for reference information; in response to receiving the reference information sent by the terminal, determining, based on the reference information, product information associated with the product keyword for recommendation to the user, the reference information including a facial image of the user; and sending the product information associated with the product keyword to the terminal so that the product information is presented to the user at the terminal.
In a second aspect, an embodiment of the present application provides an information pushing apparatus, comprising: a first response unit configured to generate a return result in response to receiving a search expression including a product keyword sent by the terminal, and to send the return result to the terminal, the return result including prompt information for prompting the user of the terminal to perform an upload operation for reference information; a second response unit configured to determine, in response to receiving the reference information sent by the terminal, product information associated with the product keyword for recommendation to the user based on the reference information, the reference information including a facial image of the user; and a pushing unit configured to send the product information associated with the product keyword to the terminal so that the product information is presented to the user at the terminal.
The information pushing method and apparatus provided by the embodiments of the present application generate a return result in response to receiving a search expression including a product keyword from a terminal and send it to the terminal, the return result including prompt information prompting the user of the terminal to perform an upload operation for reference information; in response to receiving the reference information, which includes a facial image of the user, they determine product information associated with the product keyword for recommendation to the user; and they send that product information to the terminal for presentation to the user. By interacting with the user in this way, the method and apparatus obtain the user's feature information, determine product information for products suited to the user, and push it to the user, thereby improving the accuracy of information pushing.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 illustrates an exemplary system architecture suitable for use in implementing embodiments of the present application;
FIG. 2 shows a flow chart of one embodiment of an information push method according to the present application;
FIG. 3 shows a flow chart of another embodiment of an information push method according to the present application;
FIG. 4 shows a schematic structural view of an embodiment of an information pushing device according to the present application;
FIG. 5 is a schematic diagram of a computer system suitable for use with a server implementing an embodiment of the application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
Referring to FIG. 1, an exemplary system architecture suitable for implementing embodiments of the present application is shown.
As shown in fig. 1, the system architecture includes a terminal 101, a network 102, and a server 103. The network 102 may be a wired communication network or a wireless communication network.
The terminal 101 may be a smart phone, a tablet computer, a PC. The server 103 may be a server providing a search service.
When the user of the terminal 101 wants to obtain information associated with a product, a search expression associated with the product may be entered on the terminal, and the terminal 101 sends that search expression to the server 103. The server 103 may determine a return result according to the search expression and send the return result to the terminal 101.
Referring to fig. 2, a flow of one embodiment of an information push method according to the present application is shown. The information pushing method provided by the embodiment of the present application may be performed by a server (e.g., the server 103 in fig. 1). The method comprises the following steps:
In step 201, in response to receiving a search expression including a product keyword sent by the terminal, a return result is generated and sent to the terminal.
In this embodiment, when a user wants to obtain information about a product, the user may enter, at the terminal, a search expression associated with that product, and the terminal sends the search expression to the server. The server may segment the search expression and, upon determining that it contains a product keyword, generate a return result. The return result includes prompt information for prompting the user to upload reference information through the terminal, for example to upload a facial image of the user. The return result may also include information related to the product keyword. The server sends the return result to the terminal as one search result, together with the other search results it has found. After receiving the return result, the terminal may present it to the user in the area of the search results page reserved for the return result; the other search results are also included in the search results page.
For example, when the user enters the search expression "XX brand eye cream", the search expression contains a keyword indicating a brand and a keyword indicating a product type. Profile information for the indicated brand and partial product information for products of that brand and type may be found and sent to the user's terminal as the return result. The return result also contains prompt information prompting the user to perform the upload operation for the reference information.
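As a concrete illustration of step 201, the following Python sketch checks a search expression against a product-keyword dictionary and builds a return result carrying the upload prompt. It is a minimal sketch only: the keyword dictionary, the prompt text, and the result fields are hypothetical, and a real server would segment the expression (for Chinese, with a segmenter such as jieba) rather than rely on plain substring matching.

```python
PRODUCT_KEYWORDS = {"eye cream", "facial cleanser", "hairpin"}  # hypothetical dictionary

def build_return_result(search_expression: str):
    """If the search expression contains a product keyword, build a return
    result that carries the prompt for uploading reference information."""
    expr = search_expression.lower()
    matched = [kw for kw in sorted(PRODUCT_KEYWORDS) if kw in expr]
    if not matched:
        return None  # no product keyword: only ordinary search results are sent
    return {
        "product_keywords": matched,
        "prompt": "Upload a facial image to receive recommendations suited to you.",
        "related_info": [],  # brand profile / partial product info would be added here
    }

print(build_return_result("XX brand eye cream"))  # prompts for an upload
```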
Step 202, product information for recommendation to a user is determined based on the reference information.
In this embodiment, the reference information sent by the terminal includes a facial image of the user. After the return result containing the prompt information has been sent to the user's terminal and the prompt information has been presented to the user, the user may perform the upload operation for his or her facial image.
For example, the user may click a button for uploading a facial image in the area of the search results page where the return result is presented. Clicking the button may invoke a control for selecting a facial image, with which the user selects one facial image from those stored on the terminal, thereby completing the upload operation for the reference information; the selected facial image is then sent to the server. If the terminal is equipped with a camera, clicking the button may instead invoke the camera to photograph the user, and the captured facial image is used as the uploaded facial image, likewise completing the upload operation. After the user performs the upload operation, the terminal sends the facial image to the server. In response to receiving the reference information, the server may generate feature information of the user based on the reference information and determine, based on that feature information, product information associated with the product keyword for recommendation to the user.
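The terminal-side upload can be pictured with a short sketch as well. The endpoint name, field names, and JSON response below are assumptions for illustration; the patent does not specify the transfer protocol.

```python
import requests

def upload_reference(server_url: str, face_image_path: str, user_id: str):
    """Send the selected or freshly captured facial image to the server as the
    reference information (hypothetical /upload_reference endpoint)."""
    with open(face_image_path, "rb") as f:
        response = requests.post(
            f"{server_url}/upload_reference",
            files={"face_image": f},
            data={"user_id": user_id},
            timeout=10,
        )
    response.raise_for_status()
    return response.json()  # e.g. the recommended product information
```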
In this embodiment, product information for recommendation to the user may be determined based on the reference information.
For example, facial images uploaded in advance by a number of users, together with recommended product information, may be stored beforehand. Recommended product information is product information that, after being pushed to a user, the user confirmed as describing a product well suited to his or her face. Recommended product information of the same type may be stored in correspondence with the facial image of the user who uploaded it. Given the facial image of the current user, pre-stored facial images whose similarity to it exceeds a similarity threshold can be found, while the type of product information the user wants to obtain is determined from the keyword in the search expression that indicates the product type. The recommended product information of that type, uploaded by the users whose pre-stored facial images were found, is then used as the product information for recommendation to the current user. For instance, if the user wants a hairpin, the search expression contains the keyword "hairpin"; the face shape of the current user can be determined from his or her facial image, pre-stored facial images whose face shape is similar to the current user's face shape above a similarity threshold can be retrieved, and the recommended hairpin information uploaded by the users of those retrieved images is used as the product information for recommendation to the user.
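A minimal sketch of this lookup is given below, assuming the facial images have already been turned into embedding vectors by some face-representation model (not shown) and using cosine similarity with an arbitrary threshold in place of the unspecified similarity measure.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def recommend_from_stored_faces(query_embedding, stored_records, product_type,
                                threshold: float = 0.8):
    """stored_records: list of dicts with an 'embedding' and a 'recommended'
    mapping from product type to recommended product information."""
    results = []
    for record in stored_records:
        if cosine_similarity(query_embedding, record["embedding"]) > threshold:
            info = record["recommended"].get(product_type)
            if info is not None:
                results.append(info)
    return results
```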
Step 203, the product information is sent to the terminal to present the product information to the user at the terminal.
In this embodiment, after the product information associated with the product keyword for recommendation to the user has been determined, it may be sent to the terminal so that the product information can be presented to the user at the terminal.
Referring to fig. 3, a flow chart of another embodiment of the information push method according to the present application is shown. The information pushing method provided by the embodiment of the present application may be performed by a server (e.g., the server 103 in fig. 1). The method comprises the following steps:
In step 301, in response to receiving a search expression including a product keyword sent by the terminal, a return result is generated and sent to the terminal.
In this embodiment, when a user wants to obtain information about a product, the user may enter, at the terminal, a search expression associated with that product, and the terminal sends the search expression to the server. The server may segment the search expression and, upon determining that it contains a product keyword, generate a return result. The return result includes prompt information for prompting the user to upload reference information through the terminal, for example to upload a facial image of the user. The return result may also include information related to the product keyword. The server sends the return result to the terminal as one search result, together with the other search results it has found. After receiving the return result, the terminal may present it to the user in the area of the search results page reserved for the return result; the other search results are also included in the search results page.
Step 302, generating feature information of the user based on the reference information, and determining product information associated with the product keywords for recommendation to the user based on the feature information of the user.
In this embodiment, the reference information sent by the terminal includes the facial image of the user and attribute information of the user, where the attribute information includes one or more of: age and sex. The user may fill in sex and age at the terminal; the terminal generates attribute information containing the user's age and sex and sends it to the server. After the user has entered the attribute information and selected or photographed a facial image, he or she clicks a button for uploading the attribute information and the facial image, completing the upload operation for the reference information.
In this embodiment, feature information of the user may be generated from the user's facial image. For example, the feature information of a user may include a characteristic of the user's skin, such as dryness or oiliness, determined from the facial image, and the color of the user's skin, likewise determined from the facial image. A plurality of pieces of feature information for matching, associated with the product keyword entered by the user, may be constructed in advance, each containing a preset value for every item of the user's feature information. Preset product information associated with the product keyword may be predetermined for each piece of feature information for matching, so that every piece of feature information for matching corresponds to one piece of product information associated with the product keyword. When product information for recommendation to the user is determined from the user's feature information, it may be selected from the plurality of pieces of product information associated with the product keyword.
For example, a user enters a search for "XX brand facial cleanser," and the user's characteristic information includes: characteristics of the user's skin, such as dryness or oiliness, determined based on the user's facial image, the color of the user's skin determined based on the user's facial image. The pre-built plurality of feature information for matching associated with each user-entered product keyword of the feature information for matching associated with the user-entered product keyword each includes: a preset value corresponding to the characteristic of the skin and a preset value corresponding to the color of the skin. The feature information for matching having the item matching each item in the user's feature among the plurality of pre-constructed feature information for matching associated with the product keyword input by the user may be determined, and the preset product information corresponding to the feature information for matching of the item matching each item in the user's feature, namely, the product information of a certain type of facial cleanser in the XX-brand facial cleanser, may be used as the product information associated with the product keyword for recommendation to the user.
In this embodiment, the user's feature information may include: object feature information of the facial object of the user's face that is associated with the product keyword, obtained from the user's facial image, and attribute information of the user.
When generating the user's feature information from the reference information, the facial object of the user's face associated with the product keyword may first be determined, and the object feature information of that facial object may then be obtained from the user's facial image. For example, if the user enters the search expression "XX brand eye cream", the facial object associated with the product keyword is the user's eyes, and object feature information of the eyes is obtained from the facial image. To obtain such object feature information, a neural network corresponding to each type of facial object may be trained in advance; once trained, the network corresponding to a type can be used to obtain object feature information for facial objects of that type. When training the network for a given type on its training set, every training sample is a user's facial image and carries labeling information. For example, if the type of facial object is the eyes and the object feature information of the eyes includes fine lines and color, the labeling information of each sample includes the grade of the fine lines of the eyes in the sample and the color of the eyes in the sample. The network corresponding to a type of facial object may be a convolutional neural network; in each training iteration it extracts feature points of the facial object in the sample image and predicts a vector whose components are the predicted items of the object feature information. The parameters of the network are adjusted according to the difference between its prediction and the labeling information. After training, the network can obtain object feature information for facial objects of its type.
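As one possible realization of such a per-facial-object network, the PyTorch sketch below predicts a fine-line grade and a color class for the eye region and shows a single training step. The architecture, the numbers of classes, and the use of cross-entropy loss are assumptions; the patent only states that a convolutional network is trained so that its prediction approaches the labeling information.

```python
import torch
import torch.nn as nn

class EyeFeatureNet(nn.Module):
    """Predicts the fine-line grade and the color class of the eye region."""
    def __init__(self, num_grades: int = 5, num_colors: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.grade_head = nn.Linear(32, num_grades)  # fine-line grade
        self.color_head = nn.Linear(32, num_colors)  # color class

    def forward(self, x):
        feat = self.backbone(x).flatten(1)
        return self.grade_head(feat), self.color_head(feat)

def training_step(model, optimizer, images, grade_labels, color_labels):
    """Adjust parameters according to the difference between prediction and labels."""
    criterion = nn.CrossEntropyLoss()
    grade_logits, color_logits = model(images)
    loss = criterion(grade_logits, grade_labels) + criterion(color_logits, color_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```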
When the object feature information of the facial object associated with the product keyword is needed, the user's facial image may be fed into the neural network corresponding to the type of that facial object to obtain the object feature information, which can then be combined with the user's attribute information to form the user's feature information.
In this embodiment, each type of facial object may be pre-bound with a plurality of pieces of preset feature information. The preset feature information bound to a type includes preset object feature information corresponding to the type and preset attribute information corresponding to the type, and every piece of preset feature information bound to a type differs from the others in at least one item. The product information corresponding to each piece of preset feature information pre-bound to a type may be predetermined.
For example, if the type of facial object is the eyes and the object feature information of the user's eyes includes fine lines and color, each piece of preset feature information bound to the eye type includes: a preset grade of fine lines, a preset color, a preset age, and a preset sex. The preset fine-line grade and preset color belong to the preset object feature information corresponding to the eyes, while the preset age and preset sex belong to the preset attribute information corresponding to the eyes. Every piece of preset feature information bound to the eye type differs from the others in at least one item.
When the user's feature information includes the object feature information of the facial object associated with the product keyword and the user's attribute information, and product information for recommendation to the user is to be determined from that feature information, the type of the facial object associated with the product keyword may first be determined, and the pieces of preset feature information pre-bound to that type are taken as the pieces of preset feature information corresponding to that facial object of the user's face. Each of these pieces corresponds to a piece of preset product information associated with the product keyword.
Then, the piece of preset feature information with the highest degree of matching with the user's feature information may be determined among the pieces of preset feature information corresponding to the facial object associated with the product keyword. To compute the degree of matching, the user's feature information and each piece of preset feature information may each be represented by a vector, every component of which is the normalized value of one item of that feature information. The vector similarity between the vector representing the user's feature information and the vector representing a piece of preset feature information can be computed and used as the degree of matching. The preset product information corresponding to the piece of preset feature information with the highest degree of matching is then used as the product information associated with the product keyword for recommendation to the user.
For example, a user enters a search for "XX brand eye cream", and the facial object of the user's face associated with the product keyword is the user's eyes. The characteristic information of the eyes of the user comprises: the grade of the fine lines of the eyes of the user, the color of the eyes of the user, the age of the user, the sex of the user. Each piece of preset characteristic information in the plurality of pieces of preset characteristic information corresponding to eyes of the user comprises: the grade of the preset fine lines, the preset color, the preset age and the preset sex. Each piece of preset characteristic information in a plurality of pieces of preset characteristic information corresponding to eyes of the user corresponds to preset product information associated with product keywords. For example, each piece of preset characteristic information corresponds to product information of one eye cream of the XX brand. The method comprises the steps of determining preset characteristic information with the largest matching degree with the characteristic information of the user in a plurality of preset characteristic information corresponding to the eyes of the face of the user, and taking the determined preset product information corresponding to the preset characteristic information with the largest matching degree with the characteristic information of the user, namely the product information of the preset eye cream of the XX brand corresponding to the preset characteristic information with the largest matching degree with the characteristic information of the user, as the product information which is used for recommending to the user and is associated with product keywords.
Step 303, the product information is sent to the terminal to present the product information to the user at the terminal.
In this embodiment, after the product information associated with the product keyword for recommendation to the user has been determined, it may be sent to the terminal so that the product information can be presented to the user at the terminal.
In this embodiment, the server may process the user's facial image with a preset image processing method corresponding to the product information recommended to the user, obtaining a processed facial image of the user. The processed facial image indicates the effect the user could expect after using the product corresponding to the recommended product information.
For example, the user enters the search expression "XX brand eye cream", and the product information recommended to the user is the product information of one XX-brand eye cream. The preset image processing method corresponding to this product information includes filtering the eye region of the user's facial image and changing the pixel values of the pixels on the eye contour within that region. Filtering the eye region reduces the fine lines of the eyes, while changing the pixel values along the eye contour lightens dark circles.
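A minimal OpenCV sketch of such a preset image processing method is given below. The eye-region rectangle and contour points are assumed to come from some face-landmark detector (not shown), and the blur kernel and brightening offset are illustrative values only.

```python
import cv2
import numpy as np

def apply_eye_cream_effect(face_image: np.ndarray, eye_box, contour_points):
    """eye_box: (x, y, w, h) of the eye region; contour_points: iterable of
    (x, y) pixel coordinates on the eye contour."""
    out = face_image.copy()
    x, y, w, h = eye_box
    # Filter the eye region to reduce fine lines.
    out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w], (5, 5), 0)
    # Change pixel values on the eye contour to lighten dark circles.
    for (px, py) in contour_points:
        out[py, px] = np.clip(out[py, px].astype(np.int32) + 30, 0, 255).astype(np.uint8)
    return out
```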
The server may send the processed facial image of the user to the terminal, where it is presented to the user so that the user can see the effect of using the product corresponding to the recommended product information. After receiving the processed facial image, the terminal may present it in the area of the search results page reserved for it.
Referring to fig. 4, as an implementation of the method shown in the foregoing drawings, the present application provides an embodiment of an information pushing apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2.
As shown in fig. 4, the information pushing apparatus of this embodiment includes: a first response unit 401, a second response unit 402, and a pushing unit 403. The first response unit 401 is configured to generate a return result in response to receiving a search expression including a product keyword sent by the terminal, and to send the return result to the terminal, the return result including prompt information for prompting the user of the terminal to perform an upload operation for reference information. The second response unit 402 is configured to determine, in response to receiving the reference information sent by the terminal, product information associated with the product keyword for recommendation to the user based on the reference information, the reference information including a facial image of the user. The pushing unit 403 is configured to send the product information associated with the product keyword to the terminal so that it can be presented to the user at the terminal.
In some optional implementations of the present embodiment, the second response unit is further configured to: generate feature information of the user based on the reference information, and determine product information associated with the product keyword for recommendation to the user based on the feature information of the user.
In some optional implementations of this embodiment, the reference information further includes: attribute information of the user, the attribute information including one or more of: age, sex.
In some optional implementations of this embodiment, the feature information of the user includes: object feature information of the facial object of the user's face that is associated with the product keyword, obtained based on the facial image of the user, and attribute information of the user.
In some optional implementations of this embodiment, the second response unit is further configured to: determine, among a plurality of pieces of preset feature information corresponding to the facial object of the user's face associated with the product keyword, the piece of preset feature information with the highest degree of matching with the feature information of the user; and use the preset product information corresponding to that piece as the product information associated with the product keyword for recommendation to the user.
In some optional implementations of this embodiment, the information pushing apparatus further includes: a processing unit configured to process the facial image of the user with a preset image processing method corresponding to the product information associated with the product keyword for recommendation to the user, obtain the processed facial image of the user, and send the processed facial image of the user to the terminal.
FIG. 5 shows a schematic diagram of a computer system suitable for use in implementing embodiments of the present application.
As shown in fig. 5, the computer system includes a Central Processing Unit (CPU) 501, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the computer system. The CPU 501, ROM 502, and RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
The following components are connected to the I/O interface 505: an input section 506; an output section 507; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as needed so that a computer program read therefrom is mounted into the storage section 508 as needed.
In particular, the processes described in the embodiments of the present application may be implemented as computer programs. For example, embodiments of the application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising instructions for performing the method shown in the flowchart. The computer program can be downloaded and installed from a network through the communication portion 509, and/or installed from the removable medium 511. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 501.
The present application also provides a server that may be configured with one or more processors; and a memory for storing one or more programs, wherein the one or more programs may include instructions for performing the operations described in the above embodiments. The one or more programs, when executed by the one or more processors, cause the one or more processors to perform the operations described in the above embodiments.
The present application also provides a computer-readable medium, which may be included in the server described above or may exist separately without being assembled into a server. The computer-readable medium carries one or more programs that, when executed by a server, cause the server to: in response to receiving a search expression including a product keyword sent by a terminal, generate a return result and send it to the terminal, the return result including prompt information for prompting the user of the terminal to perform an upload operation for reference information; in response to receiving the reference information sent by the terminal, generate feature information of the user based on the reference information and determine, based on the feature information of the user, product information associated with the product keyword for recommendation to the user, the reference information including a facial image of the user; and send the product information associated with the product keyword to the terminal so that the product information is presented to the user at the terminal.
The computer readable medium according to the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
The above description is only illustrative of the preferred embodiments of the present application and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in the present application is not limited to technical solutions formed by the specific combinations of the technical features described above, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example technical solutions formed by replacing the above features with technical features disclosed in the present application (but not limited thereto) that have similar functions.

Claims (12)

1. An information pushing method is characterized by comprising the following steps:
generating a return result in response to receiving a search expression including a product keyword sent by a terminal, and sending the return result to the terminal, wherein the return result comprises: prompt information for prompting a user of the terminal to perform an upload operation for reference information;
in response to receiving the reference information sent by the terminal, determining product information associated with the product keyword for recommendation to the user based on the reference information, the reference information comprising: a facial image of the user;
transmitting the product information to a terminal to present the product information to the user at the terminal;
the determining product information associated with the product keyword for recommendation to the user based on the reference information includes:
generating feature information of the user based on the reference information, and determining, based on the feature information of the user, product information associated with the product keyword for recommendation to the user; wherein the determining product information associated with the product keyword for recommendation to the user based on the feature information of the user includes: among a plurality of pieces of feature information for matching that are constructed in advance and associated with the product keyword input by the user, each piece containing preset values corresponding to the respective items of the feature information of the user, determining the piece of feature information for matching whose items match every item of the feature information of the user, and taking the preset product information corresponding to that piece of feature information for matching as the product information associated with the product keyword for recommendation to the user.
2. The method of claim 1, wherein the reference information further comprises: attribute information of the user, the attribute information including one or more of: age, sex.
3. The method of claim 2, wherein the characteristic information of the user comprises: object feature information of a face object of the face of the user, which is associated with the product keyword, and attribute information of the user, the object feature information being obtained based on a face image of the user.
4. The method of claim 3, wherein determining product information associated with the product keywords for recommendation to the user based on the characteristic information of the user comprises:
determining, among a plurality of pieces of preset feature information corresponding to the facial object of the face of the user associated with the product keyword, the piece of preset feature information with the highest degree of matching with the feature information of the user;
and taking the preset product information corresponding to that piece of preset feature information as the product information associated with the product keyword for recommendation to the user.
5. The method according to one of claims 1 to 4, characterized in that the method further comprises:
processing the facial image of the user with a preset image processing method corresponding to the product information associated with the product keyword for recommendation to the user, obtaining a processed facial image of the user, and sending the processed facial image of the user to the terminal.
6. An information pushing apparatus, characterized by comprising:
the first response unit is configured to generate a return result in response to receiving a search expression including a product keyword sent by a terminal, and send the return result to the terminal, wherein the return result comprises: prompt information for prompting a user of the terminal to perform an upload operation for reference information;
a second response unit configured to determine product information associated with the product keyword for recommendation to the user based on the reference information in response to receiving the reference information transmitted by the terminal, the reference information including: a facial image of the user;
a pushing unit configured to send the product information to the terminal to present the product information to the user at the terminal;
the second response unit is further configured to: generate feature information of the user based on the reference information, and determine, based on the feature information of the user, product information associated with the product keyword for recommendation to the user; the second response unit is further configured to: among a plurality of pieces of feature information for matching that are constructed in advance and associated with the product keyword input by the user, each piece containing preset values corresponding to the respective items of the feature information of the user, determine the piece of feature information for matching whose items match every item of the feature information of the user, and take the preset product information corresponding to that piece of feature information for matching as the product information associated with the product keyword for recommendation to the user.
7. The apparatus of claim 6, wherein the reference information further comprises: attribute information of the user, the attribute information including one or more of: age, sex.
8. The apparatus of claim 7, wherein the characteristic information of the user comprises: object feature information of a face object of the face of the user, which is associated with the product keyword, and attribute information of the user, the object feature information being obtained based on a face image of the user.
9. The apparatus according to claim 8, wherein the second response unit is further configured to determine, among a plurality of pieces of preset feature information corresponding to the facial object of the face of the user associated with the product keyword, the piece of preset feature information with the highest degree of matching with the feature information of the user; and take the preset product information corresponding to that piece of preset feature information as the product information associated with the product keyword for recommendation to the user.
10. The apparatus according to one of claims 6-9, characterized in that the apparatus further comprises:
the processing unit is configured to process the facial image of the user in a preset image processing mode corresponding to the product information associated with the product keyword and used for being recommended to the user, obtain the processed facial image of the user, and send the processed facial image of the user to the terminal.
11. A server, comprising:
one or more processors;
a memory for storing one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-5.
12. A computer readable medium, having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the method according to any of claims 1-5.
CN201811302008.8A 2018-11-02 2018-11-02 Information pushing method and device Active CN111143663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811302008.8A CN111143663B (en) 2018-11-02 2018-11-02 Information pushing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811302008.8A CN111143663B (en) 2018-11-02 2018-11-02 Information pushing method and device

Publications (2)

Publication Number Publication Date
CN111143663A CN111143663A (en) 2020-05-12
CN111143663B true CN111143663B (en) 2023-08-18

Family

ID=70516201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811302008.8A Active CN111143663B (en) 2018-11-02 2018-11-02 Information pushing method and device

Country Status (1)

Country Link
CN (1) CN111143663B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160009900A (en) * 2014-07-17 2016-01-27 주식회사 인프라웨어 Method and apparatus for recommending an application based on image recognition
CN106156144A (en) * 2015-04-13 2016-11-23 腾讯科技(深圳)有限公司 Information-pushing method and device
CN106303354A (en) * 2016-08-18 2017-01-04 北京奇虎科技有限公司 A kind of face specially good effect recommends method and electronic equipment
CN107590255A (en) * 2017-09-19 2018-01-16 百度在线网络技术(北京)有限公司 Information-pushing method and device
CN107729492A (en) * 2017-10-18 2018-02-23 广东小天才科技有限公司 A kind of method for pushing of exercise, system and terminal device
CN107885889A (en) * 2017-12-13 2018-04-06 聚好看科技股份有限公司 Feedback method, methods of exhibiting and the device of search result
CN108446390A (en) * 2018-03-22 2018-08-24 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180197221A1 (en) * 2017-01-06 2018-07-12 Dragon-Click Corp. System and method of image-based service identification


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
个性化三维人脸及其在服装虚拟展示中的应用研究 (Research on personalized 3D faces and their application in virtual garment display); 吕海清; 《中国优秀硕士论文电子期刊网》 (China Excellent Master's Theses Electronic Journal Network); full text *

Also Published As

Publication number Publication date
CN111143663A (en) 2020-05-12

Similar Documents

Publication Publication Date Title
CN108830235B (en) Method and apparatus for generating information
JP6316447B2 (en) Object search method and apparatus
CN110489582B (en) Method and device for generating personalized display image and electronic equipment
US11310559B2 (en) Method and apparatus for recommending video
CN107590255B (en) Information pushing method and device
US10289927B2 (en) Image integration search based on human visual pathway model
CN110740389B (en) Video positioning method, video positioning device, computer readable medium and electronic equipment
CN109300059B (en) Dish recommending method and device
CN107832720B (en) Information processing method and device based on artificial intelligence
CN110728188A (en) Image processing method, device, system and storage medium
CN117194772B (en) Content pushing method and device based on user tag
CN109241930B (en) Method and apparatus for processing eyebrow image
CN110992127A (en) Article recommendation method and device
CN114528474A (en) Method and device for determining recommended object, electronic equipment and storage medium
CN112532507A (en) Method and device for presenting expression image and method and device for sending expression image
CN111143663B (en) Information pushing method and device
CN110674388A (en) Mapping method and device for push item, storage medium and terminal equipment
CN111787042B (en) Method and device for pushing information
CN112989177B (en) Information processing method, information processing device, electronic equipment and computer storage medium
US11893594B2 (en) Image processing system, image processing method, and program
CN111949813B (en) Friend-making request method, friend-making request device, friend-making request computer device, friend-making request storage medium
CN116703503A (en) Intelligent recommendation method and system for campus canteen dishes
JP2020004410A (en) Method for facilitating media-based content share, computer program and computing device
CN112148962A (en) Method and device for pushing information
CN108121969B (en) Method and apparatus for processing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant