CN111415185A - Service processing method, device, terminal and storage medium - Google Patents


Info

Publication number
CN111415185A
Authority
CN
China
Prior art keywords
image
information
value transfer
terminal
product information
Prior art date
Legal status
Granted
Application number
CN201910016186.2A
Other languages
Chinese (zh)
Other versions
CN111415185B (en)
Inventor
李胤恺
耿志军
郭润增
黄家宇
刘文君
吕中方
周航
孔维伟
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910016186.2A
Publication of CN111415185A
Application granted
Publication of CN111415185B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/02: Marketing; Price estimation or determination; Fundraising
    • G06Q30/0281: Customer communication at a business location, e.g. providing product or service information, consulting
    • G06Q30/0282: Rating or review of business operators or products
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a service processing method, device, terminal, and storage medium, belonging to the field of computer technology. The method comprises the following steps: when a person is detected within the target range, recognizing a collected face image to acquire the user information corresponding to that face image; acquiring a first image; acquiring target product information from a plurality of candidate product information items corresponding to the user information; displaying, according to the first image and the target product information, a second image that shows the effect of the person in the first image applying the target product; and, when a value transfer instruction is acquired, sending a value transfer request. The invention determines user information by face recognition and recommends products on that basis, and also provides a virtual trial function, so no physical product needs to be applied or worn. Selection, trial, and value transfer are integrated into one flow without manual participation, which reduces labor cost and effectively improves the efficiency of the service processing method.

Description

Service processing method, device, terminal and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a terminal, and a storage medium for processing a service.
Background
With the development of computer technology, people can handle many kinds of services by computer to improve work efficiency. Assisting users in selecting, trying, and purchasing products is one such service.
At present, users typically try products on their own bodies, shop assistants are needed to help with selection, and purchased products must be carried to a counter for value transfer.
This approach requires the user to personally apply or wear a physical product, which is unhygienic and time-consuming. If the user wants to compare several products, each must be applied or worn in turn, making selection inconvenient. Moreover, selection, trial, and value transfer are disjointed steps, each consuming substantial time and labor, so the efficiency of this service processing method is low.
Disclosure of Invention
The embodiments of the present invention provide a service processing method, a service processing device, a terminal, and a storage medium, which can solve the problem of low efficiency in the related art. The technical scheme is as follows:
in one aspect, a method for processing a service is provided, where the method includes:
when the target range includes a person, identifying the collected face image to acquire user information corresponding to the face image;
acquiring a first image;
acquiring target product information from a plurality of candidate product information corresponding to the user information;
displaying a second image corresponding to the first image according to the first image and the target product information, wherein the second image is used for embodying the effect that a person in the first image applies a target product corresponding to the target product information;
and when a value transfer instruction corresponding to the target product information is acquired, sending a value transfer request to a server, wherein the value transfer request is used for indicating the server to execute service processing corresponding to the value transfer request.
In one aspect, a service processing apparatus is provided, where the apparatus includes:
the acquisition module is used for identifying the acquired face image and acquiring user information corresponding to the face image when the target range is detected to include a person;
the acquisition module is further used for acquiring a first image;
the acquisition module is further configured to acquire target product information from a plurality of candidate product information corresponding to the user information;
the display module is used for displaying a second image corresponding to the first image according to the first image and the target product information, wherein the second image is used for showing the effect that people in the first image apply the target product corresponding to the target product information;
and the sending module is used for sending a numerical value transfer request to a server when a numerical value transfer instruction corresponding to the target product information is obtained, wherein the numerical value transfer request is used for indicating the server to execute service processing corresponding to the numerical value transfer request.
In one aspect, a terminal is provided, where the terminal includes a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement an operation performed by the service processing method.
In one aspect, a computer-readable storage medium is provided, and at least one instruction is stored in the computer-readable storage medium and loaded and executed by a processor to implement operations performed by the business processing method.
In the embodiments of the present invention, when a person is detected, the terminal determines the corresponding user information by performing face recognition on the person and can recommend candidate product information based on that user information. The terminal can then collect an image and, combining it with the target product information, display an image of the person as they would appear after applying the target product, thereby realizing a virtual trial. When the user wants to purchase the target product, a value transfer request can be sent to the server based on the obtained value transfer instruction. In this process, user information is determined by face recognition for product recommendation, so the user does not need to log in manually; a virtual trial function is provided, so no physical product needs to be applied or worn; and the selection, trial, and value transfer links are integrated without manual participation. This reduces labor cost while effectively improving the efficiency of the service processing method.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The following drawings show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is an implementation environment of a service processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of a service processing method according to an embodiment of the present invention;
fig. 3 is a flowchart of a service processing method according to an embodiment of the present invention;
fig. 4 is a flowchart of a service processing method according to an embodiment of the present invention;
fig. 5 is a flowchart of a service processing method according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 shows an implementation environment of a service processing method according to an embodiment of the present invention. Referring to fig. 1, the implementation environment may include a terminal 101 and a server 102. The terminal 101 and the server 102 may be connected through a network for data interaction: the terminal 101 may send a network request to the server 102, and corresponding steps are then performed based on that request so as to provide the corresponding service.
In the embodiment of the present invention, the terminal 101 may provide a face recognition function, a virtual trial function of a product, and a value transfer function, when a value transfer operation is detected, the terminal 101 may send a value transfer request of the product to the server 102, and the server 102 executes corresponding service processing based on the value transfer request, thereby providing a value transfer service for the terminal 101.
It should be noted that the terminal 101 may access the server 102 through a client installed on the terminal 101, or may access the server 102 through a web portal, which is not limited in the embodiment of the present invention.
Fig. 2 is a flowchart of a service processing method according to an embodiment of the present invention, where the service processing method is applied to a terminal, and the terminal may be the terminal 101 in the implementation environment shown in fig. 1. Referring to fig. 2, the service processing method may include the steps of:
201. when the target range includes a person, the terminal identifies the collected face image and acquires user information corresponding to the face image.
In the embodiment of the present invention, the terminal may have an image acquisition function and a face recognition function. The terminal can collect an image, recognize it, and determine whether a person is present within a target range in the image; the target range may be the whole image or a partial region of it. When the terminal determines that the target range includes a person, this indicates that a user may want to use a business processing service provided by the terminal. The terminal may then collect a face image and recognize it, or perform face recognition on the previously collected image, to determine the identity of the user corresponding to the face and provide the corresponding business processing service.
Specifically, an image capturing component may be mounted on the terminal, and image capturing may be performed by the image capturing component. For example, the terminal may be a cosmetic mirror or a fitting mirror, a camera may be installed on the cosmetic mirror or the fitting mirror, and the cosmetic mirror or the fitting mirror may implement the image acquisition function and the face recognition function through the camera. The product provided by the terminal may include a makeup product or a dress, and the like, which is not limited in the embodiment of the present invention.
In one possible implementation, the target range is the terminal's image acquisition range; that is, the terminal collects images of the target range. When a user wants to use a business processing service provided by the terminal, the user may move relative to the terminal so that their body, or a target part of the body such as the face, falls within the image acquisition range. The terminal collects images of the target range, and when a collected image is determined to include a person, that is, when the target range is detected to include a person, the terminal may perform the step of identifying the user.
In another possible implementation, the terminal's image acquisition range contains the target range; that is, the target range is a partial region of the acquisition range. Accordingly, when a user wants to use a business processing service provided by the terminal, the user needs to hold a posture that keeps the body, or the target part of the body, within that region of the collected image. In step 201, the terminal collects images of its acquisition range and detects whether the target range within a collected image includes a person; if so, that is, when the target range is detected to include a person, the terminal may perform the step of identifying the user.
It should be noted that the above provides two implementations for detecting that the target range includes a person; the embodiment of the present invention does not limit the specific definition of the target range. In one possible implementation, a detection period may be configured on the terminal, which then periodically collects images and checks whether a collected image includes a person. The terminal may instead collect images in real time; neither the choice between these implementations nor, if a detection period is used, its value is limited by the embodiments of the present invention. Any target detection algorithm may be used to determine whether a collected image includes a person.
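The periodic detection loop described above can be sketched as follows. This is a minimal illustration only: `frames` stands in for the sequence of images collected once per detection period, and `contains_person` is a hypothetical stand-in for whatever target detection algorithm the terminal uses; neither name comes from the patent.

```python
def first_frame_with_person(frames, contains_person):
    """Scan collected frames in order (one per detection period) and
    return the index and frame of the first one whose target range
    includes a person, or None if no frame does.

    `contains_person` is any target detection predicate (assumed, not
    specified by the source)."""
    for i, frame in enumerate(frames):
        if contains_person(frame):
            return i, frame  # person detected: proceed to face recognition
    return None
```

In practice the loop would run indefinitely against a camera feed; a finite frame list keeps the sketch testable.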
When the terminal determines that the target range includes a person, it can further determine that person's identity, specifically by face recognition. The terminal may collect a face image and recognize it, or it may perform face recognition on the image that was collected when the person was detected in the target range: if that image includes a face, the identity of the user can be recognized from it. If the previously collected image does not include a face, or face recognition on it fails, the terminal may then perform the steps of collecting a face image and recognizing the collected face image.
Specifically, when performing face recognition, the terminal compares the collected face image with the face images in the stored information, and performs different steps depending on the recognition result. The recognition result falls into the following two cases, and in each case the steps the terminal performs may be as follows:
in the first case, when the acquired face image matches the face image in the storage information, that is, when it is determined that the storage information includes the user information corresponding to the face image according to the recognition result, the terminal may acquire the user information corresponding to the face image from the storage information.
The terminal may store user information; by recognizing the face image, it determines the identity of the user in the image and retrieves the stored user information. For example, the user may have previously registered an account with the terminal, or with an application installed on the terminal, and the user information for that account may have been stored during registration. That is, the user may be a member, and in step 201 the terminal identifies the user to determine whether they are a member. For example, a membership system may be installed on the terminal, and when the user is determined to be a member, the terminal obtains the corresponding member information (user information) from the membership system.
Of course, the stored information may also be information on other terminals or servers, and the terminal may access the other terminals or the servers to obtain the user information corresponding to the face image. The embodiment of the present invention does not limit what specific implementation manner is adopted.
In the second case, when the collected face image fails to match any face image in the stored information, that is, when the recognition result shows that the stored information does not include user information corresponding to the face image, the terminal may display a user information setting interface to obtain the user information corresponding to the face image.
The user may not have previously registered an account with the terminal or with the relevant application installed on it, in which case the stored information does not include user information corresponding to the face image. The terminal may then display a user information setting interface containing a number of input items or options that the user fills in or selects to set the user information. For example, if the user is not a member, the terminal may display a registration interface where the user enters the information required to register an account, thereby providing the user information corresponding to the face image. The terminal may also store the face image and the user information in association, or send them to another terminal or server that stores user information in association; for example, in a membership-system implementation, the terminal may add the newly registered member's user information to the membership system.
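The two recognition cases above can be sketched as one lookup with a registration fallback. This is a hedged illustration, not the patented implementation: `match` stands in for whatever face comparison the terminal uses, and `register` stands in for the user information setting interface; both names are assumptions.

```python
def acquire_user_info(face, stored_users, match, register):
    """Case 1: the collected face matches a stored face, so the
    corresponding user information is returned directly.
    Case 2: no stored face matches, so the `register` callback (a
    stand-in for the user-information setting interface) supplies the
    user information, and the new face/info pair is added to the store
    (analogous to updating the membership system)."""
    for stored_face, info in stored_users:
        if match(face, stored_face):
            return info                # case 1: existing member
    info = register(face)              # case 2: registration flow
    stored_users.append((face, info))  # persist the new member
    return info
```

Either path ends with user information in hand, so the caller need not distinguish the two cases.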
Through this face recognition process, the terminal determines the user's identity from a collected face image, without the user manually entering an account and password or using another terminal for identity verification. This effectively reduces user operations, lowers their complexity, and improves the efficiency of acquiring user information.
In a possible implementation manner, after the terminal acquires the user information corresponding to the face image, the user information may be displayed in the current interface. The user information may include identification information of the user, where the identification information may be a name of the user, and of course, the user information may also include other information of the user, for example, an avatar of the user, and the like, which is not limited in this embodiment of the present invention.
202. The terminal acquires a first image.
In the embodiment of the present invention, the terminal may also provide a virtual trial function. The terminal collects a first image containing the person who is to perform the virtual trial and simulates the effect of that person after applying the product, so that the user can see the trial effect without applying or wearing a physical product. In one possible implementation, the terminal collects an image of its image acquisition range through an image acquisition component to obtain the first image. After acquiring the first image, the terminal may also display it.
Specifically, the terminal may obtain the first image in real time, or may obtain the first image every acquisition period, and of course, the terminal may also obtain the first image after step 201, and perform a subsequent virtual trial process based on the first image, which is not limited in the embodiment of the present invention.
For example, take the terminal to be a product trial terminal. The product trial terminal may be a cosmetic mirror with a virtual makeup trial function; it may equally be a fitting mirror with a virtual fitting function, or provide trials of other products such as accessories. The product may be a makeup product, clothing, or anything else that needs to be selected and tried before purchase. The user can walk up to the product trial terminal; when the terminal detects that its field of view, or a certain area within it, includes a person, it can activate the face recognition function, identify the user, and collect and display the first image in real time.
It should be noted that, in steps 201 and 202 above, the terminal may perform step 201 first and then step 202, that is, provide the virtual trial function after determining the user's identity; it may perform steps 201 and 202 together when a person is detected within the target range; or it may perform step 202 first upon detecting a person and then perform step 201 while step 202 is under way. The embodiment of the present invention does not specifically limit the execution order of steps 201 and 202.
203. And the terminal acquires a plurality of candidate product information corresponding to the user information based on the user information.
After the terminal acquires the user information, a corresponding product can be recommended to the user according to the user information. The terminal may store a plurality of candidate product information, and the terminal may select the plurality of candidate product information for the current user according to the user information.
Specifically, the terminal may acquire the plurality of candidate product information items corresponding to the user information according to at least one of: the historical purchase information and historical browsing information corresponding to the user information, and the historical purchase information and historical browsing information corresponding to other user information whose similarity to the user information is greater than a threshold.
In one possible implementation, the terminal stores historical purchase information and/or historical browsing information for a number of users. When the terminal needs to recommend product information to a user, it can recommend based on that user's own history, or based on the history of other users similar to that user. For example, if the user previously bought lipstick of a certain brand or color number, the terminal may take the various lipsticks of that brand or color number as the plurality of candidate product information items; the terminal may also obtain other candidate product information for the user to choose from.
The threshold may be preset by a relevant technician, and the specific value of the threshold is not limited in the embodiment of the present invention.
The above provides only some implementations for obtaining the candidate product information corresponding to the user information; the terminal may use others. For example, different user information on the terminal may correspond to different product groups: if the user information indicates that the user is male, the male profile corresponds to at least one product group containing candidate product information suitable for men, and the terminal obtains the candidate product information from that group.
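The history-plus-similar-users recommendation described above can be sketched as follows. This is a minimal sketch under assumptions: users are plain dicts with hypothetical `purchased` and `browsed` fields, and `similarity` is whatever user-similarity measure the terminal uses; none of these names are specified by the source.

```python
def candidate_products(user, all_users, similarity, threshold):
    """Collect candidate product IDs from the user's own purchase and
    browsing history, plus the histories of every other user whose
    similarity to this user exceeds `threshold`, preserving first-seen
    order and dropping duplicates."""
    sources = [user]
    for other in all_users:
        if other is not user and similarity(user, other) > threshold:
            sources.append(other)  # a sufficiently similar user
    candidates = []
    for u in sources:
        for product in u.get("purchased", []) + u.get("browsed", []):
            if product not in candidates:
                candidates.append(product)
    return candidates
```

A real system would rank rather than merely deduplicate, but the flow of "own history first, then similar users' histories" matches the description above.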
In a possible implementation manner, when the terminal acquires the plurality of candidate product information, the plurality of candidate product information may be further displayed in the current interface. Further, the terminal may display the plurality of candidate product information in a target area in the current interface. For example, the terminal may display the plurality of candidate product information in a lower area of the current interface.
In one specific possible embodiment, the candidate product information may be displayed in groups. The grouping may be determined by the category of the corresponding products, by their brand, or by some field within the candidate product information. For example, by category, products may include lipstick, eye shadow, blush, concealer, eyebrow pencil, hair clips, earrings, jackets, trousers, skirts, shoes, or hats; these categories are only examples. As another example, the terminal may treat each brand's product information as a group, or treat the product information for each color number of lipstick as a group and the product information for each color of eyebrow pencil as a group.
In one possible implementation, the terminal may display a product menu in the target area, where an identification of at least one product group and at least one candidate product information in each product group may be displayed.
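The grouping that backs such a product menu can be sketched with a single pass over the candidates. This is an illustration only; the `key` callback (category, brand, color number, and so on) and the dict-of-lists shape are assumptions, not the patented data structure.

```python
def group_candidates(candidates, key):
    """Group candidate product information items by a chosen key
    (category, brand, color number, ...), preserving insertion order
    both across groups and within each group, as a basis for a product
    menu showing each group with its candidate items."""
    groups = {}
    for item in candidates:
        groups.setdefault(key(item), []).append(item)
    return groups
```

For example, grouping by `category` yields one menu section per category, each listing its candidate products in the order they were recommended.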
204. And the terminal acquires target product information from the candidate product information.
After the terminal acquires the information of the candidate products, the terminal can select the information of the target product from the information of the candidate products to perform virtual trial. Specifically, the process of selecting the target product information from the plurality of candidate product information by the terminal may be implemented by the terminal according to a certain selection rule, or may be implemented based on a user selection.
Specifically, the process of acquiring the target product information by the terminal may be implemented by any one of the following implementation manners:
in the first mode, the terminal acquires the first candidate product information as the target product information according to the sequence of the plurality of candidate product information.
In the first mode, after acquiring the plurality of candidate product information items, the terminal takes the item ranked first and uses it as the target product information in the following steps, realizing a virtual trial of the corresponding target product. No user selection is needed, which improves virtual trial efficiency and saves the user's operation time.
And secondly, the terminal acquires candidate product information corresponding to the product selection instruction as the target product information according to the product selection instruction.
In the second mode, after acquiring the plurality of candidate product information items, the terminal displays them. The user selects the candidate product information they want to try from those displayed and performs a product selection operation on the terminal. When the terminal receives the product selection instruction triggered by that operation, it takes the candidate product information selected by the user as the target product information and performs the following steps to realize the virtual trial.
In the third mode, the terminal randomly acquires one piece of candidate product information from the plurality of candidate product information as the target product information.
In the third mode, after the terminal acquires the information of the plurality of candidate products, one piece of candidate product information can be randomly selected for virtual trial. The random acquisition process may be implemented by any random algorithm, which is not limited in the embodiment of the present invention.
It should be noted that the above are merely three manners of obtaining the target product information, and the terminal may also obtain the target product information from the plurality of candidate product information in other manners. For example, based on the matching degrees between the plurality of candidate product information and the user information, the candidate product information with the largest matching degree may be obtained as the target product information.
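The selection manners above can be sketched in a few lines. The following is a minimal illustration; the function name, arguments, and score values are hypothetical, not part of the patent.

```python
import random

def select_target_product(candidates, selection_index=None, scores=None):
    """Illustrative sketch of the target-product selection modes described
    above. candidates is an ordered list of candidate product information;
    selection_index is a user's choice (mode two), if any; scores holds
    hypothetical matching degrees with the user information, if any."""
    if selection_index is not None:
        # Mode two: use the candidate selected by the user.
        return candidates[selection_index]
    if scores is not None:
        # Extension: candidate with the largest matching degree.
        return max(zip(candidates, scores), key=lambda pair: pair[1])[0]
    # Mode one: first-ranked candidate. Mode three would instead be
    # random.choice(candidates).
    return candidates[0]

products = ["lipstick A", "blush B", "lipstick C"]
print(select_target_product(products))                        # mode one
print(select_target_product(products, selection_index=2))     # mode two
print(select_target_product(products, scores=[0.2, 0.9, 0.5]))
```

Any of the three modes then feeds the chosen target product information into the display steps that follow.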
The above step 203 and step 204 are a process of obtaining the target product information from the plurality of candidate product information corresponding to the user information. The terminal may recommend the plurality of candidate product information for the user based on the user information, and then obtain the target product information from the plurality of candidate product information, so as to perform the following steps for the virtual trial. In a possible implementation manner, the terminal may also display a plurality of preset candidate product information without acquiring the user information when step 202 is executed, and select one piece of candidate product information for the virtual trial in any of the above manners of acquiring the target product information.
It should be noted that, the step 202 may be executed simultaneously in the process of executing the step 203 and the step 204, or may be executed before the step 203 and the step 204 are executed, or of course, may be executed after the step 203 and the step 204 are executed by the terminal, and the execution timing of the step 202 is not limited in the embodiment of the present invention.
205. The terminal displays a second image corresponding to the first image according to the first image and the target product information.
When the terminal acquires the target product information and also acquires the first image, the first image can be processed based on the target product information, so that the second image is displayed. The second image is used for showing the effect that the person in the first image applies the target product corresponding to the target product information.
Specifically, different products may be applied to different positions on a person; for example, lipstick may be applied to the lips, blush to the cheeks, a coat worn on the upper body, shoes worn on the feet, and so on. Therefore, the terminal may first obtain the position corresponding to the product and then process the corresponding position of the first image. The process of displaying the second image by the terminal in step 205 may also be implemented in various manners, which are described below through modes one to three; the terminal may display the second image in any of these modes, or may use other manners, which is not limited in this embodiment of the present invention.
In the first mode, the terminal determines, according to the first image and the target product information, position information of a product image corresponding to the target product information, where the product image refers to an image of the product as applied to the person in the first image; the terminal displays the first image, and displays the product image corresponding to the target product information at the position indicated by the position information in the first image.
When the terminal performs the virtual trial, the first image and the product image corresponding to the target product information may be displayed in combination, so as to achieve the effect of the product being applied to the person. Different target product information may correspond to different product images, and the terminal may acquire the product image corresponding to the target product information according to the target product information. For example, a brownish-red lipstick may correspond, in an image, to one or more pixel values whose pairwise differences do not exceed a threshold.
In a possible implementation manner, the target product information may further correspond to human body part information, where the human body part information is used to indicate a part where a target product corresponding to the target product information is located when the target product is applied to a person, and the terminal may obtain the human body part information corresponding to the target product information and determine, based on the human body part information and the first image, position information corresponding to the product image in the first image. For example, the position information corresponding to the lipstick is the region where the lips are located in the first image, and the position information corresponding to the blush is the region where the cheeks are located in the first image.
Further, the shape and size of each human body part may differ, so when the terminal determines the position information of the product image, it needs to consider the shape and size of the corresponding part of the person in the first image, and may use information on the position of the corresponding part in the first image as the position information of the product image. Specifically, the terminal may perform human body part detection on the first image and determine the coordinates, in the first image, of the human body part corresponding to the product image. Of course, the position information may also be expressed in other manners, for example, as a relative position with respect to a certain position point in the first image, which is not limited in this embodiment of the present invention.
After the terminal acquires the first image and the product image and determines the position of the product image in the first image, the terminal may display the first image and the product image; for the product image, the terminal may display it at the corresponding position in the first image based on the acquired position information.
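As a rough illustration of mode one, the snippet below pastes a product image at the position indicated by the position information. The image layout (rows of RGB tuples) and the row/column position encoding are assumptions for demonstration only.

```python
def overlay_product(base, product, top, left):
    """Sketch of mode one: displaying a product image at the position
    indicated by the position information in the first image. Images are
    lists of rows of RGB tuples; top/left encode the position information."""
    result = [row[:] for row in base]          # copy the first image
    for r, product_row in enumerate(product):
        for c, pixel in enumerate(product_row):
            result[top + r][left + c] = pixel  # product pixel over base pixel
    return result

base = [[(255, 255, 255)] * 4 for _ in range(4)]  # blank 4x4 first image
lip = [[(180, 40, 40)] * 2 for _ in range(1)]     # 1x2 lipstick patch
combined = overlay_product(base, lip, top=2, left=1)
print(combined[2][1])  # (180, 40, 40)
```

A real implementation would additionally warp the product image to the shape of the detected body part rather than pasting a rectangle.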
In the second mode, the terminal generates a second image corresponding to the first image according to the first image and the product image corresponding to the target product information, where the product image is an image of the product as applied to the person in the first image; the terminal then displays the second image.
After the terminal acquires the first image and the target product information, the terminal may first acquire the product image corresponding to the target product information, and then generate the second image based on the first image and the product image, where the second image is the image obtained by applying the product image to the first image. Specifically, the terminal may also acquire the position information of the product image and determine the position of the product image in the first image, so that image processing may be performed according to the position information when the second image is generated. Of course, in another possible implementation manner, the terminal may determine the position information of the first image and of the product image in the second image respectively, and render based on these two pieces of position information to obtain the second image. The embodiment of the present invention does not limit which specific implementation manner is adopted.
In a possible implementation manner, the first image may be a Three-dimensional (3D) image, and when the terminal generates the second image, the terminal may perform interpolation processing on the first image based on the product image to obtain the second image.
In the third mode, the terminal processes the first image according to pixel information corresponding to the target product information to obtain the second image corresponding to the first image, where the pixel information is used to represent the pixel values of the pixel points corresponding to the product and the distribution information of those pixel points; the terminal then displays the second image.
In the third mode, different product information may correspond to different pixel information, and after the terminal acquires the target product information, the terminal may acquire the pixel information corresponding to the target product information, so that the pixel information is used as a data basis for processing the first image, and the pixel values of the pixel points in the first image are processed to obtain the second image.
Specifically, the pixel information may include a pixel value and distribution information of a pixel point, and the terminal may determine, based on the first image, location information corresponding to the target product information, so that the pixel point at the location indicated by the location information in the first image may be processed based on the pixel information to obtain a second image.
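Mode three can be sketched as tinting the pixel points at the indicated positions toward the product's pixel value. The blending weight, the region encoding, and the color values below are illustrative assumptions, not values from the patent.

```python
def apply_pixel_info(image, color, region, weight=0.5):
    """Sketch of mode three: processing the pixel points of the first image
    at the positions indicated by the position information, using the pixel
    value carried in the product's pixel information. `region` plays the
    role of the distribution information; `weight` is an assumed blend."""
    result = [row[:] for row in image]
    for r, c in region:
        base = result[r][c]
        result[r][c] = tuple(
            round((1 - weight) * b + weight * p) for b, p in zip(base, color)
        )
    return result

image = [[(200, 180, 170)] * 3 for _ in range(3)]  # uniform skin-tone image
lip_region = [(1, 0), (1, 1), (1, 2)]              # hypothetical lip pixel positions
second = apply_pixel_info(image, color=(180, 30, 60), region=lip_region)
print(second[1][1])  # (190, 105, 115): blended toward the lipstick color
```

Pixels outside the region keep their original values, matching the idea that only the position indicated by the position information is processed.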
It should be noted that the process of displaying the second image by the terminal is only exemplarily described through the three modes above, and the process may also be implemented in other manners, which is not limited in the embodiment of the present invention. In each mode, the terminal processes the acquired image comprehensively based on the image including the person and either the image of the product as applied to the person or the related information of the product, so as to present the effect of the person in the image applying the target product corresponding to the target product information, thereby achieving the virtual trial effect. The process may be implemented using Augmented Reality (AR) technology: the makeup product is virtually applied to the face in the captured image of the user, or the clothing is virtually worn by the person in the image, so that the user can directly learn the actual application effect of the makeup product or clothing from the second image displayed by the terminal. There is no need to apply or wear the physical product, which saves application or dressing time and effectively improves trial efficiency.
In a possible implementation manner, the screen of the terminal may be a mirror screen, so that the image displayed by the terminal has a high color reproduction degree, a high brightness, and a better display effect.
In a specific possible embodiment, the terminal may further provide a trial adjustment function, and the user may perform an adjustment operation to adjust, to different degrees, the effect of the product as applied to the person. For example, the effect of a lipstick may differ between heavy application and light application, and the color, gradient degree, and the like may differ after application. When the terminal acquires an adjustment instruction triggered by the adjustment operation, the corresponding steps are executed to adjust the display effect of the product at the corresponding position in the second image.
Specifically, the terminal may obtain the transparency corresponding to the target product information according to the adjustment instruction. Accordingly, in the above three modes, the process of displaying the product image corresponding to the target product information or the process of displaying the second image by the terminal may take this transparency into account.
In the first mode, the step of displaying, by the terminal, the product image corresponding to the target product information at the position indicated by the position information in the first image may be: the terminal displays, according to the transparency, the product image corresponding to the target product information at the position indicated by the position information in the first image. In the second mode, the step of displaying the second image by the terminal may be: the terminal displays the first image in the second image, and displays the product image in the second image according to the transparency. In the third mode, the step of displaying the second image by the terminal may be: the terminal displays the pixel points corresponding to the first image in the second image, and displays the pixel points corresponding to the target product information according to the transparency.
In this way, by changing the transparency of the product image corresponding to the target product information, or of the corresponding pixel points, the degree to which the product is applied can be seen from the second image displayed by the terminal, and the display effect of the second image is better. By providing the trial adjustment function, the user can quickly and conveniently learn the various effects of the product when applied, so that the user can further decide whether to purchase the product based on these effects.
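The transparency-based display described above behaves like standard alpha compositing. The sketch below assumes a simple mapping in which transparency 1.0 hides the product entirely; the patent does not fix this exact mapping, so it is an illustrative assumption.

```python
def blend_with_transparency(base_pixel, product_pixel, transparency):
    """Sketch of the transparency adjustment: alpha compositing of a
    product pixel over a first-image pixel. `transparency` is assumed to be
    the fraction of the base showing through (1.0 = product invisible)."""
    alpha = 1.0 - transparency
    return tuple(
        round(alpha * p + (1 - alpha) * b)
        for b, p in zip(base_pixel, product_pixel)
    )

skin = (210, 170, 160)
lipstick = (150, 20, 50)
print(blend_with_transparency(skin, lipstick, transparency=0.8))  # light application
print(blend_with_transparency(skin, lipstick, transparency=0.2))  # heavy application
```

Sliding the adjustment control would then simply re-render with a new transparency value, giving the lighter or heavier application effects described above.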
206. When a value transfer instruction corresponding to the target product information is acquired, the terminal collects a face image.
In addition to the image acquisition, face recognition, and virtual trial functions, the terminal may also provide services related to value transfer. When the terminal displays the second image or displays the plurality of candidate product information, and the user wants to purchase the currently tried product or a product corresponding to certain candidate product information, the user may perform a value transfer operation on the terminal. When the terminal acquires the value transfer instruction triggered by the value transfer operation, the corresponding confirmation steps may be executed to trigger sending a value transfer request to the server, requesting execution of the corresponding service processing.
In a possible implementation manner, when the terminal displays the second image or displays information of a plurality of candidate products, a value transfer button may be further provided, and the user may perform a touch operation on the value transfer button, so that the terminal may obtain the value transfer instruction and execute step 206. In a further possible implementation manner, the terminal may further provide an add button in the interface, where the add button is used to take the target product information as the product information to be subjected to the numerical value transfer.
For example, when the current user selects a lipstick, the terminal may display the second image in the interface, display an effect of the user after the lipstick is smeared, and display information of the lipstick and an immediate purchase button in a target area in the interface, where the immediate purchase button is a value transfer button, the user may perform touch operation on the immediate purchase button, and the terminal may acquire a value transfer instruction to perform the following value transfer steps. In another possible implementation manner, when the terminal displays the second image, a shopping cart adding button may be further displayed in the interface, the user may also perform a touch operation on the shopping cart adding button, and the terminal may use the information of the lipstick as one item of product information to be subjected to numerical value transfer based on the touch operation.
Specifically, the terminal may support face recognition payment (Face Pay). When the terminal obtains a value transfer instruction, the terminal may collect a face image and recognize it, so as to determine whether the current face image matches the face image corresponding to the value transfer-out account. If they match, it can be confirmed that the current user has the authority to transfer values using the value transfer-out account; if they do not match, the current user cannot use the value transfer-out account to transfer values. With this face recognition payment manner, the user does not need to input a payment password or use another terminal for payment, which can effectively improve the efficiency of service processing.
In this step 206, the terminal may obtain the value transfer-out account associated with the user information, so as to perform identity verification on the acquired face image based on the value transfer-out account. In another possible implementation manner, the value transfer-out account may also be determined according to an account setting instruction, the user may input a value transfer-out account on the terminal or select one value transfer-out account from a plurality of value transfer-out accounts associated with the user information, and the terminal may perform authentication on the acquired face image based on the value transfer-out account.
In a possible implementation manner, in the above steps, the terminal acquires the user information, provides the virtual trial function, and performs the authentication during the value transfer, which can be implemented in a face recognition manner, and the user does not need to perform excessive manual operations on the terminal, and the terminal can provide multiple functions, thereby effectively reducing the user operation, reducing the complexity of the user operation, and improving the efficiency of the whole service processing flow.
207. The terminal compares the face image with a face image corresponding to a numerical value transfer-out account associated with the user information, and when the face image and the face image corresponding to the numerical value transfer-out account meet a target condition, the terminal executes step 208; when the face image corresponding to the face image and the value transfer-out account does not satisfy the target condition, the terminal performs step 209.
The terminal may obtain a face image corresponding to the value transfer-out account, where the face image corresponding to the value transfer-out account is used to perform identity verification on the acquired face image, and the terminal may execute step 207 to compare the face image acquired in step 206 with the face image corresponding to the value transfer-out account.
If the comparison result indicates that the face image corresponding to the value transfer-out account and the acquired face image are face images of the same person, it can be confirmed that the current user may use the value transfer-out account to perform the value transfer, and the authentication of the current user succeeds, so the terminal may execute the following step 208. If the comparison result indicates that the two are face images of different persons, the current user does not have the value transfer authority of the value transfer-out account, the authentication of the current user fails, and the terminal may execute the following step 209.
Specifically, the terminal may extract features from the acquired face image and from the face image corresponding to the value transfer-out account, compare the extracted features, and obtain the similarity or matching degree between the two images. That is, the target condition may be that the similarity or matching degree between the acquired face image and the face image corresponding to the value transfer-out account is greater than a similarity threshold or a matching degree threshold: when the similarity or matching degree is greater than the threshold, the terminal may perform step 208; when it is less than or equal to the threshold, the terminal may perform step 209 described below.
The above description takes, as an example only, the target condition that the similarity or matching degree between the collected face image and the face image corresponding to the value transfer-out account is greater than the similarity threshold or matching degree threshold; the target condition may be set or adjusted by relevant technicians based on usage requirements, which is not limited in the embodiment of the present invention.
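The target-condition check in step 207 can be sketched as comparing a similarity between face feature vectors against a threshold. Real systems derive these vectors from a trained face recognition model; the vectors, the cosine-similarity choice, and the threshold below are illustrative assumptions.

```python
import math

def faces_match(features_a, features_b, threshold=0.8):
    """Sketch of the target-condition check: cosine similarity between two
    face feature vectors compared against a similarity threshold."""
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm_a = math.sqrt(sum(a * a for a in features_a))
    norm_b = math.sqrt(sum(b * b for b in features_b))
    similarity = dot / (norm_a * norm_b)
    return similarity > threshold

enrolled = [0.2, 0.8, 0.5, 0.1]     # features of the account's face image
captured = [0.22, 0.79, 0.48, 0.12]  # features of the newly collected image
print(faces_match(enrolled, captured))               # True -> step 208
print(faces_match(enrolled, [0.9, 0.1, 0.0, 0.7]))   # False -> step 209
```

A pass leads to step 208 (sending the value transfer request); a failure leads to step 209 (offering other value transfer manners).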
208. The terminal sends a value transfer request to the server.
The value transfer request is used to instruct the server to execute the service processing corresponding to the value transfer request. After the terminal confirms the identity of the current user and determines that the value transfer-out account can be used for the value transfer, the terminal may initiate the value transfer request, and the server provides the value transfer service.
Specifically, the value transfer request may carry identification information of the value transfer-out account, identification information of the value transfer-in account, a value transfer amount, and the like, where the value transfer request is used to instruct the server to transfer the value transfer amount from the value transfer-out account to the value transfer-in account. Of course, the value transfer request may also carry other information, and the embodiment of the present invention does not limit the specific content carried by the value transfer request.
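As a hedged illustration of the information the value transfer request may carry, the snippet below serializes the fields mentioned above into JSON. Every field name and value is hypothetical; the patent does not specify the request format.

```python
import json

# Hypothetical request body carrying the fields named in the description:
# identification information of both accounts and the value transfer amount.
value_transfer_request = {
    "transfer_out_account": "acct_user_001",    # value transfer-out account id
    "transfer_in_account": "acct_merchant_42",  # value transfer-in account id
    "amount": 129.00,                           # value transfer amount
    "product_id": "lipstick_brownish_red",      # target product information
}
payload = json.dumps(value_transfer_request)
print(payload)
```

The server would parse such a payload, verify it, and transfer the amount from the transfer-out account to the transfer-in account, as described in the next step.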
After receiving the value transfer request, the server may verify it, and when the verification passes, execute the corresponding service processing, that is, transfer the value transfer amount from the value transfer-out account to the value transfer-in account. After the server completes the value transfer, a value transfer success message may be sent to the terminal. After receiving the message, the terminal may also display a prompt indicating that the value transfer succeeded. Of course, when the value transfer succeeds, the user obtains the target product corresponding to the target product information.
209. The terminal displays a selection prompt for prompting a selection between face recognition and other value transfer manners. When a face recognition instruction is obtained, the terminal executes step 206 and step 207; when another value transfer instruction is acquired, the terminal executes step 210.
When the face image and the face image corresponding to the value transfer-out account do not meet the target condition, and the authentication of the current user thus fails, the terminal may provide more value transfer manners, so as to avoid the user being unable to purchase the product due to the value transfer failure.
Specifically, the terminal may display a selection prompt to prompt the user to select from multiple numerical value transfer modes, and the user may select to perform face recognition again, or may select another numerical value transfer mode, for example, scan an identification code, input an account password, or the like. If the user selects to perform face recognition again, the terminal may obtain a face recognition instruction triggered by the selection operation, and at this time, the terminal may perform step 206 and step 207 again; if the user selects another numerical value transfer mode, the terminal may obtain another numerical value transfer instruction triggered by the selection operation, and the terminal may execute the following step 210 to implement a further numerical value transfer step.
210. The terminal acquires the value transfer information required by the value transfer mode corresponding to the value transfer instruction, and based on the value transfer information, the terminal performs step 208.
The value transfer information required by different value transfer manners may differ; for example, the manner of scanning an identification code requires acquiring the user's identification code, and the manner of inputting an account password requires acquiring the password of the value transfer-out account. If the user selects another value transfer manner, the terminal needs to acquire the value transfer information required by that manner, so as to authenticate the current user based on the value transfer information and determine whether the value transfer can be performed. If the authentication succeeds, the terminal may perform step 208; of course, if the authentication fails, the terminal may again perform step 209 described above.
The above steps 206 to 210 are the process of sending the value transfer request to the server when the value transfer instruction corresponding to the target product information is obtained. In this process, the terminal can directly provide a value transfer service based on the virtual trial according to the user's value transfer requirement, and request the value transfer service from the server. The trial and value transfer links are continuous and require no manual participation, which can effectively reduce labor cost and improve the efficiency of service processing.
It should be noted that, in the foregoing process, only one possible implementation manner is provided, and in this implementation manner, when the value transfer instruction corresponding to the target product information is acquired, the terminal acquires the face image, so that the identity authentication is performed based on the newly acquired face image, and thus, the accuracy of the identity authentication can be effectively ensured by acquiring the face image again for the identity authentication.
In another possible implementation manner, the above step 206 and step 207 may be: when a value transfer instruction corresponding to the target product information is acquired, the terminal can compare the acquired face image or the first image with a face image corresponding to a value transfer-out account associated with the user information. The terminal can directly carry out identity verification based on the currently acquired first image or the face image acquired by the user before virtual trial without acquiring the face image again, so that the efficiency of the whole business processing process can be effectively improved. Accordingly, in this implementation, the step 208 may be: when the acquired face image and the face image corresponding to the value transfer account meet the target condition, the terminal sends a value transfer request to the server; or when the first image and the face image corresponding to the numerical value transfer account meet the target condition, the terminal sends a numerical value transfer request to the server.
In another possible implementation manner, when the terminal performs authentication based on the currently acquired first image or the previously collected face image, the authentication may fail. In a specific possible embodiment, when the collected face image and the face image corresponding to the value transfer-out account do not satisfy the target condition, or when the first image and the face image corresponding to the value transfer-out account do not satisfy the target condition, the terminal may also execute the above steps 209 and 210. Certainly, after step 209, when the terminal acquires the face recognition instruction, the terminal may also perform step 206 and step 207; that is, when the terminal fails to authenticate based on the first image or on the previously collected face image, the terminal may also provide more payment manners and a re-authentication option, and if the user selects re-authentication, the terminal may collect the face image again for authentication. The service processing flow corresponding to this implementation manner may also refer to the embodiment shown in fig. 5 below, and details are not described herein again in the embodiment of the present invention.
For example, as shown in fig. 3, when no user is using the terminal, the terminal may be in a standby state. When the terminal detects that a person is within the target range, the terminal may start the image acquisition component to collect a face image and perform face recognition on it; of course, the terminal may also perform face recognition based on an already acquired image when a person is detected within the target range. The terminal may determine whether the current user is a member. If so, the terminal may acquire the member's member information from the member system and display it; if not, the terminal may provide an interface for member registration, for example, the user may register with a mobile phone number, and the terminal may synchronize the registered member information to the member system. Taking a makeup product as an example, after the terminal collects the user's face, the user may enter the makeup trial and select a commodity; the terminal may process the image based on the selected commodity and display the makeup effect after the commodity is tried. When the user wants to purchase the commodity, the user may select it, for example, first add it to a shopping cart and then confirm payment. The terminal may support face recognition payment: if the face recognition succeeds, the terminal may complete payment based on the payment service of the server; if the face recognition fails, the terminal may again provide face recognition payment or other payment manners, such as code scanning payment.
In a possible implementation manner, after the step 208, the terminal may further update the historical purchase information and/or the historical browsing information corresponding to the user information. Therefore, when the subsequent user performs the virtual trial again, the corresponding candidate product information can be obtained based on the updated user information, and when other users perform the virtual trial, if the similarity between the other users and the user is greater than the threshold value, the terminal can also perform the step of obtaining the candidate product information based on the historical purchase information and/or the historical browsing information corresponding to the user information of the user.
In a specific possible embodiment, the steps of detecting that a person is included in the target range, collecting the face image, and acquiring the first image are implemented based on the same camera. Of course, all other image acquisition steps performed by the terminal in the above implementation manners may also be implemented based on this camera; for example, the step of collecting the face image in step 206 may also be implemented based on this camera. That is, every image acquisition step included in the service processing method is realized based on the same camera.
In the related art, the above steps of detecting that a person is included in the target range, collecting the face image, and acquiring the first image are generally implemented by a plurality of cameras: for example, detecting whether a person is included in the target range is implemented by a first camera, collecting and recognizing the face image is implemented by a second camera, and acquiring the first image is implemented by a third camera.
In the embodiment of the present invention, all of the above steps may be implemented by the same camera. The terminal can store a corresponding configuration file and control the camera to execute these steps based on the configuration file. In a possible implementation manner, the camera provided by the embodiment of the invention has ranging, color acquisition, and light and detail supplementation functions, so that the multiple steps can be realized through a single camera while the clarity and display effect of the acquired image are ensured. That is, through upgrades of software and hardware, the embodiment of the invention integrates the functions of a plurality of cameras into one, and this camera can be embedded in the terminal, which reduces hardware cost, simplifies the hardware installation procedure, and improves the appearance of the terminal while retaining the functions.
For example, in a specific example, the configuration file may include three Software Development Kits (SDKs); that is, the three image acquisition steps may be implemented based on three SDKs, and when an image acquisition step needs to be performed, the terminal may call the corresponding SDK to perform it. Specifically, when it is detected that a person is included in the target range, the terminal calls the first SDK to recognize the acquired face image. After the recognition, the terminal can close the first SDK and call the second SDK to execute the step of acquiring the first image; when the value transfer instruction corresponding to the target product information is obtained, the terminal can close the second SDK and call the third SDK to acquire and recognize the face image.
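The close-the-previous-SDK-then-call-the-next discipline above can be sketched as a small manager object (an illustrative sketch; the class, method, and SDK names are hypothetical):

```python
class CameraSdkManager:
    """Ensures only one of the image-acquisition SDKs is active at a time,
    closing the previously active SDK before opening the next one."""

    def __init__(self, sdks):
        # sdks: mapping from a step name to an SDK object with open()/close(),
        # e.g. {"detect": sdk1, "trial": sdk2, "pay": sdk3}
        self.sdks = sdks
        self.active = None

    def switch_to(self, name):
        if self.active == name:
            return self.sdks[name]          # already active, nothing to do
        if self.active is not None:
            self.sdks[self.active].close()  # close the previously active SDK
        self.sdks[name].open()
        self.active = name
        return self.sdks[name]
```

The terminal would call `switch_to("detect")` when a person enters the target range, `switch_to("trial")` for the first image, and `switch_to("pay")` when the value transfer instruction is obtained.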
Next, a specific flow of the service processing method is described through a specific example, as shown in fig. 4, taking a makeup product as an example. When no user is using the terminal, the terminal may be in a standby state with an initial screen. When a person is detected within the target range, the image acquisition component may be started to perform image acquisition and face recognition, and the terminal may display an image acquisition interface and a face recognition interface. The terminal may then perform a virtual makeup trial and display a makeup screen. The user may select makeup product information displayed by the terminal; the terminal may process the acquired face image by using AR technology based on the makeup product information selected by the user and display an image of the user wearing the makeup product. The user may perform a sliding operation on the terminal to adjust the makeup intensity. For example, if the trial product is lipstick, sliding left reduces the makeup intensity so that the lipstick in the image appears lighter and lighter, while sliding right increases it so that the lipstick appears heavier and darker.
The user can perform a touch operation on the purchase button. If the user previously added some makeup product information to the shopping cart, the product information has multiple items, which may include the makeup product information currently added to the shopping cart and the makeup product information previously added; if the user did not previously add makeup product information to the shopping cart, the product information has a single item, namely the makeup product information currently added to the shopping cart. After the user confirms payment, the terminal can support face payment. If face recognition succeeds, the terminal can display a payment confirmation interface, and after the user selects and confirms, the terminal can send a payment request to the server. If face recognition fails, the terminal can prompt that face recognition failed and prompt the user to choose between re-recognition and code scanning payment. If code scanning payment is selected, the terminal can display an identification code, the user performs the corresponding code scanning operation, and when the user's identity is successfully confirmed, the terminal can likewise send the payment request to the server.
When a person is detected, the embodiment of the invention determines the corresponding user information by performing face recognition on the person, can recommend candidate product information based on the user information, and can then acquire an image, thereby displaying, on the basis of the image and in combination with the target product information, an image of the person after applying the target product, so as to realize a virtual trial. When the user wants to purchase the target product, a value transfer request can be sent to the server based on the obtained value transfer instruction. In this process, user information can be determined through face recognition to recommend products without requiring manual login, and a virtual product trial function is provided without needing to apply or wear a physical product. The selection, trial, and value transfer links can thus be realized as an integrated whole without manual participation, which reduces labor cost and effectively improves the efficiency of the service processing method.
In the embodiment shown in fig. 2, the flow of the service processing method is described, and an identity verification manner is provided in steps 206 to 210: when a value transfer instruction corresponding to the target product information is acquired, the terminal acquires a face image, so that identity verification is performed based on the newly acquired face image. In a possible implementation manner, the terminal can also perform identity verification directly based on the currently acquired first image or the face image acquired before the virtual trial, without acquiring a face image again, so that the efficiency of the whole service processing flow can be effectively improved.
The following describes an overall process of service processing corresponding to the implementation manner through the embodiment shown in fig. 5, where fig. 5 is a flowchart of a service processing method provided in the embodiment of the present invention, and referring to fig. 5, the method may include the following steps:
501. when the target range includes a person, the terminal identifies the collected face image and acquires user information corresponding to the face image.
502. The terminal acquires a first image.
503. And the terminal acquires a plurality of candidate product information corresponding to the user information based on the user information.
504. And the terminal acquires target product information from the candidate product information.
Steps 503 and 504 together constitute the process of acquiring the target product information from the plurality of candidate product information corresponding to the user information.
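The candidate-acquisition logic of step 503 — combining the user's own purchase and browsing history with the histories of sufficiently similar users — can be sketched as follows (the data shapes and the Jaccard similarity measure are illustrative assumptions; the patent does not fix a similarity metric):

```python
def similarity(a, b):
    """Toy user similarity: Jaccard overlap of browsing histories."""
    sa, sb = set(a["browsed"]), set(b["browsed"])
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def candidate_products(user, all_users, catalog, sim_threshold=0.8):
    """Collect candidate product IDs from the user's own history and from
    the histories of other users whose similarity exceeds the threshold."""
    candidates = list(user["purchased"]) + list(user["browsed"])
    for other in all_users:
        if other["id"] != user["id"] and similarity(user, other) > sim_threshold:
            candidates += list(other["purchased"]) + list(other["browsed"])
    # De-duplicate while preserving order; keep only items still in the catalog.
    seen, ordered = set(), []
    for pid in candidates:
        if pid in catalog and pid not in seen:
            seen.add(pid)
            ordered.append(pid)
    return ordered
```

A production system would of course use richer user features and a learned recommender; this only shows how the two history sources described in the text feed one candidate list.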
505. And the terminal displays a second image corresponding to the first image according to the first image and the target product information.
The steps 501 to 505 are the same as the steps 201 to 205 in the embodiment shown in fig. 2, and are not repeated herein.
506. When a value transfer instruction corresponding to the target product information is acquired, the terminal compares the previously acquired face image or the first image with the face image corresponding to the value transfer-out account associated with the user information.

When the acquired face image and the face image corresponding to the value transfer-out account satisfy the target condition, or when the first image and the face image corresponding to the value transfer-out account satisfy the target condition, the terminal executes step 507;

when the acquired face image and the face image corresponding to the value transfer-out account do not satisfy the target condition, or when the first image and the face image corresponding to the value transfer-out account do not satisfy the target condition, the terminal executes step 508.
In step 506, when the terminal obtains the value transfer instruction, it does not need to re-acquire the face image; identity verification is performed directly based on the currently acquired first image or the face image acquired before the virtual trial, so that the efficiency of the whole service processing flow can be effectively improved. Specifically, step 506 may include two cases:
in the first case: and when a value transfer instruction corresponding to the target product information is acquired, the terminal compares the acquired face image with a face image corresponding to a value transfer-out account associated with the user information.
Corresponding to the first situation, when the acquired face image and the face image corresponding to the value transfer account satisfy the target condition, the terminal executes step 507; and when the acquired face image and the face image corresponding to the value transfer-out account do not meet the target condition, the terminal executes step 508.
In the second case: and when a value transfer instruction corresponding to the target product information is acquired, the terminal compares the first image with a face image corresponding to a value transfer-out account associated with the user information.
Corresponding to the second case, when the first image and the face image corresponding to the value transfer account satisfy the target condition, the terminal executes step 507; when the first image and the face image corresponding to the value transfer-out account do not satisfy the target condition, the terminal executes step 508.
It should be noted that, in the above two cases, the process of comparing the face image by the terminal is the same as the comparison process shown in step 207, and this is not described in detail in this embodiment of the present invention.
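The "target condition" in the two cases above can be sketched as a similarity check between face feature vectors (an illustrative assumption: the patent does not fix a particular feature extractor, metric, or threshold; cosine similarity and the 0.9 threshold here are placeholders):

```python
import math

def meets_target_condition(embedding_a, embedding_b, threshold=0.9):
    """Return True when two face feature vectors are similar enough,
    modeling the 'target condition' as cosine similarity >= threshold."""
    dot = sum(x * y for x, y in zip(embedding_a, embedding_b))
    norm_a = math.sqrt(sum(x * x for x in embedding_a))
    norm_b = math.sqrt(sum(y * y for y in embedding_b))
    norm = norm_a * norm_b
    return norm > 0 and dot / norm >= threshold
```

In the first case the terminal would pass the embedding of the earlier-acquired face image, and in the second case the embedding of the first image, both compared against the embedding of the face image registered to the value transfer-out account.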
507. The terminal sends a value transfer request to the server.
Step 507 is similar to step 208, and the embodiment of the present invention is not described herein.
508. The terminal displays a selection prompt for prompting a selection between face recognition and other value transfer modes. When a face recognition instruction is obtained, the terminal executes step 509; when another value transfer instruction is obtained, the terminal executes step 510.
Step 508 is the same as step 209, with the following difference. In the embodiment shown in fig. 2, the terminal acquires the face image in step 206, so when the face recognition instruction is obtained there, the terminal may execute step 206 again. In the present embodiment, the terminal performs identity verification directly using the previously acquired face image or the first image without acquiring a face image again, so when the user chooses to perform face recognition again during identity verification, the terminal may execute step 509, which is the same as step 206 described above, and acquire a new face image for identity verification. Of course, in a possible implementation manner, when the face recognition instruction is acquired, the terminal may also execute step 506 again to perform face recognition.
509. The terminal acquires a face image and compares it with the face image corresponding to the value transfer-out account. When the two face images satisfy the target condition, the terminal executes step 507; when they do not satisfy the target condition, the terminal executes step 508.
Step 509 is similar to step 206 and step 207 provided in the embodiment shown in fig. 2, and the embodiment of the present invention is not described herein again.
510. The terminal obtains the value transfer information required by the value transfer mode corresponding to the value transfer instruction, and based on the value transfer information, executes step 507.
Step 510 is similar to step 210 provided in the embodiment shown in fig. 2, and the embodiment of the present invention is not repeated herein.
The above steps 506 to 510 are processes of sending a value transfer request to the server when the value transfer instruction corresponding to the target product information is acquired.
According to the embodiment of the invention, user information can be determined through face recognition to recommend products, without requiring manual login, and a virtual product trial function is provided without needing to apply or wear a physical product. The selection, trial, and value transfer links can be realized as an integrated whole without manual participation, which reduces labor cost and effectively improves the efficiency of the service processing method. Furthermore, in the value transfer link, a face recognition payment mode can be adopted, and face recognition can be performed based on the image acquired during the virtual trial or the image acquired when determining the user information, so that the number of image acquisitions can be reduced, the integration and fluency of the service processing flow can be improved, and the efficiency of the service processing method can be effectively improved.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Fig. 6 is a schematic structural diagram of a service processing apparatus according to an embodiment of the present invention, and referring to fig. 6, the apparatus includes:
the obtaining module 601 is configured to, when it is detected that a target range includes a person, recognize an acquired face image and acquire user information corresponding to the face image;
the obtaining module 601 is further configured to obtain a first image;
the obtaining module 601 is further configured to obtain target product information from a plurality of candidate product information corresponding to the user information;
a display module 602, configured to display a second image corresponding to the first image according to the first image and the target product information, where the second image is used to embody an effect that a person in the first image applies a target product corresponding to the target product information;
a sending module 603, configured to send a value transfer request to a server when a value transfer instruction corresponding to the target product information is obtained, where the value transfer request is used to instruct the server to execute service processing corresponding to the value transfer request.
In one possible implementation, the obtaining module 601 is further configured to:
when it is determined according to the recognition result that the stored information includes the user information corresponding to the face image, obtain the user information corresponding to the face image from the stored information; or,
when it is determined according to the recognition result that the stored information does not include the user information corresponding to the face image, display a user information setting interface and acquire the user information corresponding to the face image through the interface.
In a possible implementation manner, the obtaining module 601 is further configured to obtain, based on the user information, a plurality of candidate product information corresponding to the user information;
the display module 602 is further configured to obtain target product information from the plurality of candidate product information.
In a possible implementation manner, the obtaining module 601 is further configured to obtain a plurality of candidate product information corresponding to the user information according to at least one of: historical purchase information corresponding to the user information, historical browsing information corresponding to the user information, and historical purchase information and historical browsing information corresponding to other user information whose similarity with the user information is greater than a threshold;
correspondingly, the device also comprises:
and the updating module is used for updating the historical purchasing information and/or the historical browsing information corresponding to the user information.
In one possible implementation, the obtaining module 601 is further configured to:
acquire the first candidate product information as the target product information according to the order of the plurality of candidate product information; or,
according to a product selection instruction, acquire the candidate product information corresponding to the product selection instruction as the target product information; or,
randomly acquire one candidate product information from the plurality of candidate product information as the target product information.
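The three selection strategies above can be sketched as one function (the function, strategy, and parameter names are illustrative):

```python
import random

def pick_target(candidates, strategy="first", selected=None, rng=random):
    """Pick the target product from the candidate list using one of the
    three strategies described in the text."""
    if strategy == "first":        # first item in recommendation order
        return candidates[0]
    if strategy == "selected":     # item named by a product selection instruction
        return selected if selected in candidates else None
    if strategy == "random":       # any candidate, chosen at random
        return rng.choice(candidates)
    raise ValueError(f"unknown strategy: {strategy}")
```

The `rng` parameter is only there to make the random strategy testable with a seeded generator.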
In one possible implementation, the sending module 603 is configured to:
when a value transfer instruction corresponding to the target product information is acquired, compare the acquired face image or the first image with the face image corresponding to the value transfer-out account associated with the user information;
when the acquired face image and the face image corresponding to the value transfer-out account satisfy the target condition, send a value transfer request to the server; or, when the first image and the face image corresponding to the value transfer-out account satisfy the target condition, send a value transfer request to the server.
In one possible implementation, the sending module 603 is configured to:
when a value transfer instruction corresponding to the target product information is acquired, acquire a face image;
compare the face image with the face image corresponding to the value transfer-out account associated with the user information;
and when the face image and the face image corresponding to the value transfer-out account satisfy the target condition, send a value transfer request to the server.
In one possible implementation, the display module 602 is further configured to:
when the acquired face image and the face image corresponding to the value transfer-out account do not satisfy the target condition, display a selection prompt, where the selection prompt is used to prompt a selection between face recognition and other value transfer modes; or,
when the first image and the face image corresponding to the value transfer-out account do not satisfy the target condition, display a selection prompt, where the selection prompt is used to prompt a selection between face recognition and other value transfer modes.
In a possible implementation manner, the sending module 603 is further configured to:
when a face recognition instruction is obtained, execute the step of comparing the acquired face image or the first image with the face image corresponding to the value transfer-out account associated with the user information; or execute the steps of acquiring a face image and comparing it with the face image corresponding to the value transfer-out account associated with the user information; or,
when another value transfer instruction is acquired, obtain the value transfer information required by the value transfer mode corresponding to that instruction, and based on the value transfer information, execute the step of sending a value transfer request to the server.
In one possible implementation, the display module 602 is configured to:
determine, according to the first image and the target product information, position information of a product image corresponding to the target product information, where the product image refers to an image of the product as applied to the person in the first image; display the first image, and display the product image corresponding to the target product information at the position indicated by the position information in the first image; or,
generate a second image corresponding to the first image according to the first image and a product image corresponding to the target product information, where the product image is an image of the product as applied to the person in the first image; and display the second image; or,
process the first image according to pixel information corresponding to the target product information to obtain a second image corresponding to the first image, where the pixel information is used to represent the pixel values of the pixel points corresponding to the product and the distribution information of those pixel points; and display the second image.
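The third variant above — processing the first image with the product's pixel information — can be sketched as follows (a toy illustration: nested lists stand in for the image and a dict maps pixel positions to pixel values; these data shapes are assumptions, not the patent's actual format):

```python
def apply_pixel_info(first_image, pixel_info):
    """Produce the second image by writing the product's pixel values at the
    positions given by the pixel distribution, leaving the first image intact."""
    second = [row[:] for row in first_image]   # copy so the first image survives
    for (y, x), value in pixel_info.items():   # distribution info -> pixel values
        second[y][x] = value
    return second
```

In practice the pixel values would be blended with the underlying face pixels rather than written verbatim, and the distribution would come from face landmark detection.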
In a possible implementation manner, the obtaining module 601 is further configured to obtain a transparency corresponding to the target product information according to the adjustment instruction;
correspondingly, the display module 602 is configured to display, according to the transparency, the product image corresponding to the target product information at the position indicated by the position information in the first image; or,
correspondingly, the display module 602 is configured to display the first image in the second image, and display the product image in the second image according to the transparency; or,
correspondingly, the display module 602 is configured to display the pixel points corresponding to the first image in the second image, and display the pixel points corresponding to the target product information according to the transparency.
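The transparency-based display in the three variants above can be sketched as a per-pixel alpha blend (a minimal illustration; the mapping of the transparency value to a blend weight is an assumption, with transparency 1.0 hiding the product entirely and 0.0 showing it at full strength):

```python
def blend_pixel(base, product, transparency):
    """Alpha-blend one product pixel over the corresponding base image pixel.
    base, product: per-channel 0-255 tuples; transparency in [0.0, 1.0]."""
    alpha = 1.0 - transparency  # weight of the product pixel
    return tuple(round(alpha * p + (1.0 - alpha) * b)
                 for b, p in zip(base, product))
```

This also models the sliding adjustment described for the lipstick example: moving the slider raises or lowers `transparency`, making the product color lighter or heavier.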
In a possible implementation manner, the steps of detecting that the target range includes a person, acquiring the face image, and acquiring the first image are implemented based on the same camera.
The apparatus provided by the embodiment of the invention can determine the corresponding user information by performing face recognition on a detected person, recommend candidate product information based on the user information, and then acquire an image, thereby displaying, on the basis of the image and in combination with the target product information, an image of the person after applying the target product, so as to realize a virtual trial. When the user wants to purchase the target product, a value transfer request can be sent to the server based on the obtained value transfer instruction. In this process, user information can be determined through face recognition to recommend products without requiring manual login, and a virtual product trial function is provided without needing to apply or wear a physical product. The selection, trial, and value transfer links can be realized as an integrated whole without manual participation, which reduces labor cost and effectively improves the efficiency of the service processing method.
It should be noted that: in the service processing apparatus provided in the foregoing embodiment, when performing service processing, only the division of the functional modules is illustrated, and in practical applications, the function distribution may be completed by different functional modules according to needs, that is, the internal structure of the terminal is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the service processing apparatus and the service processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments for details, which are not described herein again.
Fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal 700 may be a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 700 may also be referred to as a user equipment, a portable terminal, a laptop terminal, a desktop terminal, or by other names.
In general, terminal 700 includes: a processor 701 and a memory 702.
Processor 701 may include one or more processing cores, such as a 4-core processor or an 8-core processor. Processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). Processor 701 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the wake-up state, also known as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, processor 701 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement a business process method provided by a method embodiment of the invention.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 704 may further include NFC (Near Field Communication) related circuits, which are not limited in this disclosure.
The display 705 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 705 is a touch display, the display 705 also has the ability to capture touch signals on or over its surface. The touch signals may be input to the processor 701 for processing as control signals.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the terminal 700 to implement navigation or LBS (Location Based Service). The positioning component 708 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou System of China, the GLONASS System of Russia, or the Galileo System of the European Union.
Power supply 709 is provided to supply power to various components of terminal 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When power source 709 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the touch screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
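The landscape/portrait decision described above can be sketched as follows (a hypothetical helper, not part of the patent; real platforms add hysteresis and tuned thresholds):

```python
def choose_orientation(gx, gy):
    """Pick a UI orientation from the gravity components along the
    device's x (short) and y (long) axes, in m/s^2.

    Gravity dominates the axis the device is held along, so the larger
    component decides the view. Hypothetical sketch of the decision the
    processor 701 makes from accelerometer data.
    """
    return "portrait" if abs(gy) >= abs(gx) else "landscape"
```

For example, a device held upright reports most of gravity on the y axis, so `choose_orientation(0.5, 9.7)` selects the portrait view.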
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the terminal 700 by the user. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of the terminal 700 and/or an underlying layer of the touch display 705. When the pressure sensor 713 is disposed on a side frame of the terminal 700, it can detect the user's grip signal on the terminal 700, and the processor 701 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at the lower layer of the touch display 705, the processor 701 controls the operability controls on the UI according to the user's pressure operation on the touch display 705. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 714 is used to collect a user's fingerprint. The processor 701 identifies the user's identity from the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the user's identity from the collected fingerprint. When the identity is recognized as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, transferring values, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal 700. When a physical key or a manufacturer Logo is disposed on the terminal 700, the fingerprint sensor 714 may be integrated with the physical key or the manufacturer Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
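The brightness adjustment described above can be sketched as a simple mapping from ambient illuminance to display brightness (a hypothetical linear ramp; real devices use tuned, hysteretic curves):

```python
def display_brightness(lux, min_b=0.1, max_b=1.0, max_lux=10000.0):
    """Map ambient illuminance (lux) to a display brightness fraction
    in [min_b, max_b]: brighter surroundings, brighter screen.
    All constants are illustrative assumptions."""
    frac = min(max(lux, 0.0), max_lux) / max_lux  # clamp, then normalize
    return min_b + (max_b - min_b) * frac
```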
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on the front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front surface of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually decreases, the processor 701 controls the touch display 705 to switch from the screen-on state to the screen-off state; when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually increases, the processor 701 controls the touch display 705 to switch from the screen-off state to the screen-on state.
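The screen on/off switching can be sketched with two thresholds and a dead band (all thresholds hypothetical; the dead band avoids flicker when the distance hovers near a single cutoff):

```python
def screen_state(distance_cm, near_cm=3.0, far_cm=6.0, current="on"):
    """Decide the screen state from the user-to-panel distance.
    Below near_cm the screen turns off (e.g. phone at the ear);
    above far_cm it turns on; in between, keep the current state."""
    if distance_cm <= near_cm:
        return "off"
    if distance_cm >= far_cm:
        return "on"
    return current  # within the dead band, keep the current state
```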
Those skilled in the art will appreciate that the configuration shown in fig. 7 is not intended to be limiting of terminal 700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium, such as a memory, including instructions executable by a processor to perform the service processing method of the above embodiments is also provided. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (15)

1. A method for processing a service, the method comprising:
when the target range includes a person, identifying the collected face image to acquire user information corresponding to the face image;
acquiring a first image;
acquiring target product information from a plurality of candidate product information corresponding to the user information;
displaying a second image corresponding to the first image according to the first image and the target product information, wherein the second image is used for showing the effect of the person in the first image applying the target product corresponding to the target product information;
and when a value transfer instruction corresponding to the target product information is acquired, sending a value transfer request to a server, wherein the value transfer request is used for indicating the server to execute service processing corresponding to the value transfer request.
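The flow of claim 1 can be sketched end to end (all names are hypothetical; the injected callables stand in for the face recognizer, recommender, renderer, and server request that the claim recites):

```python
def process_service(frame, recognize, recommend, render, send_value_transfer):
    """Sketch of claim 1: recognize the face, pick a target product,
    display the try-on image, then request the value transfer.
    The callables are stand-ins for the claimed components."""
    user_info = recognize(frame)          # face image -> user information
    if user_info is None:
        return None                       # no recognized user, nothing to do
    candidates = recommend(user_info)     # candidate product information
    target = candidates[0]                # target product information
    second_image = render(frame, target)  # effect of applying the product
    receipt = send_value_transfer(user_info, target)  # value transfer request
    return second_image, receipt
```

With stub components in place of real recognition and rendering, the function returns the rendered try-on image together with the server's response.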
2. The method according to claim 1, wherein the obtaining of the user information corresponding to the face image comprises:
when it is determined according to the recognition result that the stored information includes the user information corresponding to the face image, acquiring the user information corresponding to the face image from the stored information; or,
when it is determined according to the recognition result that the stored information does not include the user information corresponding to the face image, acquiring the user information corresponding to the face image through a user information setting interface.
3. The method according to claim 1, wherein the obtaining target product information from a plurality of candidate product information corresponding to the user information comprises:
acquiring a plurality of candidate product information corresponding to the user information based on the user information;
and acquiring target product information from the candidate product information.
4. The method according to claim 3, wherein the obtaining of the candidate product information corresponding to the user information based on the user information comprises:
acquiring a plurality of candidate product information corresponding to the user information according to at least one of: historical purchase information and historical browsing information corresponding to the user information, and historical purchase information and historical browsing information corresponding to other user information whose similarity to the user information is greater than a threshold;
correspondingly, after the numerical value transfer request is sent to the server when the numerical value transfer instruction corresponding to the target product information is obtained, the method further includes:
and updating historical purchase information and/or historical browsing information corresponding to the user information.
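The candidate-gathering step of claim 4 can be sketched as follows (the data structures and the similarity callable are assumptions; the claim only requires combining the user's own history with that of sufficiently similar users):

```python
def candidate_products(user, history, similarity, threshold=0.8):
    """Sketch of claim 4: gather candidate products from the user's own
    purchase/browse history plus the history of users whose similarity
    to this user exceeds a threshold."""
    candidates = list(history.get(user, []))
    for other, items in history.items():
        if other != user and similarity(user, other) > threshold:
            candidates.extend(items)
    # Deduplicate while preserving first-seen order.
    return list(dict.fromkeys(candidates))
```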
5. The method of claim 3, wherein the obtaining target product information from the plurality of candidate product information comprises:
acquiring the first candidate product information as the target product information according to the order of the plurality of candidate product information; or,
acquiring, according to a product selection instruction, the candidate product information corresponding to the product selection instruction as the target product information; or,
randomly acquiring one piece of candidate product information from the plurality of candidate product information as the target product information.
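The three selection modes of claim 5 can be sketched in one hypothetical helper (the strategy names are illustrative, not from the patent):

```python
import random

def select_target(candidates, strategy="first", chosen=None, rng=None):
    """Sketch of claim 5's three modes: take the first candidate in order,
    honor an explicit product selection instruction, or pick at random."""
    if strategy == "first":
        return candidates[0]
    if strategy == "selected":
        return chosen if chosen in candidates else None
    if strategy == "random":
        return (rng or random).choice(candidates)
    raise ValueError(f"unknown strategy: {strategy}")
```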
6. The method according to claim 1, wherein when the numerical value transfer instruction corresponding to the target product information is obtained, sending a numerical value transfer request to a server includes:
when a value transfer instruction corresponding to the target product information is acquired, comparing the acquired face image or the first image with a face image corresponding to a value transfer-out account associated with the user information;
when the acquired face image and the face image corresponding to the numerical value transfer account meet the target condition, sending a numerical value transfer request to a server; or when the first image and the face image corresponding to the numerical value transfer account meet the target condition, sending a numerical value transfer request to a server.
7. The method according to claim 1, wherein when the numerical value transfer instruction corresponding to the target product information is obtained, sending a numerical value transfer request to a server includes:
when a numerical value transfer instruction corresponding to the target product information is acquired, acquiring a face image;
comparing the face image with a face image corresponding to a numerical value transfer-out account associated with the user information;
and when the facial image and the facial image corresponding to the numerical value transfer account meet the target condition, sending a numerical value transfer request to a server.
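The "target condition" in claims 6 and 7 is left abstract. One common realization (an assumption, not stated in the patent) is a similarity threshold over face embeddings:

```python
def faces_match(embedding_a, embedding_b, threshold=0.75):
    """Sketch of a possible 'target condition': cosine similarity between
    two face-embedding vectors must reach a threshold. The embedding
    model and the threshold value are hypothetical."""
    dot = sum(x * y for x, y in zip(embedding_a, embedding_b))
    norm_a = sum(x * x for x in embedding_a) ** 0.5
    norm_b = sum(y * y for y in embedding_b) ** 0.5
    if norm_a == 0 or norm_b == 0:
        return False  # degenerate embedding, refuse the match
    return dot / (norm_a * norm_b) >= threshold
```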
8. The method according to claim 6 or 7, characterized in that the method further comprises:
when the acquired face image and the face image corresponding to the value transfer-out account do not meet the target condition, displaying a selection prompt, the selection prompt being used to prompt a selection between face recognition and other value transfer modes; or,
when the first image and the face image corresponding to the value transfer-out account do not meet the target condition, displaying a selection prompt, the selection prompt being used to prompt a selection between face recognition and other value transfer modes.
9. The method of claim 8, further comprising:
when a face recognition instruction is acquired, performing the step of comparing the acquired face image or the first image with the face image corresponding to the value transfer-out account associated with the user information, or performing the steps of collecting a face image and comparing it with the face image corresponding to the value transfer-out account associated with the user information; or,
when another value transfer instruction is acquired, acquiring the value transfer information required by the value transfer mode corresponding to that instruction, and performing the step of sending a value transfer request to the server based on the value transfer information.
10. The method of claim 1, wherein the displaying a second image corresponding to the first image according to the first image and the target product information comprises:
determining, according to the first image and the target product information, position information of a product image corresponding to the target product information, the product image being an image of the product as applied to the person in the first image; displaying the first image, and displaying the product image corresponding to the target product information at the position indicated by the position information in the first image; or,
generating a second image corresponding to the first image according to the first image and a product image corresponding to the target product information, the product image being an image of the product as applied to the person in the first image; and displaying the second image; or,
processing the first image according to pixel information corresponding to the target product information to obtain a second image corresponding to the first image, the pixel information being used to represent the pixel values of the pixel points corresponding to the product and the distribution information of those pixel points; and displaying the second image.
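The paste-at-position mode of claim 10 can be sketched with plain 2D pixel grids (a toy stand-in for real image buffers; production code would use an image library and blend at detected facial landmarks):

```python
def composite_second_image(first_image, product_image, position):
    """Sketch of claim 10: copy the first image and write the product
    image into it at the indicated (row, col) position, producing the
    second image. Images are hypothetical 2D lists of pixel values."""
    y0, x0 = position
    out = [row[:] for row in first_image]  # copy so the first image survives
    for dy, row in enumerate(product_image):
        for dx, px in enumerate(row):
            out[y0 + dy][x0 + dx] = px
    return out
```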
11. The method of claim 10, further comprising:
acquiring the transparency corresponding to the target product information according to the adjusting instruction;
correspondingly, the displaying the product image corresponding to the target product information at the position indicated by the position information in the first image includes: displaying the product image corresponding to the target product information at the position indicated by the position information in the first image according to the transparency; or,
correspondingly, the displaying the second image includes: displaying the first image within the second image, and displaying the product image within the second image according to the transparency; or,
correspondingly, the displaying the second image includes: displaying the pixel points corresponding to the first image in the second image, and displaying the pixel points corresponding to the target product information according to the transparency.
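Displaying the product pixels "according to the transparency" in claim 11 amounts to per-pixel alpha blending. A minimal sketch, under the assumed convention that transparency 1.0 means fully transparent (only the base image shows) and 0.0 means fully opaque:

```python
def blend_pixel(base, product, transparency):
    """Sketch of claim 11: blend a product pixel over a base pixel
    according to a transparency in [0, 1]. The convention is an
    assumption; the claim does not fix one."""
    alpha = 1.0 - transparency          # opacity of the product pixel
    return base * (1.0 - alpha) + product * alpha
```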
12. The method according to any one of claims 1 to 11, wherein the steps of detecting that a person is included in the target range, acquiring the face image, and acquiring the first image are performed based on the same camera.
13. A traffic processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for identifying the acquired face image and acquiring user information corresponding to the face image when the target range is detected to include a person;
the acquisition module is further used for acquiring a first image;
the acquisition module is further configured to acquire target product information from a plurality of candidate product information corresponding to the user information;
the display module is used for displaying a second image corresponding to the first image according to the first image and the target product information, wherein the second image is used for showing the effect that people in the first image apply the target product corresponding to the target product information;
and the sending module is used for sending a numerical value transfer request to a server when a numerical value transfer instruction corresponding to the target product information is obtained, wherein the numerical value transfer request is used for indicating the server to execute service processing corresponding to the numerical value transfer request.
14. A terminal, characterized in that the terminal comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the operation performed by the service processing method according to any one of claims 1 to 12.
15. A computer-readable storage medium having stored therein at least one instruction, which is loaded and executed by a processor to perform operations performed by a business process method according to any one of claims 1 to 12.
CN201910016186.2A 2019-01-08 2019-01-08 Service processing method, device, terminal and storage medium Active CN111415185B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910016186.2A CN111415185B (en) 2019-01-08 2019-01-08 Service processing method, device, terminal and storage medium


Publications (2)

Publication Number Publication Date
CN111415185A true CN111415185A (en) 2020-07-14
CN111415185B CN111415185B (en) 2024-05-28

Family

ID=71492578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910016186.2A Active CN111415185B (en) 2019-01-08 2019-01-08 Service processing method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111415185B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120232977A1 (en) * 2011-03-08 2012-09-13 Bank Of America Corporation Real-time video image analysis for providing targeted offers
CN206236156U (en) * 2016-11-21 2017-06-09 汕头市智美科技有限公司 A kind of virtual examination adornment equipment
CN107818110A (en) * 2016-09-13 2018-03-20 青岛海尔多媒体有限公司 A kind of information recommendation method, device
JP2018120527A (en) * 2017-01-27 2018-08-02 株式会社リコー Image processing apparatus, image processing method, and image processing system
CN108509466A (en) * 2017-04-14 2018-09-07 腾讯科技(深圳)有限公司 A kind of information recommendation method and device
CN108648061A (en) * 2018-05-18 2018-10-12 北京京东尚科信息技术有限公司 image generating method and device
CN108694736A (en) * 2018-05-11 2018-10-23 腾讯科技(深圳)有限公司 Image processing method, device, server and computer storage media
CN109034935A (en) * 2018-06-06 2018-12-18 平安科技(深圳)有限公司 Products Show method, apparatus, computer equipment and storage medium
CN109118465A (en) * 2018-08-08 2019-01-01 颜沿(上海)智能科技有限公司 A kind of examination adornment system and method


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001303A (en) * 2020-08-21 2020-11-27 四川长虹电器股份有限公司 Television image-keeping device and method
WO2022151663A1 (en) * 2021-01-15 2022-07-21 北京市商汤科技开发有限公司 Access control machine interaction method and apparatus, access control machine assembly, electronic device, and medium
CN112818765A (en) * 2021-01-18 2021-05-18 中科院成都信息技术股份有限公司 Image filling identification method, device, system and storage medium
CN112818765B (en) * 2021-01-18 2023-09-19 中科院成都信息技术股份有限公司 Image filling identification method, device and system and storage medium
CN114880057A (en) * 2022-04-22 2022-08-09 北京三快在线科技有限公司 Image display method, image display device, terminal, server, and storage medium

Also Published As

Publication number Publication date
CN111415185B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
US11678734B2 (en) Method for processing images and electronic device
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN112162671B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN111415185B (en) Service processing method, device, terminal and storage medium
CN110572711B (en) Video cover generation method and device, computer equipment and storage medium
CN110827195B (en) Virtual article adding method and device, electronic equipment and storage medium
CN110956580B (en) Method, device, computer equipment and storage medium for changing face of image
CN111723803B (en) Image processing method, device, equipment and storage medium
CN110570460A (en) Target tracking method and device, computer equipment and computer readable storage medium
CN111062248A (en) Image detection method, device, electronic equipment and medium
CN113763228A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111339938A (en) Information interaction method, device, equipment and storage medium
CN112581358A (en) Training method of image processing model, image processing method and device
CN110796083A (en) Image display method, device, terminal and storage medium
CN110837300B (en) Virtual interaction method and device, electronic equipment and storage medium
CN111915305B (en) Payment method, device, equipment and storage medium
CN110659895A (en) Payment method, payment device, electronic equipment and medium
CN112258385B (en) Method, device, terminal and storage medium for generating multimedia resources
CN110891181B (en) Live broadcast picture display method and device, storage medium and terminal
CN111881423A (en) Method, device and system for limiting function use authorization
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN111597468B (en) Social content generation method, device, equipment and readable storage medium
CN113935678A (en) Method, device, equipment and storage medium for determining multiple distribution terminals held by distributor
CN112399080A (en) Video processing method, device, terminal and computer readable storage medium
CN112767453B (en) Face tracking method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40025781

Country of ref document: HK

GR01 Patent grant