CN110084675A - Online commodity selling method, network terminal, and device with storage function - Google Patents

Online commodity selling method, network terminal, and device with storage function Download PDF

Info

Publication number
CN110084675A
CN110084675A (application CN201910335234.4A)
Authority
CN
China
Prior art keywords
data
user
model
picture
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910335234.4A
Other languages
Chinese (zh)
Inventor
文允
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910335234.4A
Publication of CN110084675A
Legal status: Pending

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/90 Determination of colour characteristics
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Abstract

This application discloses an online commodity selling method, a network terminal, and a device with a storage function. The method comprises: obtaining feature data of a user and obtaining a model picture of a commodity selected by the user; replacing the corresponding feature data in the model picture with the feature data of the user by means of a preset algorithm, and taking the model picture obtained after the replacement as a composite picture; and sending the composite picture to the user. In this way, the application enables the user to select more suitable commodities and avoid purchase mistakes.

Description

Online commodity selling method, network terminal, and device with storage function
Technical field
The present invention relates to the field of merchandise sales, and in particular to an online commodity selling method, a network terminal, and a device with a storage function.
Background technique
When shopping on an e-commerce website, a user cannot try garments on in person. The user can usually only judge from flat model pictures and size charts which clothes might suit them. Compared with shopping in a physical store, the user cannot intuitively see how the clothes would look when worn. Demanding, quality-conscious, and more rational users in particular need to see how a commodity matches their real appearance, which current e-commerce platforms cannot provide. This leads to the loss of a considerable number of hesitant orders, and also causes cost waste from returns and exchanges when purchased goods turn out to be unsuitable.
Summary of the invention
The invention mainly solves the technical problem of providing an online commodity selling method, a network terminal, and a device with a storage function that enable a user to select more suitable commodities and avoid purchase mistakes.
In order to solve the above technical problem, a technical solution adopted by the application is to provide an online commodity selling method, comprising: obtaining feature data of a user and obtaining a model picture of a commodity selected by the user; replacing the corresponding feature data in the model picture with the feature data of the user by means of a preset algorithm, and taking the model picture obtained after the replacement as a composite picture; and sending the composite picture to the user.
The feature data includes at least one of head data and figure data of the user. The head data includes at least one of facial feature data and hairstyle data of the user. The figure data includes at least one of face-shape data and body data of the user. The preset algorithm includes a convolutional neural network algorithm.
When the feature data includes the head data of the user, obtaining the feature data of the user comprises: obtaining the head data of the user from a picture and/or video data uploaded by the user. Replacing the corresponding feature data in the model picture with the feature data of the user by means of the preset algorithm comprises: obtaining facial dynamic feature data of the model from the model picture, wherein the facial dynamic feature data includes at least one of facial shadow, facial expression, and face angle; synthesizing the head data of the user with the facial dynamic feature data of the model by means of a head composition algorithm, to generate user head composite data carrying the facial dynamic feature data of the model; and replacing the head data in the model picture with the user head composite data by means of an image fusion algorithm.
When the feature data includes the figure data of the user, replacing the corresponding feature data in the model picture with the feature data of the user by means of the preset algorithm comprises: obtaining the figure data of the user, and processing the figure data with a three-dimensional reconstruction convolutional neural network to obtain three-dimensional figure data of the user; marking first key points in the three-dimensional figure data; performing a two-dimensional projection of the three-dimensional figure data marked with the first key points, to obtain two-dimensional figure data of the first key points; and performing image warping on the data at the positions in the model picture corresponding to the first key points according to the two-dimensional figure data of the first key points.
Marking the first key points in the three-dimensional figure data comprises: obtaining second key points marked in the model picture, and marking the first key points in the three-dimensional figure data according to the second key points. Performing image warping on the data at the positions in the model picture corresponding to the first key points according to the two-dimensional figure data of the first key points comprises: performing image warping on the data of the second key points in the model picture according to the two-dimensional figure data of the first key points.
Obtaining the second key points marked in the model picture comprises: obtaining three-dimensional model data of the model picture with a three-dimensional reconstruction convolutional neural network, and marking third key points in the three-dimensional model data; and performing a two-dimensional projection of the three-dimensional model data marked with the third key points to obtain the second key points of the model picture.
The parameters set when performing the two-dimensional projection of the three-dimensional figure data marked with the first key points are identical to the parameters set when performing the two-dimensional projection of the three-dimensional model data marked with the third key points.
The method further comprises: upon detecting that the user clicks the composite picture, jumping to the purchase page of the commodity according to a purchase link associated with the model picture, and selecting a size of the commodity that matches the user according to the feature data.
In order to solve the above technical problem, another technical solution adopted by the application is to provide a network terminal, comprising a processor and a memory, the processor being coupled to the memory, the memory being configured to store program instructions, and the processor being configured to execute the program instructions in the memory to implement the steps of the online commodity selling method described above.
In order to solve the above technical problem, another technical solution adopted by the application is to provide a device with a storage function, which stores program instructions that can be executed to implement the steps of the online commodity selling method described above.
The beneficial effect of the application is that, in contrast to the prior art, the application replaces the corresponding feature data in the model picture of the commodity selected by the user with the feature data of the user by means of a preset algorithm, thereby generating a composite picture. The composite picture helps the user see the effect of wearing the model's outfit themselves, so that they can select more suitable commodities and avoid purchase mistakes.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort. In the drawings:
Fig. 1 is a schematic flowchart of a first embodiment of the online commodity selling method provided by the present application;
Fig. 2 is a schematic flowchart of a first embodiment of the step of replacing the corresponding feature data in the model picture with the feature data of the user in the online commodity selling method provided by the present application;
Fig. 3 is a schematic flowchart of a second embodiment of the step of replacing the corresponding feature data in the model picture with the feature data of the user in the online commodity selling method provided by the present application;
Fig. 4 is a schematic flowchart of a third embodiment of the step of replacing the corresponding feature data in the model picture with the feature data of the user in the online commodity selling method provided by the present application;
Fig. 5 is a schematic flowchart of an embodiment of the step of obtaining the second key points of the model picture in the online commodity selling method provided by the present application;
Fig. 6 is a schematic flowchart of a second embodiment of the online commodity selling method provided by the present application;
Fig. 7 is a schematic flowchart of an embodiment of the step of generating the composite picture in the online commodity selling method provided by the present application;
Fig. 8 is a schematic flowchart of a first embodiment of the online commodity recommendation method provided by the present application;
Fig. 9 is a schematic structural diagram of an embodiment of the network terminal provided by the present application;
Fig. 10 is a schematic structural diagram of an embodiment of the device with a storage function of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application rather than all of them. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
When shopping on an e-commerce website, a user can only get a general impression of a commodity from flat model pictures, and can only choose the style and size of the commodity according to size data provided by the merchant. The user cannot intuitively see the effect of trying the commodity on, which may lead to buying goods that do not suit them, to returns and exchanges, and to waste.
This application provides an online commodity selling method that can replace the corresponding feature data in the model picture of the commodity selected by the user with the feature data of the user, to obtain a composite picture. The composite picture helps the user intuitively see the effect of trying the commodity on themselves, so that they can select more suitable commodities and avoid purchase mistakes.
Specifically, referring to Fig. 1, Fig. 1 is a schematic flowchart of the first embodiment of the online commodity selling method provided by the present application. The online commodity selling method provided by the present application comprises:
S101: obtaining feature data of a user and obtaining a model picture of a commodity selected by the user.
In a specific implementation scenario, the feature data of the user is obtained when it is detected that the user is browsing commodities, for example when it is detected that the user logs in to a shopping website or opens a shopping app. For example, a prompt box may pop up asking the user to input their own feature data, or the identity the user logs in with may be read and the feature data the user entered previously may be retrieved.
In this implementation scenario, the feature data includes at least one of head data and figure data of the user. The head data includes at least one of facial feature data and hairstyle data of the user. The figure data includes at least one of face-shape data and body data of the user. In this implementation scenario, when the head data of the user needs to be obtained, the user may input a photo or video of themselves from which the feature data is extracted, or a camera may be started automatically to capture a frontal photo or video of the user. A face detection algorithm cuts out a square region centered on the face, for example an RGB image of 64×64, 112×112, 128×128 or 224×224 pixels, from which the head data is extracted. Specifically, after the square region centered on the user's face is obtained, the facial feature data and hairstyle data in the user's head data are obtained by a convolutional neural network pre-trained on a face recognition data set and a convolutional neural network pre-trained on a hairstyle recognition data set.
The convolutional neural network pre-trained on a face recognition data set can extract distinctive facial features, for example face shape and the distribution of the facial features. The data of the square face-centered region cut out in the previous step is fed into this network, and the output of the last layer before the classification layer is taken. Taking the well-performing vgg-16 network structure as an example, the output is a 1×1×4096 layer. The convolutional neural network pre-trained on a hairstyle recognition data set can extract distinctive hairstyle features, for example hair colour, length, and curliness. Similarly to the face recognition network, taking the vgg-16 network structure as an example, the output is a 1×1×4096 layer.
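A minimal sketch of this feature-extraction step is given below, assuming OpenCV's Haar-cascade face detector and a torchvision VGG-16 pretrained on ImageNet as stand-ins for the pre-trained face recognition network described above; the 224×224 crop size is one of the sizes mentioned, and the 4096-dimensional output corresponds to the 1×1×4096 layer.

```python
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

def crop_face(image_bgr, size=224):
    # Cut out a square region centered on the detected face, as described above.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    side = max(w, h)
    return cv2.resize(image_bgr[y:y + side, x:x + side], (size, size))

def face_descriptor(face_bgr):
    # Drop the final classification layer and read the 4096-dimensional
    # output of the last fully connected layer before it.
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
    vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
    preprocess = T.Compose([
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    rgb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        return vgg(preprocess(rgb).unsqueeze(0))  # tensor of shape (1, 4096)
```

The hairstyle network would follow the same pattern; only the pre-training data set differs.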
When the figure data of the user needs to be obtained, the feature data of the user may be obtained by wearing a tight-fitting garment equipped with sensors or by a 3D scanning device.
After it is detected that the user has selected a commodity, the model picture of the commodity is obtained. The model picture may be provided in advance by the merchant, or a suitable picture may be selected as the model picture from the display pictures of the commodity.
In other implementation scenarios, the model picture may also be provided by the user. For example, the user may see a street photo of a celebrity while browsing the web, download it, and use it as the model picture. The model picture may also be a photo of another person taken by the user, or a screenshot that the user captures from a video because they are interested in what an actor is wearing. If the model picture is provided by the user, the method also needs to start a search function in addition to providing the composite picture, to find the clothing worn by the model in the user-provided picture so that the user can conveniently purchase it later.
S102: replacing the corresponding feature data in the model picture with the feature data of the user by means of a preset algorithm, and taking the model picture obtained after the replacement as a composite picture.
In a specific implementation scenario, the corresponding feature data in the model picture obtained in step S101 is replaced with the feature data of the user obtained in step S101 by means of a preset algorithm. In this implementation scenario, the preset algorithm includes a convolutional neural network algorithm.
Specifically, please refer to Fig. 2 and Fig. 3. Fig. 2 is a schematic flowchart of the first embodiment of the step of replacing the corresponding feature data in the model picture with the feature data of the user in the online commodity selling method provided by the present application. This step comprises:
S201: obtaining facial dynamic feature data of the model from the model picture.
In a specific implementation scenario, the model picture provided by the merchant is generally a whole-body picture. A face detection algorithm is run on the picture to cut out the local head data from the whole-body model picture, for example a square region centered on the face, such as an RGB image of 64×64, 112×112, 128×128 or 224×224 pixels. In this implementation scenario, the size of the square region cut out in this step is consistent with the size of the square region of the user's head data cut out in step S101, to facilitate subsequent operations such as data synthesis. In other implementation scenarios, the two sizes may be inconsistent, in which case both are shrunk or enlarged by a certain proportion to the same size before data synthesis and other operations are carried out.
The facial dynamic feature data of the model is obtained from the cut-out head square region, wherein the facial dynamic feature data includes at least one of facial shadow, facial expression, and face angle. In this implementation scenario, the algorithm for obtaining the facial dynamic feature data is a convolutional neural network pre-trained for facial angle, shadow, and expression features.
The pre-trained facial angle, shadow, and expression feature convolutional neural network can be used to extract facial angle, shadow, and expression features. Similarly to the face recognition network, taking the well-performing vgg-16 network structure as an example, the output is the third-from-last layer of vgg-16, i.e. a 7×7×512 layer.
In other implementation scenarios, the face recognition pre-trained convolutional neural network, the hairstyle recognition pre-trained convolutional neural network, and the pre-trained facial angle, shadow, and expression feature convolutional neural network may also use the vgg-19 network structure.
S202: synthesizing the head data of the user with the facial dynamic feature data of the model by means of a head composition algorithm, to generate user head composite data carrying the facial dynamic feature data of the model.
In a specific implementation scenario, the head data of the user obtained in step S101 is synthesized with the facial dynamic feature data of the model obtained in step S201 by means of a head composition algorithm. In this implementation scenario, the head composition algorithm is a pre-trained inverted vgg-16 network; its input is the outputs of the face recognition network, the hairstyle recognition network, and the facial angle, shadow, and expression network, tiled and stacked, and its output is 224×224×3. In this implementation scenario, the output is a composite face image of a square region whose size is consistent with the square regions cut out in step S101 and step S201. The method used by the head composition algorithm in this step corresponds to the method used by the algorithm for obtaining the facial dynamic feature data in step S201. For example, in this implementation scenario the algorithm for obtaining the facial dynamic feature data is a vgg-16 network, and the head composition algorithm is a pre-trained inverted vgg-16 network.
After synthesis by the head composition algorithm, user head composite data carrying the facial dynamic feature data of the model is generated. In this composite data, the facial features and hairstyle features in the model picture have all been replaced with the user's, while the facial dynamic features remain those of the model picture.
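The following is a minimal sketch of such an inverted-VGG head composition network, assuming the identity and hairstyle descriptors (1×1×4096 each) are tiled to 7×7 and concatenated with the 7×7×512 angle/shadow/expression features before being decoded to a 224×224×3 face image; the layer counts and channel widths are illustrative, not the exact architecture of the specification.

```python
import torch
import torch.nn as nn

class HeadSynthesizer(nn.Module):
    """Decoder that maps stacked feature maps to a 224x224x3 composite face."""
    def __init__(self):
        super().__init__()
        in_ch = 4096 + 4096 + 512  # identity + hairstyle + dynamic features

        def up(cin, cout):
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True))

        self.decode = nn.Sequential(
            up(in_ch, 512),  # 7 -> 14
            up(512, 256),    # 14 -> 28
            up(256, 128),    # 28 -> 56
            up(128, 64),     # 56 -> 112
            up(64, 32),      # 112 -> 224
            nn.Conv2d(32, 3, 3, padding=1),
            nn.Sigmoid())    # RGB output in [0, 1]

    def forward(self, identity_feat, hairstyle_feat, dynamic_feat):
        # identity/hairstyle: (B, 4096, 1, 1); dynamic: (B, 512, 7, 7)
        x = torch.cat([identity_feat.expand(-1, -1, 7, 7),
                       hairstyle_feat.expand(-1, -1, 7, 7),
                       dynamic_feat], dim=1)
        return self.decode(x)
```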
S203: replacing the head data in the model picture with the user head composite data by means of an image fusion algorithm.
In a specific implementation scenario, the square region previously cut out of the model picture is replaced, by means of an image fusion algorithm, with the user head composite data carrying the model's facial dynamic feature data generated in step S202.
In some implementation scenarios, the commodity selected by the user is an accessory such as a hair clip, a hat, or glasses, which is only related to the head data of the user, and the model photo provided by the merchant may also only contain the head data of the model; in that case, replacing the head data alone is enough to fully show the wearing effect for the user. Specifically, referring to Fig. 3, Fig. 3 is a schematic flowchart of the second embodiment of the step of replacing the corresponding feature data in the model picture with the feature data of the user in the online commodity selling method provided by the present application.
As can be seen from the above description, in this implementation scenario the user head composite data is generated with convolutional neural networks, which makes it possible to generate the head composite data more accurately and quickly.
S301: obtaining three-dimensional head model parameters of the user.
In a specific implementation scenario, obtaining the face-shape data of the user is similar to obtaining the head data of the user: a face detection algorithm cuts out a square region centered on the face, for example an RGB image of 64×64, 112×112, 128×128 or 224×224 pixels.
The three-dimensional head model parameters of the user are calculated from the data of the cut-out square region with a three-dimensional reconstruction convolutional neural network, which is a convolutional neural network pre-trained with annotated face pictures and their three-dimensional head model parameters; it can therefore obtain the three-dimensional head model parameters of the user accurately. In this implementation scenario, the three-dimensional head model has 30 adjustable geometry vectors and 36 expression vectors, and the geometry vectors can cover foreheads, ears, and chins of different shapes. In this implementation scenario, the three-dimensional reconstruction convolutional neural network uses the Mobilenet-V2 structure.
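A minimal sketch of such a three-dimensional reconstruction network is given below, assuming a torchvision MobileNet-V2 backbone whose classifier is replaced by a linear layer regressing the 30 geometry and 36 expression coefficients (66 outputs in total); the pre-training on annotated face pictures is assumed and not shown.

```python
import torch.nn as nn
import torchvision.models as models

class HeadModelRegressor(nn.Module):
    """MobileNet-V2 backbone regressing 3D head model coefficients."""
    def __init__(self, n_geometry=30, n_expression=36):
        super().__init__()
        backbone = models.mobilenet_v2(weights=None)  # pre-trained weights assumed
        backbone.classifier = nn.Linear(backbone.last_channel,
                                        n_geometry + n_expression)
        self.backbone = backbone

    def forward(self, face_batch):        # (B, 3, 224, 224) face crops
        return self.backbone(face_batch)  # (B, 66) head model coefficients
```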
S302: obtaining, by means of an image fusion algorithm, a model picture in which the head has been replaced with the user's head.
In a specific implementation scenario, after the three-dimensional head model parameters of the user are obtained, the square user head composite data generated in step S202 is superimposed onto the face position in the original picture by a rotation-translation inverse operation according to the coordinates of the square region cut out in step S201. It is then fused with the three-dimensional head model parameters of the user by means of an image fusion algorithm, for example a Poisson blending algorithm in this implementation scenario, to obtain a model picture in which the user's face and hairstyle have been swapped in.
As can be seen from the above description, this embodiment can obtain three-dimensional head model parameters quickly and accurately with a three-dimensional reconstruction convolutional neural network, and by fusing the three-dimensional head model parameters, the head composite data, and the model picture with an image fusion algorithm, a model picture in which the user's head has been swapped in can be obtained accurately.
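A minimal sketch of the fusion step, assuming OpenCV's seamlessClone (an implementation of Poisson blending) and that the paste position from the earlier face-detection crop is known; a full-white mask pastes the whole synthesized head patch.

```python
import cv2
import numpy as np

def paste_head(model_img, head_patch, center):
    """Blend the synthesized user head into the model picture at `center` (x, y)."""
    mask = 255 * np.ones(head_patch.shape[:2], dtype=np.uint8)
    return cv2.seamlessClone(head_patch, model_img, mask, center,
                             cv2.NORMAL_CLONE)
```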
In other implementation scenarios, the commodity selected by the user may be a commodity such as a jacket or trousers whose whole-body wearing effect needs to be checked. In that case, the figure data of the model needs to be replaced with the figure data of the user. Specifically, referring to Fig. 4, Fig. 4 is a schematic flowchart of the third embodiment of the step of replacing the corresponding feature data in the model picture with the feature data of the user in the online commodity selling method provided by the present application. This step comprises:
S401: obtaining the figure data of the user, and processing the figure data with a three-dimensional reconstruction convolutional neural network to obtain three-dimensional figure data of the user.
In a specific implementation scenario, the figure data includes at least one of the face-shape data and body data of the user. To obtain the face-shape data of the user, a frontal photo or video of the user needs to be obtained, from which the face-shape data is extracted. The body data of the user may be obtained by wearing a tight-fitting garment equipped with sensors or by a 3D scanning device, or the user may measure each body part themselves and input the data. In this implementation scenario, the figure data includes the body data of the user.
S402: marking first key points in the three-dimensional figure data.
In a specific implementation scenario, first key points are marked in the three-dimensional figure data obtained in step S401. The first key points may be selected manually by the user, or may be selected by running a key point detection algorithm.
In this implementation scenario, the first key points are chosen in correspondence with the second key points marked in the model picture. Specifically, referring to Fig. 5, Fig. 5 is a schematic flowchart of an embodiment of the step of obtaining the second key points of the model picture in the online commodity selling method provided by the present application. This step comprises:
S501: obtaining three-dimensional model data of the model picture with a three-dimensional reconstruction convolutional neural network, and marking third key points in the three-dimensional model data.
In a specific implementation scenario, similarly to the method for obtaining the user's three-dimensional head model parameters described in step S301, the three-dimensional model data is likewise obtained with a three-dimensional reconstruction convolutional neural network. In this implementation scenario, the three-dimensional reconstruction convolutional neural network is a convolutional neural network pre-trained with a database of annotated human-body pictures and their three-dimensional human-body model parameters, and uses the Mobilenet-V2 structure. When the three-dimensional model data is obtained, the geometry vectors, pose vectors, and camera parameters of the model's three-dimensional human-body model can be obtained.
Third key points are marked in the three-dimensional model data. The third key points may be obtained by running a key point detection algorithm, for example the openpose algorithm. In other implementation scenarios, they may also be selected manually by the user.
S502: performing a two-dimensional projection of the three-dimensional model data marked with the third key points, to obtain the second key points of the model picture.
In a specific implementation scenario, the three-dimensional model data marked with the third key points is projected in two dimensions onto the model picture, and the two-dimensional projection positions of the third key points on the model picture are the second key points of the model picture.
As can be seen from the above description, in this embodiment the three-dimensional model data is obtained with a three-dimensional reconstruction convolutional neural network, the third key points are marked in the three-dimensional model data, and the two-dimensional projections of the third key points serve as the second key points, which makes the selection of the second key points more accurate and at the same time yields the projection parameters from the three-dimensional model data to the model picture.
S403: performing a two-dimensional projection of the three-dimensional figure data marked with the first key points, to obtain two-dimensional figure data of the first key points.
In a specific implementation scenario, after the second key points marked in the model picture are obtained, the first key points are marked in the three-dimensional figure data of the user according to the second key points. In other implementation scenarios, the first key points may also be marked in the three-dimensional figure data of the user according to the third key points.
The three-dimensional figure data marked with the first key points is projected in two dimensions to obtain the two-dimensional figure data of the first key points. The parameters of this two-dimensional projection are identical to the parameters set when the three-dimensional model data marked with the third key points was projected in two dimensions to obtain the second key points; only in this way is the two-dimensional figure data of the first key points guaranteed to be accurate.
In this implementation scenario, the parameters set during the two-dimensional projection include camera parameters (for example exposure and focal length) and parameters output by the network, and may also include the shooting angle, the projection angle, and so on.
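A minimal sketch of this projection step, assuming the reconstruction network provides a rotation R, translation t, and intrinsic matrix K; using the same R, t, and K for the user's first key points as were used for the model's third key points keeps the two projections comparable, as required above.

```python
import numpy as np

def project_keypoints(points_3d, R, t, K):
    """Project Nx3 key points to Nx2 pixel coordinates with fixed camera parameters."""
    cam = points_3d @ R.T + t      # world -> camera coordinates
    uv = cam @ K.T                 # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]  # perspective divide
```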
S404: performing image warping on the data at the positions in the model picture corresponding to the first key points according to the two-dimensional figure data of the first key points.
In a specific implementation scenario, image warping is performed on the data at the positions in the model picture corresponding to the first key points according to the two-dimensional figure data of the first key points. Since the positions of the first key points are set according to the second key points, the positions corresponding to the first key points are the positions of the second key points. In this implementation scenario, an image-warp algorithm is used to perform the warping. After the warping, the model's figure in the picture becomes the user's figure, and an image of the user wearing the commodity in the picture is obtained.
As can be seen from the above description, in this embodiment second key points are marked in the model picture, first key points are selected in the three-dimensional figure data of the user according to the second key points, and the data of the second key points in the model picture is warped according to the two-dimensional figure data of the first key points, so that an image of the user wearing the pictured commodity is obtained, accurately and intuitively reflecting the user's wearing effect.
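The specification does not fix a particular image-warp algorithm; the sketch below uses a piecewise-affine warp from scikit-image as one possible choice, moving the pixels around the model's second key points onto the user's projected first key points. Note that skimage's warp() expects the inverse mapping (output coordinate to source coordinate), hence the argument order in estimate().

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_figure(model_img, model_kps_2d, user_kps_2d):
    """Warp the model picture so the second key points land on the user's key points."""
    tform = PiecewiseAffineTransform()
    # estimate(dst, src): warp() maps each output pixel back into the source image
    tform.estimate(np.asarray(user_kps_2d, dtype=float),
                   np.asarray(model_kps_2d, dtype=float))
    return warp(model_img, tform, preserve_range=True).astype(model_img.dtype)
```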
S103: sending the composite picture to the user.
In a specific implementation scenario, after the composite picture is generated, it is sent to the user as a new model picture; the picture the user is browsing may be replaced with the composite picture automatically, or the composite picture may be displayed in a pop-up window. This allows the user to quickly and intuitively understand the effect of wearing the commodity themselves.
Specifically, when it is detected that the user is browsing a certain commodity, the model picture provided by the merchant of the commodity may be replaced with the new model picture (i.e. the composite picture) in order to increase the user's desire to buy. Alternatively, to save cost, the model picture may be replaced with the new model picture (i.e. the composite picture) only when the user meets a trigger condition such as browsing to it, clicking it, or double-clicking it. In other implementation scenarios, to further increase the user's desire to buy, the new model picture of a commodity may pop up automatically when the user's mouse hovers over its thumbnail or purchase link; the new model picture is generated from the model picture provided in advance by the merchant of the commodity, or from any one of several model pictures of the commodity.
In other implementation scenarios, the merchant may provide not only a model picture but also a model video or animated image. One photo may be intercepted from the model video or animated image at random or by a preset rule as the model picture, or each frame of the model video or animated image may be used as a model picture, with the above replacement processing applied to each frame to obtain a composite picture per frame, and the composite pictures of the frames then combined into a composite video or animated image.
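A minimal sketch of the per-frame processing for a model video, assuming a hypothetical make_composite(frame, user_features) function that applies the replacement pipeline described above to a single frame; the codec and container choices are illustrative.

```python
import cv2

def synthesize_video(model_video_path, out_path, user_features, make_composite):
    """Apply the frame-wise replacement and re-encode the result as a composite video."""
    cap = cv2.VideoCapture(model_video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(make_composite(frame, user_features))
    cap.release()
    writer.release()
```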
In other implementation scenarios, the browsing time of the user may also be detected to judge whether the user has a buying intention. If the user browses the detail page of a certain commodity for longer than a preset time, for example 5 s, the user's purchase intention is considered strong, and the new model picture may then be displayed, for example by popping it up or by replacing the original model picture on the commodity detail page with the new model picture.
As can be seen from the above description, in this embodiment convolutional neural networks are used to replace the key point data in the model picture with the data of the user's corresponding key points, so that a composite picture is obtained in which the user's appearance and figure wear the commodity worn by the model. Sending the composite picture to the user allows the user to quickly understand the effect of wearing the commodity themselves, so as to buy accurately, avoid purchase mistakes, and avoid waste.
In other embodiments, after the user views the composite picture and shows a purchase intention for the commodity, that is, when it is detected that the user wants to buy the commodity, a suitable size may be selected for the user automatically according to the previously obtained three-dimensional figure data of the user. Specifically, referring to Fig. 6, Fig. 6 is a schematic flowchart of the second embodiment of the online commodity selling method provided by the present application, which comprises:
S601: obtaining feature data of a user and obtaining a model picture of a commodity selected by the user.
S602: replacing the corresponding feature data in the model picture with the feature data of the user by means of a preset algorithm, and taking the model picture obtained after the replacement as a composite picture.
S603: sending the composite picture to the user.
In this implementation scenario, steps S601-S603 are substantially similar to steps S101-S103 in the first embodiment of the online commodity selling method provided by the present application, and are not repeated here.
S604: upon detecting that the user clicks the composite picture, jumping to the purchase page of the commodity according to the purchase link associated with the model picture, and selecting a size of the commodity that matches the user according to the feature data.
In a specific implementation scenario, when it is detected that the user clicks the composite picture, i.e. the user wants to buy the commodity, the purchase page of the commodity is reached via the purchase link associated with the model picture corresponding to the composite picture, and the size of the commodity that matches the user is selected according to the previously obtained feature data of the user, for example the user's three-dimensional figure data. The user can then buy directly without choosing a size, and choosing a wrong size is also avoided.
As can be seen from the above description, in this embodiment the corresponding feature data in the model picture of the commodity selected by the user is replaced with the feature data of the user by means of a convolutional neural network algorithm to generate a composite picture, and when the user chooses to buy, the size matching the user is selected automatically. This helps the user see the effect of wearing the model's outfit themselves and get the appropriate size, so that more suitable commodities can be selected and purchase mistakes avoided.
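A minimal sketch of the automatic size selection, assuming the user's measurements come from the figure data obtained earlier and that the merchant supplies a size chart; the field names and centimetre values below are illustrative assumptions, not data from the specification.

```python
# Illustrative size chart: upper bounds in centimetres for each size.
SIZE_CHART = {
    "S":  {"chest": 88,  "waist": 72},
    "M":  {"chest": 96,  "waist": 80},
    "L":  {"chest": 104, "waist": 88},
    "XL": {"chest": 112, "waist": 96},
}

def pick_size(user_measurements):
    """Return the smallest size whose limits accommodate the user's measurements."""
    for size, limits in SIZE_CHART.items():
        if all(user_measurements[key] <= limit for key, limit in limits.items()):
            return size
    return "XL"  # fall back to the largest available size
```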
Referring to Fig. 7, Fig. 7 is a schematic flowchart of an embodiment of the step of generating the composite picture in the online commodity selling method provided by the present application. This step comprises:
S701: obtaining head data of the user, the head data including at least one of facial feature data and hairstyle data of the user.
In a specific implementation scenario, the head data may be extracted from a photo or video input by the user, or a camera may be started automatically to capture a frontal photo or video of the user. In this implementation scenario, a frontal photo uploaded by the user, containing the face and hairstyle, is obtained, and the facial feature data and hairstyle data of the user are obtained from the photo. Specifically, a face detection algorithm cuts out a square region centered on the face from the frontal photo provided by the user, for example an RGB image of 64×64, 112×112, 128×128 or 224×224 pixels. After the square region centered on the user's face is obtained, the facial feature data and hairstyle data in the user's head data are obtained by the convolutional neural network pre-trained on a face recognition data set and the convolutional neural network pre-trained on a hairstyle recognition data set.
S702: obtaining facial dynamic feature data of the model from the model picture, the facial dynamic feature data including at least one of facial shadow, facial expression, and face angle.
In a specific implementation scenario, the model picture provided by the merchant is generally a whole-body picture. A face detection algorithm is run on the picture to cut out the local head region from the whole-body model picture, for example a square region centered on the face, such as an RGB image of 64×64, 112×112, 128×128 or 224×224 pixels.
The facial dynamic feature data of the model is obtained from the cut-out head square region, wherein the facial dynamic feature data includes at least one of facial shadow, facial expression, and face angle. In this implementation scenario, the algorithm for obtaining the facial dynamic feature data is the convolutional neural network pre-trained for facial angle, shadow, and expression features. This network can be used to extract facial angle, shadow, and expression features; similarly to the face recognition network, taking the well-performing vgg-16 network structure as an example, its output is the third-from-last layer of vgg-16, i.e. a 7×7×512 layer.
S703: synthesizing the head data of the user with the facial dynamic feature data of the model by means of a head composition algorithm, to generate user head composite data carrying the facial dynamic feature data of the model.
In a specific implementation scenario, the head data of the user obtained in step S701 is synthesized with the facial dynamic feature data of the model obtained in step S702 by means of a head composition algorithm. In this implementation scenario, the head composition algorithm is a pre-trained inverted vgg-16 network; its input is the outputs of the face recognition network, the hairstyle recognition network, and the facial angle, shadow, and expression network, tiled and stacked, and its output is 224×224×3. In this implementation scenario, the output is a composite face image of a square region.
After synthesis by the head composition algorithm, user head composite data carrying the facial dynamic feature data of the model is generated. In this composite data, the facial features and hairstyle features in the model picture have all been replaced with the user's, while the facial dynamic features remain those of the model picture.
Through the processing of steps S701-S703, the facial features and hairstyle features in the model picture have been replaced with the user's, but the face shape and figure are still the model's original ones. The model picture is therefore processed further below, so that the face shape and figure also become the user's.
S704: obtaining three-dimensional head model parameters of the user.
In a specific implementation scenario, a frontal photo of the user containing the whole head is obtained, and a face detection algorithm cuts out a square region centered on the face, for example an RGB image of 64×64, 112×112, 128×128 or 224×224 pixels.
The three-dimensional head model parameters of the user are calculated from the data of the cut-out square region with the three-dimensional reconstruction convolutional neural network, which is a convolutional neural network pre-trained with annotated face pictures and their three-dimensional head model parameters; it can therefore obtain the three-dimensional head model parameters of the user accurately.
S705: obtaining three-dimensional model data of the model picture with a three-dimensional reconstruction convolutional neural network, and marking third key points in the three-dimensional model data.
In a specific implementation scenario, the three-dimensional model data is obtained with the three-dimensional reconstruction convolutional neural network, and third key points are marked in the three-dimensional model data. The third key points may be obtained by running a key point detection algorithm, for example the openpose algorithm.
S706: performing a two-dimensional projection of the three-dimensional model data marked with the third key points, to obtain the second key points of the model picture.
In a specific implementation scenario, the three-dimensional model data marked with the third key points is projected in two dimensions onto the model picture, and the two-dimensional projection positions of the third key points on the model picture are the second key points of the model picture.
S707: marking first key points in the three-dimensional figure data according to the second key points.
In a specific implementation scenario, after the second key points marked in the model picture are obtained, the first key points are marked in the three-dimensional figure data of the user according to the second key points. In other implementation scenarios, the first key points may also be marked in the three-dimensional figure data of the user according to the third key points.
S708: performing a two-dimensional projection of the three-dimensional figure data marked with the first key points, to obtain two-dimensional figure data of the first key points.
The three-dimensional figure data marked with the first key points is projected in two dimensions to obtain the two-dimensional figure data of the first key points. The parameters of this two-dimensional projection are identical to the parameters set when the three-dimensional model data marked with the third key points was projected in two dimensions to obtain the second key points; only in this way is the two-dimensional figure data of the first key points guaranteed to be accurate.
In this implementation scenario, the parameters set during the two-dimensional projection include camera parameters (for example exposure and focal length) and parameters output by the network, and may also include the shooting angle, the projection angle, and so on.
S709: performing image warping on the data at the positions in the model picture corresponding to the first key points according to the two-dimensional figure data of the first key points.
In a specific implementation scenario, image warping is performed on the data at the positions in the model picture corresponding to the first key points according to the two-dimensional figure data of the first key points. Since the positions of the first key points are set according to the second key points, the positions corresponding to the first key points are the positions of the second key points. After the warping, the model's figure in the picture becomes the user's figure, and an image of the user wearing the commodity in the picture is obtained.
As can be seen from the above description, in this embodiment the corresponding feature data in the model picture of the commodity selected by the user is replaced with the feature data of the user by means of convolutional neural networks to generate a composite picture, which accurately and quickly reflects how the user would actually look wearing the commodity.
Referring to Fig. 8, Fig. 8 is a schematic flowchart of the first embodiment of the online commodity recommendation method provided by the present application. The online commodity recommendation method provided by the present application comprises:
S801: identifying the identity of the user, and obtaining the feature data of the user and the purchase data of the user according to the identity.
In a specific implementation scenario, the user may provide their own feature data when buying a commodity, or provide a picture or video from which the feature data is extracted. After the user completes a purchase, the feature data is recorded so that it can be used for the next purchase, avoiding repeated collection of the feature data and waste of resources. In this implementation scenario, the identity the user is currently logged in with, or was previously logged in with, may be detected; the identity is recognized, and the feature data recorded when that identity previously bought commodities is obtained. After the user's identity is recognized, the previous shopping records of that identity, or shopping-related data such as records of browsed commodities, are also obtained, from which the user's shopping preferences or shopping needs can be inferred, for example the merchants the user buys from frequently and the commodity types the user browses often.
S802: obtaining commodities to be recommended according to at least one of the feature data and the purchase data.
In a specific implementation scenario, the commodities to be recommended are obtained according to the feature data and the purchase data of the user. From the feature data of the user, data such as the user's skin tone, measurements, height, arm length, and leg length can be obtained, and suitable commodities can be selected for the user as commodities to be recommended according to this data. For example, if the user's skin tone is darker, clothes suited to a wheat skin tone are recommended; or if the user's figure is fuller, tight-fitting or unflattering clothes are not recommended.
The purchase data may be records of commodities the user has bought, records of commodities the user has searched for, records of commodities the user has browsed, records of commodities the user has added to favourites, and so on. From the purchase data of the user, the user's shopping habits can be obtained, so that suitable commodities are selected as commodities to be recommended, for example recommending new or discounted items from merchants the user has bought from repeatedly, or well-reviewed items in commodity categories the user has browsed repeatedly. The user's search records (for example Baidu or Google searches) may also be obtained to find keywords the user has recently been interested in, and commodities matching these keywords are provided as commodities to be recommended.
After suitable commodities are obtained according to the feature data and the purchase data of the user, commodities matching both the feature data and the purchase data are selected as the commodities to be recommended.
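An illustrative sketch of combining the two signals when choosing commodities to recommend: the catalogue is first filtered by attributes derived from the user's feature data, then ranked by how well each item matches the purchase data. All field names are assumptions for illustration.

```python
def recommend(catalogue, user_profile, purchase_history, top_k=10):
    """Filter by feature-derived attributes, then rank by purchase-history match."""
    bought_categories = {item["category"] for item in purchase_history}
    candidates = [
        goods for goods in catalogue
        if goods["fit"] in user_profile["suitable_fits"]          # e.g. loose cut
        and goods["colour_group"] in user_profile["suitable_colours"]
    ]
    candidates.sort(key=lambda g: (g["category"] in bought_categories,
                                   g.get("rating", 0)),
                    reverse=True)
    return candidates[:top_k]
```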
In other implementation scenarios, the commodities to be recommended may also be obtained according to only the feature data or only the purchase data of the user.
S803: obtaining a model picture of the commodity to be recommended.
In a specific implementation scenario, the model picture of the commodity to be recommended is obtained. In this implementation scenario, the model picture of the commodity to be recommended is a model picture provided by the seller. In other implementation scenarios, the model picture of the commodity to be recommended may also be selected arbitrarily from several model pictures of the commodity.
S804: replacing the corresponding feature data in the model picture with the feature data of the user by means of a preset algorithm, and taking the model picture obtained after the replacement as a composite picture.
In this implementation scenario, this step is substantially similar to step S102 of the first embodiment of the online commodity selling method provided by the present application, and is not repeated here.
In other implementation scenarios, the merchant may provide not only a model picture but also a model video or animated image. One photo may be intercepted from the model video or animated image at random or by a preset rule as the model picture, or each frame of the model video or animated image may be used as a model picture, with the above replacement processing applied to each frame to obtain a composite picture per frame, and the composite pictures of the frames then combined into a composite video or animated image.
S808: pushing the composite picture to the user.
In a specific implementation scenario, the composite picture generated in step S804 is pushed to the user as a new model picture. The composite picture may pop up while the user is browsing commodities, or be displayed at a corner or the centre of the display area, so that the user can intuitively understand their own wearing effect upon seeing the picture, increasing the user's desire to buy.
In other implementation scenarios, the new model picture (i.e. the composite picture) may also be displayed during the loading time when the user opens a shopping application, or in the form of a web page pop-up window.
In other implementation scenarios, while the composite picture is pushed to the user to recommend the commodity, the size of the commodity that matches the user is selected according to the feature data of the user obtained in step S801, and that size is recommended to the user, so that the user can choose a suitable size when buying and avoid purchase mistakes.
As can be seen from the above description, in this embodiment the corresponding feature data in the model picture is replaced with the feature data of the user by means of a preset algorithm, and the resulting composite picture accurately reflects how the user would actually look wearing the commodity; pushing the composite picture to the user can increase the user's desire to buy.
Referring to Fig. 9, Fig. 9 is a schematic structural diagram of an embodiment of the network terminal provided by the present application. The network terminal comprises a processor 11 and a memory 12; the processor 11 is coupled to the memory 12 and, in operation, controls itself and the memory 12 to implement the steps of the method in any of the above embodiments.
The network terminal may be a mobile phone, a notebook computer, a tablet computer, a vehicle-mounted computer, or the like, which is not limited here. For details of the method, refer to the description above, which is not repeated here.
When executing the program instructions stored in the memory 12, the processor 11 is able to perform the online commodity selling method of any embodiment described with reference to Figs. 1-7 and their related text, or the online commodity recommendation method of any embodiment described with reference to Fig. 8 and its related text.
Referring to Fig. 10, Fig. 10 is a schematic structural diagram of an embodiment of the device with a storage function of the present invention. The device 20 with a storage function stores program instructions 21, and the program instructions 21 can be executed to perform the online commodity selling method of any embodiment described with reference to Figs. 1-7 and their related text, or the online commodity recommendation method of any embodiment described with reference to Fig. 8 and its related text.
The device 20 with a storage function may specifically be a medium that can store the program instructions 21, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc, or may be a server storing the program instructions 21; the server can send the stored program instructions 21 to other devices for execution, or can run the stored program instructions 21 itself.
The above is only an embodiment of the present application and does not limit the patent scope of the application. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the application, applied directly or indirectly in other related technical fields, is likewise included in the patent protection scope of the application.

Claims (10)

1. An online commodity selling method, characterized by comprising:
obtaining feature data of a user and obtaining a model picture of a commodity selected by the user;
replacing corresponding feature data in the model picture with the feature data of the user using a preset algorithm, and taking the model picture obtained after the replacement processing as a composite picture; and
sending the composite picture to the user.
2. The method according to claim 1, wherein the feature data comprises at least one of head data and figure data of the user;
the head data comprises at least one of facial feature data and hairstyle data of the user;
the figure data comprises at least one of face shape data and body data of the user; and
the preset algorithm comprises a convolutional neural network algorithm.
3. The method according to claim 2, characterized in that, when the feature data comprises the head data of the user, the obtaining feature data of a user comprises:
obtaining the head data of the user from picture and/or video data uploaded by the user;
and the replacing corresponding feature data in the model picture with the feature data of the user using a preset algorithm comprises:
obtaining facial dynamic feature data of a model from the model picture, wherein the facial dynamic feature data comprises at least one of the following: facial shadow, facial expression, and face angle;
synthesizing the head data of the user with the facial dynamic feature data of the model using a head composition algorithm to generate user head composite data carrying the facial dynamic feature data of the model; and
replacing the head data in the model picture with the user head composite data using an image fusion algorithm.
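For illustration, a minimal sketch of how a head composition and image fusion step of this kind could be realized with off-the-shelf tools; the similarity alignment and Poisson blending from OpenCV are assumptions of the sketch rather than the claimed algorithm, and the facial landmarks are assumed to come from an upstream detector.

    import cv2
    import numpy as np

    def fuse_user_head(model_img, user_head_img, user_landmarks, model_landmarks):
        """Warp the user's head onto the model picture and blend it in.
        model_img / user_head_img: BGR uint8 images; *_landmarks: corresponding
        facial key points as (N, 2) pixel coordinates from an assumed detector."""
        user_pts = np.asarray(user_landmarks, dtype=np.float32)
        model_pts = np.asarray(model_landmarks, dtype=np.float32)

        # Similarity transform mapping the user's face onto the model's face angle and scale.
        M, _ = cv2.estimateAffinePartial2D(user_pts, model_pts)
        h, w = model_img.shape[:2]
        aligned_head = cv2.warpAffine(user_head_img, M, (w, h))

        # Blend the aligned head over the model's head region; Poisson blending keeps the
        # model picture's lighting and shadow (its facial dynamic features).
        warped_pts = cv2.transform(user_pts[:, None, :], M).astype(np.int32)
        hull = cv2.convexHull(warped_pts)
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, hull, 255)
        cx, cy = hull.reshape(-1, 2).mean(axis=0)
        return cv2.seamlessClone(aligned_head, model_img, mask,
                                 (int(cx), int(cy)), cv2.NORMAL_CLONE)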
4. The method according to claim 2 or 3, characterized in that, when the feature data comprises the figure data of the user, the replacing corresponding feature data in the model picture with the feature data of the user using a preset algorithm comprises:
obtaining the figure data of the user, and processing the figure data using a three-dimensional reconstruction convolutional neural network to obtain three-dimensional figure data of the user;
marking a first key point in the three-dimensional figure data;
performing two-dimensional projection on the three-dimensional figure data marked with the first key point to obtain two-dimensional figure data of the first key point; and
performing image warping on data at a position in the model picture corresponding to the first key point according to the two-dimensional figure data of the first key point.
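A minimal sketch of the warping step, assuming the first and second key points are already available as 2-D pixel coordinates; a piecewise-affine warp from scikit-image stands in for the image warping, since the claim does not prescribe a particular warping algorithm.

    import numpy as np
    from skimage.transform import PiecewiseAffineTransform, warp

    def warp_model_to_user(model_img, model_kpts_2d, user_kpts_2d):
        """Warp the model picture so its key points move onto the user's projected key
        points. Key points are (x, y) pixel coordinates."""
        h, w = model_img.shape[:2]
        # Pin the image corners so the background stays in place.
        corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], dtype=np.float64)
        out_pts = np.vstack([np.asarray(user_kpts_2d, dtype=np.float64), corners])
        in_pts = np.vstack([np.asarray(model_kpts_2d, dtype=np.float64), corners])

        tform = PiecewiseAffineTransform()
        # skimage.transform.warp expects the inverse map (output -> input coordinates),
        # so the transform is estimated from the user's points to the model's points.
        tform.estimate(out_pts, in_pts)
        return warp(model_img, tform, preserve_range=True).astype(model_img.dtype)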
5. The method according to claim 4, characterized in that the marking a first key point in the three-dimensional figure data comprises:
obtaining a second key point marked in the model picture, and marking the first key point in the three-dimensional figure data according to the second key point;
and the performing image warping on data at a position in the model picture corresponding to the first key point according to the two-dimensional figure data of the first key point comprises:
performing image warping on data of the second key point in the model picture according to the two-dimensional figure data of the first key point.
6. The method according to claim 5, characterized in that the obtaining a second key point marked in the model picture comprises:
obtaining three-dimensional model data from the model picture using a three-dimensional reconstruction convolutional neural network, and marking a third key point in the three-dimensional model data; and
performing two-dimensional projection on the three-dimensional model data marked with the third key point to obtain the second key point of the model picture.
7. The method according to claim 6, characterized in that the parameters set when performing two-dimensional projection on the three-dimensional figure data marked with the first key point are identical to the parameters set when performing two-dimensional projection on the three-dimensional model data marked with the third key point.
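To make the effect of identical projection parameters concrete, the sketch below applies one fixed (and assumed) orthographic scale and offset to both the user's and the model's 3-D key points, so the resulting first and second key points lie in the same image coordinate frame; the parameter values and key-point coordinates are invented for illustration.

    import numpy as np

    def orthographic_projection(points_3d, scale, offset):
        """Drop the depth axis and map the remaining coordinates to pixels."""
        pts = np.asarray(points_3d, dtype=np.float64)
        return pts[:, :2] * scale + np.asarray(offset, dtype=np.float64)

    # One fixed parameter set used for BOTH projections (cf. claim 7); values assumed.
    SCALE, OFFSET = 220.0, (256.0, 420.0)

    # Example 3-D key points, (x, y, z) in a body-centred frame; values invented.
    user_kpts_3d = np.array([[0.00, 1.55, 0.05], [0.18, 1.40, 0.02], [-0.18, 1.40, 0.02]])
    model_kpts_3d = np.array([[0.00, 1.62, 0.04], [0.17, 1.47, 0.03], [-0.17, 1.47, 0.03]])

    first_kpts_2d = orthographic_projection(user_kpts_3d, SCALE, OFFSET)    # user's key points
    second_kpts_2d = orthographic_projection(model_kpts_3d, SCALE, OFFSET)  # model's key points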
8. The method according to claim 1, wherein the method further comprises:
when it is detected that the user clicks the composite picture, jumping to a purchase page of the commodity according to a purchase link associated with the model picture, and selecting a size of the commodity that matches the user according to the feature data.
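A small sketch of such click handling, assuming the shop front end accepts the preselected size as a query parameter named "size" (an assumed convention, not part of the claim):

    from urllib.parse import urlencode

    def purchase_link_on_click(purchase_url: str, recommended_size: str) -> str:
        """Build the redirect target used when the user clicks the composite picture."""
        separator = "&" if "?" in purchase_url else "?"
        return purchase_url + separator + urlencode({"size": recommended_size})

    # Example: clicking the composite picture jumps to the purchase page with size "M" preselected.
    print(purchase_link_on_click("https://shop.example.com/item/123", "M"))
    # -> https://shop.example.com/item/123?size=M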
9. A network terminal, characterized by comprising a processor and a memory, the processor being coupled to the memory,
wherein the memory is configured to store program instructions, and the processor is configured to run the program instructions in the memory to implement the steps of the online commodity selling method according to any one of claims 1-8.
10. A device with a storage function, characterized in that program instructions are stored thereon, the program instructions being executable to implement the steps of the online commodity selling method according to any one of claims 1-8.
CN201910335234.4A 2019-04-24 2019-04-24 Commodity selling method, the network terminal and the device with store function on a kind of line Pending CN110084675A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910335234.4A CN110084675A (en) 2019-04-24 2019-04-24 Commodity selling method, the network terminal and the device with store function on a kind of line

Publications (1)

Publication Number Publication Date
CN110084675A true CN110084675A (en) 2019-08-02

Family

ID=67416466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910335234.4A Pending CN110084675A (en) 2019-04-24 2019-04-24 Commodity selling method, the network terminal and the device with store function on a kind of line

Country Status (1)

Country Link
CN (1) CN110084675A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160210500A1 (en) * 2015-01-15 2016-07-21 Samsung Electronics Co., Ltd. Method and apparatus for adjusting face pose
CN104899563A (en) * 2015-05-29 2015-09-09 深圳大学 Two-dimensional face key feature point positioning method and system
CN107067429A (en) * 2017-03-17 2017-08-18 徐迪 Video editing system and method that face three-dimensional reconstruction and face based on deep learning are replaced
CN107610202A (en) * 2017-08-17 2018-01-19 北京觅己科技有限公司 Marketing method, equipment and the storage medium replaced based on facial image
CN109242760A (en) * 2018-08-16 2019-01-18 Oppo广东移动通信有限公司 Processing method, device and the electronic equipment of facial image

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838042A (en) * 2019-10-29 2020-02-25 深圳市掌众信息技术有限公司 Commodity display method and system
CN110838042B (en) * 2019-10-29 2022-08-30 深圳市掌众信息技术有限公司 Commodity display method and system
CN111881351A (en) * 2020-07-27 2020-11-03 深圳市爱深盈通信息技术有限公司 Intelligent clothing recommendation method, device, equipment and storage medium
CN112036984A (en) * 2020-09-04 2020-12-04 烟台冰兔网络科技有限公司 E-commerce operation big data management system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190802