CN116029783A - Method and device for displaying try-on effect of goods - Google Patents


Info

Publication number
CN116029783A
CN116029783A (application CN202211531438.3A)
Authority
CN
China
Prior art keywords
target
goods
display data
effect display
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211531438.3A
Other languages
Chinese (zh)
Inventor
姜颖
张程
陈子鉴
余阳
颜盈盈
冯颖慧
段虞峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202211531438.3A priority Critical patent/CN116029783A/en
Publication of CN116029783A publication Critical patent/CN116029783A/en
Pending legal-status Critical Current

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the specification provides a method and a device for displaying the try-on effect of goods. The method comprises the following steps: responding to a goods selection request for a target goods sent by a client, acquiring user stature attribute information; determining target effect display data from the effect display data of the target goods according to the user stature attribute information, and sending the target effect display data to the client, wherein the effect display data are effect display data of the target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes; and responding to a size comparison instruction sent by the client, generating size matching information according to the target effect display data, and sending the size matching information to the client.

Description

Method and device for displaying try-on effect of goods
Technical Field
The embodiment of the specification relates to the technical field of data processing, in particular to a method for displaying the try-on effect of goods.
Background
With the development of internet technology, online shopping applications provide users with increasingly convenient shopping services. To serve users better, goods suppliers usually shoot the goods to be sold and publish the resulting pictures or videos to an online shopping platform so that users can view them conveniently. In the prior art, goods suppliers mostly arrange these shoots themselves, and the shooting modes are largely uniform. For example, in a clothing scenario, a merchant invites only a few models for the shoot, so the display effect is limited to the wearing effect corresponding to those models' height and weight. However, consumers use goods in a wide variety of scenarios, and the supplier's limited shots cannot intuitively convey how the goods will look in a consumer's own scenario; as a result, the order conversion rate drops while the volume of consumer inquiries to the supplier rises markedly. An effective scheme is therefore needed to solve these problems.
Disclosure of Invention
In view of this, the present embodiments provide five methods for displaying the try-on effect of goods. One or more embodiments of the present specification further relate to five goods try-on effect display devices, a computing device, a computer-readable storage medium, and a computer program, which overcome the technical drawbacks of the prior art.
According to a first aspect of embodiments of the present disclosure, there is provided a method for displaying a try-on effect of an article, applied to a server, including:
responding to an article selection request for a target article sent by a client, and acquiring user stature attribute information;
determining target effect display data from the effect display data of the target goods according to the user stature attribute information, and sending the target effect display data to the client, wherein the effect display data are effect display data of the target goods with different sizes which are worn on corresponding parts of virtual models with different stature attributes;
and responding to a size comparison instruction sent by a client, generating size matching information according to the target effect display data, and sending the size matching information to the client.
According to a second aspect of embodiments of the present disclosure, there is provided an article try-on effect display device, applied to a server, including:
the request acquisition module is configured to respond to an article selection request for a target article sent by the client and acquire user stature attribute information;
the effect determining module is configured to determine target effect display data from the effect display data of the target goods according to the user stature attribute information, and send the target effect display data to the client, wherein the effect display data are effect display data of the target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes;
and the size matching module is configured to respond to a size comparison instruction sent by the client, generate size matching information according to the target effect display data and send the size matching information to the client.
According to a third aspect of embodiments of the present disclosure, there is provided a method for displaying a try-on effect of an article, applied to a client, including:
responding to an article selection instruction, determining a target article, and sending an article selection request to the server according to the target article;
and receiving the size matching information returned by the server, and displaying the size matching information through a goods display interface.
According to a fourth aspect of embodiments of the present disclosure, there is provided an article try-on effect display device, applied to a client, including:
the request sending module is configured to respond to the goods selection instruction, determine a target goods and send a goods selection request to the server according to the target goods;
the data receiving module is configured to receive the size matching information returned by the server and display the size matching information through the goods display interface.
According to a fifth aspect of embodiments of the present disclosure, there is provided a method for displaying a try-on effect of an article, applied to a server, including:
responding to an article selection request for a target article sent by a live client, and acquiring user stature attribute information;
determining target effect display data from the effect display data of the target goods according to the user stature attribute information, and sending the target effect display data to the live client, wherein the effect display data are effect display data of the target goods with different sizes which are worn on corresponding parts of virtual models with different stature attributes;
and responding to a size comparison instruction sent by a live broadcast client, generating size matching information according to the target effect display data, and sending the size matching information to the live broadcast client.
According to a sixth aspect of embodiments of the present disclosure, there is provided an article try-on effect display device, applied to a server, including:
the request acquisition module is configured to respond to an article selection request for a target article sent by the live broadcast client side and acquire user stature attribute information;
the effect determining module is configured to determine target effect display data from the effect display data of the target goods according to the user stature attribute information, and send the target effect display data to the live client, wherein the effect display data are effect display data of the target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes;
and the size matching module is configured to respond to the size comparison instruction sent by the live broadcast client, generate size matching information according to the target effect display data and send the size matching information to the live broadcast client.
According to a seventh aspect of embodiments of the present disclosure, there is provided an item try-on effect display method, applied to a client, including:
responding to an article selection instruction, determining at least two target articles, and sending an article selection request to the server according to the at least two target articles;
receiving at least two target effect display data corresponding to user stature attribute information returned by the server side in response to the article selection request, wherein the at least two target effect display data are effect display data of at least two sizes of target articles to be worn on corresponding positions of a virtual model with the same stature attribute;
and generating size matching information according to the at least two target effect display data, and displaying the size matching information through an article display interface, wherein the size matching information comprises a difference value corresponding to at least two size values of the target article.
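By way of illustration only (no code appears in the original disclosure), the difference values between two candidate sizes' measurements, as the size matching information of this aspect describes, can be sketched in Python; the measurement keys and units are hypothetical:

```python
def size_difference(size_a: dict, size_b: dict) -> dict:
    """Return per-measurement difference values between two size values
    of a target goods (e.g. size M vs. size L of a jacket).
    Keys such as 'garment_length_cm' are illustrative, not from the patent."""
    # Only compare measurements present in both size specifications.
    shared = size_a.keys() & size_b.keys()
    return {key: size_b[key] - size_a[key] for key in shared}
```

A client could then display, say, "chest +4 cm, garment length +2 cm" when the user compares two sizes of the same goods.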
According to an eighth aspect of embodiments of the present disclosure, there is provided an article try-on effect display device, applied to a client, including:
the request sending module is configured to respond to the goods selection instruction, determine at least two target goods and send a goods selection request to the server according to the at least two target goods;
the effect determining module is configured to receive at least two target effect display data corresponding to the user stature attribute information, returned by the server in response to the goods selection request, wherein the at least two target effect display data are effect display data of the target goods in at least two sizes worn on corresponding parts of a virtual model with the same stature attribute;
the size matching module is configured to generate size matching information according to the at least two target effect display data, and display the size matching information through an item display interface, wherein the size matching information comprises a difference value corresponding to at least two size values of the target item.
According to a ninth aspect of embodiments of the present disclosure, there is provided a method for displaying a try-on effect of an article, applied to a live client, including:
the method comprises the steps that a live broadcast client creates an article selection request of a related target article and sends the article selection request to a live broadcast server;
receiving size matching information fed back by the live broadcast server side aiming at the data acquisition request;
and displaying the size matching information through an article display interface, so as to display the display effect of the target article worn on the corresponding part of the virtual model.
According to a tenth aspect of embodiments of the present disclosure, there is provided an item try-on effect display device, applied to a live client, including:
the request sending module is configured to create an article selection request of the associated target article by the live broadcast client and send the article selection request to the live broadcast server;
the data receiving module is configured to receive size matching information fed back by the live broadcast server side aiming at the data acquisition request;
the effect display module is configured to display the size matching information through an article display interface, so as to display the display effect of the target article worn on the corresponding part of the virtual model.
According to an eleventh aspect of embodiments of the present specification, there is provided a computing device comprising:
a memory and a processor;
the memory is used for storing computer executable instructions, and the processor is used for executing the computer executable instructions, and the computer executable instructions realize the steps of the goods try-on effect display method when being executed by the processor.
According to a twelfth aspect of embodiments of the present specification, there is provided a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the steps of the above-described item try-on effect presentation method.
According to a thirteenth aspect of the embodiments of the present specification, there is provided a computer program, wherein the computer program, when executed in a computer, causes the computer to execute the steps of the above-described article try-on effect presentation method.
In order to feed back effect display data corresponding to a user's stature attribute information and to determine size matching information for that data, the goods try-on effect display method provided in this specification acquires user stature attribute information in response to a goods selection request for a target goods sent by a client, determines target effect display data from the effect display data of the target goods according to the stature attribute information, and sends the target effect display data to the client, the effect display data being effect display data of the target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes; in response to a size comparison instruction sent by the client, it generates size matching information according to the target effect display data and sends the size matching information to the client. Because the size matching information of the target effect display data is displayed on the client, the user can intuitively see the specific try-on effect, which makes purchasing more convenient and improves the user's shopping experience.
Drawings
Fig. 1 is a schematic diagram of a method for displaying the try-on effect of an article according to an embodiment of the present disclosure;
FIG. 2a is a flowchart of a method for displaying a try-on effect of an article according to an embodiment of the present disclosure;
FIG. 2b is a schematic illustration of a first try-on effect provided by one embodiment of the present disclosure;
FIG. 2c is a schematic illustration of a second try-on effect provided by one embodiment of the present disclosure;
FIG. 2d is a schematic illustration of a third try-on effect provided by one embodiment of the present disclosure;
fig. 3 is a schematic structural view of an article try-on effect display device according to an embodiment of the present disclosure;
FIG. 4a is a flowchart of another method for displaying the try-on effect of an article according to one embodiment of the present disclosure;
FIG. 4b is a schematic illustration of a first try-on effect interface provided by one embodiment of the present disclosure;
FIG. 4c is a schematic diagram of a second try-on effect interface provided by one embodiment of the present disclosure;
FIG. 4d is a schematic diagram of a third try-on effect interface provided by one embodiment of the present disclosure;
FIG. 4e is a schematic diagram of a fourth try-on effect interface provided by one embodiment of the present disclosure;
FIG. 4f is a schematic illustration of a fifth try-on effect interface provided by one embodiment of the present disclosure;
FIG. 5 is a schematic view of another device for displaying the effect of fitting an article according to an embodiment of the present disclosure;
FIG. 6 is a flowchart of yet another method for displaying the try-on effect of an article according to one embodiment of the present disclosure;
fig. 7 is a schematic structural view of yet another article try-on effect display device according to an embodiment of the present disclosure;
FIG. 8a is a flowchart of yet another method for displaying the try-on effect of an article according to one embodiment of the present disclosure;
FIG. 8b is an interface diagram of yet another method for displaying try-on effects of an article according to one embodiment of the present disclosure;
fig. 9 is a schematic structural view of still another article try-on effect display device according to an embodiment of the present disclosure;
FIG. 10 is a flowchart of yet another method for displaying the try-on effect of an article according to one embodiment of the present disclosure;
FIG. 11 is a schematic view of a structure of a device for displaying the effect of fitting an article according to an embodiment of the present disclosure;
FIG. 12 is a block diagram of a computing device provided in one embodiment of the present description.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present specification. However, the specification can be implemented in many other ways than those described herein, and those skilled in the art can make similar generalizations without departing from its substance; the specification is therefore not limited by the specific implementations disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present specification, a first may also be referred to as a second and, similarly, a second may also be referred to as a first. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to a determination".
In the present specification, there are provided five kinds of article try-on effect display methods, and the present specification relates to five kinds of article try-on effect display apparatuses, a computing device, and a computer-readable storage medium, which are described in detail one by one in the following embodiments.
Referring to fig. 1, fig. 1 shows a schematic diagram of a goods try-on effect display method according to an embodiment of the present disclosure. In order to feed back effect display data corresponding to a user's stature attribute information and to determine size matching information for that data, the user's stature attribute information may be acquired in response to a goods selection request for a target goods sent by a client; target effect display data may be determined from the effect display data of the target goods according to the stature attribute information and sent to the client, the effect display data being effect display data of the target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes; and, in response to a size comparison instruction sent by the client, size matching information may be generated according to the target effect display data and sent to the client. Because the size matching information of the target effect display data is displayed on the client, the user can intuitively see the specific try-on effect, which makes purchasing more convenient and improves the user's shopping experience.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or fully authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards of the related country and region, and provide corresponding operation entries for the user to select authorization or rejection.
Referring to fig. 2a, fig. 2a is a flowchart illustrating a method for displaying the try-on effect of goods according to an embodiment of the present disclosure, which specifically includes the following steps.
Step S202: and responding to an article selection request for the target article sent by the client, and acquiring the user stature attribute information.
The server side serving as the execution main body of the scheme can be a server side corresponding to an article interaction platform providing article display and transaction functions, can also be a server side corresponding to a live broadcast platform with the article display and transaction functions, can also be a server side corresponding to a virtual fitting application providing article wearing effect display functions, and the like, and the specification is not limited to the server side. Correspondingly, the client can be a client corresponding to the server, and the bearing equipment of the client can be a personal computer, a mobile phone or a tablet personal computer and other equipment. The target article may be a wearable article, for example, in a case where a wearing object of the article (i.e., an entity image corresponding to the virtual model) is a character image (e.g., an adult, a child, etc.), the article may be a wearable article such as a garment, a shoe cap, a sock, a case, an ornament, etc.; in the case where the wearing object of the article is an animal figure (such as cat, dog, etc.), the article may be a wearable article such as a chain, a tether, clothing, a shoe cap, etc.; in the case that the wearing object of the article is an article image (such as a vehicle, an electronic device, etc.), the article may be a wearable article such as a vehicle raincoat, an electronic device protective cover, a screen film, etc. Any of the images may be a simulated model close to the real image or an anthropomorphic cartoon image model, which is not limited in the specification. The item selection request may be a request generated after the user selects an item, the request being for selecting a target item.
Accordingly, the stature attribute information may include stature size information, for example, the stature size information may include at least one of height, weight, shoulder width, chest circumference, waist circumference, hip circumference, leg length, hand length, head length, etc.; for cats, dogs, etc., the stature size information may include at least one of height, weight, body length, etc., and for the object image of a vehicle, etc., the stature size information may include at least one of body length, body width, body height, mirror position, etc.
In practical application, when a user browses a target goods through the client, the server can acquire the user's stature attribute information and use it to automatically generate the effect display corresponding to the target goods.
For example, when a user is shopping at the client and clicks the goods page of a jacket, the server responds to the goods selection request for the target goods sent by the client and acquires the user's stature attribute information.
According to this embodiment of the specification, the user's stature attribute information is acquired when the goods selection request is received, so that a matched display effect can be selected in subsequent steps, improving efficiency.
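To make the flow concrete, the following is a minimal Python sketch, not part of the original disclosure, of how a server might pick the target effect display data whose virtual model is the closest stature match to the user; the class, field names, and equal-weight distance metric are all illustrative assumptions:

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen -> hashable, usable as a dict key
class Stature:
    """A small subset of the stature attribute information listed above."""
    height_cm: float
    weight_kg: float


def select_target_effect_data(user: Stature, effect_data: dict) -> str:
    """Return the pre-rendered effect display data whose virtual model's
    stature attributes are nearest to the user's (illustrative L1 distance).
    `effect_data` maps each virtual model's Stature to its rendered data."""
    def distance(model: Stature) -> float:
        return (abs(model.height_cm - user.height_cm)
                + abs(model.weight_kg - user.weight_kg))
    best_model = min(effect_data, key=distance)
    return effect_data[best_model]
```

For instance, a user of 172 cm / 63 kg would be served the render prepared for the 170 cm / 60 kg virtual model rather than the 160/50 or 180/75 ones.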
Further, the user stature attribute information can be obtained in several ways: the user may provide it directly, for example his or her own stature attribute information, that of a relative, or that of a friend; alternatively, it may be determined automatically from the user's purchase history. The specific modes are described below.
In one implementation manner, the obtaining, in response to the article selection request for the target article sent by the client, user stature attribute information includes:
responding to an article selection request for a target article sent by a client, and sending a history authorization request to the client;
and under the condition that the client returns the authorization information, acquiring the historical purchase information, and acquiring the user stature attribute information according to the historical purchase information.
The history authorization request is a request asking the user for permission to access his or her historical purchase records.
In practical application, after receiving the data acquisition request, if the user has not actively submitted stature attribute information, the server may parse the request to determine the user identifier of the user, and then extract the stature attribute information corresponding to that identifier from a stature attribute information base and use it as the user's stature attribute information. Alternatively, when the user actively submits stature attribute information, the server may parse the received data acquisition request to obtain the stature attribute information carried in it. Either way, effect display data can then be selected according to the determined stature attribute information, so that the selected data is closer to the user's actual stature.
For example, if the user bought a coat before, user stature attribute information already exists in the system. When the user shops at the client and clicks the goods page of a jacket, the server responds to the goods selection request for the target goods sent by the client and acquires the user's stature attribute information: since the user has not actively filled it in, the server searches past purchase records by the user's identifier to obtain it.
For another example, if the user has previously bought jackets both for himself or herself and for a friend, several different stature attribute records correspond to the user in the system. When the user shops at the client and clicks the goods page of a jacket, the server responds to the goods selection request and attempts to acquire the user's stature attribute information. Finding that the user has not actively filled it in, the server searches past purchase records by the user's identifier and finds multiple stature attribute records; it then sends a query to the client, such as "Which stature attribute information do you want to use?", to obtain the user's stature attribute information.
It should be noted that the stature attribute information may also be determined automatically according to rules. Specifically, when the user has not actively filled in stature attribute information and a search of past purchase records by the user's identifier returns multiple records, the choice can be made according to the attributes of the goods being purchased. For example, if the user is a woman and the goods to be purchased is a women's coat, the most frequently used stature attribute record is selected from past orders. The embodiments of this specification do not limit the rule used to determine the stature attribute information, as long as the purpose is achieved.
In conclusion, determining the user's stature attribute information in different ways supports service requirements in different scenarios: whenever the user wants to view effect display data, data close to the user's stature attributes can be fed back through any of several modes.
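The fallback chain described in the embodiments above (actively submitted information first, then purchase history filtered by the goods' attributes, then the most frequently used record) can be sketched as follows; this is an illustrative reading of the text, and the record format and category labels are hypothetical:

```python
from collections import Counter


def resolve_stature(submitted, records, target_category):
    """Pick the user stature attribute information to use.

    submitted:       stature info the user actively filled in, or None
    records:         past purchases as (stature_info, goods_category) pairs
    target_category: category of the goods currently being selected
    """
    if submitted is not None:       # actively submitted info wins
        return submitted
    # Prefer records whose goods category matches the target goods,
    # per the rule-based example in the text (e.g. women's coats).
    same_cat = [s for s, c in records if c == target_category]
    pool = same_cat if same_cat else [s for s, _ in records]
    if not pool:
        return None  # caller should ask the client to fill the info in
    # Most frequently used stature record from past orders.
    return Counter(pool).most_common(1)[0][0]
```

If no record survives, the server would fall back to querying the client, as in the "Which stature attribute information do you want to use?" example.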
Further, in order to prepare the selectable effect display data in advance, so that effect display data close to the user's stature attribute information can be fed back quickly once the user submits a data acquisition request, the effect display data needs to be pre-generated from the related information submitted by the goods provider. In this embodiment, the specific implementation is as follows:
In some embodiments, before the acquiring the user stature attribute information in response to the item selection request for the target item sent by the client, the method further includes:
receiving an item identifier submitted by an item provider to which the target item belongs;
loading target specification description information corresponding to the target goods according to the goods identification, and acquiring virtual model stature attribute information of the virtual model;
and generating effect display data for wearing the target goods on the corresponding parts of the virtual model according to the target specification description information and the virtual model stature attribute information.
Specifically, the article provider is the provider that supplies the target goods; it uploads the goods identifier through a goods supply end, which is a client device used by the provider, including but not limited to a mobile phone, a computer, or an intelligent wearable device. The goods identifier is the unique identifier corresponding to the target goods, composed of digital characters. Correspondingly, the target specification description information refers to attribute description information of the goods in different dimensions and may include size information. For example, for a jacket-type goods, the size information may include one or more of garment length, height, chest circumference, waistline, body shape, and the like; if the specification description information of a jacket is "170/92A", the jacket is suitable for a height of 170 cm, a chest circumference of 92 cm, and a body shape of type A. Alternatively, for a protective case of an electronic device, the size information may be length, width, thickness, the positions of the openings, and the like. Different goods may have correspondingly different specification description information, which this specification does not limit; it should be understood, however, that the more specification description information is obtained for a goods, the more accurate the subsequently generated effect display data of the wearing effect will be.
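A size string like the "170/92A" example above can be split into its components before effect display data is generated. The following sketch handles only that exact pattern; real garment size designations have more variants, and the function name is an assumption, not from the patent:

```python
import re


def parse_size_designation(spec: str):
    """Parse a designation like '170/92A' into (height_cm, chest_cm, body_type).

    Covers only the simple height/girth+letter pattern shown in the text;
    anything else raises ValueError rather than guessing.
    """
    m = re.fullmatch(r"(\d+)/(\d+)([A-Z])", spec)
    if m is None:
        raise ValueError(f"unrecognized size designation: {spec!r}")
    return int(m.group(1)), int(m.group(2)), m.group(3)
```

So `parse_size_designation("170/92A")` yields `(170, 92, "A")`, which can then be matched against the virtual models' stature attribute information.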
Specifically, the virtual model is a virtual model capable of trying on different goods and corresponds to different stature attribute information. In this embodiment, the virtual model may take various forms. For example, it may be a virtual model obtained by three-dimensionally modeling a physical model: a physical model (for example, a real mannequin) may be photographed from a plurality of continuous or spaced observation angles to obtain a plurality of image frames, and three-dimensional modeling is then performed based on the captured image frames to obtain a stereoscopic virtual model. For another example, it may be a virtual model obtained by three-dimensional modeling on the server side or another device according to stature attribute information specified by the user: if the goods provider inputs stature size data, the server side may perform three-dimensional modeling according to that data to generate the corresponding virtual model. When the virtual model is obtained through three-dimensional modeling, a three-dimensional rendering engine based on CG (computer graphics) technology in the related art may be used for the three-dimensional modeling (or three-dimensional rendering), so that the effect presentation of the virtual try-on can be realized with mature CG technology; the specific computation process may refer to the three-dimensional rendering content disclosed in the related art and is not repeated here.
The virtual model obtained through three-dimensional modeling has a stronger stereoscopic impression, and its display attributes such as pose and angle can be changed at will, so this approach can achieve a more realistic display effect of the virtual model wearing and layering the goods. Moreover, it does not require a large amount of training data, so a faster modeling speed can be achieved, shortening the user's rendering wait time and improving the user experience. Alternatively, the virtual model may be automatically generated from the stature attribute information specified by the goods provider through a preset model generation algorithm: after stature attribute information such as height and weight is provided, a corresponding virtual model is automatically generated according to a preset model generation method, which avoids the high computing-capacity requirements of three-dimensional modeling, facilitates a lightweight modeling service, and reduces the computational pressure on the server side.
For example, a virtual model A and a virtual model B are set in the system in advance, and both have been combined with jackets of different sizes to generate effect display data. For example, if the specifications of a jacket include "170/92A", "180/92A", and "190/92A", then virtual model A and virtual model B are each combined with the jackets of the "170/92A", "180/92A", and "190/92A" specifications, yielding six sets of effect display data.
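The six sets of effect display data in this example are simply the cross product of two virtual models and three sizes, which can be sketched as follows (the names and the dict representation are illustrative):

```python
from itertools import product

models = ["virtual model A", "virtual model B"]
sizes = ["170/92A", "180/92A", "190/92A"]

# One effect-display entry per (model, size) pair, as in the example above.
effect_display_data = [{"model": m, "size": s} for m, s in product(models, sizes)]
print(len(effect_display_data))  # 6
```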
It should be noted that the specification description information may differ for different goods, and this specification does not limit it; for example, for a protective case of an electronic device, the size information may be length, width, thickness, hole positions, and the like. It can be understood, however, that the more specification description information of the goods is obtained, the more accurate the subsequently generated effect display data of the wearing effect will be.
In this embodiment, the target goods are taken as an example to describe the corresponding effect display data; the effect display data corresponding to other goods may refer to the same or corresponding description in this embodiment. On this basis, when the goods identifier submitted by the goods provider to which the target goods belong is received, it indicates that the goods provider needs to create effect display data for the target goods at this time, so as to create richer effect display data for the target goods: for example, in a clothing scenario, the wearing effect of different sizes on models with different statures; or, in a vehicle scenario, the effect of consumables of different sizes applied to different vehicles. According to the goods identifier submitted by the goods provider, the target specification description information corresponding to the target goods, such as the size and pictures of the target goods, is loaded from the database into which the target goods have already been entered, and effect display data of the target goods worn on the corresponding parts of the virtual model are generated according to that information and the model stature attribute information of the virtual model selected by the goods provider.
In order to feed back to the user effect display data close to their stature attributes, the user stature attribute information may be determined in response to the data acquisition request, and a selection is then made among the effect display data of the target goods according to the user stature attribute information, so as to feed back to the user effect display data closer to their real wearing effect. The user stature attribute information refers to the stature attribute information corresponding to the user; its description may refer to the same or corresponding content in the above embodiment and is not repeated here.
When generating the effect display data, in order to improve realism and make it easier for the user to see the details of the try-on effect in the application stage, the goods provider may also select the pose of the virtual model, that is, the pose of the virtual model after wearing the target goods; a background may likewise be selected, so that higher-quality effect display data are generated and the viewing experience of consumers in the application stage is improved.
Furthermore, when the effect display data are specifically generated, three-dimensional modeling may be performed through CG technology to obtain the effect display data of the goods worn on the corresponding part of the virtual model. In practice, the modeling process matches the specification description information of the goods with the stature attribute information of the virtual model, and comprehensively generates the appearance display data of the virtual model wearing the goods. In the case of a plurality of target goods, the server side may generate separate effect display data for each item worn on the corresponding part of the virtual model; for example, if the goods include a jacket and a hat, it may generate separate effect display data of the worn model when the virtual model wears only the jacket, and separate effect display data when it wears only the hat, making it convenient for the user to view the individual display effects of the jacket and the hat. Alternatively, the server side may generate combined effect display data of the goods worn on the corresponding parts of the virtual model at the same time; still taking the jacket and hat as an example, it may generate combined effect display data of the worn model when the virtual model wears both the jacket and the hat, making it convenient for the user to view the combined display effect of wearing them together.
In addition, when there are multiple goods, they may be combined automatically or according to a selection instruction of the user to generate corresponding multi-combination effect display data, allowing the user to view and compare the wearing effects of various combinations. The effect display data may be effect display images, such as images of the worn model generated according to display attributes such as different observation angles and zoom levels; or an effect display video of the worn model generated according to a preset observation track, in which the observation angle changes dynamically along the track so that the video can fully display the wearing effect of the worn model in all directions. Alternatively, the effect display data may be the worn model itself, making it convenient for the user to view its display effect in three-dimensional space.
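A minimal sketch of enumerating the individual and combined wearing options described above, assuming the goods are given as a simple list (the representation is an assumption for illustration):

```python
from itertools import combinations

goods = ["jacket", "hat"]

# All non-empty wearing combinations: each item alone, plus joint outfits.
combos = [list(c) for r in range(1, len(goods) + 1)
          for c in combinations(goods, r)]
print(combos)  # [['jacket'], ['hat'], ['jacket', 'hat']]
```

Each resulting combination would correspond to one set of separate or combined effect display data as described in the paragraph above.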
In an embodiment, the user may customize in advance the scene in which the virtual model is located. For example, the scene may include the model's own attributes such as skin color, makeup, expression, and pose, and may also include environmental elements such as props and background used by the model, which are not enumerated here. The server side may thus acquire the scene attribute information of the scene in which the virtual model is located, and then generate, according to the scene attribute information, effect display data in which the goods are worn on the corresponding part of the virtual model and the virtual model is placed in the scene, thereby achieving a more vivid wearing effect.
In an embodiment, the server side may generate at least one effect display picture of the goods worn on the corresponding part of the virtual model according to preset display attributes, ensuring quick generation of the effect display data. Alternatively, when the virtual model includes a stereoscopic model generated through three-dimensional modeling, the server side may adjust the display parameters of the stereoscopic model wearing the goods in response to a parameter control instruction sent by the user for the stereoscopic model, and generate the corresponding effect display data according to the adjusted display parameters. In this way, the display effect of the stereoscopic model is adjusted according to the user's wishes: the user may adjust its zoom level (for example, enlarging to view details) or display angle (for example, rotating to view the all-around display effect), realizing an interaction process between the user and the stereoscopic model.
In a specific implementation, a preset algorithm may also be used to generate the effect display data, thereby improving generation efficiency. Specifically, the goods provider first provides goods pictures and effect requirement information; next, the stature and face of the virtual model are generated in combination with the effect requirement information and assembled to obtain the virtual model; a goods model is then determined from the goods pictures and fused with the virtual model to obtain display data of the goods worn on the corresponding part of the virtual model. At this point, only the effect of the goods worn on the corresponding part of one virtual model is obtained; to improve multi-size richness, key-point adjustment is further required so as to generate display data of goods in different sizes worn on virtual models of different statures, which are then stored. Further, to improve the richness of the virtual model, the display effects of various goods worn by the virtual model can be synthesized through the algorithm, so as to obtain effect display data with a better display effect.
In this process, the goods pictures refer to the various required pictures corresponding to the goods, such as flat-lay (hanger) pictures and mannequin pictures; the effect requirement information refers to information related to the customization parameters of the model images, such as model type, outfit combination type, background type, and size information. After the goods pictures and the effect requirement information are obtained, preprocessing may be performed to improve rendering efficiency. That is, preprocessing provides an interactive processing interface for the goods provider, through which the goods provider can complete interactive matting, wrinkle processing, and shadow processing on the web page, and can also perform interactive operations such as aligning the goods key points and liquefy-deforming the goods edges. The purpose is to preprocess the required pictures into a state that can be directly rendered by the algorithm, thereby accelerating rendering.
In the virtual model assembly stage, different virtual faces can be generated in batches through a GAN network model for the user to select. This process can also be combined with a multi-pose face synthesis technique, so that the synthesized face corresponds to multiple angles and multiple poses, for synthesizing a complete set of model poses. To improve the definition of the model face, a super-resolution face synthesis technique can be used to synthesize a high-definition model face in which the facial features and hair details are clearly rendered. In addition, model faces with different makeup styles can be synthesized through a face makeup synthesis technique, and a face stylization synthesis technique can realize the synthesis of models of different ages, different skin colors, different facial features, and the like, so as to obtain virtual models meeting different usage requirements.
In the model-goods fusion stage, the goods can first be segmented from the goods pictures; then, in combination with a 3D goods texture mapping technique, the goods texture pattern is computed through a spatial mapping matrix and accurately mapped onto the goods model, and a 3D goods reconstruction technique is further applied to generate a more realistic goods model. The goods model is then fused with the virtual model to obtain display data of the goods worn on the corresponding part of the virtual model.
In the fusion process, the key-point position coordinates of several joints of the virtual model's body are first acquired; the model pose is then controlled by changing these key-point coordinates, and the face pose is aligned with the body pose, so that a whole group of different model pose effects is synthesized. Meanwhile, the key-point position coordinates of the goods can be acquired; the goods key points are aligned with the adjusted key points of the virtual model, fitting the goods pose to the body pose, and the effect display data are thereby generated.
For example, in a clothing scenario, when the virtual model is fused with clothing, a model image of one size can be expanded into model effect images of multiple sizes. In this multi-size try-on technique, a seed model is based on an existing standard stature and garment size, such as a height of 165 cm and size L; migration synthesis of the stature and garment parameter differences is completed through key-point control, a group of difference variables is superimposed on the body and clothing of the seed model, and model-clothing key-point matching is then performed to obtain the target stature and model. For the body part, human body data are obtained through a large amount of accurate 3D modeling and preset in the product's background algorithm database, characterizing the body parameter combinations of different height-weight combinations. After the existing body parameters (such as 165 cm-50 kg) and the target body parameters (such as 170 cm-55 kg) are input, the corresponding body parameter differences can be calculated, and the differences are migrated and synthesized through the body key points to obtain the virtual model with the target body parameters. The body key points refer to the coordinates of each important joint of the human body, such as the neck, shoulders, wrists, and elbows; when a key point is migrated, the image features of the area around it are migrated along with it.
For the clothing part, by inputting the existing garment size parameters (such as size L: garment length 52, sleeve length 56, chest circumference 76, shoulder width 32) and the target garment size parameters (such as size XL: garment length 54, sleeve length 57, chest circumference 80, shoulder width 34), the garment parameter differences are calculated and migrated through the clothing key points to obtain a garment with the target parameters. The clothing key points refer to the coordinates of each important position of the garment, such as the neckline, cuffs, and shoulders; the image features of the areas around the clothing key points are migrated along with the key points. Finally, model-clothing key-point matching is combined, that is, the fusion of the model and the clothing is completed, and the effect display data corresponding to the goods provider's requirements are output.
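The key-point migration described above might be sketched, under the simplifying assumption that each key point shifts linearly with the parameter differences between seed and target, as follows; the weight table and all values are assumptions for illustration, not the patented algorithm:

```python
def migrate_keypoints(keypoints, seed_params, target_params, weights):
    """Shift key-point coordinates by a weighted amount of the parameter
    differences between the seed and the target body/garment.
    `weights` maps key-point name -> {parameter: (x_weight, y_weight)}."""
    deltas = {k: target_params[k] - seed_params[k] for k in seed_params}
    migrated = {}
    for name, (x, y) in keypoints.items():
        dx = sum(w[0] * deltas[p] for p, w in weights.get(name, {}).items())
        dy = sum(w[1] * deltas[p] for p, w in weights.get(name, {}).items())
        migrated[name] = (x + dx, y + dy)
    return migrated

seed = {"height": 165, "weight": 50}
target = {"height": 170, "weight": 55}
keypoints = {"shoulder": (100.0, 200.0)}
# Assumed weight: each extra cm of height moves the shoulder 1 px up in image space.
weights = {"shoulder": {"height": (0.0, -1.0)}}
print(migrate_keypoints(keypoints, seed, target, weights))  # {'shoulder': (100.0, 195.0)}
```

In the described system, the image features around each key point would be warped along with the migrated coordinates; this sketch only shows the coordinate arithmetic.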
Step S204: determining target effect display data from the effect display data of the target goods according to the user stature attribute information, and sending the target effect display data to the client, wherein the effect display data are effect display data of the target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes.
Accordingly, the model stature attribute of the virtual model may be the stature attribute information corresponding to the virtual model, and may include its stature size information. For example, for a virtual model with an adult or child figure, the stature size information may include at least one of height, weight, shoulder width, chest circumference, waistline, hip circumference, leg length, hand length, head length, and the like; for a virtual model with an animal figure such as a cat or dog, it may include at least one of height, weight, body length, and the like; and for a virtual model with an object figure such as a vehicle, it may include at least one of body length, body width, body height, rear-view mirror position, and the like.
In practical application, after the user stature attribute information is obtained, in order to feed back to the user effect display data close to their stature attributes, the effect display data of the relevant target goods can be determined among all the pre-generated effect display data, and the effect display data of the target goods are then screened according to the user stature attribute information, so that effect display data close to the user stature attribute information are obtained as the target effect display data.
In a specific implementation, considering that the user stature attribute information is the attribute information corresponding to the user's stature, and the effect shown by the effect display data is that of the target goods worn on virtual models of different statures, when determining the target effect display data, the effect display data belonging to similar model stature attribute information can be determined by calculating the similarity between the user stature attribute information and the model stature attribute information. The specific implementation in this embodiment is as follows:
in one implementation manner, the determining, according to the user stature attribute information, target effect display data from the effect display data of the target goods includes:
determining a target virtual model from at least two virtual models corresponding to the target goods according to the user stature attribute information;
acquiring at least two candidate effect display data corresponding to the target virtual model and target goods with different sizes;
and determining target effect display data from the at least two candidate effect display data.
The target virtual model may be a virtual model that matches the user, for example, one similar to the user in height and weight. The candidate effect display data may be the effect display data of the target virtual model wearing goods of different sizes.
In practical application, it is necessary to determine target effect display data from a plurality of effect display data generated in advance according to user stature attribute information, so as to intelligently display a more suitable effect to a user.
For example, a virtual model A and a virtual model B are set in the system in advance, and both have been combined with jackets of different sizes to generate effect display data. For example, if the specifications of a jacket include "170/92A", "180/92A", and "190/92A", then virtual model A and virtual model B are each combined with the jackets of the "170/92A", "180/92A", and "190/92A" specifications, yielding six sets of effect display data. According to the user stature attribute information, the effect display data of virtual model A wearing the "170/92A" jacket can then be determined as the target effect display data.
Specifically, the determining, according to the user stature attribute information, the target virtual model from at least two virtual models corresponding to the target goods includes:
determining height information and weight information according to the user stature attribute information;
and matching the height information and the weight information with the height information and the weight information of the at least two virtual models to determine a target virtual model.
In practical application, the virtual model can be matched through the user's height and weight, so as to determine the virtual model closest to the user in size.
For example, a virtual model A and a virtual model B are set in the system in advance, with heights of 170 cm and 180 cm respectively. If the user's height is determined to be 172 cm according to the user stature attribute information, virtual model A is determined to be the target virtual model.
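One plausible way to implement the height-and-weight matching above is a nearest-neighbor search over the preset virtual models; the field names, the weight values, and the Euclidean distance are assumptions for illustration:

```python
def pick_target_model(user, models):
    """Pick the virtual model whose (height, weight) is closest to the user's."""
    def distance(m):
        return ((m["height"] - user["height"]) ** 2 +
                (m["weight"] - user["weight"]) ** 2) ** 0.5
    return min(models, key=distance)

models = [{"name": "A", "height": 170, "weight": 60},
          {"name": "B", "height": 180, "weight": 70}]
user = {"height": 172, "weight": 62}
print(pick_target_model(user, models)["name"])  # A
```

Any similarity measure over the stature attributes would do; Euclidean distance is just the simplest choice consistent with "matching the height and weight information".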
In summary, the target effect display data are selected by calculating the similarity of stature attributes, which ensures that the target effect display data fed back to the client are closer to the user stature attribute information and can show the user the application effect of the target goods in the application scenario, thereby assisting the user's purchase decision.
In one implementation, the determining target effect presentation data from the at least two candidate effect presentation data includes:
determining a target size matched with the user stature attribute information;
and determining target effect display data from the at least two candidate effect display data according to the target size, wherein the size of the target goods in the target effect display data matches the target size.
In practical application, after the target virtual model is determined, a goods size suitable for the user still needs to be matched.
For example, a virtual model A and a virtual model B are set in the system in advance, with heights of 170 cm and 180 cm respectively. If the user's height is determined to be 172 cm according to the user stature attribute information, virtual model A is determined to be the target virtual model. Virtual model A includes wearing effect display data of three specifications: "170/92A", "180/92A", and "190/92A". Based on the user's height of 172 cm, the data of virtual model A wearing the "170/92A" specification can then be determined as the target effect display data.
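Continuing the example, the target size could be chosen by comparing the user's height with the nominal height encoded in each specification string; this is a sketch under that assumption, not the patented matching logic:

```python
def pick_target_size(user_height, specs):
    """Choose the specification whose nominal height is closest to the user's."""
    return min(specs, key=lambda s: abs(int(s.split("/")[0]) - user_height))

specs = ["170/92A", "180/92A", "190/92A"]
print(pick_target_size(172, specs))  # 170/92A
```

A fuller implementation would also weigh chest circumference, body type, and the other stature attributes rather than height alone.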
In the embodiment of this specification, a virtual model similar to the user is determined, the goods size matching the user is determined, and the similar virtual model wearing the goods of the matching size is selected, thereby determining the target effect display data and showing the user a suitable effect.
Further, the method for displaying the try-on effect of the goods in the specification further comprises the following steps: and responding to a size comparison instruction sent by a client, generating size matching information according to the target effect display data, and sending the size matching information to the client.
The size comparison instruction may result from a comparison operation performed by the user at the client; for example, when an image of a certain part of the virtual model is enlarged, the size comparison instruction is generated automatically. The size matching information may be the difference between the goods size and the size of the corresponding part of the virtual model, for example, the difference between the arm length and the sleeve length.
In practical application, when the user performs operations such as zooming in or clicking at the client, the client can be triggered to send a size comparison instruction, so that the server side generates size matching information for the target effect display data.
For example, with virtual model A wearing the "170/92A" specification as the target effect display data, when the user taps the picture of the virtual model's hand on a mobile phone, the hand image is enlarged and the mobile phone client sends a size comparison instruction to the server side, which responds by calculating the size matching information for virtual model A wearing the "170/92A" specification.
In the embodiment of this specification, a size comparison instruction is triggered through user operations such as viewing and clicking, so that the server side generates size matching information for the target effect display data, producing more detailed information and allowing the user to see the differences more intuitively.
Specifically, the size matching information may be determined from the size of the target goods and the body-part data of the virtual model; the specific implementation is as follows.
In one implementation manner, the generating size matching information according to the target effect display data includes:
determining position size data according to the size of the target goods corresponding to the target effect display data;
and generating size matching information according to the position size data and the position data of the virtual model corresponding to the target effect display data.
Specifically, the position size data may be size data such as sleeve length, trouser length, and collar height. The part data may be size data of a body part of the model, such as arm length.
For example, with virtual model A wearing the "170/92A" specification as the target effect display data, the position size data, such as a sleeve length of 40 cm, are determined according to the "170/92A" specification. Correspondingly, the part data of virtual model A are the arm length data, and size matching information is generated from the arm length and the sleeve length.
In the embodiment of this specification, the server side generates the size matching information of the target effect display data by comparing the part data of the virtual model with the position size data of the goods, producing more refined information so that the user can see the differences more intuitively.
Correspondingly, the generating the size matching information according to the target effect display data includes:
determining the size value of the goods according to the size of the target goods corresponding to the target effect display data;
determining a corresponding difference value between the size value of the goods and the size value of the virtual model according to the size value of the virtual model corresponding to the target effect display data;
generating size matching information according to the difference values, wherein the size values include several or all of the following: shoulder width, chest circumference, sleeve length, and waist circumference.
In practical application, the differences between the goods size and the part size can be calculated for different parts of the virtual model, so that size matching information is generated for each part. Referring to fig. 2b, the difference between the sleeve length and the arm length of the virtual model may be calculated, thereby showing the wrist-to-cuff distance.
For example, with virtual model A wearing the "170/92A" specification as the target effect display data, the shoulder width, chest circumference, sleeve length, and waist circumference of the "170/92A" jacket are obtained, along with the shoulder width, chest circumference, arm length, and waist circumference of model A; the size matching information is obtained by calculating the differences between the jacket's shoulder width and the model's shoulder width, the jacket's chest circumference and the model's chest circumference, the jacket's sleeve length and the model's arm length, and the jacket's waist circumference and the model's waist circumference.
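The per-part difference calculation described above can be sketched as follows; all dimension values and field names are illustrative assumptions (units in cm):

```python
def size_matching_info(garment, model):
    """Differences between garment dimensions and the model's body
    dimensions at the corresponding parts (positive = garment larger)."""
    pairs = {"shoulder": ("shoulder_width", "shoulder_width"),
             "chest": ("chest", "chest"),
             "sleeve": ("sleeve_length", "arm_length"),
             "waist": ("waist", "waist")}
    return {part: garment[g] - model[m] for part, (g, m) in pairs.items()}

garment = {"shoulder_width": 42, "chest": 92, "sleeve_length": 58, "waist": 78}
model = {"shoulder_width": 40, "chest": 88, "arm_length": 56, "waist": 72}
print(size_matching_info(garment, model))
# {'shoulder': 2, 'chest': 4, 'sleeve': 2, 'waist': 6}
```

Note that some garment dimensions pair with differently named body parts (sleeve length vs. arm length), which is why an explicit mapping is used.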
In the embodiment of this specification, calculating the size differences between the virtual model's parts and the goods achieves a more refined display effect and improves the user experience.
Further, the user may also select the combination of the virtual model to be displayed and the corresponding goods size by themselves; the specific implementation is as follows.
In one implementation, the method further includes: responding to a size selection instruction sent by the client, and determining at least one target virtual model and target goods of at least one size according to the size selection instruction;
determining at least one piece of effect display data from the effect display data of the target goods according to the at least one target virtual model and the target goods of at least one size;
and generating at least one piece of size matching information according to the at least one piece of effect display data.
On this basis, when the user purchases the target goods for another object (such as a relative, friend, or pet), the effect display data first recommended to the user correspond to the user's own stature attribute information, while the effect display data the user actually needs should correspond to the object being purchased for. Therefore, when a display data switching request submitted by the user for the target goods is received, it indicates that the target user needs to switch the user stature attribute information at this time. The switching request can then be parsed to obtain the switched user stature attribute information, that is, the user stature attribute information after switching, and the switched effect display data corresponding to it are determined among the pre-generated effect display data of the target goods.
Specifically, referring to fig. 2c, a user may select one virtual model to wear a coat in two sizes. The screen displays the display data of the same virtual model wearing the size-A coat and the size-B coat respectively. From these display data, the length difference between the two coats when worn can be calculated and displayed, and a goods size suitable for the user can be recommended. In this way, the user can more intuitively compare the effects of the two sizes.
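A minimal sketch of this two-size comparison, assuming invented garment lengths and a hypothetical "ideal length" for the model:

```python
# Hedged sketch of the fig. 2c comparison: one virtual model wears a
# jacket in sizes A and B; the garment-length difference is shown to the
# user and the closer size is recommended. All values are made up.

JACKET_LENGTHS = {"A": 66.0, "B": 70.0}   # garment length per size, cm
MODEL_IDEAL_LENGTH = 67.0                  # model's ideal jacket length, cm

def compare_sizes(lengths, ideal):
    diff = abs(lengths["A"] - lengths["B"])               # displayed to the user
    recommended = min(lengths, key=lambda s: abs(lengths[s] - ideal))
    return diff, recommended

diff, size = compare_sizes(JACKET_LENGTHS, MODEL_IDEAL_LENGTH)
print(f"length difference {diff} cm, recommended size {size}")
```

Here the recommendation rule (closest to an ideal length) is one possible choice; the specification does not fix a particular rule.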
In another implementation, referring to fig. 2d, a user may select one virtual model to wear a coat in two sizes. The screen displays the display data of the virtual model wearing the coat in the recommended size, while the display data of the coat in other sizes can be shown in small windows together with corresponding information, such as whether the hem falls above or below the hand. From these display data, the length differences of the differently sized coats when worn can be calculated and displayed, and a goods size suitable for the user can be recommended, so that the user can more intuitively compare the effects of the two sizes. It should be noted that the goods size displayed in a small window may be selected by the user or chosen automatically by the system; for example, the system may automatically select the size adjacent to the recommended size. This is not limited by the embodiments of the specification. The number and positions of the small windows are also not fixed: for example, when the user selects X goods to compare, X small windows are displayed, and the user can drag a small window's position or swap its display data with that of the main window. In addition, the information displayed alongside a small window can be selected, for example comparison information between garment length and hand position, or between cuff and wrist.
Further, the user may need to purchase the target goods for their own size at a different period, such as clothes to be worn after gaining or losing weight, or a car cover, seat cushion, and the like to be purchased after replacing a car. In these cases the user stature attribute information at the current stage is unchanged, so the effect display data fed back accordingly cannot meet the user's current purchasing requirement, and the user can therefore switch the stature attribute information. That is, after receiving a display data switching request submitted by the user for the target goods, indicating that the target user needs to switch the user stature attribute information, the request can be parsed to obtain the switched user stature attribute information. Afterwards, the switching effect display data corresponding to the switched user stature attribute information is determined from the effect display data generated in advance for the target goods and fed back to the client. It should be noted that the process of determining effect display data according to the switched user stature attribute information may refer to the same or corresponding descriptions in the above embodiment, which are not repeated here.
In addition, in order to improve the user's shopping conversion rate, the wearing effect of other goods can be recommended to the user. In this embodiment, the specific implementation is as follows:
determining recommended goods which have goods association with the target goods or determining recommended goods according to preference information of the user; and determining recommended effect display data corresponding to the user stature attribute information in the effect display data generated in advance by the recommended goods, generating size matching information according to the recommended effect display data and the target effect display data, and sending the size matching information to a client.
Specifically, recommended goods refer to goods having a goods association with the target goods, where the goods association includes, but is not limited to, goods style association, goods store association, goods matching association, and the like. Recommended goods can also be determined according to the user's preference information; in that case, the recommended goods and the target goods may belong to the same shop, or the recommended goods may come from other shops on the platform. Correspondingly, recommended effect display data refers to the effect display data corresponding to the recommended goods.
Based on the above, when other goods need to be recommended to the user, in order to ensure that the recommended goods are relevant to the user, goods having a goods association with the target goods can be selected, or the recommended goods can be determined according to the user's preference information. After the recommended goods are determined, the recommended effect display data corresponding to the user stature attribute information can be determined from the effect display data generated in advance for the recommended goods; these data show the display effect of the recommended goods worn on the corresponding position of the virtual model. For example, if the target effect display data concerns wristwatch W1 and the recommended effect display data concerns wristwatch W2, size matching information is generated according to the wristband lengths of W1 and W2 and sent to the client.
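The wristwatch example can be sketched as follows; the band lengths and field names are invented for illustration.

```python
# Illustrative sketch: size matching information between a target item
# and a recommended item, here the wristband-length difference between
# target watch W1 and recommended watch W2 (values are assumptions).

TARGET = {"id": "W1", "band_length_mm": 200}
RECOMMENDED = {"id": "W2", "band_length_mm": 185}

def band_matching_info(target, rec):
    """Pair the two items and record their wristband-length difference."""
    return {
        "target": target["id"],
        "recommended": rec["id"],
        "band_diff_mm": target["band_length_mm"] - rec["band_length_mm"],
    }

print(band_matching_info(TARGET, RECOMMENDED))
```

The server would send a structure like this to the client alongside the two items' effect display data.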
In addition, since the recommended goods are associated with the target goods, effect display data containing both the target goods and the recommended goods can be selected for feedback. It should be noted that the determination of the recommended effect display data may refer to the determination process of the target effect display data in the above embodiment, which this embodiment does not repeat here.
In order to feed back effect display data corresponding to the user's stature attribute information and to determine size matching information for that data, the method for displaying the try-on effect of goods provided by this embodiment obtains user stature attribute information in response to a goods selection request for a target goods sent by a client, determines target effect display data from the effect display data of the target goods according to the user stature attribute information, and sends the target effect display data to the client, where the effect display data are effect display data of target goods of different sizes worn on corresponding parts of virtual models with different stature attributes; and, in response to a size comparison instruction sent by the client, generates size matching information according to the target effect display data and sends it to the client. By displaying the size matching information of the target effect display data on the client, the user can intuitively see the specific try-on effect, which facilitates the purchase of goods and improves the user's purchasing experience.
Corresponding to the method embodiment, the present disclosure further provides an embodiment of an article try-on effect display device, and fig. 3 shows a schematic structural diagram of an article try-on effect display device provided in one embodiment of the present disclosure. As shown in fig. 3, the device is applied to a server, and includes:
The request acquisition module 302 is configured to respond to an article selection request for a target article sent by the client and acquire user stature attribute information;
the effect determining module 304 is configured to determine target effect display data from the effect display data of the target goods according to the user stature attribute information, and send the target effect display data to the client, where the effect display data are effect display data of target goods of different sizes worn on corresponding parts of virtual models with different stature attributes;
the size matching module 306 is configured to generate size matching information according to the target effect display data in response to a size comparison instruction sent by a client, and send the size matching information to the client.
In an alternative embodiment, the size matching module 306 is further configured to:
determining position size data according to the size of the target goods corresponding to the target effect display data;
and generating size matching information according to the position size data and the position data of the virtual model corresponding to the target effect display data.
In an alternative embodiment, the size matching module 306 is further configured to:
According to the size of the target goods corresponding to the target effect display data, determining a shoulder width value of the goods, a chest circumference value of the goods, a sleeve length value of the goods and a waistline value of the goods;
according to the shoulder width value, the chest circumference value, the arm length value and the waistline value of the virtual model corresponding to the target effect display data, and the shoulder width value, the chest circumference value, the sleeve length value and the waistline value of the goods, determining a shoulder width difference value, a chest circumference difference value, an arm length difference value and a waistline difference value;
and generating size matching information according to the shoulder width difference value, the chest circumference difference value, the arm length difference value and the waistline difference value.
In an alternative embodiment, the size matching module 306 is further configured to:
responding to a size selection instruction sent by a client, and determining at least one target virtual model and target goods of at least one size according to the size selection instruction;
determining at least one piece of effect display data from the effect display data of the target goods according to the at least one target virtual model and the target goods of at least one size;
and generating at least one piece of size matching information according to the at least one piece of effect display data.
In an alternative embodiment, the request acquisition module 302 is further configured to:
responding to an article selection request for a target article sent by a client, and sending a history authorization request to the client;
and under the condition that the client returns the authorization information, acquiring the historical purchase information, and acquiring the user stature attribute information according to the historical purchase information.
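One possible sketch of deriving stature attribute information from authorized historical purchase information; the order fields and the aggregation rule (most frequent size, average recorded height) are assumptions, not the specification's prescribed method.

```python
# Hypothetical sketch: after the client returns authorization, estimate
# the user's stature attributes from past orders — here, the most
# frequently purchased clothing size plus the average recorded height.
from collections import Counter

HISTORY = [
    {"category": "jacket", "size": "L", "height_cm": 175},
    {"category": "shirt", "size": "L", "height_cm": 175},
    {"category": "shirt", "size": "M", "height_cm": 175},
]

def stature_from_history(orders):
    """Aggregate historical purchase records into stature attribute info."""
    sizes = Counter(o["size"] for o in orders)
    heights = [o["height_cm"] for o in orders if o.get("height_cm")]
    return {
        "preferred_size": sizes.most_common(1)[0][0],
        "height_cm": round(sum(heights) / len(heights)) if heights else None,
    }

print(stature_from_history(HISTORY))
```

In practice the attributes would only be derived from records the user has explicitly authorized, as the embodiment requires.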
In an alternative embodiment, the effect determination module 304 is further configured to:
determining a target virtual model from at least two virtual models corresponding to the target goods according to the user stature attribute information;
acquiring at least two candidate effect display data corresponding to the target virtual model and target goods with different sizes;
and determining target effect display data from the at least two candidate effect display data.
In an alternative embodiment, the effect determination module 304 is further configured to:
determining height information and weight information according to the user stature attribute information;
and matching the height information and the weight information with the height information and the weight information of the at least two virtual models to determine a target virtual model.
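A minimal sketch of this matching step, assuming a nearest-neighbor choice over height and weight (the candidate models and the distance metric are illustrative):

```python
# Illustrative sketch: pick the virtual model whose height and weight
# are closest to the user's. Squared-distance over the two attributes
# is one simple choice of metric; the model list is invented.

MODELS = [
    {"name": "A", "height_cm": 170, "weight_kg": 60},
    {"name": "B", "height_cm": 180, "weight_kg": 75},
]

def match_model(height_cm, weight_kg, models=MODELS):
    """Return the candidate model nearest to the user's stature."""
    def dist(m):
        return (m["height_cm"] - height_cm) ** 2 + (m["weight_kg"] - weight_kg) ** 2
    return min(models, key=dist)

print(match_model(172, 63)["name"])  # → A
```

A production system might normalize the attributes or weight them differently; this sketch only shows the shape of the matching.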
In an alternative embodiment, the effect determination module 304 is further configured to:
determining a target size matched with the user stature attribute information;
and determining target effect display data from the at least two candidate effect display data according to the target size, wherein the size of a target goods in the target effect display data is matched with the target size.
In an alternative embodiment, the effect determination module 304 is further configured to:
receiving an item identifier submitted by an item provider to which the target item belongs;
loading target specification description information corresponding to the target goods according to the goods identification, and acquiring virtual model stature attribute information of the virtual model;
and generating effect display data for wearing the target goods on the corresponding parts of the virtual model according to the target specification description information and the virtual model stature attribute information.
In order to feed back effect display data corresponding to the user's stature attribute information and to determine size matching information for that data, the device for displaying the try-on effect of goods provided by this embodiment obtains user stature attribute information in response to a goods selection request for a target goods sent by a client, determines target effect display data from the effect display data of the target goods according to the user stature attribute information, and sends it to the client, where the effect display data are effect display data of target goods of different sizes worn on corresponding parts of virtual models with different stature attributes; and, in response to a size comparison instruction sent by the client, generates size matching information according to the target effect display data and sends it to the client. By displaying the size matching information of the target effect display data on the client, the user can intuitively see the specific try-on effect, which facilitates the purchase of goods and improves the user's purchasing experience.
The above is a schematic scheme of the device for displaying the effect of fitting the goods in this embodiment. It should be noted that, the technical solution of the device for displaying the effect of try-on of the goods and the technical solution of the method for displaying the effect of try-on of the goods described above belong to the same concept, and the details of the technical solution of the device for displaying the effect of try-on of the goods which are not described in detail can be referred to the description of the technical solution of the method for displaying the effect of try-on of the goods described above.
Fig. 4a is a flowchart of another method for displaying the try-on effect of goods according to an embodiment of the present disclosure. The method is applied to a client and specifically includes the following steps.
Step S402, a target goods is determined in response to a goods selection instruction, and a goods selection request is sent to the server according to the target goods;
step S404, receiving the size matching information returned by the server, and displaying the size matching information through a goods display interface.
The client provided in this embodiment is a terminal device held by a user and is provided with an application program capable of selecting goods, where the application program may be a shopping application, a browser, or an applet within an application.
The goods selection interface refers to an interface that allows the user to select a target goods; at least one goods to be selected is displayed in the interface, in the form of videos, text, or pictures, to facilitate the user's selection. Correspondingly, the goods selection instruction refers to a click instruction submitted by the user through the goods selection interface; this instruction identifies the target goods selected by the user and triggers the process of requesting the effect display data associated with the target goods from the server.
Based on this, referring to fig. 4b, after the user opens the goods selection interface through the client, images, videos, or text information of at least one goods to be selected may be displayed through the interface for convenient selection. When the user wants to shop, the user can click a preferred target goods in the goods selection interface; the clicked goods is taken as the target goods and the corresponding goods selection instruction is received, so that target effect display data can subsequently be requested based on the instruction, the display effect of the target goods worn on the corresponding part of the virtual model can be shown at the client, and the user's purchasing experience is improved.
It should be noted that when the user opens the interface, the client requests the goods to be selected from the server and receives the content the server feeds back; the display content may differ for different users in different scenes, which is not limited in this embodiment.
Further, when creating the data acquisition request, in order to request that the server feed back effect display data closer to the user's stature attribute information, the stature attribute information may be actively input by the user or determined from the user's information. In this embodiment, the specific implementation is as follows:
After responding to the goods selection instruction, the method further includes: displaying a stature attribute input interface in response to the goods selection instruction, and receiving user stature attribute information input through the stature attribute input interface;
and creating an article selection request associated with the target article according to the user stature attribute information.
Specifically, the stature attribute input interface specifically refers to an interface for providing input stature attribute information for a user, for example, in a clothing shopping scene, the stature attribute input interface may include a height input control, a weight input control, a leg length input control, an arm length input control, a waistline input control, and the like; as in a vehicle consumable shopping scenario, the stature attribute input interface may include a vehicle brand input control, a vehicle length input control, a vehicle height input control, and the like. The stature attribute input interfaces in different scenes can be set according to actual requirements, and the embodiment is not limited in any way.
Based on the above, after receiving the goods selection instruction submitted by the user for the target goods, and in order to feed back effect display data corresponding to the user stature attribute information to the client, the data acquisition request can be created after the client collects the user stature attribute information. That is, the stature attribute input interface can be displayed in response to the goods selection instruction so that the user can conveniently input stature attribute information. Upon receiving the user stature attribute information entered through the interface, a data acquisition request associated with the target goods can be created according to that information and sent to the server, thereby requesting the effect display data that is associated with the target goods and corresponds to the user stature attribute information.
For example, after the user selects the type-A coat, a stature attribute input interface may be displayed in response to the user's goods selection instruction, as shown in fig. 4c. The user stature attribute information {height Ly, weight By …} entered through the interface is received, and a data acquisition request associated with the type-A coat is created according to it. In response to this request, the server can subsequently return a display image of the type-A coat worn on a virtual model whose stature is similar to the stature attribute information entered by the user, for the user to view.
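The request creation described above might look like the following on the client side; the JSON field names are invented for illustration, as the specification does not define a wire format.

```python
# Assumed request shape: the client packages the stature attributes
# entered on the input interface into a goods selection request for
# the server. All field names here are hypothetical.
import json

def build_selection_request(item_id, stature):
    """Serialize a goods selection request carrying stature attributes."""
    return json.dumps({
        "type": "item_selection",
        "item_id": item_id,
        "stature": stature,   # e.g. {"height_cm": 170, "weight_kg": 62}
    })

req = build_selection_request("coat_A", {"height_cm": 170, "weight_kg": 62})
print(req)
```

The server would parse this request, match a virtual model to the stature attributes, and return the corresponding effect display data.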
Further, after the size matching information is displayed through the goods display interface, the method further includes:
receiving a click command submitted by a stature part control in the goods display interface, and determining a target part according to the click command;
and selecting target size matching information matched with the target part from the size matching information, and displaying the target size matching information through the goods display interface, where the part corresponding to the target size matching information is the target part.
Specifically, the stature part control allows the user to enlarge a part of the virtual model so as to view the display effect more clearly.
For example, when the target effect display data shows virtual model A wearing a jacket of the "170/92A" specification, the jacket's shoulder width, chest circumference, sleeve length, and waist circumference are obtained, along with model A's shoulder width, chest circumference, arm length, and waist circumference, and the size matching information is obtained by calculating the difference for each corresponding pair of values. After the size matching information is sent to the client for display, referring to fig. 4d, the user can click the wrist to check the difference between the sleeve length and the arm length.
By receiving the click instruction submitted by the user through the stature part control in the goods display interface, the embodiment of the specification allows the user to enlarge a part of the virtual model and see the display effect more clearly.
Further, considering that a user shopping through the goods purchasing application may purchase for friends or relatives, or purchase in advance for their own body changes in different periods, new effect display data may be requested in response to the user clicking the stature switching control, making the process more convenient for the user. In this embodiment, the specific implementation is as follows:
After the size matching information is displayed through the goods display interface, the method further comprises:
receiving a click command submitted by a virtual model selection control in the goods display interface, and determining at least one target virtual model according to the click command;
receiving a click command submitted by an item size selection control in the item display interface, and determining a target item with at least one size according to the click command;
and sending the identification of the at least one target virtual model and the identification of the target goods of at least one size to the server, and receiving at least one piece of size matching information returned by the server.
Based on the above, when the user purchases the target goods for other objects (such as relatives, friends, owners, and personal articles), after entering the goods display interface the user can submit a click instruction through the stature switching control in the interface and jump to the stature attribute input interface. There the user can re-enter the switched user stature attribute information for the other object, after which a display data switching request is created according to the switched user stature attribute information and sent to the server. In response to the display data switching request, the server determines the switching effect display data corresponding to the switched user stature attribute information from the effect display data generated in advance for the target goods and feeds it back to the client. After receiving the switching effect display data, the client can update the goods display interface accordingly, displaying the switching effect display data alone or together with the target effect display data through the updated interface, so that the user can conveniently compare the two wearing effects.
Specifically, referring to fig. 4e, a user may select one virtual model to wear a coat in two sizes. The screen corresponding to the client displays the display data of the same virtual model wearing the size-A coat and the size-B coat respectively. From these display data, the length difference between the two coats when worn can be calculated and displayed, and a goods size suitable for the user can be recommended, so that the user can more intuitively compare the effects of the two sizes.
In another implementation, referring to fig. 4f, a user may select one virtual model to wear a coat in two sizes. The screen corresponding to the client displays the display data of the virtual model wearing the coat in the recommended size, while the display data of the coat in other sizes can be shown in small windows together with corresponding information, such as whether the hem falls above or below the hand. From these display data, the length differences of the differently sized coats when worn can be calculated and displayed, and a goods size suitable for the user can be recommended, so that the user can more intuitively compare the effects of the two sizes.
It should be noted that the number and positions of the small windows are not fixed: for example, when the user selects X goods to compare, X small windows are displayed, and the user can drag a small window's position or swap its display data with that of the main window. In addition, the information displayed alongside a small window can be selected, for example comparison information between garment length and hand position, or between cuff and wrist.
In addition, after the user clicks a certain goods at the client, the user can also see the wearing effect of other recommended goods. In this embodiment, the specific implementation is as follows:
receiving a click instruction in the goods display interface, acquiring target effect display data from the server, receiving recommended effect display data of the recommended goods sent by the server, and generating size matching information according to the target effect display data and the recommended effect display data.
Specifically, recommended goods refer to goods having a goods association with the target goods, where the goods association includes, but is not limited to, goods style association, goods store association, goods matching association, and the like. Recommended goods can also be determined according to the user's preference information; in that case, the recommended goods and the target goods may belong to the same shop, or the recommended goods may come from other shops on the platform. Correspondingly, recommended effect display data refers to the effect display data corresponding to the recommended goods.
For example, when the user clicks the detail page of watch W1, the client receives the target effect display data sent by the server, which is display data of the virtual model wearing watch W1; receives the recommended effect display data, which is display data of the virtual model wearing watch W2; and receives the difference between the wristband lengths of W1 and W2 sent by the server, so as to display the difference to the user.
In order to feed back effect display data corresponding to the user's stature attribute information and to determine size matching information for that data, the method for displaying the try-on effect of goods provided by this embodiment determines a target goods in response to a goods selection instruction, sends a goods selection request to the server according to the target goods, receives the size matching information returned by the server, and displays it through the goods display interface. By displaying the size matching information of the target effect display data on the client, the user can intuitively see the specific try-on effect, which facilitates the purchase of goods and improves the user's purchasing experience.
Corresponding to the above method embodiment, the present disclosure further provides an embodiment of a device for displaying the try-on effect of goods, and fig. 5 shows a schematic structural diagram of another device for displaying the try-on effect of goods according to one embodiment of the present disclosure. As shown in fig. 5, the device is applied to a client and includes:
The request sending module 502 is configured to respond to an item selection instruction, determine a target item, and send an item selection request to the server according to the target item;
the data receiving module 504 is configured to receive the size matching information returned by the server, and display the size matching information through the goods display interface.
In an alternative embodiment, the request sending module 502 is further configured to:
displaying a stature attribute input interface in response to the goods selection instruction, and receiving user stature attribute information input through the stature attribute input interface;
and creating an article selection request associated with the target article according to the user stature attribute information.
In an alternative embodiment, the data receiving module 504 is further configured to:
receiving a click command submitted by a stature part control in the goods display interface, and determining a target part according to the click command;
and selecting target size matching information matched with the target part from the size matching information, and displaying the target size matching information through the goods display interface, where the part corresponding to the target size matching information is the target part.
In an alternative embodiment, the data receiving module 504 is further configured to:
receiving a click command submitted by a virtual model selection control in the goods display interface, and determining at least one target virtual model according to the click command;
receiving a click command submitted by an item size selection control in the item display interface, and determining a target item with at least one size according to the click command;
and sending the identifier of the at least one target virtual model and the identifier of the target goods in at least one size to the server, and receiving at least one piece of size matching information returned by the server.
In order to feed back effect display data corresponding to the user stature attribute information and to determine the size matching information of that data, the device for displaying the try-on effect of goods provided by this embodiment can determine target goods in response to a goods selection instruction, send a goods selection request to the server according to the target goods, receive the size matching information returned by the server, and display the size matching information through the goods display interface. Because the size matching information of the target effect display data is displayed on the client side, the user can intuitively see the specific try-on effect, which makes it convenient to purchase goods and improves the user's purchasing experience.
The above is a schematic scheme of the device for displaying the effect of fitting the goods in this embodiment. It should be noted that, the technical solution of the device for displaying the effect of try-on of the goods and the technical solution of the method for displaying the effect of try-on of the goods described above belong to the same concept, and the details of the technical solution of the device for displaying the effect of try-on of the goods which are not described in detail can be referred to the description of the technical solution of the method for displaying the effect of try-on of the goods described above.
Fig. 6 is a flowchart of another method for displaying the try-on effect of an article according to an embodiment of the present disclosure, which is applied to a server, and specifically includes the following steps.
Step S602, acquiring user stature attribute information in response to an item selection request for a target item sent by a live client;
step S604, determining target effect display data from the effect display data of the target goods according to the user stature attribute information, and sending the target effect display data to the live client, wherein the effect display data are effect display data of target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes;
step S606, in response to the size comparison instruction sent by the live broadcast client, size matching information is generated according to the target effect display data, and the size matching information is sent to the live broadcast client.
Optionally, the generating size matching information according to the target effect display data includes:
determining position size data according to the size of the target goods corresponding to the target effect display data;
and generating size matching information according to the position size data and the position data of the virtual model corresponding to the target effect display data.
Optionally, the generating size matching information according to the target effect display data includes:
according to the size of the target goods corresponding to the target effect display data, determining a shoulder width value of the goods, a chest circumference value of the goods, a sleeve length value of the goods and a waistline value of the goods;
according to the shoulder width value, the chest circumference value, the arm length value and the waistline value of the virtual model corresponding to the target effect display data, and the shoulder width value, the chest circumference value, the sleeve length value and the waistline value of the goods, determining a shoulder width difference value, a chest circumference difference value, an arm length difference value and a waistline difference value;
and generating size matching information according to the shoulder width difference value, the chest circumference difference value, the arm length difference value and the waistline difference value.
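The per-part comparison described in the steps above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the measurement field names and numeric values are assumptions, and only the signed garment-minus-body differences from the disclosure are computed (with the garment's sleeve length compared against the model's arm length).

```python
# Hypothetical sketch of the size-matching step: compare garment measurements
# (shoulder width, chest, sleeve length, waist) against the virtual model's
# corresponding body measurements and report the signed differences.
# All field names and measurement values below are illustrative assumptions.

def size_matching_info(garment: dict, model: dict) -> dict:
    # Map each reported part to (garment key, model key). Sleeve length is
    # compared against the model's arm length; other parts compare like-for-like.
    pairs = {
        "shoulder_width": ("shoulder_width", "shoulder_width"),
        "chest": ("chest", "chest"),
        "sleeve_length": ("sleeve_length", "arm_length"),
        "waist": ("waist", "waist"),
    }
    # Positive difference: the garment is larger than the body at that part.
    return {part: garment[g] - model[m] for part, (g, m) in pairs.items()}

garment = {"shoulder_width": 44.0, "chest": 100.0, "sleeve_length": 60.0, "waist": 90.0}
model = {"shoulder_width": 42.0, "chest": 96.0, "arm_length": 58.0, "waist": 80.0}
info = size_matching_info(garment, model)
```

The resulting dictionary of difference values is what a server could serialize as the size matching information sent to the client.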
Optionally, the method further comprises:
responding to a size selection instruction sent by a live client, and determining at least one target virtual model and at least one size target goods according to the size selection instruction;
Determining at least one effect display data from the effect display data of the target goods according to the at least one target virtual model and the target goods with at least one size;
and generating at least one size matching information according to the at least one effect display data.
Optionally, the obtaining the user stature attribute information in response to the item selection request for the target item sent by the live client includes:
responding to an article selection request for a target article sent by a live client, and sending a history record authorization request to the live client;
and under the condition that the live broadcast client returns the authorization information, acquiring the historical purchase information, and acquiring the user stature attribute information according to the historical purchase information.
Optionally, the determining target effect display data from the effect display data of the target goods according to the user stature attribute information includes:
determining a target virtual model from at least two virtual models corresponding to the target goods according to the user stature attribute information;
acquiring at least two candidate effect display data corresponding to the target virtual model and target goods with different sizes;
And determining target effect display data from the at least two candidate effect display data.
Optionally, the determining, according to the user stature attribute information, a target virtual model from at least two virtual models corresponding to the target goods includes:
determining height information and weight information according to the user stature attribute information;
and matching the height information and the weight information with the height information and the weight information of the at least two virtual models to determine a target virtual model.
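The matching step above can be sketched as a nearest-neighbor search over the candidate models' height and weight. The candidate model list and the distance metric (squared differences with equal weight per centimeter and per kilogram) are illustrative assumptions; the disclosure only states that the user's height and weight are matched against those of the virtual models.

```python
# Hypothetical sketch: pick the virtual model whose recorded height/weight is
# closest to the user's. Models and the distance metric are assumptions.

def match_virtual_model(user_height: float, user_weight: float, models: list) -> str:
    # models: list of (model_id, height_cm, weight_kg)
    def distance(m):
        _, h, w = m
        # Treat a 1 cm height gap and a 1 kg weight gap as equally important.
        return (h - user_height) ** 2 + (w - user_weight) ** 2
    return min(models, key=distance)[0]

models = [("model_s", 160, 50), ("model_m", 170, 62), ("model_l", 180, 75)]
target = match_virtual_model(172, 64, models)
```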
Optionally, the determining target effect display data from the at least two candidate effect display data includes:
determining a target size matched with the user stature attribute information;
and determining target effect display data from the at least two candidate effect display data according to the target size, wherein the size of a target goods in the target effect display data is matched with the target size.
Optionally, before the acquiring the user stature attribute information in response to the item selection request for the target item sent by the live client, the method further includes:
receiving an item identifier submitted by an item provider to which the target item belongs;
Loading target specification description information corresponding to the target goods according to the goods identification, and acquiring virtual model stature attribute information of the virtual model;
and generating effect display data for wearing the target goods on the corresponding parts of the virtual model according to the target specification description information and the virtual model stature attribute information.
In order to feed back effect display data corresponding to the user stature attribute information and to determine the size matching information of that data, the method for displaying the try-on effect of goods can acquire user stature attribute information in response to a goods selection request for target goods sent by the live client; determine target effect display data from the effect display data of the target goods according to the user stature attribute information and send the target effect display data to the live client, wherein the effect display data are effect display data of target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes; and, in response to a size comparison instruction sent by the live client, generate size matching information according to the target effect display data and send the size matching information to the live client. Because the size matching information of the target effect display data is displayed on the client side, the user can intuitively see the specific try-on effect, which makes it convenient to purchase goods and improves the user's purchasing experience.
Corresponding to the method embodiment, the present disclosure further provides an embodiment of an article try-on effect display device, and fig. 7 shows a schematic structural diagram of another article try-on effect display device provided in one embodiment of the present disclosure. As shown in fig. 7, the apparatus includes:
the request acquisition module 702 is configured to acquire user stature attribute information in response to an item selection request for a target item sent by the client;
the effect determining module 704 is configured to determine target effect display data from the effect display data of the target goods according to the user stature attribute information, and send the target effect display data to the client, wherein the effect display data are effect display data of target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes;
and the size matching module 706 is configured to respond to a size comparison instruction sent by a client, generate size matching information according to the target effect display data, and send the size matching information to the client.
In an alternative embodiment, the size matching module 706 is further configured to:
Determining position size data according to the size of the target goods corresponding to the target effect display data;
and generating size matching information according to the position size data and the position data of the virtual model corresponding to the target effect display data.
In an alternative embodiment, the size matching module 706 is further configured to:
according to the size of the target goods corresponding to the target effect display data, determining a shoulder width value of the goods, a chest circumference value of the goods, a sleeve length value of the goods and a waistline value of the goods;
according to the shoulder width value, the chest circumference value, the arm length value and the waistline value of the virtual model corresponding to the target effect display data, and the shoulder width value, the chest circumference value, the sleeve length value and the waistline value of the goods, determining a shoulder width difference value, a chest circumference difference value, an arm length difference value and a waistline difference value;
and generating size matching information according to the shoulder width difference value, the chest circumference difference value, the arm length difference value and the waistline difference value.
In an alternative embodiment, the size matching module 706 is further configured to:
responding to a size selection instruction sent by a live client, and determining at least one target virtual model and at least one size target goods according to the size selection instruction;
Determining at least one effect display data from the effect display data of the target goods according to the at least one target virtual model and the target goods with at least one size;
and generating at least one size matching information according to the at least one effect display data.
In an alternative embodiment, the request acquisition module 702 is further configured to:
responding to an article selection request for a target article sent by a live client, and sending a history record authorization request to the live client;
and under the condition that the live broadcast client returns the authorization information, acquiring the historical purchase information, and acquiring the user stature attribute information according to the historical purchase information.
In an alternative embodiment, the effect determination module 704 is further configured to:
determining a target virtual model from at least two virtual models corresponding to the target goods according to the user stature attribute information;
acquiring at least two candidate effect display data corresponding to the target virtual model and target goods with different sizes;
and determining target effect display data from the at least two candidate effect display data.
In an alternative embodiment, the effect determination module 704 is further configured to:
determining height information and weight information according to the user stature attribute information;
and matching the height information and the weight information with the height information and the weight information of the at least two virtual models to determine a target virtual model.
In an alternative embodiment, the effect determination module 704 is further configured to:
determining a target size matched with the user stature attribute information;
and determining target effect display data from the at least two candidate effect display data according to the target size, wherein the size of a target goods in the target effect display data is matched with the target size.
In an alternative embodiment, the effect determination module 704 is further configured to:
receiving an item identifier submitted by an item provider to which the target item belongs;
loading target specification description information corresponding to the target goods according to the goods identification, and acquiring virtual model stature attribute information of the virtual model;
and generating effect display data for wearing the target goods on the corresponding parts of the virtual model according to the target specification description information and the virtual model stature attribute information.
In order to feed back effect display data corresponding to the user stature attribute information and to determine the size matching information of that data, the device for displaying the try-on effect of goods provided by this embodiment can acquire user stature attribute information in response to a goods selection request for target goods sent by the live client; determine target effect display data from the effect display data of the target goods according to the user stature attribute information and send the target effect display data to the live client, wherein the effect display data are effect display data of target goods in different sizes worn on the corresponding parts of virtual models with different stature attributes; and, in response to a size comparison instruction sent by the live client, generate size matching information according to the target effect display data and send the size matching information to the live client. Because the size matching information of the target effect display data is displayed on the client side, the user can intuitively see the specific try-on effect, which makes it convenient to purchase goods and improves the user's purchasing experience.
Corresponding to the above method embodiment, the present disclosure further provides a method for displaying an effect of fitting an article, and fig. 8a is a flowchart of another method for displaying an effect of fitting an article according to one embodiment of the present disclosure, where the method is applied to a client, and includes:
Step S802, responding to an article selection instruction, determining at least two target articles, and sending an article selection request to the server according to the at least two target articles;
step S804, receiving at least two target effect display data corresponding to the user stature attribute information, returned by the server in response to the article selection request, where the at least two target effect display data are effect display data of target articles in at least two sizes worn on the corresponding part of a virtual model with the same stature attribute;
step S806, generating size matching information according to the at least two target effect display data, and displaying the size matching information through an item display interface, where the size matching information includes a difference value corresponding to at least two size values of the target item.
Specifically, referring to fig. 8b, the user may select one virtual model and have it wear coats in two sizes, that is, determine two target items. The client sends an article selection request to the server and receives the target effect display data returned by the server; for example, display data of the same virtual model wearing coats in size A and size B are shown on the screen of the client. From this display data, the client can locally calculate the difference in garment length between the two coats as worn, display that length difference on the screen, and recommend a suitable size to the user. In this way, the user can perceive the try-on effect of the two garment sizes more intuitively.
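The client-side comparison from fig. 8b can be sketched as follows. The garment lengths, the user's preferred length, and the recommendation rule (pick the size whose length is closer to the preference) are all illustrative assumptions; the disclosure only states that the client locally computes and displays the length difference.

```python
# Hypothetical sketch of the local two-size comparison: given rendered garment
# lengths for sizes A and B on the same virtual model, compute their difference
# and recommend the size closer to an assumed preferred length.

def compare_two_sizes(length_a: float, length_b: float, preferred: float):
    diff = abs(length_a - length_b)  # value shown on screen per fig. 8b
    # Recommendation rule is an assumption, not part of the disclosure.
    recommended = "A" if abs(length_a - preferred) <= abs(length_b - preferred) else "B"
    return diff, recommended

diff, recommended = compare_two_sizes(68.0, 72.0, 69.0)
```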
In this embodiment of the specification, the client receives the target effect display data returned by the server and locally calculates the size matching information of that data. Because the size matching information is displayed by the client, the user can intuitively see the specific try-on effect, which makes it convenient to purchase goods and improves the user's purchasing experience.
Corresponding to the above method embodiment, the present disclosure further provides an embodiment of an article try-on effect display device, and fig. 9 is a structural diagram of still another article try-on effect display device provided in one embodiment of the present disclosure, where the device is applied to a client, and includes:
a request sending module 902 configured to determine at least two target items in response to an article selection instruction, and send an article selection request to the server according to the at least two target items;
the effect determining module 904 is configured to receive at least two target effect display data corresponding to the user stature attribute information, returned by the server in response to the article selection request, where the at least two target effect display data are effect display data of target articles in at least two sizes worn on the corresponding part of a virtual model with the same stature attribute;
And a size matching module 906 configured to generate size matching information according to the at least two target effect display data, and display the size matching information through an item display interface, wherein the size matching information includes a difference value corresponding to at least two size values of the target item.
The embodiment of the present disclosure further provides a method for displaying a try-on effect of an article, and fig. 10 is a flowchart of another method for displaying a try-on effect of an article, where the method is applied to a live client, and includes:
step S1002, a live client creates an item selection request of a related target item and sends the item selection request to a live server;
step S1004, receiving size matching information fed back by the live broadcast server for the item selection request, where the size matching information includes a difference value corresponding to at least two size values of the target goods;
step S1006, displaying the size matching information through the article display interface, so as to present the difference effect on the corresponding part of the virtual model when target goods of different sizes are worn.
Corresponding to the above method embodiment, the present disclosure further provides an embodiment of an article try-on effect display device, and fig. 11 is a structural diagram of another article try-on effect display device provided in one embodiment of the present disclosure, where the device includes:
The request sending module 1102 is configured to create an item selection request of the associated target item by the live client and send the item selection request to the live server;
a data receiving module 1104, configured to receive size matching information fed back by the live broadcast server for the item selection request;
the effect display module 1106 is configured to display the size matching information through the article display interface, so as to present the display effect of the target goods worn on the corresponding part of the virtual model.
The above is a schematic scheme of the device for displaying the effect of fitting the goods in this embodiment. It should be noted that, the technical solution of the device for displaying the effect of try-on of the goods and the technical solution of the method for displaying the effect of try-on of the goods described above belong to the same concept, and the details of the technical solution of the device for displaying the effect of try-on of the goods which are not described in detail can be referred to the description of the technical solution of the method for displaying the effect of try-on of the goods described above.
Fig. 12 illustrates a block diagram of a computing device 1200 provided in accordance with an embodiment of the present specification. The components of the computing device 1200 include, but are not limited to, a memory 1210 and a processor 1220. The processor 1220 is connected to the memory 1210 via a bus 1230, and a database 1250 is used to store data.
The computing device 1200 also includes an access device 1240 that enables the computing device 1200 to communicate via one or more networks 1260. Examples of such networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the internet. The access device 1240 may include one or more of any type of wired or wireless network interface, such as a network interface card (NIC), an IEEE 802.11 wireless local area network (WLAN) interface, a worldwide interoperability for microwave access (WiMAX) interface, an Ethernet interface, a universal serial bus (USB) interface, a cellular network interface, a Bluetooth interface, or a near field communication (NFC) interface.
In one embodiment of the present description, the above components of computing device 1200, as well as other components not shown in fig. 12, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 12 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 1200 may be any type of stationary or mobile computing device, including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or personal computer (PC, personal Computer). Computing device 1200 may also be a mobile or stationary server.
The processor 1220 is configured to execute computer-executable instructions that, when executed by the processor, perform the steps of the method for displaying the try-on effect of goods described above. The foregoing is a schematic illustration of the computing device of this embodiment. It should be noted that the technical solution of the computing device and the technical solution of the method for displaying the try-on effect of goods belong to the same concept; for details of the technical solution of the computing device that are not described in detail, refer to the description of the technical solution of the method for displaying the try-on effect of goods above.
An embodiment of the present disclosure also provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the steps of the method for displaying a try-on effect of an article described above.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the method for displaying the try-on effect of the goods belong to the same concept, and the details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the method for displaying the try-on effect of the goods.
An embodiment of the present disclosure further provides a computer program, where the computer program when executed in a computer causes the computer to execute the steps of the method for displaying a try-on effect of an article described above.
The above is an exemplary version of a computer program of the present embodiment. It should be noted that, the technical solution of the computer program and the technical solution of the method for displaying the effect of try-on of goods described above belong to the same concept, and details of the technical solution of the computer program, which are not described in detail, can be referred to the description of the technical solution of the method for displaying the effect of try-on of goods described above.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The computer instructions include computer program code that may be in source code form, object code form, executable file or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the computer readable medium contains content that can be appropriately scaled according to the requirements of jurisdictions in which such content is subject to legislation and patent practice, such as in certain jurisdictions in which such content is subject to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the embodiments are not limited by the order of actions described, as some steps may be performed in other order or simultaneously according to the embodiments of the present disclosure. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the embodiments described in the specification.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the present specification disclosed above are merely used to help clarify the present specification. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the teaching of the embodiments. The embodiments were chosen and described in order to best explain the principles of the embodiments and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. This specification is to be limited only by the claims and the full scope and equivalents thereof.

Claims (17)

1. A method for displaying the try-on effect of goods is applied to a server and comprises the following steps:
responding to an article selection request for a target article sent by a client, and acquiring user stature attribute information;
and determining target effect display data from the effect display data of the target goods according to the user stature attribute information, and sending the target effect display data to the client, wherein the effect display data are effect display data of the target goods with different sizes which are worn on corresponding parts of virtual models with different stature attributes.
2. The method of claim 1, further comprising:
and responding to a size comparison instruction sent by a client, generating size matching information according to the target effect display data, and sending the size matching information to the client.
3. The method of claim 2, the generating size matching information according to the target effect presentation data, comprising:
determining position size data according to the size of the target goods corresponding to the target effect display data;
and generating size matching information according to the position size data and the position data of the virtual model corresponding to the target effect display data.
4. The method of claim 2, the generating size matching information according to the target effect presentation data, comprising:
determining the size value of the goods according to the size of the target goods corresponding to the target effect display data;
determining a corresponding difference value between the size value of the goods and the size value of the virtual model according to the size value of the virtual model corresponding to the target effect display data;
generating size matching information according to the difference value, wherein the size value comprises a plurality of or all of the following values: shoulder width value, chest circumference value, sleeve length value, and waist circumference value.
5. The method of claim 1, further comprising:
responding to a size selection instruction sent by the client, and determining at least one target virtual model and target goods of at least one size according to the size selection instruction;
determining at least one piece of effect display data from the effect display data of the target goods according to the at least one target virtual model and the target goods of at least one size;
and generating at least one piece of size matching information according to the at least one piece of effect display data.
6. The method of claim 1, wherein the obtaining of user stature attribute information in response to the goods selection request for the target goods sent by the client comprises:
responding to a goods selection request for target goods sent by a client, and sending a history authorization request to the client;
and under the condition that the client returns authorization information, acquiring historical purchase information, and obtaining the user stature attribute information according to the historical purchase information.
7. The method of claim 1, wherein the determining of target effect display data from the effect display data of the target goods according to the user stature attribute information comprises:
determining a target virtual model from at least two virtual models corresponding to the target goods according to the user stature attribute information;
acquiring at least two candidate effect display data corresponding to the target virtual model and target goods with different sizes;
and determining target effect display data from the at least two candidate effect display data.
8. The method of claim 7, wherein the determining, according to the user stature attribute information, of a target virtual model from the at least two virtual models corresponding to the target goods comprises:
determining height information and weight information according to the user stature attribute information;
and matching the height information and the weight information with the height information and the weight information of the at least two virtual models to determine a target virtual model.
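The height/weight matching in claim 8 can be illustrated as a nearest-neighbour lookup; the candidate model list and the Euclidean distance metric are assumptions made for this sketch, not details taken from the application.

```python
# Hypothetical catalogue of virtual models with their stature attributes.
MODELS = [
    {"id": "model_160_50", "height": 160, "weight": 50},
    {"id": "model_170_60", "height": 170, "weight": 60},
    {"id": "model_180_75", "height": 180, "weight": 75},
]

def match_model(user_height: float, user_weight: float, models=MODELS) -> dict:
    """Pick the model whose (height, weight) lies closest to the user's."""
    return min(models, key=lambda m: ((m["height"] - user_height) ** 2 +
                                      (m["weight"] - user_weight) ** 2) ** 0.5)

print(match_model(168, 62)["id"])  # → model_170_60
```

In practice height and weight would likely be normalised or weighted before comparison, since they are on different scales; plain Euclidean distance is the simplest choice for illustration.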
9. The method of claim 7, wherein the determining of target effect display data from the at least two candidate effect display data comprises:
determining a target size matched with the user stature attribute information;
and determining target effect display data from the at least two candidate effect display data according to the target size, wherein the size of the target goods in the target effect display data matches the target size.
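The selection step of claim 9 amounts to filtering the candidate effect display data for the matched virtual model by the target size. A minimal sketch, with hypothetical record fields:

```python
# Hypothetical candidate effect display data for one virtual model,
# one entry per goods size.
CANDIDATES = [
    {"size": "S", "render_url": "effect_s.png"},
    {"size": "M", "render_url": "effect_m.png"},
    {"size": "L", "render_url": "effect_l.png"},
]

def select_target_effect(candidates, target_size: str):
    """Return the candidate whose goods size equals the target size, or None."""
    return next((c for c in candidates if c["size"] == target_size), None)

print(select_target_effect(CANDIDATES, "M"))
```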
10. The method of claim 1, further comprising, before the obtaining of user stature attribute information in response to the goods selection request for the target goods sent by the client:
receiving a goods identifier submitted by a goods provider to which the target goods belong;
loading target specification description information corresponding to the target goods according to the goods identifier, and acquiring virtual model stature attribute information of a virtual model;
and generating effect display data of the target goods worn on corresponding parts of the virtual model according to the target specification description information and the virtual model stature attribute information.
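The preprocessing in claim 10 implies one effect-display entry per (virtual model, goods size) combination. A sketch of that enumeration, under assumed data shapes (the field names are illustrative):

```python
from itertools import product

def build_effect_display_data(spec_sizes, models):
    """One effect-display record per (virtual model, goods size) pair."""
    return [{"model_id": m["id"], "size": s} for m, s in product(models, spec_sizes)]

entries = build_effect_display_data(
    ["S", "M", "L"],
    [{"id": "model_160_50"}, {"id": "model_170_60"}],
)
print(len(entries))  # 2 models x 3 sizes = 6 entries
```

Each such entry would then be associated with the rendered try-on result for that pair, so the server can answer any combination of matched model and selected size without rendering on demand.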
11. A method for displaying the try-on effect of goods, applied to a client, comprising:
responding to a goods selection instruction, determining target goods, and sending a goods selection request to a server according to the target goods;
and receiving the size matching information returned by the server, and displaying the size matching information through a goods display interface.
12. The method of claim 11, further comprising, after the responding to the goods selection instruction:
displaying a stature attribute input interface in response to the goods selection instruction, and receiving user stature attribute information input through the stature attribute input interface;
and creating a goods selection request associated with the target goods according to the user stature attribute information.
13. The method of claim 11, further comprising, after the displaying of the size matching information through the goods display interface:
receiving a click command submitted through a stature part control in the goods display interface, and determining a target part according to the click command;
and selecting target size matching information matched with the target part from the size matching information, and displaying the target size matching information through the goods display interface, wherein the part corresponding to the target size matching information is the target part.
14. The method of claim 11, further comprising, after the displaying of the size matching information through the goods display interface:
receiving a click command submitted through a virtual model selection control in the goods display interface, and determining at least one target virtual model according to the click command;
receiving a click command submitted through a goods size selection control in the goods display interface, and determining target goods of at least one size according to the click command;
and sending an identifier of the at least one target virtual model and an identifier of the target goods of at least one size to a server, and receiving at least one piece of size matching information returned by the server.
15. A method for displaying the try-on effect of goods, applied to a server, comprising:
responding to a goods selection request for target goods sent by a live broadcast client, and acquiring user stature attribute information;
determining target effect display data from the effect display data of the target goods according to the user stature attribute information, and sending the target effect display data to the live broadcast client, wherein the effect display data are effect display data of the target goods in different sizes worn on corresponding parts of virtual models with different stature attributes;
and responding to a size comparison instruction sent by the live broadcast client, generating size matching information according to the target effect display data, and sending the size matching information to the live broadcast client.
16. A method for displaying the try-on effect of goods, applied to a client, comprising:
responding to a goods selection instruction, determining target goods, and sending a goods selection request to a server according to the target goods;
receiving at least two pieces of target effect display data corresponding to user stature attribute information returned by the server in response to the goods selection request, wherein the at least two pieces of target effect display data are effect display data of the target goods in at least two sizes worn on corresponding parts of a virtual model with the same stature attribute;
and generating size matching information according to the at least two pieces of target effect display data, and displaying the size matching information through a goods display interface, wherein the size matching information comprises difference values corresponding to at least two size values of the target goods.
17. A method for displaying the goods try-on effect, applied to a live broadcast client, comprising:
creating, by the live broadcast client, a goods selection request associated with target goods, and sending the goods selection request to a live broadcast server;
receiving size matching information fed back by the live broadcast server in response to the goods selection request, wherein the size matching information comprises difference values corresponding to at least two size values of the target goods;
and displaying the size matching information through a goods display interface, so as to present the difference in effect when the target goods are worn on the corresponding parts of the virtual model.
CN202211531438.3A 2022-12-01 2022-12-01 Method and device for displaying try-on effect of goods Pending CN116029783A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211531438.3A CN116029783A (en) 2022-12-01 2022-12-01 Method and device for displaying try-on effect of goods


Publications (1)

Publication Number Publication Date
CN116029783A (en) 2023-04-28

Family

ID=86076676

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211531438.3A Pending CN116029783A (en) 2022-12-01 2022-12-01 Method and device for displaying try-on effect of goods

Country Status (1)

Country Link
CN (1) CN116029783A (en)

Similar Documents

Publication Publication Date Title
US11593871B1 (en) Virtually modeling clothing based on 3D models of customers
US11164381B2 (en) Clothing model generation and display system
KR102425462B1 (en) Fashion preference analysis
Pachoulakis et al. Augmented reality platforms for virtual fitting rooms
US8818883B2 (en) Personalized shopping avatar
US20170352091A1 (en) Methods for generating a 3d virtual body model of a person combined with a 3d garment image, and related devices, systems and computer program products
US20220258049A1 (en) System and method for real-time calibration of virtual apparel using stateful neural network inferences and interactive body measurements
CN111681070B (en) Online commodity purchasing method, purchasing device, storage device and purchasing equipment
Zhu et al. An interactive clothing design and personalized virtual display system
CN113610612B (en) 3D virtual fitting method, system and storage medium
KR20200023970A (en) Virtual fitting support system
US20180268472A1 (en) System and method for digital makeup mirror
WO2020079235A1 (en) Method and apparatus for accessing clothing
CN116029783A (en) Method and device for displaying try-on effect of goods
KR20170018613A (en) System and method for advertisement using 3d model
CN116051228A (en) Method for determining try-on effect of goods
WO2022081745A1 (en) Real-time rendering of 3d wearable articles on human bodies for camera-supported computing devices
KR20020051667A (en) Method and apparatus for representing virtual shape of wearing garment(s)
KR20170143223A (en) Apparatus and method for providing 3d immersive experience contents service
NL2022937B1 (en) Method and Apparatus for Accessing Clothing
Kubal et al. Augmented reality based online shopping
US20240161423A1 (en) Systems and methods for using machine learning models to effect virtual try-on and styling on actual users
US20240071019A1 (en) Three-dimensional models of users wearing clothing items
US20240037869A1 (en) Systems and methods for using machine learning models to effect virtual try-on and styling on actual users
US11620802B2 (en) System and method for providing a simulated visualization of product personalized with user selected art

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination