CN115358828A - Information processing and interaction method, device, equipment and medium based on virtual fitting

Info

Publication number
CN115358828A
CN115358828A (application CN202211257860.4A)
Authority
CN
China
Prior art keywords
dimensional model
target
fitting
information
vertexes
Prior art date
Legal status
Granted
Application number
CN202211257860.4A
Other languages
Chinese (zh)
Other versions
CN115358828B (en)
Inventor
孙泽锋
陈志文
吕承飞
Current Assignee
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202211257860.4A
Publication of CN115358828A
Application granted
Publication of CN115358828B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0641 Shopping interfaces
    • G06Q30/0643 Graphical representation of items or shoppers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205 Re-meshing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

In the embodiments of this application, a fused three-dimensional model simulating a virtual fitting state is first built from a three-dimensional model of a fitting object and a reference three-dimensional model of a target commodity object. In the virtual fitting state, adaptation degree information reflecting the fitting comfort is computed for a plurality of target vertices on the three-dimensional model of the fitting object, and the size parameter and/or shape parameter of the reference three-dimensional model is adjusted on the basis of this information until a target three-dimensional model meeting the adaptation degree requirement is obtained. The result is a target three-dimensional model of the target commodity object that is highly adapted to the fitting object, on whose basis personalized commodity customization can be performed for the user, or commodity objects highly adapted to the user can be selected, effectively reducing returns and exchanges caused by ill-fitting goods.

Description

Information processing and interaction method, device, equipment and medium based on virtual fitting
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, a device, and a medium for information processing and interaction based on virtual fitting.
Background
With the development of internet technology and electronic commerce, people can shop online without leaving home. However, for wearable goods such as shoes, the user cannot try the goods on when purchasing online. Goods are therefore often returned or exchanged after delivery because they do not fit, which seriously harms the user's shopping experience, increases the cost of online shopping, and reduces its efficiency.
Accordingly, solutions have appeared in the prior art that estimate the foot length and recommend shoes of a suitable size, for example by estimating the foot length from key points in a foot image or by measuring the user's foot with AR (Augmented Reality) technology. Even with these solutions, however, a user buying shoes still cannot tell whether a shoe will squeeze the foot given its particular shape, or whether it will be comfortable to wear. In other words, current solutions still do not adequately solve the problem of shopping for wearable goods, nor the resulting problem of returns and exchanges.
Disclosure of Invention
Various aspects of this application provide an information processing and interaction method, device, equipment and medium based on virtual fitting, so as to obtain a target three-dimensional model of a target commodity object that is highly adapted to the fitting object and thereby support better personalized customization services for users.
An embodiment of this application provides an information processing method based on virtual fitting, which includes: in response to a fitting request, fusing the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model, the fused three-dimensional model representing a first relative positional relationship between the three-dimensional model of the fitting object and the reference three-dimensional model in the fitting state; according to the first relative positional relationship, acquiring a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and the corresponding vertices or regions on the reference three-dimensional model, as the adaptation degree information of the target vertices; and, when the adaptation degree information of the target vertices shows that the reference three-dimensional model does not meet the adaptation degree requirement, adjusting the size parameter and/or shape parameter of the reference three-dimensional model and re-acquiring the adaptation degree information of the target vertices, until a target three-dimensional model meeting the adaptation degree requirement is obtained.
An embodiment of this application further provides an information processing apparatus based on virtual fitting, which includes: a fusion module, configured to respond to a fitting request by fusing the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model, the fused three-dimensional model representing a first relative positional relationship between the three-dimensional model of the fitting object and the reference three-dimensional model in the fitting state; an acquisition module, configured to acquire, according to the first relative positional relationship, a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and the corresponding vertices or regions on the reference three-dimensional model, as the adaptation degree information of the target vertices; and an adjustment module, configured to adjust the size parameter and/or shape parameter of the reference three-dimensional model when the adaptation degree information of the target vertices shows that the reference three-dimensional model does not meet the adaptation degree requirement, and to re-acquire the adaptation degree information of the target vertices until a target three-dimensional model meeting the adaptation degree requirement is obtained.
An embodiment of this application further provides an information interaction method based on virtual fitting, which includes: in response to a fitting request, fusing the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model, the fused three-dimensional model representing a first relative positional relationship between the three-dimensional model of the fitting object and the reference three-dimensional model in the fitting state; according to the first relative positional relationship, acquiring a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and the corresponding vertices or regions on the reference three-dimensional model, as the adaptation degree information of the target vertices; displaying any one of the three-dimensional model of the fitting object, the reference three-dimensional model and the fused three-dimensional model, and visually marking the adaptation degree information of the target vertices on that model; and, in response to an adjustment operation on that model, adjusting the size parameter and/or shape parameter of the reference three-dimensional model and synchronously updating the adaptation degree information of the target vertices and the visual marks on that model according to the adjusted reference three-dimensional model.
An embodiment of this application further provides another information interaction method based on virtual fitting, which includes: in response to a fitting request, acquiring the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object, and displaying both; fusing the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a first relative positional relationship between them in the fitting state; according to the first relative positional relationship, calculating a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and the corresponding vertices or regions on the reference three-dimensional model, as the adaptation degree information of the target vertices; visually marking the adaptation degree information of the target vertices on the three-dimensional model of the fitting object; and, in response to an adjustment operation on the reference three-dimensional model, adjusting its size parameter and/or shape parameter and synchronously updating the adaptation degree information of the target vertices and the visual marks on the three-dimensional model of the fitting object according to the adjusted reference three-dimensional model.
An embodiment of the present application further provides an electronic device, including: a memory and a processor; a memory for storing a computer program; the processor is coupled to the memory for executing a computer program for performing steps in a virtual fitting based information processing method or information interaction method.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program, which, when executed by a processor, causes the processor to implement steps in an information processing method or an information interaction method based on virtual fitting.
In the embodiments of this application, a fused three-dimensional model simulating a virtual fitting state is first created from the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object. In the virtual fitting state, adaptation degree information reflecting the fitting comfort is obtained for a plurality of target vertices on the three-dimensional model of the fitting object, and the size parameter and/or shape parameter of the reference three-dimensional model is adjusted on the basis of this information until a target three-dimensional model meeting the adaptation degree requirement is obtained. In this way, virtual fitting technology combines the three-dimensional models of the fitting object and the target commodity object to simulate a real fitting scenario, and the reference three-dimensional model of the target commodity object is dynamically adjusted until it matches the size and shape of the three-dimensional model of the fitting object. Because this process closely approximates real fitting, a target three-dimensional model of the target commodity object highly adapted to the fitting object can be obtained, on whose basis personalized commodity customization can be performed for the user or commodity objects highly adapted to the user can be selected. This effectively reduces returns and exchanges caused by ill-fitting commodity objects, lowers the cost of online shopping, and improves both the efficiency of online shopping and the user's shopping experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is an application scene diagram of a virtual fitting based on a three-dimensional model according to an embodiment of the present application;
fig. 2 is a diagram of another application scenario of virtual fitting based on a three-dimensional model according to an embodiment of the present application;
fig. 3 is a flowchart of an information processing method based on virtual fitting according to an embodiment of the present application;
fig. 4 is a flowchart of another information processing method based on virtual try-on according to an embodiment of the present application;
fig. 5 is a schematic view of another virtual fitting process based on a three-dimensional model according to an embodiment of the present application;
fig. 6a is a flowchart of an information interaction method based on virtual fitting according to an embodiment of the present application;
fig. 6b is a flowchart of another information interaction method based on virtual fitting according to an embodiment of the present application;
fig. 7a is a model structure diagram of a three-dimensional reconstruction network according to an embodiment of the present disclosure;
fig. 7b is a flowchart of a three-dimensional reconstruction method according to an embodiment of the present application;
fig. 7c is a model structure diagram of another three-dimensional reconstruction network according to an embodiment of the present disclosure;
fig. 7d is a model structure diagram of a feature extraction network according to an embodiment of the present application;
fig. 7e is a model structure diagram of a feature extraction module in a feature extraction network according to an embodiment of the present application;
fig. 7f is a block diagram of a downsampling sub-module according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an information processing apparatus based on virtual fitting according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the embodiments of this application, a fused three-dimensional model simulating a virtual fitting state is first established from the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object. In the virtual fitting state, adaptation degree information reflecting the fitting comfort is obtained for a plurality of target vertices on the three-dimensional model of the fitting object, and the size parameter and/or shape parameter of the reference three-dimensional model is adjusted on the basis of this information until a target three-dimensional model meeting the adaptation degree requirement is obtained. In this way, virtual fitting technology combines the three-dimensional models of the fitting object and the target commodity object to simulate a real fitting scenario, and the reference three-dimensional model of the target commodity object is dynamically adjusted until it matches the size and shape of the three-dimensional model of the fitting object. Because this process closely approximates real fitting, a target three-dimensional model of the target commodity object highly adapted to the fitting object is obtained, on whose basis personalized commodity customization can be performed for the user or commodity objects highly adapted to the user can be selected. This effectively reduces returns and exchanges caused by ill-fitting commodity objects, lowers the cost of online shopping, and improves both the efficiency of online shopping and the user's shopping experience.
In a commodity customization or online shopping scenario, the fitting object may be any body part of the user, including but not limited to: feet, hands, head, upper body or lower body. The commodity objects to be tried on include, but are not limited to: shoes, gloves, hats, clothing or pants. The virtual fitting technology based on three-dimensional models can efficiently and accurately produce a commodity three-dimensional model highly suited to the user; on that basis, the user can select and purchase a suitable commodity object, or a commodity customization service can be provided to the user. This reduces returns and exchanges caused by ill-fitting commodity objects, lowers the cost of online shopping, and improves both the efficiency of online shopping and the user's shopping experience.
In practical applications, the virtual fitting technique relies on a three-dimensional model of the fitting object and a three-dimensional model of the commodity object. To obtain these models, a plurality of images of the real fitting object may be taken and the real fitting object three-dimensionally reconstructed from them, yielding a three-dimensional (3D) model of the fitting object, which can be regarded as a virtual fitting object relative to the real one. Similarly, a plurality of images of the real commodity object are taken and the real commodity object is three-dimensionally reconstructed from them, yielding a three-dimensional model of the commodity object, which can likewise be regarded as a virtual commodity object relative to the real one. Both models may be three-dimensional mesh models, which consist of a plurality of vertices (Vertex) and a plurality of patches; the patches may be triangular or arbitrary polygonal, each composed of several vertices, a triangular patch for example being composed of three vertices. This embodiment does not limit how the fitting object and the commodity object are three-dimensionally reconstructed into their corresponding three-dimensional models; a three-dimensional reconstruction method is given by way of example in the subsequent embodiments, taking the fitting object as an example.
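For concreteness, the triangular mesh representation just described can be sketched as follows; this is a minimal illustration in Python with NumPy, and the class and method names are ours rather than the patent's:

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class TriMesh:
        # Triangular mesh model: vertices plus triangular patches.
        # vertices: (N, 3) float array of 3D vertex positions.
        # faces:    (M, 3) int array; each row lists the three vertex
        #           indices composing one triangular patch.
        vertices: np.ndarray
        faces: np.ndarray

        def patches_at(self, v: int) -> np.ndarray:
            # All triangular patches that use vertex v as a corner.
            return self.faces[np.any(self.faces == v, axis=1)]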
After obtaining the three-dimensional model of the fitting object and the three-dimensional model of the commodity object, fusing the three-dimensional model of the fitting object and the three-dimensional model of the commodity object to execute virtual fitting operation, and obtaining the adaptation degree information of a plurality of vertexes on the three-dimensional model of the fitting object in the virtual fitting process, wherein the adaptation degree information is used for representing the tightness degree between the vertexes and corresponding vertexes or areas on the three-dimensional model of the commodity object and can also be understood as fitting comfort; and adjusting the size parameters and/or the shape parameters of the three-dimensional model of the commodity object according to the adaptation degree information of a plurality of vertexes included in the three-dimensional model of the fitting object until the three-dimensional model of the commodity object meeting the adaptation degree requirement is obtained. If the shopping scene is the online shopping scene, the commodity object with the proper size and/or shape can be selected for the user according to the finally obtained three-dimensional model of the commodity object, the information of the commodity object is provided for the user so that the user can place an order to purchase, if the shopping scene is the customized scene, the finally obtained three-dimensional model of the commodity object can be provided for a customizing platform for producing the commodity object, the customizing platform can be used for customizing the produced commodity object for the user according to the three-dimensional model of the commodity object, and the produced commodity object is provided for the user through logistics distribution, so that the commodity customizing task is completed.
For better understanding, several application scenarios of virtual fitting based on three-dimensional models are described below.
Fig. 1 is an application scenario diagram of virtual fitting based on a three-dimensional model according to an embodiment of the present application. In this scenario, the server undertakes the main computation: fusing the models and adjusting the three-dimensional model of the commodity object matched to the fitting object until the target three-dimensional model is obtained. Taking commodity customization as an example, when a user wants to customize a commodity object, as shown at (1) in fig. 1, the terminal device is triggered to send a fitting request (or customization request) to the server. The server maintains in advance a three-dimensional model of the fitting object and a reference three-dimensional model of each commodity object, and according to the fitting request it can acquire the reference three-dimensional model of the target commodity object as well as the three-dimensional model of the user's fitting object. The target commodity object is a commodity object that can be worn on the fitting object, and the reference three-dimensional model is the three-dimensional model of the target commodity object in its initial state; when the reference three-dimensional model does not meet the adaptation degree requirement, its size parameter and/or shape parameter is adjusted until a three-dimensional model meeting the requirement is obtained, which is called the target three-dimensional model. It is worth noting that the server has three-dimensional reconstruction capability, and can also reconstruct the fitting object in real time from a plurality of its images to obtain the three-dimensional model of the fitting object; likewise, it can reconstruct the target commodity object in real time from a plurality of its images. That is, the server may store the three-dimensional models of the fitting object and of each commodity object in advance, or may reconstruct them in real time, and this is not limited.
Referring to (2) in fig. 1, the server fuses the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model capable of representing the fitting state. Taking a foot as the fitting object and a shoe as the target commodity object, the fused three-dimensional model obtained by fusing the foot model and the shoe model is morphologically a state in which the foot virtually wears the shoe. The fused three-dimensional model's representation of the fitting state can be expressed as the relative positional relationship between the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object.
Referring to (3) in fig. 1, based on the relative positional relationship between the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object defined by the fused three-dimensional model, adaptation degree information is obtained for a plurality of target vertices on the three-dimensional model of the fitting object; the adaptation degree information of each target vertex reflects the fitting comfort between that vertex and the corresponding vertex or region on the reference three-dimensional model. The target vertices may be some or all of the vertices on the three-dimensional model of the fitting object. With continued reference to fig. 1, assume that vertex P is any one of the target vertices on the three-dimensional model of the fitting object, and that vertices O, A, B, C, D and E are vertices on the reference three-dimensional model, with vertex O corresponding to vertex P. In practice, the adaptation degree information of vertex P may be calculated from the distance from P to O. Alternatively, it may be calculated from the distances from P to each triangular patch containing O: for example, the distance d1 from P to the OAB triangular patch, d2 to the OBC patch, d3 to the OCD patch, d4 to the OED patch and d5 to the OAE patch are computed, the distances d1 through d5 are weighted and summed, or averaged, to obtain the final distance from P to the region around O, and the adaptation degree information of P is calculated from that final distance. Note that the aggregation is not limited to a weighted sum or average of d1 through d5; for example, the maximum or minimum of d1 through d5 may also serve as the final distance from P to the region around O, i.e. the adaptation degree information of P. The distance from vertex P to each triangular patch may be the distance from P to the patch's center or the perpendicular distance from P to the patch, which is not limited here.
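To make the d1 through d5 example concrete, the aggregation just described can be sketched as below; this is illustrative only (the patch-center distance variant is used, and the function names are assumptions, not the patent's):

    import numpy as np

    def patch_center_distance(p, tri):
        # tri: (3, 3) array of one patch's vertex positions. The text
        # also permits a perpendicular point-to-plane distance; the
        # patch-center variant is used here for brevity.
        return float(np.linalg.norm(p - tri.mean(axis=0)))

    def aggregate_distances(p, patches, mode="mean", weights=None):
        # Compute d1..dk from vertex P to the patches around the
        # corresponding vertex O, then aggregate as described above.
        d = np.array([patch_center_distance(p, t) for t in patches])
        if mode == "mean":
            return float(np.average(d, weights=weights))  # weighted sum / average
        return float(d.min()) if mode == "min" else float(d.max())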
Referring to (4) in fig. 1, the server provides the adaptation degree information of the target vertices on the three-dimensional model of the fitting object to the terminal device, and the terminal device automatically determines from it whether the reference three-dimensional model meets the adaptation degree requirement. Alternatively, referring to (5) in fig. 1, the terminal device renders the three-dimensional model of the fitting object based on the adaptation degree information of the target vertices to obtain an adaptation degree thermodynamic diagram and displays it, so that the user can view the diagram and judge for themselves whether the reference three-dimensional model meets the adaptation degree requirement. When it is confirmed that the reference three-dimensional model does not meet the requirement, referring to (6) in fig. 1, the terminal device triggers an adjustment operation, and referring to (7) in fig. 1, the server responds by adjusting the reference three-dimensional model to obtain the target three-dimensional model. It should be noted that, in the embodiments, visualizing the adaptation degree information of the target vertices as a thermodynamic diagram is only an example; the information may be visually marked on the fused three-dimensional model in any manner convenient for the user to view.
Referring to (8) in fig. 1, the server provides the target three-dimensional model to the customization platform. Referring to (9) in fig. 1, the customization platform manufactures, based on the target three-dimensional model, a target commodity object adapted to the fitting object. Referring to (10) in fig. 1, the produced target commodity object is delivered to the customer by logistics. Optionally, in the embodiments, the size parameter and/or shape parameter corresponding to the target three-dimensional model may instead be provided to the customization platform, which then manufactures the adapted target commodity object from those parameters.
Fig. 2 is a diagram of another application scenario of virtual fitting based on a three-dimensional model according to an embodiment of the present application. In this scenario, since terminal devices are increasingly powerful, the terminal device itself can undertake the computation of fusing the models and adjusting the three-dimensional model of the commodity object adapted to the fitting object until the target three-dimensional model is obtained. In practice, the server may provide the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object, with the terminal device requesting them from the server; the server may store these models in advance or reconstruct them three-dimensionally in real time, without limitation. Of course, the terminal device may equally obtain both models by performing the three-dimensional reconstruction itself. With both models loaded locally, as shown at (1) in fig. 2, the terminal device responds to a fitting request triggered by the user and fuses the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model. This process is referred to as the fusion operation for short.
Referring to (2) in fig. 2, the terminal device obtains the adaptation degree information of a plurality of target vertices on the three-dimensional model of the fitting object based on the relative positional relationship, defined by the fused three-dimensional model, between the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object; the adaptation degree information of each target vertex reflects the fit between that vertex and the corresponding vertex on the reference three-dimensional model. This process is referred to as the adaptation degree calculation operation for short.
The terminal device automatically determines from the adaptation degree information of the target vertices whether the reference three-dimensional model meets the adaptation degree requirement. Alternatively, referring to (3) in fig. 2, the terminal device renders the three-dimensional model of the fitting object based on that information to obtain an adaptation degree thermodynamic diagram and displays it, and the user views the diagram to judge for themselves whether the reference three-dimensional model meets the requirement. When the reference three-dimensional model does not meet the requirement, referring to (4) in fig. 2, the user initiates an adjustment operation on the terminal device to adjust the reference three-dimensional model and obtain the target three-dimensional model.
Referring to (5) in fig. 2, the terminal device provides the target three-dimensional model to the server, and the server provides it to the customization platform (not shown in fig. 2). Referring to (6) in fig. 2, the customization platform manufactures the target commodity object based on the target three-dimensional model, and referring to (7) in fig. 2, the produced target commodity object is delivered to the user by logistics. In some application scenarios, after the terminal device provides the target three-dimensional model, the server may also return information about the target commodity object to the terminal device, where the target commodity object may be a customized commodity whose size and shape fit the target three-dimensional model. Such information includes, but is not limited to: the material, style, production schedule, logistics schedule, production date and manufacturer of the target commodity object.
It should be noted that fig. 1 and fig. 2 are only exemplary application scenarios, and do not limit the tasks undertaken by the terminal device and the server in the process of executing the task of creating the three-dimensional model of the commodity object adapted to the fitting object.
In this embodiment, terminal devices include, but are not limited to: mobile phones, tablet computers, desktop computers, wearable smart devices and smart home devices. Servers include, but are not limited to: a cloud server, a cloud server cluster, or a container or virtual machine within a cloud server.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 3 is a flowchart of an information processing method based on virtual fitting according to an embodiment of the present application. The method may be executed by an information processing apparatus based on virtual fitting, and the apparatus may be composed of software and/or hardware, and may be integrated in a terminal device or a server, or may be partially integrated in a terminal device and partially integrated in a server, which is not limited thereto. Referring to fig. 3, the method may include the steps of:
301. In response to a fitting request, fuse the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model, the fused three-dimensional model representing a first relative positional relationship between the three-dimensional model of the fitting object and the reference three-dimensional model in the fitting state.
302. According to the first relative positional relationship, acquire a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and the corresponding vertices or regions on the reference three-dimensional model, as the adaptation degree information of the target vertices.
303. If the adaptation degree information of the target vertices shows that the reference three-dimensional model does not meet the adaptation degree requirement, adjust the size parameter and/or shape parameter of the reference three-dimensional model and re-acquire the adaptation degree information of the target vertices, until a target three-dimensional model meeting the adaptation degree requirement is obtained.
In this embodiment, when the user has a fitting requirement, a fitting request may be initiated by the terminal device, where the fitting request is used to request to fit a commodity object worn on a fitting object, and the fitting request may include, but is not limited to: an identification of the fitting object, an identification of the merchandise object, and a plurality of images of the fitting object. And responding to the fitting request, and acquiring the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object. If the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object are three-dimensionally reconstructed in advance, the three-dimensional model of the fitting object generated in advance may be acquired based on the identification of the fitting object, and the reference three-dimensional model of the target commodity object three-dimensionally reconstructed in advance may be acquired based on the identification of the commodity object. Of course, real-time three-dimensional reconstruction of a three-dimensional model of the fitting object or a reference three-dimensional model of the target commodity object is also supported, which is not limited.
In the customized scenario, the fitting request may be regarded as a customized request, and then, in response to the fitting request, an optional implementation manner for obtaining the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object is as follows: responding to the customization request, and displaying a commodity customization page, wherein the commodity customization page comprises at least one customizable commodity object; responding to the selection operation of a user on a commodity customized page, and determining a selected target commodity object and a reference three-dimensional model thereof; and loading the three-dimensional model of the fitting object matched with the target commodity object from the three-dimensional model library of the user according to the type of the target commodity object.
Specifically, the user interacts with the goods customization page, and first, selects a target goods object and a reference three-dimensional model thereof that satisfy the customization requirements. Then, according to the type of the target commodity object, which fitting object needs to be fitted can be determined, and the three-dimensional model of the fitting object matched with the target commodity object is loaded from the three-dimensional model library of the user. It should be noted that the reference three-dimensional model or the three-dimensional model library of the user may be maintained locally in the terminal device or may be maintained in the server. Under the condition that the reference three-dimensional model or the three-dimensional model library of the user is not maintained locally by the terminal equipment, the server can be requested to send the reference three-dimensional model or the three-dimensional model of the fitting object, or the server is requested to reconstruct the reference three-dimensional model or the three-dimensional model of the fitting object in real time in a three-dimensional mode.
The three-dimensional model library of the user includes, but is not limited to: a three-dimensional model of the feet, a three-dimensional model of the hands, a three-dimensional model of the head, a three-dimensional model of the upper body or a three-dimensional model of the lower body.
As an example, when loading the three-dimensional model of the fitting object, a corresponding model loading page may be displayed according to the type of the target commodity object, and the three-dimensional model of the fitting object may be obtained in response to a loading trigger operation on the model loading page.
In this embodiment, after obtaining the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object, a three-dimensional model fusion operation is performed to fuse the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object, so as to obtain a fused three-dimensional model. And fusing the three-dimensional model to represent a first relative position relation between the three-dimensional model of the fitting object and the reference three-dimensional model in the fitting state.
Specifically, the three-dimensional model fusion operation may be understood as converting vertex positions included in each three-dimensional model to vertex positions in the same coordinate system, and composing a fused three-dimensional model based on the vertices of the plurality of three-dimensional models after coordinate conversion. That is, the fused three-dimensional model as a whole is essentially composed of vertices of a plurality of three-dimensional models that are uniformly transformed to the same coordinate system. Based on this, the relative positional relationship between the three-dimensional model of the fitting object and the reference three-dimensional model in the fitting state can be represented based on the relative positional relationship between the vertices of the three-dimensional model belonging to the fitting object in the fused three-dimensional model and the corresponding vertices of the reference three-dimensional model in the fused three-dimensional model. For convenience of understanding, the relative positional relationship between the three-dimensional model of the fitting object and the reference three-dimensional model in the fitting state is referred to as a first relative positional relationship.
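As a sketch of this coordinate unification (Python/NumPy; the rigid rotation R and translation t for the commodity model are assumptions, since the text only requires that all vertices end up in one coordinate system):

    import numpy as np

    def to_common_frame(vertices, R, t):
        # Rigid transform x' = R @ x + t applied to every vertex row.
        return vertices @ R.T + t

    def fuse(foot_vertices, shoe_vertices, R_shoe, t_shoe):
        # The fused model is simply both vertex sets expressed in one
        # shared frame; the first relative positional relationship is
        # then just the geometry between the two sets.
        return foot_vertices, to_common_frame(shoe_vertices, R_shoe, t_shoe)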
Further optionally, in order to better simulate the real wearing state, target fitting parameters may be predefined; under their control, the three-dimensional model of the fitting object and the three-dimensional model of the commodity object are fused so as to present a more realistic wearing state. The target fitting parameters differ across wearing scenarios. Taking virtual shoe fitting as an example, the target fitting parameters may include: the distance between the heel vertex of the shoe model and the heel vertex of the foot model is 1-3 mm; the foot model is centered within the shoe model; the sole of the foot model rests against the bottom of the shoe model; and the shoe model wraps around the foot model.
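These shoe-fitting constraints can be gathered into a small configuration object; the sketch below is illustrative, with field names of our choosing and defaults that merely mirror the parameters listed above:

    from dataclasses import dataclass

    @dataclass
    class ShoeFittingParams:
        # Target fitting parameters for the virtual shoe-fitting example.
        heel_gap_mm_min: float = 1.0      # heel clearance lower bound (1-3 mm)
        heel_gap_mm_max: float = 3.0      # heel clearance upper bound
        center_foot_in_shoe: bool = True  # foot model centered in the shoe model
        sole_contact: bool = True         # foot sole rests on the shoe bottom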
Based on the above, an optional implementation manner of step 301 is: responding to the fitting request, and acquiring a three-dimensional model of the fitting object, a reference three-dimensional model of the target commodity object and target fitting parameters of the fitting object aiming at the target commodity object; determining second relative position relations between at least three reference vertexes on the three-dimensional model of the fitting object and corresponding reference vertexes on the reference three-dimensional model according to the target fitting parameters; and at least partially placing the three-dimensional model of the fitting object inside the reference three-dimensional model according to the second relative position relation so as to obtain a fused three-dimensional model.
In practical application, the target fitting parameters can be set according to experience. Further optionally, the target fitting parameters of the fitting object for the target commodity object may be obtained according to the attribute information of the fitting object, the fitting preference information of the user to which the fitting object belongs, and/or the reference fitting parameters corresponding to the target commodity object.
The attribute information of the fitting object describes the characteristics of the fitting object, and the fitting objects of different users have different attribute information. Taking the foot as an example: on some feet the big toe is longer than the other toes, on some the second toe is the longest, and on some the front three toes are nearly the same length. Feet also differ in width and length, in toe circumference, and in leg circumference.
The fitting preference information reflects the wearing preferences of the user to whom the fitting object belongs: for shoes, for example, a preference for round-toe, pointed-toe or square-toe styles; for clothing, a preference for a fitted or a loose cut.
The reference fitting parameters corresponding to the target commodity object refer to fitting parameters with universality set according to a large amount of test data.
When the target fitting parameters are determined, the reference fitting parameters can be finely adjusted based on the attribute information of the fitting object and the fitting preference information of the user to which the fitting object belongs, so that the target fitting parameters are obtained.
In this embodiment, the target fitting parameters are analyzed, and vertices defining the target fitting parameters are determined as reference vertices. Taking the foot as an example, the reference vertex may be a heel vertex, several vertices on the sole of the foot, vertices on the toes, and so on.
When the fitting object is a foot and the target commodity object is a shoe, the second relative positional relationship between the reference vertices on the three-dimensional model of the fitting object and the corresponding reference vertices on the reference three-dimensional model is determined from the target fitting parameters and includes at least one of the following:
mode 1: and determining the fitting distance between a first heel vertex on the three-dimensional model of the foot and a second heel vertex on the reference three-dimensional model of the shoe as a second relative position relation according to the fitting distance between the shoe and the heel.
In the three-dimensional reconstruction, for each vertex included in the three-dimensional model of the foot, the vertex type may be labeled, the vertex types including, for example: heel apex, plantar apex, or toe apex. One vertex on the heel is selected from a plurality of vertexes included in the three-dimensional model of the foot based on the vertex type to serve as a first heel vertex, and one heel vertex having the same position distribution as the first heel vertex is selected from a plurality of heel vertexes on the reference three-dimensional model of the shoe to serve as a corresponding second heel vertex according to the position distribution of the first heel vertex on the heel. And when the three-dimensional models are fused, controlling the distance between the first heel vertex and the second heel vertex to be the fitting distance under the same coordinate system.
Mode 2: and determining that the vertex of the first sole on the three-dimensional model of the foot is superposed with the vertex of the second sole on the reference three-dimensional model of the shoe as a second relative position relation according to the joint relation between the sole and the shoe bottom.
Several first sole vertices on the sole of the foot are selected from the vertices of the three-dimensional model of the foot based on the vertex types. Second sole vertices with the same position distribution as the first sole vertices are then selected from the vertices of the reference three-dimensional model of the shoe according to the position distribution of the first sole vertices on the sole. When the models are fused, within the same coordinate system, each pair of first and second sole vertices is controlled to have the same or nearly the same position, so that the sole of the foot rests against the shoe bottom.
Mode 3: according to the alignment relation between the center of the sole and the center of the sole, the alignment of a first central line vertex positioned on the central line of the sole on the three-dimensional model of the foot and a second central line vertex positioned on the central line of the sole on the reference three-dimensional model of the shoe in the foot length direction is determined as a second relative position relation.
A vertex on a center line of a sole of a foot among a plurality of vertices included in a three-dimensional model of a foot is selected as a first center line vertex based on a vertex type and a vertex position, and one vertex having the same positional distribution as the first center line vertex is selected as a corresponding second center line vertex from the plurality of vertices on a reference three-dimensional model of a shoe. And controlling the alignment of the vertex of the first central line and the vertex of the second central line in the foot length direction under the same coordinate system when the three-dimensional models are fused.
In this embodiment, the position coordinates of each vertex included in the three-dimensional model of the fitting object and the position coordinates of each vertex included in the reference three-dimensional model are uniformly transformed to the same coordinate system, and the three-dimensional model of the fitting object and the reference three-dimensional model are controlled to maintain the second relative position relationship, so that the operation of placing at least part of the three-dimensional model of the fitting object inside the reference three-dimensional model is completed, and the fused three-dimensional model is obtained.
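As an illustration of how such a second relative positional relationship can be enforced, the sketch below translates the shoe model so that the heel clearance of mode 1 holds; the axis convention, millimetre units and index arguments are assumptions (the indices would come from the vertex-type labels described above):

    import numpy as np

    def place_with_heel_gap(foot_v, shoe_v, foot_heel_idx, shoe_heel_idx,
                            gap_mm=2.0, length_axis=0):
        # Translate all shoe vertices so the second heel vertex lies
        # gap_mm behind the first heel vertex along the foot-length
        # axis; assumes the models are expressed in millimetres, and
        # the sign convention here is illustrative.
        target = foot_v[foot_heel_idx].copy()
        target[length_axis] -= gap_mm
        shift = target - shoe_v[shoe_heel_idx]
        return shoe_v + shift  # shoe vertices placed in the fused frame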
Specifically, the three-dimensional model of the fitting object in the fused three-dimensional model and the reference three-dimensional model maintain the first relative positional relationship, and in this fused state, the fitting degree information calculation operation is performed. The fitting degree information reflects the degree of fitting, and first, a plurality of target vertices involved in the calculation of the fitting degree information are selected from a plurality of vertices included in the three-dimensional model of the fitting object. For example, each vertex on the three-dimensional model of the try-on object is taken as a target vertex. Further, in order to reduce the data processing amount and also to consider the accuracy of calculation of the fitting degree information, a part of vertices may be selected as target vertices from the three-dimensional model of the fitting object. For example, based on the key part information of the fitting target, a vertex corresponding to the key part information is selected as a target vertex from the three-dimensional model of the fitting target. Key parts include, for example, but are not limited to: toe, heel, arch, instep, inner instep, outer instep, sole, etc.
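A minimal sketch of this key-part vertex selection, assuming per-vertex part labels are available from the labelled reconstruction mentioned earlier (the label strings are illustrative):

    KEY_PARTS = {"toe", "heel", "arch", "instep", "sole"}

    def select_target_vertices(part_labels, wanted=KEY_PARTS):
        # part_labels: one part tag per vertex of the fitting-object mesh.
        # Returns the indices of the vertices used as adaptation targets.
        return [i for i, label in enumerate(part_labels) if label in wanted]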
After determining a plurality of target vertices on the three-dimensional model of the fitting object involved in the calculation of the suitability information, for each target vertex, the suitability information of the target vertex may be determined according to the distance information between the target vertex and the corresponding vertex on the reference three-dimensional model. Further optionally, in order to better measure the adaptation degree information, distance information from the target vertex to a region where a corresponding vertex on the reference three-dimensional model is located may be further used as the adaptation degree information of the target vertex. Then, calculating a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and corresponding regions on the reference three-dimensional model as the fitting degree information of the plurality of target vertices, based on the first relative positional relationship, includes: aiming at each target vertex on the three-dimensional model of the fitting object, acquiring a first vertex which is closest to the target vertex on the reference three-dimensional model according to the first relative position relation; taking a plurality of triangular patches taking the first vertex as a connecting point as corresponding areas of the target vertex on the reference three-dimensional model; and calculating a plurality of distances from the target vertex to the plurality of triangular patches, and generating the adaptation degree information of the target vertex according to the plurality of distances.
The distance from a target vertex to a triangular patch may be, for example but without limitation: the distance from the target vertex to the center point of the patch, the perpendicular distance from the target vertex to the patch, or the maximum, minimum, or average of the distances from the target vertex to the three vertices of the patch. In practice, the maximum, minimum, or average of the distances from the target vertex to its several triangular patches is taken as the final distance information, which serves as the adaptation degree information of that target vertex. A sketch of this computation follows.
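The following Python sketch puts the above steps together under stated assumptions: both meshes already sit in the fused coordinate system, and of the distance variants listed, the distance to each patch's center point is used, averaged over the incident patches.

```python
import numpy as np

def adaptation_degree(target_vertex, ref_vertices, ref_faces):
    """target_vertex: (3,) point on the try-on object's mesh.
    ref_vertices: (N, 3) vertices of the reference mesh.
    ref_faces: (M, 3) vertex indices of the reference mesh's triangles."""
    # 1. Find the first vertex: the nearest vertex on the reference mesh.
    nearest = np.argmin(np.linalg.norm(ref_vertices - target_vertex, axis=1))
    # 2. Triangular patches that use the nearest vertex as a connecting point.
    incident = ref_faces[np.any(ref_faces == nearest, axis=1)]
    # 3. Distance from the target vertex to each patch's center point.
    centers = ref_vertices[incident].mean(axis=1)            # (K, 3)
    dists = np.linalg.norm(centers - target_vertex, axis=1)
    # 4. Aggregate the per-patch distances; the average is one of the
    #    max / min / mean options mentioned above.
    return dists.mean()
```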
In practical applications, the adaptation degree range that each target vertex must fall within to satisfy the adaptation degree requirement can be set flexibly. If the adaptation degree information of a target vertex falls within its range, that vertex satisfies the requirement; otherwise it does not. After determining whether each target vertex satisfies its own requirement, whether the reference three-dimensional model as a whole satisfies the adaptation degree requirement is determined from these per-vertex results. For example, the reference three-dimensional model may be considered to satisfy the requirement when all target vertices satisfy their respective requirements; or when the ratio of satisfying target vertices to all target vertices exceeds a flexibly set specified ratio; or when the number of satisfying target vertices exceeds a flexibly set specified number. No limitation is imposed on this. The sketch below illustrates the three decision rules.
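A minimal sketch of the three decision rules, with the per-vertex ranges, the specified ratio, and the specified number as flexibly set inputs; the default values are illustrative only.

```python
def meets_requirement(degrees, ranges, rule="ratio", ratio=0.9, count=10):
    """degrees: per-vertex adaptation degrees.
    ranges: per-vertex (low, high) adaptation degree ranges."""
    ok = [lo <= d <= hi for d, (lo, hi) in zip(degrees, ranges)]
    if rule == "all":      # every target vertex satisfies its requirement
        return all(ok)
    if rule == "ratio":    # satisfying vertices / all vertices > specified ratio
        return sum(ok) / len(ok) > ratio
    if rule == "count":    # number of satisfying vertices > specified number
        return sum(ok) > count
    raise ValueError(f"unknown rule: {rule}")
```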
Further optionally, manual intervention can be introduced to confirm whether the reference three-dimensional model satisfies the adaptation degree requirement, which improves user satisfaction. To let the user judge this visually, any one of the three-dimensional model of the fitting object, the reference three-dimensional model, and the fused three-dimensional model can be displayed, with the adaptation degree information of the target vertices visually marked on it; adaptation degree information standing in different relationships to the reference adaptation degree range is given different visual marking states, so that the user can confirm whether the reference three-dimensional model satisfies the requirement.
Specifically, when the adaptation degree information of the target vertices is visually marked on any of the three-dimensional models, different adaptation degree information is marked with different visual marking states. For example, vertices that satisfy the adaptation degree requirement are marked green, and vertices that do not are marked red.
The reference adaptation degree range is the numerical range that defines whether the adaptation degree requirement is satisfied: adaptation degree information within the range satisfies the requirement, and adaptation degree information outside it does not.
Optionally, to reflect the distribution of the adaptation degree information over the three-dimensional model of the fitting object more vividly and intuitively, the chosen three-dimensional model can be rendered according to the adaptation degree information of the target vertices to obtain an adaptation degree thermodynamic diagram, in which different colors represent adaptation degree information standing in different relationships to the reference adaptation degree range. There may be several reference adaptation degree ranges; for example, different ranges may be set for different parts of the fitting object. Taking the foot as an example, the heel region may correspond to a first reference range, e.g. 1-2 cm, the ball region to a second, e.g. 0.5-1 cm, the ankle region to a third, e.g. 0-1 cm, and so on. A first color value marks adaptation degree information within the reference range, a second color value marks information above the upper limit of the range, and a third color value marks information below the lower limit. The user can thus tell from the first color which positions fit properly, from the second which positions are too loose, and from the third which positions are too tight. A sketch of this color mapping follows.
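A minimal sketch of this three-color marking, with the per-region reference ranges taken from the foot example above and arbitrary RGB values standing in for the three color values.

```python
# Per-region reference adaptation degree ranges, in centimeters as above.
REFERENCE_RANGES = {"heel": (1.0, 2.0), "ball": (0.5, 1.0), "ankle": (0.0, 1.0)}
# Arbitrary RGB choices for the first, second, and third color values.
FIT_COLOR, LOOSE_COLOR, TIGHT_COLOR = (0, 255, 0), (0, 0, 255), (255, 0, 0)

def vertex_color(region, degree):
    low, high = REFERENCE_RANGES[region]
    if degree > high:
        return LOOSE_COLOR   # above the upper limit: too loose
    if degree < low:
        return TIGHT_COLOR   # below the lower limit: too tight
    return FIT_COLOR         # within the reference range: a proper fit
```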
After the three-dimensional model carrying the visual marks is displayed, the user independently confirms from its marking state whether the reference three-dimensional model satisfies the adaptation degree requirement. Taking the adaptation degree thermodynamic diagram as an example: if the user sees many regions marked in the color that signals an unsatisfied requirement (for example, red), the user can conclude that the fit between the reference three-dimensional model and the three-dimensional model of the fitting object is poor, i.e. the adaptation degree requirement is not satisfied. If such regions are few, the user can conclude that the fit is good, i.e. the requirement is satisfied. If the number of such regions is moderate, the fit is middling, and the user may conclude either way as needed.
In this embodiment, when the adaptation degree information of the target vertices shows that the reference three-dimensional model does not satisfy the adaptation degree requirement, the size parameter and/or the shape parameter of the reference three-dimensional model is adjusted and the adaptation degree information is obtained anew, until a target three-dimensional model satisfying the requirement is obtained.
It should be noted that after each adjustment of the size parameter and/or the shape parameter, the adjusted reference three-dimensional model is taken as the new reference three-dimensional model and the adaptation degree information of the target vertices is obtained again; this repeats until the adaptation degree information shows that the reference three-dimensional model satisfies the adaptation degree requirement, at which point it is taken as the target three-dimensional model. The loop is sketched below.
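A minimal sketch of this loop; compute_degrees, meets_requirement, and adjust are caller-supplied stand-ins for the operations described above, not functions named in the embodiment, and the round limit is an added safeguard.

```python
def fit_reference_model(compute_degrees, meets_requirement, adjust,
                        reference_mesh, max_rounds=20):
    """compute_degrees, meets_requirement, and adjust are supplied by the
    caller and stand for the operations described above."""
    for _ in range(max_rounds):
        degrees = compute_degrees(reference_mesh)
        if meets_requirement(degrees):
            return reference_mesh    # this is the target three-dimensional model
        # Adjust the size and/or shape parameters, then recompute.
        reference_mesh = adjust(reference_mesh, degrees)
    return None                      # give up after max_rounds adjustments
```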
The size parameters of the reference three-dimensional model include, for example but without limitation: the length, width, and height of the whole model, or of individual parts of it. Taking a shoe as an example, the size parameters include the shoe length, the shoe width, the toe cap length or width, the instep height, and so on.
The shape parameters of the reference three-dimensional model define its appearance characteristics. Taking a shoe as an example, these include the heel height, the toe cap width, the toe cap length, the instep height, and so on.
In this embodiment, the size parameter and/or the shape parameter of the reference three-dimensional model may be automatically adjusted, or the size parameter and/or the shape parameter of the reference three-dimensional model may be adjusted in response to an adjustment operation for the reference three-dimensional model triggered by a user, which is not limited herein.
Further optionally, to make it easy for the user to initiate an adjustment operation, an adjustment control may be provided through which the user adjusts the reference three-dimensional model. Specifically, an adjustment control, which may be but is not limited to a slider bar, may be displayed in the associated region of any of the three-dimensional models described above. In response to each sliding operation on the slider bar, the slide distance and slide direction are obtained, the adjustment amplitude and adjustment direction are determined from them respectively, and the size parameter and/or the shape parameter of the reference three-dimensional model are adjusted accordingly. That is, the slide distance determines the adjustment amplitude and the slide direction determines the adjustment direction, which may be toward increasing or decreasing the current parameter value; this is not limited. Any area within the display area of the three-dimensional model may serve as the associated region in which the slider bar is displayed, to make the adjustment operation convenient.
In an alternative embodiment, the slide distance is proportional to the adjustment amplitude: the larger the slide distance, the larger the adjustment to the size parameter and/or the shape parameter; the smaller the distance, the smaller the adjustment. Correspondingly, taking a left-to-right slider bar as an example, sliding left decreases the size parameter and/or the shape parameter, i.e. the adjustment direction is the decreasing direction; sliding right increases them, i.e. the adjustment direction is the increasing direction.
In practice, the size parameter and the shape parameter of the reference three-dimensional model can be adjusted in linkage by a single slider bar. Since there may be a need to adjust only the size parameter or only the shape parameter, the slider bar may also be split into a first slider bar for adjusting the size parameter and a second slider bar for adjusting the shape parameter, which the user operates separately. The sketch below illustrates the mapping.
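A minimal sketch of the slider-to-parameter mapping: the slide distance sets the adjustment amplitude and the slide direction sets the adjustment direction. The pixel-to-parameter scale factor and the example parameters are illustrative assumptions.

```python
def apply_slider(current_value, slide_distance, slide_right, scale=0.01):
    """Return the parameter value after one slide operation.
    slide_distance: pixels moved; slide_right: True for left-to-right."""
    amplitude = slide_distance * scale   # larger slide, larger adjustment
    return current_value + amplitude if slide_right else current_value - amplitude

# First slider bar drives a size parameter, second slider bar a shape parameter.
shoe_length = apply_slider(26.0, slide_distance=50, slide_right=True)    # grows
instep_height = apply_slider(5.0, slide_distance=30, slide_right=False)  # shrinks
```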
In this embodiment, if the target three-dimensional model is created on the terminal device, the terminal device may send it to the server, and the server may obtain information about the target commodity object whose size and shape both match the fitting object and return that information to the terminal device. Such information includes, for example but without limitation: the material, style, production schedule, logistics and distribution schedule, production date, manufacturer, and so on of the target commodity object. Further optionally, the terminal device may present this information to the user, who decides whether to have the target commodity object custom-made; in response to the user confirming the customization, the terminal device sends a customization instruction to the server. The server can then send the target three-dimensional model to a customization platform, which manufactures a target commodity object whose size and shape match the fitting object and delivers it to the user through logistics distribution.
According to the above technical solution, a fused three-dimensional model simulating the virtual fitting state is first established from the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object; in this virtual fitting state, the adaptation degree information of a plurality of target vertices on the three-dimensional model of the fitting object, which reflects the fit, is calculated; and the size parameter and/or the shape parameter of the reference three-dimensional model is adjusted based on this information until a target three-dimensional model satisfying the adaptation degree requirement is obtained. A real fitting scene is thus simulated with virtual fitting technology, and the reference three-dimensional model of the target commodity object is adjusted dynamically until it matches the size and shape of the three-dimensional model of the fitting object. This comes closer to a real fitting scene and supports better personalized customization for customers; it effectively reduces returns and exchanges caused by ill-fitting commodity objects, lowers the cost of online shopping, and improves both its efficiency and the user's shopping experience.
The embodiment of the application also provides an information processing method based on virtual fitting, and the method can be applied to a commodity purchasing scene. Referring to fig. 4, the method may include the steps of:
401. In response to a virtual fitting request for a target commodity object, obtain the three-dimensional model of the target commodity object to be tried on and the three-dimensional model of the fitting object.
402. Fuse the three-dimensional model of the fitting object with the three-dimensional model of the target commodity object to obtain a fused three-dimensional model, which represents the first relative position relationship between the two models in the fitting state.
403. According to the first relative position relationship, obtain a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and the corresponding vertices or regions on the three-dimensional model of the target commodity object, as the adaptation degree information of the target vertices.
404. Render the three-dimensional model of the fitting object according to the adaptation degree information of the target vertices to obtain a thermodynamic diagram reflecting the wearing fit at the target vertices of the fitting object.
From the thermodynamic diagram, the user can determine whether the fitting object can suitably wear the target commodity object, which parts of the reference three-dimensional model of the target commodity object are unsuitable, and in what way (e.g. too loose or too tight). The user may accordingly issue an adjustment operation, based on which the size parameter and/or the shape parameter of the reference three-dimensional model of the target commodity object is adjusted until a target three-dimensional model satisfying the adaptation degree requirement is obtained.
For the detailed processes of steps 401 to 404, reference may be made to the foregoing embodiments, which are not described herein again.
In this embodiment, after the adaptation degree information of the target vertices of the three-dimensional model of the fitting object is calculated, the three-dimensional model of the fitting object is rendered into a thermodynamic diagram reflecting the fit at those target vertices. From the thermodynamic diagram the user can visually check whether the fitting object can suitably wear the target commodity object and, in a shopping scene, decide whether to buy it or to change its size.
In some scenes, the user is allowed to adjust the size parameter or the shape parameter of the three-dimensional model of the target commodity object so as to simulate the fitting scene more realistically. For example, in a real shoe-fitting scene, a user changes the size or shape of the shoe interior by loosening or tightening the laces to make the shoe as comfortable as possible.
Based on this, as shown in fig. 4, step 404 is followed by step 405: if the thermodynamic diagram shows that the fitting object cannot suitably wear the target commodity object, adjust the size parameter and/or the shape parameter of the three-dimensional model of the target commodity object, take the adjusted model as the new three-dimensional model of the target commodity object, and return to step 402 and the subsequent steps, until the thermodynamic diagram shows a suitable fit or the number of adjustments reaches the maximum. This flow is sketched below.
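A compact sketch of steps 402 through 405: try_on stands in for steps 402-403, render_heatmap for step 404, and looks_suitable for the user's judgment from the diagram; none of these names come from the embodiment.

```python
def fitting_session(try_on, render_heatmap, looks_suitable, adjust,
                    product_mesh, max_adjustments=10):
    for _ in range(max_adjustments):
        degrees = try_on(product_mesh)        # fuse models, compute degrees
        render_heatmap(degrees)               # show the fit thermodynamic diagram
        if looks_suitable(degrees):           # user judges the fit from the diagram
            return product_mesh
        product_mesh = adjust(product_mesh)   # tweak size/shape, then retry
    return None   # maximum adjustment count reached; suggest another product
```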
Taking fig. 5 as an example, a slider bar is displayed in the associated region of the three-dimensional model of the shoe cavity; sliding it simulates how the tightness of the shoe changes as the laces are tied, and fusing the foot 3D Mesh model (i.e. its three-dimensional model) with shoe-cavity 3D Mesh models of different shapes yields thermodynamic diagrams of different fits. The user sees directly from the thermodynamic diagram whether the slider bar should be slid again to change the shape of the shoe-cavity 3D Mesh model, until a satisfactory diagram is obtained; after many adjustments, the user may instead be advised to try on a different pair of shoes. In fig. 5, the average of the distances from a vertex on the foot 3D Mesh model to the triangular patches connected to its corresponding vertex on the shoe-cavity 3D Mesh model is taken as that vertex's adaptation degree information, and the thermodynamic diagram is rendered on the foot 3D Mesh model from this information.
According to the above technical solution, a fused three-dimensional model simulating the virtual fitting state is first established from the three-dimensional model of the fitting object and the three-dimensional model of the target commodity object; in this virtual fitting state, the adaptation degree information of a plurality of target vertices on the three-dimensional model of the fitting object is obtained; the three-dimensional model of the fitting object is rendered from this information into a thermodynamic diagram reflecting the fit at those vertices; and finally the thermodynamic diagram is used to judge whether the fitting object can suitably wear the target commodity object. Simulating the real fitting scene with virtual fitting technology in this way comes closer to the real scene, effectively reduces returns and exchanges caused by ill-fitting commodity objects, lowers the cost of online shopping, and improves both its efficiency and the user's shopping experience.
Fig. 6a is a flowchart of an information interaction method based on virtual fitting according to an embodiment of the present application. Referring to fig. 6a, the method may comprise the steps of:
601. In response to a fitting request, fuse the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model, which represents the first relative position relationship between the two models in the fitting state.
602. According to the first relative position relationship, obtain a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and the corresponding vertices or regions on the reference three-dimensional model, as the adaptation degree information of the target vertices.
603. Display any one of the three-dimensional model of the fitting object, the reference three-dimensional model, and the fused three-dimensional model, and visually mark the adaptation degree information of the target vertices on it.
604. In response to an adjustment operation on any of the models, adjust the size parameter and/or the shape parameter of the reference three-dimensional model, and synchronously update the adaptation degree information of the target vertices and the visual marks on the displayed three-dimensional model according to the adjusted reference three-dimensional model.
In this embodiment, when the user has a fitting need, the user may initiate a fitting request through the terminal device, requesting to try a commodity object worn on a fitting object. The fitting request may include, for example but without limitation: an identifier of the fitting object, an identifier of the commodity object, and several images of the fitting object. In some application scenes, when the user triggers a fitting request, the terminal device displays a page on which the user enters description information of the fitting object and of the commodity object and uploads several images of the fitting object; the terminal device parses this input to determine the identifiers of the fitting object and the commodity object, and retrieves their three-dimensional models from a set of pre-stored three-dimensional models according to those identifiers. Of course, reconstructing the three-dimensional model of the fitting object or the reference three-dimensional model of the target commodity object in real time is also supported, without limitation; for example, the three-dimensional model of the fitting object may be reconstructed in real time from the images uploaded by the user.
After the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object are obtained, the three-dimensional model fusion operation and the determination of the adaptation degree information of the target vertices are carried out. For more on these two operations, refer to the related descriptions of the foregoing embodiments.
In this embodiment, after the adaptation degree information of the target vertices on the three-dimensional model of the fitting object is obtained, any one of the three-dimensional model of the fitting object, the reference three-dimensional model, and the fused three-dimensional model may be displayed, with the adaptation degree information of the target vertices visually marked on it.
That is, one or more of the three-dimensional model of the fitting object, the reference three-dimensional model, and the fused three-dimensional model may be displayed on the terminal device, and the adaptation degree information of the target vertices may be visually marked on whichever is displayed.
For example, a thermodynamic diagram reflecting the information on the degree of fitting of a plurality of target vertices is rendered on the three-dimensional model of the fitting object, and/or a thermodynamic diagram reflecting the information on the degree of fitting of a plurality of target vertices is rendered on the reference three-dimensional model, and/or a thermodynamic diagram reflecting the information on the degree of fitting of a plurality of target vertices is rendered on the fused three-dimensional model. For more on the operation of performing the visual marking, reference may be made to the related description of the previous embodiments.
After the three-dimensional model carrying the visual marks is displayed, the user independently confirms from its marking state whether the reference three-dimensional model satisfies the adaptation degree requirement. Taking the adaptation degree thermodynamic diagram as an example: if the user sees many regions marked in the color that signals an unsatisfied requirement (for example, red), the user can conclude that the fit between the reference three-dimensional model and the three-dimensional model of the fitting object is poor, i.e. the adaptation degree requirement is not satisfied. If such regions are few, the user can conclude that the fit is good, i.e. the requirement is satisfied. If the number of such regions is moderate, the fit is middling, and the user may conclude either way as needed.
In this embodiment, when the adaptation degree information of the target vertices shows that the reference three-dimensional model does not satisfy the adaptation degree requirement, the size parameter and/or the shape parameter of the reference three-dimensional model is adjusted, and the adaptation degree information of the target vertices and the visual marks on the displayed three-dimensional model are updated synchronously according to the adjusted model. For more on the adjustment operation of the reference three-dimensional model, refer to the related description in the foregoing embodiments.
In this embodiment, after each adjustment of the size parameter and/or the shape parameter, the adjusted reference three-dimensional model is taken as the new reference three-dimensional model and the adaptation degree information of the target vertices is obtained again; this repeats until the adaptation degree information shows that the reference three-dimensional model satisfies the adaptation degree requirement, at which point it is taken as the target three-dimensional model.
It is worth noting that each time the size parameter and/or the shape parameter of the reference three-dimensional model is adjusted, the adaptation degree information of the target vertices and the visual marks reflecting it are updated synchronously, so that the user can see directly how the adjustment changes the fit, which raises the probability of arriving at a suitable three-dimensional model of the target commodity object.
For details of each step in the method shown in fig. 6a, reference may be made to relevant contents of the foregoing embodiments, and details are not described here.
According to the technical solution provided in this embodiment of the application, a real fitting scene is simulated with virtual fitting technology, and the reference three-dimensional model of the target commodity object is adjusted dynamically until it matches the size and shape of the three-dimensional model of the fitting object. This comes closer to a real fitting scene and supports better personalized customization for customers; it effectively reduces returns and exchanges caused by ill-fitting commodity objects, lowers the cost of online shopping, and improves both its efficiency and the user's shopping experience.
Fig. 6b is a flowchart of another information interaction method based on virtual fitting according to an embodiment of the present application. Referring to fig. 6b, the method may comprise the steps of:
701. In response to a fitting request, obtain the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object, and display both.
702. Fuse the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain the first relative position relationship between the two models in the fitting state.
703. According to the first relative position relationship, obtain a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and the corresponding vertices or regions on the reference three-dimensional model, as the adaptation degree information of the target vertices.
704. Visually mark the adaptation degree information of the target vertices on the three-dimensional model of the fitting object.
705. In response to an adjustment operation on the reference three-dimensional model, adjust its size parameter and/or shape parameter, and synchronously update the adaptation degree information of the target vertices and the visual marks on the three-dimensional model of the fitting object according to the adjusted reference three-dimensional model.
In this embodiment, when the user has a fitting need, the user may initiate a fitting request through the terminal device, requesting to try a commodity object worn on a fitting object. The fitting request may include, for example but without limitation: an identifier of the fitting object, an identifier of the commodity object, and several images of the fitting object. In some application scenes, when the user triggers a fitting request, the terminal device displays a page on which the user enters description information of the fitting object and of the commodity object and uploads several images of the fitting object; the terminal device parses this input to determine the identifiers of the fitting object and the commodity object, and retrieves their three-dimensional models from a set of pre-stored three-dimensional models according to those identifiers. Of course, reconstructing the three-dimensional model of the fitting object or the reference three-dimensional model of the target commodity object in real time is also supported, without limitation; for example, the three-dimensional model of the fitting object may be reconstructed in real time from the images uploaded by the user.
After the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object are obtained, both can be displayed on the terminal device for the user to view. Next, the three-dimensional model fusion operation and the determination of the adaptation degree information of the target vertices are performed; for more on these operations, refer to the related descriptions of the foregoing embodiments. It should be noted that the fusion process itself need not be visible: its goal is to determine the first relative position relationship between the three-dimensional model of the fitting object and the reference three-dimensional model in the fused state, from which the adaptation degree information of the target vertices is then obtained.
In this embodiment, after the adaptation degree information of the target vertices on the three-dimensional model of the fitting object is obtained, it may be visually marked on the displayed three-dimensional model of the fitting object, for example by rendering on it a thermodynamic diagram reflecting the adaptation degree information of the target vertices. For more on the visual marking operation, refer to the related descriptions of the foregoing embodiments.
After the three-dimensional model of the fitting object carrying the visual marks is displayed, the user independently confirms from its marking state whether the reference three-dimensional model satisfies the adaptation degree requirement. Taking the adaptation degree thermodynamic diagram as an example: if the user sees many regions marked in the color that signals an unsatisfied requirement (for example, red), the user can conclude that the fit between the reference three-dimensional model and the three-dimensional model of the fitting object is poor, i.e. the adaptation degree requirement is not satisfied. If such regions are few, the user can conclude that the fit is good, i.e. the requirement is satisfied. If the number of such regions is moderate, the fit is middling, and the user may conclude either way as needed.
In this embodiment, when the adaptation degree information of the target vertices shows that the reference three-dimensional model does not satisfy the adaptation degree requirement, the size parameter and/or the shape parameter of the reference three-dimensional model is adjusted, and the adaptation degree information of the target vertices and the visual marks on the three-dimensional model of the fitting object are updated synchronously according to the adjusted model. For more on the adjustment operation of the reference three-dimensional model, refer to the related description in the foregoing embodiments.
In this embodiment, after each adjustment of the size parameter and/or the shape parameter, the adjusted reference three-dimensional model is taken as the new reference three-dimensional model and the adaptation degree information of the target vertices is obtained again; this repeats until the adaptation degree information shows that the reference three-dimensional model satisfies the adaptation degree requirement, at which point it is taken as the target three-dimensional model.
It is worth noting that each time the size parameter and/or the shape parameter of the reference three-dimensional model is adjusted, the adaptation degree information of the target vertices and the visual marks reflecting it are updated synchronously, so that the user can see directly how the adjustment changes the fit, which raises the probability of arriving at a high-precision three-dimensional model of the target commodity object.
In this embodiment, after the fitting degree information of the plurality of target vertices of the three-dimensional model of the fitting object is calculated, the three-dimensional model of the fitting object is rendered to obtain a thermodynamic diagram reflecting the fitting degree of the plurality of target vertices of the fitting object. The user can visually check whether the fitting object is suitable for wearing the target commodity object based on the thermodynamic diagram, and decide whether to purchase the target commodity object.
This embodiment supports adjusting the size parameter or the shape parameter of the three-dimensional model of the target commodity object, so that the fitting scene is simulated more realistically. For example, in a real shoe-fitting scene, a user changes the size or shape of the shoe interior by loosening or tightening the laces to make the shoe as comfortable as possible.
In practice, an adjustment control such as a slider bar may be displayed in the associated region of the reference three-dimensional model; as the user operates the control to adjust the size parameter or the shape parameter of the three-dimensional model of the target commodity object, the adaptation degree information of the target vertices on the three-dimensional model of the fitting object is updated synchronously.
Taking fig. 5 as an example, the adjustment control is a slider bar; sliding it simulates how the tightness of the laces changes, and fusing the foot 3D Mesh model (also a three-dimensional model) with shoe-cavity 3D Mesh models of different shapes yields thermodynamic diagrams of different fits. The thermodynamic diagram is rendered on the foot 3D Mesh model, the slider bar is displayed in the associated region of the shoe-cavity 3D Mesh model, and as the user slides the bar to adjust the shoe-cavity 3D Mesh model, the thermodynamic diagram on the foot 3D Mesh model changes synchronously. The user sees directly from the diagram whether the bar should be slid again to change the shape of the shoe-cavity 3D Mesh model, until a satisfactory diagram is obtained; after many adjustments, the user may instead be advised to try on a different pair of shoes.
For details of each step in the method shown in fig. 6b, reference may be made to relevant contents of the foregoing embodiments, and details are not described here.
According to the technical solution provided in this embodiment of the application, a real fitting scene is simulated with virtual fitting technology, and the reference three-dimensional model of the target commodity object is adjusted dynamically until it matches the size and shape of the three-dimensional model of the fitting object. This comes closer to a real fitting scene and supports better personalized customization for customers; it effectively reduces returns and exchanges caused by ill-fitting commodity objects, lowers the cost of online shopping, and improves both its efficiency and the user's shopping experience.
The following embodiment will describe a three-dimensional reconstruction method for a fitting object provided in the embodiment of the present application. In the following embodiments, the fitting object is referred to as a target object.
Fig. 7a is a model structure diagram of a three-dimensional reconstruction network according to an embodiment of the present application. Referring to fig. 7a, the whole three-dimensional reconstruction network may include: a feature extraction network, a vector splicing network, a parameter regression network, and a skinning network. In practice, the target object may be any object that needs three-dimensional reconstruction: a body part such as a foot, hand, head, elbow, or leg; various animals and plants in nature; or a three-dimensional spatial scene such as a real house or a mountain, without limitation. When the target object is reconstructed, an image acquisition device first captures a video stream containing the target object, as shown at (1) in fig. 7a, and several consecutive frames containing the target object are fed in sequence into the three-dimensional reconstruction network. As shown at (2) and (3), the feature extraction network extracts a feature vector from each frame. As shown at (4) and (5), the vector splicing network concatenates the feature vectors of the frames in order of acquisition time, from earliest to latest, into a target stitching feature vector. Next, as shown at (6) and (7), the parameter regression network predicts from the target stitching feature vector a plurality of control parameters for model control, which may include pose control parameters and shape control parameters. Finally, as shown at (8) and (9), the skinning network applies skinning to the initial three-dimensional model of the target object, which is obtained from the three-dimensional model description information, and outputs the target three-dimensional model of the target object, completing the three-dimensional reconstruction task. A sketch of the pipeline follows.
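A minimal PyTorch sketch of the four-stage pipeline is given below; the layer sizes (128-dimensional per-frame features, 4 frames, 3 pose plus 10 shape parameters) follow the foot example later in this document, and the small convolutional backbone is an illustrative stand-in for the real feature extraction network.

```python
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    def __init__(self, frames=4, feat_dim=128, pose_dim=3, shape_dim=10):
        super().__init__()
        self.backbone = nn.Sequential(            # feature extraction network
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.regressor = nn.Sequential(           # parameter regression network
            nn.Linear(frames * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, pose_dim + shape_dim),
        )
        self.pose_dim = pose_dim

    def forward(self, frames):                    # frames: (T, 3, H, W)
        feats = [self.backbone(f.unsqueeze(0)) for f in frames]
        stitched = torch.cat(feats, dim=1)        # vector splicing network
        params = self.regressor(stitched)
        pose, shape = params[:, :self.pose_dim], params[:, self.pose_dim:]
        return pose, shape                        # fed to the skinning step
```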
In practice, the whole three-dimensional reconstruction network may be deployed on a terminal device or on a server, or partly on each, without limitation. Optionally, terminal devices include but are not limited to mobile phones, tablet computers, notebook computers, wearable devices, and vehicle-mounted devices. The server may be, for example but without limitation, a single server or a distributed cluster of servers.
It should be understood that the model structure of the three-dimensional reconstruction network in fig. 7a is only schematic. For example, the feature extraction network may itself provide the splicing function, so that no dedicated vector splicing network is needed; likewise, the parameter regression network may incorporate the skinning function, so that no dedicated skinning network is needed. Any neural network architecture with the above feature extraction, vector splicing, parameter regression, and skinning capabilities is suitable for the embodiments of the present application.
Fig. 7b is a flowchart of a three-dimensional reconstruction method according to an embodiment of the present application. Referring to fig. 7b, the method may include the steps of:
201. Acquire multiple frames of images containing the target object and the three-dimensional model description information corresponding to the target object.
202. Input the frames into the feature extraction network to extract a feature vector from each frame, and splice the feature vectors into a target stitching feature vector.
203. Input the target stitching feature vector into the parameter regression network and predict, according to the three-dimensional model description information, a plurality of control parameters for model control, including pose control parameters and shape control parameters.
204. Apply skinning to the initial three-dimensional model of the target object according to the pose control parameters and the shape control parameters to obtain the target three-dimensional model of the target object, the initial three-dimensional model being obtained from the three-dimensional model description information.
In this embodiment, the three-dimensional model description information corresponding to the target object is prepared in advance. When the target object is a body part, the description information may be determined based on SMPL (Skinned Multi-Person Linear Model), a skinned, vertex-based parametric three-dimensional model of the human body that can accurately represent different shapes (shape) and poses (pose).
The three-dimensional model description information specifies the number of vertices the three-dimensional model of the target object must contain, the position information of each vertex, and the number of parameters used for model control. An initial three-dimensional model of the target object can be constructed from the vertex positions. Taking the foot as the target object, a model built from 1600 vertices may serve as the initial three-dimensional model; 1600 is merely an example, and the number of vertices can be chosen flexibly according to the required model precision. The number of control parameters is likewise not limited and can be set according to the required precision and complexity of model control. For example, the control parameters may include pose control parameters, which control the pose of the three-dimensional model, and shape control parameters, which control its shape. The pose control parameters may comprise three attitude angles, i.e. roll, pitch, and yaw, through which the pose of the model is controlled. The shape control parameters differ from target object to target object, and a change in any one of them may change the shape of one or more parts of the object. For the foot, there may be, say, 10 shape parameters controlling toe size, foot slimness, longitudinal and transverse stretching, arch bending, and so on; for the head, 8 shape parameters controlling mouth size, nose bridge height, eye spacing, forehead width, and so on; for a house, 30 shape parameters controlling floor height, house size, outer wall structure, and so on. A sketch of such a parameter layout follows.
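A sketch of such a parameter layout for the foot example; the field names and per-parameter effects are illustrative labels, not identifiers from the embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class FootControlParams:
    roll: float = 0.0    # pose: rotation about the forward axis
    pitch: float = 0.0   # pose: rotation about the lateral axis
    yaw: float = 0.0     # pose: rotation about the vertical axis
    shape: list = field(default_factory=lambda: [0.0] * 10)
    # shape[0] might scale toe size, shape[1] foot slimness, shape[2]/[3]
    # longitudinal/transverse stretching, shape[4] arch bending, and so on.
```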
Because the initial three-dimensional model has low precision, it lacks realism and can hardly represent the target object in the real world faithfully. To improve the precision of the three-dimensional model, the target three-dimensional model of the target object is therefore reconstructed with the three-dimensional reconstruction network.
In this embodiment, to enhance the robustness of the model and introduce a certain smoothing effect, multiple frames of images containing the target object may be acquired and input into the three-dimensional reconstruction network for reconstruction. The number of frames is not limited; it may be 3, 4, 5, or the like. In practice, the target object may be recorded in advance and the video stream stored locally, with the frames taken from the stored stream when reconstruction is needed; or the video stream may be captured in real time and the frames taken from it directly, without limitation.
In practice, each frame of the multiple frames containing the target object can be fed directly into the feature extraction network of the three-dimensional reconstruction network. Specifically, each frame in turn serves as the current frame and is input into the feature extraction network, and the extracted feature vector of each frame can be stored. In this way, when several frames are used for reconstruction, only the current frame needs to pass through the feature extraction network, while the feature vectors of the earlier history frames are read directly from the corresponding storage space; this is not limiting, and the current frame and several preceding history frames may instead be fed into the feature extraction network together. Further optionally, since a captured frame contains not only the target object but also its surroundings, the frame may be cropped before feature extraction to improve accuracy: the image position of the target object in the current frame is detected, the local image containing the target object is cut out according to that position, and the local image is input into the feature extraction network to obtain the current frame's feature vector. The type and position of the target object in the image can be detected with an object detection algorithm.
Optionally, in order to locate the image position of the target object accurately, when detecting the image position of the target object in the current frame image, the current frame image may first be preprocessed, where the preprocessing includes at least one of image scaling and normalization; the preprocessed image is then input into a target detection network for target detection to obtain the image position of the target object in the preprocessed image.
For example, the feet are photographed continuously to obtain 4 frames of original images. The 4 original frames are scaled to a height of 160 pixels and a width of 90 pixels, the scaled frames are normalized by the Z-Score (standard score) method, and the 4 normalized frames are input into a real-time foot target detection network to detect the image position of the feet; 4 foot images are then cropped from the 4 original frames according to that position, each with a size of 128×128 pixels. The 128×128-pixel foot images can be input into the feature extraction network for feature extraction.
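By way of illustration only, the following Python sketch shows one way the preprocessing and cropping described above could be realized. OpenCV is assumed for image handling, and `detect_foot` is a hypothetical stand-in for the real-time foot target detection network, which the embodiment does not specify.

```python
# Illustrative sketch of the preprocessing pipeline described above.
import cv2
import numpy as np

def preprocess_and_crop(frame: np.ndarray, detect_foot) -> np.ndarray:
    # Scale the original frame to height 160 x width 90 pixels.
    small = cv2.resize(frame, (90, 160))  # cv2 takes (width, height)

    # Z-Score normalization: zero mean, unit variance.
    normalized = (small - small.mean()) / (small.std() + 1e-8)

    # Detect the foot in the preprocessed image; the detector is assumed
    # to return a bounding box (x, y, w, h) in the small image's coordinates.
    x, y, w, h = detect_foot(normalized)

    # Map the box back to original-image coordinates and crop the local image.
    sx, sy = frame.shape[1] / 90.0, frame.shape[0] / 160.0
    crop = frame[int(y * sy):int((y + h) * sy), int(x * sx):int((x + w) * sx)]

    # Resize the cropped local image to the 128 x 128 input expected
    # by the feature extraction network.
    return cv2.resize(crop, (128, 128))
```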
In this embodiment, the model structure of the feature extraction network is not limited, and any network having a feature extraction function may be used as the feature extraction network.
In practical applications, the feature extraction network can be used to extract features from each frame of the multi-frame image to obtain a feature vector per frame; after feature extraction is completed for all images, the feature vectors of the multiple images are spliced to obtain the target spliced feature vector. Referring to fig. 7c, feature extraction is performed with the feature extraction network on the cropped image 1, image 2, image 3 and image 4 to obtain a 128-dimensional feature vector for each; a vector splicing network then splices the 4 128-dimensional feature vectors into a 512-dimensional feature vector, which is the target spliced feature vector.
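A minimal sketch of the vector-splicing step, assuming PyTorch and the 4×128-dimensional case from fig. 7c:

```python
import torch

# One 128-dimensional feature vector per cropped image (random stand-ins here).
feature_vectors = [torch.randn(128) for _ in range(4)]

# Vector splicing: concatenate into the target spliced feature vector.
target_spliced = torch.cat(feature_vectors, dim=0)
assert target_spliced.shape == (512,)
```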
In practical applications, the feature extraction network can be used to extract the feature vector of each frame, and that feature vector is stored in a designated storage space. After feature extraction is completed for the current frame image (the frame with the latest acquisition time in the multi-frame image), its feature vector is spliced with the feature vectors of at least one historical frame read from the designated storage space. Thus, illustratively, the multi-frame image includes a current frame image and at least one historical image; inputting the multi-frame image into the feature extraction network to obtain the feature vectors of the multi-frame image includes: inputting the current frame image into the feature extraction network each time to obtain the feature vector of the current frame image. Splicing the feature vectors of the multi-frame image to obtain the target spliced feature vector includes: acquiring the feature vector of at least one historical frame from the designated storage space using a set sliding window; and splicing the feature vector of the current frame image with the feature vector of the at least one historical frame to obtain the target spliced feature vector. Note that the sliding window controls the number of historical frames acquired from the designated storage space; for example, in a scenario where 4 frames are used for three-dimensional reconstruction the sliding window length may be 3, and in a scenario using 5 frames it may be 4.
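The sliding-window storage scheme could be sketched as follows; the deque-based cache and the class name are illustrative assumptions, not part of the embodiment.

```python
from collections import deque
import torch

class FeatureCache:
    def __init__(self, window_len: int = 3):
        # The deque plays the role of the designated storage space;
        # its maxlen is the sliding window length (3 for the 4-frame case).
        self.history = deque(maxlen=window_len)

    def splice(self, current: torch.Tensor) -> torch.Tensor:
        # Concatenate the historical vectors (oldest first) with the
        # current frame's vector; early frames yield shorter vectors
        # until the window fills.
        vectors = list(self.history) + [current]
        self.history.append(current)  # store current for later frames
        return torch.cat(vectors, dim=0)
```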
In this embodiment, after the feature vectors of the multi-frame images are spliced to obtain the target spliced feature vector, the target spliced feature vector is input to the parameter regression network, and a plurality of control parameters for model control are predicted according to the three-dimensional model description information.
The present embodiment does not limit the model structure of the parameter regression network; any model that can be trained to predict control parameters can be used. Further optionally, the parameter regression network may be implemented as an MLP (Multi-Layer Perceptron) network capable of performing at least one MLP operation. An MLP network comprises an input layer, an output layer and one or more hidden layers, and is a feedforward artificial neural network model that maps a set of inputs onto a set of outputs. Thus, further optionally, inputting the target spliced feature vector into the parameter regression network and predicting a plurality of control parameters for model control according to the three-dimensional model description information includes: inputting the target spliced feature vector into the parameter regression network, and performing at least one multi-layer perceptron (MLP) operation on the target spliced feature vector according to the three-dimensional model description information to obtain a plurality of control parameters for model control.
Referring to fig. 7c, taking the case where the parameter regression network performs two MLP operations on the target spliced feature vector as an example: one MLP operation is performed on the 512-dimensional feature vector output by the vector splicing network to obtain a 1600-dimensional feature vector, and a second MLP operation is performed on the 1600-dimensional feature vector to obtain a 13-dimensional feature vector, where each element of the 13-dimensional vector is a control parameter, i.e., 13 control parameters are obtained. It should be noted that 13 is merely an example of the number of control parameters, which can be set flexibly according to the target object, the control complexity, and other requirements.
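A minimal PyTorch sketch of a parameter regression network with the two MLP operations and the 512→1600→13 dimensions from the example; the activation choice is an assumption.

```python
import torch.nn as nn

# Two MLP operations: 512 -> 1600 -> 13 control parameters.
param_regressor = nn.Sequential(
    nn.Linear(512, 1600),  # first MLP operation
    nn.ReLU(),             # assumed activation between operations
    nn.Linear(1600, 13),   # second MLP operation -> 13 control parameters
)
```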
After the parameter regression network outputs the attitude control parameters and the shape control parameters, skinning processing is performed on the initial three-dimensional model of the target object according to these parameters to obtain the target three-dimensional model of the target object. The initial three-dimensional model is obtained from the three-dimensional model description information and its precision is still to be improved; during the skinning process, the posture of the initial three-dimensional model is adjusted using the attitude control parameters and its shape is adjusted using the shape control parameters, yielding a target three-dimensional model of higher precision. It should be noted that, in addition to adjusting the pose and shape of the three-dimensional model, the skinning process may also associate the vertices of the three-dimensional model with the bones. Skinning itself is not elaborated in the embodiments of the present application.
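The embodiment leaves the skinning step open; for orientation only, the following sketch shows standard linear blend skinning, assuming per-vertex bone weights and per-bone 4×4 transforms derived from the predicted control parameters.

```python
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """vertices: (V, 3); weights: (V, B); bone_transforms: (B, 4, 4)."""
    # Homogeneous coordinates for the model vertices.
    homogeneous = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)
    # Blend each bone's transform by the vertex's weight, then apply it.
    blended = np.einsum("vb,bij->vij", weights, bone_transforms)  # (V, 4, 4)
    skinned = np.einsum("vij,vj->vi", blended, homogeneous)       # (V, 4)
    return skinned[:, :3]
```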
According to the above technical solution, an initial three-dimensional model of the target object is established using the three-dimensional model description information corresponding to the target object; three-dimensional reconstruction is performed using multiple images that include the target object; in the reconstruction process the feature vectors of the respective images are extracted and spliced; attitude control parameters and shape control parameters for model control are predicted from the spliced feature vector; and skinning processing is performed on the initial three-dimensional model according to the attitude and shape control parameters to obtain the target three-dimensional model of the target object. This three-dimensional reconstruction approach greatly improves the precision of the three-dimensional model; the higher the precision, the stronger its sense of reality and the more faithfully it can represent the target object in the real world, which effectively expands the application range of the three-dimensional model and improves its application effect. In particular, in a commodity shopping scenario, a commodity adapted to the target object can be purchased on the basis of the reconstructed three-dimensional model, providing conditions for alleviating the existing return-and-exchange problem.
In some optional embodiments of the present application, the feature extraction network may perform feature extraction in conjunction with image features and camera pose data for more accurate feature extraction. As an example, the feature extraction network may include a feature extraction module, a camera parameter fusion module, a feature stitching module, and a feature dimension reduction module. Then, inputting the multiple frames of images into a feature extraction network for feature extraction to obtain feature vectors of the multiple frames of images, including: inputting each frame of image in a plurality of frames of images into a feature extraction module in a feature extraction network for feature extraction to obtain an image feature map of the frame of image; inputting the camera attitude data when the frame image is collected into a camera parameter fusion module in a feature extraction network for feature extraction to obtain a camera pose feature map of the frame image; splicing the image feature map and the camera pose feature map of each frame of image by using a feature splicing module in a feature extraction network to obtain a spliced feature map of each frame of image; and performing dimensionality reduction on the spliced characteristic image of each frame of image by using a characteristic dimensionality reduction module in the characteristic extraction network to obtain a characteristic vector of each frame of image.
Specifically, the feature extraction module is used for extracting an image feature map of each frame of image. In addition, the model structure of the feature extraction module is not limited, and any feature extraction network capable of extracting image features can be used as the feature extraction module.
The camera parameter fusion module is a module for extracting the characteristics of the camera attitude data. In addition, the model structure of the camera parameter fusion module is not limited, and any network capable of extracting the features of the camera pose data can be used as the camera parameter fusion module.
Further optionally, in order to obtain a camera pose feature map of each frame of image with higher accuracy, the camera pose data obtained when the frame of image is acquired is input to a camera parameter fusion module in the feature extraction network for feature extraction, and the implementation manner of obtaining the camera pose feature map of the frame of image may be: inputting camera attitude data when the frame image is acquired into a camera parameter fusion module in a feature extraction network, wherein the camera attitude data comprises at least two attitude angles; performing trigonometric function processing according to the at least two attitude angles and the correlation between the at least two attitude angles to obtain a plurality of attitude characterization parameters; and processing various attitude characterization parameters by using a multi-layer perceptron MLP network in a camera parameter fusion module to obtain a camera attitude characteristic diagram of the frame of image.
Specifically, the camera attitude data may include at least two attitude angles of a yaw angle, a pitch angle, and a roll angle. As an example, the method performs trigonometric function processing according to at least two attitude angles and a correlation between the at least two attitude angles to obtain a plurality of attitude characterization parameters, including: carrying out numerical calculation on every two attitude angles in at least two attitude angles to obtain a plurality of fusion attitude angles, wherein each fusion attitude angle represents the correlation between the two corresponding attitude angles; and respectively carrying out trigonometric function processing on each attitude angle in the at least two attitude angles and each fusion attitude angle in the multiple fusion attitude angles to obtain multiple attitude characterization parameters.
In practical applications, numerical calculations such as addition, subtraction or multiplication are performed on pairs of the at least two attitude angles to obtain multiple fused attitude angles, thereby expanding the attitude information; each fused attitude angle characterizes the correlation between its two corresponding attitude angles. When trigonometric function processing is performed on each attitude angle and each fused attitude angle, cosine, sine, cotangent, or tangent processing may be performed, but the processing is not limited thereto.
Referring to fig. 7d, the camera pose data may include a yaw angle α, a pitch angle β, and a roll angle γ. Let θ be any one of the yaw angle α, the pitch angle β and the roll angle γ, and let ψ be any attitude angle other than θ. Adding two different attitude angles yields a fused attitude angle θ+ψ, and subtracting two different attitude angles yields a fused attitude angle θ−ψ, giving 6 fused attitude angles in total: α+β, α+γ, β+γ, α−β, α−γ and β−γ. Processing each of the 3 attitude angles and the 6 fused attitude angles with trigonometric functions τ(·), such as the sine function sin(·) and the cosine function cos(·), yields 18 trigonometric-function results, i.e., 18 attitude characterization parameters, which form an 18-dimensional vector.
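The 18 attitude characterization parameters can be computed directly; the sketch below follows fig. 7d, using sine and cosine as the trigonometric functions τ(·).

```python
import itertools
import numpy as np

def pose_characterization(alpha: float, beta: float, gamma: float) -> np.ndarray:
    angles = [alpha, beta, gamma]
    # Pairwise sums and differences of the 3 attitude angles: 6 fused angles.
    fused = [a + b for a, b in itertools.combinations(angles, 2)]   # α+β, α+γ, β+γ
    fused += [a - b for a, b in itertools.combinations(angles, 2)]  # α−β, α−γ, β−γ
    all_angles = angles + fused                                     # 9 angles
    # sin and cos of each of the 9 angles: 18 attitude characterization parameters.
    return np.concatenate([np.sin(all_angles), np.cos(all_angles)])  # shape (18,)
```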
After the multiple attitude characterization parameters are obtained, they are processed by a multi-layer perceptron (MLP) network to obtain the camera pose feature map of the frame image. As an example, this may be implemented as follows: vectorize the multiple attitude characterization parameters to obtain a camera attitude feature vector, and process this vector with the MLP network to obtain the camera pose feature map. Referring to fig. 7d, the 18-dimensional camera attitude feature vector composed of the 18 attitude characterization parameters is input to the MLP network, yielding a 64-dimensional feature vector; the 64-dimensional vector is converted into a feature map of size 4×4×64, which is the camera pose feature map.
In this embodiment, for each frame image, the feature splicing module in the feature extraction network splices the image feature map output by the feature extraction module with the camera pose feature map output by the camera parameter fusion module to obtain the spliced feature map of the frame, and the feature dimension reduction module in the feature extraction network performs dimension reduction on the spliced feature map of each frame to obtain the feature vector of each frame.
Referring to fig. 7d, the feature extraction module outputs a feature map of size 4×4×256, and a convolution module with a 1×1 convolution kernel performs dimensionality reduction on it to obtain a feature map of size 4×4×64. This 4×4×64 feature map is spliced with the 4×4×64 feature map output by the camera parameter fusion module to obtain a 4×4×128 feature map; a convolution module with a 4×4 convolution kernel then reduces the 4×4×128 feature map to a 1×1×128 feature map, which is converted into a 128-dimensional feature vector, completing the feature extraction task for the frame image.
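A hedged PyTorch sketch of this fusion path, assuming the reconstructed sizes above (4×4×256 image map, 64-dimensional pose vector tiled to 4×4×64, final 128-dimensional vector); the tiling and layer choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PoseFusion(nn.Module):
    def __init__(self):
        super().__init__()
        self.pose_mlp = nn.Sequential(nn.Linear(18, 64), nn.ReLU())
        self.reduce_img = nn.Conv2d(256, 64, kernel_size=1)   # 4x4x256 -> 4x4x64
        self.reduce_cat = nn.Conv2d(128, 128, kernel_size=4)  # 4x4x128 -> 1x1x128

    def forward(self, img_feat, pose_params):
        # img_feat: (B, 256, 4, 4); pose_params: (B, 18)
        pose = self.pose_mlp(pose_params)                       # (B, 64)
        pose_map = pose[:, :, None, None].expand(-1, -1, 4, 4)  # tile to (B, 64, 4, 4)
        img = self.reduce_img(img_feat)                         # (B, 64, 4, 4)
        fused = torch.cat([img, pose_map], dim=1)               # (B, 128, 4, 4)
        return self.reduce_cat(fused).flatten(1)                # (B, 128) vector
```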
In some optional embodiments of the present application, in order to improve the accuracy of feature extraction, the feature extraction module in the feature extraction network may include a skip connection layer and a downsampling layer connected in sequence. Thus, for each frame in the multi-frame image, an optional implementation of inputting the frame into the feature extraction module to obtain its image feature map is as follows: input the frame into the skip connection layer of the feature extraction module, extract multi-resolution feature maps of the frame, and skip-connect feature maps of the same resolution to obtain a second intermediate feature map of the frame; then input the second intermediate feature map into the downsampling layer of the feature extraction module for M downsampling operations to obtain the image feature map of the frame, where M is a positive integer not less than 1.
Specifically, the skip connection layer may perform several downsampling operations and several upsampling operations, performing skip connections during upsampling. For each upsampling operation, the input feature map is upsampled to produce the output of that operation, and this output is connected (skip-connected) with the previously obtained feature map of the same resolution to form the final output of the operation. Referring to fig. 7e, the skip connection layer first performs feature extraction on the input image to obtain an initial feature map, and then performs several downsampling operations on it: in each downsampling operation, the feature map output by the previous downsampling operation is downsampled to obtain the current output, with the initial feature map serving as the input of the first downsampling operation. In this way, several feature maps of different resolutions are obtained, and the output of the last downsampling operation is taken as the first intermediate feature map. Next, several upsampling operations are performed on the first intermediate feature map: in each upsampling operation, the feature map output by the previous upsampling operation is upsampled to obtain the intermediate output of the current operation, and this intermediate output is connected (skip-connected) with the feature map of the same resolution obtained by downsampling or by the initial feature extraction to form the final output of the operation. After the several upsampling operations, the output of the last upsampling operation is taken as the second intermediate feature map produced by the skip connection layer from the input image.
As an example, the skip connection layer adopts an encoder-decoder structure. For each frame, inputting the frame into the skip connection layer of the feature extraction module, extracting multi-resolution feature maps and skip-connecting feature maps of the same resolution to obtain the second intermediate feature map includes: inputting the frame into the encoder in the skip connection layer, encoding it to obtain an initial feature map, and performing N successive downsampling operations on the initial feature map to obtain a first intermediate feature map; then inputting the first intermediate feature map into the decoder in the skip connection layer, performing N successive upsampling operations on it, and in each upsampling operation skip-connecting with the same-resolution feature map obtained by the encoder's downsampling, so as to obtain the second intermediate feature map of the frame. Referring to fig. 7e, the downward arrows in the skip connection layer correspond to the downsampling performed by the encoder, and the upward arrows correspond to the upsampling performed by the decoder.
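For orientation, a compact U-Net-style sketch of such a skip connection layer with N=2 is given below; channel counts and layer choices are assumptions, and the input spatial size is assumed divisible by 4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipConnectionLayer(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.encode = nn.Conv2d(3, ch, 3, padding=1)            # encoding submodule
        self.down1 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)  # 1st downsampling
        self.down2 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)  # 2nd downsampling
        self.up1 = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.up2 = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        f0 = self.encode(x)                          # initial feature map
        f1 = self.down1(f0)
        f2 = self.down2(f1)                          # first intermediate feature map
        u1 = F.interpolate(f2, scale_factor=2)       # 1st upsampling
        u1 = self.up1(torch.cat([u1, f1], dim=1))    # skip connection with f1
        u2 = F.interpolate(u1, scale_factor=2)       # 2nd upsampling
        return self.up2(torch.cat([u2, f0], dim=1))  # second intermediate feature map
```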
In one possible implementation, the encoder includes an encoding submodule and N downsampling submodules connected in sequence. Inputting the frame into the encoder in the skip connection layer, encoding it to obtain the initial feature map, and performing N successive downsampling operations to obtain the first intermediate feature map then includes: inputting the frame into the encoding submodule for encoding to obtain the initial feature map of the frame; and performing N downsampling operations on the initial feature map with the N downsampling submodules to obtain the first intermediate feature map. Each downsampling submodule contains K1 convolution units connected in sequence; in each convolution unit, the input is convolved with the target convolution parameters corresponding to that unit to obtain an intermediate feature map to be activated, which is then activated with an activation function to produce the unit's output, where K1 is a positive integer greater than or equal to 2. In this embodiment, the number of convolution units in each downsampling submodule of the encoder is not limited and may be, for example, 2, 3, 4, or 5.
Referring to fig. 7f, each downsampling submodule is illustrated as including 3 convolution units connected in sequence. The output of each convolution unit serves as the input of the next; the input of the first convolution unit of the first downsampling submodule is the initial feature map output by the encoding submodule, and the output of the last convolution unit of the last downsampling submodule is the first intermediate feature map.
In the inference stage of the three-dimensional reconstruction network, the target convolution parameters of each convolution unit are obtained by merging the parameters of several training-stage branches via a re-parameterization technique. Introducing multiple branches in the training stage improves the precision of the three-dimensional reconstruction network, and merging the branches in the inference stage improves its three-dimensional reconstruction efficiency.
Referring to fig. 7f, for each convolution unit in the downsampling submodule, the operation of the unit in the training stage is divided into three branches. Suppose the parameters of the first branch are denoted c1 and b1, the parameters of the second branch c2 and b2, and the parameter of the third branch b3, where c1 and c2 are convolution parameters and b1, b2 and b3 are BN (Batch Normalization) parameters. The input is processed by each branch's convolution parameters (where present) and batch normalization parameters, the results of the three branches are added to obtain the intermediate feature map to be activated, and this map is activated with an activation function (such as ReLU or sigmoid) to produce the output of the convolution unit.
Referring to fig. 7f, for each convolution unit in the downsampling submodule, in the inference stage the target convolution parameters of the unit are obtained via the re-parameterization technique by merging the convolution parameters and batch normalization parameters of the three training-stage branches. It should be understood that, for the same input, the intermediate feature map to be activated obtained by processing with the convolution and batch normalization parameters of the three branches (training stage) is identical to that obtained by processing with the merged target convolution parameter c3 (inference stage). That is, although the form of the operation on the input changes, its result does not.
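A simplified sketch of the merge is shown below: each branch's batch normalization is folded into its convolution, and the folded kernels and biases are summed into a single target convolution. For brevity all branches are assumed to carry a convolution of the same kernel size; real re-parameterization schemes (e.g. RepVGG) additionally pad 1×1 and identity branches up to the common kernel size.

```python
import torch  # inputs are assumed to be torch tensors / nn.BatchNorm2d modules

def fold_bn(weight, bn):
    # Fold BatchNorm (gamma, beta, running mean/var) into conv weight and bias.
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std                        # per-output-channel factor
    fused_w = weight * scale[:, None, None, None]  # scale each output filter
    fused_b = bn.bias - bn.running_mean * scale
    return fused_w, fused_b

def merge_branches(branches):
    # branches: list of (conv_weight, bn) pairs from the training graph.
    ws, bs = zip(*(fold_bn(w, bn) for w, bn in branches))
    return sum(ws), sum(bs)  # the target convolution parameters (c3)
```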
In some embodiments of the present application, the feature extraction module in the feature extraction network includes a skip connection layer and a down-sampling layer, which are connected in sequence, and further, the down-sampling layer includes a plurality of down-sampling sub-modules connected in sequence, and each down-sampling sub-module may be any module having a down-sampling function, which is not limited in this respect. Referring to fig. 7c, in the down-sampling layer, each down-sampling sub-module performs down-sampling on the feature map output by the previous down-sampling sub-module to obtain the feature map output by the down-sampling sub-module, the first down-sampling sub-module performs down-sampling on the second intermediate feature map output by the skip connection layer, and the feature map output by the last down-sampling sub-module is used as the output result of the down-sampling layer.
Further optionally, the downsampling layer includes M downsampling submodules connected in sequence. Inputting the second intermediate feature map of the frame into the downsampling layer of the feature extraction module for M downsampling operations to obtain the image feature map of the frame then includes: performing M downsampling operations on the second intermediate feature map with the M downsampling submodules to obtain the image feature map of the frame. Each downsampling submodule contains K2 convolution units connected in sequence; in each convolution unit, the input is convolved with the target convolution parameters corresponding to that unit to obtain an intermediate feature map to be activated, which is then activated with an activation function to produce the unit's output, where K2 is a positive integer greater than or equal to 2. In this embodiment, the number of convolution units in each downsampling submodule of the downsampling layer is not limited and may be, for example, 2, 3, 4, or 5. In an alternative embodiment, each downsampling submodule in the downsampling layer may include 3 convolution units and may adopt the structure of the downsampling submodule shown in fig. 7f, but is not limited thereto.
In some optional embodiments, after the target three-dimensional model is obtained, for each frame of image in the multiple frames of images, the target three-dimensional model is adapted to the target object in the frame of image according to the camera posture data when the frame of image is acquired, and the commodity adapted to the target object is purchased for the target object based on the adaptation result.
Specifically, for each frame, camera extrinsic parameters are obtained from the camera pose data recorded when the frame was acquired; the extrinsic parameters describe the camera in the world coordinate system, such as its position and rotation, and mainly comprise a rotation matrix and a translation matrix. Alternatively, a camera parameter estimation network may be trained in advance with a large number of sample images and the corresponding extrinsic parameters at the time the samples were taken; in the inference stage, a frame is input into this network to obtain the extrinsic parameters at the time it was shot. After the extrinsic parameters are obtained, based on the pinhole imaging model, each vertex of the target three-dimensional model is projected onto the frame according to the extrinsic parameters to obtain the projection point of each vertex. Real image feature points matching the projection points are then determined from the real image feature points of the frame by feature point matching, and for each projection point, the adaptation result between the vertex on the real-world target object and the corresponding vertex on the target three-dimensional model is determined from the image position of the matched real image feature point and the image position of the projection point. Here, real image feature points are the feature points corresponding to vertices on the real-world target object. For example, the degree of adaptation between a vertex on the real-world target object and the corresponding vertex on the target three-dimensional model is quantified by the difference between the image position of the real image feature point and that of the projection point: the larger the difference, the smaller the degree of adaptation; the smaller the difference, the greater the degree of adaptation. After the adaptation results between the vertices on the real-world target object and the corresponding vertices on the target three-dimensional model are obtained, a matching commodity is selected and purchased for the target object on that basis.
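A sketch of the projection-and-matching computation under the pinhole model, assuming a 3×3 intrinsic matrix K and extrinsics (R, t) recovered from the camera pose data; the feature point matching itself is abstracted away, and the monotone mapping from pixel distance to degree of adaptation is an illustrative choice.

```python
import numpy as np

def project_vertices(vertices, K, R, t):
    """vertices: (V, 3) model points -> (V, 2) pixel positions."""
    cam = vertices @ R.T + t         # world -> camera coordinates
    pix = cam @ K.T                  # pinhole projection
    return pix[:, :2] / pix[:, 2:3]  # perspective divide

def degree_of_adaptation(projected, matched_feature_points):
    # Smaller pixel distance between projection point and matched real
    # image feature point -> greater degree of adaptation.
    dist = np.linalg.norm(projected - matched_feature_points, axis=1)
    return 1.0 / (1.0 + dist)        # one illustrative monotone mapping
```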
As an example, when providing target commodity information adapted to a target object according to a target three-dimensional model, selecting commodity information having a highest degree of adaptation between the commodity three-dimensional model and the target three-dimensional model from a plurality of candidate commodity information as the target commodity information according to the target three-dimensional model and the commodity three-dimensional models corresponding to the plurality of candidate commodity information, and providing the target commodity information to the target object;
as another example, when providing the target object with the target commodity information adapted to the target object according to the target three-dimensional model, the commodity three-dimensional model adapted to the target three-dimensional model may be customized for the target object according to the model parameters corresponding to the target three-dimensional model and the selected commodity type, and the commodity information corresponding to the commodity three-dimensional model may be provided to the target object as the target commodity information.
In some optional embodiments, any one of the multiple frames of images is input into the depth estimation network to estimate the size information of the target object, and the target three-dimensional model is labeled according to the estimated size information of the target object.
In practical applications, a depth estimation network can be trained in advance with a large number of sample images and the size information of the target objects in them. In the inference stage, an image is input into the depth estimation network to estimate the size information of the target object, including, for example and without limitation, the length and width of the target object. The estimated size information can then be labeled on the target three-dimensional model. For example, in a virtual shoe-fitting scenario there may be a need to measure foot length and foot width, which are labeled on the reconstructed three-dimensional model of the foot.
In some optional embodiments, for each frame of the multiple frames of images, the target three-dimensional model is adapted to the target object in the frame of image according to the camera pose data when the frame of image is acquired, and the shape parameter of the target object is measured based on the adaptation result.
It should be noted that the execution subjects of the steps of the methods provided in the above embodiments may be the same device, or different devices may be used as the execution subjects of the methods. For example, the execution subjects of steps 401 to 404 may be device a; for another example, the execution subject of steps 401 and 402 may be device a, and the execution subject of steps 403 to 404 may be device B; and so on.
In addition, in some of the flows described in the above embodiments and the drawings, a plurality of operations are included in a specific order, but it should be clearly understood that the operations may be executed out of the order presented herein or in parallel, and the sequence numbers of the operations, such as 401, 402, etc., are merely used to distinguish various operations, and the sequence numbers themselves do not represent any execution order. Additionally, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that, the descriptions of "first", "second", etc. in this document are used for distinguishing different messages, devices, modules, etc., and do not represent a sequential order, nor limit the types of "first" and "second" to be different.
Fig. 8 is a schematic structural diagram of an information processing apparatus based on virtual fitting according to an embodiment of the present application. Referring to fig. 8, the apparatus may include:
the fusion module 81 is used for responding to the fitting request, fusing the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model, and representing a first relative position relation between the three-dimensional model of the fitting object and the reference three-dimensional model in a fitting state by the fused three-dimensional model;
an obtaining module 82, configured to calculate, according to the first relative position relationship, a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and corresponding vertices or regions on the reference three-dimensional model, as adaptation degree information of the plurality of target vertices;
and the adjusting module 83 is configured to, when it is determined that the reference three-dimensional model does not meet the requirement of the suitability according to the suitability information of the multiple target vertices, adjust the size parameter and/or the shape parameter of the reference three-dimensional model, and recalculate the suitability information of the multiple target vertices until the target three-dimensional model meeting the requirement of the suitability is obtained.
Further optionally, the apparatus further includes a sending module 84, configured to send the target three-dimensional model to the server, and receive information of the target commodity object returned by the server, where the target commodity object is a customized commodity object whose size and shape are adapted to the target three-dimensional model.
Further optionally, when it is determined from the adaptation degree information of the multiple target vertices that the reference three-dimensional model does not meet the adaptation degree requirement, the adjusting module 83 is specifically configured, when adjusting the size parameter and/or the shape parameter of the reference three-dimensional model, to: display any one of the three-dimensional model of the fitting object, the reference three-dimensional model and the fused three-dimensional model, and visually mark the adaptation degree information of the multiple target vertices on that three-dimensional model, where adaptation degree information having different magnitude relations to the reference adaptation degree range corresponds to different visual marking states, so that the user can confirm whether the reference three-dimensional model meets the adaptation degree requirement; and adjust the size parameter and/or the shape parameter of the reference three-dimensional model in response to an adjusting operation for the reference three-dimensional model, the adjusting operation being initiated when it is confirmed that the reference three-dimensional model does not meet the adaptation degree requirement.
Further optionally, when visually marking the adaptation degree information of the multiple target vertices on any of the three-dimensional models, the adjusting module 83 is specifically configured to: render the three-dimensional model according to the adaptation degree information of the target vertices to obtain an adaptation degree heat map, where different colors in the heat map represent adaptation degree information having different relations to the reference adaptation degree range.
Further optionally, if a slide bar is displayed in the associated region of any of the three-dimensional models, the adjusting module 83, when adjusting the size parameter and/or the shape parameter of the reference three-dimensional model in response to the adjustment operation for the reference three-dimensional model, is specifically configured to: in response to at least one sliding operation on the slide bar, acquire the sliding distance and sliding direction of each sliding operation, and determine the adjustment amplitude and adjustment direction from them respectively; and adjust the size parameter and/or the shape parameter of the reference three-dimensional model according to the adjustment direction and adjustment amplitude.
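An illustrative mapping from a slide operation to an adjustment of the reference three-dimensional model; the pixel-to-parameter scale factor is an assumed design constant, not part of the embodiment.

```python
def slide_to_adjustment(slide_distance_px: float, slide_right: bool,
                        scale: float = 0.01) -> float:
    magnitude = slide_distance_px * scale            # adjustment amplitude
    return magnitude if slide_right else -magnitude  # adjustment direction
```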
Further optionally, the slide bar includes a first slide bar and a second slide bar, the first slide bar being used to adjust the size parameter of the reference three-dimensional model and the second slide bar being used to adjust the shape parameter of the reference three-dimensional model.
Further optionally, the fusion module 81 responds to the fitting request, and fuses the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object, and when obtaining the three-dimensional fusion model, the fusion module is specifically configured to: responding to the fitting request, and acquiring a three-dimensional model of the fitting object, a reference three-dimensional model of the target commodity object and target fitting parameters of the fitting object aiming at the target commodity object; determining a second relative position relation between at least three reference vertexes on the three-dimensional model of the fitting object and corresponding reference vertexes on the reference three-dimensional model according to the target fitting parameters; and at least partially placing the three-dimensional model of the fitting object inside the reference three-dimensional model according to the second relative position relation so as to obtain a fused three-dimensional model.
Further optionally, when the fusion module 81 obtains the target fitting parameters of the fitting object for the target commodity object, the fusion module is specifically configured to: and acquiring target fitting parameters of the fitting object aiming at the target commodity object according to the attribute information of the fitting object, the fitting preference information of the user to which the fitting object belongs and/or the reference fitting parameters corresponding to the target commodity object.
Further optionally, when the fitting object is a foot and the target commodity object is a shoe, the fusion module 81, when determining, according to the target fitting parameters, the second relative position relation between the plurality of reference vertices on the three-dimensional model of the fitting object and the corresponding reference vertices on the reference three-dimensional model, is specifically configured to:
determine, according to the fitting distance between the heel of the foot and the heel of the shoe, the fitting distance between a first heel vertex on the three-dimensional model of the foot and a second heel vertex on the reference three-dimensional model of the shoe, as a second relative position relation;
determine, according to the fit relation between the sole of the foot and the sole of the shoe, that a first sole vertex on the three-dimensional model of the foot coincides with a second sole vertex on the reference three-dimensional model of the shoe, as a second relative position relation;
determine, according to the alignment relation between the center line of the foot sole and the center line of the shoe sole, that a first center-line vertex on the sole center line of the three-dimensional model of the foot is aligned, in the foot length direction, with a second center-line vertex on the sole center line of the reference three-dimensional model of the shoe, as a second relative position relation.
Further optionally, when the fusion module 81 responds to the fitting request and obtains the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object, the fusion module is specifically configured to: responding to the fitting request, and displaying a commodity customized page, wherein the commodity customized page comprises at least one customizable commodity object; responding to the selection operation of a user on a commodity customized page, and determining a selected target commodity object and a reference three-dimensional model thereof; and loading the three-dimensional model of the fitting object matched with the target commodity object from the three-dimensional model library of the user according to the type of the target commodity object.
Further optionally, the obtaining module 82 calculates, according to the first relative position relationship, a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and corresponding regions on the reference three-dimensional model, and when the plurality of pieces of distance information are used as the adaptation degree information of the plurality of target vertices, is specifically configured to: aiming at each target vertex on the three-dimensional model of the fitting object, acquiring a first vertex which is closest to the target vertex on the reference three-dimensional model according to the first relative position relation; taking a plurality of triangular patches taking the first vertex as a connecting point as corresponding areas of the target vertex on the reference three-dimensional model; and calculating a plurality of distances from the target vertex to the plurality of triangular patches, and generating the adaptation degree information of the target vertex according to the plurality of distances.
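A hedged sketch of this per-vertex computation: find the nearest vertex on the reference model, gather the triangular patches that share it as connecting point, and derive the target vertex's adaptation degree information from the point-to-triangle distances. `point_triangle_distance` stands in for a standard closest-point-on-triangle routine (available, e.g., in common mesh libraries) and is assumed here.

```python
import numpy as np

def vertex_adaptation(target_v, ref_vertices, ref_faces, point_triangle_distance):
    # First vertex on the reference model closest to the target vertex.
    nearest = np.argmin(np.linalg.norm(ref_vertices - target_v, axis=1))
    # All triangular patches that use the nearest vertex as a connecting point.
    patches = ref_faces[np.any(ref_faces == nearest, axis=1)]
    # Distances from the target vertex to those patches; the adaptation
    # degree information is generated from these distances (minimum here).
    dists = [point_triangle_distance(target_v, ref_vertices[f]) for f in patches]
    return min(dists)
```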
Further optionally, the obtaining module 82 is further configured to: taking each vertex on the three-dimensional model of the try-on object as a target vertex; or selecting a vertex corresponding to the key part information from the three-dimensional model of the fitting object as a target vertex according to the key part information of the fitting object.
The apparatus shown in fig. 8 can perform the method shown in fig. 3, and the implementation principle and technical effect thereof are not described in detail. The specific manner in which each module and unit of the apparatus in fig. 8 in the above embodiment perform operations has been described in detail in the embodiment related to the method, and will not be described in detail herein.
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 9, the electronic apparatus includes: a memory 91 and a processor 92;
memory 91 is used to store computer programs and may be configured to store other various data to support operations on the computing platform. Examples of such data include instructions for any application or method operating on the computing platform, contact data, phonebook data, messages, pictures, videos, and so forth.
The memory 91 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
A processor 92, coupled to the memory 91, for executing the computer program in the memory 91 for: responding to the fitting request, fusing the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model, wherein the fused three-dimensional model represents a first relative position relation between the three-dimensional model of the fitting object and the reference three-dimensional model in the fitting state; according to the first relative position relation, acquiring a plurality of distance information between a plurality of target vertexes on the three-dimensional model of the fitting object and corresponding vertexes or regions on the reference three-dimensional model, and using the distance information as the adaptation degree information of the target vertexes; and under the condition that the reference three-dimensional model does not meet the adaptation degree requirement according to the adaptation degree information of the target vertexes, adjusting the size parameter and/or the shape parameter of the reference three-dimensional model, and acquiring the adaptation degree information of the target vertexes again until the target three-dimensional model meeting the adaptation degree requirement is obtained.
Alternatively, the processor 92, coupled to the memory 91, is configured to execute the computer program in the memory 91 to: responding to the fitting request, fusing the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a fused three-dimensional model, wherein the fused three-dimensional model represents a first relative position relation between the three-dimensional model of the fitting object and the reference three-dimensional model in the fitting state; according to the first relative position relation, acquiring a plurality of distance information between a plurality of target vertexes on the three-dimensional model of the fitting object and corresponding vertexes or regions on the reference three-dimensional model, and using the distance information as the adaptation degree information of the target vertexes; displaying any three-dimensional model of the fitting object, the reference three-dimensional model and the fusion three-dimensional model, and visually marking the adaptation degree information of a plurality of target vertexes on any three-dimensional model; and responding to the adjustment operation aiming at any three-dimensional model, adjusting the size parameter and/or the appearance parameter of the reference three-dimensional model, and synchronously updating the adaptation degree information of the multiple target vertexes and the visual mark on any three-dimensional model according to the adjusted reference three-dimensional model.
Alternatively, the processor 92, coupled to the memory 91, is configured to execute the computer program in the memory 91 to: responding to the fitting request, acquiring a three-dimensional model of the fitting object and a reference three-dimensional model of the target commodity object, and displaying the three-dimensional model of the fitting object and the reference three-dimensional model; fusing the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity to obtain a first relative position relation between the three-dimensional model of the fitting object and the reference three-dimensional model in a fitting state; according to the first relative position relation, calculating a plurality of distance information between a plurality of target vertexes on the three-dimensional model of the fitting object and corresponding vertexes or regions on the reference three-dimensional model to serve as the adaptation degree information of the target vertexes; visually marking the adaptation degree information of a plurality of target vertexes on the three-dimensional model of the fitting object; and responding to the adjustment operation aiming at the reference three-dimensional model, adjusting the size parameter and/or the appearance parameter of the reference three-dimensional model, and synchronously updating the adaptation degree information of the target vertexes and the visual mark on the three-dimensional model of the fitting object according to the adjusted reference three-dimensional model.
Further, as shown in fig. 9, the electronic device further includes: communications component 93, display 94, power component 95, audio component 96, and the like. Only some of the components are schematically shown in fig. 9, and the electronic device is not meant to include only the components shown in fig. 9. In addition, the components within the dashed line frame in fig. 9 are optional components, not necessary components, and may be determined according to the product form of the electronic device. The electronic device of this embodiment may be implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, or an IOT device, and may also be a server device such as a conventional server, a cloud server, or a server array. If the electronic device of this embodiment is implemented as a terminal device such as a desktop computer, a notebook computer, a smart phone, etc., the electronic device may include components within the dashed line frame in fig. 9; if the electronic device of this embodiment is implemented as a server device such as a conventional server, a cloud server, or a server array, components within a dashed box in fig. 9 may not be included.
For details of the implementation process of the processor to perform each action, reference may be made to the related description in the foregoing method embodiment or apparatus embodiment, and details are not described herein again.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program is capable of implementing the steps that can be executed by the electronic device in the foregoing method embodiments when executed.
Accordingly, the present application also provides a computer program product, which includes a computer program/instruction, when the computer program/instruction is executed by a processor, the processor is enabled to implement the steps that can be executed by an electronic device in the above method embodiments.
The communication component is configured to facilitate wired or wireless communication between the device in which it is located and other devices. The device in which the communication component is located can access a wireless network based on a communication standard, such as WiFi, a 2G, 3G, 4G/LTE or 5G mobile communication network, or a combination thereof. In an exemplary embodiment, the communication component receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The display described above includes a screen, which may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The power supply assembly provides power to various components of the device in which the power supply assembly is located. The power components may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device in which the power component is located.
The audio component described above may be configured to output and/or input an audio signal. For example, the audio component includes a Microphone (MIC) configured to receive an external audio signal when the device in which the audio component is located is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in a memory or transmitted via a communication component. In some embodiments, the audio assembly further comprises a speaker for outputting audio signals.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change RAM (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technologies, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," and any other variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely embodiments of the present application and are not intended to limit it. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the scope of its claims.

Claims (16)

1. An information processing method based on virtual fitting, characterized by comprising the following steps:
in response to a fitting request, fusing a three-dimensional model of a fitting object with a reference three-dimensional model of a target commodity object to obtain a fused three-dimensional model, wherein the fused three-dimensional model represents a first relative positional relation between the three-dimensional model of the fitting object and the reference three-dimensional model in a fitted state;
according to the first relative positional relation, acquiring a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and corresponding vertices or regions on the reference three-dimensional model, as fit information of the plurality of target vertices; and
in a case where it is determined from the fit information of the plurality of target vertices that the reference three-dimensional model does not meet a fit requirement, adjusting a size parameter and/or a shape parameter of the reference three-dimensional model and re-acquiring the fit information of the plurality of target vertices, until a target three-dimensional model meeting the fit requirement is obtained.
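For orientation, the following is a minimal Python/NumPy sketch of the adjust-and-remeasure loop recited in claim 1. It is deliberately simplified: both models are treated as fixed point clouds, the only adjustable size parameter is a uniform scale, and the vertex-to-region distance detailed in claim 11 is approximated by a nearest-vertex distance. The function names and the millimeter fit range [lo, hi] are illustrative assumptions, not taken from the patent.

    import numpy as np

    def nearest_vertex_distances(foot_pts, shoe_pts):
        # Fit information per target vertex: distance from each foot vertex
        # to the nearest vertex of the reference (shoe) model. A stand-in
        # for the vertex-to-patch distance of claim 11.
        diffs = foot_pts[:, None, :] - shoe_pts[None, :, :]
        return np.linalg.norm(diffs, axis=2).min(axis=1)

    def fit_reference_model(foot_pts, shoe_pts, lo=1.0, hi=6.0, max_iters=20):
        # Adjust the size parameter and re-acquire fit information until
        # every per-vertex distance lies inside the reference fit range.
        scale = 1.0
        for _ in range(max_iters):
            d = nearest_vertex_distances(foot_pts, shoe_pts * scale)
            if np.all((d >= lo) & (d <= hi)):
                break                                  # fit requirement met
            scale *= 1.02 if np.any(d < lo) else 0.98  # enlarge if too tight anywhere
        return scale, d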
2. The method of claim 1, further comprising, after the target three-dimensional model meeting the fit requirement is obtained:
sending the target three-dimensional model to a server, and receiving information of a target commodity object returned by the server, wherein the target commodity object is a customized commodity object whose size and shape match the target three-dimensional model.
3. The method of claim 1, wherein adjusting the size parameter and/or the shape parameter of the reference three-dimensional model in the case where it is determined from the fit information of the plurality of target vertices that the reference three-dimensional model does not meet the fit requirement comprises:
displaying any one of the three-dimensional model of the fitting object, the reference three-dimensional model, and the fused three-dimensional model;
visually marking the fit information of the plurality of target vertices on the displayed three-dimensional model, wherein fit information having different magnitude relations to a reference fit range corresponds to different visual marking states; and
in response to an adjustment operation on the reference three-dimensional model, adjusting the size parameter and/or the shape parameter of the reference three-dimensional model until a target three-dimensional model meeting the fit requirement is obtained.
4. The method of claim 3, wherein visually marking the fit information of the plurality of target vertices on the displayed three-dimensional model comprises:
rendering the displayed three-dimensional model according to the fit information of the plurality of target vertices to obtain a fit heat map, wherein different colors in the fit heat map represent fit information having different relations to the reference fit range.
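As a sketch of such a fit heat map, the per-vertex fit distances can be mapped directly to per-vertex colors; the three-band red/green/blue scheme and the millimeter thresholds below are illustrative choices, not specified by the claim.

    import numpy as np

    def fit_heatmap_colors(fit_mm, lo=1.0, hi=6.0):
        # fit_mm: ndarray of per-vertex fit distances in millimeters.
        # Red where the model is tighter than the reference fit range,
        # green inside the range, blue where it is looser. Any renderer
        # that accepts per-vertex colors can display the result on the mesh.
        colors = np.zeros((len(fit_mm), 3))
        colors[fit_mm < lo] = (1.0, 0.0, 0.0)                     # too tight
        colors[(fit_mm >= lo) & (fit_mm <= hi)] = (0.0, 1.0, 0.0)
        colors[fit_mm > hi] = (0.0, 0.0, 1.0)                     # too loose
        return colors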
5. The method of claim 3, wherein, in a case where a slide bar is displayed in a region associated with the displayed three-dimensional model, adjusting the size parameter and/or the shape parameter of the reference three-dimensional model in response to the adjustment operation comprises:
in response to at least one sliding operation on the slide bar, acquiring a sliding distance and a sliding direction of each sliding operation, and determining an adjustment magnitude and an adjustment direction from the sliding distance and the sliding direction, respectively; and
adjusting the size parameter and/or the shape parameter of the reference three-dimensional model according to the adjustment direction and the adjustment magnitude.
6. The method of claim 5, wherein the slide bar comprises a first slide bar for adjusting the size parameter of the reference three-dimensional model and a second slide bar for adjusting the shape parameter of the reference three-dimensional model.
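One plausible mapping from a sliding operation to a parameter adjustment, in the spirit of claims 5 and 6; the pixel-to-millimeter step and the direction convention are assumptions.

    def slider_adjustment(slide_px, direction, step_mm_per_px=0.02):
        # The sliding distance fixes the adjustment magnitude; the sliding
        # direction fixes its sign (here, right/up enlarges, left/down shrinks).
        sign = 1.0 if direction in ("right", "up") else -1.0
        return sign * slide_px * step_mm_per_px

    # Under these assumptions, a 120 px drag to the right on the first
    # slide bar enlarges the size parameter by 2.4 mm.
    size_delta = slider_adjustment(120, "right")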
7. The method of any one of claims 1 to 6, wherein fusing the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object in response to the fitting request to obtain the fused three-dimensional model comprises:
in response to the fitting request, acquiring the three-dimensional model of the fitting object, the reference three-dimensional model of the target commodity object, and target fitting parameters of the fitting object with respect to the target commodity object;
determining, according to the target fitting parameters, second relative positional relations between at least three reference vertices on the three-dimensional model of the fitting object and corresponding reference vertices on the reference three-dimensional model; and
placing the three-dimensional model of the fitting object at least partially inside the reference three-dimensional model according to the second relative positional relations, so as to obtain the fused three-dimensional model.
8. The method of claim 7, wherein acquiring the target fitting parameters of the fitting object with respect to the target commodity object comprises:
acquiring the target fitting parameters according to attribute information of the fitting object, fitting preference information of the user to whom the fitting object belongs, and/or reference fitting parameters corresponding to the target commodity object.
9. The method of claim 7, wherein, in a case where the fitting object is a foot and the target commodity object is a shoe, determining the second relative positional relations between the reference vertices on the three-dimensional model of the fitting object and the corresponding reference vertices on the reference three-dimensional model according to the target fitting parameters comprises at least one of:
determining, according to a fit-distance relation between the shoe and the heel, that a first heel vertex on the three-dimensional model of the foot and a second heel vertex on the reference three-dimensional model of the shoe are separated by the fit distance, as a second relative positional relation;
determining, according to a coincidence relation between the sole of the foot and the sole of the shoe, that a first sole vertex on the three-dimensional model of the foot coincides with a second sole vertex on the reference three-dimensional model of the shoe, as a second relative positional relation; and
determining, according to an alignment relation between the center of the foot sole and the center of the shoe sole, that a first centerline vertex located on the centerline of the foot sole on the three-dimensional model of the foot is aligned in the foot-length direction with a second centerline vertex located on the centerline of the shoe sole on the reference three-dimensional model of the shoe, as a second relative positional relation.
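Taken together, these three constraints pin down a rigid translation of the foot mesh relative to the shoe. The sketch below assumes a coordinate frame with x = width, y = length (toes in the positive direction), and z = height, plus dictionaries of named landmark vertices; the landmark names and the 5 mm heel gap are illustrative, not prescribed by the claim.

    import numpy as np

    def place_foot_in_shoe(foot_lm, shoe_lm, heel_gap=5.0):
        # Translation realizing the constraints of claim 9.
        t = np.zeros(3)
        # Heel vertices separated by the fit distance along the length axis.
        t[1] = (shoe_lm["heel"][1] + heel_gap) - foot_lm["heel"][1]
        # Foot sole vertex coincides with the shoe sole vertex in height.
        t[2] = shoe_lm["sole"][2] - foot_lm["sole"][2]
        # Sole centerlines aligned across the width axis.
        t[0] = shoe_lm["centerline"][0] - foot_lm["centerline"][0]
        # Adding t to every foot vertex yields the fused pose.
        return t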
10. The method of claim 7, wherein acquiring the three-dimensional model of the fitting object and the reference three-dimensional model of the target commodity object in response to the fitting request comprises:
in response to the fitting request, displaying a commodity customization page comprising at least one customizable commodity object;
in response to a selection operation of the user on the commodity customization page, determining the selected target commodity object and its reference three-dimensional model; and
loading, according to the type of the target commodity object, the three-dimensional model of the fitting object matching the target commodity object from a three-dimensional model library of the user.
11. The method of any one of claims 1 to 6, wherein calculating, according to the first relative positional relation, the plurality of pieces of distance information between the plurality of target vertices on the three-dimensional model of the fitting object and the corresponding regions on the reference three-dimensional model, as the fit information of the plurality of target vertices, comprises:
for each target vertex on the three-dimensional model of the fitting object, acquiring, according to the first relative positional relation, a first vertex on the reference three-dimensional model that is closest to the target vertex;
taking a plurality of triangular patches that share the first vertex as the region on the reference three-dimensional model corresponding to the target vertex; and
calculating a plurality of distances from the target vertex to the plurality of triangular patches, and generating the fit information of the target vertex from the plurality of distances.
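The claim leaves the point-to-triangle routine open. One standard choice is the closest-point construction from Ericson's Real-Time Collision Detection, sketched below; reducing the per-patch distances to their minimum is likewise one of several plausible aggregations, not the patent's mandated one.

    import numpy as np

    def point_triangle_distance(p, a, b, c):
        # Distance from point p to triangle (a, b, c): project p onto the
        # triangle's closest feature (vertex, edge, or interior).
        ab, ac, ap = b - a, c - a, p - a
        d1, d2 = ab @ ap, ac @ ap
        if d1 <= 0 and d2 <= 0:                        # region of vertex a
            return np.linalg.norm(p - a)
        bp = p - b
        d3, d4 = ab @ bp, ac @ bp
        if d3 >= 0 and d4 <= d3:                       # region of vertex b
            return np.linalg.norm(p - b)
        vc = d1 * d4 - d3 * d2
        if vc <= 0 and d1 >= 0 and d3 <= 0:            # region of edge ab
            return np.linalg.norm(p - (a + ab * d1 / (d1 - d3)))
        cp = p - c
        d5, d6 = ab @ cp, ac @ cp
        if d6 >= 0 and d5 <= d6:                       # region of vertex c
            return np.linalg.norm(p - c)
        vb = d5 * d2 - d1 * d6
        if vb <= 0 and d2 >= 0 and d6 <= 0:            # region of edge ac
            return np.linalg.norm(p - (a + ac * d2 / (d2 - d6)))
        va = d3 * d6 - d5 * d4
        if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:  # region of edge bc
            w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
            return np.linalg.norm(p - (b + (c - b) * w))
        denom = va + vb + vc                           # interior of the face
        closest = a + ab * (vb / denom) + ac * (vc / denom)
        return np.linalg.norm(p - closest)

    def vertex_fit_info(p, ref_verts, ref_tris):
        # Steps of claim 11 for one target vertex p: find the nearest
        # reference vertex, gather the triangular patches sharing it
        # (ref_tris is an integer (M, 3) index array), then take the
        # minimum point-to-patch distance as the vertex's fit information.
        i = np.argmin(np.linalg.norm(ref_verts - p, axis=1))
        patches = ref_tris[np.any(ref_tris == i, axis=1)]
        return min(point_triangle_distance(p, *ref_verts[t]) for t in patches)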
12. An information interaction method based on virtual fitting, characterized by comprising the following steps:
in response to a fitting request, fusing a three-dimensional model of a fitting object with a reference three-dimensional model of a target commodity object to obtain a fused three-dimensional model, wherein the fused three-dimensional model represents a first relative positional relation between the three-dimensional model of the fitting object and the reference three-dimensional model in a fitted state;
according to the first relative positional relation, acquiring a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and corresponding vertices or regions on the reference three-dimensional model, as fit information of the plurality of target vertices;
displaying any one of the three-dimensional model of the fitting object, the reference three-dimensional model, and the fused three-dimensional model, and visually marking the fit information of the plurality of target vertices on the displayed three-dimensional model; and
in response to an adjustment operation on the displayed three-dimensional model, adjusting a size parameter and/or a shape parameter of the reference three-dimensional model, and synchronously updating, according to the adjusted reference three-dimensional model, the fit information of the plurality of target vertices and the visual marks on the displayed three-dimensional model.
13. An information interaction method based on virtual fitting, characterized by comprising the following steps:
in response to a fitting request, acquiring a three-dimensional model of a fitting object and a reference three-dimensional model of a target commodity object, and displaying the two models;
fusing the three-dimensional model of the fitting object with the reference three-dimensional model of the target commodity object to obtain a first relative positional relation between the three-dimensional model of the fitting object and the reference three-dimensional model in a fitted state;
according to the first relative positional relation, calculating a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and corresponding vertices or regions on the reference three-dimensional model, as fit information of the plurality of target vertices;
visually marking the fit information of the plurality of target vertices on the three-dimensional model of the fitting object; and
in response to an adjustment operation on the reference three-dimensional model, adjusting a size parameter and/or a shape parameter of the reference three-dimensional model, and synchronously updating, according to the adjusted reference three-dimensional model, the fit information of the plurality of target vertices and the visual marks on the three-dimensional model of the fitting object.
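The synchronous update in claims 12 and 13 amounts to one adjust-recompute-redraw step. A hypothetical UI callback is sketched below; it reuses nearest_vertex_distances and fit_heatmap_colors from the earlier sketches, stands in a uniform scale for the size parameter, and takes the renderer's refresh hook as the redraw argument, all of which are assumptions rather than the patent's API.

    def on_adjust(shoe_pts, foot_pts, scale_delta, redraw):
        # Apply the adjustment, re-acquire the fit information, and refresh
        # the visual marks in the same step so they track the adjusted model.
        shoe_pts = shoe_pts * (1.0 + scale_delta)
        fit = nearest_vertex_distances(foot_pts, shoe_pts)
        redraw(shoe_pts, fit_heatmap_colors(fit))
        return shoe_pts, fit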
14. An information processing apparatus based on virtual fitting, characterized by comprising:
a fusion module, configured to fuse, in response to a fitting request, a three-dimensional model of a fitting object with a reference three-dimensional model of a target commodity object to obtain a fused three-dimensional model, the fused three-dimensional model representing a first relative positional relation between the three-dimensional model of the fitting object and the reference three-dimensional model in a fitted state;
an acquisition module, configured to acquire, according to the first relative positional relation, a plurality of pieces of distance information between a plurality of target vertices on the three-dimensional model of the fitting object and corresponding vertices or regions on the reference three-dimensional model, as fit information of the plurality of target vertices; and
an adjustment module, configured to adjust, in a case where it is determined from the fit information of the plurality of target vertices that the reference three-dimensional model does not meet a fit requirement, a size parameter and/or a shape parameter of the reference three-dimensional model, and to re-acquire the fit information of the plurality of target vertices, until a target three-dimensional model meeting the fit requirement is obtained.
15. An electronic device, comprising a memory and a processor, the memory being configured to store a computer program, and the processor being coupled to the memory and configured to execute the computer program so as to perform the steps of the method according to any one of claims 1 to 11, claim 12, or claim 13.
16. A computer-readable storage medium having a computer program stored thereon which, when executed by a processor, causes the processor to perform the steps of the method according to any one of claims 1 to 11, claim 12, or claim 13.
CN202211257860.4A 2022-10-14 2022-10-14 Information processing and interaction method, device, equipment and medium based on virtual fitting Active CN115358828B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211257860.4A CN115358828B (en) 2022-10-14 2022-10-14 Information processing and interaction method, device, equipment and medium based on virtual fitting

Publications (2)

Publication Number Publication Date
CN115358828A (publication) 2022-11-18
CN115358828B (grant) 2023-03-28

Family

ID=84007853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211257860.4A Active CN115358828B (en) 2022-10-14 2022-10-14 Information processing and interaction method, device, equipment and medium based on virtual fitting

Country Status (1)

Country Link
CN (1) CN115358828B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117523142A (en) * 2023-11-13 2024-02-06 书行科技(北京)有限公司 Virtual fitting method, virtual fitting device, electronic equipment and computer readable storage medium

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080163344A1 (en) * 2006-12-29 2008-07-03 Cheng-Hsien Yang Terminal try-on simulation system and operating and applying method thereof
CN105788002A (en) * 2016-01-06 2016-07-20 湖南拓视觉信息技术有限公司 3D virtual shoe fitting method and system
CN105787751A (en) * 2016-01-06 2016-07-20 湖南拓视觉信息技术有限公司 3D human body virtual fitting method and system
US9460557B1 (en) * 2016-03-07 2016-10-04 Bao Tran Systems and methods for footwear fitting
US20170156430A1 (en) * 2014-07-02 2017-06-08 Konstantin Aleksandrovich KARAVAEV Method for virtually selecting clothing
US20180033202A1 (en) * 2016-07-29 2018-02-01 OnePersonalization Limited Method and system for virtual shoes fitting
US9965801B1 (en) * 2016-12-22 2018-05-08 Capital One Services, Llc Systems and methods for virtual fittings
CN108564454A (en) * 2018-06-04 2018-09-21 冼玮 3D dressing systems and its method suitable for the network platform
CN108961015A (en) * 2018-07-27 2018-12-07 朱培恒 A kind of online virtual examination shoes method
CN110148040A (en) * 2019-05-22 2019-08-20 珠海随变科技有限公司 A kind of virtual fit method, device, equipment and storage medium
CN110348972A (en) * 2019-07-29 2019-10-18 足购科技(杭州)有限公司 The shoes system for trying and method of view-based access control model algorithm
JP2020097803A (en) * 2018-12-18 2020-06-25 成衛 貝田 Virtual fitting system
CN114119905A (en) * 2020-08-27 2022-03-01 北京陌陌信息技术有限公司 Virtual fitting method, system, equipment and storage medium
US20220157020A1 (en) * 2020-11-16 2022-05-19 Clo Virtual Fashion Inc. Method and apparatus for online fitting
CN114841783A (en) * 2022-05-27 2022-08-02 阿里巴巴(中国)有限公司 Commodity information processing method and device, terminal device and storage medium
CN114846389A (en) * 2019-08-26 2022-08-02 沃比帕克公司 Virtual fitting system and method for eyewear
CN114928751A (en) * 2022-05-09 2022-08-19 咪咕文化科技有限公司 Object display method, device and equipment and readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Lyu Pengpeng: "Prospects and Outlook of 3D Virtual Fitting Software in the 'Internet Plus' Context", Shandong Textile Economy *
Chen Guiqing et al.: "Research Progress of Non-contact 3D Body Measurement Technology and Its Application in the Apparel Field", China Textile Leader *
Chen Qingqing: "Virtual Garment Try-on Simulation Based on Garment Parameters", Journal of Putian University *

Also Published As

Publication number Publication date
CN115358828B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
US10489683B1 (en) Methods and systems for automatic generation of massive training data sets from 3D models for training deep learning networks
US10777020B2 (en) Virtual representation creation of user for fit and style of apparel and accessories
CN111787242B (en) Method and apparatus for virtual fitting
WO2021028728A1 (en) Method and system for remotely selecting garments
US20190228448A1 (en) System, Platform and Method for Personalized Shopping Using a Virtual Shopping Assistant
US20190188784A1 (en) System, platform, device and method for personalized shopping
KR102202843B1 (en) System for providing online clothing fitting service using three dimentional avatar
US20160081435A1 (en) Footwear recommendations from foot scan data
JP2021168157A (en) System, platform and method for personalized shopping using automated shopping assistant
EP3599590A1 (en) An online virtual shoe fitting method
CN115358828B (en) Information processing and interaction method, device, equipment and medium based on virtual fitting
US11507781B2 (en) Methods and systems for automatic generation of massive training data sets from 3D models for training deep learning networks
US11113892B2 (en) Method and apparatus for on-line and off-line retail of all kind of clothes, shoes and accessories
WO2022066570A1 (en) Providing ar-based clothing in messaging system
CN115359192B (en) Three-dimensional reconstruction and commodity information processing method, device, equipment and storage medium
CN114170250B (en) Image processing method and device and electronic equipment
CN113298956A (en) Image processing method, nail beautifying method and device, and terminal equipment
Kok et al. Footnet: An efficient convolutional network for multiview 3d foot reconstruction
Wiradinata et al. Online Measuring Feature for Batik Size Prediction using Mobile Device: A Potential Application for a Novelty Technology
KR102314167B1 (en) Method of curating beauty care products by analyzing user's facial contour and skin type
CN112884556A (en) Shop display method, system, equipment and medium based on mixed reality
CN114445271A (en) Method for generating virtual fitting 3D image
CN114596412B (en) Method for generating virtual fitting 3D image
US20230334527A1 (en) System and method for body parameters-sensitive facial transfer in an online fashion retail environment
RU2805003C2 (en) Method and system for remote clothing selection

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
REG: Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40081891)