CN113838217A - Information display method and device, electronic equipment and readable storage medium - Google Patents

Information display method and device, electronic equipment and readable storage medium

Info

Publication number
CN113838217A
CN113838217A (application number CN202111117491.4A)
Authority
CN
China
Prior art keywords
target
model
determining
virtual
display space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111117491.4A
Other languages
Chinese (zh)
Other versions
CN113838217B (en)
Inventor
李婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111117491.4A priority Critical patent/CN113838217B/en
Publication of CN113838217A publication Critical patent/CN113838217A/en
Application granted granted Critical
Publication of CN113838217B publication Critical patent/CN113838217B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2021Shape modification

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides an information display method and device, an electronic device, and a readable storage medium, and relates to the field of artificial intelligence, in particular to augmented reality. The specific implementation scheme is as follows: determining a target display space for a virtual try-on model, wherein the virtual try-on model is a model rendered using a target clothing map; and displaying the virtual try-on model in the target display space based on an augmented reality method. Because the virtual try-on model is obtained by rendering with the target clothing map and is then displayed in the target display space through augmented reality, the try-on effect the user sees is more realistic, and the user can judge from the displayed effect whether the clothing meets their needs.

Description

Information display method and device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technology, and more particularly, to the field of augmented reality technology.
Background
With the growth of online shopping, more and more users purchase clothing online. However, online shoppers cannot see the real wearing effect. How to let a user who cannot physically try on clothing still judge whether it suits their needs has become a problem to be solved.
Disclosure of Invention
The disclosure provides an information display method, an information display device, electronic equipment and a readable storage medium.
According to a first aspect of the present disclosure, there is provided an information display method, including:
determining a target display space for a virtual try-on model, wherein the virtual try-on model is a model rendered using a target clothing map;
and displaying the virtual try-on model in the target display space based on an augmented reality method.
According to a second aspect of the present disclosure, there is provided an information presentation apparatus comprising:
a first determining module, configured to determine a target display space for a virtual try-on model, wherein the virtual try-on model is a model rendered using a target clothing map;
and a display module, configured to display the virtual try-on model in the target display space based on an augmented reality method.
According to a third aspect of the present disclosure, there is provided an electronic apparatus comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the above method.
According to a fifth aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the above-described method.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flowchart of an information display method provided in accordance with the present disclosure;
FIG. 2 is an example diagram of a target display space provided in accordance with the present disclosure;
FIG. 3 is an example diagram of the effect of a virtual try-on model in a target display space provided in accordance with the present disclosure;
FIG. 4 is an example diagram of a target display space corresponding to an evening-party application scenario provided in accordance with the present disclosure;
FIG. 5 is an example diagram of a target display space corresponding to a basketball-court application scenario provided in accordance with the present disclosure;
FIG. 6 is an example diagram of the effect of placing a virtual try-on model relative to a target object in a target display space, provided in accordance with the present disclosure;
FIG. 7 is a schematic structural diagram of an information display device provided in accordance with the present disclosure;
FIG. 8 is a block diagram of an electronic device used to implement an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Augmented reality (AR) is a technology that seamlessly fuses virtual information with the real world. Computer-generated virtual information such as text, images, three-dimensional models, music, and video is simulated and then applied to the real world using techniques including multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing; the two kinds of information complement each other, thereby augmenting the real world. In essence, AR places virtual 2D or 3D elements (text, pictures, models, audio, video, etc.) into the image captured by the device's camera, creating the illusion that the virtual elements really exist in the real world.
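The compositing step at the heart of AR — placing a virtual element into a camera frame — can be illustrated with a minimal sketch. This example is not part of the patent: the frame representation (nested lists of grayscale pixels), the function name, and the alpha-blending formula are assumptions made purely for illustration.

```python
# Minimal sketch of the AR compositing idea: blend a virtual element into a
# camera frame at a given offset. Frames are nested lists of grayscale pixels.

def composite(frame, element, top, left, alpha=1.0):
    """Overlay `element` onto `frame` at (top, left) with opacity `alpha`."""
    out = [row[:] for row in frame]  # copy so the input frame is untouched
    for i, row in enumerate(element):
        for j, pixel in enumerate(row):
            y, x = top + i, left + j
            # Skip pixels that fall outside the camera frame.
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = (1 - alpha) * out[y][x] + alpha * pixel
    return out
```

A production AR pipeline would of course operate on color frames and account for camera pose, but the overlay principle is the same.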
The disclosed embodiments apply augmented reality to a virtual try-on scenario. The virtual try-on model is a model rendered using a target clothing map: the user selects the clothing to try on, and the corresponding model is rendered according to that clothing to obtain the virtual try-on model. The model may be three-dimensional or two-dimensional.
Specifically, the virtual try-on model can be placed into a corresponding target display space, so that the user can see the effect of the corresponding clothing in that space; even if the user cannot actually try the clothing on, they can still see how it would look in the target display space. The target display space can be obtained by taking a photo or video of a real environment.
In contrast to the prior art, in which the real fitting effect cannot be seen, the present disclosure determines a target display space for a virtual try-on model, the virtual try-on model being a model rendered using a target clothing map, and displays the virtual try-on model in the target display space based on an augmented reality method. Because the model is rendered from the target clothing map and then displayed in the target display space through augmented reality, the try-on effect the user sees is more realistic, and the user can judge from the displayed effect whether the clothing meets their needs.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Example one
Fig. 1 illustrates an information display method provided by an embodiment of the present disclosure. As shown in fig. 1, the method includes:
step S101, determining a target display space for a virtual try-on model, wherein the virtual try-on model is a model rendered using a target clothing map;
and step S102, displaying the virtual try-on model in the target display space based on an augmented reality method.
Specifically, a target display space corresponding to the virtual try-on model is determined, and the virtual try-on model is then placed in the target display space through an augmented reality method, so that the try-on effect the user sees is more realistic and the user can judge from the displayed effect whether it meets their needs. The virtual try-on model is a model rendered using the target clothing map.
Fig. 2 is an example of a target display space for a virtual try-on model; the target display space may be a real environment space obtained by filming the surrounding environment. Fig. 3 shows the effect of placing the virtual try-on model in the target display space: the user can see a more realistic rendering of the model in that space.
An embodiment of the present disclosure provides a possible implementation, wherein determining the target display space of the virtual try-on model includes:
determining the target display space through a predetermined matching rule based on the determined target clothing.
Specifically, the user may select the corresponding target clothing through a web page, an app, or the like, and the target display space corresponding to the target clothing is then determined through a predetermined matching rule. In other words, the target display space is not unique: different target clothing corresponds to different target display spaces, so different clothing is displayed in different spaces. This differentiated display allows the user to see a more realistic try-on effect.
The predetermined matching rule may also take factors other than the target clothing into account, for example determining the target display space based on the user's location and the target clothing, or based on the season and the target clothing; the present disclosure is not limited in this respect. Further, the target display space may be selected using big-data analysis: if a certain display space is used most often at a given location or in a given season, that display space is taken as the target display space.
This embodiment thus addresses how to determine the target display space.
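A predetermined matching rule of the kind described above could be sketched as a category-to-scene lookup combined with usage statistics standing in for big-data analysis. The patent does not specify a concrete rule; every table entry, name, and count below is a hypothetical assumption made for illustration.

```python
# Hypothetical category -> applicable-scene table for target clothing.
SCENE_BY_CATEGORY = {
    "evening_dress": "evening_party",
    "basketball_jersey": "basketball_court",
    "football_kit": "football_pitch",
    "suit": "business_office",
}

# Usage counts per (scene, display space), standing in for big-data statistics
# about which space is used most often for a given scene.
SPACE_USAGE = {
    ("evening_party", "ballroom"): 120,
    ("evening_party", "rooftop_bar"): 45,
    ("basketball_court", "indoor_arena"): 80,
}

def match_display_space(category: str, usage=SPACE_USAGE) -> str:
    """Pick the most frequently used display space for the clothing's scene."""
    scene = SCENE_BY_CATEGORY.get(category, "default_scene")
    candidates = [(count, space) for (s, space), count in usage.items() if s == scene]
    if not candidates:
        return "default_space"
    # max over (count, space) tuples selects the highest usage count.
    return max(candidates)[1]
```

In a deployed system the statistics would come from aggregated usage logs rather than a hard-coded table, and location or season could be folded into the lookup key.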
An embodiment of the present disclosure provides a possible implementation, wherein determining the target display space through a predetermined matching rule based on the determined target clothing includes:
determining an applicable scene of the target clothing;
and determining the target display space based on the determined applicable scene of the target clothing.
Specifically, the applicable scene of the target clothing may be determined from its attribute information; for example, if the attribute information indicates an evening dress, a basketball jersey, a football kit, or a suit, the applicable scene may be an evening party, a basketball court, a football pitch, or a business occasion, respectively.
Specifically, the target display space may then be determined from the applicable scene; for example, display spaces for an evening party, a basketball court, a football pitch, or a business office can be selected for the corresponding scenes. Fig. 4 shows the display space for an evening-party scene, and fig. 5 the display space corresponding to a basketball court.
Specifically, multiple target display spaces can be provided for each applicable scene for the user to choose from, so that the user can view the try-on effect in the display space that suits them best.
In this embodiment, the target display space is determined according to the applicable scene of the target clothing, so the user can see the effect of the target clothing in a specific display space and thereby determine whether it meets their needs in the corresponding scene.
An embodiment of the present disclosure provides a possible implementation, wherein displaying the virtual try-on model in the target display space based on an augmented reality method includes:
determining a target placement position of the virtual try-on model based on the target display space;
and placing the virtual try-on model at the determined target placement position in the target display space for display.
Specifically, the placement position of the virtual try-on model in the target display space is determined first, and the virtual try-on model is then placed at that position, so that the user can see the try-on effect of the corresponding target clothing in the target display space.
Specifically, the placement position in the target display space can be determined by machine learning: a model can be trained in a supervised manner on two-dimensional or three-dimensional images of multiple display spaces together with annotated placement points for each space, yielding a placement-position determination model; the placement position of the virtual try-on model in the target display space is then obtained from this trained model.
This embodiment solves the problem of determining the placement position, i.e. where in the target display space the virtual try-on model should be placed.
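The supervised learn-then-predict flow described above can be illustrated with a toy stand-in. A real system would train a model (e.g. a neural network) on annotated 2D or 3D images of display spaces; this sketch replaces that with a 1-nearest-neighbour lookup over hypothetical room feature vectors, purely to show the structure of training pairs and prediction. All data and names are assumptions.

```python
import math

# Toy "placement model": training pairs of room feature vectors (imagine
# coordinates of detected furniture) and annotated placement points (x, y).
TRAIN = [
    ([1.0, 0.0, 3.0, 2.0], (2.0, 1.0)),
    ([4.0, 4.0, 0.0, 1.0], (3.5, 2.5)),
]

def predict_placement(features):
    """Return the annotated placement point of the most similar training room."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Nearest neighbour by Euclidean distance over the feature vectors.
    _, best = min(TRAIN, key=lambda pair: dist(pair[0], features))
    return best
```

The point of the sketch is the interface: display-space features in, placement coordinates out, learned from labelled examples rather than hand-written rules.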
An embodiment of the present disclosure provides a possible implementation, wherein determining the target placement position of the virtual try-on model based on the target display space includes:
identifying attribute information of at least one target object in the target display space, wherein the attribute information includes the position of the target object and the plane on which it rests;
and translating the position of the target object by a predetermined distance within that plane to determine the target placement position.
Specifically, the attribute information of at least one target object in the target display space may be determined through a visual image processing algorithm; the attribute information includes the position of the target object and the plane on which it rests. For example, as shown in fig. 6, the target object may be a sofa, and the attribute information may include the sofa's position in the target display space (as two-dimensional or three-dimensional coordinates) and the plane (the ground) on which it rests. The position may be the position of an edge point of the candidate box corresponding to the target object: when a target object is identified and enclosed in a box (e.g. a rectangle), the position of a point on the side of the rectangle (such as the point labelled 1 in fig. 6) may serve as the target object's position. Alternatively, the position may be the centre point of the candidate box: the centre of the rectangle (such as the point labelled 2 in fig. 6) may serve as the target object's position.
Specifically, the position of the target object may be translated by a predetermined distance (for example, a point on the side of the rectangular box is shifted by 10 cm or 20 cm), and a position within the translated area that lies on the target object's plane is selected as the target placement position.
For example, as shown in fig. 6, the attribute information of the target object (a sofa) in the target display space, i.e. the sofa's position (represented by the point labelled 1) and the plane on which it rests (the ground), is determined through a visual image processing algorithm; the sofa's position is then translated by a predetermined distance within that plane to obtain the target placement position, and the virtual try-on model is placed there.
This embodiment solves the problem of determining the target placement position. Moreover, because the placement position is determined relative to a target object, the user can judge the try-on effect of the target clothing with the target object as a reference, which improves the realism of the virtual try-on.
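The translation step above — shifting the target object's position a predetermined distance while staying on its supporting plane — can be expressed as a small geometric routine. The vector representation (3D points, plane given by its unit normal) is an assumption for illustration; the patent describes the operation only in prose.

```python
# Sketch of deriving the placement position by translating the target object's
# position a predetermined distance within its supporting plane.

def translate_on_plane(position, direction, distance, plane_normal):
    """Move `position` by `distance` along `direction` projected into the
    plane with unit normal `plane_normal` (e.g. the ground, normal (0,0,1))."""
    # Project the desired direction onto the plane so the model stays on it.
    dot = sum(d * n for d, n in zip(direction, plane_normal))
    in_plane = [d - dot * n for d, n in zip(direction, plane_normal)]
    norm = sum(c * c for c in in_plane) ** 0.5
    if norm == 0:
        raise ValueError("direction is perpendicular to the plane")
    unit = [c / norm for c in in_plane]
    return tuple(p + distance * u for p, u in zip(position, unit))
```

For example, shifting a sofa's anchor point half a metre sideways along the ground keeps the resulting placement point on the floor plane.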
An embodiment of the present disclosure provides a possible implementation, wherein the method further includes:
acquiring body parameters of a user;
and determining the try-on model based on the acquired body parameters of the user.
Specifically, the user may input body parameter information (such as height and bust/waist/hip measurements); a corresponding body model is then determined from those parameters, and the model is rendered with the clothing selected by the user to obtain the virtual try-on model.
In this embodiment, the displayed virtual try-on model is determined from the user's body parameters, which improves the realism of the virtual try-on.
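The parameter-to-model step might look like the following sketch. The base-model catalogue, the parameter names, and the uniform height-scaling approach are all illustrative assumptions, not details taken from the patent, which leaves the mapping unspecified.

```python
# Hypothetical catalogue of base body models keyed by reference height (cm).
BASE_MODELS = {
    "petite": 155.0,
    "regular": 170.0,
    "tall": 185.0,
}

def build_fit_model(height_cm: float, chest_cm: float, waist_cm: float, hip_cm: float):
    """Return (base model label, uniform scale factor) for rendering.

    The circumference parameters are accepted but unused here; a fuller
    implementation would deform the mesh per measurement.
    """
    # Pick the base model whose reference height is closest to the user's.
    label = min(BASE_MODELS, key=lambda k: abs(BASE_MODELS[k] - height_cm))
    scale = height_cm / BASE_MODELS[label]
    return label, round(scale, 3)
```

A real try-on pipeline would adjust individual mesh regions from the circumference measurements rather than applying one uniform scale.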
An embodiment of the present disclosure provides a possible implementation, wherein determining the try-on model based on the acquired body parameters of the user includes:
determining a reference try-on model based on the acquired body parameters;
acquiring the user's model correction parameters for the reference try-on model;
and correcting the reference try-on model based on the acquired correction parameters to determine the try-on model.
Specifically, a reference try-on model may be determined from the body parameters the user inputs. Since some parts of the reference model may not match the user's actual figure, the user can adjust the reference model to their own situation, producing corresponding model correction parameters; the reference model is then corrected with those parameters to obtain the final try-on model.
In this embodiment, the virtual try-on model is refined through the user's own corrections, further improving the realism of the virtual try-on effect.
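Applying the user's correction parameters to the reference model can be sketched as a simple per-measurement offset. Representing the model as a dict of named measurements is an assumption made for illustration; the patent does not fix the form of the correction parameters.

```python
# Sketch: apply user-supplied correction parameters to a reference model.
# The model is a dict of named measurements (hypothetical representation).

def correct_model(reference: dict, corrections: dict) -> dict:
    """Return a new model with each measurement offset by the user's
    correction (same units); measurements without a correction are kept."""
    return {
        name: value + corrections.get(name, 0.0)
        for name, value in reference.items()
    }
```

For instance, a user who finds the reference waist too wide would supply a negative waist correction, and the corrected model would be re-rendered with the selected clothing.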
Example two
An embodiment of the present disclosure provides an information display apparatus, as shown in fig. 7, including:
a first determining module 701, configured to determine a target display space for a virtual try-on model, wherein the virtual try-on model is a model rendered using a target clothing map;
and a display module 702, configured to display the virtual try-on model in the target display space based on an augmented reality method.
In the prior art, the real fitting effect cannot be seen. By contrast, the scheme provided by this embodiment determines a target display space for a virtual try-on model, the virtual try-on model being a model rendered using a target clothing map, and displays the virtual try-on model in the target display space based on an augmented reality method. Because the model is rendered from the target clothing map and then displayed in the target display space through augmented reality, the try-on effect the user sees is more realistic, and the user can judge from the displayed effect whether the clothing meets their needs.
An embodiment of the present disclosure provides a possible implementation, wherein the first determining module includes:
a first determining unit, configured to determine the target display space through a predetermined matching rule based on the determined target clothing.
An embodiment of the present disclosure provides a possible implementation, wherein the first determining unit is specifically configured to determine an applicable scene of the target clothing, and to determine the target display space based on the determined applicable scene.
An embodiment of the present disclosure provides a possible implementation, wherein the display module includes:
a second determining unit, configured to determine a target placement position of the virtual try-on model based on the target display space;
and a placement unit, configured to place the virtual try-on model at the determined target placement position in the target display space for display.
An embodiment of the present disclosure provides a possible implementation, wherein the second determining unit is specifically configured to identify attribute information of at least one target object in the target display space, the attribute information including the position of the target object and the plane on which it rests, and to determine the target placement position by translating the position of the target object by a predetermined distance within that plane.
An embodiment of the present disclosure provides a possible implementation, wherein the apparatus further includes:
an acquisition module, configured to acquire body parameters of a user;
and a second determining module, configured to determine the try-on model based on the acquired body parameters of the user.
An embodiment of the present disclosure provides a possible implementation, wherein the second determining module includes:
a third determining unit, configured to determine a reference try-on model based on the acquired body parameters of the user;
an acquisition unit, configured to acquire the user's model correction parameters for the reference try-on model;
and a fourth determining unit, configured to correct the reference try-on model based on the acquired correction parameters and determine the try-on model.
The apparatus embodiments achieve the same beneficial effects as the method embodiments described above, which are not repeated here.
In the technical solution of the present disclosure, the acquisition, storage, and use of users' personal information comply with the relevant laws and regulations and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
The electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as provided by the embodiments of the present disclosure.
In the prior art, the real fitting effect cannot be seen. By contrast, this electronic device determines a target display space for a virtual try-on model, the virtual try-on model being a model rendered using a target clothing map, and displays the virtual try-on model in the target display space based on an augmented reality method, so that the try-on effect the user sees is more realistic and the user can judge from the displayed effect whether the clothing meets their needs.
The readable storage medium is a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform a method as provided by an embodiment of the present disclosure.
In the prior art, the real fitting effect cannot be seen. By executing the stored instructions, a computer determines a target display space for a virtual try-on model rendered using a target clothing map and displays it in that space through augmented reality, so that the try-on effect the user sees is more realistic and the user can judge from the displayed effect whether the clothing meets their needs.
The computer program product comprising a computer program which, when executed by a processor, implements a method as shown in the first aspect of the disclosure.
In the prior art, the real fitting effect cannot be seen. When the computer program is executed by a processor, it determines a target display space for a virtual try-on model rendered using a target clothing map and displays it in that space through augmented reality, so that the try-on effect the user sees is more realistic and the user can judge from the displayed effect whether the clothing meets their needs.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The computing unit 801 executes the respective methods and processes described above, such as the information presentation method. For example, in some embodiments, the information presentation method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When loaded into the RAM 803 and executed by the computing unit 801, the computer program may perform one or more steps of the information presentation method described above. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the information presentation method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described herein may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. An information display method, comprising:
determining a target display space of a virtual fit model, wherein the virtual fit model is a fit model rendered by using a target clothing mapping;
and displaying the virtual fit model in the target display space based on an augmented reality method.
2. The method of claim 1, the determining a target display space of a virtual fit model, comprising:
and determining the target display space through a preset matching rule based on the determined target clothes.
3. The method of claim 2, the determining a target display space based on the determined target apparel through a predetermined matching rule, comprising:
determining an applicable scene of the target clothes;
determining the target display space based on the determined applicable scene of the target clothes.
4. The method according to any one of claims 1-3, wherein the displaying the virtual fit model in the target display space based on an augmented reality method comprises:
determining a target placement position of the virtual fit model based on the target display space;
and placing the virtual fit model into the target display space for display based on the determined target placement position of the virtual fit model.
5. The method of claim 4, wherein the determining a target placement position of the virtual fit model based on the target display space comprises:
identifying and determining attribute information of at least one target object in the target display space, wherein the attribute information comprises position information of the target object and a plane where the target object is located;
and determining the target placement position by translating the position of the target object by a preset distance in the plane where the target object is located.
6. The method of claim 1, wherein the method further comprises:
acquiring body parameters of a user;
determining the virtual fit model based on the obtained body parameters of the user.
7. The method of claim 6, wherein the determining a virtual fit model based on the obtained body parameters of the user comprises:
determining a reference fit model based on the obtained body parameters of the user;
obtaining, from the user, model correction parameters for the reference fit model;
and determining the virtual fit model by correcting the reference fit model based on the obtained model correction parameters.
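The placement rule in claims 4-5 — identify a target object's position and supporting plane, then translate that position by a preset distance within the plane to obtain the placement position — can be sketched as follows. The vector math (unit-normal plane representation, choice of in-plane direction) is an assumption for illustration; the patent does not prescribe a particular computation, only that the translation occur in the object's plane.

```python
import math

def placement_position(object_pos, plane_normal, preset_distance):
    """Translate object_pos by preset_distance along a direction lying in
    the plane defined by plane_normal (claims 4-5, hypothetical sketch)."""
    x, y, z = object_pos
    nx, ny, nz = plane_normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm
    # Pick a trial axis not nearly parallel to the normal, then project it
    # onto the plane to get an in-plane direction.
    tx, ty, tz = (1.0, 0.0, 0.0) if abs(nx) <= 0.9 else (0.0, 1.0, 0.0)
    dot = tx * nx + ty * ny + tz * nz
    dx, dy, dz = tx - dot * nx, ty - dot * ny, tz - dot * nz
    dnorm = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / dnorm, dy / dnorm, dz / dnorm
    return (x + preset_distance * dx,
            y + preset_distance * dy,
            z + preset_distance * dz)

# A target object on a floor plane (normal along +z): the placement position
# is offset horizontally and keeps the object's height.
print(placement_position((2.0, 3.0, 0.0), (0.0, 0.0, 1.0), 0.5))
```

In a real AR pipeline the object position and plane would come from the device's plane-detection results, and the preset distance would be chosen so the virtual fit model does not overlap the detected object.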
8. An information presentation device comprising:
the system comprises a first determination module, a second determination module and a third determination module, wherein the first determination module is used for determining a target display space of a virtual fit model, and the virtual fit model is a fit model rendered by using a target clothing mapping;
and the display module is used for displaying the virtual fit model in the target display space based on an augmented reality method.
9. The apparatus of claim 8, the first determination module, comprising:
and the first determining unit is used for determining the target display space through a preset matching rule based on the determined target clothes.
10. The apparatus according to claim 9, wherein the first determining unit is specifically configured to determine an applicable scene of the target apparel, and to determine the target display space based on the determined applicable scene of the target apparel.
11. The apparatus of any of claims 8-10, the display module, comprising:
the second determination unit is used for determining a target placement position of the virtual fit model based on the target display space;
and the placement unit is used for placing the virtual fit model into the target display space for display based on the determined target placement position of the virtual fit model.
12. The apparatus according to claim 11, wherein the second determining unit is specifically configured to identify and determine attribute information of at least one target object in the target display space, where the attribute information includes position information of the target object and a plane where the target object is located; and to determine the target placement position by translating the position of the target object by a preset distance in the plane where the target object is located.
13. The apparatus of claim 8, further comprising:
the acquisition module is used for acquiring body parameters of a user;
a second determination module for determining the virtual fit model based on the obtained body parameters of the user.
14. The apparatus of claim 13, wherein the second determination module comprises:
a third determination unit for determining a reference fit model based on the obtained body parameters of the user;
an obtaining unit for obtaining, from the user, model correction parameters for the reference fit model;
and a fourth determining unit for determining the virtual fit model by correcting the reference fit model based on the obtained model correction parameters.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202111117491.4A 2021-09-23 2021-09-23 Information display method and device, electronic equipment and readable storage medium Active CN113838217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111117491.4A CN113838217B (en) 2021-09-23 2021-09-23 Information display method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111117491.4A CN113838217B (en) 2021-09-23 2021-09-23 Information display method and device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113838217A true CN113838217A (en) 2021-12-24
CN113838217B CN113838217B (en) 2023-09-12

Family

ID=78969620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111117491.4A Active CN113838217B (en) 2021-09-23 2021-09-23 Information display method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113838217B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612643A (en) * 2022-03-07 2022-06-10 北京字跳网络技术有限公司 Image adjusting method and device for virtual object, electronic equipment and storage medium
CN115272564A (en) * 2022-07-15 2022-11-01 中关村科学城城市大脑股份有限公司 Action video transmitting method, device, equipment and medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895315A (en) * 2017-12-25 2018-04-10 戴睿 A kind of net purchase dressing system and method based on virtual reality
CN111640194A (en) * 2020-06-07 2020-09-08 上海商汤智能科技有限公司 AR scene image display control method and device, electronic equipment and storage medium
CN111739169A (en) * 2019-10-31 2020-10-02 北京京东尚科信息技术有限公司 Product display method, system, medium and electronic device based on augmented reality
CN112274922A (en) * 2020-11-19 2021-01-29 网易(杭州)网络有限公司 Virtual subject position adjusting method and device, storage medium and electronic equipment
WO2021018214A1 (en) * 2019-07-30 2021-02-04 Oppo广东移动通信有限公司 Virtual object processing method and apparatus, and storage medium and electronic device
WO2021073268A1 (en) * 2019-10-15 2021-04-22 北京市商汤科技开发有限公司 Augmented reality data presentation method and apparatus, electronic device, and storage medium
CN112734947A (en) * 2020-12-29 2021-04-30 贝壳技术有限公司 Method and device for 3D content delivery in VR house
CN112950789A (en) * 2021-02-03 2021-06-11 天津市爱美丽科技有限公司 Method, device and storage medium for displaying object through virtual augmented reality
CN113191843A (en) * 2021-04-28 2021-07-30 北京市商汤科技开发有限公司 Simulation clothing fitting method and device, electronic equipment and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107895315A (en) * 2017-12-25 2018-04-10 戴睿 A kind of net purchase dressing system and method based on virtual reality
WO2021018214A1 (en) * 2019-07-30 2021-02-04 Oppo广东移动通信有限公司 Virtual object processing method and apparatus, and storage medium and electronic device
WO2021073268A1 (en) * 2019-10-15 2021-04-22 北京市商汤科技开发有限公司 Augmented reality data presentation method and apparatus, electronic device, and storage medium
CN111739169A (en) * 2019-10-31 2020-10-02 北京京东尚科信息技术有限公司 Product display method, system, medium and electronic device based on augmented reality
CN111640194A (en) * 2020-06-07 2020-09-08 上海商汤智能科技有限公司 AR scene image display control method and device, electronic equipment and storage medium
CN112274922A (en) * 2020-11-19 2021-01-29 网易(杭州)网络有限公司 Virtual subject position adjusting method and device, storage medium and electronic equipment
CN112734947A (en) * 2020-12-29 2021-04-30 贝壳技术有限公司 Method and device for 3D content delivery in VR house
CN112950789A (en) * 2021-02-03 2021-06-11 天津市爱美丽科技有限公司 Method, device and storage medium for displaying object through virtual augmented reality
CN113191843A (en) * 2021-04-28 2021-07-30 北京市商汤科技开发有限公司 Simulation clothing fitting method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘燕; 淮永建; 帅立: "Design and implementation of an interactive virtual clothing display ***", 微计算机信息 (Microcomputer Information), no. 03
郑琳; 潘浪浪; 樊凯琪: "Research and implementation of a three-dimensional virtual magic mirror ***", 农家参谋 (Nong Jia Can Mou), no. 06

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612643A (en) * 2022-03-07 2022-06-10 北京字跳网络技术有限公司 Image adjusting method and device for virtual object, electronic equipment and storage medium
CN114612643B (en) * 2022-03-07 2024-04-12 北京字跳网络技术有限公司 Image adjustment method and device for virtual object, electronic equipment and storage medium
CN115272564A (en) * 2022-07-15 2022-11-01 中关村科学城城市大脑股份有限公司 Action video transmitting method, device, equipment and medium

Also Published As

Publication number Publication date
CN113838217B (en) 2023-09-12

Similar Documents

Publication Publication Date Title
CN113643412B (en) Virtual image generation method and device, electronic equipment and storage medium
US20170186219A1 (en) Method for 360-degree panoramic display, display module and mobile terminal
CN106846497B (en) Method and device for presenting three-dimensional map applied to terminal
CN111815755A (en) Method and device for determining shielded area of virtual object and terminal equipment
CN111862205B (en) Visual positioning method, device, equipment and storage medium
CN110473293B (en) Virtual object processing method and device, storage medium and electronic equipment
US20230267584A1 (en) Virtual clothing changing method and apparatus, and device and medium
CN113838217B (en) Information display method and device, electronic equipment and readable storage medium
CN113129450A (en) Virtual fitting method, device, electronic equipment and medium
CN110866977A (en) Augmented reality processing method, device and system, storage medium and electronic equipment
US10147240B2 (en) Product image processing method, and apparatus and system thereof
CN112529097B (en) Sample image generation method and device and electronic equipment
CN113870439A (en) Method, apparatus, device and storage medium for processing image
CN114723888B (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN112652057A (en) Method, device, equipment and storage medium for generating human body three-dimensional model
CN115861498A (en) Redirection method and device for motion capture
US10295403B2 (en) Display a virtual object within an augmented reality influenced by a real-world environmental parameter
CN114529647A (en) Object rendering method, device and apparatus, electronic device and storage medium
US20230260218A1 (en) Method and apparatus for presenting object annotation information, electronic device, and storage medium
CN115965735B (en) Texture map generation method and device
CN116755823A (en) Virtual exhibition hall loading method, device, equipment, storage medium and program product
CN115619986B (en) Scene roaming method, device, equipment and medium
CN113781653B (en) Object model generation method and device, electronic equipment and storage medium
CN115775300A (en) Reconstruction method of human body model, training method and device of human body reconstruction model
CN115690363A (en) Virtual object display method and device and head-mounted display device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant