WO2019000464A1 - Image display method, apparatus, storage medium and terminal - Google Patents

Image display method, apparatus, storage medium and terminal

Info

Publication number
WO2019000464A1
Authority
WO
WIPO (PCT)
Prior art keywords
human body
dimensional image
image model
module
target
Prior art date
Application number
PCT/CN2017/091369
Other languages
English (en)
French (fr)
Inventor
梁昆
Original Assignee
广东欧珀移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东欧珀移动通信有限公司
Priority to PCT/CN2017/091369 priority Critical patent/WO2019000464A1/zh
Priority to CN201780090737.9A priority patent/CN110622218A/zh
Publication of WO2019000464A1 publication Critical patent/WO2019000464A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • the present invention relates to the field of mobile communications, and in particular, to an image display method, apparatus, storage medium, and terminal.
  • Virtual fitting uses computer technology to let a virtual model try on clothing sold online in place of the real user. The try-on effect presented by the virtual model serves as a reference for the user's online purchase, making it easier to buy clothing that fits.
  • Current virtual fitting schemes mainly use virtual fitting models from a gallery: the user selects a virtual fitting model and clothes, and judges the clothing by how it looks on the model.
  • Because the user's body shape differs from the model's, this kind of fitting lacks real detail, and effects such as fabric and wrinkles are not considered, so it cannot meet the customer's actual fitting needs; the customer cannot see a realistic try-on effect.
  • Embodiments of the invention provide an image display method, device, storage medium and terminal that can improve the user's fitting effect.
  • an embodiment of the present invention provides an image display method, including:
  • collecting at least two human body images through a dual camera of a terminal; acquiring depth information of the human body according to the at least two human body images; establishing a three-dimensional image model of the human body according to the depth information of the human body; synthesizing a three-dimensional image model of a target virtual costume and the three-dimensional image model of the human body to obtain a target three-dimensional image model; and displaying the target three-dimensional image model.
  • an embodiment of the present invention further provides an image display device, including: a human body image acquisition module, a human body depth acquisition module, a human body model establishment module, a synthesis module, and a display module;
  • the human body image acquisition module is configured to collect at least two human body images through a dual camera of the terminal;
  • the human body depth acquisition module is configured to acquire depth information of the human body according to the at least two human body images
  • the human body model establishing module is configured to establish a three-dimensional image model of the human body according to the depth information of the human body;
  • the synthesizing module is configured to synthesize a three-dimensional image model of the target virtual costume and the three-dimensional image model of the human body to obtain a target three-dimensional image model;
  • the display module is configured to display the target three-dimensional image model.
  • the present invention also provides a storage medium, wherein the storage medium stores instructions that are loaded by a processor to perform the following steps:
  • collecting at least two human body images through a dual camera of a terminal; acquiring depth information of the human body according to the at least two human body images; establishing a three-dimensional image model of the human body according to the depth information of the human body; synthesizing a three-dimensional image model of a target virtual costume and the three-dimensional image model of the human body to obtain a target three-dimensional image model; and displaying the target three-dimensional image model.
  • an embodiment of the present invention further provides a terminal, including a memory and a processor, where the memory stores an instruction, and the processor loads the instruction to perform the following steps:
  • collecting at least two human body images through a dual camera of the terminal; acquiring depth information of the human body according to the at least two human body images; establishing a three-dimensional image model of the human body according to the depth information of the human body; synthesizing a three-dimensional image model of a target virtual costume and the three-dimensional image model of the human body to obtain a target three-dimensional image model; and displaying the target three-dimensional image model.
  • FIG. 1 is a schematic diagram of a scene structure of an image display method according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart diagram of an image display method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a dual camera framing range according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of image conversion of an image display method according to an embodiment of the present invention.
  • FIG. 5 is another schematic flowchart of an image display method according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an image display device according to an embodiment of the present invention.
  • FIG. 7 is another schematic structural diagram of an image display device according to an embodiment of the present invention.
  • FIG. 8 is still another schematic structural diagram of an image display device according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
  • FIG. 10 is a schematic structural diagram of another terminal according to an embodiment of the present invention.
  • The principles of the present invention can operate in many other general-purpose or special-purpose computing or communication environments and configurations.
  • Examples of well-known computing systems, environments, and configurations suitable for use with the present invention include, but are not limited to, hand-held phones, personal computers, servers, multi-processor systems, microcomputer-based systems, mainframe computers, and distributed computing environments that include any of the above systems or devices.
  • the present embodiment will be described from the perspective of an image display device.
  • The device may be integrated into a terminal, and the terminal may be a dual-camera electronic device such as a mobile internet device (for example, a smart phone or a tablet).
  • FIG. 1 is a schematic diagram of a scene structure of an image display method according to an embodiment of the present invention. The scene includes a terminal and a server, which establish a communication connection over the Internet.
  • The terminal may record input and output data during processing and then send the recorded data to the server; the terminal may send the data to the server over the WEB, or through a client program installed on the terminal.
  • The server can collect data sent by multiple terminals and process the received data with machine deep learning to perform the related functions.
  • The terminal and the server may adopt, but are not limited to, any of the following transmission protocols: HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), P2P (Peer to Peer), and P2SP (Peer to Server & Peer).
  • FIG. 2 is a schematic flowchart of an image display method according to an embodiment of the present invention.
  • the image display method in this embodiment includes:
  • Step S101: at least two human body images are collected by the dual camera of the terminal.
  • two different images of the same scene can be acquired by the dual cameras on the terminal.
  • The terminal receives a photographing instruction input by the user and opens the dual camera according to the instruction; the dual camera can then acquire two sets of image data. Because the two camera modules are mounted at different positions, their viewing ranges differ, so the images they acquire also differ.
  • the two camera modules may be the same or different.
  • For example, one of the two camera modules may be a 16-megapixel wide-angle lens with a viewing range a, and the other a 20-megapixel telephoto lens with a viewing range b.
  • the original image is obtained by taking a picture with a dual camera.
  • The original image may include the human body together with plants, buildings and other content; the human body image is then further determined from the original image.
  • There may be various methods for determining the human body image; for example, face recognition can identify the face in the image and thereby determine the human body image.
  • Face recognition is a biometric technology that identifies people based on facial feature information. A camera captures an image or video stream containing a face, the face is automatically detected and tracked in the image, and a series of related techniques is then applied to the detected face; this is usually called portrait recognition or face recognition.
  • Face recognition may detect the face in the original image using the Adaboost (Adaptive Boosting) algorithm based on Haar features, or using other algorithms; this embodiment places no limit on the algorithm.
  • the feature information of the plurality of human body images may be separately extracted, and the target human body image among the plurality of human body images may be determined according to the feature information.
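As a toy illustration of the Haar-like features on which Adaboost-based face detection rests, the sketch below computes an integral image and evaluates a two-rectangle feature. The function names and the miniature image are illustrative, not from the patent; a real detector (e.g. a trained cascade classifier) combines thousands of such features.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size w x h,
    using four lookups into the integral image."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar feature: left half minus right half of the window."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Any rectangle sum costs four lookups regardless of its size, which is what makes evaluating thousands of features per window feasible.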
  • Step S102 Acquire depth information of the human body according to at least two human body images.
  • The mobile phone has a dual camera: one is a main camera and the other a secondary camera. The preview image taken by the main camera is the main preview image, and that taken by the auxiliary camera is the auxiliary preview image. Because there is a certain distance or angle between the main camera and the auxiliary camera, there is a phase difference between the two preview images, and from this phase difference the depth of each pixel block, or even each pixel, can be obtained. FIG. 4 shows the original image and the depth image of the current scene, respectively.
  • The parallax information can be calculated according to the triangulation principle, and by conversion the parallax can represent the depth information of objects in the scene.
  • A depth image of the scene can also be obtained by taking a group of images of the same scene from different angles.
  • the method for acquiring the depth information of the human body may be various, for example, the depth information of the plurality of organ points of the human body may be acquired, and then the depth information of the human body is generated according to the depth information of the plurality of organ points.
  • an organ point refers to a contour point on a human body, and each human body organ may include one or more contour points.
  • the human body image may include a plurality of organ points, such as a nose contour point, a facial contour point, a mouth contour point, an eye contour point, an arm contour point, a contour point of the leg, and the like, which are not limited in this embodiment.
  • the organ points in the human body image may be positioned to obtain at least one organ point.
  • the terminal may perform positioning by using an algorithm such as ASM (Active Shape Model), AAM (Active Appearance Model) or SDM (Supervised Descent Method).
  • The depth information of an organ point indicates the distance from that organ point to the terminal and is negatively correlated with it: the larger the image depth value of the organ point, the closer the organ point is to the terminal; the smaller the image depth value, the farther the organ point is from the terminal.
  • A triangulation method may be used to determine the distance between an organ point and the terminal, and the depth information of the organ point is determined from that distance. After the depth information of the plurality of organ points is acquired, the depth information of the human body is generated from it.
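For a rectified stereo pair, the triangulation described above reduces to the classic relation Z = f·B/d. A minimal sketch, with illustrative parameter names (the patent does not specify units or calibration details):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: depth Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: separation of the two
    camera modules in metres; disparity_px: horizontal pixel shift of the
    same scene point between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Note the negative correlation the text describes: a larger disparity (the "depth value" measured in the image pair) means a smaller distance to the terminal.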
  • Step S103 establishing a three-dimensional image model of the human body according to the depth information of the human body.
  • Three-dimensional reconstruction (3D Reconstruction) can be used to establish the three-dimensional image model of the human body.
  • 3D reconstruction refers to establishing a mathematical model of a 3D object suitable for computer representation and processing. It is the basis for processing, manipulating and analyzing the object's properties in a computer environment, and a key technology for expressing the objective world in computer-based virtual reality.
  • Three-dimensional reconstruction builds a surface from reconstructed three-dimensional coordinates in a computer, so the points, lines and faces of the constructed object, and their relative positions, must be studied.
  • In the three-dimensional reconstruction of the human body, point cloud data of the human body can be obtained by measuring the relative depth between the human body and the terminal; the density of the point cloud directly determines the accuracy of the modeled object. From the point cloud data, the surface shape of the human body can be obtained. In addition, if the point cloud data includes color information at each data point, the color texture of the body surface can be generated from the neighborhood points of the cloud.
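Under the common pinhole-camera assumption, the point cloud mentioned above can be obtained by back-projecting each depth pixel into camera coordinates. A minimal sketch; the intrinsics fx, fy, cx, cy are assumed known from calibration, which the patent does not detail:

```python
def depth_map_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres, row-major list of rows) into 3-D
    camera coordinates with the pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # no depth measurement at this pixel
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

The denser the valid depth pixels, the denser the resulting point cloud, matching the text's remark that point-cloud density governs modelling accuracy.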
  • Step S104 synthesizing the three-dimensional image model of the target virtual costume and the three-dimensional image model of the human body to obtain a target three-dimensional image model.
  • key points in the three-dimensional image model of the target virtual costume can be extracted, and then the three-dimensional image model of the target virtual costume and the three-dimensional image model of the human body are synthesized according to the key points.
  • The three-dimensional image model of the target virtual costume may first be segmented into at least five model parts: the trunk, the left and right arms, and the left and right legs; at least the bottom point of the model is then obtained from each segmented model part.
  • Step S105 displaying a target three-dimensional image model.
  • image pre-processing may be performed on the image of the model, and the pre-processing may include image enhancement, smoothing processing, noise reduction processing, and the like.
  • The information in the image can be selectively enhanced or suppressed to improve the visual effect of the image, or to transform the image into a form more suitable for machine processing, so as to facilitate data extraction.
  • an image enhancement system can highlight the outline of an image with a high-pass filter, allowing the machine to measure the shape and perimeter of the outline.
  • Contrast broadening, logarithmic transformation, density stratification, and histogram equalization can all be used to change image grayscale and highlight details.
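Of the enhancement techniques just listed, histogram equalization is easy to sketch. The pure-Python version below (illustrative, not the patent's implementation) remaps gray levels through the cumulative distribution so details spread across the full intensity range:

```python
def equalize_histogram(img, levels=256):
    """Histogram equalization for a grayscale image given as a list of rows
    of integer pixel values in [0, levels)."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution, then remap so the darkest occupied bin maps to 0.
    cdf, total = [0] * levels, 0
    for i in range(levels):
        total += hist[i]
        cdf[i] = total
    cdf_min = min(c for c in cdf if c > 0)
    if n == cdf_min:            # flat image: nothing to spread
        return [row[:] for row in img]
    lut = [round((cdf[i] - cdf_min) * (levels - 1) / (n - cdf_min))
           for i in range(levels)]
    return [[lut[p] for p in row] for row in img]
```

A low-contrast pair such as gray levels 100 and 200 is stretched to the full 0–255 range, which is exactly the "highlight details" effect the text describes.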
  • the image display method provided by the embodiment of the present invention can acquire at least two human body images through the dual camera of the terminal, acquire depth information of the human body according to at least two human body images, and establish a three-dimensional image model of the human body according to the depth information of the human body.
  • the three-dimensional image model of the target virtual costume and the three-dimensional image model of the human body are combined to obtain a target three-dimensional image model, and the target three-dimensional image model is displayed.
  • Based on a dual-camera terminal, the invention can establish a three-dimensional image model of the user's body and synthesize it with the three-dimensional image model of the costume, so that the virtual fitting yields a realistic try-on effect.
  • FIG. 5 is another schematic flowchart of an image display method according to an embodiment of the present invention, including:
  • Step S201 collecting at least two human body images through the dual cameras of the terminal.
  • the method further includes:
  • one of the at least two human body images is selected as a reference image, and the other human body images are compressed according to the size of the reference image so that the at least two human body images are the same size.
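If "compressed according to the size of the reference image" is taken to mean resampling the other images to the reference image's pixel dimensions, a minimal nearest-neighbour sketch looks like the following (grayscale only; names are illustrative):

```python
def resize_nearest(img, ref_h, ref_w):
    """Nearest-neighbour resampling of a grayscale image (list of rows of
    pixel values) to the reference image's height and width, so the two
    views can be compared pixel-for-pixel."""
    h, w = len(img), len(img[0])
    return [[img[y * h // ref_h][x * w // ref_w] for x in range(ref_w)]
            for y in range(ref_h)]
```

Production code would normally use bilinear or bicubic interpolation for better quality; nearest-neighbour keeps the idea visible in a few lines.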
  • Step S202 obtaining a distance between two camera modules of the dual camera.
  • Step S203 generating disparity information between at least two human body images according to the distance.
  • Step S204 calculating depth information of the human body based on the parallax information.
  • The mobile phone has a dual camera device: one is a main camera and the other a secondary camera. The preview image taken by the main camera is the main preview image, and that taken by the auxiliary camera is the auxiliary preview image. Because the main and auxiliary camera modules are separated by a certain distance or angle, the two preview images have a certain phase difference, that is, parallax information, from which the depth of field of each pixel block, or even each pixel, can be obtained.
  • Step S205 establishing a three-dimensional image model of the human body according to the depth information of the human body.
  • a three-dimensional image model of the human body can be established using 3D Reconstruction.
  • the process of three-dimensional reconstruction of the human body can obtain the point cloud data of the human body by measuring the relative depth between the human body and the terminal, and the density of the point cloud directly determines the accuracy of the modeled object. According to these point cloud data, the surface shape of the human body can be obtained.
  • Step S206 acquiring first feature point location information in the three-dimensional image model of the target virtual costume.
  • The three-dimensional image model of the target virtual costume is acquired as follows: an image of the target virtual costume is acquired, depth information of the apparel is obtained from the image, and the three-dimensional image model of the target virtual costume is established from the apparel depth information.
  • The image of the target virtual costume can be obtained by photographing with the dual camera of the terminal, or downloaded from a network database. That is, before synthesizing the three-dimensional image model of the target virtual costume with the three-dimensional image model of the human body, the method further includes:
  • acquiring at least two virtual clothing images, obtaining the depth information of the clothing according to the at least two virtual clothing images, and establishing the three-dimensional image model of the target virtual clothing according to the clothing depth information.
  • Step S207 determining corresponding second feature point position information in the three-dimensional image model of the human body according to the first feature point position information.
  • Step S208 synthesizing the three-dimensional image model of the target virtual costume and the three-dimensional image model of the human body according to the first feature point position information and the second feature point position information to obtain a target three-dimensional image model.
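One way to realise the feature-point-based synthesis of steps S206–S208 is to fit a transform carrying the costume's first feature points onto the body's corresponding second feature points. The sketch below estimates only a uniform scale and translation (rotation omitted for brevity; a full solution would use e.g. the Kabsch algorithm); all names are illustrative, not from the patent:

```python
def fit_similarity(src, dst):
    """Least-squares uniform scale s and translation t mapping the costume
    feature points src onto the body feature points dst (3-D tuples)."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in range(3)]   # source centroid
    cd = [sum(p[i] for p in dst) / n for i in range(3)]   # target centroid
    num = sum((p[i] - cs[i]) * (q[i] - cd[i])
              for p, q in zip(src, dst) for i in range(3))
    den = sum((p[i] - cs[i]) ** 2 for p in src for i in range(3))
    s = num / den if den else 1.0
    t = [cd[i] - s * cs[i] for i in range(3)]
    return s, t

def apply_similarity(points, s, t):
    """Apply the fitted scale and translation to every model vertex."""
    return [tuple(s * p[i] + t[i] for i in range(3)) for p in points]
```

Fitting on the feature points and then applying the transform to every vertex of the costume model aligns the two models before they are merged into the target three-dimensional image model.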
  • Step S209 displaying a target three-dimensional image model.
  • The user may also adjust related parameters of the apparel, such as color and size, and the three-dimensional image model is adjusted accordingly. That is, after the target three-dimensional image model is displayed, the method can further include:
  • the adjusted target 3D image model is displayed.
  • The user may further query related information of the apparel, such as price, manufacturer, and production date. Taking price as an example, after the target three-dimensional image model is displayed, the method can also include:
  • The image display method provided by the embodiment of the present invention can acquire at least two human body images through the dual camera of the terminal, obtain the distance between the two camera modules of the dual camera, generate parallax information between the at least two human body images according to the distance, calculate the depth information of the human body according to the parallax information, and establish the three-dimensional image model of the human body according to the depth information. The first feature point position information in the three-dimensional image model of the target virtual costume is obtained, the corresponding second feature point position information in the three-dimensional image model of the human body is determined according to the first feature point position information, the two models are synthesized according to the first and second feature point position information to obtain the target three-dimensional image model, and the target three-dimensional image model is displayed.
  • Based on a dual-camera terminal, the invention can establish a three-dimensional image model of the user's body and synthesize it with the three-dimensional image model of the costume, so that the virtual fitting yields a realistic try-on effect.
  • an embodiment of the present invention further provides an apparatus based on the above image display method.
  • The terms used here have the same meanings as in the image display method above.
  • FIG. 6 is a schematic structural diagram of an image display apparatus according to an embodiment of the present invention.
  • The image display apparatus 30 includes a human body image acquisition module 301, a human body depth acquisition module 302, a human body model establishment module 303, a synthesis module 304, and a display module 305;
  • the human body image acquisition module 301 is configured to collect at least two human body images through the dual camera of the terminal;
  • the human body depth acquisition module 302 is configured to acquire depth information of the human body according to at least two human body images
  • the human body model building module 303 is configured to establish a three-dimensional image model of the human body according to the depth information of the human body;
  • a synthesis module 304 configured to synthesize a three-dimensional image model of the target virtual costume and a three-dimensional image model of the human body to obtain a target three-dimensional image model;
  • the display module 305 is configured to display a target three-dimensional image model.
  • the human body depth acquisition module 302 includes: a distance acquisition sub-module 3021, a disparity generation sub-module 3022, and a depth calculation sub-module 3023;
  • the distance obtaining sub-module 3021 is configured to acquire a distance between two camera modules of the dual camera;
  • a disparity generation sub-module 3022 configured to generate disparity information between at least two human body images according to the distance
  • the depth calculation sub-module 3023 is configured to calculate depth information of the human body according to the disparity information.
  • the image display device 30 may further include: a clothing image collection module 306, a clothing depth acquisition module 307, and an apparel model establishing module 308;
  • the costume image capturing module 306 is configured to acquire at least two virtual clothing images of the target virtual costume before the synthesizing module 304 synthesizes the three-dimensional image model of the target virtual costume with the three-dimensional image model of the human body;
  • a clothing depth obtaining module 307 configured to acquire depth information of the clothing according to at least two virtual clothing images
  • the costume model establishing module 308 is configured to establish a three-dimensional image model of the target virtual costume according to the clothing depth information.
  • the synthesizing module 304 includes: a first information acquiring submodule, a second information determining submodule, and a synthesizing submodule;
  • a first information acquiring submodule configured to acquire first feature point location information in a three-dimensional image model of the target virtual costume
  • a second information determining submodule configured to determine corresponding second feature point location information in the three-dimensional image model of the human body according to the first feature point location information
  • a synthesis submodule configured to synthesize the three-dimensional image model of the target virtual costume and the three-dimensional image model of the human body according to the first feature point location information and the second feature point location information.
  • the image display device 30 further includes: a selection module and a processing module;
  • the selection module is configured to select one of the at least two human body images as a reference image; the processing module is configured to compress the other human body images according to the size of the reference image, so that the at least two human body images are the same size.
  • the image display device 30 further includes: an adjustment instruction receiving module and an adjustment module;
  • the adjustment instruction receiving module is configured to receive the clothing parameter adjustment instruction input by the user after the display module 305 displays the target three-dimensional image model; the adjustment module is configured to adjust the target three-dimensional image model according to the clothing parameter adjustment instruction;
  • the display module 305 is further configured to display the adjusted target three-dimensional image model.
  • the image display device 30 further includes: a query instruction receiving module and a query module;
  • a query instruction receiving module configured to: after the display module 305 displays the target three-dimensional image model, receive a query instruction of the target virtual costume, and the query instruction carries attribute information of the target virtual costume;
  • the query module is configured to query the price of the target apparel according to the attribute information.
  • the image display device can acquire at least two human body images through the dual camera of the terminal, acquire depth information of the human body according to at least two human body images, and establish a three-dimensional image model of the human body according to the depth information of the human body.
  • the three-dimensional image model of the target virtual costume and the three-dimensional image model of the human body are combined to obtain a target three-dimensional image model, and the target three-dimensional image model is displayed.
  • Based on a dual-camera terminal, the invention can establish a three-dimensional image model of the user's body and synthesize it with the three-dimensional image model of the costume, so that the virtual fitting yields a realistic try-on effect.
  • the present invention also provides a storage medium storing instructions that are loaded by a processor to perform the following steps:
  • collecting at least two human body images through a dual camera of a terminal; acquiring depth information of the human body according to the at least two human body images; establishing a three-dimensional image model of the human body according to the depth information of the human body; synthesizing a three-dimensional image model of a target virtual costume and the three-dimensional image model of the human body to obtain a target three-dimensional image model; and displaying the target three-dimensional image model.
  • the embodiment of the present invention further provides a terminal, which may be a device such as a smart phone or a tablet computer.
  • the terminal 400 includes a processor 401 and a memory 402.
  • the processor 401 is electrically connected to the memory 402.
  • The processor 401 is the control center of the terminal 400. It connects the various parts of the entire terminal through various interfaces and lines, and performs the terminal's functions and processes data by running or loading applications stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the terminal as a whole.
  • The processor 401 in the terminal 400 loads the instructions corresponding to the processes of one or more applications into the memory 402, and runs the applications stored in the memory 402, so as to implement the following functions:
  • collecting at least two human body images through the dual camera of the terminal; acquiring depth information of the human body according to the at least two human body images; establishing a three-dimensional image model of the human body according to the depth information of the human body; synthesizing a three-dimensional image model of a target virtual costume and the three-dimensional image model of the human body to obtain a target three-dimensional image model; and displaying the target three-dimensional image model.
  • When the depth information of the human body is acquired according to the at least two human body images, the processor 401 is configured to perform the following steps:
  • the processor 401 is further configured to perform the following steps before synthesizing the three-dimensional image model of the target virtual costume and the three-dimensional image model of the human body:
  • When the three-dimensional image model of the target virtual costume is synthesized with the three-dimensional image model of the human body, the processor 401 is configured to perform the following steps:
  • the processor 401 is further configured to perform the following steps:
  • the other human body images are compressed according to the size of the reference image such that the at least two human body images are the same size.
  • FIG. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
  • The terminal 500 can include a radio frequency (RF) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a Wireless Fidelity (WiFi) module 507, a processor 508 including one or more processing cores, a power supply 509, and other components.
  • The structure shown in FIG. 10 does not constitute a limitation on the terminal, which may include more or fewer components than those illustrated, combine some components, or use a different arrangement of components.
  • The radio frequency circuit 501 can be used to transmit and receive information, or to receive and transmit signals during a call. Specifically, after downlink information from the base station is received, it is handed to one or more processors 508 for processing; uplink data is in turn sent to the base station.
  • the radio frequency circuit 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, and a low noise amplifier (LNA, Low Noise Amplifier), duplexer, etc.
  • Memory 502 can be used to store applications and data.
  • the application stored in the memory 502 contains executable code. Applications can form various functional modules.
  • the processor 508 executes various functional applications and data processing by running an application stored in the memory 502.
  • The memory 502 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, applications required for at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal (such as audio data, a phone book, etc.).
  • the input unit 503 can be configured to receive input digits, character information, or user characteristic information (such as fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
  • the input unit 503 can include a touch-sensitive surface as well as other input devices.
  • the display unit 504 can be used to display information entered by the user or information provided to the user, as well as various graphical user interfaces of the terminal, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display unit 504 can include a display panel.
  • the display panel can be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
  • the terminal may also include at least one sensor 505, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor may turn off the display panel and/or the backlight when the terminal moves to the ear.
  • as one kind of motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, can detect the magnitude and direction of gravity.
  • the terminal can also be configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and other sensors, which are not described herein again.
  • the audio circuit 506 can provide an audio interface between the user and the terminal through a speaker and a microphone.
  • the audio circuit 506 can convert received audio data into an electrical signal, which is transmitted to the speaker and converted by the speaker into a sound signal for output.
  • the microphone converts the collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data; after the audio data are processed by the processor 508, they are transmitted via the RF circuit 501 to, for example, another terminal, or output to the memory 502 for further processing.
  • Wireless Fidelity (WiFi) is a short-range wireless transmission technology.
  • the terminal can help users send and receive e-mail, browse web pages, and access streaming media through the WiFi module 507, which provides users with wireless broadband Internet access.
  • although FIG. 10 shows the WiFi module 507, it is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
  • the processor 508 is the control center of the terminal; it connects the various parts of the entire terminal through various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing applications stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the terminal as a whole.
  • the processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like.
  • the modem processor primarily handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 508.
  • the terminal also includes a power supply 509 (such as a battery) that supplies power to the various components.
  • preferably, the power supply is logically coupled to the processor 508 through a power management system, so that functions such as charging, discharging, and power-consumption management are handled by the power management system.
  • the power supply 509 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the terminal may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the processor 508 is further configured to: capture at least two human body images through the dual cameras of the terminal, acquire depth information of the human body from the at least two human body images, build a three-dimensional image model of the human body from the depth information, synthesize the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model, and display the target three-dimensional image model.
  • the foregoing modules may be implemented as independent entities, or combined arbitrarily and implemented as the same entity or several entities.
  • for the specific implementation of the foregoing modules, refer to the foregoing method embodiments; details are not described herein again.
  • the storage medium may include: a read only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
  • the image display method, device, storage medium and terminal provided by the embodiments of the present invention are described in detail.
  • the functional modules may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the principles and embodiments of the present invention are described herein with reference to specific examples; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope according to the idea of the present invention; accordingly, the contents of this specification should not be construed as limiting the present invention.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the present invention disclose an image display method, apparatus, storage medium, and terminal. The method includes: capturing at least two human body images through the dual cameras of a terminal; acquiring depth information of the human body from the human body images; building a three-dimensional image model of the human body from the depth information of the human body; synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and displaying the target three-dimensional image model.

Description

Image Display Method, Apparatus, Storage Medium, and Terminal — Technical Field
The present invention relates to the field of mobile communications, and in particular to an image display method, apparatus, storage medium, and terminal.
Background
Virtual fitting uses computer technology to let a virtual model try on garments sold online in place of the real user; the effect presented by the virtual model serves as a reference for purchasing garments online, making it easier for users to buy suitable clothing. Current virtual fitting schemes mainly use virtual fitting models from a gallery: the user selects a virtual model and a garment, and chooses clothing based on how the garment looks on that model. Because of the difference between the user's body shape and the model's, this fitting approach lacks realistic detail, and effects such as fabric and wrinkles are not considered, so it cannot meet customers' actual fitting needs. In this case, the customer cannot obtain a realistic try-on effect.
Summary of the Invention
Embodiments of the present invention provide an image display method, apparatus, storage medium, and terminal that can improve the user's virtual fitting experience.
In a first aspect, an embodiment of the present invention provides an image display method, including:
capturing at least two human body images through the dual cameras of a terminal;
acquiring depth information of the human body from the at least two human body images;
building a three-dimensional image model of the human body from the depth information of the human body;
synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and
displaying the target three-dimensional image model.
In a second aspect, an embodiment of the present invention further provides an image display apparatus, including: a human body image capture module, a human body depth acquisition module, a human body model building module, a synthesis module, and a display module;
the human body image capture module is configured to capture at least two human body images through the dual cameras of a terminal;
the human body depth acquisition module is configured to acquire depth information of the human body from the at least two human body images;
the human body model building module is configured to build a three-dimensional image model of the human body from the depth information of the human body;
the synthesis module is configured to synthesize the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model;
the display module is configured to display the target three-dimensional image model.
In a third aspect, the present invention further provides a storage medium storing instructions that, when loaded by a processor, perform the following steps:
capturing at least two human body images through the dual cameras of a terminal;
acquiring depth information of the human body from the at least two human body images;
building a three-dimensional image model of the human body from the depth information of the human body;
synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and
displaying the target three-dimensional image model.
In a fourth aspect, an embodiment of the present invention further provides a terminal, including a memory and a processor, wherein the memory stores instructions and the processor loads the instructions to perform the following steps:
capturing at least two human body images through the dual cameras of a terminal;
acquiring depth information of the human body from the at least two human body images;
building a three-dimensional image model of the human body from the depth information of the human body;
synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and
displaying the target three-dimensional image model.
Brief Description of the Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a scene architecture of the image display method provided by an embodiment of the present invention.
FIG. 2 is a schematic flowchart of the image display method provided by an embodiment of the present invention.
FIG. 3 is a schematic diagram of the viewing ranges of dual cameras provided by an embodiment of the present invention.
FIG. 4 is a schematic diagram of image conversion in the image display method provided by an embodiment of the present invention.
FIG. 5 is another schematic flowchart of the image display method provided by an embodiment of the present invention.
FIG. 6 is a schematic structural diagram of an image display apparatus provided by an embodiment of the present invention.
FIG. 7 is another schematic structural diagram of the image display apparatus provided by an embodiment of the present invention.
FIG. 8 is yet another schematic structural diagram of the image display apparatus provided by an embodiment of the present invention.
FIG. 9 is a schematic structural diagram of a terminal provided by an embodiment of the present invention.
FIG. 10 is a schematic structural diagram of another terminal provided by an embodiment of the present invention.
Detailed Description
Referring to the drawings, in which identical reference numerals denote identical components, the principles of the present invention are illustrated as implemented in a suitable computing environment. The following description is based on the illustrated specific embodiments of the present invention, which should not be regarded as limiting other specific embodiments not detailed herein.
In the following description, specific embodiments of the present invention are described with reference to steps and symbols executed by one or more computers, unless otherwise stated. These steps and operations are therefore referred to several times as being computer-executed; computer execution as used herein includes operations by a computer processing unit on electronic signals representing data in a structured form. Such operations transform the data or maintain them at locations in the computer's memory system, which may reconfigure or otherwise alter the operation of the computer in a manner well known to those skilled in the art. The data structures maintained by the data are physical locations in the memory with particular properties defined by the data format. Although the principles of the present invention are described in the above terms, this is not meant as a limitation; those skilled in the art will appreciate that the various steps and operations described below may also be implemented in hardware.
The principles of the present invention operate in many other general-purpose or special-purpose computing and communication environments or configurations. Well-known examples of computing systems, environments, and configurations suitable for the present invention include, but are not limited to, handheld phones, personal computers, servers, multiprocessor systems, microcomputer-based systems, mainframe computers, and distributed computing environments including any of the above systems or devices.
Detailed descriptions are given below.
This embodiment is described from the perspective of an image display apparatus, which may be integrated in a terminal. The terminal may be an electronic device with dual cameras, such as a mobile Internet device (for example, a smartphone or a tablet computer).
Referring to FIG. 1, FIG. 1 is a schematic diagram of a scene architecture of the image display method provided by an embodiment of the present invention, including a terminal and a server that establish a communication connection over the Internet.
When the user performs processing through the on-screen image display function of the terminal, the terminal may record the input and output data of the processing and then send the recorded data to the server. The terminal may send the data to the server via the web, or through a client program installed in the terminal. The server may collect data sent by multiple terminals and process the received data based on machine deep learning, so as to perform related functions.
The terminal and the server may use, but are not limited to, any of the following transfer protocols: HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), P2P (Peer to Peer), P2SP (Peer to Server & Peer), and the like.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of an image display method provided by an embodiment of the present invention. The image display method of this embodiment includes:
Step S101: capture at least two human body images through the dual cameras of the terminal.
In this embodiment of the present invention, two different images of the same scene may be obtained through the dual cameras on the terminal. For example, the terminal receives a photographing instruction input by the user and turns on the dual cameras according to the instruction, at which point the dual cameras can obtain two streams of image data.
As shown in FIG. 3, because the two camera modules of the dual cameras are located at different positions, their viewing ranges differ, so the images obtained by the two camera modules also differ. The two camera modules may be identical or different; for example, one may be a 16-megapixel wide-angle lens with viewing range a and the other a 20-megapixel telephoto lens with viewing range b. Further examples are omitted here.
In one embodiment, original images are obtained by shooting with the dual cameras. An original image may contain multiple parts such as a human body, plants, and buildings, so the human body image may be further identified within the two original images obtained through the dual cameras. There are multiple ways to identify the human body image, for example, by using face recognition technology to identify the face in the image and then determining the human body image. Face recognition is a biometric technology that identifies a person based on facial feature information: a camera collects images or video streams containing faces, automatically detects and tracks faces in the images, and then applies a series of face-related techniques to the detected faces; it is also commonly called portrait recognition or facial recognition. In addition, face detection in the original image may use the Adaboost (Adaptive Boosting) algorithm based on Haar features, or other algorithms, which is not limited in this embodiment.
In one embodiment, if multiple human body images exist in the images captured through the dual cameras, feature information of the multiple human body images may be extracted separately, and a target human body image among them may be determined from the feature information.
Step S102: acquire depth information of the human body from the at least two human body images.
In this embodiment of the present invention, the mobile phone has dual cameras, one serving as the primary camera and the other as the secondary camera; the preview captured by the primary camera is the primary preview, and the preview captured by the secondary camera is the secondary preview. Because there is a certain distance or angle between the primary and secondary cameras, there is a certain phase difference between the two previews, from which the depth of field of each pixel block, or even each pixel, can be obtained. FIG. 4 shows the original image and the depth image of the current scene. After the dual cameras simultaneously obtain two images of the same scene, a stereo matching algorithm finds the corresponding pixels in the two images, and disparity information can then be computed by triangulation; the disparity information, after conversion, can characterize the depth information of objects in the scene. Based on stereo matching, a depth image of a scene can also be obtained by shooting a set of images of the same scene from different angles.
In one embodiment, there are multiple ways to acquire the depth information of the human body. For example, depth information of multiple organ points of the human body may be acquired, and the depth information of the human body may then be generated from the depth information of these organ points. An organ point is a contour point on a human organ, and each organ may include one or more contour points. The human body image may include multiple organ points, such as nose contour points, face contour points, mouth contour points, eye contour points, arm contour points, and leg contour points, which is not limited in this embodiment.
After the at least two human body images are obtained, the organ points in the human body images may be located to obtain at least one organ point. For organ-point localization, the terminal may use algorithms such as ASM (Active Shape Model), AAM (Active Appearance Model), or SDM (Supervised Descent Method). After obtaining the organ points in the human body image, the terminal can obtain, through the dual cameras, the depth information of each located organ point.
The depth information of an organ point indicates the distance from the organ point to the terminal and is negatively correlated with that distance; that is, a larger image depth value of an organ point indicates that the organ point is closer to the terminal, and a smaller image depth value indicates that it is farther from the terminal.
In one embodiment, when the terminal photographs an organ point with the dual cameras, because the angles between the organ point and the two cameras differ, triangulation can be used to determine the distance between the organ point and the terminal, from which the depth information of the organ point can be determined. After the depth information of multiple organ points is obtained, the depth information of the human body is further generated.
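For rectified stereo cameras, the disparity-to-depth conversion described above reduces to the standard triangulation relation Z = f·B/d, where f is the focal length in pixels, B is the baseline between the two camera modules, and d is the per-pixel disparity. A minimal sketch; the focal length, baseline, and disparity values below are illustrative assumptions, not figures from this document:

```python
def disparity_to_depth(disparity, focal_length_px, baseline_m):
    """Convert a disparity map (pixels) to a depth map (meters)
    using the stereo triangulation relation Z = f * B / d."""
    depth = []
    for row in disparity:
        # Zero disparity means the point is at infinity.
        depth.append([focal_length_px * baseline_m / d if d > 0 else float("inf")
                      for d in row])
    return depth

# Illustrative values: 1000 px focal length, 12 mm baseline.
disparity_map = [[20.0, 10.0], [5.0, 0.0]]
depth_map = disparity_to_depth(disparity_map, focal_length_px=1000.0, baseline_m=0.012)
# Larger disparity -> closer object: 20 px of disparity maps to 0.6 m.
```

Note the inverse relation: halving the disparity doubles the computed depth, which is why depth resolution degrades quickly for distant objects.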
Step S103: build a three-dimensional image model of the human body from the depth information of the human body.
In this embodiment, 3D reconstruction technology may be used to build the three-dimensional image model of the human body. Three-dimensional reconstruction refers to building a mathematical model of a three-dimensional object suitable for computer representation and processing; it is the basis for processing, operating on, and analyzing the object's properties in a computer environment, and a key technology for building virtual reality that expresses the objective world in a computer. Three-dimensional reconstruction constructs surfaces by rebuilding three-dimensional coordinates in a computer, which requires studying the points, lines, and surfaces of the constructed object and their mutual positional relationships. The three-dimensional reconstruction of the human body can obtain point cloud data of the human body by measuring the relative depth between the human body and the terminal; the density of the point cloud directly determines the precision of the modeled object. The point cloud data are then filled in and repaired, and finally the surface shape of the human body is obtained. In addition, if the point cloud data of the human body contain color information at the data point positions, a color texture of the human body surface can be generated from the neighborhood points of the point cloud.
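The point-cloud step described above can be sketched by back-projecting each valid depth sample through a pinhole camera model: X = (u − cx)·Z/fx, Y = (v − cy)·Z/fy. The intrinsic parameters (fx, fy, cx, cy) and the tiny depth map below are illustrative assumptions:

```python
def depth_to_point_cloud(depth_map, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    points = []
    for v, row in enumerate(depth_map):
        for u, z in enumerate(row):
            if z > 0:  # skip invalid (zero) depth samples
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

# 2x2 depth map; principal point placed at the image corner for simplicity.
cloud = depth_to_point_cloud([[1.0, 1.0], [0.0, 2.0]], fx=1.0, fy=1.0, cx=0.0, cy=0.0)
# The zero-depth pixel is dropped, so only three 3D points remain.
```

A denser depth map yields a denser cloud, matching the document's remark that point-cloud density directly determines modeling precision.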
Step S104: synthesize the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model.
In this embodiment of the present invention, key points in the three-dimensional image model of the target virtual garment may be extracted, and the garment model and the human body model may then be synthesized according to the key points. For example, the three-dimensional image model of the target virtual garment may first be segmented into at least five model parts: the torso, the left and right arms, and the left and right legs. From the segmented model parts, at least six key points of the model are obtained: the crotch point, the left and right shoulder points, the left and right armpit points, and the neck reference point. The three-dimensional image model of the target virtual garment and the three-dimensional image model of the human body are then synthesized according to these six key points to obtain the target three-dimensional image model.
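One way to realize the key-point-based alignment above is to estimate a uniform scale and translation that best maps the garment model's key points onto the corresponding body key points. The sketch below is a least-squares fit under the simplifying assumption that no rotation is needed; the key-point coordinates are illustrative, not taken from the patent:

```python
def fit_scale_translation(src, dst):
    """Least-squares scale s and translation t so that s*src + t ~= dst,
    over matched 3D key-point pairs (rotation omitted for brevity)."""
    n = len(src)
    mean_s = [sum(p[i] for p in src) / n for i in range(3)]
    mean_d = [sum(p[i] for p in dst) / n for i in range(3)]
    # Scale is the ratio of centered cross- and auto-correlations.
    num = sum((p[i] - mean_s[i]) * (q[i] - mean_d[i])
              for p, q in zip(src, dst) for i in range(3))
    den = sum((p[i] - mean_s[i]) ** 2 for p in src for i in range(3))
    s = num / den
    t = [mean_d[i] - s * mean_s[i] for i in range(3)]
    return s, t

# Garment key points (e.g., shoulder and crotch points) vs. body key points.
garment_pts = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
body_pts = [(1, 1, 0), (3, 1, 0), (1, 5, 0)]
scale, trans = fit_scale_translation(garment_pts, body_pts)
```

A production pipeline would typically also estimate rotation (e.g., a full similarity transform) and then deform the garment mesh, but the scale-plus-translation fit already conveys how matched key points anchor the two models.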
Step S105: display the target three-dimensional image model.
In one embodiment, before the target three-dimensional image model is displayed, image preprocessing may be performed on the image of the model; the preprocessing may include image enhancement, smoothing, noise reduction, and so on. For example, after the image of the target three-dimensional image model is obtained, information in the image may be selectively enhanced or suppressed to improve the visual effect of the image, or the image may be converted to a form more suitable for machine processing to facilitate data extraction or recognition. For instance, an image enhancement system may use a high-pass filter to emphasize the contour lines of an image so that a machine can measure the shape and perimeter of the contours. Many image enhancement methods exist: contrast stretching, logarithmic transformation, density slicing, and histogram equalization can all be used to change image tone and highlight details.
It can be seen from the above that the image display method provided by the embodiments of the present invention can capture at least two human body images through the dual cameras of a terminal, acquire depth information of the human body from the at least two human body images, build a three-dimensional image model of the human body from the depth information, synthesize the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model, and display the target three-dimensional image model. Based on a dual-camera terminal, the present invention can build a three-dimensional image model of the user's body and synthesize it with the three-dimensional image model of a garment, so that virtual fitting performed on this basis can achieve a realistic try-on effect.
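Histogram equalization, the last enhancement technique named above, can be sketched for 8-bit grayscale values as mapping each gray level through the normalized cumulative distribution function of the image histogram (a pure-Python illustration on a flat list of pixels):

```python
def equalize_histogram(pixels, levels=256):
    """Classic histogram equalization: remap each gray level through the
    normalized CDF of the image histogram to stretch contrast."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    total = len(pixels)
    lut = [round((c - cdf_min) / (total - cdf_min) * (levels - 1))
           if total > cdf_min else 0
           for c in cdf]
    return [lut[p] for p in pixels]

# A low-contrast strip of pixels gets stretched across the full range.
flat = equalize_histogram([100, 100, 101, 102])
```

After equalization the three distinct input levels (100, 101, 102) span the full 0–255 output range, which is exactly the detail-highlighting effect the text attributes to this technique.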
Following the description of the previous embodiment, the image display method of the present invention is further described below.
Referring to FIG. 5, FIG. 5 is another schematic flowchart of the image display method provided by an embodiment of the present invention, including:
Step S201: capture at least two human body images through the dual cameras of the terminal.
Because the two camera modules of the dual cameras are located at different positions, the sizes of the at least two images obtained through them may also differ. In this embodiment, the at least two images may be processed so that they have the same size. In one embodiment, after at least two human body images of the user are captured through the dual cameras of the terminal, the method further includes:
selecting the smallest of the at least two human body images as a reference image;
compressing the other human body images according to the size of the reference image so that the at least two human body images have the same size.
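The size-normalization steps above can be sketched with nearest-neighbor resampling; a real pipeline would typically use bilinear or area filtering, and the pixel values below are illustrative:

```python
def resize_nearest(image, new_h, new_w):
    """Nearest-neighbor resize of a 2D image (list of rows) to new_h x new_w."""
    old_h, old_w = len(image), len(image[0])
    return [[image[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

# Compress a 4x4 capture down to the 2x2 size of the smallest (reference) image.
big = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
small = resize_nearest(big, 2, 2)
```

With both captures at the reference size, per-pixel correspondence search (the stereo matching of the later steps) can proceed row by row.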
Step S202: acquire the distance between the two camera modules of the dual cameras.
Step S203: generate disparity information between the at least two human body images from the distance.
Step S204: compute the depth information of the human body from the disparity information.
The mobile phone has a dual-camera arrangement, one serving as the primary camera and the other as the secondary camera; the preview captured by the primary camera is the primary preview, and the preview captured by the secondary camera is the secondary preview. Because there is a certain distance or angle between the primary and secondary camera modules, there is a certain phase difference, i.e., disparity information, between the primary preview and the secondary preview, from which the depth of field of each pixel block, or even each pixel, can be obtained.
Step S205: build a three-dimensional image model of the human body from the depth information of the human body.
In one embodiment, 3D reconstruction technology may be used to build the three-dimensional image model of the human body. The three-dimensional reconstruction of the human body can obtain point cloud data of the human body by measuring the relative depth between the human body and the terminal; the density of the point cloud directly determines the precision of the modeled object. The point cloud data are then filled in and repaired, and finally the surface shape of the human body is obtained.
Step S206: acquire first feature point position information in the three-dimensional image model of the target virtual garment.
In one embodiment, before this step, the three-dimensional image model of the target virtual garment also needs to be acquired. Specifically, images of the target virtual garment may be obtained, depth information of the garment may be acquired from the images, and the three-dimensional image model of the target virtual garment may be built from the garment depth information. The images of the target virtual garment may be captured with the dual cameras of the terminal, downloaded from a network database, and so on. That is, before the three-dimensional image model of the target virtual garment is synthesized with the three-dimensional image model of the human body, the method further includes:
acquiring at least two virtual garment images of the target virtual garment;
acquiring depth information of the garment from the at least two virtual garment images, and building the three-dimensional image model of the target virtual garment from the garment depth information.
Step S207: determine corresponding second feature point position information in the three-dimensional image model of the human body from the first feature point position information.
Step S208: synthesize the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body according to the first feature point position information and the second feature point position information, to obtain a target three-dimensional image model.
Step S209: display the target three-dimensional image model.
In one embodiment, after the target three-dimensional image model is displayed, the user may also adjust related parameters of the garment, such as color and size, and the three-dimensional image model is adjusted accordingly. That is, after the target three-dimensional image model is displayed, the method may further include:
receiving a garment parameter adjustment instruction input by the user;
adjusting the target three-dimensional image model according to the garment adjustment instruction;
displaying the adjusted target three-dimensional image model.
In one embodiment, after viewing the target three-dimensional image model, the user may further query related information about the garment, such as price, manufacturer, and production date. Taking price as an example, after the target three-dimensional image model is displayed, the method may further include:
receiving a query instruction from the user for the target virtual garment, the query instruction carrying attribute information of the target virtual garment;
querying the price of the target garment from the attribute information.
It can be seen from the above that the image display method provided by the embodiments of the present invention can capture at least two human body images through the dual cameras of a terminal, acquire the distance between the two camera modules of the dual cameras, generate disparity information between the at least two human body images from the distance, compute the depth information of the human body from the disparity information, build a three-dimensional image model of the human body from the depth information, acquire first feature point position information in the three-dimensional image model of the target virtual garment, determine corresponding second feature point position information in the three-dimensional image model of the human body from the first feature point position information, and synthesize the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body according to the first and second feature point position information to obtain and display a target three-dimensional image model. Based on a dual-camera terminal, the present invention can build a three-dimensional image model of the user's body and synthesize it with the three-dimensional image model of a garment, so that virtual fitting performed on this basis can achieve a realistic try-on effect.
To facilitate better implementation of the image display method provided by the embodiments of the present invention, an embodiment of the present invention further provides an apparatus based on the above image display method. The meanings of the terms are the same as in the above image display method; for specific implementation details, refer to the description in the method embodiments.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of an image display apparatus provided by an embodiment of the present invention. The image display apparatus 30 includes: a human body image capture module 301, a human body depth acquisition module 302, a human body model building module 303, a synthesis module 304, and a display module 305;
the human body image capture module 301 is configured to capture at least two human body images through the dual cameras of a terminal;
the human body depth acquisition module 302 is configured to acquire depth information of the human body from the at least two human body images;
the human body model building module 303 is configured to build a three-dimensional image model of the human body from the depth information of the human body;
the synthesis module 304 is configured to synthesize the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model;
the display module 305 is configured to display the target three-dimensional image model.
In one embodiment, as shown in FIG. 7, in the image display apparatus 30, the human body depth acquisition module 302 includes: a distance acquisition submodule 3021, a disparity generation submodule 3022, and a depth computation submodule 3023;
the distance acquisition submodule 3021 is configured to acquire the distance between the two camera modules of the dual cameras;
the disparity generation submodule 3022 is configured to generate disparity information between the at least two human body images from the distance;
the depth computation submodule 3023 is configured to compute the depth information of the human body from the disparity information.
In one embodiment, as shown in FIG. 8, the image display apparatus 30 may further include: a garment image capture module 306, a garment depth acquisition module 307, and a garment model building module 308;
the garment image capture module 306 is configured to acquire at least two virtual garment images of the target virtual garment before the synthesis module 304 synthesizes the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body;
the garment depth acquisition module 307 is configured to acquire depth information of the garment from the at least two virtual garment images;
the garment model building module 308 is configured to build the three-dimensional image model of the target virtual garment from the garment depth information.
In one embodiment, the synthesis module 304 includes: a first information acquisition submodule, a second information determination submodule, and a synthesis submodule;
the first information acquisition submodule is configured to acquire first feature point position information in the three-dimensional image model of the target virtual garment;
the second information determination submodule is configured to determine corresponding second feature point position information in the three-dimensional image model of the human body from the first feature point position information;
the synthesis submodule is configured to synthesize the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body according to the first feature point position information and the second feature point position information.
In one embodiment, the image display apparatus 30 further includes: a selection module and a processing module;
the selection module is configured to select the smallest of the at least two human body images as a reference image after the human body image capture module 301 captures the at least two human body images through the dual cameras of the terminal;
the processing module is configured to compress the other human body images according to the size of the reference image so that the at least two human body images have the same size.
In one embodiment, the image display apparatus 30 further includes: an adjustment instruction receiving module and an adjustment module;
the adjustment instruction receiving module is configured to receive a garment parameter adjustment instruction input by the user after the display module 305 displays the target three-dimensional image model;
the adjustment module is configured to adjust the target three-dimensional image model according to the garment adjustment instruction;
the display module 305 is further configured to display the adjusted target three-dimensional image model.
In one embodiment, the image display apparatus 30 further includes: a query instruction receiving module and a query module;
the query instruction receiving module is configured to receive a query instruction from the user for the target virtual garment after the display module 305 displays the target three-dimensional image model, the query instruction carrying attribute information of the target virtual garment;
the query module is configured to query the price of the target garment from the attribute information.
It can be seen from the above that the image display apparatus provided by the embodiments of the present invention can capture at least two human body images through the dual cameras of a terminal, acquire depth information of the human body from the at least two human body images, build a three-dimensional image model of the human body from the depth information, synthesize the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model, and display the target three-dimensional image model. Based on a dual-camera terminal, the present invention can build a three-dimensional image model of the user's body and synthesize it with the three-dimensional image model of a garment, so that virtual fitting performed on this basis can achieve a realistic try-on effect.
The present invention further provides a storage medium storing instructions that, when loaded by a processor, perform the following steps:
capturing at least two human body images through the dual cameras of a terminal;
acquiring depth information of the human body from the at least two human body images;
building a three-dimensional image model of the human body from the depth information of the human body;
synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and
displaying the target three-dimensional image model.
An embodiment of the present invention further provides a terminal, which may be a device such as a smartphone or a tablet computer. As shown in FIG. 9, the terminal 400 includes a processor 401 and a memory 402, which are electrically connected.
The processor 401 is the control center of the terminal 400. It connects the various parts of the entire terminal through various interfaces and lines, and performs the various functions of the terminal and processes data by running or loading applications stored in the memory 402 and calling data stored in the memory 402, thereby monitoring the terminal as a whole.
In this embodiment, the processor 401 in the terminal 400 loads the instructions corresponding to the processes of one or more applications into the memory 402 according to the following steps, and runs the applications stored in the memory 402, thereby implementing the following functions:
capturing at least two human body images through the dual cameras of the terminal;
acquiring depth information of the human body from the at least two human body images;
building a three-dimensional image model of the human body from the depth information of the human body;
synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and
displaying the target three-dimensional image model.
In one embodiment, when acquiring the depth information of the human body from the at least two human body images, the processor 401 is configured to perform the following steps:
acquiring the distance between the two camera modules of the dual cameras;
generating disparity information between the at least two human body images from the distance;
computing the depth information of the human body from the disparity information.
In one embodiment, before synthesizing the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body, the processor 401 is further configured to perform the following steps:
acquiring at least two virtual garment images of the target virtual garment;
acquiring depth information of the garment from the at least two virtual garment images, and building the three-dimensional image model of the target virtual garment from the garment depth information.
In one embodiment, when synthesizing the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body, the processor 401 is configured to perform the following steps:
acquiring first feature point position information in the three-dimensional image model of the target virtual garment;
determining corresponding second feature point position information in the three-dimensional image model of the human body from the first feature point position information;
synthesizing the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body according to the first feature point position information and the second feature point position information.
In one embodiment, after capturing at least two human body images of the user through the dual cameras of the terminal, the processor 401 is further configured to perform the following steps:
selecting the smallest of the at least two human body images as a reference image;
compressing the other human body images according to the size of the reference image so that the at least two human body images have the same size.
In one embodiment, referring to FIG. 10, FIG. 10 is a schematic structural diagram of a terminal provided by an embodiment of the present invention. The terminal 500 may include a radio frequency (RF) circuit 501, a memory 502 including one or more computer readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a Wireless Fidelity (WiFi) module 507, a processor 508 including one or more processing cores, a power supply 509, and other components. Those skilled in the art will understand that the terminal structure shown in FIG. 10 does not constitute a limitation on the terminal, which may include more or fewer components than those illustrated, combine certain components, or use a different component arrangement.
The radio frequency circuit 501 may be used for transmitting and receiving information, or for receiving and sending signals during a call. In particular, after downlink information from a base station is received, it is handed over to one or more processors 508 for processing; in addition, data related to the uplink are sent to the base station. Generally, the radio frequency circuit 501 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like.
The memory 502 may be used to store applications and data. The applications stored in the memory 502 contain executable code and may form various functional modules. The processor 508 performs various functional applications and data processing by running the applications stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the applications required for at least one function (such as a sound playing function or an image playing function); the data storage area may store data created from use of the terminal (such as audio data and a phone book).
The input unit 503 may be used to receive input digits, character information, or user characteristic information (such as fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 503 may include a touch-sensitive surface and other input devices.
The display unit 504 may be used to display information input by the user or information provided to the user, as well as various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display unit 504 may include a display panel. Optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The terminal may further include at least one sensor 505, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor may detect the magnitude of acceleration in all directions (generally three axes) and, when stationary, may detect the magnitude and direction of gravity; it may be used in applications for recognizing the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tapping). As for other sensors that may also be configured on the terminal, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, details are not described herein.
The audio circuit 506 may provide an audio interface between the user and the terminal through a speaker and a microphone. The audio circuit 506 may convert received audio data into an electrical signal and transmit it to the speaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 506 and converted into audio data. After the audio data are output to the processor 508 for processing, they are sent via the radio frequency circuit 501 to, for example, another terminal, or output to the memory 502 for further processing.
Wireless Fidelity (WiFi) is a short-range wireless transmission technology. Through the WiFi module 507, the terminal can help users send and receive e-mail, browse web pages, access streaming media, and so on; it provides users with wireless broadband Internet access. Although FIG. 10 shows the WiFi module 507, it is not an essential part of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 508 is the control center of the terminal. It connects the various parts of the entire terminal through various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing applications stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the terminal as a whole. Optionally, the processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, applications, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 508.
The terminal further includes a power supply 509 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 508 through a power management system, so that functions such as charging, discharging, and power-consumption management are handled by the power management system. The power supply 509 may also include any components such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown in FIG. 10, the terminal may further include a camera, a Bluetooth module, and the like, which are not described herein again.
The processor 508 is further configured to implement the following functions: capturing at least two human body images through the dual cameras of the terminal, acquiring depth information of the human body from the at least two human body images, building a three-dimensional image model of the human body from the depth information, synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model, and displaying the target three-dimensional image model.
In specific implementation, the above modules may be implemented as independent entities, or combined arbitrarily and implemented as the same entity or several entities. For the specific implementation of the above modules, refer to the foregoing method embodiments; details are not described herein again.
It should be noted that those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by instructing related hardware through a program. The program may be stored in a computer readable storage medium, for example in the memory of the terminal, and executed by at least one processor in the terminal; the execution may include the flow of the embodiments of the image display method. The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The image display method, apparatus, storage medium, and terminal provided by the embodiments of the present invention have been described in detail above. The functional modules may be integrated into one processing chip, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in the form of hardware or in the form of software functional modules. Specific examples are used herein to explain the principles and embodiments of the present invention; the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, those skilled in the art may make changes to the specific embodiments and application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (20)

  1. An image display method, comprising:
    capturing at least two human body images through the dual cameras of a terminal;
    acquiring depth information of the human body from the at least two human body images;
    building a three-dimensional image model of the human body from the depth information of the human body;
    synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and
    displaying the target three-dimensional image model.
  2. The image display method according to claim 1, wherein the step of acquiring depth information of the human body from the at least two human body images comprises:
    acquiring the distance between the two camera modules of the dual cameras;
    generating disparity information between the at least two human body images from the distance; and
    computing the depth information of the human body from the disparity information.
  3. The image display method according to claim 1, wherein before the three-dimensional image model of the target virtual garment is synthesized with the three-dimensional image model of the human body, the method further comprises:
    acquiring at least two virtual garment images of the target virtual garment; and
    acquiring depth information of the garment from the at least two virtual garment images, and building the three-dimensional image model of the target virtual garment from the garment depth information.
  4. The image display method according to claim 1, wherein the step of synthesizing the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body comprises:
    acquiring first feature point position information in the three-dimensional image model of the target virtual garment;
    determining corresponding second feature point position information in the three-dimensional image model of the human body from the first feature point position information; and
    synthesizing the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body according to the first feature point position information and the second feature point position information.
  5. The image display method according to claim 1, wherein after at least two human body images of the user are captured through the dual cameras of the terminal, the method further comprises:
    selecting the smallest of the at least two human body images as a reference image; and
    compressing the other human body images according to the size of the reference image so that the at least two human body images have the same size.
  6. The image display method according to claim 1, wherein after the target three-dimensional image model is displayed, the method further comprises:
    receiving a garment parameter adjustment instruction input by the user;
    adjusting the target three-dimensional image model according to the garment adjustment instruction; and
    displaying the adjusted target three-dimensional image model.
  7. The image display method according to claim 1, wherein after the target three-dimensional image model is displayed, the method further comprises:
    receiving a query instruction from the user for the target virtual garment, the query instruction carrying attribute information of the target virtual garment; and
    querying the price of the target garment from the attribute information.
  8. An image display apparatus, comprising: a human body image capture module, a human body depth acquisition module, a human body model building module, a synthesis module, and a display module;
    the human body image capture module being configured to capture at least two human body images through the dual cameras of a terminal;
    the human body depth acquisition module being configured to acquire depth information of the human body from the at least two human body images;
    the human body model building module being configured to build a three-dimensional image model of the human body from the depth information of the human body;
    the synthesis module being configured to synthesize the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and
    the display module being configured to display the target three-dimensional image model.
  9. The image display apparatus according to claim 8, wherein the human body depth acquisition module comprises: a distance acquisition submodule, a disparity generation submodule, and a depth computation submodule;
    the distance acquisition submodule being configured to acquire the distance between the two camera modules of the dual cameras;
    the disparity generation submodule being configured to generate disparity information between the at least two human body images from the distance; and
    the depth computation submodule being configured to compute the depth information of the human body from the disparity information.
  10. The image display apparatus according to claim 8, further comprising: a garment image capture module, a garment depth acquisition module, and a garment model building module;
    the garment image capture module being configured to acquire at least two virtual garment images of the target virtual garment before the synthesis module synthesizes the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body;
    the garment depth acquisition module being configured to acquire depth information of the garment from the at least two virtual garment images; and
    the garment model building module being configured to build the three-dimensional image model of the target virtual garment from the garment depth information.
  11. The image display apparatus according to claim 8, wherein the synthesis module comprises: a first information acquisition submodule, a second information determination submodule, and a synthesis submodule;
    the first information acquisition submodule being configured to acquire first feature point position information in the three-dimensional image model of the target virtual garment;
    the second information determination submodule being configured to determine corresponding second feature point position information in the three-dimensional image model of the human body from the first feature point position information; and
    the synthesis submodule being configured to synthesize the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body according to the first feature point position information and the second feature point position information.
  12. The image display apparatus according to claim 8, further comprising: a selection module and a processing module;
    the selection module being configured to select the smallest of the at least two human body images as a reference image after the human body image capture module captures the at least two human body images through the dual cameras of the terminal; and
    the processing module being configured to compress the other human body images according to the size of the reference image so that the at least two human body images have the same size.
  13. The image display apparatus according to claim 8, further comprising: an adjustment instruction receiving module and an adjustment module;
    the adjustment instruction receiving module being configured to receive a garment parameter adjustment instruction input by the user after the display module displays the target three-dimensional image model;
    the adjustment module being configured to adjust the target three-dimensional image model according to the garment adjustment instruction; and
    the display module being further configured to display the adjusted target three-dimensional image model.
  14. The image display apparatus according to claim 8, further comprising: a query instruction receiving module and a query module;
    the query instruction receiving module being configured to receive a query instruction from the user for the target virtual garment after the display module displays the target three-dimensional image model, the query instruction carrying attribute information of the target virtual garment; and
    the query module being configured to query the price of the target garment from the attribute information.
  15. A storage medium storing instructions, wherein the instructions, when loaded by a processor, perform the following steps:
    capturing at least two human body images through the dual cameras of a terminal;
    acquiring depth information of the human body from the at least two human body images;
    building a three-dimensional image model of the human body from the depth information of the human body;
    synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and
    displaying the target three-dimensional image model.
  16. A terminal, comprising a memory and a processor, wherein the memory stores instructions and the processor loads the instructions to perform the following steps:
    capturing at least two human body images through the dual cameras of the terminal;
    acquiring depth information of the human body from the at least two human body images;
    building a three-dimensional image model of the human body from the depth information of the human body;
    synthesizing the three-dimensional image model of a target virtual garment with the three-dimensional image model of the human body to obtain a target three-dimensional image model; and
    displaying the target three-dimensional image model.
  17. The terminal according to claim 16, wherein when acquiring the depth information of the human body from the at least two human body images, the processor is configured to perform the following steps:
    acquiring the distance between the two camera modules of the dual cameras;
    generating disparity information between the at least two human body images from the distance; and
    computing the depth information of the human body from the disparity information.
  18. The terminal according to claim 16, wherein before the three-dimensional image model of the target virtual garment is synthesized with the three-dimensional image model of the human body, the processor is further configured to perform the following steps:
    acquiring at least two virtual garment images of the target virtual garment; and
    acquiring depth information of the garment from the at least two virtual garment images, and building the three-dimensional image model of the target virtual garment from the garment depth information.
  19. The terminal according to claim 16, wherein when synthesizing the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body, the processor is configured to perform the following steps:
    acquiring first feature point position information in the three-dimensional image model of the target virtual garment;
    determining corresponding second feature point position information in the three-dimensional image model of the human body from the first feature point position information; and
    synthesizing the three-dimensional image model of the target virtual garment with the three-dimensional image model of the human body according to the first feature point position information and the second feature point position information.
  20. The terminal according to claim 16, wherein after at least two human body images of the user are captured through the dual cameras of the terminal, the processor is further configured to perform the following steps:
    selecting the smallest of the at least two human body images as a reference image; and
    compressing the other human body images according to the size of the reference image so that the at least two human body images have the same size.
PCT/CN2017/091369 2017-06-30 2017-06-30 一种图像显示方法、装置、存储介质和终端 WO2019000464A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2017/091369 WO2019000464A1 (zh) 2017-06-30 2017-06-30 一种图像显示方法、装置、存储介质和终端
CN201780090737.9A CN110622218A (zh) 2017-06-30 2017-06-30 一种图像显示方法、装置、存储介质和终端

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/091369 WO2019000464A1 (zh) 2017-06-30 2017-06-30 一种图像显示方法、装置、存储介质和终端

Publications (1)

Publication Number Publication Date
WO2019000464A1 true WO2019000464A1 (zh) 2019-01-03

Family

ID=64742782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/091369 WO2019000464A1 (zh) 2017-06-30 2017-06-30 一种图像显示方法、装置、存储介质和终端

Country Status (2)

Country Link
CN (1) CN110622218A (zh)
WO (1) WO2019000464A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264393A (zh) * 2019-05-15 2019-09-20 联想(上海)信息技术有限公司 一种信息处理方法、终端和存储介质

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563850B (zh) * 2020-03-20 2023-12-05 维沃移动通信有限公司 一种图像处理方法及电子设备
CN111709874B (zh) * 2020-06-16 2023-09-08 北京百度网讯科技有限公司 图像调整的方法、装置、电子设备及存储介质
CN115883814A (zh) * 2023-02-23 2023-03-31 阿里巴巴(中国)有限公司 实时视频流的播放方法、装置及设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156810A (zh) * 2011-03-30 2011-08-17 北京触角科技有限公司 增强现实实时虚拟试衣***及方法
US20120086783A1 (en) * 2010-06-08 2012-04-12 Raj Sareen System and method for body scanning and avatar creation
CN102682211A (zh) * 2012-05-09 2012-09-19 晨星软件研发(深圳)有限公司 一种立体试衣方法及装置
CN102956004A (zh) * 2011-08-25 2013-03-06 鸿富锦精密工业(深圳)有限公司 虚拟试衣***及方法
CN103871099A (zh) * 2014-03-24 2014-06-18 惠州Tcl移动通信有限公司 一种基于移动终端进行3d模拟搭配处理方法及***
CN106408613A (zh) * 2016-09-18 2017-02-15 合肥视尔信息科技有限公司 一种适用于虚拟导诉员***的立体视觉建立方法
CN106815825A (zh) * 2016-12-09 2017-06-09 深圳市元征科技股份有限公司 一种试衣信息显示方法及显示设备

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150124518A (ko) * 2014-04-28 2015-11-06 (주)에프엑스기어 증강 현실 기반 가상 피팅을 위한 가상 의상 생성 장치 및 방법
CN104952113B (zh) * 2015-07-08 2018-04-27 北京理工大学 服饰试穿体验方法、***及设备


Also Published As

Publication number Publication date
CN110622218A (zh) 2019-12-27

Similar Documents

Publication Publication Date Title
CN110348543B (zh) 眼底图像识别方法、装置、计算机设备及存储介质
JP7058760B2 (ja) 画像処理方法およびその、装置、端末並びにコンピュータプログラム
CN108594997B (zh) 手势骨架构建方法、装置、设备及存储介质
CN109308727B (zh) 虚拟形象模型生成方法、装置及存储介质
CN110807361B (zh) 人体识别方法、装置、计算机设备及存储介质
CN111476306A (zh) 基于人工智能的物体检测方法、装置、设备及存储介质
WO2020221012A1 (zh) 图像特征点的运动信息确定方法、任务执行方法和设备
CN109815150B (zh) 应用测试方法、装置、电子设备及存储介质
CN110599593B (zh) 数据合成的方法、装置、设备及存储介质
WO2020215858A1 (zh) 基于虚拟环境的物体构建方法、装置、计算机设备及可读存储介质
CN112287852B (zh) 人脸图像的处理方法、显示方法、装置及设备
CN110570460B (zh) 目标跟踪方法、装置、计算机设备及计算机可读存储介质
WO2021244140A1 (zh) 物体测量、虚拟对象处理方法及装置、介质和电子设备
WO2019000464A1 (zh) 一种图像显示方法、装置、存储介质和终端
CN112270754A (zh) 局部网格地图构建方法及装置、可读介质和电子设备
CN112581358B (zh) 图像处理模型的训练方法、图像处理方法及装置
CN110765525A (zh) 生成场景图片的方法、装置、电子设备及介质
CN110807769B (zh) 图像显示控制方法及装置
CN113987326B (zh) 资源推荐方法、装置、计算机设备及介质
CN111068323A (zh) 智能速度检测方法、装置、计算机设备及存储介质
CN112766406A (zh) 物品图像的处理方法、装置、计算机设备及存储介质
CN111385481A (zh) 图像处理方法及装置、电子设备及存储介质
CN111982293B (zh) 体温测量方法、装置、电子设备及存储介质
CN113407774A (zh) 封面确定方法、装置、计算机设备及存储介质
CN114093020A (zh) 动作捕捉方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17915964

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17915964

Country of ref document: EP

Kind code of ref document: A1