CN109636898B - 3D model generation method and terminal


Info

Publication number
CN109636898B
CN109636898B
Authority
CN
China
Prior art keywords
user
terminal
prompt
target
interface
Prior art date
Legal status
Active
Application number
CN201811447286.2A
Other languages
Chinese (zh)
Other versions
CN109636898A (en)
Inventor
付浩翊
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811447286.2A
Publication of CN109636898A
Application granted
Publication of CN109636898B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention provides a 3D model generation method and a terminal, which relate to the field of communication technology and are intended to solve the problem of a low degree of matching between a 3D cartoon model and a user. The method comprises the following steps: receiving a first input from a user; in response to the first input, acquiring target action information of the user; when the target action information matches target preset action information, capturing a target two-dimensional (2D) image of the user and depth information of the target 2D image; and after N 2D images of the user and the depth information of the N 2D images have been captured, generating a 3D model from the N 2D images and the depth information of the N 2D images, where N is a positive integer.

Description

3D model generation method and terminal
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to a 3D model generation method and a terminal.
Background
With the development of communication technology, terminals are being used in more and more application scenarios.
Generally, a user may enter body type parameters (e.g., sex, bust-waist-hip measurements, height, and weight) into the terminal, and the terminal then generates a three-dimensional (3D) cartoon model corresponding to those body type parameters from a preset 3D cartoon model and the entered parameters; the user may then use the 3D cartoon model for online fitting.
However, if different users input the same body type parameters, the 3D cartoon models generated for them in this manner are identical, even though the users' actual body shapes may differ greatly; the degree of matching between a 3D cartoon model generated in this manner and the user is therefore low.
Disclosure of Invention
The embodiment of the invention provides a 3D model generation method and a terminal, which are used for solving the problem of low matching degree between a 3D cartoon model and a user.
In order to solve the technical problems, the embodiment of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a 3D model generation method applied to a terminal, where the method includes: receiving a first input from a user; in response to the first input, acquiring target action information of the user; when the target action information matches target preset action information, capturing a target two-dimensional (2D) image of the user and depth information of the target 2D image; and after N 2D images of the user and the depth information of the N 2D images have been captured, generating a 3D model from the N 2D images and the depth information of the N 2D images, where N is a positive integer.
In a second aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a receiving module, an obtaining module, a collecting module and a generating module. The receiving module is configured to receive a first input from a user; the obtaining module is configured to acquire target action information of the user in response to the first input received by the receiving module; the collecting module is configured to capture a target two-dimensional (2D) image of the user and depth information of the target 2D image when the target action information acquired by the obtaining module matches target preset action information; and the generating module is configured to generate a 3D model from N 2D images of the user and the depth information of the N 2D images after the collecting module has captured them, where N is a positive integer.
In a third aspect, an embodiment of the present invention provides a terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the 3D model generation method according to the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the 3D model generation method according to the first aspect.
In the embodiment of the invention, the terminal first receives a first input from the user; then, in response to the first input, the terminal acquires target action information of the user; when the target action information matches target preset action information, the terminal captures a target two-dimensional (2D) image of the user and depth information of the target 2D image; and finally, after capturing N 2D images of the user and the depth information of the N 2D images, the terminal generates a 3D model from the N 2D images and the depth information of the N 2D images. A match between the target action information and the target preset action information indicates that the action performed by the user matches the action required by the terminal. When the user's target action information matches the target preset action information, the N 2D images captured by the terminal reflect the user's real figure, proportions and build more accurately, and the depth information of the 2D images likewise reflects the 3D form of the human body in the captured images more accurately, so the 3D model generated by this method more closely resembles the user's actual figure and has a higher degree of matching.
Drawings
Fig. 1 is a schematic diagram of a possible architecture of an android operating system according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a 3D model generating method according to an embodiment of the present invention;
FIG. 3 is a first schematic diagram of an interface according to an embodiment of the present invention;
FIG. 4 is a second schematic diagram of an interface according to an embodiment of the present invention;
FIG. 5 is a third schematic diagram of an interface according to an embodiment of the present invention;
FIG. 6 is a fourth schematic diagram of an interface according to an embodiment of the present invention;
FIG. 7 is a fifth schematic diagram of an interface according to an embodiment of the present invention;
FIG. 8 is a sixth schematic diagram of an interface according to an embodiment of the present invention;
FIG. 9 is a seventh schematic diagram of an interface according to an embodiment of the present invention;
FIG. 10 is an eighth schematic diagram of an interface according to an embodiment of the present invention;
fig. 11 is a first schematic diagram of a possible structure of a terminal according to an embodiment of the present invention;
fig. 12 is a second schematic diagram of a possible structure of a terminal according to an embodiment of the present invention;
fig. 13 is a third schematic diagram of a possible structure of a terminal according to an embodiment of the present invention;
fig. 14 is a fourth schematic diagram of a possible structure of a terminal according to an embodiment of the present invention;
fig. 15 is a schematic hardware structure of a terminal according to various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is evident that the described embodiments are some, rather than all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort shall fall within the protection scope of the invention.
In this context "/" means "or" for example, a/B may mean a or B; "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. "plurality" means two or more than two.
The terms first and second and the like in the description and in the claims, are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order of the objects. For example, a first interface and a second interface, etc., are used to distinguish between different interfaces, and are not used to describe a particular order of interfaces.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described as "exemplary" or "such as" in the embodiments of the present invention should not be construed as being preferred or more advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The terminal in the embodiments of the invention may be a terminal with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present invention.
The software environment to which the 3D model generating method provided by the embodiment of the present invention is applied is described below by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, respectively: an application program layer, an application program framework layer, a system runtime layer and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third party application programs) in the android operating system.
The application framework layer is a framework of applications, and developers can develop some applications based on the application framework layer while adhering to the development principle of the framework of the applications.
The system runtime layer includes libraries (also referred to as system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of the android operating system, and belongs to the bottommost layer of the software hierarchy of the android operating system. The kernel layer provides core system services and a driver related to hardware for the android operating system based on a Linux kernel.
Taking the android operating system as an example, in the embodiment of the present invention a developer may, based on the system architecture of the android operating system shown in fig. 1, develop a software program implementing the 3D model generation method provided by the embodiment of the present invention, so that the 3D model generation method can run on the android operating system shown in fig. 1. That is, the processor or the terminal can implement the 3D model generation method provided by the embodiment of the present invention by running this software program in the android operating system.
The following describes a 3D model generation method according to an embodiment of the present invention with reference to fig. 2. Fig. 2 is a flow chart of a 3D model generating method according to an embodiment of the present invention, as shown in fig. 2, the 3D model generating method includes S201 to S204:
s201, the terminal receives a first input of a user.
It should be noted that the first input may be an input by which the user triggers the terminal to generate or establish the 3D model. The first input may be a single input or may include multiple sub-inputs, and it may be a voice input or an input by the user on a display interface of the terminal; this is not specifically limited in the embodiments of the present invention.
It should be noted that the terminal provided by the embodiment of the present invention may have a touch screen, where the touch screen may be configured to receive an input from the user and, in response to the input, display content corresponding to the input to the user. The first input may be a touch screen input, a fingerprint input, a gravity input, a key input, or the like. A touch screen input is an input such as a press input, long-press input, slide input, click input, or hover input (an input by the user near the touch screen) on the touch screen of the terminal. A fingerprint input is an input such as a sliding fingerprint, long-press fingerprint, single-click fingerprint or double-click fingerprint on the fingerprint identifier of the terminal. A gravity input is an input such as shaking the terminal in a specific direction or shaking it a specific number of times. A key input is an input such as a single-click input, double-click input, long-press input or combination-key input on a key of the terminal such as the power key, volume key or Home key. Specifically, the embodiment of the present invention does not limit the manner of the first input; it may be any realizable manner.
Illustratively, the first input may be an input of the user on the interface 301 shown in fig. 3, where the interface 301 displays a gender selection control and dressing prompt information, where the gender selection control includes a control 30 and a control 31, and the dressing prompt information is "please wear tight clothes to ensure accuracy of modeling", and the first input may be an input of the user on the control 30 or the control 31.
Alternatively, if the interface 301 is a camera interface of the terminal, the background image of the interface 301 may be an image obtained by the terminal after blurring the live-action acquired in real time.
Alternatively, the terminal provided by the embodiment of the invention can be a terminal with one screen or a terminal with two screens.
S202, in response to the first input, the terminal acquires target action information of the user.
Alternatively, the target action information may include the pose of the action performed by the user, such as a spread-eagle stance (standing with arms and legs spread outward), standing upright, standing with both hands on the hips, standing side-on, and the like.
When the terminal acquires the target action information of the user, either the position of the terminal is kept unchanged while the user changes actions, or, when a 3D model is generated for user A, user B may move the terminal around user A while user A holds the action unchanged, so as to acquire the target action information. The user may freely choose either of these two modes, which is not limited in the embodiment of the present invention.
S203, when the target action information matches the target preset action information, the terminal captures a target two-dimensional (2D) image of the user and depth information of the target 2D image.
Specifically, the target preset action information may be information of one or more target preset actions, which is not specifically limited in the embodiment of the present invention.
S204, after capturing N 2D images of the user and depth information of the N 2D images, the terminal generates a 3D model according to the N 2D images and the depth information of the N 2D images.
Wherein N is a positive integer.
Optionally, the 3D model includes at least one of a body model of the user and a face model of the user.
Typically, a depth camera such as a time-of-flight (TOF) camera may be used to collect the depth information of the N 2D images.
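The patent does not spell out how a 2D image and its depth information are turned into 3D geometry. A common building block is to back-project each valid depth pixel into a 3D point using the pinhole camera model; the Kotlin sketch below illustrates only that idea, under the assumption of hypothetical DepthFrame, Intrinsics and Point3 types rather than any structure defined by the patent.

```kotlin
// Minimal sketch: back-projecting one depth frame into a 3D point cloud.
// DepthFrame, Intrinsics and Point3 are hypothetical types, not part of the patent.
class DepthFrame(val width: Int, val height: Int, val depthMetres: FloatArray)
data class Intrinsics(val fx: Float, val fy: Float, val cx: Float, val cy: Float)
data class Point3(val x: Float, val y: Float, val z: Float)

fun backProject(frame: DepthFrame, k: Intrinsics): List<Point3> {
    val points = ArrayList<Point3>()
    for (v in 0 until frame.height) {
        for (u in 0 until frame.width) {
            val z = frame.depthMetres[v * frame.width + u]
            if (z <= 0f) continue                      // skip pixels with no valid depth
            // Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
            val x = (u - k.cx) * z / k.fx
            val y = (v - k.cy) * z / k.fy
            points.add(Point3(x, y, z))
        }
    }
    return points
}
```

Point clouds obtained in this way from the N views, each captured while the user holds the prompted pose, would still have to be registered into a common coordinate frame and meshed before they could serve as a body model; those steps are outside the scope of this sketch.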
According to the 3D model generation method provided by the embodiment of the invention, the terminal first receives a first input from the user; then, in response to the first input, the terminal acquires target action information of the user; when the target action information matches the target preset action information, the terminal captures a target two-dimensional (2D) image of the user and depth information of the target 2D image; and finally, after N 2D images of the user and the depth information of the N 2D images have been captured, the terminal generates a 3D model from the N 2D images and the depth information of the N 2D images. A match between the target action information and the target preset action information indicates that the action performed by the user matches the action required by the terminal. When the user's target action information matches the target preset action information, the N 2D images captured by the terminal reflect the user's real figure, proportions and build more accurately, and the depth information of the 2D images likewise reflects the 3D form of the human body in the captured images more accurately, so the 3D model generated by this method more closely resembles the user's actual figure and has a higher degree of matching.
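Purely as an illustration, the control flow of S201 to S204 can be summarised as a loop that only keeps a captured frame once the user's action matches the current preset action. The Kotlin below is a hedged reading of that flow; CameraSession, ActionInfo, PresetAction and Model3D are hypothetical stand-ins (DepthFrame reuses the type from the sketch above), and the actual terminal logic is naturally richer than this.

```kotlin
// Hedged sketch of S201-S204; every type and helper here is a hypothetical stand-in.
interface CameraSession {
    suspend fun awaitFirstInput()               // S201
    suspend fun captureActionInfo(): ActionInfo // S202
    suspend fun captureDepthFrame(): DepthFrame // S203 (2D image + depth information)
}
class ActionInfo
class PresetAction
class Model3D

suspend fun generateModel(
    camera: CameraSession,
    presets: List<PresetAction>,
    n: Int,
    matches: (ActionInfo, PresetAction) -> Boolean,
    reconstruct: (List<DepthFrame>) -> Model3D
): Model3D {
    camera.awaitFirstInput()                        // S201: receive the user's first input
    val frames = mutableListOf<DepthFrame>()
    while (frames.size < n) {
        val action = camera.captureActionInfo()     // S202: acquire target action information
        val preset = presets[frames.size % presets.size]
        if (matches(action, preset)) {              // S203: capture only when the action matches
            frames += camera.captureDepthFrame()    //        the target preset action
        }
    }
    return reconstruct(frames)                      // S204: generate the 3D model from the N frames
}
```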
In a possible implementation manner, the method for generating a 3D model provided by the embodiment of the present invention further includes S205 before the terminal obtains the target action information of the user:
s205, the terminal sequentially outputs M prompt messages.
Each prompt message is used to prompt the user to perform one action, and each prompt message includes at least one of prompt text, a prompt image and prompt voice, where M is a positive integer less than or equal to N.
Optionally, the action indicated by a prompt message may be an action preset in the terminal, or may be an action randomly generated by the terminal; this is not specifically limited in the embodiment of the present invention.
Specifically, when M = 1, the terminal may output one prompt message, for example "please rotate one turn according to the illustrated action". When M > 1, the terminal may output the prompt messages in sequence, for example "please perform action 1", "please perform action 2", "please perform action 3", and so on.
Note that N may be the same value as M, or N may be a preset value. For example, N may be preset to 200, M may be 1, and specific values of M and N may be set as required in practical applications, which is not specifically limited in the embodiment of the present invention.
For example, if the prompt message is prompt text, the prompt message may be "stand in posture 1", where posture 1 in the prompt text is a specific posture, for example "the left arm inclined downward and the right arm inclined downward".
Illustratively, as shown in fig. 4, taking the case where the terminal displays a prompt image in the interface as an example, the terminal may display a human-shaped outline in interface 302, where the human-shaped outline shows a pose with the left hand on the hip and the right arm extended straight obliquely downward.
With this scheme, before acquiring the target action information of the user, the terminal can sequentially output M prompt messages that prompt the user to perform M actions in sequence, so that the user can perform the action indicated by each prompt message output by the terminal. On the one hand, standardized action information allows the model to be built more quickly and accurately; on the other hand, the step-by-step guidance given by each prompt message lets the user quickly grasp how the model is generated, which makes the operation more convenient and improves the user experience.
In a possible implementation manner, the method for generating a 3D model according to the embodiment of the present invention may further include S206 after S201:
S206, in response to the first input, the terminal detects whether the first interface of the terminal includes a human body image.
It can be understood that the first interface may be a camera interface opened by the user, or a camera interface opened when another application calls the camera; this is not specifically limited in the embodiments of the present invention.
Specifically, when the terminal is a terminal having only one screen, the first interface is an interface on that screen, and the first input may be an input on that interface. The user may open the front camera of the terminal and perform actions according to the information output on the interface to generate a 3D model; the user may also open the rear camera of the terminal, in which case another user operates the terminal while the user performs actions according to the information output by the terminal to generate a 3D model.
When the terminal is a terminal having a first screen and a second screen, if the user generates a 3D model using the display interface of one of the screens, reference may be made to the description of the terminal having only one screen. If users generate a 3D model using the display interfaces of both screens of the terminal, user 1 and user 2 can together generate a 3D model of user 1's figure: either user enters the first input on the first screen, and the terminal can start the camera on the second screen, or on the plane where the second screen is located, to capture 2D images and the depth information of the 2D images.
Further, S205 may be performed by S205 a:
s205a, when the first interface comprises the human body image, the terminal sequentially outputs M pieces of prompt information.
Optionally, the human body image included in the first interface may be a complete human body image, an image containing only the upper half of the user's body, or an image containing only the lower half of the user's body; this is not specifically limited in the embodiment of the present invention.
It will be appreciated that the user may choose to build a 3D model of the whole body, may choose to build a 3D model of the upper body, or may choose to build a 3D model of the lower body, which is not particularly limited in the embodiments of the present invention.
Optionally, before the terminal displays the interface 302, after determining that the first interface includes the human body image, the terminal may vibrate first to prompt the user that the identification is successful, and then sequentially output M prompt messages.
Optionally, the first interface may further include a progress bar for indicating a progress of the 3D model for acquiring the 2D image.
For example, assume that there are two prompt messages: one is the prompt image displayed in interface 302 shown in FIG. 4, and the other may be "please keep the pose and rotate in place" displayed in interface 303 shown in FIG. 5. That is, the actions indicated by the two prompt messages are the same action. A progress bar 32 may also be included in interface 303.
With this scheme, after receiving the first input from the user, the terminal can detect whether its first interface includes a human body image, and then output the M prompt messages in sequence only when the first interface does include a human body image, which makes the timing at which the prompt messages start to be displayed more flexible.
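The detection in S206 is left abstract in the text. One hedged way to realise it is to run a person detector over each camera preview frame and begin the prompt sequence only once a body is found; the sketch below assumes hypothetical BodyDetector, PreviewFrame and Prompt types rather than any specific vision library.

```kotlin
// Hedged sketch of S206/S205a: start the prompt sequence only once a human body
// is detected in the first interface. All types here are hypothetical stand-ins.
interface BodyDetector { fun containsBody(frame: PreviewFrame): Boolean }
class PreviewFrame
data class Prompt(val text: String? = null, val imageRes: Int? = null, val voice: String? = null)

class PromptController(
    private val detector: BodyDetector,
    private val prompts: List<Prompt>
) {
    private var started = false

    // Called for every camera preview frame of the first interface.
    fun onPreviewFrame(frame: PreviewFrame, vibrate: () -> Unit, show: (Prompt) -> Unit) {
        if (!started && detector.containsBody(frame)) {
            started = true
            vibrate()             // optional haptic cue that recognition succeeded
            show(prompts.first()) // output the first of the M prompt messages
        }
    }
}
```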
Optionally, the M pieces of prompt information include first prompt information and second prompt information, where the first prompt information is used to prompt the user to execute the first action, and the second prompt information is used to prompt the user to execute the second action.
In a possible implementation manner, the method for generating a 3D model according to the embodiment of the present invention may further include S205b1 or S205b2 after displaying the first prompt information:
s205b1, if the acquired first action information of the user matches the first preset action information, the terminal updates the displayed first prompt information to the second prompt information.
The first preset action information is information of a first action.
For example, if action 1 performed by the user matches the first action, that is, the first action information of the user matches the first preset action information, the terminal updates the first prompt information to the second prompt information, which prompts the user to perform action 2 according to the second action indicated by the second prompt information.
With this scheme, when the acquired first action information of the user matches the first preset action information, the terminal updates the first prompt information to the second prompt information, so the user knows that the action performed according to the prompt information was correct, which improves the user experience.
S205b2, if the acquired first action information of the user does not match the first preset action information, the terminal outputs adjustment information until the first action information matches the first preset action information, and then updates the output first prompt information to the second prompt information, where the adjustment information is used to prompt the user to adjust their action to the first action.
Alternatively, the adjustment information may be in the form of text, images, or speech.
For example, if action 1 performed by the user does not match the first action, that is, the first action information of the user does not match the first preset action information, the terminal may output adjustment information prompting that the action currently performed by the user does not match the first action and that the user needs to adjust it to the first action.
With this scheme, when the acquired first action information of the user does not match the first preset action information, the terminal outputs adjustment information, so that the user knows that the current action is not standard and which action it needs to be adjusted to. Once the first action information of the action performed by the user matches the first preset action information, the terminal updates the output first prompt information to the second prompt information; after the second prompt information is output, the user knows that the previous action was correct and can go on to perform the second action indicated by the second prompt information. This strengthens the human-computer interaction between the user and the terminal and gives the user a better experience.
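S205b1 and S205b2 hinge on deciding whether the user's current action matches a preset action. A simple, hedged way to express that decision, assuming an angle-based pose representation that the patent itself does not prescribe, is to compare a few joint angles against the preset's targets with a tolerance, advancing to the next prompt on a match and emitting an adjustment hint otherwise:

```kotlin
// Hedged sketch of matching action information against a preset action (S205b1/S205b2).
// The angle-based pose representation is an illustrative assumption, not the patent's definition.
data class Pose(val jointAngles: Map<String, Float>)            // e.g. "leftElbow" -> degrees
data class PresetPose(val targetAngles: Map<String, Float>, val toleranceDeg: Float = 15f)

fun matches(pose: Pose, preset: PresetPose): Boolean {
    return preset.targetAngles.all { (joint, target) ->
        val actual = pose.jointAngles[joint] ?: return false    // missing joint -> no match
        kotlin.math.abs(actual - target) <= preset.toleranceDeg
    }
}

// Returns null when the pose matches (advance to the next prompt);
// otherwise a hint listing the joints that still need adjusting.
fun adjustmentHint(pose: Pose, preset: PresetPose): String? {
    val off = preset.targetAngles.filter { (joint, target) ->
        val actual = pose.jointAngles[joint]
        actual == null || kotlin.math.abs(actual - target) > preset.toleranceDeg
    }
    return if (off.isEmpty()) null else "Please adjust: " + off.keys.joinToString(", ")
}
```

A real implementation would derive the joint angles from the captured action information and phrase the hints in natural language, as in the adjustment information described above.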
Optionally, the terminal may display a first area before displaying the prompt information, and the terminal may display the prompt information or the user operation instruction information in the first area.
Illustratively, as shown in fig. 6, the first area is the area enclosed by the prompt box 33 shown in fig. 6. In this area, the user operation instruction "please position the human body within the frame" is displayed; by the user moving or the terminal being moved, the user's image in the terminal interface is brought inside the prompt box 33.
Optionally, in the case that the target prompt information includes a prompt image, the prompt image is displayed in a first area in the target interface, and the target interface includes the first interface.
It should be noted that, in the case that the terminal is a terminal having a first screen and a second screen, the first interface is an interface on the first screen, the target interface may also include a third interface, the third interface may be an interface on the second screen, and the prompt image may be displayed in a first area in the third interface, so that the user on the second screen performs an action according to the prompt image.
It can be understood that the target prompt is any one of the M prompts.
It should be noted that, the shape of the first area may be rectangular, or may be other shapes such as oval or circular, which is not particularly limited in the embodiment of the present invention.
As shown in fig. 7, in interface 305 the prompt image is a human-shaped outline displayed in the area enclosed by the prompt box 33. Outside the area enclosed by the prompt box 33, the instruction "please stand in the posture of the human body outline" may be displayed.
It should be noted that the "please stand in the posture of the human body outline" displayed in interface 305 is not one of the "M prompt messages" described in the embodiment of the present invention; it is a supplementary explanation of the human body outline displayed in the prompt box, guiding the user to stand in the posture of the human body outline (i.e. the prompt information) displayed in the interface.
Optionally, after determining that the first interface includes the human body image, the terminal may first vibrate, then the prompt box may change color or its border may thicken, and the M prompt messages are then output in sequence.
Optionally, in the case that the target prompt information includes a prompt image and a prompt text, the prompt image is displayed in a first area in the target interface and the prompt text is displayed in a second area in the target interface.
Alternatively, in the case where the target prompt message does not include a prompt image, including a prompt text, the prompt text may be displayed in a first area in the target interface.
With this scheme, when the target prompt information includes a prompt image, the terminal displays the prompt image in the first area of the first interface; when the target prompt information includes a prompt image and prompt text, the terminal displays the prompt image in the first area of the target interface and the prompt text in the second area of the target interface. Displaying prompt information of different types in different areas lets the user perform the prompted action by following, according to their own preference, the prompt information in the different areas, which makes the display of the prompt information more user-friendly.
In a possible implementation, the 3D model generation method provided by the embodiment of the present invention further includes S207:
S207, after the N 2D images and the depth information of the N 2D images have been captured, the terminal displays a second interface, where the second interface includes a target animation, and the target animation is used to indicate the generation process of the 3D model.
Optionally, the second interface may be displayed when triggered by the user, or may be displayed automatically by the terminal; this is not specifically limited in the embodiment of the present invention. Specifically, after the N 2D images and the depth information of the N 2D images are captured, the user may choose to view the generation process of the 3D model, and the terminal then displays the target animation in the second interface. Alternatively, after the N 2D images and the depth information of the N 2D images are captured, the terminal may directly display the second interface to show the generation process of the 3D model to the user.
For example, as shown in FIG. 8, the interface 306 may be a second interface that includes the target animation 34 therein. In interface 306, the user may also be shown "action complete, modeling".
Of course, a progress bar may also be included in the interface 306, which may present the user with the progress of the generation of the 3D model.
With this scheme, after the terminal captures the N 2D images and the depth information of the N 2D images, the target animation can be displayed in the second interface to show the user the generation process of the 3D model, and the user can watch a body model matching them being built from the captured images, which makes generating the 3D model on the terminal more engaging.
Optionally, after generating the 3D model, the terminal may also display the 3D model on an interface to facilitate a user's view of the 3D model.
In a possible implementation manner, after the terminal generates the 3D model, the 3D model generating method provided by the embodiment of the present invention further includes S208:
S208, the terminal updates the target animation displayed in the second interface to a 3D model image.
With this scheme, after generating the 3D model, the terminal updates the target animation displayed in the second interface to the 3D model image, so that the 3D model image generated from the user's images can be shown to the user for viewing or editing, or for using the 3D model for online fitting and the like.
It can be appreciated that in the above 3D model generation process the user may be far from the terminal because images need to be captured, so the face model may not be very accurate. If the user wants a truer and more accurate face model, the face can also be modeled again separately; if the user does not need a more accurate face model, the user may choose not to build one.
In a possible implementation manner, the method for generating a 3D model according to the embodiment of the present invention may further include, after S208, S209:
s209, the terminal displays third prompt information, wherein the third prompt information is used for prompting a user to trigger generation of the face model.
It should be noted that, if the user chooses to generate the face model, the execution flow may refer to the process in the above embodiment; prompt information is also displayed to the user while the face images are being captured, and in this case the prompt information may prompt the user to perform an action or make a facial expression.
Illustratively, assuming the user chooses to generate the body model first, as shown in FIG. 9, after the body model is generated the terminal may display a control 35 on the interface prompting the user to build a face model. The control 35 may display "whether to model the face", "model the face to obtain a more realistic manikin", "later" and "model", and the user may select "later" or "model".
It should be noted that, while the terminal is executing any of the schemes provided in the embodiments of the present invention, if the user taps the back key or the exit key, the terminal may display a prompt box in the current interface. The content of the prompt box may be used to ask the user whether to exit modeling and to warn that the current progress may not be saved after exiting. If the user chooses to exit modeling, the terminal may return to the interface where modeling was started, such as the interface shown in fig. 3; the terminal may also return to a model preview interface, in which the user can view other models that have already been built and can also use an already built 3D model to perform online fitting.
Optionally, after the terminal generates the 3D model, the terminal may save the 3D model directly, or may display a fourth interface in which the 3D model can be edited, for example scaled or rotated. The fourth interface may further include a save control and a delete control; if the user taps the save control, the terminal saves the 3D model. Of course, the fourth interface may also include a 3D fitting control; if the user taps the 3D fitting control directly, the terminal may first save the 3D model and then display a sub-interface within the fourth interface, in which the user can name the 3D model, group it, and so on.
Of course, the terminal may also display a sub-interface in the interface 301 shown in fig. 3, which is used for naming the newly created 3D model by the user, and the embodiment of the present invention does not specifically limit when to name the newly created 3D model.
Optionally, when the terminal generates the face model, the first area may be a circular area, and the second area may also display advice information such as "it is suggested to remove glasses and tidy the hair so that the facial features are not blocked", "remove your glasses" or "come a little closer"; the prompt information may be "turn the face to the left and then back", "turn the face to the right and then back", "keep smiling", and the like.
With this scheme, after the user's body model has been built, the terminal displays third prompt information, which prompts the user to trigger generation of the face model; the user can then choose to generate their own face model, making the 3D model more lifelike.
Optionally, when the terminal includes a first screen and a second screen and the first input is an input on the first screen, S205 may be specifically implemented as S205c.
S205c, the terminal sequentially displays the M prompt messages on the first screen, and sequentially displays the M prompt messages on the second screen.
At the same moment, the content displayed on the first screen is the same as the content displayed on the second screen, and this content includes a preview image and prompt information, or a preview image and adjustment information.
Before the user enters the first input on the first screen of the terminal, the second screen of the terminal may be in a screen-off state; after the terminal receives the first input on the first screen, it lights up the second screen and sequentially displays on the second screen the M prompt messages displayed on the first screen. Of course, the second screen of the terminal may instead already be in a lit state, in which case, after the terminal receives the first input on the first screen, the content displayed on the second screen is updated to the M prompt messages displayed on the first screen.
To facilitate the description of the solution, it is assumed that a first user uses the terminal to build a 3D model for a second user, the first user facing the first screen and the second user facing the second screen.
Of course, in this solution a prompt message may also be displayed on the first screen, for example: "please have the other person keep posture 1 and rotate in place".
Optionally, when the prompt information displayed on the second screen is prompt text, the prompt text on the second screen may be shown in a larger font size, so that the second user can perform the prompted action by following the prompt information on the second screen.
Optionally, when the user uses both screens to implement the 3D model generation method provided by the embodiment of the present invention, the user may set touch operations on the second screen to be disabled while the 2D images and the depth information of the 2D images are being captured.
Optionally, after the terminal determines that the second user has performed the action indicated by the prompt information, the terminal may turn off the second screen. Before the second screen is turned off, the terminal may vibrate to notify the first user that the capture is complete.
Illustratively, as shown in fig. 10, assuming the user enters the first input on interface 301 shown in fig. 3, the interface on the first screen may be interface 308 and the interface displayed on the second screen may be interface 309. A prompt box may be displayed in interface 308, with a human body outline inside it and "please stand in the posture of the human body outline" displayed outside it. In interface 309 the terminal also displays a human body outline, which may be the same size as the human body outline in interface 308 or an enlarged version of it; interface 309 may also display "please stand in the posture of the human body outline", and this text may be in a larger font so that it is easy for the user to read.
It should be noted that, while the terminal is executing the 3D model generation method provided by the embodiment of the present invention, the terminal may disable the dual-screen switching function.
With this scheme, the terminal includes a first screen and a second screen and the first input is an input on the first screen; the terminal displays the M prompt messages in sequence on the first screen and also displays them in sequence on the second screen. Because, at the same moment, the content displayed on the first screen is the same as the content displayed on the second screen, and that content includes a preview image and prompt information or a preview image and adjustment information, if the first user is using the terminal to build a 3D model for the second user, the first user can, based on the content displayed on the first screen, move the terminal, prompt the second user to move, or prompt the second user to change the pose of the action, while the second user performs the corresponding action by following the content displayed on the second screen. Since the second user is facing the terminal, the first user can also prompt the second user to continue performing actions based on the content displayed on the first screen, so the second user can be guided through the actions more conveniently.
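The patent does not say which API drives the second screen, and on a dual-screen phone this is likely vendor-specific. As one hedged assumption, Android's generic android.app.Presentation mechanism can mirror prompt content onto a secondary display; the sketch below shows that approach only as an illustration of S205c, not as the implementation the patent uses.

```kotlin
import android.app.Presentation
import android.content.Context
import android.hardware.display.DisplayManager
import android.os.Bundle
import android.view.Display
import android.widget.TextView

// Hedged sketch: mirroring the current prompt text onto a secondary display (API 17+).
// A real dual-screen terminal may instead use a vendor-specific dual-display API.
class PromptPresentation(ctx: Context, display: Display) : Presentation(ctx, display) {
    val promptView = TextView(context)
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        promptView.textSize = 32f          // larger font for the user facing the second screen
        setContentView(promptView)
    }
}

fun showOnSecondScreen(ctx: Context, promptText: String): PromptPresentation? {
    val dm = ctx.getSystemService(Context.DISPLAY_SERVICE) as DisplayManager
    val display = dm.getDisplays(DisplayManager.DISPLAY_CATEGORY_PRESENTATION).firstOrNull()
        ?: return null                     // no secondary display available
    return PromptPresentation(ctx, display).apply {
        show()
        promptView.text = promptText
    }
}
```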
Fig. 11 is a schematic diagram of a possible structure of a terminal according to an embodiment of the present invention. As shown in fig. 11, the terminal 400 includes: a receiving module 401, an obtaining module 402, a collecting module 403 and a generating module 404. The receiving module 401 is configured to receive a first input from a user; the obtaining module 402 is configured to acquire target action information of the user in response to the first input received by the receiving module 401; the collecting module 403 is configured to capture a target 2D image of the user and depth information of the target 2D image when the target action information acquired by the obtaining module 402 matches the target preset action information; and the generating module 404 is configured to generate a 3D model according to N 2D images of the user and the depth information of the N 2D images after the collecting module 403 has captured them, where N is a positive integer.
Optionally, in conjunction with fig. 11, as shown in fig. 12, the terminal 400 further includes an output module 405; the output module 405 is configured to sequentially output M pieces of prompt information before the obtaining module 402 acquires the target action information of the user, where each piece of prompt information is used to prompt the user to perform one action and includes at least one of prompt text, a prompt image and prompt voice, and M is a positive integer less than or equal to N.
Optionally, in conjunction with fig. 12, as shown in fig. 13, the terminal 400 further includes a detection module 406; a detection module 406, configured to detect whether a human body image is included in the first interface of the terminal 400 in response to the first input received by the receiving module 401; the output module 405 is specifically configured to sequentially output the M pieces of prompt information when the detection module 406 detects that the first interface includes a human body image.
Optionally, in the case that the target prompt information includes a prompt image, the prompt image is displayed in a first area in the target interface, and the target interface includes the first interface; in the case where the target prompt message includes a prompt image and a prompt text, the prompt image is displayed in a first region in the target interface and the prompt text is displayed in a second region in the target interface.
Optionally, in conjunction with fig. 11, as shown in fig. 14, the terminal further includes a display module 407; the display module 407 is configured to display a second interface after the collecting module 403 captures the N 2D images and the depth information of the N 2D images, where the second interface includes a target animation, and the target animation is used to indicate the generation process of the 3D model.
Optionally, the display module 407 is further configured to update and display the target animation displayed in the second interface as a 3D model image after the generating module 404 generates the 3D model.
Optionally, the 3D model includes at least one of a body model of the user and a face model of the user.
Optionally, the terminal 400 includes a first screen and a second screen, and the first input is an input on the first screen; the output module 405 is specifically configured to sequentially display the M pieces of prompt information on the first screen and sequentially display the M pieces of prompt information on the second screen, where, at the same moment, the content displayed on the first screen is the same as the content displayed on the second screen, and this content includes a preview image and prompt information, or a preview image and adjustment information.
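Read as code, the module decomposition in fig. 11 to fig. 14 amounts to a set of narrow interfaces wired together inside terminal 400. The Kotlin sketch below is only an illustrative reading of that structure; the interface and type names mirror the modules above but are otherwise assumptions, not anything defined by the patent.

```kotlin
// Illustrative sketch of the module structure of terminal 400 (figs. 11 to 14).
// All names are hypothetical; they merely mirror the modules described in the text.
class UserInput
class ActionInfo
class CapturedFrame   // one 2D image plus its depth information
class Model3D
class Prompt

interface ReceivingModule  { fun receiveFirstInput(): UserInput }
interface ObtainingModule  { fun obtainActionInfo(input: UserInput): ActionInfo }
interface CollectingModule { fun collectFrame(): CapturedFrame }
interface GeneratingModule { fun generate(frames: List<CapturedFrame>): Model3D }
interface OutputModule     { fun outputPrompt(prompt: Prompt) }
interface DetectionModule  { fun firstInterfaceHasBody(): Boolean }
interface DisplayModule {
    fun showGenerationAnimation()
    fun showModel(model: Model3D)
}
```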
The terminal 400 provided in the embodiment of the present invention can implement each process implemented by the terminal in the above embodiment of the method, and in order to avoid repetition, details are not repeated here.
The terminal provided by the embodiment of the invention first receives a first input from the user; then, in response to the first input, it acquires target action information of the user; when the target action information matches the target preset action information, it captures a target two-dimensional (2D) image of the user and depth information of the target 2D image; and finally, after capturing N 2D images of the user and the depth information of the N 2D images, it generates a 3D model from the N 2D images and the depth information of the N 2D images. A match between the target action information and the target preset action information indicates that the action performed by the user matches the action required by the terminal. When the user's target action information matches the target preset action information, the N 2D images captured by the terminal reflect the user's real figure, proportions and build more accurately, and the depth information of the 2D images likewise reflects the 3D form of the human body in the captured images more accurately, so the generated 3D model more closely resembles the user's actual figure and has a higher degree of matching.
Fig. 15 is a schematic hardware architecture of a terminal implementing various embodiments of the present invention, where the terminal 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. It will be appreciated by those skilled in the art that the terminal structure shown in fig. 15 is not limiting of the terminal and that the terminal may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. In the embodiment of the invention, the terminal comprises, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer and the like.
The user input unit 107 is configured to receive a first input from a user; the processor 110 is configured to acquire target action information of the user in response to the first input; the processor 110 is further configured to capture a target two-dimensional (2D) image of the user and depth information of the target 2D image when the target action information matches the target preset action information, and, after N 2D images of the user and the depth information of the N 2D images have been captured, to generate a 3D model according to the N 2D images and the depth information of the N 2D images, where N is a positive integer.
The embodiment of the invention provides a terminal. The terminal first receives a first input from the user; then, in response to the first input, the terminal acquires target action information of the user; when the target action information matches the target preset action information, the terminal captures a target two-dimensional (2D) image of the user and depth information of the target 2D image; and finally, after capturing N 2D images of the user and the depth information of the N 2D images, the terminal generates a 3D model from the N 2D images and the depth information of the N 2D images. A match between the target action information and the target preset action information indicates that the action performed by the user matches the action required by the terminal. When the user's target action information matches the target preset action information, the N 2D images captured by the terminal reflect the user's real figure, proportions and build more accurately, and the depth information of the 2D images likewise reflects the 3D form of the human body in the captured images more accurately, so the 3D model generated in this way more closely resembles the user's actual figure and has a higher degree of matching.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be configured to receive and send information or signals during a call, specifically, receive downlink data from a base station, and then process the received downlink data with the processor 110; and, the uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices through a wireless communication system.
The terminal provides wireless broadband internet access to the user through the network module 102, such as helping the user to send and receive e-mail, browse web pages, access streaming media, etc.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output (e.g., a call signal reception sound, a message reception sound, etc.) related to a specific function performed by the terminal 100. The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used for receiving an audio or video signal. The input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. Microphone 1042 may receive sound and be capable of processing such sound into audio data. The processed audio data may be converted into a format output that can be transmitted to the mobile communication base station via the radio frequency unit 101 in the case of a telephone call mode.
The terminal 100 further comprises at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal 100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when the accelerometer sensor is stationary, and can be used for recognizing the terminal gesture (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; the sensor 105 may further include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which are not described herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by the user on or near it (for example, operations performed by the user on or near the touch panel 1071 using any suitable object or accessory such as a finger or a stylus). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch and the signal produced by the touch operation, and passes the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. Further, the touch panel 1071 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, a switch key, etc.), a trackball, a mouse, and a joystick, which are not described in detail here.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 110 to determine the type of touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of touch event. Although in fig. 15, the touch panel 1071 and the display panel 1061 are two independent components for implementing the input and output functions of the terminal, in some embodiments, the touch panel 1071 may be integrated with the display panel 1061 to implement the input and output functions of the terminal, which is not limited herein.
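The touch pipeline described above (the touch detection device reports a touch, the touch controller converts it into touch point coordinates, and the processor determines the touch event type and responds on the display panel) can be sketched as follows in Python, purely for illustration; all class and method names are assumptions and are not part of this application.

    # Illustrative sketch only: class and method names are assumptions, not part of this application.
    from dataclasses import dataclass

    @dataclass
    class TouchPoint:
        x: int
        y: int
        pressed: bool

    class TouchController:
        """Converts raw panel readings into touch point coordinates (the role of the touch controller)."""
        def __init__(self, panel_w, panel_h, screen_w, screen_h):
            self.sx = screen_w / panel_w
            self.sy = screen_h / panel_h

        def to_coordinates(self, raw_col, raw_row, pressed):
            return TouchPoint(int(raw_col * self.sx), int(raw_row * self.sy), pressed)

    class EventDispatcher:
        """Determines the touch event type from successive points (the role of the processor 110)."""
        def __init__(self):
            self.last = None

        def handle(self, point: TouchPoint) -> str:
            if point.pressed and (self.last is None or not self.last.pressed):
                event = "touch-down"
            elif point.pressed:
                event = "touch-move"
            else:
                event = "touch-up"
            self.last = point
            # A real terminal would provide a corresponding visual output on the display panel here.
            return f"{event} at ({point.x}, {point.y})"

    controller = TouchController(1024, 1024, 1080, 2340)
    dispatcher = EventDispatcher()
    print(dispatcher.handle(controller.to_coordinates(512, 512, True)))    # touch-down
    print(dispatcher.handle(controller.to_coordinates(520, 700, True)))    # touch-move
    print(dispatcher.handle(controller.to_coordinates(520, 700, False)))   # touch-up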
The interface unit 108 is an interface through which an external device is connected to the terminal 100. For example, the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal 100, or may be used to transmit data between the terminal 100 and an external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and application programs required for at least one function (such as a sound playing function and an image playing function), and the data storage area may store data (such as audio data and a phonebook) created according to the use of the terminal. In addition, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 110 is a control center of the terminal, and connects various parts of the entire terminal using various interfaces and lines, and performs various functions of the terminal and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal 100 may further include a power source 111 (e.g., a battery) for supplying power to the respective components. Preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to manage charging, discharging, and power consumption through the power management system.
In addition, the terminal 100 includes some functional modules, which are not shown, and will not be described herein.
Optionally, in conjunction with fig. 15, an embodiment of the present invention further provides a terminal, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110. When executed by the processor 110, the computer program implements each process of the above embodiment of the 3D model generation method and can achieve the same technical effect; to avoid repetition, a detailed description is omitted herein.
An embodiment of the present invention also provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above embodiment of the 3D model generation method and can achieve the same technical effect; to avoid repetition, no further description is given here. The computer-readable storage medium may be, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
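Purely as an illustration of the overall program flow referred to above (output a prompt, wait until the collected target action information matches the preset action information, capture a 2D image together with its depth information, and generate the 3D model once N captures have been collected), the following minimal Python sketch is provided; it is not part of the claimed subject matter, and the camera object, its methods, and all other names are assumptions made only for illustration.

    # Illustrative sketch only: names, classes, and data are assumptions, not the claimed implementation.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Capture:
        image_2d: object   # a 2D image of the user
        depth: object      # depth information of that 2D image

    def generate_3d_model(captures: List[Capture]) -> str:
        # Placeholder: a real implementation would fuse the N 2D images with their depth information.
        return f"3D model built from {len(captures)} captures"

    def run_modeling_flow(camera, prompts, preset_actions, n: int) -> str:
        """Output prompts, wait until the detected action matches the preset one, then collect N captures."""
        captures: List[Capture] = []
        for prompt, preset in zip(prompts, preset_actions):
            camera.show_prompt(prompt)                    # prompt text, prompt image, or prompt voice
            while camera.read_action_info() != preset:    # compare target action information with the preset action
                pass
            captures.append(Capture(camera.capture_2d(), camera.capture_depth()))
            if len(captures) == n:
                break
        return generate_3d_model(captures)

    class StubCamera:
        """Trivial stand-in so the sketch runs; a real terminal would use its camera and depth sensor."""
        def show_prompt(self, p): print("prompt:", p)
        def read_action_info(self): return "pose-ok"
        def capture_2d(self): return "2d-image"
        def capture_depth(self): return "depth-map"

    print(run_modeling_flow(StubCamera(), ["raise arms", "turn left", "turn right"],
                            ["pose-ok", "pose-ok", "pose-ok"], 3))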
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and of course may also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, or optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods according to the embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. In light of the present invention, those of ordinary skill in the art may make many other forms without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.

Claims (17)

1. A 3D model generation method, applied to a terminal, the method comprising:
receiving a first input of a user, wherein the first input is an input by which the user triggers the terminal to generate or establish a 3D model;
responding to the first input, and acquiring target action information of a user;
the obtaining the target action information of the user comprises the following steps:
under the condition that the position of the terminal is kept unchanged and the user changes actions, acquiring target action information of the user; or under the condition that the position of the user is kept unchanged and the terminal moves, acquiring target action information of the user;
under the condition that the target motion information is matched with target preset motion information, acquiring a target two-dimensional 2D image of a user and depth information of the target 2D image;
after N 2D images of a user and depth information of the N 2D images are acquired, a 3D model is generated according to the N 2D images and the depth information of the N 2D images, and N is a positive integer.
2. The method of claim 1, further comprising, prior to obtaining the target action information for the user:
and sequentially outputting M prompt messages, wherein one prompt message is used for prompting a user to execute an action, and each prompt message comprises at least one of a prompt text, a prompt image and a prompt voice, and M is a positive integer less than or equal to N.
3. The method of claim 2, wherein after the receiving the first input from the user, the method further comprises:
detecting whether a human body image is included in a first interface of the terminal in response to the first input;
the M prompt messages are sequentially output, and the method comprises the following steps:
and under the condition that the first interface comprises the human body image, outputting the M prompting messages in sequence.
4. The method of claim 3, wherein,
in the case that the target prompt information comprises a prompt image, the prompt image is displayed in a first area in a target interface, and the target interface comprises the first interface;
In the case where the target prompt message includes a prompt image and a prompt text, the prompt image is displayed in a first region in the target interface and the prompt text is displayed in a second region in the target interface.
5. The method according to claim 1, wherein the method further comprises:
and after the N 2D images and the depth information of the N 2D images are acquired, displaying a second interface, wherein the second interface comprises a target animation, and the target animation is used for indicating the generation process of the 3D model.
6. The method of claim 5, further comprising, after generating the 3D model:
and updating and displaying the target animation displayed in the second interface as a 3D model image.
7. The method of claim 1, wherein the 3D model comprises at least one of a body model of the user and a facial model of the user.
8. The method of claim 2, wherein the terminal comprises a first screen and a second screen, the first input being an input on the first screen;
sequentially outputting M prompt messages, including:
sequentially displaying the M prompting messages on the first screen, and sequentially displaying the M prompting messages on the second screen;
And at the same moment, the content displayed on the first screen is the same as the content displayed on the second screen, and the content comprises a preview image and prompt information or comprises a preview image and adjustment information.
9. A terminal, the terminal comprising: a receiving module, an acquisition module, and a generation module;
the receiving module is used for receiving a first input of a user, wherein the first input is an input by which the user triggers the terminal to generate or establish a 3D model;
the acquisition module is used for responding to the first input received by the receiving module and acquiring target action information of a user;
the acquisition module is specifically configured to acquire target action information of a user under the condition that the position of the terminal is kept unchanged and the user changes actions in response to the first input received by the receiving module; or under the condition that the position of the user is kept unchanged and the terminal moves, acquiring target action information of the user;
the acquisition module is used for acquiring a target two-dimensional 2D image of a user and depth information of the target 2D image under the condition that the target action information acquired by the acquisition module is matched with target preset action information;
The generating module is configured to generate a 3D model according to the N 2D images and the depth information of the N 2D images after the collecting module collects the N 2D images of the user and the depth information of the N 2D images, where N is a positive integer.
10. The terminal of claim 9, wherein the terminal further comprises an output module;
the output module is used for sequentially outputting M prompt messages before the acquisition module acquires the target action information of the user, wherein one prompt message is used for prompting the user to execute one action, and each prompt message comprises at least one of a prompt text, a prompt image, and a prompt voice, wherein M is a positive integer less than or equal to N.
11. The terminal of claim 10, wherein the terminal further comprises a detection module;
the detection module is used for responding to the first input received by the receiving module and detecting whether a first interface of the terminal comprises a human body image or not;
the output module is specifically configured to sequentially output the M prompt messages when the detection module detects that the first interface includes the human body image.
12. The terminal of claim 11, wherein,
In the case that the target prompt information comprises a prompt image, the prompt image is displayed in a first area in a target interface, and the target interface comprises the first interface;
in the case where the target prompt message includes a prompt image and a prompt text, the prompt image is displayed in a first region in the target interface and the prompt text is displayed in a second region in the target interface.
13. The terminal of claim 9, wherein the terminal further comprises a display module;
the display module is used for displaying a second interface after the acquisition module acquires the N 2D images and the depth information of the N 2D images, wherein the second interface comprises a target animation, and the target animation is used for indicating the generation process of the 3D model.
14. The terminal of claim 13, wherein,
and the display module is further used for updating and displaying the target animation displayed in the second interface as a 3D model image after the 3D model is generated by the generation module.
15. The terminal of claim 9, wherein the 3D model comprises at least one of a body model of the user and a facial model of the user.
16. The terminal of claim 10, wherein the terminal comprises a first screen and a second screen, the first input being an input on the first screen;
the output module is specifically configured to sequentially display the M prompt messages on the first screen, and sequentially display the M prompt messages on the second screen;
and at the same moment, the content displayed on the first screen is the same as the content displayed on the second screen, and the content comprises a preview image and prompt information or comprises a preview image and adjustment information.
17. A terminal comprising a processor, a memory and a computer program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the 3D model generation method according to any of claims 1-8.
CN201811447286.2A 2018-11-29 2018-11-29 3D model generation method and terminal Active CN109636898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811447286.2A CN109636898B (en) 2018-11-29 2018-11-29 3D model generation method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811447286.2A CN109636898B (en) 2018-11-29 2018-11-29 3D model generation method and terminal

Publications (2)

Publication Number Publication Date
CN109636898A CN109636898A (en) 2019-04-16
CN109636898B true CN109636898B (en) 2023-08-22

Family

ID=66070261

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811447286.2A Active CN109636898B (en) 2018-11-29 2018-11-29 3D model generation method and terminal

Country Status (1)

Country Link
CN (1) CN109636898B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2413607A2 (en) * 2010-07-27 2012-02-01 LG Electronics Mobile terminal and method of controlling a three-dimensional image therein
CN103366782A (en) * 2012-04-06 2013-10-23 腾讯科技(深圳)有限公司 Method and device automatically playing expression on virtual image
CN103258078A (en) * 2013-04-02 2013-08-21 上海交通大学 Human-computer interaction virtual assembly system fusing Kinect equipment and Delmia environment
CN107551551A (en) * 2017-08-09 2018-01-09 广东欧珀移动通信有限公司 Game effect construction method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fang Wei, "Research on the Application of 3D Imaging Technology in Smartphone Interaction Design", Journal of Jiamusi University (Natural Science Edition), No. 05, 2018-09-15, full text *

Also Published As

Publication number Publication date
CN109636898A (en) 2019-04-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant