CN112037338A - AR image creating method, terminal device and readable storage medium

Info

Publication number: CN112037338A
Application number: CN202010903161.7A
Authority: CN (China)
Prior art keywords: image, area, whole, creating, character
Legal status: Pending (assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 沈剑锋, 付雪婷, 周凡贻, 徐娟, 陈蓉, 汪智勇
Current assignee: Shenzhen Microphone Holdings Co Ltd; Shenzhen Transsion Holdings Co Ltd
Original assignee: Shenzhen Microphone Holdings Co Ltd
Application filed by Shenzhen Microphone Holdings Co Ltd
Priority to CN202010903161.7A
Publication of CN112037338A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G06F3/167 - Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application provides an AR image creating method, a terminal device, and a readable storage medium. The method for creating the AR image comprises the following steps: after entering an AR shooting mode, acquiring a first creation instruction for an AR image; outputting a corresponding creation interface according to the first creation instruction, wherein the creation interface comprises at least an editing area and a preview area; when a selection operation input based on the editing area is acquired, creating a corresponding AR image according to the selection operation; and displaying the created AR image in the preview area. Further, each created image may be provided with at least one image label, so that the AR image can subsequently be selected and/or interacted with according to its label. In this method, the AR image is built up from each selection operation, and each created image carries an image label for selecting the AR image and/or interacting with it.

Description

AR image creating method, terminal device and readable storage medium
Technical Field
The embodiment of the application relates to the technical field of augmented reality, in particular to an AR image creating method, terminal equipment and a readable storage medium.
Background
Current terminal devices such as mobile phones have basic augmented reality functions. For example, when taking a picture with the camera, a virtual sticker can be added to achieve seamless fusion between the real image and the virtual sticker. However, such devices can only add preset virtual stickers to an image captured from the real world; they cannot create a virtual image such as a virtual sticker.
The foregoing description is provided for general background information and is not admitted to be prior art.
Summary of the application
The embodiment of the application provides an AR image creating method, a terminal device, and a readable storage medium, to solve the problem that existing terminal devices cannot create an AR image.
In one aspect, an embodiment of the present application provides a method for creating an AR image, which is applied to a terminal device, and the method for creating an AR image includes the following steps:
after entering an AR shooting mode, acquiring a first creation instruction of an AR image;
outputting a corresponding creation interface according to the first creation instruction, wherein the creation interface at least comprises an editing area and a preview area;
when a selection operation input based on the editing area is acquired, creating a corresponding AR image according to the selection operation;
displaying the created AR character in the preview area;
optionally, each created avatar is provided with at least one avatar label for subsequent selection and/or interaction with the AR avatar according to the avatar label.
Further, the editing area includes at least two elements to be selected, and when a selection operation input based on the editing area is acquired, the step of creating a corresponding AR image according to the selection operation includes:
and creating the AR image by adopting the selected elements to be selected based on the selection operation of the elements to be selected in the editing area.
Further, the editing area at least comprises a navigation area and a selection area corresponding to the navigation area, the navigation area comprises at least two different types of navigation elements, the selection area comprises at least two elements to be selected, and each navigation element corresponds to at least two elements to be selected belonging to the same category; the step of creating the AR character using the selected element to be selected based on the selection operation of the element to be selected in the editing region includes:
and outputting at least two elements to be selected which correspond to the navigation elements and belong to the same category in the selection area based on a selection instruction of the navigation elements.
Further, the navigation elements include a first type of navigation element, the element to be selected corresponding to the first type of navigation element is a first type of element to be selected, the first type of element to be selected is a basic element necessary for creating the AR image, and the step of outputting the creation interface of the AR image according to the first creation instruction includes:
and displaying the elements to be selected which are displayed by default in the elements to be selected of the first type in the preview interface.
Further, the step of outputting at least two elements to be selected belonging to the same category and corresponding to the navigation element in the selection area based on the selection instruction of the navigation element comprises:
outputting at least two elements to be selected of a first type corresponding to the navigation elements of the first type in the selection area based on a selection instruction of the navigation elements of the first type;
the step of creating the AR character using the selected element to be selected based on the selection operation of the element to be selected in the editing region includes:
and replacing the default displayed element to be selected with the selected element to be selected of the first type based on the selection instruction of the element to be selected of the first type.
Further, the navigation elements include a second type of navigation element, the element to be selected corresponding to the second type of navigation element is a second type of element to be selected, the second type of element to be selected is a decoration element of the AR character, and the step of creating the AR character by using the selected element to be selected based on the selection operation of the element to be selected in the editing area includes:
outputting at least two elements to be selected of a second type corresponding to the navigation elements of the second type in the selection area based on a selection instruction of the navigation elements of the second type;
and decorating the element to be selected of the first type in the preview area by adopting the selected element to be selected of the second type based on the selection instruction of the element to be selected of the second type.
Further, the step of decorating the first type of element to be selected in the preview area with the selected second type of element to be selected includes:
acquiring a region to be updated of the selected element to be selected of the second type in the preview region;
when the area to be updated does not have a corresponding selected element, placing the selected element to be selected of the second type on the area to be updated; and/or,
and when the corresponding selected element exists in the area to be updated, replacing the selected element by the selected element to be selected of the second type.
Further, the step of displaying the to-be-selected element displayed by default in the first type of to-be-selected element in the preview interface includes:
acquiring the gender of the AR image to be created;
and displaying the default displayed elements to be selected in the first type of elements to be selected according to the gender.
Further, the step of acquiring the gender of the AR character to be created includes:
outputting a gender selection interface prior to outputting the creation interface;
and acquiring the gender of the AR image to be created based on the gender selection instruction.
Further, the step of acquiring the gender of the AR character to be created includes:
before the creation interface is output, obtaining an AR interface acquired by a camera of the terminal device;
and identifying the gender of the target object according to the obtained AR interface, and taking the gender of the target object as the gender of the AR image to be created.
Further, the method for creating an AR character further includes:
and displaying the selected navigation element at the middle position of the navigation area based on a selection instruction of the navigation element.
Further, the selection area further includes a restore-default element, and the step of creating the AR character using the selected element to be selected based on the selection operation of the element to be selected in the editing area includes:
and restoring the previously selected replacement element under the same navigation element to the default element to be selected, based on a selection instruction for the restore-default element.
Further, in the editing region, each element to be selected is further provided with at least two corresponding color selection elements, and after the step of creating the AR image by using the selected element to be selected based on the selection operation on the element to be selected in the editing region, the method further includes:
and updating the color of the selected element to be selected by adopting the color corresponding to the selected color selection element based on the selection instruction of the color selection element.
Further, the displaying the created AR character in the preview area includes:
acquiring a target area to which the selected element to be replaced belongs on the AR image to be created, wherein optionally, the target area comprises a head area and/or a body area;
and displaying the AR image in the preview area according to the target area to which the selected element to be replaced belongs.
Further, the step of displaying the AR character in the preview area according to the target area to which the selected element to be replaced belongs includes:
when the target area to which the selected element to be replaced belongs is the head area, displaying the head area as the AR image in the preview area; and/or,
and when the target area to which the selected element to be replaced belongs is the body area, displaying the head area and the body area as the AR image in the preview area.
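By way of illustration only, the display rule above can be sketched in Kotlin as a simple conditional; the types and names below are assumptions made for the sketch and do not come from the application:

```kotlin
// Hypothetical sketch of the display rule above; types and names are assumed.
enum class TargetArea { HEAD, BODY }

data class ArFigure(val headElements: List<String>, val bodyElements: List<String>)

// Decide which portion of the AR image the preview area shows, based on the
// target area of the element that was just replaced.
fun previewPortion(figure: ArFigure, replaced: TargetArea): List<String> =
    when (replaced) {
        TargetArea.HEAD -> figure.headElements                        // head area only
        TargetArea.BODY -> figure.headElements + figure.bodyElements  // head and body
    }
```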
On the other hand, the embodiment of the application further provides a method for creating an AR image, which is applied to a terminal device, and the method for creating an AR image includes the following steps:
after entering an AR shooting mode, acquiring a first creation instruction of a whole body AR image;
outputting a corresponding creation interface according to the first creation instruction, wherein the creation interface at least comprises an editing area and a preview area;
creating a corresponding whole-body AR image based on the selection operation of the editing area;
displaying the created whole-body AR avatar in the preview area;
optionally, each created avatar is provided with at least one avatar label for subsequent selection and/or interaction with the AR avatar according to the avatar label.
Further, the editing region includes at least two elements to be selected, and the step of creating a corresponding whole-body AR avatar based on a selection operation of the editing region includes:
and creating the whole-body AR image by adopting the selected elements to be selected based on the selection operation of at least two elements to be selected in the editing area.
Further, after the step of creating the whole-body AR image with the selected elements to be selected based on the selection operation of at least two of the elements to be selected in the editing area, the method further includes:
based on each selection operation, acquiring interaction parameters corresponding to the selected element to be selected;
and controlling the AR image of the whole body according to the interaction parameters.
Further, the interactive parameter includes at least one of a text parameter, a voice parameter and an action parameter, and the step of controlling the whole body AR image according to the interactive parameter at least includes one of the following steps:
outputting text corresponding to the text parameter on the preview interface;
controlling the whole-body AR image to execute an action corresponding to the action parameter;
outputting a voice corresponding to the voice parameter, and/or controlling the whole-body AR avatar to perform a lip action corresponding to the voice.
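To make the control flow concrete, the following Kotlin sketch dispatches the three interaction parameters named above (text, voice, action); all types, names, and the lip-sync handling are assumptions, not the application's implementation:

```kotlin
// Hedged sketch: one handler per interaction parameter type described above.
sealed class InteractionParam {
    data class Text(val content: String) : InteractionParam()
    data class Voice(val audio: ByteArray, val lipSync: Boolean) : InteractionParam()
    data class Action(val animationId: String) : InteractionParam()
}

class WholeBodyAvatarController {
    fun control(params: List<InteractionParam>) {
        for (p in params) when (p) {
            is InteractionParam.Text -> showCaption(p.content)          // text on the preview interface
            is InteractionParam.Action -> playAnimation(p.animationId)  // execute the action
            is InteractionParam.Voice -> {
                playAudio(p.audio)                                      // output the voice
                if (p.lipSync) playAnimation("lip_sync")                // matching lip action
            }
        }
    }

    private fun showCaption(text: String) = println("caption: $text")
    private fun playAnimation(id: String) = println("animate: $id")
    private fun playAudio(audio: ByteArray) = println("play ${audio.size} audio bytes")
}
```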
Further, the creating method further comprises:
when a creation completion instruction is received, saving the whole-body AR image;
controlling the terminal device to jump to an AR interface in the AR shooting mode, wherein optionally, the AR interface is obtained by acquiring an image of the real world through a camera of the terminal device;
and displaying a preview icon corresponding to the saved whole-body AR image in a preset area of the AR interface.
Further, the step of displaying the preview icon corresponding to the saved whole-body AR avatar in a preset area of the AR interface includes:
acquiring image labels corresponding to the whole-body AR image, wherein optionally, different image labels correspond to different expression amplitudes and/or dance actions;
and displaying a preview icon corresponding to the whole-body AR image in the preset area according to the image tag, wherein optionally, the image tag is displayed or not displayed in the preview area.
Further, the step of obtaining the character label corresponding to the whole-body AR character includes:
and acquiring an image label corresponding to the created whole-body AR image according to the created whole-body AR image and the preset corresponding relation between the whole-body AR image and the image label, wherein optionally, the created whole-body AR image at least corresponds to one image label.
Further, the step of obtaining the image label corresponding to the created whole-body AR image according to the created whole-body AR image and the preset corresponding relationship between the whole-body AR image and the image label includes:
outputting at least two preset image labels before storing the created whole-body AR image;
and when a selection instruction of the preset image label is received, the selected image label is stored in association with the created whole-body AR image.
Further, the step of obtaining the character label corresponding to the whole-body AR character includes:
acquiring the image style of the stored whole-body AR image;
and acquiring an image label corresponding to the whole-body AR image according to the image style.
Further, the avatar label is displayed in the preview interface, the avatar labels include at least two primary avatar labels and at least two secondary avatar labels, and the step of displaying the preview icon corresponding to the whole-body AR avatar in the preset area according to the avatar label includes:
displaying a preview icon corresponding to the whole-body AR image of the selected primary image label based on a selection instruction of the primary image label;
and displaying a preview icon corresponding to the whole-body AR image of the selected second-level image label based on a selection instruction of the second-level image label under the first-level image label.
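As an illustration of the two-level label filtering, the Kotlin sketch below assumes each saved whole-body AR image carries one primary label and an optional secondary label; this data layout is an assumption made for the sketch:

```kotlin
// Assumed structure: a saved whole-body AR image with two label levels.
data class SavedAvatar(val id: String, val primaryLabel: String, val secondaryLabel: String?)

// Return the ids of the preview icons to display in the preset area for the
// selected primary label and, if given, the selected secondary label under it.
fun iconsFor(all: List<SavedAvatar>, primary: String, secondary: String? = null): List<String> =
    all.filter { it.primaryLabel == primary }
       .filter { secondary == null || it.secondaryLabel == secondary }
       .map { it.id }
```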
Further, after the step of displaying the preview icon corresponding to the saved whole-body AR avatar in the preset area of the AR interface, the method for creating an AR avatar further includes:
identifying an area to be replaced in the AR interface;
replacing the area to be replaced in the AR interface by adopting the whole-body AR image corresponding to the selected preview icon;
displaying the replaced AR interface.
Further, the method for creating an AR character further includes:
and displaying the selected preview icon at the middle position of the preset area.
Further, the method for creating an AR character further includes the steps of:
after entering an AR shooting mode, acquiring an AR interface, wherein optionally, the AR interface is obtained by acquiring an image of the real world by a camera of the terminal device;
acquiring a placing plane in the AR interface after receiving a second creating instruction of the whole body AR image;
and when receiving a placing instruction of the whole-body AR image, placing the whole-body AR image corresponding to the selected preview icon on the placing plane.
Further, when receiving a placement instruction of the whole-body AR figure, placing the whole-body AR figure corresponding to the selected preview icon on the placement plane includes:
when the AR interface is detected to be changed, the placing plane is obtained again;
controlling the whole-body AR figure to move onto the retrieved placement plane.
Further, after the step of placing the selected whole-body AR image on the placement plane upon receiving a placement instruction for the whole-body AR image, the method further includes:
when a zooming instruction of the whole-body AR image is received, acquiring a corresponding zooming proportion;
adjusting the whole-body AR image according to the scaling.
Further, after the step of placing the selected whole-body AR image on the placement plane upon receiving a placement instruction for the whole-body AR image, the method further includes:
acquiring the size of a placing space above the placing plane;
adjusting the whole-body AR figure according to the size of the placement space so that the whole-body AR figure is placed in the placement space.
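One plausible reading of this adjustment is a uniform scale that shrinks the whole-body AR image until its bounding box fits the free space above the placement plane; the sizes and the uniform-scale policy below are assumptions for illustration:

```kotlin
// Hedged sketch: fit the figure's bounding box into the placement space.
data class Size3(val x: Float, val y: Float, val z: Float)

fun fitScale(figure: Size3, space: Size3): Float =
    minOf(minOf(space.x / figure.x, space.y / figure.y),
          minOf(space.z / figure.z, 1f))  // never enlarge, only shrink to fit

fun main() {
    val figure = Size3(0.5f, 1.7f, 0.3f)  // hypothetical avatar bounds, metres
    val space = Size3(1.0f, 1.2f, 1.0f)   // free space above the detected plane
    println(fitScale(figure, space))      // ~0.706: shrink so the figure fits
}
```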
Further, after the step of placing the selected whole-body AR image on the placement plane upon receiving a placement instruction for the whole-body AR image, the method further includes:
and controlling the whole body AR image to execute preset dance actions and/or playing audio data associated with the dance actions.
Further, the step of displaying the selected preview icon at the middle position of the preset area includes:
and when a photographing and/or recording instruction sent out based on the selected preview icon is received, photographing or recording the AR interface.
On the other hand, the embodiment of the present application provides a terminal device, the terminal device includes a memory, a processor, and a creation program of an AR character stored on the memory and operable on the processor, the creation program of the AR character, when executed by the processor, implements the steps of the method for creating an AR character as described in any one of the above.
On the other hand, an embodiment of the present application provides a readable storage medium having stored thereon a creation program of an AR character, which when executed by a processor, implements the steps of the method of creating an AR character as described in any one of the above.
In the embodiment of the application, after entering an AR shooting mode, a first creation instruction for an AR image is obtained, a corresponding creation interface is output according to the first creation instruction, the corresponding AR image is created based on selection operations in the editing area of the creation interface, and the created AR image is displayed in the preview area. The AR image is thus created according to the user's selection operations and shown in the preview area: during creation, the AR image is updated after each selection operation and the latest version is displayed, so that the user can visually check the newly created AR image. If the user is not satisfied with it, the AR image can be modified, reset, and so on, which improves the ease with which a user creates and modifies the AR image. In addition, each created image is provided with an image label for selecting and/or interacting with the AR image.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic structural diagram of a terminal device in a hardware operating environment according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating an embodiment of a method for creating an AR image according to the present application;
FIG. 3 is a flowchart illustrating an embodiment of step S30 of the present application;
FIG. 4 is a flowchart illustrating an embodiment of step S31 of the present application;
FIG. 5 is a flowchart illustrating an embodiment of step S313 of the present application;
FIG. 6 is a flowchart illustrating an embodiment of step S21 of the present application;
FIG. 7 is a flowchart illustrating an embodiment of step S211 of the present application;
fig. 8 is a detailed flowchart of another embodiment of step S211 of the present application;
fig. 9 is a detailed flowchart of another embodiment of step S30 of the present application;
FIG. 10 is a detailed flowchart of another embodiment of step S30 of the present application;
FIG. 11 is a flowchart illustrating an embodiment of step S40 of the present application;
FIG. 12 is a schematic flow chart diagram illustrating a method for creating an AR image according to another embodiment of the present application;
FIG. 13 is a detailed flowchart of another embodiment of step S530 of the present application;
FIG. 14 is a schematic flow chart diagram illustrating a method for creating an AR character according to another embodiment of the present application;
FIG. 15 is a detailed flowchart of another embodiment of step S570 of the present application;
fig. 16 is a detailed flowchart of an embodiment of step S571 of the present application;
fig. 17 is a detailed flowchart of another embodiment of step S571 of the present application;
fig. 18 is a detailed flowchart of an embodiment of step S572 of the present application;
FIG. 19 is a schematic flow chart diagram illustrating a method for creating an AR character according to another embodiment of the present application;
FIG. 20 is a schematic flow chart diagram illustrating a method for creating an AR image according to another embodiment of the present application;
FIG. 21 is a schematic flow chart diagram illustrating a method for creating an AR image according to another embodiment of the present application;
FIG. 22 is a schematic flow chart diagram illustrating a method for creating an AR image according to another embodiment of the present application;
FIG. 23 is a schematic flow chart diagram illustrating a method for creating an AR character according to another embodiment of the present application;
FIG. 24 is a schematic flow chart diagram illustrating a method for creating an AR image according to another embodiment of the present application;
fig. 25 is a schematic hardware configuration diagram of another mobile terminal implementing various embodiments of the present application;
fig. 26 is a communication network system architecture diagram according to an embodiment of the present application.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a … …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element. Further, similarly-named elements, features, or aspects in different embodiments of this disclosure may have the same meaning or different meanings; their particular meaning should be determined by their interpretation in the embodiment or by further context within that embodiment.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope herein. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context. Also, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used in this specification, specify the presence of stated features, steps, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, steps or operations is inherently mutually exclusive in some way.
It should be understood that, although the steps in the flowcharts in the embodiments of the present application are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the figures may include several sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed sequentially but may be performed in turn or in alternation with other steps or with the sub-steps or stages of other steps.
It should be noted that step numbers such as S10 and S20 are used herein for the purpose of more clearly and briefly describing the corresponding content, and do not constitute a substantial limitation on the sequence, and those skilled in the art may perform S20 first and then S10 in specific implementation, which should be within the scope of the present application.
First, terms related to embodiments of the present application will be explained:
AR: Augmented Reality, a technology that skillfully fuses virtual information with the real world. It makes wide use of techniques such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing: virtual information created by a computer, such as text, images, three-dimensional models, music, and video, is simulated and then applied to the real world, where the two kinds of information complement each other, thereby augmenting the real world.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a terminal device in a hardware operating environment according to an embodiment of the present application.
The terminal device in the embodiment of the application may be a mobile terminal, for example a mobile terminal device with a display function such as a smartphone, a tablet computer, or a portable computer.
As shown in fig. 1, the terminal device may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a virtual keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal device may further include a camera, a Radio Frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. Such as light sensors, motion sensors, and other sensors. In particular, the light sensor may comprise an ambient light sensor, which may optionally adjust the brightness of the display screen according to the brightness of ambient light, and a proximity sensor, which may turn off the display screen and/or the backlight when the mobile terminal device is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile terminal device is stationary, and can be used for applications (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration), vibration recognition related functions (such as pedometer and tapping) and the like for recognizing the attitude of the mobile terminal device; of course, the mobile terminal device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described herein again.
Those skilled in the art will appreciate that the terminal device configuration shown in fig. 1 is not intended to be limiting of the terminal device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer-readable storage medium, may include therein an operating system, a network communication module, a user interface module, and a creation program of an AR character.
In the terminal device shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the creation program of the AR character stored in the memory 1005 and perform the following operations:
after entering an AR shooting mode, acquiring a first creation instruction of an AR image;
outputting a corresponding creation interface according to the first creation instruction, wherein the creation interface at least comprises an editing area and a preview area;
when a selection operation input based on the editing area is acquired, creating a corresponding AR image according to the selection operation;
displaying the created AR character in the preview area;
optionally, each created avatar is provided with at least one avatar label for subsequent selection and/or interaction with the AR avatar according to the avatar label.
A typical application scenario of the embodiment of the application is the interaction between a user and an AR image in the AR interface after the terminal device enters the AR mode.
The following describes in detail the technical solutions of the embodiments of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Referring to fig. 2, embodiment 1 of the present application provides a method for creating an AR image, which is applied to a terminal device, and the method for creating an AR image includes the following steps:
in step S10, after entering the AR photographing mode, a first creation instruction of the AR character is acquired.
In this embodiment, the AR shooting mode is a shooting mode of the camera of the terminal device. In the AR shooting mode, the terminal device collects real-world images through a front or rear camera to obtain an AR interface. The user can add a virtual AR image, such as a virtual avatar sticker, to the AR interface by operating the terminal device; alternatively, the terminal device automatically identifies a target area of the AR interface, such as the user's head or whole body, and replaces the target area with the selected AR image, obtaining an AR interaction interface that combines the real-world image and the AR image.
In this embodiment, the user may trigger the first creation instruction for the AR image through a preset virtual key on the AR interface, or by voice. Alternatively, when the user enters the AR interface for the first time and no AR image yet exists in the AR shooting mode, the terminal device automatically triggers the first creation instruction. This embodiment does not limit the triggering manner of the first creation instruction.
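The three trigger paths just described can be sketched as follows; the event names and the voice-matching rule are assumptions, not the application's logic:

```kotlin
// Hedged sketch of the trigger paths: virtual key, voice command, or
// automatic triggering on first entry when no AR image exists yet.
sealed class Trigger {
    object VirtualKeyTapped : Trigger()
    data class VoiceCommand(val utterance: String) : Trigger()
    data class EnteredArMode(val existingFigures: Int) : Trigger()
}

fun firesFirstCreationInstruction(t: Trigger): Boolean = when (t) {
    is Trigger.VirtualKeyTapped -> true
    is Trigger.VoiceCommand -> t.utterance.contains("create", ignoreCase = true)
    is Trigger.EnteredArMode -> t.existingFigures == 0  // auto-trigger on first entry
}
```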
Step S20, outputting a corresponding creation interface according to the first creation instruction, where the creation interface includes at least an editing area and a preview area.
In this embodiment, after receiving the first creation instruction for the AR image, the terminal device outputs a corresponding creation interface according to the instruction; that is, the terminal device jumps from the AR interface to the creation interface. Optionally, the creation interface includes an editing area and a preview area: the editing area is for the user to operate and to select the relevant elements required to create the AR image, and the preview area displays the AR image created after each user operation, so that the user can preview and modify it. Specifically, when the creation interface is output, a default basic AR image may be displayed in the preview area. For example, if the AR image to be created is a human, a basic human AR image, such as the head and the five facial features, may be displayed in the preview area, which increases the creation speed. It can be understood that, when the creation interface is output, the default basic AR image may instead be omitted and the AR image created from scratch.
Step S30, when a selection operation input based on the editing area is acquired, creating a corresponding AR image according to the selection operation.
In this embodiment, the user may add, delete or replace elements required for creating the AR character by a selection operation on the editing area, or the user may change a color, a size, a position, an angle, etc. of the selected elements by the selection operation, and the terminal device creates a corresponding AR character according to the selection operation of the user.
Step S40, displaying the created AR character in the preview area; optionally, each created avatar is provided with at least one avatar label for subsequent selection and/or interaction with the AR avatar according to the avatar label.
In this embodiment, in the process of creating the AR character, the created AR character is updated based on each selection operation, the newly created AR character is obtained, the AR character is displayed in the preview area, so that the user can visually check the newly created AR character, and if the user is not satisfied with the AR character, the user can modify, reset, etc. the AR character.
In this embodiment, the preview area and the editing area together form the creation interface; they may be arranged side by side or one above the other. When the terminal device is a mobile phone, the vertical arrangement is preferred, with the preview area above and the editing area below, which makes it convenient for the user to operate in the editing area while previewing the AR image. Specifically, a hide key may be provided on the editing area; by operating it, the user hides the editing area, so that the whole creation interface serves as the preview area and the created AR image is displayed in a larger display area, making it convenient to view the details of the AR image.
In this embodiment, each created image is provided with at least one image label. The image label may be set before creation of the AR image, during creation, or after creation is finished. For example, after creation is finished, since the individual elements of the AR image were already provided with labels beforehand, the labels attached to the created elements can be counted, and any label whose count exceeds a preset number is taken as a label of the AR image.
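The label-counting rule in the example above can be sketched as follows; the threshold value is an assumption:

```kotlin
// Hedged sketch: count the labels attached to the created elements and promote
// those whose count exceeds a preset number to labels of the whole AR image.
fun avatarLabels(elementLabels: List<String>, threshold: Int = 2): Set<String> =
    elementLabels.groupingBy { it }
        .eachCount()
        .filterValues { it > threshold }
        .keys
```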
In this embodiment, the image labels may be of various kinds and may differ with the type of the AR image. Taking a human AR image as an example, it may have image labels representing different genders, different styles, different vocabularies, different talents, and so on. This embodiment is not limited in this regard.
In summary, in the embodiment of the application, after entering the AR shooting mode, a first creation instruction for an AR image is obtained, a corresponding creation interface is output according to the first creation instruction, the corresponding AR image is created based on selection operations in the editing area of the creation interface, and the created AR image is displayed in the preview area. The AR image is thus created according to the user's selection operations and shown in the preview area: during creation, it is updated after each selection operation, and the latest version is displayed for the user to check visually. If the user is not satisfied with it, the AR image can be modified, reset, and so on, which improves the ease with which a user creates and modifies the AR image.
Based on the foregoing embodiment 1, in an embodiment 2 of the method for creating an AR character according to the present application, the editing area includes at least two elements to be selected, and when a selection operation input based on the editing area is obtained, the step of creating a corresponding AR character according to the selection operation includes:
and step S31, based on the selection operation of at least two elements to be selected in the editing area, adopting the selected elements to be selected to create the AR image.
In this embodiment, at least two elements to be selected are provided in the editing area. The elements to be selected are constituent elements of the AR image; for example, when the AR image to be created is a human, they may include a face, eyes, mouth, nose, ears, eyebrows, beard, pupils, makeup, glasses, hat, clothes, shoes, and the like. Each element to be selected occupies a small sub-area of the editing area, and the user selects an element by touching the sub-area where it is located. Each element to be selected also has a preset placement position in the preview area: after an element such as an eye is selected, the terminal device places it at its preset placement position in the preview area, thereby updating and displaying the AR image.
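A minimal sketch of the preset-placement mechanism follows; the categories, coordinates, and names are assumptions made for illustration:

```kotlin
// Hedged sketch: each element category has a preset position in the preview
// area, and selecting an element places (or moves) it there.
data class Point(val x: Int, val y: Int)

val presetPositions = mapOf(        // assumed preview-area anchors per category
    "eyes" to Point(120, 80),
    "mouth" to Point(120, 140),
    "hat" to Point(120, 20),
)

class PreviewArea {
    private val placed = mutableMapOf<String, Pair<String, Point>>()

    fun select(category: String, elementId: String) {
        val position = presetPositions[category] ?: return
        placed[category] = elementId to position  // update and redisplay the AR image
    }
}
```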
Referring to fig. 3, based on the above embodiments 1 and 2, in the third embodiment of the method for creating an AR image of the present application, the editing area at least includes a navigation area and a selection area corresponding to the navigation area, the navigation area includes at least two different types of navigation elements, the selection area includes at least two elements to be selected, and each navigation element corresponds to at least two elements to be selected belonging to the same category; the step of creating the AR image using the selected element to be selected based on the selection operation of the element to be selected in the editing area includes:
and step S32, outputting at least two elements to be selected which belong to the same category and correspond to the navigation elements in the selection area based on the selection instruction of the navigation elements.
In this embodiment, taking a human AR image to be created as an example, the navigation area of the editing area is divided into at least two sub-areas, in which at least two navigation elements are correspondingly placed. The navigation elements classify the elements to be selected. For example, if the elements to be selected include hairstyles of different shapes, those hairstyle elements are grouped under the hairstyle navigation element. When the user selects the hairstyle navigation element in the navigation area, the hairstyle elements of different shapes corresponding to it are displayed in the selection area corresponding to the navigation area; that is, based on a selection instruction for the hairstyle navigation element, at least two hairstyle elements belonging to the same category are output in the selection area for the user to choose from. In this embodiment, the elements to be selected are classified through the navigation elements, and when the user selects a navigation element, the same-category elements corresponding to it are displayed for selection, so that the user can more easily find the required element according to the navigation element, which improves the speed and convenience of creating the AR image.
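The navigation/selection relationship can be sketched as a simple grouping; the data layout below is an assumption:

```kotlin
// Hedged sketch: candidate elements grouped by category; selecting a
// navigation element shows its category's elements in the selection area.
data class Candidate(val id: String, val category: String)

class EditingArea(candidates: List<Candidate>) {
    private val byCategory = candidates.groupBy { it.category }

    // Called when the user selects a navigation element such as "hairstyle".
    fun onNavigationSelected(category: String): List<Candidate> =
        byCategory[category].orEmpty()  // elements shown in the selection area
}
```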
Based on the above embodiments 1 to 3, in the 4 th embodiment of the method for creating an AR character of the present application, the navigation element includes a navigation element of a first type, the element to be selected corresponding to the navigation element of the first type is an element to be selected of the first type, the element to be selected of the first type is a basic element necessary for creating the AR character, and the step of outputting a creation interface of the AR character according to the first creation instruction includes:
step S21, displaying, in the preview interface, the to-be-selected element displayed by default in the to-be-selected element of the first type.
In this embodiment, the first type of to-be-selected element is a basic element necessary for creating the AR image, and taking the to-be-created AR image as a human being as an example, the first type of to-be-selected element includes a face, eyes, mouth, nose, ears, eyebrows, and the like, and correspondingly, the first type of navigation element corresponding to the first type of to-be-selected element may include an eye navigation element, a mouth navigation element, a nose navigation element, an ear navigation element, an eyebrow navigation element, and the like.
In this embodiment, when the creation interface of the AR image is output according to the first creation instruction, the first-type elements necessary for creating the AR image, such as the face, eyes, mouth, nose, ears, and eyebrows, are obtained, and the default elements among them are displayed in the preview interface. At this point, a basic AR image has been created in the preview interface from the default first-type elements. The user can then modify this basic AR image, so that the basic AR image the user needs is created quickly.
It can be understood that the default displayed element among the first-type elements may be the element arranged in the first position under the same navigation element. For example, under the eye navigation element there are at least two eye elements of different shapes to be selected, and the eye element in the first position is taken as the default displayed eye element; the same applies to the mouth, nose, ears, eyebrows, and the like. In this way, a default basic AR image can be output when the creation interface is output.
Based on the above embodiments 1 to 4, in the 5 th embodiment of the method for creating an AR character of the present application, the step of outputting at least two elements to be selected belonging to the same category and corresponding to the navigation element in the selection area based on the selection instruction for the navigation element includes:
step S321, outputting at least two to-be-selected elements of the first type corresponding to the navigation elements of the first type in the selection area based on a selection instruction for the navigation elements of the first type;
the step of creating the AR character using the selected element to be selected based on the selection operation of the element to be selected in the editing region includes:
step S311, based on the selection instruction for the to-be-selected element of the first type, replacing the default displayed to-be-selected element with the selected to-be-selected element of the first type.
In this embodiment, after the basic AR image is displayed in the preview area, if the user wants to modify it, the user may first select the first-type navigation element corresponding to the element to be modified, such as the eye navigation element. Based on this selection instruction, the terminal device outputs in the selection area at least two first-type elements corresponding to that navigation element, such as eyes of different shapes. The user then selects one of these elements, and based on this selection instruction, the selected first-type element replaces the default displayed element, for example the default eye. The basic AR image is thereby modified and updated, creating the AR image the user needs.
Referring to fig. 4, based on the foregoing embodiments 1 to 5, in a 6 th embodiment of the method for creating an AR image according to the present application, the navigation elements include a navigation element of a second type, the to-be-selected element corresponding to the navigation element of the second type is a to-be-selected element of the second type, the to-be-selected element of the second type is a decoration element of the AR image, and the creating the AR image by using the selected to-be-selected element based on the selection operation of the to-be-selected element in the editing area includes:
step S312, outputting at least two to-be-selected elements of the second type corresponding to the navigation elements of the second type in the selection area based on the selection instruction for the navigation elements of the second type;
Step S313, based on the selection instruction for the second type of element to be selected, decorating the first type of element to be selected in the preview area with the selected second type of element to be selected.
In this embodiment, the second type of element to be selected is a decorative element of the AR image, the decorative element is not an element essential for creating the AR image, taking the AR image to be created as a human being as an example, and the second type of element to be selected is a beard, a pupil, a makeup, glasses, a hat, clothes, shoes, and the like.
In this embodiment, after the basic AR image is displayed in the preview area, if the user wants to add a decoration element on the basis of the basic AR image, the user may first select the second-type navigation element corresponding to the element to be added, such as the hat navigation element. Based on this selection instruction, the terminal device outputs in the selection area at least two second-type elements corresponding to that navigation element, such as hats of different shapes. The user then selects one of these elements, and based on this selection instruction, the selected second-type element, such as a hat, is added to the AR image. A decoration element is thereby added to the basic AR image, creating the AR image the user needs.
Referring to fig. 5, based on the above-mentioned embodiments 1 to 6, in embodiment 7 of the method for creating an AR image of the present application, further, the step of decorating the to-be-selected element of the first type in the preview area with the to-be-selected element of the selected second type includes:
step S3131, acquiring a to-be-updated region of the selected to-be-selected element of the second type in the preview region;
step S3132, when there is no corresponding selected element in the area to be updated, placing the selected element to be selected of the second type on the area to be updated; and/or the presence of a gas in the gas,
step S3133, when there is a corresponding selected element in the area to be updated, replacing the selected element with the selected element to be selected of the second type.
In this embodiment, after the selected element to be selected of the second type is acquired, the area to be updated of the selected element to be selected of the second type in the preview area is acquired, and if the selected element to be selected of the second type is a hat, the area to be updated of the hat in the preview area is acquired.
In this embodiment, when there is no corresponding selected element, such as a hat, in the area to be updated, the selected second-type element is placed on the area to be updated. When a corresponding selected element already exists in the area to be updated (a hat that differs from the currently selected one in shape or color, for example), the selected second-type element replaces it. Addition and replacement of second-type elements are thus completed, creating the AR image the user needs. Note that when a selected element already exists in the area to be updated, the selected second-type element replaces it directly, for example the currently selected hat replaces the previously selected hat, rather than first deleting the previous hat and then adding the current one; omitting the deletion step speeds up the updating of the AR image.
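The one-step add-or-replace behaviour described above can be sketched as a single map assignment; the slot model is an assumption:

```kotlin
// Hedged sketch: the region to be updated acts as a slot. Assigning into the
// slot covers both cases (empty region: place; occupied region: replace),
// with no separate delete step.
class DecorationSlots {
    private val slots = mutableMapOf<String, String>()  // region -> selected element

    fun applyDecoration(region: String, elementId: String) {
        slots[region] = elementId  // steps S3132 and S3133 in one assignment
    }
}
```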
Referring to fig. 6, based on the above embodiments 1 to 7, in an embodiment 8 of the method for creating an AR character of the present application, the step of displaying, in the preview interface, a to-be-selected element displayed by default in the to-be-selected element of the first type includes:
step S211, acquiring the gender of the AR figure to be created;
step S212, displaying, according to the gender, the elements displayed by default among the first-type elements to be selected.
In this embodiment, when the AR figure to be created is a person, the gender of the AR figure is first acquired, and the default first-type elements are displayed according to that gender. For example, when the gender is female, the default elements may include a sharper jaw, larger eyes, and a smaller mouth, nose and ears than for a male figure; when the gender is male, the default elements may include a larger jaw, smaller eyes, and a larger mouth, nose and ears than for a female figure. Displaying the default elements according to gender makes the created basic AR figure more accurate and reduces subsequent modification by the user, thereby speeding up creation of the required AR figure.
In this embodiment, the position of the terminal device may also be acquired, and the default first-type elements may be displayed according to the typical appearance of local people. For example, if the terminal device is located in Africa, where local figures typically have a larger mouth and nose, more deeply set eyes, and a fuller, more lifted hip, the default first-type elements can be made to correspond to these features, avoiding excessive subsequent modification.
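A minimal sketch of steps S211 and S212 plus the location-based variant follows, assuming a simple lookup from gender (and optionally region) to default features; the enum, field names and feature strings are invented placeholders, not values specified by the patent.

```kotlin
// Hedged sketch: choosing default first-type elements from gender and region.

enum class Gender { FEMALE, MALE }

data class FaceDefaults(
    val jaw: String, val eyes: String, val mouth: String,
    val nose: String, val ears: String
)

fun defaultsFor(gender: Gender, region: String? = null): FaceDefaults {
    var d = when (gender) {
        Gender.FEMALE -> FaceDefaults("sharper", "larger", "smaller", "smaller", "smaller")
        Gender.MALE -> FaceDefaults("larger", "smaller", "larger", "larger", "larger")
    }
    // Optionally bias the defaults toward local features so the base
    // avatar needs fewer manual corrections afterwards.
    if (region == "Africa") d = d.copy(mouth = "larger", nose = "larger")
    return d
}

fun main() {
    println(defaultsFor(Gender.FEMALE))
    println(defaultsFor(Gender.MALE, region = "Africa"))
}
```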
Further, after the gender of the AR figure to be created is acquired, the corresponding navigation elements and elements to be selected can be output according to that gender. For example, when the gender is female, the output navigation elements may include a skirt navigation element with at least two skirt elements of different styles to be selected under it; when the gender is male, the output navigation elements may include a pants navigation element with at least two pants elements of different styles to be selected under it.
referring to fig. 7, based on the above embodiments 1 to 8, in the 9 th embodiment of the method for creating an AR character of the present application, the step of obtaining the gender of the AR character to be created includes:
step S2111, before the creation interface is output, a gender selection interface is output;
step S2112, acquiring the gender of the AR image to be created based on the gender selection instruction.
In this embodiment, after the first creation instruction for the AR figure is received and before the creation interface is output, a gender selection interface is output for the user to select the gender of the AR figure to be created. The gender is acquired based on the gender selection instruction, and the default first-type elements are then displayed according to it.
Referring to fig. 8, based on the above embodiments 1 to 9, in the 10 th embodiment of the method for creating an AR figure of the present application, the step of obtaining the gender of the AR figure to be created includes:
step S2113, before the creation interface is output, an AR interface is obtained, wherein the AR interface is acquired by a camera of the terminal equipment;
step S2114, identifying the gender of the target object according to the acquired AR interface, and taking that gender as the gender of the AR figure to be created.
In this embodiment, after the first creation instruction for the AR figure is received and before the creation interface is output, an AR interface is acquired by the camera of the terminal device. The gender of the target object in the AR interface, that is, the user's image, is identified; that gender is taken as the gender of the AR figure to be created, and the default first-type elements are then displayed accordingly. By acquiring the user's gender and using it for the AR figure, an AR figure of the same gender as the user can be created.
Further, by identifying the user's features in the AR interface, the default first-type elements may be updated according to those features. For example, the features of the user's face may be acquired and the default elements updated to match, so that the created AR figure is closer to the user's facial features; likewise, the user's figure features may be acquired and the default elements updated to match, so that the created AR figure is closer to the user's build, reducing subsequent modification by the user.
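The recognition step of embodiment 10 might be sketched as below. FaceAnalyzer is a hypothetical interface standing in for whatever on-device face-detection facility the terminal provides; all names and values are illustrative assumptions.

```kotlin
// Hedged sketch of steps S2113/S2114 and the feature-update variant.

enum class Gender { FEMALE, MALE }

data class FaceFeatures(val gender: Gender, val eyeSize: Float, val mouthWidth: Float)

interface FaceAnalyzer {
    fun analyze(frame: ByteArray): FaceFeatures?
}

fun initialAvatarGender(frame: ByteArray, analyzer: FaceAnalyzer): Gender? {
    // Identify the target object in the AR interface and take its gender
    // as the gender of the avatar to be created.
    return analyzer.analyze(frame)?.gender
}

fun applyUserFeatures(features: FaceFeatures, defaults: MutableMap<String, Float>) {
    // Nudge the default first-type elements toward the user's own features
    // so the base avatar needs fewer manual edits afterwards.
    defaults["eyeSize"] = features.eyeSize
    defaults["mouthWidth"] = features.mouthWidth
}

fun main() {
    val stub = object : FaceAnalyzer {
        override fun analyze(frame: ByteArray) =
            FaceFeatures(Gender.FEMALE, eyeSize = 0.8f, mouthWidth = 0.4f)
    }
    println(initialAvatarGender(ByteArray(0), stub)) // FEMALE
}
```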
Based on the above embodiments 1 to 10, in an 11 th embodiment of the method for creating an AR character of the present application, the method further includes:
step S50, displaying the selected navigation element at the middle position of the navigation area based on the selection instruction for the navigation element.
In this embodiment, the navigation area is a strip in which at least two navigation elements are arranged in a row. Only a preset number of navigation elements, such as 5, can be displayed in the navigation area at the same time; the others are hidden. To bring hidden elements into view, the selected navigation element is displayed at the middle position of the navigation area based on the selection instruction for it. For example, at the current moment 5 navigation elements are displayed in sequence: a hair navigation element, an eye navigation element, a mouth navigation element, a nose navigation element and an eyebrow navigation element. The beard, pupil, makeup, glasses, hat, clothes and shoe navigation elements arranged after them cannot be displayed, and the element in the middle is the mouth navigation element. When the user then selects the eyebrow navigation element, the terminal device displays it in the middle of the navigation area and shows the two elements after it, the beard and pupil navigation elements, while the previously displayed hair and eye navigation elements are hidden. Each navigation element can thus be brought into view for selection, and because the number of elements shown at once is limited, the elements do not become overcrowded, which prevents misoperation, that is, the user accidentally touching two or more navigation elements at the same time.
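One way to realise this centring behaviour is a sliding window over the element list, sketched below; the function name and the clamping at the ends of the list are assumptions made for illustration.

```kotlin
// Hedged sketch of step S50: a strip that shows at most `visible` navigation
// elements and re-centres the window on whichever element is selected.

fun visibleWindow(items: List<String>, selected: Int, visible: Int = 5): List<String> {
    require(visible % 2 == 1) { "odd window so the selection can sit in the middle" }
    val half = visible / 2
    // Clamp so the window never runs off either end of the element list.
    val start = (selected - half).coerceIn(0, maxOf(0, items.size - visible))
    return items.subList(start, minOf(items.size, start + visible))
}

fun main() {
    val nav = listOf("hair", "eyes", "mouth", "nose", "eyebrows",
                     "beard", "pupils", "makeup", "glasses", "hat")
    // Selecting "eyebrows" (index 4) centres it: [mouth, nose, eyebrows, beard, pupils]
    println(visibleWindow(nav, selected = 4))
}
```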
Referring to fig. 9, based on the above embodiments 1 to 11, in a 12th embodiment of the method for creating an AR figure of the present application, the selection area further includes a restore-default element, and the step of creating the AR figure using the selected elements based on the selection operation on the elements to be selected in the editing area includes:
step S33, restoring the previously selected element to the default element under the same navigation element, based on the selection instruction for the restore-default element.
In this embodiment, the restore-default element is placed in the selection area together with the at least two elements to be selected; for example, it may be placed before, after, or between them.
In this embodiment, after the user touches the restore-default element, the previously selected element under the same navigation element is restored to the default element based on the selection instruction. For example, under the eye navigation element the selection area provides at least two eye elements of different types together with one restore-default element; after the user selects the restore-default element, the eye element chosen by the user is restored to the eye element displayed by default, re-selecting the default eye element. By selecting the restore-default element under each navigation element, the user can reset the AR figure to the basic AR figure. When the user needs to recreate the AR figure or modify it substantially, it can be reset quickly through the restore-default element without exiting the creation interface, which speeds up creation of the AR figure.
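An illustrative sketch of step S33 follows; ElementSlot and its members are hypothetical names, and the per-navigation-element default is assumed to be stored with the slot.

```kotlin
// Hedged sketch: a restore-default element that resets the choice under the
// current navigation element back to its default, without leaving the
// creation interface.

class ElementSlot(val navigation: String, private val defaultStyle: Int) {
    var currentStyle: Int = defaultStyle
        private set

    fun select(style: Int) { currentStyle = style }

    fun restoreDefault() { currentStyle = defaultStyle }
}

fun main() {
    val eyes = ElementSlot(navigation = "eyes", defaultStyle = 0)
    eyes.select(3)             // user tries a different eye element
    eyes.restoreDefault()      // user taps the restore-default element
    println(eyes.currentStyle) // 0, back to the default eye element
}
```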
Referring to fig. 10, based on the above embodiments 1 to 12, in a 13th embodiment of the method for creating an AR figure of the present application, each element to be selected in the editing area is further provided with at least two corresponding color selection elements, and after the step of creating the AR figure using the selected elements based on the selection operation on the at least two elements to be selected in the editing area, the method further includes:
step S34, updating the color of the selected element with the color corresponding to the selected color selection element, based on the selection instruction for the color selection element.
In this embodiment, each element to be selected is provided with at least two corresponding color selection elements in the editing area. After the user selects an element, its color selection elements are output in the editing area; based on the selection instruction for one of them, the color of the selected element is updated with the corresponding color, so that the element takes on the color chosen by the user. For example, when the selected element is a curly-hair element among the at least two hair style elements, the user can turn it red by selecting the red color selection element. This enriches the created AR figure and makes it better match the user's requirements.
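A minimal sketch of step S34 is given below; the color values and names are placeholders chosen for the illustration.

```kotlin
// Hedged sketch: recolouring the currently selected element with the colour
// of the tapped colour-selection element.

data class ColoredElement(val name: String, var argb: Long)

fun applyColorSelection(selected: ColoredElement, pickedColor: Long) {
    // Update the selected element's colour to the user's pick, e.g. turning
    // a curly-hair element red.
    selected.argb = pickedColor
}

fun main() {
    val curlyHair = ColoredElement("curly-hair", argb = 0xFF000000) // black
    applyColorSelection(curlyHair, pickedColor = 0xFFFF0000)        // red
    println(curlyHair) // ColoredElement(name=curly-hair, argb=4294901760)
}
```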
Referring to fig. 11, based on the above embodiments 1 to 13, in an embodiment 14 of the method for creating an AR character of the present application, the step of displaying the created AR character in the preview area includes:
step S41, acquiring the target region to which the selected element belongs on the AR figure to be created; optionally, the target region includes a head region and/or a body region;
step S42, displaying the AR figure in the preview area according to the target region to which the selected element belongs.
In this embodiment, the target region to which the selected element belongs on the AR figure is acquired; for example, a selected eye element belongs to the head region, while a selected shoe or clothes element belongs to the body region. The AR figure is then displayed in the preview area according to that target region: when the selected element belongs to the head region, only the head region is displayed, in a larger display area, so the user can view more detail; when the selected element belongs to the body region, the head region and body region are displayed together, that is, the whole figure, so the user can judge its overall look.
Based on the above embodiments 1 to 14, in a 15th embodiment of the method for creating an AR figure of the present application, the step of displaying the AR figure in the preview area according to the target region to which the selected element belongs includes:
step S421, when the target region to which the selected element belongs is the head region, displaying the head region of the AR figure in the preview area; and/or,
step S422, when the target region to which the selected element belongs is the body region, displaying the head region and the body region of the AR figure in the preview area.
In this embodiment, the display is switched explicitly by target region: when the selected element belongs to the head region, only the head region of the AR figure is shown in the preview area, enlarged so the user can view more detail; when the selected element belongs to the body region, the head and body regions are shown together so the user can view the whole figure.
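The region dispatch of steps S421/S422 might look like the sketch below; the enum, the element-to-region mapping and the mode strings are assumptions made for illustration only.

```kotlin
// Hedged sketch: zoom the preview to the head when a head element is being
// edited, otherwise show the whole figure.

enum class TargetRegion { HEAD, BODY }

fun regionOf(element: String): TargetRegion = when (element) {
    "eyes", "mouth", "nose", "hair", "hat" -> TargetRegion.HEAD
    else -> TargetRegion.BODY // clothes, shoes, stature, ...
}

fun previewMode(selectedElement: String): String =
    when (regionOf(selectedElement)) {
        // Head elements: enlarge the head so the user sees more detail.
        TargetRegion.HEAD -> "show head region only, enlarged"
        // Body elements: show head and body together for the overall look.
        TargetRegion.BODY -> "show full figure"
    }

fun main() {
    println(previewMode("eyes"))  // head only
    println(previewMode("shoes")) // full figure
}
```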
Referring to fig. 12, on the other hand, an embodiment of the present application further provides a method for creating an AR figure, applied to a terminal device; a 16th embodiment of the method includes the following steps:
step S510, after entering the AR shooting mode, acquiring a first creation instruction for a whole-body AR figure;
step S520, outputting a corresponding creation interface according to the first creation instruction, wherein the creation interface at least comprises an editing area and a preview area;
step S530, creating a corresponding whole-body AR figure based on the selection operation in the editing area;
step S540, displaying the created whole-body AR figure in the preview area; optionally, each created figure is provided with at least one figure label for subsequent selection of and/or interaction with the AR figure according to the figure label.
This embodiment may be based on any of the above embodiments 1 to 15, or may stand alone. It differs from embodiments 1 to 15 in that the AR figure is a whole-body AR figure, which is not limited to a person (i.e. the user); the following description takes the whole-body AR figure of a user as an example. Since this embodiment can be based on any of embodiments 1 to 15, it provides at least the beneficial effects of any of them, which are not repeated here.
Based on the above-mentioned 16 th embodiment, in a 17 th embodiment of the method for creating an AR character of the present application, the editing region includes at least two elements to be selected, and the step of creating a corresponding whole-body AR character based on the selection operation of the editing region includes:
step S531, creating the whole-body AR figure using the selected elements, based on the selection operation on the at least two elements to be selected in the editing area.
In this embodiment, at least two elements to be selected are disposed in the editing area; these are constituent elements of the AR figure. For example, when the AR figure to be created is a person, the elements may include stature, face, eyes, mouth, nose, ears, eyebrows, beard, pupils, makeup, glasses, hat, clothes, shoes, and the like. Each element occupies a small sub-area of the editing area, and the user selects an element by touching its sub-area. A placement position in the preview area is preset for each element; after an element such as a clothes element is selected, the terminal device places it at its preset placement position in the preview area, so that the whole-body AR figure is updated and displayed.
Referring to fig. 13, in an 18 th embodiment of the method for creating an AR figure according to the present application, based on the 17 th embodiment, the step of creating the whole-body AR figure using the selected elements to be selected based on the selection operation of at least two elements to be selected in the editing area includes:
step S532, based on each selection operation, acquiring interaction parameters corresponding to the selected element to be selected;
step S533, controlling the whole-body AR figure according to the interaction parameters.
In this embodiment, after the user selects an element, the interaction parameters corresponding to it, such as motion parameters and voice parameters, are acquired, and the whole-body AR figure is controlled according to them. For example, when the element selected by the user is a skirt, the corresponding motion parameter may make the whole-body AR figure lift the skirt and turn around while looking down or to the left and right, and/or the mouth of the figure may be controlled to perform the lip movement of saying "so beautiful" while the terminal device plays the voice "so beautiful". This keeps the creation of the whole-body AR figure from being too boring and increases the user's enjoyment while creating it.
Based on the above 18 th embodiment, in the 19 th embodiment of the method for creating an AR image of the present application, the interaction parameter includes at least one of a text parameter, a voice parameter, and an action parameter, and the step of controlling the whole-body AR image according to the interaction parameter includes at least one of the following steps:
step S5321, outputting the text corresponding to the text parameter on the preview interface;
step S5322, controlling the whole-body AR figure to perform the action corresponding to the action parameter;
step S5323, outputting the voice corresponding to the voice parameter, and/or controlling the whole-body AR figure to perform the lip action corresponding to the voice.
In this embodiment, the interaction parameter includes at least one of a text parameter, a voice parameter and an action parameter. When the interaction parameter of the selected element is a text parameter, the corresponding text is output on the preview interface; the text may be displayed for a preset duration, such as 5 seconds, or remain displayed until the user selects the next element. When it is an action parameter, the whole-body AR figure is controlled to perform the corresponding action, which may continue until the user selects the next element. When it is a voice parameter, the terminal device outputs the corresponding voice and/or controls the whole-body AR figure to perform the matching lip action. The user thus receives some interaction after every selection, which keeps the creation of the whole-body AR figure from being too boring and increases its interest.
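The dispatch of steps S5321 to S5323 might be sketched as below; the data class and the callback parameters stand in for whatever rendering and text-to-speech facilities the terminal actually provides, and are assumptions of this sketch.

```kotlin
// Hedged sketch: dispatching the interaction parameters attached to a
// selected element.

data class InteractionParams(
    val text: String? = null,   // shown on the preview interface (S5321)
    val action: String? = null, // animation the avatar performs (S5322)
    val speech: String? = null  // voice output plus matching lip motion (S5323)
)

fun interact(
    params: InteractionParams,
    showText: (String) -> Unit,
    playAction: (String) -> Unit,
    speak: (String) -> Unit
) {
    params.text?.let(showText)     // e.g. display for ~5 s or until next selection
    params.action?.let(playAction) // e.g. twirl in the newly selected skirt
    params.speech?.let(speak)      // e.g. say "so beautiful" with lip sync
}

fun main() {
    val skirtParams = InteractionParams(action = "twirl", speech = "so beautiful")
    interact(skirtParams, ::println, ::println, ::println)
}
```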
Referring to fig. 14, based on the above-mentioned 19 th embodiment, in a20 th embodiment of the method for creating an AR character of the present application, the method further includes:
step S550, when the creation-complete instruction is received, saving the whole-body AR figure;
step S560, controlling the terminal device to jump to an AR interface in the AR shooting mode, optionally, the AR interface is obtained by a camera of the terminal device acquiring an image of the real world;
step S570, displaying the preview icon corresponding to the saved whole-body AR image in a preset area of the AR interface.
In this embodiment, the creation interface may be provided with a finish key, and the user triggers the creation-complete instruction by touching it. On receiving the instruction, the terminal device saves the whole-body AR figure and then jumps to the initial interface in the AR shooting mode, that is, the AR interface obtained by the camera of the terminal device acquiring an image of the real world, and displays the preview icon corresponding to the saved figure in a preset area of that interface. Because a saved whole-body AR figure would occupy a large area, placing only its preview icon in the preset area avoids occupying too much of the AR interface. The preset area may be a strip, for example at the bottom of the AR interface, in which the preview icons of at least two saved whole-body AR figures are arranged in a row; the user selects a figure by selecting its preview icon, and the selected figure is automatically displayed in the AR interface, adding the virtual AR figure to the AR interface formed from the real world.
Referring to fig. 15, based on the above-mentioned embodiment 20, in a 21st embodiment of the method for creating an AR figure of the present application, the step of displaying the preview icon corresponding to the saved whole-body AR figure in the preset area of the AR interface includes:
step S571, acquiring the figure label corresponding to the whole-body AR figure; optionally, different figure labels correspond to different expression amplitudes and/or dance actions;
step S572, displaying the preview icon corresponding to the whole-body AR figure in the preset area according to the figure label; optionally, the figure label is displayed or not displayed in the preview area.
In this embodiment, when the preview icon corresponding to the saved whole-body AR figure is displayed in the preset area of the AR interface, the figure label of that figure is acquired. Optionally, different figure labels correspond to different expression amplitudes and/or dance actions; for example, a whole-body AR figure with label A may have large, exaggerated expressions and/or dance movements, while one with label B may have smaller, more restrained ones. The preview icons are then displayed in the preset area according to the labels; for example, the icons of all whole-body AR figures carrying label A may be placed in the same part of the preset area, or placed adjacently, so that the user can conveniently choose among whole-body AR figures sharing label A.
In this embodiment, the figure label may or may not be displayed in the preview area. When it is not displayed, the label serves by default to classify the many whole-body AR figures. When it is displayed, the user can, by selecting a certain label, have the whole-body AR figures carrying that label shown in the preset area; that is, a displayed label lets the user quickly find the whole-body AR figures under it, increasing the speed at which the user locates the desired figure.
Based on the above-mentioned 21st embodiment, in a 22nd embodiment of the method for creating an AR figure of the present application, the step of acquiring the figure label corresponding to the whole-body AR figure includes:
step S5711, acquiring the figure label corresponding to the created whole-body AR figure according to the created figure and the preset correspondence between whole-body AR figures and figure labels; optionally, the created whole-body AR figure corresponds to at least one figure label.
In this embodiment, the correspondence between whole-body AR figures and figure labels may be preset and stored; optionally, each whole-body AR figure has at least one figure label. The label corresponding to the currently created whole-body AR figure is then acquired from this correspondence.
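A minimal sketch of step S5711 follows, assuming the correspondence is a simple preset map; the map contents and the fallback label are invented examples, not values from the patent.

```kotlin
// Hedged sketch: looking up the figure labels preset for a whole-body AR figure.

data class AvatarId(val id: Int)

val presetLabels: Map<AvatarId, List<String>> = mapOf(
    AvatarId(1) to listOf("lively", "new-trend"),
    AvatarId(2) to listOf("modest")
)

fun labelsFor(avatar: AvatarId): List<String> =
    // Every created whole-body AR figure corresponds to at least one label.
    presetLabels[avatar] ?: listOf("default")

fun main() {
    println(labelsFor(AvatarId(1))) // [lively, new-trend]
    println(labelsFor(AvatarId(9))) // [default] (fallback when unmapped)
}
```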
Referring to fig. 16, based on the above 22nd embodiment, in a 23rd embodiment of the method for creating an AR figure of the present application, the step of acquiring the figure label corresponding to the created whole-body AR figure according to the created figure and the preset correspondence between whole-body AR figures and figure labels includes:
step S5712, before the created whole-body AR image is saved, at least two preset image labels are output;
step S5713, when a selection instruction for a preset figure label is received, saving the selected label in association with the created whole-body AR figure.
In this embodiment, after the user issues a save instruction for the whole-body AR figure and before the created figure is saved, at least two preset figure labels are output for the user to choose from. When a selection instruction for a preset label is received, the selected label is saved in association with the created whole-body AR figure, so that every created figure carries at least one label. Optionally, the preset labels offered to the user may be default labels, or may be created manually by the user.
Referring to fig. 17, based on the 23rd embodiment, in a 24th embodiment of the method for creating an AR figure of the present application, the step of acquiring the figure label corresponding to the whole-body AR figure includes:
step S5714, acquiring the figure style of the saved whole-body AR figure;
step S5715, acquiring the figure label corresponding to the whole-body AR figure according to the figure style.
In this embodiment, the figure style of the created whole-body AR figure is acquired, for example from its makeup, hairstyle and clothing; the style may include modest, enthusiastic, lively, new-trend and so on. The figure label is then acquired according to the style and corresponds to it, so it may likewise be modest, enthusiastic, lively or new-trend. The label of the whole-body AR figure is thus determined directly from its appearance, which is more intuitive for the user.
Referring to fig. 18, based on the above-mentioned 24th embodiment, in a 25th embodiment of the method for creating an AR figure of the present application, the figure labels are displayed in the preview area, the figure labels include at least two primary figure labels and at least two secondary figure labels, and the step of displaying the preview icons corresponding to the whole-body AR figures in the preset area according to the figure labels includes:
step S5721, displaying the preview icons corresponding to the whole-body AR figures carrying the selected primary figure label, based on the selection instruction for the primary figure label;
step S5722, displaying the preview icons corresponding to the whole-body AR figures carrying the selected secondary figure label, based on the selection instruction for the secondary figure label under the primary figure label.
In this embodiment, the figure labels are displayed in the preview area. Since each whole-body AR figure has at least one label, the preview icons of the figures carrying a selected primary label are displayed based on the user's selection instruction for that primary label; under the primary label, the preview icons of the figures carrying a selected secondary label are then displayed based on the selection instruction for that secondary label. By selecting first a primary and then a secondary label, the user gradually narrows the range of whole-body AR figures and can quickly find the figure to be selected.
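The two-level narrowing of steps S5721/S5722 might be sketched as below; the data class, the single-label-per-level simplification and the sample data are assumptions of this sketch.

```kotlin
// Hedged sketch: filtering preview icons first by a primary label, then by a
// secondary label under it.

data class WholeBodyAvatar(
    val name: String,
    val primaryLabel: String,
    val secondaryLabel: String
)

fun filterByPrimary(all: List<WholeBodyAvatar>, primary: String) =
    all.filter { it.primaryLabel == primary }

fun filterBySecondary(all: List<WholeBodyAvatar>, primary: String, secondary: String) =
    filterByPrimary(all, primary).filter { it.secondaryLabel == secondary }

fun main() {
    val avatars = listOf(
        WholeBodyAvatar("a", "lively", "dance"),
        WholeBodyAvatar("b", "lively", "sport"),
        WholeBodyAvatar("c", "modest", "dance")
    )
    println(filterByPrimary(avatars, "lively").map { it.name })            // [a, b]
    println(filterBySecondary(avatars, "lively", "dance").map { it.name }) // [a]
}
```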
referring to fig. 19, based on the above-mentioned 25 th embodiment, in a 26 th embodiment of the method for creating an AR character of the present application, after the step of displaying the preview icon corresponding to the saved whole-body AR character in the preset area of the AR interface, the method for creating an AR character further includes:
step S580, identifying an area to be replaced in the AR interface;
step S590, replacing the area to be replaced in the AR interface with the whole-body AR figure corresponding to the selected preview icon;
step S600, displaying the replaced AR interface.
In this embodiment, after entering the AR shooting mode, the terminal device may identify the area to be replaced automatically; for example, it may treat the head area or whole-body area of the user in the AR interface as the area to be replaced by default, identifying it as soon as the AR interface is acquired. Alternatively, the area to be replaced may be the area enclosed by a closed touch track drawn by the user on the touch screen of the terminal device, or a fixed area of the AR interface; for example, the AR interface may be pre-divided into 9 equal areas arranged in a 3-by-3 array, with the middle area serving as the area to be replaced.
In this embodiment, the whole-body AR figure is a created virtual figure capable of interacting with the user; it may represent various real-world objects, including people, animals, plants, daily articles and the like, which is not limited in this embodiment. The terminal device, or a server communicating with it, may be preset with a plurality of whole-body AR figures; for example, whole-body figures of people of different genders, ages, skin colors, faces and hairstyles may be preset. The user may choose any of them to automatically replace the area to be replaced in the AR interface: the selected figure occludes the area to be replaced and is displayed in its place, while the replaced area is hidden. After the selected whole-body AR figure replaces the area to be replaced, it is seamlessly fused with the AR interface, and the replaced AR interface is displayed for the user to view.
Based on the above-mentioned 26 th embodiment, in a 27 th embodiment of the method for creating an AR character of the present application, the method further includes:
step S610, displaying the selected preview icon in the middle of the preset area.
In this embodiment, the preset area is a strip in which at least two preview icons are arranged in a row. Only a preset number of icons, such as 5, can be displayed in the preset area at the same time; the others are hidden. To bring hidden icons into view, the selected preview icon is displayed at the middle position of the preset area based on the selection instruction for it. For example, at the current moment the 1st to 5th preview icons are displayed in sequence; the 6th and later icons cannot be displayed, and the icon in the middle is the 3rd. When the user then selects the 5th preview icon, the terminal device displays it in the middle of the preset area, shows the two icons after it, the 6th and 7th, and hides the previously displayed 1st and 2nd icons. Every preview icon can thus be brought into view for selection, and because the number of icons shown at once is limited, they do not become overcrowded, which prevents misoperation, that is, the user accidentally touching two or more preview icons at the same time.
Referring to fig. 20, based on the 27 th embodiment, in the 28 th embodiment of the method for creating an AR character of the present application, the method for creating an AR character further includes the following steps:
step S620, after entering an AR shooting mode, acquiring an AR interface, wherein optionally, the AR interface is obtained by acquiring an image of the real world through a camera of the terminal equipment;
step S630, when a second creation instruction of the whole-body AR image is received, a placing plane in the AR interface is obtained;
step S640, when a placement instruction for the whole-body AR figure is received, placing the whole-body AR figure corresponding to the selected preview icon on the placement plane.
In this embodiment, after entering the AR shooting mode, the AR interface is acquired; by default the terminal device searches for the area to be replaced in the AR interface and replaces it with the default AR figure. When a second creation instruction for the whole-body AR figure is received, however, the terminal device automatically acquires a placement plane in the AR interface, that is, a relatively flat plane such as the ground or a desktop. The user selects the required AR figure through its preview icon, which triggers the placement instruction; on receiving it, the terminal device places the whole-body AR figure corresponding to the selected preview icon on the placement plane. The AR figure is thus added into the AR interface and fused with it, increasing the interest of the user's interaction with the AR figure.
Based on the foregoing embodiment 28, in a 29th embodiment of the method for creating an AR figure of the present application, after the step of placing the whole-body AR figure corresponding to the selected preview icon on the placement plane when the placement instruction is received, the method further includes:
step S650, when a change in the AR interface is detected, re-acquiring the placement plane;
step S660, controlling the whole-body AR figure to move onto the re-acquired placement plane.
In this embodiment, when the user moves the terminal device, the placement plane in the AR interface may disappear. Therefore, when a change in the AR interface is detected, for example when the proportion of change between two acquired frames exceeds a preset value such as 50% within a preset time, the AR interface is considered to have changed: the placement plane is re-acquired and the whole-body AR figure is controlled to move onto it. More vividly, the figure can be moved onto the re-acquired plane with a walking motion, so that its movement is closer to that of a person in the real world, adding interest.
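A sketch of steps S650/S660 follows. The frame comparison is a deliberately naive per-pixel diff used only to illustrate the "change ratio exceeds a threshold" test; a real system would rely on its tracking data, and all names here are hypothetical.

```kotlin
// Hedged sketch: re-acquire the placement plane when the AR interface changes
// by more than a threshold (e.g. 50%), then walk the avatar onto it.

fun changeRatio(prev: IntArray, curr: IntArray): Double {
    require(prev.size == curr.size)
    val changed = prev.indices.count { prev[it] != curr[it] }
    return changed.toDouble() / prev.size
}

fun onNewFrame(
    prev: IntArray,
    curr: IntArray,
    reacquirePlane: () -> String,
    walkAvatarTo: (String) -> Unit,
    threshold: Double = 0.5
) {
    if (changeRatio(prev, curr) > threshold) {
        // The old placement plane may be gone: find a new one and move the
        // avatar there with a walking motion for realism.
        walkAvatarTo(reacquirePlane())
    }
}

fun main() {
    val prev = IntArray(100) { 0 }
    val curr = IntArray(100) { if (it < 60) 1 else 0 } // 60% of pixels changed
    onNewFrame(prev, curr, { "new-floor-plane" }, { println("walk to $it") })
}
```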
Referring to fig. 21, based on the above-mentioned 29th embodiment, in a 30th embodiment of the AR figure creating method of the present application, after the step of placing the selected whole-body AR figure on the placement plane upon receiving the placement instruction, the method further includes:
when a zooming instruction of the whole-body AR image is received, acquiring a corresponding zooming proportion;
adjusting the whole-body AR image according to the scaling.
In this embodiment, after the AR figure replaces the area where the user is located, it may be too small to occlude the user's head or body, or too large, blocking part of the background, so that it cannot match the area to be replaced exactly; likewise, after being placed on the placement plane it may not be the size the user wants. Therefore, when a zoom instruction for the whole-body AR figure is received, the corresponding zoom scale is acquired from it and the figure is adjusted accordingly. The size of the AR figure can thus be adjusted to be consistent with the area to be replaced, improving how well they match, or to meet the user's size requirement.
Specifically, when a click operation by the user in the area where the AR figure is located is detected, the figure is confirmed as selected; then, when a spreading or pinching track of two or three of the user's fingers on the touch screen is detected, a zoom instruction for the figure is confirmed as received. It is understood that the user may also issue the zoom instruction in other ways, for example by a voice control instruction that states the zoom scale of the AR figure.
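The gesture-to-scale mapping might be sketched as below, using the standard finger-distance ratio; the Point type and function names are assumptions for illustration.

```kotlin
// Hedged sketch: mapping a two-finger pinch to a zoom scale for the AR figure.

import kotlin.math.hypot

data class Point(val x: Float, val y: Float)

// Scale = current finger distance / initial finger distance.
fun pinchScale(startA: Point, startB: Point, currA: Point, currB: Point): Float {
    val d0 = hypot(startA.x - startB.x, startA.y - startB.y)
    val d1 = hypot(currA.x - currB.x, currA.y - currB.y)
    return if (d0 > 0f) d1 / d0 else 1f
}

fun main() {
    val s = pinchScale(Point(0f, 0f), Point(100f, 0f), Point(0f, 0f), Point(150f, 0f))
    println(s) // 1.5: fingers spread apart, so the figure is enlarged by 50%
}
```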
Referring to fig. 22, based on the above-mentioned embodiment 30, in a 31st embodiment of the method for creating an AR figure of the present application, after the step of placing the selected whole-body AR figure on the placement plane upon receiving the placement instruction, the method further includes:
step S670, obtaining the size of the placing space above the placing plane;
step S680, adjusting the whole body AR image according to the size of the placing space so that the whole body AR image is placed in the placing space.
In this embodiment, the size of the placement space above the placement plane is acquired, and the size of the whole-body AR figure is adjusted automatically so that the figure fits within that space. The AR figure is thus fused more realistically with the placement space in the AR interface, appearing neither too large nor too small and therefore not obtrusive.
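Steps S670/S680 might be realised with a uniform fit-to-space scale, sketched below; the units, dimensions and names are assumptions made only for this illustration.

```kotlin
// Hedged sketch: scaling the whole-body AR figure so it fits the free space
// above the placement plane.

data class Size3(val width: Float, val height: Float, val depth: Float)

fun fitScale(avatar: Size3, space: Size3): Float =
    // Uniform scale: the largest factor that keeps every dimension inside
    // the placement space, so the figure looks neither too big nor too small.
    minOf(space.width / avatar.width,
          space.height / avatar.height,
          space.depth / avatar.depth)

fun main() {
    val avatar = Size3(0.5f, 1.7f, 0.3f) // metres, illustrative
    val space = Size3(1.0f, 1.2f, 1.0f)  // free space above the plane
    println(fitScale(avatar, space))     // ~0.7059: shrink to fit the height
}
```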
Referring to fig. 23, based on the above 31st embodiment, in a 32nd embodiment of the AR figure creating method of the present application, after the step of placing the selected whole-body AR figure on the placement plane upon receiving the placement instruction, the method further includes:
step S690, controlling the whole-body AR figure to perform a preset dance action, and/or playing audio data associated with the dance action.
In this embodiment, after the whole-body AR figure corresponding to the selected preview icon is placed on the placement plane, the selected figure is controlled to perform a preset dance action and/or the audio data associated with that dance action is played, increasing the interest of the user's interaction with the AR figure.
Referring to fig. 24, based on the above-mentioned 32 th embodiment, in a 33 th embodiment of the method for creating an AR character of the present application, after the step of displaying the selected preview icon at the middle position of the preset area, the method includes:
step S700, when a photographing and/or video-recording instruction issued through the selected preview icon is received, photographing or recording the AR interface.
In this embodiment, after the whole-body AR figure has replaced the area to be replaced, or has been placed on the placement plane, the user can issue a photographing and/or video-recording instruction through the selected preview icon. For example, a short press on the selected preview icon issues a photographing instruction, while a long press, such as one lasting 3 seconds, issues a video-recording instruction; during recording, the user can still issue photographing instructions by short-pressing the icon. The actions or images of the AR figure in the AR interface are thus recorded for the user to share or review later, further increasing the interest of the interaction with the AR figure. The selected preview icon both previews the AR figure and serves as the photographing and/or recording control, avoiding extra keys on the AR interface and keeping it simpler.
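The short-press/long-press dispatch of step S700 might look like the sketch below; the 3-second threshold follows the example above, while the callback names are assumptions of this sketch.

```kotlin
// Hedged sketch: the selected preview icon doubles as a shutter; a short
// press photographs the AR interface and a long press (e.g. >= 3 s) records.

fun onPreviewIconRelease(
    pressMillis: Long,
    takePhoto: () -> Unit,
    startRecording: () -> Unit,
    longPressMillis: Long = 3000
) {
    if (pressMillis >= longPressMillis) startRecording() else takePhoto()
}

fun main() {
    onPreviewIconRelease(150, { println("photo") }, { println("record") })  // photo
    onPreviewIconRelease(3200, { println("photo") }, { println("record") }) // record
}
```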
The present application further provides a terminal device, the terminal device includes: a memory, a processor and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the method as described above.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method as described above.
Embodiments of the present application also provide a computer program product, which includes computer program code, when the computer program code runs on a computer, the computer is caused to execute the method as described in the above various possible embodiments.
An embodiment of the present application further provides a chip, which includes a memory and a processor, where the memory is used to store a computer program, and the processor is used to call and run the computer program from the memory, so that a device in which the chip is installed executes the method described in the above various possible embodiments.
In the description herein, suffixes such as "module", "component", or "unit" used to denote elements are used only for the convenience of description of the present application, and have no specific meaning by themselves. Thus, "module", "component" or "unit" may be used mixedly.
The terminal device may be implemented in various forms. For example, the terminal devices described in the present application may include mobile terminals such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and fixed terminals such as a Digital TV, a desktop computer, and the like.
The description herein will be given taking as an example a mobile terminal, and it will be understood by those skilled in the art that the configuration according to the embodiment of the present application can be applied to a fixed type terminal in addition to elements particularly used for mobile purposes.
Referring to fig. 25, which is a schematic diagram of a hardware structure of another mobile terminal for implementing various embodiments of the present application, the mobile terminal 100 may include: an RF (Radio Frequency) unit 101, a WiFi module 102, an audio output unit 103, an A/V (audio/video) input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 25 is not intended to be limiting; a mobile terminal may include more or fewer components than shown, some components may be combined, or the components may be arranged differently.
The following describes each component of the mobile terminal in detail with reference to fig. 25:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi is a short-range wireless transmission technology. Through the WiFi module 102, the mobile terminal can help the user receive and send e-mails, browse web pages, access streaming media and the like, providing wireless broadband internet access. Although fig. 25 shows the WiFi module 102, it is understood that it is not an essential part of the mobile terminal and may be omitted as needed without changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. It may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106, stored in the memory 109 (or other storage medium), or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sound (audio data) in a phone call mode, a recording mode, a voice recognition mode and the like, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated while receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that may optionally adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 1061 and/or the backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations performed by the user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. The touch panel 1071 may include a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. The touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, a switch key), a trackball, a mouse, a joystick, and the like.
Further, the touch panel 1071 may cover the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, it transmits the operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to that type. Although in fig. 25 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the mobile terminal, in some embodiments they may be integrated to implement those functions, which is not limited here.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 18, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present application, a communication network system on which the mobile terminal of the present application is based is described below.
Referring to fig. 26, fig. 26 is an architecture diagram of a communication network system provided in this embodiment. The communication network system is an LTE system of the universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE 201 may be the mobile terminal 100 described above, which is not described herein again.
The E-UTRAN 202 includes an eNodeB 2021 and other eNodeBs 2022, among other elements. The eNodeB 2021 may be connected with the other eNodeBs 2022 through a backhaul (for example, an X2 interface); the eNodeB 2021 is connected to the EPC 203 and may provide the UE 201 with access to the EPC 203.
The EPC 203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving Gateway) 2034, a PGW (PDN Gateway) 2035, a PCRF (Policy and Charging Rules Function) 2036, and the like. The MME 2031 is a control node that handles signaling between the UE 201 and the EPC 203 and provides bearer and connection management. The HSS 2032 is used to provide registers, such as a home location register (not shown), and holds subscriber-specific information about service characteristics, data rates, and the like. All user data may be sent through the SGW 2034; the PGW 2035 may provide IP address assignment for the UE 201 among other functions; and the PCRF 2036 is the policy and charging control decision point for service data flows and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
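To make the element relationships above easier to follow, here is a hypothetical Kotlin sketch of the topology; the identifiers echo the reference numerals, but the classes and methods are invented for illustration only.

data class Ue(val id: String)
class EnodeB(val id: String) { val peers = mutableListOf<EnodeB>() }
data class Epc(val mme: String, val hss: String, val sgw: String, val pgw: String, val pcrf: String)

class LteNetwork(private val epc: Epc) {
    private val attachments = mutableMapOf<String, EnodeB>()

    // An X2-style backhaul link between neighbouring eNodeBs.
    fun linkEnodeBs(a: EnodeB, b: EnodeB) {
        a.peers.add(b)
        b.peers.add(a)
    }

    // An eNodeB provides the UE with access to the EPC: signaling towards the
    // MME, user data through the SGW and PGW.
    fun attach(ue: Ue, enb: EnodeB) {
        attachments[ue.id] = enb
        println("${ue.id} -> ${enb.id} -> EPC(MME=${epc.mme}, SGW=${epc.sgw}, PGW=${epc.pgw})")
    }
}

fun main() {
    val epc = Epc("MME2031", "HSS2032", "SGW2034", "PGW2035", "PCRF2036")
    val net = LteNetwork(epc)
    val enb1 = EnodeB("eNodeB2021")
    val enb2 = EnodeB("eNodeB2022")
    net.linkEnodeBs(enb1, enb2)
    net.attach(Ue("UE201"), enb1)
}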
Although the LTE system is described as an example, it should be understood by those skilled in the art that the present application is not limited to the LTE system, but may also be applied to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. The embodiments of the present application are intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (36)

1. A method for creating an AR image, applied to a terminal device, the method comprising the following steps:
after entering an AR shooting mode, acquiring a first creation instruction of an AR image;
outputting a corresponding creation interface according to the first creation instruction, wherein the creation interface at least comprises an editing area and a preview area;
and when a selection operation input based on the editing area is acquired, creating a corresponding AR image according to the selection operation.
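By way of a non-authoritative illustration of the claim-1 flow, the Kotlin sketch below strings the steps together: enter the AR shooting mode, handle the first creation instruction by opening a creation interface with an editing area and a preview area, then build the AR image from a selection. All class and element names are hypothetical.

data class CreationInterface(val editingArea: MutableList<String>, val previewArea: MutableList<String>)

class ArImageCreator {
    private var inArMode = false

    fun enterArShootingMode() { inArMode = true }

    // Outputs a creation interface in response to the first creation instruction.
    fun onFirstCreationInstruction(): CreationInterface {
        check(inArMode) { "creation instruction is only handled in AR shooting mode" }
        return CreationInterface(
            editingArea = mutableListOf("face-shape-01", "hairstyle-02", "glasses-03"),
            previewArea = mutableListOf()
        )
    }

    // Creates the AR image from a selection made in the editing area and
    // mirrors the result into the preview area (the claim-2 refinement).
    fun onSelection(ui: CreationInterface, element: String): String {
        require(element in ui.editingArea) { "unknown element: $element" }
        val image = "AR image with $element"
        ui.previewArea.add(image)
        return image
    }
}

fun main() {
    val creator = ArImageCreator()
    creator.enterArShootingMode()
    val ui = creator.onFirstCreationInstruction()
    println(creator.onSelection(ui, "hairstyle-02"))
}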
2. The method for creating an AR image according to claim 1, further comprising, after the step of creating a corresponding AR image according to the selection operation: displaying the created AR image in the preview area.
3. The method for creating an AR image according to claim 1 or 2, wherein the editing area includes at least two elements to be selected, and the step of creating a corresponding AR image according to the selection operation when the selection operation input based on the editing area is acquired comprises:
creating the AR image with the selected elements to be selected, based on the selection operation of the elements to be selected in the editing area.
4. The method for creating an AR image according to claim 3, wherein the editing area comprises at least a navigation area and a selection area corresponding to the navigation area, the navigation area comprises at least two different types of navigation elements, the selection area comprises at least two elements to be selected, and each of the navigation elements corresponds to at least two elements to be selected belonging to the same category; the step of creating the AR image using the selected element to be selected based on the selection operation of the element to be selected in the editing area comprises:
outputting, in the selection area, at least two elements to be selected that correspond to the navigation element and belong to the same category, based on a selection instruction for the navigation element.
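To make the claim-4 relationship between the navigation area and the selection area concrete, here is a minimal Kotlin sketch; the catalogue contents and all identifiers are invented for illustration.

class EditingArea(private val catalogue: Map<String, List<String>>) {
    var selectionArea: List<String> = emptyList()
        private set

    // Selecting a navigation element outputs its same-category candidates
    // in the selection area.
    fun selectNavigationElement(navElement: String) {
        selectionArea = catalogue[navElement]
            ?: error("no candidates registered for $navElement")
    }
}

fun main() {
    val editing = EditingArea(
        mapOf(
            "hair" to listOf("short", "long", "curly"),
            "clothes" to listOf("t-shirt", "jacket")
        )
    )
    editing.selectNavigationElement("hair")
    println(editing.selectionArea) // [short, long, curly]
}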
5. The method for creating an AR image according to claim 4, wherein the navigation elements include a navigation element of a first type, the element to be selected corresponding to the navigation element of the first type is an element to be selected of the first type, and the element to be selected of the first type is a basic element necessary for creating the AR image; the step of outputting the creation interface of the AR image according to the first creation instruction comprises:
displaying, in the preview interface, the element to be selected that is displayed by default among the elements to be selected of the first type.
6. The method for creating an AR image according to claim 5, wherein the step of outputting, in the selection area, at least two elements to be selected that correspond to the navigation element and belong to the same category, based on the selection instruction for the navigation element, comprises:
outputting, in the selection area, at least two elements to be selected of the first type corresponding to the navigation element of the first type, based on a selection instruction for the navigation element of the first type;
the step of creating the AR image using the selected element to be selected based on the selection operation of the element to be selected in the editing area comprises:
replacing the element to be selected that is displayed by default with the selected element to be selected of the first type, based on a selection instruction for the element to be selected of the first type.
7. The method for creating an AR image according to claim 5, wherein the navigation elements include a navigation element of a second type, the element to be selected corresponding to the navigation element of the second type is an element to be selected of the second type, and the element to be selected of the second type is a decoration element of the AR image; the step of creating the AR image using the selected element to be selected based on the selection operation of the element to be selected in the editing area comprises:
outputting, in the selection area, at least two elements to be selected of the second type corresponding to the navigation element of the second type, based on a selection instruction for the navigation element of the second type;
and decorating the element to be selected of the first type in the preview area with the selected element to be selected of the second type, based on a selection instruction for the element to be selected of the second type.
8. The method for creating an AR image according to claim 7, wherein the step of decorating the element to be selected of the first type in the preview area with the selected element to be selected of the second type comprises:
acquiring, in the preview area, the area to be updated of the selected element to be selected of the second type;
placing the selected element to be selected of the second type on the area to be updated when the area to be updated has no corresponding selected element; and/or,
replacing the selected element with the selected element to be selected of the second type when a corresponding selected element exists in the area to be updated.
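A minimal Kotlin sketch of the claim-8 place-or-replace rule, assuming the preview area can be modelled as a map from regions to decoration elements; all names are hypothetical.

class PreviewArea {
    private val regions = mutableMapOf<String, String>() // region -> element

    // If the targeted region is empty, place the decoration element;
    // otherwise replace whatever currently occupies it.
    fun decorate(region: String, element: String) {
        val existing = regions[region]
        regions[region] = element
        println(
            if (existing == null) "placed $element on empty region '$region'"
            else "replaced $existing with $element in region '$region'"
        )
    }
}

fun main() {
    val preview = PreviewArea()
    preview.decorate("head", "red-hat")   // region empty: place
    preview.decorate("head", "blue-hat")  // region occupied: replace
}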
9. The method for creating an AR image according to claim 5, wherein the step of displaying the element to be selected that is displayed by default among the elements to be selected of the first type in the preview interface comprises:
acquiring the gender of the AR image to be created;
and displaying the default element to be selected among the elements to be selected of the first type according to the gender.
10. The method for creating an AR image according to claim 9, wherein the step of acquiring the gender of the AR image to be created comprises:
outputting a gender selection interface before outputting the creation interface;
and acquiring the gender of the AR image to be created based on a gender selection instruction.
11. The method for creating an AR image according to claim 9, wherein the step of acquiring the gender of the AR image to be created comprises:
obtaining, before the creation interface is output, an AR interface acquired by a camera of the terminal device;
and identifying the gender of the target object from the obtained AR interface, and taking the gender of the target object as the gender of the AR image to be created.
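As a reading aid for claims 9 to 11, the sketch below picks gender-specific default elements from either an explicit selection or a camera-derived result; the recogniser is a stand-in, since actual gender recognition from the AR interface is outside the scope of this sketch, and all names are hypothetical.

enum class Gender { FEMALE, MALE }

// Default first-type elements keyed by the gender of the image to be created.
fun defaultElements(gender: Gender): List<String> = when (gender) {
    Gender.FEMALE -> listOf("face-f-default", "hair-f-default")
    Gender.MALE -> listOf("face-m-default", "hair-m-default")
}

// Hypothetical recogniser standing in for the camera-based path of claim 11.
fun recogniseGenderFromArInterface(frameTag: String): Gender =
    if (frameTag.contains("female")) Gender.FEMALE else Gender.MALE

fun main() {
    val fromSelection = Gender.FEMALE                                    // claim-10 path
    val fromCamera = recogniseGenderFromArInterface("frame-female-01")   // claim-11 path
    println(defaultElements(fromSelection))
    println(defaultElements(fromCamera))
}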
12. The method for creating an AR image according to claim 4, further comprising:
displaying the selected navigation element at the middle position of the navigation area based on a selection instruction for the navigation element.
13. The method for creating an AR image according to claim 4, wherein the selection area further comprises a restore-default element, and the step of creating the AR image using the selected element to be selected based on the selection operation of the element to be selected in the editing area comprises:
restoring the previously selected replacement element under the same navigation element to the default element to be selected, based on a selection instruction for the restore-default element.
14. The method for creating an AR image according to claim 3, wherein each of the elements to be selected in the editing area is further provided with at least two corresponding color selection elements, and the method further comprises, after the step of creating the AR image using the selected elements to be selected based on the selection operation of the elements to be selected in the editing area:
updating the color of the selected element to be selected with the color corresponding to the selected color selection element, based on a selection instruction for the color selection element.
15. The method for creating an AR image according to claim 3, wherein the step of displaying the created AR image in the preview area comprises:
acquiring the target area, on the AR image to be created, of the selected element to be replaced;
and displaying the AR image in the preview area according to the target area to which the selected element to be replaced belongs.
16. The method for creating an AR image according to claim 15, wherein the step of displaying the AR image in the preview area according to the target area to which the selected element to be replaced belongs comprises:
displaying the head area as the AR image in the preview area when the target area to which the selected element to be replaced belongs is the head area; and/or,
displaying the head area and the body area as the AR image in the preview area when the target area to which the selected element to be replaced belongs is the body area.
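A small Kotlin sketch of the display rule in claims 15 and 16, under the assumption that the preview simply lists the body parts to render; the names are illustrative.

enum class TargetArea { HEAD, BODY }

// Head-area elements only require the head; body-area elements require both.
fun previewFor(area: TargetArea): List<String> = when (area) {
    TargetArea.HEAD -> listOf("head")
    TargetArea.BODY -> listOf("head", "body")
}

fun main() {
    println(previewFor(TargetArea.HEAD)) // [head]
    println(previewFor(TargetArea.BODY)) // [head, body]
}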
17. A method for creating an AR image, applied to a terminal device, the method comprising the following steps:
after entering an AR shooting mode, acquiring a first creation instruction for a whole-body AR image;
outputting a corresponding creation interface according to the first creation instruction, wherein the creation interface at least comprises an editing area and a preview area;
and creating a corresponding whole-body AR image based on a selection operation in the editing area.
18. The method for creating an AR image according to claim 17, wherein the editing area includes at least two elements to be selected, and the step of creating a corresponding whole-body AR image based on the selection operation in the editing area comprises:
creating the whole-body AR image with the selected elements to be selected, based on the selection operation of at least two elements to be selected in the editing area.
19. The method for creating an AR image according to claim 18, further comprising, after the step of creating the whole-body AR image with the selected elements to be selected based on the selection operation of at least two elements to be selected in the editing area:
acquiring, based on each selection operation, the interaction parameter corresponding to the selected element to be selected;
and controlling the whole-body AR image according to the interaction parameter.
20. The method according to claim 19, wherein the interaction parameter comprises at least one of a text parameter, a voice parameter, and an action parameter, and the step of controlling the whole-body AR image according to the interaction parameter comprises at least one of the following steps:
outputting, on the preview interface, text corresponding to the text parameter;
controlling the whole-body AR image to perform an action corresponding to the action parameter;
outputting voice corresponding to the voice parameter, and/or controlling the whole-body AR image to perform a lip action corresponding to the voice.
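For illustration, a Kotlin sketch of the interaction dispatch in claims 19 and 20; the parameter types and the avatar handle are hypothetical, not part of the disclosure.

sealed interface InteractionParameter
data class TextParameter(val text: String) : InteractionParameter
data class VoiceParameter(val clip: String, val lipSync: Boolean) : InteractionParameter
data class ActionParameter(val action: String) : InteractionParameter

// Each kind of parameter drives a different behaviour of the whole-body image.
fun control(avatar: String, parameters: List<InteractionParameter>) {
    for (p in parameters) when (p) {
        is TextParameter -> println("preview shows text: '${p.text}'")
        is ActionParameter -> println("$avatar performs action: ${p.action}")
        is VoiceParameter -> {
            println("playing voice clip: ${p.clip}")
            if (p.lipSync) println("$avatar performs matching lip action")
        }
    }
}

fun main() {
    control(
        "whole-body AR image",
        listOf(
            TextParameter("hello"),
            ActionParameter("wave"),
            VoiceParameter("greeting.ogg", lipSync = true)
        )
    )
}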
21. The method for creating an AR image according to claim 17, further comprising:
saving the whole-body AR image when a creation completion instruction is received;
controlling the terminal device to jump to an AR interface in the AR shooting mode;
and displaying the preview icon corresponding to the saved whole-body AR image in a preset area of the AR interface.
22. The method according to claim 21, wherein the step of displaying the preview icon corresponding to the saved whole-body AR image in a preset area of the AR interface comprises:
acquiring the image label corresponding to the whole-body AR image;
and displaying the preview icon corresponding to the whole-body AR image in the preset area according to the image label.
23. The method for creating an AR image according to claim 22, wherein the step of acquiring the image label corresponding to the whole-body AR image comprises:
acquiring the image label corresponding to the created whole-body AR image according to the created whole-body AR image and a preset correspondence between whole-body AR images and image labels.
24. The method according to claim 23, wherein the step of acquiring the image label corresponding to the created whole-body AR image according to the created whole-body AR image and the preset correspondence between whole-body AR images and image labels comprises:
outputting at least two preset image labels before the created whole-body AR image is saved;
and storing the selected image label in association with the created whole-body AR image when a selection instruction for a preset image label is received.
25. The method for creating an AR image according to claim 22, wherein the step of acquiring the image label corresponding to the whole-body AR image comprises:
acquiring the image style of the saved whole-body AR image;
and acquiring the image label corresponding to the whole-body AR image according to the image style.
26. The method for creating an AR image according to claim 22, wherein the image labels are displayed in the preview interface, the image labels include at least two primary image labels and at least two secondary image labels, and the step of displaying the preview icon corresponding to the whole-body AR image in the preset area according to the image label comprises:
displaying the preview icon corresponding to the whole-body AR image of the selected primary image label based on a selection instruction for the primary image label;
and displaying the preview icon corresponding to the whole-body AR image of the selected secondary image label based on a selection instruction for the secondary image label under the primary image label.
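A minimal Kotlin sketch of the label-based filtering in claims 22 to 26, assuming each saved whole-body AR image carries one primary and one secondary label; the sample data is invented for illustration.

data class SavedAvatar(val name: String, val primaryLabel: String, val secondaryLabel: String)

// Filter preview icons first by the primary label, then optionally by a
// secondary label under it.
fun previewIcons(avatars: List<SavedAvatar>, primary: String, secondary: String? = null): List<String> =
    avatars
        .filter { it.primaryLabel == primary && (secondary == null || it.secondaryLabel == secondary) }
        .map { "icon:${it.name}" }

fun main() {
    val saved = listOf(
        SavedAvatar("dancer", primaryLabel = "cartoon", secondaryLabel = "dance"),
        SavedAvatar("singer", primaryLabel = "cartoon", secondaryLabel = "music"),
        SavedAvatar("knight", primaryLabel = "realistic", secondaryLabel = "costume")
    )
    println(previewIcons(saved, primary = "cartoon"))                       // both cartoon icons
    println(previewIcons(saved, primary = "cartoon", secondary = "dance"))  // dancer only
}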
27. The method according to claim 17, wherein after the step of displaying the preview icon corresponding to the saved whole-body AR image in the preset area of the AR interface, the method for creating an AR image further comprises:
identifying the area to be replaced in the AR interface;
replacing the area to be replaced in the AR interface with the whole-body AR image corresponding to the selected preview icon;
and displaying the replaced AR interface.
28. The method for creating an AR image according to claim 27, further comprising:
displaying the selected preview icon at the middle position of the preset area.
29. The method for creating an AR image according to claim 17, further comprising the following steps:
after entering an AR shooting mode, acquiring an AR interface;
acquiring a placement plane in the AR interface after receiving a second creation instruction for the whole-body AR image;
and placing the whole-body AR image corresponding to the selected preview icon on the placement plane when a placement instruction for the whole-body AR image is received.
30. The method for creating an AR image according to claim 29, wherein after the step of placing the whole-body AR image corresponding to the selected preview icon on the placement plane when the placement instruction for the whole-body AR image is received, the method further comprises:
re-acquiring the placement plane when a change in the AR interface is detected;
and controlling the whole-body AR image to move onto the re-acquired placement plane.
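As a sketch of claims 29 and 30, the following Kotlin toy keeps track of the placement plane and moves the whole-body AR image when the AR interface changes; plane detection itself is replaced by a stand-in value, and all names are hypothetical.

data class Plane(val id: String, val height: Float)

class ArScene {
    var currentPlane = Plane("floor-1", height = 0f)
        private set
    var avatarPlane: Plane? = null
        private set

    // Place the whole-body AR image on the currently acquired plane.
    fun place(avatar: String) {
        avatarPlane = currentPlane
        println("$avatar placed on ${currentPlane.id}")
    }

    // When the AR interface changes, re-acquire the plane and move the image.
    fun onInterfaceChanged(newPlane: Plane, avatar: String) {
        currentPlane = newPlane
        avatarPlane = newPlane
        println("$avatar moved onto re-acquired plane ${newPlane.id}")
    }
}

fun main() {
    val scene = ArScene()
    scene.place("whole-body AR image")
    scene.onInterfaceChanged(Plane("table-1", height = 0.7f), "whole-body AR image")
}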
31. The method for creating an AR image according to claim 29, wherein after the step of placing the selected whole-body AR image on the placement plane when the placement instruction for the whole-body AR image is received, the method further comprises:
acquiring the corresponding scaling ratio when a zoom instruction for the whole-body AR image is received;
and adjusting the whole-body AR image according to the scaling ratio.
32. The method for creating an AR image according to claim 29, wherein after the step of placing the selected whole-body AR image on the placement plane when the placement instruction for the whole-body AR image is received, the method further comprises:
acquiring the size of the placement space above the placement plane;
and adjusting the whole-body AR image according to the size of the placement space so that the whole-body AR image fits within the placement space.
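The zoom and fit behaviour of claims 31 and 32 reduces to clamping a requested scaling ratio by the largest uniform scale at which the image still fits the placement space; a Kotlin sketch under that assumption, with illustrative sizes in metres:

data class Size3(val w: Float, val h: Float, val d: Float)

fun fittedScale(avatar: Size3, space: Size3, requestedScale: Float): Float {
    // Largest uniform scale at which the image still fits the space.
    val maxScale = minOf(space.w / avatar.w, space.h / avatar.h, space.d / avatar.d)
    return requestedScale.coerceAtMost(maxScale)
}

fun main() {
    val avatar = Size3(0.5f, 1.7f, 0.3f)
    val space = Size3(1.0f, 2.0f, 1.0f)
    println(fittedScale(avatar, space, requestedScale = 1.0f)) // 1.0 (already fits)
    println(fittedScale(avatar, space, requestedScale = 3.0f)) // clamped to about 1.18
}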
33. The method for creating an AR image according to claim 29, wherein after the step of placing the selected whole-body AR image on the placement plane when the placement instruction for the whole-body AR image is received, the method further comprises:
controlling the whole-body AR image to perform a preset dance action and/or playing audio data associated with the dance action.
34. The method for creating an AR image according to any one of claims 17 to 33, wherein after the step of displaying the selected preview icon at the middle position of the preset area, the method further comprises:
photographing or recording the AR interface when a photographing and/or recording instruction issued based on the selected preview icon is received.
35. A terminal device, characterized in that the terminal device comprises a memory, a processor, and a program for creating an AR image stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the method for creating an AR image according to any one of claims 1 to 34.
36. A readable storage medium, characterized in that a program for creating an AR image is stored thereon, wherein the program, when executed by a processor, implements the steps of the method for creating an AR image according to any one of claims 1 to 34.
CN202010903161.7A 2020-08-31 2020-08-31 AR image creating method, terminal device and readable storage medium Pending CN112037338A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010903161.7A CN112037338A (en) 2020-08-31 2020-08-31 AR image creating method, terminal device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010903161.7A CN112037338A (en) 2020-08-31 2020-08-31 AR image creating method, terminal device and readable storage medium

Publications (1)

Publication Number Publication Date
CN112037338A true CN112037338A (en) 2020-12-04

Family

ID=73590482

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010903161.7A Pending CN112037338A (en) 2020-08-31 2020-08-31 AR image creating method, terminal device and readable storage medium

Country Status (1)

Country Link
CN (1) CN112037338A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107924579A (en) * 2015-08-14 2018-04-17 麦特尔有限公司 The method for generating personalization 3D head models or 3D body models
CN109427083A (en) * 2017-08-17 2019-03-05 腾讯科技(深圳)有限公司 Display methods, device, terminal and the storage medium of three-dimensional avatars
CN110460797A (en) * 2018-05-07 2019-11-15 苹果公司 Intention camera
CN110457092A (en) * 2018-05-07 2019-11-15 苹果公司 Head portrait creates user interface
CN108845741A (en) * 2018-06-19 2018-11-20 北京百度网讯科技有限公司 A kind of generation method, client, terminal and the storage medium of AR expression
US10665037B1 (en) * 2018-11-28 2020-05-26 Seek Llc Systems and methods for generating and intelligently distributing forms of extended reality content
CN110782515A (en) * 2019-10-31 2020-02-11 北京字节跳动网络技术有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN110827378A (en) * 2019-10-31 2020-02-21 北京字节跳动网络技术有限公司 Virtual image generation method, device, terminal and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022222082A1 (en) * 2021-04-21 2022-10-27 深圳传音控股股份有限公司 Image control method, mobile terminal, and storage medium

Similar Documents

Publication Publication Date Title
US10609334B2 (en) Group video communication method and network device
US20230117197A1 (en) Bimanual gestures for controlling virtual and graphical elements
CN110168618B (en) Augmented reality control system and method
CN108898068B (en) Method and device for processing face image and computer readable storage medium
CN108712603B (en) Image processing method and mobile terminal
CN113535306B (en) Avatar creation user interface
JP2019145108A (en) Electronic device for generating image including 3d avatar with facial movements reflected thereon, using 3d avatar for face
CN107390863B (en) Device control method and device, electronic device and storage medium
WO2018150831A1 (en) Information processing device, information processing method, and recording medium
US20230419582A1 (en) Virtual object display method and apparatus, electronic device, and medium
CN110263617B (en) Three-dimensional face model obtaining method and device
CN109272473B (en) Image processing method and mobile terminal
CN109218648A (en) A kind of display control method and terminal device
WO2020233403A1 (en) Personalized face display method and apparatus for three-dimensional character, and device and storage medium
CN108876878B (en) Head portrait generation method and device
CN111047511A (en) Image processing method and electronic equipment
US20220262080A1 (en) Interfaces for presenting avatars in three-dimensional environments
CN111080747B (en) Face image processing method and electronic equipment
CN107563353B (en) Image processing method and device and mobile terminal
CN107368253B (en) Picture zooming display method, mobile terminal and storage medium
CN109859115A (en) A kind of image processing method, terminal and computer readable storage medium
CN112037338A (en) AR image creating method, terminal device and readable storage medium
CN112449098B (en) Shooting method, device, terminal and storage medium
CN111915744A (en) Interaction method, terminal and storage medium for augmented reality image
US20220189128A1 (en) Temporal segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination