CN110597397A - Augmented reality implementation method, mobile terminal and storage medium - Google Patents

Augmented reality implementation method, mobile terminal and storage medium

Info

Publication number
CN110597397A
Authority
CN
China
Prior art keywords
augmented reality
image
virtual element
target object
mobile terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910952957.9A
Other languages
Chinese (zh)
Inventor
彭叶斌
周凡贻
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd
Priority to CN201910952957.9A
Publication of CN110597397A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an augmented reality implementation method, which comprises the following steps: acquiring posture information of a target object according to a shot image of the mobile terminal; determining a virtual element according to the posture information; and generating an augmented reality image according to the shot image and the virtual element. The invention also discloses a mobile terminal and a computer-readable storage medium, thereby achieving the effect of simplifying the control steps of the augmented reality function of the mobile terminal.

Description

Augmented reality implementation method, mobile terminal and storage medium
Technical Field
The present invention relates to the field of augmented reality technologies, and in particular, to a method for implementing augmented reality, a mobile terminal, and a computer-readable storage medium.
Background
AR (Augmented Reality) technology seamlessly fuses virtual information with the real world. It makes wide use of techniques such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing: virtual information generated by a computer, such as text, images, three-dimensional models, music and video, is simulated and then applied to the real world, where the two kinds of information complement each other, thereby 'augmenting' the real world.
In a conventional AR implementation, the user needs to manually select, in a control interface, a virtual element or an AR mode to be added to the real scene, after which the selected virtual element (or a specific virtual element of the selected AR mode) is added to the real scene. A manual change is therefore required each time the virtual element added to the real scene needs to be changed, so the control steps are cumbersome.
Disclosure of Invention
The invention mainly aims to provide an augmented reality implementation method, a mobile terminal and a computer-readable storage medium, so as to simplify the control steps of the augmented reality function of the mobile terminal.
In order to achieve the above object, the present invention provides an augmented reality implementation method, which includes the following steps:
acquiring posture information of a target object according to a shot image of the mobile terminal;
determining a virtual element according to the posture information;
and generating an augmented reality image according to the shot image and the virtual element.
Optionally, the step of acquiring the posture information of the target object according to the shot image of the mobile terminal includes:
when the shot image is acquired, identifying a target object in the shot image;
and when the shot image contains the target object, acquiring the posture information of the target object in the shot image.
Optionally, the step of acquiring the posture information of the target object in the captured image includes:
intercepting at least one picture frame containing the target object in the shot image;
acquiring the contour feature of the target object in the picture frame, and determining the posture information according to the contour feature.
Optionally, the step of acquiring the posture information of the target object according to the shot image of the mobile terminal includes:
when a plurality of target objects exist in the shot image, acquiring the posture information of each target object;
the step of determining a virtual element according to the posture information comprises:
and after the posture information of the plurality of target objects is acquired, acquiring the virtual element corresponding to the posture information of each target object.
Optionally, before the step of generating an augmented reality image according to the captured image and the virtual element, the method further includes:
acquiring the position of the target object in the shot image;
determining the display position of the virtual element corresponding to the target object in the shot image according to the position of the target object;
the step of generating an augmented reality image from the captured image and the virtual element includes:
and synthesizing the virtual element at the display position of the shot image, and taking the shot image after the virtual element is synthesized as the augmented reality image.
Optionally, after the step of generating an augmented reality image according to the captured image and the virtual element, the method further includes:
displaying the augmented reality image in real time by a display device; and/or
and when a saving instruction is received, saving the currently displayed augmented reality image.
Optionally, after the step of generating an augmented reality image according to the captured image and the virtual element, the method further includes:
acquiring a motion trajectory of the target object in the shot image, and acquiring virtual element adjustment parameters corresponding to the motion trajectory, wherein the virtual element adjustment parameters comprise other virtual elements and/or position adjustment parameters of the virtual element in the augmented reality image;
and adjusting the virtual elements in the augmented reality image according to the virtual element adjustment parameters.
Optionally, the step of adjusting the virtual element in the augmented reality image according to the virtual element adjustment parameter includes:
updating the virtual element in the augmented reality image in accordance with the other virtual element; and/or
and adjusting the display position of the virtual element in the augmented reality image according to the position adjustment parameter.
In addition, in order to achieve the above object, the present invention further provides a mobile terminal, where the mobile terminal includes a memory, a processor, and a control program of the mobile terminal stored in the memory and executable on the processor, and the control program of the mobile terminal implements the steps of the method for implementing augmented reality as described above when executed by the processor.
In addition, to achieve the above object, the present invention further provides a computer-readable storage medium having a control program of a mobile terminal stored thereon, where the control program of the mobile terminal, when executed by a processor, implements the steps of the augmented reality implementing method as described above.
According to the augmented reality implementation method, the mobile terminal and the computer-readable storage medium provided by the embodiments of the invention, the posture information of the target object is acquired according to the shot image of the mobile terminal, the virtual element is then determined according to the posture information, and the augmented reality image is generated according to the shot image and the virtual element. Because the virtual element is determined automatically from the posture information, the user does not need to select it manually, which simplifies the control steps of the augmented reality function of the mobile terminal.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating an embodiment of a method for implementing augmented reality according to the present invention;
FIG. 3 is a schematic flow chart of another embodiment of the present invention;
FIG. 4 is a schematic flow chart of another embodiment of the present invention;
FIG. 5 is a schematic flow chart of another embodiment of the present invention;
fig. 6 is a schematic diagram of a captured image displayed on a mobile terminal according to the present invention;
fig. 7 is a schematic diagram of an augmented reality image displayed in a mobile terminal according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As noted above, in the conventional AR implementation the user needs to manually select, in a control interface, a virtual element or an AR mode to be added to the real scene, and the selected virtual element (or a specific virtual element of the selected AR mode) is then added to the real scene. A manual change is thus required each time the virtual element added to the real scene needs to be changed, so the control steps are cumbersome.
In order to solve the above-mentioned drawbacks, an embodiment of the present invention provides a method for implementing augmented reality, and the main solution is as follows:
acquiring posture information of a target object according to a shot image of the mobile terminal;
determining a virtual element according to the posture information;
and generating an augmented reality image according to the shot image and the virtual element.
Because the virtual element can be determined automatically according to the posture of the target object, the user does not need to select the virtual element manually when the augmented reality image is generated, which simplifies the control steps of the augmented reality function of the mobile terminal.
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention can be terminal equipment such as a smart phone.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may comprise a Display screen (Display), an input unit such as a keyboard, etc., and the optional user interface 1003 may also comprise a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a control program of the mobile terminal.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the processor 1001 may be configured to invoke a control program of the mobile terminal stored in the memory 1005 and perform the following operations:
acquiring posture information of a target object according to a shot image of the mobile terminal;
determining a virtual element according to the posture information;
and generating an augmented reality image according to the shot image and the virtual element.
Further, the processor 1001 may call a control program of the mobile terminal stored in the memory 1005, and also perform the following operations:
when the shot image is acquired, identifying a target object in the shot image;
and when the shot image contains the target object, acquiring the posture information of the target object in the shot image.
Further, the processor 1001 may call a control program of the mobile terminal stored in the memory 1005, and also perform the following operations:
intercepting at least one picture frame containing the target object in the shot image;
acquiring the contour feature of the target object in the picture frame, and determining the posture information according to the contour feature.
Further, the processor 1001 may call a control program of the mobile terminal stored in the memory 1005, and also perform the following operations:
when a plurality of target objects exist in the shot image, acquiring the posture information of each target object;
the step of determining a virtual element according to the posture information comprises:
and after the posture information of the plurality of target objects is acquired, acquiring the virtual element corresponding to the posture information of each target object.
Further, the processor 1001 may call a control program of the mobile terminal stored in the memory 1005, and also perform the following operations:
acquiring the position of the target object in the shot image;
determining the display position of the virtual element corresponding to the target object in the shot image according to the position of the target object;
the step of generating an augmented reality image from the captured image and the virtual element includes:
and synthesizing the virtual element at the display position of the shot image, and taking the shot image after the virtual element is synthesized as the augmented reality image.
Further, the processor 1001 may call a control program of the mobile terminal stored in the memory 1005, and also perform the following operations:
displaying the augmented reality image in real time by a display device; and/or
and when a saving instruction is received, saving the currently displayed augmented reality image.
Further, the processor 1001 may call a control program of the mobile terminal stored in the memory 1005, and also perform the following operations:
acquiring a motion trajectory of the target object in the shot image, and acquiring virtual element adjustment parameters corresponding to the motion trajectory, wherein the virtual element adjustment parameters comprise other virtual elements and/or position adjustment parameters of the virtual element in the augmented reality image;
and adjusting the virtual elements in the augmented reality image according to the virtual element adjustment parameters.
Further, the processor 1001 may call a control program of the mobile terminal stored in the memory 1005, and also perform the following operations:
updating the virtual element in the augmented reality image in accordance with the other virtual element; and/or
and adjusting the display position of the virtual element in the augmented reality image according to the position adjustment parameter.
Referring to fig. 2, in an embodiment of the augmented reality implementation method of the present invention, the augmented reality implementation method includes the following steps:
step S10, acquiring the posture information of the target object according to the shot image of the mobile terminal;
In this embodiment, the mobile terminal may be a portable mobile terminal such as a mobile phone or a tablet computer, and may be provided with a camera device, for example a camera. The camera device of the mobile terminal shoots the space where the mobile terminal is located, and the data shot in real time is used as the captured image.
The space where the mobile terminal is located may further contain a target object, where the target object may be a human body and/or a preset object, so that the target object can be included in the captured image shot by the mobile terminal.
Further, when the mobile terminal obtains a captured image through the camera device, it may perform target object recognition on the captured image, and determine from the recognition result whether the target object is contained in the captured image.
Specifically, when the mobile terminal acquires a captured image, the target object is identified according to an image processing algorithm, which may be, for example, an intelligent recognition algorithm based on a convolutional neural network; the features of each object in the captured image are determined by extracting the feature parameters of each object. Alternatively, the image processing algorithm may first intercept a picture frame of the captured image, obtain the contour of each object in the frame, and determine the type of each object from its contour, thereby determining whether the target object is included in the captured image.
When the target object is not included in the captured image, the mobile terminal may directly output the captured image through its display device.
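As an editorial illustration of this recognition step (not part of the disclosed method), the following Python sketch substitutes OpenCV's stock HOG pedestrian detector for the unspecified CNN-based recognition algorithm, assuming the target object is a human body; the file name and the no-target fallback are hypothetical:

```python
import cv2

# Stock OpenCV person detector, used here as a stand-in for the
# CNN-based recognition algorithm described above (an assumption).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_target_objects(frame):
    """Return bounding boxes of detected people (empty if none)."""
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    return boxes

frame = cv2.imread("captured_frame.jpg")  # hypothetical path to one shot frame
boxes = detect_target_objects(frame) if frame is not None else []
if len(boxes) == 0:
    # No target object: the captured image would be displayed directly.
    pass
```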
When it is determined that the target object is included in the captured image, the mobile terminal may acquire the posture information of the target object. Different target objects may correspond to different kinds of posture information: for example, when the target object is a human body, the posture information may be a human body posture; when the target object is a table, the posture information may be whether an object is placed on the table.
Specifically, when obtaining the posture information of the target object, a picture frame containing the target object may first be intercepted from the captured image; the picture frame is then preprocessed, the contour feature of the target object is extracted from the preprocessed picture frame, and the posture information of the target object is determined according to the contour feature.
Alternatively, a plurality of picture frames containing the target object may be intercepted, and the contour feature of the target object acquired in each picture frame, yielding a plurality of contour features in one-to-one correspondence with the picture frames. An average contour feature is then determined from the plurality of contour features, and the posture information is determined from the average contour feature.
It should be noted that the preprocessing may include picture noise reduction, decorrelation, gray-scale processing, and the like.
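A minimal sketch of this preprocessing and contour-feature step, assuming OpenCV is available and using Hu moments as the concrete contour feature (the patent does not specify one); it also assumes the target object produces the largest contour in the frame:

```python
import cv2
import numpy as np

def contour_feature(frame):
    """Preprocess one picture frame and extract a contour feature vector."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # gray-scale processing
    denoised = cv2.GaussianBlur(gray, (5, 5), 0)          # picture noise reduction
    _thr, binary = cv2.threshold(denoised, 0, 255,
                                 cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    target = max(contours, key=cv2.contourArea)           # assumed: target = largest blob
    return cv2.HuMoments(cv2.moments(target)).flatten()   # 7 shape invariants

def average_contour_feature(frames):
    """Average the per-frame contour features over several picture frames."""
    return np.mean([contour_feature(f) for f in frames], axis=0)
```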
Optionally, the posture information of the target object in the captured image may also be obtained by performing intelligent analysis on the captured image through an intelligent robot and extracting the posture information from the analysis result.
For example, when the target object is a human body, the distances between the limbs of the human body and the positions of the limbs relative to the trunk may be acquired, and the human body posture determined from them. Alternatively, the hand features of the human body may be acquired and gesture recognition performed on them, the recognized gesture serving as the posture information.
Step S20, determining a virtual element according to the posture information;
In this embodiment, when the posture information of the target object is obtained, pre-stored posture information matching it may be queried. Each piece of pre-stored posture information may be associated with a virtual element, so that when matching pre-stored posture information is found, the virtual element associated with it can be acquired.
Illustratively, suppose the target object is a human body. When the human body posture is acquired, a pre-stored human body posture matching it can be queried on the mobile terminal. Each pre-stored human body posture is associated with a virtual element, so the virtual element associated with the matching pre-stored human body posture can be acquired.
Optionally, when no pre-stored human body posture matching the acquired human body posture can be found on the mobile terminal, a prompt indicating that the posture is unrecognizable may be output, prompting the user to change or adjust the current posture.
It should be noted that the virtual element may be an animation, a picture, a pattern, a 3D graphic, etc. pre-stored in the mobile terminal.
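A minimal sketch of this lookup, assuming the posture information has already been reduced to a label; the posture names and element file names below are hypothetical, not taken from the patent:

```python
# Pre-stored postures and their associated virtual elements (illustrative).
PRESTORED_POSTURES = {
    "finger_heart": "red_heart.png",
    "arms_raised":  "fireworks.gif",
    "table_empty":  "wine_bottle.obj",
}

def determine_virtual_element(posture_label):
    """Return the virtual element associated with a matching pre-stored posture."""
    element = PRESTORED_POSTURES.get(posture_label)
    if element is None:
        # No matching pre-stored posture: prompt the user to adjust the pose.
        print("Posture not recognized; please change or adjust the current posture.")
    return element
```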
And step S30, generating an augmented reality image according to the shot image and the virtual element.
In this embodiment, after the virtual element is determined, the virtual element and the captured image may be composited to generate the augmented reality image, so that the virtual element and the captured image are displayed simultaneously in the augmented reality image.
Illustratively, as shown in fig. 6, when the mobile terminal 10 captures data containing the human body 20 through the camera device, posture detection may be performed on the human body 20. When the detected human body posture in the captured image is a 'finger heart' gesture, a red heart-shaped virtual element may be composited onto the captured image, so that the human body and the red heart-shaped virtual element are displayed simultaneously in the augmented reality image. The resulting augmented reality image, shown in fig. 7, contains the human body 20 and the heart-shaped virtual element 30.
When the preset object is a table, the posture information of the table may include whether an object is placed on the table and/or the specific content of the objects placed on it. When no object is placed on the table, virtual elements such as a wine bottle and/or a wine glass may be composited into the captured image, so that the table and the virtual elements are displayed simultaneously in the augmented reality image.
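The compositing itself can be sketched as a straightforward alpha blend; this editorial sketch assumes the virtual element is a BGRA image (channel order matching OpenCV's BGR convention) that fits entirely within the captured frame:

```python
import numpy as np

def composite(captured, element_bgra, top_left):
    """Alpha-blend a BGRA virtual element onto the captured BGR image."""
    x, y = top_left
    h, w = element_bgra.shape[:2]
    roi = captured[y:y + h, x:x + w].astype(np.float32)
    bgr = element_bgra[:, :, :3].astype(np.float32)
    alpha = element_bgra[:, :, 3:4].astype(np.float32) / 255.0
    captured[y:y + h, x:x + w] = (alpha * bgr + (1.0 - alpha) * roi).astype(np.uint8)
    return captured  # the captured image with the element synthesized, i.e. the AR image
```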
In the technical solution disclosed in this embodiment, the posture information of the target object is acquired according to the shot image of the mobile terminal, the virtual element is then determined according to the posture information, and the augmented reality image is generated according to the shot image and the virtual element, so that the virtual element does not need to be selected manually and the control steps of the augmented reality function are simplified.
Referring to fig. 3, based on the above embodiment, in another embodiment, the step S10 further includes:
step S11, when a plurality of target objects exist in the captured image, acquiring the posture information of each target object;
the step S20 includes:
step S21, after the posture information of the plurality of target objects is acquired, acquiring the virtual element corresponding to the posture information of each target object.
In the present embodiment, the target object in the captured image is identified when the captured image is acquired. When it is recognized that a plurality of target objects are included in the captured image, the posture information of the plurality of target objects may be acquired one by one.
Specifically, when it is recognized that a plurality of target objects are contained in the captured image, the target objects may first be numbered so that the mobile terminal can distinguish them. The posture information of each target object is then acquired and associated with the number of the corresponding target object, so that the multiple pieces of posture information can be distinguished by target-object number.
Further, after the posture information of each target object has been acquired and distinguished by number, the virtual element corresponding to each piece of posture information may be acquired, and the target object corresponding to each virtual element then determined according to the number. The virtual element corresponding to each target object is thus obtained.
It should be noted that when, among the plurality of target objects, there is a target object whose posture information cannot be acquired, or whose virtual element cannot be determined from its posture information, that target object is ignored and the virtual elements corresponding to the other, recognizable target objects are acquired.
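Putting the multi-target flow together, a hedged sketch in which the estimate_posture and determine_virtual_element helpers are assumed to exist (hypothetical names, e.g. as in the earlier sketches):

```python
def elements_for_targets(detections, estimate_posture, determine_virtual_element):
    """Number each detected target object and resolve its virtual element."""
    numbered = {}
    for number, box in enumerate(detections):   # number the target objects
        posture = estimate_posture(box)
        if posture is None:
            continue                            # posture not recognizable: ignore
        element = determine_virtual_element(posture)
        if element is None:
            continue                            # no element for this posture: ignore
        numbered[number] = element              # element keyed by target number
    return numbered
```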
Further, an augmented reality image may be generated from the plurality of virtual elements and the captured image, so that the plurality of target objects and the plurality of virtual elements are displayed in the augmented reality image.
In the technical solution disclosed in this embodiment, when a plurality of target objects exist in the captured image, the target objects may first be numbered to distinguish them; the virtual elements corresponding to the target objects are then acquired respectively, and an augmented reality image containing the plurality of target objects and the plurality of virtual elements is generated. An augmented reality image whose virtual elements correspond one-to-one to the target objects is thus obtained.
Referring to fig. 4, based on any one of the above embodiments, in a further embodiment, before the step S30, the method further includes:
step S40, acquiring the position of the target object in the shot image;
and step S50, determining the display position of the virtual element corresponding to the target object in the shot image according to the position of the target object.
In this embodiment, before the augmented reality image is generated, the position of the target object in the captured image may be acquired, that is, the position of the target object within the picture of the captured image. When determining this position, a rectangular coordinate system may be established with any pixel of the picture as the origin, and the position of the target object described by its coordinates; alternatively, the position of the target object may be described by its distances to the edges of the picture.
After the position of the target object is acquired, the display position of the virtual element in the captured image can be determined according to the position of the target object.
Specifically, when a virtual element is determined, the virtual element may be associated with a preset position setting parameter, which is applied relative to the position of the target object. For example, when the position setting parameter of a virtual element is (0, 0), the display position of the virtual element is the center point of the target object; when the parameter is (1, 0), the display position is moved vertically upward by one unit amount from the center point of the target object. That is, the position setting parameter of the virtual element may be expressed as (x, y): when x is positive, the display position is moved vertically upward by |x| unit amounts from the center point of the target object, and when x is negative, vertically downward by |x| unit amounts; when y is positive, the display position is moved horizontally rightward by |y| unit amounts, and when y is negative, horizontally leftward by |y| unit amounts.
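A sketch of this mapping in pixel coordinates, following the convention above (x vertical, positive = up; y horizontal, positive = right); the unit-to-pixel scale is an assumption:

```python
def display_position(target_center, setting, unit_px=10):
    """Map an (x, y) position setting parameter to a pixel position."""
    cx, cy = target_center   # center point of the target object, in pixels
    x, y = setting           # position setting parameter of the virtual element
    # Image rows grow downward, so moving "up" decreases the row coordinate.
    return (cx + y * unit_px, cy - x * unit_px)
```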
Further, after the display position of the virtual element in the captured image is determined, the virtual element is synthesized at the display position of the captured image, and the captured image after the virtual element is synthesized is taken as the augmented reality image.
In the technical solution disclosed in this embodiment, the display position of the virtual element can be determined according to the position of the target object, which improves the fit between the virtual element and the target object and allows the display position of the virtual element to follow the movement of the target object, thereby improving the realism of the augmented reality image.
Referring to fig. 5, based on any one of the above embodiments, in a further embodiment, after step S30, the method further includes:
step S60, acquiring the motion trajectory of the target object in the captured image, and acquiring the virtual element adjustment parameters corresponding to the motion trajectory, wherein the virtual element adjustment parameters comprise other virtual elements and/or position adjustment parameters of the virtual element in the augmented reality image;
step S70, adjusting the virtual element in the augmented reality image according to the virtual element adjustment parameters.
In this embodiment, after the augmented reality image is generated, it may be output in real time through the display device of the mobile terminal; and/or, when a saving instruction triggered by the user is received, the currently displayed augmented reality image may be saved as a picture.
Further, while generating the augmented reality image in real time, the mobile terminal may also acquire the motion trajectory of the target object in the captured image. After the motion trajectory is acquired, it may be matched against pre-stored motion trajectories, and the virtual element adjustment parameter associated with the matching pre-stored trajectory acquired as the adjustment parameter corresponding to the currently acquired trajectory.
The virtual element adjustment parameters comprise other virtual elements and/or position adjustment parameters of the virtual element in the augmented reality image.
After the virtual element adjustment parameters are acquired, the virtual element in the augmented reality image can be updated according to the other virtual element, and/or the display position of the virtual element in the augmented reality image adjusted according to the position adjustment parameter.
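A sketch of the trajectory-matching step; the mean point-to-point distance metric, the match threshold, and the parameter layout are assumptions, as the patent leaves the matching method open:

```python
import numpy as np

def adjustment_for_trajectory(trajectory, prestored, threshold=50.0):
    """Return the adjustment parameters of the best-matching pre-stored trajectory.

    trajectory: (N, 2) numpy array of target positions over time.
    prestored:  list of (template_trajectory, adjustment_params) pairs.
    """
    best_params, best_dist = None, threshold
    for template, params in prestored:
        n = min(len(trajectory), len(template))
        dist = np.linalg.norm(trajectory[:n] - template[:n], axis=1).mean()
        if dist < best_dist:                  # closer than any match so far
            best_params, best_dist = params, dist
    return best_params  # e.g. {"element": "...", "offset": (x, y)}, or None
```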
In the technical solution disclosed in this embodiment, the virtual element in the augmented reality image can be adjusted according to the motion trajectory of the target object, thereby improving the realism of the augmented reality image.
In addition, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes a memory, a processor, and a control program of the mobile terminal that is stored in the memory and is executable on the processor, and the control program of the mobile terminal implements the steps of the method for implementing augmented reality according to the above embodiments when executed by the processor.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, where a control program of a mobile terminal is stored on the computer-readable storage medium, and when the control program of the mobile terminal is executed by a processor, the steps of the method for implementing augmented reality according to the above embodiments are implemented.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g. a smart phone, etc.) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An implementation method of augmented reality, the implementation method of augmented reality comprising the following steps:
acquiring posture information of a target object according to a shot image of the mobile terminal;
determining a virtual element according to the posture information;
and generating an augmented reality image according to the shot image and the virtual element.
2. The method for implementing augmented reality according to claim 1, wherein the step of acquiring the posture information of the target object according to the shot image of the mobile terminal comprises:
when the shot image is acquired, identifying a target object in the shot image;
and when the shot image contains the target object, acquiring the posture information of the target object in the shot image.
3. The method for implementing augmented reality according to claim 2, wherein the step of acquiring the posture information of the target object in the shot image comprises:
intercepting at least one picture frame containing the target object in the shot image;
and acquiring the contour feature of the target object in the picture frame, and determining the posture information according to the contour feature.
4. The method for implementing augmented reality according to any one of claims 1 to 3, wherein the step of acquiring the posture information of the target object according to the shot image of the mobile terminal includes:
when a plurality of target objects exist in the shot image, acquiring the posture information of each target object;
the step of determining a virtual element according to the posture information comprises:
and after the posture information of the plurality of target objects is acquired, acquiring the virtual element corresponding to the posture information of each target object.
5. The method for implementing augmented reality according to claim 4, wherein before the step of generating an augmented reality image according to the shot image and the virtual element, the method further comprises:
acquiring the position of the target object in the shot image;
determining the display position of the virtual element corresponding to the target object in the shot image according to the position of the target object;
the step of generating an augmented reality image according to the shot image and the virtual element includes:
and synthesizing the virtual element at the display position of the shot image, and taking the shot image after the virtual element is synthesized as the augmented reality image.
6. The method for implementing augmented reality according to claim 1, wherein after the step of generating an augmented reality image according to the shot image and the virtual element, the method further comprises:
displaying the augmented reality image in real time by a display device; and/or
and when a saving instruction is received, saving the currently displayed augmented reality image.
7. The method for implementing augmented reality according to claim 1, wherein after the step of generating an augmented reality image according to the shot image and the virtual element, the method further comprises:
acquiring a motion trajectory of the target object in the shot image, and acquiring virtual element adjustment parameters corresponding to the motion trajectory, wherein the virtual element adjustment parameters comprise other virtual elements and/or position adjustment parameters of the virtual element in the augmented reality image;
and adjusting the virtual elements in the augmented reality image according to the virtual element adjustment parameters.
8. The method for implementing augmented reality according to claim 7, wherein the step of adjusting the virtual element in the augmented reality image according to the virtual element adjustment parameter comprises:
updating the virtual element in the augmented reality image in accordance with the other virtual element; and/or
and adjusting the display position of the virtual element in the augmented reality image according to the position adjustment parameter.
9. A mobile terminal, characterized in that the mobile terminal comprises a memory, a processor and a control program of the mobile terminal stored on the memory and operable on the processor, and the control program of the mobile terminal, when executed by the processor, implements the steps of the method for implementing augmented reality according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a control program of a mobile terminal, which when executed by a processor implements the steps of the augmented reality implementing method according to any one of claims 1 to 8.
CN201910952957.9A 2019-09-29 2019-09-29 Augmented reality implementation method, mobile terminal and storage medium Pending CN110597397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910952957.9A CN110597397A (en) 2019-09-29 2019-09-29 Augmented reality implementation method, mobile terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910952957.9A CN110597397A (en) 2019-09-29 2019-09-29 Augmented reality implementation method, mobile terminal and storage medium

Publications (1)

Publication Number Publication Date
CN110597397A 2019-12-20

Family

ID=68865905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910952957.9A Pending CN110597397A (en) 2019-09-29 2019-09-29 Augmented reality implementation method, mobile terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110597397A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022055418A1 (en) * 2020-09-09 2022-03-17 脸萌有限公司 Display method and device based on augmented reality, and storage medium
US11594000B2 (en) 2020-09-09 2023-02-28 Beijing Zitiao Network Technology Co., Ltd. Augmented reality-based display method and device, and storage medium
US11989845B2 (en) 2020-09-09 2024-05-21 Beijing Zitiao Network Technology Co., Ltd. Implementation and display of augmented reality
CN113327329A (en) * 2020-12-15 2021-08-31 广州富港万嘉智能科技有限公司 Indoor projection method, device and system based on three-dimensional model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination