CN113902520A - Augmented reality image display method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113902520A
Authority
CN
China
Prior art keywords: augmented reality, real, image, virtual object, virtual
Prior art date
Legal status
Pending
Application number
CN202111130772.3A
Other languages
Chinese (zh)
Inventor
魏焱烽
Current Assignee
Shenzhen Chenbei Technology Co Ltd
Original Assignee
Shenzhen Chenbei Technology Co Ltd
Application filed by Shenzhen Chenbei Technology Co Ltd
Priority to CN202111130772.3A
Publication of CN113902520A

Classifications

    • G06Q30/0643 — Graphical representation of items or shoppers (G06Q Commerce → G06Q30/06 Buying, selling or leasing transactions → G06Q30/0601 Electronic shopping [e-shopping] → G06Q30/0641 Shopping interfaces)
    • G06T19/006 — Mixed reality (G06T Image data processing or generation → G06T19/00 Manipulating 3D models or images for computer graphics)
    • G06T2200/04 — Indexing scheme for image data processing or generation involving 3D image data

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides an augmented reality image display method, device, equipment and storage medium. The method includes: displaying a first augmented reality image when a display instruction for a first virtual object is obtained, where the first augmented reality image is obtained by performing virtual-real fusion processing on a currently acquired real-time scene image and a first augmented reality model, and the first augmented reality model is the augmented reality model corresponding to the first virtual object; and displaying a second augmented reality image when an object matching instruction is obtained, where the object matching instruction instructs matching of the first virtual object with a real object at a target position, the second augmented reality image displays the matching result of the first virtual object and the real object, and the target position is a position within the real-time scene image in the first augmented reality image. This technical scheme makes it easy for the user to learn how well a household appliance matches the real space.

Description

Augmented reality image display method, device, equipment and storage medium
Technical Field
The present application relates to the field of augmented reality, and in particular, to a method, an apparatus, a device, and a storage medium for displaying an augmented reality image.
Background
Shopping is an activity that people frequently engage in during daily life. In the process of purchasing goods, a user may face the following problem: it is impossible to determine whether the size of the goods to be purchased is suitable for placement in the home. For example, when purchasing a home appliance, the user cannot tell whether there is room in the home to accommodate the appliance, or whether a certain room in the home can accommodate it, that is, whether the size of that room matches the size of the appliance. For these reasons the user often loses interest in purchasing, or purchases unsuitable goods, resulting in a poor shopping experience.
Disclosure of Invention
The application provides an augmented reality image display method, device, equipment and storage medium, which are used for solving the technical problem that a user cannot determine whether the size of a commodity to be purchased is appropriate in the existing shopping scene.
In a first aspect, an augmented reality image display method is provided, including:
under the condition that a display instruction of a first virtual object is obtained, displaying a first augmented reality image, wherein the first augmented reality image is obtained by performing virtual-real fusion processing on a currently obtained real-time scene image and a first augmented reality model, and the first augmented reality model is an augmented reality model corresponding to the first virtual object;
and displaying a second augmented reality image under the condition that an object matching instruction is obtained, wherein the object matching instruction is used for indicating that the first virtual object is matched with a real object at a target position, a matching result of the first virtual object and the real object is displayed in the second augmented reality image, and the target position is a position included in a real-time scene image in the first augmented reality image.
In the technical scheme, the virtual object and the real scene object are combined and displayed by carrying out virtual-real fusion processing on the augmented reality model of the virtual object and the real-time scene image and displaying the processed augmented reality image, so that a user can conveniently know the condition of placing the virtual object in a real space; and under the condition that an instruction for matching the virtual object with the real object is obtained, the augmented reality image containing the matching result of the virtual object and the real object is displayed, so that a user can conveniently know the matching condition of the virtual object and the real object. When the virtual object is a household appliance article, the matching condition of the household appliance and the real space can be conveniently known by a user, so that the user can select the more appropriate household appliance, and the shopping experience is improved.
With reference to the first aspect, in a possible implementation manner, before displaying the first augmented reality image, the method further includes: determining coordinate conversion parameters between a real-world coordinate system and an image coordinate system according to the real-time scene image; performing coordinate conversion on the first augmented reality model according to the coordinate conversion parameter to obtain a virtual image corresponding to the first virtual object; and performing image superposition on the virtual image and the real-time scene image to obtain the first augmented reality image. By determining the conversion parameters between the real world coordinate system and the image coordinate system, three-dimensional registration can be realized, so that better virtual-real fusion can be performed.
With reference to the first aspect, in a possible implementation manner, before displaying the second augmented reality image, the method further includes: determining image coordinates of the real object and the first virtual object in the first augmented reality image; after converting the image coordinates into real position coordinates in the real world, calculating the actual sizes of the real object and the first virtual object based on the converted real position coordinates; and obtaining the matching result according to the actual sizes of the real object and the first virtual object. Because real position coordinates are not affected by factors such as the camera angle, converting both the real object and the first virtual object into the real world for comparison allows a better size comparison and yields a more accurate size matching result.
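The size comparison described above can be sketched in a few lines. This is a minimal illustration only; the function names, the axis-aligned bounding-box assumption, and the numbers are mine, not the patent's:

```python
def dims_from_corners(p_min, p_max):
    # Axis-aligned extents (width, depth, height) from two opposite
    # corners expressed in real-world coordinates, i.e. after the image
    # coordinates have been converted back through the camera mapping.
    return tuple(abs(b - a) for a, b in zip(p_min, p_max))

def sizes_match(real_dims, virtual_dims):
    # The virtual object "fits" when every extent is no larger than the
    # corresponding extent of the real object (a niche, a shelf, etc.).
    return all(v <= r for r, v in zip(real_dims, virtual_dims))

# Illustrative corner coordinates in millimetres.
real = dims_from_corners((0, 0, 0), (400, 300, 500))
virtual = dims_from_corners((0, 0, 0), (380, 250, 450))
fits = sizes_match(real, virtual)
```

Here `fits` is true because every extent of the virtual object is within the real object's extents; a single oversized dimension would make the match fail.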
With reference to the first aspect, in a possible implementation manner, the method further includes: when it is determined from the matching result that the first virtual object does not match the real object, pushing and displaying target information according to the actual size of the real object, wherein the target information is recommendation information of a second virtual object that matches the actual size of the real object. When the size of the virtual object is determined to be unsuitable for the real object, recommending information of a second virtual object that matches the actual size of the real object can prompt the user to select a suitable home appliance.
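A recommendation step like this can be sketched as a size filter over a product catalog; the catalog structure and item names below are hypothetical, not from the patent:

```python
def recommend_matching(catalog, real_dims):
    # Return ids of catalog items whose (w, d, h) dimensions fit the
    # measured real-object size; these become the candidate "second
    # virtual objects" pushed to the user.
    return [item_id for item_id, dims in catalog.items()
            if all(v <= r for r, v in zip(real_dims, dims))]

# Hypothetical catalog: id -> (width, depth, height) in millimetres.
catalog = {
    "air_fryer_s": (300, 300, 350),
    "fridge_xl":   (900, 700, 1800),
}
candidates = recommend_matching(catalog, (400, 400, 400))
```

Only the small air fryer fits the 400 mm niche in this toy example, so it alone would be recommended.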
With reference to the first aspect, after displaying the first augmented reality image, the method further includes: determining a placement plane of the first virtual object in the first augmented reality image, determining to obtain the object matching instruction under the condition that a selected instruction for the placement plane is obtained, and determining the placement plane as the real object, wherein the placement plane is a plane in the real-time scene image; and/or determining to acquire the object matching instruction when acquiring a selected instruction for a target region in the first augmented reality image, and determining an object in the target region as the real object, wherein the target region is a spatial region at the target position in the real-time scene image; and/or determining to acquire the object matching instruction and determining an object corresponding to the target position as the real object under the condition that the selected instruction of the target position is acquired, wherein the target position comprises at least two position points in the real-time scene image. By determining the plane of the virtual object in the real-time scene, the area or position selected by the user, and the like, the size matching requirement of the user can be determined, so that the object in the real scene can be matched with the object in the virtual scene.
With reference to the first aspect, in a possible implementation manner, the displaying a first augmented reality image includes: determining a target plane type which can be placed by the first virtual object and a placement threshold value corresponding to the target plane type; performing plane detection on the currently acquired real-time scene image, and determining a target plane according to the type of the target plane; displaying the first augmented reality image if it is determined that the distance between the target plane and the camera center is less than the placement threshold corresponding to the target plane type. By setting the plane type and placement threshold for the virtual object, invalid placement and display of the virtual object is avoided.
With reference to the first aspect, in a possible implementation manner, before displaying the first augmented reality image, the method further includes: acquiring an identifier of a first virtual object, wherein the identifier is used for indicating the first virtual object; and acquiring the first augmented reality model and the second augmented reality model according to the identification.
In a second aspect, an augmented reality image rendering display device is provided, including:
the display device comprises a first display module and a second display module, wherein the first display module is used for displaying a first augmented reality image under the condition that a display instruction of a first virtual object is obtained, the first augmented reality image is an augmented reality image obtained by carrying out virtual-real fusion processing on a currently obtained real-time scene image and a first augmented reality model, and the first augmented reality model is an augmented reality model corresponding to the first virtual object;
the second display module is configured to display a second augmented reality image when an object matching instruction is obtained, where the object matching instruction is used to instruct to match the first virtual object with a real object at a target position, a matching result of the first virtual object and the real object is displayed in the second augmented reality image, and the target position is a position included in a real-time scene image in the first augmented reality image.
In a third aspect, there is provided a computer device comprising a memory and one or more processors and a display, the display for displaying an augmented reality image, the one or more processors for executing one or more computer programs stored in the memory, the one or more processors, when executing the one or more computer programs, causing the computer device to implement the augmented reality image display method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which a computer program is stored, the computer program comprising program instructions, which, when executed by a processor, cause the processor to perform the augmented reality image display method of the first aspect.
The application can realize the following technical effects: when the virtual object is the household appliance, the user can conveniently know the matching condition of the household appliance and the real space, so that the user can select the more appropriate household appliance, and the shopping experience is improved.
Drawings
Fig. 1 is an augmented reality image provided in an embodiment of the present application;
fig. 2 is a flowchart of an augmented reality image display method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an augmented reality image rendering and displaying apparatus according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
The technical scheme of the application can be suitable for various scenes needing to match the real scene with the virtual article. For example, the technical scheme of the application can be applied to a scene that a user purchases household equipment; when a user selects certain household equipment through a user terminal, a scene image corresponding to a real space can be obtained through the user terminal, and the user terminal generates an Augmented Reality (AR) image according to a preset AR model of the household equipment and the scene image corresponding to the real space; in addition, the user terminal can compare the household equipment with the object in the real space based on the AR image to generate a comparison result, so that the user can know the matching condition of the household equipment and the real space, and shopping experience is improved. Specifically, the technical solution of the present application can be applied to terminal devices, including but not limited to mobile phones, tablet computers, and the like.
For ease of understanding, some concepts related to the present application will be first introduced.
1. Virtual object: an object that does not exist in the real world. A virtual object is a virtual three-dimensional image created in advance in the virtual world, used to reflect an object of the real world within the virtual world; the virtual object can be rendered into the real scene.
2. Augmented reality model: the augmented reality model is a three-dimensional model which is constructed by using a three-dimensional image technology and used for showing a virtual object in a virtual world, wherein the augmented reality model can be used for restoring attributes such as size, texture and the like of an article in the real world in the virtual world so as to obtain the virtual object corresponding to the article. In some possible cases, the augmented reality model may be used to perform a one-to-one reduction on the items in the real scene to obtain virtual objects having the same size as the items in the real scene. Specifically, the augmented reality model may be obtained in advance based on three-dimensional software modeling, image or video based modeling, three-dimensional scanner based modeling, or the like.
3. Augmented reality image: the augmented reality image is a two-dimensional image obtained by superposing an image in a real scene and an image of a virtual object, and the augmented reality image contains the content of the real scene and the content of the virtual object. For example, referring to fig. 1, fig. 1 is an augmented reality image provided by an embodiment of the present application, where an air fryer Q in fig. 1 is a virtual object, and other contents (such as a desktop W, a ground G, and the like) in the image are real scene images.
4. Image coordinate system: refers to a coordinate system with the center of an image plane (also called as a camera imaging plane) as a coordinate origin, and an X axis and a Y axis respectively parallel to two vertical sides of the image plane, wherein the image coordinate system represents coordinate values by (X, Y), and the position of a pixel in the image is represented by a physical unit (such as millimeter). The image coordinate system is understood to be a coordinate system constructed in the two-dimensional image for describing the position of the object in the two-dimensional image.
5. Camera coordinate system: also known as the optical center coordinate system, is a coordinate system in which the optical center of the camera (also known as the camera center) is used as the origin of coordinates, the X-axis and the Y-axis are respectively parallel to the X-axis and the Y-axis of the image coordinate system, and the optical axis of the camera is used as the Z-axis. Wherein the distance between the optical center of the camera and the origin of the image coordinate system is equal to the focal length of the camera.
6. World coordinate system: also known as an absolute coordinate system, refers to a reference coordinate system constructed in the three-dimensional world for describing the position of the camera in the three-dimensional world, which can be used for describing the position of the object in the three-dimensional world. The world coordinate system may be divided into a real world coordinate system and a virtual world coordinate system, the real world coordinate system is a world coordinate system for describing a position of a real object in a real three-dimensional world, and the virtual world coordinate system is a world coordinate system for describing a position of a virtual object in a virtual three-dimensional world.
The technical solution of the present application is specifically described below.
Referring to fig. 2, fig. 2 is a flowchart of an augmented reality image display method provided in an embodiment of the present application, where the method is applicable to a terminal device, and as shown in fig. 2, the method includes the following steps:
and S101, displaying the first augmented reality image when the display instruction of the first virtual object is acquired.
Here, the first virtual object may refer to a virtual object selected by a user to be displayed in an overlaid manner in the real scene. In the embodiment of the present application, the first virtual object corresponds to a type/kind of object in the real world, and the first virtual object may be a virtual three-dimensional image of an air fryer, a refrigerator, a television, a socket, a door lock, and other household appliances. The display instruction of the first virtual object is used for indicating that the first virtual object is displayed in the real scene. The first augmented reality image is obtained by performing virtual-real fusion processing on the currently acquired real-time scene image and the first augmented reality model, and the first augmented reality model is an augmented reality model corresponding to the first virtual object. The real-time scene image is an image reflecting a scene in the real world.
In some embodiments, two-dimensional images of each virtual object may be displayed to the user, and when it is detected that a certain two-dimensional image is selected by the user, it is determined that a display instruction for the first virtual object is acquired, where the virtual object corresponding to the two-dimensional image selected by the user is the first virtual object; in other embodiments, the name of each virtual object may be displayed, and the display instruction for the first virtual object is obtained according to the user's selection of a virtual object name, where the virtual object whose name the user selects is the first virtual object. In still other embodiments, a visual element such as a button or a floating window for triggering display of a virtual object may be configured, and when a user operation clicking such a button or floating window is acquired, it is determined that a display instruction for the first virtual object is acquired, where the virtual object of the clicked button or floating window is the first virtual object. The present application does not limit the manner of triggering the display instruction for the first virtual object.
Before the first augmented reality image is displayed, virtual-real fusion processing needs to be performed on the real-time scene and the virtual object to obtain the first augmented reality image. In one possible implementation, the first augmented reality image may be obtained according to the following steps a 1-A3.
And A1, determining coordinate conversion parameters between the real-world coordinate system and the image coordinate system according to the real-time scene image.
Here, for the definitions of the real world coordinate system and the image coordinate system, reference may be made to the foregoing description. The coordinate conversion parameters between the real world coordinate system and the image coordinate system form a camera matrix, where the camera matrix comprises a camera internal reference matrix and a camera external reference matrix. The camera external reference matrix reflects the mapping relation between coordinates in the real world coordinate system and coordinates in the camera coordinate system, and conversion between the two can be realized through the camera external reference matrix; the camera internal reference matrix reflects the mapping relation between coordinates in the camera coordinate system and coordinates in the image coordinate system, and conversion between the two can be realized through the camera internal reference matrix; through the camera matrix, conversion between coordinates in the real world coordinate system and coordinates in the image coordinate system can be achieved. Specifically, the conversion relationship between coordinates in the real world coordinate system and coordinates in the image coordinate system can be expressed as follows:
u = P · U = K · [R, t] · U
wherein, U is the coordinate in the real world coordinate system, U is the coordinate in the image coordinate system, P is the camera matrix, K is the camera internal reference matrix, [ R, t ] is the camera external reference matrix, R is the rotation matrix, and t is the translation matrix.
In particular, the camera internal reference matrix describes internal parameters of the camera, which may include the focal length and the origin coordinates in the image coordinate system, and is generally fixed. In a specific implementation, the camera internal reference matrix can be determined based on ARToolKit. The camera external reference matrix describes the position, rotation angle, and so on of the camera in the real world, and changes along with changes of the relative rotation angle and relative position between the camera and a reference coordinate system in the real world; as the camera external reference matrix changes, the real-time scene image shot by the camera also changes. Therefore, the camera external reference matrix can be determined in real time according to the real-time scene, and the conversion relation between coordinates in the real-world coordinate system and coordinates in the image coordinate system is then determined by combining the camera external reference matrix with the determined camera internal reference matrix. In a specific implementation, feature extraction and three-dimensional reconstruction can be performed on the real-time scene image based on the Structure from Motion (SfM) algorithm, a plane in the real-time scene is solved from the reconstructed three-dimensional point cloud, the solved plane is used as a registration plane, a reference coordinate system in the real world is established according to the registration plane, and a camera external reference matrix reflecting the rotation angle and position of the camera relative to the reference coordinate system is determined based on feature points on the registration plane. It should be understood that when different registration planes are determined, the position and rotation angle of the camera relative to the registration plane are different, and the camera external reference matrix is different.
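The conversion relationship between real-world and image coordinates described above can be illustrated numerically with NumPy; the intrinsic and extrinsic values below are invented for illustration, not taken from the patent:

```python
import numpy as np

def project_point(U_world, K, R, t):
    # Project a 3-D point in the real-world coordinate system to pixel
    # coordinates, following u = P U = K [R, t] U in homogeneous form.
    U = np.asarray(U_world, dtype=float)
    X_cam = R @ U + t          # external reference: world -> camera
    uvw = K @ X_cam            # internal reference: camera -> image
    return uvw[:2] / uvw[2]    # perspective divide

# Illustrative values: 800 px focal length, principal point (320, 240),
# identity rotation, camera 5 units away along the optical axis.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
uv = project_point([0.0, 0.0, 0.0], K, R, t)
```

A point on the optical axis projects to the principal point (320, 240), which is a quick sanity check on the matrices.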
And A2, performing coordinate conversion on the first augmented reality model according to the coordinate conversion parameters to obtain a virtual image corresponding to the first virtual object.
Here, the coordinate conversion of the first augmented reality model means that the coordinates of the first augmented reality model in the virtual world coordinate system are converted into the coordinates in the image coordinate system, thereby completing the conversion from virtual three dimensions to two dimensions. In order to ensure synchronization between the real world and the virtual world, a virtual world coordinate system in which the first virtual object is located may be made to coincide with the real world coordinate system, that is, a reference coordinate system in the registration plane is used as the reference coordinate system in the virtual world, and the registration plane is used as a projection plane of the virtual object for placing the virtual object, and then conversion from the virtual world coordinate system to an image coordinate system is realized based on the camera internal reference matrix and the camera external reference matrix, thereby realizing coordinate conversion of the first augmented reality model. Specifically, the camera internal parameter matrix may be used as an OpenGL projection matrix of the first augmented reality model, and the camera external parameter matrix may be used as a model view matrix of the first augmented reality model, and the first augmented reality model is subjected to projection conversion to obtain a virtual image corresponding to the first virtual object.
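Using the camera internal reference matrix as an OpenGL projection matrix, as described above, requires remapping the intrinsics into OpenGL's clip-space convention. The sketch below shows one common form of that remapping; the exact signs and the row for the principal-point offset vary between rendering setups, and the numeric values are illustrative only:

```python
import numpy as np

def opengl_projection_from_K(K, width, height, near, far):
    # Build a 4x4 OpenGL-style perspective projection matrix from a
    # pinhole intrinsic matrix K, so the virtual object is rendered with
    # the same perspective as the real camera.
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    P = np.zeros((4, 4))
    P[0, 0] = 2.0 * fx / width
    P[1, 1] = 2.0 * fy / height
    P[0, 2] = 1.0 - 2.0 * cx / width    # principal-point offset
    P[1, 2] = 2.0 * cy / height - 1.0
    P[2, 2] = -(far + near) / (far - near)
    P[2, 3] = -2.0 * far * near / (far - near)
    P[3, 2] = -1.0                      # perspective divide by -z
    return P

# Illustrative intrinsics (not from the patent).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P = opengl_projection_from_K(K, 640, 480, 0.1, 100.0)
```

The camera external reference matrix would then be loaded as the model-view matrix, completing the projection conversion of the augmented reality model.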
And A3, carrying out image superposition on the virtual image and the real-time scene image to obtain a first augmented reality image.
Because the real world coordinate system coincides with the virtual world coordinate system, and the coordinate conversion parameters between the real world coordinate system and the image coordinate system have been determined, the coordinate conversion parameters between the virtual world coordinate system and the image coordinate system can likewise be determined; the augmented reality model is thus converted into a two-dimensional virtual image, enabling better virtual-real fusion.
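The image superposition in step A3 amounts to compositing the rendered virtual image over the real-time scene image wherever the virtual render produced a pixel. A toy sketch with nested lists standing in for image arrays (real code would use NumPy/OpenCV buffers; the mask-based approach is an assumption about the compositing mechanism):

```python
def overlay(scene, virtual, mask):
    # Per-pixel composite: where mask is truthy take the virtual pixel
    # (the rendered first virtual object), otherwise keep the real-time
    # scene pixel. All three inputs have the same height and width.
    return [[v if m else s
             for s, v, m in zip(srow, vrow, mrow)]
            for srow, vrow, mrow in zip(scene, virtual, mask)]

scene = [[1, 1], [1, 1]]     # real-time scene image (toy grey values)
virtual = [[9, 9], [9, 9]]   # rendered virtual image
mask = [[1, 0], [0, 1]]      # where the virtual object covers the frame
ar_image = overlay(scene, virtual, mask)
```

The result keeps scene pixels outside the mask and virtual pixels inside it, which is the first augmented reality image in miniature.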
Before the first augmented reality image is rendered, an augmented reality model of the first virtual object needs to be acquired. In some cases, the augmented reality model of the first virtual object is pre-constructed and stored in the cloud. Specifically, an identifier of the first virtual object may be obtained, where the identifier of the first virtual object is used to uniquely indicate the first virtual object; and then acquiring an augmented reality model of the first virtual object according to the identification of the first virtual object. The identification of the first virtual object includes, but is not limited to, the aforementioned two-dimensional image, name, and the like. In specific implementation, the identifier of the first virtual object may be sent to the cloud device to request the cloud device to send the augmented reality model of the first virtual object.
Optionally, in some possible cases, after the augmented reality model of the first virtual object is acquired, it may further be determined whether the first augmented reality model can be placed in the currently acquired real-time scene image, that is, whether the currently acquired real-time scene image is suitable for placing and displaying the first virtual object. In one possible implementation, determining whether it is appropriate to place and present the first virtual object may be accomplished according to steps B1-B3 as follows.
B1, determining a target plane type which the first virtual object can be placed and a placement threshold corresponding to the target plane type.
Here, the target plane type may include a horizontal plane and a vertical plane, and the type of the target plane on which the first virtual object may be placed is determined by the type of the device characterized by the first virtual object. For example, if the first virtual object represents a cooking device, the type of the target plane on which the first virtual object can be placed is a horizontal plane; for another example, if the first virtual object represents a door lock, the type of the target plane where the first virtual object can be placed is a vertical plane; for another example, if the first virtual object represents a socket or a camera, the types of target planes on which the first virtual object can be placed are a horizontal plane and a vertical plane.
Here, the placement threshold corresponding to the target plane type is a threshold that is set in advance for a plane where the first virtual object can be placed, and is used to measure whether the plane corresponding to the target plane type is suitable for placing the first virtual object.
In specific implementation, a mapping relationship between virtual objects, plane types, and placement thresholds may be preset, and the target plane type on which the first virtual object can be placed and the placement threshold corresponding to that target plane type are then determined according to the first virtual object. It should be understood that when there are a plurality of target plane types on which the first virtual object can be placed, the target plane type may be determined according to the user's selection.
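The preset mapping described above can be sketched as a simple lookup table. This is an illustrative sketch only; the object-type names, threshold values, and function names below are hypothetical and not taken from the application.

```python
# Hypothetical mapping: virtual-object type -> allowed plane types and the
# placement threshold (maximum camera-to-plane distance, in metres) per type.
PLACEMENT_RULES = {
    "cooking_device": {"horizontal": 3.0},
    "door_lock":      {"vertical": 2.0},
    "socket":         {"horizontal": 3.0, "vertical": 3.0},
    "camera":         {"horizontal": 3.0, "vertical": 3.0},
}

def placement_options(object_type, chosen_plane_type=None):
    """Return (target_plane_type, placement_threshold) for a virtual object.

    When several plane types are allowed, the user's selection decides
    which one becomes the target plane type, as described in step B1.
    """
    rules = PLACEMENT_RULES[object_type]
    if len(rules) == 1:
        plane_type, = rules  # only one allowed plane type
    else:
        plane_type = chosen_plane_type  # multiple options: defer to the user
        if plane_type not in rules:
            raise ValueError(f"{object_type} cannot be placed on a {plane_type} plane")
    return plane_type, rules[plane_type]
```

A socket, for instance, yields two candidate plane types, so the caller must pass the user's choice.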
B2, performing plane detection on the currently acquired real-time scene image, and determining a target plane according to the target plane type.
Specifically, the plane detection may be performed by referring to the manner of solving the plane described in the foregoing step a1, and then, whether the detected plane is the target plane type is determined, and if the detected plane belongs to the target plane type, the detected plane is determined as the target plane.
And B3, displaying the first augmented reality image in the case where it is determined that the distance between the target plane and the camera center is less than the placement threshold corresponding to the target plane type.
Here, the camera center refers to the origin of the camera coordinate system, that is, the optical center of the camera. If the distance between the target plane and the camera center is less than the placement threshold, indicating that the target plane is suitable for placing the virtual object, the target plane may be determined as the registration plane, and the above steps A2 and A3 are performed to obtain the first augmented reality image, which is then displayed. If the distance between the target plane and the camera center is not less than the placement threshold, indicating that the target plane is not suitable for placing the virtual object, steps A2 and A3 are not executed, and plane detection continues until a plane suitable for placing the virtual object is found and used as the registration plane.
In specific implementation, after the target plane is obtained, the camera center can be converted into the reference coordinate system of the target plane according to the camera matrix corresponding to the target plane, and the distance between the camera center and the target plane can then be calculated.
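The distance check in step B3 reduces to a point-to-plane distance once the plane is expressed in the camera coordinate system, where the optical center is the origin. The sketch below assumes the detected plane is given as a point on the plane plus a normal vector; these function names are illustrative, not from the application.

```python
import numpy as np

def camera_plane_distance(plane_point, plane_normal, camera_center=np.zeros(3)):
    """Distance from the camera optical center to a detected plane.

    plane_point and plane_normal are assumed to be expressed in the
    camera coordinate system, in which the optical center is the origin.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    return abs(np.dot(camera_center - plane_point, n))

def should_display(plane_point, plane_normal, threshold):
    # Step B3: render the first augmented reality image only when the
    # target plane lies within the placement threshold.
    return camera_plane_distance(plane_point, plane_normal) < threshold
```

For a horizontal plane two metres in front of the camera and a three-metre threshold, the check passes; with a one-metre threshold it fails and plane detection would continue.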
By setting placeable plane types and placement thresholds for different virtual objects in advance, it is possible to avoid displaying the virtual object on a plane unsuitable for placing the virtual object, so that it is possible to avoid invalid display of the first virtual object.
And S102, displaying the second augmented reality image under the condition that the object matching instruction is acquired.
Here, the object matching instruction is used to instruct to match the first virtual object with a real object at a target position, where the target position is a position included in the real-time scene image in the first augmented reality image, and the real object is an object at the target position in the real-time scene image, and may be understood as an object in the real scene. The matching result of the first virtual object and the real object is displayed in the second augmented reality image. When the object matching instruction is acquired, it indicates that the user wishes to compare the first virtual object with the real object at the target position to determine whether the first virtual object matches with the real object at the target position.
In a possible implementation manner, a placement plane on which the first virtual object is located in the first augmented reality image may be determined; in the case where a selection instruction for the placement plane is acquired, it is determined that the object matching instruction is acquired, and the placement plane on which the first virtual object is located in the first augmented reality image is determined as the real object.
Specifically, in the case where the real object is the placement plane on which the first virtual object is located in the first augmented reality image, the length of each edge of the placement plane, the length of each edge of the first virtual object, the area of each face of the first virtual object, the area of the contact face between the first virtual object and the placement plane, the distance from each side of the contact face to each side of the placement plane, and the like may be displayed in the second augmented reality image as the matching result.
For example, a "size comparison" button is displayed in the first augmented reality image, the first virtual object is an air fryer, and the placement plane of the first virtual object in the first augmented reality image is a hearth plane in the real-time scene image. When the user selects and confirms the hearth plane in the first augmented reality image and clicks the "size comparison" button, it is determined that the object matching instruction is acquired, and the hearth plane is directly determined as the real object. The length, width, and height of the air fryer, the area, length, and width of the hearth plane, the area of the face of the air fryer in contact with the hearth plane, and the distance between that face and the edges of the hearth plane are then determined as the matching result and displayed in the second augmented reality image.
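The figures listed above for the placement-plane case can be assembled as follows. This is a minimal sketch under assumed data shapes (object size as length/width/height, plane size as length/width, offsets in metres); none of the names below come from the application.

```python
def plane_match_result(object_lwh, plane_lw, object_offset=(0.0, 0.0)):
    """Hypothetical matching result when the real object is the placement
    plane: sizes, areas, and the clearance from the contact face to each
    edge of the plane, all in metres."""
    ol, ow, oh = object_lwh        # virtual object length/width/height
    pl, pw = plane_lw              # placement-plane length/width
    ox, oy = object_offset         # contact-face position on the plane
    return {
        "object_size": (ol, ow, oh),
        "plane_size": (pl, pw),
        "plane_area": pl * pw,
        "contact_area": ol * ow,   # face of the object touching the plane
        "edge_clearance": (ox, pl - ox - ol, oy, pw - oy - ow),
        "fits": ox >= 0 and oy >= 0 and ox + ol <= pl and oy + ow <= pw,
    }
```

For a 0.3 m by 0.3 m air fryer placed 0.1 m from the corner of a 0.6 m by 0.6 m hearth plane, the result reports a 0.09 m² contact area and 0.1 m / 0.2 m clearances to the plane edges.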
In another possible implementation manner, in the case that the selected instruction for the target region in the first augmented reality image is acquired, the acquisition of the object matching instruction is determined, and the object in the target region is determined as the real object, wherein the target region is a spatial region at the target position in the real-time scene image.
Specifically, when the real object is an object in the target region in the first augmented reality image, the size of each edge of the first virtual object, the distance between each edge of the virtual object and each inner wall of the spatial region in the spatial region, the size of each inner wall in the spatial region, and the like may be displayed in the second augmented reality image as the matching result.
For example, a "size comparison" button is displayed in the first augmented reality image and the first virtual object is the inner cavity of the air fryer. After the user moves the inner cavity of the air fryer to the inner space region of the dishwasher, selects the inner space region of the dishwasher, and clicks the "size comparison" button, it is determined that the object matching instruction is acquired. The size of each edge and the cross-sectional area of the inner cavity of the air fryer, the distance between each wall of the inner cavity and each inner wall of the dishwasher, the size of each inner wall of the dishwasher, and the like are then used as the matching result and displayed in the second augmented reality image.
In another possible implementation manner, in the case that the selected instruction for the target position is acquired, the acquired object matching instruction is determined, and an object corresponding to the target position is determined as a real object, where the target position includes at least two position points in the real-time scene image.
Specifically, when the real object is an object between two position points in the real-time scene image, the distance between the two position points may be calculated, the size in the target direction of the first virtual object may be determined, and the distance between the two position points and the size in the target direction of the first virtual object may be used as a matching result to be displayed on the second augmented reality image, where the target direction is a direction of a straight line formed by connecting the two position points.
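The two-point case above amounts to comparing the gap between the selected points with the virtual object's extent along the line joining them. The sketch below assumes the two points and the object's corner coordinates are already in real-world coordinates; the function name is illustrative.

```python
import numpy as np

def two_point_match(p1, p2, object_corners):
    """Compare the distance between two selected real-world points with the
    virtual object's size in the target direction (the line through p1, p2).

    object_corners: N x 3 array of the virtual object's corner coordinates.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    gap = np.linalg.norm(p2 - p1)          # distance between the two points
    direction = (p2 - p1) / gap            # unit vector of the target direction
    proj = np.asarray(object_corners, float) @ direction
    size_along = proj.max() - proj.min()   # object extent along that direction
    return {"gap": gap, "object_size": size_along, "fits": size_along <= gap}
```

For a socket, the two points might mark the ends of a wall recess, and the result shows whether the socket spans less than that gap.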
For example, a "size comparison" button is displayed in the first augmented reality image, the first virtual object is a socket, when the user selects two points on a vertical plane in the first augmented reality image and clicks the size comparison button, it is determined that an object matching instruction is acquired, and then the size of the socket in the direction indicated by the two points and the distance between the two points are displayed in the second augmented reality image as a size matching result.
In the embodiment of the present application, in the process of matching the first virtual object and the real object, the first virtual object and the real object may be converted into the real world and then subjected to size comparison. Specifically, the sizes of the real object and the first virtual object may be calculated through the following steps C1-C3.
And C1, determining the image coordinates of the real object and the first virtual object in the second augmented reality image.
C2, converting the image coordinates of the real object and the first virtual object in the second augmented reality image into real position coordinates of the real world, and calculating the actual sizes of the real object and the first virtual object based on the converted real position coordinates.
And C3, obtaining a matching result of the first virtual object and the real object according to the real object and the actual size of the first virtual object.
Specifically, the image coordinates of the real object and the first virtual object in the second augmented reality image may be converted into real position coordinates of the real world according to the camera matrix corresponding to the second augmented reality image, and then various actual sizes between the first virtual object and the real object may be calculated according to a distance calculation formula.
Specifically, the size calculation formula is:
Dv = sqrt((U1xv - U2xv)^2 + (U1yv - U2yv)^2 + (U1zv - U2zv)^2)
where Dv is the calculated actual size, and (U1xv, U1yv, U1zv) and (U2xv, U2yv, U2zv) are respectively the real-world coordinates of the two points required for determining the size.
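Steps C1-C3 can be sketched as a back-projection followed by the Euclidean distance above. The back-projection below assumes a standard pinhole intrinsic matrix K and a known depth per pixel, which the application does not specify; the function names are hypothetical.

```python
import numpy as np

def pixel_to_world(u, v, depth, K):
    """Step C2 sketch: convert an image pixel (u, v) with known depth into a
    3-D point in the camera/real-world frame via the intrinsic matrix K."""
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.array([x, y, depth])

def actual_size(p1, p2):
    """The size formula Dv: Euclidean distance between two real-world points."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))
```

A pixel at the principal point back-projects onto the optical axis, and the distance function then gives Dv directly for any pair of converted points.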
By converting the coordinates of the real object and the first virtual object into the real world for comparison, the real position coordinates are free from the influence of the camera angle and the like, so that the size comparison can be performed more reliably.
Optionally, the angle of the first virtual object may also be compared with the angle of the real object at the target position to determine whether the first virtual object matches the real object at the target position; alternatively, the shape of the first virtual object may be compared with the shape of the real object at the target position; alternatively, the texture of the first virtual object may be compared with the texture of the real object at the target position. The comparisons are not limited to those described herein, and the manner of matching is not limited in this application.
It should be understood that the object matching instruction may be triggered in various manners, and the form and position of the first virtual object in the first augmented reality image may change in real time along with the user operation. Since the form and position of the first virtual object, as well as the real object selected by the user, change accordingly, the content included in the matching result of the first virtual object and the real object also changes correspondingly. The present application does not limit the manner of triggering the object matching instruction or the specific content included in the matching result of the first virtual object and the real object.
In the technical scheme, the virtual object and the real scene object are combined and displayed by carrying out virtual-real fusion processing on the augmented reality model of the virtual object and the real-time scene image and displaying the processed augmented reality image, so that a user can conveniently know the condition of placing the virtual object in a real space; and under the condition that an instruction for matching the virtual object with the real object is obtained, the augmented reality image containing the matching result of the virtual object and the real object is displayed, so that a user can conveniently know the matching condition of the virtual object and the real object. When the virtual object is a household appliance article, the matching condition of the household appliance and the real space can be conveniently known by a user, so that the user can select the more appropriate household appliance, and the shopping experience is improved.
Optionally, in some possible scenarios, in the case where the device characterized by the first virtual object is a detachable device, multiple augmented reality models may further be constructed in advance for the first virtual object, and these models constitute an augmented reality model set of the first virtual object. The augmented reality model set of the first virtual object may include a first augmented reality model and a second augmented reality model, where the second augmented reality model is an augmented reality model of a sub virtual object in the first virtual object, and the sub virtual object is a part of the first virtual object. Taking the first virtual object as an air fryer as an example, an augmented reality model M1 of the overall structure of the air fryer can be constructed in advance, and the augmented reality model M1 is used to restore the overall structure of the air fryer in the virtual world; an augmented reality model M2 is then constructed for the fry basket in the air fryer, and the augmented reality model M2 is used to restore the fry basket in the virtual world. It can be understood that how many augmented reality models are set for a virtual object may depend on specific requirements.
Optionally, in the case where a placing instruction of a sub virtual object in the first virtual object is acquired, a third augmented reality image is displayed.
The placing instruction is used to indicate that the sub virtual object is placed at any position where the sub virtual object can be placed. The third augmented reality image is obtained by fusing and rendering the second augmented reality model to a position in the first augmented reality image, and the second augmented reality model is the augmented reality model corresponding to the sub virtual object. It can be understood that, without disassembling the first virtual object, that is, without moving or placing the sub virtual object separately, the coordinates of the second augmented reality model in the virtual world coincide with the coordinates of the first augmented reality model in the virtual world.
Specifically, in the process of displaying the first augmented reality image, a user operation on the first virtual object or on the sub virtual object may be acquired, and the coordinates of the first augmented reality model or the second augmented reality model in the virtual world are updated in response to the user operation. Coordinate conversion is performed on those coordinates according to the coordinate conversion parameters determined in real time, and the image coordinates of the first augmented reality model or the second augmented reality model in the first augmented reality image are acquired, so that the position and angle of the first virtual object in the first augmented reality image are updated in real time according to the real-time user operation. When a real-time user operation for indicating placement of the sub virtual object is acquired, it is determined that the placing instruction of the sub virtual object is acquired. In this case, the coordinates of the second augmented reality model in the virtual world can be determined, and projection conversion is performed on the second augmented reality model according to those coordinates and the coordinate conversion parameters at the time the placing instruction is acquired, so as to obtain a virtual image of the sub virtual object; the virtual image is then superposed and displayed on the real-time scene image at the time the placing instruction is acquired, so as to obtain the third augmented reality image.
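The projection conversion mentioned above can be sketched as a standard world-to-image projection using the coordinate conversion parameters (here assumed to be an extrinsic rotation R, translation t, and intrinsic matrix K; the application does not specify their exact form, so this is an illustrative assumption).

```python
import numpy as np

def project_model(points_world, R, t, K):
    """Project virtual-world points of the second augmented reality model
    into image coordinates using assumed conversion parameters (R, t, K)
    current at the moment the placing instruction is received."""
    P = np.asarray(points_world, float)
    cam = P @ R.T + t              # virtual world -> camera frame
    uv = cam @ K.T                 # camera frame -> homogeneous pixel coords
    return uv[:, :2] / uv[:, 2:3]  # perspective divide -> (u, v) per point
```

With an identity pose, a point on the optical axis lands at the principal point of the image, which is a quick sanity check on the conversion parameters.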
By way of example, the first virtual object is an air fryer and the sub virtual object is the fry basket of the air fryer. After the first augmented reality model and the real-time scene image are subjected to virtual-real fusion to obtain the first augmented reality image, the user may operate on the air fryer or the fry basket in the first augmented reality image. For example, long-pressing the whole air fryer with a single finger and moving the finger moves the first augmented reality model; pressing the whole air fryer with two fingers rotates the first augmented reality model; clicking the whole air fryer places the first augmented reality model. Likewise, pressing the fry basket with a single finger moves the second augmented reality model; pressing the fry basket with two fingers rotates the second augmented reality model; and clicking the fry basket places the second augmented reality model.
While the user is pressing and moving the fry basket with a single finger in the first augmented reality image, the movement of the second augmented reality model in the virtual world can be determined according to the movement of the finger in the real world, and the coordinates of the second augmented reality model in the virtual world can then be determined. The coordinate calculation formula of the second augmented reality model in the virtual world is (Ux2, Uy2, Uz2) = (Ux1, Uy1, Uz1) + (φx, φy, φz), where (Ux2, Uy2, Uz2) are the coordinates of the second augmented reality model in the virtual world, (Ux1, Uy1, Uz1) are the coordinates of the first augmented reality model in the virtual world, and (φx, φy, φz) are the coordinates of the user's finger movement. The coordinates of the second augmented reality model in the first augmented reality image may then be determined according to the coordinates of the second augmented reality model in the virtual world and the calculated coordinate conversion parameters. After the user moves the fry basket to the target position and clicks the fry basket, the placement of the fry basket is achieved.
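The coordinate update for the dragged fry basket is a component-wise translation, as the formula above states. A one-line sketch (the function name is illustrative):

```python
def move_sub_model(u1, finger_delta):
    """Apply (Ux2, Uy2, Uz2) = (Ux1, Uy1, Uz1) + (φx, φy, φz): the sub
    model's new virtual-world coordinates are its previous coordinates
    plus the finger displacement mapped into the virtual world."""
    return tuple(a + d for a, d in zip(u1, finger_delta))
```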
Further optionally, when an object matching instruction corresponding to the sub virtual object is acquired, a fifth augmented reality image may also be displayed, where a matching result of the sub virtual object and the real object in the scene image is displayed in the fifth augmented reality image. For a specific implementation of matching the sub virtual object with the real object and displaying the fifth augmented reality image, reference may be made to step S102, and details are not repeated here.
Further optionally, in some possible scenarios, the sub-virtual object may be restored to the first virtual object, and after the third augmented reality image is displayed, in a case where a restoration instruction of the sub-virtual object is obtained, a fourth augmented reality image is displayed, where the fourth augmented reality image is an augmented reality image obtained by restoring the sub-virtual object to the first virtual object.
Specifically, a coordinate conversion parameter corresponding to the third augmented reality image may be determined; the coordinates of the second augmented reality model and the first augmented reality model in the virtual world are determined according to that coordinate conversion parameter and the image coordinates of the first virtual object and the sub virtual object in the third augmented reality image; and the second augmented reality model is then moved to the coordinates of the first augmented reality model in the virtual world, so as to obtain the fourth augmented reality image. By rendering the restoration of the sub virtual object, the interest of the rendered display is enhanced, and it is also convenient for the user to select another virtual object for matching.
Optionally, in some possible scenarios, in a case that it is determined that the matching result of the child virtual object and the real object is not matched, target information may also be pushed according to the actual size of the real object, where the target information is recommendation information of a second virtual object that matches the actual size of the real object. Wherein the second virtual object is of the same type as the first virtual object. For example, if the first virtual object is an air fryer, then the second virtual object is also an air fryer.
In a specific implementation, a virtual object of the same type as the first virtual object may be determined according to the prestored virtual object information, a virtual object matching with the actual size of the real object is selected from the virtual objects of the same type as the first virtual object to serve as a second virtual object, and then information of the second virtual object is acquired from the prestored virtual object information to serve as target information for pushing. Specifically, the pre-stored virtual object information includes, but is not limited to, name, size, type, color, shape, and the like. By recommending information for the second virtual object that matches the actual size of the real object, the user can be prompted to select an appropriate home appliance.
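The selection of a second virtual object can be sketched as a filter over the pre-stored virtual-object information. The record shape (name, type, size) and function name below are assumptions for illustration, not part of the application.

```python
def recommend(first_obj, real_size, catalog):
    """From pre-stored virtual-object info, select objects of the same type
    as the first virtual object (excluding it) whose length/width/height
    all fit within the real object's actual size."""
    return [o for o in catalog
            if o["type"] == first_obj["type"]
            and o["name"] != first_obj["name"]
            and all(s <= r for s, r in zip(o["size"], real_size))]
```

If an oversized air fryer fails to match the real space, the filter would surface a smaller air fryer from the catalog as the target information to push.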
The method of the present application is described above, and in order to better carry out the method of the present application, the apparatus of the present application is described next.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an augmented reality image rendering and displaying apparatus provided in an embodiment of the present application, where the augmented reality image rendering and displaying apparatus may be a terminal device or a part of the terminal device. As shown in fig. 3, the augmented reality image rendering display device 20 includes:
a first display module 201, configured to display a first augmented reality image under a condition that a display instruction of a first virtual object is obtained, where the first augmented reality image is an augmented reality image obtained by performing virtual-real fusion processing on a currently obtained real-time scene image and a first augmented reality model, and the first augmented reality model is an augmented reality model corresponding to the first virtual object;
a second display module 202, configured to display a second augmented reality image when an object matching instruction is obtained, where the object matching instruction is used to instruct to match the first virtual object with a real object at a target position, a matching result of the first virtual object and the real object is displayed in the second augmented reality image, and the target position is a position included in a real-time scene image in the first augmented reality image.
In one possible design, the augmented reality image rendering and displaying apparatus 20 further includes an image overlaying module 203, configured to: determining coordinate conversion parameters between a real-world coordinate system and an image coordinate system according to the real-time scene image; performing coordinate conversion on the first augmented reality model according to the coordinate conversion parameter to obtain a virtual image corresponding to the first virtual object; and performing image superposition on the virtual image and the real-time scene image to obtain the first augmented reality image.
In one possible design, the augmented reality image rendering and displaying apparatus 20 further includes a matching module 204 configured to: determining image coordinates of the real object and the first virtual object in the first augmented reality image; after the image coordinates are converted into real position coordinates of a real world, calculating the actual sizes of the real object and the first virtual object based on the real position coordinates obtained through conversion; and obtaining the matching result according to the actual sizes of the real object and the first virtual object.
In a possible design, the above-mentioned augmented reality image rendering and displaying apparatus 20 further includes an information pushing module 205, configured to, in a case where it is determined according to the matching result that the first virtual object does not match the real object, push and display target information according to the actual size of the real object, where the target information is recommendation information of a second virtual object that matches the actual size of the real object.
In one possible design, the second display module 202 is further configured to: determining a placement plane of the first virtual object in the first augmented reality image, determining to obtain the object matching instruction under the condition that a selected instruction for the placement plane is obtained, and determining the placement plane as the real object, wherein the placement plane is a plane in the real-time scene image; and/or determining to acquire the object matching instruction when acquiring a selected instruction for a target region in the first augmented reality image, and determining an object in the target region as the real object, wherein the target region is a spatial region at the target position in the real-time scene image; and/or determining to acquire the object matching instruction and determining an object corresponding to the target position as the real object under the condition that the selected instruction of the target position is acquired, wherein the target position comprises at least two position points in the real-time scene image.
In a possible design, the first display module 201 is specifically configured to: determining a target plane type which can be placed by the first virtual object and a placement threshold value corresponding to the target plane type; performing plane detection on the currently acquired real-time scene image, and determining a target plane according to the type of the target plane; displaying the first augmented reality image if it is determined that the distance between the target plane and the camera center is less than the placement threshold corresponding to the target plane type.
In a possible design, the above-mentioned augmented reality image rendering and displaying apparatus 20 is further specifically configured to obtain an identifier of the first virtual object, where the identifier is used to indicate the first virtual object; and acquiring the first augmented reality model according to the identification.
It should be noted that, for what is not mentioned in the embodiment corresponding to fig. 3, reference may be made to the description of the foregoing method embodiment, and details are not described here again.
According to the device, the virtual-real fusion processing is carried out on the augmented reality model of the virtual object and the real-time scene image, and the augmented reality image obtained by the processing is displayed, so that the virtual object and the real scene object are displayed in a combined manner, and a user can conveniently know the condition that the virtual object is placed in a real space; under the condition that an instruction for matching the virtual object with the real object is obtained, the augmented reality image containing the matching result of the virtual object and the real object is displayed, and a user can conveniently know the matching condition of the virtual object and the real object. When the virtual object is the household appliance, the user can conveniently know the matching condition of the household appliance and the real space, so that the user can select the more appropriate household appliance, and the shopping experience is improved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a computer device provided in an embodiment of the present application, where the computer device 30 includes a processor 301, a memory 302, and a display 303. The processor 301 is connected to the memory 302 and the display 303, for example, the processor 301 may be connected to the memory 302 and the display 303 through a bus.
The processor 301 is configured to enable the computer device 30 to perform the respective functions of the methods in the above-described method embodiments. The processor 301 may be a Central Processing Unit (CPU), a Network Processor (NP), a hardware chip, or any combination thereof. The hardware chip may be an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The memory 302 is used to store program codes and the like. Memory 302 may include Volatile Memory (VM), such as Random Access Memory (RAM); the memory 302 may also include a non-volatile memory (NVM), such as a read-only memory (ROM), a flash memory (flash memory), a Hard Disk Drive (HDD) or a solid-state drive (SSD); the memory 302 may also comprise a combination of memories of the kind described above.
The display 303 is used to display an augmented reality image, such as the first augmented reality image, the second augmented reality image, the third augmented reality image, and the like in the foregoing method embodiments.
Processor 301 may call the program code to perform the following:
under the condition that a display instruction of a first virtual object is obtained, displaying a first augmented reality image, wherein the first augmented reality image is obtained by performing virtual-real fusion processing on a currently obtained real-time scene image and a first augmented reality model, and the first augmented reality model is an augmented reality model corresponding to the first virtual object;
and displaying a second augmented reality image under the condition that an object matching instruction is obtained, wherein the object matching instruction is used for indicating that the first virtual object is matched with a real object at a target position, a matching result of the first virtual object and the real object is displayed in the second augmented reality image, and the target position is a position included in a real-time scene image in the first augmented reality image.
Embodiments of the present application also provide a computer-readable storage medium storing a computer program. The computer program includes program instructions that, when executed by a computer, cause the computer to perform the methods of the foregoing embodiments.
Those skilled in the art will understand that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program. The program can be stored in a computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The above disclosure describes only preferred embodiments of the present application and is not intended to limit its scope; equivalent variations and modifications made in accordance with the claims of the present application still fall within the scope of the present application.

Claims (10)

1. An augmented reality image display method, comprising:
when a display instruction for a first virtual object is obtained, displaying a first augmented reality image, wherein the first augmented reality image is obtained by performing virtual-real fusion processing on a currently acquired real-time scene image and a first augmented reality model, and the first augmented reality model is the augmented reality model corresponding to the first virtual object;
and when an object matching instruction is obtained, displaying a second augmented reality image, wherein the object matching instruction instructs matching of the first virtual object with a real object at a target position, a matching result of the first virtual object and the real object is displayed in the second augmented reality image, and the target position is a position within the real-time scene image included in the first augmented reality image.
2. The method of claim 1, wherein, before the displaying of the first augmented reality image, the method further comprises:
determining coordinate conversion parameters between a real-world coordinate system and an image coordinate system according to the real-time scene image;
performing coordinate conversion on the first augmented reality model according to the coordinate conversion parameters to obtain a virtual image corresponding to the first virtual object; and
superposing the virtual image on the real-time scene image to obtain the first augmented reality image.
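The conversion-and-superposition steps of claim 2 can be sketched under a simplifying assumption: the coordinate conversion parameters reduce to a pinhole intrinsic matrix `K`, with the model points already expressed in the camera frame. `K`'s values and the blending weight are hypothetical, not from the patent.

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths fx, fy and principal point cx, cy.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def world_to_image(points_3d, K):
    """Convert 3D model points (metres, camera frame) to 2D pixel coordinates."""
    proj = points_3d @ K.T            # (N, 3) homogeneous pixel coordinates
    return proj[:, :2] / proj[:, 2:]  # perspective divide by depth

def superpose(scene, virtual, alpha=0.7):
    """Blend the rendered virtual image over the real-time scene image."""
    return alpha * virtual + (1.0 - alpha) * scene
```

A full system would first estimate the world-to-camera transform from the live scene image (the "coordinate conversion parameters" of the claim) before applying the projection.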
3. The method of claim 1, wherein, before the displaying of the second augmented reality image, the method further comprises:
determining image coordinates of the real object and the first virtual object in the first augmented reality image;
converting the image coordinates into real position coordinates in the real world, and then calculating the actual sizes of the real object and the first virtual object based on the converted real position coordinates;
and obtaining the matching result according to the actual sizes of the real object and the first virtual object.
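The size comparison of claim 3 can be sketched with a hypothetical pinhole model: two pixel columns at a known depth are converted back to real-world metres, and the widths of the real and virtual objects are then compared. The focal length `fx` and the tolerance are illustrative values, not from the patent.

```python
def image_to_world_width(x_left, x_right, depth_m, fx=800.0):
    """Real-world width (metres) spanned by two pixel columns at depth_m."""
    return (x_right - x_left) * depth_m / fx

def sizes_match(real_width, virtual_width, tolerance=0.05):
    """Match when the two widths agree within a relative tolerance."""
    return abs(real_width - virtual_width) <= tolerance * max(real_width, virtual_width)
```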
4. The method of claim 3, further comprising:
when the matching result indicates that the first virtual object and the real object do not match, pushing and displaying target information according to the actual size of the real object, wherein the target information is recommendation information of a second virtual object matching the actual size of the real object.
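The recommendation step of claim 4 can be sketched as a catalogue filter: when the first virtual object does not fit, second virtual objects whose size matches the measured real object are pushed instead. The catalogue contents, widths, and tolerance are hypothetical.

```python
def recommend(catalogue, real_width, tolerance=0.05):
    """Names of catalogue items whose width matches the real object's width."""
    return [name for name, width in catalogue.items()
            if abs(width - real_width) <= tolerance * real_width]
```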
5. The method of claim 1, wherein displaying the first augmented reality image further comprises:
determining a placement plane of the first virtual object in the first augmented reality image, and, when a selection instruction for the placement plane is obtained, determining that the object matching instruction is obtained and determining the placement plane as the real object, wherein the placement plane is a plane in the real-time scene image; and/or
when a selection instruction for a target region in the first augmented reality image is obtained, determining that the object matching instruction is obtained and determining an object in the target region as the real object, wherein the target region is a spatial region at the target position in the real-time scene image; and/or
when a selection instruction for the target position is obtained, determining that the object matching instruction is obtained and determining an object corresponding to the target position as the real object, wherein the target position comprises at least two position points in the real-time scene image.
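The three selection routes of claim 5 can be sketched as one dispatcher; the dictionary shapes below are illustrative assumptions. Each route yields an object matching instruction identifying the real object to compare against.

```python
def selection_to_match_instruction(selection):
    """Turn a user selection (plane, region, or position points) into a
    hypothetical object-matching instruction naming the real object."""
    kind = selection["kind"]
    if kind == "plane":                      # a selected placement plane
        real_object = {"plane_id": selection["plane_id"]}
    elif kind == "region":                   # a selected spatial region
        real_object = {"bounds": selection["bounds"]}
    elif kind == "points":                   # at least two position points
        if len(selection["points"]) < 2:
            raise ValueError("target position needs at least two points")
        real_object = {"points": selection["points"]}
    else:
        raise ValueError(f"unknown selection kind: {kind!r}")
    return {"instruction": "match", "real_object": real_object}
```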
6. The method of claim 1, wherein displaying the first augmented reality image comprises:
determining a target plane type on which the first virtual object can be placed and a placement threshold corresponding to the target plane type;
performing plane detection on the currently acquired real-time scene image, and determining a target plane according to the target plane type; and
displaying the first augmented reality image when the distance between the target plane and the camera center is less than the placement threshold corresponding to the target plane type.
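The plane-gating logic of claim 6 can be sketched with hypothetical plane records; a real system would obtain candidate planes from an AR framework's plane detection. The per-type placement thresholds (in metres) are illustrative values.

```python
PLACEMENT_THRESHOLDS = {"floor": 5.0, "tabletop": 2.0}  # metres, hypothetical

def select_target_plane(detected_planes, target_type):
    """Nearest detected plane of the required target plane type, or None."""
    candidates = [p for p in detected_planes if p["type"] == target_type]
    return min(candidates, key=lambda p: p["distance"], default=None)

def can_display(plane, target_type):
    """Display only when the plane is closer than the type's threshold."""
    return plane is not None and plane["distance"] < PLACEMENT_THRESHOLDS[target_type]
```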
7. The method of claim 1, wherein, before the displaying of the first augmented reality image, the method further comprises:
acquiring an identifier of the first virtual object, wherein the identifier is used to indicate the first virtual object; and
acquiring the first augmented reality model according to the identifier.
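The identifier-based lookup of claim 7 amounts to a model registry keyed by the virtual object's identifier. The registry contents and key format below are hypothetical.

```python
AR_MODEL_REGISTRY = {"sofa-001": "models/sofa.glb",
                     "lamp-002": "models/lamp.glb"}  # hypothetical entries

def get_ar_model(identifier):
    """Acquire the first augmented reality model from the object's identifier."""
    try:
        return AR_MODEL_REGISTRY[identifier]
    except KeyError:
        raise KeyError(f"no AR model registered for {identifier!r}") from None
```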
8. An augmented reality image display apparatus, comprising:
the display device comprises a first display module and a second display module, wherein the first display module is used for displaying a first augmented reality image under the condition that a display instruction of a first virtual object is obtained, the first augmented reality image is an augmented reality image obtained by carrying out virtual-real fusion processing on a currently obtained real-time scene image and a first augmented reality model, and the first augmented reality model is an augmented reality model corresponding to the first virtual object;
the second display module is configured to display a second augmented reality image when an object matching instruction is obtained, where the object matching instruction is used to instruct to match the first virtual object with a real object at a target position, a matching result of the first virtual object and the real object is displayed in the second augmented reality image, and the target position is a position included in a real-time scene image in the first augmented reality image.
9. A computer device, comprising a memory, one or more processors, and a display, wherein the display is configured to display an augmented reality image, the one or more processors are configured to execute one or more computer programs stored in the memory, and the one or more processors, when executing the one or more computer programs, cause the computer device to implement the method according to any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-7.
CN202111130772.3A 2021-09-26 2021-09-26 Augmented reality image display method, device, equipment and storage medium Pending CN113902520A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111130772.3A CN113902520A (en) 2021-09-26 2021-09-26 Augmented reality image display method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113902520A true CN113902520A (en) 2022-01-07

Family

ID=79029297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111130772.3A Pending CN113902520A (en) 2021-09-26 2021-09-26 Augmented reality image display method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113902520A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419293A (en) * 2022-01-26 2022-04-29 广州鼎飞航空科技有限公司 Augmented reality data processing method, device and equipment
CN115396656A (en) * 2022-08-29 2022-11-25 歌尔科技有限公司 AR SDK-based augmented reality method, system, device and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109448045A (en) * 2018-10-23 2019-03-08 南京华捷艾米软件科技有限公司 Plane polygon object measuring method and machine readable storage medium based on SLAM
CN109903129A (en) * 2019-02-18 2019-06-18 北京三快在线科技有限公司 Augmented reality display methods and device, electronic equipment, storage medium
WO2021073268A1 (en) * 2019-10-15 2021-04-22 北京市商汤科技开发有限公司 Augmented reality data presentation method and apparatus, electronic device, and storage medium
CN113379825A (en) * 2021-07-01 2021-09-10 北京亮亮视野科技有限公司 Object size detection method and device, electronic equipment and readable medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MING Delie et al., "Non-calibrated virtual-real registration method", Infrared and Laser Engineering, vol. 31, no. 2, 30 April 2002, page 171 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination