WO2021249390A1 - Method and apparatus for implementing augmented reality, storage medium, and electronic device - Google Patents

Method and apparatus for implementing augmented reality, storage medium, and electronic device

Info

Publication number: WO2021249390A1
Authority: WO (WIPO PCT)
Prior art keywords: dimensional model, setting space, user, acquisition device, target setting
Application number: PCT/CN2021/098887
Other languages: French (fr), Chinese (zh)
Inventors: 王明远, 陶宁
Original Assignee: 贝壳技术有限公司 (Beike Technology Co., Ltd.)
Priority claimed from CN202010534339.5A external-priority patent/CN111681320B/en
Priority claimed from CN202010987132.3A external-priority patent/CN112102479B/en
Application filed by 贝壳技术有限公司
Publication of WO2021249390A1 publication Critical patent/WO2021249390A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Definitions

  • The present disclosure relates to the field of augmented reality, and in particular to a method and device for realizing augmented reality, a computer-readable storage medium, an electronic device, and a computer program product.
  • Augmented Reality (AR) is a technology that ingeniously integrates virtual information with the real world. It uses technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing to simulate computer-generated virtual information such as text, images, 3D models, music, and video, and applies it to the real world, where the two kinds of information complement each other, thereby realizing the "enhancement" of the real world.
  • A method for realizing augmented reality, including: obtaining a three-dimensional model corresponding to a target setting space based on a current position of an image acquisition device, wherein the current position is in the target setting space; controlling the image acquisition device to start acquiring images at the current position, and performing coordinate initialization with the current position as the origin coordinates; and aligning the three-dimensional model with the target setting space based on the origin coordinates and the images collected by the image acquisition device.
  • A device for realizing augmented reality, including means for implementing the above method.
  • A computer-readable storage medium storing a computer program, where the computer program is used to execute the above method.
  • An electronic device, including: a processor; and a memory for storing executable instructions of the processor, where the executable instructions, when executed by the processor, implement the above method.
  • A computer program product, including a computer program, where the computer program implements the foregoing method when executed by a processor.
  • Fig. 1 is a schematic flowchart of a method for realizing augmented reality provided by an exemplary embodiment of the present disclosure.
  • Fig. 2 is a schematic flowchart of the steps of obtaining a three-dimensional model in Fig. 1.
  • Fig. 3 is a schematic flowchart of the alignment step in Fig. 1.
  • Fig. 4 is another flow diagram of the alignment step in Fig. 1.
  • Fig. 5 is a schematic flowchart of the coordinate initialization step in Fig. 1.
  • FIG. 6 is a schematic flowchart of the initial alignment step in FIG. 4.
  • FIG. 7 is a schematic flowchart of the step of determining the pose information in FIG. 4.
  • FIG. 8 is a schematic flowchart of a model display method used in a three-dimensional model in a method for realizing augmented reality provided by an exemplary embodiment of the present disclosure.
  • Fig. 9A is a first schematic diagram of the three-dimensional model.
  • Fig. 9B is a second schematic diagram of the three-dimensional model.
  • Fig. 9C is a third schematic diagram of the three-dimensional model.
  • Fig. 10A is a first schematic diagram of the partial model corresponding to the user's perspective in the three-dimensional model.
  • Fig. 10B is a second schematic diagram of the partial model corresponding to the user's perspective in the three-dimensional model.
  • Fig. 10C is a third schematic diagram of the partial model corresponding to the user's perspective in the three-dimensional model.
  • Fig. 11A is a schematic diagram of the house type corresponding to the three-dimensional model before the partition wall is removed.
  • Fig. 11B is a schematic diagram of the house type corresponding to the three-dimensional model after the partition wall is removed.
  • Fig. 12A is a first schematic diagram of the viewing angle operation interface.
  • Fig. 12B is a second schematic diagram of the viewing angle operation interface.
  • Fig. 13 is a schematic structural diagram of an augmented reality device provided by an exemplary embodiment of the present disclosure.
  • Fig. 14 is a schematic structural diagram of a model display device used in a three-dimensional model in an augmented reality device provided by an exemplary embodiment of the present disclosure.
  • Fig. 15 is a structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
  • "Plural" may refer to two or more, and "at least one" may refer to one, two, or more.
  • the term "and/or" in the present disclosure is merely an association relationship describing associated objects, which means that there can be three types of relationships, for example, A and/or B can mean that there is A alone, and A and B exist at the same time. , There are three cases of B alone.
  • the character "/" in the present disclosure generally indicates that the associated objects before and after are in an "or" relationship.
  • the embodiments of the present disclosure can be applied to electronic devices such as terminal devices, computer systems, servers, etc., which can operate with many other general-purpose or special-purpose computing system environments or configurations.
  • Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with electronic devices such as terminal devices, computer systems, and servers include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, large computer systems, and distributed cloud computing environments including any of the above systems.
  • Electronic devices such as terminal devices, computer systems, and servers can be described in the general context of computer system executable instructions (such as program modules) executed by the computer system.
  • program modules may include routines, programs, object programs, components, logic, data structures, etc., which perform specific tasks or implement specific abstract data types.
  • The computer system/server can be implemented in a distributed cloud computing environment, where tasks are executed by remote processing devices linked through a communication network, and program modules may be located on storage media of local or remote computing systems including storage devices.
  • The inventors found that augmented reality in the prior art applies image recognition and positioning only at initialization, and this technical solution has at least the following problem: the model must be manually aligned with the real scene.
  • Fig. 1 is a schematic flowchart of a method for realizing augmented reality provided by an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device, as shown in FIG. 1, and includes the following steps:
  • Step 102 Obtain a three-dimensional model corresponding to the target setting space based on the current position of the image acquisition device.
  • the current position is in the target setting space.
  • the image capture device may include a mobile phone and other devices that can realize the image capture function.
  • The current location is determined by the device itself (for example, a mobile phone with a positioning function), by another auxiliary positioning device, or according to a user report.
  • the target setting space is a space that requires a three-dimensional model to achieve augmented reality.
  • the three-dimensional model can be a scene design model after decoration and design processing based on the structure of the target setting space.
  • Step 104 Control the image acquisition device to start acquiring images at the current position, and perform coordinate initialization based on the current position as the origin coordinates.
  • In this embodiment, in order to align the three-dimensional model with the target setting space, a coordinate system is first established through the current position of the image acquisition device, with the current position as the origin coordinates.
  • In this coordinate system, all coordinates are relative coordinates with respect to the origin coordinates.
  • the three-dimensional model can be embedded in the real scene (target setting space) by using the origin coordinates and the image obtained at the origin coordinates.
  • Step 106 Align the three-dimensional model with the target setting space based on the origin coordinates and the images collected by the image acquisition device.
  • Initially, the 3D model embedded in the obtained image may not match the current picture, so that the 3D model appears misplaced and affects the user experience.
  • Based on the images collected by the image capture device at different positions and the origin coordinates, this embodiment can determine the orientation angles by which the three-dimensional model needs to be adjusted, so that the three-dimensional model is aligned with the target setting space.
  • The foregoing embodiment of the present disclosure provides a method for realizing augmented reality, which obtains a three-dimensional model corresponding to a target setting space based on the current position of an image acquisition device, wherein the current position is in the target setting space; controls the image acquisition device to start acquiring images at the current position and initializes coordinates with the current position as the origin coordinates; and aligns the three-dimensional model with the target setting space based on the origin coordinates and the images collected by the image acquisition device.
  • In this way, the three-dimensional model corresponding to the target setting space is aligned with the target setting space through the collected images and the origin coordinates, realizing automatic alignment between the model and the real world; in the real scene, the user can view a virtual scene that matches the real scene, which improves the efficiency and effect of augmented reality.
  • step 102 in FIG. 1 may include the following steps:
  • Step 201 Obtain the current position of the image acquisition device.
  • Step 202 Determine the target setting space from the at least one setting space by matching the current position with the location of the at least one setting space.
  • each setting space in the at least one setting space corresponds to at least one three-dimensional model.
  • Step 203 Determine a three-dimensional model from at least one three-dimensional model corresponding to the target setting space as the three-dimensional model corresponding to the target setting space.
  • the ratio of the three-dimensional model to the setting space is 1:1.
  • the alignment of the three-dimensional model and the setting space only involves rotation and displacement, and does not involve the scaling of the three-dimensional model.
  • the three-dimensional model in this embodiment is prefabricated.
  • The same setting space can correspond to multiple three-dimensional models with different effects.
  • The specific three-dimensional model can be determined according to the user's choice, recommended to the user based on rendering effect, or selected at random; this embodiment does not limit the specific way of obtaining the three-dimensional model corresponding to the target setting space.
  • step 202 may include:
  • determining, from the distances between the current position and each of the at least one setting space, a target distance smaller than a preset value; and taking the setting space corresponding to the target distance in the at least one setting space as the target setting space.
  • Specifically, the distance is obtained by calculating the difference between the coordinates of the current position in the world coordinate system (for example, obtained via GPS) and the coordinates of each setting space in the world coordinate system, or by determining the distance between the current position and each setting space on a map that can display positioning coordinates (such as Baidu Maps or Gaode Maps). When the distance between the current position and a setting space is less than the preset value (due to positioning-system error, a larger range such as 100 meters can be used to determine that the current position is in the setting space), the current position is considered to correspond to that setting space.
  • In this way, the target setting space in which augmented reality needs to be realized is determined, and when the image capture device is detected to be within the target setting distance of the target setting space, alignment of the three-dimensional model is started based on the position of the image capture device.
  • For example, when the target setting space is a house: the house to be viewed is determined from the house information the user selects on an interactive page, and the location information of the house is obtained from the service; it is then determined from the GPS information of the mobile phone whether the user is within a certain range of the house (such as 100 m), and if so, the pre-generated room model (three-dimensional model) is loaded (see the sketch below).
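  • The following minimal Python sketch illustrates this matching step. The space registry, its coordinate values, and the function names are illustrative assumptions (the disclosure only specifies comparing world-coordinate distances against a preset value such as 100 m); haversine distance is one common way to compute the distance between two GPS coordinates.

```python
import math

# Hypothetical registry of setting spaces (houses) and their world
# coordinates (latitude, longitude); the values are made up.
SPACES = {
    "house_a": (39.9042, 116.4074),
    "house_b": (31.2304, 121.4737),
}
PRESET_VALUE_M = 100.0  # generous threshold to absorb positioning error

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000.0 * 2 * math.asin(math.sqrt(a))

def target_setting_space(current_pos):
    """Return the setting space whose distance to the current position is
    below the preset value, i.e. the space the user is standing in."""
    nearest = min(SPACES, key=lambda s: haversine_m(current_pos, SPACES[s]))
    if haversine_m(current_pos, SPACES[nearest]) < PRESET_VALUE_M:
        return nearest  # load this space's pre-generated 3D model
    return None

print(target_setting_space((39.9043, 116.4075)))  # -> house_a
```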
  • step 106 in FIG. 1 may include the following steps:
  • Step 301 Move the image acquisition device in the target setting space, and collect multiple frames of images in the target setting space at a set frequency during the movement.
  • The set frequency can be more than 8 frames per second; the more frames obtained per second, the less likely the scene in the target setting space is to be lost.
  • The acquisition frequency is also related to the moving speed of the image acquisition device: the faster the device moves, the more frames need to be obtained, otherwise the picture is easily lost. For example, at 8 frames per second, a device moving at 1 m/s captures a frame every 12.5 cm of travel.
  • Step 302 In response to the presence of a fixed marker in one frame of the multiple frames of images, align the three-dimensional model with the target setting space according to the fixed marker.
  • In order to improve the fit between the three-dimensional model and the target setting space, the three-dimensional model is aligned with the target setting space through fixed markers that are easy to identify and not deformed in both the target setting space and the three-dimensional model, improving the accuracy and speed of the alignment.
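  • The disclosure does not name a solver for this marker-based alignment. Assuming a few corresponding points of a fixed marker (for example, the corners of a door frame) can be located in both the three-dimensional model and the captured images of the target setting space, one standard choice is the Kabsch rigid-registration algorithm; because the model-to-space ratio is 1:1, only rotation and translation are estimated, never scale. A sketch with made-up coordinates:

```python
import numpy as np

def rigid_align(model_pts, space_pts):
    """Best-fit rotation R and translation t such that R @ model + t maps
    the model points onto the space points (Kabsch algorithm); scale is
    fixed because the model is 1:1."""
    cm, cs = model_pts.mean(axis=0), space_pts.mean(axis=0)
    H = (model_pts - cm).T @ (space_pts - cs)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cs - R @ cm
    return R, t

# Door-frame corners in model coordinates, and the same corners observed in
# the origin coordinate system of the space (illustrative values):
model_marker = np.array([[0, 0, 0], [0.9, 0, 0], [0.9, 0, 2.0], [0, 0, 2.0]])
rot_z90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
space_marker = model_marker @ rot_z90.T + np.array([2.0, 1.0, 0.0])
R, t = rigid_align(model_marker, space_marker)  # recovers rot_z90 and the offset
```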
  • step 106 in FIG. 1 may alternatively include the following steps:
  • Step 401 Move the image acquisition device in the target setting space, and collect multiple frames of images in the target setting space at a set frequency during the movement.
  • Step 402 Use the pose acquisition device to determine the pose information relative to the coordinates of the origin when the image acquisition device collects at least one frame of the multi-frame images.
  • The pose information includes 6 degrees of freedom: rotation in three directions (three degrees of freedom) and translation in three directions (three degrees of freedom).
  • the pose acquisition device includes a gyroscope and/or a geomagnetometer sensor, and the pose acquisition device may be integrated in the image acquisition device or set separately.
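  • For reference, the 6 degrees of freedom can be represented as three translations plus three rotations. The field names in this minimal sketch are assumptions, not terms from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """6-DoF pose relative to the origin coordinates."""
    tx: float = 0.0     # translation along x, meters
    ty: float = 0.0     # translation along y
    tz: float = 0.0     # translation along z
    roll: float = 0.0   # rotation about x, radians
    pitch: float = 0.0  # rotation about y
    yaw: float = 0.0    # rotation about z

origin_pose = Pose()  # the device pose at coordinate initialization
```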
  • Step 403 In response to the presence of a fixed marker in one frame of images in the multiple frames, the three-dimensional model and the target setting space are initially aligned based on the correspondence between the fixed marker and the corresponding marker model in the three-dimensional model.
  • Step 404 Determine the pose information of the image acquisition device relative to the origin coordinates when acquiring the image including the fixed identifier.
  • Step 405 Adjust the initial alignment based on the pose information to achieve the alignment of the three-dimensional model and the target setting space.
  • During movement, the pose information of the current image acquisition device relative to the origin coordinates is also obtained through the pose acquisition device to adjust the initial alignment, so that the three-dimensional model and the target setting space not only match well at the fixed markers but also achieve better alignment across the overall space.
  • step 104 in FIG. 1 may further include the following steps:
  • Step 501 Establish an origin coordinate system with origin coordinates as a center.
  • Step 502 Determine the model coordinates of the origin coordinates in the three-dimensional model.
  • Step 503 Based on the model coordinates and the origin coordinates, the three-dimensional model is embedded in the target setting space to obtain the initial pose of the three-dimensional model in the origin coordinate system.
  • In this embodiment, the current position is used for initialization, and the three-dimensional model is embedded according to the origin coordinates and the image collected at the origin coordinates.
  • The coordinates of the origin in the world coordinate system are matched against the coordinates of the three-dimensional model in the world coordinate system to determine the model coordinates corresponding to the origin coordinates in the 3D model.
  • Based on the model coordinates, the 3D model can be roughly embedded in the target setting space (not yet aligned).
  • Treating the 3D model as a whole, the initial pose (6 degrees of freedom) of the three-dimensional model in the origin coordinate system can then be obtained.
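  • A minimal sketch of this rough embedding, assuming the model coordinates corresponding to the origin have already been found by the world-coordinate matching described above (the function name and coordinate values are illustrative):

```python
import numpy as np

def initial_model_pose(origin_model_coords):
    """Steps 501-503: embed the model with identity rotation and a
    translation that places the model point matching the device's current
    position at (0, 0, 0) of the origin coordinate system. This is the
    rough embedding only; marker-based alignment refines it afterwards."""
    R0 = np.eye(3)
    t0 = -np.asarray(origin_model_coords, dtype=float)
    return R0, t0  # initial 6-DoF pose: no rotation, translation t0

# e.g. the device stands at model point (4.2, 3.1, 1.5) (made-up values):
R0, t0 = initial_model_pose([4.2, 3.1, 1.5])
```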
  • step 403 in FIG. 4 may further include the following steps:
  • Step 601 Based on the deformation of the fixed marker in the image, determine the primary pose information of the image capture device relative to the origin coordinates when the image including the fixed marker was acquired.
  • Since the fixed markers exist in both the target setting space and the three-dimensional model, and the ratio between the two is 1:1, a fixed marker in the space should completely overlap the corresponding fixed marker in the three-dimensional model when space and model are aligned. When they do not overlap, the primary pose information of the image acquisition device relative to the origin coordinates can be determined from the mismatch.
  • Step 602 Determine the position coordinates in the origin coordinate system when the image acquisition device acquires the image including the fixed marker.
  • The fixed markers can include, but are not limited to: doors, windows, or other distinctive features in the room, such as paintings on the walls, and so on.
  • Step 603 Based on the primary pose information, the position coordinates, and the initial pose of the three-dimensional model, the three-dimensional model is initially aligned with the target setting space.
  • This embodiment uses the primary pose information determined from the fixed marker to perform the first alignment of the three-dimensional model based on the initial pose (adjusting the 6 degrees of freedom of the three-dimensional model), so that after the first alignment the fixed marker coincides with the corresponding fixed marker in the three-dimensional model; it also uses the position coordinates of the image acquisition device at that moment, adjusting the translational degrees of freedom in combination with the position coordinates to improve the alignment effect.
  • Step 603 may include: determining, based on the position coordinates, the displacement information of the image capture device relative to the origin coordinates when acquiring the image including the fixed marker; determining the rotation information of the image capture device relative to the origin coordinates based on the primary pose information; and adjusting the initial pose of the 3D model according to the displacement information and rotation information to achieve the initial alignment of the 3D model with the target setting space.
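  • The sketch below shows how the displacement and rotation corrections could be composed with the model's initial pose. The disclosure states only that both corrections adjust the initial pose; the specific composition order used here is an assumption.

```python
import numpy as np

def adjust_initial_pose(R0, t0, R_cam, t_cam):
    """Apply the camera's rotation (from the primary pose information) and
    displacement (from the position coordinates) relative to the origin to
    the model's initial pose (R0, t0), yielding the initially aligned pose.
    All arguments are numpy arrays: 3x3 rotations and length-3 translations."""
    R = R_cam @ R0
    t = R_cam @ t0 + t_cam
    return R, t
```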
  • step 404 in FIG. 4 may further include the following steps:
  • Step 701 Use the pose acquisition device to track the pose information of the image acquisition device relative to the origin coordinates during the movement of the image acquisition device.
  • Step 701 includes: tracking the pose information of the image acquisition device based on the displacement between the same feature points in two frames of images consecutively collected by the image acquisition device, and on the two corresponding pieces of pose information acquired by the pose acquisition device when those two frames were collected.
  • The feature points are invariant to displacement, rotation, and scaling.
  • The feature points may include, but are not limited to: corner points in the image, points of distinctive color, and other salient points, which can be identified and tracked as the same feature points across different frames of images.
  • Step 702 Determine the pose information relative to the origin coordinate when the image acquisition device acquires an image including a fixed marker based on the tracking.
  • This embodiment uses feature points to track across multiple frames of images: among consecutive frames containing the same feature points, the pose information of the image acquisition device relative to the origin coordinates at the time each image was collected can be determined from the displacement changes of those feature points. Through this continuous tracking, when a fixed marker is recognized in an image, the pose information relative to the origin coordinates at the moment the image acquisition device collected the image including the fixed marker can be determined.
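  • The disclosure does not name a feature detector. As one concrete possibility, the sketch below uses ORB features from OpenCV to identify and match the same feature points in two consecutive frames (8-bit grayscale arrays) and returns their mean pixel displacement, the raw signal that, combined with the pose information from the gyroscope/geomagnetometer, allows the device pose relative to the origin to be tracked.

```python
import cv2

orb = cv2.ORB_create(nfeatures=500)                    # corner-like feature points
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def mean_feature_displacement(prev_gray, curr_gray):
    """Match the same feature points across two consecutive frames and
    return their mean (dx, dy) pixel displacement, or None if tracking fails."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if not matches:
        return None
    dx = sum(kp2[m.trainIdx].pt[0] - kp1[m.queryIdx].pt[0] for m in matches) / len(matches)
    dy = sum(kp2[m.trainIdx].pt[1] - kp1[m.queryIdx].pt[1] for m in matches) / len(matches)
    return dx, dy
```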
  • The method for realizing augmented reality does not stop once the three-dimensional model is aligned with the target setting space at the fixed marker; this embodiment can continue to align the three-dimensional model with the target setting space based on each frame of image, so that the user always views an augmented reality scene with a good alignment effect, until the image acquisition device stops acquiring images or the user manually stops the alignment.
  • Any method for realizing augmented reality provided by the embodiments of the present disclosure can be executed by any suitable device with data processing capabilities, including but not limited to: terminal devices and servers.
  • Any method for realizing augmented reality provided by the embodiments of the present disclosure may be executed by a processor; for example, the processor executes any method for realizing augmented reality mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory, which will not be repeated below.
  • FIG. 8 is a schematic flowchart of a model display method used in a three-dimensional model in a method for realizing augmented reality provided by an exemplary embodiment of the present disclosure.
  • The steps shown in Fig. 8 can be executed after step 106 described above in conjunction with Fig. 1.
  • Step 801 Display a partial model corresponding to the user's perspective in the three-dimensional model.
  • the three-dimensional model can be a model corresponding to a real house drawn by using three-dimensional software.
  • The real house is located in the real world, which can also be called the physical world; the three-dimensional model is located in the virtual world, and can also be called the virtual house.
  • The size ratio of the 3D model to the real house can be 1:1; in that case, the frame of the house and the position of the entrance door in the 3D model can completely overlap those of the real house.
  • the size ratio of the three-dimensional model to the real house can also be 1:2, 1:5, 1:10, etc., which will not be listed here.
  • the three-dimensional model can be used in an indoor augmented reality scene.
  • the three-dimensional model can be used in an AR house viewing scene or an AR decoration scene (household renovations can be carried out in this scene).
  • the user's perspective can be selected by the user according to actual needs.
  • The partial model corresponding to the user's perspective in the three-dimensional model may be as shown in Fig. 10A, Fig. 10B, or Fig. 10C.
  • Step 802 in response to determining that the external perspective area exists in the partial model, determine the reference visual information of the external perspective area in the user's perspective according to the house data of the real house corresponding to the three-dimensional model.
  • the house data of the real house corresponding to the three-dimensional model can be stored in advance.
  • The house data of the real house can record a large amount of information, including but not limited to house structure information, spatial function information, house size information, home placement information, etc.
  • In step 802, it can first be detected whether an external perspective area exists in the partial model.
  • the external perspective area refers to the area through which the external scene of the house can be seen.
  • the external perspective area can be the window area.
  • The external perspective area can also be the entrance area of an open balcony, and so on; these are not listed one by one here.
  • If an external perspective area exists in the partial model, the reference visual information of the external perspective area under the user's perspective can be determined based on the pre-stored house data of the real house.
  • The reference visual information can be used to characterize: when mapped to the real world, under the user's perspective, whether any part of the external perspective area is occluded for the user and which areas are occluded; or, equivalently, whether the external perspective area is visible to the user and which areas are visible.
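  • One simple way to derive such reference visual information from the house data is a line-of-sight test against the walls recorded for the real house. The 2D sketch below is an illustrative simplification with made-up geometry; the disclosure does not prescribe an occlusion algorithm.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if 2D segments p1-p2 and q1-q2 properly cross (orientation test)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def occluded_in_real_world(viewpoint, window_point, real_walls):
    """A window point is occluded for the user if the sight line from the
    viewpoint to it crosses any wall recorded in the real house data."""
    return any(segments_intersect(viewpoint, window_point, w0, w1)
               for (w0, w1) in real_walls)

# A wall the user removed from the model still exists in the real house:
print(occluded_in_real_world((0, 0), (4, 0), [((2, -1), (2, 1))]))  # True
```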
  • Step 803 Based on the reference visual information, control the external perspective area in the partial model to display the image according to the corresponding display strategy.
  • The user can modify the three-dimensional model according to actual needs; specifically, the user can move or remove walls in the three-dimensional model, or add walls to it.
  • For example, the three-dimensional model may include a living room 1101, a bedroom 1103, and a bathroom 1105, where a partition wall exists between the living room 1101 and the bedroom 1103 before modification (Fig. 11A); after the modification, there is no partition wall between the living room 1101 and the bedroom 1103, and the living room and bedroom are connected to form an open space 1107 (Fig. 11B).
  • The reference visual information determined in step 802 may correspond to multiple possible situations: for example, it can indicate that the external perspective area is occluded as a whole, not occluded as a whole, or partially occluded. For each situation, the external perspective area in the partial model can be controlled to display with the corresponding display strategy, so that the model display effect differs from one situation to another.
  • In this way, the partial model corresponding to the user's perspective in the three-dimensional model can be displayed; the reference visual information of the external perspective area can be determined according to the house data of the real house corresponding to the three-dimensional model; and then, based on the reference visual information, the external perspective area in the partial model can be controlled to display the picture with the corresponding display strategy.
  • Thus the three-dimensional model is not merely displayed from different angles: when the reference visual information changes, the display strategy of the external perspective area in the partial model changes accordingly, and with it the display effect of the three-dimensional model. Compared with the prior art, the display effect of the three-dimensional model in the embodiments of the present disclosure is therefore more diversified.
  • Controlling the external perspective area in the partial model to display the picture with a corresponding display strategy includes: in response to determining that the reference visual information indicates that the external perspective area as a whole is not occluded, controlling the external perspective area in the partial model to display the corresponding real scene picture; in response to determining that the reference visual information indicates that the external perspective area is occluded as a whole, controlling the external perspective area in the partial model to display a virtual scene picture; and in response to determining that the reference visual information indicates that the external perspective area includes a first area that is not occluded and a second area that is occluded, controlling the first area to display the corresponding real scene picture and the second area to display the virtual scene picture.
  • a unified virtual scene picture may be stored in advance, and the virtual scene picture may be a virtual garden scene picture, a virtual sky scene picture, and the like.
  • The real scene picture corresponding to each external perspective area in the real house can be collected in advance through a camera, and the correspondence between each external perspective area and its real scene picture stored, where the real scene picture corresponding to any external perspective area is used to present the real scene that can be seen through that area.
  • When the external perspective area is not occluded as a whole, the real scene picture corresponding to the external perspective area in the partial model can be obtained from the stored correspondence, and the external perspective area in the partial model controlled to display the obtained real scene picture.
  • In this way, the external perspective area in the partial model presents the user with a real scene, such as a street scene, ensuring consistency between the virtual world and the real world and enhancing the realism of the model.
  • the stored virtual scene image can be acquired, and the external perspective area in the partial model can be controlled to display the acquired virtual scene image.
  • When the external perspective area in the partial model presents a virtual scene to the user, it can be determined that, due to the modification of the three-dimensional model, an external perspective area that should be occluded from the user's perspective in the real world is not occluded in the virtual world; that is, under the user's perspective, the visibility of the entire external perspective area differs between the real world and the virtual world.
  • When the external perspective area is partially occluded, the stored virtual scene picture may be acquired and the second area controlled to display it; the real scene picture corresponding to the external perspective area in the partial model can also be obtained from the stored correspondence and cropped according to the specific position of the first area within the external perspective area, to obtain the real scene picture corresponding to the first area, and the first area is controlled to display its corresponding real scene picture.
  • In this way, the first area of the external perspective area in the partial model presents the real scene to the user, which helps improve the realism of the model; if the second area of the external perspective area presents a virtual scene, it can be determined that, due to the modification of the three-dimensional model, the second area that should be occluded under the user's perspective is not actually occluded, that is, the visibility of the second area differs between the real world and the virtual world.
  • Assume the partial model corresponding to the user's perspective in the three-dimensional model is shown in Fig. 10B. If the reference visual information indicates that the entire window area in Fig. 10B is not occluded, the whole window area displays the corresponding real scene picture; if the reference visual information indicates that the entire window area is occluded, a virtual scene picture can be displayed in the entire window area; and if area Q1 in the window area is not occluded while area Q2 is occluded, the corresponding real scene picture can be displayed in Q1 and a virtual scene picture in Q2.
  • In the embodiments of the present disclosure, the display of real scene pictures improves the realism of the model, and the display of virtual scene pictures enables the user to know which areas differ in visibility between the real world and the virtual world.
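  • The three display cases can be summarized as a small dispatch over the reference visual information. The encoding of `visibility` and all other names below are illustrative assumptions; the disclosure specifies only the strategy itself.

```python
def display_plan(area_id, visibility, real_pictures, virtual_picture):
    """Return (region, picture) pairs for one external perspective area.
    visibility is "unoccluded", "occluded", or
    ("partial", unoccluded_rect, occluded_rect)."""
    if visibility == "unoccluded":
        return [("whole", real_pictures[area_id])]  # real scene picture
    if visibility == "occluded":
        return [("whole", virtual_picture)]         # virtual scene picture
    _, q1_rect, q2_rect = visibility
    # Q1 (not occluded): crop the stored real scene picture to the sub-area;
    # Q2 (occluded): fill with the virtual scene picture.
    return [(q1_rect, ("crop", real_pictures[area_id], q1_rect)),
            (q2_rect, virtual_picture)]

plan = display_plan("window_1",
                    ("partial", (0, 0, 50, 100), (50, 0, 100, 100)),
                    {"window_1": "street_scene.png"},
                    "virtual_garden.png")
```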
  • the implementation of controlling the external perspective area in the partial model to display the screen with the corresponding display strategy is not limited to this.
  • For example, a first virtual scene picture and a second virtual scene picture can be set in advance; when the reference visual information indicates that the external perspective area as a whole is not occluded, the external perspective area in the partial model is controlled to display the first virtual scene picture; when the reference visual information indicates that the external perspective area is occluded as a whole, it is controlled to display the second virtual scene picture; and when the reference visual information indicates that the external perspective area includes an unoccluded first area and an occluded second area, the first area is controlled to display the first virtual scene picture and the second area to display the second virtual scene picture.
  • The method further includes: detecting whether there is an obstacle within a preset distance range in front of the user's viewpoint position in the three-dimensional model, to obtain a detection result; determining, according to the house data of the real house, whether there is an obstacle within the preset distance range in front of the user's viewpoint position in the three-dimensional model, to obtain a determination result; and, in response to the detection result being that there is no obstacle and the determination result being that there is an obstacle, performing an obstacle response operation.
  • the preset distance range in front of the user's viewpoint position may be a range in which the distance from the user's viewpoint position is not greater than 0.3 meters, 0.4 meters, 0.5 meters, or other distance values.
  • the model data of the three-dimensional model can be stored in advance, and based on the model data, it can be detected whether there is an obstacle within a preset distance in front of the user's viewpoint position in the three-dimensional model, so as to obtain the detection result.
  • The detection result can be regarded as corresponding to the situation in the virtual world, and the determination result as corresponding to the situation in the real world. If the detection result is that there is no obstacle while the determination result is that there is an obstacle, it means that an obstacle actually exists within the preset distance in front of the user's viewpoint in the real house, but modification of the 3D model has caused that obstacle to be moved or removed from the model; in this case, an obstacle response operation can be performed.
  • executing an obstacle response operation includes: outputting obstacle collision warning information.
  • the obstacle collision warning information can be output in the form of voice, text, etc., for example, it can display "Please pay attention to obstacles ahead to avoid collision" on the display screen of the electronic device.
  • the user can understand the situation of the obstacle ahead, thereby ensuring the consistency of the experience in the real world and the virtual world.
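  • A minimal sketch of the comparison that triggers the obstacle response; the two booleans stand in for the detection result (virtual world, from the model data) and the determination result (real world, from the house data), and the warning text is the example given above.

```python
def obstacle_response(detection_has_obstacle, determination_has_obstacle):
    """Return warning text when the view is clear in the virtual world but
    blocked in the real one (e.g. a real wall was removed from the model)."""
    if not detection_has_obstacle and determination_has_obstacle:
        return "Please pay attention to obstacles ahead to avoid collision"
    return None  # the two worlds agree within the preset distance

print(obstacle_response(False, True))  # -> the collision warning
```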
  • Performing obstacle response operations includes: prohibiting the user's viewpoint from moving forward and displaying a viewing angle operation interface, wherein the viewing angle operation interface includes N operation controls and N is an integer greater than or equal to 1; receiving an input operation on at least one of the N operation controls; adjusting the user's perspective in response to the input operation; and exiting the viewing angle operation interface and restoring the user's perspective.
  • For example, the value of N can be 1, 2, or 3; the operation control can be a virtual button; and the input operation can be a touch operation such as a tap, press, or drag. The value of N, the type of operation control, and the type of input operation are not limited to these and can be determined according to the actual situation, which is not limited in the embodiments of the present disclosure.
  • Specifically, the user's viewpoint position can be prohibited from moving forward, ensuring the consistency of the experience between the real world and the virtual world and avoiding interaction difficulties, and a viewing angle operation interface including N operation controls can be displayed.
  • While the viewing angle operation interface is displayed, the user's input operation on at least one of the N operation controls can be received, and the user's perspective adjusted in response to the input operation.
  • The N operation controls include a first operation control, and adjusting the user's perspective in response to the input operation includes: controlling the movement of the user's perspective in response to an input operation on the first operation control. And/or, the N operation controls include a second operation control, and adjusting the user's perspective in response to the input operation includes: controlling the rotation and/or pitch of the user's perspective in response to an input operation on the second operation control.
  • the perspective operation interface as shown in FIG. 12A or FIG. 12B can be displayed.
  • the operation control M in FIGS. 12A and 12B can be used as the first operation control
  • the operation control N in FIGS. 12A and 12B can be used as the second operation control.
  • In response to an input operation on the first operation control, the user's perspective can be moved and, accordingly, the partial model displayed to the user updated; in response to an input operation on the second operation control, the user's perspective can be rotated and/or pitched and, accordingly, the partial model shown to the user updated.
  • Afterwards, the perspective operation interface can be exited, that is, its display removed, and the user perspective restored, that is, returned to the user perspective in effect before the input operation was received.
  • In this way, this implementation not only ensures the consistency of the experience between the real world and the virtual world and avoids interaction difficulties, but also allows the user, while the viewpoint is not moving forward, to adjust the viewing angle through input operations on the viewing angle operation interface so as to view the required parts of the three-dimensional model.
  • Exiting the perspective operation interface and restoring the user perspective includes: obtaining adjustment information of the user perspective; and, in response to determining that the adjustment information meets a preset condition, exiting the perspective operation interface and restoring the user perspective.
  • Here, the adjustment information of the user perspective includes, but is not limited to, the continuous adjustment duration of the user's perspective, the movement range of the user's perspective, and so on.
  • After the adjustment information of the user's perspective is obtained, it can be determined whether it satisfies the preset condition. Specifically, when the continuous adjustment duration of the user's viewing angle exceeds 10 seconds (or another duration value), the preset condition can be considered satisfied; at this time, the viewing angle operation interface can be exited and the user perspective restored to that before the input operation was received. Or, when the movement range of the user's perspective exceeds 50 cm (or another distance value), the preset condition can be considered satisfied, with the same exit and restoration.
  • exiting the perspective operation interface and restoring the user perspective includes: in a case where the end of the input operation is detected, exiting the perspective operation interface and restoring the user perspective.
  • the pressure sensor can be used to detect whether the user releases the operation control applied by the input operation, so as to determine whether the input operation ends.
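  • Putting the exit conditions together as a sketch (the thresholds are the example values mentioned above; the function name and argument forms are assumptions):

```python
def should_exit_view_interface(adjust_duration_s, move_range_m, input_ended):
    """Exit the viewing angle operation interface (and restore the prior user
    perspective) when the continuous adjustment exceeds 10 s, the movement
    range exceeds 50 cm, or the input operation has ended."""
    return adjust_duration_s > 10.0 or move_range_m > 0.5 or input_ended
```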
  • the model display method in any three-dimensional model provided by the embodiments of the present disclosure can be executed by any suitable device with data processing capabilities, including but not limited to: terminal devices and servers.
  • Similarly, the model display method in any three-dimensional model provided by the embodiments of the present disclosure may be executed by a processor; for example, the processor executes the model display method in any three-dimensional model mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory.
  • Fig. 13 is a schematic structural diagram of a device for realizing augmented reality provided by an exemplary embodiment of the present disclosure.
  • the device of this embodiment includes:
  • the model obtaining module 1301 is configured to obtain a three-dimensional model corresponding to the target setting space based on the current position of the image acquisition device.
  • the current position is in the target setting space.
  • the initialization module 1302 is used to control the image acquisition device to start acquiring images at the current position, and to initialize the coordinates based on the current position as the origin coordinates.
  • the model alignment module 1303 is used to align the three-dimensional model with the target setting space based on the coordinates of the origin and the image collected by the image acquisition device.
  • The above-mentioned embodiment of the present disclosure provides a device for realizing augmented reality, which obtains a three-dimensional model corresponding to a target setting space based on the current position of an image acquisition device, wherein the current position is in the target setting space; controls the image acquisition device to start acquiring images at the current position and initializes coordinates with the current position as the origin coordinates; and aligns the three-dimensional model with the target setting space based on the origin coordinates and the images collected by the image acquisition device.
  • In this way, the three-dimensional model corresponding to the target setting space is aligned with the target setting space through the collected images and origin coordinates, realizing automatic alignment of the model with the real world; the user can view a virtual scene matching the real scene, which improves the efficiency and effect of augmented reality.
  • The model obtaining module 1301 is used to: obtain the current position of the image capture device; determine the target setting space from the at least one setting space by matching the current position with the location of the at least one setting space, wherein each setting space in the at least one setting space corresponds to at least one three-dimensional model; and determine a three-dimensional model from the at least one three-dimensional model corresponding to the target setting space as the three-dimensional model corresponding to the target setting space.
  • When determining the target setting space corresponding to the image capture device by matching the current location with the location of at least one setting space, the model obtaining module 1301 is used to: determine, from the distances between the current location and each setting space, a target distance smaller than a preset value; and use the setting space corresponding to the target distance in the at least one setting space as the target setting space.
  • the model alignment module 1303 includes: an image acquisition unit for moving the image acquisition device in the target setting space, and collecting multiple frames of images in the target setting space at a set frequency during the movement; And a marker aligning unit, which is used to align the three-dimensional model with the target setting space according to the fixed marker in response to the presence of a fixed marker in one frame of the images.
  • The model alignment module 1303 further includes: a pose determination unit, configured to use the pose acquisition device to determine the pose information relative to the origin coordinates when the image capture device collects at least one of the multiple frames of images; and an alignment unit, used for initial alignment of the three-dimensional model and the target setting space, in response to the presence of a fixed marker in one frame of the multiple frames of images, based on the correspondence between the fixed marker and the corresponding marker model in the three-dimensional model.
  • the initialization module 1302 is used to establish the origin coordinate system with the origin coordinates as the center; determine the model coordinates of the origin coordinates in the 3D model; and based on the model coordinates and origin coordinates, embed the 3D model in the target setting space to obtain The initial pose of the 3D model in the origin coordinate system.
  • The marker aligning unit is used to initially align the three-dimensional model and the setting space based on the correspondence between the fixed marker and the corresponding marker model in the three-dimensional model; the primary pose information, the position coordinates, and the initial pose of the 3D model are then used to initially align the 3D model with the target setting space.
  • When initially aligning the three-dimensional model with the target setting space based on the primary pose information, the position coordinates, and the initial pose of the three-dimensional model, the marker aligning unit is used to: determine, based on the position coordinates, the displacement information of the image acquisition device relative to the origin coordinates when acquiring the image including the fixed marker; determine the rotation information of the image acquisition device relative to the origin coordinates based on the primary pose information; and adjust the initial pose of the three-dimensional model according to the displacement information and rotation information to realize the initial alignment of the three-dimensional model with the target setting space.
  • When determining the pose information of the image capture device relative to the origin coordinates at the time the image including the fixed marker is acquired, the marker alignment unit is used to: use the pose acquisition device to track the pose information of the image acquisition device relative to the origin coordinates during the movement of the image acquisition device; and determine, based on the tracking, the pose information relative to the origin coordinates when the image acquisition device collects the image including the fixed marker.
  • When tracking the pose information of the image capture device relative to the origin coordinates via the pose acquisition device during movement, the marker alignment unit is used to track the pose information based on the displacement between the same feature points in two frames of images consecutively captured by the image capture device, and on the corresponding pose information acquired by the pose acquisition device when those two frames were captured.
  • Fig. 14 is a schematic structural diagram of a model display device used in a three-dimensional model in an augmented reality device provided by an exemplary embodiment of the present disclosure.
  • The device shown in Fig. 14 includes a display module 1401, a determination module 1402, and a control module 1403.
  • The display module 1401 is used to display a partial model corresponding to the user's perspective in the three-dimensional house model.
  • the determining module 1402 is configured to determine the reference visual information of the external perspective area in the user's perspective according to the house data of the real house corresponding to the three-dimensional house model in response to determining that the external perspective area exists in the partial model.
  • the control module 1403 is used for controlling the external perspective area in the partial model to display the images according to the corresponding display strategy based on the reference visual information.
  • The control module 1403 is specifically configured to: control the external perspective area in the partial model to display the corresponding real scene picture when the reference visual information indicates that the external perspective area is not occluded as a whole; control the external perspective area in the partial model to display a virtual scene picture when the reference visual information indicates that the external perspective area is occluded as a whole; and, when the reference visual information indicates that the external perspective area includes an unoccluded first area and an occluded second area, control the first area to display the corresponding real scene picture and the second area to display the virtual scene picture.
  • The device further includes: a first acquisition module for detecting whether there is an obstacle within a preset distance range in front of the user's viewpoint position in the three-dimensional house model, to obtain a detection result; a second acquisition module for determining, according to the house data of the real house, whether there is an obstacle within the preset distance range in front of the user's viewpoint position in the three-dimensional house model, to obtain a determination result; and an execution module for performing an obstacle response operation when the detection result is that there is no obstacle and the determination result is that there is an obstacle.
  • The execution module includes: a first processing unit for prohibiting the user's viewpoint from moving forward and displaying a viewing angle operation interface, wherein the viewing angle operation interface includes N operation controls and N is an integer greater than or equal to 1; a receiving unit for receiving an input operation on at least one of the N operation controls; an adjustment unit for adjusting the user's perspective in response to the input operation; and a second processing unit for exiting the viewing angle operation interface and restoring the user perspective. And/or, the execution module is specifically used to output obstacle collision warning information.
  • The N operation controls include a first operation control, and the adjustment unit is specifically configured to control the movement of the user's perspective in response to an input operation on the first operation control. And/or, the N operation controls include a second operation control, and the adjustment unit is specifically configured to control the rotation and/or pitch of the user's perspective in response to an input operation on the second operation control.
  • The second processing unit includes: an acquisition subunit for acquiring adjustment information of the user's perspective; and a processing subunit for exiting the perspective operation interface and restoring the user perspective when the adjustment information meets a preset condition. And/or, the second processing unit is specifically configured to exit the perspective operation interface and restore the user perspective when the end of the input operation is detected.
  • the size ratio of the three-dimensional house model to the real house is 1:1; and/or the external perspective area is the window area.
  • The electronic device can be either or both of a first device and a second device, or a stand-alone device independent of them, where the stand-alone device can communicate with the first device and the second device to receive collected input signals from them.
  • FIG. 15 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
  • the electronic device 1500 includes one or more processors 1501 and a memory 1502.
  • the processor 1501 may be a central processing unit (CPU) or other form of processing unit with data processing capability and/or instruction execution capability, and may control other components in the electronic device 1500 to perform desired functions.
  • the memory 1502 may include one or more computer program products, and the computer program products may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory.
  • the volatile memory may include random access memory (RAM) and/or cache memory (cache), for example.
  • the non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like.
  • One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1501 may run the program instructions to implement the methods for realizing augmented reality of the various embodiments of the present disclosure described above and/or other desired functions.
  • Various contents such as input signal, signal component, noise component, etc. can also be stored in the computer-readable storage medium.
  • the electronic device 1500 may further include: an input device 1503 and an output device 1504, and these components are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
  • the input device 1503 may be the aforementioned microphone or microphone array for capturing the input signal of the sound source.
  • the input device 1503 may be a communication network connector for receiving collected input signals from the first device and the second device.
  • the input device 1503 may also include, for example, a keyboard, a mouse, and so on.
  • the output device 1504 can output various information to the outside, including determined distance information, direction information, and so on.
  • the output device 1504 may include, for example, a display, a speaker, a printer, a communication network and a remote output device connected to it, and so on.
  • the electronic device 1500 may also include any other appropriate components.
• the embodiments of the present disclosure may also be a computer program product, which includes computer program instructions that, when run by a processor, cause the processor to execute the steps of the method for realizing augmented reality according to the various embodiments of the present disclosure described in the "exemplary method" section above in this specification.
• the computer program product may include program code for performing the operations of the embodiments of the present disclosure, written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
• the program code may execute entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
• embodiments of the present disclosure may also be a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are run by a processor, they cause the processor to execute the steps of the method for realizing augmented reality according to the various embodiments of the present disclosure described in the "exemplary method" section of this specification.
  • the computer-readable storage medium may adopt any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
• the readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the method and apparatus of the present disclosure may be implemented in many ways.
  • the method and apparatus of the present disclosure can be implemented by software, hardware, firmware or any combination of software, hardware, and firmware.
  • the above-mentioned order of the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above, unless specifically stated otherwise.
  • the present disclosure can also be implemented as programs recorded in a recording medium, and these programs include machine-readable instructions for implementing the method according to the present disclosure.
  • the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
• each component or each step can be decomposed and/or recombined; such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method and an apparatus for implementing augmented reality, a storage medium, and an electronic device. The method comprises: acquiring, on the basis of the current position of an image collection device, a three-dimensional model corresponding to a target set space, the current position being in the target set space; controlling the image collection device to start collecting images at the current position, and performing coordinate initialisation with the current position as the origin coordinates; and aligning the three-dimensional model and the target set space on the basis of the origin coordinates and the images collected by the image collection device.

Description

Method and apparatus for implementing augmented reality, storage medium, and electronic device

Technical Field

The present disclosure relates to the field of augmented reality, and in particular to a method and apparatus for implementing augmented reality, a computer-readable storage medium, an electronic device, and a computer program product.

Background Art

Augmented reality (AR) is a technology that seamlessly integrates virtual information with the real world. It employs a variety of technical means, such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, to simulate computer-generated virtual information such as text, images, three-dimensional models, music, and video and to apply it to the real world, where the two kinds of information complement each other, thereby "augmenting" the real world.

Summary of the Invention
According to one aspect of the embodiments of the present disclosure, there is provided a method for implementing augmented reality, including: obtaining a three-dimensional model corresponding to a target setting space based on a current position of an image acquisition device, where the current position is in the target setting space; controlling the image acquisition device to start acquiring images at the current position, and performing coordinate initialization with the current position as the origin coordinates; and aligning the three-dimensional model with the target setting space based on the origin coordinates and the images acquired by the image acquisition device.

According to another aspect of the embodiments of the present disclosure, there is provided an apparatus for implementing augmented reality, including means for implementing the above method.

According to yet another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program, where the computer program is used to execute the above method.

According to still another aspect of the embodiments of the present disclosure, there is provided an electronic device including a processor and a memory for storing instructions executable by the processor, where the executable instructions, when executed by the processor, implement the above method.

According to a further aspect of the embodiments of the present disclosure, there is provided a computer program product including a computer program, where the computer program implements the above method when executed by a processor.

The technical solutions of the present disclosure are described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings

Through a more detailed description of the embodiments of the present disclosure in conjunction with the accompanying drawings, the above and other objects, features, and advantages of the present disclosure will become more apparent. The accompanying drawings provide a further understanding of the embodiments of the present disclosure, constitute a part of the specification, serve together with the embodiments to explain the present disclosure, and do not limit the present disclosure. In the drawings, the same reference numerals generally denote the same components or steps.

Fig. 1 is a schematic flowchart of a method for implementing augmented reality provided by an exemplary embodiment of the present disclosure.
Fig. 2 is a schematic flowchart of the step of obtaining a three-dimensional model in Fig. 1.
Fig. 3 is a schematic flowchart of the alignment step in Fig. 1.
Fig. 4 is another schematic flowchart of the alignment step in Fig. 1.
Fig. 5 is a schematic flowchart of the coordinate initialization step in Fig. 1.
Fig. 6 is a schematic flowchart of the initial alignment step in Fig. 4.
Fig. 7 is a schematic flowchart of the step of determining pose information in Fig. 4.
Fig. 8 is a schematic flowchart of a model display method used in a three-dimensional model, in a method for implementing augmented reality provided by an exemplary embodiment of the present disclosure.
Fig. 9A is a first schematic diagram of a three-dimensional model.
Fig. 9B is a second schematic diagram of a three-dimensional model.
Fig. 9C is a third schematic diagram of a three-dimensional model.
Fig. 10A is a first schematic diagram of a partial model corresponding to the user's perspective in a three-dimensional model.
Fig. 10B is a second schematic diagram of a partial model corresponding to the user's perspective in a three-dimensional model.
Fig. 10C is a third schematic diagram of a partial model corresponding to the user's perspective in a three-dimensional model.
Fig. 11A is a schematic floor plan corresponding to the three-dimensional model before a partition wall is removed.
Fig. 11B is a schematic floor plan corresponding to the three-dimensional model after the partition wall is removed.
Fig. 12A is a first schematic diagram of the viewing angle operation interface.
Fig. 12B is a second schematic diagram of the viewing angle operation interface.
Fig. 13 is a schematic structural diagram of an apparatus for implementing augmented reality provided by an exemplary embodiment of the present disclosure.
Fig. 14 is a schematic structural diagram of a model display apparatus used in a three-dimensional model, in an apparatus for implementing augmented reality provided by an exemplary embodiment of the present disclosure.
Fig. 15 is a structural diagram of an electronic device provided by an exemplary embodiment of the present disclosure.
Detailed Description

Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure, and it should be understood that the present disclosure is not limited by the exemplary embodiments described herein.

It should be noted that, unless specifically stated otherwise, the relative arrangement of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure.

Those skilled in the art will understand that terms such as "first" and "second" in the embodiments of the present disclosure are only used to distinguish different steps, devices, or modules, and represent neither any specific technical meaning nor a necessary logical order between them.

It should also be understood that, in the embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two, or more.

It should also be understood that any component, data, or structure mentioned in the embodiments of the present disclosure can generally be understood as one or more, unless explicitly limited or the context indicates otherwise.

In addition, the term "and/or" in the present disclosure merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. The character "/" in the present disclosure generally indicates an "or" relationship between the associated objects before and after it.

It should also be understood that the description of the embodiments in the present disclosure emphasizes the differences between the embodiments; for the same or similar aspects, the embodiments may be referred to one another, and for brevity they are not repeated one by one.

Meanwhile, it should be understood that, for ease of description, the sizes of the various parts shown in the drawings are not drawn to actual scale.

The following description of at least one exemplary embodiment is merely illustrative and in no way limits the present disclosure or its application or use.

Techniques, methods, and devices known to those of ordinary skill in the relevant fields may not be discussed in detail, but where appropriate, such techniques, methods, and devices should be regarded as part of the specification.

It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.

The embodiments of the present disclosure may be applied to electronic devices such as terminal devices, computer systems, and servers, which can operate with numerous other general-purpose or special-purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with such electronic devices include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, small computer systems, large computer systems, distributed cloud computing environments including any of the above systems, and the like.

Electronic devices such as terminal devices, computer systems, and servers may be described in the general context of computer-system-executable instructions (such as program modules) executed by a computer system. Generally, program modules may include routines, programs, object programs, components, logic, data structures, and so on, which perform specific tasks or implement specific abstract data types. The computer system/server may be implemented in a distributed cloud computing environment, in which tasks are performed by remote processing devices linked through a communication network. In a distributed cloud computing environment, program modules may be located on storage media of local or remote computing systems including storage devices.
Application Overview

In the course of implementing the present disclosure, the inventors found that augmented reality in the prior art is applied only to initial image recognition and positioning, and this technical solution has at least the following problem: the model and the real scene need to be aligned manually.
Exemplary Method

Fig. 1 is a schematic flowchart of a method for implementing augmented reality provided by an exemplary embodiment of the present disclosure. This embodiment can be applied to an electronic device and, as shown in Fig. 1, includes the following steps.

Step 102: obtain a three-dimensional model corresponding to the target setting space based on the current position of the image acquisition device.

Here, the current position is in the target setting space.

In an example, the image acquisition device may be a device capable of image acquisition, such as a mobile phone. The current position is determined by the device itself (for example, a mobile phone with a positioning function), by another auxiliary positioning apparatus, or from a user report, and lies in the target setting space, that is, the space in which a three-dimensional model is needed to implement augmented reality. The three-dimensional model may be a scene design model obtained by performing decoration, design, and similar processing based on the structure of the target setting space.

Step 104: control the image acquisition device to start acquiring images at the current position, and perform coordinate initialization with the current position as the origin coordinates.

In one embodiment, in order to align the three-dimensional model with the target setting space, a coordinate system is first established with the current position of the image acquisition device as the origin; in this coordinate system, all coordinates are relative to the origin. During initialization, for example, the three-dimensional model can be embedded into the real scene (the target setting space) using the origin coordinates and the image acquired at the origin.

Step 106: align the three-dimensional model with the target setting space based on the origin coordinates and the images acquired by the image acquisition device.

In this embodiment, since the space has more than one direction and angle, after the image acquisition device moves and/or rotates, the three-dimensional model embedded at initialization may no longer match the current picture, causing the model to drift out of the frame and degrading the user's viewing experience. Based on the images acquired by the image acquisition device at different positions and the origin coordinates, this embodiment determines the angles in multiple orientations by which the three-dimensional model needs to be adjusted, and adjusts the model so that it is aligned with the target space in the current picture (the corresponding image).
The above embodiment of the present disclosure provides a method for implementing augmented reality: a three-dimensional model corresponding to a target setting space is obtained based on the current position of an image acquisition device, where the current position is in the target setting space; the image acquisition device is controlled to start acquiring images at the current position, and coordinate initialization is performed with the current position as the origin coordinates; and the three-dimensional model is aligned with the target setting space based on the origin coordinates and the acquired images. By aligning the three-dimensional model corresponding to the target setting space to the target setting space using the acquired images and the origin coordinates, this embodiment achieves automatic alignment between the model and the real world; a virtual scene matching the real scene can be viewed within the real scene, improving the efficiency and effect of augmented reality.
As shown in Fig. 2, step 102 in Fig. 1 may include the following steps.

Step 201: obtain the current position of the image acquisition device.

Step 202: determine the target setting space from at least one setting space by matching the current position with the location of the at least one setting space.

Here, each of the at least one setting space corresponds to at least one three-dimensional model.

Step 203: determine one three-dimensional model from the at least one three-dimensional model corresponding to the target setting space, as the three-dimensional model corresponding to the target setting space.

In an example, the ratio of the three-dimensional model to the setting space is 1:1; in the embodiments of the present disclosure, aligning the three-dimensional model with the setting space involves only rotation and displacement, not scaling of the model.

The three-dimensional models in this embodiment are prefabricated: for multiple setting spaces that may require augmented reality (e.g., houses), at least one effect-rendered three-dimensional model is designed using three-dimensional simulation technology, and the same setting space may correspond to several models with different effects. The specific model may be chosen by the user, recommended to the user based on rendering effect, or selected at random; this embodiment does not limit the specific way of obtaining the three-dimensional model corresponding to the target setting space.
In an example, step 202 may include:

determining the respective distances between the current position and the locations of the at least one setting space;

determining, from the respective distances, a target distance smaller than a preset value; and

taking the setting space corresponding to the target distance among the at least one setting space as the target setting space.

In an example, the distance between the current position and each setting space is determined by computing the difference between the coordinates of the current position in the world coordinate system (e.g., obtained via GPS) and the coordinates of each setting space in the world coordinate system, or on a map that displays positioning coordinates (e.g., Baidu Maps, AMap). Because positioning systems have errors, a relatively large range may be used to determine that the current position is in a setting space (e.g., 100 meters); when the distance between the current position and a setting space is smaller than the preset value, the current position corresponds to that setting space. Alternatively, the target setting space requiring augmented reality is determined according to the user's request, and when the image acquisition device is observed to come within a set distance of the target setting space, alignment of the three-dimensional model based on the device's position begins. For example, when the target setting space is a house, the house information selected by the user on an interactive page determines the house to be viewed; the location of the house is obtained from the service and, combined with the GPS information of the mobile phone, it is determined whether the user is within a certain range of the house (e.g., 100 m); if so, the pre-generated room model (three-dimensional model) is loaded.
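As a concrete illustration of this matching step, the following Python sketch (a minimal sketch; the data shapes, field names, and the 100-meter threshold are assumptions drawn from the example above) selects the target setting space whose recorded location lies within the preset range of the device's GPS fix:

    import math

    EARTH_RADIUS_M = 6_371_000

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters between two GPS fixes.
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    def find_target_space(current_fix, spaces, threshold_m=100.0):
        # Return the first setting space whose location lies within the
        # preset value (e.g. 100 m) of the device's current position.
        lat, lon = current_fix
        for space in spaces:
            if haversine_m(lat, lon, space["lat"], space["lon"]) < threshold_m:
                return space
        return None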
As shown in Fig. 3, step 106 in Fig. 1 may include the following steps.

Step 301: move the image acquisition device within the target setting space, and acquire multiple frames of images in the target setting space at a set frequency during the movement.

Here, the set frequency may be 8 frames per second or more; the more frames obtained per second, the less likely scenes in the target setting space are to be missed. However, the acquisition frequency is related to the moving speed of the image acquisition device: the faster the movement, the more frames need to be obtained; otherwise, pictures are easily lost.

Step 302: in response to one of the multiple frames containing a fixed marker, align the three-dimensional model with the target setting space according to the fixed marker.

In this embodiment, to improve the fit between the three-dimensional model and the target setting space, the alignment is performed using fixed markers that are easy to identify, do not deform, and exist in both the target setting space and the three-dimensional model, which improves the accuracy and speed of the alignment.
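The capture-and-align loop of steps 301-302 might be organized as follows; this is only a sketch, in which camera.read(), detect_marker(), and align_to_marker() are hypothetical stand-ins for a real capture interface, marker detector, and alignment routine, and the frame rate follows the 8-frames-per-second guidance above:

    import time

    def capture_and_align(camera, model, fps=10):
        interval = 1.0 / fps  # set frequency: 8+ frames per second
        while True:
            frame = camera.read()
            marker = detect_marker(frame)  # e.g. a door, window, or wall painting
            if marker is not None:
                align_to_marker(model, marker)  # step 302: align model to space
                break
            time.sleep(interval)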
As shown in Fig. 4, step 106 in Fig. 1 may alternatively include the following steps.

Step 401: move the image acquisition device within the target setting space, and acquire multiple frames of images in the target setting space at a set frequency during the movement.

Step 402: use a pose acquisition device to determine the pose information of the image acquisition device, relative to the origin coordinates, when acquiring at least one of the multiple frames.

Here, the pose information covers 6 degrees of freedom: rotation about three axes (three degrees of freedom) and translation along three axes (three degrees of freedom). In an example, the pose acquisition device includes a gyroscope and/or geomagnetometer sensor, and may be integrated in the image acquisition device or provided separately.
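For illustration, the 6-degree-of-freedom pose described here can be represented as a simple record; the field names and units below are illustrative, not part of the disclosure:

    from dataclasses import dataclass

    @dataclass
    class Pose6DoF:
        roll: float   # rotation about the x axis, radians
        pitch: float  # rotation about the y axis, radians
        yaw: float    # rotation about the z axis, radians
        tx: float     # translation along the x axis, meters
        ty: float     # translation along the y axis, meters
        tz: float     # translation along the z axis, meters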
Step 403: in response to one of the multiple frames containing a fixed marker, initially align the three-dimensional model with the target setting space based on the correspondence between the fixed marker and the corresponding marker model in the three-dimensional model.

Step 404: determine the pose information of the image acquisition device, relative to the origin coordinates, when the image containing the fixed marker is acquired.

Step 405: adjust the initial alignment based on the pose information to achieve alignment between the three-dimensional model and the target setting space.

In this embodiment, the fixed marker is not the only basis for aligning the three-dimensional model: on top of the initial alignment, the pose information of the image acquisition device relative to the origin coordinates, obtained through the pose acquisition device, is used to adjust the initial alignment, so that the three-dimensional model and the target setting space match well not only at the fixed marker but also across the overall space.
As shown in Fig. 5, step 104 in Fig. 1 may further include the following steps.

Step 501: establish an origin coordinate system centered on the origin coordinates.

Step 502: determine the model coordinates of the origin coordinates in the three-dimensional model.

Step 503: embed the three-dimensional model into the target setting space based on the model coordinates and the origin coordinates, to obtain the initial pose of the three-dimensional model in the origin coordinate system.

In this embodiment, initialization is performed at the current position, and the three-dimensional model is embedded according to the origin coordinates and the image acquired at the origin. The coordinates of the origin in the world coordinate system are matched with the coordinates of the three-dimensional model in the world coordinate system to determine the model coordinates corresponding to the origin in the three-dimensional model. Using these model coordinates, the three-dimensional model can be roughly embedded into the target setting space (not yet aligned); treating the model as a whole, its initial pose (6 degrees of freedom) in the origin coordinate system is obtained.
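Under the assumption that the origin and the three-dimensional model are referenced to the same world coordinate system, steps 501-503 might be sketched as follows; world_to_model() is a hypothetical helper, Pose6DoF is the illustrative record above, and embedding the model by a simple negated translation is a deliberate simplification of the rough, not-yet-aligned embedding:

    def initialize_model_pose(origin_world_xyz, model):
        # Step 501: the origin coordinate system is centered on the device's
        # start position, so the origin itself is (0, 0, 0) in that system.
        # Step 502: model coordinates corresponding to the origin coordinates.
        mx, my, mz = model.world_to_model(origin_world_xyz)
        # Step 503: roughly embed the rigid model so that the point (mx, my, mz)
        # coincides with the origin, giving the model's initial 6-DoF pose.
        return Pose6DoF(roll=0.0, pitch=0.0, yaw=0.0, tx=-mx, ty=-my, tz=-mz)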
As shown in Fig. 6, step 403 in Fig. 4 may further include the following steps.

Step 601: based on the deformation of the fixed marker in the image, determine the primary pose information of the image acquisition device, relative to the origin coordinates, when the image containing the fixed marker is acquired.

In an example, since the fixed marker exists in both the target setting space and the three-dimensional model, and the ratio between the two is 1:1, when the space and the model are aligned the fixed marker should coincide exactly with the fixed marker in the three-dimensional model; when they do not coincide, the degree of non-coincidence can be used to determine the primary pose information of the image acquisition device relative to the origin coordinates.
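The disclosure does not name an algorithm for recovering pose from marker deformation; one common option is a perspective-n-point solve, sketched below with OpenCV under the assumption that the marker's corner coordinates in the model, their pixel positions in the frame, and the camera intrinsics are available:

    import cv2
    import numpy as np

    def primary_pose_from_marker(corners_3d, corners_2d, camera_matrix, dist_coeffs):
        # corners_3d: Nx3 marker corner coordinates in the model (meters).
        # corners_2d: Nx2 corresponding pixel coordinates in the frame.
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(corners_3d, dtype=np.float64),
            np.asarray(corners_2d, dtype=np.float64),
            camera_matrix, dist_coeffs)
        if not ok:
            return None
        return rvec, tvec  # rotation (Rodrigues vector) and translation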
Step 602: determine the position coordinates, in the origin coordinate system, of the image acquisition device when it acquires the image containing the fixed marker.

In an example, there may be many kinds of fixed markers; for example, when the target setting space is a house, fixed markers may include, but are not limited to: doors, windows, or other unique, distinctive features in the room, such as a painting on a wall.

Step 603: initially align the three-dimensional model with the target setting space based on the primary pose information, the position coordinates, and the initial pose of the three-dimensional model.

In this embodiment, the primary pose information determined from the fixed marker is used to perform a first alignment of the three-dimensional model based on the initial pose (adjusting the model's 6 degrees of freedom). After this first alignment, the fixed marker is aligned with the fixed marker in the three-dimensional model, but other parts of the model may still deviate. To align the entire three-dimensional model, this embodiment also uses the position coordinates of the image acquisition device at that moment and adjusts the translational degrees of freedom accordingly, improving the alignment effect.

In an example, step 603 may include: determining, based on the position coordinates, the displacement information of the image acquisition device relative to the origin coordinates when acquiring the image containing the fixed marker; determining, based on the primary pose information, the rotation information of the image acquisition device relative to the origin coordinates; and adjusting the initial pose of the three-dimensional model according to the displacement information and the rotation information, to achieve the initial alignment of the three-dimensional model with the target setting space.
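Reusing the illustrative Pose6DoF record, the adjustment in step 603 might be sketched as follows: the position coordinates supply the translation correction and the primary pose supplies the rotation correction. Adding Euler angles component-wise is a simplification; a full implementation would compose rotations properly (e.g. via rotation matrices or quaternions):

    def refine_alignment(initial_pose, device_xyz, primary_pose):
        # Rotation information comes from the primary pose; displacement
        # information comes from the device's position coordinates in the
        # origin coordinate system.
        return Pose6DoF(
            roll=initial_pose.roll + primary_pose.roll,
            pitch=initial_pose.pitch + primary_pose.pitch,
            yaw=initial_pose.yaw + primary_pose.yaw,
            tx=initial_pose.tx + device_xyz[0],
            ty=initial_pose.ty + device_xyz[1],
            tz=initial_pose.tz + device_xyz[2],
        )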
As shown in Fig. 7, step 404 in Fig. 4 may further include the following steps.

Step 701: use the pose acquisition device to track the pose information of the image acquisition device, relative to the origin coordinates, while the image acquisition device moves.

In an example, step 701 includes: tracking the pose information of the image acquisition device based on the displacement between the same feature points in two frames of images consecutively acquired by the image acquisition device, and the two corresponding pose readings obtained by the pose acquisition device when the two frames are acquired.

Here, the feature points have the properties of displacement invariance, rotation invariance, and scale invariance. In an example, feature points may include, but are not limited to: corner points in the image, points of distinctive colors, or other special points; they can be recognized and tracked by identifying the same feature points in different frames.

Step 702: determine, based on the tracking, the pose information of the image acquisition device relative to the origin coordinates when acquiring the image containing the fixed marker.

This embodiment tracks multiple frames of images through feature points. Between consecutive frames sharing the same feature points, the pose of the image acquisition device relative to the origin coordinates when acquiring the corresponding image can be determined from the displacement of the feature points. Through continuous tracking, once a fixed marker is recognized in an image, the pose of the image acquisition device relative to the origin coordinates when acquiring that image can be determined.
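Matching the same feature points across two consecutive frames might look like the OpenCV-based sketch below; ORB features with brute-force Hamming matching are one possible choice, since the disclosure does not prescribe a detector:

    import cv2
    import numpy as np

    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def feature_displacements(prev_gray, curr_gray):
        # Detect and describe feature points in two consecutive grayscale frames.
        kp1, des1 = orb.detectAndCompute(prev_gray, None)
        kp2, des2 = orb.detectAndCompute(curr_gray, None)
        if des1 is None or des2 is None:
            return []
        matches = matcher.match(des1, des2)
        # Displacement of each matched feature point between the two frames;
        # combined with the two sensor pose readings, this tracks the device pose.
        return [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                for m in matches]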
The method for implementing augmented reality provided by the above embodiments of the present disclosure does not stop after alignment between the three-dimensional model and the target setting space is achieved at the fixed marker. As long as the image acquisition device is still acquiring images, this embodiment can align the three-dimensional model with the target setting space based on each frame, so that the user always views an augmented reality scene with good alignment, until the image acquisition device stops acquiring images or the user stops the alignment manually.

Any method for implementing augmented reality provided by the embodiments of the present disclosure may be executed by any appropriate device with data processing capability, including but not limited to terminal devices and servers. Alternatively, any such method may be executed by a processor; for example, the processor executes any method for implementing augmented reality mentioned in the embodiments of the present disclosure by calling corresponding instructions stored in a memory. This is not repeated below.
In an example, the method of Fig. 1 may further include the steps shown in Fig. 8. Fig. 8 is a schematic flowchart of a model display method used in a three-dimensional model, in a method for implementing augmented reality provided by an exemplary embodiment of the present disclosure. The steps shown in Fig. 8 can be executed after step 106 described above in conjunction with Fig. 1, and include steps 801, 802, and 803, each of which is described below.

Step 801: display the partial model corresponding to the user's perspective in the three-dimensional model.

Here, the three-dimensional model may be a model drawn with three-dimensional software and corresponding to a real house, where the real house is in the real world (which may also be called the physical world) and the three-dimensional model is in the virtual world (the model may also be called a virtual house).

In an example, the size ratio of the three-dimensional model to the real house may be 1:1; in that case, if the model is calibrated to the ground and placed in the physical world, the outline of the floor plan and the position of the entrance door in the model overlap exactly with the real house. Of course, the size ratio may also be 1:2, 1:5, 1:10, and so on, which are not enumerated here.

In an example, the three-dimensional model can be used in indoor augmented reality scenarios; for example, in an AR house-viewing scenario or an AR decoration scenario (in which the floor plan can be remodeled).

It should be noted that the user's perspective can be selected by the user according to actual needs. When the three-dimensional model is as shown in Fig. 9A, Fig. 9B, or Fig. 9C, the partial model corresponding to the user's perspective may be as shown in Fig. 10A, Fig. 10B, or Fig. 10C.
Step 802: in response to determining that an external perspective area exists in the partial model, determine the reference visual information of the external perspective area under the user's perspective, according to the house data of the real house corresponding to the three-dimensional model.

Here, the house data of the real house corresponding to the three-dimensional model may be stored in advance; it may record a large amount of information, including but not limited to house structure information, spatial function information, house dimension information, and furniture placement information.

In step 802, whether an external perspective area exists in the partial model can be detected. It should be pointed out that an external perspective area is an area through which the scene outside the house can be seen; for example, it may be a window area or, of course, the entrance area of an open balcony, which are not enumerated here one by one.

When an external perspective area exists in the partial model (i.e., the user can see the external perspective area through the displayed partial model), the reference visual information of the external perspective area under the user's perspective can be determined based on the pre-stored house data of the real house. Specifically, the reference visual information can characterize: when mapped to the real world, whether, under the user's perspective, any part of the external perspective area is occluded for the user, and which parts are occluded; or, equivalently, whether the external perspective area is visible to the user, and which parts are visible.

Step 803: based on the reference visual information, control the external perspective area in the partial model to display pictures according to the corresponding display strategy.

Generally speaking, the user can modify the three-dimensional model according to actual needs; specifically, the user can move or remove walls in the model, or add walls to it. For example, before modification, as shown in Fig. 11A, the model may include a living room 1101, a bedroom 1103, and a bathroom 1105, with a partition wall between the living room 1101 and the bedroom 1103; after modification, there is no partition wall between them, and as shown in Fig. 11B, the living room and the bedroom are connected to form an open space 1107.

Thus, when an external perspective area exists in the partial model, the reference visual information determined in step 802 can reflect several possible cases; for example, it may indicate that the external perspective area is entirely occluded, entirely unoccluded, or partially occluded. For each case, the external perspective area in the partial model can be controlled to display pictures with the corresponding display strategy, so that the display effect differs between cases.

In the embodiments of the present disclosure, the partial model corresponding to the user's perspective in the three-dimensional model can be displayed; when an external perspective area exists in the partial model, the reference visual information of the external perspective area under the user's perspective can be determined based on the house data of the real house corresponding to the model, and the external perspective area can then be controlled to display pictures with the corresponding display strategy. It can be seen that the three-dimensional model is not merely displayed from different angles: as the reference visual information changes, the display strategy for the external perspective area in the partial model changes accordingly, so that the display effect of the model changes. Compared with the prior art, the display effects of the three-dimensional model in the embodiments of the present disclosure are therefore more diverse.

In one example, controlling the external perspective area in the partial model to display pictures with the corresponding display strategy, based on the reference visual information, includes: in response to determining that the reference visual information indicates that the external perspective area is entirely unoccluded, controlling the external perspective area in the partial model to display the corresponding real scene picture; in response to determining that the reference visual information indicates that the external perspective area is entirely occluded, controlling the external perspective area in the partial model to display a virtual scene picture; and in response to determining that the reference visual information indicates that the external perspective area includes an unoccluded first area and an occluded second area, controlling the first area to display the corresponding real scene picture and controlling the second area to display a virtual scene picture.
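The three display branches could be organized as in the sketch below; the occlusion states, the picture-lookup helpers, and the area-splitting method are hypothetical names for illustration, not an API from the disclosure:

    def render_perspective_area(area, reference_visual_info):
        if reference_visual_info.state == "unoccluded":
            # Whole area unoccluded in the real world: show the real scene picture.
            area.show(real_scene_picture_for(area))
        elif reference_visual_info.state == "occluded":
            # Whole area occluded in the real world: show the virtual scene picture.
            area.show(virtual_scene_picture())
        else:
            # Partially occluded: split into unoccluded and occluded sub-areas.
            first, second = area.split_by_occlusion(reference_visual_info)
            first.show(crop_to(real_scene_picture_for(area), first))
            second.show(virtual_scene_picture())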
Here, a uniform virtual scene picture may be stored in advance; the virtual scene picture may be a virtual garden scene, a virtual sky scene, and the like.

Here, the real scene picture corresponding to each external perspective area in the real house may be captured in advance by a camera, and the correspondence between each external perspective area and its real scene picture may be stored, where the real scene picture corresponding to an external perspective area is used to present the real scene that can be seen through that area.

When the reference visual information indicates that the external perspective area is entirely unoccluded, the real scene picture corresponding to the external perspective area in the partial model can be obtained from the stored correspondence, and the external perspective area in the partial model can be controlled to display it. In this way, visually, the external perspective area presents the user with a real scene, such as a street scene, which ensures consistency between the virtual world and the real world and enhances the realism of the model.

When the reference visual information indicates that the external perspective area is entirely occluded, the stored virtual scene picture can be obtained, and the external perspective area in the partial model can be controlled to display it. In this way, visually, the external perspective area presents the user with a virtual scene, from which it can be determined that, due to modifications to the three-dimensional model, the external perspective area that should have been occluded under the user's perspective is actually unoccluded; that is, under the user's perspective, the visibility of the entire external perspective area differs between the real world and the virtual world.

When the reference visual information indicates that the external perspective area includes an unoccluded first area and an occluded second area, the stored virtual scene picture can be obtained and the second area controlled to display it; meanwhile, the real scene picture corresponding to the external perspective area in the partial model can be obtained from the stored correspondence and cropped according to the specific position of the first area within the external perspective area, to obtain the real scene picture corresponding to the first area, which the first area is controlled to display. In this way, visually, the first area presents the user with a real scene, which enhances the realism of the model, while the second area presents a virtual scene, from which it can be determined that, due to modifications to the model, the second area that should have been occluded under the user's perspective is actually unoccluded; that is, the visibility of the second area differs between the real world and the virtual world.

In a specific implementation, suppose the partial model corresponding to the user's perspective is as shown in Fig. 10B. If the reference visual information indicates that the window area in Fig. 10B is entirely unoccluded, the corresponding real scene picture can be displayed in the entire window area; if it indicates that the window area is entirely occluded, the virtual scene picture can be displayed in the entire window area; and if it indicates that area Q1 of the window area is unoccluded while area Q2 is occluded, the corresponding real scene picture can be displayed in Q1 and the virtual scene picture in Q2.

It can be seen that, in the embodiments of the present disclosure, based on the reference visual information, the realism of the model can be enhanced by displaying real scene pictures, while displaying virtual scene pictures lets the user know in which areas visibility differs between the real world and the virtual world.

It should be pointed out that the implementation of controlling the external perspective area in the partial model to display pictures with the corresponding display strategy, based on the reference visual information, is not limited to the above. For example, a first virtual scene picture and a second virtual scene picture may be preset: when the reference visual information indicates that the external perspective area is entirely unoccluded, the external perspective area in the partial model can be controlled to display the first virtual scene picture; when it indicates that the area is entirely occluded, the second virtual scene picture; and when it indicates that the area includes an unoccluded first area and an occluded second area, the first area can be controlled to display the first virtual scene picture and the second area the second virtual scene picture. In this way, through the difference between the displayed virtual scene pictures, the user can learn in which areas visibility differs between the real world and the virtual world, and in which areas it is consistent.
In one example, the method further includes: detecting whether an obstacle exists within a preset distance range in front of the user viewpoint position in the three-dimensional model, to obtain a detection result; determining, according to the house data of the real house, whether an obstacle exists within the preset distance range in front of the user viewpoint position in the three-dimensional model, to obtain a determination result; and, in response to the detection result being that no obstacle exists and the determination result being that an obstacle exists, performing an obstacle response operation.
Here, the preset distance range in front of the user viewpoint position may be a range within which the distance from the user viewpoint position is no greater than 0.3 metres, 0.4 metres, 0.5 metres, or some other value.
In the embodiments of the present disclosure, the model data of the three-dimensional model may be stored in advance, and from the model data it can be detected whether an obstacle exists within the preset distance range in front of the user viewpoint position in the three-dimensional model, yielding a detection result that can be regarded as corresponding to the situation in the virtual world. In addition, whether an obstacle exists within that range may also be determined from the house data of the real house, yielding a determination result that can be regarded as matching the situation in the real world.
If the detection result is that no obstacle exists while the determination result is that an obstacle exists, an obstacle should originally have been present within the preset distance range in front of the user viewpoint position in the three-dimensional model, but a modification of the model has moved or removed it. In this case, an obstacle response operation can be performed.
In a specific implementation, performing the obstacle response operation includes outputting obstacle collision warning information.
Here, the obstacle collision warning information may be output as speech, text, or the like; for example, the display screen of the electronic device may show "Caution: obstacle ahead, please avoid a collision".
In this implementation, outputting the obstacle collision warning information lets the user know that an obstacle lies ahead, thereby keeping the experience consistent between the real world and the virtual world.
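Putting the consistency check and the warning output together, a minimal sketch follows; the two boolean inputs stand in for the detection result (virtual world) and the determination result (real world), and the warning string is merely the example given above.

```python
PRESET_RANGE_M = 0.5  # e.g. 0.3, 0.4 or 0.5 metres, as in the description

def obstacle_response(model_says_obstacle: bool, house_data_says_obstacle: bool) -> None:
    """Compare the virtual-world detection with the real-world determination."""
    if not model_says_obstacle and house_data_says_obstacle:
        # The model edit moved or removed an obstacle that still exists in reality.
        print("Caution: obstacle ahead, please avoid a collision")
```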
In another specific implementation, performing the obstacle response operation includes: prohibiting the user viewpoint position from moving forward and displaying a perspective operation interface, where the perspective operation interface includes N operation controls, N being an integer greater than or equal to 1; receiving an input operation on at least one of the N operation controls; adjusting the user perspective in response to the input operation; and exiting the perspective operation interface and restoring the user perspective.
Here, N may be 1, 2, or 3; the operation controls may be virtual buttons; and the input operations may be touch operations such as taps, presses, and drags. Of course, the value of N, the types of the operation controls, and the types of the input operations are not limited to these and can be determined according to the actual situation; the embodiments of the present disclosure place no restriction on them.
In this implementation, the user viewpoint position can be prohibited from moving forward to keep the experience consistent between the real world and the virtual world and to avoid interaction difficulties; in addition, a perspective operation interface including N operation controls can be displayed.
While the perspective operation interface is displayed, an input operation by the user on at least one of the N operation controls can be received, and the user perspective adjusted in response.
In an example, the N operation controls include a first operation control, and adjusting the user perspective in response to the input operation includes controlling the user perspective to move in response to an input operation on the first operation control; and/or the N operation controls include a second operation control, and adjusting the user perspective in response to the input operation includes controlling the user perspective to rotate and/or pitch in response to an input operation on the second operation control.
Here, a perspective operation interface as shown in FIG. 12A or FIG. 12B may be displayed; operation control M in FIGS. 12A and 12B can serve as the first operation control, and operation control N as the second operation control. An input operation on control M moves the user perspective, and the partial model shown to the user is updated accordingly; an input operation on control N rotates and/or pitches the user perspective, and the partial model is likewise updated. A sketch of such handlers follows.
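This is a hedged sketch assuming a simple Perspective record; the field names and degree-based angles are illustrative choices, not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Perspective:
    x: float = 0.0
    y: float = 0.0
    yaw: float = 0.0    # rotation about the vertical axis, degrees
    pitch: float = 0.0  # up/down tilt, degrees

def on_control_m(view: Perspective, dx: float, dy: float) -> None:
    """First operation control: translate the user perspective."""
    view.x += dx
    view.y += dy  # the partial model shown to the user is re-rendered afterwards

def on_control_n(view: Perspective, d_yaw: float, d_pitch: float) -> None:
    """Second operation control: rotate and/or pitch the user perspective."""
    view.yaw = (view.yaw + d_yaw) % 360.0
    view.pitch = max(-90.0, min(90.0, view.pitch + d_pitch))
```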
After the user perspective has been adjusted, the perspective operation interface can be exited, that is, its display removed; in addition, the user perspective can be restored, that is, reverted to the user perspective before the input operation was received.
This implementation therefore not only keeps the experience consistent between the real world and the virtual world and avoids interaction difficulties, but also allows the viewing angle to be adjusted according to the user's input on the perspective operation interface, without the user viewpoint position moving forward, so that the user can examine the required parts of the three-dimensional model.
In an example, exiting the perspective operation interface and restoring the user perspective includes: obtaining adjustment information of the user perspective; and, in response to determining that the adjustment information satisfies a preset condition, exiting the perspective operation interface and restoring the user perspective. Here, the adjustment information of the user perspective includes, but is not limited to, the continuous adjustment duration of the user perspective, the movement range of the user perspective, and so on.
After the adjustment information of the user perspective is obtained, whether it satisfies the preset condition can be judged. Specifically, if the continuous adjustment duration of the user perspective exceeds 10 seconds (another duration value may also be used), the preset condition can be deemed satisfied; at that point the perspective operation interface can be exited and the perspective restored to that before the input operation was received. Alternatively, if the movement range of the user perspective exceeds 50 centimetres (another distance value may also be used), the preset condition can be deemed satisfied, and the interface exited and the perspective restored likewise.
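The preset condition can be sketched as below, using the example thresholds of 10 seconds and 50 centimetres from the text; the function and parameter names are assumptions.

```python
MAX_ADJUST_SECONDS = 10.0  # example threshold from the description
MAX_MOVE_METRES = 0.5      # 50 cm, example threshold from the description

def should_exit_interface(adjust_seconds: float, moved_metres: float) -> bool:
    """Preset condition: exceeding either threshold triggers the exit."""
    return adjust_seconds > MAX_ADJUST_SECONDS or moved_metres > MAX_MOVE_METRES

def maybe_exit_and_restore(current_view, saved_view, adjust_seconds, moved_metres):
    if should_exit_interface(adjust_seconds, moved_metres):
        return saved_view   # revert to the perspective saved before the input operation
    return current_view
```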
It can be seen that, based on the adjustment information of the user perspective, the situations in which the perspective operation interface needs to be exited can be identified very conveniently.
In an example, exiting the perspective operation interface and restoring the user perspective includes: exiting the perspective operation interface and restoring the user perspective when the end of the input operation is detected.
Here, a pressure sensor can detect whether the user has released the operation control on which the input operation acts, thereby determining whether the input operation has ended. Once it has, the perspective operation interface can be exited and the perspective restored to that before the input operation was received.
It can be seen that, based on whether the input operation has ended, the situations in which the perspective operation interface needs to be exited can likewise be identified very conveniently.
The model display method in a three-dimensional model provided by any of the embodiments of the present disclosure can be executed by any suitable device with data-processing capability, including but not limited to terminal devices and servers. Alternatively, it can be executed by a processor; for example, the processor executes the method by calling corresponding instructions stored in a memory. This will not be repeated below.
Exemplary Apparatus
FIG. 13 is a schematic structural diagram of an apparatus for implementing augmented reality provided by an exemplary embodiment of the present disclosure. The apparatus of this embodiment includes the following modules.
A model obtaining module 1301, configured to obtain, based on the current position of the image acquisition device, a three-dimensional model corresponding to the target setting space,
where the current position is in the target setting space.
An initialization module 1302, configured to control the image acquisition device to start acquiring images at the current position and to perform coordinate initialization with the current position as the origin coordinates.
A model alignment module 1303, configured to align the three-dimensional model with the target setting space based on the origin coordinates and the images acquired by the image acquisition device.
The apparatus for implementing augmented reality provided by the above embodiment of the present disclosure obtains, based on the current position of an image acquisition device, a three-dimensional model corresponding to a target setting space, the current position being in the target setting space; controls the image acquisition device to start acquiring images at the current position and performs coordinate initialization with the current position as the origin coordinates; and aligns the three-dimensional model with the target setting space based on the origin coordinates and the acquired images. This embodiment aligns the three-dimensional model corresponding to the target setting space with the target setting space through the acquired images and the origin coordinates, achieving automatic alignment between the model and the real world, so that a virtual scene matching the real scene can be viewed within the real scene, improving the efficiency and effect of augmented reality.
In some embodiments, the model obtaining module 1301 is configured to: obtain the current position of the image acquisition device; determine the target setting space from at least one setting space by matching the current position against the location of each of the at least one setting space, where each setting space corresponds to at least one three-dimensional model; and determine one three-dimensional model from the at least one three-dimensional model corresponding to the target setting space as the three-dimensional model corresponding to the target setting space.
In an example, when determining the target setting space corresponding to the image acquisition device by matching the current position against the locations of the at least one setting space, the model obtaining module 1301 is configured to: determine the respective distances between the current position and the locations of the at least one setting space; determine, from those distances, a target distance smaller than a preset value; and take the setting space corresponding to the target distance among the at least one setting space as the target setting space.
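A minimal sketch of this matching step follows; the data shapes are assumed, and when several spaces fall within the preset value it simply keeps the nearest one, which is one reasonable reading of "a target distance smaller than a preset value".

```python
import math

def pick_target_space(current_pos, spaces, preset=5.0):
    """spaces: iterable of (space_id, (x, y)) locations; current_pos: (x, y).
    Returns the id of the nearest space within the preset distance, else None."""
    best = None
    for space_id, loc in spaces:
        d = math.dist(current_pos, loc)
        if d < preset and (best is None or d < best[1]):
            best = (space_id, d)
    return best[0] if best else None

# Usage: pick_target_space((1.0, 2.0), [("flat_a", (1.5, 2.0)), ("flat_b", (40.0, 0.0))])
```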
In some embodiments, the model alignment module 1303 includes: an image acquisition unit, configured to move the image acquisition device within the target setting space and acquire multiple frames of images in the target setting space at a set frequency during the movement; and a marker alignment unit, configured to align, in response to one of the multiple frames of images including a fixed marker, the three-dimensional model with the target setting space according to the fixed marker.
In an example, the model alignment module 1303 further includes a pose determination unit, configured to determine, by means of a pose acquisition device, the pose information of the image acquisition device relative to the origin coordinates when it acquires at least one of the multiple frames of images. The marker alignment unit is configured to: in response to one of the multiple frames of images including a fixed marker, initially align the three-dimensional model with the target setting space based on the correspondence between the fixed marker and the corresponding marker model in the three-dimensional model; determine the pose information of the image acquisition device relative to the origin coordinates when the image including the fixed marker is acquired; and adjust the initial alignment based on the pose information to achieve the alignment of the three-dimensional model with the target setting space.
In an example, the initialization module 1302 is configured to: establish an origin coordinate system centered on the origin coordinates; determine the model coordinates of the origin coordinates in the three-dimensional model; and embed the three-dimensional model into the target setting space based on the model coordinates and the origin coordinates, obtaining the initial pose of the three-dimensional model in the origin coordinate system.
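As a rough illustration, treating the pose as a 4x4 homogeneous transform, the embedding can be written as a translation that carries the model coordinates of the origin onto the origin itself; this is a sketch under that assumption, not the disclosed implementation.

```python
import numpy as np

def initial_model_pose(model_coords_of_origin):
    """Return a 4x4 pose that embeds the model so that the model point
    corresponding to the device's current position lands on the origin."""
    pose = np.eye(4)
    pose[:3, 3] = -np.asarray(model_coords_of_origin, dtype=float)
    return pose  # initial pose of the model in the origin coordinate system
```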
In an example, when initially aligning the three-dimensional model with the setting space based on the correspondence between the fixed marker and the corresponding marker model in the three-dimensional model, the marker alignment unit is configured to: determine, based on the deformation of the fixed marker in the image, the primary pose information of the image acquisition device relative to the origin coordinates when the image including the fixed marker is acquired; determine the position coordinates, in the origin coordinate system, of the image acquisition device when it acquires the image including the fixed marker; and initially align the three-dimensional model with the target setting space based on the primary pose information, the position coordinates, and the initial pose of the three-dimensional model.
In an example, when initially aligning the three-dimensional model with the target setting space based on the primary pose information, the position coordinates, and the initial pose of the three-dimensional model, the marker alignment unit is configured to: determine, based on the position coordinates, the displacement information of the image acquisition device relative to the origin coordinates when it acquires the image including the fixed marker; determine, based on the primary pose information, the rotation information of the image acquisition device relative to the origin coordinates; and adjust the initial pose of the three-dimensional model according to the displacement information and the rotation information, achieving the initial alignment of the three-dimensional model with the target setting space.
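Under the same 4x4-transform assumption, the adjustment can be sketched as composing a correction, built from the rotation and displacement information, with the model's initial pose; the argument shapes are illustrative assumptions.

```python
import numpy as np

def adjust_initial_pose(initial_pose, rotation_3x3, displacement_xyz):
    """Compose the rotation (from the primary pose information) and the
    displacement (from the position coordinates) into a correction transform."""
    correction = np.eye(4)
    correction[:3, :3] = np.asarray(rotation_3x3, dtype=float)
    correction[:3, 3] = np.asarray(displacement_xyz, dtype=float)
    return correction @ np.asarray(initial_pose, dtype=float)  # aligned model pose
```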
In some embodiments, when determining the pose information of the image acquisition device relative to the origin coordinates at the time the image including the fixed marker is acquired, the marker alignment unit is configured to: track, by means of the pose acquisition device, the pose information of the image acquisition device relative to the origin coordinates during the movement of the image acquisition device; and determine, based on the tracking, the pose information of the image acquisition device relative to the origin coordinates when it acquires the image including the fixed marker.
In an example, when tracking the pose information of the image acquisition device relative to the origin coordinates by means of the pose acquisition device during the movement of the image acquisition device, the marker alignment unit is configured to track the pose information of the image acquisition device based on the displacements between the same feature points in two frames of images consecutively acquired by the image acquisition device, and the two corresponding pieces of pose information obtained by the pose acquisition device when those two frames are acquired.
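A hedged sketch of this tracking step; the pixel-to-metre scale px_per_metre and the naive averaging of the two translation estimates are assumptions made purely for illustration, not the disclosed fusion scheme.

```python
import numpy as np

def track_translation(pts_prev, pts_curr, pose_a_xy, pose_b_xy, px_per_metre=500.0):
    """Estimate the frame-to-frame translation two ways and blend them:
    from matched feature points, and from the two device-reported poses."""
    img_shift_px = np.mean(np.asarray(pts_curr, float) - np.asarray(pts_prev, float), axis=0)
    img_shift_m = img_shift_px / px_per_metre                 # image-derived estimate
    dev_shift_m = np.asarray(pose_b_xy, float) - np.asarray(pose_a_xy, float)
    return 0.5 * (img_shift_m + dev_shift_m)                  # naive blend, illustration only
```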
FIG. 14 is a schematic structural diagram of the model display apparatus, for use with a three-dimensional model, within the apparatus for implementing augmented reality provided by an exemplary embodiment of the present disclosure. The apparatus shown in FIG. 14 includes a display module 1401, a determination module 1402, and a control module 1403.
The display module 1401 is configured to display the partial model, corresponding to the user perspective, of the three-dimensional house model.
The determination module 1402 is configured to determine, in response to determining that an external perspective area exists in the partial model, the reference visual information of the external perspective area under the user perspective according to the house data of the real house corresponding to the three-dimensional house model.
The control module 1403 is configured to control, based on the reference visual information, the external perspective area in the partial model to display pictures according to the corresponding display strategy.
In an example, the control module 1403 is specifically configured to: control the external perspective area in the partial model to display the corresponding real scene picture when the reference visual information indicates that the external perspective area is entirely unoccluded; control the external perspective area in the partial model to display a virtual scene picture when the reference visual information indicates that the external perspective area is entirely occluded; and, when the reference visual information indicates that the external perspective area includes an unoccluded first region and an occluded second region, control the first region to display the corresponding real scene picture and the second region to display a virtual scene picture.
In an example, the apparatus further includes: a first acquisition module, configured to detect whether an obstacle exists within a preset distance range in front of the user viewpoint position in the three-dimensional house model, to obtain a detection result; a second acquisition module, configured to determine, according to the house data of the real house, whether an obstacle exists within the preset distance range in front of the user viewpoint position in the three-dimensional house model, to obtain a determination result; and an execution module, configured to perform an obstacle response operation when the detection result is that no obstacle exists and the determination result is that an obstacle exists.
In an example, the execution module includes: a first processing unit, configured to prohibit the user viewpoint position from moving forward and display a perspective operation interface, where the perspective operation interface includes N operation controls, N being an integer greater than or equal to 1; a receiving unit, configured to receive an input operation on at least one of the N operation controls; an adjustment unit, configured to adjust the user perspective in response to the input operation; and a second processing unit, configured to exit the perspective operation interface and restore the user perspective. Additionally or alternatively, the execution module is specifically configured to output obstacle collision warning information.
In an example, the N operation controls include a first operation control, and the adjustment unit is specifically configured to control the user perspective to move in response to an input operation on the first operation control; and/or the N operation controls include a second operation control, and the adjustment unit is specifically configured to control the user perspective to rotate and/or pitch in response to an input operation on the second operation control.
In an example, the second processing unit includes: an acquisition subunit, configured to obtain the adjustment information of the user perspective; and a processing subunit, configured to exit the perspective operation interface and restore the user perspective when the adjustment information satisfies the preset condition. Additionally or alternatively, the second processing unit is specifically configured to exit the perspective operation interface and restore the user perspective when the end of the input operation is detected.
In an example, the size ratio of the three-dimensional house model to the real house is 1:1, and/or the external perspective area is a window area.
Exemplary Electronic Device
An electronic device according to an embodiment of the present disclosure is described below with reference to FIG. 15. The electronic device may be either or both of a first device and a second device, or a stand-alone device independent of them; the stand-alone device can communicate with the first and second devices to receive the input signals they collect.
FIG. 15 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in FIG. 15, the electronic device 1500 includes one or more processors 1501 and a memory 1502.
The processor 1501 may be a central processing unit (CPU) or another form of processing unit with data-processing capability and/or instruction-execution capability, and may control other components in the electronic device 1500 to perform desired functions.
The memory 1502 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or a cache. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, and flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1501 may run the program instructions to implement the methods for realizing augmented reality of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as input signals, signal components, and noise components may also be stored in the computer-readable storage medium.
In an example, the electronic device 1500 may further include an input device 1503 and an output device 1504, these components being interconnected by a bus system and/or another form of connection mechanism (not shown).
For example, when the electronic device is the first device or the second device, the input device 1503 may be the aforementioned microphone or microphone array for capturing the input signal of a sound source. When the electronic device is a stand-alone device, the input device 1503 may be a communication network connector for receiving the collected input signals from the first device and the second device.
In addition, the input device 1503 may also include, for example, a keyboard and a mouse.
The output device 1504 can output various information to the outside, including determined distance information and direction information. The output device 1504 may include, for example, a display, a speaker, a printer, and a communication network together with the remote output devices connected to it.
Of course, for simplicity, FIG. 15 shows only some of the components of the electronic device 1500 that are relevant to the present disclosure, omitting components such as buses and input/output interfaces. In addition, the electronic device 1500 may include any other appropriate components depending on the specific application.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the above methods and devices, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when run by a processor, cause the processor to execute the steps of the method for realizing augmented reality according to the various embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
The computer program product may be written with program code for performing the operations of the embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as an independent software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium on which computer program instructions are stored; when run by a processor, the instructions cause the processor to execute the steps of the method for realizing augmented reality according to the various embodiments of the present disclosure described in the "Exemplary Method" section of this specification.
The computer-readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
The basic principles of the present disclosure have been described above in conjunction with specific embodiments. However, it should be pointed out that the advantages, strengths, and effects mentioned in the present disclosure are merely examples and not limitations, and they cannot be considered essential to each embodiment of the present disclosure. In addition, the specific details disclosed above serve only the purposes of illustration and ease of understanding, not of limitation; they do not restrict the present disclosure to being implemented with those specific details.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments can be referred to one another. As the system embodiment substantially corresponds to the method embodiment, its description is relatively brief; for the relevant parts, refer to the description of the method embodiment.
The block diagrams of the devices, apparatuses, equipment, and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that connection, arrangement, and configuration must follow the manner shown in the block diagrams. As those skilled in the art will recognize, these devices, apparatuses, equipment, and systems can be connected, arranged, and configured in any manner. Words such as "include", "comprise", and "have" are open-ended, mean "including but not limited to", and may be used interchangeably with it. The words "or" and "and" as used herein mean "and/or" and may be used interchangeably with it, unless the context clearly indicates otherwise. The word "such as" as used herein means the phrase "such as but not limited to" and may be used interchangeably with it.
The methods and apparatuses of the present disclosure may be implemented in many ways, for example by software, hardware, firmware, or any combination of the three. The above order of the steps of the methods is for illustration only, and the steps of the methods of the present disclosure are not limited to the order specifically described above unless otherwise specially stated. In addition, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, these programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the methods according to the present disclosure.
It should also be pointed out that, in the apparatuses, devices, and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be regarded as equivalent solutions of the present disclosure.
The above description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but accords with the widest scope consistent with the principles and novel features disclosed herein.
The above description has been given for the purposes of illustration and description. Moreover, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions, and subcombinations thereof.

Claims (21)

  1. A method for implementing augmented reality, comprising:
    obtaining, based on a current position of an image acquisition device, a three-dimensional model corresponding to a target setting space, wherein the current position is in the target setting space;
    controlling the image acquisition device to start acquiring images at the current position, and performing coordinate initialization with the current position as origin coordinates; and
    aligning the three-dimensional model with the target setting space based on the origin coordinates and the images acquired by the image acquisition device.
  2. The method according to claim 1, wherein the obtaining a three-dimensional model corresponding to a target setting space comprises:
    obtaining the current position of the image acquisition device;
    determining the target setting space from at least one setting space by matching the current position against the location of the at least one setting space, wherein each setting space in the at least one setting space corresponds to at least one three-dimensional model; and
    determining one three-dimensional model from the at least one three-dimensional model corresponding to the target setting space as the three-dimensional model corresponding to the target setting space.
  3. The method according to claim 2, wherein the determining the target setting space comprises:
    determining respective distances between the current position and the locations of the at least one setting space;
    determining, from the respective distances, a target distance smaller than a preset value; and
    taking the setting space corresponding to the target distance among the at least one setting space as the target setting space.
  4. The method according to any one of claims 1-3, wherein the aligning the three-dimensional model with the target setting space comprises:
    moving the image acquisition device within the target setting space, and acquiring multiple frames of images in the target setting space at a set frequency during the movement; and
    in response to one of the multiple frames of images including a fixed marker, aligning the three-dimensional model with the target setting space according to the fixed marker.
  5. The method according to claim 4, further comprising, before aligning the three-dimensional model with the target setting space according to the fixed marker:
    determining, by means of a pose acquisition device, pose information of the image acquisition device relative to the origin coordinates when the image acquisition device acquires at least one of the multiple frames of images,
    wherein the aligning the three-dimensional model with the target setting space according to the fixed marker comprises:
    in response to one of the multiple frames of images including the fixed marker, initially aligning the three-dimensional model with the target setting space based on a correspondence between the fixed marker and a corresponding marker model in the three-dimensional model;
    determining pose information of the image acquisition device relative to the origin coordinates when the image including the fixed marker is acquired; and
    adjusting the initial alignment based on the pose information to achieve the alignment of the three-dimensional model with the target setting space.
  6. The method according to claim 5, wherein the performing coordinate initialization with the current position as origin coordinates comprises:
    establishing an origin coordinate system centered on the origin coordinates;
    determining model coordinates of the origin coordinates in the three-dimensional model; and
    embedding the three-dimensional model into the target setting space based on the model coordinates and the origin coordinates, to obtain an initial pose of the three-dimensional model in the origin coordinate system.
  7. The method according to claim 6, wherein the initially aligning the three-dimensional model with the target setting space comprises:
    determining, based on a deformation of the fixed marker in the image, primary pose information of the image acquisition device relative to the origin coordinates when the image including the fixed marker is acquired;
    determining position coordinates, in the origin coordinate system, of the image acquisition device when it acquires the image including the fixed marker; and
    initially aligning the three-dimensional model with the target setting space based on the primary pose information, the position coordinates, and the initial pose of the three-dimensional model.
  8. The method according to claim 7, wherein the initially aligning the three-dimensional model with the target setting space based on the primary pose information, the position coordinates, and the initial pose of the three-dimensional model comprises:
    determining, based on the position coordinates, displacement information of the image acquisition device relative to the origin coordinates when it acquires the image including the fixed marker;
    determining, based on the primary pose information, rotation information of the image acquisition device relative to the origin coordinates; and
    adjusting the initial pose of the three-dimensional model according to the displacement information and the rotation information, to achieve the initial alignment of the three-dimensional model with the target setting space.
  9. The method according to any one of claims 5-8, wherein the determining pose information of the image acquisition device relative to the origin coordinates when the image including the fixed marker is acquired comprises:
    tracking, by means of the pose acquisition device, pose information of the image acquisition device relative to the origin coordinates during the movement of the image acquisition device; and
    determining, based on the tracking, the pose information of the image acquisition device relative to the origin coordinates when it acquires the image including the fixed marker.
  10. The method according to claim 9, wherein the tracking pose information of the image acquisition device relative to the origin coordinates comprises:
    tracking the pose information of the image acquisition device based on displacements between the same feature points in two frames of images consecutively acquired by the image acquisition device, and two corresponding pieces of pose information acquired by the pose acquisition device when the image acquisition device consecutively acquires the two frames of images.
  11. The method according to claim 1, further comprising:
    displaying a partial model, corresponding to a user perspective, of the three-dimensional model;
    in response to determining that an external perspective area exists in the partial model, determining reference visual information of the external perspective area under the user perspective according to house data of a real house corresponding to the three-dimensional model; and
    controlling, based on the reference visual information, the external perspective area in the partial model to display pictures according to a corresponding display strategy.
  12. The method according to claim 11, wherein the controlling the external perspective area in the partial model to display pictures according to a corresponding display strategy comprises:
    in response to determining that the reference visual information indicates that the external perspective area is entirely unoccluded, controlling the external perspective area in the partial model to display a corresponding real scene picture;
    in response to determining that the reference visual information indicates that the external perspective area is entirely occluded, controlling the external perspective area in the partial model to display a virtual scene picture; and
    in response to determining that the reference visual information indicates that the external perspective area includes an unoccluded first region and an occluded second region, controlling the first region to display a corresponding real scene picture, and controlling the second region to display a virtual scene picture.
  13. The method according to claim 11, further comprising:
    detecting whether an obstacle exists within a preset distance range in front of a user viewpoint position in the three-dimensional model, to obtain a detection result;
    determining, according to the house data of the real house, whether an obstacle exists within the preset distance range in front of the user viewpoint position in the three-dimensional model, to obtain a determination result; and
    in response to determining that the detection result is that no obstacle exists and the determination result is that an obstacle exists, performing an obstacle response operation.
  14. The method according to claim 13, wherein the performing an obstacle response operation comprises:
    prohibiting the user viewpoint position from moving forward, and displaying a perspective operation interface, wherein the perspective operation interface includes N operation controls, N being an integer greater than or equal to 1;
    receiving an input operation on at least one of the N operation controls;
    adjusting the user perspective in response to the input operation; and
    exiting the perspective operation interface, and restoring the user perspective; and/or
    wherein the performing an obstacle response operation comprises:
    outputting obstacle collision warning information.
  15. The method according to claim 14, wherein the N operation controls include a first operation control, and the adjusting the user perspective in response to the input operation comprises: controlling the user perspective to move in response to an input operation on the first operation control; and/or
    wherein the N operation controls include a second operation control, and the adjusting the user perspective in response to the input operation comprises: controlling the user perspective to rotate and/or pitch in response to an input operation on the second operation control.
  16. The method according to claim 14, wherein the exiting the perspective operation interface and restoring the user perspective comprises:
    obtaining adjustment information of the user perspective; and
    in response to determining that the adjustment information satisfies a preset condition, exiting the perspective operation interface, and restoring the user perspective; and/or
    wherein the exiting the perspective operation interface and restoring the user perspective comprises:
    in response to detecting that the input operation has ended, exiting the perspective operation interface, and restoring the user perspective.
  17. The method according to any one of claims 11 to 16, wherein a size ratio of the three-dimensional model to the real house is 1:1, and/or wherein the external perspective area is a window area.
  18. An apparatus for implementing augmented reality, comprising: means for implementing the method according to any one of claims 1-17.
  19. A computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program is used to execute the method according to any one of claims 1-17.
  20. An electronic device, comprising:
    a processor; and
    a memory for storing instructions executable by the processor, wherein the executable instructions, when executed by the processor, implement the method according to any one of claims 1-17.
  21. A computer program product, comprising a computer program, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-17.
PCT/CN2021/098887, priority date 2020-06-12, filed 2021-06-08, published as WO2021249390A1: Method and apparatus for implementing augmented reality, storage medium, and electronic device

Applications Claiming Priority (4)

- CN202010534339.5, priority date 2020-06-12
- CN202010534339.5A (CN111681320B), filed 2020-06-12: Model display method and device in three-dimensional house model
- CN202010987132.3, priority date 2020-09-18
- CN202010987132.3A (CN112102479B), filed 2020-09-18: Augmented reality method and device based on model alignment, storage medium and electronic equipment
Publications (1)

- Publication number: WO2021249390A1

Family ID: 78845352

Family Applications (1)

- PCT/CN2021/098887 (WO2021249390A1), priority date 2020-06-12, filed 2021-06-08: Method and apparatus for implementing augmented reality, storage medium, and electronic device

Country Status (1)

- WO: WO2021249390A1

Patent Citations (7)

(* cited by examiner, † cited by third party)

- CN102467756A * (国际商业机器公司), priority 2010-10-29, published 2012-05-23: Perspective method used for a three-dimensional scene and apparatus thereof
- CN108830939A * (杭州群核信息技术有限公司), priority 2018-06-08, published 2018-11-16: Scene walkthrough experience method and system based on mixed reality
- US20190373204A1 * (Trimble Inc.), priority 2017-10-10, published 2019-12-05: Augmented reality device for leveraging high-accuracy gnss data
- CN111226218A * (赵汉茂), priority 2017-10-20, published 2020-06-02: Russian-block-type house design system
- CN111459269A * (视辰信息科技(上海)有限公司), priority 2020-03-24, published 2020-07-28: Augmented reality display method, system and computer readable storage medium
- CN111681320A * (贝壳技术有限公司), priority 2020-06-12, published 2020-09-18: Model display method and device in three-dimensional house model
- CN112102479A * (贝壳技术有限公司), priority 2020-09-18, published 2020-12-18: Augmented reality method and device based on model alignment, storage medium and electronic equipment

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114357348A (en) * 2021-12-29 2022-04-15 Beijing Youzhuju Network Technology Co., Ltd. Display method and device and electronic equipment
CN114727090A (en) * 2022-03-17 2022-07-08 Alibaba (China) Co., Ltd. Physical space scanning method, apparatus, terminal device and storage medium
CN114727090B (en) * 2022-03-17 2024-01-26 Alibaba (China) Co., Ltd. Physical space scanning method, apparatus, terminal device and storage medium
WO2023207345A1 (en) * 2022-04-29 2023-11-02 Huizhou TCL Mobile Communication Co., Ltd. Data interaction method, apparatus, computer device, and computer-readable storage medium
CN115297275A (en) * 2022-08-02 2022-11-04 Beijing Guoxin Hutong Technology Co., Ltd. Restricted-space audio and video data acquisition method based on wireless MESH transmission
WO2024066208A1 (en) * 2022-09-26 2024-04-04 Realsee (Beijing) Technology Co., Ltd. Method and apparatus for displaying panoramic image of point location outside model, and device and medium
CN115861039A (en) * 2022-11-21 2023-03-28 Beijing Chengshi Wanglin Information Technology Co., Ltd. Information display method, device, equipment and medium
CN116756893A (en) * 2023-06-16 2023-09-15 Shenzhen Xundao Industrial Co., Ltd. Power transmission and distribution cable layout and control method applied to industrial and mining control system
CN116756893B (en) * 2023-06-16 2024-01-05 Shenzhen Xundao Industrial Co., Ltd. Power transmission and distribution cable layout and control method applied to industrial and mining control system

Similar Documents

Publication Publication Date Title
WO2021249390A1 (en) Method and apparatus for implementing augmented reality, storage medium, and electronic device
CN111127627B (en) Model display method and device in three-dimensional house model
US11165959B2 (en) Connecting and using building data acquired from mobile devices
US11494973B2 (en) Generating floor maps for buildings from automated analysis of visual data of the buildings' interiors
US11252329B1 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
WO2022095543A1 (en) Image frame stitching method and apparatus, readable storage medium, and electronic device
US11645781B2 (en) Automated determination of acquisition locations of acquired building images based on determined surrounding room data
US11632602B2 (en) Automated determination of image acquisition locations in building interiors using multiple data capture devices
Sankar et al. Capturing indoor scenes with smartphones
US20140248950A1 (en) System and method of interaction for mobile devices
US10991161B2 (en) Generating virtual representations
US11842464B2 (en) Automated exchange and use of attribute information between building images of multiple types
CN111681320B (en) Model display method and device in three-dimensional house model
CA3154186A1 (en) Automated building floor plan generation using visual data of multiple building images
CN112037279B (en) Article position identification method and device, storage medium and electronic equipment
US20230154027A1 (en) Spatial construction using guided surface detection
CA3069813C (en) Capturing, connecting and using building interior data from mobile devices
CN112102479B (en) Augmented reality method and device based on model alignment, storage medium and electronic equipment
US9881419B1 (en) Technique for providing an initial pose for a 3-D model
CN115439634B (en) Interactive presentation method of point cloud data and storage medium
JP6693069B2 (en) Image display device, method, and program
CN111627061A (en) Pose detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 21820988
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 Ep: pct application non-entry in european phase
Ref document number: 21820988
Country of ref document: EP
Kind code of ref document: A1