CN110914871A - Method and device for acquiring three-dimensional scene - Google Patents

Method and device for acquiring three-dimensional scene

Info

Publication number
CN110914871A
Authority
CN
China
Prior art keywords
dimensional
scene
panoramic image
training data
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201880038658.8A
Other languages
Chinese (zh)
Inventor
陆真国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN110914871A publication Critical patent/CN110914871A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

There is provided a method of acquiring a three-dimensional scene, the method comprising: acquiring a two-dimensional panoramic image; and inputting the two-dimensional panoramic image into a model to obtain a three-dimensional scene corresponding to the two-dimensional panoramic image. Since only a two-dimensional panoramic image needs to be acquired, after which the corresponding three-dimensional scene reconstruction result can be obtained from a single model, the method simplifies the implementation process, reduces cost, and improves the convenience and efficiency of three-dimensional reconstruction compared with conventional three-dimensional reconstruction techniques.

Description

Method and device for acquiring three-dimensional scene
Copyright declaration
The disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
Technical Field
The present application relates to the field of three-dimensional scene reconstruction, and more particularly, to a method and apparatus for acquiring a three-dimensional scene.
Background
The three-dimensional reconstruction technique can be applied to the reconstruction and mapping of indoor scenes. In combination with Augmented Reality (AR) technology, the reconstruction result can be used for previewing interior decoration designs, previewing furniture arrangements, and other applications that require a three-dimensional scene.
Conventional three-dimensional reconstruction techniques typically reconstruct the three-dimensional scene with a color (RGB) camera combined with a sensor that provides depth information, such as a depth camera or a laser scanner. This scheme requires moving the camera, or the photographed subject, along a certain path. In addition, some reconstruction methods reconstruct a three-dimensional scene by arranging a large number of complex templates (patterns) in the scene in advance.
Existing three-dimensional scene reconstruction technologies therefore involve complex implementation processes.
Disclosure of Invention
The application provides a method and a device for acquiring a three-dimensional scene, which can effectively improve the convenience and efficiency of three-dimensional scene reconstruction.
In a first aspect, a method for acquiring a three-dimensional scene is provided, the method comprising: acquiring a two-dimensional panoramic image; and inputting the two-dimensional panoramic image into a model to obtain a three-dimensional scene corresponding to the two-dimensional panoramic image.
In a second aspect, a method for three-dimensional scene reconstruction is provided, the method comprising: acquiring training data, wherein the training data comprises a two-dimensional panoramic image sample and a three-dimensional scene sample corresponding to the two-dimensional panoramic image sample; and training a model through the training data by adopting a machine learning algorithm, so that the model has the functions of receiving the two-dimensional panoramic image and outputting a three-dimensional scene.
In a third aspect, an apparatus for acquiring a three-dimensional scene is provided, the apparatus comprising: an image acquisition unit for acquiring a two-dimensional panoramic image; and a processing unit for inputting the two-dimensional panoramic image acquired by the image acquisition unit into a model to obtain a three-dimensional scene corresponding to the two-dimensional panoramic image.
In a fourth aspect, there is provided an apparatus for three-dimensional scene reconstruction, the apparatus comprising: an acquisition unit for acquiring training data, wherein the training data includes a two-dimensional panoramic image sample and its corresponding three-dimensional scene sample; and a training unit for training a model through the training data by adopting a machine learning algorithm, so that the model has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
In a fifth aspect, an apparatus for three-dimensional scene reconstruction is provided, the apparatus comprising a memory for storing instructions and a processor for executing the instructions stored in the memory, and execution of the instructions stored in the memory causes the processor to perform the method provided in the first aspect or the second aspect.
In a sixth aspect, a computer storage medium is provided, on which a computer program is stored, which, when executed by a computer, causes the computer to perform the method provided in the first or second aspect.
In a seventh aspect, a computer program product is provided, which contains instructions that, when executed by a computer, cause the computer to perform the method provided in the first or second aspect.
According to the scheme provided by the application, the two-dimensional panoramic image is input into the model, and the three-dimensional scene reconstruction result corresponding to the two-dimensional panoramic image can be obtained through the output of the model. Therefore, according to the scheme provided by the application, only the two-dimensional panoramic image needs to be acquired, and then the corresponding three-dimensional scene reconstruction result can be acquired through one model.
Drawings
Fig. 1 is a schematic flow chart of a method for acquiring a three-dimensional scene according to an embodiment of the present application.
Fig. 2 is a schematic flow chart of a method for reconstructing a three-dimensional scene according to an embodiment of the present application.
Fig. 3 is a schematic block diagram of an apparatus for acquiring a three-dimensional scene according to an embodiment of the present application.
Fig. 4 is a schematic block diagram of an apparatus for reconstructing a three-dimensional scene according to an embodiment of the present application.
Fig. 5 is a schematic block diagram of a system for reconstructing a three-dimensional scene according to an embodiment of the present application.
Detailed Description
Fig. 1 is a schematic flow chart of a method 100 for acquiring a three-dimensional scene according to an embodiment of the present application. The method 100 includes the following steps.
And S110, acquiring a two-dimensional panoramic image.
The two-dimensional panoramic image referred to herein is an image obtained by "panoramic shooting": the shooting device is fixed at one position, a plurality of pictures are taken in different directions covering the surrounding 360 degrees (or another non-zero angle, such as 90 degrees or 270 degrees), and the pictures are then stitched into a single panoramic picture.
Alternatively, a two-dimensional panoramic image may be acquired by shooting with a panoramic camera. For example, a two-dimensional panoramic image is captured by a 270-degree panoramic camera.
Alternatively, a plurality of pictures may be taken with an ordinary camera in the "panoramic shooting" manner and then stitched into one picture with related software (e.g., Photoshop) to obtain a two-dimensional panoramic image.
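As an illustrative sketch (the helper below is hypothetical, not part of the patent), a 360-degree two-dimensional panoramic image is commonly stored in equirectangular form, in which each pixel column corresponds to a horizontal viewing angle (yaw) and each row to a vertical angle (pitch):

```python
import math

def pixel_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to viewing angles.

    Returns (yaw, pitch) in radians: yaw sweeps the full 360-degree
    horizontal field, pitch covers -90..+90 degrees vertically.
    """
    yaw = (u / width) * 2.0 * math.pi - math.pi    # range [-pi, pi)
    pitch = math.pi / 2.0 - (v / height) * math.pi  # range [+pi/2, -pi/2]
    return yaw, pitch

# The centre pixel of the panorama looks straight ahead at the horizon.
yaw, pitch = pixel_to_direction(512, 256, 1024, 512)
```

This mapping is what lets a single flat image carry a full view of the surroundings, which is the property the model in step S120 relies on.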
And S120, inputting the two-dimensional panoramic image into a model to obtain a three-dimensional scene corresponding to the two-dimensional panoramic image.
The model referred to herein has the following functions (also referred to as functional relationships): inputting two-dimensional panoramic image information and outputting corresponding three-dimensional scene structure information.
According to the method and the device, the two-dimensional panoramic image is input into the model, and the three-dimensional scene reconstruction result corresponding to the two-dimensional panoramic image can be obtained through the output of the model. Therefore, according to the scheme provided by the application, only the two-dimensional panoramic image needs to be acquired, and then the corresponding three-dimensional scene reconstruction result can be acquired through one model.
The model for receiving the two-dimensional panoramic image and outputting the three-dimensional scene can be obtained by training through a machine learning method.
Specifically, the model is obtained by training with a supervised learning algorithm. The training data used in the training process includes two-dimensional panoramic image samples and the three-dimensional scene samples corresponding to them. A three-dimensional scene sample is a three-dimensional scene reconstructed from its two-dimensional panoramic image sample; alternatively, the two-dimensional panoramic image sample is a two-dimensional panoramic image generated in the three-dimensional scene sample.
It should be understood that the two-dimensional panoramic image samples and the three-dimensional scene samples in the training data of the model correspond to the input and output, respectively, in the training process of the model.
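A minimal sketch of how such input/output pairs might be represented, using a per-pixel depth grid as a stand-in for the three-dimensional scene sample (the class and field names are illustrative, not taken from the patent):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrainingSample:
    """One (input, label) pair for supervised training of the model.

    `panorama` stands in for a two-dimensional panoramic image sample
    (an H x W pixel grid) and `scene` for its corresponding
    three-dimensional scene sample (here, a per-pixel depth grid).
    Both representations are simplified assumptions for illustration.
    """
    panorama: List[List[float]]  # 2-D panoramic image sample (model input)
    scene: List[List[float]]     # 3-D scene sample (training label)

# Pair A: an image shot in a real scene with its reconstruction A'.
sample_a = TrainingSample(panorama=[[0.1, 0.2]], scene=[[1.5, 2.0]])
training_data = [sample_a]
```

During training, `panorama` plays the role of the model input and `scene` the role of the expected output, matching the correspondence described above.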
Specifically, there are various ways to obtain training data for training the model.
Optionally, as a way of acquiring the training data, the training data is acquired in an actual scene. Such training data is referred to herein as actual scene training data. The actual scene training data includes a two-dimensional panoramic image photographed in an actual scene and a three-dimensional scene reconstructed from the photographed two-dimensional panoramic image.
Specifically, for an actual scene, for example, an indoor scene, a two-dimensional panoramic image a is taken with a panoramic camera; and then, a corresponding three-dimensional scene reconstruction result A' is obtained based on the shot two-dimensional panoramic image A by adopting a related three-dimensional reconstruction tool. The two-dimensional panoramic image A and the three-dimensional scene reconstruction result A' are respectively a two-dimensional panoramic image sample and a three-dimensional scene sample for training the model.
The three-dimensional reconstruction tool for obtaining the three-dimensional scene reconstruction result A' from the two-dimensional panoramic image A may be any existing three-dimensional reconstruction technology, for example, a technology based on a color camera and a depth sensor, a technology based on templates arranged in the scene, a laser scanner reconstruction technology, or a technology based on a consumer-grade depth camera such as the Microsoft Kinect, which is not limited in this application.
Optionally, as another way to acquire training data, training data is acquired in a virtual scene. Such training data is referred to herein as virtual scene training data, which includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
Specifically, a three-dimensional virtual scene B' is generated by a computer virtual technique, and then a two-dimensional panoramic image B is generated in the three-dimensional virtual scene. The two-dimensional panoramic image B and the three-dimensional virtual scene B' are respectively a two-dimensional panoramic image sample and a three-dimensional scene sample for training the model.
The computer virtual technique for generating the three-dimensional virtual scene B' may be any existing technique that can generate a three-dimensional virtual scene, for example, a computer graphics technique.
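As a hedged sketch of how virtual-scene training pairs could be generated (a deliberately simplified stand-in for a full computer-graphics renderer), the following ray-casts one horizontal scanline of a depth panorama from the centre of a square virtual room; the rendered scanline together with the known room geometry forms one training pair:

```python
import math

def render_depth_panorama(half_width, num_cols):
    """Ray-cast a horizontal scanline of a depth panorama from the centre
    of a square room with axis-aligned walls at x = +/-half_width and
    y = +/-half_width.

    Each column corresponds to a yaw angle; its value is the distance
    from the camera to the wall in that direction. A real renderer
    would produce a full image plus colour; this is a minimal sketch.
    """
    depths = []
    for col in range(num_cols):
        yaw = (col / num_cols) * 2.0 * math.pi
        # Distance to the nearest axis-aligned wall along this ray.
        d = half_width / max(abs(math.cos(yaw)), abs(math.sin(yaw)))
        depths.append(d)
    return depths

# Scanline B rendered inside virtual scene B' (room half-width 2.0 m).
scanline = render_depth_panorama(half_width=2.0, num_cols=8)
```

Because the scene geometry is known exactly, the label side of each virtual training pair is noise-free, which is one practical appeal of virtual-scene training data.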
Optionally, as another way to acquire training data, the training data is acquired in an actual scene and a virtual scene respectively. In other words, training data for training the model is acquired based on the actual scene and the virtual scene, respectively. The training data includes the actual scene training data and the virtual scene training data.
Aiming at an actual scene, such as an indoor scene, a two-dimensional panoramic image A is shot by a panoramic camera; and then, a corresponding three-dimensional scene reconstruction result A' is obtained based on the shot two-dimensional panoramic image A by adopting a related three-dimensional reconstruction tool. The two-dimensional panoramic image A and the three-dimensional scene reconstruction result A' are respectively a two-dimensional panoramic image sample and a three-dimensional scene sample for training the model. Further, a three-dimensional virtual scene B' is generated by a computer virtual technique, and then a two-dimensional panoramic image B is generated in the three-dimensional virtual scene. The two-dimensional panoramic image B and the three-dimensional virtual scene B' are respectively a two-dimensional panoramic image sample and a three-dimensional scene sample for training the model.
That is, the two-dimensional panoramic image sample used for training the model includes an image a captured in an actual scene and an image B generated in a virtual scene; the three-dimensional scene sample used for training the model comprises a three-dimensional reconstruction result A 'obtained based on actual scene reconstruction and a three-dimensional virtual scene B' generated by utilizing a computer virtual technology. The two-dimensional panoramic image sample A corresponds to the three-dimensional scene sample A ', and the two-dimensional panoramic image sample B corresponds to the three-dimensional scene sample B'.
It should be appreciated that the richer the training data, the better the resulting model.
In the embodiment of the application, the model has the functions of receiving the two-dimensional panoramic image and outputting the three-dimensional scene through the two-dimensional panoramic image sample and the three-dimensional scene sample training model, so that the three-dimensional scene reconstruction result corresponding to the two-dimensional panoramic image can be obtained through the model in a relatively quick and efficient mode.
Alternatively, the supervised learning algorithm used to train the model may be any of the following techniques: decision trees, random forests, or support vector machines.
When the model is trained in a decision tree fashion, the model may be referred to as a decision tree. When the model is trained in a random forest manner, the model may be referred to as a random forest. When the model is trained using a support vector machine, the model may be referred to as a support vector machine.
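As a minimal, self-contained illustration of the decision-tree option (a depth-1 tree, i.e. a stump, over a single toy feature; real training would use far richer image features and much deeper models), the following fits a regressor mapping an image statistic to a depth value:

```python
def train_stump(xs, ys):
    """Fit a depth-1 regression tree (decision stump): choose the
    threshold on the single input feature that minimises squared error,
    predicting the mean label on each side of the split. This is a toy
    stand-in for the decision-tree training mentioned above.
    """
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # degenerate split, skip
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

# Toy pairing: image brightness statistic (input) -> scene depth (label).
xs = [0.1, 0.2, 0.8, 0.9]
ys = [1.0, 1.0, 3.0, 3.0]
predict = train_stump(xs, ys)
```

On this toy data the learned split cleanly separates the two depth groups; a random forest would average many such trees fitted on resampled data.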
It should be noted that the model for outputting three-dimensional scene information according to the input two-dimensional panoramic image information proposed herein may be trained in advance, and in practical applications, may be directly used.
Optionally, the scheme provided by the application can be applied to indoor three-dimensional scene reconstruction.
For example, two-dimensional panoramic image samples used in training the model are acquired in an indoor real scene. As another example, three-dimensional scene samples used in training the model are generated based on an indoor virtual scene. It should be understood that the model obtained based on the indoor scene training is suitable for processing the three-dimensional scene reconstruction in the indoor scene, that is, the two-dimensional panoramic image in step S110 is a two-dimensional panoramic image captured by a panoramic camera in the indoor scene.
Optionally, the scheme provided by the application can also be applied to three-dimensional scene reconstruction in other occasions, for example, outdoor three-dimensional scene reconstruction.
For example, two-dimensional panoramic image samples used in training the model are acquired in an outdoor real scene. As another example, three-dimensional scene samples used in training the model are generated based on an outdoor virtual scene. It should be understood that the model trained based on the outdoor scene is suitable for processing the three-dimensional scene reconstruction in the outdoor scene, that is, the two-dimensional panoramic image in step S110 is a two-dimensional panoramic image shot by a panoramic camera in the outdoor scene.
According to the scheme, only the two-dimensional panoramic image needs to be acquired, then the corresponding three-dimensional scene reconstruction result can be acquired through one model, compared with the traditional three-dimensional reconstruction technology, the method simplifies the implementation process, reduces the cost, and improves the convenience and the efficiency of three-dimensional reconstruction.
As shown in fig. 2, an embodiment of the present application further provides a method 200 for reconstructing a three-dimensional scene, where the method 200 includes the following steps.
S210, training data are obtained, wherein the training data comprise two-dimensional panoramic image samples and three-dimensional scene samples corresponding to the two-dimensional panoramic image samples.
Specifically, the training data may be acquired in any one of the three ways of acquiring training data as described above.
S220, training a model through the training data by adopting a machine learning algorithm, so that the model has the functions of receiving the two-dimensional panoramic image and outputting a three-dimensional scene.
Alternatively, the model may be trained based on the training data acquired at S210 using any one of the following supervised learning algorithms: decision tree, random forest and support vector machine.
Alternatively, other machine learning algorithms may be employed to train the model.
In the embodiment of the application, the model is trained by the two-dimensional panoramic image sample and the three-dimensional scene sample, so that the model has the functions of receiving the two-dimensional panoramic image and outputting the three-dimensional scene, and a three-dimensional scene reconstruction result corresponding to the two-dimensional panoramic image can be obtained in a relatively quick and efficient mode through the model.
Method embodiments of the present application are described above and apparatus embodiments of the present application are described below. It should be understood that the apparatus embodiments correspond to the method embodiments, and the related schemes and technical effects thereof are equally applicable to the apparatus embodiments.
Fig. 3 is a schematic block diagram of an apparatus 300 for acquiring a three-dimensional scene according to an embodiment of the present application. The apparatus 300 comprises an image acquisition unit 310 and a processing unit 320.
The image capturing unit 310 is configured to obtain a two-dimensional panoramic image.
Alternatively, the image capturing unit 310 is an image capturing device with a "panoramic shooting" function, such as a panoramic camera/camcorder. As another example, the image capturing unit 310 is a 270 degree panoramic camera.
Optionally, the image capturing unit 310 includes an ordinary camera that shoots a plurality of photos from a fixed position while rotating through a certain angle, and an image stitching module that stitches the plurality of photos together.
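An idealised sketch of such a stitching module, assuming each photo covers an equal, non-overlapping horizontal slice and the photos arrive already ordered by shooting angle (real stitching, e.g. in Photoshop or a library such as OpenCV, must additionally register overlapping regions and blend the seams):

```python
def stitch_strips(strips):
    """Idealised image stitching: each `strip` is a list of pixel rows
    covering an equal, non-overlapping slice of the horizontal field of
    view, ordered left to right by shooting angle. Stitching then
    reduces to concatenating the corresponding rows. Overlap alignment
    and seam blending, essential in practice, are omitted here.
    """
    height = len(strips[0])
    assert all(len(s) == height for s in strips), "strips must share height"
    # Concatenate each row across all strips, left to right.
    return [sum((s[r] for s in strips), []) for r in range(height)]

# Two 2x2 strips stitched into one 2x4 panorama row grid.
left = [[1, 2], [3, 4]]
right = [[5, 6], [7, 8]]
panorama = stitch_strips([left, right])
```

The result is a single wide pixel grid, the form in which the processing unit 320 expects the two-dimensional panoramic image.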
The processing unit 320 is configured to input the two-dimensional panoramic image obtained by the image acquisition unit 310 into a model, and obtain a three-dimensional scene corresponding to the two-dimensional panoramic image.
The model receives the two-dimensional panoramic image and can output corresponding three-dimensional scene structure information.
The processing unit 320 may be implemented by a processor or processor-related circuitry.
According to the method and the device, the two-dimensional panoramic image is input into the model, and the three-dimensional scene reconstruction result corresponding to the two-dimensional panoramic image can be obtained through the output of the model. Therefore, according to the scheme provided by the application, only the two-dimensional panoramic image needs to be acquired, and then the corresponding three-dimensional scene reconstruction result can be acquired through one model.
It is understood that the apparatus 300 may correspond to the execution subject of the method 100 in the above embodiments.
Optionally, in this embodiment, the model is obtained by training through training data, where the training data includes a two-dimensional panoramic image sample and a three-dimensional scene sample corresponding to the two-dimensional panoramic image sample.
Optionally, in this embodiment, the training data includes actual scene training data, and the actual scene training data includes a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
Optionally, in this embodiment, the training data includes virtual scene training data, and the virtual scene training data includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
Optionally, in this embodiment, the training data includes real scene training data and virtual scene training data. The actual scene training data comprises a two-dimensional panoramic image shot in an actual scene and a three-dimensional scene obtained by rebuilding according to the shot two-dimensional panoramic image; the virtual scene training data includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
Optionally, in this embodiment, the model is any one of the following models: decision tree, random forest and support vector machine.
Optionally, the apparatus 300 provided in this embodiment of the present application may be applied to indoor three-dimensional scene reconstruction.
Optionally, the apparatus 300 provided in this embodiment of the present application may be applied to three-dimensional scene reconstruction in other situations, for example, outdoor three-dimensional scene reconstruction.
As shown in fig. 4, an apparatus 400 for reconstructing a three-dimensional scene is further provided in an embodiment of the present application. The apparatus 400 comprises an acquisition unit 410 and a training unit 420.
The obtaining unit 410 is configured to obtain training data, where the training data includes two-dimensional panoramic image samples and three-dimensional scene samples corresponding to the two-dimensional panoramic image samples.
The training unit 420 is configured to train a model through the training data by using a machine learning algorithm, so that the model has a function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
In the embodiment of the application, the model is trained by the two-dimensional panoramic image sample and the three-dimensional scene sample, so that the model has the functions of receiving the two-dimensional panoramic image and outputting the three-dimensional scene, and the three-dimensional scene reconstruction result corresponding to the two-dimensional panoramic image can be obtained in a relatively quick and efficient manner through the model.
The acquisition unit 410 and the training unit 420 may each be implemented by a processor or processor-related circuitry.
Optionally, in this embodiment, the obtaining unit 410 is configured to obtain actual scene training data, where the actual scene training data includes a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
Optionally, in this embodiment, the obtaining unit 410 is configured to obtain virtual scene training data, where the virtual scene training data includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
Optionally, in this embodiment, the obtaining unit 410 is configured to obtain actual scene training data and virtual scene training data, where the actual scene training data includes a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image, and the virtual scene training data includes a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
Optionally, in this embodiment, the machine learning algorithm is any one of the following algorithms: decision tree, random forest and support vector machine.
As shown in fig. 5, an embodiment of the present application further provides a system 500 for reconstructing a three-dimensional scene. The system 500 includes a panoramic imaging apparatus 510 and a three-dimensional reconstruction device 520, and the three-dimensional reconstruction device 520 includes a model 521 that has the function of receiving a two-dimensional panoramic image and outputting a three-dimensional scene.
The panoramic imaging apparatus 510 is used to acquire a two-dimensional panoramic image.
The three-dimensional reconstruction apparatus 520 is configured to acquire a two-dimensional panoramic image from the panoramic imaging apparatus 510, input the two-dimensional panoramic image into a model 521, and acquire a three-dimensional scene corresponding to the two-dimensional panoramic image through output of the model 521.
Optionally, as shown in fig. 5, the system 500 further includes a model training device 530 for training the model 521 by a machine learning method, wherein the training data used in the training process includes two-dimensional panoramic image samples and three-dimensional scene samples.
Optionally, the model training device 530 is used to acquire training data in any of the three manners of acquiring training data described above.
Optionally, the system 500 may further include a display device (not shown in fig. 5) for presenting the three-dimensional scene acquired by the three-dimensional reconstruction device 520, or for simultaneously displaying the two-dimensional panoramic image acquired by the panoramic imaging apparatus 510 and the three-dimensional scene acquired by the three-dimensional reconstruction device 520.
An embodiment of the present application further provides a device for reconstructing a three-dimensional scene, where the device includes: a memory for storing instructions and a processor for executing the instructions stored by the memory, and execution of the instructions stored in the memory causes the processor to perform the method 100 or the method 200 provided by the above method embodiments.
Embodiments of the present application also provide a computer storage medium having a computer program stored thereon, where the computer program causes a computer to execute the method 100 or the method 200 provided by the above method embodiments when the computer program is executed by the computer.
Embodiments of the present application also provide a computer program product comprising instructions that, when executed by a computer, cause the computer to perform the method 100 or the method 200 provided by the above method embodiments.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disk (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The above description covers only specific embodiments of the present application, but the scope of protection of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope disclosed in the present application shall be covered by the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (27)

  1. A method of acquiring a three-dimensional scene, comprising:
    acquiring a two-dimensional panoramic image;
    and inputting the two-dimensional panoramic image into a model to obtain a three-dimensional scene corresponding to the two-dimensional panoramic image.
  2. The method of claim 1, wherein the model is obtained by machine learning training, and wherein the training data used in the machine learning process includes two-dimensional panoramic image samples and corresponding three-dimensional scene samples.
  3. The method of claim 2, wherein the training data comprises actual scene training data comprising a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
  4. The method of claim 2 or 3, wherein the training data comprises virtual scene training data comprising a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
  5. The method of any one of claims 1 to 4, wherein the model is a supervised learning training model.
  6. The method according to any one of claims 1 to 5, wherein the model is any one of the following models: decision tree, random forest and support vector machine.
  7. The method of any of claims 1 to 6, wherein said obtaining a two-dimensional panoramic image comprises:
    and acquiring the two-dimensional panoramic image by using a panoramic camera.
  8. The method of any of claims 1 to 7, wherein the two-dimensional panoramic image is a two-dimensional panoramic image indoors.
  9. A method for reconstructing a three-dimensional scene, comprising:
    acquiring training data, wherein the training data comprises a two-dimensional panoramic image sample and a three-dimensional scene sample corresponding to the two-dimensional panoramic image sample;
    and training a model with the training data by using a machine learning algorithm, so that the model is capable of receiving a two-dimensional panoramic image and outputting a corresponding three-dimensional scene.
  10. The method of claim 9, wherein the obtaining training data comprises:
    acquiring actual scene training data, wherein the actual scene training data comprises a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
  11. The method of claim 9 or 10, wherein the obtaining training data comprises:
    virtual scene training data is obtained, wherein the virtual scene training data comprises a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
  12. The method according to any one of claims 9 to 11, wherein the model is any one of the following models: decision tree, random forest and support vector machine.
  13. An apparatus for acquiring a three-dimensional scene, comprising:
    the image acquisition unit is used for acquiring a two-dimensional panoramic image;
    and the processing unit is used for inputting the two-dimensional panoramic image acquired by the image acquisition unit into a model to acquire a three-dimensional scene corresponding to the two-dimensional panoramic image.
  14. The apparatus of claim 13, wherein the model is obtained by machine learning training, and wherein the training data used in the machine learning process includes two-dimensional panoramic image samples and corresponding three-dimensional scene samples.
  15. The apparatus of claim 14, wherein the training data comprises actual scene training data comprising a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
  16. The apparatus of claim 14 or 15, wherein the training data comprises virtual scene training data comprising a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
  17. The apparatus of any one of claims 13 to 16, wherein the model is a supervised learning training model.
  18. The apparatus according to any one of claims 13 to 17, wherein the model is any one of the following models: decision tree, random forest and support vector machine.
  19. The apparatus according to any one of claims 13 to 18, wherein the image capturing unit is a panoramic camera.
  20. The apparatus of any of claims 13 to 19, wherein the two-dimensional panoramic image is a two-dimensional panoramic image indoors.
  21. An apparatus for three-dimensional scene reconstruction, comprising:
    an acquisition unit, configured to acquire training data, wherein the training data comprises two-dimensional panoramic image samples and three-dimensional scene samples corresponding to the two-dimensional panoramic image samples;
    and a training unit, configured to train a model with the training data by using a machine learning algorithm, so that the model is capable of receiving a two-dimensional panoramic image and outputting a corresponding three-dimensional scene.
  22. The apparatus of claim 21, wherein the obtaining unit is configured to obtain actual scene training data, and the actual scene training data comprises a two-dimensional panoramic image captured in an actual scene and a three-dimensional scene reconstructed from the captured two-dimensional panoramic image.
  23. The apparatus of claim 21, wherein the obtaining unit is configured to obtain virtual scene training data, and wherein the virtual scene training data comprises a virtual three-dimensional scene and a two-dimensional panoramic image generated in the virtual three-dimensional scene.
  24. The apparatus of any one of claims 21 to 23, wherein the model is any one of the following: decision tree, random forest and support vector machine.
  25. An apparatus for three-dimensional scene reconstruction, comprising: a memory for storing instructions and a processor for executing the instructions stored in the memory, wherein execution of the instructions causes the processor to perform the method of any one of claims 1 to 7 or the method of any one of claims 9 to 12.
  26. A computer storage medium, having stored thereon a computer program which, when executed by a computer, causes the computer to perform the method of any one of claims 1 to 7, or to perform the method of any one of claims 9 to 12.
  27. A computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 7 or the method of any one of claims 9 to 12.
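The pipeline claimed above can be sketched end to end: claims 9 to 12 describe training one of the named classical models (decision tree, random forest, support vector machine) on pairs of two-dimensional panoramic image samples and three-dimensional scene samples, and claims 1 to 2 describe feeding a new panorama into the trained model to obtain the corresponding three-dimensional scene. The patent does not specify a feature representation, model hyperparameters, or the exact form of the three-dimensional output, so the per-pixel depth regression, the random-forest settings, and all function names below are illustrative assumptions, not the claimed implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

H, W = 8, 16  # tiny equirectangular panorama (illustrative resolution)

def make_virtual_sample(rng):
    """Claims 4/11: virtual-scene training data -- a synthetic panorama
    whose per-pixel depth is a known (hypothetical) function of intensity."""
    img = rng.random((H, W))
    depth = 1.0 + 4.0 * img  # ground-truth 3-D structure for this toy scene
    return img, depth

def panorama_to_features(img):
    # one feature vector per pixel: intensity plus its latitude/longitude
    lat = np.linspace(-np.pi / 2, np.pi / 2, H)
    lon = np.linspace(-np.pi, np.pi, W)
    lon_g, lat_g = np.meshgrid(lon, lat)  # both (H, W)
    return np.stack([img.ravel(), lat_g.ravel(), lon_g.ravel()], axis=1)

def depth_to_point_cloud(depth):
    """Back-project per-pixel depth on the equirectangular grid to 3-D points."""
    lat = np.linspace(-np.pi / 2, np.pi / 2, H)
    lon = np.linspace(-np.pi, np.pi, W)
    lon_g, lat_g = np.meshgrid(lon, lat)
    d = depth.reshape(H, W)
    x = d * np.cos(lat_g) * np.cos(lon_g)
    y = d * np.cos(lat_g) * np.sin(lon_g)
    z = d * np.sin(lat_g)
    return np.stack([x, y, z], axis=-1)  # (H, W, 3): one point per pixel

rng = np.random.default_rng(0)

# claims 9-10: assemble training data as (panorama sample, 3-D sample) pairs
X, y = [], []
for _ in range(20):
    img, depth = make_virtual_sample(rng)
    X.append(panorama_to_features(img))
    y.append(depth.ravel())
X, y = np.vstack(X), np.concatenate(y)

# claim 12: the model is one of the named classical learners
model = RandomForestRegressor(n_estimators=30, random_state=0).fit(X, y)

# claims 1-2: input a new two-dimensional panorama, obtain the 3-D scene
test_img, _ = make_virtual_sample(rng)
pred_depth = model.predict(panorama_to_features(test_img))
cloud = depth_to_point_cloud(pred_depth)
print(cloud.shape)  # (8, 16, 3)
```

A production system would replace the synthetic generator with captured panoramas and reconstructed scenes (claims 3/10) and a richer feature extractor; the back-projection step only illustrates one way a per-pixel depth map can stand in for the "three-dimensional scene" output.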
CN201880038658.8A 2018-07-27 2018-07-27 Method and device for acquiring three-dimensional scene Pending CN110914871A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097458 WO2020019304A1 (en) 2018-07-27 2018-07-27 Method and device for acquiring three-dimensional scene

Publications (1)

Publication Number Publication Date
CN110914871A true CN110914871A (en) 2020-03-24

Family

ID=69180362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880038658.8A Pending CN110914871A (en) 2018-07-27 2018-07-27 Method and device for acquiring three-dimensional scene

Country Status (2)

Country Link
CN (1) CN110914871A (en)
WO (1) WO2020019304A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955942A (en) * 2014-05-22 2014-07-30 哈尔滨工业大学 SVM-based depth map extraction method of 2D image
CN106980728A (en) * 2017-03-30 2017-07-25 理光图像技术(上海)有限公司 House Upholstering design experience apparatus and system
CN106991716A (en) * 2016-08-08 2017-07-28 深圳市圆周率软件科技有限责任公司 A kind of panorama three-dimensional modeling apparatus, method and system
CN107369204A (en) * 2017-07-27 2017-11-21 北京航空航天大学 A kind of method for recovering the basic three-dimensional structure of scene from single width photo based on deep learning
CN107393000A (en) * 2017-08-24 2017-11-24 广东欧珀移动通信有限公司 Image processing method, device, server and computer-readable recording medium
US20180189974A1 (en) * 2017-05-19 2018-07-05 Taylor Clark Machine learning based model localization system
WO2018120888A1 (en) * 2016-12-29 2018-07-05 北京奇艺世纪科技有限公司 Panoramic image compression method and apparatus

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7463280B2 (en) * 2003-06-03 2008-12-09 Steuart Iii Leonard P Digital 3D/360 degree camera system
CN106294918A (en) * 2015-06-10 2017-01-04 中国科学院宁波材料技术与工程研究所 A kind of method for designing of virtual transparence office system
CN106296783B (en) * 2016-07-28 2019-01-11 众趣(北京)科技有限公司 A kind of space representation method of combination space overall situation 3D view and panoramic pictures
CN108305327A (en) * 2017-11-22 2018-07-20 北京居然设计家家居连锁集团有限公司 A kind of image rendering method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205591A (en) * 2021-04-30 2021-08-03 北京奇艺世纪科技有限公司 Method and device for acquiring three-dimensional reconstruction training data and electronic equipment
CN113205591B (en) * 2021-04-30 2024-03-08 北京奇艺世纪科技有限公司 Method and device for acquiring three-dimensional reconstruction training data and electronic equipment

Also Published As

Publication number Publication date
WO2020019304A1 (en) 2020-01-30

Similar Documents

Publication Publication Date Title
US11488355B2 (en) Virtual world generation engine
Mori et al. A survey of diminished reality: Techniques for visually concealing, eliminating, and seeing through real objects
US20230334798A1 (en) Systems and methods for presenting digital assets within artificial environments via a loosely coupled relocalization service and asset management service
US11051000B2 (en) Method for calibrating cameras with non-overlapping views
US10891781B2 (en) Methods and systems for rendering frames based on virtual entity description frames
WO2016155377A1 (en) Picture display method and device
CN110874818B (en) Image processing and virtual space construction method, device, system and storage medium
CN108416832B (en) Media information display method, device and storage medium
US11044398B2 (en) Panoramic light field capture, processing, and display
CN109743584B (en) Panoramic video synthesis method, server, terminal device and storage medium
KR20130089649A (en) Method and arrangement for censoring content in three-dimensional images
JP2018026064A (en) Image processor, image processing method, system
Queguiner et al. Towards mobile diminished reality
US20130050190A1 (en) Dressing simulation system and method
CN112511815B (en) Image or video generation method and device
JP6730695B2 (en) A method for reconstructing 3D multi-view by feature tracking and model registration.
CN110914871A (en) Method and device for acquiring three-dimensional scene
JP2022171739A (en) Generation device, generation method and program
JP6341540B2 (en) Information terminal device, method and program
CN109978761B (en) Method and device for generating panoramic picture and electronic equipment
WO2020212761A1 (en) Method for assisting the acquisition of media content at a scene
WO2021060016A1 (en) Image processing device, image processing method, program, and image processing system
KR101773929B1 (en) System for processing video with wide viewing angle, methods for transmitting and displaying vide with wide viewing angle and computer programs for the same
CN109348132B (en) Panoramic shooting method and device
CN113298868B (en) Model building method, device, electronic equipment, medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200324