WO2020006941A1 - Method for reconstructing a three-dimensional space scene based on photographs - Google Patents
Method for reconstructing a three-dimensional space scene based on photographs
- Publication number
- WO2020006941A1 (PCT/CN2018/112554)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- room, plane, three-dimensional space, wall, reconstructing
Classifications
- G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T15/00—3D [Three Dimensional] image rendering; G06T15/04—Texture mapping
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T3/00—Geometric image transformations in the plane of the image; G06T3/60—Rotation of whole images or parts thereof; G06T3/608—Rotation by skew deformation, e.g. two-pass or three-pass rotation
- G06T7/00—Image analysis; G06T7/50—Depth or shape recovery; G06T7/55—from multiple images; G06T7/579—from motion
- G06T7/60—Analysis of geometric attributes; G06T7/62—of area, perimeter, diameter or volume
- G06T7/70—Determining position or orientation of objects or cameras
- H—ELECTRICITY; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/20081—Training; Learning; G06T2207/20084—Artificial neural networks [ANN]; G06T2207/20112—Image segmentation details; G06T2207/20164—Salient point detection; Corner detection; G06T2207/30204—Marker
- G06T2210/00—Indexing scheme for image generation or computer graphics; G06T2210/04—Architectural design, interior design; G06T2210/21—Collision detection, intersection; G06T2210/56—Particle system, point based geometry or rendering
- G06T2215/16—Using real world measurements to influence rendering
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics; G06T2219/004—Annotating, labelling; G06T2219/008—Cut plane or projection plane definition; G06T2219/012—Dimensioning, tolerancing
Definitions
- the invention relates to a three-dimensional modeling method, in particular to a method for reconstructing a three-dimensional space scene based on photographs.
- Three-dimensional space modeling is a technology that has been rapidly developed and applied in recent years. It has been widely used in virtual reality, house decoration, interior design and other fields.
- the existing three-dimensional space scene modeling generally adopts the following solutions:
- A binocular stereo vision system is based on the parallax principle: imaging equipment captures two images of the measured object from different positions, and the positional deviation between corresponding image points is calculated to obtain the object's three-dimensional geometry. By fusing the two images and observing the differences between them, a clear sense of depth is obtained, correspondences between features are established, and the same physical point in space is mapped across the different images.
- Although this method uses simple shooting equipment, reconstructing a three-dimensional space requires a very large number of pictures, which is time-consuming both to capture and to compute into a model later. Once the model has a problem, the repair procedure is so complicated that non-professionals cannot operate it. Hence, although the binocular stereo vision system has existed for many years, it has not been promoted on a large scale.
- Laser point cloud technology uses the principle of TOF or structured light ranging to obtain the spatial coordinates of each sampling point on the surface of the object, and obtains a series of massive points that express the spatial distribution of the target and the characteristics of the target surface.
- This point set is called a "point cloud".
- the properties of the point cloud include: spatial resolution, point accuracy, surface normal vector, and so on.
- Laser point cloud technology requires users to purchase additional point cloud equipment that is bulky, expensive, and complicated to operate; it also generates massive data that is not conducive to storage and processing. When multiple sets of data need to be stitched, the huge data volume makes the process slow and the results unsatisfactory. Therefore, although point cloud technology has existed for many years, its promotion has been difficult.
- The technical problem to be solved by the present invention is to provide a method for reconstructing a three-dimensional space scene based on photographs, which restores a three-dimensional model of the scene that contains both texture information and size information without losing details, allows the three-dimensional scene to be edited and modified quickly and conveniently, and can generate an undistorted two-dimensional plan with size information.
- The technical solution adopted by the present invention to solve the above technical problem is a method for reconstructing a three-dimensional space scene based on photographs, including the following steps: S1: import photos of all spaces; for each space, import a set of photos, taken at the same shooting point, that contains the main features of the space, and map the photos to three-dimensional space according to the shooting direction and perspective, so that when viewed from the camera position in the three-dimensional space, the viewing direction of each pixel is the same as when shooting; S2: treat the room as a collection of multiple planes, first determine the first plane, and then determine all planes one by one through the relationships and intersections between planes; S3: mark the spatial structure of the room through a marking system and obtain its size information; S4: establish a three-dimensional space model of the room from the point coordinate information collected in S3.
- Step S1 further includes synthesizing the photos of each space into a 360-degree panorama, and then mapping the panorama to the three-dimensional space, so that when viewed from the camera position in the three-dimensional space, the viewing direction of each pixel is the same as when shooting.
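The pixel-direction correspondence described above can be sketched in code. The patent does not fix a panorama projection, so the equirectangular layout and the function name below are assumptions:

```python
import math

def pixel_to_direction(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit viewing
    direction from the camera, matching the direction at shooting time.
    Assumes u in [0, width) spans 360 degrees of yaw and v in [0, height)
    spans 180 degrees of pitch (top row of the image = straight up)."""
    yaw = (u / width) * 2.0 * math.pi                 # 0..2*pi around the vertical axis
    pitch = math.pi / 2.0 - (v / height) * math.pi    # +pi/2 (up) .. -pi/2 (down)
    x = math.cos(pitch) * math.cos(yaw)
    y = math.cos(pitch) * math.sin(yaw)
    z = math.sin(pitch)
    return (x, y, z)

# The image centre row maps to the horizon (z close to 0):
d = pixel_to_direction(0, 1024, 4096, 2048)
```

With this mapping, placing the panorama on a sphere around the camera reproduces the shooting-time viewing direction of every pixel, which is the property the marking steps below rely on.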
- The method for determining the first plane includes: determining the plane by finding three perpendicularly intersecting wall lines on it, or by finding its four wall corners.
- The method for determining the first plane further includes: determining the plane's position by recording the projection point of the camera lens on the plane, or by recording relevant points from which the projection point of the camera lens on the plane can be calculated.
- In the method for reconstructing a three-dimensional space scene based on photographs, before step S2 the method further includes: marking a vertical correction line on the photo to correct the image tilt caused by tilt or distortion of the shooting device when the photo was taken. Find a line perpendicular to the ground in the photo as the vertical correction line, or find a horizontal line in the photo and draw a line perpendicular to it as the vertical correction line; using the vertical correction line as a reference, rotate the photo until the correction line is perpendicular to the actual horizontal plane; obtain multiple vertical correction lines in different directions to complete the vertical correction.
- Step S3 includes the following steps: S31: place the photograph in a high-precision sphere, set the camera lens position at the center of the sphere, and restore the photo's shooting perspective at the center of the sphere; S32: preset four frame points in the sphere, each consisting of an upper and a lower point, and drag the frame points so that each point corresponds to a corner position in the actual room space, forming the main frame structure of the room; or mark the positions of the lower wall corners in turn to form the ground outline, then combine the vertical wall lines to find and mark the corresponding upper wall corners, obtaining the basic outline of the entire room; S33: calculate the height of the room from the known shooting height of the camera and estimate the outline size of the room; or locate the outline size of one plane by placing a ruler of known length, and from it estimate the outline size of the room.
- In step S32, when a wall corner that needs to be marked is occluded: if the wall lines are visible, the corner position is determined by the intersection of the two perpendicularly intersecting wall lines; if both the lower corner and its wall lines are occluded while the upper corner or its wall lines are visible, the lower corner is derived from the upper one.
- Step S32 further includes adding punctuation marks to objects in the room; two punctuation marks combine to form a marking line, extending the walls of the room and adding the basic object structure of the room.
- The basic objects include doors, open spaces, ordinary windows, bay windows, and stairs. For non-rectangular rooms, punctuation marks for protruding wall structures are added to expand the room structure, and the expanded structure is determined by adjusting the position and depth of the recessed wall; for free-form walls, free punctuation marks are added to expand the wall structure arbitrarily; for duplex structures, punctuation marks are added to the stairs and stairwell, and the stairwells of the two floors are connected to join the upper and lower floors in series and expand the stair structure. Step S33 further includes: given the known height of the camera lens or a ruler of known length, mark objects in the room to obtain the ratio between real-world size and model size, then scale the entire model proportionally to calculate the true size of objects in the room.
- The above method for reconstructing a three-dimensional space scene based on photographs further includes step S5: comparing the image information of doors or open spaces in the photos of different rooms and connecting the rooms to obtain the spatial position and orientation of each room. For each door or open space, compare its pictures across the photos, find matching pictures, identify the same door or open space, and connect the rooms that share it. After all rooms are connected in series, calculate their positions and orientations: select one room as the connected room, traverse its doors to find unconnected rooms, and, from the position and orientation of the currently connected room and of the door or open space joining the two rooms, calculate the position and orientation of the unconnected room, then mark it as connected. Continue searching for unconnected rooms until none remain and the connection of all rooms is complete.
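The traversal just described can be sketched as a breadth-first walk over door matches. The room names and the dictionary shapes below are hypothetical, and the pose computation itself is elided (only the traversal that marks rooms as connected is shown):

```python
from collections import deque

def connect_rooms(rooms, door_matches):
    """Propagate connection status from one seed room to all others
    through shared doors. `rooms` maps room name -> local model;
    `door_matches` lists (room_a, room_b) pairs whose door photos were
    matched. Returns room names in the order they were connected."""
    neighbors = {}
    for a, b in door_matches:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)
    seed = next(iter(rooms))              # any room can serve as the start
    connected, queue = [seed], deque([seed])
    while queue:
        cur = queue.popleft()
        for nxt in neighbors.get(cur, []):
            if nxt not in connected:
                # here the pose of `nxt` would be computed from the pose of
                # `cur` plus the shared door's position and orientation
                connected.append(nxt)
                queue.append(nxt)
    return connected

order = connect_rooms({"living": {}, "kitchen": {}, "bedroom": {}},
                      [("living", "kitchen"), ("kitchen", "bedroom")])
```

Because every room is reached exactly once through a matched door, the loop terminates when no unconnected room remains, matching the stopping condition in the claim.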
- The method for reconstructing a three-dimensional space scene based on photographs further includes step S6: segmenting the photo using the point coordinate information collected in S3 to obtain the room texture. First set each face as a polygon whose vertex information is a set of three-dimensional coordinates. The size of the texture map is calculated from the bounding rectangle of the vertex coordinates. Iterate over the pixels of the map, obtaining for each pixel the corresponding spatial coordinate point on the polygon; iterating over all pixels completes the texture mapping of a single face. Complete the texture mapping of all faces of the room in turn.
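The per-face texture step can be sketched as follows, assuming an equirectangular panorama and rectangular faces; the function names, the texel-per-metre resolution, and the corner parameterization are all assumptions made for illustration:

```python
import math

def direction_to_pixel(d, width, height):
    """Inverse panorama mapping: a direction from the camera -> (u, v)."""
    n = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
    yaw = math.atan2(d[1], d[0]) % (2 * math.pi)
    pitch = math.asin(max(-1.0, min(1.0, d[2] / n)))
    u = int(yaw / (2 * math.pi) * width) % width
    v = min(int((math.pi / 2 - pitch) / math.pi * height), height - 1)
    return u, v

def bake_face_texture(corners, camera, pano, pano_w, pano_h, texel=0.05):
    """Bake a texture for one rectangular face. `corners` gives three 3-D
    points (origin, end of +u edge, end of +v edge); `pano` is a
    row-major list of panorama pixels. The texture size comes from the
    face's edge lengths (one texel per `texel` model units)."""
    o, pu, pv = corners
    eu = [pu[i] - o[i] for i in range(3)]
    ev = [pv[i] - o[i] for i in range(3)]
    lu = math.sqrt(sum(c * c for c in eu))
    lv = math.sqrt(sum(c * c for c in ev))
    w, h = max(1, int(lu / texel)), max(1, int(lv / texel))
    tex = []
    for j in range(h):
        for i in range(w):
            # 3-D point on the face corresponding to this texel
            p = [o[k] + eu[k] * (i + 0.5) / w + ev[k] * (j + 0.5) / h
                 for k in range(3)]
            d = [p[k] - camera[k] for k in range(3)]
            u, v = direction_to_pixel(d, pano_w, pano_h)
            tex.append(pano[v * pano_w + u])
    return tex, w, h

# A 1x1 floor patch seen from a camera 1.5 units above it,
# sampled from a tiny 8x4 panorama whose pixels are their own indices:
tex, w, h = bake_face_texture([(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                              (0, 0, 1.5), list(range(32)), 8, 4, texel=0.5)
```

Each texel is mapped back through the camera to the panorama, so the baked face inherits its colors directly from the photo, which is the essence of the claim.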
- The method for reconstructing a three-dimensional space scene based on photographs further includes step S7: obtaining the two-dimensional contour of a single room by ignoring the height information of the room model obtained through the marking system in step S3; then, according to the position and orientation information of each room obtained in step S5, set the position and orientation of each room contour to complete the generation of the two-dimensional plan.
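Step S7 reduces to a 2-D rigid transform of each room outline; a minimal sketch, with the outline and pose values invented for illustration:

```python
import math

def place_outline(outline, position, orientation):
    """Transform a room's local 2-D outline (list of (x, y) corners,
    height already discarded) into plan coordinates using the position
    and orientation found while connecting rooms. `orientation` is a
    rotation in radians; `position` is the room origin on the plan."""
    c, s = math.cos(orientation), math.sin(orientation)
    px, py = position
    return [(px + c * x - s * y, py + s * x + c * y) for x, y in outline]

# A 4x3 room rotated a quarter turn and placed at (10, 5) on the plan:
square = [(0, 0), (4, 0), (4, 3), (0, 3)]
plan = place_outline(square, position=(10, 5), orientation=math.pi / 2)
```

Because the outline keeps the marked corner coordinates (only height is dropped), the resulting plan carries true size information without distortion.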
- the above method for reconstructing a three-dimensional space scene based on photographing wherein the photographing equipment includes a mobile phone with a fisheye lens, a panoramic camera, a camera with a fisheye lens, an ordinary mobile phone, and an ordinary digital camera.
- Compared with the prior art, the present invention has the following beneficial effects: the method restores a three-dimensional space model of the scene that contains both texture information and size information without losing details; it produces no holes in the model from incomplete scanning, and furniture, interior decoration, and the like do not seriously interfere with the model; three-dimensional scenes can be edited and modified quickly and easily, and undistorted two-dimensional floor plans with size information can be generated at the same time; it supports a wide range of shooting devices, including but not limited to mobile phones with fisheye lenses, panoramic cameras, cameras with fisheye lenses, ordinary mobile phones, and ordinary digital cameras, at low cost.
- FIG. 1 is a flowchart of a method for reconstructing a three-dimensional space scene based on a photograph according to an embodiment of the present invention
- FIG. 2 is a schematic diagram of determining a plane in an embodiment of the present invention.
- FIG. 3 is a schematic diagram of determining a first plane by searching three vertical intersecting wall lines on a plane in an embodiment of the present invention
- FIG. 4 is a schematic diagram of conversion of polar coordinates to rectangular coordinates in an embodiment of the present invention.
- FIG. 5 is a schematic diagram of determining a first plane by searching four corners of a plane in an embodiment of the present invention
- FIG. 6 is a schematic diagram of determining a first plane by calculating a projection point of a camera lens on a plane by recording related points in an embodiment of the present invention
- FIG. 7 is a schematic diagram of vertical correction in an embodiment of the present invention.
- FIG. 8 is a schematic diagram of marking the room spatial structure with a protruding/recessed wall structure added in an embodiment of the present invention;
- FIG. 9 is a schematic diagram of adding free punctuation points when marking the room spatial structure in an embodiment of the present invention, so as to extend the wall arbitrarily;
- FIG. 10a is a side view of a wall corner seen by a camera according to an embodiment of the present invention.
- FIG. 10b is a top view of the corner of the wall viewed from the camera in the embodiment of the present invention.
- FIG. 11 is a schematic diagram of estimating a contour size by placing a known length ruler on the ground according to an embodiment of the present invention
- FIG. 12 is a schematic diagram of estimating a contour dimension by placing a known length ruler at a certain angle with the ground in the embodiment of the present invention.
- FIG. 1 is a flowchart of a method for reconstructing a three-dimensional space scene based on photos according to the present invention.
- A method for reconstructing a three-dimensional space scene based on photographs includes the following steps: S1: import photos of all spaces; for each space, import a set of photos, taken at the same shooting point, containing the main features of the space, and map the photos to three-dimensional space according to the shooting direction and perspective, so that when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is the same as when shooting; S2: treat the room as a collection of multiple planes: first determine the first plane, then determine all planes one by one through the relationships and intersections between planes; S3: mark the spatial structure of the room through the marking system and obtain size information; S4: create the three-dimensional space model of the room from the point coordinate information collected in S3.
- Step S1 further comprises synthesizing the photos of each space into a 360-degree panorama, and then mapping the panorama to the three-dimensional space, so that when viewed from the camera position in the three-dimensional space, the viewing direction of each pixel is the same as when shooting.
- determining a first plane method includes: determining a plane by finding three vertical intersecting wall lines on the plane or finding four wall corners of the plane to determine a plane.
- the method for determining the first plane further includes: determining a position of the plane by recording a projection point of the camera lens on the plane or recording a relevant point to calculate a projection point of the camera lens on the plane to determine the position of the plane.
- Because the photo is mapped to three-dimensional space, each pixel keeps the same viewing direction as at shooting time; however, no distance information from a pixel to the shooting point (the position of the lens when shooting) is recorded or provided.
- the basic principle of modeling is to treat the indoor model as a collection of multiple planes (including the ground, wall and ceiling).
- A lower wall corner is the intersection of three planes (the ground and two walls), an upper wall corner is the intersection of three planes (the ceiling and two walls), and a wall line is the intersection of two planes (walls). If the position of one plane can be located first, the other planes can be determined in turn through the corners and lines on that plane, until all planes are restored and the modeling is complete.
- the wall S2 can be determined by knowing the vertical relationship between the wall S2 and the ground S1 and the intersection line (wall line) L1 between S1 and S2. In the same way, the wall surface S3 and the positions of all mutually perpendicular wall surfaces in the space can be determined through the wall line L2.
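The step from S1 and the wall line L1 to the wall S2 is a short vector construction: since S2 is perpendicular to S1 and contains L1, its normal is perpendicular to both the ground normal and the line direction. A minimal sketch (plane represented as n·x = d; the example values are invented):

```python
def cross(a, b):
    """Cross product of two 3-D vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def wall_from_ground(ground_normal, line_dir, line_point):
    """Given the ground plane's normal plus the direction and one point
    of the wall line L1 (the ground-wall intersection), return the wall
    plane as (normal, d) with n.x = d. The wall is perpendicular to the
    ground, so its normal is perpendicular to both inputs."""
    n = cross(line_dir, ground_normal)
    d = sum(n[i] * line_point[i] for i in range(3))
    return n, d

# Ground z = 0, wall line along the y-axis through the origin:
n, d = wall_from_ground((0, 0, 1), (0, 1, 0), (0, 0, 0))
# the resulting wall is the plane x = 0 (up to the sign of the normal)
```

Repeating this with wall line L2 gives S3, and so on for every pair of mutually perpendicular wall surfaces, just as the text describes.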
- Method A (adjacent right-angle method): determine the plane by three perpendicularly intersecting lines on it. The premise of the adjacent right-angle method is that interior wall surfaces are mostly composed of rectangles.
- P is the position of the camera lens, and the angles formed by P1P2P3 and P2P3P4 are all right angles, then the plane where P1P2P3P4 is located can be determined.
- a wall is observed in the photo in Figure 3, and P1, P2, P3, and P4 are all on this wall.
- P2P3 is the vertical edge of the wall and the ground
- P1P2 is the intersection of the wall and the ceiling
- P3P4 is the intersection of the wall and the ground.
- P1 is a point different from P2 on the intersection of wall and ceiling
- P4 is a point different from P3 on the intersection of wall and floor.
- The coordinates of the four points relative to the observation point are expressed in polar (spherical) coordinates as (r, θ, φ). The radius r is unknown, while the other two values (the angles) can be observed.
- P1P2 is parallel to P3P4, so the dot product of the two vectors equals the product of their magnitudes.
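The two ingredients above, observed directions with unknown radii and the parallelism test, can be sketched numerically. The wall coordinates below are synthetic values chosen for illustration:

```python
import math

def sph_to_dir(theta, phi):
    """Unit direction for the two observable polar values:
    theta = azimuth, phi = elevation. The unknown radius r would scale
    this direction to give the actual point P = r * dir."""
    return (math.cos(phi) * math.cos(theta),
            math.cos(phi) * math.sin(theta),
            math.sin(phi))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

# Synthetic wall: camera at the origin, four coplanar corners with P1P2 || P3P4.
P1, P2 = (2.0, -1.0, 1.2), (2.0, 1.0, 1.2)    # wall-ceiling line
P4, P3 = (2.0, -1.0, -1.4), (2.0, 1.0, -1.4)  # wall-floor line
v12 = tuple(b - a for a, b in zip(P1, P2))
v43 = tuple(b - a for a, b in zip(P4, P3))
# For parallel vectors, dot product == product of magnitudes:
parallel = abs(dot(v12, v43) - norm(v12) * norm(v43)) < 1e-9
```

In the actual method this parallelism condition, together with the two right angles, constrains the unknown radii (up to overall scale) and thereby fixes the plane.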
- the same method can be used to determine other planes.
- All planes are determined as follows: use the adjacent right-angle method to determine the first plane S1, taking one positive real solution of the plane equations to fix the position of S1; choose an adjacent plane S2, determine it with the adjacent right-angle method, and take a positive real solution to fix its position; repeat the previous step to determine the remaining planes one by one until all are complete. If a plane has no positive real solution, or its intersection with the adjacent plane is wrong, go back to the previous plane, take its next positive real solution, and repeat the previous steps to determine all planes. Since all planes exist in space, the positions of all planes can be determined by this method.
- each plane can find two adjacent right angles. If an individual plane does not satisfy the condition of two adjacent right angles, it can be determined by the adjacent plane at the determined position and the line of intersection with the adjacent plane. As shown in Fig. 2, assuming that S2 is perpendicular to S1, S2 can be uniquely determined according to the position of S1 and the line of intersection L1 between S1 and S2.
- method b (rectangular method): A plane is determined by assuming that a wall is rectangular and finding four corners of the plane. In the room, most of the walls are rectangular, so it is natural to use the 4 vertices of the rectangle to determine the position of the wall.
- the rectangular method is a special case of the adjacent right-angle method.
- the line segment P1P4 must be parallel to the line segment P2P3.
- P1 can be any point on the line where P1P2 is located.
- P4 can also be any point on the line where P3P4 is located.
- The solution of the rectangle method is similar to the adjacent right-angle method, so it is not repeated here.
- Method C (projection method): the plane is determined by the projection of the camera onto it. If the projection of the camera lens onto a plane is recorded at shooting time, the position of the plane can be uniquely determined (the distance from the camera to the plane being known or assumed).
- the line connecting the camera position and the projection is perpendicular to the ground S1.
- the ground position is determined.
- the methods for obtaining / recording the projection point are: put the camera on a tripod or stand for shooting, and the tripod / stand is perpendicular to the ground.
- the center point where the tripod / stand falls on the ground is the projection point.
- use the projection method to determine the first plane S1, and select an adjacent plane S2.
- S2 can be uniquely determined from the position of S1 and the intersection line L1 between S1 and S2; in this way all planes are identified. In practice, since almost all walls are perpendicular to the ground or ceiling, if the first plane is a wall, select the ground or ceiling as the first adjacent plane in the above steps, and then determine all the walls in turn.
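Once the ground plane is fixed by the projection method, every marked pixel direction can be turned into a 3-D point by ray-plane intersection. A minimal sketch, assuming the camera at the origin and the ground at z = -camera_height (the numbers are invented):

```python
def ground_point(direction, camera_height):
    """Intersect a viewing ray from the camera (at the origin) with the
    ground plane z = -camera_height. A marked pixel fixes only a
    direction; the known ground plane supplies the missing distance."""
    dx, dy, dz = direction
    if dz >= 0:
        raise ValueError("ray does not hit the ground")
    t = -camera_height / dz            # ray parameter where z = -camera_height
    return (dx * t, dy * t, -camera_height)

# A ray 45 degrees below the horizon from a 1.5 m high camera
# lands 1.5 m away horizontally:
p = ground_point((1.0, 0.0, -1.0), 1.5)
```

This is the mechanism by which "no distance information is recorded" ceases to matter: once one plane is located, distances along every ray that hits it follow immediately.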
- Method D (inclination projection method): record relevant points and calculate from them the projection point of the camera lens on the plane, thereby determining the position of the plane. This method is similar to the projection method, except that the projection point is not recorded directly but calculated from the recorded points.
- The recorded configurations are: the tripod/bracket placed on the ground (base attached to the ground) but not perpendicular to it; or the tripod/bracket against the wall (base attached to the wall) but not perpendicular to it. When the tripod/bracket is not perpendicular to the plane, its projection on the plane is a line segment.
- P is the position of the camera lens
- P1 is the center point of the base of the camera bracket
- P2 is the projection point on the plane
- the projection line of the bracket P1P3 on the plane is P1P2.
- the position of the projection point P2 can be calculated: from the projection point P2, all planes can be determined using the projection method.
- The method further includes: marking a vertical correction line on the photo to correct the image tilt caused by tilt or distortion of the shooting device when the photo was taken. Find a line perpendicular to the ground in the photo as the vertical correction line, or find a horizontal line in the photo and draw a line perpendicular to it as the vertical correction line; using the vertical correction line as a reference, rotate the photo until the correction line is perpendicular to the actual horizontal plane; obtain multiple vertical correction lines in different directions to complete the vertical correction.
- Step S3 includes the following steps: S31: place the photo in a high-precision sphere, set the camera lens position at the center of the sphere, and restore the photo's shooting perspective at the center of the sphere; S32: preset four frame points in the sphere, each including an upper and a lower point, and drag the frame points so that each point corresponds to a corner position in the actual room space, forming the main frame structure of the room; or mark the positions of the lower corners to form the ground outline, then combine the vertical wall lines to find and mark the corresponding upper corners, obtaining the basic outline of the entire room; S33: calculate the height of the room from the known shooting height of the camera and so calculate the room's outline size; or locate the outline size of one plane by placing a ruler of known length, and from it calculate the room's outline size.
- step S32 can mark the spatial structure and objects of the room in a variety of ways.
- One of the marking methods is as follows:
- Each lower frame point corresponds to an upper frame point (P1a, P2a, P3a, and P4a), and the line connecting each pair of upper and lower frame points (such as P1P1a) is perpendicular to the plane S1.
- P7 can be any point on the plane S1
- P5P7 and P2P5 can be non-vertical
- P5P7 and P7P8 can be non-vertical
- P7P8 and P8P3 can be non-vertical
- the wall line is generally used as a reference object.
- the position of the wall line can be determined by the connection between the frame point and the frame point.
- a line is formed between the two punctuation points.
- The room wall is expanded and the basic object structures of the room are added, such as doors, open spaces, ordinary windows, bay windows, and stairs; basic object structures can also be added directly to the wall.
- The punctuation structure records the room wall structure and the main room objects (such as doors, doorways, open spaces, ordinary windows, bay windows, and stairs), and stores them locally in text form.
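The patent only states that the marked structure is stored "in text form"; one plausible realization is a JSON record. The schema, field names, and values below are entirely hypothetical:

```python
import json

# Hypothetical minimal record of the marked wall structure and room objects.
room_record = {
    "walls": [
        {"corners": [[0, 0], [4, 0]], "height": 2.8},
        {"corners": [[4, 0], [4, 3]], "height": 2.8},
    ],
    "objects": [
        {"type": "door", "wall": 0, "offset": 1.0, "width": 0.9},
        {"type": "window", "wall": 1, "offset": 0.5, "width": 1.2},
    ],
}

text = json.dumps(room_record)   # stored locally as text
restored = json.loads(text)      # read back later for editing the model
```

A text serialization like this is what makes the quick editing and modification of the reconstructed scene (one of the stated benefits) practical.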
- Step S32 can also form the outline of each plane by marking the position of each corner on a given plane; the other planes are then confirmed step by step through wall lines and corners until all planes are marked and the outline of the room is formed (corner-by-corner marking method). Take marking the ground as an example.
- the specific steps are as follows:
- the first method for determining the size in step S33 is to calculate the height of the room through a known camera shooting height, and estimate the outline size of the room;
- the second method for determining the size in step S33 is to locate the outline size of a certain plane (ground / wall surface) by placing a ruler of known length, and then infer the outline dimensions of the other planes.
- the camera height (distance to the ground) can be calculated from the known ruler length.
- the subsequent steps are the same as Method 1.
- the camera height is unknown.
- The true length of the ground ruler R is known to be Lr, and its length measured in the model coordinate system is Lr'; h and Lr' are in the same coordinate system and in the same units.
- The real camera height is h' = (Lr / Lr') × h.
- the subsequent method is the same as shown in Figure 11.
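The ruler-based scaling above is a single proportion; a minimal sketch (the sample numbers are invented):

```python
def real_size(model_length, ruler_true_length, ruler_model_length):
    """Scale a length measured in the model coordinate system to real
    units, using a ruler of known true length Lr whose measured model
    length is Lr': real = (Lr / Lr') * model."""
    return ruler_true_length / ruler_model_length * model_length

# Camera height in the model is 1.0 unit; a 1 m ruler measures 0.4 units:
h_real = real_size(1.0, 1.0, 0.4)   # -> 2.5 metres
```

The same factor Lr / Lr' is then applied to the whole model, which is how the claim's proportional scaling recovers the true size of every object in the room.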
- the ruler R is placed on the wall s2.
- The length of the ruler's projection on the plane can also be calculated by a trigonometric function, and the position of the plane determined from it.
- the ruler is Pr1Pr2;
- the true length of Pr1Pr2 is known;
- the fixed angle between the ruler and the plane is known to be θ, that is, the angle at Pr1;
- the true length of the ground projection Pr1Pr2' is determined from the angle θ and the known ruler length.
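The trigonometric step above is the projection identity: a ruler of true length |Pr1Pr2| meeting the plane at angle θ has a projection Pr1Pr2' of length |Pr1Pr2|·cos θ. A sketch, assuming θ is given in degrees:

```python
import math

def projection_length(ruler_len, angle_deg):
    """Length of the ruler's projection Pr1Pr2' onto the plane.

    ruler_len -- true length of the ruler Pr1Pr2
    angle_deg -- fixed angle theta between the ruler and the plane, at Pr1
    """
    return ruler_len * math.cos(math.radians(angle_deg))

print(projection_length(2.0, 60.0))  # ~1.0, since cos 60° = 0.5
```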
- in step S32, when a wall corner to be marked is occluded: if the wall lines are visible, the position of the wall corner is determined by the intersection of the vertical and horizontal wall lines; if both the lower wall corner and its wall lines are occluded while the upper wall corner or wall lines are visible, first determine the upper plane and the upper wall corner, then scale the upper- and lower-corner positions proportionally, keeping the lower corner point on the lower plane, to determine the corner position.
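The proportional-scaling rule for an occluded lower corner can be read as: intersect the viewing ray with the upper (ceiling) plane, then drop the hit point vertically onto the lower (floor) plane. A sketch assuming a level floor and ceiling and the camera at the origin with z up; the names are illustrative:

```python
def lower_corner_from_upper(ray, ceiling_h, floor_h):
    """Infer an occluded lower wall corner from the visible upper one.

    ray       -- viewing direction (x, y, z) toward the upper corner,
                 camera at the origin, z pointing up
    ceiling_h -- height of the ceiling above the camera (> 0)
    floor_h   -- depth of the floor below the camera (> 0)
    The upper corner is the ray's intersection with the ceiling plane;
    the lower corner sits vertically beneath it on the floor plane.
    """
    x, y, z = ray
    t = ceiling_h / z                      # scale the ray to the ceiling
    upper = (x * t, y * t, ceiling_h)
    lower = (upper[0], upper[1], -floor_h) # same (x, y), on the floor
    return upper, lower

up, low = lower_corner_from_upper((1.0, 2.0, 1.0), ceiling_h=1.5, floor_h=1.2)
print(up, low)  # (1.5, 3.0, 1.5) (1.5, 3.0, -1.2)
```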
- the method for reconstructing a three-dimensional space scene based on photographs provided by the present invention further includes step S5: comparing the door or open-space image information in photos of different rooms and connecting the rooms to obtain the spatial position and orientation of each room. For each marked door or open space, compare the pictures seen through it in the photos, find the matching picture in the other room's photos, identify the same door or open space, and connect the rooms that share it. After all rooms have been linked together, calculate the positions and orientations of the connected rooms: select one room as a connected room, traverse its doors to find an unconnected room, calculate the unconnected room's position and orientation from the position and orientation of the currently connected room combined with those of the door or open space joining the two rooms, and then mark the unconnected room as connected; continue searching for unconnected rooms until none remain, completing the connection of all rooms.
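The room-linking procedure in step S5 is essentially a breadth-first traversal over shared doors: place one room at the origin, then place each newly reached room from the connected room's pose composed with the shared door's pose. The sketch below uses a simplified 2D pose model (position plus heading in degrees) and an invented data layout; the 180° term encodes the assumption that a door faces opposite directions from its two rooms:

```python
import math
from collections import deque

# Each room stores, per shared door id, the door's position and
# orientation expressed in that room's local frame.
rooms = {
    "A": {"d1": ((4.0, 1.0), 0.0)},    # door d1 as seen from room A
    "B": {"d1": ((0.0, 1.0), 180.0)},  # the same door as seen from room B
}

def connect_rooms(rooms, start="A"):
    """BFS over shared doors; returns each room's global (x, y, heading)."""
    poses = {start: (0.0, 0.0, 0.0)}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        cx, cy, ch = poses[cur]
        for door, ((dx, dy), dh) in rooms[cur].items():
            for other, doors in rooms.items():
                if other in poses or door not in doors:
                    continue  # already placed, or no shared door
                # door's global pose, via the current room's pose
                rad = math.radians(ch)
                gx = cx + dx * math.cos(rad) - dy * math.sin(rad)
                gy = cy + dx * math.sin(rad) + dy * math.cos(rad)
                (ox, oy), oh = doors[door]
                gh = (ch + dh - oh + 180.0) % 360.0
                # place the other room so its copy of the door coincides
                orad = math.radians(gh)
                poses[other] = (gx - (ox * math.cos(orad) - oy * math.sin(orad)),
                                gy - (ox * math.sin(orad) + oy * math.cos(orad)),
                                gh)
                queue.append(other)
    return poses

print(connect_rooms(rooms))  # B is placed so both copies of d1 coincide
```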
- the method for reconstructing a three-dimensional space scene based on photographs provided by the present invention further includes step S6: segmenting the photos using the point coordinate information collected in S3 to obtain the room texture maps: first set each face as a polygon,
- whose vertex information is a set of three-dimensional coordinates.
- the size of the texture map is calculated from the bounding rectangle of the vertex coordinates. Traverse the picture's pixels to obtain, for each pixel, the position of the corresponding spatial coordinate point on the polygon; traversing all pixels completes the texture map of a single face. Complete the texture maps of all faces of the room in turn.
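A toy version of the per-face texture cut: project the face's vertices into photo coordinates, take their bounding rectangle as the texture size, and copy each covered pixel. Real use needs the sphere-to-photo projection from S31; here it is collapsed to normalized (u, v) coordinates, and all names are illustrative:

```python
def face_texture(face_uv, photo, photo_size):
    """Cut one face's texture out of a photo.

    face_uv    -- the face's vertices projected into photo coordinates,
                  as (u, v) pairs in [0, 1]
    photo      -- photo[y][x] -> pixel value
    photo_size -- (width, height) of the photo
    The texture size comes from the bounding rectangle of the projected
    vertices; each texel copies the photo pixel it maps to.
    """
    w, h = photo_size
    us = [u for u, _ in face_uv]
    vs = [v for _, v in face_uv]
    x0, x1 = int(min(us) * (w - 1)), int(max(us) * (w - 1))
    y0, y1 = int(min(vs) * (h - 1)), int(max(vs) * (h - 1))
    return [[photo[y][x] for x in range(x0, x1 + 1)]
            for y in range(y0, y1 + 1)]

photo = [[10 * y + x for x in range(4)] for y in range(4)]
tex = face_texture([(0.0, 0.0), (0.5, 0.0), (0.5, 0.5), (0.0, 0.5)],
                   photo, (4, 4))
print(tex)  # [[0, 1], [10, 11]] -- the photo's top-left 2x2 block
```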
- the method for reconstructing a three-dimensional space scene based on photographs provided by the present invention further includes step S7: take the room models obtained through the marking system in step S3 and ignore the height information to obtain the two-dimensional outline of each room; according to the position and orientation information of each room obtained in step S5, set the position and orientation of each room outline, completing the generation of the two-dimensional plan.
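Step S7 amounts to discarding the z coordinate of each room model and applying the per-room pose from step S5. A sketch, reusing a simplified pose convention (translation plus heading in degrees) as an assumption:

```python
import math

def floor_plan(room_models, poses):
    """2D plan: drop z from each room's 3D corners, then place by pose.

    room_models -- {name: [(x, y, z), ...]} corner points per room
    poses       -- {name: (tx, ty, heading_deg)} from the S5 linking step
    """
    plan = {}
    for name, corners in room_models.items():
        tx, ty, hd = poses[name]
        c, s = math.cos(math.radians(hd)), math.sin(math.radians(hd))
        plan[name] = [(tx + x * c - y * s, ty + x * s + y * c)
                      for x, y, _z in corners]  # height ignored
    return plan

rooms3d = {"A": [(0, 0, 0), (4, 0, 0), (4, 3, 2.6), (0, 3, 2.6)]}
print(floor_plan(rooms3d, {"A": (1.0, 0.0, 0.0)}))
# {'A': [(1.0, 0.0), (5.0, 0.0), (5.0, 3.0), (1.0, 3.0)]}
```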
- the photographing devices include mobile phones with fisheye lenses, panoramic cameras, cameras with fisheye lenses, ordinary mobile phones, and ordinary digital cameras.
- the method for reconstructing a three-dimensional space scene based on photographs restores a three-dimensional space model of the scene that contains both texture information and size information without losing details, and without furniture or interior decoration severely interfering with the model; it allows 3D space scenes to be quickly and easily edited and modified while simultaneously generating undistorted 2D floor plans with size information; and it supports a wide range of inexpensive shooting devices, including but not limited to mobile phones with fisheye lenses, panoramic cameras, cameras with fisheye lenses, and ordinary mobile phones and digital cameras.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- Processing Or Creating Images (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
- Image Generation (AREA)
Abstract
Description
Claims (10)
- A method for reconstructing a three-dimensional space scene based on photographing, characterized by comprising the following steps: S1: importing photos of all spaces, importing for each space a group of photos taken at the same shooting point that contain the main features of the space, and mapping the photos to three-dimensional space according to the direction and angle of view at the time of shooting, so that when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is consistent with that at the time of shooting; S2: treating the room as a collection of planes, first determining a first plane, and then determining all the planes one by one through the relationships between planes and the intersection lines between planes; S3: marking the spatial structure of the room through a marking system and obtaining size information; S4: building a three-dimensional space model of the room from the point coordinate information collected in S3.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that step S1 further comprises synthesizing the photos of each space into a 360-degree panorama and then mapping the panorama to three-dimensional space, so that when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is consistent with that at the time of shooting.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that in step S2, the method of determining the first plane comprises: determining a plane by finding three perpendicularly intersecting wall lines on the plane, or determining a plane by finding the four wall corners of the plane.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that in step S2, the method of determining the first plane further comprises: determining the position of a plane by recording the projection point of the camera lens on the plane, or determining the position of a plane by recording related points from which the projection point of the camera lens on the plane is derived.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that before step S2 the method further comprises: marking vertical correction lines on the photo to correct image tilt caused by skew of the shooting device; finding a line perpendicular to the ground in the photo as a vertical correction line, or finding the horizontal line of the photo and drawing a line perpendicular to that horizontal line as a vertical correction line; with the vertical correction line as a reference, rotating the photo until the vertical correction line is perpendicular to the actual horizontal plane; and obtaining multiple vertical correction lines in different directions to complete the vertical correction.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that step S3 comprises the following steps: S31: placing the photo inside a high-precision sphere, setting the camera lens position at the center of the sphere, and then restoring the shooting angle of view at the center of the sphere; S32: presetting four frame points in the sphere, each frame point containing an upper point and a lower point, and dragging the frame points so that each point corresponds to a wall-corner position in the actual room space, forming the main frame structure of the room; or marking the positions of the lower wall corners in turn to form the ground outline, then, combined with the vertical wall lines, finding the corresponding upper wall corners and marking their positions to obtain the basic outline of the whole room; S33: calculating the height of the room from the known camera shooting height and deriving the outline dimensions of the room; or locating the outline dimensions of a plane by placing a ruler of known length and then deriving the outline dimensions of the room.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 6, characterized in that in step S32, when a wall corner to be marked is occluded: if the wall lines are visible, the position of the wall corner is determined by the intersection of two perpendicularly and horizontally intersecting wall lines; if both the lower wall corner and the wall lines are occluded while the upper wall corner or wall lines are visible, the upper plane and upper wall corner are determined first, then the positions of the upper and lower wall corners are scaled proportionally while keeping the lower corner point on the lower plane, thereby determining the corner position.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 6, characterized in that step S32 further comprises: adding marker points to objects in the room and forming a marker line from two marker points; extending the room walls and adding the basic object structures of the room, the basic objects including doors, open spaces, ordinary windows, bay windows, and stairs; for non-rectangular rooms, adding marker points of concave-convex wall structures to extend the room structure, the extended spatial structure being determined by adjusting the position and depth of the concave-convex walls; for walls of arbitrary free structure, adding free marker points to extend the wall structure arbitrarily; for duplex structures, adding marker points for stairs and stair openings, and connecting the stair openings of two floors to link the upper and lower floors and extend the stair structure; and step S33 further comprises: with a known camera lens height or a placed ruler of known length, obtaining, by marking objects in the room, the ratio between real-world sizes and model sizes, then scaling the whole model proportionally to calculate the real sizes of the objects in the room.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 8, characterized by further comprising step S5: connecting the rooms by comparing door or open-space image information in photos of different rooms to obtain the spatial position and orientation information of each room; for the marked doors or open spaces, comparing the pictures seen through the door or open space in the photos, obtaining matching pictures in each other's photos, finding the same door or open space, and connecting the rooms that share it; after all rooms have been linked together, calculating the positions and orientations of the connected rooms: selecting one room as a connected room, traversing the doors of that room to find an unconnected room, calculating the unconnected room's position and orientation from the position and orientation of the currently connected room combined with the position and orientation of the door or open space connecting the two rooms, and then marking the unconnected room as connected; continuing to search for unconnected rooms until none remain, completing the connection of all rooms.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized by further comprising step S6: segmenting the photos using the point coordinate information collected in S3 to obtain room texture maps: first setting each face as a polygon whose vertex information is a set of three-dimensional coordinates, and calculating the size of the texture map from the bounding rectangle of the vertex coordinates; traversing the picture's pixels to obtain the position on the polygon of the spatial coordinate point corresponding to each pixel; traversing all pixels to complete the texture map of a single face; and completing the texture maps of all faces of the room in turn.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 9, characterized by further comprising step S7: taking the room models obtained through the marking system in step S3 and ignoring the height information to obtain the two-dimensional outline of a single room; setting the position and orientation of each room outline according to the position coordinates and orientation information of each room obtained in step S5, completing the generation of the two-dimensional plan.
- The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that the shooting devices for the photos include mobile phones with fisheye lenses, panoramic cameras, cameras with fisheye lenses, ordinary mobile phones, and ordinary digital cameras.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3103844A CA3103844C (en) | 2018-07-03 | 2018-10-30 | Method for reconstructing three-dimensional space scene based on photographing |
JP2021520260A JP7162933B2 (ja) | 2018-07-03 | 2018-10-30 | オブジェクトの内部空間モデルを確立するための方法、装置及びシステム、並びに、コンピュータ装置及びコンピュータ可読記憶媒体 |
CN201880066029.6A CN111247561B (zh) | 2018-07-03 | 2018-10-30 | 一种基于拍照重建三维空间场景的方法 |
KR1020207035296A KR20210008400A (ko) | 2018-07-03 | 2018-10-30 | 촬영을 기반으로 3차원 공간 장면을 재구성하는 방법 |
GB2018574.0A GB2588030B (en) | 2018-07-03 | 2018-10-30 | Method for reconstructing three-dimensional space scene based on photographing |
US16/588,111 US11200734B2 (en) | 2018-07-03 | 2019-09-30 | Method for reconstructing three-dimensional space scene based on photographing |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810717163.XA CN108961395B (zh) | 2018-07-03 | 2018-07-03 | 一种基于拍照重建三维空间场景的方法 |
CN201810717163.X | 2018-07-03 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/588,111 Continuation US11200734B2 (en) | 2018-07-03 | 2019-09-30 | Method for reconstructing three-dimensional space scene based on photographing |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020006941A1 true WO2020006941A1 (zh) | 2020-01-09 |
Family
ID=64485147
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/112554 WO2020006941A1 (zh) | 2018-07-03 | 2018-10-30 | 一种基于拍照重建三维空间场景的方法 |
Country Status (7)
Country | Link |
---|---|
US (1) | US11200734B2 (zh) |
JP (1) | JP7162933B2 (zh) |
KR (1) | KR20210008400A (zh) |
CN (2) | CN108961395B (zh) |
CA (1) | CA3103844C (zh) |
GB (1) | GB2588030B (zh) |
WO (1) | WO2020006941A1 (zh) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465947A (zh) * | 2020-11-18 | 2021-03-09 | 李刚 | 影像的虚拟空间建立方法及*** |
CN113379901A (zh) * | 2021-06-23 | 2021-09-10 | 武汉大学 | 利用大众自拍全景数据建立房屋实景三维的方法及*** |
CN113487723A (zh) * | 2021-06-23 | 2021-10-08 | 武汉微景易绘科技有限公司 | 基于可量测全景三维模型的房屋在线展示方法及*** |
CN115330943A (zh) * | 2022-08-11 | 2022-11-11 | 北京城市网邻信息技术有限公司 | 多层空间三维建模方法、装置、设备和存储介质 |
CN117689846A (zh) * | 2024-02-02 | 2024-03-12 | 武汉大学 | 线状目标的无人机摄影重建多交向视点生成方法及装置 |
WO2024108350A1 (zh) * | 2022-11-21 | 2024-05-30 | 北京城市网邻信息技术有限公司 | 空间结构图和户型图生成方法、装置、设备和存储介质 |
Families Citing this family (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020102772A1 (en) * | 2018-11-15 | 2020-05-22 | Qualcomm Incorporated | Coordinate estimation on n-spheres with spherical regression |
CN109698951B (zh) * | 2018-12-13 | 2021-08-24 | 歌尔光学科技有限公司 | 立体图像重现方法、装置、设备和存储介质 |
CN110675314B (zh) * | 2019-04-12 | 2020-08-21 | 北京城市网邻信息技术有限公司 | 图像处理和三维对象建模方法与设备、图像处理装置及介质 |
CN111862302B (zh) * | 2019-04-12 | 2022-05-17 | 北京城市网邻信息技术有限公司 | 图像处理和对象建模方法与设备、图像处理装置及介质 |
US11869148B2 (en) | 2019-04-12 | 2024-01-09 | Beijing Chengshi Wanglin Information Technology Co., Ltd. | Three-dimensional object modeling method, image processing method, image processing device |
CN110209864B (zh) * | 2019-05-22 | 2023-10-27 | 刘鹏 | 三维立体模型测量改尺标注重新建模的网络平台*** |
CN110209001B (zh) * | 2019-06-04 | 2024-06-14 | 上海亦我信息技术有限公司 | 一种用于3d建模的三脚支架及相机拍摄姿态识别方法 |
US20220358770A1 (en) * | 2019-06-17 | 2022-11-10 | Ariel Al, Ltd. | Scene reconstruction in three-dimensions from two-dimensional images |
US11508141B2 (en) * | 2019-07-03 | 2022-11-22 | Magic Leap, Inc. | Simple environment solver using planar extraction |
CN110633628B (zh) * | 2019-08-02 | 2022-05-06 | 杭州电子科技大学 | 基于人工神经网络的rgb图像场景三维模型重建方法 |
GB2591857B (en) * | 2019-08-23 | 2023-12-06 | Shang Hai Yiwo Information Tech Co Ltd | Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method |
CN110505463A (zh) * | 2019-08-23 | 2019-11-26 | 上海亦我信息技术有限公司 | 基于拍照的实时自动3d建模方法 |
CN112712584B (zh) * | 2019-10-25 | 2024-05-24 | 阿里巴巴集团控股有限公司 | 空间建模方法、装置、设备 |
CN111210028B (zh) * | 2019-12-05 | 2022-12-02 | 万翼科技有限公司 | 房间模型核查方法、装置、计算机设备和存储介质 |
CN111079619B (zh) * | 2019-12-10 | 2023-04-18 | 北京百度网讯科技有限公司 | 用于检测图像中的目标对象的方法和装置 |
CN113240769B (zh) * | 2019-12-18 | 2022-05-10 | 北京城市网邻信息技术有限公司 | 空间链接关系识别方法及装置、存储介质 |
CN111127655B (zh) * | 2019-12-18 | 2021-10-12 | 北京城市网邻信息技术有限公司 | 房屋户型图的构建方法及构建装置、存储介质 |
CN111223177B (zh) * | 2019-12-18 | 2020-12-04 | 北京城市网邻信息技术有限公司 | 三维空间的三维模型的构建方法和装置、存储介质 |
CN111207672B (zh) * | 2019-12-31 | 2021-08-17 | 上海简家信息技术有限公司 | 一种ar量房方法 |
WO2021142787A1 (zh) * | 2020-01-17 | 2021-07-22 | 上海亦我信息技术有限公司 | 行进路线及空间模型生成方法、装置、*** |
CN111325662A (zh) * | 2020-02-21 | 2020-06-23 | 广州引力波信息科技有限公司 | 一种基于球面投影全景图生成3d空间户型模型的方法 |
CN111508067B (zh) * | 2020-04-15 | 2024-01-30 | 中国人民解放军国防科技大学 | 一种基于垂直平面和垂直线的轻量级室内建模方法 |
CN111210512B (zh) * | 2020-04-17 | 2020-07-21 | 中联重科股份有限公司 | 物体的三维抽象模型建立方法、装置、存储介质和处理器 |
CN111583417B (zh) * | 2020-05-12 | 2022-05-03 | 北京航空航天大学 | 一种图像语义和场景几何联合约束的室内vr场景构建的方法、装置、电子设备和介质 |
CN111830966B (zh) * | 2020-06-04 | 2023-12-19 | 深圳市无限动力发展有限公司 | 角落识别和清扫方法、装置及存储介质 |
CN111698424A (zh) * | 2020-06-22 | 2020-09-22 | 四川易热科技有限公司 | 一种通过普通相机补全实景漫游3d信息的方法 |
CN111859510A (zh) * | 2020-07-28 | 2020-10-30 | 苏州金螳螂三维软件有限公司 | 房间快速换装方法、智能终端 |
CN112055192B (zh) * | 2020-08-04 | 2022-10-11 | 北京城市网邻信息技术有限公司 | 图像处理方法、图像处理装置、电子设备及存储介质 |
CN111951388A (zh) * | 2020-08-14 | 2020-11-17 | 广东申义实业投资有限公司 | 室内装修设计用图像拍摄处理装置及图像拍摄处理方法 |
CN112132163B (zh) * | 2020-09-21 | 2024-04-02 | 杭州睿琪软件有限公司 | 识别对象边缘的方法、***及计算机可读存储介质 |
EP4229552A4 (en) * | 2020-10-13 | 2024-03-06 | Flyreel, Inc. | GENERATION OF MEASUREMENTS OF PHYSICAL STRUCTURES AND ENVIRONMENTS THROUGH AUTOMATED ANALYSIS OF SENSOR DATA |
CN112365569A (zh) * | 2020-10-22 | 2021-02-12 | 北京五八信息技术有限公司 | 房源三维场景的展示方法、装置、电子设备和存储介质 |
CN112493228B (zh) * | 2020-10-28 | 2021-12-14 | 河海大学 | 一种基于三维信息估算的激光驱鸟方法及*** |
CN112270758B (zh) * | 2020-10-29 | 2022-10-14 | 山东科技大学 | 一种基于天花板点云分割的建筑物房间轮廓线提取方法 |
CN112233229B (zh) * | 2020-10-29 | 2023-07-28 | 字节跳动有限公司 | 地标数据的采集方法及地标建筑的建模方法 |
CN114549631A (zh) * | 2020-11-26 | 2022-05-27 | 株式会社理光 | 图像处理方法、装置以及存储介质 |
CN112683221B (zh) * | 2020-12-21 | 2022-05-17 | 深圳集智数字科技有限公司 | 一种建筑检测方法和相关装置 |
KR102321704B1 (ko) * | 2020-12-29 | 2021-11-05 | 고려대학교 산학협력단 | 인접 평면 정보를 이용한 3차원 공간 모델 생성 방법 및 장치 |
CN112950759B (zh) * | 2021-01-28 | 2022-12-06 | 贝壳找房(北京)科技有限公司 | 基于房屋全景图的三维房屋模型构建方法及装置 |
TWI784754B (zh) * | 2021-04-16 | 2022-11-21 | 威盛電子股份有限公司 | 電子裝置以及物件偵測方法 |
CN113324473B (zh) * | 2021-04-30 | 2023-09-15 | 螳螂慧视科技有限公司 | 房屋测量方法与测量设备 |
US11670045B2 (en) * | 2021-05-07 | 2023-06-06 | Tencent America LLC | Method and apparatus for constructing a 3D geometry |
CN113593052B (zh) * | 2021-08-06 | 2022-04-29 | 贝壳找房(北京)科技有限公司 | 场景朝向确定方法及标记方法 |
US11961181B2 (en) * | 2021-09-23 | 2024-04-16 | Msg Entertainment Group, Llc | Three-dimensional image space transformation |
CN113920144B (zh) * | 2021-09-30 | 2022-09-13 | 广东省国土资源测绘院 | 一种实景照片地面视域分析方法及*** |
CN113689482B (zh) * | 2021-10-20 | 2021-12-21 | 贝壳技术有限公司 | 拍摄点推荐方法、装置及存储介质 |
CN114092642B (zh) * | 2021-11-18 | 2024-01-26 | 抖音视界有限公司 | 一种三维户型模型生成方法、装置及设备 |
CN113822994B (zh) * | 2021-11-24 | 2022-02-15 | 深圳普罗米修斯视觉技术有限公司 | 三维模型构建方法、装置及存储介质 |
CN114494487B (zh) * | 2021-12-30 | 2022-11-22 | 北京城市网邻信息技术有限公司 | 基于全景图语义拼接的户型图生成方法、设备及存储介质 |
WO2023163500A1 (en) * | 2022-02-28 | 2023-08-31 | Samsung Electronics Co., Ltd. | Floorplan-aware camera pose refinement method and system |
CN114663618B (zh) * | 2022-03-03 | 2022-11-29 | 北京城市网邻信息技术有限公司 | 三维重建及校正方法、装置、设备及存储介质 |
CN114708383A (zh) * | 2022-03-22 | 2022-07-05 | 广州市圆方计算机软件工程有限公司 | 二维平面转三维立体场景的天花和地面构造方法及*** |
CN114792357B (zh) * | 2022-03-23 | 2023-05-26 | 北京城市网邻信息技术有限公司 | 全景图资源生成方法、装置、电子设备及存储介质 |
CN114529686B (zh) * | 2022-04-21 | 2022-08-02 | 三一筑工科技股份有限公司 | 建筑模型的生成方法、装置、设备及介质 |
EP4300410B1 (en) * | 2022-06-29 | 2024-05-08 | Axis AB | Self-learning image geometrical distortion correction |
CN116071490B (zh) * | 2022-10-25 | 2023-06-23 | 杭州华橙软件技术有限公司 | 室内空间布局的重构方法及重构装置、电子设备和介质 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2787612B1 (fr) * | 1998-12-21 | 2001-03-09 | Eastman Kodak Co | Procede de construction d'un modele a faces pratiquement planes |
- CN1539120A (zh) * | 2001-06-20 | 2004-10-20 | | 三维电子地图数据的生成方法 |
CN101281034A (zh) * | 2008-05-16 | 2008-10-08 | 南京师范大学 | 基于空间直角关系的建筑物单影像三维测量方法 |
US20090279784A1 (en) * | 2008-05-07 | 2009-11-12 | Microsoft Corporation | Procedural authoring |
CN104240289A (zh) * | 2014-07-16 | 2014-12-24 | 崔岩 | 一种基于单个相机的三维数字化重建方法及*** |
CN104851127A (zh) * | 2015-05-15 | 2015-08-19 | 北京理工大学深圳研究院 | 一种基于交互的建筑物点云模型纹理映射方法及装置 |
CN106780421A (zh) * | 2016-12-15 | 2017-05-31 | 苏州酷外文化传媒有限公司 | 基于全景平台的装修效果展示方法 |
CN107978017A (zh) * | 2017-10-17 | 2018-05-01 | 厦门大学 | 基于框线提取的室内结构快速建模方法 |
Family Cites Families (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4119529B2 (ja) * | 1998-06-17 | 2008-07-16 | オリンパス株式会社 | 仮想環境生成方法および装置、並びに仮想環境生成プログラムを記録した記録媒体 |
JP2000076453A (ja) * | 1998-08-28 | 2000-03-14 | Kazuhiro Shiina | 立体データ作成方法及び装置 |
US8542872B2 (en) * | 2007-07-03 | 2013-09-24 | Pivotal Vision, Llc | Motion-validating remote monitoring system |
US8350850B2 (en) * | 2008-03-31 | 2013-01-08 | Microsoft Corporation | Using photo collections for three dimensional modeling |
US9196084B2 (en) * | 2013-03-15 | 2015-11-24 | Urc Ventures Inc. | Determining object volume from mobile device images |
US9025861B2 (en) * | 2013-04-09 | 2015-05-05 | Google Inc. | System and method for floorplan reconstruction and three-dimensional modeling |
US9595134B2 (en) * | 2013-05-11 | 2017-03-14 | Mitsubishi Electric Research Laboratories, Inc. | Method for reconstructing 3D scenes from 2D images |
JP5821012B2 (ja) * | 2013-05-31 | 2015-11-24 | パナソニックIpマネジメント株式会社 | モデリング装置、3次元モデル生成装置、モデリング方法、プログラム、レイアウトシミュレータ |
US9830681B2 (en) * | 2014-01-31 | 2017-11-28 | Hover Inc. | Multi-dimensional model dimensioning and scale error correction |
WO2015120188A1 (en) * | 2014-02-08 | 2015-08-13 | Pictometry International Corp. | Method and system for displaying room interiors on a floor plan |
CN103955960B (zh) * | 2014-03-21 | 2017-01-11 | 南京大学 | 一种基于单幅输入图像的图像视点变换方法 |
CN104202890B (zh) * | 2014-09-24 | 2016-10-05 | 北京极澈远技术有限公司 | 照明设备的待机电路和照明设备的工作电路 |
CN105279787B (zh) * | 2015-04-03 | 2018-01-12 | 北京明兰网络科技有限公司 | 基于拍照的户型图识别生成三维房型的方法 |
CN105205858B (zh) * | 2015-09-18 | 2018-04-13 | 天津理工大学 | 一种基于单个深度视觉传感器的室内场景三维重建方法 |
JP6220486B1 (ja) * | 2016-05-27 | 2017-10-25 | 楽天株式会社 | 3次元モデル生成システム、3次元モデル生成方法、及びプログラム |
CN106485785B (zh) * | 2016-09-30 | 2023-09-26 | 李娜 | 一种基于室内三维建模和定位的场景生成方法及*** |
US10572970B2 (en) * | 2017-04-28 | 2020-02-25 | Google Llc | Extracting 2D floor plan from 3D GRID representation of interior space |
CN107248193A (zh) * | 2017-05-22 | 2017-10-13 | 北京红马传媒文化发展有限公司 | 二维平面与虚拟现实场景进行切换的方法、***及装置 |
CN107393003B (zh) * | 2017-08-07 | 2020-12-04 | 苍穹数码技术股份有限公司 | 一种基于云计算的三维房屋自动建模的方法与实现 |
CN107798725B (zh) * | 2017-09-04 | 2020-05-22 | 华南理工大学 | 基于Android的二维住房户型识别和三维呈现方法 |
CN108053473A (zh) * | 2017-12-29 | 2018-05-18 | 北京领航视觉科技有限公司 | 一种室内三维模型数据的处理方法 |
US10445913B2 (en) * | 2018-03-05 | 2019-10-15 | Faro Technologies, Inc. | System and method of scanning and editing two dimensional floorplans |
US11055532B2 (en) * | 2018-05-02 | 2021-07-06 | Faro Technologies, Inc. | System and method of representing and tracking time-based information in two-dimensional building documentation |
-
2018
- 2018-07-03 CN CN201810717163.XA patent/CN108961395B/zh active Active
- 2018-10-30 JP JP2021520260A patent/JP7162933B2/ja active Active
- 2018-10-30 GB GB2018574.0A patent/GB2588030B/en active Active
- 2018-10-30 KR KR1020207035296A patent/KR20210008400A/ko not_active IP Right Cessation
- 2018-10-30 CN CN201880066029.6A patent/CN111247561B/zh active Active
- 2018-10-30 CA CA3103844A patent/CA3103844C/en active Active
- 2018-10-30 WO PCT/CN2018/112554 patent/WO2020006941A1/zh active Application Filing
-
2019
- 2019-09-30 US US16/588,111 patent/US11200734B2/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2787612B1 (fr) * | 1998-12-21 | 2001-03-09 | Eastman Kodak Co | Procede de construction d'un modele a faces pratiquement planes |
- CN1539120A (zh) * | 2001-06-20 | 2004-10-20 | | 三维电子地图数据的生成方法 |
US20090279784A1 (en) * | 2008-05-07 | 2009-11-12 | Microsoft Corporation | Procedural authoring |
CN101281034A (zh) * | 2008-05-16 | 2008-10-08 | 南京师范大学 | 基于空间直角关系的建筑物单影像三维测量方法 |
CN104240289A (zh) * | 2014-07-16 | 2014-12-24 | 崔岩 | 一种基于单个相机的三维数字化重建方法及*** |
CN104851127A (zh) * | 2015-05-15 | 2015-08-19 | 北京理工大学深圳研究院 | 一种基于交互的建筑物点云模型纹理映射方法及装置 |
CN106780421A (zh) * | 2016-12-15 | 2017-05-31 | 苏州酷外文化传媒有限公司 | 基于全景平台的装修效果展示方法 |
CN107978017A (zh) * | 2017-10-17 | 2018-05-01 | 厦门大学 | 基于框线提取的室内结构快速建模方法 |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112465947A (zh) * | 2020-11-18 | 2021-03-09 | 李刚 | 影像的虚拟空间建立方法及*** |
CN112465947B (zh) * | 2020-11-18 | 2024-04-23 | 李刚 | 影像的虚拟空间建立方法及*** |
CN113379901A (zh) * | 2021-06-23 | 2021-09-10 | 武汉大学 | 利用大众自拍全景数据建立房屋实景三维的方法及*** |
CN113487723A (zh) * | 2021-06-23 | 2021-10-08 | 武汉微景易绘科技有限公司 | 基于可量测全景三维模型的房屋在线展示方法及*** |
CN113487723B (zh) * | 2021-06-23 | 2023-04-18 | 武汉微景易绘科技有限公司 | 基于可量测全景三维模型的房屋在线展示方法及*** |
CN115330943A (zh) * | 2022-08-11 | 2022-11-11 | 北京城市网邻信息技术有限公司 | 多层空间三维建模方法、装置、设备和存储介质 |
CN115330943B (zh) * | 2022-08-11 | 2023-03-28 | 北京城市网邻信息技术有限公司 | 多层空间三维建模方法、装置、设备和存储介质 |
WO2024108350A1 (zh) * | 2022-11-21 | 2024-05-30 | 北京城市网邻信息技术有限公司 | 空间结构图和户型图生成方法、装置、设备和存储介质 |
CN117689846A (zh) * | 2024-02-02 | 2024-03-12 | 武汉大学 | 线状目标的无人机摄影重建多交向视点生成方法及装置 |
CN117689846B (zh) * | 2024-02-02 | 2024-04-12 | 武汉大学 | 线状目标的无人机摄影重建多交向视点生成方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
US20200111250A1 (en) | 2020-04-09 |
JP2021528794A (ja) | 2021-10-21 |
GB2588030A (en) | 2021-04-14 |
CA3103844A1 (en) | 2020-01-09 |
KR20210008400A (ko) | 2021-01-21 |
US11200734B2 (en) | 2021-12-14 |
CN108961395B (zh) | 2019-07-30 |
CN111247561A (zh) | 2020-06-05 |
GB2588030B (en) | 2023-03-29 |
CN111247561B (zh) | 2021-06-08 |
GB202018574D0 (en) | 2021-01-06 |
CA3103844C (en) | 2023-10-31 |
JP7162933B2 (ja) | 2022-10-31 |
CN108961395A (zh) | 2018-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020006941A1 (zh) | 一种基于拍照重建三维空间场景的方法 | |
US11875537B2 (en) | Multi view camera registration | |
CN110111262B (zh) | 一种投影仪投影畸变校正方法、装置和投影仪 | |
CN110505463A (zh) | 基于拍照的实时自动3d建模方法 | |
US7737967B2 (en) | Method and apparatus for correction of perspective distortion | |
TW201915944A (zh) | 圖像處理方法、裝置、系統和儲存介質 | |
US10580205B2 (en) | 3D model generating system, 3D model generating method, and program | |
WO2018077071A1 (zh) | 一种全景图像的生成方法及装置 | |
CN110490916A (zh) | 三维对象建模方法与设备、图像处理装置及介质 | |
CN108629829B (zh) | 一种球幕相机与深度相机结合的三维建模方法和*** | |
GB2591857A (en) | Photographing-based 3D modeling system and method, and automatic 3D modeling apparatus and method | |
JP2015022510A (ja) | 自由視点画像撮像装置およびその方法 | |
Soycan et al. | Perspective correction of building facade images for architectural applications | |
US20190220952A1 (en) | Method of acquiring optimized spherical image using multiple cameras | |
TW201635242A (zh) | 室內二維平面圖的生成方法、裝置和系統 | |
US8509522B2 (en) | Camera translation using rotation from device | |
WO2018056802A1 (en) | A method for estimating three-dimensional depth value from two-dimensional images | |
Fleischmann et al. | Fast projector-camera calibration for interactive projection mapping | |
JP4149732B2 (ja) | ステレオマッチング方法、3次元計測方法及び3次元計測装置並びにステレオマッチング方法のプログラム及び3次元計測のプログラム | |
KR101996226B1 (ko) | 피사체의 3차원 위치 측정 장치 및 그 방법 | |
TWI662694B (zh) | 三維影像攝取方法及系統 | |
JP2006300656A (ja) | 画像計測方法、装置、プログラム及び記録媒体 | |
JP4282361B2 (ja) | 写真測量方法および写真測量プログラム | |
US8260007B1 (en) | Systems and methods for generating a depth tile | |
CN111768446B (zh) | 一种室内全景影像逆向建模融合方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18925186 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 202018574 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20181030 |
|
ENP | Entry into the national phase |
Ref document number: 20207035296 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 3103844 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2021520260 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.05.2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18925186 Country of ref document: EP Kind code of ref document: A1 |