WO2020006941A1 - A method for reconstructing a three-dimensional space scene based on photographs - Google Patents

A method for reconstructing a three-dimensional space scene based on photographs

Info

Publication number
WO2020006941A1
Authority
WO
WIPO (PCT)
Prior art keywords
room
plane
dimensional space
wall
reconstructing
Prior art date
Application number
PCT/CN2018/112554
Other languages
English (en)
French (fr)
Inventor
黄孝敏
赵明
蔡锫
Original Assignee
上海亦我信息技术有限公司
Application filed by 上海亦我信息技术有限公司
Priority to CA3103844A (patent CA3103844C)
Priority to JP2021520260A (patent JP7162933B2)
Priority to CN201880066029.6A (patent CN111247561B)
Priority to KR1020207035296A (publication KR20210008400A)
Priority to GB2018574.0A (patent GB2588030B)
Priority to US16/588,111 (patent US11200734B2)
Publication of WO2020006941A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G06T3/608 Rotation of whole images or parts thereof by skew deformation, e.g. two-pass or three-pass rotation
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/579 Depth or shape recovery from multiple images from motion
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/70 Determining position or orientation of objects or cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; Corner detection
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/04 Architectural design, interior design
    • G06T2210/21 Collision detection, intersection
    • G06T2210/56 Particle system, point based geometry or rendering
    • G06T2215/00 Indexing scheme for image rendering
    • G06T2215/16 Using real world measurements to influence rendering
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/004 Annotating, labelling
    • G06T2219/008 Cut plane or projection plane definition
    • G06T2219/012 Dimensioning, tolerancing

Definitions

  • The invention relates to a three-dimensional modeling method, and in particular to a method for reconstructing a three-dimensional space scene based on photographs.
  • Three-dimensional space modeling is a technology that has been rapidly developed and applied in recent years. It has been widely used in virtual reality, house decoration, interior design and other fields.
  • Existing three-dimensional space scene modeling generally adopts one of the following solutions:
  • Binocular stereo vision is a method based on the parallax principle: an imaging device captures two images of the measured object from different positions, and the positional deviation between corresponding image points is calculated to obtain the object's three-dimensional geometric information. By fusing the two images and observing the differences between them, a clear sense of depth is obtained, correspondences between features are established, and the same physical point in space is mapped across the different images.
  • Although this method uses simple shooting equipment, reconstructing a three-dimensional space requires a very large number of pictures, which makes shooting very time-consuming, and computing the model afterwards also takes a long time. Once the model has a problem, repairing it is very complicated and beyond non-professionals. So although binocular stereo vision systems have existed for many years, they have not been adopted at scale.
  • Laser point cloud technology uses the principle of TOF or structured-light ranging to obtain the spatial coordinates of each sampling point on the surface of an object, yielding a massive set of points that expresses the spatial distribution of the target and the characteristics of its surface.
  • This point set is called a "point cloud" (Point Cloud).
  • The properties of a point cloud include spatial resolution, point accuracy, surface normal vectors, and so on.
  • On the one hand, laser point cloud technology requires users to purchase additional point cloud equipment that is bulky, expensive, and complicated to operate; on the other hand, it generates massive data that is difficult to store and process. When multiple sets of data need to be stitched together, the sheer volume of data makes stitching take a long time, and the results are often unsatisfactory. Therefore, although point cloud technology has existed for many years, it has been difficult to promote.
  • The technical problem to be solved by the present invention is to provide a method for reconstructing a three-dimensional space scene based on photographs, which can restore a three-dimensional model of the scene that contains both texture information and size information without losing detail, allows the three-dimensional scene to be edited and modified quickly and conveniently, and at the same time can generate an undistorted two-dimensional floor plan with size information.
  • The technical solution adopted by the present invention to solve the above problem is a method for reconstructing a three-dimensional space scene based on photographs, including the following steps. S1: import photos of all spaces; for each space, import a set of photos containing the main features of the space, taken at the same shooting point, and map the photos into the three-dimensional space according to the shooting direction and perspective, so that when viewed from the camera position in the three-dimensional space, the viewing direction of each pixel is the same as when shooting. S2: treat the room as a collection of multiple planes; determine the first plane, then determine all the other planes one by one through the relationships and intersections between the planes. S3: mark the spatial structure of the room through the marking system and obtain the size information. S4: establish a three-dimensional space model of the room from the point coordinate information collected in S3.
  • Step S1 further includes synthesizing the photos of each space into a 360-degree panorama, and then mapping the panorama into the three-dimensional space, so that when viewed from the camera position in the three-dimensional space, the viewing direction of each pixel is the same as when shooting.
  • The method for determining the first plane includes: determining a plane by finding three perpendicularly intersecting wall lines on the plane, or by finding four wall corners of the plane.
  • The method for determining the first plane further includes: determining the position of the plane by recording the projection point of the camera lens on the plane, or by recording related points from which the projection point of the camera lens on the plane can be calculated.
  • In the method for reconstructing a three-dimensional space scene based on photographs, before step S2 the method further includes marking vertical correction lines on the photo to correct the image tilt caused by distortion of the shooting device: find a line in the photo that is perpendicular to the ground and use it as a vertical correction line, or find the horizontal line of the photo and draw a line perpendicular to it as the vertical correction line; using the vertical correction line as a reference, rotate the photo until the correction line is perpendicular to the actual horizontal plane; obtain multiple vertical correction lines in different directions to complete the vertical correction.
  • In the method for reconstructing a three-dimensional space scene based on photographs, step S3 includes the following steps. S31: place the photograph in a high-precision sphere, set the lens position of the camera at the center of the sphere, and restore the shooting perspective at the center of the sphere. S32: preset four frame points in the sphere, each consisting of an upper and a lower point, and drag the frame points so that each point corresponds to a corner position in the actual room, forming the main frame structure of the room; or mark the positions of the lower wall corners in turn to form the ground outline, then use the vertical wall lines to find the corresponding upper wall corners and mark their positions, obtaining the basic outline of the entire room. S33: calculate the height of the room from the known camera shooting height and estimate the outline dimensions of the room; or locate the outline size of one plane by placing a ruler of known length, and then estimate the outline dimensions of the room.
  • In step S32, when a wall corner that needs to be marked is occluded: if the wall line is visible, the position of the corner is determined by the intersection of two perpendicularly intersecting wall lines; if both the lower corner and its wall line are blocked while the upper corner or wall line is visible, the upper plane and upper corner are determined first, and the corner position is then derived from them.
  • Step S32 further includes adding marker points to objects in the room; two marker points are connected to form a marker line, which extends the room's walls and adds the room's basic object structures.
  • The basic objects include doors, open spaces, ordinary windows, bay windows, and stairs. For non-rectangular rooms, marker points for protruding wall structures are added to expand the room structure, and the expanded structure is determined by adjusting the position and depth of the recessed wall; for arbitrary free-form walls, free marker points are added to expand the wall structure arbitrarily; for duplex structures, marker points are added to the stairs and stairwell structure, and the stairwells of the two floors are connected to join the upper and lower floors in series and expand the stair structure. Step S33 also includes: given a known camera lens height or a ruler of known length, mark objects in the room to obtain the ratio between real-world size and model size, then scale the entire model proportionally to calculate the true size of objects in the room.
  • The above method for reconstructing a three-dimensional space scene based on photographs further includes step S5: compare the image information of doors or open spaces in the photos of different rooms, and connect the rooms to obtain the spatial position and orientation of each room. For each door or open space, compare its picture across the photos; when matching pictures are found in each other's photos, the same door or open space has been identified, and the rooms sharing it are connected. After all rooms have been connected in series, calculate the position and orientation of each room: select one room as the connected room, traverse its doors to find unconnected rooms, and from the position and orientation of the currently connected room and of the door or open space joining the two rooms calculate the position and orientation of the unconnected room, then mark it as connected. Continue searching for unconnected rooms until none remain, completing the connection of all the rooms.
  • The method for reconstructing a three-dimensional space scene based on photographs further includes step S6: segment the photo using the point coordinate information collected in S3 to obtain the room texture maps. To obtain a room texture, first treat each face as a polygon whose vertex information is a set of three-dimensional coordinates; the size of the map is given by the bounding rectangle of the vertex coordinates. Iterate over the pixels of the map, obtaining for each pixel the position of the corresponding spatial coordinate point on the polygon; traversing all pixels completes the texture mapping of a single face. Complete the texture mapping of all faces of the room in turn.
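The per-pixel traversal described above can be sketched in code. The following is a minimal illustration, not the patent's implementation: it assumes an equirectangular panorama, a camera at the origin with +y up, and nearest-neighbour sampling; all function names are my own.

```python
import numpy as np

def direction_to_equirect(d, width, height):
    """Map a unit viewing direction to (column, row) in an equirectangular panorama."""
    x, y, z = d
    lon = np.arctan2(x, z)                # azimuth in [-pi, pi]
    lat = np.arcsin(np.clip(y, -1, 1))    # elevation in [-pi/2, pi/2]
    col = (lon / (2 * np.pi) + 0.5) * (width - 1)
    row = (0.5 - lat / np.pi) * (height - 1)
    return int(round(col)), int(round(row))

def bake_face_texture(corners, pano, texels_per_unit=50):
    """Sample the panorama onto one planar face given its 3D corner coordinates.

    corners: (N,3) vertices of the face, with the lens at the origin.
    pano:   (H,W,3) equirectangular image.
    """
    corners = np.asarray(corners, float)
    origin = corners[0]
    u = corners[1] - origin
    u /= np.linalg.norm(u)
    n = np.cross(u, corners[2] - origin)
    n /= np.linalg.norm(n)
    v = np.cross(n, u)                    # in-plane axis orthogonal to u
    # 2D coordinates of the vertices in the face plane, and their bounding box
    uv = np.array([[(c - origin) @ u, (c - origin) @ v] for c in corners])
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    w = max(1, int((hi[0] - lo[0]) * texels_per_unit))
    h = max(1, int((hi[1] - lo[1]) * texels_per_unit))
    tex = np.zeros((h, w, 3), dtype=pano.dtype)
    H, W = pano.shape[:2]
    for j in range(h):
        for i in range(w):
            # 3D point on the face for this texel, then its viewing direction
            p = origin + (lo[0] + (i + .5) / texels_per_unit) * u \
                       + (lo[1] + (j + .5) / texels_per_unit) * v
            d = p / np.linalg.norm(p)
            col, row = direction_to_equirect(d, W, H)
            tex[j, i] = pano[row % H, col % W]
    return tex
```

Traversing all texels of one face, then repeating for every face, mirrors the "complete the texture mapping of all faces in turn" step.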
  • The method for reconstructing a three-dimensional space scene based on photographs further includes step S7: obtain the two-dimensional contour of a single room by ignoring the height information of the room model obtained through the marking system in step S3; according to the position and orientation information of each room obtained in step S5, set the position and orientation of each room's contour to complete the generation of a two-dimensional floor plan.
  • In the above method for reconstructing a three-dimensional space scene based on photographs, the photographing equipment includes a mobile phone with a fisheye lens, a panoramic camera, a camera with a fisheye lens, an ordinary mobile phone, or an ordinary digital camera.
  • Compared with the prior art, the present invention has the following beneficial effects. The method restores a three-dimensional model of the scene that contains both texture information and size information without losing detail; there are no holes in the model caused by incomplete scanning, and furniture or interior decoration does not seriously interfere with the model. Three-dimensional scenes can be edited and modified quickly and easily, and undistorted two-dimensional floor plans with size information can be generated at the same time. A wide range of shooting equipment is supported, including but not limited to mobile phones with fisheye lenses, panoramic cameras, cameras with fisheye lenses, ordinary mobile phones, and ordinary digital cameras, at low cost.
  • FIG. 1 is a flowchart of a method for reconstructing a three-dimensional space scene based on photographs according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of determining a plane in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of determining a first plane by finding three perpendicularly intersecting wall lines on the plane in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of the conversion of polar coordinates to rectangular coordinates in an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of determining a first plane by finding four wall corners of the plane in an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of determining a first plane by recording related points and calculating the projection point of the camera lens on the plane in an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of vertical correction in an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of marking the room spatial structure with a protruding or recessed wall structure added in an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of adding free marker points to the marked room spatial structure to extend the wall arbitrarily in an embodiment of the present invention.
  • FIG. 10a is a side view of a wall corner as seen by the camera in an embodiment of the present invention.
  • FIG. 10b is a top view of a wall corner as seen by the camera in an embodiment of the present invention.
  • FIG. 11 is a schematic diagram of estimating the contour size by placing a ruler of known length on the ground in an embodiment of the present invention.
  • FIG. 12 is a schematic diagram of estimating the contour size by placing a ruler of known length at an angle to the ground in an embodiment of the present invention.
  • FIG. 1 is a flowchart of the method for reconstructing a three-dimensional space scene based on photographs according to the present invention.
  • A method for reconstructing a three-dimensional space scene based on photographs includes the following steps. S1: import photos of all spaces; for each space, import a set of photos containing the main features of the space, taken at the same shooting point, and map the photos into three-dimensional space according to the shooting direction and perspective, so that when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is the same as when shooting. S2: treat the room as a collection of multiple planes; determine the first plane, then determine all the other planes one by one through the relationships and intersections between the planes. S3: mark the spatial structure of the room through the marking system and obtain the size information. S4: create the three-dimensional space model of the room from the point coordinate information collected in S3.
  • Step S1 further comprises synthesizing the photos of each space into a 360-degree panorama, and then mapping the panorama into the three-dimensional space, so that when viewed from the camera position in the three-dimensional space, the viewing direction of each pixel is consistent with that at the time of shooting.
  • The method for determining the first plane includes: determining a plane by finding three perpendicularly intersecting wall lines on the plane, or by finding four wall corners of the plane.
  • The method for determining the first plane further includes: determining the position of the plane by recording the projection point of the camera lens on the plane, or by recording related points from which the projection point of the camera lens on the plane can be calculated.
  • Because the photo is mapped into three-dimensional space, each pixel keeps the same viewing direction as at the time of shooting; however, no distance information from a pixel to the shooting point (the position of the lens when shooting) is recorded or provided.
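For an equirectangular panorama this pixel-to-direction correspondence has a simple closed form. The sketch below uses an assumed convention (+y up, +z toward the panorama centre) and a hypothetical function name; as the text notes, it yields only a direction per pixel, never a distance.

```python
import math

def pixel_to_direction(col, row, width, height):
    """Unit viewing direction of an equirectangular panorama pixel.

    Convention (an assumption, not from the patent): +y is up, +z points at
    the panorama centre, and longitude increases to the right.
    """
    lon = (col / (width - 1) - 0.5) * 2 * math.pi
    lat = (0.5 - row / (height - 1)) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

The centre pixel looks straight ahead, the top row straight up; every later step of the method (plane fitting, corner marking) consumes these directions.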
  • The basic principle of modeling is to treat the indoor model as a collection of multiple planes (including the ground, walls, and ceiling).
  • A lower wall corner is the intersection of three planes (the ground and two walls), and an upper wall corner is the intersection of three planes (the ceiling and two walls); a wall line is the intersection of two planes (walls). If the position of one plane can be determined first, the other planes can be determined in turn through the corners and lines on that plane, until all planes are restored and the modeling is complete.
  • The wall S2 can be determined from the known perpendicular relationship between the wall S2 and the ground S1 and the intersection line (wall line) L1 between S1 and S2. In the same way, the wall S3 and the positions of all mutually perpendicular walls in the space can be determined through wall line L2.
  • Method a (adjacent right-angle method): determine a plane by three perpendicularly intersecting lines on the plane. The premise of the adjacent right-angle method is that interior wall surfaces are mostly composed of rectangles.
  • P is the position of the camera lens; if the angles formed by P1P2P3 and P2P3P4 are both right angles, the plane on which P1P2P3P4 lies can be determined.
  • A wall is observed in the photo of FIG. 3, and P1, P2, P3, and P4 all lie on this wall.
  • P2P3 is a vertical edge of the wall, perpendicular to the ground;
  • P1P2 is the intersection of the wall and the ceiling;
  • P3P4 is the intersection of the wall and the ground;
  • P1 is a point on the wall-ceiling intersection different from P2;
  • P4 is a point on the wall-floor intersection different from P3.
  • The coordinates of the four points relative to the observation point are expressed in polar (spherical) coordinates as (r, θ, φ). The radius r is unknown, while the other two values, the angles θ and φ, can be observed directly from the photo.
  • P1P2 is parallel to P3P4, so the dot product of the two vectors is equal to the product of their magnitudes; together with the two right-angle conditions, this yields equations from which the unknown radii can be solved.
  • The same method can be used to determine the other planes.
  • In practice, the following procedure is used: determine the first plane S1 with the adjacent right-angle method, and fix its position using one positive real solution of the plane equations; choose an adjacent plane S2, determine it with the adjacent right-angle method, and fix its position with a positive real solution; repeat the previous step to determine the positions of the other planes one by one until all are complete. If a plane has no positive real solution, or its intersection with the adjacent plane is wrong, go back to the previous plane, take its next positive real solution, and repeat the steps above until all planes are determined. Since all the planes exist in space, their positions can all be determined by this method.
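To make the adjacent right-angle computation concrete, here is a toy solver for one plane: scan candidate plane normals, intersect the four observed corner rays with each candidate, and keep the normal that best satisfies the two right angles and the parallel condition. This brute-force sketch stands in for the closed-form positive-real-solution computation the text alludes to; the conventions (lens at the origin, plane placed at unit distance, hence an unresolved overall scale) are my own.

```python
import numpy as np

def solve_plane_adjacent_right_angles(dirs, step_deg=2.0):
    """Recover the orientation of a wall plane by the adjacent right-angle method.

    dirs: (4,3) unit viewing directions of corners P1..P4 as seen from the lens
    (the origin), where the angles P1-P2-P3 and P2-P3-P4 are right angles and
    P1P2 is parallel to P3P4. Only the angular part of each point is observed;
    the radii are the unknowns. Returns the unit plane normal (up to sign;
    the absolute scale of the scene stays unknown).
    """
    dirs = np.asarray(dirs, float)
    best, best_err = None, np.inf
    for theta in np.arange(0.0, 180.0, step_deg):      # polar angle of the normal
        for phi in np.arange(0.0, 360.0, step_deg):    # azimuth of the normal
            t, p = np.radians(theta), np.radians(phi)
            n = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
            proj = dirs @ n
            if np.any(np.abs(proj) < 1e-6):
                continue  # a ray grazing the plane gives no intersection
            pts = dirs / proj[:, None]       # rays intersected with the plane n.x = 1
            a, b, c = pts[0] - pts[1], pts[2] - pts[1], pts[3] - pts[2]
            na, nb, nc = map(np.linalg.norm, (a, b, c))
            # residuals: two right angles plus the parallel condition
            err = (a @ b / (na * nb)) ** 2 \
                + (b @ c / (nb * nc)) ** 2 \
                + (np.linalg.norm(np.cross(a, c)) / (na * nc)) ** 2
            if err < best_err:
                best, best_err = n, err
    return best
```

With the normal fixed, the radii follow from intersecting each ray with the plane, which is exactly the backtracking loop's per-plane step.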
  • In general, two adjacent right angles can be found on each plane. If an individual plane does not satisfy this condition, it can be determined from an adjacent plane whose position is already known and from their line of intersection. As shown in FIG. 2, assuming that S2 is perpendicular to S1, S2 can be uniquely determined from the position of S1 and the intersection line L1 between S1 and S2.
  • Method b (rectangle method): a plane is determined by assuming that a wall is rectangular and finding the four corners of the plane. In a room, most walls are rectangular, so it is natural to use the four vertices of the rectangle to determine the position of the wall.
  • The rectangle method is a special case of the adjacent right-angle method.
  • The line segment P1P4 must be parallel to the line segment P2P3.
  • P1 can be any point on the line where P1P2 is located.
  • P4 can also be any point on the line where P3P4 is located.
  • The solution of the rectangle method is similar to that of the adjacent right-angle method, so it is not repeated here.
  • Method c (projection method): the plane is determined by the projection of the camera onto the plane. If the projection of the camera lens onto a plane is recorded at the time of shooting, the position of the plane can be uniquely determined (with the distance from the camera to the plane known or assumed).
  • The line connecting the camera position and the projection point is perpendicular to the ground S1.
  • The position of the ground is thereby determined.
  • The projection point can be obtained/recorded as follows: shoot with the camera on a tripod or stand that is perpendicular to the ground.
  • The center point where the tripod/stand rests on the ground is then the projection point.
  • Use the projection method to determine the first plane S1, and select an adjacent plane S2.
  • S2 can be uniquely determined from the position of S1 and the intersection line L1 between S1 and S2; determine all the remaining planes in the same way. In practice, since almost all walls are perpendicular to the ground or the ceiling, if the first plane is a wall, select the ground or the ceiling as the first adjacent plane in the steps above, and then determine all the walls in turn.
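With the projection point known, locating any marked ground point reduces to a ray-plane intersection. A minimal sketch under an assumed convention (lens at the origin, +y up, floor at y = -camera_height, function name my own):

```python
def intersect_ground(direction, camera_height):
    """Intersect a viewing ray from the lens with the ground plane.

    The lens sits at the origin and the ground is the plane y = -camera_height,
    i.e. the camera's projection point lies directly below the lens (tripod
    perpendicular to the floor). `direction` need not be normalised.
    """
    dx, dy, dz = direction
    if dy >= 0:
        raise ValueError("ray does not hit the ground")
    t = -camera_height / dy          # ray parameter at the floor
    return (dx * t, dy * t, dz * t)
```

Every ground point recovered this way then anchors the adjacent walls through their intersection lines, as described above.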
  • Method d (inclined projection method): record related points and calculate the projection point of the camera lens on the plane to determine the position of the plane. This method is similar to the projection method, except that the projection point itself is not recorded; instead, it is calculated from the recorded points.
  • Typical setups are: the tripod/bracket is placed on the ground (base attached to the ground) but not perpendicular to it; or the tripod/bracket is against the wall (base attached to the wall) but not perpendicular to it. When the tripod/stand is not perpendicular to the plane, its projection on the plane is a line segment.
  • P is the position of the camera lens
  • P1 is the center point of the base of the camera bracket
  • P2 is the projection point on the plane
  • the projection line of the bracket P1P3 on the plane is P1P2.
  • From the recorded points, the position of the projection point P2 can be calculated; starting from P2, all planes can be determined using the projection method.
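Once the recorded points fix the plane through the bracket base, the projection point P2 is the foot of the perpendicular from the lens to that plane. A small sketch (representing the plane by a point on it and its unit normal is my assumption about what the recorded points determine):

```python
import numpy as np

def foot_of_perpendicular(lens, base_point, plane_normal):
    """Projection of the lens onto the plane through base_point with the given normal."""
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    p = np.asarray(lens, float) - np.asarray(base_point, float)
    # subtract the out-of-plane component of the lens offset
    return np.asarray(base_point, float) + (p - (p @ n) * n)
```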
  • The method further includes marking vertical correction lines on the photo to correct the image tilt caused by distortion of the shooting device when the photo is taken: find a line in the photo that is perpendicular to the ground and use it as a vertical correction line, or find the horizontal line of the photo and draw a line perpendicular to it as the vertical correction line; using the vertical correction line as a reference, rotate the photo until the correction line is perpendicular to the actual horizontal plane; obtain multiple vertical correction lines in different directions to complete the vertical correction.
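For an ordinary perspective photo, this correction reduces to measuring the signed angle between the marked line and the image's vertical axis and rotating the image by it; for panoramas, a full 3D rotation built from several correction lines is needed instead. A hypothetical helper for the simple case (image coordinates: x right, y down):

```python
import math

def roll_correction_deg(p_top, p_bottom):
    """Signed angle (degrees) between a marked correction line and the image
    vertical; rotating the photo by this amount makes the line vertical.
    p_top / p_bottom are (x, y) endpoints of a line that is vertical in reality."""
    dx = p_bottom[0] - p_top[0]
    dy = p_bottom[1] - p_top[1]
    return math.degrees(math.atan2(dx, dy))
```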
  • Step S3 includes the following steps. S31: place the photo in a high-precision sphere, set the lens position of the camera at the center of the sphere, and restore the photo's shooting perspective at the center of the sphere. S32: preset four frame points in the sphere, each frame point including an upper point and a lower point, and drag the frame points so that each point corresponds to a corner position in the actual room, forming the main frame structure of the room; or mark the positions of the lower wall corners in turn to form the ground outline, then use the vertical wall lines to find the corresponding upper wall corners and mark their positions, obtaining the basic outline of the entire room. S33: calculate the height of the room from the known camera shooting height and from it the room's outline dimensions; or locate the outline size of one plane by placing a ruler of known length, and then calculate the room's outline dimensions.
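The S33 size estimate can be sketched directly: intersect the lower-corner ray with the floor (camera height known), then use the vertical wall line to place the matching upper corner and read off the room height. The conventions below (+y up, lens at the origin, unnormalised directions allowed) and the function name are my own:

```python
import math

def corner_positions(lower_dir, upper_dir, camera_height):
    """Locate a wall corner pair and the room height from marked directions.

    lower_dir / upper_dir: viewing directions (x, y, z) of the lower and upper
    wall corner as seen from the lens (origin); y is up. The two corners lie
    on one vertical wall line, and the lens is camera_height above the floor.
    """
    dx, dy, dz = lower_dir
    t = -camera_height / dy              # ray-floor intersection (floor at y = -h)
    floor_pt = (dx * t, dy * t, dz * t)
    # the upper corner is vertically above: same horizontal position
    horiz = math.hypot(floor_pt[0], floor_pt[2])
    ux, uy, uz = upper_dir
    uh = math.hypot(ux, uz)
    ceil_y = horiz * (uy / uh)           # height of the upper corner above the lens
    room_height = camera_height + ceil_y
    upper_pt = (floor_pt[0], ceil_y, floor_pt[2])
    return floor_pt, upper_pt, room_height
```

Repeating this for each frame point yields the coordinate set from which S4 builds the model.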
  • step S32 can mark the spatial structure and objects of the room in a variety of ways.
  • One of the marking methods is as follows:
  • Each lower frame point corresponds to an upper frame point (P1a, P2a, P3a, and P4a), and the line connecting each pair of upper and lower frame points (such as P1P1a) is perpendicular to the plane S1.
  • P7 can be any point on the plane S1
  • P5P7 and P2P5 need not be perpendicular;
  • P5P7 and P7P8 need not be perpendicular;
  • P7P8 and P8P3 need not be perpendicular.
  • The wall line is generally used as a reference object.
  • The position of the wall line can be determined by the connection between frame points.
  • A marker line is formed between two marker points.
  • The room walls are expanded and the basic object structures of the rooms are added, such as doors, open spaces, ordinary windows, bay windows, and stairs; basic object structures can also be added to the walls.
  • The marker-point structure records the room wall structure and the main room objects (such as doors, doorways, open spaces, ordinary windows, bay windows, and stairs), and is stored locally in text form.
  • Step S32 can also form the outline of each plane by marking the position of each wall corner on a given plane; the other planes are then confirmed step by step through wall lines and corners until all planes are marked and the outline of the room is formed (the corner-by-corner marking method). Take marking the ground as an example.
  • the specific steps are as follows:
  • The first method for determining the size in step S33 is to calculate the height of the room from a known camera shooting height and estimate the outline dimensions of the room.
  • The second method for determining the size in step S33 is to locate the outline size of one plane (ground/wall) by placing a ruler of known length, and then infer the outline dimensions of the other planes.
  • The camera height (distance to the ground) can be calculated from the known ruler length.
  • The subsequent steps are the same as in the first method.
  • When the camera height is unknown:
  • the true length of the ground ruler R is known to be Lr, and its length measured in the coordinate system is Lr'; h and Lr' are in the same coordinate system and in the same unit;
  • the true camera height is h' = (Lr / Lr') · h.
  • the subsequent method is the same as shown in Figure 11.
  • the ruler R is placed on the wall s2.
  • as long as the ruler forms a fixed angle with some plane, the length of its projection onto that plane can also be computed with trigonometric functions, and the position of the plane determined from it.
  • the ruler is Pr1Pr2.
  • the true length of Pr1Pr2 is known.
  • the fixed angle θ between the ruler and the plane, i.e. ∠Pr1, is known.
  • the true length of the projection Pr1Pr2' is determined from the angle θ and the ruler's projection on the floor.
  • in step S32, when a wall corner to be marked is occluded: if its wall lines are visible, the corner position is determined by the intersection of the two perpendicularly intersecting wall lines; if both the lower corner and its wall lines are occluded while the upper corner or wall line is visible, first determine the upper plane and upper corner, then scale the upper- and lower-corner positions proportionally while keeping the lower corner on the lower plane, thereby determining the corner position.
  • the method for reconstructing a three-dimensional space scene based on photographs provided by the present invention further includes step S5: comparing the door or open-space image information in photos of different rooms and connecting the rooms to obtain each room's spatial position and orientation. For each marked door or open space, the views seen through it are compared across photos to find matching pictures and identify the same door or open space; rooms sharing the same door or open space are connected. After all rooms are linked, the position and orientation of the connected rooms are computed: one room is selected as a connected room, its doors are traversed to find unconnected rooms, and each unconnected room's position and orientation are computed from the currently connected room's position and orientation combined with the position and orientation of the door or open space joining the two rooms; the room is then marked as connected. The search continues until no unconnected rooms remain and the connection of all rooms is complete.
  • the method for reconstructing a three-dimensional space scene based on photographs provided by the present invention further includes step S6: segmenting the photos with the point coordinate information collected in S3 to obtain the room texture maps. Each face is first set as a polygon,
  • whose vertex information is a set of three-dimensional coordinates.
  • the size of the texture map is computed from the bounding rectangle of the vertex coordinates. The picture's pixels are traversed to obtain the spatial coordinate point on the polygon corresponding to each pixel; traversing all pixels completes the texture map of a single face, and the texture maps of all faces of the room are completed in turn.
  • the method for reconstructing a three-dimensional space scene based on photographs provided by the present invention further includes step S7: taking the room model obtained by the marking system in step S3 and ignoring the height information to obtain the two-dimensional outline of a single room; then, using the position and orientation information of each room obtained in step S5, setting the position and orientation of each room outline to complete the generation of a two-dimensional floor plan.
  • the photographing devices include mobile phones with fisheye lenses, panoramic cameras, cameras with fisheye lenses, as well as ordinary mobile phones and ordinary digital cameras.
  • the method restores the three-dimensional space model of the scene, containing both texture and size information without losing detail; the model has no holes caused by incomplete scanning and is not severely disturbed by furniture, interior finishes, and the like. Three-dimensional scenes can be edited and modified quickly and conveniently, while an undistorted two-dimensional floor plan with size information is generated at the same time. A wide range of shooting devices is supported, including but not limited to mobile-phone fisheye lenses, panoramic cameras, cameras with fisheye lenses, and ordinary mobile phones and ordinary digital cameras, at low cost.

Abstract

The invention discloses a method for reconstructing a three-dimensional space scene based on photographing, comprising the following steps: S1: import photos of all spaces and map each photo to three-dimensional space according to the direction and viewing angle at shooting time, so that when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is consistent with that at shooting time; S2: treat the room as a collection of planes, determine one plane first, and then determine all planes one by one through the relationships between planes and the intersection lines between planes; S3: mark the spatial structure of the room through a marking system and obtain size information; S4: build a three-dimensional space model of the room from the point coordinate information collected in S3. The invention restores the three-dimensional space model of the scene, containing both size information and texture information without losing detail, allows the three-dimensional scene to be edited and modified quickly and conveniently, and simultaneously generates an undistorted two-dimensional floor plan with size information.

Description

Method for reconstructing a three-dimensional space scene based on photographing — Technical Field
The present invention relates to a three-dimensional modeling method, and in particular to a method for reconstructing a three-dimensional space scene based on photographing.
Background Art
Three-dimensional space modeling is a technology that has developed and been applied rapidly in recent years, and is widely used in virtual reality, home decoration, interior design and other fields. Existing three-dimensional scene modeling generally adopts the following schemes:
1. Binocular Stereo Vision systems. Based on the parallax principle, an imaging device captures two images of the measured object from different positions, and the three-dimensional geometric information of the object is obtained by computing the positional deviation between corresponding points of the images. Fusing the images obtained by two eyes and observing the differences between them gives a clear sense of depth; correspondences between features are established, mapping the image points of the same physical point in space across different images. Although the shooting equipment is simple, reconstructing a three-dimensional space this way requires an extremely large number of photos and is very time-consuming, and a great deal of time is later needed to compute the model; once a problem appears in the model, repair is extremely tedious and cannot be performed by non-professionals. Therefore, although binocular stereo vision systems have existed for many years, they have not achieved large-scale adoption.
2. Laser point cloud technology uses TOF or structured-light ranging to obtain the spatial coordinates of each sampled point on an object's surface, yielding a massive set of points that expresses the spatial distribution and surface characteristics of the target; this set of points is called a "Point Cloud". Point cloud attributes include spatial resolution, point accuracy, surface normal vectors, and so on. However, laser point cloud technology on the one hand requires users to additionally purchase bulky, expensive and complicated point cloud equipment, and on the other hand produces massive data that is difficult to store and process; when multiple data sets need to be stitched, the huge data volume makes this take a very long time, with unsatisfactory results. Therefore, although point cloud technology has existed for many years, its adoption has been difficult.
From the above, a simple, easy-to-use, low-cost yet effective method to solve the above problems is still lacking.
Technical Problem
The technical problem to be solved by the present invention is to provide a method for reconstructing a three-dimensional space scene based on photographing that restores the three-dimensional space model of the scene, contains both texture information and size information without losing detail, allows the three-dimensional scene to be edited and modified quickly and conveniently, and simultaneously generates an undistorted two-dimensional floor plan with size information.
Technical Solution
The technical solution adopted by the present invention to solve the above technical problem is to provide a method for reconstructing a three-dimensional space scene based on photographing, comprising the following steps: S1: import photos of all spaces, importing for each space a group of photos taken at the same shooting point that contains the main features of the space, and map the photos to three-dimensional space according to the direction and viewing angle at shooting time, so that when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is consistent with that at shooting time; S2: treat the room as a collection of planes, determine the first plane, and then determine all planes one by one through the relationships between planes and the intersection lines between planes; S3: mark the spatial structure of the room through a marking system and obtain size information; S4: build a three-dimensional space model of the room from the point coordinate information collected in S3.
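Step S1's invariant — every pixel keeps the viewing direction it had at shooting time — can be sketched for an equirectangular panorama. The pixel-to-direction convention below (image centre looking along +z, +y up) is an assumption for illustration, not mandated by the text:

```python
import math

def pixel_direction(x, y, width, height):
    """Unit viewing direction of an equirectangular panorama pixel.
    Assumed convention: image centre looks along +z, +y is up."""
    u = (x / width - 0.5) * 2.0 * math.pi   # yaw, in [-pi, pi)
    v = (0.5 - y / height) * math.pi        # pitch, in (-pi/2, pi/2]
    return (math.cos(v) * math.sin(u),      # x: right
            math.sin(v),                    # y: up
            math.cos(v) * math.cos(u))      # z: forward
```

Rendering the panorama on a sphere around the virtual camera with such a mapping reproduces the shooting-time view, which is what steps S2–S4 rely on.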
In the above method, step S1 further includes synthesizing the photos of each space into a 360-degree panorama, and then mapping the panorama to three-dimensional space so that, when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is consistent with that at shooting time.
In the above method, in step S2, methods for determining the first plane include: determining a plane by finding three perpendicularly intersecting wall lines on the plane, or by finding the four wall corners of the plane.
In the above method, in step S2, methods for determining the first plane further include: determining the position of a plane by recording the projection point of the camera lens on the plane, or by recording related points from which the projection point of the camera lens on the plane can be derived.
In the above method, before step S2 the method further includes: marking vertical correction lines on the photo to correct image tilt caused by a skewed shooting device. A line perpendicular to the floor in the photo is found as the vertical correction line, or the horizontal line of the photo is found and a line perpendicular to it is drawn as the vertical correction line; using the vertical correction line as a reference, the photo is rotated until the correction line is perpendicular to the actual horizontal plane; multiple vertical correction lines at different azimuths are obtained to complete the vertical correction.
In the above method, step S3 includes the following steps: S31: place the photo inside a high-precision sphere, set the camera lens position at the centre of the sphere, and restore the shooting viewing angle at the sphere's centre; S32: preset four frame points in the sphere, each containing an upper point and a lower point, and drag the frame points so that each corresponds to a wall-corner position in the actual room, forming the main frame structure of the room; or mark the positions of the lower wall corners in turn to form the floor outline, then, combining the vertical wall lines, find the corresponding upper corners and mark their positions to obtain the basic outline of the whole room; S33: compute the room height from the known camera shooting height and derive the room's outline dimensions; or locate a plane's outline dimensions by placing a ruler of known length and then derive the room's outline dimensions.
In the above method, in step S32, when a wall corner to be marked is occluded: if its wall lines are visible, the corner position is determined by the intersection of the two perpendicularly intersecting wall lines; if both the lower corner and its wall lines are occluded while the upper corner or wall line is visible, first determine the upper plane and upper corner, then scale the upper- and lower-corner positions proportionally while keeping the lower corner on the lower plane, thereby determining the corner position.
In the above method, step S32 further includes adding marker points to objects in the room, two marker points forming a marker line, extending the room walls and adding the basic object structures of the room, the basic objects including doors, open spaces, ordinary windows, bay windows, and stairs; for non-rectangular rooms, adding the marker points of a concave-convex wall structure to extend the room structure, the extended spatial structure being determined by adjusting the position and depth of the concave-convex wall; for walls of arbitrary free structure, adding free marker points to extend the wall structure arbitrarily; for duplex structures, adding the marker points of stair and stairwell structures and connecting the stairwells of two floors to link the upper and lower floors and extend the stair structure. Step S33 further includes: with the camera lens height known, or a ruler of known length placed in the scene, obtaining through the marking of objects in the room the scale between real-world sizes and model sizes, then scaling the whole model uniformly to compute the true sizes of the objects in the room.
The above method further includes step S5: comparing the door or open-space image information in photos of different rooms and connecting the rooms to obtain each room's spatial position and orientation. For each marked door or open space, the views seen through it are compared across photos to find matching pictures and identify the same door or open space; rooms sharing the same door or open space are connected. After all rooms are linked, the position and orientation of the connected rooms are computed: one room is selected as a connected room, its doors are traversed to find unconnected rooms, and each unconnected room's position and orientation are computed from the currently connected room's position and orientation combined with the position and orientation of the door or open space joining the two rooms; the room is then marked as connected. The search continues until no unconnected rooms remain and the connection of all rooms is complete.
The above method further includes step S6: segmenting the photos with the point coordinate information collected in S3 to obtain the room texture maps. Each face is first set as a polygon whose vertex information is a set of three-dimensional coordinates, and the size of the texture map is computed from the bounding rectangle of the vertex coordinates; the picture's pixels are traversed to obtain the spatial coordinate point on the polygon corresponding to each pixel; traversing all pixels completes the texture map of a single face, and the texture maps of all faces of the room are completed in turn.
The above method further includes step S7: taking the room model obtained by the marking system in step S3 and ignoring the height information to obtain the two-dimensional outline of a single room; then, using the position coordinates and orientation information of each room obtained in step S5, setting the position and orientation of each room outline to complete the generation of a two-dimensional floor plan.
In the above method, the photographing devices include mobile phones with fisheye lenses, panoramic cameras, cameras with fisheye lenses, as well as ordinary mobile phones and ordinary digital cameras.
Beneficial Effects
Compared with the prior art, the present invention has the following beneficial effects: the proposed method restores the three-dimensional space model of the scene, containing both texture and size information without losing detail; the model has no holes caused by incomplete scanning and is not severely disturbed by furniture, interior finishes, and the like. Three-dimensional scenes can be edited and modified quickly and conveniently, while an undistorted two-dimensional floor plan with size information is generated at the same time. A wide range of shooting devices is supported, including but not limited to mobile-phone fisheye lenses, panoramic cameras, cameras with fisheye lenses, and ordinary mobile phones and ordinary digital cameras, at low cost.
Brief Description of the Drawings
Fig. 1 is a flowchart of the method for reconstructing a three-dimensional space scene based on photographing in an embodiment of the present invention;
Fig. 2 is a schematic diagram of determining a plane in an embodiment of the present invention;
Fig. 3 is a schematic diagram of determining the first plane by finding three perpendicularly intersecting wall lines on the plane in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the conversion from polar to rectangular coordinates in an embodiment of the present invention;
Fig. 5 is a schematic diagram of determining the first plane by finding the four wall corners of the plane in an embodiment of the present invention;
Fig. 6 is a schematic diagram of determining the first plane by recording related points from which the projection point of the camera lens on the plane is derived, in an embodiment of the present invention;
Fig. 7 is a schematic diagram of vertical correction in an embodiment of the present invention;
Fig. 8 is a schematic diagram of marking the room spatial structure with an added concave-convex wall structure in an embodiment of the present invention;
Fig. 9 is a schematic diagram of marking the room spatial structure with free marker points added to extend walls arbitrarily, in an embodiment of the present invention;
Fig. 10a is a side view from the camera toward a wall corner in an embodiment of the present invention;
Fig. 10b is a top view from the camera toward a wall corner in an embodiment of the present invention;
Fig. 11 is a schematic diagram of estimating outline dimensions by placing a ruler of known length on the floor, in an embodiment of the present invention;
Fig. 12 is a schematic diagram of estimating outline dimensions by placing a ruler of known length at an angle to the floor, in an embodiment of the present invention.
Embodiments of the Present Invention
To make the above objects, features and beneficial effects of the present invention more apparent and comprehensible, specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of the method for reconstructing a three-dimensional space scene based on photographing of the present invention.
Referring to Fig. 1, the method for reconstructing a three-dimensional space scene based on photographing provided by the present invention comprises the following steps: S1: import photos of all spaces, importing for each space a group of photos taken at the same shooting point that contains the main features of the space, and map the photos to three-dimensional space according to the direction and viewing angle at shooting time, so that when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is consistent with that at shooting time; S2: treat the room as a collection of planes, determine the first plane, and then determine all planes one by one through the relationships between planes and the intersection lines between planes; S3: mark the spatial structure of the room through a marking system and obtain size information; S4: build a three-dimensional space model of the room from the point coordinate information collected in S3.
Preferably, step S1 further includes synthesizing the photos of each space into a 360-degree panorama, and then mapping the panorama to three-dimensional space so that, when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is consistent with that at shooting time.
In step S2 of the method provided by the present invention, methods for determining the first plane include: determining a plane by finding three perpendicularly intersecting wall lines on the plane, or by finding the four wall corners of the plane. They further include: determining the position of a plane by recording the projection point of the camera lens on the plane, or by recording related points from which the projection point of the camera lens on the plane can be derived.
Because the principle of mapping a photo to three-dimensional space is to keep each pixel consistent with its direction at shooting time, no information about the distance from a pixel to the shooting point (the lens position at shooting time) is recorded or provided. The basic principle of modeling is to treat the indoor model as a collection of planes (including the floor, walls and ceiling): a lower wall corner is the intersection point of three planes (the floor and two walls), an upper wall corner is the intersection point of three planes (the ceiling and two walls), and a wall line is the intersection line of two planes (walls). If the position of one plane can be located first, the corners and lines lying on it allow the other planes to be determined in turn, until all planes are restored and modeling is complete.
Referring to Fig. 2, if the floor S1 can be determined first, the wall S2 can be determined from the known perpendicular relationship between wall S2 and floor S1 together with their intersection (wall) line L1. Likewise, wall S3 and the positions of all mutually perpendicular walls in the space can be determined in turn through wall line L2.
Specifically, method a (the adjacent-right-angle method) determines a plane through three perpendicularly intersecting lines on the plane; its premise is that most indoor walls are composed of rectangles.
Referring to Fig. 3, P is the camera lens position; the angles formed by P1P2P3 and by P2P3P4 are both right angles, so the plane containing P1P2P3P4 can be determined. In Fig. 3 a wall is observed in the photo, and P1, P2, P3 and P4 all lie on this wall: P2P3 is the edge of the wall perpendicular to the floor, P1P2 is the intersection line of the wall and the ceiling, and P3P4 is the intersection line of the wall and the floor; P1 is a point on the wall-ceiling line other than P2, and P4 is a point on the wall-floor line other than P3. The coordinates of the four points relative to the observation point are expressed in polar form as $(r_i, \theta_i, \varphi_i)$; clearly the radius $r_i$ is unknown, while the other two values can be obtained by observation.
Referring to Fig. 4, the conversion from polar to rectangular coordinates is defined as

  $P_i = (x_i, y_i, z_i) = (r_i \sin\theta_i \cos\varphi_i,\ r_i \sin\theta_i \sin\varphi_i,\ r_i \cos\theta_i)$   (1)

Define the unit direction vector

  $u_i = (\sin\theta_i \cos\varphi_i,\ \sin\theta_i \sin\varphi_i,\ \cos\theta_i)$   (2)

so that each point is

  $P_i = r_i u_i$   (3)

and a segment vector is

  $\overrightarrow{P_i P_j} = r_j u_j - r_i u_i$   (4)

Referring to Fig. 3, since P1P2 is perpendicular to P2P3, the dot product of the two segment vectors is zero:

  $(r_2 u_2 - r_1 u_1) \cdot (r_3 u_3 - r_2 u_2) = 0$   (5)

Similarly, since P3P4 is perpendicular to P2P3,

  $(r_4 u_4 - r_3 u_3) \cdot (r_3 u_3 - r_2 u_2) = 0$   (6)

Further, P1P2 is parallel to P3P4, so the dot product of the two vectors equals the product of their magnitudes:

  $(r_2 u_2 - r_1 u_1) \cdot (r_4 u_4 - r_3 u_3) = |r_2 u_2 - r_1 u_1|\,|r_4 u_4 - r_3 u_3|$   (7)

Writing $c_{ij} = u_i \cdot u_j$ (observable) and using $u_i \cdot u_i = 1$, expanding (5), (6) and (7) gives

  $r_2 r_3 c_{23} - r_2^2 - r_1 r_3 c_{13} + r_1 r_2 c_{12} = 0$   (8)

  $r_3 r_4 c_{34} - r_3^2 - r_2 r_4 c_{24} + r_2 r_3 c_{23} = 0$   (9)

  $r_2 r_4 c_{24} - r_2 r_3 c_{23} - r_1 r_4 c_{14} + r_1 r_3 c_{13} = \sqrt{r_2^2 - 2 r_1 r_2 c_{12} + r_1^2}\ \sqrt{r_4^2 - 2 r_3 r_4 c_{34} + r_3^2}$   (10)

We have three equations but four unknowns, so the system cannot be solved directly; however, we may set $r_1 = 1$ and solve for the ratios of the other radii to $r_1$. The plane found in this way differs from the true plane only by a uniform scale about the camera position. Substituting $r_1 = 1$ into (8) gives

  $r_2 r_3 c_{23} - r_2^2 - r_3 c_{13} + r_2 c_{12} = 0$   (11)

from which

  $r_3 = \dfrac{r_2^2 - r_2 c_{12}}{r_2 c_{23} - c_{13}}$   (12)

Similarly, from (9),

  $r_4 = \dfrac{r_3^2 - r_2 r_3 c_{23}}{r_3 c_{34} - r_2 c_{24}}$   (13)

Substituting (12) and (13) into (10) yields an equation in the single unknown $r_2$   (14)

Squaring both sides of (14) and clearing the denominators gives a quartic equation in $r_2$   (15)

Solving this quartic yields the four values of $r_2$. The root formula of the general quartic equation can be consulted at Baidu Baike and is not expanded here:
https://baike.***.com/item/%E4%B8%80%E5%85%83%E5%9B%9B%E6%AC%A1%E6%96%B9%E7%A8%8B%E6%B1%82%E6%A0%B9%E5%85%AC%E5%BC%8F/10721996?fr=aladdin
The quartic equation is also described on Wikipedia and is not expanded here: https://zh.wikipedia.org/wiki/%E5%9B%9B%E6%AC%A1%E6%96%B9%E7%A8%8B
Among the four values of $r_2$, only positive real values are meaningful, but more than one positive real value may exist.
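The derivation above can be checked numerically. Rather than expanding the quartic (15) symbolically, the sketch below solves an equivalent problem: the wall normal has two degrees of freedom, depths are proportional to $1/(u_i \cdot m)$, and constraints (5)–(7) all vanish at the true normal, which a coarse-to-fine grid search locates. The synthetic wall and the search scheme are illustrative assumptions, not the patent's procedure:

```python
import numpy as np

def unit(v):
    v = np.asarray(v, float)
    return v / np.linalg.norm(v)

def solve_wall(dirs):
    """Depth ratios r_i / r_1 of four wall points P1..P4 (right angles at
    P2 and P3, P1P2 parallel to P3P4) from their unit view directions.
    With depths r_i = 1 / (u_i . m) for a candidate normal m, the
    residual of constraints (5), (6), (7) is zero at the true normal."""
    def residual(m):
        if min(np.dot(u, m) for u in dirs) < 0.1:
            return np.inf                       # wall must face the camera
        P = [u / np.dot(u, m) for u in dirs]
        g1 = np.dot(P[1] - P[0], P[2] - P[1])             # constraint (5)
        g2 = np.dot(P[3] - P[2], P[2] - P[1])             # constraint (6)
        g3 = np.cross(P[1] - P[0], P[3] - P[2])           # (7) as parallelism
        return g1 * g1 + g2 * g2 + np.dot(g3, g3)

    best, best_m = np.inf, None
    th0, ph0, span = np.pi / 2, 0.0, np.pi
    for _ in range(4):                          # coarse-to-fine refinement
        for th in np.linspace(th0 - span, th0 + span, 41):
            for ph in np.linspace(ph0 - span, ph0 + span, 41):
                m = np.array([np.sin(th) * np.cos(ph),
                              np.sin(th) * np.sin(ph), np.cos(th)])
                r = residual(m)
                if r < best:
                    best, best_m, cand = r, m, (th, ph)
        th0, ph0 = cand
        span /= 8.0
    r = np.array([1.0 / np.dot(u, best_m) for u in dirs])
    return r / r[0], best

# synthetic check: wall plane x = 2 seen from a camera at the origin
P_true = [np.array(p, float) for p in [(2, 2, 1), (2, -1, 1), (2, -1, -2), (2, 2, -2)]]
ratios, res = solve_wall([unit(p) for p in P_true])
```

When several candidate normals satisfy the constraints (multiple positive real roots of the quartic), the correct one is selected as the text describes, by checking consistency with the adjacent planes.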
The same method can be used to determine the other planes. To find the correct solution among the multiple possibilities for each plane, the following procedure is used: determine the first plane S1 with the adjacent-right-angle method; use one positive real solution to fix the position of plane S1; select an adjacent plane S2, determine it with the adjacent-right-angle method, and fix its position with one positive real solution; repeat the previous step to determine the remaining planes one by one until all are complete. If no positive real solution can be found for some plane, or its intersection line with an adjacent plane is wrong, backtrack to the previous plane, use its next positive real solution, and repeat the preceding steps to determine all planes. Since all the planes genuinely exist in space, this procedure can determine the positions of all planes. The above assumes that two adjacent right angles can be found on every plane. If an individual plane does not satisfy this condition, it can be determined from an adjacent plane whose position has already been fixed and its intersection line with that plane. As in Fig. 2, assuming S2 is perpendicular to S1, S2 is uniquely determined by the position of S1 and the intersection line L1 of S1 and S2.
Specifically, method b (the rectangle method) determines a plane by assuming a wall is rectangular and finding its four corners. In a room the great majority of walls are rectangular, so it is natural to use the four vertices of a rectangle to fix the wall's position. The rectangle method is a special case of the adjacent-right-angle method.
Referring to Fig. 5, in the rectangle method segment P1P4 must be parallel to segment P2P3, whereas in the adjacent-right-angle method P1 can be any point on the line through P1P2 and P4 can likewise be any point on the line through P3P4. The solution of the rectangle method is similar to that of the adjacent-right-angle method and is not repeated here.
Specifically, method c (the projection method) determines a plane through the camera's projection onto it. If the projection of the camera lens onto some plane is recorded at shooting time, the position of that plane can be uniquely determined from it (with the camera-to-plane distance known or assumed).
As shown in Fig. 2, the line between the camera position and its projection is perpendicular to the floor S1; if the camera height is known or assumed, the floor position is determined. In practice, ways to obtain/record the projection point include: shooting with the camera on a tripod or support standing perpendicular to the floor, the centre point where the tripod/support meets the floor being the projection point; holding the tripod/support against and perpendicular to a wall, the centre point on the wall being the projection point; or marking the projection point. For the other planes: after the first plane S1 is determined with the projection method, an adjacent plane S2 is selected and assumed perpendicular to S1, so that S2 is uniquely determined by the position of S1 and the intersection line L1 of S1 and S2; all planes are determined in turn in the same way. In practice nearly all walls are perpendicular to the floor or ceiling, so if the first plane is a wall, the floor or ceiling is chosen as the first adjacent plane in the above steps, after which all walls are determined in turn.
Specifically, method d (the tilt projection method) records related points from which the projection point of the camera lens on the plane is derived. It is similar to the projection method, except that what is recorded is not the projection point itself but points from which the projection point can be computed. For example: the tripod/support is placed on the floor (base on the floor) but not perpendicular to it; or the tripod/support is held against a wall (base on the wall) but not perpendicular to it. When the tripod/support is not perpendicular to the plane, its projection on the plane is a line segment.
Referring to Fig. 6, P is the camera lens position, P1 is the centre of the support base, P2 is the projection point on the plane, and P1P2 is the projection of the support P1P3 on the plane. If the direction of the projection line P1P2 relative to the base is known (determined from the relation of the lens direction to the base, or from a mark on the base whose relation to the support is fixed and known in advance), then with the known support tilt angle P3P1P2 and the known height of the support (or an assumed height), the position of the projection point P2 can be computed; from the projection point P2, all planes can be determined with the projection method.
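Under method d, once the tilt angle ∠P3P1P2 and the azimuth of the projection line are known, P2 follows from elementary trigonometry. The sketch below assumes a support of known length standing on the floor; the parameter names are illustrative:

```python
import math

def projection_point(base_xy, support_len, tilt_from_floor, azimuth):
    """Floor projection P2 of the camera lens for a tilted support.
    base_xy: support base P1 on the floor; tilt_from_floor: the known
    angle P3P1P2 between the support and the floor; azimuth: the known
    direction of the projection line P1P2 in the floor plane."""
    ground = support_len * math.cos(tilt_from_floor)  # length of P1P2
    height = support_len * math.sin(tilt_from_floor)  # camera height above floor
    p2 = (base_xy[0] + ground * math.cos(azimuth),
          base_xy[1] + ground * math.sin(azimuth))
    return p2, height
```

With P2 and the camera height in hand, the floor is fixed exactly as in method c.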
Preferably, before step S2 the method further includes: marking vertical correction lines on the photo to correct image tilt caused by a skewed shooting device. A line perpendicular to the floor in the photo is found as the vertical correction line, or the horizontal line of the photo is found and a line perpendicular to it is drawn as the vertical correction line; using the vertical correction line as a reference, the photo is rotated until the correction line is perpendicular to the actual horizontal plane; multiple vertical correction lines at different azimuths are obtained to complete the vertical correction.
Referring to Fig. 7, during vertical correction the two endpoints of the found segment (including vertical segments derived from horizontal lines) are projected onto the sphere, giving P1 and P2. Because the camera is tilted, the angle between the plane formed by the sphere centre O with P1 and P2 and the horizontal plane H is not a right angle. Take the midpoint P of P1P2 and project it onto H to obtain P'; using the line through O and P' as the axis, rotate the photo until the plane OP1P2 is perpendicular to the plane H.
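The single-line correction of Fig. 7 admits a closed form: given the marked segment's endpoints projected onto the unit sphere, the rotation about the axis O-P' that makes the plane OP1P2 perpendicular to H can be computed directly. This sketch handles one correction line (full correction repeats it with lines from other azimuths) and assumes z is the world "up" axis:

```python
import numpy as np

def correction_rotation(p1, p2):
    """Rotation about the axis O-P' (P' = horizontal projection of the
    midpoint of p1p2) that makes the plane O-p1-p2 perpendicular to the
    horizontal plane H (z = 0)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    n = np.cross(p1, p2)                    # normal of the plane O-p1-p2
    mid = (p1 + p2) / 2.0
    a = np.array([mid[0], mid[1], 0.0])     # axis O-P', horizontal
    a /= np.linalg.norm(a)
    # rotating n by angle t about the horizontal axis a gives
    # n'_z = n_z cos t + (a x n)_z sin t; pick t so that n'_z = 0,
    # i.e. the plane O-p1-p2 becomes perpendicular to H
    t = np.arctan2(-n[2], np.cross(a, n)[2])
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)  # Rodrigues
```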
In the method provided by the present invention, step S3 includes the following steps: S31: place the photo inside a high-precision sphere, set the camera lens position at the centre of the sphere, and restore the shooting viewing angle at the sphere's centre; S32: preset four frame points in the sphere, each containing an upper point and a lower point, and drag the frame points so that each corresponds to a wall-corner position in the actual room, forming the main frame structure of the room; or mark the positions of the lower wall corners in turn to form the floor outline, then, combining the vertical wall lines, find the corresponding upper corners and mark their positions to obtain the basic outline of the whole room; S33: compute the room height from the known camera shooting height and derive the room's outline dimensions; or locate a plane's outline dimensions by placing a ruler of known length and then derive the room's outline dimensions.
In specific implementations, step S32 can mark the room's spatial structure and objects in several ways; the steps of one marking method (the frame method) are as follows:
Referring to Figs. 8 and 9, four frame points (P1, P2, P3 and P4) preset on plane S1 in the scene form a rectangle; the rectangle's length and width can be changed, but the rectangular relation must be maintained. Each frame point corresponds to an upper frame point (P1a, P2a, P3a and P4a), and the line connecting each upper-lower pair (such as P1P1a) is perpendicular to plane S1. Each lower frame point is dragged to a wall-corner position in the space. When the room is a regular cuboid, the eight upper and lower frame points fully describe all planes of the cuboid, completing the spatial modeling.
For non-cuboid rooms, the room structure can be quickly extended by adding the four marker points of a concave-convex wall structure, such as P5, P6, P7 and P8 in Fig. 8, with the extended spatial structure determined by adjusting the position and depth of the concave-convex wall. Since the concave-convex structures of most walls are still rectangular or at right angles (in Fig. 8, P5P6 is perpendicular to P2P5, P6P7 is perpendicular to both P5P6 and P7P8, and P7P8 is perpendicular to P8P3), this method models effectively.
For walls of arbitrary free structure, free marker points can be added to extend the wall structure arbitrarily. In Fig. 9, P7 can be any point on plane S1, and P5P7 need not be perpendicular to P2P5, P5P7 need not be perpendicular to P7P8, and P7P8 need not be perpendicular to P8P3.
When a wall corner is occluded by an object, an exposed wall line is generally used as the reference. The position of a wall line can be determined by the line connecting frame points; for a wall corner, once its vertical and horizontal wall lines are determined, the corner's position is confirmed. In Fig. 8, if P4 is occluded but parts of P1P4 and P3P4 are visible, the position of P4 can still be obtained from the intersection of the two lines.
Two marker points form a marker line; by clicking a marker line, the room walls are extended and the basic object structures of various rooms are added, such as doors, open spaces, ordinary windows, bay windows, and stairs. Basic object structures can also be added on wall surfaces.
For duplex structures, stairs and stairwells can be added and the stairwells of two floors connected to link the upper and lower floors; further floors are extended by analogy.
The above marking steps yield a marker-point-based data structure that records the spatial information of the room wall structure and the main room objects (such as doors, doorways, open spaces, ordinary windows, bay windows, and stairs), and is stored locally in text form.
In specific implementations, step S32 can also form the outline of one plane by marking the position of every wall corner on that plane; the other planes are then confirmed step by step through wall lines and corners until all planes are marked and the room outline is formed (the corner-by-corner marking method). Taking marking the floor first as an example, the specific steps are as follows:
Mark the positions of the wall corners in the figure one by one so that they form a complete floor outline. Since the floor already gives the lower-corner information of every wall, and the ceiling outline is generally the same as the floor outline, combining the vertical wall lines means the room outline can be restored once the room height is confirmed. Select one lower corner, find its corresponding upper corner, and mark the upper corner's position to obtain the room height and finally the basic outline of the whole room. In Fig. 8, marking P1, P2, P3, P4, P5, P6, P7 and P8 on plane S1 describes the outline of S1; drawing the perpendicular to S1 from each point then yields S2, S3 and the other planes.
In specific implementations, the first method of determining sizes in step S33 computes the room height from the known camera shooting height and derives the room's outline dimensions.
Referring to Figs. 10a and 10b, the basic computation of corner position and room height is as follows:
Position of a wall corner: given the camera height h1 and the pitch angle φ1 from the camera down to the corner P1, d = tan φ1 · h1; given the azimuth θ toward the lower corner, the top view gives the corner coordinates P1 (x, y, z) = (d · sin θ, −h1, d · cos θ).
Room height: given the pitch angle φ2 from the camera toward the upper corner and the camera height h1, with the upper corner P2 on the same vertical line as the lower corner P1, h2 = d / tan(180° − φ2), and the room height is h = h1 + h2.
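The two formulas above translate directly into code. Angles are taken as polar angles measured from the downward vertical (so φ2 > 90° for the upper corner), which is one reading consistent with d = tan φ1 · h1 and h2 = d / tan(180° − φ2); function names are illustrative:

```python
import math

def corner_position(h1, phi1, theta):
    """Lower wall corner from camera height h1, angle phi1 down to the
    corner and azimuth theta (Figs. 10a/10b)."""
    d = math.tan(phi1) * h1                       # horizontal distance
    return (d * math.sin(theta), -h1, d * math.cos(theta))

def room_height(h1, d, phi2):
    """Room height: the upper corner lies on the same vertical as the
    lower one, so h2 = d / tan(pi - phi2) and h = h1 + h2."""
    return h1 + d / math.tan(math.pi - phi2)
```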
In specific implementations, the second method of determining sizes in step S33 locates the outline dimensions of some plane (floor/wall) by placing a ruler of known length, and from them infers the outline dimensions of the other planes.
Referring to Fig. 11, a ruler is placed on the floor; the camera height (distance to the floor) can be computed from the known ruler length, and the subsequent steps are the same as method one. The camera height is unknown; assume it is h. The true length of the floor ruler R is known to be Lr, while the length measured in the coordinate system is Lr', with h and Lr' in the same coordinate system and the same unit; the true camera height is then h' = Lr / Lr' × h. The ruler can also be placed flat against a wall to compute the camera-to-wall distance, with the floor position then determined from the lower wall corners and the subsequent steps the same as method one; in Fig. 11 the ruler R is placed on wall S2. The common point is that once the relative size of the first located plane is known (its size being an arbitrary preset positive default value), the ruler lies on that plane, so a scale can be computed from the ruler's length Lr' measured in the coordinate system and its real-world length Lr, namely the ratio r = Lr / Lr', which expresses the real-world length corresponding to a unit length in the coordinate system. If any edge of the plane has length L' in the coordinate system, its true length is L = L' × r, so the true lengths of all edges in the coordinate system become known.
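The scale recovery described above is a one-line computation; the sketch below applies the ratio r = Lr / Lr' to all model edge lengths, with names chosen for illustration:

```python
def apply_ruler_scale(ruler_true_len, ruler_model_len, model_lengths):
    """True lengths L = L' * r for every model length L',
    with r = Lr / Lr' from a ruler lying on the located plane."""
    r = ruler_true_len / ruler_model_len
    return [length * r for length in model_lengths]
```

For the camera height, h' = Lr / Lr' × h is the same ratio applied to the assumed height h.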
Referring to Fig. 12, the ruler need not lie flat on the floor or a wall. As long as it forms a fixed angle with some plane, the length of its projection can be computed by trigonometric functions and the plane's position determined from that. Taking the floor as an example, the ruler is Pr1Pr2 with known true length, and the fixed angle between the ruler and the plane is known to be θ, i.e. ∠Pr1. First, the true length Lr of the projection Pr1Pr2' is determined from the angle θ and the ruler's projection on the floor; then the projection Pr1Pr2' is measured in the coordinate system as Lr', and the subsequent computation is the same as the previous method, the ratio r = Lr / Lr' determining the true lengths of all other edges.
Specifically, in step S32, when a wall corner to be marked is occluded: if its wall lines are visible, the corner position is determined by the intersection of the two perpendicularly intersecting wall lines; if both the lower corner and its wall lines are occluded while the upper corner or wall line is visible, first determine the upper plane and upper corner, then scale the upper- and lower-corner positions proportionally while keeping the lower corner on the lower plane, thereby determining the corner position.
The method provided by the present invention further includes step S5: comparing the door or open-space image information in photos of different rooms and connecting the rooms to obtain each room's spatial position and orientation. For each marked door or open space, the views seen through it are compared across photos to find matching pictures and identify the same door or open space; rooms sharing the same door or open space are connected. After all rooms are linked, the position and orientation of the connected rooms are computed: one room is selected as a connected room, its doors are traversed to find unconnected rooms, and each unconnected room's position and orientation are computed from the currently connected room's position and orientation combined with the position and orientation of the door or open space joining the two rooms; the room is then marked as connected. The search continues until no unconnected rooms remain and the connection of all rooms is complete.
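The traversal in step S5 can be sketched as a breadth-first walk over matched doors. The data layout — each shared door given as its position and facing angle in both rooms' local frames — is an illustrative assumption:

```python
import math

def connect_rooms(room_names, doors):
    """Walk matched doors (step S5) to give every room a world pose.
    `doors` holds one entry per shared door: (room_a, pos_a, ang_a,
    room_b, pos_b, ab): the door's local position and facing angle
    in each of the two rooms it joins."""
    def to_world(pose, p):
        x, y, a = pose
        return (x + p[0] * math.cos(a) - p[1] * math.sin(a),
                y + p[0] * math.sin(a) + p[1] * math.cos(a))

    pose = {name: None for name in room_names}
    pose[room_names[0]] = (0.0, 0.0, 0.0)       # seed room at the origin
    queue = [room_names[0]]
    while queue:                                 # breadth-first over doors
        cur = queue.pop(0)
        for ra, pa, aa, rb, pb, ab in doors:
            if rb == cur:                        # doors connect both ways
                ra, pa, aa, rb, pb, ab = rb, pb, ab, ra, pa, aa
            if ra != cur or pose[rb] is not None:
                continue
            dx, dy = to_world(pose[cur], pa)     # door in world coordinates
            door_ang = pose[cur][2] + aa
            rb_ang = door_ang + math.pi - ab     # other side faces back
            bx, by = to_world((0.0, 0.0, rb_ang), pb)
            pose[rb] = (dx - bx, dy - by, rb_ang)
            queue.append(rb)
    return pose
```

The same traversal extends to open spaces; a room keeps pose `None` until some already-connected room shares a door with it.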
The method provided by the present invention further includes step S6: segmenting the photos with the point coordinate information collected in S3 to obtain the room texture maps. Each face is first set as a polygon whose vertex information is a set of three-dimensional coordinates, and the size of the texture map is computed from the bounding rectangle of the vertex coordinates; the picture's pixels are traversed to obtain the spatial coordinate point on the polygon corresponding to each pixel; traversing all pixels completes the texture map of a single face, and the texture maps of all faces of the room are completed in turn.
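Step S6 can be sketched for a single rectangular face cut out of an equirectangular panorama: size the map from the bounding rectangle of the vertices, then sample the panorama along each texel's viewing direction. The quad-face restriction and pixel conventions are assumptions of this sketch:

```python
import numpy as np

def face_texture(pano, corners, res_u=64):
    """Texture of one face (step S6): size the map from the bounding
    rectangle of the vertices, then sample the equirectangular panorama
    along each pixel's viewing direction (camera at the origin)."""
    h, w = pano.shape[:2]
    p0, p1, p3 = (np.asarray(corners[i], float) for i in (0, 1, 3))
    eu, ev = p1 - p0, p3 - p0                    # bounding-rectangle edges
    res_v = max(1, round(res_u * np.linalg.norm(ev) / np.linalg.norm(eu)))
    tex = np.zeros((res_v, res_u) + pano.shape[2:], pano.dtype)
    for j in range(res_v):
        for i in range(res_u):
            p = p0 + eu * (i + 0.5) / res_u + ev * (j + 0.5) / res_v
            d = p / np.linalg.norm(p)            # viewing direction
            yaw = np.arctan2(d[0], d[2])
            pitch = np.arcsin(np.clip(d[1], -1.0, 1.0))
            x = int((yaw / (2 * np.pi) + 0.5) * (w - 1))
            y = int((0.5 - pitch / np.pi) * (h - 1))
            tex[j, i] = pano[y, x]
    return tex
```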
The method provided by the present invention further includes step S7: taking the room model obtained by the marking system in step S3 and ignoring the height information to obtain the two-dimensional outline of a single room; then, using the position coordinates and orientation information of each room obtained in step S5, setting the position and orientation of each room outline to complete the generation of a two-dimensional floor plan.
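Step S7 then reduces to dropping the height coordinate and applying each room's pose from step S5; a minimal sketch, with the data layout assumed:

```python
import math

def floor_plan(rooms):
    """2-D outlines (step S7): drop each corner's height (y) and apply
    the room's world pose (x, y, rotation) recovered in step S5.
    rooms: name -> (corners_3d, pose)."""
    plan = {}
    for name, (corners, (px, py, ang)) in rooms.items():
        plan[name] = [(px + cx * math.cos(ang) - cz * math.sin(ang),
                       py + cx * math.sin(ang) + cz * math.cos(ang))
                      for cx, _, cz in corners]      # height ignored
    return plan
```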
Specifically, the photographing devices include mobile phones with fisheye lenses, panoramic cameras, cameras with fisheye lenses, as well as ordinary mobile phones and ordinary digital cameras.
In summary, the method for reconstructing a three-dimensional space scene based on photographing proposed by the present invention restores the three-dimensional space model of the scene, containing both texture and size information without losing detail; the model has no holes caused by incomplete scanning and is not severely disturbed by furniture, interior finishes, and the like. Three-dimensional scenes can be edited and modified quickly and conveniently, while an undistorted two-dimensional floor plan with size information is generated at the same time. A wide range of shooting devices is supported, including but not limited to mobile-phone fisheye lenses, panoramic cameras, cameras with fisheye lenses, and ordinary mobile phones and ordinary digital cameras, at low cost.
Although the present invention is disclosed above by preferred embodiments, they are not intended to limit the present invention; any person skilled in the art may make some modifications and improvements without departing from the spirit and scope of the present invention, so the protection scope of the present invention shall be as defined by the claims.

Claims (10)

  1. A method for reconstructing a three-dimensional space scene based on photographing, characterized by comprising the following steps:
    S1: importing photos of all spaces, importing for each space a group of photos taken at the same shooting point that contains the main features of the space, and mapping the photos to three-dimensional space according to the direction and viewing angle at shooting time, so that when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is consistent with that at shooting time;
    S2: treating the room as a collection of planes, determining the first plane, and then determining all planes one by one through the relationships between planes and the intersection lines between planes;
    S3: marking the spatial structure of the room through a marking system and obtaining size information;
    S4: building a three-dimensional space model of the room from the point coordinate information collected in S3.
  2. The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that step S1 further comprises synthesizing the photos of each space into a 360-degree panorama, and then mapping the panorama to three-dimensional space so that, when viewed from the camera position in three-dimensional space, the viewing direction of each pixel is consistent with that at shooting time.
  3. The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that, in step S2, methods for determining the first plane comprise: determining a plane by finding three perpendicularly intersecting wall lines on the plane, or by finding the four wall corners of the plane.
  4. The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that, in step S2, methods for determining the first plane further comprise: determining the position of a plane by recording the projection point of the camera lens on the plane, or by recording related points from which the projection point of the camera lens on the plane can be derived.
  5. The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that, before step S2, the method further comprises: marking vertical correction lines on the photo to correct image tilt caused by a skewed shooting device; finding a line perpendicular to the floor in the photo as the vertical correction line, or finding the horizontal line of the photo and drawing a line perpendicular to it as the vertical correction line; using the vertical correction line as a reference, rotating the photo until the correction line is perpendicular to the actual horizontal plane; and obtaining multiple vertical correction lines at different azimuths to complete the vertical correction.
  6. The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that step S3 comprises the following steps:
    S31: placing the photo inside a high-precision sphere, setting the camera lens position at the centre of the sphere, and restoring the shooting viewing angle at the sphere's centre;
    S32: presetting four frame points in the sphere, each containing an upper point and a lower point, and dragging the frame points so that each corresponds to a wall-corner position in the actual room, forming the main frame structure of the room; or marking the positions of the lower wall corners in turn to form the floor outline, then, combining the vertical wall lines, finding the corresponding upper corners and marking their positions to obtain the basic outline of the whole room;
    S33: computing the room height from the known camera shooting height and deriving the room's outline dimensions; or locating a plane's outline dimensions by placing a ruler of known length and then deriving the room's outline dimensions.
  7. The method for reconstructing a three-dimensional space scene based on photographing according to claim 6, characterized in that, in step S32, when a wall corner to be marked is occluded: if its wall lines are visible, the corner position is determined by the intersection of the two perpendicularly intersecting wall lines; if both the lower corner and its wall lines are occluded while the upper corner or wall line is visible, the upper plane and upper corner are determined first, then the upper- and lower-corner positions are scaled proportionally while keeping the lower corner on the lower plane, thereby determining the corner position.
  8. The method for reconstructing a three-dimensional space scene based on photographing according to claim 6, characterized in that step S32 further comprises adding marker points to objects in the room, two marker points forming a marker line, extending the room walls and adding the basic object structures of the room, the basic objects comprising doors, open spaces, ordinary windows, bay windows, and stairs; for non-rectangular rooms, adding the marker points of a concave-convex wall structure to extend the room structure, the extended spatial structure being determined by adjusting the position and depth of the concave-convex wall; for walls of arbitrary free structure, adding free marker points to extend the wall structure arbitrarily; for duplex structures, adding the marker points of stair and stairwell structures and connecting the stairwells of two floors to link the upper and lower floors and extend the stair structure; and that step S33 further comprises: with the camera lens height known, or a ruler of known length placed in the scene, obtaining through the marking of objects in the room the scale between real-world sizes and model sizes, then scaling the whole model uniformly to compute the true sizes of the objects in the room.
  9. The method for reconstructing a three-dimensional space scene based on photographing according to claim 8, characterized by further comprising step S5: comparing the door or open-space image information in photos of different rooms and connecting the rooms to obtain each room's spatial position and orientation; for each marked door or open space, comparing the views seen through it across photos to find matching pictures and identify the same door or open space, and connecting rooms sharing the same door or open space; after all rooms are linked, computing the position and orientation of the connected rooms: selecting one room as a connected room, traversing its doors to find unconnected rooms, computing each unconnected room's position and orientation from the currently connected room's position and orientation combined with the position and orientation of the door or open space joining the two rooms, and then marking the unconnected room as connected; continuing to search for unconnected rooms until no unconnected rooms remain, completing the connection of all rooms.
  10. The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized by further comprising step S6: segmenting the photos with the point coordinate information collected in S3 to obtain the room texture maps; setting each face as a polygon whose vertex information is a set of three-dimensional coordinates, computing the size of the texture map from the bounding rectangle of the vertex coordinates; traversing the picture's pixels to obtain the spatial coordinate point on the polygon corresponding to each pixel; traversing all pixels to complete the texture map of a single face; and completing the texture maps of all faces of the room in turn.
  11. The method for reconstructing a three-dimensional space scene based on photographing according to claim 9, characterized by further comprising step S7: taking the room model obtained by the marking system in step S3 and ignoring the height information to obtain the two-dimensional outline of a single room; and, using the position coordinates and orientation information of each room obtained in step S5, setting the position and orientation of each room outline to complete the generation of a two-dimensional floor plan.
  12. The method for reconstructing a three-dimensional space scene based on photographing according to claim 1, characterized in that the photographing devices comprise mobile phones with fisheye lenses, panoramic cameras, cameras with fisheye lenses, ordinary mobile phones, and ordinary digital cameras.
PCT/CN2018/112554 2018-07-03 2018-10-30 一种基于拍照重建三维空间场景的方法 WO2020006941A1 (zh)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CA3103844A CA3103844C (en) 2018-07-03 2018-10-30 Method for reconstructing three-dimensional space scene based on photographing
JP2021520260A JP7162933B2 (ja) 2018-07-03 2018-10-30 オブジェクトの内部空間モデルを確立するための方法、装置及びシステム、並びに、コンピュータ装置及びコンピュータ可読記憶媒体
CN201880066029.6A CN111247561B (zh) 2018-07-03 2018-10-30 一种基于拍照重建三维空间场景的方法
KR1020207035296A KR20210008400A (ko) 2018-07-03 2018-10-30 촬영을 기반으로 3차원 공간 장면을 재구성하는 방법
GB2018574.0A GB2588030B (en) 2018-07-03 2018-10-30 Method for reconstructing three-dimensional space scene based on photographing
US16/588,111 US11200734B2 (en) 2018-07-03 2019-09-30 Method for reconstructing three-dimensional space scene based on photographing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810717163.XA CN108961395B (zh) 2018-07-03 2018-07-03 一种基于拍照重建三维空间场景的方法
CN201810717163.X 2018-07-03

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/588,111 Continuation US11200734B2 (en) 2018-07-03 2019-09-30 Method for reconstructing three-dimensional space scene based on photographing

Publications (1)

Publication Number Publication Date
WO2020006941A1 true WO2020006941A1 (zh) 2020-01-09

Family

ID=64485147

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/112554 WO2020006941A1 (zh) 2018-07-03 2018-10-30 一种基于拍照重建三维空间场景的方法

Country Status (7)

Country Link
US (1) US11200734B2 (zh)
JP (1) JP7162933B2 (zh)
KR (1) KR20210008400A (zh)
CN (2) CN108961395B (zh)
CA (1) CA3103844C (zh)
GB (1) GB2588030B (zh)
WO (1) WO2020006941A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465947A (zh) * 2020-11-18 2021-03-09 李刚 影像的虚拟空间建立方法及***
CN113379901A (zh) * 2021-06-23 2021-09-10 武汉大学 利用大众自拍全景数据建立房屋实景三维的方法及***
CN113487723A (zh) * 2021-06-23 2021-10-08 武汉微景易绘科技有限公司 基于可量测全景三维模型的房屋在线展示方法及***
CN115330943A (zh) * 2022-08-11 2022-11-11 北京城市网邻信息技术有限公司 多层空间三维建模方法、装置、设备和存储介质
CN117689846A (zh) * 2024-02-02 2024-03-12 武汉大学 线状目标的无人机摄影重建多交向视点生成方法及装置
WO2024108350A1 (zh) * 2022-11-21 2024-05-30 北京城市网邻信息技术有限公司 空间结构图和户型图生成方法、装置、设备和存储介质

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020102772A1 (en) * 2018-11-15 2020-05-22 Qualcomm Incorporated Coordinate estimation on n-spheres with spherical regression
CN109698951B (zh) * 2018-12-13 2021-08-24 歌尔光学科技有限公司 立体图像重现方法、装置、设备和存储介质
CN110675314B (zh) * 2019-04-12 2020-08-21 北京城市网邻信息技术有限公司 图像处理和三维对象建模方法与设备、图像处理装置及介质
CN111862302B (zh) * 2019-04-12 2022-05-17 北京城市网邻信息技术有限公司 图像处理和对象建模方法与设备、图像处理装置及介质
US11869148B2 (en) 2019-04-12 2024-01-09 Beijing Chengshi Wanglin Information Technology Co., Ltd. Three-dimensional object modeling method, image processing method, image processing device
CN110209864B (zh) * 2019-05-22 2023-10-27 刘鹏 三维立体模型测量改尺标注重新建模的网络平台***
CN110209001B (zh) * 2019-06-04 2024-06-14 上海亦我信息技术有限公司 一种用于3d建模的三脚支架及相机拍摄姿态识别方法
US20220358770A1 (en) * 2019-06-17 2022-11-10 Ariel Al, Ltd. Scene reconstruction in three-dimensions from two-dimensional images
US11508141B2 (en) * 2019-07-03 2022-11-22 Magic Leap, Inc. Simple environment solver using planar extraction
CN110633628B (zh) * 2019-08-02 2022-05-06 杭州电子科技大学 基于人工神经网络的rgb图像场景三维模型重建方法
GB2591857B (en) * 2019-08-23 2023-12-06 Shang Hai Yiwo Information Tech Co Ltd Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
CN110505463A (zh) * 2019-08-23 2019-11-26 上海亦我信息技术有限公司 基于拍照的实时自动3d建模方法
CN112712584B (zh) * 2019-10-25 2024-05-24 阿里巴巴集团控股有限公司 空间建模方法、装置、设备
CN111210028B (zh) * 2019-12-05 2022-12-02 万翼科技有限公司 房间模型核查方法、装置、计算机设备和存储介质
CN111079619B (zh) * 2019-12-10 2023-04-18 北京百度网讯科技有限公司 用于检测图像中的目标对象的方法和装置
CN113240769B (zh) * 2019-12-18 2022-05-10 北京城市网邻信息技术有限公司 空间链接关系识别方法及装置、存储介质
CN111127655B (zh) * 2019-12-18 2021-10-12 北京城市网邻信息技术有限公司 房屋户型图的构建方法及构建装置、存储介质
CN111223177B (zh) * 2019-12-18 2020-12-04 北京城市网邻信息技术有限公司 三维空间的三维模型的构建方法和装置、存储介质
CN111207672B (zh) * 2019-12-31 2021-08-17 上海简家信息技术有限公司 一种ar量房方法
WO2021142787A1 (zh) * 2020-01-17 2021-07-22 上海亦我信息技术有限公司 行进路线及空间模型生成方法、装置、***
CN111325662A (zh) * 2020-02-21 2020-06-23 广州引力波信息科技有限公司 一种基于球面投影全景图生成3d空间户型模型的方法
CN111508067B (zh) * 2020-04-15 2024-01-30 中国人民解放军国防科技大学 一种基于垂直平面和垂直线的轻量级室内建模方法
CN111210512B (zh) * 2020-04-17 2020-07-21 中联重科股份有限公司 物体的三维抽象模型建立方法、装置、存储介质和处理器
CN111583417B (zh) * 2020-05-12 2022-05-03 北京航空航天大学 一种图像语义和场景几何联合约束的室内vr场景构建的方法、装置、电子设备和介质
CN111830966B (zh) * 2020-06-04 2023-12-19 深圳市无限动力发展有限公司 角落识别和清扫方法、装置及存储介质
CN111698424A (zh) * 2020-06-22 2020-09-22 四川易热科技有限公司 一种通过普通相机补全实景漫游3d信息的方法
CN111859510A (zh) * 2020-07-28 2020-10-30 苏州金螳螂三维软件有限公司 房间快速换装方法、智能终端
CN112055192B (zh) * 2020-08-04 2022-10-11 北京城市网邻信息技术有限公司 图像处理方法、图像处理装置、电子设备及存储介质
CN111951388A (zh) * 2020-08-14 2020-11-17 广东申义实业投资有限公司 室内装修设计用图像拍摄处理装置及图像拍摄处理方法
CN112132163B (zh) * 2020-09-21 2024-04-02 杭州睿琪软件有限公司 识别对象边缘的方法、***及计算机可读存储介质
EP4229552A4 (en) * 2020-10-13 2024-03-06 Flyreel, Inc. GENERATION OF MEASUREMENTS OF PHYSICAL STRUCTURES AND ENVIRONMENTS THROUGH AUTOMATED ANALYSIS OF SENSOR DATA
CN112365569A (zh) * 2020-10-22 2021-02-12 北京五八信息技术有限公司 房源三维场景的展示方法、装置、电子设备和存储介质
CN112493228B (zh) * 2020-10-28 2021-12-14 河海大学 一种基于三维信息估算的激光驱鸟方法及***
CN112270758B (zh) * 2020-10-29 2022-10-14 山东科技大学 一种基于天花板点云分割的建筑物房间轮廓线提取方法
CN112233229B (zh) * 2020-10-29 2023-07-28 字节跳动有限公司 地标数据的采集方法及地标建筑的建模方法
CN114549631A (zh) * 2020-11-26 2022-05-27 株式会社理光 图像处理方法、装置以及存储介质
CN112683221B (zh) * 2020-12-21 2022-05-17 深圳集智数字科技有限公司 一种建筑检测方法和相关装置
KR102321704B1 (ko) * 2020-12-29 2021-11-05 고려대학교 산학협력단 인접 평면 정보를 이용한 3차원 공간 모델 생성 방법 및 장치
CN112950759B (zh) * 2021-01-28 2022-12-06 贝壳找房(北京)科技有限公司 基于房屋全景图的三维房屋模型构建方法及装置
TWI784754B (zh) * 2021-04-16 2022-11-21 威盛電子股份有限公司 電子裝置以及物件偵測方法
CN113324473B (zh) * 2021-04-30 2023-09-15 螳螂慧视科技有限公司 房屋测量方法与测量设备
US11670045B2 (en) * 2021-05-07 2023-06-06 Tencent America LLC Method and apparatus for constructing a 3D geometry
CN113593052B (zh) * 2021-08-06 2022-04-29 贝壳找房(北京)科技有限公司 场景朝向确定方法及标记方法
US11961181B2 (en) * 2021-09-23 2024-04-16 Msg Entertainment Group, Llc Three-dimensional image space transformation
CN113920144B (zh) * 2021-09-30 2022-09-13 广东省国土资源测绘院 一种实景照片地面视域分析方法及***
CN113689482B (zh) * 2021-10-20 2021-12-21 贝壳技术有限公司 拍摄点推荐方法、装置及存储介质
CN114092642B (zh) * 2021-11-18 2024-01-26 抖音视界有限公司 一种三维户型模型生成方法、装置及设备
CN113822994B (zh) * 2021-11-24 2022-02-15 深圳普罗米修斯视觉技术有限公司 三维模型构建方法、装置及存储介质
CN114494487B (zh) * 2021-12-30 2022-11-22 北京城市网邻信息技术有限公司 基于全景图语义拼接的户型图生成方法、设备及存储介质
WO2023163500A1 (en) * 2022-02-28 2023-08-31 Samsung Electronics Co., Ltd. Floorplan-aware camera pose refinement method and system
CN114663618B (zh) * 2022-03-03 2022-11-29 北京城市网邻信息技术有限公司 三维重建及校正方法、装置、设备及存储介质
CN114708383A (zh) * 2022-03-22 2022-07-05 广州市圆方计算机软件工程有限公司 二维平面转三维立体场景的天花和地面构造方法及***
CN114792357B (zh) * 2022-03-23 2023-05-26 北京城市网邻信息技术有限公司 全景图资源生成方法、装置、电子设备及存储介质
CN114529686B (zh) * 2022-04-21 2022-08-02 三一筑工科技股份有限公司 建筑模型的生成方法、装置、设备及介质
EP4300410B1 (en) * 2022-06-29 2024-05-08 Axis AB Self-learning image geometrical distortion correction
CN116071490B (zh) * 2022-10-25 2023-06-23 杭州华橙软件技术有限公司 室内空间布局的重构方法及重构装置、电子设备和介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2787612B1 (fr) * 1998-12-21 2001-03-09 Eastman Kodak Co Procede de construction d'un modele a faces pratiquement planes
CN1539120A (zh) * 2001-06-20 2004-10-20 ���ڹɷ����޹�˾ 三维电子地图数据的生成方法
CN101281034A (zh) * 2008-05-16 2008-10-08 南京师范大学 基于空间直角关系的建筑物单影像三维测量方法
US20090279784A1 (en) * 2008-05-07 2009-11-12 Microsoft Corporation Procedural authoring
CN104240289A (zh) * 2014-07-16 2014-12-24 崔岩 一种基于单个相机的三维数字化重建方法及***
CN104851127A (zh) * 2015-05-15 2015-08-19 北京理工大学深圳研究院 一种基于交互的建筑物点云模型纹理映射方法及装置
CN106780421A (zh) * 2016-12-15 2017-05-31 苏州酷外文化传媒有限公司 基于全景平台的装修效果展示方法
CN107978017A (zh) * 2017-10-17 2018-05-01 厦门大学 基于框线提取的室内结构快速建模方法

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4119529B2 (ja) * 1998-06-17 2008-07-16 オリンパス株式会社 仮想環境生成方法および装置、並びに仮想環境生成プログラムを記録した記録媒体
JP2000076453A (ja) * 1998-08-28 2000-03-14 Kazuhiro Shiina 立体データ作成方法及び装置
US8542872B2 (en) * 2007-07-03 2013-09-24 Pivotal Vision, Llc Motion-validating remote monitoring system
US8350850B2 (en) * 2008-03-31 2013-01-08 Microsoft Corporation Using photo collections for three dimensional modeling
US9196084B2 (en) * 2013-03-15 2015-11-24 Urc Ventures Inc. Determining object volume from mobile device images
US9025861B2 (en) * 2013-04-09 2015-05-05 Google Inc. System and method for floorplan reconstruction and three-dimensional modeling
US9595134B2 (en) * 2013-05-11 2017-03-14 Mitsubishi Electric Research Laboratories, Inc. Method for reconstructing 3D scenes from 2D images
JP5821012B2 (ja) * 2013-05-31 2015-11-24 パナソニックIpマネジメント株式会社 モデリング装置、3次元モデル生成装置、モデリング方法、プログラム、レイアウトシミュレータ
US9830681B2 (en) * 2014-01-31 2017-11-28 Hover Inc. Multi-dimensional model dimensioning and scale error correction
WO2015120188A1 (en) * 2014-02-08 2015-08-13 Pictometry International Corp. Method and system for displaying room interiors on a floor plan
CN103955960B (zh) * 2014-03-21 2017-01-11 南京大学 一种基于单幅输入图像的图像视点变换方法
CN104202890B (zh) * 2014-09-24 2016-10-05 北京极澈远技术有限公司 照明设备的待机电路和照明设备的工作电路
CN105279787B (zh) * 2015-04-03 2018-01-12 北京明兰网络科技有限公司 基于拍照的户型图识别生成三维房型的方法
CN105205858B (zh) * 2015-09-18 2018-04-13 天津理工大学 一种基于单个深度视觉传感器的室内场景三维重建方法
JP6220486B1 (ja) * 2016-05-27 2017-10-25 楽天株式会社 3次元モデル生成システム、3次元モデル生成方法、及びプログラム
CN106485785B (zh) * 2016-09-30 2023-09-26 李娜 一种基于室内三维建模和定位的场景生成方法及***
US10572970B2 (en) * 2017-04-28 2020-02-25 Google Llc Extracting 2D floor plan from 3D GRID representation of interior space
CN107248193A (zh) * 2017-05-22 2017-10-13 北京红马传媒文化发展有限公司 二维平面与虚拟现实场景进行切换的方法、***及装置
CN107393003B (zh) * 2017-08-07 2020-12-04 苍穹数码技术股份有限公司 一种基于云计算的三维房屋自动建模的方法与实现
CN107798725B (zh) * 2017-09-04 2020-05-22 华南理工大学 基于Android的二维住房户型识别和三维呈现方法
CN108053473A (zh) * 2017-12-29 2018-05-18 北京领航视觉科技有限公司 一种室内三维模型数据的处理方法
US10445913B2 (en) * 2018-03-05 2019-10-15 Faro Technologies, Inc. System and method of scanning and editing two dimensional floorplans
US11055532B2 (en) * 2018-05-02 2021-07-06 Faro Technologies, Inc. System and method of representing and tracking time-based information in two-dimensional building documentation

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2787612B1 (fr) * 1998-12-21 2001-03-09 Eastman Kodak Co Procede de construction d'un modele a faces pratiquement planes
CN1539120A (zh) * 2001-06-20 2004-10-20 ���ڹɷ����޹�˾ 三维电子地图数据的生成方法
US20090279784A1 (en) * 2008-05-07 2009-11-12 Microsoft Corporation Procedural authoring
CN101281034A (zh) * 2008-05-16 2008-10-08 南京师范大学 基于空间直角关系的建筑物单影像三维测量方法
CN104240289A (zh) * 2014-07-16 2014-12-24 崔岩 一种基于单个相机的三维数字化重建方法及***
CN104851127A (zh) * 2015-05-15 2015-08-19 北京理工大学深圳研究院 一种基于交互的建筑物点云模型纹理映射方法及装置
CN106780421A (zh) * 2016-12-15 2017-05-31 苏州酷外文化传媒有限公司 基于全景平台的装修效果展示方法
CN107978017A (zh) * 2017-10-17 2018-05-01 厦门大学 基于框线提取的室内结构快速建模方法

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112465947A (zh) * 2020-11-18 2021-03-09 李刚 影像的虚拟空间建立方法及***
CN112465947B (zh) * 2020-11-18 2024-04-23 李刚 影像的虚拟空间建立方法及***
CN113379901A (zh) * 2021-06-23 2021-09-10 武汉大学 利用大众自拍全景数据建立房屋实景三维的方法及***
CN113487723A (zh) * 2021-06-23 2021-10-08 武汉微景易绘科技有限公司 基于可量测全景三维模型的房屋在线展示方法及***
CN113487723B (zh) * 2021-06-23 2023-04-18 武汉微景易绘科技有限公司 基于可量测全景三维模型的房屋在线展示方法及***
CN115330943A (zh) * 2022-08-11 2022-11-11 北京城市网邻信息技术有限公司 多层空间三维建模方法、装置、设备和存储介质
CN115330943B (zh) * 2022-08-11 2023-03-28 北京城市网邻信息技术有限公司 多层空间三维建模方法、装置、设备和存储介质
WO2024108350A1 (zh) * 2022-11-21 2024-05-30 北京城市网邻信息技术有限公司 空间结构图和户型图生成方法、装置、设备和存储介质
CN117689846A (zh) * 2024-02-02 2024-03-12 武汉大学 线状目标的无人机摄影重建多交向视点生成方法及装置
CN117689846B (zh) * 2024-02-02 2024-04-12 武汉大学 线状目标的无人机摄影重建多交向视点生成方法及装置

Also Published As

Publication number Publication date
US20200111250A1 (en) 2020-04-09
JP2021528794A (ja) 2021-10-21
GB2588030A (en) 2021-04-14
CA3103844A1 (en) 2020-01-09
KR20210008400A (ko) 2021-01-21
US11200734B2 (en) 2021-12-14
CN108961395B (zh) 2019-07-30
CN111247561A (zh) 2020-06-05
GB2588030B (en) 2023-03-29
CN111247561B (zh) 2021-06-08
GB202018574D0 (en) 2021-01-06
CA3103844C (en) 2023-10-31
JP7162933B2 (ja) 2022-10-31
CN108961395A (zh) 2018-12-07

Similar Documents

Publication Publication Date Title
WO2020006941A1 (zh) 一种基于拍照重建三维空间场景的方法
US11875537B2 (en) Multi view camera registration
CN110111262B (zh) 一种投影仪投影畸变校正方法、装置和投影仪
CN110505463A (zh) 基于拍照的实时自动3d建模方法
US7737967B2 (en) Method and apparatus for correction of perspective distortion
TW201915944A (zh) 圖像處理方法、裝置、系統和儲存介質
US10580205B2 (en) 3D model generating system, 3D model generating method, and program
WO2018077071A1 (zh) Panoramic image generation method and apparatus
CN110490916A (zh) Three-dimensional object modeling method and device, image processing apparatus, and medium
CN108629829B (zh) Three-dimensional modeling method and *** combining a spherical-screen camera and a depth camera
GB2591857A (en) Photographing-based 3D modeling system and method, and automatic 3D modeling apparatus and method
JP2015022510A (ja) Free-viewpoint image capturing apparatus and method
Soycan et al. Perspective correction of building facade images for architectural applications
US20190220952A1 (en) Method of acquiring optimized spherical image using multiple cameras
TW201635242A (zh) Method, apparatus and system for generating indoor two-dimensional floor plans
US8509522B2 (en) Camera translation using rotation from device
WO2018056802A1 (en) A method for estimating three-dimensional depth value from two-dimensional images
Fleischmann et al. Fast projector-camera calibration for interactive projection mapping
JP4149732B2 (ja) Stereo matching method, three-dimensional measurement method and apparatus, and programs for stereo matching and three-dimensional measurement
KR101996226B1 (ko) Apparatus and method for measuring the three-dimensional position of a subject
TWI662694B (zh) Three-dimensional image capture method and system
JP2006300656A (ja) Image measurement method, apparatus, program and recording medium
JP4282361B2 (ja) Photogrammetry method and photogrammetry program
US8260007B1 (en) Systems and methods for generating a depth tile
CN111768446B (zh) Reverse modeling and fusion method for indoor panoramic images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18925186; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 202018574; Country of ref document: GB; Kind code of ref document: A
    Free format text: PCT FILING DATE = 20181030
ENP Entry into the national phase
    Ref document number: 20207035296; Country of ref document: KR; Kind code of ref document: A
ENP Entry into the national phase
    Ref document number: 3103844; Country of ref document: CA
ENP Entry into the national phase
    Ref document number: 2021520260; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
    Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.05.2021)
122 Ep: pct application non-entry in european phase
    Ref document number: 18925186; Country of ref document: EP; Kind code of ref document: A1