MX2009001951A - Modeling and texturing digital surface models in a mapping application - Google Patents

Modeling and texturing digital surface models in a mapping application

Info

Publication number
MX2009001951A
MX2009001951A (application MXMX/A/2009/001951A)
Authority
MX
Mexico
Prior art keywords
image
image capture
aerial
images
component
Prior art date
Application number
MXMX/A/2009/001951A
Other languages
Spanish (es)
Inventor
Ofek Eyal
Kimchi Gur
Original Assignee
Microsoft Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corporation filed Critical Microsoft Corporation
Publication of MX2009001951A publication Critical patent/MX2009001951A/en

Abstract

Digital Surface Model (DSM) texturing and modeling of various objects on the earth's surface are provided for implementation in a mapping application. One or more image capture devices having wide-angle lenses can be placed in various configurations to obtain nadir and oblique photography. Such configurations include a single lens, single sensor; single lens, multiple sensors; multiple lenses, multiple sensors; and multiple lenses, multiple sensors, and a reflective surface. Positions, distances and areas can be measured from the imagery. Also provided is a continuous morph between aerial panorama and ground images.

Description

MODELING AND TEXTURING OF DIGITAL SURFACE MODELS IN A MAPPING APPLICATION

BACKGROUND

Large-scale mapping applications have increased the importance and volume of modeling of the earth's terrain, as well as of models of buildings and other objects that exist on the earth's surface. The general name for these objects is Digital Surface Model (DSM). The name for the terrain alone, without buildings and other structures, is Digital Elevation Model (DEM). Buildings, structures, and various other objects (e.g., mountains, trees, and the like) can be observed at a variety of viewing angles (e.g., oblique view, bird's-eye angle, perspective angle, top view angle, front view angle, downward trajectory, upward trajectory, and so on) in such mapping applications. While such navigation angles are available for some locations, the information is lacking for a multitude of other locations. Therefore, such mapping applications lack detail and modeling aspects for a majority of locations. To overcome the aforementioned deficiencies, as well as others, embodiments are provided that supply a means for modeling and texturing DSMs and for applying such information in a mapping application.
BRIEF DESCRIPTION OF THE INVENTION

The following presents a simplified summary in order to provide a basic understanding of some aspects of the described embodiments. This summary is not an extensive overview and is not intended to identify key or critical elements or to delineate the scope of such embodiments. Its purpose is to present some concepts of the described embodiments in a simplified form as a prelude to the more detailed description presented later. In accordance with one or more embodiments and the corresponding description thereof, various aspects are described in connection with texturing and modeling of DSMs for mapping applications. According to some embodiments, a technique is provided for capturing images that combines the advantages of nadir and oblique photography for textured modeling. Positions, distances, and areas can be measured from the imagery. According to some embodiments, a continuous morph between an aerial panorama and ground images is provided. To accomplish the foregoing and related ends, one or more embodiments comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the accompanying drawings set forth in detail certain illustrative aspects and are indicative of but a few of the various ways in which the principles of the embodiments can be employed. Other advantages and novel features will become apparent from the following detailed description when considered in conjunction with the drawings, and the described embodiments are intended to include all such aspects and their equivalents.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows an illustrative system for capturing aerial DSM images and applying such aerial images to a plurality of photogrammetric products.
Figure 2 illustrates another illustrative system for texturing and modeling images.
Figure 3 illustrates an illustrative image capture device that can be used with the described embodiments.
Figure 4 illustrates another illustrative image capture device that can be used with the described embodiments.
Figure 5 illustrates an illustrative configuration of multiple image capture devices that can be used with the various embodiments described herein.
Figure 6 illustrates another illustrative configuration of multiple image capture devices that can be used with the various embodiments described herein.
Figure 7 illustrates an oblique image that can be used with one or more embodiments.
Figure 8 illustrates a representative difference between an oblique image plane and the hemispherical image plane of an ultra-wide image.
Figure 9 illustrates the generation of a virtual oblique image from a super-wide image.
Figure 10 illustrates a methodology for modeling and texturing of DSM.
Figure 11 illustrates another methodology for modeling and texturing of DSM.
DETAILED DESCRIPTION

Various embodiments will now be described with reference to the drawings, wherein like reference numbers are used to refer to like elements throughout the document. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that the various embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing these embodiments. As used in this application, the terms "component", "module", "system", and the like are intended to refer to a computer-related entity, whether hardware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. The word "illustrative" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "illustrative" is not necessarily to be construed as preferred or advantageous over other aspects or designs. Various embodiments will be presented in terms of systems that may include a number of components, modules, and the like. It should be understood and appreciated that the various systems may include additional components, modules, etc. and/or may not include all of the components, modules, etc. discussed in connection with the figures. A combination of these approaches can also be used.

Figure 1 illustrates an illustrative system 100 for capturing aerial DSM images and applying such aerial images to a plurality of photogrammetric products. The system 100 combines the advantages of nadir photography and oblique photography. In order to fully appreciate the described embodiments, the general process for generation of a Digital Surface Model (DSM) is reviewed. DSM imaging involves capturing several nadir photographs (that is, with the camera pointing down, directly at the ground) of the ground and other structures. The position of the camera is calculated at substantially the same time the image is captured, or after the image is captured, such as in post-processing. The position can be determined by identifying corresponding points between each image and known ground points, and between the images (known as bundle adjustment). Modeling of the ground, buildings, and other objects is done by matching corresponding points on an object as seen in more than one image. For example, a corner of a building is identified in at least two images. Modeling is well suited to nadir imagery, since images taken parallel to the ground show an almost constant scale: a building that appears at the edge of the frame is represented at a scale similar to another building at the center of the frame. Modeling can also be performed with other sensors, such as oblique-image or wide-angle sensors. Given the position of the camera when each image was taken, as well as the internal parameters of the camera, each image point can be translated into a sight ray in space.
The intersection of those rays yields the position in space of the building corner. The identification of corresponding points and the recovery of three-dimensional points can be done manually, or automatically by using several techniques. Using the recovered three-dimensional points, a model of each building can be built, and the building models can be textured. Each model consists of planar facets or other surface primitives (for example, Non-Uniform Rational B-Splines, or NURBS, a mathematical definition of a surface element used in modeling). Each primitive is projected onto an image in which it is visible, and the image data is used as the primitive's texture. Modeling and texturing can use a combination of nadir photography and oblique photography. It is possible to model buildings from oblique photography alone; however, there are several difficulties associated with this. First, the ground scale of an oblique image changes along the frame: as a building gets closer to the horizon, reconstruction accuracy deteriorates at a rate of 1/z, where z is the distance from the camera to the ground point. Also, an oblique image captures only those facets of objects that are oriented toward the camera direction, for example, only the northern facets of buildings. To obtain a complete model of the buildings (and texture coverage), several images are necessary, where each image is taken from a different direction. Visibility also becomes more complex as the angle of view approaches a horizontal direction, since a building can be occluded by an object (e.g., another building, a structure, a tree, and so on) that is between the building and the camera. Under complex visibility, an oblique image looking northward, for example, may not capture all the south-facing facets of the buildings in the frame; therefore, more images are needed. The system 100 can thus be configured to combine the advantages of nadir photography and oblique photography. The system 100 includes an image capture component 102, an object identification component 104, and a presentation component 106. Although a number of image capture component(s) 102 and object identification component(s) 104 can be included in the system 100, as will be appreciated, a single image capture component 102 interfacing with a single object identification component 104 is illustrated for simplicity. The image capture component 102 includes a lens designed to capture light arriving at the lens over a wide angle. The image capture component 102 can be configured to capture the image in at least one of a nadir position and an oblique position. Various aspects associated with an image capture component 102 may be described herein with reference to a camera; it should be appreciated that any technique for capturing or taking a photograph of the terrestrial surface, and of objects along the terrestrial surface, can be used with the one or more described embodiments. Various configurations of the image capture component 102 are provided below. The system 100 can use an extreme wide-angle lens (over 120°), associated with the image capture component 102, that is directed straight down. The central part of the field of view is equivalent to a nadir photograph, while the edge of the image is equivalent to oblique photography in all 360° of azimuth directions. As the system 100 scans the ground, the high-accuracy central image can be used as the basis for modeling, while the edge of the image generates dense coverage of building sides from every direction.
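As an illustrative aside (not part of the patent text), the sight-ray intersection described above can be sketched in a few lines of Python. The function below computes the least-squares intersection point of two or more rays, each given by a camera center and a direction; the name triangulate and the least-squares formulation are choices of this sketch, which assumes the rays have already been derived from image points and camera parameters:

    import numpy as np

    def triangulate(centers, directions):
        # Least-squares intersection of sight rays: each ray i passes
        # through camera center c_i with unit direction d_i.  Solves
        # sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) c_i.
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for c, d in zip(centers, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
            A += P
            b += P @ c
        return np.linalg.solve(A, b)

    # Two cameras at 100 m altitude observing the same building corner:
    c1, c2 = np.array([0.0, 0.0, 100.0]), np.array([50.0, 0.0, 100.0])
    corner = np.array([10.0, 5.0, 30.0])
    print(triangulate([c1, c2], [corner - c1, corner - c2]))  # ~[10. 5. 30.]

With noisy correspondences the rays will not meet exactly; the least-squares solution returns the point closest to all rays, which is one standard way to realize the intersection the description refers to.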
The object identification component 104 can be configured to accept multiple images and identify a similar object or location in those images. Such identification of objects or locations can be a manual function, whereby the object identification component 104 receives an input from a user and/or entity (e.g., the Internet, another system, a computer, ...) and associates a particular image, or a subset of an image, with an object or location. According to some embodiments, the object identification component 104 autonomously identifies similar objects or locations among a multitude of images and automatically associates the image, or a portion of the image, with the object or location. The association of the object or location with the image can be applied in a mapping application that uses a large number of images to represent a model of the earth as well as various objects located on the earth's surface. The presentation component 106 can be configured to display the resulting image on a display screen. Such a presentation may be a mapping application where a user requests a view of a particular location from a multitude of navigation angles. In this way, the user is presented with a rich display that provides modeling and texturing of real-world imagery. Figure 2 illustrates another illustrative system 200 for texturing and modeling images. The system 200 can use image processing methods to produce a variety of products. For example, the system 200 can be configured to obtain direct measurements of positions, distances, and areas from the image data. The system 200 can obtain ortho-photo images of the land surface and/or oblique views of the land surface. Three-dimensional DSM modeling can be provided by the system 200, as can texturing of the model. Alternatively or in addition, the system 200 can provide a continuous morph or transition between a 360° aerial view and a 360° ground view. An aerial panorama is an image that shows a complete 360° view of the scene. Such a view may be a cylindrical band that is approximately parallel to the ground, or a part of the hemisphere around the camera view direction. A ground panorama is a band that shows the environment around the point of capture, or a hemisphere around the point of view. By using a ground DEM and a recovered DSM, intermediate views can be generated along a trajectory between an aerial panorama and a ground-based one (a minimal sketch of such a trajectory is given below). For example, each intermediate image can be generated by re-projection of the textured geometry, or by warping the original images to the position of the projected geometry. This can provide a smooth transition between an aerial image and a ground-level image, between two aerial images, or between two ground images. The system 200 includes an image capture component 202 that can be configured to capture a multitude of images from various angles. Also included in the system 200 are an object identification component 204 that can be configured to identify an object in a viewing area, and a presentation component 206 that can be configured to present the captured image to a user on the display screen of a mapping application, for example. The image capture component 202 may comprise several configurations.
For example, the image capture component 202 may include a single lens and a single sensor; a single lens and multiple sensors; multiple lenses and multiple sensors; or multiple lenses, multiple sensors, and a reflective surface, among other configurations. The image capture component 202 can be, for example, an aerial camera that includes a very wide angle of view (e.g., at least 120°) and a high-resolution sensor. According to some embodiments, a synchronization module 208 may be associated with the image capture component 202. The synchronization module 208 may be configured to synchronize an image capture time, or other parameters, with at least one other image capture component to facilitate a common capture of a similar scene. It should be understood that while the synchronization module 208 is illustrated as being included in the image capture component 202, according to some embodiments the synchronization module 208 may be a separate component or may be associated with other components of the system 200. A combiner module 210 can be included in the object identification component 204 or can be a separate module, according to some embodiments. The combiner module 210 may be configured to obtain multiple images received from multiple image capture components and combine the images. In this way, the combiner module 210 can present a larger image as well as a more detailed image from various navigation angles. For example, a first image capture device can capture a first and a second image, and a second image capture device can capture a third and a fourth image. The synchronization module 208 can synchronize the capture of the four (or more) images, and the combiner module 210 can combine the images, based in part on at least one identified object located in the images.
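As a rough illustration (an assumption of this sketch, not text from the patent), the continuous morph between an aerial panorama and a ground view described above can be reduced to interpolating a virtual camera pose along a trajectory, with each intermediate pose re-rendered from the DEM/DSM geometry; the rendering itself is out of scope here:

    import numpy as np

    def morph_path(aerial_pos, ground_pos, look_at, steps=30):
        # Yield (position, view direction) pairs along a linear path from
        # the aerial viewpoint down to the ground viewpoint.  A renderer
        # would draw the textured DEM/DSM from each pose to produce the
        # intermediate frames of the morph.
        a = np.asarray(aerial_pos, dtype=float)
        g = np.asarray(ground_pos, dtype=float)
        target = np.asarray(look_at, dtype=float)
        for t in np.linspace(0.0, 1.0, steps):
            pos = (1.0 - t) * a + t * g
            direction = target - pos
            yield pos, direction / np.linalg.norm(direction)

    # From 500 m above the scene down to street level, looking at a landmark:
    for pos, d in morph_path([0, 0, 500], [20, -30, 1.7], [50, 50, 10], steps=5):
        print(pos.round(1), d.round(3))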
The system 200 may also include a location component 212 that may be interfaced with the image capture component 202, the object identification component 204, or both components 202, 204. The location component 212 may be configured to convert any location in an image plane of the image capture component 202 into a ray in space. Such a conversion may take into consideration the internal parameters of the image capture component 202. Given a position and orientation of the image capture component 202, the location component 212 may be configured to intersect a ray with a ground model and determine a position in space that corresponds to the point in the image. The position and orientation can be determined by using an Inertial Measurement Unit (IMU), or can be recovered from the image by identifying ground control points. In some embodiments, the location component 212 may be configured to intersect two or more rays, each originating from a different position of the image capture device 202, where each ray corresponds to the image of the same world point in a different image plane of the image capture device 202. The location component 212 can also be configured to measure a distance, which can be a linear distance between two points or a length along a polyline, a continuous line composed of one or more line segments that can be defined by a series of points. The image points can be mapped by the location component 212 to the corresponding ground points for the distance calculation. In a similar way, the location component 212 can measure areas on the earth or land surface by defining a polygonal area boundary in the image. Such an area can be determined by using the ground positions of the points corresponding to the polygon vertices in the image.
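The measurements just described reduce to elementary geometry once image points have been mapped to ground points. The following minimal sketch (function names and the flat-ground simplification are assumptions of this sketch; a real system would intersect rays with the DEM/DSM) shows ray-to-ground intersection, polyline length, and polygon area via the shoelace formula:

    import numpy as np

    def ground_point(camera_center, ray_dir, ground_z=0.0):
        # Intersect a sight ray with a flat ground plane z = ground_z.
        # Assumes the ray is not parallel to the ground (ray_dir[2] != 0).
        c, d = np.asarray(camera_center, float), np.asarray(ray_dir, float)
        t = (ground_z - c[2]) / d[2]
        return c + t * d

    def polyline_length(points):
        # Sum of segment lengths along a series of ground points.
        pts = np.asarray(points, dtype=float)
        return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

    def polygon_area(points):
        # Shoelace formula over the (x, y) ground coordinates of the
        # polygon vertices.
        pts = np.asarray(points, dtype=float)[:, :2]
        x, y = pts[:, 0], pts[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    corners = [ground_point([0, 0, 100], d) for d in
               ([1, 1, -2], [1, -1, -2], [-1, -1, -2], [-1, 1, -2])]
    print(polyline_length(corners), polygon_area(corners))  # 300.0 10000.0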
When a user wishes to see a particular location or object in a mapping application, the user interfaces with the presentation component 206, which may be associated with a computer or other device, whether stationary or mobile. Such interfacing may include the user entering an exact location (e.g., longitude, latitude) or entering an address, city, state, or other means of identification. The presentation component 206 can provide various types of user interfaces. For example, the presentation component 206 may provide a graphical user interface (GUI), a command line interface, a dialogue interface, a natural-language text interface, and the like. For example, a GUI can be presented that provides a user with a region or means to load, import, select, read, etc. the desired location, and it may include a region to present the results of such actions. These regions may comprise known text and/or graphic regions comprising dialog boxes, static controls, drop-down menus, list boxes, pop-up menus, edit controls, combo boxes, radio buttons, check boxes, push buttons, and graphic boxes. In addition, utilities that facilitate the conveyance of information, such as vertical and/or horizontal scroll bars for navigation, and toolbar buttons to determine whether a region will be viewable, can be employed. The user can also interact with the regions to select and provide information through various devices such as a mouse, a roller ball, a keypad, a keyboard, a pen, gestures captured in an image, and/or voice activation, for example. Typically, a mechanism such as a push button or the enter key on the keyboard can be employed subsequent to entering the information in order to initiate the conveyance of information. However, it should be appreciated that the described embodiments are not so limited. For example, merely highlighting a check box can initiate information conveyance. In another example, a command line interface can be employed. For example, the command line interface may prompt the user for information by providing a text message, producing an audio tone, or the like. The user can then provide suitable information, such as alphanumeric input corresponding to an option provided in the interface prompt, or an answer to a question posed in the prompt. It should also be appreciated that the command line interface can be used in connection with a GUI and/or API.
In addition, the command line interface can be used in connection with hardware (e.g., video cards) and/or displays (e.g., black and white, EGA) with limited graphic support, and/or low-bandwidth communication channels. The images obtained can be maintained in a retrievable format on a storage medium 214. The storage medium 214 can be associated with the presentation component 206 or with another component of the system 200. The storage medium 214 can be memory and/or some other medium that can store information. By way of illustration, and not limitation, the storage medium 214 may include nonvolatile and/or volatile memory. Suitable nonvolatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Figure 3 illustrates an illustrative image capture device 300 that can be used with the described embodiments. The image capture device 300 includes a single lens 302 and a single sensor 304. The image capture device 300 can be a high-definition camera that uses a super-wide lens 302. The camera can be mounted on an aircraft pointing down at the earth 306 or other terrain. The resulting image has a varying ground scale that is highest near its center and decreases as the image approaches the horizon. Figure 4 illustrates another illustrative image capture device 400 that can be used with the described embodiments. The image capture device includes a single lens 402 and multiple sensors 404. While four sensors 404 are shown, the image capture device 400 may include any number of sensors. The use of multiple sensors 404 can reduce the expense associated with the image capture device 300 illustrated in Figure 3. There are junctions 406 (of which only one is labeled) between neighboring sensors 408, 410 that are not covered by the array of sensors 404. The image capture device 400, which may be a wide-angle-lens camera, points directly down at the ground or other terrain 412. Images of certain areas of the terrain 412 may not be captured or photographed because of the junctions 406 between neighboring sensors 408, 410. Those areas of the terrain 412 may be covered in a subsequent exposure, or by a second image capture device (not shown) that captures a similar area of the terrain 412 at substantially the same time as the first image capture device 400. In this way, the entire terrain can be captured through the interaction of multiple image capture devices. Figure 5 illustrates an illustrative configuration 500 of multiple image capture devices that can be used with the various embodiments described herein. A multitude of image capture devices 502, 504, 506, 508, 510 can be mounted under an aircraft or other vehicle in a downward-oriented configuration such that the combined viewing angles 512, 514, 516, 518, 520 cover a full hemisphere, or approximately 120° around the nadir direction, beyond some minimum distance. The image capture devices 502, 504, 506, 508, 510 may be synchronized to facilitate a common capture of a similar scene or terrain 522. This configuration 500 provides multiple lenses and multiple sensors by virtue of the multiple image capture devices 502, 504, 506, 508, 510. It should be noted that while five image capture devices are shown, there may be more or fewer capture devices in the system 500. Figure 6 illustrates another illustrative configuration 600 of multiple image capture devices that can be used with the various embodiments described herein. This configuration 600 of multiple image capture devices 602, 604, 606, 608, 610 provides multiple lenses, multiple sensors, and a reflective surface 612. The reflective surface 612 may be a mirrored surface that reflects the rays of the hemisphere around an aircraft toward the image capture devices 602, 604, 606, 608, 610, which are placed in an inward-oriented configuration. It should be noted that more or fewer image capture devices can be included in the system 600. This configuration 600 is known from omni-vision cameras. However, using a single camera suffers from several limitations. First, the reflective surface or mirror 612 maps the hemisphere rays to a circular image on the camera sensor; in many situations, the resolution at angles that are far from the camera direction deteriorates or is low.
Second, the total resolution of the image is limited by the resolution of the sensor. Finally, the center of the image shows the reflection of the camera itself in the reflective surface or mirror; in that way, the camera itself blocks the rays for a particular area of the resulting scene or image. To mitigate the problems associated with a single image capture device, multiple image capture devices 602, 604, 606, 608, 610 are used to capture the image of the reflective surface or mirror 612 from various directions. The final generated image is a fusion of the images from each image capture device 602, 604, 606, 608, 610, with the reflections of the devices 602, 604, 606, 608, 610 removed. An orthophotograph is a photographic coverage of the earth that has a constant ground scale, as if taken by an orthographic projection of the terrain. Generally, an orthophotograph is generated by using a nadir image (for example, taken by a camera or device that looks directly downward) and a DSM. The DSM can be a 2.5-dimensional surface that gives the height at each ground point (including buildings and other objects on the ground). The DSM can be captured by a scanning sensor, such as a Light Detection and Ranging (LIDAR) sensor, an Interferometric Synthetic Aperture Radar (IFSAR), or other sensors. The DSM can also be generated from pairs or larger groups of nadir images that cover the same area of land, using a stereo process. Orthophotographs can be generated by defining an output image with a defined ground resolution. Each pixel of that image represents a ground point that is projected onto the original image, taking into account the camera position, orientation, and internal parameters (for example, focal length). The ground point can be defined by its longitude-latitude (X-Y) location and its height obtained from the DSM. The color of the original image at the projection point can be used to color the orthophotographic image.
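The orthophoto loop just described can be sketched directly. In the following sketch, project is a placeholder (an assumption of this example) for a camera model that maps a 3D world point to a (row, column) position in the source nadir image using the camera pose and internal parameters:

    import numpy as np

    def make_orthophoto(dsm, image, project, origin, gsd, out_shape):
        # dsm[i, j]: ground height on a regular grid aligned with the
        # output; origin: world X-Y of the top-left output pixel; gsd:
        # ground sample distance (meters per output pixel).
        ortho = np.zeros(out_shape + image.shape[2:], dtype=image.dtype)
        for i in range(out_shape[0]):
            for j in range(out_shape[1]):
                # World point for this orthophoto pixel: X-Y from the
                # grid, height from the DSM.
                world = np.array([origin[0] + j * gsd,
                                  origin[1] - i * gsd,
                                  dsm[i, j]])
                r, c = project(world)
                if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
                    ortho[i, j] = image[int(r), int(c)]
        return ortho

Nearest-neighbor sampling is used for brevity; a production pipeline would interpolate, handle occlusions (true-ortho), and blend several source images.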
A nadir image that is used to generate an orthophotograph can have an angle of view of approximately 40° or less. An aerial image capture component that uses the described embodiments captures a group of sight rays in the central part of its image similar to those captured by a nadir image. For example, the central 40° area of the image can be substantially the same as the view of a standard 40° aerial camera. Orthophotographs can therefore be generated by using the embodiments shown and described here. For example, in a configuration that includes one sensor and one lens, such as a wide-angle lens, the lens can behave like a pinhole camera. In this case, the area of the sensor that covers the same footprint (assuming the same flight height and an angle of view of 120°) is approximately 16% of the sensor area; to obtain an orthophotograph of similar resolution, a sensor such as a 60-megapixel sensor can be used. In another embodiment, if a lens that maps equal angles to equal distances in the image plane is used, a sensor of approximately 90 megapixels can be used. However, other lenses and sensors may be used with the one or more embodiments described herein. For a configuration with multiple sensors and one lens, an array of sensors, such as a 3 x 3 array, can be used to generate an image. Each sensor can be similar to the sensor used for orthophotography; the center sensor in a 3 x 3 array can generate an image equivalent to a regular orthophotograph. For a configuration of multiple lenses and multiple sensors, the camera that is pointed in the nadir direction can be used to generate a similar orthophoto image. For a configuration that includes multiple lenses, multiple sensors, and a reflective surface, the reflection of the image capture component must be removed from the image; the images obtained by the other image capture components can be used to reconstruct the image without the reflection, for example by a fusion technique. Oblique views are images obtained by image capture components, or cameras, that are tilted relative to the nadir direction. Oblique views show details that are obscured in nadir images, such as details of the sides of a building. Oblique images may also be easier for a user to recognize, since they illustrate scene objects from an angle closer to ground level, which may be more familiar to users. In contrast to nadir images, oblique images are directional: an oblique image of a point taken from the south is different from an oblique image of the same point taken from the north. As a result, if all possible oblique views of a scene are to be displayed in a mapping application, it is not sufficient to capture the scene view at each point; the scene must be captured from all different directions at each point. Referring now to Figure 7, an illustrative view direction for an oblique image is shown. An oblique image may have a view direction measured by its angle from the vertical nadir direction. A typical value, for example, can be 40° for the tilt angle and approximately 40° for the angle of view. The illustration at 702 is an oblique image as observed from above and, at 704, a super-wide image using the described embodiments is illustrated. The first view, 702, shows the camera position 706 and a representative ray 708. The camera 706 captures the color of all the sight rays 708 in the frustum defined by the image plane.
The super-wide image 704 has a camera position 710 and captures all the rays, one of which is illustrated at 712, that pass through a ring 714 around the viewing direction. The marked area 716 of the ring is the set of rays equivalent to the oblique image of 702. With reference to Figure 8, a representative difference between an oblique image plane and the hemispherical image plane of an ultra-wide image is illustrated. The generation of an oblique image is done by defining an oblique image plane and connecting each pixel of the plane with the center of the image capture component. The intersection of the ray connecting a pixel in the oblique image plane with the center, and the super-wide image surface, defines the oblique pixel color. A camera focal point 802 points directly down in the nadir direction 804. When a super-wide lens is used to capture the image, the ring of rays from about 20° off the nadir 804 to about 60° off the nadir 804, which can be an oblique view direction 806, generates an oblique image plane 808. The corresponding ultra-wide image surface that can be obtained by using the described embodiments is illustrated at 810. Figure 9 illustrates the generation of a virtual oblique image from a super-wide image. A camera focal point 902 points down in the nadir direction 904. For each pixel in the new oblique image plane, a ray is defined that connects the center of the pixel and the center of the camera. The oblique view direction is shown at 906 and a point of the oblique image plane is shown at 908. The corresponding direction in the ultra-wide image surface is illustrated at 910. Sampling the super-wide image at the intersection point of the ray with the image surface generates a color for the new oblique pixel. As discussed, the DSM is the three-dimensional model of the face of the earth that includes any objects located on the ground (for example, trees, houses, and the like). The DSM can be used for several applications, such as the generation of orthophotographs, or to generate new virtual views of a particular scene. A high-quality DSM can be obtained from a manual survey, which is expensive. Alternatively, a DSM can be generated by analyzing multiple views of the scene taken from different points of view. The process that generates the DSM includes matching corresponding features between those images: each feature defines a sight ray from the camera position, and the intersection of those sight rays defines the position in space of the feature. Automatic matching is difficult and prone to error, however. The techniques described can increase the coverage of each scene point. Using an angle of view of approximately 120° (as opposed to 40°) multiplies the number of images that see the same ground point more than six times. For example, if the overlap between two neighboring 40° nadir images is 66%, then nine images can see a particular ground point, while more than 56 images at 120°, taken at substantially the same interval, can see substantially the same point. In this way, the reliability of the recovered DSM increases. In addition, super-wide images capture more visible content of the scene (for example, vertical walls of buildings, areas under trees) that generally cannot be seen in a regular nadir image. In this way, the combination of coverage from all views can generate improved DSM models. Textures add to the visual content and realism of models. Aerial views are often used to texture terrain models, buildings, and other objects on the ground.
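As an aside before texturing is discussed further, the Figure 9 resampling procedure described above can be sketched directly. The example below (an illustrative assumption, not the patent's implementation) resamples an equidistant ("f-theta") fisheye image, where image radius equals focal length times the off-nadir angle, into a virtual oblique view by casting a ray through each oblique pixel and sampling the fisheye image where that ray lands:

    import numpy as np

    def virtual_oblique(fisheye, f_pix, tilt_deg, out_size, fov_deg=40.0):
        # fisheye: nadir-pointing fisheye image, optical axis straight
        # down; f_pix: fisheye focal length in pixels (r = f_pix * theta).
        h, w = out_size
        tilt = np.radians(tilt_deg)
        axis = np.array([0.0, np.sin(tilt), -np.cos(tilt)])  # oblique view axis
        right = np.array([1.0, 0.0, 0.0])
        up = np.cross(axis, right)
        f_virt = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
        cy, cx = fisheye.shape[0] / 2.0, fisheye.shape[1] / 2.0
        out = np.zeros((h, w) + fisheye.shape[2:], dtype=fisheye.dtype)
        for i in range(h):
            for j in range(w):
                # Sight ray through the oblique pixel and the camera center.
                ray = (j - w / 2.0) * right + (i - h / 2.0) * up + f_virt * axis
                ray /= np.linalg.norm(ray)
                theta = np.arccos(-ray[2])        # angle off the nadir
                phi = np.arctan2(ray[1], ray[0])  # azimuth
                r = f_pix * theta                 # equidistant lens mapping
                u, v = cx + r * np.cos(phi), cy + r * np.sin(phi)
                if 0 <= v < fisheye.shape[0] and 0 <= u < fisheye.shape[1]:
                    out[i, j] = fisheye[int(v), int(u)]
        return out

Varying tilt_deg, and rotating the view axis in azimuth, yields oblique views in any direction from a single super-wide capture, which is the property the description relies on.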
Basic texturing can be created by taking an object point and projecting it into one or more images in which it is visible in order to generate a color. However, since complex geometry has complex visibility, some cavities in objects can be observed only from a limited range of view directions. For example, the walls of buildings may not be seen properly, if at all, from a top view, and the ground under a tree may be completely hidden from sight. In addition, different materials, such as reflective or semi-reflective objects, have different reflective properties in different viewing directions; texturing these objects from a single point of view can therefore generate unrealistic texture when such an object is observed from a different direction. Another problem can be associated with joining texture from multiple images taken from different directions when the texture contains a directional component. The use of the described embodiments provides improved coverage that makes it possible to texture areas that may not be covered by a limited capture of oblique and nadir images. Surfaces that are hidden from those images (for example, the ground under trees) can be covered by new view directions or more extreme angles. The described embodiments also provide each scene point with more images and more view directions, which provides improved modeling of directional reflection properties. For example, the reflective component of a building window can be removed by analyzing several views of the window from different directions.
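As a final illustrative sketch (the data structures here are hypothetical, introduced only for this example), selecting which image textures a given surface primitive can be done by choosing, among the cameras from which the primitive is visible, the one that views it most head-on:

    import numpy as np

    def best_view(facet_center, facet_normal, cameras):
        # cameras: list of dicts with a 'center' position and a
        # 'visible(point)' occlusion test -- placeholders for this sketch.
        best, best_cos = None, -1.0
        n = facet_normal / np.linalg.norm(facet_normal)
        for cam in cameras:
            if not cam["visible"](facet_center):
                continue  # facet occluded from this camera
            view = facet_center - cam["center"]
            c = -float(np.dot(view / np.linalg.norm(view), n))  # head-on-ness
            if c > best_cos:
                best, best_cos = cam, c
        return best

With the dense, multi-directional coverage the description ascribes to super-wide capture, most facets have several candidate views, which also enables comparing views to suppress directional reflections, such as in the window example above.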
In view of the illustrative systems shown and described above, methodologies that can be implemented in accordance with the described subject matter will be better appreciated with reference to the flow charts of Figures 10 and 11. While, for purposes of simplicity of explanation, the methodologies are shown and described as a series of blocks, it is to be understood and appreciated that the claimed subject matter is not limited by the number or order of blocks, since some blocks may occur in different orders and/or concurrently with other blocks from what is illustrated and described here. Moreover, not all illustrated blocks may be required to implement the methodologies described hereinafter. It should be appreciated that the functionality associated with the blocks can be implemented by software, hardware, a combination thereof, or any other suitable means (e.g., device, system, process, component). Additionally, it should be appreciated that the methodologies described herein, and throughout this specification, are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to various devices. Those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or events, such as in a state diagram. Figure 10 illustrates a methodology 1000 for modeling and texturing of DSM. Method 1000 starts at 1002, where one or more image capture devices are placed to capture aerial images. The image capture devices include a wide-angle lens. Such positioning can include mounting one or more image capture devices under an aircraft to obtain nadir images of the terrestrial terrain as well as of various objects on the terrestrial surface. The image capture devices, for example, may be cameras having wide-angle lenses. At 1004, one or more aerial images are captured by the one or more image capture devices. The aerial images include at least one object located on the surface of the earth. Images can be captured using nadir photography, oblique photography, or a combination. Images captured by two or more devices can be combined to present a single complete image that includes a greater granularity of detail than an individual image. A request to view the area captured by the one or more images is received at 1006. Such a request may be received from a user who wishes to see a particular area in a mapping application. The captured images can be presented at 1008. The presented images can be dynamic, so that if a user pans around the presentation screen, the image changes in response to the user request. For example, the presentation screen can morph between aerial panorama images and images from a ground perspective, or another navigation angle.
Figure 11 illustrates another methodology 1100 for modeling and texturing of DSM. At 1102, the configuration of one or more image capture devices is determined. Such a configuration may include a single lens, single sensor; a single lens, multiple sensors; multiple lenses, multiple sensors; or multiple lenses, multiple sensors, and a reflective surface. A lens, for example, can be a wide-angle lens or an ultra-wide-angle lens. At 1104, image data is obtained for a multitude of locations and objects on the surface of the earth. Such image data may be in the form of nadir images, oblique images, or other navigation angles. Obtaining the image data may also include identifying an object in an observation area and/or identifying a similar object or location in a multitude of images. The association of the object or location with the image can be applied in a mapping application that uses a large number of images to represent a model of the earth as well as various objects located on the earth's surface. A measurement of positions, distances, and areas in an image is determined at 1106. A distance can be a linear distance between two points or a length along a polyline. The image points can be mapped to the corresponding ground points for the distance calculation. In a similar way, areas on the earth or land surface can be measured by defining a polygonal area boundary in the image. Such an area can be determined by using the ground positions of the points corresponding to the polygon vertices in the image. The resulting image may be presented upon a user request, at 1108. In particular, and with respect to the various functions performed by the above-described components, devices, circuits, systems and the like, the terms (including a reference to a "means") used to describe such components are intended to correspond, unless otherwise indicated, to any component that performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the illustrative aspects shown herein. In this regard, it will be recognized that the various aspects include a system as well as a computer-readable medium having computer-executable instructions for performing the acts and/or events of the various methods. In addition, while a particular feature may have been described with respect to only one of several implementations, such a feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "include" and "including" and variants thereof are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term "comprising."

Claims (20)

1. A system (100, 200) that facilitates modeling and texturing for mapping applications, comprising: a first image capture component (102, 202) that captures an image from a plurality of angles; an object identification component (104, 204) that identifies at least one object in the captured image; and a presentation component (106, 206) that presents the identified captured image in a mapping application.
2. - The system according to claim 1, wherein the first image capture component is a camera comprising a wide-angle lens.
3. The system according to claim 1, wherein the first image capture component is a single-lens, single-sensor camera.
4. The system according to claim 1, wherein the first image capture component is a single-lens, multiple-sensor camera.
5. The system according to claim 1, further comprising at least a second image capture component and a reflecting mirror, the first and the at least second image capture components being in an inward-oriented configuration.
6. The system according to claim 1, further comprising at least a second image capture component, the first and the at least second image capture components being in an outward-oriented configuration.
7. The system according to claim 6, further comprising a synchronization module that synchronizes an image capture time between the first and the at least second image capture components to facilitate a common capture of a similar scene.
8. The system according to claim 6, further comprising a combiner module that combines images captured by the first and the at least second image capture components.
9. - The system according to claim 1, further comprising a storage medium that retains the captured image in a recoverable format.
10. The system according to claim 1, wherein the first image capture component captures the image in at least one of a nadir position and an oblique position.
11. A method for texturing and modeling a digital surface model (DSM), comprising: placing (1002, 1102) a first image capture device (102, 202, 300, 400, 502, 602) to obtain aerial images, the image capture device (102, 202, 300, 400, 502, 602) comprising a wide-angle lens (302, 402); capturing (1004) the aerial image that includes at least one object located on the surface of the earth; receiving (1006) a request to view the captured aerial image; and presenting (1008, 1108) the requested aerial image in a mapping application.
12. The method according to claim 11, further comprising, before placing the first image capture device, determining a configuration for the first image capture device and at least one second image capture device.
13. The method according to claim 12, wherein the configuration is one of an inward-facing configuration and an outward-facing configuration.
14. The method according to claim 12, further comprising: placing the first image capture device and the at least one second image capture device in a configuration facing inward toward a reflective surface; and capturing, from the reflective surface, the aerial image that includes at least one object located on the surface of the earth.
15. The method according to claim 11, wherein capturing the aerial image that includes at least one object located on the surface of the earth comprises capturing from at least one of a nadir position and an oblique position.
16. The method according to claim 11, further comprising determining at least one of a position measurement, a distance measurement, and an area measurement from the aerial image.
17. The method according to claim 11, further comprising retaining the aerial image in a retrievable format.
18. A system that provides texturing and modeling of DSM images, comprising: means for capturing (102, 202, 300, 400, 502, 602) a first and a second aerial image with a wide-angle lens (302, 402), the aerial images comprising at least one object; means for identifying (104, 204) the at least one object; means for combining (210) the first and second aerial images based on the at least one identified object; and means for presenting (106, 206) the combined aerial images in a mapping application.
19. The system according to claim 18, further comprising: means for capturing a third and a fourth aerial image with a wide-angle lens; and means for synchronizing the means for capturing the first and second aerial images and the means for capturing the third and fourth aerial images.
20. The system according to claim 19, further comprising means for combining the third and fourth aerial images with the first and second aerial images.
MXMX/A/2009/001951A 2006-08-24 2009-02-20 Modeling and texturing digital surface models in a mapping application MX2009001951A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11466952 2006-08-24

Publications (1)

Publication Number Publication Date
MX2009001951A true MX2009001951A (en) 2009-05-13


Similar Documents

Publication Publication Date Title
US7831089B2 (en) Modeling and texturing digital surface models in a mapping application
EP3359918B1 (en) Systems and methods for orienting a user in a map display
US11094113B2 (en) Systems and methods for modeling structures using point clouds derived from stereoscopic image pairs
US8963943B2 (en) Three-dimensional urban modeling apparatus and method
US7944547B2 (en) Method and system of generating 3D images with airborne oblique/vertical imagery, GPS/IMU data, and LIDAR elevation data
CN109801374B (en) Method, medium, and system for reconstructing three-dimensional model through multi-angle image set
US10789673B2 (en) Post capture imagery processing and deployment systems
KR20190020758A (en) Systems for creating complex reality environments
US11276244B2 (en) Fixing holes in a computer generated model of a real-world environment
US10140754B1 (en) Graphical user interface system and method for modeling lighting of areas captured by location scouts
CN113168712A (en) System and method for selecting complementary images from multiple images for 3D geometry extraction
Frueh Automated 3D model generation for urban environments
Kweon et al. Image-processing based panoramic camera employing single fisheye lens
Aliakbarpour et al. Imu-aided 3d reconstruction based on multiple virtual planes
CN111724488B (en) Map scene drawing method and device, readable storage medium and computer equipment
US11172125B2 (en) Method and a system to provide aerial visualization of large landscape area
MX2009001951A (en) Modeling and texturing digital surface models in a mapping application
Orlik et al. 3D modelling using aerial oblique images with close range UAV based data for single objects
Khatiwada et al. Texturing of digital surface maps (DSMs) by selecting the texture from multiple perspective texel swaths taken by a low-cost small unmanned aerial vehicle (UAV)
Scheibe Design and test of algorithms for the evaluation of modern sensors in close-range photogrammetry
Murtiyoso Geospatial recording and point cloud classification of heritage buildings
Mispelhorn et al. Real-time texturing and visualization of a 2.5D terrain model from live LiDAR and RGB data streaming in a remote sensing workflow
Brun et al. On-the-way city mobile mapping using laser range scanner and fisheye camera
Iwaszczuk et al. Quality measure for textures extracted from airborne IR image sequences
Wahbeh Architectural Digital Photogrammetry