CN113610869A - Panoramic monitoring display method based on GIS system - Google Patents

Panoramic monitoring display method based on GIS system

Info

Publication number
CN113610869A
CN113610869A
Authority
CN
China
Prior art keywords
panoramic
point cloud
data
cloud data
panoramic monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110903875.2A
Other languages
Chinese (zh)
Inventor
焦坦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Visionertech Co ltd
Original Assignee
Chengdu Visionertech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Visionertech Co ltd filed Critical Chengdu Visionertech Co ltd
Priority to CN202110903875.2A priority Critical patent/CN113610869A/en
Publication of CN113610869A publication Critical patent/CN113610869A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T3/047 Fisheye or wide-angle transformations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a panoramic monitoring display method based on a GIS (geographic information system), which relates to the field of video monitoring and comprises the following steps: acquiring a panoramic image of the site to be monitored through panoramic monitoring equipment; collecting site images with panoramic scanning equipment along a set flight route, in which a plurality of fixed collection points are defined; processing the collected image data and generating point cloud data by combining the positional relation of the collection points; performing three-dimensional reconstruction on the point cloud data to obtain a three-dimensional model; and displaying the panoramic image and the three-dimensional model as an animation through the GIS system. By combining panoramic monitoring with a GIS system, the invention provides users with omnidirectional, blind-spot-free monitoring and an intuitive sense of the spatial characteristics of the monitored area.

Description

Panoramic monitoring display method based on GIS system
Technical Field
The invention relates to the field of video monitoring, and in particular to a panoramic monitoring display method based on a GIS (geographic information system).
Background
In the current video monitoring field, most monitoring systems are built with ordinary cameras; the display of real-time video is relatively monotonous and flat, and cannot give users omnidirectional, blind-spot-free video monitoring or a three-dimensional sense of the monitored area. Although GIS-based panoramic display already exists on the market, it relies on static offline images and cannot provide blind-spot-free real-time monitoring, so its practical value is low. With the rapid development of panoramic monitoring technology and the wide adoption of GIS technology, display methods that combine panoramas with GIS have become a development trend.
Disclosure of Invention
In view of the technical defects, the invention provides a GIS system-based panoramic monitoring display method.
In order to achieve the purpose, the technical scheme of the invention is as follows:
the panoramic monitoring display method based on the GIS system comprises the following steps:
s1, acquiring a panoramic image of the site to be monitored through the panoramic monitoring equipment;
s2, collecting the field image by the panoramic scanning equipment according to a set flight route, wherein a plurality of fixed collection points are set in the flight route;
s3, processing the collected image data, and generating point cloud data by combining the position relation of the collection points;
s4, performing three-dimensional reconstruction on the point cloud data to obtain a three-dimensional model;
and S5, performing animation display on the panoramic image and the three-dimensional model through a GIS system.
Preferably, the panoramic image acquisition method in step S1 is: and installing the panoramic monitoring equipment in the center of each area according to the size of the site and the effective acquisition area of the panoramic monitoring equipment.
Preferably, the panoramic monitoring device is further provided with a positioning module, and the positioning module is used for reporting the geographic position coordinates of installation of the panoramic monitoring device.
Preferably, the panoramic scanning device in step S2 is embodied as a drone module, and the drone module is mounted with a plurality of lenses supporting oblique photography.
Preferably, the method for collecting the site with the drone module in step S2 includes:
S51, adjusting the drone module to fly to a set height;
S52, the drone module shooting the site from above from a plurality of viewing angles, its flight route running back and forth according to the set overlap rate and direction to complete full-coverage shooting of the site;
and S53, obtaining orderly arranged picture data through the full-coverage shooting of the site, the picture data including longitude and latitude, altitude and shooting attitude information.
Preferably, the three-dimensional reconstruction process of the point cloud data described in step S4 includes point cloud registration, data fusion, and surface reconstruction, where the point cloud registration is:
S61, extracting feature point data among the picture data, and obtaining initial rotation (R) and translation (T) matrices according to the feature point data;
S62, optimizing the R and T matrices through the ICP algorithm to obtain optimized R and T matrices;
S63, transforming the source point cloud data with the optimized R and T matrices to obtain registered point cloud data, wherein the mathematical expression is:
Pt = R × Ps + T
in the formula: Pt is the registered point cloud data, Ps is the source point cloud data, R is the rotation matrix, and T is the translation matrix.
Preferably, the process of data fusion and surface reconstruction is as follows:
s71, constructing a volume grid by taking the point cloud data center after registration as an origin;
s72, segmenting the registered point cloud data through the volume grids to obtain a plurality of cubes;
and S73, constructing the isosurface of the cube through a voxel-level reconstruction algorithm, and combining the isosurfaces of all cubes to generate a three-dimensional model.
The invention has the beneficial effects that: by combining panoramic monitoring with a GIS system, the system gives users omnidirectional, blind-spot-free monitoring and an intuitive sense of the spatial characteristics of the monitored area.
Drawings
Fig. 1 is provided by the present invention: a panoramic equipment installation schematic diagram;
fig. 2 is provided by the present invention: schematic diagram of data grid data acquisition method of unmanned aerial vehicle;
fig. 3 is provided by the present invention: a model point cloud data schematic;
fig. 4 is provided by the present invention: reconstructing a three-dimensional model schematic diagram according to the point cloud data;
fig. 5 is provided by the present invention: the GIS is combined with a panoramic display schematic diagram;
fig. 6 is provided by the present invention: the flow chart is schematic.
Detailed Description
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the accompanying drawings, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
As shown in fig. 1-6, the panoramic monitoring display method based on the GIS system includes the following steps:
s1, acquiring a panoramic image of the site to be monitored through the panoramic monitoring equipment;
s2, collecting the field image by the panoramic scanning equipment according to a set flight route, wherein a plurality of fixed collection points are set in the flight route;
s3, processing the collected image data, and generating point cloud data by combining the position relation of the collection points;
s4, performing three-dimensional reconstruction on the point cloud data to obtain a three-dimensional model;
and S5, performing animation display on the panoramic image and the three-dimensional model through a GIS system.
Preferably, the panoramic image acquisition method in step S1 is: and installing the panoramic monitoring equipment in the center of each area according to the size of the site and the effective acquisition area of the panoramic monitoring equipment.
The method specifically comprises the following steps: on the basis that each panoramic camera can clearly and effectively cover a space of about 100 square meters, the number of devices and their mounting positions are chosen according to the actual size of the site; each panoramic device is mounted at the spatial center of the area it monitors, so that the blind-spot-free monitoring capability of the panoramic equipment is exploited to the maximum extent.
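As a rough illustration of this sizing rule, the device count follows directly from the site area and the ~100-square-meter effective coverage stated above. The function name and the ceiling-division choice are illustrative, not part of the patent:

```python
import math

def plan_panoramic_cameras(site_area_m2, coverage_per_camera_m2=100.0):
    """Estimate the number of panoramic cameras needed, under the rule
    of thumb that one camera clearly covers about 100 square meters."""
    if site_area_m2 <= 0:
        raise ValueError("site area must be positive")
    # round up: a partial leftover area still needs a camera of its own
    return math.ceil(site_area_m2 / coverage_per_camera_m2)

print(plan_panoramic_cameras(350.0))  # → 4
```

Each of those cameras would then be placed at the center of its own roughly 100-square-meter sub-area, per the installation rule above.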
Preferably, the panoramic monitoring device is further provided with a positioning module, and the positioning module is used for reporting the geographic position coordinates of installation of the panoramic monitoring device.
Preferably, the panoramic scanning device in step S2 is embodied as a drone module, and the drone module is mounted with a plurality of lenses supporting oblique photography.
Preferably, the method for collecting the site with the drone module in step S2 includes:
S51, adjusting the drone module to fly to a set height;
S52, the drone module shooting the site from above from a plurality of viewing angles, its flight route running back and forth according to the set overlap rate and direction to complete full-coverage shooting of the site;
and S53, obtaining orderly arranged picture data through the full-coverage shooting of the site, the picture data including longitude and latitude, altitude and shooting attitude information.
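The set image overlap rate in S52 (the description later gives about 70% as an example) fixes how far apart adjacent flight lines can be. A minimal sketch, assuming a flat site and a nadir-pointing lens; the function names and the height/field-of-view inputs are illustrative assumptions, not values from the patent:

```python
import math

def ground_footprint(height_m, fov_deg):
    """Width of the strip one image covers on the ground, from flight
    height and lens field of view (flat-terrain, nadir-view assumption)."""
    return 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)

def flight_line_spacing(footprint_m, overlap=0.70):
    """Adjacent flight lines must be closer than one footprint so that
    consecutive images overlap by the given ratio."""
    return footprint_m * (1.0 - overlap)

w = ground_footprint(height_m=100.0, fov_deg=60.0)
print(round(w, 1), round(flight_line_spacing(w), 1))  # → 115.5 34.6
```

At a hypothetical 100 m flight height with a 60° lens, lines roughly 35 m apart give the 70% side overlap needed for reliable feature matching between neighbouring images.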
Preferably, the three-dimensional reconstruction process of the point cloud data described in step S4 includes point cloud registration, data fusion, and surface reconstruction, where the point cloud registration is:
S61, extracting feature point data among the picture data, and obtaining initial rotation (R) and translation (T) matrices according to the feature point data;
S62, optimizing the R and T matrices through the ICP algorithm to obtain optimized R and T matrices;
S63, transforming the source point cloud data with the optimized R and T matrices to obtain registered point cloud data, wherein the mathematical expression is:
Pt = R × Ps + T
in the formula: Pt is the registered point cloud data, Ps is the source point cloud data, R is the rotation matrix, and T is the translation matrix.
Preferably, the process of data fusion and surface reconstruction is as follows:
s71, constructing a volume grid by taking the point cloud data center after registration as an origin;
s72, segmenting the registered point cloud data through the volume grids to obtain a plurality of cubes;
and S73, constructing the isosurface of the cube through a voxel-level reconstruction algorithm, and combining the isosurfaces of all cubes to generate a three-dimensional model.
The installation method for the panoramic monitoring equipment comprises the following steps:
S1: Install each device as close as possible to the center of its local area of the monitoring scene, so as to make full use of the panorama's freedom from blind spots.
S2: Keep the effective monitoring area of each panoramic camera within 100 square meters.
S3: Keep a 3 m radius around each panoramic camera free of occluding objects.
The method for acquiring the geographic model data comprises the following steps:
S1: Data acquisition
S11: The unmanned aerial vehicle supporting multi-lens oblique photography is flown to a suitable height.
S12: The unmanned aerial vehicle photography system shoots the ground from a plurality of viewing angles.
S13: The flight route runs back and forth along a set direction with an image overlap rate of about 70%, covering the whole site in sequence so as to achieve full-coverage aerial photography of the target terrain.
S14: Finally, orderly arranged picture data are obtained, containing longitude and latitude, altitude, shooting attitude and other information.
S2: data processing
S21: and carrying out distortion correction on each picture according to the shot posture information of each picture.
S22: and generating point cloud data by combining the relation between each point according to the point data (longitude and latitude height information) obtained by shooting.
S23: and extracting characteristic points of the adjacent shot photos, and fusing image data according to the characteristic information.
S3: output model
And performing three-dimensional reconstruction on the point cloud data to obtain a three-dimensional model.
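Before points from different pictures can be related, the latitude, longitude, and altitude recorded with each picture must be expressed in one common metric frame. The patent does not specify the conversion; one standard choice (an assumption here) is the WGS-84 geodetic-to-ECEF formula:

```python
import math

# WGS-84 ellipsoid constants (assumed datum; the patent does not name one)
A = 6378137.0                # semi-major axis, meters
F = 1.0 / 298.257223563      # flattening
E2 = F * (2.0 - F)           # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """Convert latitude/longitude/altitude into Earth-centered XYZ, so
    that per-image positions share one metric frame before point cloud
    generation."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(0.0, 0.0, 0.0))  # → (6378137.0, 0.0, 0.0)
```

In practice the ECEF coordinates would be further shifted to a local origin near the site so the point cloud values stay small.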
The three-dimensional modeling method of the monitoring area comprises the following steps:
S1: Panoramic image acquisition, with the distance between two adjacent acquisition points controlled within 1 m.
S2: The acquired panoramic images are processed through visual SLAM to obtain three-dimensional point cloud data.
S3: Three-dimensional reconstruction is performed on the point cloud data to obtain a three-dimensional model.
Point cloud data modeling process:
S1: Point cloud registration
S11: Coarse registration. Feature point data are extracted between images; the features include explicit features such as straight lines, inflection points and curve curvature, as well as custom features such as point signatures, spin images and axes. From these feature data, an initial rotation and translation matrix is obtained.
S12: Fine registration. Fine registration has largely standardized on the ICP (iterative closest point) algorithm and its many variants (the ICP algorithm is not described in detail here). Accurately optimized R and T matrices are obtained through the ICP algorithm.
S13: After registration, a transformation matrix mapping the source point cloud onto the target point cloud is obtained, which can be expressed as: Pt = R × Ps + T.
(Pt is the target point cloud data, Ps is the source point cloud data, R is the rotation matrix, and T is the translation matrix)
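The registration loop above (match nearest points, solve for R and T, iterate, then apply Pt = R × Ps + T) can be sketched with NumPy. This is a generic point-to-point ICP under simplifying assumptions (brute-force matching, equal-sized clouds), not the patent's implementation, and the demo transform at the end is invented for illustration:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match every source point to its
    nearest destination point, then solve for the rigid R, T that best
    aligns the matched pairs (Kabsch / SVD)."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]           # brute-force nearest neighbours
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)      # cross-covariance of centred pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = mu_d - R @ mu_s
    return R, T

def icp(src, dst, iters=20):
    """Iterate, accumulating the total transform Pt = R x Ps + T."""
    R_tot, T_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        R, T = icp_step(cur, dst)
        cur = cur @ R.T + T
        R_tot, T_tot = R @ R_tot, R @ T_tot + T
    return R_tot, T_tot

# demo: recover a small known rigid motion of a 3x3x3 grid of points
a = 0.01
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
T_true = np.array([0.01, -0.02, 0.03])
src = np.stack(np.meshgrid(*[np.arange(3.0) * 2] * 3, indexing="ij"), -1).reshape(-1, 3)
dst = src @ R_true.T + T_true                  # Pt = R x Ps + T, applied per row
R_est, T_est = icp(src, dst)
print(np.allclose(src @ R_est.T + T_est, dst, atol=1e-6))  # → True
```

A real system would seed this with the coarse feature-based R, T from S11 (so the nearest-neighbour matching starts close to correct) and use a k-d tree instead of the O(N·M) distance matrix.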
S2: data fusion
S21: the point cloud data after registration is scattered and disordered data in space, the scene information is not fully displayed, and fusion processing is needed to obtain a fine reconstruction model.
S22: and constructing a volume grid by taking each point cloud data center as an origin, and dividing the point cloud data into independent small cubes (voxels) by the grid.
S23: the surface is simulated by assigning SDF (effective distance field) values to the voxels. The SDF value is equal to the minimum distance value of this voxel to the reconstructed surface. When the SDF value is larger than zero, the voxel is in front of the surface; when the SDF is less than zero, it indicates that the voxel is behind the surface; as the SDF value approaches zero, it indicates that the voxel is closer to the real surface of the scene.
S3: surface reconstruction
S31: the purpose of surface reconstruction is to directly process raw gray-scale volume data using a voxel-level method in order to construct a visual iso-surface of an object.
S32: voxel level reconstruction algorithm: MC (Marching Cube) method. The marching cubes method first stores eight adjacently located data in a data field at eight vertices of a tetrahedral voxel, respectively. For two end points of an edge on a boundary voxel, when one of the values is greater than a given constant T and the other is less than T, there must be a vertex of the iso-surface on the edge. And then calculating the intersection points of twelve edges in the voxel and the isosurface, and constructing triangular patches in the voxel, wherein all the triangular patches divide the voxel into two areas, namely an isosurface area and an isosurface area. And finally, connecting the triangular patches of all voxels in the data field to form an isosurface. Merging the iso-surfaces of all cubes results in a complete three-dimensional surface.
The foregoing is illustrative of the preferred embodiments of this invention, and it is to be understood that the invention is not limited to the precise form disclosed herein and that various other combinations, modifications, and environments may be resorted to, falling within the scope of the concept as disclosed herein, either as described above or as apparent to those skilled in the relevant art. And that modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. The GIS system-based panoramic monitoring display method is characterized by comprising the following steps of:
s1, acquiring a panoramic image of the site to be monitored through the panoramic monitoring equipment;
s2, collecting the field image by the panoramic scanning equipment according to a set flight route, wherein a plurality of fixed collection points are set in the flight route;
s3, processing the collected image data, and generating point cloud data by combining the position relation of the collection points;
s4, performing three-dimensional reconstruction on the point cloud data to obtain a three-dimensional model;
and S5, performing animation display on the panoramic image and the three-dimensional model through a GIS system.
2. The GIS system based panoramic monitoring display method of claim 1, wherein the panoramic image acquisition method in step S1 is as follows: and installing the panoramic monitoring equipment in the center of each area according to the size of the site and the effective acquisition area of the panoramic monitoring equipment.
3. The GIS system-based panoramic monitoring display method according to claim 2, wherein the panoramic monitoring device is further provided with a positioning module, and the positioning module is used for reporting geographic position coordinates of installation of the panoramic monitoring device.
4. The GIS system-based panoramic monitoring display method according to claim 1, wherein the panoramic scanning device in step S2 is an unmanned aerial vehicle module, and the unmanned aerial vehicle module is provided with a plurality of lenses supporting oblique photography.
5. The GIS system-based panoramic monitoring display method of claim 4, wherein the method for collecting the site by the UAV module in step S2 is as follows:
S51, adjusting the unmanned aerial vehicle module to fly to a set height;
S52, the unmanned aerial vehicle module shooting the site from above from a plurality of viewing angles, its flight route running back and forth according to the set overlap rate and direction to complete full-coverage shooting of the site;
and S53, obtaining orderly arranged picture data through the full-coverage shooting of the site, the picture data including longitude and latitude, altitude and shooting attitude information.
6. The GIS system based panoramic monitoring and displaying method according to claim 1, wherein the three-dimensional reconstruction process of the point cloud data in step S4 includes point cloud registration, data fusion and surface reconstruction, wherein the point cloud registration is:
S61, extracting feature point data among the picture data, and obtaining initial rotation (R) and translation (T) matrices according to the feature point data;
S62, optimizing the R and T matrices through the ICP algorithm to obtain optimized R and T matrices;
S63, transforming the source point cloud data with the optimized R and T matrices to obtain registered point cloud data, wherein the mathematical expression is:
Pt = R × Ps + T
in the formula: Pt is the registered point cloud data, Ps is the source point cloud data, R is the rotation matrix, and T is the translation matrix.
7. The GIS system-based panoramic monitoring and displaying method according to claim 6, wherein the data fusion and surface reconstruction processes are as follows:
s71, constructing a volume grid by taking the point cloud data center after registration as an origin;
s72, segmenting the registered point cloud data through the volume grids to obtain a plurality of cubes;
and S73, constructing the isosurface of the cube through a voxel-level reconstruction algorithm, and combining the isosurfaces of all cubes to generate a three-dimensional model.
CN202110903875.2A 2021-08-06 2021-08-06 Panoramic monitoring display method based on GIS system Pending CN113610869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110903875.2A CN113610869A (en) 2021-08-06 2021-08-06 Panoramic monitoring display method based on GIS system


Publications (1)

Publication Number Publication Date
CN113610869A true CN113610869A (en) 2021-11-05

Family

ID=78307520

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110903875.2A Pending CN113610869A (en) 2021-08-06 2021-08-06 Panoramic monitoring display method based on GIS system

Country Status (1)

Country Link
CN (1) CN113610869A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931234A (en) * 2016-04-19 2016-09-07 东北林业大学 Ground three-dimensional laser scanning point cloud and image fusion and registration method
CN106803267A (en) * 2017-01-10 2017-06-06 西安电子科技大学 Indoor scene three-dimensional rebuilding method based on Kinect
CN110223387A (en) * 2019-05-17 2019-09-10 武汉奥贝赛维数码科技有限公司 A kind of reconstructing three-dimensional model technology based on deep learning
CN110428501A (en) * 2019-08-01 2019-11-08 北京优艺康光学技术有限公司 Full-view image generation method, device, electronic equipment and readable storage medium storing program for executing
CN110765528A (en) * 2019-10-22 2020-02-07 江苏瑞中数据股份有限公司 Three-dimensional reconstruction transformer substation implementation method based on virtual simulation technology
US20200090303A1 (en) * 2016-12-16 2020-03-19 Hangzhou Hikvision Digital Technology Co., Ltd. Method and device for fusing panoramic video images
CN112449093A (en) * 2020-11-05 2021-03-05 北京德火科技有限责任公司 Three-dimensional panoramic video fusion monitoring platform
US11037346B1 (en) * 2020-04-29 2021-06-15 Nanjing University Of Aeronautics And Astronautics Multi-station scanning global point cloud registration method based on graph optimization



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination