CN104486584A - City video map method based on augmented reality - Google Patents

City video map method based on augmented reality

Info

Publication number
CN104486584A
CN104486584A (application CN201410794732.2A)
Authority
CN
China
Prior art keywords
video
data
city
augmented reality
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410794732.2A
Other languages
Chinese (zh)
Inventor
修文群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN201410794732.2A priority Critical patent/CN104486584A/en
Publication of CN104486584A publication Critical patent/CN104486584A/en
Pending legal-status Critical Current

Links

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a city video map system and method based on augmented reality. City videos are spatially positioned; the spatio-temporal locations, semantic attributes, and vector features of the positioned videos are extracted and stereoscopically projected onto an electronic map to create a city video map. Augmented reality technology is then used to replay city surveillance videos stereoscopically in the real spaces where they were recorded, and interactive query and analysis are carried out on that basis. The invention creates a novel way of applying surveillance video.

Description

City video map method based on augmented reality
Technical field
The present invention relates to geographic information systems (GIS), and in particular to a city video map method and system based on augmented reality.
Background technology
Augmented reality (AR) is a new technology developed from virtual reality. It adds computer-generated information to the user's perception of the real world: virtual objects, scenes, or system prompts produced by the computer are superimposed on the real scene, thereby enhancing reality. AR is typically realized with a see-through head-mounted display combined with a registration system that aligns the computer-generated virtual objects with the user's viewpoint.
Application No. 201310675740.0 provides a positioning and tracking method based on a video surveillance network;
Application No. 201310105427.3 provides a multi-video association monitoring and positioning device and method based on spatial information;
Application No. 201310443078.6 provides a method for adding geographic location information to video files and building an index on it;
Application No. 201410115063.1 provides a camera system and method for video semantic retrieval synchronized with compression;
Application No. 201310695141.5 provides a GIS-based video abstract generation method;
Application No. 201310589220.8 provides a stereoscopic street-view video projection method and system;
Application No. 201310676340.1 provides a dynamic positioning video electronic map projection system and method.
The present application builds on the above patents and, by combining them with augmented reality technology, further provides a city video map system based on augmented reality.
Summary of the invention
This patent provides a city video map system based on augmented reality: building on positioned video and GIS, it uses augmented reality technology to create a realistic city video map.
To achieve the above object, the present invention adopts the following technical solution:
A city video map method based on augmented reality, comprising the steps of:
locating each video monitoring device to obtain the spatial data of the monitoring devices;
based on the spatial data, integrating and associating the video data from multiple monitoring devices;
adding spatial position information to the video data to form positioned video;
extracting key frames and their semantic information from the positioned video;
extracting the vector trajectories of targets in the positioned video;
stereoscopically projecting the fixed surveillance video in the positioned video onto an electronic map to form a city video map;
stereoscopically projecting the dynamic surveillance video in the positioned video onto an electronic map to form a city video map;
through an augmented reality system, superimposing the city video map onto the three-dimensional real scene to form an augmented reality view, thereby realizing stereoscopic playback, query, and analysis of the city video map.
The city video map system and method based on augmented reality provided by the invention spatially position city videos; extract the spatio-temporal locations, semantic attributes, and vector features of the positioned videos and stereoscopically project them onto an electronic map to form a city video map; and, combined with augmented reality technology, replay city surveillance video stereoscopically in the real space where it was recorded, with interactive query and analysis carried out on that basis, creating a new way of applying surveillance video.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the city video map method based on augmented reality provided by the invention.
Fig. 2 is a flow chart of the steps of locating each video monitoring device to obtain the spatial data of the monitoring devices.
Fig. 3 is a schematic diagram of the principle of integrating and associating the video data from multiple monitoring devices based on the spatial data.
Fig. 4 is a flow chart of the steps of adding spatial position information to the video data to form positioned video.
Fig. 5 is a flow chart of the steps of extracting key frames and their semantic information from the positioned video.
Fig. 6 is a flow chart of the steps of extracting the vector trajectories of targets in the positioned video.
Fig. 7 is a flow chart of the steps of stereoscopically projecting the fixed surveillance video in the positioned video, together with its key-frame semantic information and vector data, onto an electronic map to form a city video map.
Fig. 8 is a flow chart of the steps of superimposing the city video map onto the three-dimensional real scene through an augmented reality system to form an augmented reality view, realizing stereoscopic playback, query, and analysis of the city video map.
Detailed description of the embodiments
To make the objects, technical solutions, and beneficial effects of the present invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
Referring to Fig. 1, the city video map method 100 based on augmented reality provided by the invention comprises the following steps:
Step S110: locate each video monitoring device to obtain the spatial data of the monitoring devices.
Referring to Fig. 2, locating each video monitoring device to obtain its spatial data comprises the steps:
Step S111: obtain the geographic coordinates of each video monitoring device;
Step S112: with each video monitoring device as the centre, establish the spatial coordinate system of the video surveillance network according to the monitoring range;
Step S113: select feature points for surveying, obtaining the geographic coordinates of each feature point and the pixel position of the corresponding point in the imaging array of the video monitoring device;
Step S114: with the above feature points as control points, carry out a projective transformation and coordinate conversion so that the other ground pixels in the imaging array acquire corresponding geographic coordinate values;
Step S115: when a target appears in the monitoring range, obtain the pixel coordinates of the target and, combined with the surrounding environmental relations, convert them into the geographic coordinates of the target;
Step S116: by retrieving the geographic coordinates of the target in different frames across the video surveillance network, form the motion track of the target.
It can be appreciated that steps S111 to S116 locate each video monitoring device and obtain the spatial data of the monitoring devices.
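For illustration, steps S113 to S115 amount to fitting a projective transform (homography) between ground pixels and geographic coordinates using surveyed control points. The Python sketch below makes that concrete; the control-point values, the planar-ground assumption, and the use of OpenCV's findHomography are illustrative choices, not part of the disclosure.

```python
import numpy as np
import cv2

# Surveyed feature points (step S113): pixel (u, v) -> geographic (lon, lat).
# The coordinate values are illustrative only.
pixel_pts = np.array([[120, 460], [510, 455], [300, 250], [620, 240]], dtype=np.float32)
geo_pts = np.array([[114.0571, 22.5430], [114.0579, 22.5431],
                    [114.0574, 22.5438], [114.0582, 22.5439]], dtype=np.float32)

# Step S114: fit a projective transform with the feature points as control
# points. Assumes the monitored ground surface is roughly planar.
H, _ = cv2.findHomography(pixel_pts, geo_pts)

def pixel_to_geo(u, v):
    """Step S115: convert a ground pixel to geographic coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

print(pixel_to_geo(400, 350))  # geographic coordinates of an arbitrary ground pixel
```

Repeating the same conversion for the target's pixel position in successive frames, across the cameras of the network, yields the motion track of step S116.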
Step S120: based on the spatial data, integrate and associate the video data from multiple monitoring devices.
Referring to Fig. 3, a schematic diagram of the principle of integrating and associating the video data from multiple monitoring devices based on the spatial data: the arrangement comprises several camera devices and a data server, the data server comprising a data processing module 121b, a storage module 122b, a separation module 123b, a feature extraction module 124b, a discrimination module 125b, and a control module 126b, wherein:
the camera devices are connected with the data server and comprise several mobile camera devices 127b and fixed camera devices 128b;
a mobile camera device 127b obtains its position information while capturing video and attaches this position information to the video data it shoots, and the storage module 122b stores the video data containing this position information;
the storage module 122b also stores the position information of the fixed camera devices 128b; the data processing module 121b attaches this position information to the video data shot by the fixed camera devices 128b, and the storage module 122b stores the video data containing this position information;
the storage module 122b stores a monitoring radius R; the data processing module 121b obtains the position information attached to the video data containing the target, and the control module 126b sets the camera devices within radius R of this position to the monitor state;
the separation module 123b separates the foreground, which contains the target, from the background in the video data; the feature extraction module 124b extracts feature information of the background; the discrimination module 125b discriminates the location type of the background from this feature information; and the control module 126b sets the camera devices located at this location type to the monitor state.
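A minimal sketch of the radius-based activation in step S120 follows; the camera positions, the local metric grid, and the value of R are assumptions for illustration, not values from the patent.

```python
import math

R = 300.0  # monitoring radius in metres (held by the storage module)

cameras = {  # camera id -> (x, y) in a local metric grid
    "fixed_01": (0.0, 0.0),
    "fixed_02": (250.0, 100.0),
    "mobile_01": (900.0, 400.0),
}

def activate_within_radius(target_xy, cameras, radius):
    """Return the cameras the control module should set to the monitor state."""
    tx, ty = target_xy
    return [cid for cid, (x, y) in cameras.items()
            if math.hypot(x - tx, y - ty) <= radius]

print(activate_within_radius((100.0, 50.0), cameras, R))  # -> ['fixed_01', 'fixed_02']
```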
Step S130: add spatial position information to the video data to form positioned video.
Referring to Fig. 4, adding spatial position information to the video data to form positioned video comprises the steps:
Step S131: obtain the geographic location data of the acquisition point of the video data;
Step S132: insert the geographic location data into a reserved field of the file header of the video data;
Step S133: with the geographic location data as an index, build a video file database supporting spatial query, clustering, and association analysis.
It can be appreciated that steps S131 to S133 add spatial position information to the video data and form positioned video.
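The sketch below illustrates steps S132 and S133; the 16-byte reserved-field layout (two little-endian doubles) and the in-memory radius query are hypothetical stand-ins for whatever container format and spatial database an implementation would actually use.

```python
import math
import struct

def tag_video_header(header: bytes, offset: int, lon: float, lat: float) -> bytes:
    """Step S132: write (lon, lat) into a 16-byte reserved field at `offset`.
    The two-little-endian-doubles layout is a hypothetical illustration."""
    return header[:offset] + struct.pack("<dd", lon, lat) + header[offset + 16:]

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in metres between two (lon, lat) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

# Step S133: index positioned videos by acquisition point and query by radius.
videos = [("cam_01.mp4", 114.0571, 22.5430), ("cam_02.mp4", 114.0650, 22.5500)]
near = [name for name, lon, lat in videos
        if haversine_m(lon, lat, 114.0575, 22.5432) < 200.0]
print(near)  # -> ['cam_01.mp4']
```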
Step S140: extract key frames and their semantic information from the positioned video.
Referring to Fig. 5, extracting key frames and their semantic information from the positioned video comprises the steps:
Step S141: personalized setup, comprising:
Step S141a: select the set of specific targets;
Step S141b: establish the video feature semantic base of the positioned video;
Step S141c: carry out sample training on sample videos in an offline environment to obtain the training parameter set;
Step S141d: configure the training parameters in a classifier.
Step S142: application, comprising:
Step S142a: obtain video and start compression;
Step S142b: extract key frames in the compressed domain;
Step S142c: extract moving objects from the key frames;
Step S142d: extract semantic features from the key frames or moving objects;
Step S142e: read the training parameter set in the classifier;
Step S142f: match the extracted semantic features against the training parameter set to obtain the semantic index of the video.
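A minimal sketch of the two phases of step S140 follows, assuming scikit-learn is available, an SVM stands in for the unspecified classifier, and random vectors stand in for compressed-domain semantic features; the patent names neither a model nor a feature set.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Offline phase (S141): sample training on labelled sample-video features.
train_feats = rng.normal(size=(200, 16))     # per-key-frame feature vectors
train_labels = rng.integers(0, 3, size=200)  # semantic classes, e.g. person/vehicle/other
classifier = SVC().fit(train_feats, train_labels)  # S141d: parameters held by the classifier

# Online phase (S142): features extracted from new key frames are matched
# against the trained parameters to build the video's semantic index.
new_feats = rng.normal(size=(5, 16))
semantic_index = classifier.predict(new_feats)
print(semantic_index)
```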
Step S150: extract the vector trajectories of targets in the positioned video.
Referring to Fig. 6, extracting the vector trajectory of a target in the positioned video comprises the steps:
Step S151: obtain the geographic coordinates of the target in the positioned video;
Step S152: place the pixel coordinates of the target in the key frames of the positioned video in one-to-one correspondence with its geographic coordinates;
Step S153: carry out edge detection on the target frame by frame in the positioned video to obtain the pixel coordinates of its edge feature points;
Step S154: record the pixel coordinates, geographic coordinates, shooting time, and frame number of the target in a data table;
Step S155: by superposing frame by frame, form a time-varying vector-stream data table, converting the original video surveillance data into a set of coordinate values of [pixel coordinates + longitude/latitude + shooting time] stored in the GIS, thereby forming a continuous video summary of the target object.
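The following sketch illustrates steps S153 to S155: per-frame edge detection followed by appending rows of [pixel coordinates + longitude/latitude + shooting time + frame number] to the trajectory table. The video filename, the edge-centroid heuristic, and the placeholder pixel_to_geo mapping are assumptions for illustration.

```python
import cv2

def pixel_to_geo(u, v):
    """Placeholder for the homography mapping of steps S113-S115;
    returns made-up geographic coordinates for illustration."""
    return 114.0571 + u * 1e-6, 22.5430 + v * 1e-6

cap = cv2.VideoCapture("positioned_video.mp4")  # hypothetical positioned video
trajectory = []  # the time-ordered vector-stream data table of step S155

frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Step S153: edge detection on the frame to find the target's edge points.
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 100, 200)
    ys, xs = edges.nonzero()
    if xs.size:
        u, v = int(xs.mean()), int(ys.mean())  # crude centroid of the edge points
        lon, lat = pixel_to_geo(u, v)
        t = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
        # Step S154: record pixel coords, geographic coords, time, frame number.
        trajectory.append((u, v, lon, lat, t, frame_no))
    frame_no += 1
cap.release()
print(len(trajectory), "trajectory rows")
```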
Step S160: stereoscopically project the fixed surveillance video in the positioned video onto an electronic map to form a city video map.
Referring to Fig. 7, stereoscopically projecting the fixed surveillance video in the positioned video onto an electronic map to form a city video map comprises the steps:
Step S161: obtain a panoramic or half-scene street-view video stream from the positioned video;
Step S162: decompose the panoramic or half-scene video stream into multiple street-view frames and project them onto a spherical coordinate system;
Step S163: superpose the street-view frames and establish a video stream channel between them;
Step S164: on the spherical coordinate system, play the street-view frames continuously along the video stream channel.
Here, "panoramic" video means 360-degree surveillance video, while ordinary single-angle video surveillance can be regarded as "half-scene" video and is projected onto the corresponding part of the spherical coordinate system according to its monitoring orientation.
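As a sketch of the spherical mapping in steps S162 to S164, the function below maps a pixel of an equirectangular panorama frame to (azimuth, elevation) on a viewing sphere centred on the camera, and shows the angular window a "half-scene" camera would occupy given its heading; the equirectangular format and the 60-degree field of view are assumptions, not requirements of the patent.

```python
import math

def equirect_to_sphere(u, v, width, height):
    """Pixel (u, v) of a width x height panorama -> (azimuth, elevation) in degrees."""
    azimuth = (u / width) * 360.0 - 180.0    # -180..180 around the sphere
    elevation = 90.0 - (v / height) * 180.0  # +90 (up) .. -90 (down)
    return azimuth, elevation

def half_scene_window(heading_deg, hfov_deg=60.0):
    """Angular span a fixed single-angle camera occupies on the sphere."""
    return heading_deg - hfov_deg / 2, heading_deg + hfov_deg / 2

print(equirect_to_sphere(960, 240, 1920, 960))  # -> (0.0, 45.0)
print(half_scene_window(135.0))                 # -> (105.0, 165.0)
```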
Step S170: stereoscopically project the dynamic surveillance video in the positioned video, together with its key-frame semantic information and vector data, onto an electronic map to form a city video map.
Specifically, stereoscopically projecting the dynamic surveillance video in the positioned video, together with its key-frame semantics and vector data, onto an electronic map to form a city video map comprises:
with the shooting point of the positioned video as the centre coordinate, projecting the panoramic or half-scene video and its attached information, frame by frame, onto the three-dimensional spherical coordinate system at the corresponding position on the electronic map to form a city video map.
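For illustration, anchoring a video at its shooting point on the three-dimensional map requires converting the shooting point's (longitude, latitude, elevation) into map-local metres; the small-area equirectangular approximation and the chosen origin below are assumptions.

```python
import math

def geo_to_enu(lon, lat, h, lon0, lat0, h0):
    """Small-area approximation: degrees to metres east/north, height up,
    relative to a map origin (lon0, lat0, h0)."""
    r = 6371000.0
    east = math.radians(lon - lon0) * r * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * r
    return east, north, h - h0

# Place a video billboard at the camera's shooting point on the 3D map.
print(geo_to_enu(114.0580, 22.5440, 25.0, 114.0571, 22.5430, 0.0))
```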
It can be appreciated that steps S110 to S170 realize GIS-integrated management and stereoscopic playback of surveillance video based on coordinatized, summarized (key-frame), semantic, and vectorized information, forming a city video map.
Step S180: through an augmented reality system, superimpose the city video map onto the three-dimensional real scene to form an augmented reality view, realizing stereoscopic playback, query, and analysis of the city video map.
Specifically, referring to Fig. 8, step S180 comprises the steps:
Step S181: capture the real scene with a camera;
Step S182: establish the transformation between the virtual-space coordinate system and the real-space coordinate system so that virtual objects are merged at the correct positions in the real world, rebuilding the transformation as the observer's position changes;
Step S183: superimpose the city video map, with longitude, latitude, and elevation as coordinates, onto the three-dimensional real scene to form the augmented reality view;
Step S184: realize natural interaction between the observer and the augmented reality view through interactive tools, displaying the city video map at its real position.
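A minimal sketch of the registration in step S182 follows, assuming a rigid 4x4 homogeneous transform between the virtual and real coordinate systems that is rebuilt whenever the observer's pose changes; the patent does not prescribe this representation.

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous virtual-to-real transform from rotation R
    and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def virtual_to_real(T, p_virtual):
    """Map a virtual-space point into real-space coordinates."""
    p = T @ np.append(p_virtual, 1.0)
    return p[:3]

# Observer moved: rebuild T from the new pose (identity rotation here).
T = make_transform(np.eye(3), np.array([2.0, 0.5, -1.0]))
print(virtual_to_real(T, np.array([0.0, 0.0, 3.0])))  # -> [2.  0.5 2. ]
```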
The city video map method based on augmented reality provided by the invention spatially positions city videos; extracts the spatio-temporal locations, semantic attributes, and vector features of the positioned videos and stereoscopically projects them onto an electronic map to form a city video map; and, combined with augmented reality technology, replays city surveillance video stereoscopically in the real space where it was recorded, with interactive query and analysis carried out on that basis, creating a new way of applying surveillance video.
It should be noted that the above embodiments each emphasize different aspects; for details not elaborated in one embodiment, reference may be made to the detailed description elsewhere in the specification, which is not repeated here.
The above are only preferred embodiments of the present invention. It should be pointed out that those skilled in the art may make improvements and modifications without departing from the principles of the invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the invention.

Claims (9)

1. A city video map method based on augmented reality, characterized in that it comprises the steps of:
locating each video monitoring device to obtain the spatial data of the monitoring devices;
based on the spatial data, integrating and associating the video data from multiple monitoring devices;
adding spatial position information to the video data to form positioned video;
extracting key frames and their semantic information from the positioned video;
extracting the vector trajectories of targets in the positioned video;
stereoscopically projecting the fixed surveillance video in the positioned video, together with its key-frame semantic information and vector data, onto an electronic map to form a city video map;
stereoscopically projecting the dynamic surveillance video in the positioned video, together with its key-frame semantic information and vector data, onto an electronic map to form a city video map;
through an augmented reality system, superimposing the city video map onto the three-dimensional real scene to form an augmented reality view, realizing stereoscopic playback, query, and analysis of the city video map.
2. The city video map method based on augmented reality of claim 1, characterized in that locating each video monitoring device to obtain the spatial data of the monitoring devices comprises:
obtaining the geographic coordinates of each video monitoring device;
with each video monitoring device as the centre, establishing the spatial coordinate system of the video surveillance network according to the monitoring range;
selecting feature points for surveying, obtaining the geographic coordinates of each feature point and the pixel position of the corresponding point in the imaging array of the video monitoring device;
with the above feature points as control points, carrying out a projective transformation and coordinate conversion so that the other ground pixels in the imaging array acquire corresponding geographic coordinate values;
when a target appears in the monitoring range, obtaining the pixel coordinates of the target and, combined with the surrounding environmental relations, converting them into the geographic coordinates of the target;
by retrieving the geographic coordinates of the target in different frames across the video surveillance network, forming the motion track of the target.
3. The city video map method based on augmented reality of claim 1, characterized in that integrating and associating the video data from multiple monitoring devices based on the spatial data involves a video association module comprising several camera devices and a data server, the data server comprising a data processing module, a storage module, a separation module, a feature extraction module, a discrimination module, and a control module;
the camera devices are connected with the data server and comprise several mobile camera devices and fixed camera devices;
a mobile camera device obtains its position information while capturing video and attaches this position information to the video data it shoots, and the storage module stores the video data containing this position information;
the storage module also stores the position information of the fixed camera devices; the data processing module attaches this position information to the video data shot by the fixed camera devices, and the storage module stores the video data containing this position information;
the storage module stores a monitoring radius R; the data processing module obtains the position information attached to the video data containing the target, and the control module sets the camera devices within radius R of this position to the monitor state;
the separation module separates the foreground, which contains the target, from the background in the video data; the feature extraction module extracts feature information of the background; the discrimination module discriminates the location type of the background from this feature information; and the control module sets the camera devices located at this location type to the monitor state.
4. The city video map method based on augmented reality of claim 1, characterized in that adding spatial position information to the video data to form positioned video comprises:
obtaining the geographic location data of the acquisition point of the video data;
inserting the geographic location data into a reserved field of the file header of the video data;
with the geographic location data as an index, building a video file database supporting spatial query, clustering, and association analysis.
5. The city video map method based on augmented reality of claim 1, characterized in that extracting key frames and their semantic information from the positioned video comprises a personalized setup phase and an application phase, wherein:
the personalized setup comprises: selecting the set of specific targets; establishing the video feature semantic base of the positioned video; carrying out sample training on sample videos in an offline environment to obtain the training parameter set; and configuring the training parameters in a classifier;
the application comprises: obtaining video and starting compression; extracting key frames in the compressed domain; extracting moving objects from the key frames; extracting semantic features from the key frames or moving objects; reading the training parameter set in the classifier; and matching the extracted semantic features against the training parameter set to obtain the semantic index of the video.
6. The city video map method based on augmented reality of claim 1, characterized in that extracting the vector trajectory of a target in the positioned video comprises:
obtaining the geographic coordinates of the target in the positioned video;
placing the pixel coordinates of the target in the key frames of the positioned video in one-to-one correspondence with its geographic coordinates;
carrying out edge detection on the target frame by frame in the positioned video to obtain the pixel coordinates of its edge feature points;
recording the pixel coordinates, geographic coordinates, shooting time, and frame number of the target in a data table;
by superposing frame by frame, forming a time-varying vector-stream data table, converting the original video surveillance data into a set of coordinate values of [pixel coordinates + longitude/latitude + shooting time] stored in the GIS, thereby forming a continuous video summary of the target object.
7. The city video map method based on augmented reality of claim 1, characterized in that stereoscopically projecting the fixed surveillance video in the positioned video onto an electronic map to form a city video map comprises:
obtaining a panoramic or half-scene street-view video stream from the positioned video; decomposing the panoramic or half-scene video stream into multiple street-view frames and projecting them onto a spherical coordinate system; superposing the street-view frames and establishing a video stream channel between them; and, on the spherical coordinate system, playing the street-view frames continuously along the video stream channel.
8. The city video map method based on augmented reality of claim 1, characterized in that stereoscopically projecting the dynamic surveillance video in the positioned video onto an electronic map to form a city video map comprises:
with the shooting point of the positioned video as the centre coordinate, projecting the video, frame by frame, onto the three-dimensional spherical coordinate system at the corresponding position on the electronic map to form a city video map.
9. The city video map method based on augmented reality of claim 1, characterized in that superimposing the city video map onto the three-dimensional real scene through an augmented reality system to form an augmented reality view, realizing stereoscopic playback, query, and analysis of the city video map, comprises:
capturing the real scene with a camera;
establishing the transformation between the virtual-space coordinate system and the real-space coordinate system so that virtual objects are merged at the correct positions in the real world, rebuilding the transformation as the observer's position changes;
superimposing the city video map, with longitude, latitude, and elevation as coordinates, onto the three-dimensional real scene to form the augmented reality view;
realizing natural interaction between the observer and the augmented reality view through interactive tools, displaying the city video map at its real position.
CN201410794732.2A 2014-12-18 2014-12-18 City video map method based on augmented reality Pending CN104486584A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410794732.2A CN104486584A (en) 2014-12-18 2014-12-18 City video map method based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410794732.2A CN104486584A (en) 2014-12-18 2014-12-18 City video map method based on augmented reality

Publications (1)

Publication Number Publication Date
CN104486584A (en) 2015-04-01

Family

ID=52761083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410794732.2A Pending CN104486584A (en) 2014-12-18 2014-12-18 City video map method based on augmented reality

Country Status (1)

Country Link
CN (1) CN104486584A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014069907A (en) * 2012-09-28 2014-04-21 Sideland Co Ltd Home delivery system and program
CN103747230A (en) * 2013-12-11 2014-04-23 深圳先进技术研究院 Dynamic positioning video electronic map projection system and method
CN103716586A (en) * 2013-12-12 2014-04-09 中国科学院深圳先进技术研究院 Monitoring video fusion system and monitoring video fusion method based on three-dimension space scene
CN103679730A (en) * 2013-12-17 2014-03-26 深圳先进技术研究院 Video abstract generating method based on GIS
CN103905824A (en) * 2014-03-26 2014-07-02 深圳先进技术研究院 Video semantic retrieval and compression synchronization camera system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王俊, 李明建, 邹扬庆, 罗红霞: "增强现实技术在移动GIS***中的实现" (Implementation of augmented reality technology in mobile GIS ***), 《安徽农业科学》 (Journal of Anhui Agricultural Sciences) *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105204505A (en) * 2015-09-22 2015-12-30 深圳先进技术研究院 Positioning video acquiring and drawing system and method based on sweeping robot
CN105933609A (en) * 2015-12-29 2016-09-07 广东中星电子有限公司 Method and device for transferring rotatable camera
CN105933609B (en) * 2015-12-29 2019-02-15 广东中星电子有限公司 Method and device for transferring a rotatable camera
CN106027960B (en) * 2016-05-13 2019-03-26 深圳先进技术研究院 Positioning system and method
CN106027957A (en) * 2016-05-13 2016-10-12 深圳先进技术研究院 Video positioning and publishing system
CN106027959A (en) * 2016-05-13 2016-10-12 深圳先进技术研究院 Video recognizing-tracking-positioning system based on position linear fitting
CN106027960A (en) * 2016-05-13 2016-10-12 深圳先进技术研究院 Positioning system and method
CN109313812A (en) * 2016-05-31 2019-02-05 微软技术许可有限责任公司 Sharing experience with context enhancing
CN106060471A (en) * 2016-06-23 2016-10-26 广东中星电子有限公司 Intelligent monitoring method and system based on augment reality technology
CN106767812A (en) * 2016-11-25 2017-05-31 梁海燕 Indoor semantic map updating method and system based on semantic feature extraction
CN106767812B (en) * 2016-11-25 2017-12-08 郭得科 Indoor semantic map updating method and system based on semantic feature extraction
CN108268138A (en) * 2018-01-29 2018-07-10 广州市动景计算机科技有限公司 Augmented reality processing method and device, and electronic equipment
CN113646753A (en) * 2018-12-28 2021-11-12 浙江大华技术股份有限公司 Image display system and method
CN110674711A (en) * 2019-09-10 2020-01-10 深圳市城市公共安全技术研究院有限公司 Method and system for calibrating dynamic target of urban monitoring video
CN110989840A (en) * 2019-12-03 2020-04-10 成都纵横自动化技术股份有限公司 Data processing method, front-end equipment, back-end equipment and geographic information system
CN110989840B (en) * 2019-12-03 2023-07-25 成都纵横自动化技术股份有限公司 Data processing method, front-end equipment, back-end equipment and geographic information system
CN111651050A (en) * 2020-06-09 2020-09-11 浙江商汤科技开发有限公司 Method and device for displaying urban virtual sand table, computer equipment and storage medium
CN111857132A (en) * 2020-06-19 2020-10-30 深圳宏芯宇电子股份有限公司 Central control type automatic driving method and system and central control system
CN111857132B (en) * 2020-06-19 2024-04-19 深圳宏芯宇电子股份有限公司 Central control type automatic driving method and system and central control system

Similar Documents

Publication Publication Date Title
CN104486584A (en) City video map method based on augmented reality
US10643106B2 (en) System and method for procedurally synthesizing datasets of objects of interest for training machine-learning models
CN104484814B (en) Advertising method and system based on video map
US20190197791A1 (en) Systems and methods for augmented reality representations of networks
JP4185052B2 (en) Enhanced virtual environment
US11315340B2 (en) Methods and systems for detecting and analyzing a region of interest from multiple points of view
JP6182607B2 (en) Video surveillance system, surveillance device
KR101965878B1 (en) Automatic connection of images using visual features
CN103514621B (en) Realistic dynamic 3D reproduction method and reconstruction system for case and event scenes
CN104486585B (en) GIS-based management method and system for massive city surveillance video
CN104504753A (en) Internet three-dimensional IP (internet protocol) map system and method based on augmented reality
EP4148691A1 (en) A method of training a machine learning algorithm to identify objects or activities in video surveillance data
WO2022074294A1 (en) Network-based spatial computing for extended reality (xr) applications
Yu et al. Intelligent visual-IoT-enabled real-time 3D visualization for autonomous crowd management
Zhu et al. Large-scale architectural asset extraction from panoramic imagery
CN112288876A (en) Long-distance AR identification server and system
US11443477B2 (en) Methods and systems for generating a volumetric two-dimensional representation of a three-dimensional object
KR20160039447A (en) Spatial analysis system using stereo camera.
Yang et al. Seeing as it happens: Real time 3D video event visualization
KR20200007732A (en) Method and apparatus for providing additional information for processing media in a remote or cloud environment
WO2023053485A1 (en) Information processing device, information processing method, and information processing program
KR102542363B1 (en) Method for recognizing object in 3 dimentional space
CN102495907A (en) Video summary with depth information
Zhai et al. Survey of Visual Crowdsensing
KR20160071172A (en) Panoramic three-dimensional map generation system using stereo camera.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20150401