CN112214642A - Multi-video event blind area change process deduction method based on geographic semantic association constraint - Google Patents


Info

Publication number
CN112214642A
Authority
CN
China
Prior art keywords: semantic, geographic, constraint, objects, relation
Prior art date
Legal status: Granted
Application number
CN202010977915.3A
Other languages: Chinese (zh)
Other versions: CN112214642B
Inventor
谢潇
薛冰
鄂超
李京忠
伍庭晨
孔琪
任婉侠
Current Assignee
Institute of Applied Ecology of CAS
Original Assignee
Institute of Applied Ecology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Applied Ecology of CAS
Priority to CN202010977915.3A
Publication of CN112214642A
Application granted
Publication of CN112214642B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70: Information retrieval of video data
    • G06F16/75: Clustering; Classification
    • G06F16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/787: Retrieval using geographical or spatial information, e.g. location
    • G06F40/00: Handling natural language data
    • G06F40/30: Semantic analysis


Abstract

The invention relates to a method for deducing the blind-area change process of multi-video events under geographic semantic association constraints, belonging to the technical field of geospatial data processing, and comprising the following steps: a) extracting the longitudinal hierarchical structure and the transverse topological network of semantically enhanced geographic locations, to express geographic-location semantic associations under a unified positioning and partitioning reference for the monitored scene interval; b) discriminating, process by process, the geographic motion pattern of each behavior process based on the change characteristics of the relation between the track and the geographic location; c) establishing blind-area behavior-process characteristic parameters that map the spatiotemporal-distance migration cost; d) planning the semantic path of a geographic entity through the monitoring blind area by combining the geographic-location scene, the geographic motion pattern and the spatiotemporal-distance constraint, thereby realizing the deduction of the blind-area behavior process. The invention completes the geographically constrained deduction of blind-area information in a discrete change process, realizes geographic semantic enhancement oriented to the continuous change process of the monitored area, and supports understanding of the complete event change process at different levels.

Description

Multi-video event blind area change process deduction method based on geographic semantic association constraint
Technical Field
The invention relates to a deduction method for a blind area change process of multi-camera video event information with geographic semantic association constraint, and belongs to the technical field of geospatial data processing.
Background
The geographic video content acquired by a single camera is constrained by the spatiotemporal locality of its imaging window: it records only the short-term behavior process of feature objects in a specific place or region, so single-camera analysis is limited to local small-scale events corresponding to local abnormal changes. Globally associated multiple geographic videos in a networked monitoring environment effectively expand the overall monitoring spatiotemporal window and provide a geographic video data set that records multi-scale complex event information; however, because the coverage areas of monitoring devices are commonly discrete and non-overlapping, the event knowledge in the data content still contains a large number of information blind areas. To support complete cognition of the occurrence and development of complex events, and to provide rich retrieval entries and correct constraint conditions for organizing and retrieving geographic video data at different levels of event information, a geographic video associated-semantics enhancement method oriented to event information blind areas must be developed. This has become another key problem to be solved urgently after realizing the associated aggregation of multi-camera discrete geographic video data through hierarchy qualification, action-domain orientation and feature quantification.
Event awareness supporting emergency-response task decisions needs to evolve from phenomenon awareness that answers "what" and "where" to process awareness that can answer "how". Relative to the missing blind-area change-process information, the change process recorded in geographic video content is a set of known information. Following the research logic of inference, deducing the complete information of an object from partial data is a basic form of reasoning based on the rules of interrelation among the object's components. Therefore, how to make full use of the known content-change information to reasonably deduce the change process inside the blind area becomes the core problem in realizing associated-semantics enhancement of geographic video.
Disclosure of Invention
The invention aims to provide a method for deducing the blind-area change process of multi-camera video event information under geographic semantic association constraints. Specifically, for geographic video shot groups clustered by event-process association, it addresses the event information blind areas that universally exist because the coverage areas of monitoring devices are discretely distributed. It innovates a geographic video content analysis mechanism that evolves from "facing a single video image space" to "combining the internal and external scene spaces of geographic video", and a geographic video GIS analysis mechanism that evolves from "local feature-similarity constraints" to "global geographic-association constraints".
The technical scheme adopted by the invention to realize this purpose is as follows. The method for deducing the blind-area change process of multi-video events under geographic semantic association constraints comprises the following steps:
Step 1, construction of geographic-location condition constraints: acquire the thematic semantic information of the monitored area, analyze the geographic-location semantics therein, and extract geographic-location objects and object relations; establish a longitudinal semantic-location hierarchical structure and a transverse semantic-location connectivity network oriented to the location relations, expressing geographic-location semantic associations under a unified positioning partition of the monitored scene interval;
Step 2, analysis of motion-pattern trend constraints: on the basis of unifying the geometric expression dimension and the semantic-concept description granularity of the elements composing the three-dimensional building model, classify, induce, analyze and utilize the characteristic semantic relations in the model to extract location-boundary features that completely express the three-dimensional building model in a regular form, for discriminating motion patterns based on location features;
Step 3, cost-constraint estimation of spatiotemporal distance: using the statistical characteristics of tracks and of track pairs of adjacent behavior processes among the orderly organized geographic video shot groups, analyze the behavior-movement characteristics of the information blind area between behavior processes, and establish blind-area behavior-process characteristic parameters that map the spatiotemporal-distance migration cost;
Step 4, deduction of the behavior process in the multi-constraint monitoring blind area: combining the geographic-location relation network of the scene, the geographic motion pattern of the behavior process in the video content, and a path-discrimination index quantitatively solved from spatiotemporal distance and track features, perform semantic-path deduction of the moving behavior process of the geographic entity through the monitoring blind area, thereby enhancing the associated semantics of the monitoring blind area in the semantic metadata of the geographic video shot group.
The construction of the conditional constraint of the geographical location comprises the following steps:
step 1.1, extracting a thematic geographic position object set of a monitoring scene area:
firstly, for the geographic video shot group oriented to the monitored scene area, extract the metadata of the geographic video frame objects contained in each geographic video shot, and obtain the three-dimensional monitored-scene imaging interval of each shot from the imaging characteristic items in the metadata, forming an imaging-interval set;
then, based on the overall spatial area of the imaging-interval set, obtain a monitored scene interval that completely covers the discrete change processes in the contents of the multiple geographic videos, and take it as the unified reference range for basic geographic-location expression;
then, with this basic expression range as the spatial retrieval condition, obtain target location-information descriptions including the geometry, topology, semantics, attributes and functions related to each location, and store each description as a geographic-location object, forming the thematic geographic-location object set;
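As an illustration of such a per-location record, the following minimal sketch stores the five information facets named above (geometry, topology, semantics, attributes, function) in one object; all field names and sample values are assumptions for illustration, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class GeoLocationObject:
    """One thematic geographic-location object of the monitored scene.

    Field names are illustrative; the patent only requires that geometry,
    topology, semantics, attributes and function be stored per location.
    """
    name: str                                       # normalized, unique location name (step 1.2)
    geometry: list                                  # e.g. polygon vertex list of the location extent
    topology: dict = field(default_factory=dict)    # adjacency to other locations
    semantics: str = ""                             # semantic category, e.g. "corridor", "hall"
    attributes: dict = field(default_factory=dict)  # thematic attribute items
    function: str = ""                              # functional description of the place

# A hypothetical location extracted from the monitored scene interval:
hall = GeoLocationObject(name="Hall-1",
                         geometry=[(0, 0), (10, 0), (10, 8), (0, 8)],
                         semantics="hall", function="public access")
```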
step 1.2, constructing a longitudinal semantic position hierarchical structure:
according to the spatial inclusion relations of the geometric elements of the geographic-location objects in the thematic set, construct the longitudinal hierarchical structure of the geographic-location objects, i.e. a hierarchy in which parent locations and child locations are nested layer by layer; check one by one the location-name expression of the positioning-space partition within the region of the hierarchically organized geographic-location objects, to ensure the normalization and uniqueness of location-object names in the region; take the normalized location name as the unique identification code of the geographic-location object;
step 1.3, constructing a transverse semantic position communication network:
on the basis of the longitudinal semantic-location hierarchical structure, judge the connectivity between every two geographic-location objects using the topological, semantic, attribute and functional relation information extracted synchronously with the location objects; according to the semantic connectivity relations of the geographic-location objects, take location interfaces as the conditional carriers supporting feature-object migration, and store each location connectivity relation with the conditional-constraint information items supporting inter-location connectivity as its conditional constraints. The conditional-constraint information items are stored separately as spatial constraints, temporal constraints and functional-attribute constraints, and serve as the query-interface conditions of each connectivity relation when the connectivity network is constructed;
after the longitudinal semantic-location hierarchical structure and the transverse semantic-location connectivity network are built, a monitored scene interval with geographic-location semantic association expression is obtained. The constructed hierarchical and connectivity relations are recorded through class objects in the data model storing the data. The connectivity relations in the thematic geographic-location object set form a connectivity network with conditional constraints, which serves as the data basis for multi-constraint path planning and analysis.
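A connectivity relation carrying conditional-constraint information items that double as query-interface conditions could be stored as in the following sketch; the constraint keys `space`, `time`, `function` and all sample values are illustrative assumptions:

```python
# Connectivity network: each edge carries conditional-constraint items
# (spatial / temporal / functional-attribute) that later act as query
# conditions. The dict-of-edges structure is a simplifying assumption.
edges = {}

def connect(a, b, space=None, time=None, function=None):
    """Store a bidirectional connectivity relation with its constraints."""
    constraint = {"space": space, "time": time, "function": function}
    edges[(a, b)] = constraint
    edges[(b, a)] = constraint

def connected(a, b, **required):
    """Query connectivity under constraint conditions, e.g. function='public'."""
    c = edges.get((a, b))
    if c is None:
        return False
    return all(c.get(k) == v for k, v in required.items())

# Hypothetical relation: a hall and a corridor joined by a door interface.
connect("Hall-1", "Corridor-2", space="door-D3", function="public")
```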
Step 1.4, calculation-description geographic-location connectivity logic simplification with semantic-hierarchy separation:
within the monitored scene interval of geographic-location semantic association expression, perform logic simplification using a calculation-description hierarchical organization method, i.e. a location association expression with calculation-layer and description-layer mapping: the description layer records and expresses the complete location objects and relations, while the calculation layer serves the association-enhancement deduction calculation, simplifying the original location nodes and connectivity relations of different semantic levels into location nodes and connectivity relations of the same semantic level;
step 1.5, simplification of the calculation-layer geographic-location connectivity network with conditional constraints:
after the calculation-layer location nodes and their connectivity relations are constructed, extract from the calculation layer, according to the set conditional-constraint information items and parameters, the geographic-location connectivity network reduced by the parameter conditions representing geometry, semantics and topology, obtaining the simplified network.
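The calculation-description separation of steps 1.4 and 1.5 can be sketched as flattening every described location to its atomic leaf nodes and filtering the connectivity relations with a constraint predicate; the hierarchy contents below are hypothetical:

```python
# Description layer: full parent/child nesting; leaves are atomic positions.
hierarchy = {"Building": ["Floor1", "Floor2"],
             "Floor1": ["Hall-1", "Corridor-2"],
             "Floor2": ["Room-201"]}

def atomic_nodes(root):
    """Flatten a described location to its atomic (leaf) calculation-layer nodes."""
    children = hierarchy.get(root)
    if not children:
        return [root]
    out = []
    for c in children:
        out.extend(atomic_nodes(c))
    return out

def simplify(edge_map, predicate):
    """Keep only connectivity relations whose constraints satisfy 'predicate'
    (step 1.5: reduction by geometric/semantic/topological parameter conditions)."""
    return {pair: c for pair, c in edge_map.items() if predicate(c)}
```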
The trend constraint analysis of the motion mode comprises the following steps:
step 2.1, unifying the model surface subdivision of geometric body expression dimension and semantic concept description granularity:
the method comprises the following steps of dividing the method into three processing types according to the geometric form type of a surface object in a geographic scene model and the associated lowest-level semantic granularity characteristics: a mesh plane of the associated semantic surface; a mesh surface which is associated with a semantic surface and has an open boundary; associating semantic entities and forming a mesh surface of a limited closed space;
according to the three processing types, after a surface object set of multi-level semantic information is associated in the extraction model structure of a three-dimensional building model extracted based on a multi-geographic video shot range, a multi-form type surface object is divided object by object into a semantic object set with a unified geometric data structure, a unified surface form type and a unified basic semantic granularity;
step 2.2, carrying out automatic extraction of the atomic semantic position and position boundary correction of semantic relation constraint, comprising the following substeps:
step 2.2a, extracting semantic relations among objects in the semantic object set:
extract the semantic information related to the surface-object set after structural subdivision, and build in memory a tree-shaped hierarchical structure of the semantic objects according to their inclusion relations; judge, classify and extract the semantic relations between adjacent semantic objects. The semantic relations, extracted in order from bottom to top, comprise the semantic combination relation between semantic surface objects and semantic body objects, and the semantic aggregation relation between semantic body objects; store the extracted semantic-relation types as reference information for the automatic extraction of atomic semantic positions in step 2.2b;
step 2.2b, analyzing the atomic semantic position object based on the semantic relation:
extracting semantic entities with the lowest aggregation level one by utilizing semantic aggregation relations among the semantic body objects to obtain an atomic semantic position set which completely and non-overlappingly covers the space range of the geographic scene model;
step 2.2c, correcting the position boundary of the atomic semantic position object:
extracting entity boundaries of each atomic semantic position object from a triangular grid-like and plane-discretized semantic surface object set obtained by subdivision by utilizing a semantic combination relation between a semantic surface object and a semantic body object, and performing classification correction according to surface features;
step 2.2d, checking and correcting the space coverage completeness of the atomic semantic position:
based on the space coverage of the regular body boundary voxelized atomic semantic entity, selecting correction operation from the following schemes according to the space relation of a voxel set:
for every two atomic semantic entity objects, eliminating the spatial overlap between the objects through voxel local boundary contraction;
and for every two adjacent atomic semantic positions, filling gaps between atomic semantic entities through voxel local boundary expansion.
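The two voxel corrections of step 2.2d can be illustrated on sets of integer grid cells; the contract-B-away-from-A rule and the Manhattan nearness measure below are simplifying assumptions, not the patent's exact procedure:

```python
# Atomic semantic entities voxelized as sets of integer grid cells.
def remove_overlap(vox_a, vox_b):
    """Eliminate spatial overlap by contracting entity B's boundary away from A
    (which entity contracts is an arbitrary illustrative choice)."""
    return vox_a, vox_b - vox_a

def fill_gap(vox_a, vox_b, gap):
    """Fill the gap between adjacent entities by expanding each boundary to the
    nearer gap voxels (Manhattan distance; ties go to A)."""
    def near(v, cells):
        return min(abs(v[0] - c[0]) + abs(v[1] - c[1]) + abs(v[2] - c[2])
                   for c in cells)
    for v in gap:
        (vox_a if near(v, vox_a) <= near(v, vox_b) else vox_b).add(v)
    return vox_a, vox_b
```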
The classification correction comprises the following steps:
firstly, for every two atomic semantic positions with a semantic aggregation relation, correct the erroneous topological connections in the topological relation between geometric surfaces by inserting virtual edges, specifically:
I) extracting a geometric surface set of every two atomic semantic positions;
II) calculate, in sequence, the intersection line segments of every two surface objects between the surface sets through polygon vector intersection, and store the intersection segments to their respective intersecting surfaces;
III) traverse each surface of the two atomic semantic entity objects, perform triangulation in sequence via feature-constrained triangulation with the intersection segments as constraint features, and store the newly inserted intersection edges as the virtual-edge boundary of the atomic semantic position;
secondly, correct the open boundaries between geometric surfaces by inserting a virtual surface into each atomic semantic position, specifically:
I) extracting an atom semantic position geometric surface set;
II) extracting the boundary contour line of each geometric plane and storing the boundary contour line as a line segment array;
III) traversing each line segment array, and extracting a line segment set which only appears once;
IV) searching a closed polygon in the line segment set which only appears once until all line segments in the set are used;
V) triangulate each closed polygon, and take the triangulated polygon mesh as the virtual-surface boundary of the atomic semantic position.
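Steps III) and IV) above, collecting the line segments that appear only once and chaining them into a closed polygon, correspond to the standard boundary extraction of a triangle mesh; the sketch below works on vertex-index triangles, and the mesh data in the test are hypothetical:

```python
from collections import Counter

def boundary_edges(triangles):
    """Edges used by exactly one triangle form the open boundary (step III):
    interior edges are shared by two triangles and appear twice."""
    count = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            count[tuple(sorted(e))] += 1
    return [e for e, n in count.items() if n == 1]

def close_polygon(edge_list):
    """Chain the once-only edges into one closed vertex loop (step IV sketch;
    assumes a single closed boundary contour)."""
    adj = {}
    for a, b in edge_list:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    start = edge_list[0][0]
    loop, prev, cur = [start], None, start
    while True:
        nxt = [v for v in adj[cur] if v != prev][0]
        if nxt == start:
            return loop
        loop.append(nxt)
        prev, cur = cur, nxt
```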
The cost constraint estimation of the space-time distance comprises the following sub-steps:
step 3.1, from the perspective of the correlation of the feature information of the geographic video data, the following assumptions and inferences are proposed:
assumption: during the change process in the monitored scene, the parameter values describing a feature object change continuously overall;
inference: based on this assumption, the behavior process of a feature object in a certain monitoring blind area and its behavior processes in the geographic video shots adjacent to that blind area are correlated in a way that can be fitted by the object's parameter values;
step 3.2, based on the above inference, let Oob(x) be a certain blind-area behavior process, and let Oob(a), Oob(b) be the behavior processes in the geographic video shots adjacent to the blind area, where x denotes a certain continuous process and a, b denote the behavior-process segments contained and visible in the existing videos. Then:
1) each behavior-process object is described by: a content semantic item CSE describing the object's individual characteristics; a geographic semantic item GSE describing the geographic location of each state under the unified spatiotemporal frame of reference; a content semantic item CSB describing the behavior-action type; and a geographic semantic item GSB describing the motion pattern of location-relation change under the unified spatiotemporal reference. Oob(x), Oob(a) and Oob(b) are expressed as:
Oob(x) = {CSE(x), GSE(x), CSB(x), GSB(x)}
Oob(a) = {CSE(a), GSE(a), CSB(a), GSB(a)}
Oob(b) = {CSE(b), GSE(b), CSB(b), GSB(b)}
where CSE(x), GSE(x), CSB(x), GSB(x) are the characteristic parameters of the blind-area behavior process;
2) according to the inference, the characteristic parameters of Oob(x), Oob(a) and Oob(b) establish a function-fitting relation F based on their correlation characteristics, so that the blind-area process Oob(x) can be estimated from the known Oob(a) and Oob(b), formally expressed as:
{CSE(x), GSE(x), CSB(x), GSB(x)} = F({CSE(a), GSE(a), CSB(a), GSB(a)}, {CSE(b), GSE(b), CSB(b), GSB(b)})
Abbreviating the above formula as CGEB(x) = F(CGEB(a), CGEB(b)), the analyzable and modelable behavior-process features CGEB(a) and CGEB(b), together with the spatiotemporal distance D(Oob(a), Oob(b)) between the behavior processes, define the optimal scheme of blind-area location migration between the two behaviors via the following constraint index COI:
COI(Oob(a), Oob(b)) = f(D(Oob(a), Oob(b)), CGEB(x))
                    = f(D(Oob(a), Oob(b)), F(CGEB(a), CGEB(b)))
Specifically, in the behavior-process expression carried by track data objects, f is the functional operation relation between the parameter items; the parameter item D(Oob(a), Oob(b)) is expressed as the track spatiotemporal interval solved from the spatiotemporal metadata of the geographic video shots; CGEB(a) and CGEB(b) are expressed as the statistical characteristic items (global maximum, mean and variance) of the structural features of local position, turning angle, speed and acceleration of the behavior track.
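A sketch of these statistical feature items and of the COI composition follows, instantiating D as the minimum time gap between the tracks and F as the mean-speed fit, as in the concrete example given later in the description; track tuples are assumed to be (x, y, t):

```python
from statistics import mean, pvariance

def cgeb(track):
    """Structural features of a behavior track: the per-segment speed sequence
    summarized by global max / mean / variance (CGEB statistical items).
    Each track point is an (x, y, t) tuple (an illustrative layout)."""
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(track, track[1:]):
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / (t1 - t0))
    return {"max": max(speeds), "mean": mean(speeds), "var": pvariance(speeds)}

def coi(track_a, track_b):
    """Constraint index COI = f(D, F(CGEB(a), CGEB(b))): D is the minimum
    time gap between the tracks, F the mean-speed fit, f a product."""
    d_time = abs(track_b[0][2] - track_a[-1][2])                  # Delta-T_min
    v_fit = (cgeb(track_a)["mean"] + cgeb(track_b)["mean"]) / 2   # AVERAGE fit
    return d_time * v_fit
```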
The multi-constraint monitoring blind area behavior process deduction comprises the following substeps:
step 4.1, performing track semantic expression of geographic video shot semantic metadata, and executing the following substeps:
step 4.1a, semantic position positioning and distinguishing of track sequence points:
mapping discrete tracks of geographic video shot semantic metadata in a position scene into a thematic geographic position object set scene divided by a unified geographic framework;
step 4.1b, discriminating the geographic motion pattern of each track data object:
taking each track data object described by sequence points as a whole, and based on the location-boundary-corrected geographic-location objects in the monitored scene, analyze and discriminate line by line, using the local azimuth and turning-angle information items among the object's track structural-feature items, the geographic semantic motion pattern expressed with respect to geographic locations, according to the change characteristics of the track-location relation, the set motion-pattern discrimination rules, and the principle of a unit-location reference mode;
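A minimal discrimination of the track-location relation change, reduced here to enter/leave/stay decisions against one axis-aligned location extent, which is an illustrative simplification of the patent's rule set:

```python
def in_rect(p, rect):
    """Illustrative point-in-location test with an axis-aligned extent."""
    xmin, ymin, xmax, ymax = rect
    return xmin <= p[0] <= xmax and ymin <= p[1] <= ymax

def motion_mode(track, rect):
    """Point-by-point relation change between a track and one location:
    'enter', 'leave', 'stay-in', 'stay-out' (the rule set is an assumption)."""
    modes = []
    for p, q in zip(track, track[1:]):
        a, b = in_rect(p, rect), in_rect(q, rect)
        modes.append("enter" if (not a and b) else
                     "leave" if (a and not b) else
                     "stay-in" if a else "stay-out")
    return modes
```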
step 4.2, semantic path deduction in the blind area moving behavior process, and the following substeps are executed:
step 4.2a, mapping multi-level semantic locations and association relations to the calculation layer:
for the behavior tracks corresponding to every two adjacent geographic video shots, on the basis of the geographic semantic motion patterns discriminated in the previous step, uniformly map the geographic locations involved in the patterns to the calculation layer expressed by atomic semantic positions in the monitored scene, according to the calculation-description hierarchical organization method, and record the mapping paths so that the semantic-path description of the corresponding level can be restored after the estimated track data object is obtained;
step 4.2b, calculating the blind-area path plan:
using the conventional A* algorithm with an evaluation function E'(n) that takes the constraint into account, solve with the absolute difference between the evaluation function E(n) and the constraint index COI:
E'(n) = |E(n) - COI|
where n is a via node in the longitudinal semantic-location hierarchical structure and the transverse semantic-location connectivity network, representing an atomic or an aggregate semantic position; the constraint index COI represents an index constrained according to one of the conditional-constraint information items;
step 4.2c, multi-level semantic-path expression of the blind-area track:
for the geometric track of the blind-area behavior process obtained by the analysis, describe the semantic path with the nodes and edges of the semantic-location graph as reference, within the monitored scene of multi-level geographic-location association expression established through step 4.1, thereby realizing the deduction of the blind-area change process.
In step 4.2b, taking the spatiotemporal distance as the index, the COI of Ogs1 and Ogs2 is calculated as:
COI(Ogs1, Ogs2) = ΔTmin(Ogs1, Ogs2) * AVERAGE(V(Ogs1), V(Ogs2))
where Ogs1 and Ogs2 are the discrete tracks of the semantic objects of two geographic video shots in the geographic video shot group; the behavior-process parameters CGEB(a) and CGEB(b) in the geographic video content are embodied as the speed features V(Ogs1) and V(Ogs2) of the track sequence points; the fitting function F of V(Ogs1) and V(Ogs2) is the mean function AVERAGE(a, b); the spatiotemporal condition parameter D(Ogs1, Ogs2) between the behavior processes is embodied as the minimum time difference ΔTmin(Ogs1, Ogs2) of the tracks; and the functional operation relation f is realized as scalar multiplication (*).
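The modified evaluation E'(n) = |E(n) - COI| of step 4.2b can be plugged into a standard best-first search over the location connectivity network; the graph, cost and heuristic below are placeholder assumptions:

```python
import heapq

def blind_area_path(graph, start, goal, cost, heuristic, coi):
    """Best-first search whose priority is E'(n) = |E(n) - COI|, with
    E(n) = g(n) + h(n) as in the conventional A* algorithm (step 4.2b).
    'graph' maps a location node to its connected neighbour nodes."""
    frontier = [(abs(heuristic(start) - coi), start, [start], 0.0)]
    seen = set()
    while frontier:
        _, node, path, g = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph.get(node, []):
            g2 = g + cost(node, nxt)
            e = g2 + heuristic(nxt)          # E(n) = g(n) + h(n)
            heapq.heappush(frontier, (abs(e - coi), nxt, path + [nxt], g2))
    return None
```

With COI = 0 and a zero heuristic this degenerates to cheapest-first expansion; a nonzero COI instead prefers via nodes whose evaluation matches the migration-cost estimate.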
The invention has the following beneficial effects and advantages:
1. It establishes a geographic video content analysis mechanism evolving from "facing a single video image space" to "combining the internal and external scene spaces of geographic video": the method breaks through the limitation that traditional geographic video content analysis is bound by the spatiotemporal locality of the imaging window; making full use of the locatable mapping relation between the internal and external scene spaces of geographic video, it realizes a unified expression and analysis mechanism combining both. This mechanism establishes a tight mapping of the internal and external scene spaces based on unified geographic locations, providing a globally unified geographic-association basis for the discrete behavior processes in the contents of multiple geographic videos.
2. It establishes a geographic video GIS analysis mechanism evolving from "local feature-similarity constraints" to "global geographic-association constraints": the method breaks through the limitation that traditional video-data association analysis, based on signal-coding similarity, image-feature similarity and content-semantic similarity, is bound by the spatiotemporal locality of the video imaging window. Through semantic mapping between the monitored scene and the geographic environment, it expresses behavior-process features extrapolated along geographic-location trends, providing the core support condition for deducing the change process in blind areas of discrete geographic video content; it realizes quantitative analysis of video-scene association based on geographic migration cost, supports analysis of the geographic association and association process of multi-geographic-video content across spatiotemporal regions, enhances the associated semantics of geographic video on the basis of its association relations, and improves the knowledge-representation capability of geographic video data.
Drawings
FIG. 1 is a schematic diagram of the process of the present invention;
FIG. 2 is a flow chart of the method steps of the present invention;
FIG. 3 is a schematic diagram of a process of an embodiment of the method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The invention relates to a method for deducing the blind-area change process of multi-camera video event information under geographic semantic association constraints, belonging to the technical field of geospatial data processing. The technical scheme comprises: a) constructing geographic-environment condition constraints for location-oriented multi-geographic-video constrained deduction, and realizing geographic-location semantic association expression under a unified positioning partition of the monitored scene interval by extracting the longitudinal hierarchical structure and transverse topological network of semantically enhanced geographic locations; b) establishing motion-pattern-oriented deduction trend constraints: on the basis of the enhanced geographic-location discrimination features of the monitored scene area to be analyzed, use the track structural features of behavior-process segments in local geographic video content to discriminate, process by process, the geographic motion pattern from the change characteristics of the track-location relation; c) using the statistical characteristics of tracks and of track pairs of adjacent behavior processes among the orderly organized geographic video shot groups, analyzing the behavior-movement characteristics of the information blind area between behavior processes, and establishing blind-area behavior-process characteristic parameters that map the spatiotemporal-distance migration cost; d) realizing the deduction of the blind-area behavior process through semantic-path planning for the geographic entity in the monitoring blind area, combining the geographic-location scene, the geographic motion pattern and the spatiotemporal-distance constraint, and storing the deduced semantic path as associated-semantics-enhancement metadata of the geographic video shot group. The core of the invention is: based on the locatable mapping relation between the internal and external scene spaces of geographic video, combined with GIS modeling expression of the external-scene geospatial information under a unified reference, realize an open rule description of the geographic association of discrete change processes in video content; then, using GIS analysis methods based on the geographic-environment dependency of the change process, complete the geographically constrained deduction of blind-area information in the discrete change process, realize geographic semantic enhancement oriented to the continuous change process of the monitored area, and support understanding of the complete event change process at different levels.
As shown in fig. 1, a geographic video shot group acquires video images. First, the thematic semantic information of the shot-group region within the monitoring area is acquired, the geographic location semantics therein are analyzed, and geographic location objects and object relations are extracted. Secondly, the longitudinal location hierarchy and the transverse location topological network of the shot-group region are constructed. Then, regular, formalized location boundary features that completely express the complex three-dimensional building model are extracted for judging the motion pattern based on location features. Next, using the movement track features of the behavior processes in the regional geographic video content, the behavior movement features of the information blind zone between behavior processes are analyzed through track pairs of adjacent behavior processes, and blind-zone behavior-process feature parameters mapping the spatio-temporal distance migration cost are established. Finally, the track in the monitoring blind area is deduced by using the motion track patterns of the discrete tracks Ogs1 and Ogs2 and the optimal-path discrimination index solved quantitatively from the spatio-temporal distance and the track features.
As shown in fig. 1 and 2, the present invention includes the following steps:
Step 1, construction of geographic location condition constraints: acquire the thematic semantic information of the monitored area, analyze the geographic location semantics therein, and extract geographic location objects and object relations; construct a longitudinal location hierarchy and a transverse location topological network oriented to the location relations, and express the geographic location semantic association under a unified positioning division of the monitoring scene interval.
Step 2, trend constraint analysis of motion patterns: on the basis of unifying the geometric expression dimension of the model's constituent elements and the description granularity of semantic concepts, classify, induce, analyze and fully utilize the characteristic semantic relations in the model, and extract regular, formalized location boundary features that completely express the complex three-dimensional building model, for judging the motion pattern based on location features.
Step 3, cost constraint estimation of the spatio-temporal distance: analyze the behavior movement features of the information blind areas between behavior processes using the statistical features of tracks and of track pairs of adjacent behavior processes among the orderly organized geographic video shot groups, and establish blind-area behavior-process feature parameters mapping the spatio-temporal distance migration cost.
Step 4, deduction of the multi-constraint monitoring blind-area behavior process: combine the geographic location relation network of the scene, the geographic motion patterns of behavior processes in the video content, and the optimal-path discrimination index solved quantitatively from the spatio-temporal distance and track features; provide a semantic path deduction method for the movement behavior process of geographic entities in the monitoring blind area, thereby realizing the associated semantic enhancement of the monitoring blind area in the semantic metadata of the geographic video shot group.
Furthermore, in step 1, the construction of the geographic location condition constraints includes the following sub-steps:
Step 1.1, extracting the thematic geographic location object set of the monitoring scene area. The increasingly rich geospatial information infrastructure built in the context of digital-earth research provides a data basis for obtaining and analyzing rich geographic location information. Thematic databases in various related fields can serve as information sources of geographic location objects; the indoor and outdoor location information sources that can serve public-safety monitoring requirements mainly comprise the following three types:
Type one: basic geographic information data such as place-name libraries and address libraries in the field of police data management and services. The location concept is based on standard national place names and comprises the most basic address information, organized by certain layering, segmenting and grading rules: administrative divisions, streets, door plates (communities and buildings), unit room numbers, community postcodes, community passages, and the properties and types of communities and buildings.
Type two: data of road network models at all levels in the traffic management field. The main existing network models include: (a) two-dimensional navigation road network models based on a geometric-topological double-layer feature structure; (b) lane models oriented to lane geometry and horizontal and vertical lane connectivity; and (c) three-dimensional road network models referencing multi-scale locations and containing geometric, topological and attribute data.
Type three: basic three-dimensional city semantic model data in the smart-city field. The main semantic specifications include the Open Geospatial Consortium (OGC) modeling standards (a) the city geography markup language OGC CityGML and (b) the indoor multidimensional location information markup language OGC IndoorGML, and (c) international modeling standard specifications such as the IFC data model standard widely used in the field of Building Information Modeling (BIM).
Based on the above geospatial thematic data sets, the extraction of the thematic location information related to the geographic videos can be realized through the following flow. First, oriented to the geographic video shot group at the event-cluster level, extract the object-enhanced metadata information of the geographic video frames contained in each geographic video shot, and obtain the set of three-dimensional monitoring scene imaging intervals through the imaging feature items in the metadata. Then, based on the whole spatial area of the imaging interval set, calculate a monitoring scene interval that completely covers the discrete change processes in the multi-geographic-video content, and take it as the uniformly referenced basic expression range of geographic locations. Finally, using this interval range as the spatial retrieval condition, acquire the thematic location information from the existing thematic databases, including detailed descriptions of location-related geometry, topology, semantics, attributes and functions; store each location concept as a geographic location object, expressed in the data model through a class object.
Step 1.2, constructing the longitudinal semantic location hierarchy. After the thematic geographic location object set of the monitoring scene interval is extracted, a layer-by-layer nested hierarchy of longitudinal parent-child locations can be constructed according to the spatial inclusion relations of the geometric elements of the location objects in the set. For the hierarchically organized geographic location objects, the location name expressions within their regions, based on a unified positioning space division, are checked object by object to ensure the normalization and uniqueness of each location object's name within its local area; the normalized location name is then used as the unique identification code that describes the location concept and references the location object.
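The parent-child nesting of step 1.2 can be sketched as placing each location object under its smallest spatially containing parent, then checking name uniqueness. This is a minimal illustration: the rectangular-footprint simplification and all names are assumptions, not the invention's data model.

```python
from dataclasses import dataclass, field

@dataclass
class LocationObject:
    name: str        # normalized place name, used as the unique identification code
    bounds: tuple    # (xmin, ymin, xmax, ymax) footprint (simplified geometry)
    children: list = field(default_factory=list)

def contains(outer, inner):
    """True if the outer footprint spatially contains the inner one."""
    ox0, oy0, ox1, oy1 = outer.bounds
    ix0, iy0, ix1, iy1 = inner.bounds
    return ox0 <= ix0 and oy0 <= iy0 and ix1 <= ox1 and iy1 <= oy1

def build_hierarchy(objects):
    """Nest each location under its smallest containing parent (step 1.2)."""
    roots = []
    # place large footprints first so parents exist before their children
    area = lambda o: (o.bounds[2] - o.bounds[0]) * (o.bounds[3] - o.bounds[1])
    for obj in sorted(objects, key=area, reverse=True):
        parent, stack = None, list(roots)
        while stack:
            cand = stack.pop()
            if contains(cand, obj):
                parent = cand
                stack = list(cand.children)  # descend toward the smallest container
        (parent.children if parent else roots).append(obj)
    return roots

def check_unique_names(roots):
    """Return the set of location names that are not unique in the region."""
    seen, dup, stack = set(), set(), list(roots)
    while stack:
        o = stack.pop()
        if o.name in seen:
            dup.add(o.name)
        seen.add(o.name)
        stack.extend(o.children)
    return dup
```

The duplicate-name check mirrors the object-by-object normalization pass described above; a real implementation would operate on full geometries rather than bounding boxes.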
Step 1.3, constructing the transverse semantic location connectivity network. On the basis of the longitudinal semantic location hierarchy, the connectivity that supports feature-object migration between each pair of locations is judged according to the definition of the location semantic connectivity relation, using the topology, semantics, attribute and functional relation information extracted together with the location objects (for example, the opening hours of a meeting room support the establishment of a semantic condition). This connectivity takes the location interface as the condition carrier, and the behavior of feature objects is quantified into track expressions. Each location connectivity relation is stored with the conditions supporting inter-location connectivity as condition constraints; the condition constraint information items are divided into three classes, spatial constraint (S), time constraint (T) and functional attribute constraint (A), and serve as the query interface conditions of each connectivity relation when the connectivity network is constructed.
After extracting the geographic location objects divided on the unified positioning space and constructing the longitudinal hierarchy and transverse connectivity network among the location objects, a monitoring scene interval expressed by geographic location semantic association is obtained. The constructed hierarchical and connectivity relations are recorded in the data model through class objects, and the connectivity relations in the location set form a condition-constrained connectivity network that serves as the data basis for multi-constraint path planning and analysis.
Step 1.4, hierarchical simplification of the computation and description of geographic location connectivity. To simplify the semantic connectivity network of geographic locations serving high-performance connectivity-path analysis within the monitoring scene, while preserving the expression requirements of geographic motion patterns in the monitoring-scene behavior processes for multi-level location concepts and their multi-level connectivity relations, a location association expression structure with computation-description hierarchical mapping is proposed. While keeping its descriptive adaptability to motion patterns based on multi-level location concepts, this structure reduces the connectivity relations between location nodes of different semantic levels in the longitudinal hierarchy and the transverse connectivity network, together with the leaf nodes of the longitudinal hierarchy, to location nodes and connectivity relations of a single semantic level.
Step 1.5, simplifying the computation-layer geographic location connectivity network with condition constraints. After the computation-layer locations and their connectivity relations are constructed, a reduced geographic location connectivity network for specific parameter conditions can be further extracted from the computation layer by setting specific condition parameters for the three connectivity condition classes, spatial constraint (S), time constraint (T) and functional attribute constraint (A). Here the parameter conditions refer to geometric, semantic and topological conditions; for example, if a gate is closed from one time to another, paths through it during that period can be pruned. Behavior instances in the contents of different geographic video shot data correspond to different condition parameter sets, so different reduction results can be obtained.
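The condition-constrained reduction of step 1.5 can be illustrated as filtering connectivity edges by the current parameter set. The edge names and the encoding of the time (T) and functional attribute (A) constraints below are hypothetical simplifications; the spatial (S) class is omitted for brevity.

```python
# Each connectivity edge carries condition constraints of the defined classes:
# "T" (time window, here an open interval of hours) and "A" (required attributes).
edges = [
    ("lobby", "corridor", {}),                                # always passable
    ("corridor", "meeting_room", {"A": {"open_to_public"}}),  # functional constraint
    ("corridor", "courtyard", {"T": (8, 18)}),                # gate open 08:00-18:00
]

def reduce_network(edges, hour=None, attrs=frozenset()):
    """Extract the connectivity network satisfied under the given parameters (step 1.5)."""
    kept = []
    for u, v, cond in edges:
        t = cond.get("T")
        if t is not None and (hour is None or not (t[0] <= hour < t[1])):
            continue  # time constraint not met: prune the edge
        if not cond.get("A", set()) <= set(attrs):
            continue  # functional attribute constraint not met
        kept.append((u, v))
    return kept
```

Different behavior instances supply different parameter sets (`hour`, `attrs`), yielding different reduced networks, as the text describes.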
The geographic position associated expression constructed by the processing flow can support the open description of the video content discrete change process based on the position in combination with the geographic monitoring scene, and provides basic condition constraint information based on the geographic position for solving the monitoring blind area event information by using a GIS method.
In step 2, the trend constraint analysis of the motion pattern includes the following sub-steps:
and 2.1, unifying the geometric body expression dimension and the model surface subdivision of semantic concept description granularity.
The basic geometric elements that make up a complex three-dimensional building model can be abstracted into multi-surface morphology types represented by a mixture of three types of geometric objects: independent planes; regular mesh surfaces with open boundaries; and parameterized mesh surfaces forming regular geometric shapes that enclose a limited closed space.
Meanwhile, the semantic objects in IFC, IndoorGML and CityGML, the mainstream industry standards for three-dimensional building models describing indoor fine-grained scenes, can be divided by semantic description granularity into (i) semantic surface objects and (ii) semantic volume objects, where the semantic volume object is the semantic granularity in the model that constitutes a geographic location concept in the scene. Further, semantic volume objects can be divided into (i) semantic objects that occupy continuous geometric space and cannot be subdivided conceptually, called 'atomic semantic location' objects, and (ii) semantic objects formed by aggregating atomic semantic objects, called 'aggregated semantic location' objects. The 'atomic semantic volume' object is the location semantic granularity at which the method introduces virtual boundaries to carry out automatic location-boundary completion and correction.
Based on this division of geometric and semantic element types, the surface objects in the model are divided into three processing types according to their geometric morphology and the associated lowest-level semantic granularity:
firstly, mesh planes associated with a semantic surface;
secondly, mesh surfaces associated with a semantic surface and having an open boundary;
thirdly, mesh surfaces associated with a semantic entity and forming a limited closed space.
Unifying the model surface subdivision of geometric expression dimension and semantic-concept description granularity means: for the complex three-dimensional building model extracted from the multi-geographic-video shot range, extract the surface object sets associated with multi-level semantic information in the model composition according to the three processing types above, and subdivide the polymorphic surface objects, object by object, into semantic object sets with a uniform geometric data structure, uniform surface morphology type and uniform basic semantic granularity.
Step 2.2, the automatic extraction of atomic semantic locations and the semantic-relation-constrained location boundary correction comprise the following sub-steps:
and 2.2a, extracting the semantic relation between the objects. And extracting semantic information associated with the surface object set after the structure subdivision, and establishing a tree-shaped hierarchical structure of the semantic objects in the memory according to the inclusion relation of the semantic objects. Specifically, semantic relations between semantic objects of adjacent layers are extracted through judgment and classification; the semantic relations sequentially extracted from bottom to top comprise a semantic combination relation between the semantic surface objects and the semantic body objects and a semantic aggregation relation between the semantic body objects. And storing the semantic relation type atomic semantic position to automatically extract and utilize the reference information.
Step 2.2b, analyzing the atomic semantic location objects based on semantic relations. Specifically, using the semantic aggregation relations among semantic volume objects, extract one by one the semantic entities at the lowest aggregation level, obtaining an atomic semantic location set that covers the spatial range of the original model completely and without overlap.
Step 2.2c, correcting the location boundaries of the atomic semantic location objects. Specifically, first use the semantic composition relation between semantic surface objects and semantic volume objects to extract the entity boundary of each atomic semantic location object from the subdivided, meshed and planar-discretized semantic surface object set; then perform classified correction according to the surface characteristics:
Firstly, incomplete topological connection between geometric surfaces (missing geometric-topological relations) is corrected by inserting virtual edges between each pair of atomic semantic locations having a semantic aggregation relation, specifically comprising the following steps:
I) extracting a geometric surface set of every two atomic semantic positions;
II) using one or a combination of polygon-vector intersection techniques from computer graphics, sequentially compute the intersection line segments of each pair of surface objects between the surface sets, and store the intersection segments to the respective intersecting surfaces;
III) traverse each surface of the two atomic semantic entity objects, and using one or a combination of feature-constrained triangulation techniques from computer graphics, with the intersection segments as constraint features, perform triangulation in turn; store each newly inserted intersection edge as a 'virtual edge' boundary of the atomic semantic location.
Secondly, open boundaries between geometric surfaces are corrected by inserting a virtual surface into each atomic semantic location, specifically comprising the following steps:
I) extracting an atom semantic position geometric surface set;
II) extracting the boundary contour line of each geometric plane and storing the boundary contour line as a line segment array;
III) traversing each line segment array, and extracting a line segment set which only appears once;
IV) searching a closed polygon in the line segment set which only appears once until all line segments in the set are used;
V) triangulate each closed polygon, and use the resulting polygon mesh as a 'virtual surface' boundary of the atomic semantic location.
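Steps I)–IV) amount to counting edge occurrences over a location's geometric surfaces and chaining the single-occurrence edges into closed contours. Below is a minimal sketch under the assumption that faces are given as vertex loops (vertex labels are illustrative); the triangulation of step V) is omitted.

```python
from collections import Counter

def once_segments(faces):
    """Collect boundary edges that occur exactly once across all faces (steps I-III)."""
    count = Counter()
    for poly in faces:
        n = len(poly)
        for i in range(n):
            # undirected edge between consecutive vertices of the face loop
            count[frozenset((poly[i], poly[(i + 1) % n]))] += 1
    return [tuple(s) for s, c in count.items() if c == 1]

def chain_closed_loops(segments):
    """Chain the single-occurrence segments into closed polygons (step IV)."""
    segs = [list(s) for s in segments]
    loops = []
    while segs:
        a, b = segs.pop()
        loop = [a, b]
        while loop[-1] != loop[0]:
            for i, (p, q) in enumerate(segs):
                if p == loop[-1]:
                    loop.append(q); segs.pop(i); break
                if q == loop[-1]:
                    loop.append(p); segs.pop(i); break
            else:
                break  # open chain: cannot be closed with the remaining segments
        loops.append(loop[:-1] if loop[-1] == loop[0] else loop)
    return loops
```

For two triangles sharing one edge, the shared edge occurs twice and is dropped, and the four remaining edges chain into a single closed quadrilateral contour, which would then be triangulated as the virtual surface.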
Step 2.2d, checking and correcting the spatial coverage completeness of the atomic semantic locations. Voxelize the spatial coverage of each atomic semantic entity based on its regularized boundary, and select the specific correction operation from the following schemes according to the spatial relations of the voxel sets:
firstly, for two atomic semantic entity objects, eliminating the spatial overlap between the objects through voxel local boundary contraction;
secondly, for every two adjacent atomic semantic locations, filling the gaps between the atomic semantic entities through voxel local boundary expansion.
Based on the above processing flow, each geographic location object with corrected boundaries can be obtained. According to the principle of judging track motion patterns based on enhanced location features, the geographic semantic motion pattern expressed on geographic locations is analyzed and judged line by line, taking as enhanced location features the location boundary, the inner and outer spaces divided by the location boundary, the solved positioning points and the location interfaces attached to the boundary, together with the local azimuth and turn-angle information in the object track structure feature items, under the principle that the unit location reference mode takes priority. This descriptive form of the geographic semantic motion pattern breaks through the limitation of the content information of the geographic video imaging spatio-temporal window and expresses behavior-process features extrapolated toward the geographic location trend, thus forming a trend constraint on the deduction of blind-zone movement behavior.
Furthermore, in step 3, the cost constraint estimation of the spatio-temporal distance includes the following sub-steps:
step 3.1, from the perspective of the correlation of the feature information of the geographic video data, the following assumptions and inferences are proposed:
Assumption: the values of each parameter of a feature object (object parameters such as a person's moving speed) have an overall stable, continuous change characteristic during the change process in the monitoring scene.
Inference: based on this assumption, the behavior process of a feature object in a certain monitoring blind area and its behavior processes in the geographic video shots adjacent to that blind area have correlation characteristics whose parameter values can be fitted.
Step 3.2, based on the above inference, let Oob(x) be a certain blind-zone behavior process, and let Oob(a), Oob(b) be the behavior processes in the geographic video shots adjacent to the blind zone; then:
1) Classified and expressed according to the semantic metadata parameters of the geographic video content change features (see section 3.4.3 for details), each behavior-process object can be described as composed of: a content semantic item (CSE) describing the 'object individual features'; a geographic semantic item (GSE) describing the geographic location of each state under the unified spatio-temporal frame; a content semantic item (CSB) describing the behavior action type; and a geographic semantic item (GSB) describing the location-relation-change motion pattern under the unified spatio-temporal reference. Then Oob(x), Oob(a) and Oob(b) can be expressed as:
Oob(x)={CSE(x),GSE(x),CSB(x),GSB(x)}
Oob(a)={CSE(a),GSE(a),CSB(a),GSB(a)}
Oob(b)={CSE(b),GSE(b),CSB(b),GSB(b)}
2) According to the inference, Oob(x), Oob(a) and Oob(b) can establish a function fitting relation F based on their correlation characteristics, so that the blind-zone Oob(x) can be estimated from the known Oob(a) and Oob(b), formally expressed as:
{CSE(x), GSE(x), CSB(x), GSB(x)} = F({CSE(a), GSE(a), CSB(a), GSB(a)}, {CSE(b), GSE(b), CSB(b), GSB(b)})
Abbreviating the above formula as CGEB(x) = F(CGEB(a), CGEB(b)), and using (i) the behavior-process features CGEB(a), CGEB(b) obtained by analysis and modeling and (ii) the minimum spatio-temporal distance Dmin(Oob(a), Oob(b)) between the behavior processes, a constraint index (COI) of the optimal solution for the blind-zone position migration between the two behaviors can be defined as follows:
COI(Oob(a), Oob(b)) = f(D(Oob(a), Oob(b)), CGEB(x))
                    = f(D(Oob(a), Oob(b)), F(CGEB(a), CGEB(b)))
Specifically, in the behavior-process expression carried by track data objects, f is the functional operation relation between the parameter item D(Oob(a), Oob(b)) and the CGEB items; the parameter item D(Oob(a), Oob(b)) is embodied as the track spatio-temporal interval solved from the geo-video shot spatio-temporal metadata; and CGEB(a), CGEB(b) are embodied as the global extreme-value, mean and variance statistical feature items of each track structure feature, such as local azimuth, turn angle, speed and acceleration.
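The CGEB statistical feature items can be sketched as global statistics (min, max, mean, variance) over per-segment speed and turn angle of a track of (x, y, t) points. This is a minimal illustration; azimuth and acceleration items and the fitting function F itself are omitted.

```python
import math

def trajectory_features(points):
    """Global statistics of per-segment speed and turn angle for one behavior
    track - a simplified form of the CGEB feature items."""
    speeds, turns = [], []
    for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
        speeds.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
    for a, b, c in zip(points, points[1:], points[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        # signed turn angle, wrapped to (-pi, pi]
        turns.append((h2 - h1 + math.pi) % (2 * math.pi) - math.pi)

    def stats(xs):
        m = sum(xs) / len(xs)
        return {"min": min(xs), "max": max(xs), "mean": m,
                "var": sum((x - m) ** 2 for x in xs) / len(xs)}

    return {"speed": stats(speeds), "turn": stats(turns)}
```

For a constant-speed straight-line track, the speed variance is zero and the mean turn angle is zero, which is the stability that the assumption in step 3.1 relies on.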
Furthermore, in the step 4, the deduction of the multi-constraint monitoring blind area behavior process includes the following sub-steps:
step 4.1, performing track semantic expression of geographic video shot semantic metadata, and executing the following substeps:
and 4.1a, positioning and judging the semantic position of the track sequence point. And mapping the schematic discrete tracks of the geographic video shot semantic metadata in the position scenes into geographic position set scenes divided by a unified geographic framework.
And 4.2b, judging the geographic motion mode of the track data object. The method comprises the steps of taking track data objects described by sequence points as a whole, based on each geographical position object corrected by position boundaries in a monitoring scene, analyzing and judging a geographical semantic motion mode expressed based on geographical positions by using a local azimuth information item and a corner information item in a track structure characteristic item of the object line by line according to the change characteristics of the relation between the track and the position, according to a motion mode judging rule and on the principle of unit position reference mode priority. The geographic semantic motion mode description form based on the network open expression of the geographic position of the monitoring scene breaks through the limitation of content information of a geographic video imaging space-time window, so that the behavior process characteristics of the track object facing the geographic position trend extrapolation are expressed, and the trend constraint effect of the blind area movement behavior deduction is formed.
Step 4.2, semantic path deduction in the blind area moving behavior process, and the following substeps are executed:
and 4.2a, mapping the multilayer semantic positions and the calculation layers of the incidence relation thereof. For the behavior tracks corresponding to every two adjacent geographic video shots, based on the geographic motion mode distinguished in the last step: and uniformly mapping the geographic positions related in the mode to a calculation layer expressed by atomic geographic positions in a monitoring scene according to a calculation-description hierarchical organization method, and recording mapping paths for restoring semantic path descriptions of corresponding layers after obtaining estimated track data objects.
Step 4.2b, computing the blind-area path plan. Blind-zone geometric path planning is realized based on the A* algorithm oriented to static network solving, taking into account, on top of the evaluation function of the existing A* algorithm, the parameter influence of the optimal-scheme constraint index COI defined by the invention. Where the existing A* evaluation function is expressed as E(n), with n a traversed node of the network, the method performs the optimal-path solution with a constraint-aware evaluation function E'(n) in place of the original E(n); E'(n) is solved as the absolute difference between E(n) and the constraint index COI, expressed as:
E’(n)=|E(n)-COI|
as an example, using spatial distance as an index, the COI calculations for Ogs1 and Ogs2 can be embodied as:
COI(Ogs1,Ogs2)=ΔTmin(Ogs1,Ogs2)*AVERAGE(V(Ogs1),V(Ogs2))
Here, the behavior-process parameters CGEB(a) and CGEB(b) in the geographic video content are embodied as the speed features V(Ogs1), V(Ogs2) of the track sequence points; V(Ogs1), V(Ogs2) are simply combined by the mean function AVERAGE(a, b); the spatio-temporal condition parameter D(Ogs1, Ogs2) between the behavior processes is embodied as the minimum time difference ΔTmin(Ogs1, Ogs2) of the tracks; and the functional operation relation f is realized as scalar multiplication. This exemplary parameter set is one feasible set based on spatial distance; when higher-precision track expression is required, it can be refined by further improving the functions F and f.
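The exemplary solution can be sketched as an A*-style search whose expansion priority is E'(n) = |E(n) − COI|, with COI = ΔTmin(Ogs1, Ogs2) × AVERAGE(V(Ogs1), V(Ogs2)). The toy network, weights and zero heuristic below are illustrative assumptions.

```python
import heapq

def coi(dt_min, v1_mean, v2_mean):
    """COI example: minimum time gap between the two shots x average mean speed."""
    return dt_min * (v1_mean + v2_mean) / 2.0

def astar_coi(graph, start, goal, h, COI):
    """A*-style search expanded by E'(n) = |E(n) - COI| instead of E(n): it
    prefers candidate paths whose estimated length matches the expected
    blind-zone travel distance implied by the adjacent shots."""
    open_q = [(abs(h(start) - COI), 0.0, start, (start,))]
    while open_q:
        _, g, node, path = heapq.heappop(open_q)
        if node == goal:
            return list(path), g
        for nbr, w in graph.get(node, []):
            if nbr in path:        # avoid cycles; no shortest-g pruning, since
                continue           # the target cost is COI, not the minimum
            e = g + w + h(nbr)     # ordinary A* evaluation E(n)
            heapq.heappush(open_q, (abs(e - COI), g + w, nbr, path + (nbr,)))
    return None, float("inf")
```

With a large time gap and steady speed (COI = 10), the search prefers a detour of matching length over the geometrically shortest route, which is exactly the effect of replacing E(n) by E'(n).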
As shown in fig. 3, performing the geographic-semantic-association-constrained multi-video event blind-area change process deduction on the discrete tracks illustrated on the left derives the blind-area track of the monitoring scene on the right.
Step 4.2c, multi-level semantic path expression of the blind-area track. For the blind-area behavior-process geometric track obtained by analysis, a semantic path is described with the nodes and edges of the semantic location graph as reference, using the track semantic expression method of the geographic video shot semantic metadata, in a monitoring scene with multi-level geographic location association expression. In particular, to support the semantic understanding of the geographic video content and the continuous blind-zone behavior process, the semantic expression of the blind zone must remain logically connected, on geographic location, with the track semantics of the geographic video shot semantic metadata. Finally, the resolvable multi-level semantic paths are saved as associated-semantic-enhanced semantic metadata of the corresponding geographic video shot group.

Claims (7)

1. The deduction method for the change process of the blind area of the multiple video events with the geographic semantic association constraint is characterized by comprising the following steps of:
step 1, construction of geographical position condition constraints: acquiring thematic semantic information of a monitored area, analyzing geographic position semantics in the thematic semantic information, and extracting a geographic position object and an object relation; establishing a longitudinal semantic position hierarchical structure and a transverse semantic position communication network facing the position relation, and expressing the geographic position semantic association under the unified positioning interval division of the monitoring scene interval;
step 2, analyzing trend constraint of the motion mode: on the basis of the expression dimension of the geometric figure of the three-dimensional building model forming elements and the semantic concept description granularity, classifying, inducing, analyzing and utilizing the characteristic semantic relationship in the three-dimensional building model to extract the position boundary characteristics which completely express the three-dimensional building model and are regular to form for distinguishing the motion mode based on the position characteristics;
step 3, cost constraint estimation of space-time distance: analyzing behavior movement characteristics of information blind areas among behavior processes by utilizing the statistical characteristics of the tracks and the track pairs of adjacent behavior processes among the orderly organized geographic video shot groups, and establishing blind area behavior process characteristic parameters for mapping the spatio-temporal distance migration cost;
step 4, deduction of a multi-constraint monitoring blind area behavior process: and (3) combining a geographical position relation network of a scene, a geographical motion mode of a behavior process in video content and a path discrimination index quantitatively solved based on a space-time distance and a track characteristic, and performing semantic path deduction of a moving behavior process of the geographical entity of the monitoring blind area so as to enhance the associated semantics of the monitoring blind area in the geographical video lens group semantic metadata.
2. The method according to claim 1, wherein the construction of geographic position conditional constraints comprises the following steps:
step 1.1, extracting a thematic geographic position object set of a monitoring scene area:
firstly, for the geographic video shot group oriented to the monitoring scene area, extracting the metadata of the geographic video frame objects contained in each geographic video shot, and obtaining the three-dimensional monitoring-scene imaging interval of each shot through the imaging feature items in the metadata, to form an imaging interval set;
then, based on the overall spatial region of the imaging interval set, obtaining a monitoring scene interval that completely covers the discrete change processes in the contents of the multiple geographic videos, and taking this monitoring scene interval as the uniformly referenced basic expression range of geographic position;
then, taking the basic expression range of geographic position as the spatial retrieval condition, obtaining target position-information descriptions covering the position-related geometry, topology, semantics, attributes and functions, and storing each target position-information description as a geographic position object, to form the thematic geographic position object set;
step 1.2, constructing a longitudinal semantic position hierarchical structure:
according to the spatial inclusion relations of the geometric elements of the geographic position objects in the thematic geographic position object set, constructing a longitudinal hierarchy of the geographic position objects, i.e. a hierarchy in which parent positions and child positions are nested level by level; checking one by one the position-name expressions of the positioning-space division within the region of the hierarchically organized geographic position objects, to ensure the normalization and uniqueness of the position-object names within the region; and taking the normalized position name as the unique identification code of the geographic position object;
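As an illustrative sketch (not part of the claimed method), the parent-child nesting and name-uniqueness check of step 1.2 can be expressed as follows; the `LocationNode` class, the `registry`, and the location names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class LocationNode:
    """A geographic position object in the longitudinal semantic position hierarchy."""
    name: str                         # normalized position name, used as the unique ID
    children: list = field(default_factory=list)

def add_child(parent: LocationNode, child: LocationNode, registry: dict) -> None:
    """Nest a child position under its parent, enforcing name uniqueness (step 1.2)."""
    if child.name in registry:
        raise ValueError(f"duplicate position name: {child.name}")
    registry[child.name] = child
    parent.children.append(child)

def path_to(target: str, node: LocationNode, prefix=()):
    """Return the parent-to-child nesting path ending at the named position."""
    if node.name == target:
        return prefix + (node.name,)
    for c in node.children:
        p = path_to(target, c, prefix + (node.name,))
        if p:
            return p
    return None

# hypothetical monitoring-scene positions
registry = {}
campus = LocationNode("campus"); registry["campus"] = campus
building_a = LocationNode("building-A")
add_child(campus, building_a, registry)
add_child(building_a, LocationNode("building-A/floor-1"), registry)

print(path_to("building-A/floor-1", campus))
# ('campus', 'building-A', 'building-A/floor-1')
```

The nesting path doubles as the "parent position / child position" chain that later steps use when mapping between description and computation layers.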
step 1.3, constructing a transverse semantic position connectivity network:
on the basis of the constructed longitudinal semantic position hierarchy, judging the connectivity between every two geographic position objects from their semantic connectivity relations, using the topology, semantics, attribute and function relation information extracted together with the geographic position objects, and taking position interfaces as the conditional carriers supporting feature-object migration; storing each position connectivity relation with the conditional-constraint information items that support the inter-position connectivity as its conditional constraints; the conditional-constraint information items are divided and stored as spatial constraints, temporal constraints and function/attribute constraints, and serve as the query-interface conditions of each connectivity relation when the network is constructed;
after the longitudinal semantic position hierarchy and the transverse semantic position connectivity network are built, the monitoring scene interval with geographic position semantic-association expression is obtained; the constructed hierarchical relations and connectivity relations are recorded by class objects in the data model storing the data; the connectivity relations within the thematic geographic position object set form a conditionally constrained connectivity network, which serves as the data basis for multi-constraint path planning and analysis.
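A minimal sketch of the conditionally constrained connectivity store of step 1.3 and its query-interface conditions; the class, the position names and the constraint strings are illustrative assumptions:

```python
from collections import defaultdict

class ConnectivityNetwork:
    """Transverse semantic position connectivity with conditional constraints."""
    def __init__(self):
        # edges[u][v] -> constraint items split into spatial / temporal / functional
        self.edges = defaultdict(dict)

    def connect(self, u, v, spatial=None, temporal=None, functional=None):
        """Store a bidirectional connectivity relation with its conditional constraints."""
        constraint = {"spatial": spatial, "temporal": temporal, "functional": functional}
        self.edges[u][v] = constraint
        self.edges[v][u] = constraint

    def query(self, u, kind):
        """Query-interface condition: neighbors whose relation carries a `kind` constraint."""
        return [v for v, c in self.edges[u].items() if c.get(kind) is not None]

net = ConnectivityNetwork()
net.connect("lobby", "corridor", spatial="door width >= 0.9 m")
net.connect("corridor", "stairwell", temporal="open 06:00-22:00")
print(net.query("corridor", "temporal"))  # ['stairwell']
```

Splitting each relation into the three constraint classes mirrors the claim's division into spatial, temporal and function/attribute constraints.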
step 1.4, logical simplification of geographic position connectivity with separated computation and description semantic layers:
within the monitoring scene interval of geographic position semantic-association expression, performing logical simplification with a computation-description hierarchical organization method, specifically a position-association expression mapped between computation and description layers: the description layer records and expresses the complete position objects and relations; the computation layer serves the position-association enhancement and deduction computation, simplifying the original position nodes and connectivity relations of different semantic levels into position nodes and connectivity relations of a single semantic level;
step 1.5, simplifying the computation-layer geographic position connectivity network with conditional constraints:
after the computation-layer position nodes and their connectivity relations are constructed, extracting from the computation layer, according to the set conditional-constraint information items and parameters, the geographic position connectivity network reduced by the parameter conditions characterizing geometry, semantics and topology, to obtain the simplified geographic position connectivity network.
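The reduction of step 1.5 amounts to filtering computation-layer relations by a set parameter condition; a sketch under the assumption of a `passage_width` geometric parameter (hypothetical attribute and threshold):

```python
# Keep only connectivity relations whose constraint items satisfy the predicate.
def simplify(edges: dict, predicate) -> dict:
    """edges: (u, v) -> attribute dict; returns the reduced connectivity network."""
    return {(u, v): attrs for (u, v), attrs in edges.items() if predicate(attrs)}

edges = {
    ("room-101", "corridor"): {"passage_width": 1.2, "semantic_level": 0},
    ("corridor", "shaft"):    {"passage_width": 0.4, "semantic_level": 0},
}

# geometric parameter condition: keep only interfaces wide enough for a person
walkable = simplify(edges, lambda a: a["passage_width"] >= 0.8)
print(list(walkable))  # [('room-101', 'corridor')]
```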
3. The method according to claim 1, wherein the trend-constraint analysis of the motion pattern comprises the following steps:
step 2.1, unifying the model-surface subdivision in geometric expression dimension and semantic-concept description granularity:
according to the geometric form type of the surface objects in the geographic scene model and the associated lowest-level semantic-granularity characteristics, dividing the processing into three types: mesh planes associated with a semantic surface; mesh surfaces associated with a semantic surface and having open boundaries; and mesh surfaces associated with a semantic entity and enclosing a finite closed space;
according to these three processing types, after the surface-object set associated with multi-level semantic information in the model structure is extracted from the three-dimensional building model obtained on the basis of the ranges of the multiple geographic video shots, subdividing the multi-form-type surface objects object by object into a semantic object set with a unified geometric data structure, a unified surface form type and a unified basic semantic granularity;
step 2.2, automatically extracting atomic semantic positions and correcting position boundaries under semantic-relation constraints, comprising the following substeps:
step 2.2a, extracting semantic relations among objects in the semantic object set:
extracting the semantic information associated with the structurally subdivided surface-object set, and establishing in memory a tree hierarchy of the semantic objects according to their inclusion relations; judging, classifying and extracting the semantic relations between adjacent semantic objects; the semantic relations extracted bottom-up comprise the semantic combination relation between semantic surface objects and semantic body objects, and the semantic aggregation relation between semantic body objects; storing the extracted semantic-relation types as reference information for the automatic extraction of atomic semantic positions in step 2.2b;
step 2.2b, analyzing the atomic semantic position object based on the semantic relation:
using the semantic aggregation relations between semantic body objects, extracting one by one the semantic entities of the lowest aggregation level, to obtain an atomic semantic position set that covers the spatial range of the geographic scene model completely and without overlap;
step 2.2c, correcting the position boundaries of the atomic semantic position objects:
using the semantic combination relation between semantic surface objects and semantic body objects, extracting the entity boundary of each atomic semantic position object from the subdivided semantic surface-object set of triangular-mesh-like, plane-discretized surfaces, and performing classification correction according to the surface features;
step 2.2d, checking and correcting the spatial-coverage completeness of the atomic semantic positions:
based on the spatial coverage of the regular-body-boundary voxelized atomic semantic entities, selecting the correction operation from the following schemes according to the spatial relations of the voxel sets:
for every two atomic semantic entity objects, eliminating the spatial overlap between the objects by voxel local-boundary contraction;
and for every two adjacent atomic semantic positions, filling the gap between the atomic semantic entities by voxel local-boundary expansion.
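The two correction schemes of step 2.2d can be sketched on integer-grid voxel sets; representing entities as sets of cells, and contracting/expanding the boundary of one of the two entities, are illustrative assumptions:

```python
# Atomic semantic entities voxelized as sets of (i, j, k) grid cells.
def remove_overlap(a: set, b: set) -> tuple:
    """Eliminate spatial overlap by local boundary contraction:
    voxels shared with `a` are removed from `b`."""
    return a, b - a

def fill_gap(a: set, b: set, gap: set) -> set:
    """Fill the gap between two adjacent entities by local boundary expansion:
    `a` grows into the gap voxels."""
    return a | gap

a = {(0, 0, 0), (1, 0, 0)}
b = {(1, 0, 0), (2, 0, 0)}          # overlaps a at (1, 0, 0)
a2, b2 = remove_overlap(a, b)
print(a2 & b2)                      # set() -- no overlap remains
print(fill_gap(a2, b2, {(0, 0, 1)}))
```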
4. The method according to claim 3, wherein the classification correction comprises the following steps:
firstly, correcting erroneous topological connections in the topological relations between geometric surfaces by inserting virtual edges between every two atomic semantic positions with a semantic aggregation relation, specifically:
I) extracting the geometric-surface sets of the two atomic semantic positions;
II) sequentially computing, by polygon-vector intersection, the intersection line segments of every two surface objects between the surface sets, and storing the intersection segments to the respective intersecting surfaces;
III) traversing each surface of the two atomic semantic entity objects, performing constrained triangulation in turn with the intersection segments as constraint features, and storing the newly inserted intersection edges as virtual-edge boundaries of the atomic semantic positions;
then, correcting open boundaries between geometric surfaces by inserting a virtual surface into each atomic semantic position, specifically:
I) extracting an atom semantic position geometric surface set;
II) extracting the boundary contour line of each geometric plane and storing the boundary contour line as a line segment array;
III) traversing each line segment array, and extracting a line segment set which only appears once;
IV) searching a closed polygon in the line segment set which only appears once until all line segments in the set are used;
V) triangulating each closed polygon into a planar mesh, and taking the triangulated polygon mesh as the boundary of the virtual surface of the atomic semantic position.
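Steps III) and IV) of the virtual-surface correction (collecting boundary segments that occur exactly once, then chaining them into a closed polygon) can be sketched as follows; the greedy end-to-end chaining is one possible realization, and the vertex coordinates are hypothetical:

```python
from collections import Counter

def once_segments(segments):
    """Segments whose undirected form appears exactly once are open-boundary edges."""
    counts = Counter(frozenset(s) for s in segments)
    return [s for s in segments if counts[frozenset(s)] == 1]

def chain_closed_polygon(segs):
    """Greedily chain segments end-to-end until the start vertex is reached again."""
    segs = list(segs)
    poly = list(segs.pop(0))
    while segs and poly[-1] != poly[0]:
        for i, (p, q) in enumerate(segs):
            if p == poly[-1]:
                poly.append(q); segs.pop(i); break
            if q == poly[-1]:
                poly.append(p); segs.pop(i); break
        else:
            break  # open chain: no closed polygon found
    return poly if poly[0] == poly[-1] else None

# an interior edge appears twice and is filtered out; the open boundary closes
segs = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 0)),
        ((2, 2), (3, 3)), ((3, 3), (2, 2))]
print(chain_closed_polygon(once_segments(segs)))
# [(0, 0), (1, 0), (1, 1), (0, 0)]
```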
5. The method according to claim 1, wherein the cost-constraint estimation of the spatio-temporal distance comprises the following substeps:
step 3.1, from the perspective of the correlation of geographic-video-data feature information, proposing the following hypothesis and inference:
hypothesis: a feature object has, over the change process of the monitoring scene, an overall continuous variation of its object-parameter values;
inference: based on the hypothesis, the behavior process of a feature object in a monitoring blind zone and its behavior processes in the geographic video shots adjacent to the blind zone have correlation characteristics that can be fitted by the object-parameter values;
step 3.2, based on the above inference, let O_ob(x) be a blind-zone behavior process and O_ob(a), O_ob(b) be the behavior processes in the geographic video shots adjacent to the blind zone, where x denotes a continuous process and a, b denote the behavior-process segments contained and visible in the existing video; then:
1) each behavior-process object is described by: a content semantic item CSE describing the individual characteristics of the object; a geographic semantic item GSE describing the geographic position of each state under a unified spatio-temporal reference frame; a content semantic item CSB describing the behavior-action type; and a geographic semantic item GSB describing the motion mode of position-relation change under the unified spatio-temporal reference; O_ob(x), O_ob(a) and O_ob(b) are expressed as:
O_ob(x) = {CSE(x), GSE(x), CSB(x), GSB(x)}
O_ob(a) = {CSE(a), GSE(a), CSB(a), GSB(a)}
O_ob(b) = {CSE(b), GSE(b), CSB(b), GSB(b)}
where CSE(x), GSE(x), CSB(x) and GSB(x) are the characteristic parameters of the blind-zone behavior process;
2) according to the inference, the characteristic parameters of O_ob(x), O_ob(a) and O_ob(b) establish a function-fitting relation F based on their correlation characteristics, so that the blind zone O_ob(x) can be estimated from the known O_ob(a) and O_ob(b), formally:
{CSE(x), GSE(x), CSB(x), GSB(x)} = F({CSE(a), GSE(a), CSB(a), GSB(a)}, {CSE(b), GSE(b), CSB(b), GSB(b)})
abbreviating the above as CGEB(x) = F(CGEB(a), CGEB(b)), the analyzable and modelable behavior-process features CGEB(a), CGEB(b) and the spatio-temporal distance D(O_ob(a), O_ob(b)) between the behavior processes define the optimal scheme of the blind-zone position-migration change between the two behaviors through the constraint index COI, computed as:
COI(O_ob(a), O_ob(b)) = f(D(O_ob(a), O_ob(b)), CGEB(x))
                      = f(D(O_ob(a), O_ob(b)), F(CGEB(a), CGEB(b)))
specifically, in the behavior-process expression carried by trajectory data objects, f is the functional relation over the parameter item D(O_ob(a), O_ob(b)) and the feature items; the parameter item D(O_ob(a), O_ob(b)) is expressed as the trajectory spatio-temporal interval solved from the spatio-temporal metadata of the geographic video shots; and CGEB(a), CGEB(b) are expressed as the structural features of local azimuth, corner, speed and acceleration of the behavior trajectory, together with the global maximum, mean and variance statistics of each structural feature.
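One possible instantiation of the CGEB feature items and of the fitting relation F on trajectories of (t, x, y) sequence points; the speed-only feature set and the midpoint fit are illustrative assumptions, not the claimed form of F:

```python
import math

def cgeb(track):
    """Speed structural feature with its global max / mean / variance statistics."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        speeds.append(math.hypot(x1 - x0, y1 - y0) / (t1 - t0))
    mean = sum(speeds) / len(speeds)
    var = sum((s - mean) ** 2 for s in speeds) / len(speeds)
    return {"max": max(speeds), "mean": mean, "var": var}

def F(cgeb_a, cgeb_b):
    """Fit the blind-zone features from the two adjacent shots (here: midpoint)."""
    return {k: (cgeb_a[k] + cgeb_b[k]) / 2 for k in cgeb_a}

track_a = [(0, 0, 0), (1, 1, 0), (2, 2, 0)]   # moves at 1 unit/s
track_b = [(10, 8, 0), (11, 11, 0)]           # moves at 3 units/s
est = F(cgeb(track_a), cgeb(track_b))         # estimated CGEB(x) for the blind zone
print(est["mean"])  # 2.0
```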
6. The method according to claim 1, wherein the multi-constraint monitoring blind-zone behavior-process deduction comprises the following substeps:
step 4.1, performing trajectory semantic expression of the geographic-video-shot semantic metadata, executing the following substeps:
step 4.1a, semantic-position positioning and discrimination of the trajectory sequence points:
mapping the discrete trajectories in the position scene of the geographic-video-shot semantic metadata into the thematic geographic position object-set scene divided by the unified geographic framework;
step 4.1b, judging the geographic motion mode of the trajectory data object:
taking the trajectory data object described by sequence points as a whole, and based on each position-boundary-corrected geographic position object in the monitoring scene, analyzing and judging segment by segment, with the local azimuth and corner information items in the trajectory structural-feature items of the object, the geographic-semantic motion mode expressed on the geographic positions, according to the change characteristics of the trajectory-position relation, the set motion-mode judgment rules, and the principle of the unit-position reference mode;
step 4.2, semantic-path deduction of the blind-zone moving behavior process, executing the following substeps:
step 4.2a, mapping the multi-level semantic positions and their association relations onto the computation layer:
for the behavior trajectories corresponding to every two adjacent geographic video shots, on the basis of the geographic-semantic motion modes judged in the previous step, uniformly mapping the involved geographic positions, by the computation-description hierarchical organization method, onto the computation layer expressed by atomic semantic positions in the monitoring scene, and recording the mapping paths so that, after the estimated trajectory data object is obtained, the semantic-path description of the corresponding layer can be restored;
step 4.2b, calculating a blind area path plan:
solving by the conventional A* algorithm with an evaluation function E'(n) that takes the constraint into account, defined as the absolute value of the difference between the evaluation function E(n) and the constraint index COI: E'(n) = |E(n) - COI|
where n is a via node of the longitudinal semantic position hierarchy and the transverse semantic position connectivity network, representing an atomic or an aggregate semantic position; and the constraint index COI represents an index constrained according to one of the conditional-constraint information items;
step 4.2c, expressing the multi-level semantic path of the blind-zone trajectory:
for the geometric trajectory of the blind-zone behavior process obtained by the analysis, describing, through step 4.1 and within the monitoring scene of multi-level geographic position association expression, the semantic path with the nodes and edges of the semantic position graph as reference, thereby realizing the deduction of the blind-zone change process.
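A sketch of step 4.2b: a conventional A* search over the connectivity network, except that the frontier is ranked by E'(n) = |E(n) - COI| with E(n) = g(n) + h(n); the graph, the heuristic values and the COI value are hypothetical:

```python
import heapq

def a_star_coi(graph, start, goal, h, coi):
    """graph: node -> [(neighbor, edge_cost)]; h: heuristic; coi: constraint index."""
    frontier = [(abs(h(start) - coi), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            e = (g + w) + h(nbr)               # E(n) = g(n) + h(n)
            heapq.heappush(frontier, (abs(e - coi), g + w, nbr, path + [nbr]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("D", 1)], "C": [("D", 1)], "D": []}
h = lambda n: {"A": 2, "B": 1, "C": 1, "D": 0}[n]
print(a_star_coi(graph, "A", "D", h, coi=0.0))  # ['A', 'B', 'D']
```

With COI = 0 the ranking reduces to ordinary A*; a nonzero COI steers the search toward paths whose evaluation matches the constraint index.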
7. The method according to claim 6, wherein in step 4.2b, using the spatio-temporal distance as the index, the COI of O_gs1 and O_gs2 is calculated as:
COI(O_gs1, O_gs2) = ΔT_min(O_gs1, O_gs2) * AVERAGE(V(O_gs1), V(O_gs2))
where O_gs1 and O_gs2 are the discrete trajectories of the semantic objects of two geographic video shots in a geographic video shot group; the behavior-process parameters CGEB(a) and CGEB(b) in the geographic video contents are expressed as the sequence-point speed features V(O_gs1), V(O_gs2) of the trajectories; the fitting function F of V(O_gs1), V(O_gs2) is the mean function AVERAGE(a, b); the spatio-temporal condition parameter D(O_gs1, O_gs2) between the behavior processes is expressed as the minimum time difference ΔT_min(O_gs1, O_gs2) of the trajectories; and the functional relation f is realized as multiplication.
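The claim-7 instantiation COI = ΔT_min * AVERAGE(V1, V2) can be sketched directly on (t, x, y) trajectories; the sample trajectories are illustrative:

```python
import math

def mean_speed(track):
    """Mean speed over a (t, x, y) sequence-point trajectory."""
    d = sum(math.hypot(x1 - x0, y1 - y0)
            for (_, x0, y0), (_, x1, y1) in zip(track, track[1:]))
    return d / (track[-1][0] - track[0][0])

def coi(track1, track2):
    """Minimum time difference between the trajectories times their mean speed."""
    dt_min = min(abs(p[0] - q[0]) for p in track1 for q in track2)
    return dt_min * (mean_speed(track1) + mean_speed(track2)) / 2

ogs1 = [(0, 0, 0), (2, 4, 0)]     # mean speed 2
ogs2 = [(5, 10, 0), (6, 14, 0)]   # mean speed 4
print(coi(ogs1, ogs2))            # 3 * 3.0 = 9.0
```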
CN202010977915.3A 2020-09-17 2020-09-17 Multi-video event blind area change process deduction method based on geographic semantic association constraint Active CN112214642B (en)

Publications (2)

Publication Number Publication Date
CN112214642A 2021-01-12
CN112214642B 2021-05-25



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant