CN113032885A - View relation analysis method, device and computer storage medium - Google Patents

View relation analysis method, device and computer storage medium

Info

Publication number
CN113032885A
CN113032885A (application CN202110376074.5A)
Authority
CN
China
Prior art keywords
grid
reachable
visual
grids
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110376074.5A
Other languages
Chinese (zh)
Other versions
CN113032885B (en)
Inventor
王浩锋
金珊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN202110376074.5A priority Critical patent/CN113032885B/en
Publication of CN113032885A publication Critical patent/CN113032885A/en
Application granted granted Critical
Publication of CN113032885B publication Critical patent/CN113032885B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/13Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Structural Engineering (AREA)
  • Computational Mathematics (AREA)
  • Civil Engineering (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a view relation analysis method, a device, and a computer storage medium. The method comprises the following steps: generating reachable relation data and visual relation data for each grid in a building plan; and, based on the reachable space of the building plan, calculating the average visual depth and the average viewed depth of each grid from the reachable relation data and the visual relation data, thereby completing the view relation analysis of each grid. The invention solves the problem that prior-art methods, by splitting movement and vision into two separate modes of spatial experience, cannot truthfully describe the movement-based perceptual experience of the Chinese classical garden or the 'seeing' and 'being seen' spatial characteristics of its scenery; through a quantitative analysis of the 'seeing' and 'being seen' spatial experience, it better reveals the spatial perception characteristics of different parts of Chinese classical gardens.

Description

View relation analysis method, device and computer storage medium
Technical Field
The present invention relates to the spatial analysis of classical gardens and architecture, and more particularly, to a view relation analysis method, a device, and a computer storage medium.
Background
In ordinary buildings, the visible and reachable relations are almost consistent: most places that can be seen can be walked to directly. In the space of the Chinese classical garden, however, the reachable and visible relations separate and interlock: a place the eye reaches directly can often be arrived at only by a tortuous path. This asymmetry between the reachable and the visible produces the classical garden's unique 'seeing' and 'being seen' spatial experiences: some places are easy to walk through and command good views, while others are hard to reach but easy to see. Most existing studies describe such experiences only in literary language or through photographs, and cannot characterize them objectively from a quantitative point of view.
Although Visibility Graph Analysis (VGA) in Space Syntax theory attempts to simulate the changing visual relations experienced during movement through a continuous description of the visual field, current methods are nearly ineffective for spatial systems, such as classical gardens, in which the visible relations and the reachable spatial system are markedly misaligned. In practice, the two relations therefore have to be separated, and 'movement' and 'vision' are modeled and analyzed as two independent systems: a reachable-layer view model and a visible-layer view model. Neither a purely visible analysis nor a purely reachable analysis reflects the true view relations of a space. A purely visible analysis ignores transparent walls (such as windows) and low obstacles that block movement, and thus overestimates the visibility of the space: a person cannot fly over such barriers from one position to another in order to see places otherwise hidden. A purely reachable analysis makes no distinction between transparent and solid boundaries and, because it does not account for sight ranging ahead of movement, inevitably underestimates the visual performance of the space.
An approach that artificially splits movement and vision into two separate modes of spatial experience naturally cannot truthfully describe the movement-based perceptual experience of Chinese classical gardens or the 'seeing' and 'being seen' spatial characteristics of their scenery. It is therefore quite limited in practical application and ill suited to the analysis of complex spaces.
Disclosure of Invention
In view of this, embodiments of the present application provide a view relation analysis method, a device, and a computer storage medium, which solve the problem that prior-art methods, by splitting movement and vision into two separate modes of spatial experience, cannot truthfully describe the movement-based perceptual experience of the Chinese classical garden or the 'seeing' and 'being seen' spatial characteristics of its scenery.
The embodiment of the application provides a method for analyzing a view relation, which comprises the following steps:
generating reachable relation data and visual relation data of each grid in the building plan;
and calculating the average visual depth and the average viewed depth of each grid according to the reachable relation data and the visual relation data based on the reachable space of the building plan, and completing the view relation analysis of each grid.
In one embodiment, the generating of reachable relation data and visual relation data for each grid in the building plan includes:
acquiring a building plan;
drawing the building plan into a spatial syntactic analysis base map, and dividing the base map into a first number of equally sized grids;
based on the spatial syntax analysis base map, sequentially storing reachable attribute data of each grid into a first data table according to the numbering sequence of the grids, and generating a reachable layer spatial relationship graphic data table;
and sequentially storing the visual attribute data of each grid into a second data table according to the numbering sequence of the grids based on the spatial syntactic analysis base map, and generating a visual layer spatial relationship graphic data table.
In one embodiment, the calculating, based on the reachable space of the building plan, an average visual depth and an average viewed depth of each grid from the reachable relation data and the visual relation data, and completing the view relation analysis of each grid, includes:
acquiring a reachable grid matrix and a visible grid matrix of each grid through the number of the grid based on the reachable layer spatial relationship graphic data table and the visible layer spatial relationship graphic data table;
calculating the average visual depth of each grid by utilizing a first preset method based on the reachable grid matrix and the visual grid matrix;
and calculating the average viewed depth of each grid by utilizing a second preset method based on the reachable grid matrix and the visual grid matrix.
In an embodiment, the calculating an average visual depth of each grid by using a first preset method based on the reachable grid matrix and the visual grid matrix includes:
selecting a starting grid, wherein the number of visual grids corresponding to the starting grid is V0; wherein the starting grid is any grid in the spatial syntactic analysis base map;
starting from the starting grid, executing a first reachable topology, obtaining the total number of directly reachable grids of the first reachable topology together with the directly visual grids corresponding to those reachable grids, and recording it as the number V1 of newly added visual grids of the first reachable topology;
starting from any newly added visual grid of the previous reachable topology, executing the i-th reachable topology, obtaining the total number of directly reachable grids of the i-th reachable topology together with the directly visual grids corresponding to those reachable grids, and recording it as the number Vi of newly added visual grids of the i-th reachable topology; wherein i is a positive integer;
repeating the above operations until the number of the visible grids of the starting grid reaches the total number of the grids minus 1, and stopping the topological operation; wherein the total number of grids is the first number;
and calculating the average visual depth of the starting grid based on the number V0 of visual grids corresponding to the starting grid, the current reachable topology depth i, the number Vi of newly added visual grids of the i-th reachable topology, the total number of reachable topology operations performed, and the total grid number.
In an embodiment, the average visual depth of a grid is the average topological depth at which a starting grid in the spatial syntactic analysis base map reaches or sees each of the other grids.
In an embodiment, the calculating an average depth of view of each grid by using a second preset method based on the reachable grid matrix and the visual grid matrix includes:
taking the starting grid as the target, acquiring from the first number of grids the grids from which the starting grid is directly visible, and marking them as a visible grid area, wherein the number of grids in the visible grid area is D0;
Removing grids of the visual grid region from the first number of grids to obtain remaining grids;
executing a first reachable topology in the remaining grids, obtaining the number of grids among the remaining grids that can reach the visible grid area, and recording it as the number D1 of newly added reachable grids of the first reachable topology;
adding the newly added reachable grids obtained from the previous reachable topology into the visible grid area to generate a new visible grid area;
removing the new visible grid area from the remaining grids to obtain new remaining grids;
executing the i-th reachable topology in the new remaining grids, obtaining the number of grids among the new remaining grids that can reach the new visible grid area, and recording it as the number Di of newly added reachable grids of the i-th reachable topology; wherein i is a positive integer;
repeatedly executing the operation until the number of the new remaining grids is reduced to zero, and stopping the topological operation;
and calculating the average viewed depth of the starting grid based on the number D0 of grids in the visible grid area, the current reachable topology depth i, the number Di of newly added reachable grids of the i-th reachable topology, the total number of reachable topology operations performed, and the total grid number.
In an embodiment, the average viewed depth of a grid is the average topological depth at which the starting grid is reached or seen from each of the other grids in the spatial syntactic analysis base map.
In an embodiment, the method further comprises:
and performing visualization operation based on the view relation analysis result of each grid.
To achieve the above object, there is also provided a computer storage medium having stored thereon a program of a viewing relation analysis method, the program implementing any of the steps of the viewing relation analysis method described above when executed by a processor.
To achieve the above object, there is also provided a viewing relation analysis apparatus including a memory, a processor, and a program of a viewing relation analysis method stored on the memory and operable on the processor, the processor implementing the steps of any one of the above-described viewing relation analysis methods when executing the program of the viewing relation analysis method.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
generating reachable relation data and visual relation data for each grid in the building plan: accurate reachable relation data and visual relation data are generated for each grid by software computation, guaranteeing the accuracy of the average visual depth and the average viewed depth calculated subsequently;
based on the reachable space of the building plan, calculating the average visual depth and the average viewed depth of each grid from the reachable relation data and the visual relation data, and completing the view relation analysis of each grid: the reachable space of the building plan is the space a person can enter (the range defined by the reachable layer), and other spaces are excluded; this definition avoids inflating the spatial magnitude of the model (the number of grids in the plane), and thereby avoids computational error caused by skew in the data distribution; the view relation of each grid is then analyzed precisely through the quantified average visual depth and average viewed depth, revealing the spatial characteristics of the building.
The method solves the problem that prior-art methods, by splitting movement and vision into two separate modes of spatial experience, cannot truthfully describe the movement-based perceptual experience of the Chinese classical garden or the 'seeing' and 'being seen' spatial characteristics of its scenery, and better reveals the spatial perception characteristics of different parts of Chinese classical gardens through a quantitative analysis of the 'seeing' and 'being seen' spatial experience.
Drawings
FIG. 1 is a schematic flow chart of a first embodiment of the view relation analysis method of the present application;
FIG. 2 is a flowchart illustrating the detail of step S110 in the first embodiment of the view relation analysis method of the present application;
FIG. 3 is a plan of the Master-of-Nets Garden example and the corresponding reachable-layer and visible-layer base maps;
FIG. 4 is a flowchart illustrating the detail of step S120 in the first embodiment of the view relation analysis method of the present application;
FIG. 5 is a flowchart illustrating the detail of step S122 of the view relation analysis method of the present application;
FIG. 6 is a schematic diagram of the 'seeing' (left) and 'being seen' (right) measurement methods in the view relation analysis method of the present application;
FIG. 7 is a flowchart illustrating the detail of step S123 of the view relation analysis method of the present application;
FIG. 8 is a schematic flow chart of a second embodiment of the view relation analysis method of the present application;
FIG. 9 is a comparison of the analysis results of the prior art and the present method;
FIG. 10 is a schematic hardware architecture diagram of a view relation analysis device according to an embodiment of the present application.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: generating reachable relation data and visual relation data for each grid in the building plan; based on the reachable space of the building plan, calculating the average visual depth and the average viewed depth of each grid from the reachable relation data and the visual relation data, and completing the view relation analysis of each grid. The invention solves the problem that prior-art methods, by splitting movement and vision into two separate modes of spatial experience, cannot truthfully describe the movement-based perceptual experience of the Chinese classical garden or the 'seeing' and 'being seen' spatial characteristics of its scenery, and better reveals the spatial perception characteristics of different parts of Chinese classical gardens through a quantitative analysis of the 'seeing' and 'being seen' spatial experience.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Referring to fig. 1, fig. 1 is a first embodiment of a view relationship analysis method of the present application, the method including:
step S110: reachable relationship data and visual relationship data for each grid in the building plan are generated.
Specifically, the reachable relation data and the visual relation data of each grid in the building plan can be generated by preset tools. In this embodiment, the spatial syntactic analysis base map can be drawn with CAD software (or by other means), the grids generated, and the reachable-layer and visible-layer relation graph data tables produced with DepthmapX.
Specifically, the reachable relationship data may be a reachable layer spatial relationship graphical data table; the visual relation data can be a visual layer spatial relation graphic data table; the data table is not limited to the above data table, and may include other data having a reachable or visible relationship.
Step S120: and calculating the average visual depth and the average viewed depth of each grid according to the reachable relation data and the visual relation data based on the reachable space of the building plan, and completing the visual field relation analysis of each grid.
Specifically, in this embodiment the reachable space of the building plan includes only the space a person can enter, that is, the range defined by the reachable layer; other spaces are excluded. This definition avoids inflating the spatial magnitude of the model (the number of grids in the plane) and thus reduces error, since the calculation of mean topological depth is significantly affected by the spatial magnitude and can in turn introduce statistical error through skew in the data distribution.
Specifically, analyzing the generated average visual depth and average viewed depth of each grid better measures how 'deep' or 'shallow' the view relations are.
Specifically, the Visibility Graph Analysis (VGA) method of Space Syntax theory is an important technical means of spatial research in the field of architecture and has given rise to a series of software tools, such as DepthmapX, Isovist, Decoding Spaces Toolbox, and Syntax2D. Taking DepthmapX, the most influential space syntax software, as an example: the method covers the analyzed building plan with a uniform grid of a certain density and draws the isovist of each grid (from its center point), that is, the polygon describing the field of view visible in a 360-degree sweep from that point as bounded by walls. Beyond geometric properties of the isovist such as its measurements, indices such as Connectivity and Mean Depth measure the overlap of isovists between grids and the change in topological depth of the visual relations. Under the influence of walls and other space-dividing objects in the building layout, the shape and size of the isovist change with the observation point, reflecting the changing field of view a person experiences during movement. A series of successive isovists constitutes a continuous sequence of scenes, representing, for example, a walk through a building. A large body of empirical research at home and abroad shows that the topological connection pattern of isovists in a building layout influences people's cognitive processes and behavior, such as circulation organization and patterns of static occupation of space.
The above embodiment has the following beneficial effects: quantifying the average visual depth and the average viewed depth of each grid makes the 'seeing' and 'being seen' spatial experience explicit, and quantitatively describes the 'seeing' and 'being seen' experiential characteristics and differences of complex building spaces (such as classical gardens), leading to a better understanding of the spatial design methods of classical gardens and better revealing the spatial perception characteristics of different parts of Chinese classical gardens.
Referring to fig. 2, fig. 2 shows the detailed implementation of step S110 in the first embodiment of the view relation analysis method of the present application, where the generating of the reachable relation data and the visual relation data of each grid in the building plan includes:
step S111: and acquiring a building plan.
Specifically, the building plan, or plan for short, is a drawing produced by horizontal projection with corresponding legends, comprising the walls, doors and windows, stairs, floors, and internal functional layout of a building or proposed building.
It should be noted that the present application takes a garden plan (jpg or vector drawing; ensure the plan has a scale or drawing unit) as its main research object, but it is not limited to garden plans: it applies to any complex building space in which reachable and visible space are interwoven. In the present application, the Master-of-Nets Garden in Suzhou is taken as the example for the specific embodiments, but the method is not limited to this garden.
Step S112: drawing the building plan into a spatial syntactic analysis base map, and dividing the base map into a first number of equally sized grids.
Specifically, install CAD software and DepthmapX (latest version v0.8) on a Windows system. Import the jpg picture or vector drawing of the Master-of-Nets Garden plan into CAD drawing software such as AutoCAD, and then draw the spatial syntax VGA analysis base map. When drawing, it is important to distinguish two types of boundary in the plan: visible boundaries and reachable boundaries. A visible boundary is an opaque solid, such as a wall higher than eye level, that blocks the line of sight; a reachable boundary is one that blocks bodily movement and, besides the former, includes boundaries that are low (below eye level) or transparent, such as railings, planting, water surfaces, and floor-to-ceiling glass (FIG. 3: left, the Master-of-Nets Garden plan; middle, the reachable-layer VGA base map, drawn from the plan's knee-level boundary conditions; right, the visible-layer VGA base map, drawn from the eye-level boundary conditions). Referring to chapter four of the 13th Five-Year-Plan architecture textbook Space Syntax Tutorial, draw the visible boundaries and the reachable boundaries on two separate layers (named, for example, "Visual boundary" and "Access boundary"), ensure the outer boundary of each layer is closed, and save the drawing in dxf file format.
Step S113: and sequentially storing the reachable attribute data of each grid into a first data table according to the numbering sequence of the grids based on the spatial syntactic analysis base map, and generating a reachable layer spatial relationship graphic data table.
Specifically, import the previously saved dxf file into DepthmapX and ensure that both the reachable layer and the visible layer in Drawing Layers are displayed. Referring to section 4.2.2 of Space Syntax Tutorial, set the VGA analysis grid size to 0.6 meter, fill the extent of the reachable space, and generate the spatial relation graph of the reachable layer. Then export this graph as a CSV file via the DepthmapX menu path: Map → Export → Visibility Graph Connections as CSV …. The exported CSV file records, by grid ID number, the other grids directly connected to each grid; the file may be named "Access links.csv".
Specifically, the reachable attribute data may include, but is not limited to, a reachable grid matrix, a connection relation, and an average depth.
Step S114: and sequentially storing the visual attribute data of each grid into a second data table according to the numbering sequence of the grids based on the spatial syntactic analysis base map, and generating a visual layer spatial relationship graphic data table.
Specifically, import the previously saved dxf file into DepthmapX and ensure that both the reachable layer and the visible layer in Drawing Layers are displayed. Referring to section 4.2.2 of Space Syntax Tutorial, set the VGA analysis grid size to 0.6 meter and fill the extent of the reachable space; then, referring to section 4.3.2, switch Drawing Layers to the current working layer and turn off the reachable layer of the imported dxf file, ensuring that only the visible layer is displayed, so that the regenerated spatial relation graph is that of the visible layer. Export this graph as a CSV file via the same DepthmapX menu path as above; for distinction, the visible-layer data table may be named "Visibility links.csv".
Specifically, the visual attribute data may include a visual grid matrix, a connection relation, an average depth, and the like, which is not limited herein.
The above steps have the following beneficial effects: the reachable-layer and visible-layer spatial relation graph data tables are generated correctly, ensuring the correctness of the subsequently calculated average visual depth and average viewed depth of each grid, and hence a correct analysis of the spatial characteristics of the garden architecture.
Referring to fig. 4, fig. 4 shows the detailed implementation of step S120 in the first embodiment of the view relation analysis method of the present application, where the calculating, based on the reachable space of the building plan, of the average visual depth and the average viewed depth of each grid from the reachable relation data and the visual relation data, and completing the view relation analysis of each grid, includes:
step S121: and acquiring the reachable grid matrix and the visual grid matrix of each grid through the number of the grid based on the reachable layer spatial relationship graphic data table and the visual layer spatial relationship graphic data table.
Specifically, in generating the reachable-layer and visible-layer spatial relation graph data tables, the positions of the grids do not change and the grid numbers in the two layers are identical, so the visual attribute data and the reachable attribute data are obtained for the same spatial locations.
Step S122: and calculating the average visual depth of each grid by utilizing a first preset method based on the reachable grid matrix and the visual grid matrix.
Specifically, in this embodiment the average visual depth of each grid is computed in the C++ language and the results are saved to a txt file; however, the invention is not limited to this language or storage format, which may be adjusted as needed.
Step S123: and calculating the average viewed depth of each grid by utilizing a second preset method based on the reachable grid matrix and the visual grid matrix.
Specifically, this step parallels step S122 described above and is not repeated here.
The above steps have the following beneficial effects: the first and second preset methods correctly calculate the average visual depth and the average viewed depth of each grid, so that the spatial information of each grid can be analyzed accurately.
Referring to fig. 5, fig. 5 shows the detailed implementation of step S122 of the view relation analysis method of the present application, where the calculating of the average visual depth of each grid by using a first preset method based on the reachable grid matrix and the visual grid matrix includes:
step S1221: selecting a starting point grid, wherein the number of the visual grids corresponding to the starting point grid is V0(ii) a Wherein the starting grid is any grid in the spatial syntactic analysis base map.
Step S1222: starting from the starting grid, executing a first reachable topology, obtaining the total number of directly reachable grids of the first reachable topology together with the directly visual grids corresponding to those reachable grids, and recording it as the number V1 of newly added visual grids of the first reachable topology.
Step S1223: starting from any newly added visual grid of the previous reachable topology, executing the i-th reachable topology, obtaining the total number of directly reachable grids of the i-th reachable topology together with the directly visual grids corresponding to those reachable grids, and recording it as the number Vi of newly added visual grids of the i-th reachable topology; wherein i is a positive integer.
Step S1224: repeating the above operations until the number of the visible grids of the starting grid reaches the total number of the grids minus 1, and stopping the topological operation; wherein the total number of grids is the first number.
Step S1225: the number V of visual grids corresponding to the starting grid0Is currently reachableTopology depth i, and the number V of newly added visual grids of the ith reachable topologyiAnd calculating the average visual depth of the starting point grid by carrying out the total times of the reachable topology and the total grid number.
Specifically, the computing idea of the code for the topological depth algorithm of "look" (view analysis), which yields the average visual depth, is as follows:
A reachable topology is performed on the current visual grid set of the starting point grid, adding new visual grids at each topology, until the number of visual grids of the starting point grid reaches the total grid number minus 1.
Let the current reachable topology depth be i, the number of newly added visual grids of the ith reachable topology be Vi, the total number of times the reachable topology is performed be n, the number of visual grids of the starting point grid be V0, and the total grid number be C. The calculation formula of the average visual depth is then:

average visual depth = (V0 × 1 + Σ_{i=1}^{n} Vi × (i + 1)) / C    (Equation 1)
the specific steps of the algorithm code are as follows:
1) Starting from the starting point grid, perform one reachable topology; record the number of directly reachable grids of this first reachable topology, together with the total number of grids directly visible from those reachable grids, as the number V1 of newly added visual grids of the first topology;
2) Starting from the visual grids of the first topology, perform a second reachable topology; from the number of directly reachable grids of this topology and the total number of grids directly visible from them, subtract the visual grid count V1 of the previous reachable topology to obtain the number V2 of newly added visual grids of the second topology;
3) Starting from the visual grids of the second topology, perform a third reachable topology; from the number of directly reachable grids of this topology and the total number of grids directly visible from them, subtract the visual grid count V2 of the previous reachable topology to obtain the number V3 of newly added visual grids of the third topology;
4) Repeating the steps until the number of the visible grids of the starting point grid reaches the total number of the grids in the map, and stopping topology;
5) according to equation 1, the average visible topology depth of the origin grid is calculated.
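The steps above can be sketched in C++, the language the embodiment is implemented in. This is an illustrative reconstruction, not the patent's own listing, under stated assumptions: reach[g] lists grid g's directly reachable neighbour grids, vis[g] lists the grids directly visible from g and includes g itself, and each reachable topology is seeded from the newly added visual grids of the previous one.

```cpp
#include <cstddef>
#include <vector>

// Illustrative reconstruction of the "look" (average visual depth)
// expansion in steps 1)-5) above. Names and data layout are assumptions.
double average_visual_depth(std::size_t start,
                            const std::vector<std::vector<std::size_t>>& reach,
                            const std::vector<std::vector<std::size_t>>& vis) {
    const std::size_t c = reach.size();              // total grid number C
    std::vector<char> visible(c, 0), reached(c, 0);
    long long total_depth = 0;

    // Depth 1: the V0 grids directly visible from the starting point grid.
    for (std::size_t g : vis[start])
        if (!visible[g]) { visible[g] = 1; total_depth += 1; }

    std::vector<std::size_t> frontier{start};        // seeds of the next topology
    for (long long depth = 2; !frontier.empty(); ++depth) {
        std::vector<std::size_t> new_reached, new_visible;
        for (std::size_t g : frontier) {
            reached[g] = 1;
            for (std::size_t r : reach[g])           // one reachable topology step
                if (!reached[r]) { reached[r] = 1; new_reached.push_back(r); }
        }
        for (std::size_t r : new_reached)            // what the reached grids see
            for (std::size_t v : vis[r])
                if (!visible[v]) { visible[v] = 1; total_depth += depth;
                                   new_visible.push_back(v); }
        frontier.swap(new_visible);                  // the Vi grids seed step i + 1
    }
    return static_cast<double>(total_depth) / static_cast<double>(c);
}
```

The key point of the measure is that visibility can outrun movement: a grid's depth is the number of movement (reachable) steps taken before it first becomes visible, plus one.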
The above embodiment has the beneficial effect of guaranteeing the correct calculation of the average visual depth of each grid, so that the "look" spatial experience is better quantified.
In one embodiment, the average visual depth of each grid is the average topological depth at which any starting point grid in the spatial syntax analysis base map reaches or sees the other grids.
Specifically, the left diagram of fig. 6 is a schematic diagram of the measurement of the average visual topological depth of position J. Taking the left diagram of fig. 6 as an example, the average topological depth of "looking" from position J is:
depth 1 (visual): 1080 grids;
depth 2 (up to + visible): 2234 grids;
depth 3 (up to + visible): 835 grids;
depth 4 (up to + visible): 298 grids;
depth 5 (up to + visible): 139 grids;
depth 6 (up to + visible): 679 grids;
depth 7 (up to + visible): 1438 grids;
depth 8 (up to + visible): 611 grids;
depth 9 (up to + visible): 76 grids;
depth 10 (up to + visible): 194 grids;
depth 11 (up to + visible): 349 grids;
depth 12 (up to + visible): 96 grids;
depth 13 (up to + visible): 51 grids;
average topological depth = 37246 (total depth) / 8080 (total grid number) = 4.610.
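The arithmetic of this worked example can be checked directly: weight each depth level's grid count by its depth, sum, and divide by the total grid number, as in this short C++ sketch:

```cpp
#include <cstddef>

// Check of the worked "look" example above: the counts for depths 1..13
// are weighted by their depth, summed, and divided by the 8080 grids.
double j_point_look_depth() {
    const long counts[] = {1080, 2234, 835, 298, 139, 679, 1438,
                           611, 76, 194, 349, 96, 51};      // depths 1..13
    long total_depth = 0;
    for (std::size_t i = 0; i < sizeof(counts) / sizeof(counts[0]); ++i)
        total_depth += counts[i] * static_cast<long>(i + 1); // depth = i + 1
    return static_cast<double>(total_depth) / 8080.0;        // 37246 / 8080
}
```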
Each topological search of the visual relationship is built on the reachable space positions newly added in the previous step, until all grids are exhausted. This yields the average topological depth of "looking" at the other grids from position J.
Referring to fig. 7, fig. 7 is a detailed implementation step of step S123 of the method for analyzing a view relationship according to the present application, where the calculating an average depth of view of each grid by using a second preset method based on the reachable grid matrix and the visible grid matrix includes:
Step S1231: taking the starting point grid as the target, acquiring from the first number of grids the grids that can directly see the starting point grid, marked as the visual grid area, wherein the number of grids in the visual grid area is D0.
Step S1232: removing the grids of the visual grid area from the first number of grids to obtain the remaining grids.
Step S1233: executing a first reachable topology in the remaining grids, and obtaining the number of grids in the remaining grids that can reach the visual grid area, recorded as the number D1 of newly added reachable grids of the first reachable topology.
Step S1234: adding the newly added reachable grids obtained from the previous reachable topology into the visual grid area to generate a new visual grid area.
Step S1235: removing the new visual grid area from the remaining grids to obtain new remaining grids.
Step S1236: executing the ith reachable topology in the new remaining grids, and obtaining the number of grids in the new remaining grids that can reach the new visual grid area, recorded as the number Di of newly added reachable grids of the ith reachable topology; wherein i is a positive integer.
Step S1237: repeating the above operations until the number of new remaining grids reaches zero, then stopping the topological operation.
Step S1238: calculating the average viewed depth of the starting point grid from the number D0 of grids in the visual grid area, the current reachable topology depth i, the number Di of newly added reachable grids of the ith reachable topology, the total number of reachable topologies performed, and the total grid number.
The computing idea of the code is as follows:
Find the grids in the map that can directly see the starting point grid and mark them as the current visual grid area. From the remaining grids, find at each step the number of grids that can reach the current visual grid area through one reachable topology, add the found reachable grids to the visual grid area, and remove them from the remaining grids. Continue searching the remaining grids for grids reachable through one reachable topology until all grids in the map have been searched.
Let the current reachable topology depth be i, the number of newly added reachable grids of the ith reachable topology be Di, the total number of times the reachable topology is performed be n, the number of grids in the map that can directly see the starting point grid be D0, and the total grid number be C. The calculation formula of the average viewed depth is then:

average viewed depth = (D0 × 1 + Σ_{i=1}^{n} Di × (i + 1)) / C    (Equation 2)
the specific steps of the algorithm code are as follows:
1) Find the grids in the map that can directly see the starting point grid, and denote the number of such grids by D0;
2) Finding a grid which can directly see the starting grid from the map, and recording as a current visible grid area;
3) Among the remaining grids, with the visual grid area removed, find the number of grids that can be reached through one reachable topology, and record the number of newly added reachable grids of the first topology as D1;
4) Add the D1 grids to the visual grid area, remove the previously reached D1 grids from the remaining grids, search the remaining grids for the number of grids reachable through one reachable topology, and record the number of reachable grids of the second topology as D2;
5) Add the D2 grids to the visual grid area, remove the previously reached D2 grids from the remaining grids, search the remaining grids for the number of grids reachable through one reachable topology, and record the number of reachable grids of the third topology as D3;
6) Continue in this manner until the number of remaining grids reaches zero, then stop the topology;
7) according to equation 2, the average viewed topology depth of the origin grid is calculated.
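As with the "look" measure, steps 1)-7) can be sketched in C++. This is an illustrative reconstruction under stated assumptions, not the patent's own listing: reach[g] lists grid g's directly reachable neighbour grids, the reachable relation is symmetric, and sees_start[g] is nonzero when grid g can directly see the starting point grid (the starting grid is taken to see itself).

```cpp
#include <cstddef>
#include <vector>

// Illustrative reconstruction of the "viewed" (average viewed depth)
// expansion in steps 1)-7) above. Names and data layout are assumptions.
double average_viewed_depth(const std::vector<std::vector<std::size_t>>& reach,
                            const std::vector<char>& sees_start) {
    const std::size_t c = reach.size();          // total grid number C
    std::vector<char> in_region(c, 0);           // current visual grid area
    std::vector<std::size_t> region;             // grids added by the last step
    long long total_depth = 0;
    std::size_t covered = 0;

    // Depth 1: the D0 grids that can directly see the starting point grid.
    for (std::size_t g = 0; g < c; ++g)
        if (sees_start[g]) { in_region[g] = 1; region.push_back(g);
                             total_depth += 1; ++covered; }

    // ith reachable topology: remaining grids one reachable step away from
    // the region join it at depth i + 1, until no remaining grids are left.
    for (long long depth = 2; covered < c; ++depth) {
        std::vector<std::size_t> newly;
        for (std::size_t g : region)
            for (std::size_t r : reach[g])       // symmetric relation, so this
                if (!in_region[r]) {             // finds the remaining grids that
                    in_region[r] = 1;            // can reach the region in a step
                    newly.push_back(r);
                }
        if (newly.empty()) break;                // guards against unreachable grids
        total_depth += depth * static_cast<long long>(newly.size());
        covered += newly.size();
        region.swap(newly);
    }
    return static_cast<double>(total_depth) / static_cast<double>(c);
}
```

As the text notes below fig. 6, only the first depth level uses the visual relation; every later level is a pure reachable-relation expansion, which is why only the reachable adjacency and the "sees the start" mask are needed here.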
The above embodiment has the beneficial effect of guaranteeing the correct calculation of the average viewed depth of each grid, so that the "viewed" spatial experience is better quantified.
In one embodiment, the average viewed depth of each grid is the average topological depth at which any grid in the spatial syntax analysis base map reaches or sees the starting point grid.
Specifically, the right diagram of fig. 6 is a schematic diagram of the measurement of the average viewed topological depth of position J. Taking the right diagram of fig. 6 as an example, the average topological depth at which position J is "viewed" is:
Depth 1 (visual): 1080 grids;
depth 2 (reachable): 1598 grids;
depth 3 (reachable): 879 grids;
depth 4 (reachable): 1930 grids;
depth 5 (reachable): 1243 grids;
depth 6 (reachable): 929 grids;
depth 7 (reachable): 326 grids;
depth 8 (reachable): 95 grids;
average topological depth = 29464 (total depth) / 8080 (total grid number) = 3.647.
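The arithmetic of this "viewed" example can be checked the same way, weighting each depth level's grid count by its depth:

```cpp
#include <cstddef>

// Check of the worked "viewed" example above: the counts for depths 1..8
// are weighted by their depth, summed, and divided by the 8080 grids.
double j_point_viewed_depth() {
    const long counts[] = {1080, 1598, 879, 1930, 1243, 929, 326, 95}; // depths 1..8
    long total_depth = 0;
    for (std::size_t i = 0; i < sizeof(counts) / sizeof(counts[0]); ++i)
        total_depth += counts[i] * static_cast<long>(i + 1);           // depth = i + 1
    return static_cast<double>(total_depth) / 8080.0;                  // 29464 / 8080
}
```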
As can be seen from the right diagram of fig. 6, in the calculation of the "viewed" index, only the connections at the first topological depth are visual-relation connections; the connections at all other topological depths are reachable-relation connections.
Referring to fig. 8, fig. 8 is a view relation analysis method according to a second embodiment of the present application, the method further includes:
step S210: and obtaining reachable relation data and visual relation data of each grid in the building plan by a preset method.
Step S220: and calculating the average visual depth and the average viewed depth of each grid according to the reachable relation data and the visual relation data based on the reachable space of the building plan, and completing the view relation analysis of each grid.
Step S230: and performing visualization operation based on the view relation analysis result of each grid.
Compared with the first embodiment, the second embodiment includes step S230, and other steps have already been described in the first embodiment, and are not repeated herein.
Specifically, the txt file of the calculation results may be imported into a VGA analysis file of DepthmapX and visualized; the specific implementation steps are described in section 4.5.3 of chapter four of the "Space Syntax Tutorial" and are not repeated here. Fig. 9 compares, for the Master of the Nets Garden space, the average depth of the reachable layer, the average depth of the visual layer, and the average "look" and "viewed" depths. In this comparison of the old and new visibility graph analysis (VGA) methods, the two pictures on the left are the average depths of the two systems (the reachable layer and the visual layer of the Master of the Nets Garden) analyzed by the prior-art space-syntax VGA method; the two pictures on the right are the "look" and "viewed" spatial depth analyses of this embodiment. It should be noted that the darker the grid color in fig. 9, the smaller the average topological depth.
The above embodiment has the beneficial effect that, after the visualization operation is performed, it can easily be seen that the method better reveals the spatial perception characteristics of different parts of the garden.
The present application also provides a computer storage medium having stored thereon a program of a viewing relation analysis method, the program of the viewing relation analysis method realizing the steps of any one of the above-described viewing relation analysis methods when executed by a processor.
The present application also provides a viewing relation analysis apparatus including a memory, a processor, and a program of a viewing relation analysis method stored on the memory and executable on the processor, wherein the processor implements any of the steps of the viewing relation analysis method when executing the program of the viewing relation analysis method.
The present application relates to a visual field relationship analysis apparatus 010 which, as shown in fig. 10, includes: at least one processor 012 and a memory 011.
The processor 012 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the method may be performed by hardware integrated logic circuits or by instructions in the form of software in the processor 012. The processor 012 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 011; the processor 012 reads the information in the memory 011 and completes the steps of the method in combination with its hardware.
It is to be understood that the memory 011 in embodiments of the present invention can be volatile memory, non-volatile memory, or both. The non-volatile memory may be a Read Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 011 of the systems and methods described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method of view relationship analysis, the method comprising:
generating reachable relation data and visual relation data of each grid in the building plan;
and calculating the average visual depth and the average viewed depth of each grid according to the reachable relation data and the visual relation data based on the reachable space of the building plan, and completing the view relation analysis of each grid.
2. The vision field relationship analysis method of claim 1, wherein the generating reachable relation data and visual relation data of each grid in the building plan comprises:
acquiring a building plan;
drawing the building plan as a space syntactic analysis base map, and dividing the space syntactic analysis base map into a first number of grids of equal size;
based on the spatial syntax analysis base map, sequentially storing reachable attribute data of each grid into a first data table according to the numbering sequence of the grids, and generating a reachable layer spatial relationship graphic data table;
and sequentially storing the visual attribute data of each grid into a second data table according to the numbering sequence of the grids based on the spatial syntactic analysis base map, and generating a visual layer spatial relationship graphic data table.
3. The viewing area relationship analysis method according to claim 2, wherein the calculating an average visual depth and an average viewed depth of each grid according to the reachable relation data and the visual relation data based on the reachable space of the building plan, to complete the view relation analysis of each grid, comprises:
acquiring a reachable grid matrix and a visible grid matrix of each grid through the number of the grid based on the reachable layer spatial relationship graphic data table and the visible layer spatial relationship graphic data table;
calculating the average visual depth of each grid by utilizing a first preset method based on the reachable grid matrix and the visual grid matrix;
and calculating the average viewed depth of each grid by utilizing a second preset method based on the reachable grid matrix and the visual grid matrix.
4. The visual field relationship analysis method according to claim 3, wherein the calculating an average visual depth of each grid using a first preset method based on the reachable grid matrix and the visual grid matrix comprises:
selecting a starting point grid, wherein the number of visual grids corresponding to the starting point grid is V0; wherein the starting point grid is any grid in the spatial syntax analysis base map;
starting from the starting point grid, executing a first reachable topology, and obtaining the total number of directly reachable grids of the first reachable topology and of the directly visible grids corresponding to those reachable grids, recorded as the number V1 of newly added visual grids of the first reachable topology;
starting from any newly added visual grid of the previous reachable topology, executing the ith reachable topology, and obtaining the total number of directly reachable grids of the ith reachable topology and of the directly visible grids corresponding to those reachable grids, recorded as the number Vi of newly added visual grids of the ith reachable topology; wherein i is a positive integer;
repeating the above operations until the number of visual grids of the starting point grid reaches the total grid number minus 1, then stopping the topological operation; wherein the total grid number is the first number;
calculating the average visual depth of the starting point grid from the number V0 of visual grids corresponding to the starting point grid, the current reachable topology depth i, the number Vi of newly added visual grids of the ith reachable topology, the total number of reachable topologies performed, and the total grid number.
5. The view relationship analysis method according to claim 4, wherein the average visual depth of each grid is the average topological depth at which any starting point grid in the spatial syntax analysis base map reaches or sees the other grids.
6. The visual field relationship analysis method of claim 3, wherein said calculating an average depth of view for each grid using a second predetermined method based on the reachable grid matrix and the visible grid matrix comprises:
taking the starting point grid as the target, acquiring from the first number of grids the grids that can directly see the starting point grid, marked as the visual grid area, wherein the number of grids in the visual grid area is D0;
removing the grids of the visual grid area from the first number of grids to obtain the remaining grids;
executing a first reachable topology in the remaining grids, and obtaining the number of grids in the remaining grids that can reach the visual grid area, recorded as the number D1 of newly added reachable grids of the first reachable topology;
adding the newly added reachable grids obtained from the previous reachable topology into the visual grid area to generate a new visual grid area;
removing the new visual grid area from the remaining grids to obtain new remaining grids;
executing the ith reachable topology in the new remaining grids, and obtaining the number of grids in the new remaining grids that can reach the new visual grid area, recorded as the number Di of newly added reachable grids of the ith reachable topology; wherein i is a positive integer;
repeating the above operations until the number of new remaining grids reaches zero, then stopping the topological operation;
calculating the average viewed depth of the starting point grid from the number D0 of grids in the visual grid area, the current reachable topology depth i, the number Di of newly added reachable grids of the ith reachable topology, the total number of reachable topologies performed, and the total grid number.
7. The view relationship analysis method according to claim 6, wherein the average viewed depth of each grid is the average topological depth at which any grid in the spatial syntax analysis base map reaches or sees the starting point grid.
8. The viewing area relationship analysis method of claim 1, further comprising:
and performing visualization operation based on the view relation analysis result of each grid.
9. A computer storage medium having stored thereon a program of a viewing relation analysis method, the program of the viewing relation analysis method realizing the steps of the viewing relation analysis method according to any one of claims 1 to 8 when executed by a processor.
10. A visual field relationship analysis apparatus comprising a memory, a processor, and a program of a visual field relationship analysis method stored on the memory and executable on the processor, the processor implementing the steps of the visual field relationship analysis method according to any one of claims 1 to 8 when executing the program of the visual field relationship analysis method.
CN202110376074.5A 2021-04-07 2021-04-07 Vision relation analysis method, device and computer storage medium Active CN113032885B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110376074.5A CN113032885B (en) 2021-04-07 2021-04-07 Vision relation analysis method, device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110376074.5A CN113032885B (en) 2021-04-07 2021-04-07 Vision relation analysis method, device and computer storage medium

Publications (2)

Publication Number Publication Date
CN113032885A true CN113032885A (en) 2021-06-25
CN113032885B CN113032885B (en) 2023-04-28

Family

ID=76454175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110376074.5A Active CN113032885B (en) 2021-04-07 2021-04-07 Vision relation analysis method, device and computer storage medium

Country Status (1)

Country Link
CN (1) CN113032885B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075643A1 (en) * 2015-04-10 2018-03-15 The European Atomic Energy Community (Euratom), Represented By The European Commission Method and device for real-time mapping and localization
CN108470103A (en) * 2018-03-22 2018-08-31 东南大学 A kind of pivot function space layout design method based on Space Syntax
CN112435337A (en) * 2020-11-13 2021-03-02 郑亮 Landscape visual field analysis method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075643A1 (en) * 2015-04-10 2018-03-15 The European Atomic Energy Community (Euratom), Represented By The European Commission Method and device for real-time mapping and localization
CN108470103A (en) * 2018-03-22 2018-08-31 东南大学 A kind of pivot function space layout design method based on Space Syntax
CN112435337A (en) * 2020-11-13 2021-03-02 郑亮 Landscape visual field analysis method and system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CAO Wei; WANG Xiaochun; XUE Bai; HUANG Chunhua: "Space syntax analysis of the double corridors of Yangzhou gardens, taking He Garden and Ge Garden as examples" *
CAO Wei; XUE Bai; WANG Xiaochun; HU Lihui: "Analysis of the spatial organization characteristics of He Garden in Yangzhou based on space syntax" *
CAO Wei et al.: "Space syntax analysis of the double corridors of Yangzhou gardens, taking He Garden and Ge Garden as examples", Journal of Yangzhou University (Agricultural and Life Science Edition) *
WANG Jingwen: "Research status and development trends of space syntax", Huazhong Architecture *
JIN Shan et al.: "Research on the evolution of spatial form of old city centers based on space syntax", Urban Construction *

Also Published As

Publication number Publication date
CN113032885B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN107004297B (en) Three-dimensional automatic stereo modeling method and program based on two-dimensional plane diagram
CN113538671B (en) Map generation method, map generation device, storage medium and processor
Fuglstad et al. Does non-stationary spatial data always require non-stationary random fields?
Truong-Hong et al. Octree-based, automatic building facade generation from LiDAR data
US20130271461A1 (en) Systems and methods for obtaining parameters for a three dimensional model from reflectance data
TWI661210B (en) Method and apparatus for establishing coordinate system and data structure product
Xiong et al. Building seismic response and visualization using 3D urban polygonal modeling
Stahl et al. Globally optimal grouping for symmetric closed boundaries by combining boundary and region information
Napolitano et al. Minimizing the adverse effects of bias and low repeatability precision in photogrammetry software through statistical analysis
KR20190048506A (en) Method and apparatus for providing virtual room
Krukar et al. Embodied 3D isovists: A method to model the visual perception of space
CN112489099A (en) Point cloud registration method and device, storage medium and electronic equipment
RU2721078C2 (en) Segmentation of anatomical structure based on model
JP2015184061A (en) Extracting device, method, and program
US7116341B2 (en) Information presentation apparatus and method in three-dimensional virtual space and computer program therefor
CN113032885A (en) View relation analysis method, device and computer storage medium
Ripperda Determination of facade attributes for facade reconstruction
CN113126944B (en) Depth map display method, display device, electronic device, and storage medium
Li et al. [Retracted] 3D Real Scene Data Collection of Cultural Relics and Historical Sites Based on Digital Image Processing
CN114359505A (en) Three-dimensional walking surface modeling method based on voxel chessboard model
Chen et al. Measvre: Measurement tools for unity vr applications
CN117456550B (en) MR-based CAD file viewing method, device, medium and equipment
Mirescu et al. Estimate of Color Depth Discretization Impact on the Accuracy of an Ideal Pinhole Camera
Lee et al. Mouse picking with Ray casting for 3D spatial information open-platform
Kazaryan et al. Parametric evaluation of observed objects from images based on perspective geometry methods and convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: SHENZHEN VIDENT TECHNOLOGY CO.,LTD.

Assignor: SHENZHEN University

Contract record no.: X2023980046281

Denomination of invention: Analysis methods, devices, and computer storage media for domain relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231110

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: Sankexiaocao (Shenzhen) Internet of Things Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047154

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231115

Application publication date: 20210625

Assignee: Shenzhen Pengyang Smart Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047146

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231115

Application publication date: 20210625

Assignee: Shenzhen Zhenbing intelligent Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047136

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231115

Application publication date: 20210625

Assignee: SHENZHEN SCILIT TECHNOLOGY Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047129

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231115

Application publication date: 20210625

Assignee: SHENZHEN KSY Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980046891

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231114

Application publication date: 20210625

Assignee: Shenzhen Yingqi Consulting Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047348

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231116

Application publication date: 20210625

Assignee: Shenzhen Minghua Trading Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047346

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231116

Application publication date: 20210625

Assignee: Shenzhen Dongfang Huilian Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047336

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231116

Application publication date: 20210625

Assignee: Shenzhen Weigao Investment Development Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047270

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231116

Application publication date: 20210625

Assignee: Yuncheng Holding (Shenzhen) Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047231

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231116

Application publication date: 20210625

Assignee: Shenzhen Boosted Goal Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047206

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231115

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: Shenzhen Xunming Trading Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047343

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231116

Application publication date: 20210625

Assignee: Shenzhen Haocai Digital Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047340

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231116

Application publication date: 20210625

Assignee: Changyuan Comprehensive Energy (Shenzhen) Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047286

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231116

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: Shenzhen Kaixin Intelligent Control Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980048385

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231124

Application publication date: 20210625

Assignee: Shenzhen Jiarun original Xinxian Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980048249

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231123

Application publication date: 20210625

Assignee: SHENZHEN CHENGZI DIGITAL TECHNOLOGY Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980048050

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231123

Application publication date: 20210625

Assignee: Shenzhen Xinsheng interconnected technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980048035

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231123

Application publication date: 20210625

Assignee: Foshan Wanhu Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980048028

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231123

Application publication date: 20210625

Assignee: Foshan Deyi Intelligent Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980048004

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231123

Application publication date: 20210625

Assignee: SHENZHEN GEAZAN TECHNOLOGY Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980047959

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231123

Application publication date: 20210625

Assignee: SHENZHEN MASTERCOM TECHNOLOGY Corp.

Assignor: SHENZHEN University

Contract record no.: X2023980047952

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231123

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: Langwei Supply Chain Management (Shenzhen) Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980048668

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231128

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: Shenzhen Tianyi Survey Engineering Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980049540

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231201

Application publication date: 20210625

Assignee: Shenzhen Guangfeng Hongye Engineering Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980049510

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231201

Application publication date: 20210625

Assignee: Shenzhen Fulongsheng Industrial Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980049215

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231130

Application publication date: 20210625

Assignee: Shenzhen Dechangsheng Electromechanical Decoration Engineering Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980049197

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231130

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: Shenzhen Jinchengyu Decoration Engineering Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980050232

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231205

Application publication date: 20210625

Assignee: Shenzhen Weitai Building Materials Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980049901

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231204

Application publication date: 20210625

Assignee: Shenzhen Yajun Decoration Design Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980049899

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231204

Application publication date: 20210625

Assignee: Shenzhen Yijia Construction Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980049897

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231204

Application publication date: 20210625

Assignee: Shenzhen Yongji Construction Engineering Inspection Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980049891

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231204

Application publication date: 20210625

Assignee: Zhenfeng Decoration Design Engineering (Shenzhen) Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980049887

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231204

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: Shenzhen everything Safety Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980050514

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231207

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: Shenzhen Yangxin Decoration Engineering Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980052132

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231213

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: AVIC intelligent construction (Shenzhen) Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2023980054566

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20231228

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: SHENZHEN GENERAL BARCODE'S TECHNOLOGY DEVELOPMENT CENTER

Assignor: SHENZHEN University

Contract record no.: X2024980000040

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20240103

Application publication date: 20210625

Assignee: Shenzhen Subangbo Intelligent Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2024980000038

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20240103

Application publication date: 20210625

Assignee: Shenzhen Deep Sea Blue Ocean Technology Service Center

Assignor: SHENZHEN University

Contract record no.: X2024980000036

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20240104

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: Luoding Zhongda Technology Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2024980000187

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20240105

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210625

Assignee: SHENZHEN HONGHUI INDUSTRIAL Co.,Ltd.

Assignor: SHENZHEN University

Contract record no.: X2024980000463

Denomination of invention: Analysis methods, devices, and computer storage media for field of view relationships

Granted publication date: 20230428

License type: Common License

Record date: 20240110

EE01 Entry into force of recordation of patent licensing contract