CN107220321B - Method and system for three-dimensional materialization of entity in scene conversion - Google Patents

Method and system for three-dimensional materialization of entity in scene conversion

Info

Publication number
CN107220321B
CN107220321B (application CN201710358187.6A)
Authority
CN
China
Prior art keywords
entity
entities
appearance
dimensional
attribute
Prior art date
Legal status
Active
Application number
CN201710358187.6A
Other languages
Chinese (zh)
Other versions
CN107220321A (en)
Inventor
杨富平
黄杰
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201710358187.6A
Publication of CN107220321A
Application granted
Publication of CN107220321B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34Browsing; Visualisation therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Generation (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention claims a method and a system for three-dimensional materialization of an entity in scene conversion, wherein the method comprises the following steps: step one, establishing a general visual attribute structure of an entity; step two, classifying all entities and analyzing the influence factors and constraint relations of entities of different classes; step three, establishing a visual attribute structure adapted to each subclass through entity classification and analysis; step four, selecting a suitable appearance structure generation mode for entities of different types; step five, extracting and reasoning the influence factor information required by the visual attribute structure from the text; and step six, generating a visualized three-dimensional entity from the influence factor information and the appearance structure of the entity. The method skillfully utilizes the visual attribute structure and the entity classification principle, finally generates a visual entity meeting the requirements through generation of the entity appearance structure and rendering of the surface texture, and simultaneously solves the problems of rough models and an oversized model library.

Description

Method and system for three-dimensional materialization of entity in scene conversion
Technical Field
The invention belongs to the field of information visualization, and particularly relates to a method and a system for three-dimensional materialization of an entity in scene conversion.
Background
Information visualization is one of the important fields of research and development in recent years. Scene conversion converts the depiction of a scene in static text into a virtual visualized scene, better presenting the text information and the human imagination behind it. Realizing the visualization of scene text elements must satisfy not only the description in the text but also the actual conditions of the scene and its entities. In general, however, the text's description alone does not meet the requirements of scene-element visualization; the entities therefore need to be reasonably classified, a complete visual attribute structure established, and the visual information of the entities extracted and inferred according to that structure, so as to meet the visualization requirements of the entities.
According to research on entity visualization in text information at home and abroad, implementations fall into two types. The first studies the positional and kinematic relations between entities and uses rough visualization models to show the meaning of the text or scene, but the models are too rough to meet visualization requirements. The second systematically studies semantics and entity layout strategies, builds a relatively realistic but huge model library with modeling software, and displays scenes from that library, but does not solve the problem of the entity model library being too large. The difficulty of the invention lies in establishing an entity visual attribute structure that completes the classification system and in determining the generation mode of the entity appearance structure.
Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. It proposes a method and a system for three-dimensional materialization of an entity in scene conversion that skillfully utilize the visual attribute structure and the entity classification principle, finally generate a visual entity meeting the requirements through generation of the entity appearance structure and rendering of the surface texture, and simultaneously solve the problems of rough models and an oversized model library. The technical scheme of the invention is as follows:
a method for three-dimensional materialization of an entity in scene conversion comprises the following steps:
1) establishing a general attribute structure of entity visualization, comprising: establishing a quintuple entity structure; establishing a visual attribute structure;
2) classifying all entities, and analyzing influence factors and constraint relations of the entities of different classes;
3) establishing a visual attribute structure of an adaptive subclass through entity classification and analysis;
4) selecting an appearance structure generation mode suited to the determined entity for the entities of different types;
5) extracting and reasoning influence factor information required by the visual attribute structure from the Chinese natural language text;
6) and generating a visualized three-dimensional entity through the influence factor information and the appearance structure of the entity.
Further, the step 1) of establishing the quintuple entity structure includes: the triple form of the entity concept is extended to a quintuple, the entity name (concept domain, visual attribute, attribute value, state form, dependency element), with the quintuple expression of the entity structure: E(c, a, v, s, d), wherein c represents the basic concept domain of the entity and is the description of the basic category and the special constraint conditions of the entity; a represents the visual attribute of the entity, which is a function aggregate that determines the appearance of the entity and comprises an entity appearance structure function and a surface texture function; v represents the attribute value relative to the visual attribute, is the independent variable of a visual attribute function, and is a general term for the parameters causing changes in the entity's appearance attributes; s represents the behavior state of the entity in a scene and comprises only two states: static and dynamic; d represents another entity on which the entity depends in the scene and is a relation node of the scene information network;
the establishing of the visual attribute structure specifically comprises the expression of the entity visual attribute structure:
E(S'(x_1, x_2, …, x_n), T'(y_1, y_2, …, y_n), R'(z_1, z_2, …, z_n))
where S' represents the appearance structure mapping rule of the entity, and x_i ∈ S (i = 1, 2, …, n) represents a parameter influencing the entity's appearance structure; T' represents the mapping rule of the surface texture and illumination modification of the entity, and y_i ∈ T (i = 1, 2, …, n) represents a parameter affecting the entity's surface texture; R' represents a mapping rule that reshapes the entity's appearance after the shape structure and texture feature mapping rules are completed, and z_i ∈ A (i = 1, 2, …, n) represents parameters that constrain the appearance beyond structure and texture.
Further, in step 2), the classification of entities and the influence factors and constraint relations influencing the visualization of various types of entities specifically include: from the characteristics of human cognition and the attribute characteristics generated by visualization, the entities are classified, and the visualized entities are divided into: plants, animals, humans, artifacts and natural objects, and subclassing according to rules.
Further, in step 3), taking a plant as an example, a subclass visual attribute structure is obtained from the general form of the visual attribute structure in step 2) and the influence factors and constraint relations of entity visualization; the influence factors of a tree are known from the classification characteristics of plants, giving the visual attribute structure of the tree:
Tree(S'(height, size, species, age), T'(season, species, flowering, fruiting), R'(annual type, region, environment)).
Further, in the step 4), selecting a suitable appearance structure generation mode for the entities of different categories specifically includes:
the concept function of combined generation: S(x, y) = {s_1(x_1, y_1), s_2(x_2, y_2), …, s_n(x_n, y_n)}
where s_i(x_i, y_i) represents a unit structure function combining into the entire entity, x_i, y_i are the parameters influencing the unit structure function, and i denotes a unit structure number in the virtual view space (i = 1, 2, …, n).
Further, in the step 6), generating a concept function of the three-dimensional entity by using the entity influence factor information and the entity shape structure includes:
E = Σ_{i=1…n} s_i(x_i, y_i) ⊗ t_i
where Σ represents combining the unit entities, t_i is the three-dimensional texture data corresponding to the virtual unit entity, and ⊗ represents attaching the unit texture to the unit structure.
A system for three-dimensional materialization of an entity in scene transformation, comprising:
the text processing module is used for retrieving entities appearing in the text and influence factors describing the entities;
the entity classification module selects entity categories according to the entities searched out by the text and determines entity concept domains;
the visual decision analysis module generates a visual attribute structure of the category entity according to the entity classification information and judges a generation mode of an entity appearance structure aiming at the category entity;
the appearance structure generating module selects a required model from the model library according to the generating mode of the appearance structure of a certain entity and generates the appearance structure of the whole entity;
the surface texture generating module is used for generating surface texture required by the entity according to the entity influence factor and the reasoning information retrieved by the text;
and the entity generation module generates a three-dimensional entity through a mapping algorithm and a rendering mode according to the existing entity appearance structure and surface texture.
Further, the entity classification module is configured to classify the entities based on human cognitive characteristics and visually generated attribute characteristics, and classify the visualized entities into: plants, animals, humans, artifacts, and natural objects, and subclassing according to rules.
Further, the visualized decision analysis module includes: the entity visualization attribute structure generation module and the entity appearance structure judgment and generation module; the visual attribute structure generation module generates a visual attribute structure of a corresponding subclass according to the entity classification result, the influence factor of the subclass entity and the constraint relation;
the appearance structure judging and generating module selects an appearance structure generating mode according to appearance structure characteristics of the subclass entity, and if the appearance structure is a combined generating mode, the appearance structure is generated according to a concept function; s (x, y) { S ═ S1(x1,y1),s2(x2,y2),…,sn(xn,yn) In which s isi(xi,yi) Representing a unit structure function, x, combining the entire entityi,yiTo influence the parameters of the cell structure function, i denotes a cell structure number in the virtual view space (i ═ 1,2, …, n).
Further, the entity generation module extracts and infers an algorithm of an influence factor by using a visual attribute structure to obtain texture information, and finally combines the texture information with the appearance structure information according to a function and renders a three-dimensional entity;
E = Σ_{i=1…n} s_i(x_i, y_i) ⊗ t_i
where Σ represents combining the unit entities, t_i is the three-dimensional texture data corresponding to the virtual unit entity, and ⊗ represents attaching the unit texture to the unit structure.
The invention has the following advantages and beneficial effects:
the invention has the specific innovation that: an entity visual attribute structure of a perfect classification system is established, and meanwhile, a generation mode for determining an entity appearance structure and a surface texture generation module for determining the entity are provided according to different types of entities. By comparison, the existing scene conversion mode is limited to a single field, and all visual entities are not classified and corresponding visual attribute structures are not established, so that the method is more convenient for visualization of multi-field entities; the invention reduces the scale of the traditional model base through different entity generation modes; meanwhile, the surface texture generation module is more beneficial to the realization of entity diversification in various fields.
Drawings
FIG. 1 is a flow chart of a method for three-dimensional materialization of an entity in context transformation according to a preferred embodiment of the present invention;
FIG. 2 is a diagram of a system implementation of the three-dimensional materialization of an entity of the present application;
FIG. 3(a) is a three-dimensional outline structure of a trunk;
FIG. 3(b) is a three-dimensional outline structure diagram of a tree trunk containing branches;
FIG. 3(c) is a three-dimensional outline structure diagram of a tree trunk and branches with leaves;
fig. 3(d) is a three-dimensional entity diagram of the rendered tree.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
the invention discloses a flow chart of a method for three-dimensional materialization of an entity in scene conversion.
The invention provides a method for three-dimensional materialization of an entity in scene conversion, which comprises the following steps:
step one, establishing a general attribute structure of entity visualization;
classifying all entities, and analyzing influence factors and constraint relations of the entities of different classes;
step three, establishing a visual attribute structure of an adaptive subclass through entity classification and analysis;
selecting a proper appearance structure generation mode aiming at the entities of different types;
fifthly, extracting and reasoning influence factor information required by the visual attribute structure from the text;
and step six, generating a visual three-dimensional entity through the influence factor information and the appearance structure of the entity.
Step one of the method for three-dimensional materialization in scene conversion is as follows:
step 1.1: solid structure
Traditionally, the definition of an entity can be represented by a triple, such as: entity name (concept, attribute, attribute value). Taking into account the special situation of an entity in a scene, the triple form of the entity concept can be extended to a quintuple, such as: entity name (concept domain, visual attribute, attribute value, state form, dependency element).
The quintuple expression of the entity structure is as follows: e (c, a, v, s, d)
c represents the basic concept domain of the entity, and is the description of the basic category and the special constraint condition of the entity;
a represents the visualization attribute of the entity, and in the present document, the visualization attribute of the entity is a function aggregate which determines the appearance of the entity and comprises an entity appearance structure function and a surface texture function;
v represents an attribute value relative to the visual attribute, is an independent variable of a visual attribute function, and is a general term of a parameter causing the change of the appearance attribute of the entity;
s represents the behavior state of an entity in a scene, and only comprises two states: static and dynamic;
d represents another entity on which the entity depends in the scene, and is a relationship node of the scene information network.
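The quintuple E(c, a, v, s, d) described above can be sketched as a small data structure. The following is a minimal, illustrative Python sketch; the field names and the validation rule are assumptions for clarity, not part of the patent:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Entity:
    """Quintuple entity structure E(c, a, v, s, d)."""
    concept_domain: str                       # c: basic category and special constraints
    visual_attributes: Dict[str, Callable]    # a: appearance-structure and surface-texture functions
    attribute_values: Dict[str, object]       # v: arguments of the visual-attribute functions
    state: str = "static"                     # s: behavior state, only "static" or "dynamic"
    dependencies: List["Entity"] = field(default_factory=list)  # d: entities this one depends on

    def __post_init__(self):
        # The patent allows exactly two behavior states.
        if self.state not in ("static", "dynamic"):
            raise ValueError("state must be 'static' or 'dynamic'")

# Example: a tree entity with placeholder visual-attribute functions
tree = Entity(
    concept_domain="plant/tree",
    visual_attributes={"shape": lambda v: f"shape({v})", "texture": lambda v: f"texture({v})"},
    attribute_values={"height": 12.0, "season": "autumn"},
)
print(tree.state)  # static
```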
Step 1.2: visual attribute structure
The expression form of the entity visualization attribute structure is as follows:
E(S'(x_1, x_2, …, x_n), T'(y_1, y_2, …, y_n), R'(z_1, z_2, …, z_n))
S' represents the appearance structure mapping rule of the entity; x_i ∈ S (i = 1, 2, …, n) represents a parameter influencing the entity's appearance structure;
T' represents the mapping rule of the surface texture and illumination modification of the entity; y_i ∈ T (i = 1, 2, …, n) represents a parameter affecting the entity's surface texture;
R' represents a mapping rule that reshapes the entity's appearance after the shape structure and texture feature mapping rules are completed; z_i ∈ A (i = 1, 2, …, n) represents parameters that constrain the appearance beyond structure and texture.
Here V is a finite variable set that affects the appearance structure, surface texture and visual appearance of the entity visualization, called the entity attribute variable set; S is a finite structure variable set with S ⊆ V; T is a finite texture variable set with T ⊆ V; and A is a finite appearance variable set with A ⊆ V.
the method for three-dimensional materialization in the context conversion comprises the following steps:
the purpose of entity classification is to determine the concept domain of an entity, thereby narrowing the retrieval range of information required in the entity generation process and more effectively improving the entity generation efficiency. The invention classifies the entities based on the cognitive characteristics of human beings and the attribute characteristics generated by visualization, and divides the visualized entities into: plants, animals, humans, artifacts and natural objects, and subclassing according to the following rules:
first, there is a detailed reference for the subclassification of plants and animals in ecology. Meanwhile, plants and animals are natural entities with a growth rule capable of being followed, and the appearance structure and the surface texture are influenced by limited constraint factors, so that the generation of the entities is controlled conveniently.
Second, as humans are higher animals, the growth rule of their shape structure can naturally be followed, but their living environment is complex, so the surface texture is influenced by many intersecting factors, making entity generation more complex than for plants and animals.
Third, artificial products are indispensable entities in the human living environment. They differ in type because of different functions and purposes, but their general characteristic is that the overall appearance of a functional entity is fixed while its appearance structure and surface texture are polymorphic and present artistic characteristics.
Fourth, natural entities exist objectively; their appearance structure does not change with external changes, and most surface textures are unchanged, with only a few entities' surface textures influenced by natural factors.
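The five top-level categories above can be sketched as an enumeration together with a toy classification step. The keyword table below is purely illustrative; the patent classifies entities from their concept domain, not by a name lookup:

```python
from enum import Enum

class EntityCategory(Enum):
    PLANT = "plant"
    ANIMAL = "animal"
    HUMAN = "human"
    ARTIFACT = "artifact"
    NATURAL_OBJECT = "natural object"

# Hypothetical keyword table standing in for the classification step.
_KEYWORDS = {
    "tree": EntityCategory.PLANT,
    "dog": EntityCategory.ANIMAL,
    "teacher": EntityCategory.HUMAN,
    "table": EntityCategory.ARTIFACT,
    "mountain": EntityCategory.NATURAL_OBJECT,
}

def classify(entity_name: str) -> EntityCategory:
    """Determine the concept domain of an entity, narrowing later retrieval."""
    try:
        return _KEYWORDS[entity_name]
    except KeyError:
        raise ValueError(f"unknown entity: {entity_name}")

print(classify("tree"))  # EntityCategory.PLANT
```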
Step four of the method for three-dimensional materialization in scene conversion is as follows:
in order to adapt to the rationality of the three-dimensional entity appearance structure, the generation mode of the entity appearance structure is divided into a combined mode and an integrated mode. The combination type is that an entity is regarded as a combination of a limited unit structure, when a child class inherits a parent class entity, the unit entities are reasonably changed and combined into a new child class entity. The generation method is as follows:
E = Σ_{i=1…n} s_i(x_i, y_i) ⊗ t_i
where Σ represents combining the unit entities, t_i is the three-dimensional texture data corresponding to the virtual unit entity, and ⊗ represents attaching the unit texture to the unit structure.
The integration is to regard the appearance structure of the entity as a whole, and only the whole appearance structure and the whole texture of the entity are needed when the three-dimensional entity is generated. The generation method is as follows:
E = s(x, y) ⊗ t
where t is the three-dimensional texture data of the entity, s(x, y) represents the overall appearance structure of the entity, and ⊗ represents the fit of the surface texture to the overall appearance structure.
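The two generation modes can be sketched side by side. In this illustrative sketch the texture-attachment operator is modeled by a hypothetical `attach` helper, and structures and textures are stood in by strings:

```python
from typing import Callable, List, Tuple

# A unit entity: an appearance-structure function s_i paired with texture data t_i.
Unit = Tuple[Callable[[float, float], str], str]

def attach(structure: str, texture: str) -> str:
    """Stand-in for the attachment operator: fit a texture onto a structure."""
    return f"{structure}+{texture}"

def generate_combined(units: List[Unit], params: List[Tuple[float, float]]) -> List[str]:
    """Combined mode: combine the unit entities, attaching each unit texture t_i
    to its unit structure s_i(x_i, y_i)."""
    return [attach(s(x, y), t) for (s, t), (x, y) in zip(units, params)]

def generate_integrated(s: Callable[[float, float], str], t: str, x: float, y: float) -> str:
    """Integrated mode: treat the entity as one whole structure with one texture."""
    return attach(s(x, y), t)

trunk = (lambda x, y: f"trunk({x},{y})", "bark")
leaf = (lambda x, y: f"leaf({x},{y})", "green")
print(generate_combined([trunk, leaf], [(1.0, 2.0), (0.1, 0.2)]))
print(generate_integrated(lambda x, y: f"rock({x},{y})", "granite", 3.0, 4.0))
```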
A system for three-dimensional materialization of an entity in scene transformation, comprising:
the text processing module is used for retrieving entities appearing in the text and influence factors describing the entities;
the entity classification module selects entity categories according to the entities searched by the text, so that the entity concept domain can be conveniently determined, the search range of information required in the entity generation process is reduced, and the entity generation efficiency is more effectively improved;
the visual decision analysis module generates a visual attribute structure of the category entity according to the entity classification information and judges a generation mode of an entity appearance structure aiming at the category entity;
the appearance structure generating module selects a required model from the model library according to the generating mode of the appearance structure of a certain entity and generates the appearance structure of the whole entity;
the surface texture generating module is used for generating surface texture required by the entity according to the entity influence factor and the reasoning information retrieved by the text;
and the entity generation module generates a three-dimensional entity through a mapping algorithm and a rendering mode according to the existing entity appearance structure and surface texture.
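The six modules above can be sketched as one pipeline. Every helper in this sketch is a hypothetical stand-in so the flow runs end to end; none of the function names or return values come from the patent:

```python
def materialize(text: str) -> str:
    """Illustrative end-to-end pipeline mirroring the six modules."""
    entities = extract_entities(text)                           # text processing module
    results = []
    for name, factors in entities:
        category = classify_entity(name)                        # entity classification module
        attr_struct = build_attr_structure(category, factors)   # visual decision analysis module
        shape = build_shape(attr_struct)                        # appearance structure generation module
        texture = build_texture(attr_struct)                    # surface texture generation module
        results.append(render(shape, texture))                  # entity generation module
    return "; ".join(results)

# Minimal stand-ins so the pipeline runs
def extract_entities(text): return [("tree", {"season": "spring"})]
def classify_entity(name): return "plant"
def build_attr_structure(cat, factors): return {"category": cat, **factors}
def build_shape(a): return f"shape[{a['category']}]"
def build_texture(a): return f"texture[{a.get('season', 'default')}]"
def render(shape, texture): return f"{shape}+{texture}"

print(materialize("A tree in spring."))  # shape[plant]+texture[spring]
```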
For example, the three-dimensional materialization of the entity "tree" mentioned in the text proceeds as follows:
s1: establishing a general visual attribute structure of an entity;
according to the basic definition of the visualization attribute structure, the expression form of the entity visualization attribute structure is as follows:
E(S'(x_1, x_2, …, x_n), T'(y_1, y_2, …, y_n), R'(z_1, z_2, …, z_n))
S' represents the appearance structure mapping rule of the entity; x_i ∈ S (i = 1, 2, …, n) represents a parameter influencing the entity's appearance structure;
T' represents the mapping rule of the surface texture and illumination modification of the entity; y_i ∈ T (i = 1, 2, …, n) represents a parameter affecting the entity's surface texture;
R' represents a mapping rule that reshapes the entity's appearance after the shape structure and texture feature mapping rules are completed; z_i ∈ A (i = 1, 2, …, n) represents parameters that constrain the appearance beyond structure and texture.
Here V is a finite variable set that affects the appearance structure, surface texture and visual appearance of the entity visualization, called the entity attribute variable set; S is a finite structure variable set with S ⊆ V; T is a finite texture variable set with T ⊆ V; and A is a finite appearance variable set with A ⊆ V.
s2: classifying the visualized entities, and analyzing influence factors and constraint relations influencing entity visualization;
in ecology, the appearance, life habit, growing environment, type of annual growth, regional distribution and the like of plants and animals are described and studied in detail. The plants in different classes have obviously different shapes, and the shapes of the plants in the same class and different types also have detail differences. The shape structure of the annual plants does not change basically with age, the shape structure of non-annual plants may grow with age, and the shape structure of most animals grows with age. However, compared with plants, the animal has flexible postures, so that the appearance structure can be expressed more variously. The texture of substantially all plants is affected by the season, and some plants are accompanied by seasonal flowering fruits, while the texture of animals is only species dependent. Tables 1 and 2 show the classification of various types of plants and animals and the factors that externally affect their shape structure and surface texture, respectively.
TABLE 1 classification of plants and corresponding influencing factors
(Table 1 is reproduced as an image in the original publication.)
TABLE 2 Classification of animals and corresponding influencing factors
(Table 2 is reproduced as an image in the original publication.)
In particular, humans are higher animals whose appearance structure naturally changes with age; unlike ordinary animals, however, human actions are more flexible and diversified, so the human appearance structure is more polymorphic, and the appearance structures of different people also differ markedly. Although a person's autonomy is relatively strong, the basic types of posture are limited; generally, people's shape structures are similar, and the key parameters of the shape structure are similar within a certain age range, which facilitates generation of the shape structure. However, since the environments a person is exposed to are diverse, the surface texture is directly affected by many factors, such as race, age, region, nationality, occupation, season, clothing, hairstyle and era; the detailed effects are shown in Table 3.
TABLE 3 factors affecting human appearance
(Table 3 is reproduced as an image in the original publication.)
Most entities in the artificial environment are non-living and are classified into different categories by function and purpose. Although the appearances of entities in different categories vary mainly with purpose and human design, in general a functional entity's overall appearance is fixed while its appearance structure is polymorphic. According to the standards of various industries, the specifications and models of each entity are fixed and can be looked up, which facilitates entity generation.
TABLE 4 Classification of artifacts and corresponding impact factors
(Table 4 is reproduced as an image in the original publication.)
S3: establishing a subclass entity visual attribute structure;
based on the classification and the influence factor analysis, the visual attribute structure of a subclass of entities can be known, for example, the visual attribute structure of a tree is
Tree (S ' (height, size, species, age), T ' (season, species, flowering, fruiting), R ' (annual type, region, environment))
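The tree's visual attribute structure can be written down directly as data grouped by the three mapping rules S', T', R'. The concrete values below are illustrative, not taken from the patent:

```python
# Influence factors grouped as in Tree(S', T', R'); values are illustrative.
tree_attr_structure = {
    "S": {"height": 12.0, "size": "medium", "species": "oak", "age": 30},
    "T": {"season": "autumn", "species": "oak", "flowering": False, "fruiting": True},
    "R": {"annual_type": "perennial", "region": "temperate", "environment": "forest"},
}

def structure_params(attr):
    """Parameters feeding the appearance-structure mapping rule S'."""
    return attr["S"]

print(sorted(structure_params(tree_attr_structure)))  # ['age', 'height', 'size', 'species']
```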
S4: selecting a generation mode of an entity appearance structure and generating an entity model;
the generation mode of the tree shape knot type obtained by judgment is a combined type and needs to be formed by combining a trunk, primary branches, secondary branches and leaves, as shown in fig. 3(a), 3(b) and 3 (c).
S5: reasoning influence factors to generate surface textures;
deducing the influence factor information from the text; after the influence factors are determined, querying the actual textures according to this information and generating the texture information; where information is missing, adopting default values and generating the texture information accordingly.
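The default-value fallback described in step S5 can be sketched as a dictionary merge. The `DEFAULTS` table here is an assumption for illustration only:

```python
# Hypothetical default values for a tree's texture influence factors.
DEFAULTS = {"season": "summer", "species": "generic", "flowering": False, "fruiting": False}

def infer_texture_factors(extracted: dict) -> dict:
    """Fill any influence factor missing from the text with its default value."""
    return {key: extracted.get(key, fallback) for key, fallback in DEFAULTS.items()}

# Only the season was found in the text; the rest fall back to defaults.
print(infer_texture_factors({"season": "winter"}))
```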
S6: the surface texture and the shape structure are combined and rendered to generate a solid body.
The texture information is combined with the tree shape structure to generate a three-dimensional entity through rendering, as shown in fig. 3 (d).
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.

Claims (6)

1. A method for three-dimensional materialization of an entity in scene conversion is characterized by comprising the following steps:
1) establishing a general attribute structure of entity visualization, comprising: establishing a quintuple entity structure; establishing a visual attribute structure;
2) classifying all entities, and analyzing influence factors and constraint relations of the entities of different classes;
3) establishing a visual attribute structure of an adaptive subclass through entity classification and analysis;
4) selecting a shape structure generation mode which accords with the determined entity aiming at the entities of different types;
5) extracting and reasoning influence factor information required by the visual attribute structure from the Chinese natural language text;
6) generating a visual three-dimensional entity through the influence factor information and the appearance structure of the entity;
in the step 4), selecting a shape structure generation mode which accords with the determined entity aiming at the entities of different types;
concept function of combined generation: S(x, y) = {s_1(x_1, y_1), s_2(x_2, y_2), …, s_n(x_n, y_n)}
where s_i(x_i, y_i) represents a unit structure function combined into the entire entity, x_i, y_i are parameters affecting the unit structure function, and i denotes a unit structure number in the virtual view space (i = 1, 2, …, n);
in the step 6), generating a visualized three-dimensional entity through the influence factor information and the appearance structure of the entity specifically includes: determining three-dimensional texture data according to the entity influence factor information, and generating the three-dimensional entity by the following concept function of the entity influence factor information and the entity appearance structure:
E = Σ s_i(x_i, y_i) ⊗ t_i
where Σ represents combining the unit entities, t_i is the three-dimensional texture data corresponding to the virtual unit entity, and ⊗ represents attaching the unit texture to the unit structure.
2. The method for three-dimensional materialization of an entity in scene conversion according to claim 1, wherein
the step 1) of establishing the quintuple entity structure comprises the following steps: the triple form of the entity concept is extended to the quintuple, the entity name (concept domain, attribute value, state form, dependency element), the quintuple expression of the entity structure: e (c, a, v, s, d), wherein c represents the basic concept domain of the entity and is the description of the basic category and the special constraint condition of the entity; a represents the visual attribute of the entity, the visual attribute of the entity is a function aggregate, determines the appearance of the entity and comprises an entity appearance structure function and a surface texture function; v represents an attribute value relative to the visual attribute, is an independent variable of a visual attribute function, and is a general term of a parameter causing the change of the appearance attribute of the entity; s represents the behavior state of an entity in a scene, and only comprises two states: static and dynamic; d represents another entity on which the entity depends in the scene, and is a relation node of a scene information network;
the establishing of the visual attribute structure specifically comprises: expression of entity visualization attribute structure:
E(S'(x1,x2,…,xn),T'(y1,y2,…,yn),R'(z1,z2,…,zn))
S' represents the appearance structure mapping rule of the entity; x_i ∈ S (i = 1, 2, …, n) represents a parameter influencing the entity appearance structure; T' represents the mapping rule of the surface texture and illumination modification of the entity; y_i ∈ T (i = 1, 2, …, n) represents a parameter affecting the entity surface texture; R' represents a mapping rule that reshapes the entity appearance after the mapping rules for the shape structure and the texture features are completed; z_i ∈ A (i = 1, 2, …, n) represents a parameter that constrains appearance other than shape and texture; S is a finite set of structure variables, with S ⊆ V; T is a finite set of texture variables, with T ⊆ V; A is a finite set of appearance variables, with A ⊆ V; V is the set of entity attribute variables.
3. The method for three-dimensional materialization of an entity in scene conversion according to claim 1, wherein in the step 2), all entities are classified, and the influence factors and constraint relations of entities of different classes are analyzed, which specifically comprises: starting from the characteristics of human cognition and the attribute characteristics of visual generation, the entities are classified; the visualized entities are divided into plants, animals, people, artifacts, and natural entities of unchanging shape and structure, and are further divided into subclasses according to rules.
4. The method for three-dimensional materialization of an entity in scene conversion according to claim 3, wherein in the step 3), when the entity is a plant, the subclass visual attribute structure is obtained according to the general form of the visual attribute structure in the step 1) and the influence factors and constraint relations of entity visualization in the step 2); the influence factors of the tree are known from the classification features of plants, and the visual attribute structure of the tree is obtained:
tree (S ' (height, size of colony, species, age), T ' (season, species, flowering, fruiting), R ' (annual type, region, environment)).
5. A system for three-dimensional materialization of an entity in scene conversion is characterized by comprising the following components:
the text processing module is used for retrieving entities appearing in the text and influence factors describing the entities;
the entity classification module selects entity categories according to the entities searched out by the text and determines entity concept domains;
the visual decision analysis module generates a visual attribute structure of the category entity according to the entity classification information and judges a generation mode of an entity appearance structure aiming at the category entity;
the appearance structure generating module selects a required model from the model library according to the generating mode of the appearance structure of a certain entity and generates the appearance structure of the whole entity;
the surface texture generating module is used for generating surface texture required by the entity according to the entity influence factor and the reasoning information retrieved by the text;
the entity generation module generates a three-dimensional entity through a mapping algorithm and a rendering mode according to the existing entity appearance structure and surface texture;
the visual decision analysis module comprises: the entity visualization attribute structure generation module and the entity appearance structure judgment and generation module; the visual attribute structure generation module generates a visual attribute structure of a corresponding subclass according to the entity classification result, the influence factor of the subclass entity and the constraint relation;
the appearance structure judging and generating module selects an appearance structure generation mode according to the appearance structure characteristics of the subclass entity; if the combined generation mode is selected, the appearance structure is generated according to the concept function S(x, y) = {s_1(x_1, y_1), s_2(x_2, y_2), …, s_n(x_n, y_n)}, where s_i(x_i, y_i) represents a unit structure function combined into the entire entity, x_i, y_i are parameters affecting the unit structure function, and i denotes a unit structure number in the virtual view space (i = 1, 2, …, n);
the entity generation module utilizes an algorithm of visual attribute structure extraction and inference influence factors to obtain texture information, and finally combines the texture information with the appearance structure information according to a function and renders a three-dimensional entity;
E = Σ s_i(x_i, y_i) ⊗ t_i
where Σ represents combining the unit entities, t_i is the three-dimensional texture data corresponding to the virtual unit entity, and ⊗ represents attaching the unit texture to the unit structure.
6. The system as claimed in claim 5, wherein the entity classification module is configured to classify the entities based on human cognitive features and visually generated attribute features, and classify the visualized entities into: plants, animals, people, artifacts and natural entities with unchanged shape and structure are classified into subclasses according to rules.
CN201710358187.6A 2017-05-19 2017-05-19 Method and system for three-dimensional materialization of entity in scene conversion Active CN107220321B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710358187.6A CN107220321B (en) 2017-05-19 2017-05-19 Method and system for three-dimensional materialization of entity in scene conversion


Publications (2)

Publication Number Publication Date
CN107220321A CN107220321A (en) 2017-09-29
CN107220321B true CN107220321B (en) 2021-02-09

Family

ID=59944322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710358187.6A Active CN107220321B (en) 2017-05-19 2017-05-19 Method and system for three-dimensional materialization of entity in scene conversion

Country Status (1)

Country Link
CN (1) CN107220321B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108482B (en) * 2018-01-05 2022-02-11 重庆邮电大学 Method for realizing scene reality enhancement in scene conversion
CN110688483B (en) * 2019-09-16 2022-10-18 重庆邮电大学 Dictionary-based noun visibility labeling method, medium and system in context conversion
CN111708892B (en) * 2020-04-24 2021-08-03 陆洋 Database system based on depth knowledge graph

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102203781A (en) * 2008-10-14 2011-09-28 Cct国际股份有限公司 System and method for hybrid solid and surface modeling for computer-aided design environments
CN104063466A (en) * 2014-06-27 2014-09-24 深圳先进技术研究院 Virtuality-reality integrated three-dimensional display method and virtuality-reality integrated three-dimensional display system
CN105427373A (en) * 2015-10-30 2016-03-23 上海交通大学 Three-dimensional scene cooperative construction system based on three-layer body, and realization method thereof
CN106599493A (en) * 2016-12-19 2017-04-26 重庆市勘测院 Visual implementation method of BIM model in three-dimensional large scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7593940B2 (en) * 2006-05-26 2009-09-22 International Business Machines Corporation System and method for creation, representation, and delivery of document corpus entity co-occurrence information


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Animation Scene Generation Based on Fuzzy Semantics"; Zhou Sen; Wanfang Data Knowledge Service Platform; 2014-01-22; pp. 22-46 *
"A Natural Language Expression Model of Spatial Information"; Du Qingyun; Geomatics and Information Science of Wuhan University; 2014-06-30; pp. 682-688 *


Similar Documents

Publication Publication Date Title
Huang et al. A machine-learning approach to automated knowledge-base building for remote sensing image analysis with GIS data
Hodder This is not an article about material culture as text
Kwasnik The role of classification in knowledge representation and discovery
CN107220321B (en) Method and system for three-dimensional materialization of entity in scene conversion
Zhang et al. A novel automatic image segmentation method for Chinese literati paintings using multi-view fuzzy clustering technology
CN110147483A (en) A kind of title method for reconstructing and device
CN110246011A (en) Interpretable fashion clothing personalized recommendation method
CN110097609A (en) A kind of fining embroidery texture moving method based on sample territory
CN109947916A (en) Question answering system device and answering method based on meteorological field knowledge mapping
CN104881852B (en) Image partition method based on immune clone and fuzzy kernel clustering
CN108197180A (en) A kind of method of the editable image of clothing retrieval of clothes attribute
Weng et al. Data augmentation computing model based on generative adversarial network
CN116266251A (en) Sketch generation countermeasure network, rendering generation countermeasure network and clothes design method thereof
Fukuda et al. Perceptional retrieving method for distributed design image database system
Qiang et al. Application of visualization technology in spatial data mining
CN105740360B (en) Method for identifying and searching classical titles in artwork images
Lauzzana et al. A rule system for analysis in the visual arts
CN115374290A (en) Retrieval method and device for scientific cultivation and maintenance knowledge of flowers
Biasotti Reeb graph representation of surfaces with boundary
CN114065359A (en) Decoration design generation method and device, electronic equipment and storage medium
Tripathi et al. Facial expression recognition using data mining algorithm
Wei et al. Using hybrid knowledge engineering and image processing in color virtual restoration of ancient murals
Koenderink et al. Supporting knowledge-intensive inspection tasks with application ontologies
Liu Application of modern urban landscape design based on machine learning model to generate plant landscaping
Yue et al. Research on rural landscape spatial information recording and protection based on 3D point cloud technology under the background of internet of things

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant