CN112802198A - Two-dimensional image three-dimensional interaction design method, terminal and storage device - Google Patents


Info

Publication number
CN112802198A
CN112802198A (application CN202011563901.3A)
Authority
CN
China
Prior art keywords
model
dimensional image
dimensional
display position
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011563901.3A
Other languages
Chinese (zh)
Inventor
薛冠衡 (Xue Guanheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Oushennuo Yunshang Technology Co ltd
Original Assignee
Foshan Oushennuo Yunshang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Oushennuo Yunshang Technology Co ltd filed Critical Foshan Oushennuo Yunshang Technology Co ltd
Priority to CN202011563901.3A
Publication of CN112802198A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/04Architectural design, interior design

Abstract

The invention provides a two-dimensional image three-dimensional interaction design method, a terminal and a storage device. The method comprises the following steps. S101: generating a three-dimensional image of an object from the vertices of a cross-sectional view. S102: judging from an operation instruction whether a model event is triggered; if so, executing S103, and if not, executing S105. S103: determining the display position of the model from the movement marker. S104: obtaining the distance between the model and the three-dimensional image from the display position, snapping ("adsorbing") the model onto the three-dimensional image based on that distance, and three-dimensionalizing the model according to its vertices. S105: reading the operation instruction and executing the corresponding operation. The invention displays the object three-dimensionally, letting the user intuitively understand its three-dimensional structure and see the model's real-time position; it reduces the demand on spatial imagination, avoids placement errors through automatic snapping, reduces the number of modifications, simplifies the design process and improves design efficiency.

Description

Two-dimensional image three-dimensional interaction design method, terminal and storage device
Technical Field
The invention relates to the field of two-dimensional image processing, in particular to a three-dimensional interaction design method for a two-dimensional image, a terminal and a storage device.
Background
For architectural or interior design schemes, three simple two-dimensional views are generally used to represent the three-dimensional structure of a real object: the top view, the side view and the front view. Each view shows the object from a different angle.
Existing software tools for the architectural and interior design industries all present the different angles of a building structure in this three-view mode. The user switches repeatedly among the top, side and front views through the software interface and modifies and edits the structure many times before obtaining the expected result.
However, because the three views are separate and independent, the user cannot intuitively grasp the overall three-dimensional structure. When adding a new structure to an existing one, the user must work out the insertion position from a mental picture of the three-dimensional structure, which demands strong spatial imagination. Every addition requires all three views to be modified, so the user must switch frequently among the top-view, side-view and front-view interfaces, which is very cumbersome. Moreover, a newly added structure must be dragged onto the existing structure manually: the model's real-time position cannot be seen intuitively while it is being moved, and placing it on the image requires moving it precisely to the target position. Because the demanded positional accuracy is high, users often misplace the model and need several attempts, which lowers design efficiency.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a two-dimensional image three-dimensional interactive design method, a terminal and a storage device. A three-dimensional image of the object is generated from the vertices of a cross-sectional view; once a model event is determined to be triggered, the display position of the model is determined from the movement marker, and the model is snapped onto the three-dimensional image according to the distance between the display position and the image. The object can therefore be displayed three-dimensionally, so the user can intuitively understand its three-dimensional structure and see the model's real-time position. This reduces the demand on spatial imagination, avoids placement errors through automatic snapping, reduces the number of modifications, simplifies the design process and improves design efficiency.
To solve the above problems, the invention adopts the following technical solution: a two-dimensional image three-dimensional interaction design method applied to architectural design, comprising the following steps. S101: acquiring a cross-sectional view of an object and generating a three-dimensional image of the object from the vertices of the cross-sectional view. S102: receiving an input operation instruction and judging from it whether a model event is triggered; if so, executing S103, otherwise executing S105. S103: obtaining the model to be operated on from the operation instruction and determining the display position of the model from the movement marker. S104: obtaining the distance between the model and the three-dimensional image from the display position, snapping the model onto the three-dimensional image according to that distance, and three-dimensionalizing the model according to its vertices. S105: reading the operation instruction and executing the corresponding operation.
Further, the step of generating a three-dimensional image of the object from the cross-sectional view specifically includes: acquiring the vertices of the cross-sectional view, adding to each vertex a first preset coordinate in the direction perpendicular to the cross-sectional view, and drawing the three-dimensional image of the object from the first preset coordinates and the original coordinates of the vertices.
Further, the operation instruction is input through any one of a mouse, a touch pad, a touch screen, and a drawing pad.
Further, the step of judging from the operation instruction whether a model event is triggered specifically includes: acquiring the operation information corresponding to the operation instruction and judging whether the operation information is a move-model operation; if so, determining that a model event is triggered; if not, determining that no model event is triggered.
Further, the step of determining the display position of the model from the movement marker specifically includes: acquiring, at a preset frequency, the position of the movement marker and displaying the model with that position as its display position.
Further, the step of obtaining the distance between the model and the three-dimensional image from the display position and snapping the model onto the three-dimensional image according to that distance specifically includes: acquiring a target position on the three-dimensional image and judging whether the distance between the target position and the display position of the model is smaller than a preset value; if so, attaching the model to the target position; if not, continuing to determine the display position from the movement marker.
Further, before the step of acquiring the target position of the three-dimensional image and judging whether the distance between the target position and the display position of the model is smaller than a preset value, the method further includes: judging from the display position whether the model has entered a trigger region; if so, finding the three-dimensional image corresponding to that trigger region; if not, continuing to determine the display position of the model from the input instruction.
Further, after the step of three-dimensionalizing the model according to its vertices, the method further comprises: determining the selected target model according to an input selection instruction, displaying the rendering information of the target model, and modifying the rendering information according to the input instruction.
Based on the same inventive concept, the invention further provides an intelligent terminal comprising a processor and a memory; the memory stores a computer program, and the processor executes, according to the computer program, the two-dimensional image three-dimensional interaction design method applied to architectural design.
Based on the same inventive concept, the invention further provides a storage device, which stores program data used for realizing the two-dimensional image three-dimensional interactive design method applied to architectural design.
Compared with the prior art, the invention has the beneficial effects that: a three-dimensional image of the object is generated from the vertices of a cross-sectional view; after a model event is determined to be triggered, the display position of the model is determined from the movement marker, and the model is snapped according to the distance between the display position and the three-dimensional image. The object is thus displayed three-dimensionally, letting the user intuitively understand its three-dimensional structure and see the model's real-time position, reducing the demand on spatial imagination, avoiding placement errors through automatic snapping, reducing the number of modifications, simplifying the design process and improving design efficiency.
Drawings
FIG. 1 is a flow chart of an embodiment of a two-dimensional image three-dimensional interactive design method applied to architectural design according to the present invention;
FIG. 2 is a flowchart illustrating user operations of an embodiment of a two-dimensional image stereo interactive design method applied to architectural design according to the present invention;
FIG. 3 is a block diagram of an embodiment of an intelligent terminal according to the present invention;
FIG. 4 is a block diagram of a memory device according to an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and the detailed description, and it should be noted that any combination of the embodiments or technical features described below can be used to form a new embodiment without conflict.
Referring to fig. 1-2, fig. 1 is a flow chart of an embodiment of a two-dimensional image three-dimensional interactive design method applied to architectural design according to the present invention; fig. 2 is a flowchart of user operations in an embodiment of a two-dimensional image three-dimensional interactive design method applied to architectural design in the present invention. The two-dimensional image three-dimensional interactive design method applied to architectural design of the invention is explained in detail with reference to the attached drawings 1-2.
In this embodiment, the two-dimensional image stereo interactive design method applied to architectural design includes:
s101: a cross-sectional view of the object is acquired, and a three-dimensional image of the object is generated from the vertices of the cross-sectional view.
In this embodiment, the cross-sectional view is a transverse cross-sectional view of the object, and in other embodiments, the cross-sectional view may be a longitudinal cross-sectional view, an oblique cross-sectional view, or a cross-sectional view in other directions of the object, as long as the object has a symmetry plane or a symmetry line in a direction perpendicular to the cross-sectional view.
In this embodiment, the device for executing the three-dimensional interactive design method of the two-dimensional image may be a mobile phone, a computer, a cloud, a server, or other intelligent terminals capable of receiving an instruction, generating a three-dimensional image according to the instruction, and adding a model.
In this embodiment, the step of generating a three-dimensional image of the object from the cross-sectional view specifically includes: acquiring the vertices of the cross-sectional view, adding to each vertex a first preset coordinate in the direction perpendicular to the cross-sectional view, and drawing the three-dimensional image of the object from the first preset coordinates and the original coordinates of the vertices.
In this embodiment, the first preset coordinates corresponding to each vertex are the same, and in other embodiments, the first preset coordinates corresponding to different vertices may also be different, which is not limited herein.
In this embodiment, the first preset coordinate for each vertex may be determined from information such as the shape, type and number of the cross-sectional views. Alternatively, the object corresponding to the cross-sectional view may first be identified from that information; the first preset coordinate of each point on the cross-sectional view is then determined by combining the object's shape with the position of the cross-section, and the three-dimensional image is formed from the cross-sectional view and the first preset coordinates.
In other embodiments, the coordinate of each point in the direction perpendicular to the cross-sectional view can also be determined according to the position relationship between other points on the cross-sectional view and the fixed point. The position relationship may be preset, or may be obtained by searching for a figure similar to the cross-sectional view and associating the three-dimensional image.
In a specific embodiment, the cross-sectional view is a section of the suspended ceiling on the two-dimensional plane formed by the X and Y axes of a three-dimensional coordinate system (X, Y, Z). A fixed longitudinal coordinate value (in the Z-axis direction) is added to the coordinates of all vertices of the section, and the three-dimensional image is generated from the new vertex coordinates.
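This vertex-extrusion step can be sketched as follows. A minimal illustration only: the function name and the sample vertices are assumptions for demonstration, not taken from the patent.

```python
def extrude_cross_section(vertices_2d, preset_z):
    """Extrude a 2D cross-section into a simple 3D solid by giving every
    vertex a fixed Z value (the "first preset coordinate").

    vertices_2d: list of (x, y) tuples on the X-Y plane.
    preset_z:    fixed coordinate added perpendicular to the section.
    """
    front = [(x, y, 0.0) for x, y in vertices_2d]       # original section plane
    back = [(x, y, preset_z) for x, y in vertices_2d]   # offset copy of the section
    return front, back

# A section with vertices A, B, C gains a constant Z on the extruded face.
front, back = extrude_cross_section([(0, 0), (4, 0), (4, 2)], preset_z=2.5)
print(back)  # [(0, 0, 2.5), (4, 0, 2.5), (4, 2, 2.5)]
```

A renderer would then connect corresponding front/back vertices to form the side faces of the solid.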
In this embodiment, for convenience of viewing by a user, after the three-dimensional image is generated, a display surface or a placement mode of the three-dimensional image may be adjusted according to the position of the first preset coordinate, so that the user can view an overall structure of the three-dimensional image.
In this embodiment, the cross-sectional view is a section of a suspended ceiling, the generated three-dimensional image is the suspended ceiling, and the model attached to it is a corner-line (molding) model. In other embodiments the cross-sectional view may instead depict walls, load-bearing columns, ceilings or other building structures, and likewise the model attached to the three-dimensional image may be a tile, ornament, door, window, ceiling lamp or other model associated with the generated three-dimensional image.
S102: receiving an input operation instruction, judging whether a model event is triggered or not according to the operation instruction, if so, executing S103, and if not, executing S105.
In the present embodiment, the operation instruction is input by any one of a mouse, a touch panel, a touch screen, and a drawing board.
In this embodiment, the step of judging from the operation instruction whether a model event is triggered specifically includes: acquiring the operation information corresponding to the operation instruction and judging whether the operation information is a move-model operation; if so, determining that a model event is triggered; if not, determining that no model event is triggered.
In the present embodiment, the model is a two-dimensional image; to help the user understand the model's three-dimensional structure, the model's shape in the direction perpendicular to that image is expressed as grayscale on the model's image.
In a specific embodiment, the model event is clicking the model: the user presses the mouse button on the model and drags it, and when this action is detected the model event is determined to be triggered.
In other embodiments, the model event may be triggered by inputting a command to move the model by pressing a key, clicking a button to move the model, inputting voice, and other means.
S103: obtaining the model to be operated on from the operation instruction, and determining the display position of the model from the movement marker.
In this embodiment, to help the user follow the model's movement and to move the model quickly, the step of determining the display position of the model from the movement marker specifically includes: acquiring, at a preset frequency, the position of the movement marker and displaying the model with that position as its display position.
In this embodiment, the preset frequency is 30 times per second, and in other embodiments, the preset frequency may also be 20, 25 or other numbers per second, which is not limited herein.
In a specific embodiment, the movement marker is the mouse arrow shown on the display screen for the mouse. The position of the mouse arrow is read 30 times per second and assigned to the model as its display position, so that the model follows the mouse arrow in the picture shown on the display screen.
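One tick of this display-position update can be sketched as follows. The class, function and cursor values are illustrative assumptions; a real implementation would run this inside the UI event loop at the 30 Hz preset frequency.

```python
class Model:
    """Illustrative stand-in for a draggable 2D model."""
    def __init__(self):
        self.display_position = (0.0, 0.0)

def update_display_position(model, read_cursor):
    """One tick of the update loop: read the movement marker's position
    and assign it to the model as its display position."""
    model.display_position = read_cursor()
    return model.display_position

model = Model()
# read_cursor is mocked here; in practice it would query the mouse arrow.
pos = update_display_position(model, lambda: (120.0, 45.0))
print(pos)  # (120.0, 45.0)
```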
S104: obtaining the distance between the model and the three-dimensional image from the display position, snapping the model onto the three-dimensional image according to that distance, and three-dimensionalizing the model according to its vertices.
In this embodiment, the step of obtaining the distance between the model and the three-dimensional image from the display position and snapping the model onto the three-dimensional image according to that distance specifically includes: acquiring a target position on the three-dimensional image and judging whether the distance between the target position and the display position of the model is smaller than a preset value; if so, attaching the model to the target position; if not, continuing to determine the display position from the movement marker.
In this embodiment, the center points of the display position and of the target position may be computed, and whether the distance is smaller than the preset value is judged from the distance between the two center points. Alternatively, the pixel of the model closest to the target position may be obtained, and it is judged whether the distance from that pixel to the center point of the target position, or to any pixel of the target position, is smaller than the preset value; if so, the model is attached to the target position, and if not, the display position continues to be determined from the movement marker.
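The snap test on center points can be sketched as follows. The function name and the threshold value are illustrative assumptions, not values from the patent.

```python
import math

def try_snap(display_pos, target_pos, threshold):
    """Return the snapped position if the model's display position is
    within `threshold` of the target position, else None."""
    dx = display_pos[0] - target_pos[0]
    dy = display_pos[1] - target_pos[1]
    if math.hypot(dx, dy) < threshold:
        return target_pos   # attach the model to the target position
    return None             # keep following the movement marker

print(try_snap((10.0, 10.0), (12.0, 11.0), threshold=5.0))  # (12.0, 11.0)
print(try_snap((10.0, 10.0), (40.0, 40.0), threshold=5.0))  # None
```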
In this embodiment, the target position is a position on the three-dimensional image to which the model can be attached. It may be any edge of the cross-sectional view, or it may be obtained from a preset relationship between the three-dimensional image and the model, in which the model's different attachment positions on the image are stored. When several target positions exist, the one closest to the model's display position is used as the snap position. When several target positions lie within the preset distance, they may all be highlighted and the model snapped to the one the user selects; highlighting the target positions in this way reduces the difficulty of the operation.
In this embodiment, before the step of acquiring the target position of the three-dimensional image and judging whether the distance between the target position and the display position of the model is smaller than the preset value, the method further includes: judging from the display position whether the model has entered a trigger region; if so, finding the three-dimensional image corresponding to that trigger region; if not, continuing to determine the display position of the model from the input instruction.
In this embodiment, a trigger region surrounds each three-dimensional image and is larger than the image itself; the trigger region the model enters determines which three-dimensional image it should be attached to. To avoid starting the snap operation when it is not needed, the intelligent terminal may set trigger regions only for three-dimensional images that accept snapping, or associate each trigger region with a specific model. When the specific model for a trigger region, for example a corner-line model, moves into it, the region is shown in a distinctive color or flashes, associating the model with the region and prompting the user, which strengthens the system's judgment.
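A trigger region enlarged around an image's bounding box can be sketched as follows. The rectangle representation and the margin value are illustrative assumptions.

```python
def make_trigger_region(bbox, margin):
    """Expand a bounding box (xmin, ymin, xmax, ymax) by a margin, so the
    trigger region is larger than the three-dimensional image itself."""
    xmin, ymin, xmax, ymax = bbox
    return (xmin - margin, ymin - margin, xmax + margin, ymax + margin)

def in_region(point, region):
    """True if the model's display position lies inside the region."""
    x, y = point
    xmin, ymin, xmax, ymax = region
    return xmin <= x <= xmax and ymin <= y <= ymax

region = make_trigger_region((0, 0, 100, 50), margin=10)
print(in_region((105, 55), region))  # True: outside the image, inside the region
print(in_region((150, 55), region))  # False
```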
In this embodiment, the model is attached to the three-dimensional image by assigning coordinates of the region or edge to which the three-dimensional image is attached to the region or edge to which the model is attached.
In this embodiment, the three-dimensionality of the model is achieved by adding to each vertex of the model a second preset coordinate in a direction perpendicular to the two-dimensional image of the model, in the same way as the step of forming the three-dimensional image from the cross-sectional view described above.
In this embodiment, the second preset coordinate may be the same as or different from the first preset coordinate.
In this embodiment, after completing the snapping and three-dimensionalization of the model, the intelligent terminal further automatically detects whether the model's current snap position is connected to other positions of the three-dimensional image; if it is, the association data of the model and the three-dimensional image are recorded according to the connection relationship and stored.
In this embodiment, after the foregoing steps are completed, the intelligent terminal further detects whether a snap-completion instruction has been input; if so, it records the three-dimensional image and the model's related data to complete the three-dimensional interaction with the two-dimensional image.
In a specific embodiment, the model is a corner-line model, the three-dimensional image is a suspended ceiling, and the snap-completion instruction is releasing the left mouse button and stopping mouse operation. When the user sees on the display screen that the corner-line model has snapped to the correct position on the suspended ceiling, the user releases the left mouse button and stops operating the mouse; the intelligent terminal records the data of the current suspended-ceiling structure and stores it in memory, and the interaction ends.
In this embodiment, to facilitate modification, a model snapped onto the three-dimensional image may also be moved to another position on the image, or replaced by another model, according to a model-movement instruction input by the user.
In this embodiment, after the three-dimensional image is generated or the model is added to the three-dimensional image, the attributes of the three-dimensional image, such as the modeling structure, the size, and the color, may be modified according to the input instruction.
S105: and reading the operation instruction, and executing corresponding operation according to the operation instruction.
In this embodiment, when the received operation instruction is not a model-movement instruction, the intelligent terminal determines that no model event is triggered and does not execute the model-movement operation.
The two-dimensional image three-dimensional interactive design method applied to architectural design of the present invention is further explained by using a specific execution workflow.
The user clicks the "3D view" button; switching from the displayed 2D view to the 3D view triggers a system redraw-view event. The intelligent terminal reads all suspended-ceiling structures and corner-line models in the current 2D view, including the X-axis and Y-axis positions of each element's vertices, such as A(x1, y1), B(x2, y2) and C(x3, y3), then passes the data to the camera module, which adds a Z-axis coordinate and performs the 2D-to-3D conversion. The calculation yields a set of 3D spatial data containing X, Y, Z vertex positions, corresponding to A(x1, y1, z1), B(x2, y2, z1), C(x3, y3, z1). The system redraws the suspended-ceiling structure and the corner-line model from this result to obtain the 3D view. This process repeats 30 times per second. The switch from the 2D view to the 3D view is now complete.
When a user selects a color or material product from the left product library and clicks a target object in the three-dimensional view, the system reads data of the color and the material selected by the user, and then modifies color values on the polygonal surface of the object to achieve the effect of replacing the color or the material.
When the user clicks a product in the left product library, the system responds by reading the product's color data and then waits for the user's second mouse click. When the user clicks the mouse on a target structure, the system reads the click position, calculates which structural object it belongs to, and obtains that surface's ID number. The color data is then handed to the rendering-shader module for that ID for rendering. When rendering completes, the new color appears on the object's surface, and the user can view the result on screen.
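The pick-then-recolor flow can be sketched as follows. The dictionary-based "scene", the rectangle picking, and all names here are stand-ins for the real renderer and shader module, not the patent's implementation.

```python
def pick_surface(click_pos, surfaces):
    """Return the ID of the first surface whose rectangle (xmin, ymin,
    xmax, ymax) contains the click position, or None."""
    x, y = click_pos
    for surface_id, (xmin, ymin, xmax, ymax) in surfaces.items():
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return surface_id
    return None

def apply_color(surface_colors, surface_id, color):
    """Stand-in for handing color data to the shader for this ID."""
    surface_colors[surface_id] = color
    return surface_colors

surfaces = {"ceiling": (0, 0, 200, 100), "wall": (0, 100, 200, 300)}
colors = {}
sid = pick_surface((50, 150), surfaces)
print(sid)                                  # wall
print(apply_color(colors, sid, "#d2b48c"))  # {'wall': '#d2b48c'}
```

A production system would do the picking by ray casting against the 3D geometry rather than 2D rectangle tests.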
Beneficial effects: the two-dimensional image three-dimensional interactive design method applied to architectural design generates a three-dimensional image of the object from the vertices of a cross-sectional view, determines the display position of the model from the movement marker after a model event is triggered, and snaps the model according to the distance between the display position and the three-dimensional image. The object is displayed three-dimensionally, letting the user intuitively understand its three-dimensional structure and see the model's real-time position, reducing the demand on spatial imagination, avoiding placement errors through automatic snapping, reducing the number of modifications, simplifying the design process and improving design efficiency.
Based on the same inventive concept, the present invention further provides an intelligent terminal; please refer to fig. 3, which is a structural diagram of an embodiment of the intelligent terminal of the present invention. The intelligent terminal is described with reference to fig. 3.
In this embodiment, the intelligent terminal includes a processor and a memory; the memory stores a computer program, and the processor executes, according to the computer program, the two-dimensional image three-dimensional interaction design method applied to architectural design.
Beneficial effects: the intelligent terminal generates a three-dimensional image of the object from the vertices of the cross-sectional view, determines the display position of the model from the movement marker after a model event is triggered, and snaps the model according to the distance between the display position and the three-dimensional image. It can display the object three-dimensionally, letting the user intuitively understand its three-dimensional structure and see the model's real-time position, reducing the demand on spatial imagination, avoiding placement errors through automatic snapping, reducing the number of modifications, simplifying the design process and improving design efficiency.
Based on the same inventive concept, the present invention further provides a memory device, please refer to fig. 4, fig. 4 is a structural diagram of an embodiment of the memory device of the present invention, and the memory device of the present invention is described with reference to fig. 4.
In the present embodiment, the storage device stores program data used for implementing the two-dimensional image stereoscopic interactive design method applied to architectural design as described in the above embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method may be implemented in other manners. For example, the terminal embodiments described above are merely illustrative: the division into units is only a logical division, and other division schemes are possible in practice; a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence (or the part that contributes over the prior art, or the whole or part of the technical solution), may be embodied as a software product, which is stored in a storage medium and includes instructions that cause a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A two-dimensional image three-dimensional interactive design method applied to architectural design is characterized by comprising the following steps:
S101: acquiring a profile of an object, and generating a three-dimensional image of the object according to the vertices of the profile;
S102: receiving an input operation instruction, and judging whether a model event is triggered according to the operation instruction; if so, executing S103, and if not, executing S105;
S103: obtaining a model to be operated according to the operation instruction, and determining the display position of the model according to the mobile identifier;
S104: acquiring the distance between the model and the three-dimensional image according to the display position, adsorbing the model to the three-dimensional image according to the distance, and making the model three-dimensional according to the vertices of the model;
S105: reading the operation instruction, and executing the corresponding operation according to the operation instruction.
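The dispatch in steps S102–S105 can be sketched as a small handler; this is a hypothetical illustration, and the function and field names (`handle_instruction`, `op`, `model_id`, `cursor`) are invented for the sketch rather than taken from the patent.

```python
def handle_instruction(instruction, models):
    """Dispatch per S102: a move-model instruction triggers a model event
    (S103: update the display position from the move identifier); any other
    instruction is executed directly (S105)."""
    if instruction.get("op") == "move_model":
        model = models[instruction["model_id"]]
        model["position"] = instruction["cursor"]  # S103: display position follows the cursor
        return "model_event"
    return "other_operation"
```

The snapping of S104 would then run on the updated display position before the next instruction is read.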
2. The two-dimensional image three-dimensional interactive design method applied to architectural design according to claim 1, wherein the step of generating the three-dimensional image of the object according to the vertices of the profile specifically comprises:
acquiring the vertices of the profile, adding a first preset coordinate at each vertex in the direction perpendicular to the profile, and drawing the three-dimensional image of the object from the first preset coordinate and the original coordinates of the vertices.
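A minimal sketch of this vertex-lifting step, under the assumption that the profile lies in the z = 0 plane and the "first preset coordinate" is a depth added along z; the names (`extrude_profile`, `preset_depth`) are illustrative, not from the patent:

```python
def extrude_profile(vertices_2d, preset_depth):
    """Lift a 2D profile into 3D: keep each vertex at its original coordinates
    (z = 0) and duplicate it at the first preset coordinate (z = preset_depth),
    yielding the two faces of the extruded solid."""
    near_face = [(x, y, 0.0) for x, y in vertices_2d]
    far_face = [(x, y, preset_depth) for x, y in vertices_2d]
    return near_face + far_face
```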
3. The two-dimensional image three-dimensional interactive design method applied to architectural design according to claim 1, wherein the operation instruction is input through any one of a mouse, a touch pad, a touch screen, and a drawing pad.
4. The two-dimensional image three-dimensional interactive design method applied to architectural design according to claim 1, wherein the step of judging whether a model event is triggered according to the operation instruction specifically comprises:
acquiring the operation information corresponding to the operation instruction, and judging whether the operation information indicates moving a model;
if yes, determining a trigger model event;
if not, determining that the model event is not triggered.
5. The two-dimensional image three-dimensional interactive design method applied to architectural design according to claim 1, wherein the step of determining the display position of the model according to the mobile identifier specifically comprises:
acquiring the moving position corresponding to the mobile identifier at a preset frequency, and displaying the model with the moving position as its display position.
6. The two-dimensional image three-dimensional interactive design method applied to architectural design according to claim 1, wherein the step of acquiring the distance between the model and the three-dimensional image according to the display position and adsorbing the model to the three-dimensional image according to the distance specifically comprises:
acquiring a target position of the three-dimensional image, and judging whether the distance between the target position and the display position of the model is smaller than a preset value;
if so, adsorbing the model to the target position;
if not, determining the display position according to the mobile identifier.
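The adsorption test in this claim reduces to a distance threshold; a sketch assuming Euclidean distance between 3D points, with hypothetical names (`snap_or_follow`, `preset_value`):

```python
import math

def snap_or_follow(display_pos, target_pos, preset_value):
    """Adsorb the model onto the target position when the distance between the
    display position and the target falls below the preset value; otherwise the
    model keeps following the mobile identifier."""
    if math.dist(display_pos, target_pos) < preset_value:
        return target_pos, True   # adsorbed onto the three-dimensional image
    return display_pos, False     # still following the cursor
```

`math.dist` requires Python 3.8 or later; on older versions the Euclidean distance can be computed with `math.sqrt(sum((a - b) ** 2 for a, b in zip(display_pos, target_pos)))`.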
7. The two-dimensional image three-dimensional interactive design method applied to architectural design according to claim 6, wherein, before the step of acquiring the target position of the three-dimensional image and judging whether the distance between the target position and the display position of the model is smaller than the preset value, the method further comprises:
judging whether the model enters a trigger area or not according to the display position;
if so, searching a three-dimensional image corresponding to the trigger area;
if not, determining the display position of the model according to the input instruction.
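The "enters a trigger area" check can be a simple axis-aligned bounding-box test; a sketch with invented names, assuming the trigger area is a box given by its minimum and maximum corners:

```python
def in_trigger_area(display_pos, area_min, area_max):
    """True when the model's display position lies inside the axis-aligned
    box spanned by area_min and area_max (the trigger area)."""
    return all(lo <= p <= hi
               for lo, p, hi in zip(area_min, display_pos, area_max))
```

When this returns True, the three-dimensional image corresponding to the trigger area is looked up and the distance test of claim 6 is applied.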
8. The two-dimensional image three-dimensional interactive design method applied to architectural design according to claim 1, wherein, after the step of making the model three-dimensional according to the vertices of the model, the method further comprises:
determining the selected target model according to an input selection instruction, displaying the rendering information of the target model, and modifying the rendering information according to an input instruction.
9. An intelligent terminal, characterized in that the intelligent terminal comprises a processor and a memory, the memory stores a computer program, and the processor executes the two-dimensional image three-dimensional interactive design method applied to architectural design according to any one of claims 1-8 in accordance with the computer program.
10. A storage device, characterized in that the storage device stores program data for implementing the two-dimensional image three-dimensional interactive design method applied to architectural design according to any one of claims 1-8.
CN202011563901.3A 2020-12-25 2020-12-25 Two-dimensional image three-dimensional interaction design method, terminal and storage device Pending CN112802198A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011563901.3A CN112802198A (en) 2020-12-25 2020-12-25 Two-dimensional image three-dimensional interaction design method, terminal and storage device


Publications (1)

Publication Number Publication Date
CN112802198A true CN112802198A (en) 2021-05-14

Family

ID=75804829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011563901.3A Pending CN112802198A (en) 2020-12-25 2020-12-25 Two-dimensional image three-dimensional interaction design method, terminal and storage device

Country Status (1)

Country Link
CN (1) CN112802198A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115202529A (en) * 2022-07-29 2022-10-18 上海联影医疗科技股份有限公司 Pointer positioning system and method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5990900A (en) * 1997-12-24 1999-11-23 Be There Now, Inc. Two-dimensional to three-dimensional image converting system
JP2006244217A (en) * 2005-03-04 2006-09-14 C's Lab Ltd Three-dimensional map display method, three-dimensional map display program and three-dimensional map display device
CN101833543A (en) * 2010-05-07 2010-09-15 李响 Naked-eye three-dimensional display method for characters
US20130155057A1 (en) * 2011-12-20 2013-06-20 Au Optronics Corp. Three-dimensional interactive display apparatus and operation method using the same
CN103489224A (en) * 2013-10-12 2014-01-01 厦门大学 Interactive three-dimensional point cloud color editing method
CN106067190A (en) * 2016-05-27 2016-11-02 俞怡斐 A kind of fast face threedimensional model based on single image generates and alternative approach
CN111161129A (en) * 2019-11-25 2020-05-15 佛山欧神诺云商科技有限公司 Three-dimensional interaction design method and system for two-dimensional image

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HU Zhongtian; YE Lü: "Real-time interactive texture mapping algorithm for three-dimensional models", Journal of Zhejiang University of Science and Technology, no. 03, pages 206 - 213 *
HUANG Qijin, LIU Guoquan, MA Yuanzheng, XUE Haibin, GAO Jin: "Three-dimensional finite element model of the lumbar motion segment reconstructed from CT images and its application", Chinese Journal of Stereology and Image Analysis, no. 02, pages 59 - 63 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination