CN114329675A - Model generation method, model generation device, electronic device, and readable storage medium - Google Patents

Model generation method, model generation device, electronic device, and readable storage medium

Info

Publication number
CN114329675A
CN114329675A
Authority
CN
China
Prior art keywords
model
target
house
lines
generation method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111669474.1A
Other languages
Chinese (zh)
Inventor
董杨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youzhuju Network Technology Co Ltd
Original Assignee
Beijing Youzhuju Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Youzhuju Network Technology Co Ltd filed Critical Beijing Youzhuju Network Technology Co Ltd
Priority to CN202111669474.1A priority Critical patent/CN114329675A/en
Publication of CN114329675A publication Critical patent/CN114329675A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The application discloses a model generation method, a model generation device, an electronic device, and a readable storage medium, and belongs to the technical field of virtual reality. The model generation method is used for three-dimensional modeling of a house and comprises the following steps: configuring a target panorama in a three-dimensional coordinate system, wherein the target panorama corresponds to the house; receiving a first drawing input for the target panorama, wherein the first drawing input is for intersection lines between adjacent walls in the house; drawing a plurality of model reference lines in the three-dimensional coordinate system according to the first drawing input; and establishing a target model according to the plurality of model reference lines.

Description

Model generation method, model generation device, electronic device, and readable storage medium
Technical Field
The application belongs to the technical field of virtual reality, and particularly relates to a model generation method, a model generation device, electronic equipment and a readable storage medium.
Background
Virtual reality technology is widely used in the field of house viewing. In the related art, when performing three-dimensional modeling of a house, an operator needs to build the model according to the specific dimensions of the house and live-action pictures, so modeling a house takes a long time.
Disclosure of Invention
An object of the embodiments of the present application is to provide a model generation method, a model generation apparatus, an electronic device, and a readable storage medium, which simplify the steps of building a three-dimensional model of a house and shorten the modeling time.
In a first aspect, an embodiment of the present application provides a model generation method for three-dimensional modeling of a house, the model generation method including: configuring a target panorama in a three-dimensional coordinate system, wherein the target panorama corresponds to the house; receiving a first drawing input for the target panorama, wherein the first drawing input is for intersection lines between adjacent walls in the house; in response to the first drawing input, drawing a plurality of model reference lines in the three-dimensional coordinate system according to the intersection lines; and establishing a target model according to the plurality of model reference lines.
In a second aspect, an embodiment of the present application provides a model generation apparatus for modeling a house, the model generation apparatus including: a configuration module, configured to configure a target panorama in a three-dimensional coordinate system, the target panorama corresponding to the house; a receiving module, configured to receive a first drawing input for the target panorama, the first drawing input being for intersection lines between adjacent wall surfaces in the house; a drawing module, configured to draw a plurality of model reference lines in the three-dimensional coordinate system according to the intersection lines in response to the first drawing input; and a modeling module, configured to generate a target model according to the plurality of model reference lines.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions that can be executed on the processor, and the program or instructions, when executed by the processor, implement the steps of the model generation method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the model generation method according to the first aspect.
In the embodiments of the present application, a modeler configures the target panorama into a three-dimensional coordinate system in a modeling system; the modeling system can draw the intersection lines between adjacent wall surfaces in the house according to the first drawing input performed by the modeler and automatically generate the target model according to the marked intersection lines, which simplifies the steps of building a three-dimensional house model and shortens the modeling time.
Drawings
FIG. 1 shows a first schematic flowchart of a model generation method provided by an embodiment of the present application;
FIG. 2 shows a second schematic flowchart of a model generation method provided by an embodiment of the present application;
FIG. 3 shows a third schematic flowchart of a model generation method provided by an embodiment of the present application;
FIG. 4 shows a fourth schematic flowchart of a model generation method provided by an embodiment of the present application;
FIG. 5 shows a fifth schematic flowchart of a model generation method provided by an embodiment of the present application;
FIG. 6 shows a sixth schematic flowchart of a model generation method provided by an embodiment of the present application;
FIG. 7 shows a seventh schematic flowchart of a model generation method provided by an embodiment of the present application;
FIG. 8 shows a structural block diagram of a model generation apparatus provided by an embodiment of the present application;
FIG. 9 shows a block diagram of an electronic device provided by an embodiment of the present application;
FIG. 10 shows a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects and are not necessarily used to describe a particular sequence or order. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be practiced in orders other than those illustrated or described herein; moreover, objects distinguished by "first", "second", and the like are generally of one type, and their number is not limited, e.g., the first object may be one or more than one. In addition, "and/or" in the description and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The model generation method, the model generation apparatus, the electronic device and the readable storage medium provided in the embodiments of the present application are described in detail below with reference to fig. 1 to 10 through specific embodiments and application scenarios thereof.
An embodiment of the present application provides a model generation method. FIG. 1 shows a first schematic flowchart of the model generation method provided by an embodiment of the present application. As shown in FIG. 1, the model generation method is used for three-dimensional modeling of a house and includes:
Step 102: configuring a target panorama in a three-dimensional coordinate system, wherein the target panorama corresponds to the house;
Optionally, the modeler captures a panorama of the house to obtain the target panorama.
Step 104: receiving a first drawing input for the target panorama, wherein the first drawing input is for intersection lines between adjacent wall surfaces in the house;
Step 106: in response to the first drawing input, drawing a plurality of model reference lines in the three-dimensional coordinate system according to the intersection lines;
Step 108: generating a target model according to the plurality of model reference lines.
The model generation method provided by the embodiments of the present application is applied to a modeling system. A three-dimensional coordinate system is established in the modeling system, and the target panorama is configured in the three-dimensional coordinate system, where the target panorama is a panoramic image of the house. The modeling system receives a first drawing input performed by the modeler and draws model reference lines in the three-dimensional coordinate system according to the first drawing input. It can be understood that the first drawing input is for the intersection lines between adjacent walls in the house, so the modeling system is able to draw the model reference lines according to those intersection lines. A model reference line is an inner contour line of the model, and the modeling system can automatically generate a wall feature in the model from two adjacent contour lines, thereby establishing the target model.
Specifically, the modeler observes the image features in the target panorama and performs the first drawing input to mark the intersection lines between adjacent walls in the house. Because intersection lines drawn manually by the annotator may deviate slightly in angle, the modeling system automatically draws a plurality of model reference lines in the three-dimensional coordinate system according to the first drawing input, where the model reference lines are inner contour lines of the model. Since the first drawing input is for the intersection lines between adjacent wall surfaces of the house, the plurality of model reference lines drawn by the modeling system are straight lines that are parallel to each other.
In the embodiments of the present application, a modeler configures the target panorama into a three-dimensional coordinate system in a modeling system; the modeling system can mark the intersection lines between adjacent wall surfaces in the house according to the first drawing input performed by the modeler and automatically generate the target model according to the marked intersection lines, which simplifies the steps of building a three-dimensional house model and shortens the modeling time.
It is worth mentioning that the first drawing input is marked directly in the target panorama by the modeler, and the modeling system automatically generates the model reference lines in the three-dimensional coordinate system from that input, so the modeling system does not rely on networked image recognition technology.
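As an illustrative sketch of how a point of the first drawing input on the target panorama can be related to the three-dimensional coordinate system (the equirectangular format, the axis convention and the function name below are assumptions, not taken from the disclosure), a marked pixel of the panorama may be converted into a viewing direction as follows:

```python
import math

def pixel_to_direction(u, v, width, height):
    """Map a pixel (u, v) of an equirectangular target panorama to a unit
    direction vector in the three-dimensional coordinate system.

    Assumed convention: u runs left-to-right over 360 degrees of longitude,
    v runs top-to-bottom over 180 degrees of latitude, and the z axis points
    up, away from the ground model."""
    lon = (u / width) * 2.0 * math.pi - math.pi        # longitude in [-pi, pi]
    lat = math.pi / 2.0 - (v / height) * math.pi       # latitude in [-pi/2, pi/2]
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    return (x, y, z)
```

A point of the first drawing input marked on the target panorama then corresponds to a ray from the camera viewpoint along this direction, which is one way the marked intersection lines can be carried into the three-dimensional coordinate system.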
In some embodiments of the present application, fig. 2 shows a second flowchart of the model generation method provided in the embodiments of the present application, and as shown in fig. 2, drawing a plurality of model reference lines in a three-dimensional coordinate system according to intersecting lines includes:
Step 202: generating a ground model of the house in the three-dimensional coordinate system;
Step 204: determining a first drawing point in the ground model according to the intersection line;
Step 206: generating a model reference line along the direction perpendicular to the ground model, with the first drawing point as the starting point.
In the embodiments of the present application, once the target panorama is configured in the three-dimensional coordinate system, the modeling system automatically generates the ground model of the house according to the three-dimensional coordinate system. The generated ground model contacts the lower edge of the target panorama arranged in the three-dimensional coordinate system. Since the first drawing input is the modeler's marking of an intersection line between adjacent wall surfaces in the house, the intersection point between the ground model and the extension of that intersection line toward the ground model is obtained, thereby determining the first drawing point. After the first drawing point is determined, the modeling system draws the model reference line from the first drawing point in the direction away from the ground model.
It can be understood that the ground model of the house generated in the three-dimensional coordinate system serves as the reference plane for drawing the model reference lines. After the intersection line corresponding to the first drawing input is acquired, the model reference line is drawn from the intersection point of that line with the reference plane, so the intersection line drawn by the annotator is calibrated by the modeling system into a model reference line, which improves the drawing accuracy of the model reference lines.
In the embodiments of the present application, the modeling system uses the ground model as the reference surface for drawing the model reference lines, thereby calibrating the first drawing input performed by the modeler: the drawn model reference line remains associated with the first drawing input while its accuracy is also ensured.
In some embodiments of the present application, determining a first drawing point in the ground model according to the intersection line includes: determining the first drawing point according to the intersection point of the intersection line and the ground model.
In the embodiments of the present application, the generated ground model of the house contacts the lower edge of the target panorama configured in the three-dimensional coordinate system. Since the first drawing input is the annotator's marking of an intersection line between adjacent wall surfaces, the intersection point between the ground model and the extension of that intersection line toward the ground model is obtained, thereby determining the first drawing point.
According to the embodiment of the application, the intersection point between the ground model and the intersection line is used as the first drawing point, so that the accuracy of the drawn model reference line can be guaranteed.
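A minimal sketch of this step, assuming the ground model is the plane z = 0 and the marked intersection line is represented by the viewing ray through its lower end point (for example obtained with pixel_to_direction above); the function names and the fixed wall height are illustrative assumptions:

```python
def first_drawing_point(viewpoint, direction, ground_z=0.0):
    """Intersect the ray 'viewpoint + t * direction' with the ground model
    (the plane z = ground_z) and return the intersection point, i.e. the
    first drawing point."""
    vx, vy, vz = viewpoint
    dx, dy, dz = direction
    if abs(dz) < 1e-9:
        raise ValueError("marked line is parallel to the ground model")
    t = (ground_z - vz) / dz
    if t <= 0:
        raise ValueError("the ground model lies behind the viewpoint")
    return (vx + t * dx, vy + t * dy, ground_z)

def model_reference_line(first_point, wall_height):
    """Generate a model reference line as a vertical segment that starts at
    the first drawing point and runs perpendicular to the ground model."""
    x, y, z = first_point
    return ((x, y, z), (x, y, z + wall_height))
```

Because every reference line is generated perpendicular to the same ground model, the resulting lines are parallel to one another, as described above.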
In some embodiments of the present application, fig. 3 shows a third flowchart of a model generation method provided in the embodiments of the present application, and as shown in fig. 3, establishing a target model according to a plurality of model reference lines includes:
Step 302: generating a wall model according to the plurality of model reference lines;
Step 304: generating a top surface model of the house in the three-dimensional coordinate system, wherein the top surface model is parallel to the ground model;
Step 306: establishing the target model according to the wall model, the ground model and the top surface model.
In the embodiments of the present application, the model reference lines are contour lines generated according to the first drawing input of the annotator, and the first drawing input is for the intersection lines between adjacent wall surfaces in the house, so a wall model, that is, a model of the walls of the house, can be generated according to the model reference lines. At this point the modeling system already has the ground model and the wall model of the house, so it further generates a top surface model of the house in the three-dimensional coordinate system. The modeling system can then generate the target model by splicing together the ground model, the top surface model and the wall model.
Specifically, the target model includes the ground model, the top surface model arranged parallel to the ground model, and the wall model arranged between the ground model and the top surface model. The ground model is generated automatically by the modeling system from the relative position of the target panorama and the three-dimensional coordinate system, the wall model is generated by the modeling system from the first drawing input of the annotator, and the modeling system establishes a top surface model parallel to the ground model according to the ground model and the wall model, thereby completing the configuration of all sub-models in the target model and obtaining the target model.
In the embodiment of the application, the ground model, the wall model and the top model are respectively established through the modeling system, and the ground model, the wall model and the top model are combined to obtain the accurate target model.
It can be understood that, because the ground model and the top surface model are parallel to each other, the annotator only needs to mark the intersection lines between adjacent walls in the house with the first drawing input, and the modeling system can automatically generate the corresponding ground model and top surface model.
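A minimal sketch of the assembly step, assuming the model reference lines are ordered around the room outline and share a common wall height; the class and field names below are illustrative and not taken from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float, float]
Line = Tuple[Point, Point]  # (bottom point, top point) of a model reference line

@dataclass
class TargetModel:
    floor: List[Point] = field(default_factory=list)        # ground model polygon
    ceiling: List[Point] = field(default_factory=list)      # top surface model polygon
    walls: List[List[Point]] = field(default_factory=list)  # one quad per wall

def build_target_model(reference_lines: List[Line]) -> TargetModel:
    """Combine the ground model, the top surface model and the wall models
    into a target model from the ordered model reference lines."""
    model = TargetModel()
    model.floor = [bottom for bottom, _ in reference_lines]
    model.ceiling = [top for _, top in reference_lines]
    n = len(reference_lines)
    for i in range(n):
        b0, t0 = reference_lines[i]
        b1, t1 = reference_lines[(i + 1) % n]   # wrap around to close the room
        model.walls.append([b0, b1, t1, t0])    # wall quad between adjacent lines
    return model
```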
In some embodiments of the present application, fig. 4 shows a fourth flowchart of a model generation method provided in an embodiment of the present application, and as shown in fig. 4, generating a wall model according to multiple model reference lines includes:
Step 402: receiving a selection input for the plurality of model reference lines;
Step 404: selecting at least two model reference lines from the plurality of model reference lines in response to the selection input;
Step 406: generating a wall model according to the at least two model reference lines.
In the embodiments of the present application, the model reference lines are inner contour lines of the model, and a wall surface can be determined by any two inner contour lines. The annotator selects the model reference lines used to generate the wall model by entering a selection input for the plurality of model reference lines into the modeling system. After the modeler completes the selection of at least two model reference lines, the modeling system can generate the wall model of the house according to the at least two selected model reference lines.
Specifically, the plurality of model reference lines are displayed in the three-dimensional coordinate system, and the modeler selects two of them through the selection input. Because the model reference lines are obtained by marking the intersection lines between adjacent wall surfaces in the panorama, they are parallel to each other; the modeling system can therefore determine a plane from the at least two selected model reference lines and clip that plane using the selected lines as its boundary, thereby obtaining the wall model.
In the embodiments of the present application, the modeling system can automatically generate the wall model by receiving the modeler's selection input for at least two of the plurality of model reference lines, which further simplifies the modeling operations performed by the annotator and reduces the time required for modeling.
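For this selection-input variant, a single wall model can be cut from the plane spanned by the two selected model reference lines, with the lines themselves as its boundary. A short sketch with assumed names, where each line is a (bottom, top) pair of 3D points as in the earlier sketches:

```python
def wall_from_selection(line_a, line_b):
    """Generate the wall quad bounded by two selected model reference lines.
    Because the reference lines are vertical and parallel, the four corners
    lie on a single plane."""
    a_bottom, a_top = line_a
    b_bottom, b_top = line_b
    return [a_bottom, b_bottom, b_top, a_top]
```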
In some embodiments of the present application, fig. 5 illustrates a fifth flowchart of a model generation method provided in an embodiment of the present application, and as shown in fig. 5, before generating a top surface model in a three-dimensional coordinate system, the method further includes:
Step 502: receiving a second drawing input for the target panorama, wherein the second drawing input is for feature points of a door body in the house;
Step 504: in response to the second drawing input, generating a door body model on the wall model according to the feature points of the door body.
In the embodiments of the present application, the first drawing input is for the intersection lines of adjacent walls in the house, and the modeling system can establish the main frame of the target model according to the first drawing input. The modeler can also manually mark the door body in the target panorama: the second drawing input received by the modeling system is the annotator's marking of the feature points of the door body in the target panorama, and the modeling system can automatically add a door body model to the target model according to the second drawing input. The feature points of the door body may include the corner points of the door, that is, a door body model is generated on the wall by marking the four door corners.
It can be understood that the door bodies in a room are all arranged on walls. Therefore, the modeling system can automatically snap the second drawing input to the corresponding wall model, and the generated door body model is configured on the corresponding wall of the target model.
Illustratively, the modeling system receives the second drawing input of the annotator and automatically snaps the door body marking points of the second drawing input onto the corresponding wall model. The modeling system then establishes the door body model from the door body marking points on the wall model.
In the embodiments of the present application, the annotator manually marks the door body in the panorama, and the modeling system can establish the door body features in the target model according to the second drawing input performed by the annotator.
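A minimal sketch of the snapping step, assuming the four marked door corners are available as viewing directions from the camera viewpoint (for example via pixel_to_direction above) and the wall model is a quad as in the earlier sketches; the helper names and the ray-plane intersection are illustrative assumptions:

```python
def _sub(p, q):
    return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

def _dot(p, q):
    return p[0] * q[0] + p[1] * q[1] + p[2] * q[2]

def _cross(p, q):
    return (p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0])

def snap_door_to_wall(viewpoint, corner_directions, wall_quad):
    """Cast a ray from the camera viewpoint through each marked door corner
    and intersect it with the plane of the wall model, yielding the four
    corners of the door body model on that wall."""
    p0, p1, p3 = wall_quad[0], wall_quad[1], wall_quad[3]
    normal = _cross(_sub(p1, p0), _sub(p3, p0))   # wall plane normal
    door_corners = []
    for d in corner_directions:
        denom = _dot(normal, d)
        if abs(denom) < 1e-9:
            raise ValueError("marking ray is parallel to the wall model")
        t = _dot(normal, _sub(p0, viewpoint)) / denom
        door_corners.append((viewpoint[0] + t * d[0],
                             viewpoint[1] + t * d[1],
                             viewpoint[2] + t * d[2]))
    return door_corners
```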
In some embodiments of the present application, fig. 6 shows a sixth schematic flowchart of a model generation method provided in an embodiment of the present application, and as shown in fig. 6, configuring a target panorama in a three-dimensional coordinate system includes:
Step 602: establishing a spherical projection in the three-dimensional coordinate system;
Step 604: configuring the target panorama on the spherical projection.
In the embodiments of the present application, a sphere (the spherical projection) is created in the three-dimensional coordinate system, and the modeling system configures the target panorama onto the spherical projection. By establishing the spherical projection in the three-dimensional coordinate system and generating the spherically projected panorama from the spherical projection and the target panorama, the target panorama is displayed without distortion; that is, when the modeler observes the target panorama configured on the spherical projection, the intersection lines between adjacent wall surfaces of the house appear as straight lines.
In the embodiments of the present application, the target panorama is configured on the spherical projection and the modeling system displays it there, so the modeler can observe the target panorama without distortion, which makes it convenient to manually perform the first drawing input corresponding to the intersection lines between adjacent wall surfaces in the house.
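Configuring the panorama on the spherical projection amounts to texturing a sphere in the coordinate system with the panorama. The usual way to do this, sketched below under the same equirectangular assumption as before, is the inverse mapping from a direction on the sphere back to a panorama pixel:

```python
import math

def direction_to_pixel(direction, width, height):
    """Inverse of pixel_to_direction: given a unit direction on the spherical
    projection, return the (u, v) pixel of the equirectangular target panorama
    that should be displayed in that direction."""
    x, y, z = direction
    lon = math.atan2(y, x)                        # [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, z)))       # [-pi/2, pi/2]
    u = (lon + math.pi) / (2.0 * math.pi) * width
    v = (math.pi / 2.0 - lat) / math.pi * height
    return (u, v)
```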
In some embodiments of the present application, FIG. 7 shows a seventh schematic flowchart of a model generation method provided by an embodiment of the present application. As shown in FIG. 7, configuring the target panorama on the spherical projection includes:
Step 702: acquiring a first reference point of the target panorama and a second reference point of the spherical projection, wherein the first reference point is the camera viewpoint of the panorama and the second reference point is the center point of the spherical projection;
Step 704: mapping the target panorama onto the spherical projection so that the first reference point and the second reference point coincide.
In the embodiments of the present application, the camera viewpoint of the target panorama is used as the first reference point and the center point of the spherical projection is used as the second reference point; the target panorama is mapped onto the spherical projection by making the first reference point coincide with the second reference point, thereby positioning the target panorama relative to the spherical projection.
In the embodiments of the present application, the target panorama and the spherical projection are positioned relative to each other through the camera viewpoint of the target panorama and the center point of the spherical projection, which improves the accuracy of the mapped spherically projected panorama, so the first drawing input performed by the modeler on that panorama is also more accurate.
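Making the two reference points coincide is a simple translation of the panorama's placement; a short sketch with assumed names:

```python
def align_reference_points(camera_viewpoint, sphere_center):
    """Return the translation that maps the first reference point (the camera
    viewpoint of the target panorama) onto the second reference point (the
    center of the spherical projection), so that the two coincide."""
    return tuple(c - v for v, c in zip(camera_viewpoint, sphere_center))

# Example: a viewpoint at (1.2, 0.5, 1.4) is moved onto a sphere centered at the origin.
offset = align_reference_points((1.2, 0.5, 1.4), (0.0, 0.0, 0.0))  # -> (-1.2, -0.5, -1.4)
```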
In the model generation method provided by the embodiments of the present application, the execution subject may be a model generation apparatus. The model generation apparatus provided by the embodiments of the present application is described below, taking as an example the case in which the model generation apparatus executes the model generation method.
In some embodiments of the present application, a model generation apparatus is provided, and fig. 8 shows a block diagram of a structure of the model generation apparatus provided in the embodiments of the present application, and as shown in fig. 8, a model generation apparatus 800 includes:
a configuration module 802, configured to configure a target panorama in a three-dimensional coordinate system, where the target panorama corresponds to the house;
a receiving module 804, configured to receive a first drawing input for a target panorama, the first drawing input being for an intersection line between adjacent walls in the house;
a drawing module 806 for drawing a plurality of model reference lines in a three-dimensional coordinate system according to the intersection lines in response to a first drawing input;
and the modeling module 808 is used for generating a target model according to the plurality of model reference lines.
In the embodiment of the application, a modeler configures the target panorama into a three-dimensional coordinate system in a modeling system, the modeling system can draw the intersection line between the side wall surfaces in the house according to the first drawing input by the modeler, and automatically generate the target model according to the marked intersection line, so that the step of building the house three-dimensional model is simplified, and the modeling time is shortened.
It is worth mentioning that the first drawing input is marked in the target panorama by the modeler, and the modeling system can automatically generate a model reference line in the three-dimensional system according to the first drawing input executed by the marker, so that the modeling system is independent of the networked image recognition technology.
In some embodiments of the present application, the model generation apparatus 800 further includes:
the generation module is used for generating a ground model of the house in a three-dimensional coordinate system;
the determining module is used for determining a first drawing point in the ground model according to the intersection line;
and the generation module is further configured to generate a model reference line along the direction perpendicular to the ground model, with the first drawing point as the starting point.
In the embodiment of the application, the modeling system takes the ground model as the reference surface to draw the model datum line, so that the effect of calibrating the first drawing input by a modeler is realized, the drawn model datum line is associated with the first drawing input, and meanwhile, the accuracy of the model datum line is also ensured.
In some embodiments of the present application, the determining module is further configured to determine the first plotted point based on an intersection of the intersection line with the ground model.
According to the embodiment of the application, the intersection point between the ground model and the intersection line is used as the first drawing point, so that the accuracy of the drawn model reference line can be guaranteed.
In some embodiments of the present application, the generation module is further configured to generate a wall model according to a plurality of model reference lines;
the modeling module 808 is further configured to generate a top model of the house in the three-dimensional coordinate system, where the top model is parallel to the ground model;
the modeling module 808 is further configured to build a target model based on the wall model, the ground model, and the top model.
In the embodiment of the application, the ground model, the wall model and the top model are respectively established through the modeling system, and the ground model, the wall model and the top model are combined to obtain the accurate target model.
It can be understood that, because the ground model and the top surface model are parallel to each other, the annotator only needs to mark the intersection lines between adjacent walls in the house with the first drawing input, and the modeling system can automatically generate the corresponding ground model and top surface model.
In some embodiments of the present application, the receiving module 804 is further configured to receive a selection input for a plurality of model reference lines;
the model generation apparatus 800 further includes:
a selection module for selecting at least two of the plurality of model reference lines in response to a selection input;
and the generation module is further configured to generate a wall model according to the at least two model reference lines.
According to the modeling system, the wall model can be automatically generated by receiving the selection input of the modeling personnel for at least two model reference lines in the multiple model reference lines, so that the operation steps of modeling by the labeling personnel are further simplified, and the time required by modeling is reduced.
In some embodiments of the present application, the receiving module 804 is further configured to receive a second drawing input for the target panorama, where the second drawing input is for feature points of a door body in the house;
the modeling module 808 is further configured to generate a door model on the wall model according to the feature points of the door in response to the second drawing input.
In the embodiment of the application, the annotating personnel manually annotate the door body in the panoramic image, and the modeling system can establish the door body characteristics in the target model according to the second drawing input by the annotating personnel.
In some embodiments of the present application, the model generation apparatus 800 further includes:
the projection establishing module is used for establishing spherical projection in a three-dimensional coordinate system;
the module 802 is further configured to establish a spherical projection in the three-dimensional coordinate system.
According to the embodiment of the application, the target panorama is configured in the spherical projection, and the modeling system displays the target panorama configured in the spherical projection, so that a modeler can observe the target panorama without distortion, and the modeler can conveniently and manually input the first drawing input corresponding to the intersection line between the adjacent wall surfaces in the house.
In some embodiments of the present application, the model generation apparatus 800 further includes:
the acquisition module is used for acquiring a first reference point of the target panoramic image and a second reference point of the spherical projection, wherein the first reference point is a camera viewpoint of the panoramic image, and the second reference point is a central point of the spherical projection;
and the mapping module is used for mapping the target panorama onto the spherical projection so as to enable the first reference point and the second reference point to be coincident.
In the embodiments of the present application, the target panorama and the spherical projection are positioned relative to each other through the camera viewpoint of the target panorama and the center point of the spherical projection, which improves the accuracy of the mapped spherically projected panorama, so the first drawing input performed by the modeler on that panorama is also more accurate.
The model generation apparatus in the embodiment of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or may be a device other than a terminal. The electronic Device may be, for example, a Mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic Device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) Device, a robot, a wearable Device, an ultra-Mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The model generation apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system (Android), an iOS operating system, or other possible operating systems, which is not specifically limited in the embodiments of the present application.
The model generation device provided in the embodiment of the present application can implement each process implemented by the above method embodiment, and is not described here again to avoid repetition.
Optionally, as shown in fig. 9, an electronic device 900 is further provided in this embodiment of the present application, where the electronic device 900 includes a processor 902 and a memory 904, and the memory 904 stores a program or an instruction that can be executed on the processor 902, and when the program or the instruction is executed by the processor 902, the steps of the foregoing method embodiment are implemented, and the same technical effects can be achieved, and are not described again here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The processor 1010 is configured to configure a target panorama in a three-dimensional coordinate system, where the target panorama corresponds to the house;
a user input unit 1007 for receiving a first drawing input for a target panorama, the first drawing input being for an intersection line between adjacent walls in the house;
a processor 1010 for drawing a plurality of model reference lines in a three-dimensional coordinate system according to the intersection lines in response to a first drawing input;
a processor 1010 for building a target model from the plurality of model reference lines.
In the embodiment of the application, a modeler configures the target panorama into a three-dimensional coordinate system in a modeling system, the modeling system can label the intersection line between the side wall surfaces in the house according to the first drawing input by the modeler, and automatically generate the target model according to the labeled intersection line, so that the step of building the house three-dimensional model is simplified, and the modeling time is shortened.
It is worth mentioning that the first drawing input is marked in the target panorama by the modeler, and the modeling system can automatically generate a model reference line in the three-dimensional system according to the first drawing input executed by the marker, so that the modeling system is independent of the networked image recognition technology.
Further, a processor 1010 for generating a ground model of the house in a three-dimensional coordinate system;
a processor 1010 for determining a first plotted point in the ground model from the intersection line;
and a processor 1010 for generating a model reference line with the first drawing point as a starting point in a direction perpendicular to the ground model.
In the embodiment of the application, the modeling system takes the ground model as the reference surface to draw the model datum line, so that the effect of calibrating the first drawing input by a modeler is realized, the drawn model datum line is associated with the first drawing input, and meanwhile, the accuracy of the model datum line is also ensured.
Further, the processor 1010 is configured to determine a first drawing point according to an intersection point of the intersection line and the ground model.
According to the embodiment of the application, the intersection point between the ground model and the intersection line is used as the first drawing point, so that the accuracy of the drawn model reference line can be guaranteed.
Further, a processor 1010 configured to generate a wall model according to a plurality of model reference lines;
a processor 1010 for generating a top model of the house in a three-dimensional coordinate system, the top model being parallel to the ground model;
and a processor 1010 for establishing a target model according to the wall model, the ground model and the top model.
In the embodiment of the application, the ground model, the wall model and the top model are respectively established through the modeling system, and the ground model, the wall model and the top model are combined to obtain the accurate target model.
It can be understood that, because the ground model and the top surface model are parallel to each other, the annotator only needs to mark the intersection lines between adjacent walls in the house with the first drawing input, and the modeling system can automatically generate the corresponding ground model and top surface model.
Further, a user input unit 1007 for receiving selection inputs for a plurality of model reference lines;
a processor 1010 for selecting at least two of the plurality of model reference lines in response to a selection input;
and a processor 1010 configured to generate a wall model from the at least two model reference lines.
According to the modeling system, the wall model can be automatically generated by receiving the selection input of the modeling personnel for at least two model reference lines in the multiple model reference lines, so that the operation steps of modeling by the labeling personnel are further simplified, and the time required by modeling is reduced.
Further, the user input unit 1007 is configured to receive a second drawing input for the target panorama, where the second drawing input is for a feature point of a door body in the house;
and the processor 1010 is used for responding to the second drawing input and generating a door body model on the wall body model according to the characteristic points of the door body.
In the embodiment of the application, the annotating personnel manually annotate the door body in the panoramic image, and the modeling system can establish the door body characteristics in the target model according to the second drawing input by the annotating personnel.
Further, the processor 1010 is configured to establish a spherical projection in the three-dimensional coordinate system;
and the processor 1010 is configured to configure the target panorama on the spherical projection.
According to the embodiment of the application, the target panorama is configured in the spherical projection, and the modeling system displays the target panorama configured in the spherical projection, so that a modeler can observe the target panorama without distortion, and the modeler can conveniently and manually input the first drawing input corresponding to the intersection line between the adjacent wall surfaces in the house.
Further, the processor 1010 is configured to obtain a first reference point of the target panorama and a second reference point of the spherical projection, where the first reference point is a camera viewpoint of the panorama, and the second reference point is a center point of the spherical projection;
a processor 1010 for mapping the target panorama onto the spherical projection such that the first reference point and the second reference point coincide.
In the embodiments of the present application, the target panorama and the spherical projection are positioned relative to each other through the camera viewpoint of the target panorama and the center point of the spherical projection, which improves the accuracy of the mapped spherically projected panorama, so the first drawing input performed by the modeler on that panorama is also more accurate.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes at least one of a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 1009 may be used to store software programs as well as various data. The memory 1009 may mainly include a first storage area storing a program or an instruction and a second storage area storing data, wherein the first storage area may store an operating system, an application program or an instruction (such as a sound playing function, an image playing function, and the like) required for at least one function, and the like. Further, the memory 1009 may include volatile memory or nonvolatile memory, or the memory 1009 may include both volatile and nonvolatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. The volatile Memory may be a Random Access Memory (RAM), a Static Random Access Memory (Static RAM, SRAM), a Dynamic Random Access Memory (Dynamic RAM, DRAM), a Synchronous Dynamic Random Access Memory (Synchronous DRAM, SDRAM), a Double Data Rate Synchronous Dynamic Random Access Memory (Double Data Rate SDRAM, ddr SDRAM), an Enhanced Synchronous SDRAM (ESDRAM), a Synchronous Link DRAM (SLDRAM), and a Direct Memory bus RAM (DRRAM). The memory 1009 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 1010 may include one or more processing units; optionally, the processor 1010 integrates an application processor and a modem processor, where the application processor mainly handles operations involving the operating system, user interface, application programs, and the like, and the modem processor mainly handles wireless communication signals, such as a baseband processor. It can be understood that the modem processor described above may also not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned embodiment of the model generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. Readable storage media, including computer readable storage media such as computer read only memory ROM, random access memory RAM, magnetic or optical disks, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the above-mentioned embodiment of the model generation method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. A model generation method for modeling a house, the model generation method comprising:
arranging a target panorama in a three-dimensional coordinate system, wherein the target panorama corresponds to the house;
receiving a first drawing input for the target panorama, the first drawing input being for lines of intersection between adjacent walls in the house;
drawing a plurality of model reference lines in the three-dimensional coordinate system according to the intersection lines in response to the first drawing input;
and generating a target model according to the plurality of model reference lines.
2. The model generation method according to claim 1, wherein said drawing a plurality of model reference lines in said three-dimensional coordinate system according to said intersection lines comprises:
generating a ground model of the house in the three-dimensional coordinate system;
determining a first drawing point in the ground model according to the intersection line;
and generating the model reference line along a direction perpendicular to the ground model, with the first drawing point as a starting point.
3. The model generation method of claim 2, wherein said determining a first plotted point in the ground model from the intersection line comprises:
and determining the first drawing point according to the intersection point of the intersection line and the ground model.
4. A model generation method according to claim 2 or 3, wherein said building a target model from said plurality of model reference lines comprises:
generating a wall model according to the plurality of model reference lines;
generating a top surface model of the house in the three-dimensional coordinate system, the top surface model being parallel to the ground model;
and establishing the target model according to the wall model, the ground model and the top surface model.
5. The model generation method of claim 4, wherein said generating a wall model from said plurality of model reference lines comprises:
receiving selection input for the plurality of model reference lines;
selecting at least two of the plurality of model reference lines in response to the selection input;
and generating the wall model according to the at least two model reference lines.
6. The model generation method of claim 5, wherein, before the generating the top surface model in the three-dimensional coordinate system, the method further comprises:
receiving a second drawing input aiming at the target panorama, wherein the second drawing input aims at the characteristic points of the door body in the house;
and responding to the second drawing input, and generating a door body model on the wall body model according to the characteristic points of the door body.
7. The model generation method according to any one of claims 1 to 3, wherein the arranging the target panorama in a three-dimensional coordinate system comprises:
establishing a spherical projection in the three-dimensional coordinate system;
and configuring the target panorama on the spherical projection.
8. The model generation method of claim 7, wherein said configuring the target panorama on the spherical projection comprises:
acquiring a first reference point of the target panorama and a second reference point of the spherical projection, wherein the first reference point is a camera viewpoint of the panorama, and the second reference point is a central point of the spherical projection;
mapping the target panorama onto the spherical projection so that the first reference point and the second reference point coincide.
9. A model generation apparatus for modeling a house, the model generation apparatus comprising:
a configuration module, configured to configure a target panorama in a three-dimensional coordinate system, the target panorama corresponding to the house;
a receiving module for receiving a first drawing input for the target panorama, the first drawing input being for lines of intersection between adjacent walls in the house;
a drawing module for drawing a plurality of model reference lines in the three-dimensional coordinate system according to the intersection lines in response to the first drawing input;
and the modeling module is used for generating a target model according to the plurality of model reference lines.
10. An electronic device, comprising:
a memory having a program or instructions stored thereon;
a processor for implementing the steps of the model generation method of any one of claims 1 to 8 when executing the program or instructions.
11. A readable storage medium on which a program or instructions are stored, characterized in that the program or instructions, when executed by a processor, implement the steps of the model generation method according to any one of claims 1 to 8.
CN202111669474.1A 2021-12-31 2021-12-31 Model generation method, model generation device, electronic device, and readable storage medium Pending CN114329675A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111669474.1A CN114329675A (en) 2021-12-31 2021-12-31 Model generation method, model generation device, electronic device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111669474.1A CN114329675A (en) 2021-12-31 2021-12-31 Model generation method, model generation device, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
CN114329675A (en) 2022-04-12

Family

ID=81021784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111669474.1A Pending CN114329675A (en) 2021-12-31 2021-12-31 Model generation method, model generation device, electronic device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN114329675A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115329420A (en) * 2022-07-18 2022-11-11 北京五八信息技术有限公司 Marking line generation method and device, terminal equipment and storage medium
CN115329420B (en) * 2022-07-18 2023-10-20 北京五八信息技术有限公司 Marking generation method and device, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
US10074217B2 (en) Position identification method and system
US9760987B2 (en) Guiding method and information processing apparatus
US9756261B2 (en) Method for synthesizing images and electronic device thereof
US8751969B2 (en) Information processor, processing method and program for displaying a virtual image
US20210035346A1 (en) Multi-Plane Model Animation Interaction Method, Apparatus And Device For Augmented Reality, And Storage Medium
US20160148430A1 (en) Mobile device, operating method for modifying 3d model in ar, and non-transitory computer readable storage medium for storing operating method
CN113240769A (en) Spatial link relation identification method and device and storage medium
CN110648363A (en) Camera posture determining method and device, storage medium and electronic equipment
JP2020098568A (en) Information management device, information management system, information management method, and information management program
CN108430032B (en) Method and equipment for realizing position sharing of VR/AR equipment
CN114638939A (en) Model generation method, model generation device, electronic device, and readable storage medium
CN114329675A (en) Model generation method, model generation device, electronic device, and readable storage medium
CN111523334A (en) Method and device for setting virtual forbidden zone, terminal equipment, label and storage medium
US20140142900A1 (en) Information processing apparatus, information processing method, and program
CN109785444A (en) Recognition methods, device and the mobile terminal of real plane in image
CN114299809B (en) Direction information display method, display device, electronic apparatus, and readable storage medium
CN114299271A (en) Three-dimensional modeling method, three-dimensional modeling apparatus, electronic device, and readable storage medium
CN114820968A (en) Three-dimensional visualization method and device, robot, electronic device and storage medium
CN114357554A (en) Model rendering method, rendering device, terminal, server and storage medium
CN114782692A (en) House model repairing method and device, electronic equipment and readable storage medium
CN113421343A (en) Method for observing internal structure of equipment based on augmented reality
CN109472873B (en) Three-dimensional model generation method, device and hardware device
US20240203068A1 (en) Method and system for providing augmented reality object based on identification code
CN108446237B (en) Test method, test device, storage medium and electronic equipment
CN114332327A (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination