WO2024124370A1 - Model construction method and apparatus, storage medium, and electronic device - Google Patents


Info

Publication number
WO2024124370A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
coefficient
target
parameter
processed
Prior art date
Application number
PCT/CN2022/138339
Other languages
French (fr)
Chinese (zh)
Inventor
张哲
朱丹枫
陈乃川
李坤
赵振焱
姜苏珈
褚虓
顾明
Original Assignee
京东方科技集团股份有限公司
Application filed by 京东方科技集团股份有限公司 filed Critical 京东方科技集团股份有限公司
Priority to PCT/CN2022/138339 priority Critical patent/WO2024124370A1/en
Publication of WO2024124370A1 publication Critical patent/WO2024124370A1/en

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, and in particular to a model building method, a model building device, a storage medium, and an electronic device.
  • the target model is bound with streaming media to push streaming media data corresponding to the target model to a corresponding preset terminal.
  • the non-business logic includes model luminescence logic
  • the diffuse reflection illumination coefficient is determined according to the first light intensity parameter and the second light intensity parameter.
  • determining a corresponding illumination mixing coefficient according to the diffuse illumination coefficient, the specular illumination coefficient and the distance field parameter includes:
  • the normal vector angle between the world coordinates and the screen coordinates corresponding to the coordinate point to be processed in the target model is configured as a first standard angle
  • FIG3 schematically shows a schematic diagram of a method for modeling rule execution order according to an exemplary embodiment of the present disclosure
  • FIG6 schematically shows a schematic diagram of cutting a convex polygon in an exemplary embodiment of the present disclosure
  • FIG13 schematically shows a schematic diagram of a method for controlling display of points of interest in an exemplary embodiment of the present disclosure
  • FIG20 schematically shows a schematic diagram of a zoomed display effect in an exemplary embodiment of the present disclosure
  • FIG21 schematically shows a schematic diagram of a service request processing method in an exemplary embodiment of the present disclosure
  • this example embodiment provides a model construction method, which can be applied to model construction in application scenarios such as smart parks and smart cities.
  • the above-mentioned model construction method may include:
  • the model building method provided in this example implementation obtains the first-stage model by performing stereo processing and material mapping on the basic geographic information data obtained from the target data source, and then binds non-business logic and business logic to the first-stage model, so that the resulting target model achieves higher model performance at a lower modeling cost.
  • the streaming media data of the model can be pushed to the preset terminal device, so that the user can view the real-time model changes on different terminal devices, and realize data simulation based on digital twins.
  • a production layer may be provided on the server side for executing the above step S11.
  • the above target data source may be an open-source official data platform such as OpenStreetMap or Tianditu (the national geographic information public service platform of China).
  • data may be pulled from the above open source official data platform through a GIS (Geographic Information System) engine to obtain basic geographic information.
  • the GIS engine may use the Cesium open source map engine.
  • the basic geographic information data obtained may also be electronic drawing data in various formats. The basic geographic information data may include the longitude and latitude, terrain DEM, Floor Height, POI (Point of Interest) location information, etc. corresponding to buildings in both the real environment and the virtual environment.
  • the data of the above virtual environment may also be obtained from the electronic drawing data; for example, it may be data on some planned but unbuilt buildings or environments.
  • step S11 may include:
  • a standard convex polygon is formed based on the formula Zf(x)*Center(x,y,z) and used as the standard convex polygon to be cut, as shown in Figure 4.
  • Zf(x) represents the outer bounding box point set
  • Center(x,y,z) represents the center point of the target shape. Since Zf(x) exists, Center(x,y,z) can be used as a unit height vector for preliminary stretching or stacking, forming a base convex polygon with elevation, as shown in Figure 5.
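The extrusion step above can be sketched as follows, interpreting Zf(x) as the 2D footprint point set and the Center height vector as the elevation. Function and variable names here are illustrative assumptions, not identifiers from the disclosure.

```python
def extrude_footprint(footprint, height):
    """Stretch a 2D footprint into a base convex polygon with elevation.

    footprint: list of (x, y) vertices of the convex footprint (Zf(x)).
    height:    elevation derived from the center height vector.
    Returns (vertices, side_faces): prism vertices and quad side faces.
    """
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    vertices = bottom + top

    n = len(footprint)
    # Each side face joins an edge of the bottom ring to the matching top edge.
    side_faces = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, side_faces
```

For a square footprint of four vertices, this yields eight prism vertices and four side quads; cutting such prisms then produces the building shells described above.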
  • the first-stage model can also be bound to business logic and non-business logic.
  • non-business logic can include lighting effects in three-dimensional scenes, day and night display effects, weather systems, etc.
  • Business logic can include data push business logic of surveillance cameras, point of interest management business logic, model viewing business logic, etc.
  • in step S13, streaming media binding is performed on the target model to push streaming media data corresponding to the target model to the corresponding preset terminal.
  • the target model can also be bound to several terminal devices after encapsulation, so that different terminal devices can receive streaming media data corresponding to the target model.
  • the addresses of different terminal devices can be pre-bound on the server side, and the content of the streaming media can be configured.
  • the terminal device can be a staff member's smart terminal device, an IOC data dashboard, etc., to realize model display, data visualization, and interaction on the terminal device.
  • Step S31 determining the object to be processed in the first stage model according to the texture and material of the model
  • the advantage of this is that it can radiate the entire range of scenes, but the disadvantage is that it reduces the rendering effect of point light sources in the scene itself: because the exposure coefficient is constant, local details of the model must be superimposed with multiple post-processing volumes, and during level streaming the linked control coefficients cause memory waste and a large performance overhead. The alternative is to use separate materials for processing: for each mesh that needs processing, two sets of texture materials and texture maps are pre-made and dynamically loaded and unloaded according to the corresponding lighting threshold. The advantage of this is that all materials can be specially customized; the disadvantage is that during iteration and repair the corresponding materials and textures must be replaced frequently, which makes the model heavier (the number of texture maps and materials is doubled).
  • non-business logic may be added to the first-stage models respectively; or, non-business logic may be added to the three-dimensional scene model after combining the first-stage models.
  • each sub-model in the three-dimensional scene model can be classified, and the same non-business logic can be bound to the sub-models of the same type.
  • the same non-business logic can be used for each type of sub-model, such as a building facade, a light sign, a staircase, etc.
  • the same luminous coefficient can be used for sub-models of the same type.
  • the UE5 engine can be used to bind a luminous material to the model, and the luminous effect under night scene conditions can be configured for the model by calling an interface to meet the night scene rendering mode.
  • the diffuse reflection model may adopt the Lambert model, and the formula may include:
  • Ia represents the intensity of ambient light
  • Kd represents the diffuse reflection coefficient of the material to ambient light
  • Iambdiff represents the intensity of the light reflected by the diffuse reflector and the ambient light.
  • Il represents the intensity of the point light source
  • θ represents the angle between the incident light direction and the vertex normal, called the incident angle, with 0° ≤ θ ≤ 90°
  • Ildiff represents the intensity of the light reflected by the diffuse reflector and the directional light.
  • the Lambert lighting model can include:
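The Lambert formulas themselves appear only as images in the source; a standard reconstruction, consistent with the symbols defined above (Ia, Kd, Il, θ), is:

```latex
I_{\mathrm{ambdiff}} = K_d \, I_a
\qquad
I_{\mathrm{ldiff}} = K_d \, I_l \cos\theta \quad (0^\circ \le \theta \le 90^\circ)
\qquad
I_{\mathrm{diff}} = I_{\mathrm{ambdiff}} + I_{\mathrm{ldiff}} = K_d I_a + K_d I_l \cos\theta
```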
  • calculating the specular reflection illumination coefficient corresponding to the object to be processed may specifically include: determining an initial specular coefficient in combination with the specular reflection coefficient, point light source intensity, highlight index, and a first light direction parameter; and correcting the initial specular coefficient using a second light direction parameter to obtain the specular reflection illumination coefficient.
  • the specular reflection model may adopt the Phong model.
  • the Phong model considers that the light intensity of specular reflection is related to the angle between the reflected light and the line of sight, and the formula may include:
  • Ks represents the specular reflection coefficient
  • Ns represents the highlight index
  • V represents the observation direction from the vertex to the viewpoint
  • R represents the direction of reflected light.
  • the Blinn-Phong illumination model can be used to correct specular light.
  • Blinn-Phong is a model based on the Phong model, and its formulas include:
  • N represents the unit normal vector of the incident point
  • H represents the intermediate vector between the light incident direction L and the viewpoint direction V, which is usually also called the half-angle vector.
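The Phong and Blinn-Phong formulas are likewise only images in the source; below is a hedged Python sketch of the standard forms, using the symbols defined above (Ks, Il, Ns, V, R, N, H). The vector helpers and the clamping of the dot product to zero are our assumptions.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def phong_specular(ks, il, ns, v_dir, r_dir):
    """Classic Phong: I_spec = Ks * Il * max(V . R, 0)^Ns."""
    return ks * il * max(dot(normalize(v_dir), normalize(r_dir)), 0.0) ** ns

def blinn_phong_specular(ks, il, ns, l_dir, v_dir, normal):
    """Blinn-Phong correction via the half-angle vector H = (L + V)/|L + V|:
    I_spec = Ks * Il * max(N . H, 0)^Ns."""
    l_n, v_n = normalize(l_dir), normalize(v_dir)
    h = normalize(tuple(a + b for a, b in zip(l_n, v_n)))
    return ks * il * max(dot(normalize(normal), h), 0.0) ** ns
```

Blinn-Phong avoids computing the reflection vector R per pixel and behaves more stably when the angle between V and R exceeds 90°, which is why it is used here as the correction step.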
  • determining the corresponding illumination mixing coefficient according to the diffuse reflection illumination coefficient, the specular reflection illumination coefficient and the distance field parameter may specifically include: calculating the mixing angle parameter corresponding to the object to be processed based on the angle corresponding to any two sample points in the object to be processed; and determining the corresponding illumination mixing coefficient by using the mixing angle parameter in combination with the diffuse reflection illumination coefficient, the specular reflection illumination coefficient and the distance field parameter.
  • the influence of the illumination range and the cosine distance field can be calculated.
  • the cosine formula of the angle between vector A(x1, y1) and vector B(x2, y2) in two-dimensional space can include: cos θ = (x1·x2 + y1·y2) / (√(x1² + y1²) · √(x2² + y2²))
  • the cosine of the angle between any two n-dimensional sample points a(x11, x12, ..., x1n) and b(x21, x22, ..., x2n) can include: cos θ = (Σk x1k·x2k) / (√(Σk x1k²) · √(Σk x2k²))
  • the value range of the angle cosine is [-1,1]: the larger the cosine, the smaller the angle between the two vectors, and the smaller the cosine, the larger the angle; when the two vectors point in the same direction the cosine takes the maximum value of 1, and when they point in opposite directions it takes the minimum value of -1.
  • the mixing angle A can be determined by the above formula.
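The cosine-of-angle computation above is the standard cosine-similarity formula; a straightforward sketch for n-dimensional sample points:

```python
import math

def cosine_angle(a, b):
    """Return cos(theta) between sample points a and b; the range is [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

The mixing angle A is then obtained by applying arccos to this value for the chosen pair of sample points in the object to be processed.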
  • For example, taking the stairs as an example: first select the target stairs in the three-dimensional model and screen the materials and textures of each stair sub-model; if it is an L1-level white film, no processing is performed.
  • diffuse reflection processing can be performed simultaneously to calculate the diffuse reflection illumination coefficient
  • specular reflection processing can be performed to obtain the initial specular coefficient
  • the initial specular coefficient can be corrected to obtain the specular reflection illumination coefficient
  • the distance field calculation can be performed simultaneously; wherein, the distance field calculation can be implemented using conventional methods, and the present disclosure will not repeat it.
  • the diffuse reflection illumination coefficient, the specular reflection illumination coefficient, and the distance field parameter are combined and calculated using the above formula to obtain the corresponding illumination mixing coefficient.
  • the illumination mixing coefficient is configured as the illumination coefficient of the model.
  • the coefficient is dynamically calculated and the luminous state is accurately configured, effectively solving the luminous effect of L1–L2-level models under night-scene conditions without natural lighting, close to real night-scene lights, to meet the night-scene rendering mode.
  • the original texture map can be used to produce the luminous material without using additional texture maps, thus reducing the complexity of the model.
  • the method may further include:
  • Step S101 obtaining spline configuration parameters through a preset parameter interface, and performing animation path planning according to the spline configuration parameters;
  • Step S102 binding the skeleton volume array and the spline of the virtual object to move the virtual object along the planned path.
  • the 3D model can also include a large number of movable virtual objects, such as pedestrians, vehicles, aircraft, animals in the smart park, and water areas and animals in the water areas, etc.
  • existing technical solutions for animation flows produced by software such as Maya and Houdini will cause GPU overload.
  • splines can be used to plan the virtual objects in the model.
  • the configuration parameters of the Spline can be obtained through the blueprint communication interface corresponding to the model.
  • the spline configuration parameters may include: the type of virtual object, time information, key point coordinates of the Spline, and so on.
  • the Spline key points can be used to mark the planned path of the virtual object in the model, and each Spline key point can correspond to the coordinates in the world coordinate system.
  • the skeleton array of the virtual object can be used to bind with the corresponding Spline, so as to identify the driving path of virtual objects such as pedestrians, bicycles, vehicles, and airplanes in the digital twin scene.
  • the speed can be configured by configuring the path and time in the spline configuration parameters. Different speeds can be configured for the skeletons corresponding to different types of virtual objects. As shown in Figure 11, when planning the Spline key points, at different Spline key points, the rotation value of the skeleton can be configured according to the actual business needs to achieve the animation effect of the spline bending.
  • the multi-dimensional dynamic effects of the skeleton can be achieved.
  • when using a Spline to plan the path of the skeleton, intermittent displacement can be used to complete the corresponding position changes and the corresponding animation effects.
  • the flow of people and vehicles, etc. can be achieved by arbitrarily replacing the grid content to complete the corresponding animation expression, matching the animation capabilities of complex road networks and non-standard grid lines to complete the rendering and scene display of the corresponding traffic part.
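The spline-driven movement described above can be sketched as a skeleton root interpolated along Spline key points at a per-type speed. In a real engine (e.g. UE5) this is done through spline components; here the key points, the speed table, and the piecewise-linear interpolation are all illustrative assumptions.

```python
import math

SPEED_BY_TYPE = {"pedestrian": 1.4, "vehicle": 10.0}  # metres/second (assumed)

def position_at(key_points, obj_type, t):
    """Return the (x, y, z) position reached after t seconds of travel
    along the piecewise-linear path through the Spline key points."""
    speed = SPEED_BY_TYPE[obj_type]
    remaining = speed * t
    for p0, p1 in zip(key_points, key_points[1:]):
        seg = math.dist(p0, p1)
        if remaining <= seg:
            f = remaining / seg
            return tuple(a + f * (b - a) for a, b in zip(p0, p1))
        remaining -= seg
    return key_points[-1]  # clamp at the end of the planned path
```

Per-key-point rotation values for the bending animation would be attached to each entry of `key_points` in the same way, interpolated alongside the position.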
  • this business logic can be encapsulated in the form of a plug-in. During the model construction process, the external business-logic plug-in can be executed through a blueprint interface call to achieve the corresponding function.
  • the method may further include:
  • Step S121 configuring the normal vector angle between the world coordinates and the screen coordinates corresponding to the coordinate point to be processed in the target model as a first standard angle
  • Step S122 configuring the approximate three-dimensional coordinates of the coordinate point to be processed in the screen coordinate system according to the first standard angle
  • Step S123 splitting the approximate three-dimensional coordinates into coordinate vectors, and selecting a target point of interest and dynamically binding it to the coordinate point to be processed according to the result of the coordinate vector splitting.
  • the location and number of points of interest in the three-dimensional model generally depend on the data in the real scene. Therefore, when moving the model and viewing the model, there will be problems such as POIs blocking each other and discrete position points.
  • the human eye has a limit value of the field of vision; therefore, some POIs will be alternately displayed and hidden as the viewing angle changes and moves. If the traditional POI point traversal method is used to control the display and angle, when the number and types of POI points are large, the model carrier will be overloaded and the frame rate will be reduced. If the traditional method of inserting the model is used to process POIs, the reusability and iteration will be very poor.
  • the POI (Point of Interest) in the model can include monitoring, access control, water and electricity meters, time clocks, and other POIs set according to actual business needs.
  • the above-mentioned method of step S121–step S123 can be implemented during non-business logic binding.
  • the above-mentioned coordinate points to be processed can be points of interest in the model.
  • the method for obtaining the height can include taking the arcsine of the angle between the normal vector of the world coordinates and the normal vector of the screen coordinates as the standard angle, so as to obtain the approximate Z-axis range.
  • the next step is to realize automatic compensation, divide the approximate (X, Y, Z) coordinates into eight equal vectors, take the POI with the smallest absolute value for dynamic binding, and realize the screen inverse intersection mapping and bind the longitude and latitude to the world coordinates, so as to realize the dynamic adsorption of the coordinates of the point of interest.
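The "split into eight vectors, bind the POI with the smallest absolute value" step above can be sketched loosely as follows. The disclosure does not fully specify how the eight vectors are derived, so this interpretation (the eight sign-octant variants of the approximate coordinate) and the Euclidean distance metric are assumptions.

```python
import itertools
import math

def octant_vectors(coord):
    """Split an approximate (x, y, z) coordinate into its eight octant vectors."""
    x, y, z = coord
    return [(sx * abs(x), sy * abs(y), sz * abs(z))
            for sx, sy, sz in itertools.product((1, -1), repeat=3)]

def bind_poi(coord, pois):
    """Dynamically bind the POI whose offset from any of the eight octant
    vectors has the smallest absolute value."""
    vectors = octant_vectors(coord)
    return min(pois, key=lambda p: min(math.dist(p, v) for v in vectors))
```

The bound POI is then mapped back through the screen inverse intersection so its longitude and latitude stay attached to world coordinates, giving the dynamic adsorption effect described above.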
  • the method may further include: dividing interest point sets according to types corresponding to each interest point in the target model; configuring each interest point set as a sub-level corresponding to the main scene of the target model, so as to load each interest point set through level streaming according to display control of the target model.
  • corresponding points of interest sets can be constructed for different types of points of interest.
  • Various types of POI sets are processed as sub-levels of the main scene, so as to ensure that fixed levels are streamed at a fixed rhythm and the corresponding POI loading is completed.
  • the implementation logic of completing the classified loading of POIs through level streaming is realized.
  • step S131 in response to a display control operation on the target model, a current field of view of the virtual camera is acquired in real time;
  • Step S132 concurrently controlling the binding of the interest point followed by the current focus on the x, y, and z time axes, so as to maintain the display position of the target interest point.
  • the model can respond to the user's display control operation to move and rotate accordingly.
  • the corresponding field of view of the virtual camera corresponding to the model can be obtained in real time.
  • multi-dimensional timeline control and concurrent control are used for execution; that is, the X, Y, and Z timelines are bound to the POI followed by the focus, so as to control the display and hiding of the POI and keep the POI correctly positioned without omission or camera occlusion.
  • the method may further include: using an angle controller to control the switching of the time axes of each dimension.
  • the model can be calibrated using the native blueprint low-code method to reduce the amount of code work.
  • by classifying the points of interest in the model, grouping them in the form of collections, and nesting them according to the multi-fusion of the level stream, the world coordinates can be unified while ensuring performance optimization.
  • the above method realizes the automatic compensation of the coordinates of the points of interest, and the screen coordinates are inversely mapped to the world scene to show the height difference of the POI points. The POI in the model shown in Figure 14 maintains the orthogonal projection effect outward from the screen; when a single position changes relative to the screen, the fixed effect relative to world coordinates is shown in Figure 15.
  • the method may further include:
  • Step S171 identifying a device type of an input device in response to a display control operation on the target model
  • Step S172 when it is determined that the input device is a first type device, the orthogonal vector parameter of the angle direction between the current central axis of the virtual camera and the relative offset parameter of the input device is determined; or,
  • Step S173 when it is determined that the input device is a second type device, determining an acceleration/deceleration parameter corresponding to the display control operation; and determining an offset parameter in combination with the execution time of the acceleration/deceleration parameter and a preset step length;
  • Step S174 correcting the display effect of the target model according to the offset parameter.
  • roaming scenes are a collective name for actions such as moving and rotating in a virtual scene that do not follow a predetermined script.
  • in addition to rotation and translation, roaming also includes operations such as zooming in and out.
  • in the field of digital twins, roaming is also an important capability of the model.
  • a user views a model on a terminal device
  • the terminal device is an electronic device such as a mobile phone or a tablet computer equipped with a touch screen
  • the user can view the model by touch
  • the terminal device is a laptop or a desktop computer
  • the model can be viewed and moved by input devices such as a mouse and a keyboard.
  • the type of input device can be first identified.
  • the first type of device mentioned above can be a mouse or a keyboard
  • the second type of device can be a touch screen.
  • the central axis corresponding to the current virtual camera can be calculated, and the standard label quantity can be calculated.
  • this value is assumed to be S1–S2.
  • the product of this move vector and the step length is the total length of the corresponding offset.
  • the step length is the corresponding offset pixel value and the corresponding scaling ratio value.
  • the step size refers to the minimum offset of movement/rotation in the scene, which is the orthogonal offset of the central axis of the camera's specified direction/angle and the corresponding eye distance.
  • the step size can be set manually or through parameter configuration; assuming the step size is 10, the step size is used by default instead of the pixel length (since the display layer uses pixels as the unit of resolution, pixels are also used here as the unit).
  • the position of the central axis is the translation of the cosine value of the scene center axis (determined during modeling) to the XY plane of the lens angle, which is the position of the translation central axis corresponding to the camera.
  • the offset correction can be performed according to the acceleration/deceleration.
  • acceleration and the corresponding deceleration can be calculated as follows: assuming a normalized operation, it can be determined that during the touch-screen process the duration of uniform speed is approximately 66.7%. Therefore, assuming the touch time is 3 s, then 2 s of that time meets the characteristics of mouse control, and the remaining 1 second is divided equally into 0.5 seconds of acceleration and 0.5 seconds of deceleration; the acceleration term is then calculated as: single-step logarithm × acceleration/deceleration duration.
  • the calculation formula can include:
  • the resulting formula is: standard length + log(single-step length) × acceleration duration + mid-phase uniform-speed length + standard length − log(single-step length) × deceleration duration; this is then brought into the length calculation of a conventional mouse.
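The touch-offset calculation above can be sketched as follows: roughly two-thirds of the touch time is treated as uniform motion, and the remainder is split evenly between acceleration and deceleration, each contributing a log-of-step-length term. The constants follow the text; the choice of natural logarithm and the uniform-phase speed model are assumptions.

```python
import math

def touch_offset(total_time, step_length, uniform_speed):
    """Total offset length for a touch gesture of total_time seconds."""
    uniform_time = total_time * (2.0 / 3.0)        # ~66.7% uniform phase
    ramp_time = (total_time - uniform_time) / 2.0  # accel == decel duration
    uniform_length = uniform_speed * uniform_time
    accel_part = step_length + math.log(step_length) * ramp_time
    decel_part = step_length - math.log(step_length) * ramp_time
    return accel_part + uniform_length + decel_part
```

Note that with equal acceleration and deceleration durations the log terms cancel, matching the symmetry of the formula in the text.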
  • if the user's current control operation is rotation/scaling, the corresponding single-step object content value can be changed, that is, the corresponding rotation angle or scaling ratio.
  • Figures 18, 19, and 20 are schematic diagrams of the effects of panning, rotating, and zooming. The above steps can be mixed and bound using the Blend view of the UE engine when binding the business logic. After the model is rendered, it is displayed in the interactive interface of the terminal device.
  • the method may further include:
  • Step S211 receiving a service request from a terminal device through a signaling server; wherein the service request includes identification information of a target point of interest, task information, and a terminal device identification;
  • Step S212 processing the service request to obtain streaming media data corresponding to the target point of interest, and pushing the streaming media data corresponding to the target point of interest to the terminal device through a signaling server.
  • a terminal device, such as one or more of the smart phone 101, the IOC visual data panel 102, and the computer 103 shown in FIG22
  • a network 104 may be included.
  • the network 104 is used to provide a medium for a communication link between the terminal device and the server.
  • the network 104 may include various connection types, such as a wired communication link, a wireless communication link, and the like.
  • the number of terminal devices, networks, and servers in FIG22 is merely schematic. Depending on the implementation requirements, any number of terminal devices, networks, and servers may be provided.
  • the signaling server 105 and the data server 106 may be a server cluster consisting of multiple servers, and the like.
  • the terminal device can initiate a service request to the data server through the signaling server.
  • the service request includes the identification information of the target point of interest, the task information and the terminal device identification.
  • the service request can be a path planning task from the engineer's current location to the target maintenance point; or it can be a monitoring task for a specified road section or building, etc.
  • the service request can also be a control operation such as rotating and stretching the model.
  • the three-dimensional model in the digital twin scenario can be generated after the data server performs construction, rendering, and binding of business logic and non-business logic.
  • the generated model can be encapsulated and packaged on the data server and bound to the address of the preset terminal device.
  • Different terminal devices can be configured with different permissions to view different model data.
  • after receiving the service request forwarded by the signaling server, the data server can obtain the streaming media data corresponding to the point of interest and send it to the terminal device through the signaling server.
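The request flow above — a service request carrying a POI identifier, task information, and a terminal identifier, relayed by the signaling server to the data server and answered with a stream push — can be sketched minimally as follows. All class and field names are illustrative assumptions, not part of the disclosure.

```python
class DataServer:
    def __init__(self, streams):
        self.streams = streams  # poi_id -> streaming-media descriptor

    def handle(self, request):
        # Resolve the target point of interest to its streaming media data.
        return self.streams[request["poi_id"]]

class SignalingServer:
    def __init__(self, data_server):
        self.data_server = data_server
        self.pushed = {}  # terminal_id -> stream; stands in for the real push

    def relay(self, request):
        # Forward the service request to the data server, then push the
        # resulting streaming media data back to the requesting terminal.
        stream = self.data_server.handle(request)
        self.pushed[request["terminal_id"]] = stream
        return stream
```

A request would then look like `{"poi_id": ..., "task": ..., "terminal_id": ...}`, matching the three fields of the service request described in step S211.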
  • the model building method provided in the embodiment of the present disclosure can achieve high-precision model building. It can be applied to the model building of the digital twin of the smart park to form an overall digital twin combined with a virtual simulation system.
  • Business logic and non-business logic can be bound in the form of plug-ins. For various mobile devices, the streaming media address sent outward by the model end is bound, and the corresponding access method and interaction logic are formulated according to their own business needs.
  • a model building device 230 is also provided in the embodiment of this example, and the device includes: a first-stage model calculation module 2301, a target model calculation module 2302, and a streaming media data processing module 2303.
  • the first-stage model calculation module 2301 can be used to obtain basic geographic information data from a target data source and perform three-dimensional processing to obtain an initial model; and perform material mapping on the initial model to obtain a first-stage model.
  • the target model calculation module 2302 can be used to perform non-business logic binding on the first-stage model, and perform business logic binding on the first-stage model to generate a target model corresponding to the modeling object.
  • the streaming media data processing module 2303 can be used to perform streaming media binding on the target model, so as to push the streaming media data corresponding to the target model to the corresponding preset terminal.
  • the first-stage model calculation module 2301 can be used to obtain basic geographic information data from a target data source, and filter the basic geographic information data according to preset rules to obtain hierarchical data; execute a set of modeling rules on the hierarchical data to obtain the initial model.
  • the non-business logic includes model luminescence logic; the target model calculation module 2302 may include: a luminescence coefficient configuration module.
  • the luminous coefficient configuration module can be used to determine the object to be processed in the first stage model according to the texture and material of the model; calculate the corresponding diffuse reflection illumination coefficient, specular reflection illumination coefficient and distance field parameter for the object to be processed; determine the corresponding illumination mixing coefficient according to the diffuse reflection illumination coefficient, specular reflection illumination coefficient and distance field parameter; configure the illumination mixing coefficient as the basic luminous parameter of the object to be processed, and configure the corresponding actual luminous coefficient according to the preset ratio according to the position corresponding to the object to be processed based on the basic luminous parameter.
  • the luminous coefficient configuration module may include: determining a first light intensity parameter of the interactive reflection between the diffuse reflector and the ambient light in combination with the ambient light intensity and the reflection coefficient of the material to the ambient light; determining a second light intensity parameter of the interactive reflection between the diffuse reflector and the directional light by combining the intensity of the point light source, the reflection coefficient of the material to the ambient light, and the angle between the incident light direction and the vertex normal; and determining the diffuse reflection illumination coefficient based on the first light intensity parameter and the second light intensity parameter.
  • the luminous coefficient configuration module may include: determining an initial specular coefficient by combining the specular reflection coefficient, point light source intensity, highlight index, and first light direction parameter; and correcting the initial specular coefficient using the second light direction parameter to obtain the specular reflection illumination coefficient.
  • the luminous coefficient configuration module may include: calculating the mixing angle parameters corresponding to the object to be processed based on the angle corresponding to any two sample points in the object to be processed; using the mixing angle parameters, combined with the diffuse reflection illumination coefficient, the specular reflection illumination coefficient and the distance field parameter, to determine the corresponding illumination mixing coefficient.
  • the device may further include: a coordinate point dynamic binding module.
  • the coordinate point dynamic binding module can be used to configure the normal vector angle between the world coordinates and the screen coordinates corresponding to the coordinate point to be processed in the target model as a first standard angle; configure the approximate three-dimensional coordinates of the coordinate point to be processed in the screen coordinate system according to the first standard angle; perform coordinate vector splitting on the approximate three-dimensional coordinates, and select the target interest point for dynamic binding with the coordinate point to be processed according to the coordinate vector splitting result.
  • the device may further include: a display control module.
  • the display control module can be used to respond to a display control operation on the target model and obtain the current field of view of the virtual camera in real time; and to perform concurrent control that binds the point of interest followed by the current focus on the x, y and z time axes, so as to maintain the display position of the target point of interest.
  • the device may further include: a switching control module.
  • the switching control module can be used to control the switching of the time axes of each dimension using an angle controller.
  • the device may further include: a data loading module.
  • the data loading module can be used to divide the interest point sets according to the types corresponding to each interest point in the target model; configure each interest point set as a sub-level corresponding to the main scene of the target model, so as to load each interest point set through level streaming according to the display control of the target model.
  • the device may further include: a path planning module.
  • the path planning module can be used to obtain spline configuration parameters through a preset parameter interface, perform animation path planning according to the spline configuration parameters; and bind the skeleton array of the virtual object to the spline to move the virtual object along the planned path.
  • the device may further include: a display effect correction module.
  • the display effect correction module can be used to identify the device type of the input device in response to the display control operation of the target model; when it is determined that the input device is a first type of device, determine the orthogonal vector parameters of the angle direction between the current central axis of the virtual camera and the relative offset parameters of the input device; or, when it is determined that the input device is a second type of device, determine the acceleration/deceleration parameters corresponding to the display control operation; and determine the offset parameters in combination with the execution time of the acceleration/deceleration parameters and the preset step size; and correct the display effect of the target model according to the offset parameters.
  • the device may further include: a signaling processing module.
  • the signaling processing module can be used to receive a service request from a terminal device through a signaling server; wherein the service request includes identification information of a target point of interest, task information and a terminal device identification; process the service request to obtain streaming media data corresponding to the target point of interest, and push the streaming media data corresponding to the target point of interest to the terminal device through a signaling server.
  • The specific details of each module in the above-mentioned model building device 230 have been described in detail in the corresponding model building method, so they will not be repeated here.
  • an electronic device capable of implementing the above method is also provided.
  • the electronic device 1000 according to this embodiment of the present disclosure is described below with reference to Fig. 24.
  • the electronic device 1000 shown in Fig. 24 is only an example and should not bring any limitation to the functions and scope of use of the embodiment of the present disclosure.
  • the electronic device 1000 includes a central processing unit (CPU) 1001, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage part 1008 to a random access memory (RAM) 1003.
  • Various programs and data required for system operation are also stored in the RAM 1003.
  • the CPU 1001, the ROM 1002, and the RAM 1003 are connected to each other via a bus 1004.
  • An input/output (I/O) interface 1005 is also connected to the bus 1004.
  • the following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, etc.; an output section 1007 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage section 1008 including a hard disk, etc.; and a communication section 1009 including a network interface card such as a LAN (Local Area Network) card, a modem, etc.
  • the communication section 1009 performs communication processing via a network such as the Internet.
  • a drive 1010 is also connected to the I/O interface 1005 as needed.
  • a removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 1010 as needed so that a computer program read therefrom is installed into the storage section 1008 as needed.
  • an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a storage medium, and the computer program includes program code for executing the method shown in the flowchart.
  • the computer program can be downloaded and installed from a network through a communication part 1009, and/or installed from a removable medium 1011.
  • the electronic device may be a smart mobile electronic device such as a mobile phone, a tablet computer or a laptop computer, or may be a smart electronic device such as a desktop computer.
  • the storage medium shown in the embodiment of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, device or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, which carries a computer-readable program code. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • Computer-readable signal media may also be any storage medium other than computer-readable storage media, which may send, propagate, or transmit programs for use by or in conjunction with an instruction execution system, apparatus, or device.
  • the program code contained on the storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the above.
  • each box in the flowchart or block diagram can represent a module, a program segment, or a part of a code, and the above-mentioned module, program segment, or a part of a code contains one or more executable instructions for realizing the specified logical function.
  • the functions marked in the box can also occur in a different order from the order marked in the accompanying drawings. For example, two boxes represented in succession can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved.
  • each box in the block diagram or flowchart, and the combination of boxes in the block diagram or flowchart can be implemented with a dedicated hardware-based system that performs a specified function or operation, or can be implemented with a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented by software or hardware, and the units described may also be arranged in a processor.
  • the names of these units do not, in some cases, limit the units themselves.
  • the present application also provides a storage medium, which may be included in an electronic device; or may exist independently without being assembled into the electronic device.
  • the above storage medium carries one or more programs, and when the above one or more programs are executed by an electronic device, the electronic device implements the method described in the following embodiments. For example, the electronic device may implement the steps shown in Fig. 1.

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

A model construction method and apparatus, a storage medium, and an electronic device, relating to the technical field of computers. The method comprises: obtaining basic geographic information data from a target data source and performing stereoscopic processing to obtain an initial model, and preprocessing the initial model to obtain a first stage model; performing non-service logic binding on the first stage model, and performing service logic binding on the first stage model to generate a target model corresponding to a modeling object; and performing streaming media binding on the target model so as to push streaming media data corresponding to the target model to a corresponding preset terminal. The present solution can effectively improve the model performance, and can reduce modeling costs; and digital twin-based data simulation is achieved.

Description

Model building method and device, storage medium, and electronic device

Technical Field

The embodiments of the present disclosure relate to the field of computer technology, and in particular to a model building method, a model building device, a storage medium, and an electronic device.

Background

Existing modeling methods include modeling based on real images and modeling based on art drawings. Modeling based on real-image restoration is a restoration modeling method centered on aerial photography and oblique photography. This method can restore the instantaneous state of a scene with high quality, but is limited by the fact that the model itself is composed of pictures; therefore, during secondary development based on the model, or when the model is used in various business scenarios, the model is very unfriendly to changes in lighting and materials and to the addition of art effects. Modeling based on art drawings requires modelers to model manually in various modeling tools with reference to CAD or Revit drawings. The models produced this way are fully compatible with secondary development and the corresponding rendering engines, but because the modeling is manual, the modeling cycle is relatively long; in addition, since the technical level and artistic attainment of different modelers vary, a lot of time is needed later to optimize and improve the models. Moreover, the above modeling methods are costly, and applying a model in a digital twin system places even higher performance requirements on the model.

It should be noted that the information disclosed in the above Background section is only used to enhance the understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.

Summary of the Invention
According to one aspect of the present disclosure, a model building method is provided, the method comprising:
obtaining basic geographic information data from a target data source and performing stereoscopic processing to obtain an initial model; and preprocessing the initial model to obtain a first-stage model;
performing non-business logic binding on the first-stage model and performing business logic binding on the first-stage model, to generate a target model corresponding to the modeling object;
performing streaming media binding on the target model, so as to push streaming media data corresponding to the target model to a corresponding preset terminal.
In an exemplary embodiment of the present disclosure, obtaining basic geographic information data from a target data source and performing stereoscopic processing to obtain an initial model includes:
obtaining basic geographic information data from the target data source, and filtering the basic geographic information data according to preset rules to obtain hierarchical data;
executing a set of modeling rules on the hierarchical data to obtain the initial model.
In an exemplary embodiment of the present disclosure, the non-business logic includes model luminescence logic;
performing non-business logic binding on the first-stage model includes:
determining an object to be processed in the first-stage model according to the texture and material of the model;
calculating a corresponding diffuse reflection illumination coefficient, specular reflection illumination coefficient and distance field parameter for the object to be processed;
determining a corresponding illumination mixing coefficient according to the diffuse reflection illumination coefficient, the specular reflection illumination coefficient and the distance field parameter;
configuring the illumination mixing coefficient as a basic luminous parameter of the object to be processed, and, based on the basic luminous parameter, configuring a corresponding actual luminous coefficient at a preset ratio according to the position corresponding to the object to be processed.
In an exemplary embodiment of the present disclosure, calculating the diffuse reflection illumination coefficient corresponding to the object to be processed includes:
determining a first light intensity parameter of the interactive reflection between the diffuse reflector and the ambient light, by combining the ambient light intensity and the reflection coefficient of the material to the ambient light;
determining a second light intensity parameter of the interactive reflection between the diffuse reflector and the directional light, by combining the point light source intensity, the reflection coefficient of the material to the ambient light, and the angle between the incident light direction and the vertex normal;
determining the diffuse reflection illumination coefficient according to the first light intensity parameter and the second light intensity parameter.
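The diffuse term described above can be sketched with a classic Phong-style formulation: an ambient component (the first light intensity parameter) plus a Lambertian directional component (the second parameter). This is an illustrative reading of the claim, not the patent's exact formula; all function and parameter names here are assumptions.

```python
# Illustrative sketch only: ambient + Lambertian diffuse, assuming a
# Phong-style model. Names are hypothetical, not taken from the patent.
import math

def diffuse_coefficient(ambient_intensity, k_ambient,
                        point_intensity, k_diffuse,
                        incident_dir, vertex_normal):
    """Return (first_param, second_param, diffuse) for one shading point."""
    # First light intensity parameter: diffuse body interacting with ambient light.
    first_param = k_ambient * ambient_intensity
    # Cosine of the angle between the incident light direction and the vertex normal.
    dot = sum(a * b for a, b in zip(incident_dir, vertex_normal))
    norm = math.sqrt(sum(a * a for a in incident_dir)) * \
           math.sqrt(sum(a * a for a in vertex_normal))
    cos_theta = max(0.0, dot / norm)   # back-facing light contributes nothing
    # Second light intensity parameter: diffuse body interacting with directional light.
    second_param = k_diffuse * point_intensity * cos_theta
    return first_param, second_param, first_param + second_param
```

A light shining straight along the normal yields the full directional term; a light behind the surface contributes only the ambient term.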
In an exemplary embodiment of the present disclosure, calculating the specular reflection illumination coefficient corresponding to the object to be processed includes:
determining an initial mirror coefficient by combining the specular reflection coefficient, the point light source intensity, the highlight index and a first light direction parameter;
correcting the initial mirror coefficient using a second light direction parameter, to obtain the specular reflection illumination coefficient.
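For illustration, the specular step can be read as the classic Phong highlight, k_s · I_p · max(0, R·V)^n, with the "second light direction parameter" taken here to be the view direction that corrects the initial mirror coefficient. This interpretation and all names are assumptions, not the patent's definitive formula.

```python
# Illustrative Phong specular sketch. The split into an "initial mirror
# coefficient" and a directional "correction" mirrors the claim's two steps;
# the exact decomposition is an assumption.
def specular_coefficient(k_specular, point_intensity, shininess,
                         reflect_dir, view_dir):
    dot = sum(a * b for a, b in zip(reflect_dir, view_dir))
    initial = k_specular * point_intensity     # initial mirror coefficient
    correction = max(0.0, dot) ** shininess    # correction by the second direction
    return initial * correction
```

When the reflected ray lines up with the view direction the highlight is strongest; it falls off with the highlight index (shininess) as the directions diverge.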
In an exemplary embodiment of the present disclosure, determining the corresponding illumination mixing coefficient according to the diffuse reflection illumination coefficient, the specular reflection illumination coefficient and the distance field parameter includes:
calculating a mixing angle parameter corresponding to the object to be processed, based on the angle corresponding to any two sample points in the object to be processed;
using the mixing angle parameter, in combination with the diffuse reflection illumination coefficient, the specular reflection illumination coefficient and the distance field parameter, to determine the corresponding illumination mixing coefficient.
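One purely illustrative way to combine these quantities is to normalize the sample-point angle into [0, 1] and use it to weight the sum of the diffuse and specular terms, attenuated by the distance-field parameter. The blend below is an assumption made for the sake of a concrete sketch; the patent does not disclose this exact combination.

```python
# Hypothetical illumination mixing sketch: angle-derived weight times the
# lighting terms times the distance-field attenuation. All choices here are
# illustrative assumptions.
import math

def mixing_coefficient(sample_a, sample_b, diffuse, specular, distance_field):
    dot = sum(x * y for x, y in zip(sample_a, sample_b))
    na = math.sqrt(sum(x * x for x in sample_a))
    nb = math.sqrt(sum(x * x for x in sample_b))
    # Mixing angle parameter: angle between the two sample points, scaled to 0..1.
    angle_param = math.acos(max(-1.0, min(1.0, dot / (na * nb)))) / math.pi
    return angle_param * (diffuse + specular) * distance_field
```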
In an exemplary embodiment of the present disclosure, the method further includes:
configuring the normal vector angle between the world coordinates and the screen coordinates corresponding to a coordinate point to be processed in the target model as a first standard angle;
configuring approximate three-dimensional coordinates of the coordinate point to be processed in the screen coordinate system according to the first standard angle;
performing coordinate vector splitting on the approximate three-dimensional coordinates, and selecting a target point of interest for dynamic binding with the coordinate point to be processed according to the coordinate vector splitting result.
In an exemplary embodiment of the present disclosure, the method further includes:
in response to a display control operation on the target model, obtaining the current field of view of the virtual camera in real time;
performing concurrent control that binds the point of interest followed by the current focus on the x, y and z time axes, so as to maintain the display position of the target point of interest.
In an exemplary embodiment of the present disclosure, the method further includes:
using an angle controller to control the switching of the time axis of each dimension.
In an exemplary embodiment of the present disclosure, the method further includes:
dividing point-of-interest sets according to the type corresponding to each point of interest in the target model;
configuring each point-of-interest set as a sub-level corresponding to the main scene of the target model, so as to load each point-of-interest set through level streaming according to the display control of the target model.
In an exemplary embodiment of the present disclosure, the method further includes:
obtaining spline configuration parameters through a preset parameter interface, and performing animation path planning according to the spline configuration parameters; and
binding the skeleton array of a virtual object to the spline, so as to move the virtual object along the planned path.
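The animation path planning step can be sketched as evaluating a spline through the configured control points and sampling positions for a bound object to follow. The Catmull-Rom form below is an assumption: the claim only says "spline configuration parameters", without naming a spline type.

```python
# Illustrative Catmull-Rom path planner. Control points stand in for the
# "spline configuration parameters"; the spline choice is an assumption.
def catmull_rom(p0, p1, p2, p3, t):
    """Interpolate between p1 and p2 at parameter t in [0, 1]."""
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t * t
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def plan_path(points, samples_per_segment=8):
    """Return sampled positions along the spline through the interior points."""
    path = []
    for i in range(1, len(points) - 2):
        for s in range(samples_per_segment):
            path.append(catmull_rom(points[i - 1], points[i],
                                    points[i + 1], points[i + 2],
                                    s / samples_per_segment))
    path.append(points[-2])    # land exactly on the final interior point
    return path
```

A skeleton array bound to the spline would then be stepped through these sampled positions frame by frame.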
In an exemplary embodiment of the present disclosure, the method further includes:
in response to a display control operation on the target model, identifying the device type of the input device;
when it is determined that the input device is a first type of device, determining an orthogonal vector parameter of the angle direction between the current central axis of the virtual camera and the relative offset parameter of the input device; or,
when it is determined that the input device is a second type of device, determining an acceleration/deceleration parameter corresponding to the display control operation, and determining an offset parameter in combination with the execution time of the acceleration/deceleration parameter and a preset step length;
correcting the display effect of the target model according to the offset parameter.
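The two device branches above can be sketched as follows: for a "first type" device (say, a mouse) the offset follows an orthogonal vector of the direction between the camera's central axis and the device's relative offset, while for a "second type" device (say, a keyboard) the offset accumulates from the acceleration/deceleration parameter, its execution time and the preset step. Both formulas and the device examples are assumptions made for illustration.

```python
# Hypothetical input-offset sketch for the two device types in the claim.
def offset_first_type(camera_axis, device_offset):
    # 2D orthogonal vector of the direction from the camera axis to the
    # device's relative offset (rotate the direction by 90 degrees).
    dx = device_offset[0] - camera_axis[0]
    dy = device_offset[1] - camera_axis[1]
    return (-dy, dx)

def offset_second_type(acceleration, duration, step):
    # Distance-like accumulation 0.5 * a * t^2, scaled by the preset step.
    return 0.5 * acceleration * duration * duration * step
```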
In an exemplary embodiment of the present disclosure, the method further includes:
receiving a service request from a terminal device through a signaling server, wherein the service request includes identification information of a target point of interest, task information, and a terminal device identifier;
processing the service request to obtain streaming media data corresponding to the target point of interest, and pushing the streaming media data corresponding to the target point of interest to the terminal device through the signaling server.
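A minimal sketch of this signaling flow: the relayed request carries the target point-of-interest identifier, task information and the terminal device identifier; the backend resolves the streaming-media data for that point of interest and pushes it back through the signaling channel. The field names, the in-memory "stream registry" and the `push` callback are all assumptions for illustration, not a definitive protocol.

```python
# Hypothetical signaling-request handler. `stream_registry` maps POI ids to
# streaming-media data; `push` stands in for the signaling server's push path.
def handle_service_request(request, stream_registry, push):
    poi_id = request["poi_id"]          # identification of the target point of interest
    device_id = request["device_id"]    # terminal device identifier
    media = stream_registry.get(poi_id)
    if media is None:                   # unknown POI: nothing to push
        return False
    push(device_id, media)              # push back through the signaling server
    return True
```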
According to one aspect of the present disclosure, a model building device is provided, comprising:
a first-stage model calculation module, configured to obtain basic geographic information data from a target data source and perform stereoscopic processing to obtain an initial model, and to preprocess the initial model to obtain a first-stage model;
a target model calculation module, configured to perform non-business logic binding on the first-stage model and business logic binding on the first-stage model, to generate a target model corresponding to the modeling object;
a streaming media data processing module, configured to perform streaming media binding on the target model, so as to push streaming media data corresponding to the target model to a corresponding preset terminal.
According to one aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the model building method described in any one of the above is implemented.
According to one aspect of the present disclosure, an electronic device is provided, including:
a processor; and
a memory, configured to store executable instructions of the processor;
wherein the processor is configured to execute any one of the above model building methods by executing the executable instructions.
It should be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and do not limit the present disclosure.
Brief Description of the Drawings
The drawings herein are incorporated into and constitute a part of the specification, show embodiments consistent with the present disclosure, and together with the specification serve to explain the principles of the present disclosure. Obviously, the drawings described below are only some embodiments of the present disclosure; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative work. In the drawings:
Fig. 1 schematically shows a model building method according to an exemplary embodiment of the present disclosure;
Fig. 2 schematically shows a method for constructing an initial model in an exemplary embodiment of the present disclosure;
Fig. 3 schematically shows an execution order of modeling rules in an exemplary embodiment of the present disclosure;
Fig. 4 schematically shows a standard convex polygon in an exemplary embodiment of the present disclosure;
Fig. 5 schematically shows a base convex polygon with elevation in an exemplary embodiment of the present disclosure;
Fig. 6 schematically shows a convex polygon after cutting in an exemplary embodiment of the present disclosure;
Fig. 7 schematically shows a model iteration result in an exemplary embodiment of the present disclosure;
Fig. 8 schematically shows a method for binding lighting logic to a model in an exemplary embodiment of the present disclosure;
Fig. 9 schematically shows a method for calculating a mixed illumination coefficient in an exemplary embodiment of the present disclosure;
Fig. 10 schematically shows a method for animation path planning using a spline in an exemplary embodiment of the present disclosure;
Fig. 11 schematically shows a spline key point planning effect in an exemplary embodiment of the present disclosure;
Fig. 12 schematically shows a method for dynamic compensation of point-of-interest coordinates in an exemplary embodiment of the present disclosure;
Fig. 13 schematically shows a point-of-interest display control method in an exemplary embodiment of the present disclosure;
Fig. 14 schematically shows the effect of a point of interest maintaining outward orthogonal projection from the screen in an exemplary embodiment of the present disclosure;
Fig. 15 schematically shows a point-of-interest display effect in an exemplary embodiment of the present disclosure;
Fig. 16 schematically shows another point-of-interest display effect in an exemplary embodiment of the present disclosure;
Fig. 17 schematically shows a method for display control of a model in an exemplary embodiment of the present disclosure;
Fig. 18 schematically shows the display effect of panning and zooming out in an exemplary embodiment of the present disclosure;
Fig. 19 schematically shows a rotating display effect in an exemplary embodiment of the present disclosure;
Fig. 20 schematically shows a zoomed display effect in an exemplary embodiment of the present disclosure;
Fig. 21 schematically shows a service request processing method in an exemplary embodiment of the present disclosure;
Fig. 22 schematically shows a system architecture in an exemplary embodiment of the present disclosure;
Fig. 23 schematically shows the composition of a model building device in an exemplary embodiment of the present disclosure;
Fig. 24 schematically shows the composition of an electronic device for implementing the above model building method in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in a variety of forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be more comprehensive and complete, and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
In addition, the accompanying drawings are only schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the figures represent the same or similar parts, so their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities that do not necessarily correspond to physically or logically independent entities; these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
In view of the shortcomings and deficiencies of the prior art, this example embodiment provides a model building method, which can be applied to model building in application scenarios such as smart parks and smart cities. Referring to Fig. 1, the above model building method may include:
Step S11, obtaining basic geographic information data from a target data source and performing stereoscopic processing to obtain an initial model, and preprocessing the initial model to obtain a first-stage model;
Step S12, performing non-business logic binding on the first-stage model and performing business logic binding on the first-stage model, to generate a target model corresponding to the modeling object;
Step S13, performing streaming media binding on the target model, so as to push streaming media data corresponding to the target model to a corresponding preset terminal.
In the model construction method provided by this example embodiment, the basic geographic information data obtained from the target data source is subjected to three-dimensional processing and material mapping to obtain a first-stage model, and non-business logic and business logic are then bound to the first-stage model, so that the resulting target model can achieve high model performance while reducing the modeling cost. By binding the target model with streaming media, the streaming media data of the model can be pushed to preset terminal devices, so that users can view real-time model changes on different terminal devices, realizing data simulation based on digital twins.
Below, each step of the model construction method in this example embodiment will be described in more detail with reference to the accompanying drawings and embodiments.
In step S11, basic geographic information data is obtained from a target data source and subjected to three-dimensional processing to obtain an initial model; and the initial model is preprocessed to obtain a first-stage model.
In this example embodiment, a production layer may be provided on the server side for executing the above step S11. For example, the above target data source may be an open-source official data platform such as Open Street Map or Tian Di Tu (the national geographic information public service platform). Specifically, when obtaining the basic geographic data, data may be pulled from the above open-source official data platforms through a GIS (Geographic Information System) engine; for example, the GIS engine may be the Cesium open-source map engine. In addition, the obtained basic geographic information data may also be electronic drawing data in various formats. The obtained basic geographic information data may include the longitude and latitude, story height, terrain DEM, floor height, POI (Point of Interest) location information, and so on, corresponding to buildings as well as real and virtual environments. Moreover, the data of the above virtual environment may also be obtained from electronic drawing data; for example, it may be data of buildings or environments that have been planned but not yet built.
Based on the obtained basic geographic information data, three-dimensional data processing can be performed through 3D graphics software such as the CE platform, the Blender platform, or the Twin Motion platform, forming an L1 white model, i.e., the initial model.
Specifically, the above preprocessing may be a material mapping process for the model. After the initial model is obtained, material maps can be added to it, such as stair materials, lighting effects, and so on, to obtain an L2-level model and generate a model body file in a format such as FBX/obj/3ds, thereby obtaining the first-stage model. In addition, developers can also update and repair the model data of the L2-level model, replacing old data with newly obtained data, as well as identify redundant data in the model and delete it.
In this example embodiment, referring to FIG. 2, the above step S11 may include:
Step S21: obtaining basic geographic information data from a target data source, and filtering the basic geographic information data according to preset rules to obtain hierarchical data;
Step S22: executing a set of modeling rules on the hierarchical data to obtain the initial model.
Specifically, after the basic geographic information data is obtained from the target data source, basic data corresponding to different areas can be selected from it according to preset weight ratios; for example, the core area, transition area, and edge area can be configured to use different proportions of the basic data, or different proportions can be selected according to buildings and the surrounding environment. The selected data serves as the hierarchical data, so that basic data of a preset density and hierarchy can be chosen from the basic geographic information data for modeling, reducing useless data. After the hierarchical data is obtained, the preset set of modeling rules can be executed to perform modeling.
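The region-weighted filtering described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the region names, the ratio values, and the feature representation (a dict with a "region" key) are all hypothetical.

```python
import random

# Hypothetical sampling ratios per region type (the patent does not
# specify concrete values, only that core/transition/edge areas use
# different proportions of the basic data).
REGION_RATIOS = {"core": 1.0, "transition": 0.5, "edge": 0.2}

def select_hierarchical_data(features, ratio_map=REGION_RATIOS, seed=0):
    """Keep a preset fraction of the base features for each region type.

    Features in the core area are kept in full, while transition and
    edge areas are down-sampled, reducing useless data before modeling.
    """
    rng = random.Random(seed)
    selected = []
    for feature in features:
        ratio = ratio_map.get(feature.get("region"), 0.0)
        if rng.random() < ratio:
            selected.append(feature)
    return selected
```

The output of such a filter would then serve as the hierarchical data passed to the modeling rule set.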
In this example embodiment, the set of modeling rules includes, executed in a preset order: the Lot rule; the Floor rule; the Side Facade and Groundfloor rules; the Front and Front Facade rules; the Building rule; the Window, Door, and Wall rules; and texture processing rules.
Referring to the flowchart shown in FIG. 3, after the hierarchical data is obtained by filtering, the Lot rule can be executed first, followed by the Floor rule; then the Side Facade and Groundfloor rules, as well as the Front and Front Facade rules, can be executed in parallel; after that, the Building rule can be executed, followed by the Window, Door, and Wall rules.
The Lot rule can be used to indicate the actual starting point of building construction, i.e., the first rule set in the indication window. The Floor rule can be used to perform typical split operations so that the width of each resulting tile approaches a preset value; in addition, to make the floors better match the display situation and meet digital twin application scenarios, one additional wall element with a preset width can be split off. The Side Facade rule can be used to split the several side faces of the building into multiple floors; the code of this splitting process is the same as that for the front, so that the height of each floor on the front and the sides remains exactly consistent. The Groundfloor rule can be used for the split operation of the first floor, which can have a gate entrance on the far right. The Front rule can be used to define the front of the building. The Front Facade rule can be used to split the front facade of the building into a first floor with a height of 4 and then divide the remaining upper part into multiple floors each with a height of 3.5; moreover, the appearance of the first floor can differ from that of the upper floors, e.g., in the main entrance, heights, doors and windows, colors, and other configurations. The Building rule can be used to split the model into multiple faces by component splitting; for example, the shape named Building is split into 3 different parts: the first part is front (the front of the building), the second part is side (the several side faces of the building), and the third part is Root (i.e., the roof part). The Window, Door, and Wall rules can be used to replace the shape objects of windows, doors, and walls with the corresponding model resources and to assign textures to them. In addition, the set of modeling rules may also include a Tile rule for configuring tiles and the elements that build tiles; for example, nested splits can be performed within a tile along the x-axis and y-axis directions.
Specifically, the above set of modeling rules can be implemented in the form of a rule file. During modeling, the rule file is called from a preset storage address, and the specific rules it contains are executed in their preset order; by reasonably arranging the order of rule execution, the modeling cycle can be effectively shortened. With the above modeling approach, L1-L2 level modeling of a 50-square-kilometer area can be completed within 10 to 30 minutes, effectively shortening the initial modeling cycle by more than 80%. In addition, the set of modeling rules is easy to iterate and maintain: engineers can refine and adjust the rule content at any time according to business needs to upgrade the model level. Theoretically, up to L5-level modeling capability can be achieved.
For example, for the above Lot rule, a standard convex polyhedron is formed based on the formula Zf(x)*Center(x,y,z) as the standard convex polyhedron to be cut, as shown in FIG. 4, where Zf(x) denotes the point set of the outer bounding box and Center(x,y,z) denotes the center point of the target shape. Since Zf(x) exists, Center(x,y,z) can be used as a unit height vector for preliminary stretching or stacking, forming a base convex polyhedron with elevation, as shown in FIG. 5. For the stairs in the model, the outer bounding box must be a convex polyhedron (it can even be understood as a rectangular box); therefore, excluding the invisible bottom face and the top face that requires special treatment, the other four rules are used to cut the convex polyhedron, with the cutting result shown in FIG. 6. When enumerating the convex polyhedra, the following formula is generally used for pre-angle fitting: sin*sqrt(X^2 + Y^2), where X and Y are the coordinate values of the point on the x-axis and y-axis respectively, used to compute the direction offset angle on the z-axis. Of course, other mathematical methods can also be chosen according to the target shape, such as enumeration with proportional scaling; specifically, different mathematical algorithms can be selected based on the regular relationship of the target result values. After the enumeration of the base convex polyhedra is completed, for the model body itself, rounding down via a random function can be used, in conjunction with the window, wall, and door functions in the CE platform, to hollow out the middle, and interpolation formulas can be used for recursion; the resulting iterative model is shown in FIG. 7. The recursive formulas may include:
yi = interp1(x,Y,xi): computes X-axis interpolation, interpolating from x to xi with the magnitude of Y;
yi = interp1(Y,xi): computes Y-axis interpolation, automatically interpolating from Y to xi in arithmetic progression;
yi = interp1(x,Y,xi,method): computes X-axis interpolation, where method interpolates from x to xi with the magnitude of Y within one of the already-interpolated regions;
yi = interp1(x,Y,xi,method,'extrap'): reserves the index 'extrap' for the interpolation result of the previous formula;
yi = interp1(x,Y,xi,method,extrapval): reserves the index extrapval for the interpolation result of the previous formula;
pp = interp1(x,Y,method,'pp'): reserves the index 'pp' for the interpolation result of the previous formula.
In addition, the difference calculation for various stairs can also be performed using methods such as the fixed-value method, interpolation, nearest-neighbor interpolation, and regression.
After the model is built, texture mapping can be performed on it to obtain the initial model. For example, the path of the texture maps can be specified, and texture mapping can be performed in the CE platform.
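The interp1-style linear interpolation listed above can be sketched with NumPy. This is an illustrative analogue of the basic call form, not a full reimplementation of MATLAB's interp1 (the method, 'extrap', and 'pp' variants are omitted); the sample values below are hypothetical.

```python
import numpy as np

def interp1(x, Y, xi):
    """Minimal analogue of the basic interp1(x, Y, xi) call:
    linear interpolation of the known values Y at positions x,
    evaluated at the query positions xi.
    """
    return np.interp(xi, x, Y)

# Example: densify a profile of cumulative floor heights from
# 3 control points to 5 samples (values are illustrative only).
x = np.array([0.0, 1.0, 2.0])
Y = np.array([0.0, 3.5, 7.0])
xi = np.linspace(0.0, 2.0, 5)
yi = interp1(x, Y, xi)   # [0.0, 1.75, 3.5, 5.25, 7.0]
```

Nearest-neighbor or regression variants, as mentioned above, would substitute a different evaluation rule for the same (x, Y, xi) inputs.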
In step S12, non-business logic binding is performed on the first-stage model, and business logic binding is performed on the first-stage model, so as to generate a target model corresponding to the modeling object.
In this example embodiment, a processing layer may be provided on the server side to receive the first-stage models output by the production layer and process them. Specifically, there may be multiple first-stage models generated by the production layer; for example, a corresponding first-stage model may be constructed for each building in a smart park. The model body files of the first-stage models generated by the production layer can each be pushed to the processing layer, which can merge the first-stage models using a virtual engine to complete the creation of the three-dimensional scene.
At the same time, business logic and non-business logic can also be bound to the first-stage model. For example, non-business logic may include lighting effects in the three-dimensional scene, day-night display effects, a weather system, and so on. Business logic may include data push business logic for surveillance cameras, point-of-interest management business logic, model viewing business logic, and so on.
In step S13, streaming media binding is performed on the target model, so as to push streaming media data corresponding to the target model to a corresponding preset terminal.
In this example embodiment, after encapsulation, the target model can also be bound to several terminal devices, so that different terminal devices can receive the streaming media data corresponding to the target model. Specifically, the addresses of the different terminal devices can be pre-bound on the server side, and the content of the streaming media can be configured. The terminal device may be a staff member's smart terminal device, an IOC data dashboard, and so on, realizing the model as well as data visualization and interaction on the terminal device.
In this example embodiment, the non-business logic includes model luminescence logic; referring to FIG. 8, performing non-business logic binding on the first-stage model includes:
Step S31: determining an object to be processed in the first-stage model according to the texture and material of the model;
Step S32: calculating the corresponding diffuse reflection illumination coefficient, specular reflection illumination coefficient, and distance field parameter for the object to be processed;
Step S33: determining a corresponding illumination mixing coefficient according to the diffuse reflection illumination coefficient, specular reflection illumination coefficient, and distance field parameter;
Step S34: configuring the illumination mixing coefficient as the basic luminescence parameter of the object to be processed, and, based on the basic luminescence parameter, configuring the corresponding actual luminescence coefficient in a preset proportion according to the position corresponding to the object to be processed.
Specifically, among existing solutions for model night scenes, the mainstream low-light rendering technique lays out a large number of arrayed negative-axis point light sources across the whole scene, scattering light onto the facades to simulate the state of the stairs' electrical equipment. The advantage of this is that every light source effect of every stair, model, and room can be customized; the disadvantage is that when too many arrayed negative-axis point light sources are laid out, GPU performance becomes the bottleneck, causing frame drops and stuttering across the scene, which is particularly obvious in cloud deployments. Another solution uses post-processing volumes to set a minimum light source coefficient within a range (usually 0.1 to 0.5); its advantage is that it can cover the whole scene, while its disadvantage is that it degrades the rendering of the point light sources within the scene. Because the exposure coefficient is constant, handling local model details requires stacking multiple post-processing volumes, and during level streaming the linkage of control coefficients causes memory waste and a very large performance overhead. A third approach processes each mesh with separate materials, pre-making two sets of texture materials and texture maps for every mesh that needs processing and dynamically loading and unloading them according to the corresponding illumination threshold. The advantage is that every material can receive specially customized treatment; the disadvantage is that during iteration and repair the corresponding materials and textures must be replaced frequently, making the model heavier (the numbers of all texture maps and materials are doubled).
Specifically, in this example embodiment, the corresponding non-business logic may be added to each first-stage model separately; alternatively, the non-business logic may be added to the three-dimensional scene model after the first-stage models have been combined.
Taking adding non-business logic to the three-dimensional scene model as an example, the sub-models in the three-dimensional scene model can be classified, and sub-models of the same type can be bound to the same non-business logic. For example, the same non-business logic can be used for each type of sub-model, such as building facades, light signs, and stairs. Following the method described in steps S31 to S34 above, the same luminescence coefficient can be used for sub-models of the same type. For example, the UE5 engine can be used to bind a luminescent material to the model, and the luminescence effect under night-scene conditions can be configured for the model by calling an interface, so as to satisfy the night-scene rendering mode.
Specifically, sub-models of the same type can first be selected as the objects to be processed according to the texture and material of the model, and the diffuse reflection illumination coefficient, specular reflection illumination coefficient, and distance field parameter can then be calculated for them.
In this example embodiment, calculating the diffuse reflection illumination coefficient corresponding to the object to be processed may specifically include: determining a first light intensity parameter of the interactive reflection between the diffuse reflector and the ambient light based on the ambient light intensity and the material's reflection coefficient for ambient light; determining a second light intensity parameter of the interactive reflection between the diffuse reflector and the directional light based on the point light source intensity, the material's reflection coefficient for ambient light, and the angle between the incident light direction and the vertex normal; and determining the diffuse reflection illumination coefficient based on the first light intensity parameter and the second light intensity parameter.
Specifically, the diffuse reflection model may be the Lambert model, whose formula may include:
Iambdiff = Kd*Ia
where Ia denotes the ambient light intensity; Kd denotes the material's diffuse reflection coefficient for ambient light, with 0 < Kd < 1; and Iambdiff denotes the intensity of the light produced by the interactive reflection between the diffuse reflector and the ambient light.
The formula for directional light may include: Ildiff = Kd*Il*cos(θ)
where Il denotes the point light source intensity; θ denotes the angle between the incident light direction and the vertex normal, called the incident angle, with 0 ≤ θ ≤ 90°; and Ildiff denotes the intensity of the light produced by the interactive reflection between the diffuse reflector and the directional light.
If N is the unit normal vector of the vertex and L is the unit vector pointing from the vertex to the light source (note the direction: from the vertex toward the light source), then cos(θ) is equivalent to dot(N,L).
Based on this: Ildiff = Kd*Il*dot(N,L)
Combining the ambient light and the directional light source, the Lambert illumination model may include:
Idiff = Iambdiff + Ildiff = Kd*Ia + Kd*Il*dot(N,L)
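The combined Lambert formula above can be sketched directly in code. This is a minimal illustration of the formula itself; the sample coefficient values are hypothetical, and dot(N, L) is clamped at 0, as is conventional, so that light arriving from behind the surface contributes nothing.

```python
def lambert_diffuse(Kd, Ia, Il, N, L):
    """Idiff = Kd*Ia + Kd*Il*dot(N, L).

    Kd: diffuse reflection coefficient (0 < Kd < 1)
    Ia: ambient light intensity
    Il: point light source intensity
    N : unit vertex normal
    L : unit vector from the vertex toward the light source
    """
    ndotl = max(0.0, sum(n * l for n, l in zip(N, L)))
    return Kd * Ia + Kd * Il * ndotl

# Surface facing the light head-on: dot(N, L) = 1.
Idiff = lambert_diffuse(0.5, 1.0, 2.0, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

With the light directly overhead the result reduces to Kd*(Ia + Il); when the light is behind the surface, only the ambient term Kd*Ia remains.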
In this example embodiment, calculating the specular reflection illumination coefficient corresponding to the object to be processed may specifically include: determining an initial specular coefficient based on the specular reflection coefficient, the point light source intensity, the highlight exponent, and a first light direction parameter; and correcting the initial specular coefficient using a second light direction parameter to obtain the specular reflection illumination coefficient.
Specifically, the specular reflection model may be the Phong model, which holds that the intensity of specularly reflected light is related to the angle between the reflected ray and the line of sight; the formula may include:
Ispec = Ks*Il*(dot(V,R))^Ns
where Ks denotes the specular reflection coefficient; Ns denotes the highlight exponent; V denotes the viewing direction from the vertex to the viewpoint; and R denotes the direction of the reflected light.
Since the direction R of the reflected light can be obtained from the incident light direction L (pointing from the vertex to the light source) and the normal vector of the object, we have R + L = 2*dot(N,L)*N, i.e., R = 2*dot(N,L)*N - L. Based on the above formulas, we obtain:
Ispec = Ks*Il*(dot(V, 2*dot(N,L)*N - L))^Ns
The Blinn-Phong illumination model can be used to correct the specular light. Blinn-Phong is a modified model based on the Phong model, and its formula may include:
Ispec = Ks*Il*(dot(N,H))^Ns
where N denotes the unit normal vector at the point of incidence, and H denotes the intermediate vector between the light incidence direction L and the viewpoint direction V, usually called the half-angle vector. The formula for the half-angle vector may include: H = (L+V)/|L+V|.
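The Blinn-Phong correction above can be sketched as follows. This is an illustration of the formula only, with hypothetical sample values; the clamp on dot(N, H) is a conventional addition so that back-facing highlights evaluate to zero.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blinn_phong_specular(Ks, Il, Ns, N, L, V):
    """Ispec = Ks*Il*(dot(N, H))^Ns with H = (L + V)/|L + V|.

    Ks: specular reflection coefficient, Il: light intensity,
    Ns: highlight exponent, N: unit normal at the incidence point,
    L : unit vector toward the light, V: unit vector toward the viewer.
    """
    H = normalize(tuple(l + v for l, v in zip(L, V)))
    ndoth = max(0.0, sum(n * h for n, h in zip(N, H)))
    return Ks * Il * ndoth ** Ns
```

When L and V coincide with N, H equals N and the term (dot(N,H))^Ns reaches its maximum of 1, giving the full highlight Ks*Il.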
In this example embodiment, determining the corresponding illumination mixing coefficient according to the diffuse reflection illumination coefficient, the specular reflection illumination coefficient, and the distance field parameter may specifically include: calculating a mixing angle parameter corresponding to the object to be processed based on the angle between any two sample points in the object to be processed; and determining the corresponding illumination mixing coefficient using the mixing angle parameter in combination with the diffuse reflection illumination coefficient, the specular reflection illumination coefficient, and the distance field parameter.
Specifically, after the illumination coefficients are obtained, the influence of the illumination range and the cosine distance field can be calculated. In two-dimensional space, the cosine of the angle between vector A(x1,y1) and vector B(x2,y2) may be given by:
cos(θ) = (x1*x2 + y1*y2) / (sqrt(x1^2 + y1^2) * sqrt(x2^2 + y2^2))
The cosine of the angle between any two n-dimensional sample points a(x11,x12,…,x1n) and b(x21,x22,…,x2n) may be given by:
cos(θ) = (x11*x21 + x12*x22 + … + x1n*x2n) / (sqrt(x11^2 + x12^2 + … + x1n^2) * sqrt(x21^2 + x22^2 + … + x2n^2))
The cosine of the angle takes values in [-1, 1]. A larger cosine indicates a smaller angle between the two vectors, and a smaller cosine indicates a larger angle. When the directions of the two vectors coincide, the cosine takes the maximum value 1; when the directions of the two vectors are completely opposite, the cosine takes the minimum value -1. Based on the above, the formula can be simplified as:
cos(θ) = dot(a, b) / (|a| * |b|)
The mixing angle A can be determined by the above formula.
After correcting the distance field, the square root of (Iambdiff*cosA)^2 + (Ildiff*sinA)^2 + (Ispec*tanA)^2 can be used as the illumination mixing coefficient Q, and the illumination coefficient exposed by the corresponding blueprint is then assigned the value Q to complete the process. This illumination mixing coefficient Q can serve as the basic luminescence parameter. For other areas of different types, such as light boxes and billboards, this basic luminescence coefficient can be multiplied by a factor to boost such special areas; generally, a constant factor of 10.65 can be used as the amplification multiple.
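The two steps above, i.e., deriving the mixing angle A from the cosine of the angle between two sample points and then forming the mixing coefficient Q, can be sketched as follows. The sample points and coefficient values are hypothetical, chosen only to exercise the formulas.

```python
import math

def cosine_angle(a, b):
    """cos(theta) = dot(a, b) / (|a| * |b|) for n-dimensional points."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def light_mixing_coefficient(Iambdiff, Ildiff, Ispec, A):
    """Q = sqrt((Iambdiff*cosA)^2 + (Ildiff*sinA)^2 + (Ispec*tanA)^2)."""
    return math.sqrt((Iambdiff * math.cos(A)) ** 2
                     + (Ildiff * math.sin(A)) ** 2
                     + (Ispec * math.tan(A)) ** 2)

# Mixing angle A between two 2-D sample points (here 45 degrees),
# then the mixing coefficient Q from the three illumination terms.
A = math.acos(cosine_angle((1.0, 0.0), (1.0, 1.0)))
Q = light_mixing_coefficient(0.9, 0.6, 0.3, A)
```

Q would then be assigned to the coefficient exposed by the blueprint as the basic luminescence parameter, with the 10.65 multiple applied on top for light boxes and billboards.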
For example, taking the stairs as an example: first, the target stairs are selected in the three-dimensional model, and the material and texture of each stair sub-model are screened; if a sub-model is an L1-level white model, it is not processed. For stair sub-models with the same texture and material, diffuse reflection processing can be performed in parallel to calculate the diffuse reflection illumination coefficient, and specular reflection processing can be performed to obtain the initial specular coefficient, which is then corrected to obtain the specular reflection illumination coefficient; the distance field calculation is performed in parallel as well, and since it can be implemented in a conventional manner, this disclosure does not elaborate on it. The mixing angle parameter, combined with the diffuse reflection illumination coefficient, specular reflection illumination coefficient, and distance field parameter, is substituted into the above formula to obtain the corresponding illumination mixing coefficient, which is then configured as the illumination coefficient of the model. Using the mixed luminescence coefficient together with the square root of the distance field influence as the material luminescence coefficient enables dynamic calculation of the coefficient and accurate configuration of the luminescent state. This effectively solves the luminescence of L1-L2 level models under night-scene conditions without natural lighting, coming close to real night-scene lighting and satisfying the night-scene rendering mode. Moreover, the original texture maps can be used to produce the luminescent material, without additional texture maps, reducing the complexity of the model.
本示例实施方式中,参考图10所示,上述方法还可以包括:In this example implementation, referring to FIG10 , the method may further include:
步骤S101,通过预设参数接口获取样条线配置参数,根据所述样条线配置参数进行动画路径规划;以及Step S101, obtaining spline configuration parameters through a preset parameter interface, and performing animation path planning according to the spline configuration parameters; and
步骤S102,对虚拟对象的骨骼体数组与样条线进行绑定,以用于将所述虚拟对象按照已规划的路径进行移动。Step S102, binding the skeleton volume array and the spline of the virtual object to move the virtual object along the planned path.
具体来说，在数字孪生的应用场景中，三维模型中还可以包括大量可以活动的虚拟对象，例如智慧园区中的行人车辆、飞行器、动物，以及水域和水域中的动物，等等。在已有的使用Maya、Houdini等软件制作的动画流动的技术方案中，会造成GPU超负荷运作。为了克服这一问题，在对三维模型进行非业务逻辑绑定时，可以利用样条线Spline对模型中的虚拟对象进行规划。Specifically, in digital twin application scenarios, the 3D model can also include a large number of movable virtual objects, such as pedestrians, vehicles, aircraft, and animals in a smart park, as well as water areas and the animals in them. Existing technical solutions that produce animation flows with software such as Maya and Houdini will cause GPU overload. To overcome this problem, when performing non-business logic binding on the 3D model, splines can be used to plan the virtual objects in the model.
具体的，在对三维模型进行业务逻辑绑定时，可以通过模型对应的蓝图通信接口来获取Spline的配置参数。其中，样条线配置参数可以包括：虚拟对象的类型、时间信息、Spline的关键点坐标，等等。在利用UE5平台进行建模时，可以在模型中利用Spline关键点标记虚拟对象的规划路径，每个Spline关键点可以对应世界坐标系中的坐标。此外，可以使用虚拟对象的骨骼体数组，与相应的Spline进行绑定，从而实现对数字孪生场景中的行人、自行车、车辆、飞机等虚拟对象的行驶路径进行标识。对于不同的虚拟对象，可以通过样条线配置参数中配置路径、时间来完成对速度的配置。对于不同类型虚拟对象对应的骨骼体，可以配置有不同的速度。参考图11所示，在对Spline关键点进行规划时，在不同的Spline关键点，可以根据实际业务需求配置骨骼体的旋转值，实现样条线弯折的动画效果。Specifically, when the business logic is bound to the three-dimensional model, the configuration parameters of the Spline can be obtained through the blueprint communication interface corresponding to the model. The spline configuration parameters may include: the type of virtual object, time information, key point coordinates of the Spline, and so on. When modeling on the UE5 platform, Spline key points can be used to mark the planned path of a virtual object in the model, and each Spline key point can correspond to coordinates in the world coordinate system. In addition, the skeleton array of the virtual object can be bound to the corresponding Spline, so as to identify the travel paths of virtual objects such as pedestrians, bicycles, vehicles, and airplanes in the digital twin scene. For different virtual objects, the speed can be configured by setting the path and time in the spline configuration parameters. Different speeds can be configured for the skeletons corresponding to different types of virtual objects. As shown in Figure 11, when planning the Spline key points, the rotation value of the skeleton can be configured at different key points according to actual business needs to achieve the animation effect of the spline bending.
通过使用Spline绑定对应的骨骼体，可以实现骨骼体的多维动态效果。通过利用Spline进行骨骼体的路径规划，能够使用间断式位移来完成相应的位置变更和动画效果。对于数字孪生场景中的人流、车流等，可以随意替换网格体内容即可完成对应动画的表现形式，并且可以匹配复杂路网和非标准网格线的动画能力，完成对应交通部分的渲染构画和场景展示。对于该业务逻辑，可以以插件的形式进行封装，在模型构建过程中，可以通过蓝图接口调用的方式来执行相应业务逻辑的插件，实现对应的功能。By using Spline to bind the corresponding skeleton, multi-dimensional dynamic effects of the skeleton can be achieved. By using Spline to plan the skeleton's path, intermittent displacement can be used to complete the corresponding position changes and animation effects. For the flow of people and vehicles in the digital twin scene, the mesh content can be replaced at will to realize the corresponding animation, and the animation capability can match complex road networks and non-standard grid lines to complete the rendering and scene display of the corresponding traffic part. This business logic can be encapsulated in the form of a plug-in; during model construction, the corresponding business-logic plug-in can be executed through a blueprint interface call to achieve the corresponding function.
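The spline binding described above can be sketched as follows, assuming linear interpolation between key points and a constant speed derived from the configured path and time; `SplineKeyPoint` and `plan_position` are hypothetical names, and a real UE5 Spline component would interpolate smooth curves rather than straight segments.

```python
from dataclasses import dataclass

@dataclass
class SplineKeyPoint:
    x: float
    y: float
    z: float
    rotation: float = 0.0  # per-key rotation value for the bending animation

def plan_position(keys, total_time, t):
    """Position along the key-point path at elapsed time t, assuming a
    constant speed set by the configured path and total time."""
    if t <= 0.0:
        k = keys[0]
        return (k.x, k.y, k.z)
    if t >= total_time:
        k = keys[-1]
        return (k.x, k.y, k.z)
    f = t / total_time * (len(keys) - 1)  # fractional segment index
    i = int(f)
    u = f - i
    a, b = keys[i], keys[i + 1]
    return (a.x + (b.x - a.x) * u,
            a.y + (b.y - a.y) * u,
            a.z + (b.z - a.z) * u)
```

Sampling `plan_position` at fixed intervals yields the intermittent displacement mentioned above: the object is moved between sampled positions rather than per-frame, which keeps the GPU load low.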
本示例实施方式中,参考图12所示,上述方法还可以包括:In this example implementation, referring to FIG12 , the method may further include:
步骤S121,根据所述目标模型中待处理坐标点对应的世界坐标与屏幕坐标的法向量夹角配置为第一标准角度;Step S121, configuring the normal vector angle between the world coordinates and the screen coordinates corresponding to the coordinate point to be processed in the target model as a first standard angle;
步骤S122,根据所述第一标准角度配置所述待处理坐标点在屏幕坐标系中的近似三维坐标;Step S122, configuring the approximate three-dimensional coordinates of the coordinate point to be processed in the screen coordinate system according to the first standard angle;
步骤S123,对所述近似三维坐标进行坐标向量拆分,并根据坐标向量拆分结果选定目标兴趣点与所述待处理坐标点进行动态绑定。Step S123, splitting the approximate three-dimensional coordinates into coordinate vectors, and selecting a target point of interest and dynamically binding it to the coordinate point to be processed according to the result of the coordinate vector splitting.
具体来说，在数字孪生的应用场景中，三维模型中的兴趣点的位置和数量一般取决于真实场景中的数据。因此，在移动模型、查看模型的过程中会存在兴趣点POI相互遮挡、位置点离散的问题。另外，模型中的POI由于存在高度差，而人眼存在视域的极限值；因此，部分POI会随视角的变化、移动产生交替显示和隐藏的情况。如果使用传统的POI点遍历的方法去控制显隐和角度，在POI点数量和种类较多的情况下，会造成模型承载体超重和降帧。如果使用传统插入模型的办法来处理POI，那么复用性和迭代性又会很差。Specifically, in digital twin application scenarios, the location and number of points of interest in the three-dimensional model generally depend on the data in the real scene. Therefore, when moving and viewing the model, there will be problems such as POIs blocking each other and discrete position points. In addition, due to the height differences of POIs in the model and the limits of the human field of vision, some POIs will be alternately displayed and hidden as the viewing angle changes and moves. If the traditional POI-point traversal method is used to control visibility and angle, when the number and types of POI points are large, the model carrier will be overloaded and the frame rate will drop. If the traditional method of inserting models is used to process POIs, the reusability and iterability will be very poor.
举例来说,模型中的兴趣点POI(Point of Interest)可以包括监控、门禁、水电表、打卡机,以及其他根据实际业务需求设定的兴趣点。在进行业务逻辑绑定时,可以配置兴趣点坐标动态补偿的业务逻辑,使得用户在终端设备查看、移动模型时,保证兴趣点在屏幕上的绝对位置与世界坐标相匹配,保证能够随用户操控保持正交投射,面向屏幕向外角度固定的显示效果。For example, the POI (Point of Interest) in the model can include monitoring, access control, water and electricity meters, time clocks, and other POIs set according to actual business needs. When binding business logic, you can configure the business logic of dynamic compensation of POI coordinates, so that when users view and move the model on the terminal device, the absolute position of the POI on the screen matches the world coordinates, ensuring that the orthogonal projection can be maintained with the user's control, and the display effect with a fixed outward angle facing the screen.
具体的，上述的步骤S121-步骤S123的方法可以是在非业务逻辑绑定时实现。上述的待处理坐标点可以是模型中的兴趣点。对于各兴趣点，模型在终端设备中显示时，可以对兴趣点的世界坐标与屏幕坐标进行转换，由于屏幕坐标系是不存在Z轴的高度参数的，那么获取高度的方法可以包括将世界坐标的法向量与屏幕坐标的法向量夹角作为取arcsin的标准角度，以此取得近似的Z轴范围。那么接下来就实现自动补偿，将近似的(X,Y,Z)的坐标分为八等分向量，取绝对值最小的POI进行动态绑定，实现屏幕反正交映射后将经纬度和世界坐标绑定，从而实现兴趣点坐标的动态吸附。Specifically, the above steps S121 to S123 can be implemented during non-business logic binding. The above coordinate points to be processed can be points of interest in the model. For each point of interest, when the model is displayed on the terminal device, the world coordinates and screen coordinates of the point of interest can be converted. Since the screen coordinate system has no Z-axis height parameter, the height can be obtained by taking the angle between the normal vector of the world coordinates and the normal vector of the screen coordinates as the standard angle for arcsin, thereby obtaining an approximate Z-axis range. Automatic compensation is then performed: the approximate (X, Y, Z) coordinates are divided into eight equal vectors, the POI with the smallest absolute value is taken for dynamic binding, and after the inverse orthographic mapping of the screen coordinates, the longitude/latitude is bound to the world coordinates, so as to achieve dynamic snapping of the point-of-interest coordinates.
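A rough Python sketch of the compensation above, under stated assumptions: the arcsin of the world/screen normal angle stands in for the Z estimate, and the eight-way split is approximated by a sign-based octant classification plus a minimum-absolute-offset search; all function names are hypothetical.

```python
import math

def approximate_z(world_normal, screen_normal):
    """Use the angle between the world-coordinate normal and the
    screen-coordinate normal as the standard angle for arcsin, giving
    an approximate Z value for a screen point that has no height axis."""
    dot = sum(a * b for a, b in zip(world_normal, screen_normal))
    na = math.sqrt(sum(a * a for a in world_normal))
    nb = math.sqrt(sum(b * b for b in screen_normal))
    cos_angle = max(-1.0, min(1.0, dot / (na * nb)))
    angle = math.acos(cos_angle)
    return math.asin(math.sin(angle))   # standard angle -> approximate Z

def octant(offset):
    """Sign pattern of an (X, Y, Z) offset: one of eight directions,
    a stand-in for the 'eight equal vectors' split."""
    return tuple(1 if c >= 0 else -1 for c in offset)

def bind_nearest_poi(point, pois):
    """Dynamically bind the POI whose component-wise absolute offset
    from the compensated point is smallest."""
    return min(pois, key=lambda p: sum(abs(a - b) for a, b in zip(point, p)))
```

After `bind_nearest_poi` selects a POI, its longitude/latitude would be bound to the world coordinates so the POI snaps to the compensated position as the view changes.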
本示例实施方式中,上述方法还可以包括:根据所述目标模型中的各兴趣点对应的类型划分兴趣点集合;对各所述兴趣点集合配置为所述目标模型主场景对应的子关卡,以用于根据对所述目标模型的显示控制,通过关卡流送对各所述兴趣点集合进行加载。In this example implementation, the method may further include: dividing interest point sets according to types corresponding to each interest point in the target model; configuring each interest point set as a sub-level corresponding to the main scene of the target model, so as to load each interest point set through level streaming according to display control of the target model.
具体来说,在模型中的各兴趣点,可以对不同的兴趣点进行分类,对不同类型的兴趣点构建对应的兴趣点集合。将多种类型的POI集合作为主场景的子关卡的形式进行处理,从而保证在固定的节奏下流送固定的关卡,完成对应的POI加载。实现通过关卡流送完成分类加载POI的实现逻辑。Specifically, different points of interest can be classified in the model, and corresponding points of interest sets can be constructed for different types of points of interest. Various types of POI sets are processed as sub-levels of the main scene, so as to ensure that fixed levels are streamed at a fixed rhythm and the corresponding POI loading is completed. The implementation logic of completing the classified loading of POIs through level streaming is realized.
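The classify-then-stream logic above can be sketched as follows; the dictionaries stand in for UE sub-levels and level streaming, and all names are hypothetical.

```python
from collections import defaultdict

def group_pois_by_type(pois):
    """Build one POI set per type; each set corresponds to a sub-level
    of the main scene."""
    levels = defaultdict(list)
    for poi in pois:
        levels[poi["type"]].append(poi["name"])
    return dict(levels)

def stream_levels(levels, visible_types):
    """Level streaming: load only the sub-levels whose POI type is
    needed for the current display state of the model."""
    return {t: levels[t] for t in visible_types if t in levels}
```

Grouping by type means a display-control change loads or unloads a whole sub-level at once, instead of traversing every POI individually.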
本示例实施方式中，参考图13所示，上述方法还可以包括：步骤S131，响应于对所述目标模型的显示控制操作，实时获取虚拟摄像机的当前视场范围；In this example implementation, referring to FIG. 13 , the method may further include: step S131, in response to a display control operation on the target model, acquiring a current field of view of the virtual camera in real time;
步骤S132,对当前焦点跟随的兴趣点在x时间轴、y时间轴、z时间轴进行绑定的并发控制,以用于保持所述目标兴趣点的显示位置。Step S132, concurrently controlling the binding of the interest point followed by the current focus on the x time axis, the y time axis, and the z time axis, so as to maintain the display position of the target interest point.
具体来说,当用户在终端设备的交互界面中移动、旋转模型,查看兴趣点时,模型可以响应用户的显示控制操作进行对应的移动、旋转。此时,可以实时的获取模型对应的虚拟相机的对应的视场范围,对于焦点Focus的处理,使用多维时间轴控制并发控制Control来执行,即X时间轴、Y时间轴、Z时间轴绑定Focus跟随的POI,从而来控制POI的显示、隐藏,保持POI的正向对应不会有缺失和机位遮挡。Specifically, when the user moves and rotates the model in the interactive interface of the terminal device to view the point of interest, the model can respond to the user's display control operation to move and rotate accordingly. At this time, the corresponding field of view of the virtual camera corresponding to the model can be obtained in real time. For the processing of Focus, the multi-dimensional timeline control and concurrent control Control are used to execute, that is, the X timeline, Y timeline, and Z timeline are bound to the POI followed by Focus, so as to control the display and hiding of the POI, and keep the positive correspondence of the POI without missing or camera occlusion.
本示例实施方式中,上述方法还可以包括:利用角度控制器控制各维度时间轴的切换。In this example implementation, the method may further include: using an angle controller to control the switching of the time axes of each dimension.
具体的,由于POI通常伴随的是虚拟相机移动和转场处理,因此弹簧臂的存在是为了稳定多维时间轴的切换和并发的过程中造成的镜头震颤和模糊化降帧。本实施例使用了一种UE平台中原生的Arrow作为角度控制器,这样可以稳定进行动画过渡。Specifically, since POI is usually accompanied by virtual camera movement and transition processing, the existence of the spring arm is to stabilize the lens tremor and blurring frame drop caused by the switching and concurrent processes of the multi-dimensional time axis. This embodiment uses a native Arrow in the UE platform as an angle controller, which can stabilize the animation transition.
通过该非业务逻辑，能够对模型使用原生蓝图低代码方式进行坐标校准，减少代码作业量。通过对模型中的兴趣点进行分类，以集合的形式分组按照关卡流送的多融合嵌套，保证世界坐标统一的同时，也可以保证性能的优化。通过上述方法实现对兴趣点坐标的自动补偿，将屏幕坐标反正交映射到世界场景，来表现POI点的高低差值。参考图14所示，模型中的POI保持屏幕向外正交投射效果。在单个位置相对屏幕变化时，相对世界坐标固定效果如图15所示。如图16所示，任意的POI点击并触发二次显示逻辑后，不会产生POI遮挡和错位效果。另外，通过使用虚拟相机和弹簧臂组合完成基类focus对应，并且用focus派生绑定所有POI，可以有效的区分不同POI的不同展示内容。Through this non-business logic, the model's coordinates can be calibrated using the native blueprint low-code approach, reducing the amount of coding work. By classifying the points of interest in the model, grouping them into sets, and nesting them according to level streaming, the world coordinates are unified while performance is optimized. The above method achieves automatic compensation of the point-of-interest coordinates, mapping the screen coordinates back to the world scene through inverse orthographic mapping to express the height differences of the POI points. As shown in Figure 14, the POIs in the model maintain the outward orthogonal projection effect on the screen. When a single position changes relative to the screen, the fixed effect relative to the world coordinates is shown in Figure 15. As shown in Figure 16, after any POI is clicked and the secondary display logic is triggered, no POI occlusion or dislocation occurs. In addition, by using a virtual camera and spring arm combination to establish the base-class focus correspondence, and binding all POIs through focus derivation, the different display contents of different POIs can be effectively distinguished.
本示例实施方式中,参考图17所示,上述方法还可以包括:In this example implementation, referring to FIG. 17 , the method may further include:
步骤S171,响应于对所述目标模型的显示控制操作,识别输入设备的设备类型;Step S171, identifying a device type of an input device in response to a display control operation on the target model;
步骤S172，确定输入设备为第一类型设备时，确定虚拟相机当前的中轴线与所述输入设备的相对偏移参数之间的夹角方向的正交向量参数；或者，Step S172, when it is determined that the input device is a first type device, an orthogonal vector parameter in the direction of the angle between the current central axis of the virtual camera and the relative offset parameter of the input device is determined; or,
步骤S173,在确定输入设备为第二类型设备时,确定所述显示控制操作对应的加/减速度参数;并结合加/减速度参数的执行时长,以及预设步长确定偏移参数;Step S173, when it is determined that the input device is a second type device, determining an acceleration/deceleration parameter corresponding to the display control operation; and determining an offset parameter in combination with the execution time of the acceleration/deceleration parameter and a preset step length;
步骤S174,根据所述偏移参数修正所述目标模型的显示效果。Step S174: correcting the display effect of the target model according to the offset parameter.
具体而言，漫游场景是指在虚拟场景中，不按照既定剧本或脚本进行移动，旋转等操作的行为的统称，一般来说，除了旋转和平移，也包括拉近，拉远等操作。在数字孪生领域中，漫游也是模型的一个重要能力。Specifically, a roaming scene is the collective term for behaviors in a virtual scene, such as moving and rotating, that do not follow a predetermined storyline or script; generally, besides rotation and translation, it also includes operations such as zooming in and out. In the digital twin field, roaming is also an important capability of a model.
具体的,当用户在终端设备上查看模型时,例如查看智慧园区的数字孪生模型时,若终端设备为手机、平板电脑等配置有触摸屏幕的电子设备时,用户可以触控的方式查看模型;若终端设备为笔记本电脑、台式电脑时,可以通过鼠标、键盘等输入设备查看、移动模型。对于终端来说,当用户发生控制操作时,可以首先识别输入设备的类型。其中,上述的第一类型设备可以是鼠标、键盘;第二类型设备可以是在触摸屏。当识别为第一类型设备时,可以计算当前虚拟相机对应的中轴线,并计算标准标签量。一般来说,屏幕上一般存在有基于鼠标/手势而产生的相对偏移,假定这个值为S1~S2。当前虚拟场景中的镜头所视中轴线为S3,其形成的夹角为A,因此可以获得沿着A方向的正交向量值为Move=(x,y,z)。而这个move向量与步长的积,即为对应需要偏移的总长度。同理可得,在旋转,以及缩放中,其步长为对应的偏移像素值,以及相应的缩放比例值。其中,步长是指场景中移动/旋转的最小偏移量,按照Camera指定方向/角度以及相应目距中轴线的正交偏移量。步长可以通过手动设定,也可以通过参数配置;假定步长为10,默认用步长代替像素长度(由于显示层面均以像素作为分辨率的单位,因此在这里同样使用像素作为单位)。对于中轴线的计算,由于虚拟场景是具有本身的中轴线,因此中轴线的位置为场景中心轴(在建模时已经确定的)对镜头角度的XY平面取余弦值的平移,就是摄像头对应的平移中轴线的位置。Specifically, when a user views a model on a terminal device, for example, when viewing a digital twin model of a smart park, if the terminal device is an electronic device such as a mobile phone or a tablet computer equipped with a touch screen, the user can view the model by touch; if the terminal device is a laptop or a desktop computer, the model can be viewed and moved by input devices such as a mouse and a keyboard. For the terminal, when the user performs a control operation, the type of input device can be first identified. Among them, the first type of device mentioned above can be a mouse or a keyboard; the second type of device can be a touch screen. When it is identified as a first type of device, the central axis corresponding to the current virtual camera can be calculated, and the standard label quantity can be calculated. Generally speaking, there is generally a relative offset based on the mouse/gesture on the screen, and this value is assumed to be S1~S2. The central axis viewed by the lens in the current virtual scene is S3, and the angle formed by it is A, so the orthogonal vector value along the A direction can be obtained as Move=(x, y, z). The product of this move vector and the step length is the total length of the corresponding offset. 
Similarly, in rotation and scaling, the step length is the corresponding offset pixel value and the corresponding scaling ratio value. The step size refers to the minimum offset of movement/rotation in the scene, which is the orthogonal offset of the central axis of the camera's specified direction/angle and the corresponding eye distance. The step size can be set manually or through parameter configuration; assuming the step size is 10, the step size is used by default instead of the pixel length (since the display layer uses pixels as the unit of resolution, pixels are also used here as the unit). For the calculation of the central axis, since the virtual scene has its own central axis, the position of the central axis is the translation of the cosine value of the scene center axis (determined during modeling) to the XY plane of the lens angle, which is the position of the translation central axis corresponding to the camera.
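The offset computation for the first device type can be sketched as follows, assuming the Move vector along direction A has already been obtained; the product of the Move vector and the step length gives the total displacement, and for rotation/scaling the step is reinterpreted as an angle or scale increment. All names are hypothetical.

```python
def total_offset(move, step):
    """Total displacement: each component of Move=(x, y, z) scaled by
    the step length, the minimum offset of movement in the scene."""
    return tuple(c * step for c in move)

def rotation_offset(step_increment, pixels):
    """For rotation or scaling, the step is the corresponding angle or
    scale increment per offset pixel."""
    return step_increment * pixels
```

With the example step length of 10 used above, dragging along a unit Move direction produces a displacement of 10 scene units per step.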
或者，若输入方式为触控，则可以根据加/减速度，进行偏移量修正。在使用触屏式的应用中，由于人为操控的影响，因此会存在加速度以及相应的减速度，其计算方式可以为：假定操作为正态化操作，那么可以认定在触屏过程中，其时长处于匀速的时长大约为66.7%。因此，假定其触控时间为3s，那么2s时间符合鼠标操控的特性，另外1秒均等分为加速时长0.5秒、减速时长0.5秒，那么其加速度应当计算为：单步长对数值*加速/减速时长。因此，整体偏移长度会发生变化，计算公式如下：标准长度+Log(单步长)*加速时长+中段匀速长度+标准长度-Log(单步长)*减速时长；再代入常规鼠标的长度计算中即可。若用户当前的控制操作为旋转/缩放，则改变相应的单步长对象内容值即可，即相应旋转角度/相应缩放比例。参考图18、19、20所示，分别为平移拉远、旋转、缩放的效果示意图。Alternatively, if the input method is touch, the offset correction can be performed according to the acceleration/deceleration. In touch-screen applications, due to the influence of human manipulation, there will be acceleration and corresponding deceleration, which can be calculated as follows: assuming the operation is normalized, it can be determined that during the touch gesture the duration at uniform speed is approximately 66.7%. Therefore, assuming the touch time is 3 s, then 2 s matches the characteristics of mouse control, and the remaining 1 second is split equally into 0.5 s of acceleration and 0.5 s of deceleration; the acceleration term should then be calculated as: logarithm of the single step length * acceleration/deceleration duration. The overall offset length therefore changes, and the formula is: standard length + Log(single step length) * acceleration duration + middle uniform-speed length + standard length - Log(single step length) * deceleration duration; this is then substituted into the conventional mouse length calculation. If the user's current control operation is rotation/scaling, the corresponding single-step content value is changed instead, i.e. the corresponding rotation angle/scaling ratio. Figures 18, 19, and 20 are schematic diagrams of the effects of panning/zooming out, rotation, and scaling, respectively.
The above steps can be mixed and bound using the Blend view of the UE engine when binding the business logic. After the model is rendered, it is displayed in the interactive interface of the terminal device.
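The touch-gesture correction formula above can be sketched as follows, assuming a natural logarithm for "Log" and the 0.5 s acceleration/deceleration split from the example; with equal acceleration and deceleration durations the two logarithmic terms cancel, leaving the two standard lengths plus the uniform-speed middle segment.

```python
import math

def touch_corrected_length(step, standard_len, uniform_len,
                           accel_t=0.5, decel_t=0.5):
    """Offset length for a touch gesture, per the formula above:
    standard length + Log(step) * acceleration time
    + middle uniform-speed length
    + standard length - Log(step) * deceleration time."""
    accel_part = standard_len + math.log(step) * accel_t   # accelerating segment
    decel_part = standard_len - math.log(step) * decel_t   # decelerating segment
    return accel_part + uniform_len + decel_part           # uniform middle ~2/3 of gesture
```

The result feeds into the same displacement computation used for the mouse case, replacing the uncorrected length.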
本示例实施方式中,参考图21所示,上述方法还可以包括:In this example implementation, referring to FIG. 21 , the method may further include:
步骤S211,通过信令服务器接收终端设备的服务请求;其中,所述服务请求包括目标兴趣点的标识信息、任务信息和终端设备标识;Step S211, receiving a service request from a terminal device through a signaling server; wherein the service request includes identification information of a target point of interest, task information, and a terminal device identification;
步骤S212,处理所述服务请求以获取所述目标兴趣点对应的流媒体数据,并将所述目标兴趣点对应的流媒体数据通过信令服务器推送至所述终端设备。Step S212: Process the service request to obtain streaming media data corresponding to the target point of interest, and push the streaming media data corresponding to the target point of interest to the terminal device through a signaling server.
具体而言,参考图22所示的网络架构,可以包括终端设备(如图22中所示智能手机101、可视化数据面板IOC102、和计算机103中的一种或多种)、网络104、信令服务器 105、数据服务器106。网络104用以在终端设备和服务器之间提供通信链路的介质。网络104可以包括各种连接类型,例如有线通信链路、无线通信链路等等。应该理解,图22中的终端设备、网络和服务器的数目仅仅是示意性的。根据实现需要,可以具有任意数目的终端设备、网络和服务器。比如信令服务器105、数据服务器106可以是多个服务器组成的服务器集群等。Specifically, with reference to the network architecture shown in FIG22 , a terminal device (such as one or more of the smart phone 101, the visual data panel IOC102, and the computer 103 as shown in FIG22 ), a network 104, a signaling server 105, and a data server 106 may be included. The network 104 is used to provide a medium for a communication link between the terminal device and the server. The network 104 may include various connection types, such as a wired communication link, a wireless communication link, and the like. It should be understood that the number of terminal devices, networks, and servers in FIG22 is merely schematic. Depending on the implementation requirements, any number of terminal devices, networks, and servers may be provided. For example, the signaling server 105 and the data server 106 may be a server cluster consisting of multiple servers, and the like.
终端设备可以通过信令服务器向数据服务器发起服务请求。其中,服务请求包括目标兴趣点的标识信息、任务信息和终端设备标识。例如,服务请求可以是工程师当前位置到目标维修点的路径规划任务;或者,可以是对指定路段、楼宇的监控任务,等等。此外,服务请求也可以是对模型的旋转、拉伸等控制操作。The terminal device can initiate a service request to the data server through the signaling server. The service request includes the identification information of the target point of interest, the task information and the terminal device identification. For example, the service request can be a path planning task from the engineer's current location to the target maintenance point; or it can be a monitoring task for a specified road section or building, etc. In addition, the service request can also be a control operation such as rotating and stretching the model.
数字孪生场景下的三维模型可以是在数据服务器构建、渲染、业务逻辑和非业务逻辑绑定后生成的。对于生成的模型，可以在数据服务器进行封装、打包，并与预设的终端设备的地址进行绑定。对于不同的终端设备，可以配置有不同的权限，查看不同的模型数据。数据服务器在接收到信令服务器转发的服务请求后，可以获取兴趣点对应的流媒体数据，并通过信令服务器下发至终端设备。The three-dimensional model in the digital twin scenario can be generated after construction, rendering, and business-logic and non-business-logic binding on the data server. The generated model can be encapsulated and packaged on the data server and bound to the addresses of preset terminal devices. Different terminal devices can be configured with different permissions to view different model data. After receiving the service request forwarded by the signaling server, the data server can obtain the streaming media data corresponding to the point of interest and deliver it to the terminal device through the signaling server.
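A minimal sketch of the request flow described above, with hypothetical field names and an in-memory permission table standing in for the signaling and data servers.

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    poi_id: str      # identification information of the target point of interest
    task: str        # task information (e.g. path planning, monitoring)
    device_id: str   # terminal device identification

def handle_request(req, media_store, permissions):
    """Data-server side dispatch: check the terminal device's permission,
    then look up the streaming-media data for the requested POI."""
    if req.poi_id not in permissions.get(req.device_id, set()):
        return None   # different terminals may only view different model data
    return media_store.get(req.poi_id)
```

In the architecture of Figure 22 the returned streaming-media handle would be pushed back to the terminal device through the signaling server rather than returned directly.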
本公开实施例所提供的模型构建方法,能够实现高精度的模型构建。可以应用于智慧园区的数字孪生的模型构建,组建整体数字孪生结合虚拟仿真***。可以将业务逻辑、非业务逻辑以插件的形式进行绑定。对于各类移动设备,通过模型端向外发送的流媒体地址进行绑定,同时根据自身业务需求拟定相应的接入方式和交互逻辑。The model building method provided in the embodiment of the present disclosure can achieve high-precision model building. It can be applied to the model building of the digital twin of the smart park to form an overall digital twin combined with a virtual simulation system. Business logic and non-business logic can be bound in the form of plug-ins. For various mobile devices, the streaming media address sent outward by the model end is bound, and the corresponding access method and interaction logic are formulated according to their own business needs.
需要注意的是,上述附图仅是根据本发明示例性实施例的方法所包括的处理的示意性说明,而不是限制目的。易于理解,上述附图所示的处理并不表明或限制这些处理的时间顺序。另外,也易于理解,这些处理可以是例如在多个模块中同步或异步执行的。It should be noted that the above figures are only schematic illustrations of the processes included in the method according to an exemplary embodiment of the present invention, and are not intended to be limiting. It is easy to understand that the processes shown in the above figures do not indicate or limit the time sequence of these processes. In addition, it is also easy to understand that these processes can be performed synchronously or asynchronously, for example, in multiple modules.
进一步的,参考图23所示,本示例的实施方式中还提供一种模型构建装置230,所述装置包括:第一阶段模型计算模块2301、目标模型计算模块2302、流媒体数据处理模块2303。其中,Further, referring to FIG. 23 , a model building device 230 is also provided in the embodiment of this example, and the device includes: a first-stage model calculation module 2301, a target model calculation module 2302, and a streaming media data processing module 2303.
所述第一阶段模型计算模块2301可以用于由目标数据源获取基础地理信息数据并进行立体化处理,以获取初始模型;并对所述初始模型进行材质贴图,获取第一阶段模型。The first-stage model calculation module 2301 can be used to obtain basic geographic information data from a target data source and perform three-dimensional processing to obtain an initial model; and perform material mapping on the initial model to obtain a first-stage model.
所述目标模型计算模块2302可以用于对所述第一阶段模型进行非业务逻辑绑定,以及对所述第一阶段模型进行业务逻辑绑定,以生成建模对象对应的目标模型。The target model calculation module 2302 can be used to perform non-business logic binding on the first-stage model, and perform business logic binding on the first-stage model to generate a target model corresponding to the modeling object.
所述流媒体数据处理模块2303可以用于对所述目标模型进行流媒体绑定,以用于将目标模型对应的流媒体数据推送至对应的预设终端。The streaming media data processing module 2303 can be used to perform streaming media binding on the target model, so as to push the streaming media data corresponding to the target model to the corresponding preset terminal.
本示例实施方式中,所述第一阶段模型计算模块2301可以用于向目标数据源获取基础地理信息数据,并按预设规则对所述基础地理信息数据进行筛选以获取层次数据;对所述层次数据执行建模规则集合,以获取所述初始模型。In this example implementation, the first-stage model calculation module 2301 can be used to obtain basic geographic information data from a target data source, and filter the basic geographic information data according to preset rules to obtain hierarchical data; execute a set of modeling rules on the hierarchical data to obtain the initial model.
本示例实施方式中,所述非业务逻辑包括模型发光逻辑;所述目标模型计算模块2302可以包括:发光系数配置模块。In this example implementation, the non-business logic includes model luminescence logic; the target model calculation module 2302 may include: a luminescence coefficient configuration module.
所述发光系数配置模块可以用于在所述第一阶段模型中根据模型的纹理和材质确定待处理对象;对所述待处理对象计算对应的漫反射光照系数、镜面反射光照系数以及距离场参数;根据所述漫反射光照系数、镜面反射光照系数以及距离场参数确定对应的光照混合系数;将所述光照混合系数配置为所述待处理对象的基础发光参数,并基于所述基础发光参数根据所述待处理对象对应的位置按预设比例配置对应的实际发光系数。The luminous coefficient configuration module can be used to determine the object to be processed in the first stage model according to the texture and material of the model; calculate the corresponding diffuse reflection illumination coefficient, specular reflection illumination coefficient and distance field parameter for the object to be processed; determine the corresponding illumination mixing coefficient according to the diffuse reflection illumination coefficient, specular reflection illumination coefficient and distance field parameter; configure the illumination mixing coefficient as the basic luminous parameter of the object to be processed, and configure the corresponding actual luminous coefficient according to the preset ratio according to the position corresponding to the object to be processed based on the basic luminous parameter.
本示例实施方式中,所述发光系数配置模块可以包括:结合环境光强度、材质对环境光的反射系数,确定漫反射体与环境光交互反射的第一光强参数;集合点光源强度、材质对环境光的反射系数、入射光方向与顶点法线的夹角,确定漫反射体与方向光交互反射的第二光强参数;根据所述第一光强参数、第二光强参数确定所述漫反射光照系数。In this example implementation, the luminous coefficient configuration module may include: determining a first light intensity parameter of the interactive reflection between the diffuse reflector and the ambient light in combination with the ambient light intensity and the reflection coefficient of the material to the ambient light; determining a second light intensity parameter of the interactive reflection between the diffuse reflector and the directional light by combining the intensity of the point light source, the reflection coefficient of the material to the ambient light, and the angle between the incident light direction and the vertex normal; and determining the diffuse reflection illumination coefficient based on the first light intensity parameter and the second light intensity parameter.
本示例实施方式中,所述发光系数配置模块可以包括:结合镜面反射系数、点光源强度、高光指数、第一光线方向参数,确定初始镜面系数;利用第二光线方向参数对所述初始镜面系数进行修正,获取所述镜面反射光照系数。In this example implementation, the luminous coefficient configuration module may include: determining an initial mirror coefficient by combining the mirror reflection coefficient, point light source intensity, highlight index, and first light direction parameter; and correcting the initial mirror coefficient using the second light direction parameter to obtain the mirror reflection illumination coefficient.
本示例实施方式中,所述发光系数配置模块可以包括:基于所述待处理对象中任意两个样本点对应的夹角,计算所述待处理对象对应的混合角度参数;利用所述混合角度参数,结合所述漫反射光照系数、镜面反射光照系数以及距离场参数确定对应的光照混合系数。In this example implementation, the luminous coefficient configuration module may include: calculating the mixing angle parameters corresponding to the object to be processed based on the angle corresponding to any two sample points in the object to be processed; using the mixing angle parameters, combined with the diffuse reflection illumination coefficient, the specular reflection illumination coefficient and the distance field parameter, to determine the corresponding illumination mixing coefficient.
本示例实施方式中,所述装置还可以包括:坐标点动态绑定模块。In this example implementation, the device may further include: a coordinate point dynamic binding module.
所述坐标点动态绑定模块可以用于根据所述目标模型中待处理坐标点对应的世界坐标与屏幕坐标的法向量夹角配置为第一标准角度;根据所述第一标准角度配置所述待处理坐标点在屏幕坐标系中的近似三维坐标;对所述近似三维坐标进行坐标向量拆分,并根据坐标向量拆分结果选定目标兴趣点与所述待处理坐标点进行动态绑定。The coordinate point dynamic binding module can be used to configure the normal vector angle between the world coordinates and the screen coordinates corresponding to the coordinate point to be processed in the target model as a first standard angle; configure the approximate three-dimensional coordinates of the coordinate point to be processed in the screen coordinate system according to the first standard angle; perform coordinate vector splitting on the approximate three-dimensional coordinates, and select the target interest point for dynamic binding with the coordinate point to be processed according to the coordinate vector splitting result.
本示例实施方式中,所述装置还可以包括:显示控制模块。In this example implementation, the device may further include: a display control module.
所述显示控制模块可以用于响应于对所述目标模型的显示控制操作,实时获取虚拟摄像机的当前视场范围;对当前焦点跟随的兴趣点在x时间轴、y时间轴、z时间轴进行绑定的并发控制,以用于保持所述目标兴趣点的显示位置。The display control module can be used to respond to the display control operation of the target model and obtain the current field of view of the virtual camera in real time; and perform concurrent control on binding the interest point followed by the current focus on the x-time axis, y-time axis, and z-time axis to maintain the display position of the target interest point.
本示例实施方式中,所述装置还可以包括:切换控制模块。In this example implementation, the device may further include: a switching control module.
所述切换控制模块可以用于利用角度控制器控制各维度时间轴的切换。The switching control module can be used to control the switching of the time axes of each dimension using an angle controller.
本示例实施方式中,所述装置还可以包括:数据加载模块。In this example implementation, the device may further include: a data loading module.
所述数据加载模块可以用于根据所述目标模型中的各兴趣点对应的类型划分兴趣点集合;对各所述兴趣点集合配置为所述目标模型主场景对应的子关卡,以用于根据对所述目标模型的显示控制,通过关卡流送对各所述兴趣点集合进行加载。The data loading module can be used to divide the interest point sets according to the types corresponding to each interest point in the target model; configure each interest point set as a sub-level corresponding to the main scene of the target model, so as to load each interest point set through level streaming according to the display control of the target model.
本示例实施方式中,所述装置还可以包括:路径规划模块。In this example implementation, the device may further include: a path planning module.
所述路径规划模块可以用于通过预设参数接口获取样条线配置参数,根据所述样条线配置参数进行动画路径规划;以及对虚拟对象的骨骼体数组与样条线进行绑定,以用于将所述虚拟对象按照已规划的路径进行移动。The path planning module can be used to obtain spline configuration parameters through a preset parameter interface, perform animation path planning according to the spline configuration parameters; and bind the skeleton array of the virtual object to the spline to move the virtual object along the planned path.
本示例实施方式中,所述装置还可以包括:显示效果修正模块。In this example implementation, the device may further include: a display effect correction module.
所述显示效果修正模块可以用于响应于对所述目标模型的显示控制操作,识别输入设备的设备类型;确定输入设备为第一类型设备时,根据虚拟相机当前的中轴线与所述输入设备的相对偏移参数之间的夹角方向,确定正交向量参数;或者,在确定输入设备为第二类型设备时,确定所述显示控制操作对应的加/减速度参数,并结合加/减速度参数的执行时长以及预设步长确定偏移参数;根据所述偏移参数修正所述目标模型的显示效果。The display effect correction module may be used to identify the device type of the input device in response to a display control operation on the target model; when the input device is determined to be a first-type device, determine an orthogonal vector parameter based on the direction of the angle between the virtual camera's current central axis and the relative offset parameter of the input device; or, when the input device is determined to be a second-type device, determine the acceleration/deceleration parameter corresponding to the display control operation, and determine an offset parameter by combining the execution duration of the acceleration/deceleration parameter with a preset step length; and correct the display effect of the target model according to the offset parameter.
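The two device-type branches described above can be sketched numerically as follows. The parameter names and the exact formulas (orthogonal vector from the axis angle; acceleration × duration × step) are assumptions, since the disclosure states the inputs but not closed-form expressions.

```python
import math

def compute_offset(device_type, *, axis_angle_deg=None, rel_offset=None,
                   accel=None, duration=None, step=0.1):
    """Offset sketch: first-type devices (e.g. a pointer) yield a vector
    orthogonal to the camera's central axis, scaled by the relative
    offset; second-type devices (e.g. keys) accumulate acceleration
    over the press duration, scaled by a preset step."""
    if device_type == "first":
        theta = math.radians(axis_angle_deg)
        return (-math.sin(theta) * rel_offset, math.cos(theta) * rel_offset)
    if device_type == "second":
        return accel * duration * step
    raise ValueError("unknown device type")

mouse_offset = compute_offset("first", axis_angle_deg=0.0, rel_offset=2.0)
key_offset = compute_offset("second", accel=4.0, duration=0.5, step=0.1)
```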
本示例实施方式中,所述装置还可以包括:信令处理模块。In this example implementation, the device may further include: a signaling processing module.
所述信令处理模块可以用于通过信令服务器接收终端设备的服务请求;其中,所述服务请求包括目标兴趣点的标识信息、任务信息和终端设备标识;处理所述服务请求以获取所述目标兴趣点对应的流媒体数据,并将所述目标兴趣点对应的流媒体数据通过信令服务器推送至所述终端设备。The signaling processing module may be used to receive a service request from a terminal device through a signaling server, where the service request includes identification information of a target point of interest, task information, and a terminal device identification; process the service request to obtain streaming media data corresponding to the target point of interest; and push that streaming media data to the terminal device through the signaling server.
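The request/push flow above can be sketched as a plain lookup-and-route step (an assumption-laden illustration; `media_store` and the response shape are invented here, and a real signaling server would speak an actual streaming protocol):

```python
def handle_service_request(request, media_store):
    """Resolve a terminal's service request into streaming-media data and
    a push target, mirroring the signaling flow described above.
    Request fields follow the description: POI id, task info, terminal id."""
    poi_id = request["poi_id"]
    stream = media_store.get(poi_id)
    if stream is None:
        return {"status": "error", "reason": "unknown point of interest"}
    return {"status": "ok",
            "push_to": request["terminal_id"],
            "task": request["task"],
            "stream": stream}

store = {"poi-7": b"...media payload..."}
resp = handle_service_request(
    {"poi_id": "poi-7", "task": "inspect", "terminal_id": "term-42"}, store)
```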
上述的模型构建装置230中各模块的具体细节已经在对应的模型构建方法中进行了详细的描述,因此此处不再赘述。The specific details of each module in the above-mentioned model building device 230 have been described in detail in the corresponding model building method, so they will not be repeated here.
应当注意,尽管在上文详细描述中提及了用于动作执行的设备的若干模块或者单元,但是这种划分并非强制性的。实际上,根据本公开的实施方式,上文描述的两个或更多模块或者单元的特征和功能可以在一个模块或者单元中具体化。反之,上文描述的一个模块或者单元的特征和功能可以进一步划分为由多个模块或者单元来具体化。It should be noted that, although several modules or units of the device for action execution are mentioned in the above detailed description, this division is not mandatory. In fact, according to the embodiments of the present disclosure, the features and functions of two or more modules or units described above can be embodied in one module or unit. Conversely, the features and functions of one module or unit described above can be further divided into multiple modules or units to be embodied.
此外,尽管在附图中以特定顺序描述了本公开中方法的各个步骤,但是,这并非要求或者暗示必须按照该特定顺序来执行这些步骤,或是必须执行全部所示的步骤才能实现期望的结果。附加的或备选的,可以省略某些步骤,将多个步骤合并为一个步骤执行,以及/或者将一个步骤分解为多个步骤执行等。In addition, although the steps of the method in the present disclosure are described in a specific order in the drawings, this does not require or imply that the steps must be performed in this specific order, or that all the steps shown must be performed to achieve the desired results. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps, etc.
在本公开的示例性实施例中,还提供了一种能够实现上述方法的电子设备。In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
所属技术领域的技术人员能够理解,本公开的各个方面可以实现为系统、方法或程序产品。因此,本公开的各个方面可以具体实现为以下形式,即:完全的硬件实施方式、完全的软件实施方式(包括固件、微代码等),或硬件和软件方面结合的实施方式,这里可以统称为"电路"、"模块"或"系统"。Those skilled in the art will appreciate that various aspects of the present disclosure may be implemented as systems, methods or program products. Therefore, various aspects of the present disclosure may be specifically implemented in the following forms, namely: complete hardware implementation, complete software implementation (including firmware, microcode, etc.), or a combination of hardware and software implementations, which may be collectively referred to herein as "circuits", "modules" or "systems".
下面参照图24来描述根据本公开的这种实施方式的电子设备1000。图24显示的电子设备1000仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。The electronic device 1000 according to this embodiment of the present disclosure is described below with reference to Fig. 24. The electronic device 1000 shown in Fig. 24 is only an example and should not bring any limitation to the functions and scope of use of the embodiment of the present disclosure.
如图24所示,电子设备1000包括中央处理单元(Central Processing Unit,CPU)1001,其可以根据存储在只读存储器(Read-Only Memory,ROM)1002中的程序或者从储存部分1008加载到随机访问存储器(Random Access Memory,RAM)1003中的程序而执行各种适当的动作和处理。在RAM 1003中,还存储有系统操作所需的各种程序和数据。CPU 1001、ROM 1002以及RAM 1003通过总线1004彼此相连。输入/输出(Input/Output,I/O)接口1005也连接至总线1004。As shown in FIG. 24 , the electronic device 1000 includes a central processing unit (CPU) 1001, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage part 1008 to a random access memory (RAM) 1003. Various programs and data required for system operation are also stored in the RAM 1003. The CPU 1001, the ROM 1002, and the RAM 1003 are connected to each other via a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
以下部件连接至I/O接口1005:包括键盘、鼠标等的输入部分1006;包括诸如阴极射线管(Cathode Ray Tube,CRT)、液晶显示器(Liquid Crystal Display,LCD)等以及扬声器等的输出部分1007;包括硬盘等的储存部分1008;以及包括诸如LAN(Local Area Network,局域网)卡、调制解调器等的网络接口卡的通信部分1009。通信部分1009经由诸如因特网的网络执行通信处理。驱动器1010也根据需要连接至I/O接口1005。可拆卸介质1011,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器1010上,以便于从其上读出的计算机程序根据需要被安装入储存部分1008。The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, etc.; an output section 1007 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage section 1008 including a hard disk, etc.; and a communication section 1009 including a network interface card such as a LAN (Local Area Network) card, a modem, etc. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 1010 as needed so that a computer program read therefrom is installed into the storage section 1008 as needed.
特别地,根据本发明的实施例,下文参考流程图描述的过程可以被实现为计算机软件程序。例如,本发明的实施例包括一种计算机程序产品,其包括承载在存储介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分1009从网络上被下载和安装,和/或从可拆卸介质1011被安装。在该计算机程序被中央处理单元(CPU)1001执行时,执行本申请的系统中限定的各种功能。In particular, according to an embodiment of the present invention, the process described below with reference to the flowchart can be implemented as a computer software program. For example, an embodiment of the present invention includes a computer program product, which includes a computer program carried on a storage medium, and the computer program includes program code for executing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication part 1009, and/or installed from the removable medium 1011. When the computer program is executed by the central processing unit (CPU) 1001, the various functions defined in the system of the present application are executed.
具体来说,上述的电子设备可以是手机、平板电脑或者笔记本电脑等智能移动电子设备。或者,上述的电子设备也可以是台式电脑等智能电子设备。Specifically, the electronic device may be a smart mobile electronic device such as a mobile phone, a tablet computer or a laptop computer, or may be a smart electronic device such as a desktop computer.
需要说明的是,本发明实施例所示的存储介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、闪存、光纤、便携式紧凑磁盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本发明中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本发明中,计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何存储介质,该存储介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。存储介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、有线等等,或者上述的任意合适的组合。It should be noted that the storage medium shown in the embodiment of the present invention may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or component, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present invention, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, device or component. In the present invention, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any storage medium other than a computer-readable storage medium, which may send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device. The program code contained on the storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the above.
附图中的流程图和框图,图示了按照本发明各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,上述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图或流程图中的每个方框、以及框图或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。The flowchart and block diagram in the accompanying drawings illustrate the possible architecture, functions and operations of the system, method and computer program product according to various embodiments of the present invention. In this regard, each box in the flowchart or block diagram can represent a module, a program segment, or a part of code, which contains one or more executable instructions for realizing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes can also occur in a different order from the order marked in the accompanying drawings. For example, two boxes shown in succession can actually be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved. It should also be noted that each box in the block diagram or flowchart, and the combination of boxes in the block diagram or flowchart, can be implemented with a dedicated hardware-based system that performs a specified function or operation, or can be implemented with a combination of dedicated hardware and computer instructions.
描述于本发明实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现,所描述的单元也可以设置在处理器中。其中,这些单元的名称在某种情况下并不构成对该单元本身的限定。The units involved in the embodiments of the present invention may be implemented by software or hardware, and the units described may also be arranged in a processor. The names of these units do not, in some cases, limit the units themselves.
需要说明的是,作为另一方面,本申请还提供了一种存储介质,该存储介质可以是电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。上述存储介质承载有一个或者多个程序,当上述一个或者多个程序被一个电子设备执行时,使得该电子设备实现如下述实施例中所述的方法。例如,所述的电子设备可以实现如图1所示的各个步骤。It should be noted that, as another aspect, the present application also provides a storage medium, which may be included in an electronic device; or may exist independently without being assembled into the electronic device. The above storage medium carries one or more programs, and when the above one or more programs are executed by an electronic device, the electronic device implements the method described in the following embodiments. For example, the electronic device may implement the steps shown in FIG1.
此外,上述附图仅是根据本发明示例性实施例的方法所包括的处理的示意性说明,而不是限制目的。易于理解,上述附图所示的处理并不表明或限制这些处理的时间顺序。另外,也易于理解,这些处理可以是例如在多个模块中同步或异步执行的。In addition, the above-mentioned figures are only schematic illustrations of the processes included in the method according to an exemplary embodiment of the present invention, and are not intended to be limiting. It is easy to understand that the processes shown in the above-mentioned figures do not indicate or limit the time sequence of these processes. In addition, it is also easy to understand that these processes can be performed synchronously or asynchronously, for example, in multiple modules.
本领域技术人员在考虑说明书及实践这里公开的发明后,将容易想到本公开的其他实施例。本申请旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由权利要求指出。Those skilled in the art will readily appreciate other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow the general principles of the present disclosure and include common knowledge or customary technical means in the art that are not disclosed in the present disclosure. The specification and examples are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the claims.

Claims (16)

  1. 一种模型构建方法,其特征在于,所述方法包括:A model building method, characterized in that the method comprises:
    由目标数据源获取基础地理信息数据并进行立体化处理,以获取初始模型;并对所述初始模型进行预处理,获取第一阶段模型;Obtaining basic geographic information data from a target data source and performing three-dimensional processing to obtain an initial model; and preprocessing the initial model to obtain a first-stage model;
    对所述第一阶段模型进行非业务逻辑绑定,以及对所述第一阶段模型进行业务逻辑绑定,以生成建模对象对应的目标模型;Performing non-business logic binding on the first-stage model and performing business logic binding on the first-stage model to generate a target model corresponding to the modeling object;
    对所述目标模型进行流媒体绑定,以用于将目标模型对应的流媒体数据推送至对应的预设终端。The target model is bound with streaming media to push streaming media data corresponding to the target model to a corresponding preset terminal.
  2. 根据权利要求1所述的模型构建方法,其特征在于,所述由目标数据源获取基础地理信息数据并进行立体化处理,以获取初始模型,包括:The model building method according to claim 1 is characterized in that the step of obtaining basic geographic information data from a target data source and performing three-dimensional processing to obtain an initial model comprises:
    向目标数据源获取基础地理信息数据,并按预设规则对所述基础地理信息数据进行筛选以获取层次数据;Obtaining basic geographic information data from a target data source, and filtering the basic geographic information data according to preset rules to obtain hierarchical data;
    对所述层次数据执行建模规则集合,以获取所述初始模型。A set of modeling rules is executed on the hierarchical data to obtain the initial model.
  3. 根据权利要求1所述的模型构建方法,其特征在于,所述非业务逻辑包括模型发光逻辑;The model building method according to claim 1, characterized in that the non-business logic includes model luminescence logic;
    所述对所述第一阶段模型进行非业务逻辑绑定,包括:The non-business logic binding of the first-stage model includes:
    在所述第一阶段模型中根据模型的纹理和材质确定待处理对象;In the first stage model, the object to be processed is determined according to the texture and material of the model;
    对所述待处理对象计算对应的漫反射光照系数、镜面反射光照系数以及距离场参数;Calculating corresponding diffuse reflection illumination coefficient, specular reflection illumination coefficient and distance field parameter for the object to be processed;
    根据所述漫反射光照系数、镜面反射光照系数以及距离场参数确定对应的光照混合系数;Determine a corresponding lighting mixing coefficient according to the diffuse reflection lighting coefficient, the specular reflection lighting coefficient and the distance field parameter;
    将所述光照混合系数配置为所述待处理对象的基础发光参数,并基于所述基础发光参数根据所述待处理对象对应的位置按预设比例配置对应的实际发光系数。The illumination mixing coefficient is configured as a basic luminous parameter of the object to be processed, and based on the basic luminous parameter, a corresponding actual luminous coefficient is configured according to a preset ratio according to a position corresponding to the object to be processed.
  4. 根据权利要求3所述的模型构建方法,其特征在于,计算所述待处理对象对应的漫反射光照系数,包括:The model building method according to claim 3 is characterized in that calculating the diffuse reflection illumination coefficient corresponding to the object to be processed comprises:
    结合环境光强度、材质对环境光的反射系数,确定漫反射体与环境光交互反射的第一光强参数;Determine the first light intensity parameter of the interactive reflection between the diffuse reflector and the ambient light by combining the ambient light intensity and the reflection coefficient of the material to the ambient light;
    结合点光源强度、材质对环境光的反射系数、入射光方向与顶点法线的夹角,确定漫反射体与方向光交互反射的第二光强参数;Combining the point light source intensity, the material's reflection coefficient for ambient light, and the angle between the incident light direction and the vertex normal, determine the second light intensity parameter of the interactive reflection between the diffuse reflector and the directional light;
    根据所述第一光强参数、第二光强参数确定所述漫反射光照系数。The diffuse reflection illumination coefficient is determined according to the first light intensity parameter and the second light intensity parameter.
  5. 根据权利要求3所述的模型构建方法,其特征在于,计算所述待处理对象对应的镜面反射光照系数,包括:The model building method according to claim 3 is characterized in that calculating the specular reflection illumination coefficient corresponding to the object to be processed comprises:
    结合镜面反射系数、点光源强度、高光指数、第一光线方向参数,确定初始镜面系数;Determine the initial mirror coefficient by combining the mirror reflection coefficient, point light source intensity, highlight index, and first light direction parameter;
    利用第二光线方向参数对所述初始镜面系数进行修正,获取所述镜面反射光照系数。The initial mirror coefficient is corrected using the second light direction parameter to obtain the mirror reflection illumination coefficient.
  6. 根据权利要求3所述的模型构建方法,其特征在于,所述根据所述漫反射光照系数、镜面反射光照系数以及距离场参数确定对应的光照混合系数,包括:The model construction method according to claim 3, characterized in that the step of determining the corresponding illumination mixing coefficient according to the diffuse reflection illumination coefficient, the specular reflection illumination coefficient and the distance field parameter comprises:
    基于所述待处理对象中任意两个样本点对应的夹角,计算所述待处理对象对应的混合角度参数;Calculating a mixing angle parameter corresponding to the object to be processed based on the included angle corresponding to any two sample points in the object to be processed;
    利用所述混合角度参数,结合所述漫反射光照系数、镜面反射光照系数以及距离场参数确定对应的光照混合系数。The mixing angle parameter is used to determine a corresponding lighting mixing coefficient in combination with the diffuse reflection lighting coefficient, the specular reflection lighting coefficient and the distance field parameter.
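The lighting pipeline of claims 3 to 6 resembles a classical Phong-style model. A minimal numerical sketch follows; the exact mixing formula, the quadratic-cosine weight and all parameter names are assumptions for illustration, as the claims state the inputs but not closed-form expressions.

```python
import math

def diffuse_coeff(ia, ka, ip, kd, incident_deg):
    """Claim 4 sketch: ambient term plus a point-light term weighted by
    the cosine of the angle between incident light and vertex normal."""
    return ia * ka + ip * kd * max(0.0, math.cos(math.radians(incident_deg)))

def specular_coeff(ks, ip, shininess, cos_rv):
    """Claim 5 sketch: mirror term with a highlight exponent; cos_rv is
    the reflected-light / view-direction factor."""
    return ks * ip * max(0.0, cos_rv) ** shininess

def mix_coeff(diffuse, specular, distance_field, blend_deg):
    """Claim 6 sketch: blend diffuse, specular and the distance-field
    term via a mixing angle derived from two sample points."""
    w = math.cos(math.radians(blend_deg)) ** 2  # assumed weighting form
    return w * diffuse + (1.0 - w) * specular + distance_field

d = diffuse_coeff(ia=0.2, ka=0.5, ip=1.0, kd=0.8, incident_deg=60.0)
s = specular_coeff(ks=0.3, ip=1.0, shininess=8, cos_rv=0.9)
m = mix_coeff(d, s, distance_field=0.05, blend_deg=30.0)
```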
  7. 根据权利要求1所述的模型构建方法,其特征在于,所述方法还包括:The model building method according to claim 1, characterized in that the method further comprises:
    将所述目标模型中待处理坐标点对应的世界坐标与屏幕坐标的法向量夹角配置为第一标准角度;Configure the normal vector angle between the world coordinates and the screen coordinates corresponding to the coordinate point to be processed in the target model as a first standard angle;
    根据所述第一标准角度配置所述待处理坐标点在屏幕坐标系中的近似三维坐标;According to the first standard angle, the approximate three-dimensional coordinates of the coordinate point to be processed in the screen coordinate system are configured;
    对所述近似三维坐标进行坐标向量拆分,并根据坐标向量拆分结果选定目标兴趣点与所述待处理坐标点进行动态绑定。The approximate three-dimensional coordinates are split into coordinate vectors, and a target interest point is selected according to the coordinate vector splitting result for dynamic binding with the coordinate point to be processed.
  8. 根据权利要求1或7所述的模型构建方法,其特征在于,所述方法还包括:The model building method according to claim 1 or 7, characterized in that the method further comprises:
    响应于对所述目标模型的显示控制操作,实时获取虚拟摄像机的当前视场范围;In response to a display control operation on the target model, obtaining a current field of view of the virtual camera in real time;
    对当前焦点跟随的兴趣点在x时间轴、y时间轴、z时间轴进行绑定的并发控制,以用于保持所述目标兴趣点的显示位置。The interest point followed by the current focus is bound to the x-time axis, the y-time axis, and the z-time axis through concurrent control to maintain the display position of the target interest point.
  9. 根据权利要求8所述的模型构建方法,其特征在于,所述方法还包括:The model building method according to claim 8, characterized in that the method further comprises:
    利用角度控制器控制各维度时间轴的切换。Use the angle controller to control the switching of the time axis in each dimension.
  10. 根据权利要求1或7所述的模型构建方法,其特征在于,所述方法还包括:The model building method according to claim 1 or 7, characterized in that the method further comprises:
    根据所述目标模型中的各兴趣点对应的类型划分兴趣点集合;Dividing the interest point set according to the type corresponding to each interest point in the target model;
    将各所述兴趣点集合配置为所述目标模型主场景对应的子关卡,以用于根据对所述目标模型的显示控制,通过关卡流送对各所述兴趣点集合进行加载。Configure each point-of-interest set as a sub-level corresponding to the main scene of the target model, so that each set is loaded through level streaming according to display control of the target model.
  11. 根据权利要求1所述的模型构建方法,其特征在于,所述方法还包括:The model building method according to claim 1, characterized in that the method further comprises:
    通过预设参数接口获取样条线配置参数,根据所述样条线配置参数进行动画路径规划;以及Obtaining spline configuration parameters through a preset parameter interface, and performing animation path planning according to the spline configuration parameters; and
    对虚拟对象的骨骼体数组与样条线进行绑定,以用于将所述虚拟对象按照已规划的路径进行移动。The skeleton volume array of the virtual object is bound to the spline line to move the virtual object according to the planned path.
  12. 根据权利要求1所述的模型构建方法,其特征在于,所述方法还包括:The model building method according to claim 1, characterized in that the method further comprises:
    响应于对所述目标模型的显示控制操作,识别输入设备的设备类型;identifying a device type of an input device in response to a display control operation on the target model;
    确定输入设备为第一类型设备时,根据虚拟相机当前的中轴线与所述输入设备的相对偏移参数之间的夹角方向,确定正交向量参数;或者,When the input device is determined to be a first-type device, determining an orthogonal vector parameter based on the direction of the angle between the virtual camera's current central axis and the relative offset parameter of the input device; or,
    在确定输入设备为第二类型设备时,确定所述显示控制操作对应的加/减速度参数;并结合加/减速度参数的执行时长,以及预设步长确定偏移参数;When it is determined that the input device is a second type device, determining an acceleration/deceleration parameter corresponding to the display control operation; and determining an offset parameter in combination with the execution time of the acceleration/deceleration parameter and a preset step length;
    根据所述偏移参数修正所述目标模型的显示效果。The display effect of the target model is modified according to the offset parameter.
  13. 根据权利要求1所述的模型构建方法,其特征在于,所述方法还包括:The model building method according to claim 1, characterized in that the method further comprises:
    通过信令服务器接收终端设备的服务请求;其中,所述服务请求包括目标兴趣点的标识信息、任务信息和终端设备标识;Receiving a service request from a terminal device through a signaling server; wherein the service request includes identification information of a target point of interest, task information, and a terminal device identification;
    处理所述服务请求以获取所述目标兴趣点对应的流媒体数据,并将所述目标兴趣点对应的流媒体数据通过信令服务器推送至所述终端设备。The service request is processed to obtain streaming media data corresponding to the target point of interest, and the streaming media data corresponding to the target point of interest is pushed to the terminal device through a signaling server.
  14. 一种模型构建装置,其特征在于,所述装置包括:A model building device, characterized in that the device comprises:
    第一阶段模型计算模块,用于由目标数据源获取基础地理信息数据并进行立体化处理,以获取初始模型;并对所述初始模型进行预处理,获取第一阶段模型;The first stage model calculation module is used to obtain basic geographic information data from the target data source and perform three-dimensional processing to obtain an initial model; and pre-process the initial model to obtain a first stage model;
    目标模型计算模块,用于对所述第一阶段模型进行非业务逻辑绑定,以及对所述第一阶段模型进行业务逻辑绑定,以生成建模对象对应的目标模型;A target model calculation module, used for performing non-business logic binding on the first-stage model and performing business logic binding on the first-stage model to generate a target model corresponding to the modeling object;
    流媒体数据处理模块,用于对所述目标模型进行流媒体绑定,以用于将目标模型对应的流媒体数据推送至对应的预设终端。The streaming media data processing module is used to perform streaming media binding on the target model so as to push the streaming media data corresponding to the target model to the corresponding preset terminal.
  15. 一种存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至13中任一项所述的模型构建方法。A storage medium having a computer program stored thereon, wherein when the computer program is executed by a processor, the model building method according to any one of claims 1 to 13 is implemented.
  16. 一种电子设备,其特征在于,包括:An electronic device, comprising:
    处理器;以及Processor; and
    存储器,用于存储所述处理器的可执行指令;A memory, configured to store executable instructions of the processor;
    其中,所述处理器配置为经由执行所述可执行指令来执行权利要求1至13中任一项所述的模型构建方法。The processor is configured to perform the model building method according to any one of claims 1 to 13 by executing the executable instructions.
PCT/CN2022/138339 2022-12-12 2022-12-12 Model construction method and apparatus, storage medium, and electronic device WO2024124370A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/138339 WO2024124370A1 (en) 2022-12-12 2022-12-12 Model construction method and apparatus, storage medium, and electronic device


Publications (1)

Publication Number Publication Date
WO2024124370A1

Family

ID=91484224


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070156742A1 (en) * 2005-12-30 2007-07-05 Jorge Gonzalez Visual modeling method and apparatus
CN109903366A (en) * 2019-03-13 2019-06-18 网易(杭州)网络有限公司 The rendering method and device of dummy model, storage medium and electronic equipment
CN112755523A (en) * 2021-01-12 2021-05-07 网易(杭州)网络有限公司 Target virtual model construction method and device, electronic equipment and storage medium
CN114863002A (en) * 2022-05-25 2022-08-05 Oppo广东移动通信有限公司 Virtual image generation method and device, terminal equipment and computer readable medium
CN115311414A (en) * 2022-08-11 2022-11-08 北京百度网讯科技有限公司 Live-action rendering method and device based on digital twinning and related equipment

