CN109493431B - 3D model data processing method, device and system - Google Patents

3D model data processing method, device and system Download PDF

Info

Publication number
CN109493431B
CN109493431B (granted publication of application CN201710819123.1A)
Authority
CN
China
Prior art keywords
model
rendering
characteristic information
rendering result
shooting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710819123.1A
Other languages
Chinese (zh)
Other versions
CN109493431A (en)
Inventor
张哲
胡晓航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority: CN201710819123.1A
Priority: PCT/CN2018/103941 (WO2019052371A1)
Publication of CN109493431A
Application granted
Publication of CN109493431B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Abstract

Embodiments of the present application disclose a 3D model data processing method, device and system. The system comprises: a 3D model creation end, configured to create a first 3D model according to a data format required by a first rendering engine, obtain characteristic information of a rendering result corresponding to the first 3D model, and submit the characteristic information to a server; the server, configured to perform 3D reconstruction according to the characteristic information to obtain a second 3D model in a data format required by a second rendering engine; and a client, in which the second rendering engine is deployed, configured to obtain the second 3D model from the server and render it. The scheme of the embodiments is easy to popularize and apply, can attract more users, and increases the number and richness of available 3D models.

Description

3D model data processing method, device and system
Technical Field
The application relates to the technical field of 3D models, in particular to a 3D model data processing method, device and system.
Background
In 3D/AR/VR application scenarios, 3D models are important materials. Taking the rapidly developing online sales systems as an example, a massive number of commodities need to be displayed on the Internet. In the traditional mode, commodities can only be displayed as two-dimensional photos, but the commodity or object information conveyed to consumers in this way is not comprehensive, detailed or vivid enough, and can hardly improve consumers' purchasing desire or purchasing accuracy. It is therefore particularly important to provide specialized, personalized, realistic 3D presentations.
In the prior art, however, these needs are often difficult to meet, mainly because a great many techniques exist for creating 3D models and, correspondingly, for rendering them, and these techniques are often not interoperable. If a given technique is used to create a 3D model, the corresponding rendering engine must be used to render it; otherwise rendering cannot be completed. Applied to an online sales system, this means the following: if a client uses a certain rendering engine (usually only one rendering engine is deployed in a client), every 3D model must be built with a technique that this engine supports, or it cannot be rendered normally. Yet the commodities in an online sales system are numerous and may come from many different merchants or sellers. Requiring every commodity that needs a 3D display to be modeled with the same technique would certainly limit the popularization of the application, leading to problems such as a small user base and an insufficient number of 3D models.
For the above reasons, the prior art typically requires a merchant or seller user to send a sample of the commodity to a background technician of the system; the technician then creates a 3D model for each commodity with a unified technology and renders it with the corresponding rendering engine. However, different goods may differ in structure and other characteristics, and may need their 3D models created with different techniques to better demonstrate the 3D effect. Moreover, these goods may come from different merchant or seller users who may wish to embody personalized designs in their 3D models. The unified approach clearly fails to meet these various personalized needs when creating 3D models.
Disclosure of Invention
The present application provides a 3D model data processing method, device and system that are easy to popularize and apply, can attract more users, and increase the number and richness of available 3D models.
The application provides the following scheme:
a 3D model processing system, comprising:
the 3D model creation end is used for creating a first 3D model according to a data format required by a first rendering engine, obtaining characteristic information of a rendering result corresponding to the first 3D model and submitting the characteristic information to the server;
The server is used for carrying out 3D reconstruction according to the characteristic information to obtain a second 3D model with a data format required by a second rendering engine;
and the client is provided with the second rendering engine, and is used for obtaining the second 3D model from the server and rendering the second 3D model.
A 3D model data processing method, comprising:
creating a first 3D model according to a data format required by a first rendering engine;
obtaining characteristic information of a rendering result corresponding to the first 3D model;
and submitting the characteristic information to a server, and performing 3D reconstruction by the server according to the characteristic information to obtain a second 3D model with a data format required by a second rendering engine.
A 3D model data processing method, comprising:
the method comprises the steps that a server receives characteristic information of a rendering result corresponding to a first 3D model, wherein the first 3D model is created according to a data format required by a first rendering engine;
3D reconstruction is carried out according to the characteristic information, and a second 3D model with a data format required by a second rendering engine is obtained;
and saving the second 3D model for rendering the second 3D model to a client deployed with the second rendering engine.
A data object information presentation method, comprising:
the client sends a request for obtaining the 3D model of the target data object to the server; wherein a second rendering engine is deployed at the client;
receiving a second 3D model returned by the server, wherein the second 3D model is reconstructed, according to the characteristic information of the rendering result corresponding to a first 3D model, into the data format required by the second rendering engine, and the first 3D model is a 3D model created according to a data format required by a first rendering engine;
rendering the second 3D model with the second rendering engine.
A 3D model data processing apparatus, comprising:
a first model creation unit for creating a first 3D model in accordance with a data format required by the first rendering engine;
the characteristic information obtaining unit is used for obtaining characteristic information of a rendering result corresponding to the first 3D model;
and the characteristic information submitting unit is used for submitting the characteristic information to a server, and the server performs 3D reconstruction according to the characteristic information to obtain a second 3D model with a data format required by a second rendering engine.
A 3D model data processing apparatus, applied to a server, comprising:
a characteristic information receiving unit, configured to receive characteristic information of a rendering result corresponding to a first 3D model, wherein the first 3D model is created according to a data format required by a first rendering engine;
the 3D reconstruction unit is used for carrying out 3D reconstruction according to the characteristic information to obtain a second 3D model with a data format required by a second rendering engine;
and the storage unit is used for storing the second 3D model and is used for providing the second 3D model to a client side deployed with the second rendering engine to render the second 3D model.
A data object information display device, applied to a client, comprising:
a request sending unit, configured to send a request for obtaining a 3D model of a target data object to a server; wherein a second rendering engine is deployed at the client;
a 3D model receiving unit, configured to receive a second 3D model returned by the server, wherein the second 3D model is reconstructed, according to the characteristic information of the rendering result corresponding to a first 3D model, into the data format required by the second rendering engine, and the first 3D model is a 3D model created according to a data format required by a first rendering engine;
and the rendering unit is used for rendering the second 3D model by utilizing the second rendering engine.
A 3D model processing system for data objects, comprising:
the 3D model creation end is used for creating a first 3D model for the specified data object according to the data format required by the first rendering engine, obtaining the characteristic information of the rendering result corresponding to the first 3D model and submitting the characteristic information to the server;
the server is used for carrying out 3D reconstruction according to the characteristic information to obtain a second 3D model with a data format required by a second rendering engine, and storing the association relationship between the identification information of the data object and the second 3D model;
and the client is provided with the second rendering engine, and is used for obtaining the second 3D model from the server and rendering the second 3D model when receiving a 3D access request of the data object.
A computer system, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory for storing program instructions that, when read for execution by the one or more processors, perform the operations of:
receiving characteristic information of a rendering result corresponding to a first 3D model, wherein the first 3D model is created according to a data format required by a first rendering engine;
3D reconstruction is carried out according to the characteristic information, and a second 3D model with a data format required by a second rendering engine is obtained;
and saving the second 3D model for rendering the second 3D model to a client deployed with the second rendering engine.
According to a specific embodiment provided by the application, the application discloses the following technical effects:
In the embodiments of the present application, the creation end may use any technique to create the first 3D model; instead of submitting the first 3D model directly to the server, it first obtains characteristic information of the rendering result of the first 3D model and submits that characteristic information. On the server side, the 3D model is reconstructed from the characteristic information, so that the reconstructed second 3D model has the data format required by the second rendering engine. In this way, 3D models created with various techniques can be converted into the data format required by a particular rendering engine. Creators of first 3D models are thus not restricted in their choice of modeling technique: they can select a suitable technique according to their own needs or the characteristics of the object, and as long as they perform the characteristic-information acquisition before submission, they can access the system; through the server's data-format conversion, the model is reconstructed into a format that the rendering engine deployed in the client can recognize and render. The scheme is therefore easier to popularize and apply, can attract more users, and increases the number and richness of available 3D models.
Of course, not all of the above-described advantages need be achieved at the same time in practicing any one of the products of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a 3D reconstruction mode provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a background image setting mode in a 3D reconstruction process according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a system provided by an embodiment of the present application;
FIG. 4 is a flow chart of a first method provided by an embodiment of the present application;
FIG. 5 is a flow chart of a second method provided by an embodiment of the present application;
FIG. 6 is a flow chart of a third method provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a first apparatus provided in an embodiment of the present application;
FIG. 8 is a schematic diagram of a second apparatus provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of a third apparatus provided in an embodiment of the present application;
Fig. 10 is a schematic diagram of a computer system provided in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is made clearly and fully with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application fall within the scope of protection of the present application.
Embodiments of the present application provide a data format that expresses a 3D model indirectly. In this way, when a fixed rendering engine is already deployed in the client, a 3D model can be created with any technique, while all rendering operations are completed by that fixed engine. Specifically, a 3D model may be created for an item with any technique, after which the model is converted into the data format required by the specific rendering engine. During this conversion, characteristic information of the rendering result corresponding to the 3D model is obtained first; the server then reconstructs the 3D model from this characteristic information according to a certain algorithm, so that the reconstructed model has the data format required by the rendering engine deployed in the client and can be rendered by it. That is, embodiments of the present application can convert 3D models created with various other techniques into a unified intermediate data format, and reconstruct them into whatever format a rendering engine requires.
The practical significance of the technical scheme provided by the embodiments is that the technique used to create a 3D model is not restricted. For example, in an online sales system that needs to provide 3D presentation effects for a huge number of data objects (commodity objects, store objects, etc.), a 3D model may be created for each data object using any technique. Data objects with different spatial structures can thus be modeled with whatever technique suits them; and when the data objects belong to many different first users (such as seller or merchant users), each first user can create models according to their own needs and ideas, using any 3D modeling technology, and so embody their personalized designs more freely. Afterwards, by converting the data format of such a 3D model, the model can be reconstructed so that a specific rendering engine can recognize and render it. That is, even when one type of rendering engine has been fixedly deployed in the client of a second user (a consumer or buyer user, etc.), 3D models created with various techniques can, through data-format conversion and reconstruction, be normally recognized and rendered by the engine deployed in that client.
As noted above, the operation of creating the 3D model may be performed by the owner or publisher of the data object, or by a technician within the system; after the 3D model is created, the operation of obtaining the characteristic information of its rendering result may be performed on the creator's side. To complete this acquisition, a plug-in (which may be provided by the embodiments of the present application) can be added to the rendering engines of the various existing 3D modeling technologies, and the specific processing is completed through the plug-in. To distinguish them from the rendering engine deployed in the second user's client, the rendering engine corresponding to the technology used in creating a 3D model is referred to as the "first rendering engine", and the rendering engine deployed in the second user's client as the "second rendering engine". That is, whether the first user acts as the creator, entrusts another designer to create a 3D model for the data object, or a background technician of the system creates the model for the data object specified by the first user, once the 3D model has been created with any technique, the characteristic information of its rendering result can be obtained in the corresponding first rendering engine through a pre-installed plug-in.
In a specific implementation, the procedure may be as follows. First, the first 3D model is rendered by the first rendering engine and displayed as a rendering result with a three-dimensional structure. Then, multi-angle image acquisition is performed on the rendering result to obtain a picture sequence composed of multiple pictures, and this picture sequence can serve as the characteristic information for reconstructing the 3D model. To perform the multi-angle acquisition, the rendering result may be photographed from all directions using a pre-established model. As shown in fig. 1, a spherical model may be created with the center point of the rendering result of the first 3D model as the center of the sphere, and lines of longitude and latitude drawn on the sphere at a certain angular interval, for example every 20 degrees; other interval values may of course be chosen. Each intersection of a longitude line and a latitude line serves as a shooting point, from which a shot is taken towards the center of the sphere. A picture of the rendering result is thus obtained at each shooting point, and the pictures from all shooting points form a picture sequence. The shooting at each point may be performed by a virtual camera assembly; to reconstruct the 3D model more completely and realistically, a binocular camera assembly (two lenses offset by the left/right interpupillary distance) may further be used at each shooting point, so that the picture at each point carries the viewing effect of 3D parallax, which makes 3D reconstruction more convenient.
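As an illustrative sketch only (the patent does not fix coordinate conventions, pole handling, or function names; all of those are assumptions here), the shooting-point layout on the spherical model described above can be enumerated as follows:

```python
import math

def shooting_points(step_deg=20, radius=1.0):
    """Enumerate virtual-camera positions on a sphere around the rendering result.

    Latitude and longitude lines are drawn every `step_deg` degrees; each
    intersection is a shooting point aimed back at the sphere center.
    """
    points = []
    for lat in range(-90 + step_deg, 90, step_deg):   # interior latitude lines
        for lon in range(0, 360, step_deg):           # full circle of longitudes
            phi = math.radians(lat)
            theta = math.radians(lon)
            x = radius * math.cos(phi) * math.cos(theta)
            y = radius * math.cos(phi) * math.sin(theta)
            z = radius * math.sin(phi)
            points.append((x, y, z))
    # The two poles are added as extra shooting points (an assumption;
    # the patent only mentions warp/weft intersections).
    points.append((0.0, 0.0, radius))
    points.append((0.0, 0.0, -radius))
    return points
```

With a 20-degree interval this yields 8 latitude rings of 18 points each plus the two poles, i.e. 146 camera positions, each of which would capture one picture of the rendering result.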
When the picture sequence is captured at the shooting points, different settings may further be applied to the light source, the background and so on, according to the specific rendering result of the 3D model. For example, one basic setup constructs a fully lit light source, so that the rendering result of the 3D model appears fully illuminated from every shooting point. The picture sequence obtained this way can serve as basic information: essentially every 3D model needs such a picture sequence under the full-light environment, which is used to restore the basic structure of the object and similar information.
In addition, some objects may carry special texture information, which may not be restored well from the basic information alone. In such cases, after the picture sequence under the full-light source has been generated, a structured light source can be constructed and the rendering result shot again at each shooting point, generating another group of picture sequences. This group serves as additional information which, together with the basic information, composes the converted data used for the subsequent reconstruction of the 3D model. Simple forms of structured light include dot structured light, line structured light and simple surface structured light. For example, in the embodiments of the present application, a fringe projection technique may be used to realize surface structured light: sinusoidal stripes can be generated by computer programming and projected onto the measured object by a projection device, so that the object exhibits alternating light and dark bands. The imaging results obtained with structured light better restore information such as the object's texture. Moreover, by changing the width, shape and so on of the stripes during shooting, a picture sequence carrying more information can be obtained, enabling reconstruction of a 3D model with higher fidelity.
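The sinusoidal fringe pattern mentioned above can be sketched as follows. This is an illustrative assumption: the patent does not specify resolution, period, or intensity encoding, and the function and parameter names are invented for the example.

```python
import math

def sine_stripes(width, height, period=32, horizontal=True):
    """Generate a sinusoidal fringe pattern for surface structured light.

    Returns `height` rows of `width` intensity values in [0, 255]; projecting
    such stripes onto the object produces the alternating light/dark bands
    described above. `period` is the stripe period in pixels.
    """
    img = []
    for y in range(height):
        row = []
        for x in range(width):
            t = (y if horizontal else x) / period
            # Sine mapped from [-1, 1] to [0, 1], then to 8-bit intensity
            intensity = 0.5 * (1.0 + math.sin(2.0 * math.pi * t))
            row.append(int(round(255 * intensity)))
        img.append(row)
    return img
```

Changing `period` or `horizontal` corresponds to varying the stripe width and orientation between shots, which the description notes yields picture sequences with more information.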
Furthermore, some objects may need to highlight material information. Shooting under the full-light source mainly restores the structural shape of the object, and shooting under the structured light source mainly restores its texture information, so other additional information can be obtained in other ways to better restore the material. For example, specific backgrounds may be constructed in the full-light environment. As shown in fig. 2, the four vertex colors may, for example, be red, green, blue and black; different background images are obtained through different combinations of adjacent colors, and these backgrounds have obvious boundaries that can assist in position determination. From a picture sequence with such specific background information, physical properties such as the object's material can be estimated by algorithms such as machine learning and used for better reconstruction of the 3D model.
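One plausible way to build such a four-vertex background is bilinear interpolation of the corner colors; the patent does not state the exact blending scheme, so the following is only an illustrative sketch under that assumption, with invented names:

```python
def corner_background(width, height, c00, c10, c01, c11):
    """Blend four corner colors (RGB tuples) into a background image.

    c00/c10 are the top-left/top-right corners, c01/c11 the bottom-left/
    bottom-right corners. Different corner combinations give different
    background images with known, sharp overall boundaries.
    """
    def lerp(a, b, t):
        # Linear interpolation per RGB channel
        return tuple(int(round(a[i] + (b[i] - a[i]) * t)) for i in range(3))

    img = []
    for y in range(height):
        ty = y / (height - 1) if height > 1 else 0.0
        left = lerp(c00, c01, ty)
        right = lerp(c10, c11, ty)
        row = []
        for x in range(width):
            tx = x / (width - 1) if width > 1 else 0.0
            row.append(lerp(left, right, tx))
        img.append(row)
    return img
```

Swapping which color sits at which corner produces the different adjacent-color combinations the description mentions.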
It should be noted that, in practical applications, the basic information is generally required for every 3D model, while the additional information may be determined case by case. If an article does not need to show texture or material information, the picture sequence shot under the full-light source alone, i.e. the basic information, suffices. If an object needs to highlight texture information, the basic information plus the first additional information can be used; if it needs to emphasize material information, the basic information plus the second additional information; if it needs to highlight both texture and material information, the basic information plus both kinds of additional information, and so on. In practice, further kinds of additional information can also be constructed in other ways to better restore other characteristics of the object.
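The selection logic above reduces to a simple decision; the following sketch mirrors the description's terminology (the pass names and function signature are assumptions, not part of the patent):

```python
def capture_plan(needs_texture=False, needs_material=False):
    """Decide which picture sequences to capture for a given object.

    Every model gets the full-light "basic information" pass; the
    structured-light (first additional) and special-background (second
    additional) passes are added only when texture or material detail
    must be restored.
    """
    passes = ["full_light_basic"]          # required for every 3D model
    if needs_texture:
        passes.append("structured_light")  # first additional information
    if needs_material:
        passes.append("special_background")  # second additional information
    return passes
```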
In practical applications, a preview function may further be provided during the acquisition of the characteristic information. For example, after the rendering result of a 3D model is shot under the full-light source to obtain the basic information, this basic information may be submitted to the server; the server reconstructs the 3D model and renders it with the second rendering engine, and the rendering result is returned to the creator as preview information. The creator can then judge, by observation and other means, whether the reconstructed result meets their requirements. If it does, the basic information can be directly exported as the characteristic information, confirmed and submitted in a format such as a file. Otherwise, if the display of texture is unsatisfactory, the creator can further add the first additional information; if the display of material is unsatisfactory, the second additional information; and so on.
After the creator confirms the converted information, it can be exported in the form of a file or the like and submitted to the server. The server then reconstructs the 3D model from the file according to the corresponding algorithm, producing a model in the data format required by the second rendering engine. The correspondence between each data object and its reconstructed second 3D model is stored on the server side. When the client of a second user needs to present a certain data object in 3D, the server provides the reconstructed second 3D model to that client; since this model has the data format required by the second rendering engine, the client can complete the rendering with the second rendering engine deployed in it.
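The server-side correspondence store described above can be sketched minimally as follows. This is an illustrative assumption: the patent does not specify the storage mechanism or any class names, and `reconstruct` stands in for the unspecified 3D-reconstruction algorithm.

```python
class ModelRegistry:
    """Maps a data-object identifier to its reconstructed second 3D model."""

    def __init__(self, reconstruct):
        # reconstruct: callable mapping characteristic info -> second 3D model
        self._reconstruct = reconstruct
        self._models = {}

    def submit(self, data_object_id, feature_info):
        """Called when the creation end submits confirmed characteristic info."""
        self._models[data_object_id] = self._reconstruct(feature_info)

    def fetch(self, data_object_id):
        """Called when a client requests the 3D model of a data object.

        Returns the stored second 3D model, or None if no 3D data exists.
        """
        return self._models.get(data_object_id)
```

A client request for a data object's 3D view would then reduce to a `fetch` by identifier, with the returned model rendered locally by the second rendering engine.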
From the system architecture perspective, referring to fig. 3, an embodiment of the present application may include three components: a 3D model creation end, a server, and a client. The user at the creation end may be the first user to whom the data object belongs, a designer entrusted by the first user, a background technician of the server side, etc. At the creation end, different technologies can be used to create 3D models, and in the initial state the models created with different technologies need to be rendered by different first rendering engines. In fig. 3, for example, three creation ends correspond to first rendering engines A, B and C respectively, meaning that the three creation ends used different techniques when creating their 3D models. In the embodiments of the present application, the function of obtaining the characteristic information of rendering results can be provided in first rendering engines A, B and C through plug-ins or the like, so that the characteristic information of the rendering results of 3D models created with various technologies can be obtained and then submitted to the server. The server is mainly responsible for reconstructing the 3D models into the data format required by the second rendering engine D deployed in the client. The client obtains the reconstructed 3D model from the server when a 3D display is needed and then renders it directly. The client described in the embodiments may be a client program running on a terminal device in the form of an independent App or the like, or a Web page, etc.
The following describes specific technical schemes provided in the embodiments of the present application from various different angles.
Example 1
First, as shown in fig. 3, this embodiment provides a 3D model processing system, which may include:
the 3D model creation end 301 is configured to create a first 3D model according to a data format required by a first rendering engine, obtain feature information of a rendering result corresponding to the first 3D model, and submit the feature information to a server;
the server 302 is configured to perform 3D reconstruction according to the feature information to obtain a second 3D model with a data format required by a second rendering engine;
the client 303 is configured to obtain the second 3D model from the server, and render the second 3D model, where the second rendering engine is deployed.
In a specific implementation, the 3D model creation end may be configured to: render the first 3D model through the first rendering engine; photograph the rendering result from a plurality of shooting points deployed around it, achieving full-angle coverage; obtain a picture sequence; and determine the characteristic information from that picture sequence. In a more specific implementation, a spherical model may be created with the center point of the rendering result as the center of the sphere, a number of longitude and latitude lines drawn on the sphere at preset intervals, and the intersections of these lines taken as shooting points. During shooting, multiple groups of picture sequences can be obtained by constructing different light sources and/or shooting backgrounds, each group carrying features of a different dimension, such as structural, texture or material features. For example, a first picture sequence mainly carrying the structural information of the rendering result may be obtained under a full-bright light source and white background; a second picture sequence mainly carrying its texture information may be obtained under a structured light source; a third picture sequence mainly carrying its material information may be obtained against a special background; and so on.
In practical applications, the first 3D model may correspond to a data object (such as a commodity object or a store object), in which case the 3D model creation end may also submit the corresponding data object identification information when submitting the feature information to the server. Correspondingly, the server is further configured to store the correspondence between the second 3D model and the data object identifier, and to provide the corresponding second 3D model when receiving a request from the client to access the 3D information of a specified data object.
When there are multiple data objects and the first 3D models of different data objects are created with different technologies, the feature information of the corresponding rendering results may be obtained in different first rendering engines. That is, assuming the first 3D model of some data object A is created with a technology a that requires a first rendering engine m to recognize and render it, the feature information of the corresponding rendering result can be obtained in that first rendering engine m. If the first 3D model of a data object B is created with a technology b that needs to be recognized and rendered by a first rendering engine n, the feature information of the corresponding rendering result may be obtained in that first rendering engine n, and so on. In a specific implementation, the feature acquisition function can therefore be realized in each of the various first rendering engines. To ease implementation and avoid heavy modification of the first rendering engines' code, the function can be realized in the form of a plug-in: by installing the corresponding plug-in program in each first rendering engine, that engine gains the function of acquiring features from its rendering results.
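The plug-in idea — one feature-acquisition implementation per first rendering engine behind a common interface — could be sketched as below; the interface, class, and engine names are hypothetical, not from the patent:

```python
from abc import ABC, abstractmethod

class FeatureAcquisitionPlugin(ABC):
    """Hypothetical plug-in interface installed into each first rendering
    engine to capture feature information from its rendering results."""

    @abstractmethod
    def capture(self, model_id: str) -> dict:
        """Render the model and return picture sequences keyed by feature dimension."""

class EngineMPlugin(FeatureAcquisitionPlugin):
    """Plug-in for the hypothetical first rendering engine m."""

    def capture(self, model_id: str) -> dict:
        # A real plug-in would drive engine m's renderer and camera assembly;
        # here we only return placeholder picture names for the sequence.
        return {"structure": [f"{model_id}_struct_{i}.png" for i in range(72)]}

PLUGINS = {"engine-m": EngineMPlugin()}  # one registered plug-in per engine

def acquire_features(engine: str, model_id: str) -> dict:
    """Dispatch feature acquisition to the plug-in of the given engine."""
    return PLUGINS[engine].capture(model_id)
```

Each additional first rendering engine would only need its own `FeatureAcquisitionPlugin` subclass registered in the table, leaving the engines' own code untouched.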
In summary, in the embodiment of the present application, the creation end may use any technology to create the first 3D model; instead of submitting the first 3D model directly to the server, it first obtains feature information of the rendering result of the first 3D model and submits that feature information to the server. At the server side, the 3D model can be reconstructed from the feature information, so that the reconstructed second 3D model has the data format required by the second rendering engine. In this way, 3D models created with various technologies can be converted into the data format required by a particular rendering engine. The creator of the first 3D model is not restricted in the choice of modeling technology and can select a suitable technology according to his own needs or the characteristics of the object; as long as the feature information is acquired before submission, the creator can access the system, and the server's data-format conversion reconstructs the model into a format that the rendering engine deployed in the client can recognize and render. The scheme is therefore easier to popularize and apply, can attract more users, and increases the number and richness of available 3D models.
Example two
The second embodiment corresponds to the first embodiment, and from the perspective of the creation end, a 3D model data processing method is provided, referring to fig. 4, and the method may include:
S401: creating a first 3D model according to a data format required by a first rendering engine;
The technology used to create the first 3D model is not limited; that is, it need not be restricted to a data format that the second rendering engine can recognize. There may be multiple types of first rendering engines: different creators, or even the same creator working on different data objects, may use different technologies to create 3D models, each corresponding to a different first rendering engine.
S402: obtaining characteristic information of a rendering result corresponding to the first 3D model;
The created first 3D model is not submitted to the server directly; feature acquisition is first performed on its rendering result. When obtaining the feature information of the rendering result corresponding to the first 3D model, the first rendering engine may be used to render the first 3D model to obtain the rendering result. The rendering result is then photographed from a plurality of shooting points deployed around it, capturing it from all angles; the pictures taken at the plurality of shooting points form a picture sequence, and the picture sequence is determined as the feature information. In a more specific implementation, a spherical model is created with the center point of the rendering result as the center of the sphere, a plurality of meridians and parallels are drawn on the spherical model at preset intervals, and each intersection of a meridian and a parallel is taken as a shooting point. A plurality of pictures can thus be obtained by photographing from each shooting point toward the sphere center. Of course, in practical applications, the shooting points may also be deployed in other manners. When shooting at each point, a camera assembly may be configured in the first rendering engine, and shooting is performed by this camera assembly. Such a camera assembly may be a binocular camera assembly with an interpupillary distance, so that pictures with 3D parallax information are obtained at each shooting point.
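A binocular camera assembly with an interpupillary distance can be approximated by offsetting two cameras from each shooting point, perpendicular to the viewing direction. The sketch below is illustrative only; the `ipd` value and function names are assumptions, not from the patent:

```python
import math

def binocular_pair(position, look_dir, up=(0.0, 0.0, 1.0), ipd=0.06):
    """Split one shooting point into a left/right camera pair separated by
    the interpupillary distance `ipd`, so each point yields 3D parallax."""
    lx, ly, lz = look_dir
    ux, uy, uz = up
    # right vector = normalize(look_dir x up), perpendicular to the view
    rx = ly * uz - lz * uy
    ry = lz * ux - lx * uz
    rz = lx * uy - ly * ux
    norm = math.sqrt(rx * rx + ry * ry + rz * rz) or 1.0
    rx, ry, rz = rx / norm, ry / norm, rz / norm
    px, py, pz = position
    half = ipd / 2.0
    left = (px - rx * half, py - ry * half, pz - rz * half)
    right = (px + rx * half, py + ry * half, pz + rz * half)
    return left, right

# a shooting point on the +x axis looking back at the sphere center
left, right = binocular_pair((2.0, 0.0, 0.0), (-1.0, 0.0, 0.0))
```

Both cameras keep the original viewing direction; only their positions differ by the interpupillary distance, which is what produces the parallax between the paired pictures.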
In particular, when shooting at each point, multiple groups of picture sequences may be obtained by constructing different light sources and/or shooting backgrounds, each group carrying features in a different dimension, for example, structural features, texture features, material features, and the like. Specifically, the rendering result may be photographed at the plurality of shooting points under a full-bright light source, and the resulting pictures may form a first picture sequence carrying the structural features of the rendering result corresponding to the first 3D model. If the rendering result includes texture features to be highlighted, it may be photographed at the plurality of shooting points under a structured-light source, and the resulting pictures may form a second picture sequence carrying the texture features of the rendering result corresponding to the first 3D model. If the rendering result includes material features to be highlighted, a picture with distinct adjacent-color relations and boundary information may first be constructed under a full-bright light source; with this picture as the shooting background, the rendering result is photographed at the plurality of shooting points, and the resulting pictures form a third picture sequence carrying the material features of the rendering result corresponding to the first 3D model.
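The mapping above — one light source and background per feature dimension — can be expressed as a small capture plan; the configuration names are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class CaptureConfig:
    """One shooting pass over all shooting points, producing one picture
    sequence that carries a single feature dimension."""
    name: str        # feature dimension the sequence carries
    light: str       # e.g. "full-bright" or "structured"
    background: str  # e.g. "white" or a constructed boundary picture
    pictures: list = field(default_factory=list)

def plan_capture(has_texture: bool, has_material: bool):
    """Always capture structure; add texture/material passes only when the
    rendering result has such features to highlight."""
    configs = [CaptureConfig("structure", "full-bright", "white")]
    if has_texture:
        configs.append(CaptureConfig("texture", "structured", "white"))
    if has_material:
        configs.append(CaptureConfig("material", "full-bright", "boundary-picture"))
    return configs

plans = plan_capture(has_texture=True, has_material=True)
print([p.name for p in plans])  # ['structure', 'texture', 'material']
```

Each `CaptureConfig` would be executed once over all shooting points, so the creation end ends up with one picture sequence per feature dimension, as the embodiment describes.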
In addition, as described above, the preview function may be further provided, so that the creator may intuitively feel the rendering effect corresponding to the reconstructed 3D model, and if the rendering effect does not meet the requirements in a certain aspect, more additional information may be obtained by readjusting the light source, the background, and the like.
S403: submitting the feature information to a server, where the server performs 3D reconstruction according to the feature information to obtain a second 3D model having the data format required by a second rendering engine.
After the feature information is obtained, it can be exported in the form of a file or the like and submitted to the server; the server then reconstructs the 3D model from the feature information, obtaining a second 3D model with the data format required by the second rendering engine.
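Exporting the feature information "in the form of a file" could, for instance, pack the picture sequences and a manifest into a single archive for upload; the file layout and names below are illustrative assumptions, not specified by the patent:

```python
import io
import json
import zipfile

def export_features(picture_sequences: dict, object_id: str) -> bytes:
    """Pack the picture sequences plus a manifest into one archive that
    can be submitted to the server as a single file.

    `picture_sequences` maps a feature dimension to {picture name: bytes}.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        manifest = {
            "object_id": object_id,
            "sequences": {dim: len(pics) for dim, pics in picture_sequences.items()},
        }
        zf.writestr("manifest.json", json.dumps(manifest))
        for dim, pics in picture_sequences.items():
            for name, data in pics.items():
                zf.writestr(f"{dim}/{name}", data)  # one entry per picture
    return buf.getvalue()

blob = export_features({"structure": {"p0.png": b"\x89PNG"}}, object_id="sku-123")
```

Including the data object identifier in the manifest matches the embodiment's requirement that the identification information be submitted alongside the feature information.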
In a specific implementation, the first 3D model may be created for a specified data object. In this case, when the feature information is submitted to the server, the identification information of the data object may also be submitted, so that after reconstructing the second 3D model the server can save the correspondence between the second 3D model and the data object identifier. In this way, while providing data object information, the client can also obtain the second 3D model from the server and present the 3D display effect of the corresponding data object to the user.
Example III
The third embodiment also corresponds to the first embodiment, and provides a 3D model data processing method from the perspective of a server, and referring to fig. 5, the method may include:
S501: the method comprises the steps that a server receives characteristic information of a rendering result corresponding to a first 3D model, wherein the first 3D model is created according to a data format required by a first rendering engine;
S502: performing 3D reconstruction according to the feature information to obtain a second 3D model having the data format required by a second rendering engine;
S503: saving the second 3D model, to be provided to a client deployed with the second rendering engine for rendering.
If the server belongs to a network sales system or the like, the 3D model is usually created for a specific data object, so identification information of the data object corresponding to the first 3D model may also be received from the creator. In that case, when the second 3D model is saved, the correspondence between the second 3D model and the data object identifier may be saved as well, so that the corresponding second 3D model can be returned when a client requests the 3D information of a specified data object.
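The server-side bookkeeping — saving the second 3D model keyed by the data object identifier and returning it on request — can be sketched minimally as follows (class and method names are illustrative):

```python
class ModelStore:
    """Minimal in-memory sketch of the server-side correspondence between
    data object identifiers and reconstructed second 3D models."""

    def __init__(self):
        self._models = {}  # data object identifier -> second 3D model

    def save(self, object_id: str, second_model: bytes) -> None:
        """Save the correspondence after 3D reconstruction finishes."""
        self._models[object_id] = second_model

    def lookup(self, object_id: str):
        """Return the second 3D model when a client requests the 3D
        information of the specified data object (None if absent)."""
        return self._models.get(object_id)

store = ModelStore()
store.save("sku-123", b"<second-3d-model-bytes>")
print(store.lookup("sku-123") is not None)  # True
```

A production server would of course persist this mapping in a database rather than a dictionary; the sketch only shows the save/lookup contract the embodiment relies on.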
Example IV
The fourth embodiment corresponds to the third embodiment, and provides a data object information display method from the perspective of a client, and referring to fig. 6, the method specifically may include:
S601: the client sends a request for obtaining the 3D model of the target data object to the server; wherein a second rendering engine is deployed at the client;
In a specific implementation, the client may provide the user with a data object information interface; if the server stores a corresponding 3D model for the data object, an operation option for viewing the 3D display effect of the data object may also be provided in that interface, and the user may initiate a request through it when needed. The client then requests the 3D model data of the data object from the server. Of course, in practical applications, the 3D model data may also be fetched together with the interface data of the data object, and so on.
S602: receiving a second 3D model returned by the server, where the second 3D model is reconstructed, according to the feature information of the rendering result corresponding to a first 3D model, into the data format required by the second rendering engine, and the first 3D model is a 3D model created according to the data format required by a first rendering engine;
S603: rendering the second 3D model with the second rendering engine.
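The client-side steps S601 to S603 can be sketched as follows; the endpoint path, query parameter, and the "glTF" format tag are illustrative assumptions, not specified by the patent:

```python
import json
import urllib.parse

def build_model_request(server: str, object_id: str) -> str:
    """S601: compose the URL a client could use to request the second 3D
    model of a target data object from the server."""
    query = urllib.parse.urlencode({"object_id": object_id})
    return f"{server}/3d-model?{query}"

def handle_response(body: str) -> str:
    """S602/S603: parse the returned second 3D model description and hand
    it to the second rendering engine deployed at the client (the render
    call here is a stub that just reports what would be rendered)."""
    second_model = json.loads(body)
    return f"render {second_model['format']} model {second_model['object_id']}"

url = build_model_request("https://example.com", "sku-123")
print(url)  # https://example.com/3d-model?object_id=sku-123
```

Because the server has already converted every first 3D model into the single format the second rendering engine understands, the client needs no knowledge of the technology the creator originally used.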
For parts of the foregoing embodiments that are not described in detail, reference may be made to other embodiments or other parts of this specification; they are not repeated here.
Example five
The fifth embodiment applies the foregoing embodiments to a specific data object information processing system. As also shown in fig. 3, the fifth embodiment provides a 3D model processing system for data objects, and the system may include:
the 3D model creation end is used for creating a first 3D model for the specified data object according to the data format required by the first rendering engine, obtaining the characteristic information of the rendering result corresponding to the first 3D model and submitting the characteristic information to the server;
the server is used for carrying out 3D reconstruction according to the characteristic information to obtain a second 3D model with a data format required by a second rendering engine, and storing the association relationship between the identification information of the data object and the second 3D model;
and the client is provided with the second rendering engine, and is used for obtaining the second 3D model from the server and rendering the second 3D model when receiving a 3D access request of the data object.
That is, in the fifth embodiment of the present application, the scheme may be applied to data object information processing. For example, when a first user needs to provide a 3D display effect for a second user after or while publishing a data object, a 3D model of the data object may be created through any 3D model creation end; the server then reconstructs the 3D model and expresses it uniformly in a preset data format, so that the client of the second user can render it with a unified engine. The second user may initiate a request to browse the 3D model while browsing the data object information through the client, and the server returns the corresponding 3D model, which it reconstructed in advance from the feature information of the first 3D model submitted by the first user. In this way, the publisher of a data object can create its 3D model autonomously, and the 3D models created at the different creation ends of different first users can all be rendered by a unified 3D rendering engine.
That is to say, in a specific implementation, the 3D model creation end is located on the first user's side, where the first user is the publisher of the data object. The first 3D models created at the creation ends of different first users may have different data formats; in other words, a first user is not limited in the choice of 3D rendering engine when creating the first 3D model and may use any engine.
For other implementation details in the fifth embodiment, reference may be made to the descriptions in the foregoing embodiments, which are not repeated here.
Corresponding to the embodiment, the embodiment of the application further provides a 3D model data processing device, referring to fig. 7, the device may include:
a first model creation unit 701 for creating a first 3D model according to a data format required by the first rendering engine;
a feature information obtaining unit 702, configured to obtain feature information of a rendering result corresponding to the first 3D model;
the feature information submitting unit 703 is configured to submit the feature information to a server, and the server performs 3D reconstruction according to the feature information, so as to obtain a second 3D model with a data format required by a second rendering engine.
The feature information obtaining unit may specifically include:
a rendering result obtaining subunit, configured to render the first 3D model by using the first rendering engine, to obtain a rendering result;
the image sequence obtaining subunit is used for respectively shooting the rendering result through a plurality of shooting points deployed around the rendering result, realizing full-angle shooting of the rendering result, and obtaining an image sequence composed of images respectively shot by the plurality of shooting points;
and the characteristic information determining subunit is used for determining the picture sequence as the characteristic information.
In a specific implementation, the picture sequence obtaining subunit may specifically be configured to:
taking the center point of the rendering result as the sphere center, and creating a spherical model;
drawing a plurality of meridians and parallels on the spherical model at preset intervals, and taking each intersection of a meridian and a parallel as a shooting point.
Wherein, in one case, the picture sequence obtaining subunit may specifically be configured to:
in the environment of a full-bright light source, photographing the rendering result at the plurality of shooting points respectively, and forming the resulting pictures into a first picture sequence, the first picture sequence being used for carrying structural features of the rendering result corresponding to the first 3D model.
Wherein, if the rendering result includes texture features to be highlighted, the picture sequence obtaining subunit may be specifically configured to:
in the environment of a structured-light source, photographing the rendering result at the plurality of shooting points respectively, and forming the resulting pictures into a second picture sequence, the second picture sequence being used for carrying texture features of the rendering result corresponding to the first 3D model.
If the rendering result includes a material feature to be highlighted, the image sequence obtaining subunit may be further configured to:
constructing, under a full-bright light source, pictures with distinct adjacent-color relations and boundary information;
taking the pictures as the shooting background, photographing the rendering result at the plurality of shooting points respectively, and forming the resulting pictures into a third picture sequence, the third picture sequence being used for carrying material features of the rendering result corresponding to the first 3D model.
In addition, the picture sequence obtaining subunit may specifically be configured to:
constructing a camera assembly, and photographing the rendering result at the plurality of shooting points respectively through the camera assembly.
Wherein the camera assembly comprises a binocular camera assembly.
In a specific implementation, the first model creation unit may specifically be configured to:
creating a first 3D model for the specified data object according to a data format required by the first rendering engine;
the characteristic information submitting unit may be further configured to:
submitting the identification information of the data object to a server so that the server can save the corresponding relation between the second 3D model and the data object identification after reconstructing the second 3D model.
Corresponding to the embodiment, the embodiment of the application further provides a 3D model data processing device, referring to fig. 8, where the device is applied to a server, and includes:
a feature information receiving unit 801, configured to receive feature information of a rendering result corresponding to a first 3D model, where the first 3D model is created according to a data format required by a first rendering engine;
a 3D reconstruction unit 802, configured to perform 3D reconstruction according to the feature information, to obtain a second 3D model with a data format required by a second rendering engine;
a saving unit 803, configured to save the second 3D model, for providing to a client deployed with the second rendering engine to render the second 3D model.
In a specific implementation, the feature information receiving unit may be further configured to:
receiving identification information of a data object corresponding to the first 3D model;
the storage unit is specifically configured to:
and storing the corresponding relation between the second 3D model and the data object identifier, so as to return to the corresponding second 3D model when receiving a request of the client for accessing the 3D information of the specified data object.
Corresponding to the fourth embodiment, the embodiment of the present application further provides a data object information display device, referring to fig. 9, where the device is applied to a client, and includes:
a request sending unit 901, configured to send a request for obtaining a 3D model of a target data object to a server; wherein a second rendering engine is deployed at the client;
the 3D model receiving unit 902 is configured to receive a second 3D model returned by the server, where the second 3D model is reconstructed, according to the feature information of the rendering result corresponding to a first 3D model, into the data format required by the second rendering engine, and the first 3D model is a 3D model created according to the data format required by the first rendering engine;
and a rendering unit 903, configured to render the second 3D model by using the second rendering engine.
In addition, the embodiment of the application also provides a computer system, which comprises:
one or more processors; and
a memory associated with the one or more processors, the memory storing program instructions that, when read and executed by the one or more processors, perform the following operations:
receiving characteristic information of a rendering result corresponding to a first 3D model, wherein the first 3D model is created according to a data format required by a first rendering engine;
3D reconstruction is carried out according to the characteristic information, and a second 3D model with a data format required by a second rendering engine is obtained;
and saving the second 3D model for rendering the second 3D model to a client deployed with the second rendering engine.
Among other things, FIG. 10 illustrates an architecture of a computer system that may include, in particular, a processor 1010, a video display adapter 1011, a disk drive 1012, an input/output interface 1013, a network interface 1014, and a memory 1020. The processor 1010, the video display adapter 1011, the disk drive 1012, the input/output interface 1013, the network interface 1014, and the memory 1020 may be communicatively connected by a communication bus 1030.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, and executes related programs to implement the technical solutions provided herein.
The memory 1020 may be implemented in the form of a ROM (Read-Only Memory), a RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1020 may store an operating system 1021 for controlling the operation of the computer system 1000, and a Basic Input/Output System (BIOS) for controlling its low-level operation. In addition, a web browser 1023, a data storage management system 1024, a 3D model data processing system 1025, and the like may also be stored. The 3D model data processing system 1025 may be an application program that implements the operations of the foregoing steps in the embodiments of the present application. In general, when the solution is implemented in software or firmware, the relevant program code is stored in the memory 1020 and executed by the processor 1010.
The input/output interface 1013 is used to connect with an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
The network interface 1014 is used to connect communication modules (not shown) to enable communication interactions of the device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1030 includes a path to transfer information between components of the device (e.g., processor 1010, video display adapter 1011, disk drive 1012, input/output interface 1013, network interface 1014, and memory 1020).
In addition, the computer system 1000 may also obtain information of specific acquisition conditions from the virtual resource object acquisition condition information database 1041 for making condition judgment, and so on.
It is noted that although the above-described devices illustrate only the processor 1010, video display adapter 1011, disk drive 1012, input/output interface 1013, network interface 1014, memory 1020, bus 1030, etc., the device may include other components necessary to achieve proper operation in an implementation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the present application, and not all the components shown in the drawings.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general purpose hardware platform. Based on such understanding, the technical solutions of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the embodiments or some parts of the embodiments of the present application.
In this specification, the embodiments are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the description of the method embodiments. The systems and system embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of an embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
The 3D model data processing method, apparatus, and system provided in the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the embodiments above is only intended to help in understanding the method and its core idea. Meanwhile, for those of ordinary skill in the art, there may be changes in the specific implementation and application scope according to the ideas of the present application. In view of the foregoing, the content of this specification should not be construed as limiting the present application.

Claims (19)

1. A 3D model processing system, comprising:
the 3D model creation end is used for creating a first 3D model according to a data format required by a first rendering engine, obtaining characteristic information of a rendering result corresponding to the first 3D model and submitting the characteristic information to the server;
the server is used for carrying out 3D reconstruction according to the characteristic information to obtain a second 3D model with a data format required by a second rendering engine;
the client is provided with the second rendering engine and is used for obtaining the second 3D model from the server and rendering the second 3D model;
the obtaining the feature information of the rendering result corresponding to the first 3D model includes: rendering the first 3D model by using the first rendering engine to obtain a rendering result; shooting the rendering result through a plurality of shooting points arranged around the rendering result, and obtaining a picture sequence composed of pictures respectively shot by the plurality of shooting points; and determining the picture sequence as the characteristic information.
2. A method for processing 3D model data, comprising:
creating a first 3D model according to a data format required by a first rendering engine;
Obtaining characteristic information of a rendering result corresponding to the first 3D model;
submitting the characteristic information to a server, and performing 3D reconstruction by the server according to the characteristic information to obtain a second 3D model with a data format required by a second rendering engine;
the obtaining the feature information of the rendering result corresponding to the first 3D model includes:
rendering the first 3D model by using the first rendering engine to obtain a rendering result;
shooting the rendering result through a plurality of shooting points arranged around the rendering result, and obtaining a picture sequence composed of pictures respectively shot by the plurality of shooting points;
and determining the picture sequence as the characteristic information.
3. The method of claim 2, wherein the deploying of the shooting points comprises:
taking the center point of the rendering result as the sphere center, and creating a spherical model;
drawing a plurality of meridians and parallels on the spherical model at preset intervals, and taking each intersection of a meridian and a parallel as a shooting point.
4. The method of claim 2, wherein the photographing the rendering result through a plurality of photographing points disposed around the rendering result, respectively, comprises:
in the environment of a full-bright light source, photographing the rendering result at the plurality of shooting points respectively, and forming the resulting pictures into a first picture sequence, the first picture sequence being used for carrying structural features of the rendering result corresponding to the first 3D model.
5. The method according to claim 4, wherein if the rendering result includes a texture feature to be highlighted, the photographing the rendering result through a plurality of photographing points disposed around the rendering result, respectively, further comprises:
in the environment of a structured-light source, photographing the rendering result at the plurality of shooting points respectively, and forming the resulting pictures into a second picture sequence, the second picture sequence being used for carrying texture features of the rendering result corresponding to the first 3D model.
6. The method of claim 4, wherein, if the rendering result includes material features to be highlighted, the shooting the rendering result from the plurality of shooting points deployed around the rendering result further comprises:
under a full-brightness light source, constructing background pictures with differing adjacent-color relations and boundary information;
with those pictures configured as the background, shooting the rendering result at each of the shooting points, and composing a third picture sequence from the pictures shot at the respective shooting points, wherein the third picture sequence carries the material features of the rendering result corresponding to the first 3D model.
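Claims 4 to 6 together describe up to three picture sequences, each captured under different conditions and each carrying a different feature of the rendering result. The following sketch only models that capture plan as data; the class and field names are illustrative and do not come from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PictureSequence:
    lighting: str              # "full-brightness" or "structured"
    background: Optional[str]  # None, or a description of the backdrop
    carries: str               # which feature of the rendering result it encodes

def capture_plan(has_texture: bool, has_material: bool):
    # Claim 4: structure is always captured under a full-brightness light source.
    plan = [PictureSequence("full-brightness", None, "structure")]
    # Claim 5: texture features, when present, are captured under structured light.
    if has_texture:
        plan.append(PictureSequence("structured", None, "texture"))
    # Claim 6: material features, when present, are captured against backgrounds
    # with differing adjacent-color relations and boundary information.
    if has_material:
        plan.append(PictureSequence("full-brightness",
                                    "contrasting-color backgrounds", "material"))
    return plan
```

For a model with both texture and material features to highlight, `capture_plan(True, True)` yields three sequences; a plain model yields only the structural one.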
7. The method of claim 2, wherein the shooting the rendering result from the plurality of shooting points deployed around the rendering result comprises:
constructing a camera assembly, and shooting the rendering result at each of the plurality of shooting points through the camera assembly.
8. The method of claim 7, wherein the camera assembly comprises a binocular camera assembly.
9. The method of any of claims 2 to 8, wherein the creating a first 3D model in a data format required by a first rendering engine comprises:
creating the first 3D model for a specified data object according to the data format required by the first rendering engine;
and when the feature information is submitted to the server, the method further comprises:
submitting identification information of the data object to the server, so that, after reconstructing the second 3D model, the server saves the correspondence between the second 3D model and the data object identifier.
10. A 3D model data processing method, comprising:
receiving, by a server, feature information of a rendering result corresponding to a first 3D model, wherein the first 3D model is created according to a data format required by a first rendering engine;
performing 3D reconstruction according to the feature information to obtain a second 3D model in a data format required by a second rendering engine;
and saving the second 3D model for provision to a client deployed with the second rendering engine for rendering;
wherein the feature information is determined by: rendering the first 3D model with the first rendering engine to obtain a rendering result; shooting the rendering result from a plurality of shooting points deployed around the rendering result, and obtaining a picture sequence composed of the pictures respectively shot at the plurality of shooting points; and determining the picture sequence as the feature information.
11. The method of claim 10, wherein the receiving the feature information of the rendering result corresponding to the first 3D model further comprises:
receiving identification information of a data object corresponding to the first 3D model;
and the saving the second 3D model comprises:
saving the correspondence between the second 3D model and the data object identifier, so as to return the corresponding second 3D model when a request from a client to access 3D information of a specified data object is received.
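The correspondence in claim 11 amounts to a server-side store keyed by the data object identifier. A minimal sketch, with class and method names that are illustrative rather than taken from the patent:

```python
class ModelStore:
    """Keeps the correspondence between a data object identifier and its
    reconstructed second 3D model (claim 11). The model value can be any
    engine-specific representation."""

    def __init__(self):
        self._models = {}

    def save(self, object_id, second_model):
        # Called after the server finishes 3D reconstruction.
        self._models[object_id] = second_model

    def get(self, object_id):
        # Returned to the client when it requests the 3D information of
        # the specified data object; None if no model has been saved.
        return self._models.get(object_id)
```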
12. A data object information display method, comprising:
sending, by a client, a request to a server for obtaining a 3D model of a target data object, wherein a second rendering engine is deployed at the client;
receiving a second 3D model returned by the server, wherein the second 3D model is reconstructed, in the data format required by the second rendering engine, from feature information of a rendering result corresponding to a first 3D model, and the first 3D model is created according to a data format required by a first rendering engine;
and rendering the second 3D model with the second rendering engine;
wherein the feature information is determined by: rendering the first 3D model with the first rendering engine to obtain a rendering result; shooting the rendering result from a plurality of shooting points deployed around the rendering result, and obtaining a picture sequence composed of the pictures respectively shot at the plurality of shooting points; and determining the picture sequence as the feature information.
13. A 3D model data processing apparatus, comprising:
a first model creation unit, configured to create a first 3D model according to a data format required by a first rendering engine;
a feature information obtaining unit, configured to obtain feature information of a rendering result corresponding to the first 3D model;
and a feature information submitting unit, configured to submit the feature information to a server, so that the server performs 3D reconstruction according to the feature information to obtain a second 3D model in a data format required by a second rendering engine;
wherein the feature information obtaining unit comprises:
a rendering result obtaining subunit, configured to render the first 3D model with the first rendering engine to obtain a rendering result;
a picture sequence obtaining subunit, configured to shoot the rendering result from a plurality of shooting points deployed around the rendering result, and obtain a picture sequence composed of the pictures respectively shot at the plurality of shooting points;
and a feature information determining subunit, configured to determine the picture sequence as the feature information.
14. A 3D model data processing apparatus, applied to a server, comprising:
a feature information receiving unit, configured to receive feature information of a rendering result corresponding to a first 3D model, wherein the first 3D model is created according to a data format required by a first rendering engine;
a 3D reconstruction unit, configured to perform 3D reconstruction according to the feature information to obtain a second 3D model in a data format required by a second rendering engine;
and a saving unit, configured to save the second 3D model for provision to a client deployed with the second rendering engine for rendering;
wherein the feature information is determined by: rendering the first 3D model with the first rendering engine to obtain a rendering result; shooting the rendering result from a plurality of shooting points deployed around the rendering result, and obtaining a picture sequence composed of the pictures respectively shot at the plurality of shooting points; and determining the picture sequence as the feature information.
15. A data object information display apparatus, applied to a client, comprising:
a request sending unit, configured to send a request to a server for obtaining a 3D model of a target data object, wherein a second rendering engine is deployed at the client;
a 3D model receiving unit, configured to receive a second 3D model returned by the server, wherein the second 3D model is reconstructed, in the data format required by the second rendering engine, from feature information of a rendering result corresponding to a first 3D model, and the first 3D model is created according to a data format required by a first rendering engine;
and a rendering unit, configured to render the second 3D model with the second rendering engine;
wherein the feature information is determined by: rendering the first 3D model with the first rendering engine to obtain a rendering result; shooting the rendering result from a plurality of shooting points deployed around the rendering result, and obtaining a picture sequence composed of the pictures respectively shot at the plurality of shooting points; and determining the picture sequence as the feature information.
16. A 3D model processing system for data objects, comprising:
a 3D model creation end, configured to create a first 3D model for a specified data object according to a data format required by a first rendering engine, obtain feature information of a rendering result corresponding to the first 3D model, and submit the feature information to a server;
the server, configured to perform 3D reconstruction according to the feature information to obtain a second 3D model in a data format required by a second rendering engine, and save the association between the identification information of the data object and the second 3D model;
and a client, deployed with the second rendering engine, configured to obtain the second 3D model from the server and render it upon receiving a 3D access request for the data object;
wherein the feature information is determined by: rendering the first 3D model with the first rendering engine to obtain a rendering result; shooting the rendering result from a plurality of shooting points deployed around the rendering result, and obtaining a picture sequence composed of the pictures respectively shot at the plurality of shooting points; and determining the picture sequence as the feature information.
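The three roles of claim 16 form one pipeline: the creation end renders its engine-specific model and shoots it, the server reconstructs an engine-neutral second model, and the client renders that. The sketch below wires the stages together; every callable is a stand-in for a real component (a modeling tool, a photogrammetry pipeline, a database) and none of the names come from the patent:

```python
def pipeline(create_first_model, render_and_shoot, reconstruct, store, object_id):
    """End-to-end flow of claim 16 (illustrative stand-in functions):
    creation end -> picture sequences -> server-side 3D reconstruction
    -> saved association -> served to the client."""
    first_model = create_first_model(object_id)    # creation end, engine-1 format
    feature_info = render_and_shoot(first_model)   # picture sequences (feature info)
    second_model = reconstruct(feature_info)       # server, engine-2 format
    store[object_id] = second_model                # association: object id -> model
    return store[object_id]                        # what the client receives

# Usage with trivial stand-ins:
store = {}
result = pipeline(lambda oid: {"id": oid},
                  lambda model: ["picture-sequence"],
                  lambda info: {"engine2_model_from": info},
                  store, "sofa-42")
```

The point of the split is that the first rendering engine never has to export a portable file format: only pictures of its rendering result cross the boundary, and the second model is rebuilt from those.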
17. The system of claim 16, wherein the 3D model creation end is located at a first user's side, the first user being a publisher user of the data object.
18. The system of claim 17, wherein the first 3D models created by the 3D model creation ends of different first users comprise first 3D models in different data formats.
19. A computer system, comprising:
one or more processors; and
a memory associated with the one or more processors, the memory storing program instructions that, when read and executed by the one or more processors, perform the following operations:
receiving feature information of a rendering result corresponding to a first 3D model, wherein the first 3D model is created according to a data format required by a first rendering engine;
performing 3D reconstruction according to the feature information to obtain a second 3D model in a data format required by a second rendering engine;
and saving the second 3D model for provision to a client deployed with the second rendering engine for rendering;
wherein the feature information is determined by: rendering the first 3D model with the first rendering engine to obtain a rendering result; shooting the rendering result from a plurality of shooting points deployed around the rendering result, and obtaining a picture sequence composed of the pictures respectively shot at the plurality of shooting points; and determining the picture sequence as the feature information.
CN201710819123.1A 2017-09-12 2017-09-12 3D model data processing method, device and system Active CN109493431B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710819123.1A CN109493431B (en) 2017-09-12 2017-09-12 3D model data processing method, device and system
PCT/CN2018/103941 WO2019052371A1 (en) 2017-09-12 2018-09-04 3d model data processing method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710819123.1A CN109493431B (en) 2017-09-12 2017-09-12 3D model data processing method, device and system

Publications (2)

Publication Number Publication Date
CN109493431A CN109493431A (en) 2019-03-19
CN109493431B true CN109493431B (en) 2023-06-23

Family

ID=65687868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710819123.1A Active CN109493431B (en) 2017-09-12 2017-09-12 3D model data processing method, device and system

Country Status (2)

Country Link
CN (1) CN109493431B (en)
WO (1) WO2019052371A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016139A (en) * 2019-05-29 2020-12-01 阿里巴巴集团控股有限公司 Home decoration household object information processing method and device
CN111063032B (en) * 2019-12-26 2024-02-23 北京像素软件科技股份有限公司 Model rendering method, system and electronic device
CN112070872A (en) * 2020-09-23 2020-12-11 安徽共享装科技有限公司 Home color changing display rendering method and system based on online 3D home decoration platform
CN114596401A (en) * 2020-11-20 2022-06-07 华为云计算技术有限公司 Rendering method, device and system
CN112802192B (en) * 2021-03-05 2022-01-28 艾迪普科技股份有限公司 Three-dimensional graphic image player capable of realizing real-time interaction

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851128A (en) * 2015-05-19 2015-08-19 北京控制工程研究所 Format conversion method for 3DS model file loading through OSG three-dimensional engine
CN106056666A (en) * 2016-05-27 2016-10-26 美屋三六五(天津)科技有限公司 Three-dimensional model processing method and three-dimensional model processing system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6417849B2 (en) * 1998-07-31 2002-07-09 Hewlett-Packard Company Single logical screen in X windows with direct hardware access to the frame buffer for 3D rendering
US6825838B2 (en) * 2002-10-11 2004-11-30 Sonocine, Inc. 3D modeling system
US20070188488A1 (en) * 2006-01-13 2007-08-16 Choi Justin Y Computer network-based 3D rendering system
CN103714569B (en) * 2013-12-19 2017-12-15 华为技术有限公司 A kind of processing method of render instruction, device and system
WO2015140815A1 (en) * 2014-03-15 2015-09-24 Vats Nitin Real-time customization of a 3d model representing a real product


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
dwburma, "Online 3D Model Conversion Tool", https://www.liberty3d.com/2014/08/online-3d-model-conversion-tool/, 2014-08-25, pp. 1-2 *

Also Published As

Publication number Publication date
WO2019052371A1 (en) 2019-03-21
CN109493431A (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN109493431B (en) 3D model data processing method, device and system
US11100724B2 (en) Systems and methods for generating and intelligently distributing forms of virtual reality content
CN106846497B (en) Method and device for presenting three-dimensional map applied to terminal
CN110969685A (en) Customizable rendering pipeline using rendering maps
JP7208549B2 (en) VIRTUAL SPACE CONTROL DEVICE, CONTROL METHOD THEREOF, AND PROGRAM
CN110689626A (en) Game model rendering method and device
Wolff OpenGL 4 shading language cookbook
CN111161398A (en) Image generation method, device, equipment and storage medium
Canessa et al. A dataset of stereoscopic images and ground-truth disparity mimicking human fixations in peripersonal space
Ratican et al. A proposed meta-reality immersive development pipeline: Generative ai models and extended reality (xr) content for the metaverse
US10754498B2 (en) Hybrid image rendering system
Marelli et al. SfM Flow: A comprehensive toolset for the evaluation of 3D reconstruction pipelines
US20230046431A1 (en) System and method for generating 3d objects from 2d images of garments
Nagashree et al. Markerless Augmented Reality Application for Interior Designing
CN114452646A (en) Virtual object perspective processing method and device and computer equipment
Kolivand et al. Livephantom: Retrieving virtual world light data to real environments
US20220230392A1 (en) Systems and methods for implementing source identifiers into 3d content
KR102541262B1 (en) METHOD, APPARATUS AND COMPUTER-READABLE MEDIUM OF Applying an object to VR content
Janiszewski et al. 3D Dataset of a twisted bending-active beam element digitized using Structure-from-Motion Photogrammetry
CN108920598A (en) Panorama sketch browsing method, device, terminal device, server and storage medium
JP2020013390A (en) Information processing apparatus, information processing program, and information processing method
Havemann et al. The presentation of cultural heritage models in epoch
Babii Use of augmented reality to build an interactive interior on the ios mobile platform
Khan et al. A 3D Classical Object Viewer for Device Compatible Display
KR20220111006A (en) 3D fitting method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant