CN116342759A - Method, device, equipment and storage medium for quick offline rendering

Info

Publication number: CN116342759A
Application number: CN202310252261.1A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: rendering, image quality, image, map, job
Inventors: 林驰捷, 邹琼, 周双全
Current assignee: Shenzhen Ruiyun Technology Co ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shenzhen Ruiyun Technology Co ltd
Priority date: 2023-03-06 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Filing date: 2023-03-06
Publication date: 2023-06-27
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application filed by Shenzhen Ruiyun Technology Co ltd on 2023-03-06; priority to CN202310252261.1A
Publication of CN116342759A on 2023-06-27

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A63F13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to the technical field of offline rendering, and in particular to a method, an apparatus, a device and a storage medium for fast offline rendering. The method includes: acquiring a rendering job image and inputting it to a renderer for coarse rendering, generating a rendering map of a first image quality; and performing fine rendering with pix2pix, a deep-learning image translation method, converting the rendering map of the first image quality into a rendering map of a second image quality and storing the rendering map of the second image quality, wherein the first image quality is lower than the second image quality. By rendering in stages, the technical scheme of the invention increases rendering speed and thus improves rendering efficiency while guaranteeing rendering quality.

Description

Method, device, equipment and storage medium for quick offline rendering
Technical Field
The present invention relates to the technical field of offline rendering, and in particular, to a method, an apparatus, a device, and a storage medium for fast offline rendering.
Background
With the development of 3D technology, the complexity and fineness of video rendering effects have gradually increased. Picture rendering can be divided into two types: real-time rendering (Real-time Rendering), as used in 3D games, and offline rendering (Offline Rendering), as used for animated films. The former has to compromise on image quality to guarantee speed, while the latter spares no cost in pursuit of realism. However, offline rendering struggles to deliver both a high-quality rendering effect and an acceptable time cost: with ray tracing it can render very realistic, complex effects, but at the price of a large increase in rendering time.
Therefore, with current offline rendering methods, a high time cost must be paid whenever a high-quality rendering effect is required.
Disclosure of Invention
Accordingly, the present invention is directed to a method, an apparatus, a device and a storage medium for fast offline rendering, which solve the problem that prior-art offline rendering methods incur a high time cost when a high-quality rendering effect is required.
According to a first aspect of an embodiment of the present invention, there is provided a method for fast offline rendering, including:
acquiring a rendering job image, inputting the rendering job image to a renderer for coarse rendering, and generating a rendering map of a first image quality;
and performing fine rendering with pix2pix, a deep-learning method from image translation technology, converting the rendering map of the first image quality into a rendering map of a second image quality, and storing the rendering map of the second image quality, wherein the first image quality is lower than the second image quality.
Preferably, the performing coarse rendering to generate a rendering map of a first image quality includes:
generating, from the rendering job image, a rendering map of the first image quality whose image information includes at least contours, directions or illumination;
and the performing fine rendering to convert the rendering map of the first image quality into a rendering map of a second image quality includes:
performing texture and light-shadow rendering with a pix2pix rendering network according to the rendering map of the first image quality, to generate a rendering map of the second image quality.
Preferably, the performing fine rendering includes:
automatically analyzing the ambient illumination in the rendering map of the first image quality with a real-time illumination reconstruction technique and generating spherical harmonic illumination parameters;
and inputting the spherical harmonic illumination parameters into the pix2pix rendering network to control the light-and-shadow effect of the rendered image.
Preferably, the method further comprises:
automatically denoising the input image and the output image of the pix2pix rendering network to supplement image fineness.
Preferably, the performing fine rendering further includes:
dividing the rendering job image into regions according to their different subdivided materials;
performing semantic segmentation on each region to generate region material information;
and sending the region material information to the pix2pix rendering network, so that the rendering network can complete the rendering of multiple materials simultaneously.
Preferably, the acquiring the rendering job image includes:
acquiring a rendering job image submitted by a user through a web interface or a client interface.
Preferably, the method further comprises:
obtaining an effect-map storage path input by a user;
and, after generating the rendering map of the second image quality, storing the rendering map of the second image quality at the effect-map storage path.
According to a second aspect of an embodiment of the present invention, there is provided an apparatus for fast offline rendering, including:
a rendering job acquisition module, used for acquiring a rendering job image;
a coarse rendering module, used for inputting the rendering job image to a renderer for coarse rendering and generating a rendering map of a first image quality;
and a fine rendering module, used for performing fine rendering with pix2pix, a deep-learning method from image translation technology, converting the rendering map of the first image quality into a rendering map of a second image quality, and storing the rendering map of the second image quality, wherein the first image quality is lower than the second image quality.
According to a third aspect of an embodiment of the present invention, there is provided an apparatus for fast offline rendering, comprising:
a master controller and a memory connected to the master controller;
the memory having program instructions stored therein;
wherein the master controller is configured to execute the program instructions stored in the memory and to perform any of the methods described above.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the methods described above.
The technical scheme provided by the embodiments of the invention can have the following beneficial effects:
It can be understood that, in the technical scheme provided by the invention, a rendering job image is acquired and input to a renderer for coarse rendering to generate a rendering map of a first image quality; fine rendering is then performed with pix2pix, a deep-learning method from image translation technology, converting the rendering map of the first image quality into a rendering map of a second image quality, and the rendering map of the second image quality is stored, wherein the first image quality is lower than the second image quality. The technical scheme provided by the invention therefore increases rendering speed by rendering in stages, improving rendering efficiency while guaranteeing rendering quality.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram illustrating steps of a method of fast offline rendering, according to an exemplary embodiment;
FIG. 2 is a schematic block diagram illustrating an apparatus for fast offline rendering, according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
Example 1
FIG. 1 is a schematic diagram illustrating the steps of a method of fast offline rendering according to an exemplary embodiment. Referring to FIG. 1, a method of fast offline rendering is provided, comprising:
step S11, acquiring a rendering job image, inputting the rendering job image to a renderer for coarse rendering, and generating a rendering map of a first image quality;
and step S12, performing fine rendering with pix2pix, a deep-learning method from image translation technology, converting the rendering map of the first image quality into a rendering map of a second image quality, and storing the rendering map of the second image quality, wherein the first image quality is lower than the second image quality.
In specific practice, the coarse rendering stage first generates a relatively low-quality rendering map from the rendering job image, and the fine rendering stage then converts that low-quality rendering map into a high-quality one.
Image translation technology converts the content of an image from one image domain X to another image domain Y; it can be regarded as removing a certain attribute X of the original image and assigning a new attribute Y, that is, cross-domain conversion between images.
It can be understood that, in the technical scheme provided by the invention, a rendering job image is acquired and input to a renderer for coarse rendering to generate a rendering map of a first image quality; fine rendering is then performed with pix2pix, a deep-learning method from image translation technology, converting the rendering map of the first image quality into a rendering map of a second image quality, and the rendering map of the second image quality is stored, wherein the first image quality is lower than the second image quality. The technical scheme provided by the invention therefore increases rendering speed by rendering in stages, improving rendering efficiency while guaranteeing rendering quality.
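For illustration only, the staged pipeline can be pictured as the short sketch below. The callables coarse_render and refine_net are placeholders standing in for the renderer's fast pass and a trained pix2pix generator; neither is specified by this disclosure, and a PyTorch environment is assumed.

```python
# Minimal sketch of the two-stage pipeline: a fast coarse pass followed by
# pix2pix refinement. `coarse_render` and `refine_net` are assumptions, not
# components defined by this disclosure.
import torch

def render_job(job_image: torch.Tensor, coarse_render, refine_net: torch.nn.Module) -> torch.Tensor:
    low_quality = coarse_render(job_image)        # first image quality: contours, directions, lighting
    refine_net.eval()
    with torch.no_grad():                         # inference only, no gradients needed
        high_quality = refine_net(low_quality)    # second image quality: textures, light and shadow
    return high_quality
```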
The performing coarse rendering to generate a rendering map of the first image quality includes:
generating, from the rendering job image, a rendering map of the first image quality whose image information includes at least contours, directions or illumination;
and the performing fine rendering to convert the rendering map of the first image quality into a rendering map of the second image quality includes:
performing texture and light-shadow rendering with a pix2pix rendering network according to the rendering map of the first image quality, to generate a rendering map of the second image quality.
In specific practice, to keep the first pass fast, the low-quality rendering map of the first image quality often contains only rough information such as the contours, directions or illumination of objects, while the pix2pix rendering network takes on the comparatively complex fine rendering task of rendering fine textures and light shadows onto the objects, thereby generating the higher-quality rendering map of the second image quality.
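As a reading aid, the following is a toy pix2pix-style generator (a small U-Net encoder-decoder with one skip connection) written in PyTorch. The layer sizes and depth are assumptions chosen for brevity and are not the network claimed by this disclosure.

```python
# Toy pix2pix-style generator: encoder-decoder with a skip connection, so the
# coarse structure (contours, directions) is preserved while detail is added.
import torch
import torch.nn as nn

class TinyPix2PixGenerator(nn.Module):
    def __init__(self, in_ch: int = 3, out_ch: int = 3):
        super().__init__()
        # Encoder: downsample the coarse rendering map.
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1),
                                  nn.BatchNorm2d(128), nn.LeakyReLU(0.2))
        # Decoder: upsample back to a refined map with textures and shadows.
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1),
                                  nn.BatchNorm2d(64), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(128, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        # Skip connection: concatenate encoder features before the last upsampling.
        return self.dec2(torch.cat([d1, e1], dim=1))
```

Passing a coarse map of shape (1, 3, 256, 256) through this generator yields a refined map of the same shape; the skip connection is what lets such a network keep the coarse pass's geometry while adding texture and shadow detail.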
It should be noted that the performing fine rendering includes:
automatically analyzing the ambient illumination in the rendering map of the first image quality with a real-time illumination reconstruction technique and generating spherical harmonic illumination parameters;
and inputting the spherical harmonic illumination parameters into the pix2pix rendering network to control the light-and-shadow effect of the rendered image.
In specific practice, the light-and-shadow rendering in fine rendering first needs the spherical harmonic illumination parameters. To improve detail and shadows and make the picture more lifelike, this embodiment adopts a real-time illumination reconstruction technique: it automatically analyzes the ambient illumination in the original picture, reconstructs it as spherical harmonic illumination parameters, and inputs these parameters into the pix2pix rendering network to control the light-and-shadow effect of the rendered object.
It can be appreciated that, by adopting the above approach, this embodiment improves both the light-and-shadow effect produced by the pix2pix rendering network and the rendering efficiency.
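As a concrete illustration of what spherical harmonic illumination parameters are, the sketch below projects a lat-long ambient-light image onto the nine second-order real spherical harmonic basis functions. The lat-long parameterization and the NumPy implementation are assumptions for the sketch, not the reconstruction technique actually used by the embodiment.

```python
# Project an ambient-light image (lat-long, H x W x 3, float) onto the 9
# second-order real spherical harmonic basis functions.
import numpy as np

def sh9_from_latlong(env: np.ndarray) -> np.ndarray:
    h, w, _ = env.shape
    theta = (np.arange(h) + 0.5) / h * np.pi           # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi       # azimuth per column
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(theta) * np.sin(phi)
    z = np.cos(theta)
    # Real SH basis, bands 0-2, with the standard normalization constants.
    basis = np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ], axis=-1)                                         # (H, W, 9)
    d_omega = np.sin(theta) * (np.pi / h) * (2.0 * np.pi / w)  # per-pixel solid angle
    # Integrate radiance against each basis function -> (9, 3) RGB coefficients.
    return np.einsum("hwc,hwk,hw->kc", env, basis, d_omega)
```

The result is a 9 x 3 array of RGB coefficients, compact enough to be fed to the rendering network as a small conditioning vector.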
The method also comprises the following steps:
automatically denoising the input image and the output image of the pix2pix rendering network to supplement image fineness.
In specific practice, the input and output pictures of the pix2pix rendering network are carefully defined and optimized in terms of their pairing and material precision: the pictures are automatically denoised and their fineness is supplemented, so that the model can better learn the rendering capability.
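A simple stand-in for this automatic denoising step is shown below, applying OpenCV's non-local-means filter to both the network's input and its output; the choice of filter and its strengths are illustrative assumptions, not values taken from this disclosure.

```python
# Denoise the pix2pix network's input and output images with non-local means.
import cv2

def denoise(image_bgr):
    """image_bgr: uint8 HxWx3 array, e.g. as returned by cv2.imread."""
    # (10, 10, 7, 21) = luminance/color filter strength, template and search window sizes.
    return cv2.fastNlMeansDenoisingColored(image_bgr, None, 10, 10, 7, 21)

def denoised_pair(coarse_bgr, refined_bgr):
    """Apply the same cleanup to the network input and the network output."""
    return denoise(coarse_bgr), denoise(refined_bgr)
```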
It should be noted that the performing fine rendering further includes:
dividing the rendering job image into regions according to their different subdivided materials;
performing semantic segmentation on each region to generate region material information;
and sending the region material information to the pix2pix rendering network, so that the rendering network can complete the rendering of multiple materials simultaneously.
In specific practice, different areas of an object often have different subdivided materials. This embodiment performs semantic segmentation on these areas and gives the region material information to the pix2pix rendering network, so that a single network can complete the rendering of multiple materials at the same time.
In a practical application scenario, for example, a user submits a rendering job image containing multiple materials. During fine rendering, the image is partitioned according to its different subdivided materials, with pixels of the same material forming one region, so that several regions are separated out; semantic segmentation is then performed on each region to extract its material information; and after the region material information is sent to the pix2pix rendering network, the network can render the various materials in the picture according to the material information of each region. A single network can thus complete the rendering of multiple materials simultaneously, which improves rendering efficiency.
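One common way to hand such per-region material information to an image-to-image network is to concatenate a one-hot material mask with the coarse rendering as extra input channels. The PyTorch sketch below shows only that encoding step and assumes the integer label map has already been produced by a segmentation model; it is an illustration, not the encoding mandated by this disclosure.

```python
# Attach region material information to the coarse rendering as extra channels.
import torch
import torch.nn.functional as F

def with_material_channels(coarse: torch.Tensor,           # (B, 3, H, W) coarse rendering map
                           material_labels: torch.Tensor,  # (B, H, W) int64 region labels
                           num_materials: int) -> torch.Tensor:
    one_hot = F.one_hot(material_labels, num_materials)    # (B, H, W, M)
    one_hot = one_hot.permute(0, 3, 1, 2).float()           # (B, M, H, W)
    return torch.cat([coarse, one_hot], dim=1)              # (B, 3 + M, H, W)
```

The generator's first convolution then simply takes 3 + M input channels instead of 3.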
The acquiring the rendering job image includes:
acquiring a rendering job image submitted by a user through a web interface or a client interface.
In specific practice, any user can submit a rendering job image when this method is applied. A web interface or a client interface for submitting rendering job images can be set up; when a user needs rendering, the user logs in to the web interface or client interface and submits the rendering job image.
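A minimal job-submission endpoint could look like the sketch below; Flask, the route name and the form fields are arbitrary choices made for illustration, since the disclosure only requires a web interface or a client interface.

```python
# Toy submission endpoint: accepts a rendering job image and an optional
# effect-map storage path, and queues the image for the coarse rendering pass.
import os
from flask import Flask, request

app = Flask(__name__)
os.makedirs("incoming", exist_ok=True)   # simple on-disk job queue (an assumption)

@app.route("/jobs", methods=["POST"])
def submit_job():
    job_image = request.files["job_image"]              # the rendering job image
    output_path = request.form.get("output_path", "")   # user-supplied effect-map path
    job_image.save(os.path.join("incoming", job_image.filename))
    return {"status": "queued", "output_path": output_path}, 202
```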
The method also comprises the following steps:
obtaining an effect-map storage path input by a user;
and, after generating the rendering map of the second image quality, storing the rendering map of the second image quality at the effect-map storage path.
In specific practice, when a user submits a rendering job image, the user can submit a storage path at the same time. After rendering is finished, the rendered effect map is automatically written to the user's storage path; the user can then download the finer, more lifelike effect map and apply it to whatever scene requires it.
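The final save step is then straightforward; the sketch below simply writes the second-quality result to the user-supplied path (directory creation and the use of OpenCV are implementation assumptions, and the example path is hypothetical).

```python
# Write the refined (second image quality) rendering to the user-supplied path.
import os
import cv2

def save_result(result_bgr, user_output_path: str) -> str:
    os.makedirs(os.path.dirname(user_output_path) or ".", exist_ok=True)
    cv2.imwrite(user_output_path, result_bgr)   # e.g. a path like "renders/scene_final.png"
    return user_output_path
```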
It can be appreciated that the technical scheme provided by this embodiment learns high-quality materials through fine rendering, which improves the quality, realism and fineness of offline-rendered effect maps and promotes the further evolution of prop effects; in addition, coarse rendering is used to render image content such as contours, directions or illumination first, which saves film and television rendering costs.
Example two
FIG. 2 is a schematic block diagram of an apparatus for fast offline rendering according to an exemplary embodiment. Referring to FIG. 2, an apparatus for fast offline rendering is provided, comprising:
a rendering job acquisition module 101, used for acquiring a rendering job image;
a coarse rendering module 102, used for inputting the rendering job image to a renderer for coarse rendering and generating a rendering map of a first image quality;
and a fine rendering module 103, used for performing fine rendering with pix2pix, a deep-learning method from image translation technology, converting the rendering map of the first image quality into a rendering map of a second image quality, and storing the rendering map of the second image quality, wherein the first image quality is lower than the second image quality.
It can be understood that, in the technical solution provided by this embodiment, a rendering job image is acquired by the rendering job acquisition module 101 and input to a renderer by the coarse rendering module 102 for coarse rendering, generating a rendering map of a first image quality; the fine rendering module 103 then performs fine rendering with pix2pix, converting the rendering map of the first image quality into a rendering map of a second image quality and storing the rendering map of the second image quality, wherein the first image quality is lower than the second image quality. The technical scheme provided by this embodiment thus increases rendering speed by rendering in stages, improving rendering efficiency while guaranteeing rendering quality.
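Purely as a reading aid for FIG. 2, the three modules can be pictured as the skeleton classes below; the class and method names are invented for this sketch and the bodies are placeholders, not the apparatus's actual implementation.

```python
# Skeleton of the three modules in FIG. 2: acquisition, coarse rendering, fine rendering.
class RenderingJobAcquisitionModule:
    def acquire(self):
        """Return the next submitted rendering job image."""
        raise NotImplementedError

class CoarseRenderingModule:
    def __init__(self, renderer):
        self.renderer = renderer          # fast renderer producing the first-quality map
    def render(self, job_image):
        return self.renderer(job_image)

class FineRenderingModule:
    def __init__(self, pix2pix_net):
        self.net = pix2pix_net            # trained pix2pix generator
    def render(self, coarse_map):
        return self.net(coarse_map)       # second-quality map with textures and shadows
```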
Example III
There is provided an apparatus for fast offline rendering, comprising:
a master controller and a memory connected to the master controller;
the memory having program instructions stored therein;
wherein the master controller is configured to execute the program instructions stored in the memory and to perform any of the methods described above.
In specific practice, the apparatus for fast offline rendering provided by this embodiment may be any electronic device capable of executing any of the methods described above.
Example IV
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the methods described above.
It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and that in some embodiments, the same or similar parts in other embodiments may be referred to.
It should be noted that in the description of the present invention, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present invention, unless otherwise indicated, the meaning of "plurality" means at least two.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. Further implementations are included within the scope of the preferred embodiments of the present invention in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the above-described method embodiments may be implemented by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the invention.

Claims (10)

1. A method of fast offline rendering, comprising:
acquiring a rendering job image, inputting the rendering job image to a renderer for coarse rendering, and generating a rendering map of a first image quality;
and performing fine rendering with pix2pix, a deep-learning method from image translation technology, converting the rendering map of the first image quality into a rendering map of a second image quality, and storing the rendering map of the second image quality, wherein the first image quality is lower than the second image quality.
2. The method of claim 1, wherein the performing coarse rendering to generate a rendering map of the first image quality comprises:
generating, from the rendering job image, a rendering map of the first image quality whose image information comprises at least contours, directions or illumination;
and the performing fine rendering to convert the rendering map of the first image quality into a rendering map of a second image quality comprises:
performing texture and light-shadow rendering with a pix2pix rendering network according to the rendering map of the first image quality, to generate a rendering map of the second image quality.
3. The method of claim 2, wherein the performing fine rendering comprises:
automatically analyzing the ambient illumination in the rendering map of the first image quality with a real-time illumination reconstruction technique and generating spherical harmonic illumination parameters;
and inputting the spherical harmonic illumination parameters into the pix2pix rendering network to control the light-and-shadow effect of the rendered image.
4. The method according to claim 3, further comprising:
automatically denoising the input image and the output image of the pix2pix rendering network to supplement image fineness.
5. The method of claim 2, wherein the performing fine rendering further comprises:
dividing the rendering job image into regions according to their different subdivided materials;
performing semantic segmentation on each region to generate region material information;
and sending the region material information to the pix2pix rendering network, so that the rendering network can complete the rendering of multiple materials simultaneously.
6. The method of claim 1, wherein the acquiring a rendering job image comprises:
acquiring a rendering job image submitted by a user through a web interface or a client interface.
7. The method according to any one of claims 1 to 6, further comprising:
obtaining an effect-map storage path input by a user;
and, after generating the rendering map of the second image quality, storing the rendering map of the second image quality at the effect-map storage path.
8. An apparatus for fast offline rendering, comprising:
a rendering job acquisition module, used for acquiring a rendering job image;
a coarse rendering module, used for inputting the rendering job image to a renderer for coarse rendering and generating a rendering map of a first image quality;
and a fine rendering module, used for performing fine rendering with pix2pix, a deep-learning method from image translation technology, converting the rendering map of the first image quality into a rendering map of a second image quality, and storing the rendering map of the second image quality, wherein the first image quality is lower than the second image quality.
9. An apparatus for fast offline rendering, comprising:
a master controller and a memory connected to the master controller;
the memory having program instructions stored therein;
wherein the master controller is configured to execute the program instructions stored in the memory and to perform the method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202310252261.1A (filed 2023-03-06, priority date 2023-03-06): Method, device, equipment and storage medium for quick offline rendering. Published as CN116342759A; status: Pending.

Priority Applications (1)

Application number: CN202310252261.1A; priority date: 2023-03-06; filing date: 2023-03-06; title: Method, device, equipment and storage medium for quick offline rendering

Applications Claiming Priority (1)

Application number: CN202310252261.1A; priority date: 2023-03-06; filing date: 2023-03-06; title: Method, device, equipment and storage medium for quick offline rendering

Publications (1)

Publication number: CN116342759A; publication date: 2023-06-27

Family

ID: 86881650

Family Applications (1)

Application number: CN202310252261.1A; status: Pending; publication: CN116342759A; priority date: 2023-03-06; filing date: 2023-03-06; title: Method, device, equipment and storage medium for quick offline rendering

Country Status (1)

Country: CN; publication: CN116342759A


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination