CN114022360B - Rendered image super-resolution system based on deep learning - Google Patents
- Publication number
- CN114022360B (application CN202111305312.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- rendered
- closer
- resolution
- super
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2012—Colour editing, changing, or manipulating; Use of colour codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a deep-learning-based rendered image super-resolution system. The rendered low-resolution image carries feature information such as depth, texture and normal vectors. Through deep learning, the image is simulated and trained, its feature information is obtained, and an optimized MFSR network model is produced, so that the low-resolution image is converted into a high-resolution image. The resulting high-resolution image is evaluated on several indices, including peak signal-to-noise ratio (PSNR) and structural similarity (SSIM): the larger the PSNR value, the closer the reconstructed image is to the real sharp image, and the closer the SSIM value is to 1, the closer the reconstructed image is to the real sharp image, and vice versa. The invention enhances video image quality and the visual experience of users, and at the same time shortens rendering time, saves cost and brings considerable economic benefit.
Description
Technical Field
The invention relates to the technical field of artificial-intelligence deep learning, and in particular to a deep-learning-based rendered image super-resolution system.
Background
At present, much research on image super-resolution has been carried out at home and abroad, and many algorithms and models have been proposed for super-resolution of general images. For rendered images, however, the sharper the rendered image, the higher the time and cost of rendering, and super-resolution that exploits information features such as edges, textures, depth and normal vectors to improve the accuracy of the prediction has not yet been studied domestically; research on extracting the feature information of rendered images is still in its initial stage. The invention is based on part of the research and development results of the "Jilin province science and technology center natural science foundation project 20190201271JC". A deep-learning-based rendered image super-resolution system has the advantages of shortening rendering time, saving cost and bringing considerable economic benefit.
Disclosure of Invention
Aiming at the image-distortion phenomenon in the field of film and television production, the invention provides a deep-learning-based rendered image super-resolution system. Key technologies for intelligently extracting image features and for dynamically partitioning images for storage are studied, an image-feature-recognition optimization algorithm based on a deep convolutional neural network is proposed, and a multi-scale, multi-feature super-resolution network model system is created. Using deep learning, the model fully extracts features such as image edges, textures and depth, strengthening the network's ability to reconstruct image feature information. On a self-built data set, the created network model is compared against existing interpolation algorithms on objective evaluation indices, yielding a high-resolution, high-definition image.
The technical scheme adopted by the invention is a deep-learning-based rendered image super-resolution system, characterized in that: it comprises an image-data feature-extraction module, which collects and sorts pictures in the classical Disney rendered-image data set to find more than 100 sharp rendered images, each rendered image containing 10-dimensional information, namely R, G, B, normal.R, normal.G, normal.B, albedo.R, albedo.G, albedo.B and depth Z; the first three dimensions, the RGB channels, are downsampled by factors of 2, 3 and 4 and restored to the original size by bicubic interpolation, and the images are then flipped horizontally and vertically and cut into 64 x 64 patches, the resulting blurred images serving as input data;
A multi-feature super-resolution (MFSR) network model is created: every 64 groups of blurred-image data form a batch, the 64 x 64, 10-channel patches of each batch enter the training network and are learned by a 22-layer deep convolutional network, a 3-channel target image is generated after the data features of each channel of the blurred image are extracted, and an optimized MFSR network model is obtained by computing the loss function between the target image and the real image and back-propagating to update the weights;
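As a concrete illustration, the 22-layer network described above can be sketched in PyTorch. This is a minimal sketch under assumptions: the text fixes only the input channels (10), output channels (3), depth (22 layers) and 3 x 3 kernels, so the layer width of 64, the ReLU activations and the VDSR-style residual connection are assumptions borrowed from the VDSR model the description references.

```python
import torch
import torch.nn as nn

class MFSR(nn.Module):
    """Sketch of the multi-feature super-resolution network: 10 input
    channels (RGB + normals + albedo + depth), 22 convolutional layers
    with 3x3 kernels, 3 output channels. Width and activations assumed."""
    def __init__(self, features=64, depth=22):
        super().__init__()
        layers = [nn.Conv2d(10, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, 3, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Assumed VDSR-style residual: predict the detail missing
        # from the blurred RGB channels (the first 3 of the 10).
        return self.body(x) + x[:, :3]

model = MFSR()
y = model(torch.randn(1, 10, 64, 64))   # one 10-channel 64x64 patch
```

A 10-channel 64 x 64 patch thus maps to a 3-channel 64 x 64 output, matching the batch shapes given in the text.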
and result analysis is carried out: the reconstructed high-definition image is evaluated by peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), wherein the larger the PSNR value, the closer the reconstructed image is to the real sharp image, and the closer the SSIM value is to 1, the closer the reconstructed image is to the real sharp image.
Further, the rendered image is an EXR-type multi-channel file.
Further, the mathematical expression of SSIM is

$$\mathrm{SSIM}(X_{ori},X_{res})=\frac{(2\mu_{ori}\mu_{res}+c_1)(2\sigma_{ori,res}+c_2)}{(\mu_{ori}^2+\mu_{res}^2+c_1)(\sigma_{ori}^2+\sigma_{res}^2+c_2)}$$

and the mathematical expression of PSNR is

$$\mathrm{PSNR}=10\log_{10}\frac{255^2}{\mathrm{MSE}},\qquad \mathrm{MSE}=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(X_{ori}(i,j)-X_{res}(i,j)\bigr)^2,$$

where $X_{ori}$ and $X_{res}$ denote the true sharp image and the reconstructed image, MSE (mean squared error) denotes the error between $X_{ori}$ and $X_{res}$, and $m$ and $n$ denote the numbers of rows and columns of the image, respectively.
The beneficial effects of the invention are as follows: using deep learning, an optimized network model is built from the rendered low-resolution image, and internal feature information such as edges, textures, depth and normal vectors is acquired, so that the high-resolution image is obtained more accurately. The invention shortens rendering time, saves cost and brings considerable economic benefit.
Drawings
Fig. 1 is a network model diagram of the MFSR, the deep-learning-based rendered image super-resolution system of the present invention.
Fig. 2 is a module block diagram of the deep-learning-based rendered image super-resolution system of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein can be arranged and designed in a wide variety of different configurations.
The main technical points of the invention lie in the method for extracting image-data features and in the optimized model system, as follows:
Data acquisition: the rendered image has information features such as edges, textures, depth and normal vectors; acquiring the data of these image features further reduces the data loss rate and improves the corresponding accuracy.
By collecting and sorting pictures in the classical Disney rendered-image data set, more than 100 sharp rendered images are found. Each rendered image is an EXR file containing 10-dimensional information, namely R, G, B, normal.R, normal.G, normal.B, albedo.R, albedo.G, albedo.B and depth Z, where R, G and B denote the red, green and blue color channels, normal.R, normal.G and normal.B denote the red, green and blue components of the normal vectors, albedo.R, albedo.G and albedo.B denote the red, green and blue components of the albedo, and Z denotes the depth of the image. After the images are read in, the first three dimensions, the RGB channels, are downsampled by factors of 2, 3 and 4 and restored to the original size by bicubic interpolation. The images are then flipped horizontally and vertically and cut into 64 x 64 patches, and the resulting blurred patches are used as input data.
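The degradation and augmentation pipeline just described can be sketched as follows. This is an illustrative sketch: nearest-neighbour resampling stands in for the bicubic interpolation named in the text (in practice a library routine such as OpenCV's `cv2.resize` with `cv2.INTER_CUBIC` would be used), and the flip set is one reasonable reading of "horizontal, vertical and overturn".

```python
import numpy as np

def degrade(img, scale):
    """Down-sample an HxWxC image by `scale` and restore it to the original
    size. Nearest-neighbour stand-in for the bicubic interpolation used in
    the text; the output is the blurred training input."""
    small = img[::scale, ::scale, :]
    big = np.repeat(np.repeat(small, scale, axis=0), scale, axis=1)
    return big[:img.shape[0], :img.shape[1], :]

def augment_patches(img, patch=64):
    """Cut a 10-channel image into 64x64 patches, each with horizontal,
    vertical and combined flips, as described for the training data."""
    h, w, _ = img.shape
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = img[y:y + patch, x:x + patch, :]
            patches += [p, p[::-1], p[:, ::-1], p[::-1, ::-1]]
    return patches
```

On a 128 x 128 x 10 image this yields 16 patches (a 2 x 2 grid, four flip variants each), and `degrade` preserves the original spatial size for every scale factor.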
Training: on the basis of in-depth study of network models such as VDSR, DRCN, LapSRN, DRRN, MemNet, IKN and MSICF, an innovative, optimized network model is proposed that dynamically schedules and allocates computing resources, maximizes utilization and task-completion benefit, and ensures accurate and effective extraction of image feature information.
A multi-feature super-resolution network model, the MFSR network model, is proposed. Every 64 groups of blurred-image data form a batch; the 64 x 64, 10-channel patches of each batch enter the training network, a 22-layer deep convolutional network with 3 x 3 convolution kernels. After the data features of each channel of the blurred input image are extracted, a 3-channel target image is obtained. The loss function, which in deep learning measures the difference between the network output and the target, is computed between the output target image and the real image, and the weights are updated by back-propagation, finally yielding the optimized network model. We set the momentum parameter to 0.9 and the weight decay to 10^-4. The learning rate is initialized to 0.1 and then reduced by a factor of 10 every 10 epochs. Training iterates for 80 epochs; the loss function has converged to a very considerable degree by the 50th, and the 80th-generation model is used for testing, where its results surpass those of the VDSR network model.
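The optimisation schedule just described (momentum 0.9, weight decay 10^-4, learning rate 0.1 reduced by a factor of 10 every 10 epochs, 80 epochs) maps directly onto a standard SGD setup. The sketch below assumes PyTorch, uses a single convolution as a stand-in for the full MFSR network, and assumes an MSE loss, since the text names the loss function without specifying it.

```python
import torch
from torch import nn, optim

# Hyperparameters taken from the text: momentum 0.9, weight decay 1e-4,
# initial learning rate 0.1, divided by 10 every 10 epochs, 80 epochs.
model = nn.Conv2d(10, 3, 3, padding=1)       # stand-in for the MFSR network
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
criterion = nn.MSELoss()                     # assumed loss function

for epoch in range(80):
    x = torch.randn(4, 10, 64, 64)           # stand-in batch of 10-channel patches
    target = torch.randn(4, 3, 64, 64)       # stand-in real sharp RGB images
    optimizer.zero_grad()
    loss = criterion(model(x), target)
    loss.backward()                          # back-propagate
    optimizer.step()                         # update the weights
    scheduler.step()                         # decay the learning rate
```

After 80 epochs the learning rate has been divided by 10 eight times, ending near 10^-9, which matches the stated schedule.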
Result analysis: we performed a comparative study on the data sets Car, Classroom, Bathroom and House, which we established ourselves and each of which contains 5 images. The realized high-definition images are evaluated on indices including peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and running time, ensuring that the results of the system surpass those of the existing baseline methods.
The mathematical expression of SSIM is

$$\mathrm{SSIM}(X_{ori},X_{res})=\frac{(2\mu_{ori}\mu_{res}+c_1)(2\sigma_{ori,res}+c_2)}{(\mu_{ori}^2+\mu_{res}^2+c_1)(\sigma_{ori}^2+\sigma_{res}^2+c_2)}.$$

SSIM evaluates image quality by comparing the similarity of the structural information in the compared images; the closer the value of SSIM is to 1, the closer the reconstructed image is to a true sharp image, and vice versa.

The mathematical expression of PSNR is

$$\mathrm{PSNR}=10\log_{10}\frac{255^2}{\mathrm{MSE}},\qquad \mathrm{MSE}=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(X_{ori}(i,j)-X_{res}(i,j)\bigr)^2,$$

where $X_{ori}$ and $X_{res}$ denote the true sharp image and the reconstructed image, MSE (mean squared error) denotes the error between $X_{ori}$ and $X_{res}$, and $m$ and $n$ denote the numbers of rows and columns of the image, respectively. PSNR is given in dB. It is worth noting that the larger the value of PSNR, the closer the reconstructed image is to the true sharp image. See Table 1.
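For reference, both metrics can be computed directly from their definitions. The sketch below is a minimal global (single-window) SSIM rather than the usual 11 x 11 sliding-window variant, and assumes the standard constants c1 = (0.01·255)^2 and c2 = (0.03·255)^2, which the text does not specify.

```python
import numpy as np

def psnr(x_ori, x_res, peak=255.0):
    """Peak signal-to-noise ratio in dB, following the MSE definition above."""
    mse = np.mean((x_ori.astype(float) - x_res.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global structural similarity between two images; 1.0 means identical."""
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images give infinite PSNR and an SSIM of exactly 1; maximally different uniform images (all-zero vs. all-255) give a PSNR of 0 dB, illustrating the "larger is better" reading in the text.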
Claims (3)
1. A deep-learning-based rendered image super-resolution system, characterized in that: it comprises an image-data feature-extraction module, which collects and sorts pictures in the classical Disney rendered-image data set to find more than 100 sharp rendered images, each rendered image containing 10-dimensional information, namely R, G, B, normal.R, normal.G, normal.B, albedo.R, albedo.G, albedo.B and depth Z; the first three dimensions, the RGB channels, are downsampled by factors of 2, 3 and 4 and restored to the original size by bicubic interpolation, and the images are then flipped horizontally and vertically and cut into 64 x 64 patches, the resulting blurred images serving as input data;
A multi-feature super-resolution (MFSR) network model is created: every 64 groups of blurred-image data form a batch, the 64 x 64, 10-channel patches of each batch enter the training network and are learned by a 22-layer deep convolutional network, a 3-channel target image is generated after the data features of each channel of the blurred image are extracted, and an optimized MFSR network model is obtained by computing the loss function between the target image and the real image and back-propagating to update the weights;
and result analysis is carried out: the reconstructed high-definition image is evaluated by peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), wherein the larger the PSNR value, the closer the reconstructed image is to the real sharp image, and the closer the SSIM value is to 1, the closer the reconstructed image is to the real sharp image.
2. The deep-learning-based rendered image super-resolution system of claim 1, characterized in that: the rendered image is an EXR-type multi-channel file.
3. The deep-learning-based rendered image super-resolution system of claim 1, characterized in that: the mathematical expression of SSIM is

$$\mathrm{SSIM}(X_{ori},X_{res})=\frac{(2\mu_{ori}\mu_{res}+c_1)(2\sigma_{ori,res}+c_2)}{(\mu_{ori}^2+\mu_{res}^2+c_1)(\sigma_{ori}^2+\sigma_{res}^2+c_2)}$$

and the mathematical expression of PSNR is

$$\mathrm{PSNR}=10\log_{10}\frac{255^2}{\mathrm{MSE}},\qquad \mathrm{MSE}=\frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(X_{ori}(i,j)-X_{res}(i,j)\bigr)^2,$$

where $X_{ori}$ and $X_{res}$ denote the true sharp image and the reconstructed image, MSE (mean squared error) denotes the error between $X_{ori}$ and $X_{res}$, and $m$ and $n$ denote the numbers of rows and columns of the image, respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111305312.XA CN114022360B (en) | 2021-11-05 | 2021-11-05 | Rendered image super-resolution system based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111305312.XA CN114022360B (en) | 2021-11-05 | 2021-11-05 | Rendered image super-resolution system based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114022360A CN114022360A (en) | 2022-02-08 |
CN114022360B true CN114022360B (en) | 2024-05-03 |
Family
ID=80061316
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111305312.XA Active CN114022360B (en) | 2021-11-05 | 2021-11-05 | Rendered image super-resolution system based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114022360B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109559276A (en) * | 2018-11-14 | 2019-04-02 | 武汉大学 | A kind of image super-resolution rebuilding method based on reference-free quality evaluation and characteristic statistics |
WO2020015167A1 (en) * | 2018-07-17 | 2020-01-23 | 西安交通大学 | Image super-resolution and non-uniform blur removal method based on fusion network |
WO2020037965A1 (en) * | 2018-08-21 | 2020-02-27 | 北京大学深圳研究生院 | Method for multi-motion flow deep convolutional network model for video prediction |
CN111583109A (en) * | 2020-04-23 | 2020-08-25 | 华南理工大学 | Image super-resolution method based on generation countermeasure network |
CN111754403A (en) * | 2020-06-15 | 2020-10-09 | 南京邮电大学 | Image super-resolution reconstruction method based on residual learning |
-
2021
- 2021-11-05 CN CN202111305312.XA patent/CN114022360B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020015167A1 (en) * | 2018-07-17 | 2020-01-23 | 西安交通大学 | Image super-resolution and non-uniform blur removal method based on fusion network |
WO2020037965A1 (en) * | 2018-08-21 | 2020-02-27 | 北京大学深圳研究生院 | Method for multi-motion flow deep convolutional network model for video prediction |
CN109559276A (en) * | 2018-11-14 | 2019-04-02 | 武汉大学 | A kind of image super-resolution rebuilding method based on reference-free quality evaluation and characteristic statistics |
CN111583109A (en) * | 2020-04-23 | 2020-08-25 | 华南理工大学 | Image super-resolution method based on generation countermeasure network |
CN111754403A (en) * | 2020-06-15 | 2020-10-09 | 南京邮电大学 | Image super-resolution reconstruction method based on residual learning |
Non-Patent Citations (2)
Title |
---|
Research on super-resolution reconstruction based on CT images; Cao Hongyu, Liu Dongmei, Fu Xiuhua, Zhang Jing, Yue Pengfei; Journal of Changchun University of Science and Technology (Natural Science Edition); 2020-02-15 (No. 01); full text *
Image super-resolution algorithm based on reciprocal unit cell feature enhancement; Zhao Liling, Sun Quansen, Zhang Zelin; Journal of Graphics; 2017-12-31; Vol. 38 (No. 4); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114022360A (en) | 2022-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110119780B (en) | Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network | |
CN113362223B (en) | Image super-resolution reconstruction method based on attention mechanism and two-channel network | |
CN101950365B (en) | Multi-task super-resolution image reconstruction method based on KSVD dictionary learning | |
CN110136062B (en) | Super-resolution reconstruction method combining semantic segmentation | |
CN110675321A (en) | Super-resolution image reconstruction method based on progressive depth residual error network | |
CN102156875A (en) | Image super-resolution reconstruction method based on multitask KSVD (K singular value decomposition) dictionary learning | |
Luo et al. | Lattice network for lightweight image restoration | |
CN113837946B (en) | Lightweight image super-resolution reconstruction method based on progressive distillation network | |
CN111986085A (en) | Image super-resolution method based on depth feedback attention network system | |
CN115880158A (en) | Blind image super-resolution reconstruction method and system based on variational self-coding | |
CN116486074A (en) | Medical image segmentation method based on local and global context information coding | |
CN113139904A (en) | Image blind super-resolution method and system | |
CN110288529B (en) | Single image super-resolution reconstruction method based on recursive local synthesis network | |
CN112288626A (en) | Face illusion method and system based on dual-path depth fusion | |
Shi et al. | Structure-aware deep networks and pixel-level generative adversarial training for single image super-resolution | |
CN113096015B (en) | Image super-resolution reconstruction method based on progressive perception and ultra-lightweight network | |
CN114359039A (en) | Knowledge distillation-based image super-resolution method | |
CN115511705A (en) | Image super-resolution reconstruction method based on deformable residual convolution neural network | |
CN113160198A (en) | Image quality enhancement method based on channel attention mechanism | |
Yu et al. | A review of single image super-resolution reconstruction based on deep learning | |
CN112862946B (en) | Gray rock core image three-dimensional reconstruction method for generating countermeasure network based on cascade condition | |
CN114022360B (en) | Rendered image super-resolution system based on deep learning | |
Yu et al. | Single image super-resolution based on improved WGAN | |
CN116703719A (en) | Face super-resolution reconstruction device and method based on face 3D priori information | |
Yang et al. | Deep networks for image super-resolution using hierarchical features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||