CN110084843A - Deep-learning-based image compression method applied to furniture 3D printing - Google Patents

Deep-learning-based image compression method applied to furniture 3D printing

Info

Publication number
CN110084843A
CN110084843A (application CN201910327972.4A)
Authority
CN
China
Prior art keywords
image
furniture
network model
image data
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910327972.4A
Other languages
Chinese (zh)
Inventor
熊健
王一平
张训飞
马强
杨洁
桂冠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Post and Telecommunication University
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University
Priority to CN201910327972.4A
Publication of CN110084843A
Legal status: Withdrawn

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B29WORKING OF PLASTICS; WORKING OF SUBSTANCES IN A PLASTIC STATE IN GENERAL
    • B29CSHAPING OR JOINING OF PLASTICS; SHAPING OF MATERIAL IN A PLASTIC STATE, NOT OTHERWISE PROVIDED FOR; AFTER-TREATMENT OF THE SHAPED PRODUCTS, e.g. REPAIRING
    • B29C64/00Additive manufacturing, i.e. manufacturing of three-dimensional [3D] objects by additive deposition, additive agglomeration or additive layering, e.g. by 3D printing, stereolithography or selective laser sintering
    • B29C64/30Auxiliary operations or equipment
    • B29C64/386Data acquisition or data processing for additive manufacturing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B33ADDITIVE MANUFACTURING TECHNOLOGY
    • B33YADDITIVE MANUFACTURING, i.e. MANUFACTURING OF THREE-DIMENSIONAL [3-D] OBJECTS BY ADDITIVE DEPOSITION, ADDITIVE AGGLOMERATION OR ADDITIVE LAYERING, e.g. BY 3-D PRINTING, STEREOLITHOGRAPHY OR SELECTIVE LASER SINTERING
    • B33Y50/00Data acquisition or data processing for additive manufacturing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Materials Engineering (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Manufacturing & Machinery (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention discloses a deep-learning-based image compression method applied to furniture 3D printing, comprising: acquiring furniture surface texture image data; quantizing and encoding the furniture surface texture image data; constructing an image entropy neural network model; inputting the quantized and encoded image data into the image entropy neural network model for training to obtain an image compression network model; inputting the furniture surface texture image data to be printed into the image compression network model to obtain compressed furniture surface texture image data; and passing the compressed image data to the 3D printer, which decompresses it and carries out the printing process. The neural network model of the invention can effectively compress furniture texture images with rich structural information and distinctive geometric textures, minimizing the mean squared error between the original image and the decompressed image. When the invention is applied to furniture 3D printing, it effectively reduces memory footprint and improves the resolution of the printed image.

Description

Deep-learning-based image compression method applied to furniture 3D printing
Technical field
The invention belongs to the field of computer image processing, and in particular relates to a deep-learning-based image compression method applied to furniture 3D printing.
Background art
With the continuous improvement of 3D printing technology and the rapid development of machine-learning-based digital image processing, 3D printing, which builds objects layer by layer from bondable materials on the basis of a digital model, is gradually being applied across all kinds of industries. In the workflow of modern furniture production in China, 3D printing is increasingly used for innovation in furniture modeling design, furniture materials, and so on. However, because of the process limitations of 3D printing itself, and because the color and geometric textures of furniture texture images are relatively complex and demand fine image detail, insufficient resolution often leaves visible machine printing traces on the printed output. At the same time, high-resolution images occupy large amounts of 3D-printer memory, which can cause printing to fail. The difficulty of using machine printing to overcome the limitations of hand-crafted wood surface textures has troubled the furniture manufacturing industry for years.
Among existing image compression methods, JPEG shows obvious blocking artifacts at low coding rates, and image quality is severely damaged when bit errors occur; JPEG2000 has high encoding complexity and poor lossless compression performance on text and composite images.
In recent years, full-resolution image compression techniques based on DNNs have generally used a context model together with a recurrent neural network (RNN), training the network to minimize, as far as possible, the mean squared error (MSE) between the original image and the decompressed image. DNN-based learned compression systems adapt well to specific target domains (i.e., finite domains) and can achieve higher compression ratios within them. However, existing image compression methods lack coding schemes oriented toward three-dimensional or even multidimensional images; traditional compression methods cannot meet the performance requirements of furniture texture pattern compression; and they lack a network in which a context model serves as the rate term of a rate-distortion objective so that the two coders are trained jointly to improve deep image compression performance.
Summary of the invention
In response to the problems existing in the prior art, the purpose of the present invention is to provide a deep-learning-based image compression method applied to furniture 3D printing, which improves the resolution of reconstructed texture patterns after compression by jointly training an autoencoder and an entropy model, and which, by compressing furniture texture images rich in structural information and distinctive textures, reduces memory footprint and improves the resolution of the printed image.
To achieve the above object, the present invention adopts the following technical solution:
A deep-learning-based image compression method applied to furniture 3D printing, comprising the following steps:
S1: acquiring furniture surface texture image data;
S2: quantizing and encoding the furniture surface texture image data;
S3: constructing an image entropy neural network model;
S4: inputting the quantized and encoded image data into the image entropy neural network model for training to obtain an image compression network model;
S5: inputting the furniture surface texture image data to be printed into the image compression network model to obtain compressed furniture surface texture image data.
Specifically, in step S1, the furniture surface texture image is a color image or a grayscale image;
the size of the image is n*n, where n = 2^k, k >= 5 and k is an integer;
the image data size is 100 KB to 500 KB.
Specifically, in step S2, the quantization encoding is performed as follows: L quantization centers C = {c_1, c_2, ..., c_L} are defined over the image, and each pixel of the image is quantized to its nearest quantization center, i.e. x̂_i = c_j with j = argmin_{l ∈ {1,...,L}} |x_i − c_l|.
Here j ∈ {1, ..., L}, x_i is the value of each pixel in the image, x̂_i is the hard-quantized value obtained by direct nearest-center assignment, and x̃_i is the soft-quantized value obtained by a differentiable (soft) quantization. The two quantization schemes yield different quantized values; using the appropriate scheme reduces the quantization distortion ratio of the image.
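For illustration only, the hard and soft quantization just described can be sketched as follows. This is a minimal sketch in PyTorch, in the spirit of soft-to-hard quantization; it is not the exact formulation of the invention, and the softmax temperature sigma and the evenly spaced centers are assumptions.

```python
import torch

def quantize(x, centers, sigma=1.0):
    """x: values to quantize (any shape); centers: tensor of shape (L,)."""
    # Distance from every value x_i to every quantization center c_l.
    dist = torch.abs(x.unsqueeze(-1) - centers)          # (..., L)
    # Hard quantization: assign each value to its nearest center.
    hard = centers[dist.argmin(dim=-1)]                  # (...,)
    # Soft quantization: differentiable softmax-weighted average of the centers.
    weights = torch.softmax(-sigma * dist, dim=-1)       # (..., L)
    soft = (weights * centers).sum(dim=-1)               # (...,)
    return hard, soft

# Example: 6 evenly spaced centers for values normalized to [0, 1].
centers = torch.linspace(0.0, 1.0, steps=6)
x = torch.rand(4, 1, 32, 32)                             # a batch of 32x32 maps
x_hard, x_soft = quantize(x, centers, sigma=10.0)
```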
In step S2, after the furniture surface texture image data have been quantized and encoded, the image data also need to be divided into training data and test data. The training data are used to train the neural network model; the test data are used to test the trained neural network model and obtain a neural network model with a low distortion rate.
Specifically, in step S3, the image entropy neural network model is constructed as follows:
based on PixelRNN and a factorized prior term, the distribution of the quantized representation ẑ is modeled as a product of conditional terms, P(ẑ) = ∏_i p(ẑ_i | ẑ_{i−1}, ..., ẑ_1), from which the entropy model H(ẑ) is constructed;
a context model is then built from a convolutional autoencoder and a lightweight 3D-CNN; its input is the quantized and encoded image data and its output is the compressed image data.
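For illustration, a minimal sketch of a convolutional autoencoder of the kind referred to above is given below; the channel counts, strides, and latent depth are illustrative assumptions rather than the configuration specified by the invention.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_channels=32):
        super().__init__()
        # Encoder: downsample the furniture texture image into a compact latent tensor.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(128, latent_channels, kernel_size=5, stride=2, padding=2),
        )
        # Decoder: reconstruct the image from the (quantized) latent tensor.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 128, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)      # latent representation to be quantized and entropy-coded
        x_hat = self.decoder(z)  # reconstruction (in practice the decoder sees the quantized z)
        return z, x_hat
```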
Specifically, in step S4, during training of the neural network model, a gradient algorithm is applied to the autoencoder and the quantizer to trade off the distortion, and the entropy model is updated accordingly. The distortion is the mean squared error between the original image and the compressed image. The distortion-cost trade-off has the form d + α·H,
where d is the distortion, H is the coding cost given by the entropy model, and α is a reference coefficient with no fixed value; training updates the model iteratively so that the value of this objective, and hence the distortion cost, is minimized.
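For illustration, the distortion-cost trade-off described above can be written as a short loss function; the weight alpha and the rate estimate passed in are placeholders for the trained entropy model.

```python
import torch.nn.functional as F

def rate_distortion_loss(x, x_hat, bits_per_symbol, alpha=0.05):
    """Distortion d (MSE) plus alpha times the estimated coding cost H."""
    distortion = F.mse_loss(x_hat, x)      # mean squared error, original vs. reconstruction
    rate = bits_per_symbol.mean()          # average coding cost from the entropy model
    return distortion + alpha * rate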
In step S4, after the image compression network model has been obtained, the test data need to be fed into the image compression network model for testing; the compression ratio and the distortion ratio are used to measure image compression quality, so that a small distortion ratio is obtained.
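For illustration, the compression-ratio and distortion measurements used in testing can be sketched as follows, assuming an 8-bit source and images normalized to [0, 1]; PSNR is included only as a convenient distortion surrogate.

```python
import torch

def evaluate(x, x_hat, total_bits):
    """x, x_hat: original and reconstructed images in [0, 1]; total_bits: size of the compressed code."""
    source_bits = 8 * x.numel()                   # assuming an 8-bit source image
    compression_ratio = source_bits / total_bits  # >1 means the data became smaller
    mse = torch.mean((x - x_hat) ** 2)
    psnr = 10 * torch.log10(1.0 / mse)            # distortion expressed as PSNR
    return compression_ratio, mse.item(), psnr.item()
```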
Compared with the prior art, the beneficial effects of the present invention are as follows: through the deep-learning-based image compression method, the invention effectively learns to compress furniture surface coating patterns rich in structural information, color, and geometric texture, so that the neural network model of the invention can effectively compress furniture texture images with rich structural information and distinctive geometric textures and minimize the mean squared error between the original image and the decompressed image; when the invention is applied to furniture 3D printing, it effectively reduces memory footprint and improves the resolution of the printed image.
Brief description of the drawings
Fig. 1 is a schematic flow block diagram of the deep-learning-based image compression method applied to furniture 3D printing of the present invention;
Fig. 2 is a schematic structural block diagram of the image compression network model of the present invention.
Detailed description of the embodiments
The technical solution of the present invention is described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
As shown in Figs. 1 and 2, this embodiment provides a deep-learning-based image compression method applied to furniture 3D printing, which specifically comprises the following steps:
S1: acquiring a large quantity of image data bearing furniture texture patterns;
S2: quantizing and encoding the acquired image data and performing convolution initialization;
S3: constructing a neural network model and training it iteratively on small batches of the quantized and encoded image data, applying a gradient algorithm to the autoencoder and the quantizer in each iteration to trade off the distortion and continuously updating the neural network model (see the training-loop sketch after this list);
S4: feeding the test data into the trained neural network model for compression and decompression tests, and using the compression ratio and the distortion ratio to measure image compression quality until a sufficiently small distortion ratio is obtained;
S5: inputting the furniture surface texture pattern to be printed into the tuned neural network, which outputs the compressed image data;
S6: passing the compressed image data to the 3D printing device, which carries out decompression and the printing process.
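For illustration, steps S3 and S4 can be sketched as the following mini-batch training loop, assuming the autoencoder, quantize function, and context model sketched elsewhere in this description; the batch size, learning rate, and epoch count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def train(model, context_model, centers, train_set, alpha=0.05, epochs=10):
    """model: ConvAutoencoder sketch; context_model: context-model sketch;
    centers: the L quantization centers; train_set: dataset yielding image tensors."""
    params = list(model.parameters()) + list(context_model.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-4)
    loader = DataLoader(train_set, batch_size=16, shuffle=True)
    for _ in range(epochs):
        for x in loader:                                  # small batch of texture images
            z = model.encoder(x)                          # latent representation
            z_hard, z_soft = quantize(z, centers)         # see the quantization sketch above
            z_q = z_soft + (z_hard - z_soft).detach()     # straight-through quantization
            x_hat = model.decoder(z_q)                    # reconstruction
            bits = context_model(z_q, centers)            # estimated bits per latent symbol
            loss = F.mse_loss(x_hat, x) + alpha * bits.mean()   # distortion + alpha * rate
            optimizer.zero_grad()
            loss.backward()                               # gradient-based trade-off update
            optimizer.step()
    return model, context_model
```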
Specifically, in step S1, the furniture surface texture image is a color image or a grayscale image, and a color camera or a grayscale camera may be used to acquire the image data;
the size of the image is n*n, where n = 2^k, k >= 5 and k is an integer;
the image data size is 100 KB to 500 KB.
Specifically, in step S2, the quantization encoding is performed as follows: L quantization centers C = {c_1, c_2, ..., c_L} are defined over the image, and each pixel is quantized to its nearest quantization center, i.e. x̂_i = c_j with j = argmin_{l ∈ {1,...,L}} |x_i − c_l|.
The scheme relies on the softened (differentiable) quantization so that gradients can be computed during back-propagation. Here j ∈ {1, ..., L}, x_i is the value of each pixel in the image, x̂_i is the hard-quantized value obtained by direct nearest-center assignment, and x̃_i is the soft-quantized value. The two quantization schemes yield different quantized values; using the appropriate scheme reduces the quantization distortion ratio of the image.
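For illustration, the differentiability point made above can be sketched with a straight-through estimator: the hard quantization is used in the forward pass while gradients follow the soft quantization in the backward pass. This is an assumed mechanism shown for clarity, not necessarily the exact scheme of the invention.

```python
import torch

def quantize_straight_through(z, centers, sigma=10.0):
    dist = torch.abs(z.unsqueeze(-1) - centers)       # distance to each center
    z_hard = centers[dist.argmin(dim=-1)]             # nearest-center assignment
    weights = torch.softmax(-sigma * dist, dim=-1)
    z_soft = (weights * centers).sum(dim=-1)          # differentiable relaxation
    # Value equals z_hard; gradient w.r.t. z equals the gradient of z_soft.
    return z_soft + (z_hard - z_soft).detach()
```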
In step S2, after the furniture surface texture image data have been quantized and encoded, the image data also need to be divided into training data and test data. The training data are used to train the neural network model; the test data are used to test the trained neural network model and obtain a neural network model with a low distortion rate. At least 10,000 images are required as test data.
Specifically, in step S3, the image entropy neural network model is constructed as follows:
based on PixelRNN and a factorized prior term, the distribution of the quantized representation ẑ is modeled as a product of conditional terms, P(ẑ) = ∏_i p(ẑ_i | ẑ_{i−1}, ..., ẑ_1), from which the entropy model H(ẑ) is constructed.
When the true distribution p is replaced by an approximate distribution q, a known property of cross entropy gives the coding cost: the cross entropy CE = E_{ẑ~p}[−log q(ẑ)] can be regarded as an estimate of H(ẑ). Therefore, when the autoencoder is trained, H(ẑ) can be minimized indirectly by minimizing the cross entropy CE.
In this formulation, the coding cost under the approximate distribution q is used in place of the cost under the true distribution p. A convolutional autoencoder and a lightweight 3D-CNN then constitute the neural-network context model; the model's input is the quantized and encoded image data and its output is the compressed image data.
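For illustration, a minimal sketch of a lightweight, causally masked 3D-CNN context model is given below: it predicts, for each quantized latent symbol, a probability over the L centers, and the cross entropy of those predictions gives the estimated coding cost in bits. The single masked layer and its sizes are illustrative assumptions; a practical model would stack several such layers.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv3d(nn.Conv3d):
    """3D convolution that only sees symbols preceding the current one in raster order."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kd, kh, kw = self.kernel_size
        mask = torch.zeros(kd, kh, kw)
        for d in range(kd):
            for h in range(kh):
                for w in range(kw):
                    if (d, h, w) < (kd // 2, kh // 2, kw // 2):  # strictly before the center
                        mask[d, h, w] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv3d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

class ContextModel3D(nn.Module):
    """Predicts a distribution over the L centers for every quantized latent symbol."""
    def __init__(self, num_centers):
        super().__init__()
        self.conv = MaskedConv3d(1, num_centers, kernel_size=3, padding=1)

    def forward(self, z_q, centers):
        logits = self.conv(z_q.unsqueeze(1))              # (B, L, C, H, W)
        # Index of the nearest center = the symbol actually coded at each position.
        symbols = torch.abs(z_q.unsqueeze(1) - centers.view(1, -1, 1, 1, 1)).argmin(dim=1)
        # Cross entropy of the prediction, converted to bits: estimated coding cost per symbol.
        nats = F.cross_entropy(logits, symbols, reduction="none")
        return nats / math.log(2.0)
```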
Specifically, in step S4, during training of the neural network model, a gradient algorithm is applied to the autoencoder and the quantizer to trade off the distortion, and the entropy model is updated accordingly. The distortion is the mean squared error between the original image and the compressed image. The distortion-cost trade-off has the form d + α·H,
where d is the distortion, H is the coding cost given by the entropy model, and α is a reference coefficient with no fixed value; training updates the model iteratively so that the value of this objective reaches a minimum and the distortion cost is minimized.
This embodiment uses a deep-learning context model to learn compression and establishes a compression network oriented toward furniture images. Compared with conventional methods, the network, once established, offers considerable room for process upgrading and industrial production value; it can effectively reduce memory footprint and improve the resolution of the printed image.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and alterations can be made to these embodiments without departing from the principles and spirit of the present invention; the scope of the present invention is defined by the appended claims.

Claims (7)

1. A deep-learning-based image compression method applied to furniture 3D printing, characterized by comprising the following steps:
S1: acquiring furniture surface texture image data;
S2: quantizing and encoding the furniture surface texture image data;
S3: constructing an image entropy neural network model;
S4: inputting the quantized and encoded image data into the image entropy neural network model for training to obtain an image compression network model;
S5: inputting the furniture surface texture image data to be printed into the image compression network model to obtain compressed furniture surface texture image data.
2. The deep-learning-based image compression method applied to furniture 3D printing according to claim 1, characterized in that, in step S1, the furniture surface texture image is a color image or a grayscale image;
the size of the image is n*n, where n = 2^k, k >= 5 and k is an integer;
the image data size is 100 KB to 500 KB.
3. The deep-learning-based image compression method applied to furniture 3D printing according to claim 1, characterized in that, in step S2, the quantization encoding is performed as follows: L quantization centers C = {c_1, c_2, ..., c_L} are defined over the image, and each pixel of the image is quantized to its nearest quantization center, i.e. x̂_i = c_j with j = argmin_{l ∈ {1,...,L}} |x_i − c_l|,
where j ∈ {1, ..., L}, x_i is the value of each pixel in the image, x̂_i is the hard-quantized value obtained directly by hard quantization, and x̃_i is the soft-quantized value obtained after soft quantization of the image.
4. The deep-learning-based image compression method applied to furniture 3D printing according to claim 1, characterized in that, in step S2, after the furniture surface texture image data have been quantized and encoded, the image data also need to be divided into training data and test data.
5. The deep-learning-based image compression method applied to furniture 3D printing according to claim 1, characterized in that, in step S3, the image entropy neural network model is constructed as follows:
based on PixelRNN and a factorized prior term, the distribution of the quantized representation ẑ is modeled as a product of conditional terms, P(ẑ) = ∏_i p(ẑ_i | ẑ_{i−1}, ..., ẑ_1), from which the entropy model H(ẑ) is constructed;
where p is the true distribution, q is the approximate distribution, and the cross entropy CE = E_{ẑ~p}[−log q(ẑ)] is an estimate of the coding cost; when the autoencoder is trained, the coding cost is minimized indirectly by minimizing the cross entropy CE;
a convolutional autoencoder and a lightweight 3D-CNN constitute the neural-network context model, whose input is the quantized and encoded image data and whose output is the compressed image data.
6. The deep-learning-based image compression method applied to furniture 3D printing according to claim 1, characterized in that, in step S4, during training of the neural network model, a gradient algorithm is applied to the autoencoder and the quantizer to trade off the distortion, and the entropy model is updated; the distortion is the mean squared error between the original image and the compressed image; the distortion-cost trade-off has the form d + α·H,
where d is the distortion and α is a reference coefficient with no fixed value.
7. The deep-learning-based image compression method applied to furniture 3D printing according to claim 1, characterized in that, in step S4, after the image compression network model has been obtained, the test data need to be input into the image compression network model for testing.
CN201910327972.4A 2019-04-23 2019-04-23 Deep-learning-based image compression method applied to furniture 3D printing Withdrawn CN110084843A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910327972.4A CN110084843A (en) Deep-learning-based image compression method applied to furniture 3D printing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910327972.4A CN110084843A (en) Deep-learning-based image compression method applied to furniture 3D printing

Publications (1)

Publication Number Publication Date
CN110084843A true CN110084843A (en) 2019-08-02

Family

ID=67416223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910327972.4A Withdrawn CN110084843A (en) 2019-04-23 2019-04-23 A kind of method for compressing image based on deep learning applied to furniture 3 D-printing

Country Status (1)

Country Link
CN (1) CN110084843A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110411308A (en) * 2019-08-14 2019-11-05 浙江德尔达医疗科技有限公司 A kind of accuracy checking method of customized type 3D printing model
CN110411308B (en) * 2019-08-14 2021-08-24 浙江德尔达医疗科技有限公司 Precision detection method for customized 3D printing model
CN112659548A (en) * 2020-11-06 2021-04-16 西安交通大学 Surface exposure 3D printing process optimization method based on genetic algorithm and BP neural network
CN113079377A (en) * 2021-04-01 2021-07-06 中国科学技术大学 Training method for depth image/video compression network

Similar Documents

Publication Publication Date Title
CN106157339B (en) The animated Mesh sequence compaction method extracted based on low-rank vertex trajectories subspace
CN110084843A (en) Deep-learning-based image compression method applied to furniture 3D printing
CN109685743B (en) Image mixed noise elimination method based on noise learning neural network model
CN106254879B (en) A kind of application encodes the Image Lossy Compression method of neural network certainly
CN108154499B (en) Woven fabric texture flaw detection method based on K-SVD learning dictionary
CN108038503B (en) Woven fabric texture characterization method based on K-SVD learning dictionary
CN101835048A (en) By carry out the method and apparatus of video coding based on the just noticeable difference model of ABT
CN110248190A (en) A kind of compressed sensing based multilayer residual error coefficient image encoding method
CN103700074B (en) Based on the self-adapting compressing perception method of sampling of discrete cosine transform coefficient distribution
CN108898568B (en) Image synthesis method and device
Fan et al. D-dpcc: Deep dynamic point cloud compression via 3d motion prediction
CN107018410B (en) A kind of non-reference picture quality appraisement method based on pre- attention mechanism and spatial dependence
CN108111852A (en) Towards the double measurement parameter rate distortion control methods for quantifying splits' positions perceptual coding
Gao et al. Rate-distortion modeling for bit rate constrained point cloud compression
Harell et al. Rate-distortion in image coding for machines
Joshua et al. Comparison of DCT and DWT image compression
CN104299256B (en) Almost-lossless compression domain volume rendering method for three-dimensional volume data
CN111083498B (en) Model training method and using method for video coding inter-frame loop filtering
Bletterer et al. Point cloud compression using depth maps
CN107146260A (en) A kind of compression of images based on mean square error perceives the method for sampling
Gao et al. Quality constrained compression using DWT-based image quality metric
CN107203991A (en) A kind of half reference image quality appraisement method based on spectrum residual error
CN111161363A (en) Image coding model training method and device
CN115294010A (en) Method for evaluating quality of reference point cloud based on support vector machine
CN107705249A (en) Image super-resolution method based on depth measure study

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20190802)