CN112270397B - Color space conversion method based on deep neural network - Google Patents


Info

Publication number
CN112270397B
CN112270397B (application number CN202011157124.2A)
Authority
CN
China
Prior art keywords
layer
training
network
neural network
dbn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011157124.2A
Other languages
Chinese (zh)
Other versions
CN112270397A (en)
Inventor
苏泽斌
杨金锴
李鹏飞
景军锋
张缓缓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Polytechnic University
Original Assignee
Xian Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Polytechnic University
Priority to CN202011157124.2A
Publication of CN112270397A
Application granted
Publication of CN112270397B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/004: Artificial life, i.e. computing arrangements simulating life
    • G06N3/006: Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Color Image Communication Systems (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The invention discloses a color space conversion method based on a deep neural network, implemented according to the following steps: step 1, preparing a training sample set and a test sample set, wherein the training samples are used for building the neural network model and the test samples are used for checking the conversion accuracy of the trained model, and establishing a deep belief network model; step 2, optimizing parameters of the deep belief network, such as the neuron connection weights, with a particle swarm algorithm; step 3, inputting the training samples into the optimized network for training, then performing reverse fine-tuning with a BP neural network to obtain a stable PSO-DBN model, yielding a conversion model from the L*a*b* to the CMYK color space; and step 4, inputting the test samples into the conversion model for color conversion, calculating the conversion errors, checking the model accuracy, and completing the color space conversion. This solves the problem in the prior art that conversion models from the L*a*b* to the CMYK color space have low conversion accuracy.

Description

Color space conversion method based on deep neural network
Technical Field
The invention belongs to the technical field of image processing, and relates to a color space conversion method based on a deep neural network.
Background
Color management of digital printers can be divided into three steps: device calibration, characterization, and color space conversion, of which color space conversion is an important part of digital printing color management. The CMYK color space is the color standard applied in digital printing; it describes the relationship between the ink amounts of the four colors cyan (C), magenta (M), yellow (Y), and black (K) in a digital printed product. The L*a*b* color space is device-independent, can serve as a connection color space between different devices, and is widely applied to color evaluation of digital printing machines. In L*a*b*, L represents lightness; the positive range of the a value is the red gamut and the negative range is the green gamut; the positive range of the b value is the yellow gamut and the negative range is the blue gamut. Different devices describe the color space of images differently, and their color gamuts differ considerably, so chromatic aberration exists between the digital printed product and the original sample image. Establishing a high-precision conversion relation between the L*a*b* and CMYK color spaces can therefore greatly improve the quality of digital printed products.
Neural network technology has received great attention in color management and color space conversion applications. Conventional color space conversion methods use shallow neural networks such as BPNN, GRNN, and ELM, which, limited by their own structures, easily converge to locally optimal solutions on complex problems, making their accuracy difficult to improve further. The deep belief network (Deep Belief Network, DBN) is an unsupervised learning method that can extract features from large amounts of data; its wide adaptability and strong mapping capability make it suitable for constructing a color space conversion model. However, the parameters of the DBN algorithm are often determined manually through experience and repeated adjustment, which greatly limits the network's practicality. The particle swarm optimization algorithm (Particle Swarm Optimization, PSO) can optimize the parameters of the DBN algorithm and assign the optimal parameters to the DBN network, thereby improving the conversion accuracy of the DBN.
Disclosure of Invention
The invention aims to provide a color space conversion method based on a deep neural network, which solves the problem in the prior art that conversion models from the L*a*b* to the CMYK color space have low conversion accuracy.
The technical scheme adopted by the invention is that the color space conversion method based on the deep neural network is implemented according to the following steps:
step 1, preparing a training sample set and a test sample set, wherein the training samples are used for building the neural network model and the test samples are used for checking the conversion accuracy of the trained model; establishing a deep belief network model and initializing the parameters among the input layer, the hidden layers and the output layer of the DBN, wherein the L*a*b* color space is used as the input of the neural network and the CMYK color space is used as the output of the neural network;
step 2, optimizing parameters of the deep belief network, such as the neuron connection weights, using a particle swarm algorithm;
step 3, inputting the training samples into the network optimized in step 2 for training, then performing reverse fine-tuning with a BP neural network to obtain a stable PSO-DBN model, yielding the conversion model from the L*a*b* to the CMYK color space;
and step 4, inputting the test samples into the conversion model for color conversion, calculating the conversion errors, checking the model accuracy, and completing the color space conversion.
The invention is also characterized in that:
in step 1, establishing the deep belief network model, initializing the parameters among the input layer, the hidden layers and the output layer of the DBN, with the L*a*b* color space as the input of the neural network and the CMYK color space as the output of the neural network, is specifically implemented as follows: a deep belief network model is established, in which the restricted Boltzmann machine is the main component of the DBN; the training process of the DBN can be divided into two stages, pre-training and reverse fine-tuning; first, each RBM in the network is trained layer by layer with an unsupervised greedy learning algorithm, and the data feature information is transmitted layer by layer to initialize the network parameters; subsequently, fine-tuning is performed from top to bottom with a BP neural network algorithm on the initial weights obtained by pre-training; this supervised training ensures that the model reaches an optimal solution, thereby determining the structure of the whole DBN network.
The unsupervised training of the RBM in step 1 is specifically performed as follows:
in the RBM, $v_1, v_2, \ldots$ denote the visible-layer units, $h_1, h_2, \ldots$ denote the hidden-layer units, and $w_{ij}$ denotes the weight of each neuron connection; an energy function is introduced to define the total energy of the system, from which the joint distribution probability is calculated:

$$E(v,h|\theta) = -\sum_{i=1}^{V} a_i v_i - \sum_{j=1}^{H} b_j h_j - \sum_{i=1}^{V}\sum_{j=1}^{H} v_i w_{ij} h_j \qquad (1)$$

In the above equation $\theta = \{a, b, w\}$, where $w_{ij}$ is the connection weight between the visible layer and the hidden layer, $V$ is the number of units in the visible layer, $H$ is the number of units in the hidden layer, $a_i$ is the bias of the visible layer, and $b_j$ is the bias of the hidden layer. According to the defined system energy, the joint distribution of the visible and hidden units is defined as follows:

$$p(v,h|\theta) = \frac{e^{-E(v,h|\theta)}}{Z} \qquad (2)$$

$$Z = \sum_{v',h'} e^{-E(v',h'|\theta)} \qquad (3)$$

In the above formula, $Z$ is a normalization factor ensuring that the joint probability lies in the range $[0,1]$. Marginalizing over the hidden units, the distribution of the visible layer is:

$$p(v|\theta) = \frac{1}{Z}\sum_{h} e^{-E(v,h|\theta)} \qquad (4)$$

Since the neurons within each layer of the RBM are conditionally independent, the activation probability of each unit is obtained from the following relations:

$$p(v_i = 1\,|\,h) = f\Big(a_i + \sum_j h_j w_{ij}\Big) \qquad (5)$$

$$p(h_j = 1\,|\,v) = f\Big(b_j + \sum_i v_i w_{ij}\Big) \qquad (6)$$
the training process of the DBN network in step 1 is specifically implemented as follows:
step 1.1, let $F_i$ denote the state of node $i$ and $F_j$ the state of a node $j$ connected to node $i$; the weight matrix is $W$; randomly select a training sample, input the data into the visible layer of the first RBM, and update the state $F_j$ of each node of the first hidden layer according to formula (7), where the sigmoid output $\sigma \in [0,1]$:

$$p(F_j = 1) = \sigma\Big(b_j + \sum_i F_i w_{ij}\Big) \qquad (7)$$

step 1.2, use the hidden-node states $F_j$ obtained in step 1.1 to update the states of the visible nodes of the first RBM according to formula (8), the result being denoted $F'_i$:

$$p(F'_i = 1) = \sigma\Big(a_i + \sum_j F_j w_{ij}\Big) \qquad (8)$$

step 1.3, take the hidden-layer node states of the first RBM obtained in the previous steps as the input of the second RBM, and update each node state of the DBN layer by layer in the same way until all four RBMs have been updated;
step 1.4, calculate the network weight update according to formula (9) and update the weight matrix in the network until the change of the weight matrix is small enough or the set maximum number of training iterations is reached, ending the DBN training:

$$\Delta\omega_{ij} = \eta\big(\langle F_i F_j\rangle - \langle F'_i F'_j\rangle\big) \qquad (9)$$
the step 2 is specifically implemented according to the following steps:
step 2.1, preprocessing the data set and first initializing the parameters in the DBN neural network to determine the dimension of the particles;
step 2.2, initializing each parameter of the particle swarm; for a DBN with 4 hidden layers whose layers contain $m_1$, $m_2$, $m_3$ and $m_4$ neurons respectively, with learning rate $\eta \in [0,1)$, each particle in the swarm is set as the five-dimensional vector $X = (m_1, m_2, m_3, m_4, \eta)$;
step 2.3, calculating the fitness function value of each particle using formula (10) to obtain the individual extremum $P_{best}$ and the population extremum $G_{best}$:

$$fitness = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}(a_{ij} - b_{ij})^2 \qquad (10)$$

where $N$ is the total number of samples, $M$ is the dimension of the particle, and $a_{ij}$, $b_{ij}$ are respectively the predicted value and the actual value of the $j$-th dimensional data of the $i$-th sample;
step 2.4, comparing the fitness value of each particle with $P_{best}$; if the fitness is better than the individual extremum, it is updated to $P_{best}$, otherwise $P_{best}$ is kept; the same procedure yields the global optimum $G_{best}$;
step 2.5, updating the speed and position of each particle; if the maximum number of iterations is reached, the iteration ends and the final parameters are output, otherwise the search for the optimal particle position continues.
The step 3 is specifically implemented according to the following steps:
step 3.1, splitting the optimal solution obtained in step 2 into the parameters of the DBN and inputting them into the DBN network for training to obtain a stable neural network model;
step 3.2, taking the 3 component values of the training samples in the L*a*b* color space as the neural network input and the values of the 4 components in the CMYK color space as the neural network output, and training the PSO-DBN network.
The beneficial effects of the invention are as follows: the invention discloses a color space conversion method based on a deep neural network, which solves the problem that color space conversion methods in the prior art have low conversion precision. Addressing the problems that traditional color space conversion methods convert with low precision and that neural network algorithms easily fall into local minima, the method optimizes a deep belief network with a particle swarm algorithm, overcoming the low conversion precision of traditional shallow color conversion methods while keeping the conversion model highly stable.
Drawings
FIG. 1 is a schematic flow chart of the color space conversion method based on a deep neural network according to the present invention;
FIG. 2 is a structure diagram of the deep belief network in the color space conversion method based on a deep neural network according to the present invention;
FIG. 3 is a flow chart of the particle-swarm-optimized deep belief network algorithm in the color space conversion method based on a deep neural network;
FIG. 4 is a flowchart of building the L*a*b*-to-CMYK color space conversion model in the color space conversion method based on a deep neural network;
FIG. 5 is a statistical chart of conversion color differences verifying the designed method in an embodiment of the color space conversion method based on a deep neural network according to the present invention.
Detailed Description
The invention will be described in detail below with reference to the drawings and the detailed description.
The invention discloses a color space conversion method based on a deep neural network, which is implemented as shown in fig. 1, and specifically comprises the following steps:
step 1, preparing a training sample set and a test sample set, wherein the training samples are used for building the neural network model and the test samples are used for checking the conversion accuracy of the trained model; establishing a deep belief network model and initializing the parameters among the input layer, the hidden layers and the output layer of the DBN, wherein the L*a*b* color space is used as the input of the neural network and the CMYK color space is used as the output of the neural network;
in step 1, establishing the deep belief network model, initializing the parameters among the input layer, the hidden layers and the output layer of the DBN, with the L*a*b* color space as the input of the neural network and the CMYK color space as the output of the neural network, is specifically implemented as follows: a deep belief network model is established, as shown in fig. 2; the restricted Boltzmann machine (Restricted Boltzmann Machine, RBM) is the main component of the DBN, and the training process of the DBN can be divided into two stages, pre-training and reverse fine-tuning; first, each RBM in the network is trained layer by layer with an unsupervised greedy learning algorithm, and the data feature information is transmitted layer by layer to initialize the network parameters; subsequently, fine-tuning is performed from top to bottom with a BP neural network algorithm on the initial weights obtained by pre-training; this supervised training ensures that the model reaches an optimal solution, thereby determining the structure of the whole DBN network.
The unsupervised training of the RBM in step 1 is specifically performed as follows:
in the RBM, $v_1, v_2, \ldots$ denote the visible-layer units, $h_1, h_2, \ldots$ denote the hidden-layer units, and $w_{ij}$ denotes the weight of each neuron connection; an energy function is introduced to define the total energy of the system, from which the joint distribution probability is calculated:

$$E(v,h|\theta) = -\sum_{i=1}^{V} a_i v_i - \sum_{j=1}^{H} b_j h_j - \sum_{i=1}^{V}\sum_{j=1}^{H} v_i w_{ij} h_j \qquad (1)$$

In the above equation $\theta = \{a, b, w\}$, where $w_{ij}$ is the connection weight between the visible layer and the hidden layer, $V$ is the number of units in the visible layer, $H$ is the number of units in the hidden layer, $a_i$ is the bias of the visible layer, and $b_j$ is the bias of the hidden layer. According to the defined system energy, the joint distribution of the visible and hidden units is defined as follows:

$$p(v,h|\theta) = \frac{e^{-E(v,h|\theta)}}{Z} \qquad (2)$$

$$Z = \sum_{v',h'} e^{-E(v',h'|\theta)} \qquad (3)$$

In the above formula, $Z$ is a normalization factor ensuring that the joint probability lies in the range $[0,1]$. Marginalizing over the hidden units, the distribution of the visible layer is:

$$p(v|\theta) = \frac{1}{Z}\sum_{h} e^{-E(v,h|\theta)} \qquad (4)$$

Since the neurons within each layer of the RBM are conditionally independent, the activation probability of each unit is obtained from the following relations:

$$p(v_i = 1\,|\,h) = f\Big(a_i + \sum_j h_j w_{ij}\Big) \qquad (5)$$

$$p(h_j = 1\,|\,v) = f\Big(b_j + \sum_i v_i w_{ij}\Big) \qquad (6)$$
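Formulas (5) and (6) can be sketched in a few lines of NumPy. This is a minimal illustration only, not the patent's implementation; the function names and the toy layer sizes (3 visible units, one per L*a*b* component, and 8 hidden units) are assumptions for demonstration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_activation_prob(v, b, w):
    # formula (6): p(h_j = 1 | v) = f(b_j + sum_i v_i w_ij)
    return sigmoid(b + v @ w)

def visible_activation_prob(h, a, w):
    # formula (5): p(v_i = 1 | h) = f(a_i + sum_j h_j w_ij)
    return sigmoid(a + h @ w.T)

# Toy RBM: 3 visible units (L*, a*, b*), 8 hidden units
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(3, 8))  # connection weights w_ij
a = np.zeros(3)                        # visible biases a_i
b = np.zeros(8)                        # hidden biases b_j

v = np.array([0.5, 0.2, 0.8])          # one normalized L*a*b* sample
p_h = hidden_activation_prob(v, b, w)  # activation probability of each hidden unit
p_v = visible_activation_prob(p_h, a, w)
```

Because the units within a layer are conditionally independent, both probabilities are computed with a single matrix-vector product per layer.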
the training process of the DBN network in step 1 is specifically implemented as follows:
step 1.1, let $F_i$ denote the state of node $i$ and $F_j$ the state of a node $j$ connected to node $i$; the weight matrix is $W$; randomly select a training sample, input the data into the visible layer of the first RBM, and update the state $F_j$ of each node of the first hidden layer according to formula (7), where the sigmoid output $\sigma \in [0,1]$:

$$p(F_j = 1) = \sigma\Big(b_j + \sum_i F_i w_{ij}\Big) \qquad (7)$$

step 1.2, use the hidden-node states $F_j$ obtained in step 1.1 to update the states of the visible nodes of the first RBM according to formula (8), the result being denoted $F'_i$:

$$p(F'_i = 1) = \sigma\Big(a_i + \sum_j F_j w_{ij}\Big) \qquad (8)$$

step 1.3, take the hidden-layer node states of the first RBM obtained in the previous steps as the input of the second RBM, and update each node state of the DBN layer by layer in the same way until all four RBMs have been updated;
step 1.4, calculate the network weight update according to formula (9) and update the weight matrix in the network until the change of the weight matrix is small enough or the set maximum number of training iterations is reached, ending the DBN training:

$$\Delta\omega_{ij} = \eta\big(\langle F_i F_j\rangle - \langle F'_i F'_j\rangle\big) \qquad (9)$$
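Steps 1.1 to 1.4 amount to one step of contrastive divergence. The NumPy sketch below shows a single weight update in the spirit of formula (9), under stated assumptions: the layer sizes and learning rate are illustrative, and the reconstruction side uses probabilities rather than sampled binary states (a common practical choice, not specified by the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, w, a, b, eta, rng):
    """One CD-1 weight update: delta_w = eta * (<F_i F_j> - <F'_i F'_j>)."""
    # Step 1.1: hidden probabilities from the data, then sample binary states F_j
    p_h0 = sigmoid(b + v0 @ w)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Step 1.2: reconstruct the visible layer F'_i from the sampled hidden states
    p_v1 = sigmoid(a + h0 @ w.T)
    # Hidden probabilities of the reconstruction (the F' side of formula (9))
    p_h1 = sigmoid(b + p_v1 @ w)
    # Step 1.4: update from the difference of data and reconstruction correlations
    return w + eta * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))

rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.1, size=(3, 8))
a, b = np.zeros(3), np.zeros(8)
v0 = np.array([1.0, 0.0, 1.0])         # one binary training vector
w_new = cd1_update(v0, w, a, b, eta=0.1, rng=rng)
```

In the full DBN training loop this update would be repeated over all training samples and applied to each of the four stacked RBMs in turn.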
step 2, optimizing parameters of the deep belief network, such as the neuron connection weights, using a particle swarm algorithm;
as shown in fig. 3, the step 2 is specifically implemented according to the following steps:
step 2.1, preprocessing the data set and first initializing the parameters in the DBN neural network to determine the dimension of the particles;
step 2.2, initializing each parameter of the particle swarm; for a DBN with 4 hidden layers whose layers contain $m_1$, $m_2$, $m_3$ and $m_4$ neurons respectively, with learning rate $\eta \in [0,1)$, each particle in the swarm is set as the five-dimensional vector $X = (m_1, m_2, m_3, m_4, \eta)$;
step 2.3, calculating the fitness function value of each particle using formula (10) to obtain the individual extremum $P_{best}$ and the population extremum $G_{best}$:

$$fitness = \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}(a_{ij} - b_{ij})^2 \qquad (10)$$

where $N$ is the total number of samples, $M$ is the dimension of the particle, and $a_{ij}$, $b_{ij}$ are respectively the predicted value and the actual value of the $j$-th dimensional data of the $i$-th sample;
step 2.4, comparing the fitness value of each particle with $P_{best}$; if the fitness is better than the individual extremum, it is updated to $P_{best}$, otherwise $P_{best}$ is kept; the same procedure yields the global optimum $G_{best}$;
step 2.5, updating the speed and position of each particle; if the maximum number of iterations is reached, the iteration ends and the final parameters are output, otherwise the search for the optimal particle position continues.
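Steps 2.1 to 2.5 follow the standard inertia-weight PSO loop. The sketch below is generic: the hyperparameters (inertia 0.7, c1 = c2 = 1.5, 20 particles) and the sphere-shaped stand-in fitness are illustrative assumptions; in the patent's method the fitness of formula (10) would instead train a DBN with the candidate (m1, m2, m3, m4, eta) and return its error:

```python
import numpy as np

def pso_minimize(fitness, lo, hi, n_particles=20, iters=200,
                 inertia=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm minimization; returns (G_best position, G_best value)."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    p_best = x.copy()                                  # individual extrema P_best
    p_val = np.array([fitness(p) for p in x])
    g = int(np.argmin(p_val))                          # index of population extremum G_best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Step 2.5: velocity pulled toward P_best and G_best, then move and clip
        v = inertia * v + c1 * r1 * (p_best - x) + c2 * r2 * (p_best[g] - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < p_val                             # step 2.4: compare with P_best
        p_best[better], p_val[better] = x[better], f[better]
        g = int(np.argmin(p_val))
    return p_best[g], p_val[g]

# Stand-in fitness with a known optimum at (3, 3, 3, 3, 0.5)
target = np.array([3.0, 3.0, 3.0, 3.0, 0.5])
best_x, best_f = pso_minimize(lambda p: float(np.sum((p - target) ** 2)),
                              lo=np.zeros(5),
                              hi=np.array([10.0, 10.0, 10.0, 10.0, 1.0]))
```

The five search dimensions mirror the particle vector X = (m1, m2, m3, m4, eta) described above.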
Step 3, inputting the training samples into the network optimized in step 2 for training, then performing reverse fine-tuning with a BP neural network to obtain a stable PSO-DBN model, yielding the conversion model from the L*a*b* to the CMYK color space;
as shown in fig. 4, the step 3 is specifically implemented as follows:
step 3.1, splitting the optimal solution obtained in step 2 into the parameters of the DBN and inputting them into the DBN network for training to obtain a stable neural network model;
step 3.2, taking the 3 component values of the training samples in the L*a*b* color space as the neural network input and the values of the 4 components in the CMYK color space as the neural network output, and training the PSO-DBN network.
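Step 3.1's "splitting the optimal solution into the parameters of the DBN" can be sketched as follows. The input/output sizes (3 for L*a*b*, 4 for CMYK) come from the text; the helper name and the rounding of real-valued particle coordinates to whole neuron counts are our assumptions:

```python
import numpy as np

def split_particle(particle):
    """Split a PSO particle X = (m1, m2, m3, m4, eta) into DBN settings:
    the layer sizes of a 3 -> m1 -> m2 -> m3 -> m4 -> 4 network and the
    learning rate eta used for training."""
    m1, m2, m3, m4, eta = particle
    hidden = [max(1, int(round(m))) for m in (m1, m2, m3, m4)]
    # 3 inputs (L*, a*, b*), four hidden layers, 4 outputs (C, M, Y, K)
    layer_sizes = [3] + hidden + [4]
    return layer_sizes, float(eta)

sizes, eta = split_particle(np.array([32.4, 16.7, 16.2, 8.9, 0.05]))
```

The resulting layer-size list and learning rate would then parameterize the layer-by-layer RBM pre-training and the BP fine-tuning described above.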
And 4, inputting the test sample into a conversion model to perform color conversion, calculating conversion errors, checking model accuracy and finishing color space conversion.
Examples
The running platform of this example was Windows 10, and MATLAB R2016a was used as the simulation environment. The PANTONE TCX color chart was used as the sample dataset for the experiment: all 2310 color patches of the chart were numbered, 800 random numbers in the range 1 to 2310 were generated with MATLAB, and the 800 color patches with those numbers were taken as training samples. The L*a*b* values were used as input and the corresponding CMYK values as output to train the network and create the nonlinear mapping. Then, another 50 color patches were randomly selected as test samples from the remaining 1510 color patches of the chart.
The 50 test samples were input into the color space conversion model to obtain 50 CMYK predicted values, which were compared with the actual CMYK values, and the average conversion errors of the four components C, M, Y, and K were calculated respectively. As shown in fig. 5, the conversion accuracy from the L*a*b* to the CMYK color space is high.
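The per-component error statistic behind fig. 5 can be computed in a few lines. The predictions below are simulated placeholders, since the patent's actual test values appear only in the figure:

```python
import numpy as np

def channel_errors(pred, actual):
    """Average absolute conversion error of each CMYK component over the test patches."""
    return np.mean(np.abs(pred - actual), axis=0)

rng = np.random.default_rng(2)
actual = rng.uniform(0.0, 100.0, size=(50, 4))        # 50 patches, true CMYK values (%)
pred = actual + rng.normal(0.0, 1.5, size=(50, 4))    # simulated model output
errs = dict(zip("CMYK", channel_errors(pred, actual)))
```

Averaging along axis 0 collapses the 50 test patches, leaving one mean error per ink channel.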
The invention discloses a color space conversion method based on a deep neural network, which converts the L*a*b* color space into the CMYK color space. The L*a*b* and CMYK values correspond to the input and output of the deep belief network, respectively. Parameters of the deep belief network, such as the connection weights, are optimized with a particle swarm algorithm, improving the performance of the network. The working process is as follows: establish the data set; determine the deep belief network structure and initialize its parameters; optimize the weights of the deep belief network with the particle swarm; and finally obtain a stable neural network model. Any L*a*b*-to-CMYK color space conversion function in digital printing can thus be realized. The method improves the color space conversion accuracy and has high conversion efficiency.

Claims (1)

1. A color space conversion method based on a deep neural network, characterized by comprising the following steps:
step 1, preparing a training sample set and a test sample set, wherein the training samples are used for building the neural network model and the test samples are used for checking the conversion accuracy of the trained model; establishing a deep belief network model and initializing the parameters among the input layer, the hidden layers and the output layer of the DBN, wherein the L*a*b* color space is used as the input of the neural network and the CMYK color space is used as the output of the neural network;
in the step 1, establishing the deep belief network model, initializing the parameters among the input layer, the hidden layers and the output layer of the DBN, with the L*a*b* color space as the input of the neural network and the CMYK color space as the output of the neural network, is specifically implemented as follows: a deep belief network model is established, in which the restricted Boltzmann machine is the main component of the DBN; the training process of the DBN can be divided into two stages, pre-training and reverse fine-tuning; first, each RBM in the network is trained layer by layer with an unsupervised greedy learning algorithm, and the data feature information is transmitted layer by layer to initialize the network parameters; subsequently, fine-tuning is performed from top to bottom with a BP neural network algorithm on the initial weights obtained by pre-training; this supervised training ensures that the model reaches an optimal solution, thereby determining the structure of the whole DBN network;
the unsupervised training of the RBM in step 1 is specifically implemented as follows:
in the RBM, $v_1, v_2, \ldots$ represent the visible-layer units, $h_1, h_2, \ldots$ represent the hidden-layer units, and $w_{ij}$ represents the weight of each neuron connection; an energy function is introduced to define the total energy of the system, from which the joint distribution probability is calculated:

$$E(v,h|\theta) = -\sum_{i=1}^{V} a_i v_i - \sum_{j=1}^{H} b_j h_j - \sum_{i=1}^{V}\sum_{j=1}^{H} v_i w_{ij} h_j \qquad (1)$$

where $\theta = \{a, b, w\}$, $w_{ij}$ is the connection weight between the visible layer and the hidden layer, $V$ is the number of units in the visible layer, $H$ is the number of units in the hidden layer, $a_i$ is the bias of the visible layer, and $b_j$ is the bias of the hidden layer; according to the defined system energy, the joint distribution of the visible and hidden units is defined as follows:

$$p(v,h|\theta) = \frac{e^{-E(v,h|\theta)}}{Z} \qquad (2)$$

$$Z = \sum_{v',h'} e^{-E(v',h'|\theta)} \qquad (3)$$

where $Z$ is a normalization factor ensuring that the joint probability lies in $[0,1]$; marginalizing over the hidden units, the distribution of the visible layer is:

$$p(v|\theta) = \frac{1}{Z}\sum_{h} e^{-E(v,h|\theta)} \qquad (4)$$

since the neurons in each layer of the RBM are conditionally independent, the activation probability of each unit is obtained from the following relations:

$$p(v_i = 1\,|\,h) = f\Big(a_i + \sum_j h_j w_{ij}\Big) \qquad (5)$$

$$p(h_j = 1\,|\,v) = f\Big(b_j + \sum_i v_i w_{ij}\Big) \qquad (6);$$
the training process of the DBN network in the step 1 is specifically implemented according to the following steps:
step 1.1, let $F_r$ denote the state of node $r$ and $F_t$ the state of a node $t$ connected to node $r$; the weight matrix is $W$; randomly select a training sample, input the data into the visible layer of the first RBM, and update the state $F_t$ of each node of the first hidden layer according to formula (7), where the sigmoid output $\sigma \in [0,1]$:

$$p(F_t = 1) = \sigma\Big(b_t + \sum_r F_r w_{rt}\Big) \qquad (7)$$

step 1.2, use the hidden-node states $F_t$ obtained in step 1.1 to update the states of the visible nodes of the first RBM according to formula (8), the result being denoted $F'_r$:

$$p(F'_r = 1) = \sigma\Big(a_r + \sum_t F_t w_{rt}\Big) \qquad (8)$$

step 1.3, take the hidden-layer node states of the first RBM obtained in the previous steps as the input of the second RBM, and update each node state of the DBN layer by layer in the same way until all four RBMs have been updated;
step 1.4, calculate the network weight update according to formula (9) and update the weight matrix in the network until the change of the weight matrix is small enough or the set maximum number of training iterations is reached, ending the DBN training:

$$\Delta\omega_{rt} = \eta\big(\langle F_r F_t\rangle - \langle F'_r F'_t\rangle\big) \qquad (9);$$
step 2, optimizing parameters of the deep belief network, such as the neuron connection weights, using a particle swarm algorithm;
the step 2 is specifically implemented according to the following steps:
step 2.1, preprocessing a data set, and firstly initializing parameters in a DBN neural network to determine the dimension of particles;
step 2.2, initializing each parameter of the particle swarm; for a DBN with 4 hidden layers whose layers contain $m_1$, $m_2$, $m_3$ and $m_4$ neurons respectively, with learning rate $\eta \in [0,1)$, each particle in the swarm is set as the five-dimensional vector $X = (m_1, m_2, m_3, m_4, \eta)$;
step 2.3, calculating the fitness function value of each particle using formula (10) to obtain the individual extremum $P_{best}$ and the population extremum $G_{best}$:

$$fitness = \frac{1}{n}\sum_{x=1}^{n}\sum_{y=1}^{M}(a_{xy} - b_{xy})^2 \qquad (10)$$

where $n$ is the total number of samples, $M$ is the dimension of the particle, and $a_{xy}$, $b_{xy}$ are respectively the predicted value and the actual value of the $y$-th dimensional data of the $x$-th sample;
step 2.4, comparing the fitness value of each particle with $P_{best}$; if the fitness is better than the individual extremum, it is updated to $P_{best}$, otherwise $P_{best}$ is kept; the same procedure yields the global optimum $G_{best}$;
step 2.5, updating the speed and position of each particle; if the maximum number of iterations is reached, ending the iteration and outputting the final parameters, otherwise continuing to search for the optimal particle position;
step 3, inputting the training samples into the network optimized in step 2 for training, then performing reverse fine-tuning with a BP neural network to obtain a stable PSO-DBN model, yielding the conversion model from the L*a*b* to the CMYK color space;
the step 3 is specifically implemented according to the following steps:
step 3.1, splitting the optimal solution obtained in step 2 into the parameters of the DBN and inputting them into the DBN network for training to obtain a stable neural network model;
step 3.2, taking the 3 component values of the training samples in the L*a*b* color space as the neural network input and the values of the 4 components in the CMYK color space as the neural network output, and training the PSO-DBN network;
and 4, inputting the test sample into a conversion model to perform color conversion, calculating conversion errors, checking model accuracy and finishing color space conversion.
CN202011157124.2A 2020-10-26 2020-10-26 Color space conversion method based on deep neural network Active CN112270397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011157124.2A CN112270397B (en) 2020-10-26 2020-10-26 Color space conversion method based on deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011157124.2A CN112270397B (en) 2020-10-26 2020-10-26 Color space conversion method based on deep neural network

Publications (2)

Publication Number Publication Date
CN112270397A CN112270397A (en) 2021-01-26
CN112270397B true CN112270397B (en) 2024-02-20

Family

ID=74342437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011157124.2A Active CN112270397B (en) 2020-10-26 2020-10-26 Color space conversion method based on deep neural network

Country Status (1)

Country Link
CN (1) CN112270397B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113119447B (en) * 2021-03-19 2022-08-30 西安理工大学 Method for color space conversion of color 3D printing
CN113409206A (en) * 2021-06-11 2021-09-17 西安工程大学 High-precision digital printing color space conversion method

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102111626A (en) * 2009-12-23 2011-06-29 新奥特(北京)视频技术有限公司 Conversion method and device from red-green-blue (RGB) color space to cyan-magenta-yellow-black (CMYK) color space
CN102110428A (en) * 2009-12-23 2011-06-29 新奥特(北京)视频技术有限公司 Method and device for converting color space from CMYK to RGB
CN103383743A (en) * 2013-07-16 2013-11-06 南京信息工程大学 Chrominance space transformation method
CN103729695A (en) * 2014-01-06 2014-04-16 国家电网公司 Short-term power load forecasting method based on particle swarm and BP neural network
CN103729678A (en) * 2013-12-12 2014-04-16 中国科学院信息工程研究所 Navy detection method and system based on improved DBN model
WO2019101720A1 (en) * 2017-11-22 2019-05-31 Connaught Electronics Ltd. Methods for scene classification of an image in a driving support system
KR101993752B1 (en) * 2018-02-27 2019-06-27 연세대학교 산학협력단 Method and Apparatus for Matching Colors Using Neural Network
CN110475043A (en) * 2019-07-31 2019-11-19 西安工程大学 A kind of conversion method of CMYK to Lab color space

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Photovoltaic short-term power output forecasting based on a PSO-DBN neural network; Li Zhengming et al.; Power System Protection and Control; 2020-04-16; Vol. 48, No. 8; Introduction, Sections 2, 3.3 and 4.3 *
Zhang Leihong. Color Management and Practice. 2018, pp. 85-87. *
Wang Pan et al. Research on Soft Computing Methods in Optimization and Control. 2017, pp. 110-111. *

Similar Documents

Publication Publication Date Title
CN107563422B (en) A kind of polarization SAR classification method based on semi-supervised convolutional neural networks
CN112270397B (en) Color space conversion method based on deep neural network
CN108805200B (en) Optical remote sensing scene classification method and device based on depth twin residual error network
CN110475043B (en) Method for converting CMYK to Lab color space
CN112507793A (en) Ultra-short-term photovoltaic power prediction method
CN110349185B (en) RGBT target tracking model training method and device
CN109410917A (en) Voice data classification method based on modified capsule network
CN111160553B (en) Novel field self-adaptive learning method
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN114429219A (en) Long-tail heterogeneous data-oriented federal learning method
CN114897837A (en) Power inspection image defect detection method based on federal learning and self-adaptive difference
CN109801218B (en) Multispectral remote sensing image Pan-sharpening method based on multilayer coupling convolutional neural network
CN112905894B (en) Collaborative filtering recommendation method based on enhanced graph learning
CN112560603B (en) Underwater sound data set expansion method based on wavelet image
CN115409157A (en) Non-data knowledge distillation method based on student feedback
CN113225130A (en) Atmospheric turbulence equivalent phase screen prediction method based on machine learning
CN117035061A (en) Self-adaptive federal learning weight aggregation method
CN116343157A (en) Deep learning extraction method for road surface cracks
CN116933141B (en) Multispectral laser radar point cloud classification method based on multicore graph learning
CN110059718A (en) Fine granularity detection method based on the more attention mechanism of multiclass
CN116362328A (en) Federal learning heterogeneous model aggregation method based on fairness characteristic representation
CN113486929B (en) Rock slice image identification method based on residual shrinkage module and attention mechanism
CN113409206A (en) High-precision digital printing color space conversion method
CN116010832A (en) Federal clustering method, federal clustering device, central server, federal clustering system and electronic equipment
CN114463569A (en) Image matching method and system based on optimization adaptive metric learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant