CN109886874B - Super-resolution image reconstruction method and special acceleration circuit - Google Patents

Super-resolution image reconstruction method and special acceleration circuit

Info

Publication number
CN109886874B
CN109886874B (application CN201910095232.2A)
Authority
CN
China
Prior art keywords
layer
operation unit
convolution
deconvolution
configuration register
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910095232.2A
Other languages
Chinese (zh)
Other versions
CN109886874A (en)
Inventor
余宁梅
王永超
田典
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an University of Technology
Original Assignee
Xi'an University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an University of Technology
Priority to CN201910095232.2A
Publication of CN109886874A
Application granted
Publication of CN109886874B
Legal status: Active
Anticipated expiration: legal-status not listed

Landscapes

  • Image Processing (AREA)

Abstract

The super-resolution image reconstruction method comprises the following steps: step 1, the first layer is an input layer, and a low-resolution image DL is input; step 2, the second to sixth layers are convolutional layers, and the low-resolution image passes through their convolution operations to obtain a series of feature maps, which are nonlinearly mapped to form high-resolution image blocks; step 3, the seventh layer is a deconvolution layer, whose deconvolution operation further raises the resolution of the image blocks; step 4, the eighth layer outputs the reconstructed high-resolution image DH. The special acceleration circuit comprises a network training server connected with the network structure parameter storage unit of a portable image acquisition circuit; the network structure parameter storage unit is connected with the main memory of the super-resolution chip through a network structure parameter import control unit; and the data input end of the main memory is connected with the CMOS camera. The method has the characteristics of efficient operation and a simple algorithm.

Description

Super-resolution image reconstruction method and special acceleration circuit
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a super-resolution image reconstruction method and a special acceleration circuit.
Background
With rapid economic development, the demand for fast, portable health monitoring keeps growing, and portable medical image acquisition and processing equipment is a key link in meeting it. In the field of portable medical image acquisition and processing, development has long been hampered by the low detail resolution of the images acquired by portable devices: prior-art methods process the directly sampled images as-is, and the poor quality of those samples means the accuracy of further analysis and processing cannot be guaranteed.
Therefore, the present invention provides a circuit and a method for image super-resolution reconstruction, which reconstruct the low-resolution images acquired by a portable device so that the reconstructed images can be used for further content analysis and processing.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a super-resolution image reconstruction method and a special acceleration circuit, which have the characteristics of efficient operation and a simple algorithm.
In order to achieve the purpose, the invention adopts the technical scheme that: the super-resolution image reconstruction method comprises the following steps:
step 1, a first layer is an input layer, and a low-resolution image DL is input;
step 2, the second to sixth layers are convolutional layers; the low-resolution image passes through the convolution operations of these layers to obtain a series of feature maps, which are nonlinearly mapped to form high-resolution image blocks;
step 3, the seventh layer is a deconvolution layer, and the resolution of the image blocks is further improved through the deconvolution operation of this layer;
and 4, finally outputting a reconstructed high-resolution image DH by the eighth layer.
In the convolution operation described in step 2, the convolution layer is the core component of a convolutional neural network and has the properties of local connection and weight sharing. The convolution operation can be expressed by the following formula:
F_i = σ(W_{c(i)} * F_{i-1} + b_i),  (1)
where F_i denotes the output of the i-th convolutional layer, F_{i-1} denotes the output of the (i-1)-th convolutional layer (the output of one layer is the input of the next), and W_{c(i)} denotes the weights of the i-th convolutional layer; the convolution weights correspond to a filter bank of n filters of size f×f, where the values of n and f must be set for each layer. * denotes the convolution operation, and b_i denotes the bias of the i-th layer, whose dimension always equals the number of convolution kernels in that layer. σ denotes the activation function. Convolution extracts features well: through back propagation (BP) of the error, the parameters best suited to a given task are obtained, i.e., the convolution kernels optimal for that task are learned. The parameters to be set for each convolutional layer are the size of the convolution kernels, the number of kernels (Number), the stride of the convolution operation (stride), and the size of the zero padding (Pad).
The deconvolution operation described in step 3 implements the final reconstruction process with a deconvolution layer, which is equivalent to an upsampling operation; the sampling factor is adjusted through the stride of the deconvolution layer, and a relatively large convolution kernel is adopted to improve reconstruction quality. This process can be expressed by the following formula:
F = σ(W_d · F_5 + B),  (2)
where F denotes the output of the deconvolution layer, W_d denotes the weight parameters of the deconvolution layer, · denotes the deconvolution operation, F_5 denotes the output of the last convolutional layer, and B denotes the bias. The stride is adjusted according to the network sampling factor and is always larger than 1. If the image input to the deconvolution layer has size I, and the deconvolution layer parameters are kernel size R×R, stride s and padding size p, then the size of the output image after deconvolution is o = s(I-1) + R - 2p.  (4)
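As a quick numeric check of formula (4), with illustrative values not taken from the patent: an input of size I = 28 with kernel R = 9, stride s = 2 and padding p = 3 yields o = 2×27 + 9 − 6 = 57.

    def deconv_out_size(I, R, s, p):
        """Output size of a deconvolution (transposed convolution), formula (4)."""
        return s * (I - 1) + R - 2 * p

    assert deconv_out_size(28, 9, 2, 3) == 57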
A special acceleration circuit for the super-resolution image reconstruction method comprises a network training server; the network training server is connected with the network structure parameter storage unit of a portable image acquisition circuit; the network structure parameter storage unit is connected with the main memory of the super-resolution chip through a network structure parameter import control unit; and the data input end of the main memory is connected with the CMOS camera.
The portable image acquisition circuit comprises a super-resolution chip; the super-resolution chip comprises a main memory, whose output end is connected with the CPU and with the access control unit, and whose input end is connected with the write-number control unit. The output end of the CPU is connected with the first, second, third, fourth, fifth and sixth configuration registers respectively. The first configuration register is connected with the input and output ends of the access control unit through a first path selector. The access control unit is connected in sequence with the convolution operation unit, the activation operation unit, the deconvolution operation unit, the pooling operation unit and the write-number control unit. The second configuration register is connected with the convolution operation unit through a second path selector; the third configuration register is connected with the activation operation unit through a third path selector; the fourth configuration register is connected with the deconvolution operation unit through a fourth path selector; the fifth configuration register is connected with the pooling operation unit through a fifth path selector; and the sixth configuration register is connected with the write-number control unit through a sixth path selector.
The first, second, third, fourth, fifth and sixth configuration registers have the same structure; the first configuration register consists of a configuration register A and a configuration register B.
A through unit of identical structure is connected in parallel across each of the convolution operation unit, the activation operation unit, the deconvolution operation unit and the pooling operation unit.
The network training server is responsible for training the network structure parameters according to the learning samples and storing the network structure parameters generated by training in a network structure parameter storage unit;
the network structure parameter storage unit is responsible for storing the trained network structure parameters, including the length, width and number of channels of the weights (weight) of each network layer, and the value of each weight;
the network structure parameter importing control unit is responsible for storing the trained network structure parameters into the main memory;
the tissue sample is a biological tissue sample which is acquired and analyzed by the portable image acquisition and reconstruction circuit;
the CMOS camera is responsible for image acquisition of the tissue sample 22 and sending the acquired image to the main memory for storage.
The super-resolution chip is responsible for completing super-resolution image reconstruction and storing a reconstructed image in the main memory;
the CPU is responsible for reading and controlling the circuit of the whole super-resolution chip.
The main memory is used for storing characteristic data and convolution kernel data required by each layer of the neural network;
the access control unit is responsible for reading data from the memory according to the configuration register information and sending the read data to the convolution multiply-add operation unit;
the convolution multiply-add operation unit is responsible for carrying out convolution operation of the neural network and transmitting an operation result to the activation operation unit;
the activation operation unit is responsible for performing activation function operation of the neural network and transmitting an operation result to the deconvolution operation unit;
the deconvolution operation unit is responsible for performing deconvolution operation and sending a result to the pooling operation unit;
the pooling operation unit is responsible for pooling operation of the neural network and sending the pooling operation result to the writing number control unit.
The access control unit, the convolution multiply-add operation unit, the activation operation unit, the deconvolution operation unit, the pooling operation unit and the write-number control unit are each provided with two groups of configuration registers, A and B, and a path selector, so the configurations of two network layers can be stored separately; the specific working method is described in detail below.
the access control unit, the convolution multiply-add operation unit, the activation operation unit, the deconvolution operation unit and the pooling operation unit are all provided with a through unit, and the through unit is responsible for skipping the corresponding module by the data flow according to the configuration and directly reaching the next operation module.
Compared with the prior art, the invention has the beneficial effects that:
the invention provides an operation method and a circuit for image super-resolution reconstruction, wherein the circuit can efficiently realize the reconstruction algorithm, can rapidly reconstruct a low-resolution image acquired by portable equipment, and then further uses the reconstructed image for content analysis and processing.
In the algorithm, the CSRnet network operation method is provided, and compared with the classical algorithm, the method reduces the number of network layers and obtains the same algorithm effect, so that the method has the characteristics of simplicity and high efficiency.
Meanwhile, the circuit has reconfigurable characteristics by adopting the module direct-through configurable structure, and can adapt to the acceleration of various neural network structure operations.
Drawings
Fig. 1 is a schematic block diagram of a reconstruction circuit of the present invention.
Fig. 2 is a schematic diagram of the reconstruction method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples.
Referring to fig. 1, the image super-resolution reconstruction circuit comprises a network training server 1; the network training server 1 is connected with the network structure parameter storage unit 3 of the portable image acquisition circuit 2; the network structure parameter storage unit 3 is connected with the main memory 6 of the super-resolution chip 5 through the network structure parameter import control unit 4; and the data input of the main memory 6 is connected to the CMOS camera 7.
The portable image acquisition circuit 2 comprises a super-resolution chip 5; the super-resolution chip 5 comprises a main memory 6, whose output end is connected with the CPU 8 and with the access control unit 9, and whose input end is connected with the write-number control unit 10. The output end of the CPU 8 is connected with the first configuration register 11, the second configuration register 12, the third configuration register 13, the fourth configuration register 14, the fifth configuration register 15 and the sixth configuration register 16 respectively. The first configuration register 11 is connected with the input and output ends of the access control unit 9 through the first path selector. The access control unit 9 is connected in sequence with the convolution operation unit 17, the activation operation unit 18, the deconvolution operation unit 19, the pooling operation unit 20 and the write-number control unit 10. The second configuration register 12 is connected with the convolution operation unit 17 through the second path selector; the third configuration register 13 is connected with the activation operation unit 18 through the third path selector; the fourth configuration register 14 is connected with the deconvolution operation unit 19 through the fourth path selector; the fifth configuration register 15 is connected with the pooling operation unit 20 through the fifth path selector; and the sixth configuration register 16 is connected with the write-number control unit 10 through the sixth path selector.
The first configuration register 11, the second configuration register 12, the third configuration register 13, the fourth configuration register 14, the fifth configuration register 15 and the sixth configuration register 16 have the same structure; the first configuration register 11 consists of a configuration register A and a configuration register B.
A through unit 21 of identical structure is connected in parallel across each of the convolution operation unit 17, the activation operation unit 18, the deconvolution operation unit 19 and the pooling operation unit 20.
The network training server is responsible for training the network structure parameters according to the learning samples and storing the network structure parameters generated by training in a network structure parameter storage unit.
The network structure parameter storage unit is responsible for storing the trained network structure parameters, including the length, width and number of channels of the weights (weight) of each network layer, and the value of each weight.
The network structure parameter importing control unit is responsible for storing the trained network structure parameters into the main memory.
The tissue sample 22 is a biological tissue sample that is acquired and analyzed by the portable image acquisition and reconstruction circuit.
The CMOS camera is responsible for image acquisition of the tissue sample 22 and sending the acquired image to the main memory for storage.
The super-resolution chip is responsible for completing super-resolution image reconstruction and storing the reconstructed image in the main memory.
The super-resolution chip has the following internal structure:
the CPU is responsible for reading and controlling the circuit of the whole super-resolution chip.
The main memory is used for storing the characteristic data and convolution kernel data required by each layer of the neural network.
The access control unit is responsible for reading data from the memory according to the configuration register information and sending the read data to the convolution multiply-add operation unit.
The convolution multiply-add operation unit is responsible for carrying out convolution operation of the neural network and transmitting an operation result to the activation operation unit.
The activation operation unit is responsible for performing activation function operation of the neural network and sending an operation result to the deconvolution operation unit.
The deconvolution operation unit is responsible for performing deconvolution operation and sending a result to the pooling operation unit.
The pooling operation unit is responsible for pooling operation of the neural network and sending the pooling operation result to the writing number control unit.
The access control unit, the convolution multiply-add operation unit, the activation operation unit, the deconvolution operation unit, the pooling operation unit and the write-number control unit are each provided with two groups of configuration registers, A and B, and a path selector (also called a register gating control unit), which can store the configurations of two network layers separately.
The access control unit, the convolution multiply-add operation unit, the activation operation unit, the deconvolution operation unit and the pooling operation unit are each provided with a through unit; according to the configuration, the through unit lets the data flow skip the corresponding module and go directly to the next operation module.
Before the circuit starts working, the configurations of two consecutive network layers are written into configuration register A and configuration register B of each module; after the read-arbitration and write-arbitration priorities are then configured, the reconstruction circuit can start working.
Alternating workflow of the two groups of configuration registers (see the sketch after this list):
1) Before the circuit starts working, the configurations of the two network layers are written into configuration register A and configuration register B of each module respectively; the circuit can then start working, and the first network layer runs with the configuration in configuration register A;
2) When the access control unit finishes the work of the current layer, it first sends a layer-completion signal to its path selector and to the CPU; the path selector of the access control unit then immediately switches its register from configuration register A to configuration register B, i.e., to the configuration of the next network layer, so the access control unit can immediately start processing the next layer. Meanwhile, after receiving the layer-completion signal from the access control unit, the CPU updates the configuration in configuration register A to that of the third network layer;
3) Likewise, after the convolution multiply-add operation unit finishes the operation of the second network layer, it sends a layer-completion signal to its register gating control unit and to the CPU; its path selector then immediately switches the register of the convolution operation unit from configuration register B of the second layer to configuration register A, now holding the configuration of the third network layer, so the unit can immediately start processing the third layer. Meanwhile, after the CPU receives this layer-completion signal, it updates the configuration in configuration register B to that of the fourth network layer;
4) The activation operation unit, the deconvolution operation unit and the pooling operation unit all work in this sequence, so complete pipelined operation is achieved without any unit depending on the working state of the units before or after it.
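In software terms this alternation is double buffering: while a module runs layer n from one register, the CPU refills the other register with the configuration of layer n+2. A schematic Python sketch, with hypothetical names and a print standing in for the hardware signals:

    class PingPongModule:
        """One pipeline module with two config registers, A and B.
        It runs layer n from the active register while the CPU refills the
        other register with the configuration for layer n + 2."""
        def __init__(self, name, layer_configs):
            self.name = name
            self.configs = layer_configs          # per-layer configuration list
            self.regs = {"A": layer_configs[0],   # preloaded before start
                         "B": layer_configs[1]}
            self.active = "A"

        def run_layer(self, layer):
            cfg = self.regs[self.active]          # work from the active register
            print(f"{self.name}: layer {layer} with config {cfg}")
            # layer-completion signal -> path selector flips the active register
            done, self.active = self.active, ("B" if self.active == "A" else "A")
            # ...and the CPU refills the released register with layer n + 2
            if layer + 2 < len(self.configs):
                self.regs[done] = self.configs[layer + 2]

    mod = PingPongModule("conv", [f"cfg{i}" for i in range(8)])
    for layer in range(8):
        mod.run_layer(layer)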
The operation of the dedicated circuit includes two processes: a network training process and an actual use process.
The network training process is completed on the network training server and is a software procedure; the specific steps are given in the network training process below.
In actual use, the network structure parameter import control unit loads the network structure parameters into the main memory; the CPU then writes the configurations of two network layers into configuration register A and configuration register B of the access control unit, the convolution multiply-add operation unit, the activation operation unit, the deconvolution operation unit and the pooling operation unit; the layers are then processed in sequence according to the alternating workflow of the two groups of configuration registers, and finally the operation result is output.
The network training process comprises the following steps:
step 1, adopting the mean square error (MSE) as the cost function; the mean square error is minimized by using stochastic gradient descent and the back propagation of the network to adjust the network parameters. The weight update process (given as equation images in the original and reconstructed here from the symbol definitions) is:
Δ_{k+1} = m·Δ_k − η·∂L/∂W_k^l,    W_{k+1}^l = W_k^l + Δ_{k+1},    (3)
where Δ_k denotes the last weight update value, l denotes the layer index, k denotes the iteration index of the network, η is the learning rate, W_k^l denotes the weight of the l-th layer at the k-th iteration, ∂L/∂W_k^l denotes the partial derivative of the cost function with respect to that weight, and m is the momentum coefficient. The weights are randomly initialized from a Gaussian distribution with mean 0 and variance 0.001, and the model uses a fixed learning rate throughout training.
Step 2, continuously adjusting network parameters by minimizing the difference between the result obtained by reconstruction and the reference image,
Δ k representing the last weight update value, l representing the number of layers, k representing the number of iterations of the network, η being the learning rate,
Figure BDA0001964350040000111
represents the weight at the kth iteration of the η -th layer,
Figure BDA0001964350040000112
representing the partial derivation of the corresponding weight in the cost function, the weight adopts high distribution with the mean value of 0 and the variance of 0.001 to carry out random initialization, and the model adopts fixed learning rate in the training process.
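A toy NumPy sketch of this training loop under the stated settings (MSE cost, Gaussian initialization with variance 0.001, fixed learning rate); the single linear layer, the target and the momentum value 0.9 are assumptions for illustration only:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(0.0, np.sqrt(0.001), size=(16, 16))  # mean 0, variance 0.001
    delta = np.zeros_like(W)                            # Delta_k, last update value
    eta, m = 1e-3, 0.9                                  # fixed learning rate; assumed momentum

    def mse_grad(W, x, y):
        """Gradient of the MSE cost 0.5*||Wx - y||^2 with respect to W."""
        return np.outer(W @ x - y, x)

    for k in range(1000):                               # k: iteration index
        x = rng.normal(size=16)
        y = x[::-1]                                     # toy reference target
        delta = m * delta - eta * mse_grad(W, x, y)     # Delta_{k+1}, formula (3)
        W = W + delta                                   # W_{k+1} = W_k + Delta_{k+1}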
Referring to fig. 2, the super-resolution image reconstruction method (the algorithm is organized as a neural network) comprises the following steps:
step 1, a first layer is an input layer, and a low-resolution image DL is input;
step 2, the second to sixth layers are convolutional layers; the low-resolution image passes through the convolution operations of these layers to obtain a series of feature maps, which are nonlinearly mapped to form high-resolution image blocks;
step 3, the seventh layer is a deconvolution layer, and the resolution of the image blocks is further improved through the deconvolution operation of this layer;
and 4, finally outputting a reconstructed high-resolution image DH by the eighth layer.
Tables 1 and 2 below compare the classical FSRCNN with the structure of the present invention. Compared with the FSRCNN network, CSRNet removes two convolutional layers, C7 and C8; moreover, the convolution kernel size of the first layer in the present structure is 3x3 whereas FSRCNN's is 5x5. These two adjustments to the network structure greatly reduce the amount of computation of the neural network.
Comparison of FSRCNN with the CSRnet network structure of the present invention
Table 1: network architecture of FSRCNN (the table is given as an image in the original and is not reproduced here)
Table 2: network architecture proposed by the present invention (the table is given as an image in the original and is not reproduced here)
In the convolution operation, the convolution layer is the core component of the convolutional neural network and has the properties of local connection and weight sharing. The convolution operation of the invention can be expressed by the following formula:
F_i = σ(W_{c(i)} * F_{i-1} + b_i),  (1)
where F_i denotes the output of the i-th convolutional layer, F_{i-1} denotes the output of the (i-1)-th convolutional layer (the output of one layer is the input of the next), and W_{c(i)} denotes the weights of the i-th convolutional layer; the convolution weights correspond to a filter bank of n filters of size f×f, where the values of n and f must be set for each layer. * denotes the convolution operation, and b_i denotes the bias of the i-th layer, whose dimension always equals the number of convolution kernels in that layer. σ denotes the activation function. Convolution extracts features well: through back propagation (BP) of the error, the parameters best suited to a given task are obtained, i.e., the convolution kernels optimal for that task are learned. The parameters to be set for each convolutional layer are the size of the convolution kernels, the number of kernels (Number), the stride of the convolution operation (stride), and the size of the zero padding (Pad); a conventional multiply-add circuit implements these operations.
Activation function
PReLU is used as the activation function, with the expression σ(y) = max(0, y) + a·min(0, y), where a is a learnable slope coefficient for the negative part of y.
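A one-line NumPy sketch of this activation (illustrative; a scalar a is assumed here, although in practice a is often learned per channel):

    import numpy as np

    def prelu(y, a):
        """PReLU: sigma(y) = max(0, y) + a * min(0, y); a is learnable."""
        return np.maximum(0, y) + a * np.minimum(0, y)

    print(prelu(np.array([-2.0, 3.0]), a=0.25))  # -> [-0.5  3. ]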
the deconvolution operation is realized by a deconvolution layer, the deconvolution layer is equivalent to an up-sampling operation, the sampling factor is adjusted by adjusting the step length of the deconvolution layer, the reconstruction quality is improved by adopting a relatively large convolution kernel, and the process can be expressed by the following formula:
F=σ(W d ·F 5 +B),(2)
wherein F represents the output of the deconvolution layer, W d Weight parameters representing the deconvolution layer,. Represents the deconvolution operation, F 6 Representing the output of the last convolution layer, B representing bias, and the step length is correspondingly adjusted according to the size of the network sampling factor and is always larger than 1, assuming that the size of the image input into the deconvolution layer is I, and the deconvolution layer parameters are that the size RxR and the step length s of the kernel and the filling size is p, then the size of the output image after deconvolution is:
o=s(I-1)+R-2p,(4)。
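As a sketch of the upsampling behavior (not the patented circuit): a stride-s deconvolution can be computed by inserting s−1 zeros between input samples and then sliding the kernel; the 1-D example below, with illustrative values, reproduces the output size of formula (4):

    import numpy as np

    def deconv1d(x, kernel, s, p):
        """1-D transposed convolution: insert s-1 zeros between samples, then
        slide the kernel; the output length is s*(I-1) + R - 2p, formula (4)."""
        I, R = len(x), len(kernel)
        up = np.zeros(s * (I - 1) + 1)
        up[::s] = x                                  # zero-insertion upsampling
        up = np.pad(up, R - 1 - p)                   # full overlap minus padding p
        out = np.array([np.dot(up[i:i + R], kernel)
                        for i in range(len(up) - R + 1)])
        assert len(out) == s * (I - 1) + R - 2 * p   # matches formula (4)
        return out

    print(deconv1d(np.arange(4.0), np.ones(3), s=2, p=1))  # 7 output samples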

Claims (8)

1. The super-resolution image reconstruction method is characterized by comprising the following steps:
step 1, a first layer is an input layer, and a low-resolution image DL is input;
step 2, the second to sixth layers are convolutional layers; the low-resolution image passes through the convolution operations of these layers to obtain a series of feature maps, which are nonlinearly mapped to form high-resolution image blocks;
in the convolution operation described in step 2, the convolution layer is the core component of the convolutional neural network and has the properties of local connection and weight sharing; the convolution operation can be expressed by the following formula:
F_i = σ(W_{c(i)} * F_{i-1} + b_i),  (1)
where F_i denotes the output of the i-th convolutional layer, F_{i-1} denotes the output of the (i-1)-th convolutional layer (the output of one layer is the input of the next), and W_{c(i)} denotes the weights of the i-th convolutional layer; the convolution weights correspond to a filter bank of n filters of size f×f, where the values of n and f must be set for each layer; * denotes the convolution operation; b_i denotes the bias of the i-th layer, whose dimension always equals the number of convolution kernels in that layer; σ denotes the activation function; convolution extracts features well, and through back propagation (BP) of the error, the parameters best suited to a given task are obtained, i.e., the convolution kernels optimal for that task are learned; the parameters to be set for each convolutional layer are the size of the convolution kernels, the number of kernels (Number), the stride of the convolution operation (stride), and the size of the zero padding (Pad);
step 3, the seventh layer is a deconvolution layer, and the resolution of the image blocks is further improved through the deconvolution operation of this layer;
in the deconvolution operation described in step 3, the final reconstruction process is implemented with a deconvolution layer, which is equivalent to an upsampling operation; the sampling factor is adjusted through the stride of the deconvolution layer, and a relatively large convolution kernel is adopted to improve reconstruction quality; the process can be expressed by the following formula:
F = σ(W_d · F_5 + B),  (2)
where F denotes the output of the deconvolution layer, W_d denotes the weight parameters of the deconvolution layer, · denotes the deconvolution operation, F_5 denotes the output of the last convolutional layer, and B denotes the bias; the stride is adjusted according to the network sampling factor and is always larger than 1; if the image input to the deconvolution layer has size I and the deconvolution layer parameters are kernel size R×R, stride s and padding size p, then the size of the output image after deconvolution is o = s(I-1) + R - 2p,  (4);
and 4, finally outputting the reconstructed high-resolution image DH by the eighth layer.
2. The special accelerating circuit for the super-resolution image reconstruction method according to claim 1, characterized in that the special accelerating circuit comprises a network training server (1), and the network training server (1) is connected with a network structure parameter storage unit (3) of a portable image acquisition circuit (2); the network structure parameter storage unit (3) is connected with a main memory (6) of the super-resolution chip (5) through a network structure parameter import control unit (4); the data input end of the main memory (6) is connected with the CMOS camera (7).
3. The dedicated acceleration circuit for the super-resolution image reconstruction method according to claim 2, characterized in that the portable image acquisition circuit (2) comprises a super-resolution chip (5); the super-resolution chip (5) comprises a main memory (6), and the output end of the main memory (6) is connected with the CPU (8); the output end of the main memory (6) is connected with the access control unit (9); the input end of the main memory (6) is connected with the write-number control unit (10); the output end of the CPU (8) is respectively connected with a first configuration register (11), a second configuration register (12), a third configuration register (13), a fourth configuration register (14), a fifth configuration register (15) and a sixth configuration register (16); the first configuration register (11) is connected with the input and output ends of the access control unit (9) through the first path selector; the access control unit (9) is sequentially connected with the convolution operation unit (17), the activation operation unit (18), the deconvolution operation unit (19), the pooling operation unit (20) and the write-number control unit (10); the second configuration register (12) is connected with the convolution operation unit (17) through the second path selector; the third configuration register (13) is connected with the activation operation unit (18) through the third path selector; the fourth configuration register (14) is connected with the deconvolution operation unit (19) through the fourth path selector; the fifth configuration register (15) is connected with the pooling operation unit (20) through the fifth path selector; and the sixth configuration register (16) is connected with the write-number control unit (10) through the sixth path selector.
4. The special accelerating circuit for super-resolution image reconstruction method according to claim 2, wherein the configuration register one (11), the configuration register two (12), the configuration register three (13), the configuration register four (14), the configuration register five (15), and the configuration register six (16) have the same structure; the first configuration register (11) is composed of a configuration register A and a configuration register B.
5. The dedicated acceleration circuit for the super-resolution image reconstruction method according to claim 2, wherein a through unit (21) of identical structure is connected in parallel across each of the convolution operation unit (17), the activation operation unit (18), the deconvolution operation unit (19) and the pooling operation unit (20).
6. The dedicated accelerating circuit for super-resolution image reconstruction method according to claim 2, wherein the network training server is responsible for training the network structure parameters according to the learning samples and storing the network structure parameters generated by training in the network structure parameter storage unit;
the network structure parameter storage unit is responsible for storing the trained network structure parameters, including the length, width and number of channels of the weights (weight) of each network layer, and the value of each weight;
the network structure parameter importing control unit is responsible for storing the trained network structure parameters into the main memory;
the CMOS camera is responsible for image acquisition of the tissue sample 22 and sending the acquired image to the main memory for storage.
7. The dedicated accelerating circuit for super-resolution image reconstruction method according to claim 3, wherein the super-resolution chip is responsible for completing the super-resolution image reconstruction and storing the reconstructed image in the main memory;
the CPU is responsible for reading and controlling the circuit of the whole super-resolution chip;
the main memory is used for storing characteristic data and convolution kernel data required by each layer of the neural network;
the access control unit is responsible for reading data from the memory according to the configuration register information and sending the read data to the convolution multiply-add operation unit;
the convolution multiply-add operation unit is responsible for carrying out convolution operation of the neural network and transmitting an operation result to the activation operation unit;
the activation operation unit is responsible for performing activation function operation of the neural network and transmitting an operation result to the deconvolution operation unit;
the deconvolution operation unit is responsible for performing deconvolution operation and sending a result to the pooling operation unit;
the pooling operation unit is responsible for pooling operation of the neural network and sending the pooling operation result to the writing number control unit.
8. The special acceleration circuit for the super-resolution image reconstruction method of claim 3, wherein the access control unit, the convolution multiply-add operation unit, the activation operation unit, the deconvolution operation unit, the pooling operation unit and the write-number control unit are each provided with two sets of configuration registers, A and B, and a path selector;
the access control unit, the convolution multiply-add operation unit, the activation operation unit, the deconvolution operation unit and the pooling operation unit are each provided with a through unit; according to the configuration, the through unit lets the data flow skip the corresponding module and go directly to the next operation module.
CN201910095232.2A 2019-01-31 2019-01-31 Super-resolution image reconstruction method and special acceleration circuit Active CN109886874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910095232.2A CN109886874B (en) 2019-01-31 2019-01-31 Super-resolution image reconstruction method and special acceleration circuit

Publications (2)

Publication Number Publication Date
CN109886874A (en) 2019-06-14
CN109886874B (en) 2022-11-29

Family

ID=66927704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910095232.2A Active CN109886874B (en) 2019-01-31 2019-01-31 Super-resolution image reconstruction method and special acceleration circuit

Country Status (1)

Country Link
CN (1) CN109886874B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111353939B (en) * 2020-03-02 2023-10-27 中国科学院深圳先进技术研究院 Image super-resolution method based on multi-scale feature representation and weight sharing convolution layer

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017106998A1 (en) * 2015-12-21 2017-06-29 Sensetime Group Limited A method and a system for image processing
CN107240066A (en) * 2017-04-28 2017-10-10 天津大学 Image super-resolution rebuilding algorithm based on shallow-layer and deep layer convolutional neural networks
CN109118432A (en) * 2018-09-26 2019-01-01 福建帝视信息科技有限公司 A kind of image super-resolution rebuilding method based on Rapid Circulation convolutional network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Improved image super-resolution algorithm based on residual neural network; Wang Yining et al.; Journal of Computer Applications; 2018-01-10 (No. 01); entire document *
Super-resolution reconstruction of remote sensing images based on deep convolutional neural networks; Wang Aili et al.; Journal of Natural Science of Heilongjiang University; 2018-02-25 (No. 01); entire document *

Also Published As

Publication number Publication date
CN109886874A (en) 2019-06-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant