CN109903350A - Method for compressing image and relevant apparatus - Google Patents
- Publication number
- CN109903350A (application number CN201711289667.8A)
- Authority
- CN
- China
- Prior art keywords
- compression
- image
- neural network
- training
- original image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
An embodiment of the present application discloses an image compression method, including: obtaining an original image of a first resolution; compressing the original image based on a target model to obtain a compressed image of a second resolution; recognizing the compressed image based on a recognition neural network model to obtain reference label information; obtaining a loss function according to the target label information and the reference label information; when the loss function converges to a first threshold, or the current training count of the compression neural network is greater than or equal to a second threshold, obtaining a target original image of the first resolution and taking the target model as the compression neural network model obtained when training of the compression neural network is completed; and compressing the target original image based on the compression neural network model to obtain a target compressed image of the second resolution. The embodiments of the present application can improve the effectiveness of image compression and the accuracy of recognition.
Description
Technical field
This application relates to the field of image compression, and in particular to an image compression method and a related apparatus.
Background technique
With the arrival of the big data era, data grows at an astonishing speed, and massive data carrying information is transmitted between people. Images, as the visual basis of how humans perceive the world, are an important means by which humans obtain, express, and convey information.
In the prior art, image compression effectively reduces the data volume and improves the transmission rate of images. However, after an image is compressed, it is difficult to retain all the information of the original image. How to perform image compression therefore remains a technical problem to be solved by those skilled in the art.
Summary of the invention
The embodiments of the present application propose an image compression method and a related apparatus, which can be used to train a compression neural network for images, improving the effectiveness of image compression and the accuracy of recognition.
In a first aspect, an embodiment of the present application provides an image compression method, comprising:
obtaining an original image of a first resolution, the original image being any training image in a compression training image set for a compression neural network, with the label information of the original image serving as target label information;
compressing the original image based on a target model to obtain a compressed image of a second resolution, the second resolution being lower than the first resolution, and the target model being the current neural network model of the compression neural network;
recognizing the compressed image based on a recognition neural network model to obtain reference label information, the recognition neural network model being the neural network model obtained when training of the recognition neural network is completed;
obtaining a loss function according to the target label information and the reference label information;
when the loss function converges to a first threshold, or the current training count of the compression neural network is greater than or equal to a second threshold, obtaining a target original image of the first resolution, and taking the target model as the compression neural network model obtained when training of the compression neural network is completed;
compressing the target original image based on the compression neural network model to obtain a target compressed image of the second resolution.
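The training procedure of the first aspect can be sketched as a loop. This is a minimal illustrative sketch only: the model and all of the functions (compress, recognize, loss_fn, update) are toy stand-ins supplied by the caller, not the patent's actual compression or recognition networks.

```python
# A minimal sketch of the first-aspect training loop. The scalar "model"
# stands in for the compression network's parameters; compress, recognize,
# loss_fn, and update are caller-supplied toy functions (assumptions).

def train_compression_network(images, labels, compress, recognize,
                              loss_fn, update, first_threshold,
                              second_threshold):
    """Iterate until the loss reaches first_threshold or the training
    count reaches second_threshold; return the final model and count."""
    model = 0.0  # placeholder for the current network parameters
    step = 0
    for step, (image, target_label) in enumerate(zip(images, labels), 1):
        compressed = compress(model, image)       # second-resolution image
        reference_label = recognize(compressed)   # reference label info
        loss = loss_fn(target_label, reference_label)
        if loss <= first_threshold or step >= second_threshold:
            break                                 # training complete
        model = update(model, loss)               # otherwise keep training
    return model, step
```

The target model in force when the stopping condition fires is returned as the compression neural network model, matching the claim's "taking the target model as the compression neural network model".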
With reference to the first aspect, in a first possible implementation of the first aspect, the method further comprises:
when the loss function has not converged to the first threshold, or the current training count of the compression neural network is less than the second threshold, updating the target model according to the loss function to obtain an updated model, taking the updated model as the target model, taking the next training image as the original image, and returning to the step of obtaining an original image of a first resolution.
With reference to the first aspect or the first possible implementation of the first aspect, in a second possible implementation of the first aspect, recognizing the compressed image based on the recognition neural network model to obtain reference label information comprises:
preprocessing the compressed image to obtain an image to be recognized;
recognizing the image to be recognized based on the recognition neural network model to obtain the reference label information.
With reference to the second possible implementation of the first aspect, in a third possible implementation of the first aspect, the preprocessing includes size processing, and preprocessing the compressed image to obtain the image to be recognized comprises:
when the image size of the compressed image is smaller than the native image size of the recognition neural network, filling pixels into the compressed image according to the native image size to obtain the image to be recognized.
With reference to the first aspect or the first possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the compression training image set at least includes a recognition training image set, and the method further comprises:
training the recognition neural network using the recognition training image set to obtain the recognition neural network model, wherein each training image in the recognition training image set at least includes label information of a type consistent with the target label information.
With reference to the first aspect or the first possible implementation of the first aspect, in a fifth possible implementation of the first aspect, after compressing the target original image based on the compression neural network model to obtain the target compressed image of the second resolution, the method further comprises:
recognizing the target compressed image based on the recognition neural network model to obtain label information of the target original image, and storing the label information of the target original image.
With reference to the first aspect or the first possible implementation of the first aspect, in a sixth possible implementation of the first aspect, the compression training image set includes multiple dimensions, and compressing the original image based on the target model to obtain the compressed image of the second resolution comprises:
recognizing the original image based on the target model to obtain multiple pieces of image information, one piece of image information per dimension;
compressing the original image based on the target model and the multiple pieces of image information to obtain the compressed image.
In a second aspect, an embodiment of the present application provides an image compression apparatus, including a processor and a memory connected to the processor, wherein:
the memory is configured to store a first threshold, a second threshold, the current neural network model and training count of a compression neural network, a compression training image set and the label information of each training image in the compression training image set, a recognition neural network model, and a compression neural network model, with the current neural network model of the compression neural network serving as a target model, the compression neural network model being the target model obtained when training of the compression neural network is completed, and the recognition neural network model being the neural network model obtained when training of the recognition neural network is completed;
the processor is configured to: obtain an original image of a first resolution, the original image being any training image in the compression training image set, with the label information of the original image serving as target label information; compress the original image based on the target model to obtain a compressed image of a second resolution, the second resolution being lower than the first resolution; recognize the compressed image based on the recognition neural network model to obtain reference label information; obtain a loss function according to the target label information and the reference label information; when the loss function converges to the first threshold, or the training count is greater than or equal to the second threshold, obtain a target original image of the first resolution and confirm the target model as the compression neural network model; and compress the target original image based on the compression neural network model to obtain a target compressed image of the second resolution.
With reference to the second aspect, in a first possible implementation of the second aspect, the processor is further configured to: when the loss function has not converged to the first threshold, or the training count is less than the second threshold, update the target model according to the loss function to obtain an updated model, take the updated model as the target model, take the next training image as the original image, and return to the step of obtaining an original image of a first resolution.
With reference to the second aspect or the first possible implementation of the second aspect, in a second possible implementation of the second aspect, the processor is specifically configured to preprocess the compressed image to obtain an image to be recognized, and to recognize the image to be recognized based on the recognition neural network model to obtain the reference label information.
With reference to the second possible implementation of the second aspect, in a third possible implementation of the second aspect, the preprocessing includes size processing; the memory is further configured to store the native image size of the recognition neural network; and the processor is specifically configured to, when the image size of the compressed image is smaller than the native image size, fill pixels into the compressed image according to the native image size to obtain the image to be recognized.
With reference to the second aspect or the first possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the compression training image set at least includes a recognition training image set, and the processor is further configured to train the recognition neural network using the recognition training image set to obtain the recognition neural network model, wherein each training image in the recognition training image set at least includes label information of a type consistent with the target label information.
With reference to the second aspect or the first possible implementation of the second aspect, in a fifth possible implementation of the second aspect, the processor is further configured to recognize the target compressed image based on the recognition neural network model to obtain label information of the target original image, and the memory is further configured to store the label information of the target original image.
With reference to the second aspect or the first possible implementation of the second aspect, in a sixth possible implementation of the second aspect, the compression training image set includes multiple dimensions, and the processor is specifically configured to recognize the original image based on the target model to obtain multiple pieces of image information, one piece of image information per dimension, and to compress the original image based on the target model and the multiple pieces of image information to obtain the compressed image.
In a third aspect, an embodiment of the present application provides another electronic device, including a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for some or all of the steps described in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program including program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect.
With the above image compression method and related apparatus, a compressed image of the original image is obtained based on the target model; reference label information of the compressed image is obtained based on the recognition neural network model; a loss function is obtained according to the target label information included with the original image and the reference label information; and when the loss function converges to the first threshold, or the current training count of the compression neural network is greater than or equal to the second threshold, training of the compression neural network for image compression is complete, the target model is taken as the compression neural network model, and the target compressed image of the target original image can be obtained based on the compression neural network model. That is, the loss function is obtained from the reference label value produced by the trained recognition neural network model and the target label value included with the original image; training is complete when the loss function meets the preset condition or the current training count of the compression neural network exceeds the preset threshold; otherwise the compression neural network is trained repeatedly to adjust its weights, i.e., the image content represented by each pixel of the same image is adjusted, reducing the loss of the compression neural network and improving the effectiveness of image compression, thereby helping improve the accuracy of recognition.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Wherein:
Fig. 1 is an operation schematic diagram of a neural network provided by an embodiment of the present application;
Fig. 2 is a flow diagram of an image compression method provided by an embodiment of the present application;
Fig. 2A is a scenario schematic diagram of a size processing method provided by an embodiment of the present application;
Fig. 2B is a flow diagram of a single-layer neural network operation method provided by an embodiment of the present application;
Fig. 2C is a structural schematic diagram of a device for performing reverse training of a compression neural network provided by an embodiment of the present application;
Fig. 2D is a structural schematic diagram of an H-tree module provided by an embodiment of the present application;
Fig. 2E is a structural schematic diagram of a main operation module provided by an embodiment of the present application;
Fig. 2F is a structural schematic diagram of an operation module provided by an embodiment of the present application;
Fig. 2G is an example block diagram of reverse training of a compression neural network provided by an embodiment of the present application;
Fig. 3 is a flow diagram of an image compression method provided by an embodiment of the present application;
Fig. 4 is a structural schematic diagram of an electronic device provided by an embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of this application.
It should be understood that, when used in this specification and the appended claims, the terms "including" and "comprising" indicate the presence of the described features, wholes, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terms used in this specification are for the purpose of describing particular embodiments only and are not intended to limit the application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It will be further understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
The embodiments of the present application propose an image compression method and a related apparatus that can train a compression neural network for image compression, improving the effectiveness of image compression and the accuracy of recognition. The application is further described below in conjunction with specific embodiments and with reference to the accompanying drawings.
The input neurons and output neurons mentioned in the present invention do not mean the neurons in the input layer and output layer of the entire neural network. Rather, for any two adjacent layers in the network, the neurons in the lower layer of the network feed-forward operation are the input neurons, and the neurons in the upper layer of the network feed-forward operation are the output neurons. Taking a convolutional neural network as an example, suppose a convolutional neural network has L layers, with K = 1, 2, ..., L-1; for layer K and layer K+1, layer K is called the input layer and its neurons are the input neurons, while layer K+1 is called the output layer and its neurons are the output neurons. That is, except for the top layer, each layer can serve as an input layer, with the next layer as the corresponding output layer.
The operations mentioned above are all operations of a single layer of the neural network. For a multi-layer neural network, the process is as shown in Fig. 1, where the dashed arrows indicate the backward operation and the solid arrows indicate the forward operation. In the forward operation, after the forward operation of the previous layer of the artificial neural network is completed, the output neurons obtained from the previous layer are used as the input neurons of the next layer for computation (or certain operations are performed on those output neurons before they serve as the next layer's input neurons), and at the same time the weights are replaced with the next layer's weights. In the backward operation, after the backward operation of the previous layer of the artificial neural network is completed, the input-neuron gradients obtained from the previous layer are used as the output-neuron gradients of the next layer for computation (or certain operations are performed on those input-neuron gradients before they serve as the next layer's output-neuron gradients), while the weights are replaced with the next layer's weights.
The forward-propagation stage of the neural network corresponds to the forward operation and is the process from input data to output data; the back-propagation stage corresponds to the backward operation and is the process of propagating the error between the final result data and the desired output data backward through the network. By cycling forward propagation and back propagation, the weights of each layer are corrected by gradient descent on the error. This adjustment of the layer weights is the learning and training process of the neural network, and it reduces the output error of the network.
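This forward/backward cycle can be illustrated with a one-weight toy "network": forward pass, output error, gradient, weight correction. This is illustrative only; the patent does not specify the network architecture or the loss used.

```python
# A one-weight toy "network" illustrating one forward/backward cycle per
# step. Illustrative only: the real network and loss are unspecified here.

def train_neuron(x, target, w, lr, steps):
    """Repeat forward propagation and gradient-descent weight correction."""
    for _ in range(steps):
        y = w * x              # forward propagation: input -> output
        error = y - target     # error vs. the desired output data
        grad = error * x       # backward propagation: dE/dw for E = error^2 / 2
        w -= lr * grad         # weight correction by gradient descent
    return w
```

Each iteration reduces the output error, mirroring the weight-adjustment process described above.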
In this application, the types of training images included in the compression training image set for the compression neural network, and the number of training images of each type, are not limited. The more types, the larger the quantity, and the greater the number of training iterations, the lower the loss rate of image compression, which helps improve the accuracy of image recognition.
The compression training image set may include multiple dimensions, such as images from multiple angles, images under multiple light intensities, or images captured by multiple different types of image capture devices. Training the compression neural network with the compression training image sets corresponding to these different dimensions improves the effectiveness of image compression under different conditions and widens the scope of application of the image compression method.
Regarding the label information of the training images in the compression training image set, this application does not limit its specific content; it marks the image regions to be trained and can be used to detect whether the compression neural network has finished training. For example, in a driving image captured by road video surveillance, the label information is the target license plate information: the driving image is input to the compression neural network to obtain a compressed image, and the compressed image is recognized based on the recognition neural network model to obtain reference license plate information. If the reference license plate information matches the target license plate information, it can be determined that training of the compression neural network is complete; otherwise, when the current training count of the compression neural network is less than the preset threshold, the compression neural network still needs to be trained.
This application does not limit the type of the label information, which may be license plate information, face information, traffic sign information, object classification information, and so on.
The recognition neural network model involved in this application is the data obtained when training of the recognition neural network used for image recognition is completed. The training method of the recognition neural network is not limited: it may be trained using batch gradient descent (BGD), stochastic gradient descent (SGD), mini-batch gradient descent (mini-batch SGD), or the like, and one training cycle is completed by a single forward operation and a backward gradient propagation.
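The three gradient-descent variants named above differ only in how the training set is partitioned into batches per weight update; a sketch under that framing (the batch contents are illustrative):

```python
# Illustrative only: how a training set is split into per-update batches
# for the three gradient-descent variants (BGD, SGD, mini-batch SGD).

def make_batches(dataset, batch_size):
    """Partition a dataset into per-update batches.

    batch_size == len(dataset) -> batch gradient descent (BGD)
    batch_size == 1            -> stochastic gradient descent (SGD)
    anything in between        -> mini-batch gradient descent
    """
    return [dataset[i:i + batch_size]
            for i in range(0, len(dataset), batch_size)]
```

One forward operation plus one backward gradient propagation over each batch then constitutes one training cycle.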
Optionally, the recognition neural network is trained using the recognition training image set to obtain the recognition neural network model.
Each training image in the recognition training image set at least includes label information whose type is consistent with the target label information of each training image in the compression training image set. That is, the recognition neural network model can recognize the compressed images output by the compression neural network (whether still in training or fully trained).
For example, if the type of the label information of the compression training images is license plates, the type of the label information of the recognition training images at least includes license plates, ensuring that the recognition neural network model can recognize the compressed images output by the compression neural network and obtain license plate information.
Optionally, the compression training image set at least includes the recognition training image set.
Since the images in a training image set are limited by factors such as angle, light, or image capture device, training with the recognition training image set can improve the accuracy of the recognition neural network model, thereby improving the training efficiency of the compression neural network, i.e., helping improve the effectiveness of image compression.
Refer to Fig. 2, which is a flow diagram of an image compression method provided by an embodiment of the present application. As shown in Fig. 2, the method includes:
201: Obtain an original image of a first resolution.
The first resolution is the input resolution of the compression neural network, and the second resolution, which is lower than the first resolution, is the output resolution of the compression neural network. In other words, the compression ratio of an image input to the compression neural network (the ratio of the second resolution to the first resolution) is fixed; compressing different images based on the same compression neural network model yields images of the same compression ratio.
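The fixed ratio means the output resolution follows directly from the input resolution. A hypothetical helper illustrates this (in practice the ratio is determined by the trained network, not passed as a parameter):

```python
# Hypothetical helper: because the compression ratio (second resolution /
# first resolution) of a given model is fixed, the output resolution of
# any input image follows directly from that ratio.

def output_resolution(input_resolution, ratio):
    """Return the second (output) resolution for a fixed compression ratio."""
    w, h = input_resolution
    return (int(w * ratio), int(h * ratio))
```

Two different inputs to the same model are thus reduced by the same factor.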
The original image is any training image in the compression training image set of the compression neural network, and its label information is taken as the target label information. This application does not limit how the label information is obtained: it may be marked by manual recognition, or obtained by inputting the original image into the recognition neural network and recognizing it based on the recognition neural network model, among other options.
202: Compress the original image based on the target model to obtain a compressed image of a second resolution.
The target model is the current neural network model of the compression neural network, i.e., the current parameters of the compression neural network. Compressing, based on the target model, an original image whose resolution equals the input resolution of the compression neural network yields a compressed image whose resolution equals the output resolution of the compression neural network.
Optionally, compressing the original image based on the target model to obtain the compressed image of the second resolution includes: recognizing the original image based on the target model to obtain multiple pieces of image information, one piece of image information per dimension; and compressing the original image based on the target model and the multiple pieces of image information to obtain the compressed image.
If the training images include multiple dimensions, the original image is first recognized based on the target model to determine the image information corresponding to each dimension, and the original image is then compressed with respect to each piece of image information, improving the accuracy of image compression across different dimensions.
203: Recognize the compressed image based on the recognition neural network model to obtain reference label information.
This application does not limit the recognition method, which may include two parts, feature extraction and feature recognition, with the result of feature recognition serving as the reference label information. For example, the reference label information corresponding to a compressed driving image is the license plate number, and the reference label information corresponding to a compressed face image is the face recognition result.
Optionally, recognizing the compressed image based on the recognition neural network model to obtain reference label information includes: preprocessing the compressed image to obtain an image to be recognized; and recognizing the image to be recognized based on the recognition neural network model to obtain the reference label information.
Preprocessing includes but is not limited to one or more of the following: data format conversion (such as normalization, integer data conversion, etc.), data deduplication, data exception handling, missing-data filling, and so on. Preprocessing the compressed image can improve the efficiency and accuracy of image recognition.
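Two of the preprocessing steps listed above, missing-data filling and normalization, can be sketched as follows (8-bit pixel values and a fill value of 0 are assumptions; the patent does not fix either):

```python
# Sketch of two preprocessing steps: missing-data filling and
# normalization. Assumes 8-bit grayscale pixels and a fill value of 0.

def preprocess(pixels):
    """Fill missing pixel values, then normalize to the range [0, 1]."""
    filled = [0 if p is None else p for p in pixels]  # missing-data fill
    return [p / 255.0 for p in filled]                # normalization
```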
Likewise, obtaining the original image of the first resolution includes: receiving an input image; and preprocessing the input image to obtain the original image. Preprocessing the input image can improve the efficiency of image compression.
The preprocessing further includes size processing, since a neural network has a fixed size requirement: it can only process images whose size equals the neural network's native image size. Take the native image size of the compression neural network as the first native image size and the native image size of the recognition neural network as the second native image size; that is, the compression neural network requires an input image whose size equals the first native image size, and the recognition neural network requires an input image whose size equals the second native image size. The compression neural network can compress an image to be compressed that satisfies the first native image size to obtain a compressed image; the recognition neural network can recognize an image to be recognized that satisfies the second native image size to obtain reference label information.
The concrete mode that the application handles size is not construed as limiting, it may include the mode of cutting or filler pixels point,
Down-sampled method etc. can also be carried out to input picture in such a way that primary image size zooms in and out.
Here, cropping peripheral pixels means cutting away non-critical information regions around the image border. Down-sampling is the process of reducing the sample rate of a particular signal, for example averaging 4 neighbouring pixels and using the result as the value of one pixel at the corresponding position of the processed image, thereby reducing the image size.
Optionally, preprocessing the compression image to obtain the image to be recognized includes: when the image size of the compression image is smaller than the base image size of the recognition neural network, filling pixels into the compression image according to the base image size to obtain the image to be recognized.
This application does not limit the pixel used for filling; it may correspond to any colour mode, for example rgb(0,0,0). Nor is the specific position of the pixel filling limited; it may be any position outside the compression image. That is, the compression image itself is left unprocessed and the image is extended by filling pixels, so no deformation is introduced into the compression image, which helps improve the efficiency and accuracy of image recognition.
For example, as shown in Fig. 2A, the compression image is placed in the upper-left corner of the image to be recognized, and the positions of the image to be recognized outside the compression image are filled with pixels.
Likewise, preprocessing the input image to obtain the original image includes: when the image size of the input image is smaller than the first base image size of the compression neural network, filling pixels into the input image according to the first base image size to obtain the original image. Pixel filling allows the original image to be compressed and then recognized by the recognition neural network to obtain the reference label information, and pixel filling does not change the compression ratio of the input image, which helps improve the efficiency and accuracy of training the compression neural network.
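The pixel-filling step described above can be sketched as below; the function name and the use of 0 as the fill value (corresponding to rgb(0,0,0)) are illustrative assumptions, and the image is placed in the upper-left corner as in Fig. 2A:

```python
def pad_to_base_size(image, base_h, base_w, fill=0):
    """Place `image` in the upper-left corner of a base_h x base_w canvas
    and fill the remaining positions with `fill`. The image content itself
    is untouched, so no deformation is introduced."""
    h, w = len(image), len(image[0])
    assert h <= base_h and w <= base_w, "image already exceeds base size"
    out = []
    for i in range(base_h):
        if i < h:
            out.append(list(image[i]) + [fill] * (base_w - w))  # pad row
        else:
            out.append([fill] * base_w)                         # filler row
    return out
```

Because only filler pixels are appended, the compression ratio of the original content is unchanged, which is the property the text relies on.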
204: obtaining a loss function according to the target label information and the reference label information.
In this application, the loss function describes the magnitude of the error between the target label information and the reference label information. The label information includes multiple dimensions, and the loss is generally calculated using the squared-difference formula:

loss = Σ_{k=1}^{c} (y_k − t_k)²

where c is the number of dimensions of the label information, t_k is the k-th dimension of the reference label information, and y_k is the k-th dimension of the target label information.
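Under the definitions above, the squared-difference loss can be sketched as:

```python
def squared_difference_loss(target, reference):
    """Loss between target label information y and reference label
    information t, summed over the c dimensions: sum_k (y_k - t_k)**2."""
    assert len(target) == len(reference)
    return sum((y - t) ** 2 for y, t in zip(target, reference))
```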
205: judging whether the loss function converges to the first threshold, or whether the current training count of the compression neural network is greater than or equal to the second threshold; if so, executing step 206; if not, executing step 207.
In the training method of the compression neural network involved in this application, the training cycle corresponding to each training image consists of a single forward operation and a single backward gradient propagation. The threshold of the loss function is set as the first threshold, and the threshold of the training count of the compression neural network is set as the second threshold. That is, if the loss function converges to the first threshold, or the training count is greater than or equal to the second threshold, training of the compression neural network is complete, and the target model is taken as the compression neural network model corresponding to the completion of training of the compression neural network; otherwise, the backward propagation stage of the compression neural network is entered according to the loss function, the target model is updated according to the loss function, and training continues with the next training image, i.e. steps 202-205 are executed; when the above condition is satisfied, training ends and step 206 is executed.
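The stopping logic of steps 202-207 (loss converging to the first threshold, or the training count reaching the second threshold) can be sketched as below; `step_fn`, a placeholder for one forward pass plus one backward gradient propagation on a single training image, is an illustrative assumption:

```python
def train_compression_network(images, step_fn, loss_threshold, max_iters):
    """Control flow of the training loop: one training cycle per image,
    stopping once the loss reaches the first threshold or the training
    count reaches the second threshold. Returns the number of cycles run."""
    iters = 0
    for img in images:
        loss = step_fn(img)            # forward + backward on one image
        iters += 1
        if loss <= loss_threshold or iters >= max_iters:
            break                      # training is complete (step 206)
    return iters
```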
This application does not limit the backward training method of the compression neural network. Optionally, referring to Fig. 2B, a flowchart of a single-layer neural network operation method is provided; Fig. 2B may be applied to the device for executing backward training of the compression neural network whose structure is schematically shown in Fig. 2C.
As shown in Fig. 2C, the device includes an instruction cache unit 21, a controller unit 22, a direct memory access unit 23, an H tree module 24, a main computing module 25 and multiple slave computing modules 26; the device may be implemented by a hardware circuit (for example an application-specific integrated circuit, ASIC).
The instruction cache unit 21 reads in instructions through the direct memory access unit 23 and caches the instructions read in. The controller unit 22 reads instructions from the instruction cache unit 21 and translates them into micro-instructions that control the behaviour of the other modules, such as the direct memory access unit 23, the main computing module 25 and the slave computing modules 26. The direct memory access unit 23 can access the external address space and read and write data directly to each cache unit inside the device, completing the loading and storing of data.
Fig. 2D schematically shows the structure of the H tree module 24. As shown in Fig. 2D, the H tree module 24 forms the data path between the main computing module 25 and the multiple slave computing modules 26, and has an H-tree structure. The H tree is a binary tree path composed of multiple nodes: each node sends the upstream data identically to its two downstream nodes, merges the data returned by the two downstream nodes, and returns the result to the upstream node. For example, in the backward computation of the neural network, the vectors returned by the two downstream nodes are summed into one vector at the current node and returned to the upstream node. At the stage where each layer of the artificial neural network starts computing, the input gradient in the main computing module 25 is sent to each slave computing module 26 through the H tree module 24; after the computation of the slave computing modules 26 is complete, the partial sums of the output gradient vector output by the slave computing modules 26 are added pairwise, level by level, in the H tree module 24, i.e. all partial sums of the output gradient vector are summed to form the final output gradient vector.
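The pairwise, level-by-level summation performed inside the H tree can be sketched as follows; this is a software model only, standing in for the hardware binary-tree datapath the patent describes:

```python
def htree_reduce(partial_sums):
    """Pairwise, level-by-level addition of the partial output-gradient
    vectors returned by the slave modules: each upstream node sums the
    two vectors returned by its downstream nodes."""
    level = [list(v) for v in partial_sums]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            a, b = level[i], level[i + 1]
            nxt.append([x + y for x, y in zip(a, b)])
        if len(level) % 2:             # odd node passes through unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]                    # the final output gradient vector
```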
Fig. 2E schematically shows the structure of the main computing module 25. As shown in Fig. 2E, the main computing module 25 includes an arithmetic unit 251, a data dependence judging unit 252 and a neuron cache unit 253.
The neuron cache unit 253 caches the input data and output data used by the main computing module 25 in the computing process. The arithmetic unit 251 performs the various computing functions of the main computing module. The data dependence judging unit 252 is the port through which the arithmetic unit 251 reads and writes the neuron cache unit 253, and at the same time guarantees that there is no consistency conflict in reading and writing the data in the neuron cache unit 253. Specifically, the data dependence judging unit 252 judges whether there is a dependence between a micro-instruction that has not yet been issued and the data of a micro-instruction in the course of execution; if not, the micro-instruction is allowed to be issued immediately; otherwise, the micro-instruction is allowed to be issued only after all the micro-instructions on which it depends have finished executing. For example, all micro-instructions sent to the data dependence judging unit 252 are stored in an instruction queue inside the data dependence judging unit 252; in this queue, if the range of data read by a read instruction conflicts with the range of data written by a write instruction earlier in the queue, the read instruction can be executed only after the write instruction on which it depends has been executed. Meanwhile, the data dependence judging unit 252 is also responsible for reading the input gradient vector from the neuron cache unit 253 and sending it to the slave computing modules 26 through the H tree module 24, while the output data of the slave computing modules 26 is sent directly to the arithmetic unit 251 through the H tree module 24. The instructions output by the controller unit 22 are sent to the arithmetic unit 251 and the data dependence judging unit 252 to control their behaviour.
Fig. 2F schematically shows the structure of a slave computing module 26. As shown in Fig. 2F, each slave computing module 26 includes an arithmetic unit 261, a data dependence judging unit 262, a neuron cache unit 263, a weight cache unit 264 and a weight gradient cache unit 265.
The arithmetic unit 261 receives the micro-instructions issued by the controller unit 22 and performs arithmetic and logic operations.
The data dependence judging unit 262 is responsible for the read and write operations on the cache units in the computing process, and guarantees that there is no consistency conflict in reading and writing the cache units. Specifically, the data dependence judging unit 262 judges whether there is a dependence between a micro-instruction that has not yet been issued and the data of a micro-instruction in the course of execution; if not, the micro-instruction is allowed to be issued immediately; otherwise, the micro-instruction is allowed to be issued only after all the micro-instructions on which it depends have finished executing. For example, all micro-instructions sent to the data dependence judging unit 262 are stored in an instruction queue inside the data dependence judging unit 262; in this queue, if the range of data read by a read instruction conflicts with the range of data written by a write instruction earlier in the queue, the read instruction can be executed only after the write instruction on which it depends has been executed.
The neuron cache unit 263 caches the input gradient vector data and the partial sum of the output gradient vector computed by this slave computing module 26.
The weight cache unit 264 caches the weight vector needed by this slave computing module 26 in the computing process. Each slave computing module stores only the column of the weight matrix corresponding to that slave computing module 26.
The weight gradient cache unit 265 caches the weight gradient data needed by the corresponding slave computing module in the process of updating the weights. The weight gradient data stored by each slave computing module 26 corresponds to the weight vector it stores.
The slave computing modules 26 realize the first half of the backward-training computation of the output gradient vector for each layer of the artificial neural network, which can be parallelized, as well as the updating of the weights. Taking the fully connected layer of an artificial neural network (MLP) as an example, the process is out_gradient = w*in_gradient, where the multiplication of the weight matrix w and the input gradient vector in_gradient can be divided into unrelated parallel computing subtasks; out_gradient and in_gradient are column vectors, and each slave computing module computes only the product of the corresponding partial scalar elements of in_gradient and the corresponding column of the weight matrix w. Each output vector obtained is a partial sum of the final result to be accumulated, and these partial sums are added pairwise, level by level, in the H tree to obtain the final result. So the computing process becomes a parallel process of computing partial sums and a subsequent accumulation process. Each slave computing module 26 computes a partial sum of the output gradient vector, and all partial sums complete the summation operation in the H tree module 24 to obtain the final output gradient vector. At the same time, each slave computing module 26 multiplies the input gradient vector by the output value of each layer in the forward operation to compute the gradient of the weights, so as to update the weights stored in this slave computing module 26. Forward operation and backward training are the two main processes of a neural network algorithm: to train (update) the weights in the network, the neural network first needs to compute the forward output of the input vector in the network formed by the current weights (this is the forward process), and then to train (update) the weights of each layer backwards, layer by layer, according to the difference between the output value and the label value of the input vector itself. During the forward computation, the output vector of each layer and the derivative value of the activation function are saved; these data are required by the backward training process, so they are guaranteed to exist when backward training starts. The output value of each layer in the forward operation is data that already exists when the backward operation starts, and can be cached in the main computing module through the direct memory access unit and sent to the slave computing modules through the H tree. The main computing module 25 performs subsequent computation based on the output gradient vector, for example multiplying the output gradient vector by the derivative of the activation function in the forward operation to obtain the input gradient value of the next layer. The derivative of the activation function in the forward operation is likewise data that already exists when the backward operation starts, and can be cached in the main computing module through the direct memory access unit.
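The column-wise split of out_gradient = w*in_gradient across slave computing modules can be modelled as below; the plain Python accumulation stands in for the H tree summation, and the data layout (one weight-matrix column per slave) is taken from the text:

```python
def output_gradient(w_columns, in_gradient):
    """out_gradient = w * in_gradient, split as described: slave module i
    holds only column i of the weight matrix and computes the partial
    product in_gradient[i] * w[:, i]; the partials are then accumulated
    (here with a plain sum standing in for the H tree)."""
    partials = [[g * wc for wc in col]          # per-slave partial sum
                for g, col in zip(in_gradient, w_columns)]
    out = [0.0] * len(w_columns[0])
    for p in partials:                          # H-tree accumulation stand-in
        out = [a + b for a, b in zip(out, p)]
    return out
```

Each inner list in `partials` is exactly the "partial sum of the output gradient vector" one slave computing module would return.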
According to embodiments of the present invention, an instruction set for executing artificial neural network forward operation on the aforementioned device is also provided. The instruction set includes the CONFIG instruction, the COMPUTE instruction, the I/O instruction, the NOP instruction, the JUMP instruction and the MOVE instruction, in which:
the CONFIG instruction configures, before computation of each layer of the artificial neural network starts, the various constants needed by the computation of the current layer;
the COMPUTE instruction completes the arithmetic and logic computation of each layer of the artificial neural network;
the I/O instruction reads in, from the external address space, the input data needed by the computation, and stores the data back to the external space after the computation is complete;
the NOP instruction is responsible for emptying the micro-instructions currently filled in all internal micro-instruction buffer queues, guaranteeing that all instructions before the NOP instruction have finished executing; the NOP instruction itself does not include any operation;
the JUMP instruction is responsible for the jump of the next instruction address that the controller will read from the instruction cache unit, and is used to realize jumps in the control flow;
the MOVE instruction is responsible for carrying data at a certain address in the device's internal address space to another address in the device's internal address space; this process is independent of the arithmetic unit and does not occupy the resources of the arithmetic unit during execution.
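A minimal software model of dispatching this instruction set is sketched below; the handler bodies and state layout are illustrative assumptions, not the device's actual micro-architecture:

```python
def run(program):
    """Toy interpreter for the six-instruction set described above.
    Each program entry is an (opcode, argument) pair."""
    handlers = {
        "CONFIG":  lambda st, arg: st["config"].update(arg),   # per-layer constants
        "COMPUTE": lambda st, arg: st["results"].append(arg),  # layer computation
        "IO":      lambda st, arg: st["io"].append(arg),       # external loads/stores
        "NOP":     lambda st, arg: None,                       # drains pending micro-ops
        "MOVE":    lambda st, arg: st["mem"].update({arg[1]: st["mem"].get(arg[0])}),
    }
    state = {"config": {}, "results": [], "io": [], "mem": {}, "pc": 0}
    while state["pc"] < len(program):
        op, arg = program[state["pc"]]
        if op == "JUMP":               # control-flow jump to a new address
            state["pc"] = arg
            continue
        handlers[op](state, arg)
        state["pc"] += 1
    return state
```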
Fig. 2G is an example block diagram of backward training of the compression neural network provided by an embodiment of this application. The process of computing the output gradient vector is out_gradient = w*in_gradient, where the matrix-vector multiplication of the weight matrix w and the input gradient vector in_gradient can be divided into unrelated parallel computing subtasks: each slave computing module 26 computes a partial sum of the output gradient vector, and all partial sums complete the summation operation in the H tree module 24 to obtain the final output gradient vector. In Fig. 2G, the output gradient vector of the upper layer, input gradient, is multiplied by the corresponding activation function derivative to obtain the input data of this layer, which is then multiplied by the weight matrix to obtain the output gradient vector. The process of computing the weight-update gradient is dw = x*in_gradient, where each slave computing module 26 computes the update gradient of the part of the weights corresponding to that module. The slave computing module 26 multiplies the input gradient by the input neuron of the forward operation to compute the weight-update gradient dw, and then updates the weight w using w, dw and the weight-update gradient dw' used the last time the weights were updated, according to the learning rate set by the instruction.
Referring to Fig. 2G, the input gradient ([input gradient0, ..., input gradient3] in Fig. 2G) is the output gradient vector of the (n+1)-th layer. This vector first has to be multiplied by the derivative values of the n-th layer in the forward operation ([f'(out0), ..., f'(out3)] in Fig. 2G) to obtain the input gradient vector of the n-th layer; this process is completed in the main computing module 25, and the result is sent to the slave computing modules 26 through the H tree module 24 and temporarily stored in the neuron cache units 263 of the slave computing modules 26. Then the input gradient vector is multiplied by the weight matrix to obtain the output gradient vector of the n-th layer. In this process, the i-th slave computing module computes the product of the i-th scalar in the input gradient vector and the column vector [w_i0, ..., w_iN] in the weight matrix, and the resulting output vectors are added pairwise, level by level, in the H tree module 24 to obtain the final output gradient vector, output gradient ([output gradient0, ..., output gradient3] in Fig. 2G).
Meanwhile also needing to update the weight stored in this module from computing module 26, calculate the process of right value update gradient
For dw_ij=x_j*in_gradient_i, when wherein x_j is forward operation the input (i.e. (n-1)th layer of output) of n-th layer to
J-th of element of amount, in_gradient_i are input gradient vector (the i.e. input in Fig. 2 G of reversed operation n-th layer
The product of gradient and derivative f ') i-th of element.When forward operation the input of n-th layer be when reverse train starts just
Existing data are sent to from computing module 26 by H tree module 24 and are temporarily stored in neuron cache unit 263.Then, from fortune
It calculates in module 26, after the calculating for completing output gradient vector part sum, by i-th of scalar sum forward operation of input gradient vector
The input vector of n-th layer is multiplied, and obtains updating the gradient vector dw of weight and updates weight accordingly.
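The weight-update step dw_ij = x_j*in_gradient_i followed by the update of w can be sketched as below; how dw is combined with the previous update gradient dw' is not specified in the text, so a simple momentum-style blend is assumed here for illustration:

```python
def update_weights(w, x, in_gradient, dw_prev, lr, momentum=0.0):
    """Compute dw_ij = x_j * in_gradient_i (outer product of the layer
    input from the forward pass and the input gradient), then update the
    weights with the learning rate set by the CONFIG instruction.
    `momentum` blending with dw_prev (dw') is an illustrative assumption."""
    dw = [[gi * xj for xj in x] for gi in in_gradient]   # dw_ij = x_j * g_i
    new_w = [[w[i][j] - lr * (dw[i][j] + momentum * dw_prev[i][j])
              for j in range(len(x))]
             for i in range(len(in_gradient))]
    return new_w, dw
```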
As shown in Fig. 2B, an I/O instruction is pre-stored at the first address of the instruction cache unit. The controller unit reads this I/O instruction from the first address of the instruction cache unit; according to the translated micro-instruction, the direct memory access unit reads from the external address space all instructions related to the single-layer artificial neural network backward training and caches them in the instruction cache unit. The controller unit then reads in the next I/O instruction from the instruction cache unit; according to the translated micro-instruction, the direct memory access unit reads all the data needed by the main computing module from the external address space into the neuron cache unit of the main computing module; the data include the input neurons and activation function derivative values of the earlier forward operation, as well as the input gradient vector. The controller unit then reads in the next I/O instruction from the instruction cache unit; according to the translated micro-instruction, the direct memory access unit reads from the external address space all the weight data and weight gradient data needed by the slave computing modules, and stores them respectively into the weight cache units and weight gradient cache units of the corresponding slave computing modules. The controller unit then reads in the next CONFIG instruction from the instruction cache unit; according to the translated micro-instruction, the arithmetic unit configures the values of its internal registers, including the various constants needed by the computation of this neural network layer, the precision setting of this layer's computation, the learning rate used when updating the weights, etc. The controller unit then reads in the next COMPUTE instruction from the instruction cache unit; according to the translated micro-instruction, the main computing module sends the input gradient vector and the input neurons of the forward operation to each slave computing module through the H tree module, and the input gradient vector and the input neurons of the forward operation are stored in the neuron cache units of the slave computing modules. According to the micro-instruction translated from the COMPUTE instruction, the arithmetic units of the slave computing modules read the weight vectors (i.e. the partial columns of the weight matrix stored by each slave computing module) from the weight cache units, complete the vector-times-scalar operation of the weight vector and the input gradient vector, and return the partial sums of the output vector through the H tree; at the same time, each slave computing module multiplies the input gradient vector by the input neurons to obtain the weight gradients, which are stored in the weight gradient cache units. In the H tree module, the partial output gradients returned by the slave computing modules are added pairwise, level by level, to obtain the complete output gradient vector. The main computing module obtains the return value of the H tree module; according to the micro-instruction translated from the COMPUTE instruction, it reads the activation function derivative values of the forward operation from the neuron cache unit, multiplies the derivative values by the returned output vector to obtain the input gradient vector for the backward training of the next layer, and writes it back to the neuron cache unit. The controller unit then reads in the next COMPUTE instruction from the instruction cache unit; according to the translated micro-instruction, the slave computing modules read the weight w from the weight cache unit, read the current weight gradient dw and the weight gradient dw' used in the last weight update from the weight gradient cache unit, and update the weight w. The controller unit then reads in the next I/O instruction from the instruction cache unit; according to the translated micro-instruction, the direct memory access unit stores the output gradient vector in the neuron cache unit to a designated address in the external address space, and the operation ends.
For a multi-layer artificial neural network, the realization process is similar to that of a single-layer neural network. After the previous layer of the artificial neural network has finished executing, the operation instruction of the next layer takes the output gradient vector computed in the main computing module as the input gradient vector for the training of the next layer and performs the computing process as above; the weight addresses and weight gradient addresses in the instructions are also changed to the addresses corresponding to this layer.
By using the device for executing neural network backward training, support for multi-layer artificial neural network operations is effectively improved. The dedicated on-chip caches for multi-layer neural network backward training fully exploit the reusability of the input neurons and weight data, avoiding repeated reads of these data from memory, reducing the memory access bandwidth, and avoiding the problem of memory bandwidth becoming a performance bottleneck of multi-layer artificial neural network operations.
206: obtaining a target original image of the first resolution, and compressing the target original image based on the compression neural network model to obtain a target compression image of the second resolution.
Here, the target original image is an image whose type is consistent with that of the label information of the training images (i.e. an image belonging to the same data set). If the loss function converges to the first threshold, or the training count is greater than or equal to the second threshold, the compression neural network has completed training; the image can be directly input into the compression neural network for image compression to obtain the target compression image, and the target compression image can be recognized by the recognition neural network.
Optionally, after compressing the target original image based on the compression neural network model to obtain the target compression image of the second resolution, the method further includes: recognizing the target compression image based on the recognition neural network model to obtain the label information of the target original image, and storing the label information of the target original image.
That is, after training of the compression neural network is complete, the compression image can be recognized based on the recognition neural network model, improving efficiency and accuracy compared with manually identifying the label information.
207: updating the target model according to the loss function to obtain an updated model, taking the updated model as the target model and the next training image as the original image, and executing step 202.
It can be understood that the loss function is obtained from the reference label value produced by the trained recognition neural network model and the target label value included in the original image; training is complete when the loss function meets the preset condition or the current training count of the compression neural network exceeds the preset threshold; otherwise the weights of the compression neural network are adjusted through repeated training, i.e. the image content represented by each pixel in the same image is adjusted, reducing the loss of the compression neural network. Performing image compression with the trained compression neural network model improves the effectiveness of image compression, which in turn helps improve the accuracy of recognition.
Referring to Fig. 3, Fig. 3 is a structural schematic diagram of an image compression device provided by an embodiment of this application. As shown in Fig. 3, the device 300 includes a processor 301 and a memory 302 connected to the processor 301.
In this embodiment, the memory 302 is configured to store the first threshold, the second threshold, the current neural network model and training count of the compression neural network, the compression training image set of the compression neural network and the label information of each training image in the compression training image set, the recognition neural network model, and the compression neural network model. The current neural network model of the compression neural network serves as the target model; the compression neural network model is the target model corresponding to the completion of training of the compression neural network, and the recognition neural network model is the neural network model corresponding to the completion of training of the recognition neural network.
The processor 301 is configured to: obtain an original image of a first resolution, the original image being any training image in the compression training image set, with the label information of the original image as the target label information; compress the original image based on the target model to obtain a compression image of a second resolution, the second resolution being lower than the first resolution; recognize the compression image based on the recognition neural network model to obtain reference label information; obtain a loss function according to the target label information and the reference label information; when the loss function converges to the first threshold, or the training count is greater than or equal to the second threshold, obtain a target original image of the first resolution and confirm the target model as the compression neural network model; and compress the target original image based on the compression neural network model to obtain a target compression image of the second resolution.
Optionally, the processor 301 is further configured to: when the loss function does not converge to the first threshold or the training count is less than the second threshold, update the target model according to the loss function to obtain an updated model, take the updated model as the target model and the next training image as the original image, and execute the step of obtaining the original image of the first resolution.
Optionally, the processor 301 is specifically configured to preprocess the compression image to obtain an image to be recognized, and to recognize the image to be recognized based on the recognition neural network model to obtain the reference label information.
Optionally, the preprocessing includes size processing; the memory 302 is further configured to store the base image size of the recognition neural network; the processor 301 is specifically configured to, when the image size of the compression image is smaller than the base image size, fill pixels into the compression image according to the base image size to obtain the image to be recognized.
Optionally, the compression training image set includes at least a recognition training image set; the processor 301 is further configured to train the recognition neural network with the recognition training image set to obtain the recognition neural network model; each training image in the recognition training image set includes at least label information whose type is consistent with the target label information.
Optionally, the processor 301 is further configured to recognize the target compression image based on the recognition neural network model to obtain the label information of the target original image; the memory 302 is further configured to store the label information of the target original image.
Optionally, the compression training image set includes multiple dimensions; the processor 301 is specifically configured to recognize the original image based on the target model to obtain multiple pieces of image information, one piece of image information per dimension, and to compress the original image based on the target model and the multiple pieces of image information to obtain the compression image.
It can be understood that the compression image of the original image is obtained based on the target model, the reference label information of the compression image is obtained based on the recognition neural network model, and the loss function is obtained according to the target label information included in the original image and the reference label information. When the loss function converges to the first threshold, or the current training count of the compression neural network is greater than or equal to the second threshold, training of the compression neural network used for image compression is complete, and the target model is taken as the compression neural network model; the target compression image of the target original image can then be obtained based on the compression neural network model. That is, the loss function is obtained from the reference label value produced by the trained recognition neural network model and the target label value included in the original image; training is complete when the loss function meets the preset condition or the current training count of the compression neural network exceeds the preset threshold; otherwise the weights of the compression neural network are adjusted through repeated training, i.e. the image content represented by each pixel in the same image is adjusted, reducing the loss of the compression neural network, improving the effectiveness of image compression, and in turn helping improve the accuracy of recognition.
In one embodiment, this application discloses an electronic device including the above image compression device.
In one embodiment, this application discloses an electronic device. As shown in Fig. 4, the electronic device 400 includes a processor 401, a memory 402, a communication interface 403 and one or more programs 404, where the one or more programs 404 are stored in the memory 402 and configured to be executed by the processor 401, and the programs 404 include instructions for executing some or all of the steps described in the above image compression method.
The above electronic device includes but is not limited to a robot, a computer, a printer, a scanner, a tablet computer, an intelligent terminal, a mobile phone, a driving recorder, a navigator, a sensor, a camera, a cloud server, a video camera, a projector, a watch, an earphone, a mobile storage device, a wearable device, a vehicle, a household appliance and a medical device.
The vehicle includes an aircraft, a ship and/or a car; the household appliance includes a television, air conditioner, microwave oven, refrigerator, rice cooker, humidifier, washing machine, electric lamp, gas stove, or range hood; the medical device includes a nuclear magnetic resonance instrument, a B-ultrasound machine and/or an electrocardiograph.
The present application can be used in numerous general-purpose or special-purpose computing system environments or configurations, such as: personal computers, server computers, handheld or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronic devices, network PCs (personal computers, PCs), minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
Another embodiment of the present invention provides a computer-readable storage medium storing a computer program. The computer program includes program instructions which, when executed by a processor, cause the processor to perform the implementation described in the image compression method.
Those of ordinary skill in the art may appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of the present invention.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the terminal and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a logical function division, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through certain interfaces, devices or units, and may also be an electrical, mechanical or other form of connection.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present invention.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may physically exist alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods in the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a mobile hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or any other medium that can store program code.
It should be noted that implementations not shown or described in the drawings or in the text of the specification are forms known to those of ordinary skill in the art and are not described in detail. In addition, the above definitions of the elements and methods are not limited to the specific structures, shapes or modes mentioned in the embodiments, which those of ordinary skill in the art may simply modify or replace.
The specific embodiments described above further explain the purpose, technical solutions and beneficial effects of the present application in detail. It should be understood that the above are merely specific embodiments of the present application and are not intended to limit the present application; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall fall within the protection scope of the present application.
Claims (16)
1. An image compression method, comprising:
obtaining an original image of a first resolution, the original image being any training image in a compression training set of a compression neural network, and taking label information of the original image as target label information;
compressing the original image based on a target model to obtain a compressed image of a second resolution, the second resolution being lower than the first resolution, and the target model being the current neural network model of the compression neural network;
recognizing the compressed image based on a recognition neural network model to obtain reference label information, the recognition neural network model being the neural network model obtained when training of a recognition neural network is completed;
obtaining a loss function according to the target label information and the reference label information;
when the loss function converges to a first threshold or a current training count of the compression neural network is greater than or equal to a second threshold, obtaining a target original image of the first resolution, and taking the target model as the compression neural network model obtained when training of the compression neural network is completed;
compressing the target original image based on the compression neural network model to obtain a target compressed image of the second resolution.
2. The method according to claim 1, further comprising:
when the loss function does not converge to the first threshold or the current training count of the compression neural network is less than the second threshold, updating the target model according to the loss function to obtain an updated model, taking the updated model as the target model and a next training image as the original image, and performing the step of obtaining an original image of a first resolution.
3. The method according to claim 1 or 2, wherein the recognizing the compressed image based on the recognition neural network model to obtain reference label information comprises:
preprocessing the compressed image to obtain an image to be recognized;
recognizing the image to be recognized based on the recognition neural network model to obtain the reference label information.
4. The method according to claim 3, wherein the preprocessing comprises size processing, and the preprocessing the compressed image to obtain an image to be recognized comprises:
when the image size of the compressed image is smaller than a base image size of the recognition neural network, filling the compressed image with pixels according to the base image size to obtain the image to be recognized.
5. The method according to claim 1 or 2, wherein the compression training set includes at least a recognition training set, and the method further comprises:
training the recognition neural network using the recognition training set to obtain the recognition neural network model, each training image in the recognition training set including at least label information of the same type as the target label information.
6. The method according to claim 1 or 2, wherein after the compressing the target original image based on the compression neural network model to obtain the target compressed image of the second resolution, the method further comprises:
recognizing the target compressed image based on the recognition neural network model to obtain label information of the target original image, and storing the label information of the target original image.
7. The method according to claim 1 or 2, wherein the compression training set includes multiple dimensions, and the compressing the original image based on the target model to obtain the compressed image of the second resolution comprises:
recognizing the original image based on the target model to obtain multiple pieces of image information, each dimension corresponding to one piece of image information;
compressing the original image based on the target model and the multiple pieces of image information to obtain the compressed image.
8. An image compression apparatus, comprising a processor and a memory connected to the processor, wherein:
the memory is configured to store a first threshold, a second threshold, a current neural network model and training count of a compression neural network, a compression training set of the compression neural network and label information of each training image in the compression training set, a recognition neural network model, and a compression neural network model, the current neural network model of the compression neural network being taken as a target model, the compression neural network model being the target model obtained when training of the compression neural network is completed, and the recognition neural network model being the neural network model obtained when training of a recognition neural network is completed;
the processor is configured to obtain an original image of a first resolution, the original image being any training image in the compression training set, and take label information of the original image as target label information; compress the original image based on the target model to obtain a compressed image of a second resolution, the second resolution being lower than the first resolution; recognize the compressed image based on the recognition neural network model to obtain reference label information; obtain a loss function according to the target label information and the reference label information; when the loss function converges to the first threshold or the training count is greater than or equal to the second threshold, obtain a target original image of the first resolution and confirm the target model as the compression neural network model; and compress the target original image based on the compression neural network model to obtain a target compressed image of the second resolution.
9. The apparatus according to claim 8, wherein the processor is further configured to, when the loss function does not converge to the first threshold or the training count is less than the second threshold, update the target model according to the loss function to obtain an updated model, take the updated model as the target model and a next training image as the original image, and perform the step of obtaining an original image of a first resolution.
10. The apparatus according to claim 8 or 9, wherein the processor is specifically configured to preprocess the compressed image to obtain an image to be recognized, and to recognize the image to be recognized based on the recognition neural network model to obtain the reference label information.
11. The apparatus according to claim 10, wherein the preprocessing comprises size processing;
the memory is further configured to store a base image size of the recognition neural network;
the processor is specifically configured to, when the image size of the compressed image is smaller than the base image size, fill the compressed image with pixels according to the base image size to obtain the image to be recognized.
12. The apparatus according to claim 8 or 9, wherein the compression training set includes at least a recognition training set, and the processor is further configured to train the recognition neural network using the recognition training set to obtain the recognition neural network model, each training image in the recognition training set including at least label information of the same type as the target label information.
13. The apparatus according to claim 8 or 9, wherein the processor is further configured to recognize the target compressed image based on the recognition neural network model to obtain label information of the target original image;
the memory is further configured to store the label information of the target original image.
14. The apparatus according to claim 8 or 9, wherein the compression training set includes multiple dimensions, and the processor is specifically configured to recognize the original image based on the target model to obtain multiple pieces of image information, each dimension corresponding to one piece of image information, and to compress the original image based on the target model and the multiple pieces of image information to obtain the compressed image.
15. An electronic device, comprising a processor, a memory, a communication interface and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps in the method of any one of claims 1-7.
16. A computer-readable storage medium storing a computer program, the computer program including program instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1-7.
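The size-processing step of claims 4 and 11, padding a compressed image with pixels up to the base image size expected by the recognition network, can be sketched as follows. This is an illustrative assumption of one possible implementation: the row-major list-of-lists image representation and the zero fill value are not specified by the claims.

```python
def pad_to_base_size(image, base_height, base_width, fill=0):
    """Pad a smaller image (a list of pixel rows) with fill pixels so that
    it matches the base image size expected by the recognition network."""
    height = len(image)
    width = len(image[0]) if height else 0
    if height >= base_height and width >= base_width:
        return image  # already at least the base size; no padding needed
    padded = []
    for r in range(base_height):
        # Copy an existing row, or start an empty one past the bottom edge.
        row = list(image[r]) if r < height else []
        # Extend the row on the right with fill pixels up to the base width.
        row.extend([fill] * (base_width - len(row)))
        padded.append(row)
    return padded
```

For example, a 2x2 compressed image padded to a 3x4 base size keeps its original pixels in the top-left corner and is filled with zeros elsewhere.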
Priority Applications (11)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711289667.8A CN109903350B (en) | 2017-12-07 | 2017-12-07 | Image compression method and related device |
EP18868807.1A EP3627397B1 (en) | 2017-10-20 | 2018-07-13 | Processing method and apparatus |
KR1020197037574A KR102434729B1 (en) | 2017-10-20 | 2018-07-13 | Processing method and apparatus |
EP19215858.2A EP3667569A1 (en) | 2017-10-20 | 2018-07-13 | Processing method and device, operation method and device |
KR1020197037566A KR102434728B1 (en) | 2017-10-20 | 2018-07-13 | Processing method and apparatus |
KR1020197023878A KR102434726B1 (en) | 2017-10-20 | 2018-07-13 | Treatment method and device |
US16/482,710 US11593658B2 (en) | 2017-10-20 | 2018-07-13 | Processing method and device |
EP19215860.8A EP3660706B1 (en) | 2017-10-20 | 2018-07-13 | Convolutional operation device and method |
EP19215859.0A EP3660628B1 (en) | 2017-10-20 | 2018-07-13 | Dynamic voltage frequency scaling device and method |
PCT/CN2018/095548 WO2019076095A1 (en) | 2017-10-20 | 2018-07-13 | Processing method and apparatus |
US16/529,041 US10540574B2 (en) | 2017-12-07 | 2019-08-01 | Image compression method and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711289667.8A CN109903350B (en) | 2017-12-07 | 2017-12-07 | Image compression method and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109903350A true CN109903350A (en) | 2019-06-18 |
CN109903350B CN109903350B (en) | 2021-08-06 |
Family
ID=66939820
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711289667.8A Active CN109903350B (en) | 2017-10-20 | 2017-12-07 | Image compression method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109903350B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110808738A (en) * | 2019-09-16 | 2020-02-18 | 平安科技(深圳)有限公司 | Data compression method, device, equipment and computer readable storage medium |
CN112954011A (en) * | 2021-01-27 | 2021-06-11 | 上海淇玥信息技术有限公司 | Image resource compression method and device and electronic equipment |
CN113065579A (en) * | 2021-03-12 | 2021-07-02 | 支付宝(杭州)信息技术有限公司 | Method and device for classifying target object |
CN113422950A (en) * | 2021-05-31 | 2021-09-21 | 北京达佳互联信息技术有限公司 | Training method and training device for image data processing model |
CN113657136A (en) * | 2020-05-12 | 2021-11-16 | 阿里巴巴集团控股有限公司 | Identification method and device |
CN117440172A (en) * | 2023-12-20 | 2024-01-23 | 江苏金融租赁股份有限公司 | Picture compression method and device |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105163121A (en) * | 2015-08-24 | 2015-12-16 | 西安电子科技大学 | Large-compression-ratio satellite remote sensing image compression method based on deep self-encoding network |
CN105809704A (en) * | 2016-03-30 | 2016-07-27 | 北京小米移动软件有限公司 | Method and device for identifying image definition |
CN105976400A (en) * | 2016-05-10 | 2016-09-28 | 北京旷视科技有限公司 | Object tracking method and device based on neural network model |
CN106096670A (en) * | 2016-06-17 | 2016-11-09 | 北京市商汤科技开发有限公司 | Concatenated convolutional neural metwork training and image detecting method, Apparatus and system |
CN106296692A (en) * | 2016-08-11 | 2017-01-04 | 深圳市未来媒体技术研究院 | Image significance detection method based on antagonism network |
CN107018422A (en) * | 2017-04-27 | 2017-08-04 | 四川大学 | Still image compression method based on depth convolutional neural networks |
US20170230675A1 (en) * | 2016-02-05 | 2017-08-10 | Google Inc. | Compressing images using neural networks |
US20170249536A1 (en) * | 2016-02-29 | 2017-08-31 | Christopher J. Hillar | Self-Organizing Discrete Recurrent Network Digital Image Codec |
CN107301668A (en) * | 2017-06-14 | 2017-10-27 | 成都四方伟业软件股份有限公司 | A kind of picture compression method based on sparse matrix, convolutional neural networks |
CN107403166A (en) * | 2017-08-02 | 2017-11-28 | 广东工业大学 | A kind of method and apparatus for extracting facial image pore feature |
CN107403415A (en) * | 2017-07-21 | 2017-11-28 | 深圳大学 | Compression depth plot quality Enhancement Method and device based on full convolutional neural networks |
Non-Patent Citations (3)
Title |
---|
全球人工智能: "嫌图片太大?!卷积神经网络轻松实现无损压缩到20%", 《HTTPS://WWW.SOHU.COM/A/163460325_642762》 * |
许锋 等: "神经网络在图像处理中的应用", 《信息与控制》 * |
高绪慧: "基于神经网络与SVM的图像压缩(编码)理论和方法", 《中国优秀硕士学位论文全文数据库 信息科技辑》 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110808738A (en) * | 2019-09-16 | 2020-02-18 | 平安科技(深圳)有限公司 | Data compression method, device, equipment and computer readable storage medium |
CN110808738B (en) * | 2019-09-16 | 2023-10-20 | 平安科技(深圳)有限公司 | Data compression method, device, equipment and computer readable storage medium |
CN113657136A (en) * | 2020-05-12 | 2021-11-16 | 阿里巴巴集团控股有限公司 | Identification method and device |
CN113657136B (en) * | 2020-05-12 | 2024-02-13 | 阿里巴巴集团控股有限公司 | Identification method and device |
CN112954011A (en) * | 2021-01-27 | 2021-06-11 | 上海淇玥信息技术有限公司 | Image resource compression method and device and electronic equipment |
CN112954011B (en) * | 2021-01-27 | 2023-11-10 | 上海淇玥信息技术有限公司 | Image resource compression method and device and electronic equipment |
CN113065579A (en) * | 2021-03-12 | 2021-07-02 | 支付宝(杭州)信息技术有限公司 | Method and device for classifying target object |
CN113065579B (en) * | 2021-03-12 | 2022-04-12 | 支付宝(杭州)信息技术有限公司 | Method and device for classifying target object |
CN113422950A (en) * | 2021-05-31 | 2021-09-21 | 北京达佳互联信息技术有限公司 | Training method and training device for image data processing model |
CN117440172A (en) * | 2023-12-20 | 2024-01-23 | 江苏金融租赁股份有限公司 | Picture compression method and device |
CN117440172B (en) * | 2023-12-20 | 2024-03-19 | 江苏金融租赁股份有限公司 | Picture compression method and device |
Also Published As
Publication number | Publication date |
---|---|
CN109903350B (en) | 2021-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109903350A (en) | Method for compressing image and relevant apparatus | |
CN104685516B (en) | Apparatus and method for realizing the renewal based on event in spiking neuron network | |
EP3685319B1 (en) | Direct access, hardware acceleration in neural network | |
CN112651511B (en) | Model training method, data processing method and device | |
JP7065199B2 (en) | Image processing methods and equipment, electronic devices, storage media and program products | |
CN110062934A (en) | The structure and movement in image are determined using neural network | |
CN109376861A (en) | A kind of device and method for executing full articulamentum neural metwork training | |
CN109086877A (en) | A kind of device and method for executing convolutional neural networks forward operation | |
CN110298443A (en) | Neural network computing device and method | |
CN112163601B (en) | Image classification method, system, computer device and storage medium | |
CN111126590B (en) | Device and method for artificial neural network operation | |
EP3451238A1 (en) | Apparatus and method for executing pooling operation | |
CN113240079A (en) | Model training method and device | |
CN109918630A (en) | Document creation method, device, computer equipment and storage medium | |
CN111950700A (en) | Neural network optimization method and related equipment | |
CN109670578A (en) | Neural network first floor convolution layer data processing method, device and computer equipment | |
CN113627163A (en) | Attention model, feature extraction method and related device | |
CN108875920A (en) | Operation method, device, system and the storage medium of neural network | |
CN110083842A (en) | Translation quality detection method, device, machine translation system and storage medium | |
CN108629410A (en) | Based on principal component analysis dimensionality reduction and/or rise the Processing with Neural Network method tieed up | |
CN109684085B (en) | Memory pool access method and Related product | |
CN116958862A (en) | End-side layered neural network model training method, device and computer equipment | |
CN115795025A (en) | Abstract generation method and related equipment thereof | |
CN109542513A (en) | A kind of convolutional neural networks instruction data storage system and method | |
CN116563660A (en) | Image processing method and related device based on pre-training large model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||