CN110163194A - Image processing method, apparatus and storage medium - Google Patents
Image processing method, apparatus and storage medium
- Publication number
- CN110163194A CN110163194A CN201910378956.8A CN201910378956A CN110163194A CN 110163194 A CN110163194 A CN 110163194A CN 201910378956 A CN201910378956 A CN 201910378956A CN 110163194 A CN110163194 A CN 110163194A
- Authority
- CN
- China
- Prior art keywords
- image
- network
- processed
- default
- adversarial network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the invention disclose an image processing method, apparatus and storage medium. An image to be processed is obtained and target-image detection is performed on it. When the image to be processed is detected to contain a target image, a first generator of a generative adversarial network (GAN) is obtained; the GAN is trained on sample images, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network. The convolution sub-network convolves the image to be processed to obtain its non-target-image features, and the deconvolution sub-network then deconvolves those features to produce an image that no longer contains the target image. In this way, the first generator of the GAN extracts the non-target-image features of the image to be processed and generates a processed image without the target image, thereby removing the target image from the image to be processed.
Description
Technical field
The present invention relates to the technical field of computer vision, and in particular to an image processing method, apparatus and storage medium.
Background art
Optical character recognition (OCR) technology provides text detection and recognition in many scenarios. In some image recognition tasks, interfering images are often present. In bill recognition, for example, key fields are frequently covered by a seal, which greatly disturbs subsequent detection and recognition: characters inside the seal may be added to the recognition result, or the overlap between the seal and the text to be recognized may cause recognition errors. Removing such interfering images from the image to be recognized is therefore very important in fields such as text recognition.
Summary of the invention
Embodiments of the present invention provide an image processing method, apparatus and storage medium that can remove a target image from an image to be recognized.
An embodiment of the present invention provides an image processing method, comprising:
obtaining an image to be processed;
performing target-image detection on the image to be processed;
when the image to be processed is detected to contain a target image, obtaining a first generator of a generative adversarial network (GAN), the GAN being trained on sample images, wherein the target image is the image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
performing convolution on the image to be processed with the convolution sub-network to obtain non-target-image features of the image to be processed, the non-target-image features being the image features corresponding to the portions of the image to be processed other than the target image; and
performing deconvolution on the non-target-image features with the deconvolution sub-network to obtain an image that does not contain the target image.
Correspondingly, an embodiment of the present invention further provides an image processing apparatus, comprising:
a first obtaining unit, configured to obtain an image to be processed;
a detection unit, configured to perform target-image detection on the image to be processed;
a second obtaining unit, configured to obtain, when the image to be processed is detected to contain a target image, a first generator of a generative adversarial network (GAN), the GAN being trained on sample images, wherein the target image is the image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
a first processing unit, configured to perform convolution on the image to be processed with the convolution sub-network to obtain non-target-image features of the image to be processed, the non-target-image features being the image features corresponding to the portions of the image to be processed other than the target image; and
a second processing unit, configured to perform deconvolution on the non-target-image features with the deconvolution sub-network to obtain an image that does not contain the target image.
Optionally, in some embodiments, the first processing unit is specifically configured to:
extract image features of the image to be processed with the convolutional layers of the convolution sub-network; and
downsample the image features with the pooling layers of the convolution sub-network to obtain the non-target-image features.
Optionally, in some embodiments, the second processing unit is specifically configured to:
deconvolve the non-target-image features with the deconvolution layers of the deconvolution sub-network; and
upsample the deconvolved non-target-image features with the upsampling layers of the deconvolution sub-network to obtain the processed image.
Optionally, in some embodiments, the second processing unit is further specifically configured to:
obtain, at a deconvolution layer, the image features output by the corresponding convolutional layer;
obtain, at the deconvolution layer, the image features output by the preceding layer; and
deconvolve, at the deconvolution layer, the image features output by the convolutional layer together with the image features output by the preceding layer.
Optionally, in some embodiments, the apparatus further comprises:
a third obtaining unit, configured to obtain the sample images, the sample images comprising positive samples and negative samples, a positive sample being a sample that contains the target image and a negative sample being a sample that does not contain the target image; and
a training unit, configured to alternately train a preset generative adversarial network with the positive samples and the negative samples to obtain the generative adversarial network, wherein the preset generative adversarial network comprises the first generator.
Optionally, in some embodiments, the preset generative adversarial network comprises a first preset GAN and a second preset GAN, and the training unit is specifically configured to:
train the first preset GAN with the positive samples to obtain a first GAN;
update the parameters of the first network module of the first GAN into the corresponding second network module of the second preset GAN, the second network module comprising the first generator;
train the second preset GAN with the negative samples to obtain a second GAN;
update the parameters of the second network module of the second GAN into the corresponding first network module of the first preset GAN, the first network module comprising the first generator; and
determine the generative adversarial network from the first GAN or the second GAN.
Optionally, in some embodiments, the first network module further comprises a second generator, a first discriminator and a second discriminator, and the training unit is further specifically configured to:
input a positive sample into the first generator of the first preset GAN to generate a first image that does not contain the target image;
input the first image into the second generator of the first preset GAN to generate a second image that contains the target image;
determine a first loss value of the first image with the first discriminator of the first preset GAN, and a second loss value of the second image with the second discriminator of the first preset GAN; and
adjust the parameters of the first preset GAN according to the first loss value and the second loss value to obtain the first GAN.
Optionally, in some embodiments, the second network module further comprises the second generator, the first discriminator and the second discriminator, and the training unit is further specifically configured to:
input a negative sample into the second generator of the second preset GAN to generate a third image that contains the target image;
input the third image into the first generator of the second preset GAN to generate a fourth image that does not contain the target image;
determine a third loss value of the third image with the second discriminator of the second preset GAN, and a fourth loss value of the fourth image with the first discriminator of the second preset GAN; and
adjust the parameters of the second preset GAN according to the third loss value and the fourth loss value to obtain the second GAN.
Optionally, in some embodiments, the apparatus further comprises:
an extraction unit, configured to extract, when the image to be processed is detected to contain the target image, the target image region corresponding to the target image from the image to be processed;
in which case the first processing unit is specifically configured to:
perform convolution on the target image region with the convolution sub-network to obtain the non-target-image features of the image to be processed.
An embodiment of the present invention further provides a storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the steps of any image processing method provided by the embodiments of the present invention.
An embodiment of the present invention further provides a computer program product which, when run on a computer, causes the computer to perform the steps of any image processing method provided by the embodiments of the present invention.
In the embodiments of the present invention, the image processing apparatus obtains an image to be processed and performs target-image detection on it. When the image to be processed is detected to contain a target image, a first generator of a generative adversarial network (GAN) trained on sample images is obtained; the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network. The convolution sub-network convolves the image to be processed to obtain its non-target-image features, and the deconvolution sub-network then deconvolves those features to obtain an image that does not contain the target image. In this way, the first generator of the GAN extracts the non-target-image features of the image to be processed and generates a processed image without the target image, thereby removing the target image from the image to be processed.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an application scenario of the image processing method provided by an embodiment of the present invention;
Fig. 2 is a flow diagram of the image processing method provided by an embodiment of the present invention;
Fig. 3 is a structural schematic diagram of a first preset generative adversarial network provided by an embodiment of the present invention;
Fig. 4 is a structural schematic diagram of a second preset generative adversarial network provided by an embodiment of the present invention;
Fig. 5 is a structural schematic diagram of the first generator provided by an embodiment of the present invention;
Fig. 6a is another flow diagram of the image processing method provided by an embodiment of the present invention;
Fig. 6b is a schematic diagram of an unprocessed bill image provided by an embodiment of the present invention;
Fig. 6c is a schematic diagram of a processed bill image provided by an embodiment of the present invention;
Fig. 6d is a framework flow diagram of the image processing method provided by the present invention;
Fig. 7 is a structural schematic diagram of the image processing apparatus provided by an embodiment of the present invention;
Fig. 8 is another structural schematic diagram of the image processing apparatus provided by an embodiment of the present invention;
Fig. 9 is a structural schematic diagram of a network device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The embodiments of the present invention provide an image processing method, apparatus and storage medium. The image processing apparatus may be integrated in a network device, which may be a server or a terminal such as a mobile phone, a tablet computer, a laptop or a personal computer (PC).
The image processing method provided by the embodiments of the present invention can be used to process images; for example, it can remove a target image from an image to be processed. In some embodiments, the target image may be removed by a generator of a generative adversarial network, the target image including a seal image.
For example, referring to Fig. 1, the network device obtains an image to be processed and performs target-image detection on it. Taking the case where the image to be processed is a bill image and the target image is a seal image: when the bill image is detected to contain a seal image, the seal image is removed by a generator of a generative adversarial network. Specifically, the convolution sub-network of the generator extracts the non-seal-image features of the bill image, and the deconvolution sub-network of the generator then restores the image corresponding to those non-seal-image features, yielding a bill image that does not contain the seal image.
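The flow just described — a convolution sub-network that compresses the bill image into non-seal-image features, then a deconvolution sub-network that restores a full-size image from them — can be sketched at toy scale as follows. All sizes, the kernel, and the single-layer depth are illustrative assumptions rather than the patent's actual architecture, and nearest-neighbour upsampling stands in for a learned deconvolution:

```python
import numpy as np

def conv_subnetwork(image, kernel):
    """One 3x3 valid convolution followed by 2x2 max pooling (downsampling)."""
    h, w = image.shape
    feat = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            feat[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    fh = feat.shape[0] // 2 * 2          # crop to even size for 2x2 pooling
    fw = feat.shape[1] // 2 * 2
    return feat[:fh, :fw].reshape(fh // 2, 2, fw // 2, 2).max(axis=(1, 3))

def deconv_subnetwork(features, out_shape):
    """Nearest-neighbour upsampling standing in for a learned deconvolution."""
    up = features.repeat(2, axis=0).repeat(2, axis=1)
    restored = np.zeros(out_shape)       # pad back to the input's size
    restored[:up.shape[0], :up.shape[1]] = up
    return restored

image = np.random.rand(8, 8)             # stand-in for a bill image
kernel = np.ones((3, 3)) / 9.0           # stand-in for learned conv weights
features = conv_subnetwork(image, kernel)
output = deconv_subnetwork(features, image.shape)
print(features.shape, output.shape)      # (3, 3) (8, 8)
```

A real generator would stack several such stages and learn the kernels adversarially; only the downsample-then-upsample shape of the computation is illustrated here.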
Detailed descriptions are given below. It should be noted that the numbering of the following embodiments does not imply any preferred order among them.
In the embodiments of the present invention, the description is given from the perspective of the image processing apparatus. In one embodiment, an image processing method is provided which may be executed by a processor of the network device. As shown in Fig. 2, the specific flow of the image processing method may be as follows:
201. Obtain an image to be processed.
The image to be processed is an image on which target-image detection needs to be performed and from which, when it contains a target image, the target image needs to be removed.
The target image in the embodiments of the present invention may be an interfering image in the image to be processed; the interfering image may be a foreground image of the image to be recognized and may disturb OCR recognition of the image to be processed.
In some embodiments, when a user needs to remove the target image from an image, the image to be processed may be input into the image processing apparatus so that the image processing apparatus obtains it.
In some embodiments, the image to be processed may be a bill image and the target image may be a seal image; that is, the embodiments of the present invention can remove a seal image from a bill image.
202. Perform target-image detection on the image to be processed.
After obtaining the image to be processed, the image processing apparatus performs target-image detection on it to determine whether it contains a target image.
Specifically, the target-image detection of the embodiments of the present invention may be implemented by an image detection method, an attention network or the like. The specific detection method is not limited here, as long as it can detect whether the image to be processed contains the target image, for example whether a bill image contains a seal image.
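As a concrete stand-in for step 202, the toy heuristic below flags an image when enough of its pixels are strongly red-dominant, as seal stamps on bills typically are. This is a hypothetical illustration rather than the detection method of the patent (which leaves the method open, e.g. an attention network); all thresholds are arbitrary assumptions:

```python
import numpy as np

def contains_seal(image_rgb, ratio_threshold=0.02):
    """Return True if enough pixels are strongly red-dominant."""
    arr = image_rgb.astype(int)          # avoid uint8 overflow in arithmetic
    r, g, b = arr[..., 0], arr[..., 1], arr[..., 2]
    red_mask = (r > 150) & (r - g > 50) & (r - b > 50)
    return bool(red_mask.mean() > ratio_threshold)

blank = np.full((32, 32, 3), 255, dtype=np.uint8)    # plain white page
stamped = blank.copy()
stamped[8:24, 8:24] = [200, 40, 40]                  # fake red seal patch
print(contains_seal(blank), contains_seal(stamped))  # False True
```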
203. When the image to be processed is detected to contain the target image, obtain the first generator of the generative adversarial network.
The generative adversarial network is trained on sample images; the target image is the image to be detected, and the first generator comprises a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network.
When the target image is detected in the image to be processed, it needs to be removed. To do so, the first generator of the generative adversarial network is first obtained and then used to remove the target image from the image to be processed. The first generator may be preset in the image processing apparatus, or obtained from another server or terminal; in other words, the first generator of the generative adversarial network may be included in the image processing apparatus.
In some embodiments, before the image to be processed is obtained, a preset generative adversarial network is trained on sample images to obtain the generative adversarial network; that is, the generative adversarial network is the network trained on the sample images.
Specifically, training the preset generative adversarial network includes:
(1) Obtaining sample images.
The sample images include positive samples and negative samples; a positive sample is a sample that contains the target image, and a negative sample is a sample that does not contain the target image.
(2) Alternately training the preset generative adversarial network with the positive samples and the negative samples to obtain the generative adversarial network, the preset generative adversarial network comprising the first generator.
More specifically, the preset generative adversarial network includes a first preset GAN and a second preset GAN, each of which includes the first generator.
In this case, alternately training the preset generative adversarial network with the positive samples and the negative samples to obtain the generative adversarial network comprises:
a. Train the first preset GAN with the positive samples to obtain a first GAN.
Since it is difficult to collect a large number of paired images that appear both with and without the target image, an alternating training scheme is used; the positive samples and negative samples in this embodiment therefore need not be paired.
The structure of the first preset GAN is shown in Fig. 3. The first network module of the first preset GAN includes a first generator Gx-y, a first discriminator Dy, a second generator Gy-x and a second discriminator Dx. Gx-y generates an image y without the target image from an image x with the target image; Gy-x generates an image x' with the target image from the image y; Dy judges, based on the loss value, whether the image y is real and free of the target image; and Dx judges whether the image x' is a real stamped image. The discriminators (Dy and Dx) in the embodiments of the present invention are typically convolutional-neural-network classification models whose main components are convolutional layers and fully connected layers.
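The discriminators Dy and Dx are described above as classification models built from convolutional and fully connected layers. A toy version with one of each — shapes and random weights are made up for illustration — might look like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator(image, conv_kernel, fc_weights):
    """Toy discriminator: one ReLU conv layer, flatten, one FC layer."""
    h, w = image.shape
    feat = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            feat[i, j] = max(0.0, np.sum(image[i:i + 3, j:j + 3] * conv_kernel))
    return sigmoid(feat.ravel() @ fc_weights)   # probability the input is real

rng = np.random.default_rng(0)
img = rng.random((6, 6))
kernel = rng.standard_normal((3, 3))
weights = rng.standard_normal(16)               # (6-2) * (6-2) flattened features
score = discriminator(img, kernel, weights)
print(0.0 <= score <= 1.0)
```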
Specifically, a positive sample x is input into the first generator Gx-y of the first preset GAN to generate a first image y that does not contain the target image;
the first image y is then input into the second generator Gy-x of the first preset GAN to generate a second image x' that contains the target image;
a first loss value of the first image y is determined by the first discriminator Dy of the first preset GAN, and a second loss value of the second image x' is determined by the second discriminator Dx of the first preset GAN;
finally, the parameters of the first preset GAN are adjusted according to the first loss value and the second loss value to obtain the first GAN.
Specifically, the first loss value is used to adjust the parameters of the first generator, and the second loss value is used to adjust the parameters of the second generator.
The image processing apparatus may also feed the first image generated by the first generator to the first discriminator as a sample, to improve the accuracy of the first discriminator.
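The positive-sample step just listed (x → Gx-y → y → Gy-x → x', with Dy scoring y and Dx scoring x') can be sketched structurally. The generators and discriminators below are dummy placeholders, and the negative-log-likelihood loss is a common GAN choice assumed here for illustration, not quoted from the patent:

```python
import math

def G_xy(x):   # first generator: removes the stamp (dummy transform)
    return x * 0.5

def G_yx(y):   # second generator: re-adds the stamp (dummy inverse)
    return y * 2.0

def D_y(y):    # first discriminator: P(y is a real stamp-free image)
    return 0.8

def D_x(x):    # second discriminator: P(x is a real stamped image)
    return 0.6

def positive_sample_step(x):
    y = G_xy(x)                       # first image (stamp removed)
    x_back = G_yx(y)                  # second image (stamp restored)
    loss1 = -math.log(D_y(y))         # first loss: adjusts the first generator
    loss2 = -math.log(D_x(x_back))    # second loss: adjusts the second generator
    return loss1, loss2

l1, l2 = positive_sample_step(1.0)
print(round(l1, 4), round(l2, 4))     # 0.2231 0.5108
```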
b. Update the parameters of the first network module of the first GAN into the corresponding second network module of the second preset GAN, the second network module comprising the first generator.
After one round of training the first preset GAN with the positive samples, the parameters of the first network module of the first GAN obtained from that round are updated into the corresponding second network module of the second preset GAN. As shown in Fig. 4, the second network module of the second preset GAN includes the second generator Gy-x, the second discriminator Dx, the first generator Gx-y and the first discriminator Dy.
Specifically, the parameters of the first generator of the first GAN are updated into the first generator of the second preset GAN; the parameters of the second generator of the first GAN are updated into the second generator of the second preset GAN; the parameters of the first discriminator of the first GAN are updated into the first discriminator of the second preset GAN; and the parameters of the second discriminator of the first GAN are updated into the second discriminator of the second preset GAN.
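The module-by-module hand-off in step b is essentially a deep copy from every module of the first GAN into the matching module of the second preset GAN. A sketch with plain dictionaries standing in for the four network modules — all names and weight values are hypothetical:

```python
first_gan = {
    "G_xy": {"w": [0.1, 0.2]}, "G_yx": {"w": [0.3]},
    "D_y":  {"w": [0.4]},      "D_x":  {"w": [0.5]},
}
second_gan = {
    "G_xy": {"w": [0.0, 0.0]}, "G_yx": {"w": [0.0]},
    "D_y":  {"w": [0.0]},      "D_x":  {"w": [0.0]},
}

def sync_parameters(src, dst):
    """Copy every module's weights from src into the matching dst module."""
    for name, module in src.items():
        dst[name] = {key: list(val) for key, val in module.items()}

sync_parameters(first_gan, second_gan)   # step b; step d is the reverse call
print(second_gan["G_xy"]["w"])           # [0.1, 0.2]
```

Step d is the same operation in the opposite direction, which is why the two preset GANs end up with identical module parameters at convergence.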
It should be noted that the first GAN at this point is the first GAN after the latest round of training.
The discriminator of the first GAN pushes the generated image x' to be as close as possible to the real image x, and the discriminator of the second GAN pushes the generated image y' to be as close as possible to the real image y; through this continual game between discriminators and generators, the abilities of both sides keep improving.
c. Train the second preset GAN with the negative samples to obtain a second GAN.
If the first preset GAN has already been trained, the parameters of the corresponding network modules of the second preset GAN at this point are identical to those of the trained first preset GAN.
The second preset GAN is then trained with the negative samples, specifically:
a negative sample y is input into the second generator Gy-x of the second preset GAN to generate a third image x that contains the target image;
the third image x is then input into the first generator Gx-y of the second preset GAN to generate a fourth image y' that does not contain the target image;
a third loss value of the third image x is determined by the second discriminator Dx of the second preset GAN, and a fourth loss value of the fourth image y' is determined by the first discriminator Dy of the second preset GAN;
finally, the parameters of the second preset GAN are adjusted according to the third loss value and the fourth loss value to obtain the second GAN.
Specifically, the parameters of the second generator are adjusted according to the third loss value, and the parameters of the first generator are adjusted according to the fourth loss value.
d. Update the parameters of the second network module of the second GAN into the corresponding first network module of the first preset GAN, the first network module comprising the first generator.
After a new round of training the second preset GAN with the negative samples, the parameters of the second network module of the second GAN obtained from that round are updated into the corresponding first network module of the first GAN.
Specifically, the parameters of the second generator of the second GAN are updated into the second generator of the first preset GAN; the parameters of the first generator of the second GAN are updated into the first generator of the first preset GAN; the parameters of the second discriminator of the second GAN are updated into the second discriminator of the first preset GAN; and the parameters of the first discriminator of the second GAN are updated into the first discriminator of the first preset GAN.
It should be noted that the second GAN at this point is the second GAN after the latest round of training.
It should also be noted that this embodiment alternately trains the first preset GAN (or the not-yet-converged first GAN) with the positive samples and the second preset GAN (or the not-yet-converged second GAN) with the negative samples, and that during the alternating training the latest trained parameters are updated into the corresponding network modules of the other side (the first or second GAN) until the networks converge.
The order in which the first preset GAN and the second preset GAN are trained at the start of training is not limited here.
In the embodiments of the present invention, only the first generator of the first GAN or of the second GAN needs to be extracted to process images.
E. Determine the generative adversarial network according to the first generative adversarial network or the second generative adversarial network.
When network convergence is determined according to the first generative adversarial network and the second generative adversarial network, since the parameters of the corresponding network modules in the first and second generative adversarial networks are updated into each other during training, the parameters of the network modules in the converged first and second generative adversarial networks are the same; the resulting generative adversarial network is the first generative adversarial network after training converges and/or the second generative adversarial network after training converges.
The first generator can then be obtained from either the first generative adversarial network or the second generative adversarial network.
204. Perform convolution processing on the image to be processed based on the convolution sub-network to obtain the non-target image features of the image to be processed.
The non-target image features are the image features corresponding to the parts of the image to be processed other than the target image. The network structure of the first generator is shown in Fig. 5. Specifically, the convolutional layers in the convolution sub-network extract the image features of the image to be processed, among which are the non-target image features, i.e., features that do not correspond to the target image; the pooling layers in the convolution sub-network then perform down-sampling on the image features to obtain the non-target image features.
In the embodiments of the present invention, the numbers of convolutional layers and deconvolution layers are not limited; each may be 7, or another number. Likewise, the numbers of pooling layers and up-sampling layers are not limited.
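The convolution-then-pooling encoding described above can be sketched with a naive NumPy stand-in; the kernel values and sizes below are illustrative assumptions, not the patent's actual convolution sub-network:

```python
import numpy as np

def conv2d(x, kernel):
    # naive single-channel "valid" convolution: slide the kernel over x
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def maxpool2d(x, stride=2):
    # 2x2 max pooling, the down-sampling step performed by the pooling layer
    h, w = x.shape[0] // stride, x.shape[1] // stride
    return x[:h * stride, :w * stride].reshape(h, stride, w, stride).max(axis=(1, 3))

# a 10x10 toy "image" passed through one conv + pool stage of the encoder
features = conv2d(np.arange(100, dtype=float).reshape(10, 10), np.ones((3, 3)) / 9)
downsampled = maxpool2d(features)  # 8x8 feature map pooled down to 4x4
```

The pooling halves each spatial dimension, which is the "reducing the image features" effect the embodiment relies on.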
205. Perform deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not include the target image.
Specifically, the deconvolution layers in the deconvolution sub-network perform deconvolution processing on the non-target image features to restore the image; the up-sampling layers in the deconvolution sub-network then perform up-sampling on the deconvolved non-target image features to restore the image size and obtain the processed image, where the processed image may be the image to be processed with the target image removed, for example a bill image with the seal removed.
As shown in Fig. 5, since detail information may be lost while the convolutional layers encode the image, the corresponding layers of the convolution sub-network and the deconvolution sub-network need to be connected to reduce the loss of information. In this case, performing deconvolution processing on the non-target image features based on the deconvolution layers in the deconvolution sub-network includes: obtaining, at a deconvolution layer, the image features output by the corresponding convolutional layer; obtaining, at the deconvolution layer, the image features output by the previous layer; and performing deconvolution processing on the image features output by the convolutional layer together with the image features output by the previous layer, where the convolutional layer is the one corresponding to that deconvolution layer.
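The layer connection just described can be sketched as follows; the nearest-neighbour up-sampling and the channel-wise stacking are assumptions standing in for the patent's up-sampling layer and Fig. 5 connection:

```python
import numpy as np

def upsample2d(x, stride=2):
    # nearest-neighbour up-sampling, restoring the size lost to pooling
    return x.repeat(stride, axis=0).repeat(stride, axis=1)

def merge_skip(decoder_feat, encoder_feat):
    # stack the decoder feature with the matching convolutional-layer output,
    # so the deconvolution layer sees both (the connection shown in Fig. 5)
    assert decoder_feat.shape == encoder_feat.shape
    return np.stack([decoder_feat, encoder_feat], axis=0)

encoder_out = np.ones((8, 8))              # feature map saved from the encoder
decoder_in = upsample2d(np.zeros((4, 4)))  # up-sampled back to 8x8
merged = merge_skip(decoder_in, encoder_out)  # two "channels" of 8x8
```

Concatenating the encoder output this way is what lets the decoder recover detail that pooling discarded.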
In some embodiments, when the image to be processed is detected to include the target image, the target image region corresponding to the target image can be extracted from the image to be processed, i.e., a partial image region containing the target image is cropped from the image to be processed. In this case, the non-target image features may be the image features corresponding to the parts of the target image region other than the target image, and the convolution sub-network and the deconvolution sub-network in the first generator only process the target image region, removing the target image from it to obtain the processed image.
The processed image then needs to be stitched back into the original image, which yields the image to be processed with the target image removed.
In some embodiments, after the image to be processed with the target image removed is obtained, optical character recognition (OCR) can be performed on it, for example on the bill image with the seal removed, to obtain the text information corresponding to the image to be processed, such as the text information of the bill image. The scheme of this embodiment reduces the interference of the seal image with image recognition, avoids the problem of over-removing the seal, and can restore text covered by the seal, improving the accuracy of OCR.
In the embodiments of the present invention, the image processing apparatus obtains an image to be processed and performs target image detection on it; when the image to be processed is detected to include a target image, it obtains the first generator in a generative adversarial network trained from sample images, where the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network; it then performs convolution processing on the image to be processed based on the convolution sub-network to obtain the non-target image features of the image to be processed, and performs deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not include the target image. This scheme can extract the non-target image features of the image to be processed through the first generator of the generative adversarial network and then generate, from those features, a processed image that does not include the target image, thereby removing the target image from the image to be processed.
The image processing method in the embodiments of the present invention is an end-to-end method; the whole removal process is automated and requires no human involvement.
The method described in the above embodiments is described in further detail below by way of example.
Referring to Fig. 6a, this embodiment is described with the image processing apparatus integrated in a network device, the image to be processed being a bill image, and the target image being a seal image.
601. The network device obtains a bill image.
The bill image is an image on which seal image detection needs to be performed.
In some embodiments, the bill image may be as shown in Fig. 6b.
In some embodiments, when a user needs to remove the seal image from a bill image, the bill image can be input into the network device so that the network device obtains it.
The bill image may be obtained by scanning a bill or by photographing a bill; this is not limited here.
602. The network device performs seal image detection on the bill image.
After the network device obtains the bill image, it performs seal image detection on the image to detect whether the bill image includes a seal image.
Specifically, the embodiments of the present invention may perform seal image detection on the image to be processed through a seal detection network, an attention network, or the like; the specific detection method is not limited here, as long as it can detect whether the bill image includes a seal image.
603. When the bill image is detected to include a seal image, the network device extracts the seal image region corresponding to the seal image from the bill image.
To improve the efficiency with which the network device removes the seal image, the network device can extract from the bill image the seal image region corresponding to the seal image, where the seal image region is smaller than the bill image but contains all of the seal image information. The seal image region may be circular or square; its specific shape is not limited here.
In some embodiments, when the bill image is detected not to include a seal image, OCR can be performed directly on the image.
604. The network device obtains the first generator in the generative adversarial network.
The generative adversarial network is trained from sample images, and the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network.
When a seal image is detected and the seal image region has been extracted from the bill image, the first generator in the generative adversarial network needs to be obtained, and the first generator is then used to remove the seal image from the bill image.
Before the bill image is obtained, the preset generative adversarial network can first be trained to obtain the generative adversarial network.
The preset generative adversarial network includes a first preset generative adversarial network and a second preset generative adversarial network; training the first preset generative adversarial network yields the first generative adversarial network, and training the second preset generative adversarial network yields the second generative adversarial network. The structure of the first preset generative adversarial network is shown in Fig. 3, and the structure of the second preset generative adversarial network is shown in Fig. 4.
Since it is difficult to collect a large number of paired images that occur both with and without the target image, an alternating training method is used, and the positive and negative samples in this embodiment may be unpaired. A positive sample is a bill image that includes a seal image, and a negative sample is a bill image that does not. In the embodiments of the present invention, the first preset generative adversarial network is trained on the positive samples and the second preset generative adversarial network on the negative samples.
The embodiments of the present invention use the positive samples and the negative samples, respectively, to train the first and second preset generative adversarial networks alternately. In addition, after one preset generative adversarial network is trained, its parameters are updated into the corresponding network module of the other preset generative adversarial network. For example, after the first preset generative adversarial network is trained, its parameters are updated into the corresponding network module of the second preset generative adversarial network; after the second preset generative adversarial network is trained, its parameters are updated into the corresponding network module of the first preset generative adversarial network. This continues until the first and/or second preset generative adversarial network converges, yielding the trained first and/or second preset generative adversarial network, at which point the parameters of the corresponding network modules in the two networks are identical.
In the embodiments of the present invention, it is only necessary to extract the first generator from the first generative adversarial network or the second generative adversarial network to process images.
605. The network device extracts the image features of the seal image region based on the convolutional layers in the convolution sub-network.
The structure of the first generator is shown in Fig. 5. Specifically, the convolutional layers in the convolution sub-network extract the image features of the seal image region; the pooling layers in the convolution sub-network then down-sample the image features to obtain the non-target image features of the seal image region.
In the embodiments of the present invention, the numbers of convolutional layers and deconvolution layers are not limited; each may be 7, or another number. Likewise, the numbers of pooling layers and up-sampling layers are not limited.
606. The network device down-samples the image features based on the pooling layers in the convolution sub-network to obtain the non-seal image features.
In some embodiments, after the network device has obtained the image features of the seal image region, it down-samples them, reducing the image features to obtain the non-seal image features of the seal image region.
607. The network device performs deconvolution processing on the non-seal image features based on the deconvolution layers in the deconvolution sub-network.
As shown in Fig. 5, since detail information may be lost while the convolutional layers encode the image, the corresponding layers of the convolution sub-network and the deconvolution sub-network need to be connected to reduce the loss of information. In this case, performing deconvolution processing on the non-target image features based on the deconvolution layers in the deconvolution sub-network includes: obtaining, at a deconvolution layer, the image features output by the corresponding convolutional layer; obtaining, at the deconvolution layer, the image features output by the previous layer; and performing deconvolution processing on the image features output by the convolutional layer together with the image features output by the previous layer.
608. The network device performs up-sampling on the deconvolved non-seal image features based on the up-sampling layers in the deconvolution sub-network to obtain the processed image corresponding to the seal image region.
After the deconvolution processing has restored the image corresponding to the non-seal image features, up-sampling is performed on the deconvolved non-seal image features to restore the size of the seal image region and obtain the processed image corresponding to the seal image region.
609. The network device stitches the processed image corresponding to the seal image region back into the original bill image to obtain the processed image corresponding to the bill image.
After the seal image in the seal image region has been removed, the image of the seal image region without the seal is stitched back into the original bill image to obtain the processed image corresponding to the bill image, i.e., the bill image with the seal image removed.
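The stitching step can be sketched with NumPy; the `(top, left)` corner representation of the region is an assumption, since the patent leaves the region's shape and encoding open:

```python
import numpy as np

def paste_region(bill_image, cleaned_region, top, left):
    # write the de-sealed region back into a copy of the original bill image
    out = bill_image.copy()
    h, w = cleaned_region.shape[:2]
    out[top:top + h, left:left + w] = cleaned_region
    return out

bill = np.full((6, 6), 9)              # toy bill image
cleaned = np.zeros((2, 2), dtype=int)  # generator output for the seal region
restored = paste_region(bill, cleaned, 2, 2)
```

Copying before pasting leaves the original bill image intact, which matters if the detection step needs to be re-run.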
In some embodiments, the processed image corresponding to the bill image is as shown in Fig. 6c.
610. The network device performs OCR on the processed image corresponding to the bill image to obtain the text information corresponding to the bill image.
In this embodiment, since the processed image corresponding to the bill image is the bill image with the seal image removed, performing OCR on the processed image reduces the interference of the seal image with OCR; moreover, when the generator of the present invention removes the seal, it can also restore the text covered by the seal, improving the accuracy of OCR.
The embodiments of the present invention may also remove the seal image through a super-resolution reconstruction network or other generative adversarial networks.
Referring to Fig. 6d, which is a framework flow diagram of the image processing method provided by the present invention: the input image first passes through the seal detection network, which detects the specific location of the seal. If there is no seal, OCR is performed directly; otherwise, the seal image region where the seal is located is cropped and input into the generator of the generative adversarial network to generate a seal-free image. The seal-free image is then stitched back into the original image, and OCR is performed.
In the embodiments of the present invention, the image processing apparatus obtains an image to be processed and performs target image detection on it; when the image to be processed is detected to include a target image, it obtains the first generator in a generative adversarial network trained from sample images, where the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network; it then performs convolution processing on the image to be processed based on the convolution sub-network to obtain the non-target image features of the image to be processed, and performs deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not include the target image. This scheme can extract the non-target image features of the image to be processed through the first generator of the generative adversarial network and then generate, from those features, a processed image that does not include the target image, thereby removing the target image from the image to be processed.
To better implement the above method, the embodiments of the present invention correspondingly provide an image processing apparatus, which may be integrated in a network device; the network device may be a server, or may be a device such as a terminal.
For example, as shown in Fig. 7, the image processing apparatus may include:
a first acquisition unit 701, configured to obtain an image to be processed;
a detection unit 702, configured to perform target image detection on the image to be processed;
a second acquisition unit 703, configured to obtain, when the image to be processed is detected to include a target image, the first generator in a generative adversarial network, the generative adversarial network being trained from sample images, where the target image is the image that needs to be detected and the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
a first processing unit 704, configured to perform convolution processing on the image to be processed based on the convolution sub-network to obtain the non-target image features of the image to be processed, the non-target image features being the image features corresponding to the parts of the image to be processed other than the target image; and
a second processing unit 705, configured to perform deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not include the target image.
In some embodiments, the first processing unit 704 is specifically configured to:
extract the image features of the image to be processed based on the convolutional layers in the convolution sub-network; and
down-sample the image features based on the pooling layers in the convolution sub-network to obtain the non-target image features.
In some embodiments, the second processing unit 705 is specifically configured to:
perform deconvolution processing on the non-target image features based on the deconvolution layers in the deconvolution sub-network; and
perform up-sampling on the deconvolved non-target image features based on the up-sampling layers in the deconvolution sub-network to obtain the processed image.
In some embodiments, the second processing unit 705 is further specifically configured to:
obtain the image features output by the convolutional layer based on the deconvolution layer;
obtain the image features output by the previous layer based on the deconvolution layer; and
perform deconvolution processing, based on the deconvolution layer, on the image features output by the convolutional layer and the image features output by the previous layer.
Referring to Fig. 8, in some embodiments the apparatus further includes:
a third acquisition unit 706, configured to obtain the sample images, the sample images including positive samples and negative samples, where a positive sample is a sample that includes the target image and a negative sample is a sample that does not include the target image; and
a training unit 707, configured to alternately train the preset generative adversarial network according to the positive samples and the negative samples to obtain the generative adversarial network, where the preset generative adversarial network includes the first generator.
In some embodiments, the preset generative adversarial network includes a first preset generative adversarial network and a second preset generative adversarial network, and the training unit 707 is specifically configured to:
train the first preset generative adversarial network according to the positive samples to obtain the first generative adversarial network;
update the parameters of the first network module in the first generative adversarial network into the corresponding second network module of the second preset generative adversarial network, the second network module including the first generator;
train the second preset generative adversarial network according to the negative samples to obtain the second generative adversarial network;
update the parameters of the second network module in the second generative adversarial network into the corresponding first network module of the first preset generative adversarial network, the first network module including the first generator; and
determine the generative adversarial network according to the first generative adversarial network or the second generative adversarial network.
In some embodiments, the first network module further includes a second generator, a first discriminator, and a second discriminator, and the training unit 707 is further specifically configured to:
input a positive sample into the first generator of the first preset generative adversarial network to generate a first image that does not include the target image;
input the first image into the second generator of the first preset generative adversarial network to generate a second image that includes the target image;
determine a first loss value of the first image through the first discriminator of the first preset generative adversarial network, and determine a second loss value of the second image through the second discriminator of the first preset generative adversarial network; and
adjust the parameters of the first preset generative adversarial network according to the first loss value and the second loss value to obtain the first generative adversarial network.
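The forward pass of this training step can be sketched as below; the squared-error criterion and the target label of 1.0 are assumptions, since the patent specifies the two loss values but not the loss form:

```python
def first_gan_losses(pos_sample, g1, g2, d1, d2):
    """One forward pass of the first preset GAN on a positive sample."""
    first_image = g1(pos_sample)    # generator 1: remove the target image
    second_image = g2(first_image)  # generator 2: put a target image back
    # the first discriminator scores the seal-free image and the second
    # discriminator scores the rebuilt image; a squared-error criterion
    # against an assumed target label of 1.0 stands in for the real loss
    first_loss = (d1(first_image) - 1.0) ** 2
    second_loss = (d2(second_image) - 1.0) ** 2
    return first_loss + second_loss  # used to adjust the network parameters

# toy check with numeric stubs (the real generators/discriminators are networks)
total = first_gan_losses(
    5.0,
    g1=lambda x: x - 1.0,  # stub "remove seal"
    g2=lambda x: x + 1.0,  # stub "add seal"
    d1=lambda x: 0.5,      # stub discriminator scores
    d2=lambda x: 0.25,
)
```

The symmetric step for the second preset GAN would swap the roles of the two generators and discriminators, starting from a negative sample.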
In some embodiments, the second network module further includes a second generator, a first discriminator, and a second discriminator, and the training unit 707 is further specifically configured to:
input a negative sample into the second generator of the second preset generative adversarial network to generate a third image that includes the target image;
input the third image into the first generator of the second preset generative adversarial network to generate a fourth image that does not include the target image;
determine a third loss value of the third image through the second discriminator of the second preset generative adversarial network, and determine a fourth loss value of the fourth image through the first discriminator of the second preset generative adversarial network; and
adjust the parameters of the second preset generative adversarial network according to the third loss value and the fourth loss value to obtain the second generative adversarial network.
In some embodiments, the apparatus further includes:
an extraction unit 708, configured to extract, when the image to be processed is detected to include the target image, the target image region corresponding to the target image from the image to be processed.
In this case, the first processing unit 704 is specifically configured to:
perform convolution processing on the target image region based on the convolution sub-network to obtain the non-target image features of the image to be processed.
In the embodiments of the present invention, the image processing apparatus obtains an image to be processed; the detection unit 702 performs target image detection on the image to be processed; when the image to be processed is detected to include a target image, the second acquisition unit 703 obtains the first generator in a generative adversarial network trained from sample images, where the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network; the first processing unit 704 then performs convolution processing on the image to be processed based on the convolution sub-network to obtain the non-target image features of the image to be processed; and the second processing unit 705 performs deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not include the target image. This scheme can extract the non-target image features of the image to be processed through the first generator of the generative adversarial network and then generate, from those features, a processed image that does not include the target image, thereby removing the target image from the image to be processed.
In addition, the embodiments of the present invention also provide a network device; Fig. 9 shows a structural schematic diagram of the network device involved in the embodiments of the present invention. Specifically:
The network device may include components such as a processor 901 with one or more processing cores, a memory 902 with one or more computer-readable storage media, a power supply 903, and an input unit 904. Those skilled in the art will understand that the network device structure shown in Fig. 9 does not limit the network device, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently. Among them:
The processor 901 is the control center of the network device. It connects the various parts of the whole network device through various interfaces and lines and, by running or executing the software programs and/or modules stored in the memory 902 and calling the data stored in the memory 902, executes the various functions of the network device and processes data, thereby monitoring the network device as a whole. Optionally, the processor 901 may include one or more processing cores. Preferably, the processor 901 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be understood that the modem processor need not be integrated into the processor 901.
The memory 902 can be used to store software programs and modules; the processor 901 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area, where the program storage area can store the operating system, the application programs required for at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area can store data created according to the use of the network device, and the like. In addition, the memory 902 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Correspondingly, the memory 902 may also include a memory controller to provide the processor 901 with access to the memory 902.
The network device also includes a power supply 903 that supplies power to the various components. Preferably, the power supply 903 can be logically connected to the processor 901 through a power management system, so that functions such as charge management, discharge management, and power consumption management are realized through the power management system. The power supply 903 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other arbitrary components.
The network device may also include an input unit 904, which can be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
Although not shown, the network device may also include a display unit and the like, which are not described in detail here. Specifically, in this embodiment, the processor 901 in the network device loads the executable files corresponding to the processes of one or more application programs into the memory 902 according to the following instructions, and the processor 901 runs the application programs stored in the memory 902 to realize various functions as follows:
obtain an image to be processed; perform target image detection on the image to be processed; when the image to be processed is detected to include a target image, obtain the first generator in a generative adversarial network, the generative adversarial network being trained from sample images, where the target image is the image that needs to be detected and the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network; perform convolution processing on the image to be processed based on the convolution sub-network to obtain the non-target image features of the image to be processed, the non-target image features being the image features corresponding to the parts of the image to be processed other than the target image; and perform deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not include the target image.
For the specific implementation of each of the above operations, refer to the preceding embodiments; details are not repeated here.
As can be seen from the above, in the embodiments of the present invention the image processing apparatus obtains an image to be processed and performs target image detection on it; when the image to be processed is detected to include a target image, it obtains the first generator in a generative adversarial network trained from sample images, where the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network; it then performs convolution processing on the image to be processed based on the convolution sub-network to obtain the non-target image features of the image to be processed, and performs deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not include the target image. This scheme can extract the non-target image features of the image to be processed through the first generator of the generative adversarial network and then generate, from those features, a processed image that does not include the target image, thereby removing the target image from the image to be processed.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by instructions, or by instructions controlling the relevant hardware; the instructions can be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present invention provides a storage medium in which a plurality of instructions are stored; the instructions can be loaded by a processor to execute the steps of any image processing method provided by the embodiments of the present invention. For example, the instructions can execute the following steps:
Obtain an image to be processed; perform target image detection on the image to be processed; when it is detected that the image to be processed contains a target image, obtain the first generator in a generative adversarial network, the generative adversarial network being trained from sample images, wherein the target image is the image that needs to be detected, and the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network; perform convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, the non-target image features being the image features corresponding to the part of the image to be processed other than the target image; and perform deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image.
The specific implementation of each of the above operations can be found in the foregoing embodiments and will not be repeated here.
The storage medium may include a read-only memory (ROM, Read Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or the like.
Since the instructions stored in the storage medium can execute the steps of any image processing method provided by the embodiments of the present invention, they can achieve the beneficial effects achievable by any such image processing method; for details, see the foregoing embodiments, which will not be repeated here.
An image processing method, apparatus, and storage medium provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core ideas. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and scope of application according to the ideas of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (11)
1. An image processing method, comprising:
obtaining an image to be processed;
performing target image detection on the image to be processed;
when it is detected that the image to be processed contains a target image, obtaining the first generator in a generative adversarial network, the generative adversarial network being trained from sample images, wherein the target image is the image that needs to be detected, and the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
performing convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, the non-target image features being the image features corresponding to the part of the image to be processed other than the target image; and
performing deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image.
2. The method according to claim 1, wherein performing convolution processing on the image to be processed based on the convolution sub-network to obtain the non-target image features of the image to be processed comprises:
extracting image features of the image to be processed based on a convolutional layer in the convolution sub-network; and
performing down-sampling processing on the image features based on a pooling layer in the convolution sub-network to obtain the non-target image features.
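As a minimal sketch of this claim, assuming PyTorch (channel counts and kernel sizes hypothetical), a convolutional layer extracts features and a pooling layer down-samples them:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of claim 2: a convolutional layer extracts image
# features, then a pooling layer down-samples them into the non-target
# image features. All sizes are illustrative.
conv_layer = nn.Conv2d(3, 64, kernel_size=3, padding=1)
pool_layer = nn.MaxPool2d(kernel_size=2)

image = torch.randn(1, 3, 64, 64)                 # image to be processed
image_features = conv_layer(image)                # (1, 64, 64, 64)
non_target_features = pool_layer(image_features)  # halved spatial size
```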
3. The method according to claim 1, wherein performing deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain the image that does not contain the target image comprises:
performing deconvolution processing on the non-target image features based on a deconvolutional layer in the deconvolution sub-network; and
performing up-sampling processing, based on an up-sampling layer in the deconvolution sub-network, on the non-target image features that have undergone the deconvolution processing, to obtain the processed image.
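The two layers named in this claim can be sketched as follows, assuming PyTorch; the channel counts and the nearest-neighbor up-sampling mode are illustrative choices, not prescribed by the patent:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of claim 3: a deconvolutional layer processes the
# non-target features, then an up-sampling layer enlarges the result.
deconv_layer = nn.ConvTranspose2d(64, 32, kernel_size=3, padding=1)
upsample_layer = nn.Upsample(scale_factor=2, mode='nearest')

non_target_features = torch.randn(1, 64, 16, 16)
deconvolved = deconv_layer(non_target_features)  # (1, 32, 16, 16)
processed = upsample_layer(deconvolved)          # spatial size doubled
```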
4. The method according to claim 3, wherein performing deconvolution processing on the non-target image features based on the deconvolutional layer in the deconvolution sub-network comprises:
obtaining, by the deconvolutional layer, the image features output by the convolutional layer;
obtaining, by the deconvolutional layer, the image features output by the previous layer; and
performing, by the deconvolutional layer, deconvolution processing on the image features output by the convolutional layer and the image features output by the previous layer.
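Claim 4 describes a deconvolutional layer that consumes both the previous layer's output and the matching convolutional layer's features, i.e. a skip connection in the U-Net style. A minimal sketch assuming PyTorch, with concatenation as one plausible way to combine the two inputs (the patent does not specify the combination operator):

```python
import torch
import torch.nn as nn

class SkipDeconvBlock(nn.Module):
    """Hypothetical sketch of claim 4: the deconvolutional layer operates on
    the concatenation of the previous layer's output and the corresponding
    convolutional layer's features (a skip connection)."""
    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(in_ch + skip_ch, out_ch,
                                         kernel_size=4, stride=2, padding=1)

    def forward(self, prev: torch.Tensor, conv_feat: torch.Tensor) -> torch.Tensor:
        # Combine both feature maps before deconvolution.
        return self.deconv(torch.cat([prev, conv_feat], dim=1))

block = SkipDeconvBlock(128, 128, 64)
prev = torch.randn(1, 128, 16, 16)       # previous layer's output
conv_feat = torch.randn(1, 128, 16, 16)  # matching conv layer's features
out = block(prev, conv_feat)             # spatial size doubled
```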
5. The method according to claim 1, wherein before obtaining the image to be processed, the method further comprises:
obtaining the sample images, the sample images including positive samples and negative samples, a positive sample being a sample that contains the target image and a negative sample being a sample that does not contain the target image; and
alternately training a preset generative adversarial network according to the positive samples and the negative samples to obtain the generative adversarial network, wherein the preset generative adversarial network includes the first generator.
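The alternating schedule of this claim can be sketched in plain Python; `train_step` below is a placeholder standing in for one training pass, and here it simply records which sample set it received:

```python
# Hypothetical sketch of claim 5's alternating training: one pass over the
# positive samples, then one pass over the negative samples, repeated.
# train_step is a caller-supplied placeholder, not from the patent.
def alternate_training(train_step, positive_samples, negative_samples, rounds=3):
    for _ in range(rounds):
        train_step(positive_samples)  # train on samples containing the target
        train_step(negative_samples)  # train on samples without the target

log = []
alternate_training(log.append, "positive", "negative", rounds=2)
# log == ["positive", "negative", "positive", "negative"]
```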
6. The method according to claim 5, wherein the preset generative adversarial network includes a first preset generative adversarial network and a second preset generative adversarial network, the first preset generative adversarial network includes the first generator, the second preset generative adversarial network includes the first generator, and alternately training the preset generative adversarial network according to the positive samples and the negative samples to obtain the generative adversarial network comprises:
training the first preset generative adversarial network according to the positive samples to obtain a first generative adversarial network;
updating the parameters of the first network module in the first generative adversarial network into the corresponding second network module in the second preset generative adversarial network, the second network module including the first generator;
training the second preset generative adversarial network according to the negative samples to obtain a second generative adversarial network;
updating the parameters of the second network module in the second generative adversarial network into the corresponding first network module in the first preset generative adversarial network, the first network module including the first generator; and
determining the generative adversarial network according to the first generative adversarial network or the second generative adversarial network.
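The parameter hand-off in this claim amounts to copying a trained module's weights into its counterpart in the other preset network. A minimal sketch assuming PyTorch, with `nn.Linear` layers standing in for the network modules (the real modules would contain the generators and discriminators):

```python
import torch
import torch.nn as nn

# Hypothetical sketch of claim 6's parameter update: after training on the
# positive samples, the first network module's parameters are copied into
# the corresponding second network module of the other preset GAN (the
# reverse copy happens after training on the negative samples).
first_module = nn.Linear(8, 8)   # stand-in for the first network module
second_module = nn.Linear(8, 8)  # stand-in for the second network module

# ... the first module would be trained on positive samples here ...
second_module.load_state_dict(first_module.state_dict())
```

After `load_state_dict`, both modules hold identical weights, so the second preset network continues training from the state the first network reached.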
7. The method according to claim 6, wherein the first network module further includes a second generator, a first discriminator, and a second discriminator, and training the first preset generative adversarial network according to the positive samples to obtain the first generative adversarial network comprises:
inputting a positive sample into the first generator in the first preset generative adversarial network to generate a first image that does not contain the target image;
inputting the first image into the second generator in the first preset generative adversarial network to generate a second image that contains the target image;
determining a first loss value of the first image by the first discriminator in the first preset generative adversarial network, and determining a second loss value of the second image by the second discriminator in the first preset generative adversarial network; and
adjusting the parameters of the first preset generative adversarial network according to the first loss value and the second loss value to obtain the first generative adversarial network.
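One training step of claim 7 resembles a cycle-consistent GAN update: the first generator removes the target from a positive sample, the second generator restores it, and the two discriminators supply the two loss values used to adjust the parameters. The sketch below assumes PyTorch and uses single layers as stand-ins for the four sub-networks; the shapes, layer choices, and loss formulation are all illustrative:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of one claim-7 training step on a positive sample.
g1 = nn.Conv2d(3, 3, 3, padding=1)  # first generator (stand-in)
g2 = nn.Conv2d(3, 3, 3, padding=1)  # second generator (stand-in)
d1 = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))  # first discriminator
d2 = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))  # second discriminator
opt = torch.optim.Adam(list(g1.parameters()) + list(g2.parameters()), lr=1e-4)

positive_sample = torch.randn(1, 3, 32, 32)  # contains the target image
first_image = g1(positive_sample)            # target image removed
second_image = g2(first_image)               # target image restored

bce = nn.BCEWithLogitsLoss()
loss1 = bce(d1(first_image), torch.ones(1, 1))   # first loss value
loss2 = bce(d2(second_image), torch.ones(1, 1))  # second loss value

# Adjust the preset network's parameters from both loss values.
(loss1 + loss2).backward()
opt.step()
```

Claim 8 is the mirror image of this step, starting from a negative sample and running the second generator first.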
8. The method according to claim 6, wherein the second network module further includes a second generator, a first discriminator, and a second discriminator, and training the second preset generative adversarial network according to the negative samples to obtain the second generative adversarial network comprises:
inputting a negative sample into the second generator in the second preset generative adversarial network to generate a third image that contains the target image;
inputting the third image into the first generator in the second preset generative adversarial network to generate a fourth image that does not contain the target image;
determining a third loss value of the third image by the second discriminator in the second preset generative adversarial network, and determining a fourth loss value of the fourth image by the first discriminator in the second preset generative adversarial network; and
adjusting the parameters of the second preset generative adversarial network according to the third loss value and the fourth loss value to obtain the second generative adversarial network.
9. The method according to any one of claims 1 to 8, wherein after performing target image detection on the image to be processed, the method further comprises:
when it is detected that the image to be processed contains the target image, extracting the target image region corresponding to the target image from the image to be processed; and
performing convolution processing on the image to be processed based on the convolution sub-network in the first generator to obtain the non-target image features of the image to be processed comprises:
performing convolution processing on the target image region based on the convolution sub-network to obtain the non-target image features of the image to be processed.
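Extracting the target image region before convolution is a simple crop. A minimal sketch assuming PyTorch; the bounding-box coordinates are hypothetical and would in practice come from the target image detection step:

```python
import torch

# Hypothetical sketch of claim 9: only the detected target image region is
# cropped out and fed to the convolution sub-network, rather than the full
# image. Coordinates are illustrative, not from the patent.
image = torch.randn(1, 3, 256, 256)            # image to be processed
x, y, w, h = 64, 64, 128, 128                  # detected bounding box
target_region = image[:, :, y:y + h, x:x + w]  # region passed to the conv sub-network
```

Processing only the cropped region reduces the amount of computation compared with convolving the whole image.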
10. An image processing apparatus, comprising:
a first acquisition unit, configured to obtain an image to be processed;
a detection unit, configured to perform target image detection on the image to be processed;
a second acquisition unit, configured to obtain, when it is detected that the image to be processed contains a target image, the first generator in a generative adversarial network, the generative adversarial network being trained from sample images, wherein the target image is the image that needs to be detected, and the first generator includes a convolution sub-network and a deconvolution sub-network connected to the convolution sub-network;
a first processing unit, configured to perform convolution processing on the image to be processed based on the convolution sub-network to obtain non-target image features of the image to be processed, the non-target image features being the image features corresponding to the part of the image to be processed other than the target image; and
a second processing unit, configured to perform deconvolution processing on the non-target image features based on the deconvolution sub-network to obtain an image that does not contain the target image.
11. A storage medium storing a plurality of instructions, wherein the instructions are adapted to be loaded by a processor to execute the steps of the image processing method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910378956.8A CN110163194A (en) | 2019-05-08 | 2019-05-08 | A kind of image processing method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110163194A true CN110163194A (en) | 2019-08-23 |
Family
ID=67633677
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910378956.8A Pending CN110163194A (en) | 2019-05-08 | 2019-05-08 | A kind of image processing method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110163194A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113065407A (en) * | 2021-03-09 | 2021-07-02 | 国网河北省电力有限公司 | Financial bill seal erasing method based on attention mechanism and generation countermeasure network |
CN113065407B (en) * | 2021-03-09 | 2022-07-12 | 国网河北省电力有限公司 | Financial bill seal erasing method based on attention mechanism and generation countermeasure network |
WO2023202570A1 (en) * | 2022-04-21 | 2023-10-26 | 维沃移动通信有限公司 | Image processing method and processing apparatus, electronic device and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108805789A (en) | A kind of method, apparatus, equipment and readable medium removing watermark based on confrontation neural network | |
CN108604369A (en) | A kind of method, apparatus, equipment and the convolutional neural networks of removal picture noise | |
CN111383232B (en) | Matting method, matting device, terminal equipment and computer readable storage medium | |
CN111127309B (en) | Portrait style migration model training method, portrait style migration method and device | |
CN113239875B (en) | Method, system and device for acquiring face characteristics and computer readable storage medium | |
CN112102185B (en) | Image deblurring method and device based on deep learning and electronic equipment | |
CN110163194A (en) | A kind of image processing method, device and storage medium | |
JP7282474B2 (en) | Encryption mask determination method, encryption mask determination device, electronic device, storage medium, and computer program | |
CN113139917A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111429374A (en) | Method and device for eliminating moire in image | |
CN110321892A (en) | A kind of picture screening technique, device and electronic equipment | |
CN112837251A (en) | Image processing method and device | |
CN108776800A (en) | A kind of image processing method, mobile terminal and computer readable storage medium | |
CN104184936B (en) | Image focusing processing method and system based on light field camera | |
CN110135428A (en) | Image segmentation processing method and device | |
CN112862712A (en) | Beautifying processing method, system, storage medium and terminal equipment | |
CN117455786A (en) | Multi-focus image fusion method and device, computer equipment and storage medium | |
Kim et al. | Restoring spatially-heterogeneous distortions using mixture of experts network | |
CN113379623B (en) | Image processing method, device, electronic equipment and storage medium | |
CN110163049B (en) | Face attribute prediction method, device and storage medium | |
CN116309158A (en) | Training method, three-dimensional reconstruction method, device, equipment and medium of network model | |
CN113256541B (en) | Method for removing water mist from drilling platform monitoring picture by machine learning | |
CN114742725A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
Nataraj et al. | Holistic image manipulation detection using pixel co-occurrence matrices | |
CN114943655A (en) | Image restoration system for generating confrontation network structure based on cyclic depth convolution |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |