CN117649353A - Image processing method, device, storage medium and electronic equipment


Info

Publication number
CN117649353A
Authority
CN
China
Prior art keywords
image
feature
noise
features
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210995243.8A
Other languages
Chinese (zh)
Inventor
王慧芬
张园
杨明川
薛俊达
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202210995243.8A
Publication of CN117649353A
Legal status: Pending

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 7/00 Image analysis > G06T 7/10 Segmentation; Edge detection > G06T 7/11 Region-based segmentation
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/08 Learning methods
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/20 Special algorithmic details > G06T 2207/20081 Training; Learning
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/20 Special algorithmic details > G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/20 Special algorithmic details > G06T 2207/20112 Image segmentation details > G06T 2207/20132 Image cropping


Abstract

The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, and relates to the technical field of image processing. The image processing method comprises the following steps: extracting image features of an image to be processed; cropping the noise region in the image features to obtain intermediate image features; and performing image reconstruction on the intermediate image features to obtain a noise-reduced target image. This scheme solves the technical problems of high model complexity and low image reconstruction efficiency in conventional methods, achieving the technical effects of improving image reconstruction efficiency and reducing the complexity of the image reconstruction model.

Description

Image processing method, device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technology, and in particular, to an image processing method, an image processing apparatus, a computer readable storage medium, and an electronic device.
Background
With the rapid development of machine learning and the wide application of machine vision in various engineering fields, image processing technology has also advanced rapidly. Extracting image features from a raw machine-vision image enables a variety of machine tasks, such as target detection and target tracking. However, when image features are extracted from the raw machine-vision image, noise is introduced into the image feature channels, so that noise appears in the reconstructed image.
At present, the above image reconstruction process is typically denoised using a generative adversarial network (GAN) or a stack of cascaded feature residual modules.
However, these denoising methods add a large number of network module layers, greatly increasing the size of the network model and occupying more resources, which results in low image reconstruction efficiency.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, a computer-readable storage medium, and an electronic device, thereby improving image reconstruction efficiency.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: extracting image features of an image to be processed; cropping the noise region in the image features to obtain intermediate image features; and performing image reconstruction on the intermediate image features to obtain a noise-reduced target image.
In an alternative embodiment of the present disclosure, extracting image features of the image to be processed includes: scaling the image size of the image to be processed according to a preset ratio to obtain the image features of the image to be processed.
In an optional embodiment of the disclosure, cropping the noise region in the image features to obtain intermediate image features includes: determining a region of the image to be processed whose pixel values equal a preset pixel value as an initial noise region; determining the feature region in the image features that corresponds to the initial noise region as the target noise region; and cropping image features of a preset width from the edge region corresponding to the target noise region, the remaining image features being the intermediate image features.
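The detection step above can be sketched in a few lines. This is an illustrative sketch only: the patent does not give an algorithm, so the list-of-lists feature representation and the function name are assumptions, and a noise region is taken here to be the border rows/columns whose entries all equal the preset pixel value.

```python
def find_noise_rows_cols(feature, preset_value=0.0):
    """Indices of rows/columns consisting entirely of the preset pixel value
    (assumed to mark the noise region in a 2D feature map)."""
    rows = [i for i, row in enumerate(feature)
            if all(v == preset_value for v in row)]
    cols = [j for j in range(len(feature[0]))
            if all(row[j] == preset_value for row in feature)]
    return rows, cols
```

On a 4x4 map whose outer ring is all zeros, this returns rows [0, 3] and columns [0, 3], i.e., the one-pixel border as the candidate noise region.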
In an optional embodiment of the disclosure, performing image reconstruction on the intermediate image features to obtain a noise-reduced target image includes: inputting the feature values of the intermediate image features into an image reconstruction network model for image reconstruction to obtain the noise-reduced target image.
In an optional embodiment of the present disclosure, inputting the feature values of the intermediate image features into the image reconstruction network model for image reconstruction to obtain the noise-reduced target image includes: performing feature enhancement extraction on the intermediate image features based on a residual learning module in the image reconstruction network model, to obtain feature-enhanced feature values of the intermediate image features.
In an optional embodiment of the disclosure, based on an amplifying module in the image reconstruction network model, the intermediate image features output by the residual learning module are scaled according to a preset ratio to obtain the target image.
In an optional embodiment of the disclosure, based on a quantization module in the image reconstruction network model, the intermediate image features output by the residual learning module are quantized to obtain the target image.
In a second aspect, one embodiment of the present disclosure provides an image processing apparatus including: the device comprises a feature extraction module, a feature clipping module and an image reconstruction module. The feature extraction module is used for extracting image features of the image to be processed; the feature clipping module is used for clipping the noise area in the image features to obtain intermediate image features; the image reconstruction module is used for reconstructing the image of the intermediate image features to obtain a noise-reduced target image.
In an optional embodiment of the disclosure, the feature extraction module is configured to perform image scaling on an image size of the image to be processed according to a preset ratio, so as to obtain an image feature of the image to be processed.
In an optional embodiment of the disclosure, the feature clipping module is configured to determine a region of the image to be processed whose pixel values equal a preset pixel value as an initial noise region; determine the feature region in the image features that corresponds to the initial noise region as the target noise region; and crop image features of a preset width from the edge region corresponding to the target noise region, the remaining image features being the intermediate image features.
In an optional embodiment of the disclosure, the image reconstruction module is configured to input a feature value of the intermediate image feature to the image reconstruction network model for image reconstruction, so as to obtain a noise-reduced target image.
In an optional embodiment of the disclosure, the image reconstruction module is configured to perform feature enhancement extraction on the intermediate image features based on a residual learning module in the image reconstruction network model, to obtain feature-enhanced feature values of the intermediate image features.
In an optional embodiment of the disclosure, the image reconstruction module is configured to perform corresponding size scaling on the intermediate image feature output by the residual learning module according to a preset ratio based on an amplifying module in the image reconstruction network model, so as to obtain the target image.
In an optional embodiment of the disclosure, the image reconstruction module is configured to perform quantization processing on the intermediate image feature output by the residual learning module based on a quantization module in the image reconstruction network model, so as to obtain the target image.
In a third aspect, an embodiment of the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method as above.
In a fourth aspect, one embodiment of the present disclosure provides an electronic device, including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image processing method as above via execution of the executable instructions.
The technical scheme of the present disclosure has the following beneficial effects:
in the image processing method, the image features of the image to be processed are extracted; the noise region in the image features is cropped to obtain intermediate image features; and image reconstruction is performed on the intermediate image features to obtain a noise-reduced target image. In this method, feature extraction is performed on the image to be processed to obtain low-resolution image features; the noise region in the image features is then cropped, and the intermediate image features obtained after cropping are reconstructed using an image reconstruction technique, thereby obtaining a reconstructed image with the noise removed. This approach avoids adding a large number of network modules, keeping the network structure simple and lightweight, saving computing resources, and greatly improving image processing efficiency. It thus solves the technical problems of high network complexity and low image reconstruction efficiency in the prior art, achieving the technical effects of reducing network complexity and improving image processing efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely some embodiments of the present disclosure and that other drawings may be derived from these drawings without undue effort.
FIG. 1 schematically illustrates a system architecture diagram for image reconstruction based on machine-vision-oriented encoding features in the present exemplary embodiment;
FIG. 2 schematically illustrates a schematic diagram of an image reconstruction network model in the present exemplary embodiment;
fig. 3 schematically shows an architecture diagram of an image processing system in the present exemplary embodiment;
fig. 4 schematically shows a flowchart of an image processing method in the present exemplary embodiment;
fig. 5 schematically shows a feature extraction network diagram in the present exemplary embodiment;
fig. 6 schematically shows a flow chart of a method for clipping a noise region in the present exemplary embodiment;
FIG. 7 schematically illustrates another method flow diagram for clipping noise regions in the present exemplary embodiment;
fig. 8 schematically illustrates an effect diagram of removing horizontal stripe tail noise in the present exemplary embodiment;
fig. 9 schematically illustrates an effect diagram of removing vertical stripe tail noise in the present exemplary embodiment;
fig. 10 is a schematic diagram showing the structure of an image processing apparatus in the present exemplary embodiment;
fig. 11 shows a schematic structural diagram of an electronic device in the present exemplary embodiment.
Detailed Description
Exemplary embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the exemplary embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. However, those skilled in the art will recognize that the aspects of the present disclosure may be practiced with one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only and not necessarily all steps are included. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
The image processing method provided by the embodiments of the disclosure can be applied to scenarios in which noise reduction is performed on an image, and in particular to noise reduction during image reconstruction in the machine vision field. Taking medical imaging as an example, medical images must display fine lesion areas that cannot be distinguished by the human eye, and thus require high image quality. However, the sensor is subject to process constraints and introduces noise while generating the image. Moreover, machine learning requires massive image data as training samples, and high-resolution images occupy large amounts of computer resources during transmission and storage, causing system delay. Therefore, in practical applications, the image features of a high-resolution input image are generally extracted and encoded into low-resolution image features for output, which are then reconstructed into a high-resolution image by an image reconstruction technique.
Fig. 1 schematically illustrates a system architecture diagram for image reconstruction based on machine-vision-oriented encoding features according to an exemplary embodiment of the present disclosure.
As shown in fig. 1, taking an image from the DIV2K dataset as an example, image a is the original image input at the transmitting end; the original image may be collected by an intelligent terminal (for example, an intelligent device with image collection capability, such as a smart camera). Image b is the reconstructed image output at the receiving end by the image reconstruction system based on machine-vision-oriented feature coding.
For example, the shape of the transmitting-end input image a is (h, w, 3), where h (height) is the height of the picture, w (width) is the width of the picture, and 3 is the number of image channels. Image a undergoes machine-task preprocessing to obtain an image x of shape (h', w', 3), where machine-task preprocessing adapts the input image a to machine-vision feature coding and may include geometric transformation (e.g., resizing), normalization, image enhancement, and the like.
Then, features of the image x are extracted through a feature extraction network to obtain image features f (H, W, N), where N is the number of feature channels obtained from the extraction, H is the height of the image features, and W is the width, with H = h'/upscale and W = w'/upscale; the scaling factor upscale is determined by the downsampling operations in the feature extraction network. Taking the stem layer of ResNet as an example, N may be 64 channels; the feature extraction network is the network structure up to and including the stem layer, and a total downsampling factor of upscale = 4 results from one convolution with stride 2 followed by one 3x3 max-pooling layer with stride 2. The image features of image x are then feature-encoded by a machine-vision feature codec and transmitted to the receiving end. Feature encoding, also called feature compression, represents the image feature information with fewer bits, thereby reducing data transmission delay and improving feature transmission efficiency between the transmitting and receiving ends.
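The overall scaling factor can be sketched as the product of the strides of the downsampling layers. This multiplicative composition is an assumption consistent with the ResNet stem described above (one stride-2 convolution followed by one stride-2 max pool):

```python
def total_upscale(strides):
    """Overall downsampling factor of a chain of layers with the given strides
    (assumed to compose multiplicatively)."""
    factor = 1
    for s in strides:
        factor *= s
    return factor
```

For the stem example in the text, total_upscale([2, 2]) gives 4, matching upscale = 4.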
Machine-vision-oriented decoding is then performed at the receiving end to obtain the decoded machine-vision features (extracted decoded features) f' (H, W, N), which are input into the image reconstruction network model to obtain a reconstructed image I (h', w', 3). The machine-vision image features f' (H, W, N) may correspond to one or more machine tasks, which may be target detection, target recognition, target segmentation, target tracking, and the like. The image reconstruction network model reconstructs the image from the input image feature values.
Finally, the reconstructed image I is restored to an image b (h, w, 3) through a machine task post-processing method so as to enable a user to visually check or perform other machine tasks, wherein the machine task post-processing method corresponds to the machine task pre-processing method.
Based on the system architecture for image reconstruction based on machine vision coding features shown in fig. 1, fig. 2 schematically shows a schematic diagram of an image reconstruction network model according to an exemplary embodiment of the present disclosure.
Referring to fig. 2, the image reconstruction network model may include a feature alignment module, a residual learning module, an upsampling module (also referred to as an amplifying module), and a quantization processing module (i.e., a quantization module).
The input of the image reconstruction network model is the image features, and the output is the reconstructed image. The feature alignment module aligns the input image features: features that are highly correlated but spatially misaligned are aligned for the subsequent residual learning. The residual learning module enhances the image features to reconstruct image detail information, and may comprise stacked residual learning blocks, stacked attention modules, and the like. The upsampling module enlarges the image by a certain ratio to expand its pixels. The quantization processing module maps the pixel values of the image into the range 0-255, yielding the reconstructed image.
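The last two modules just described can be sketched minimally. The interpolation method and the exact quantization mapping are not specified in the patent, so nearest-neighbor replication and a linear min-max rescaling to 0-255 are assumed here; the function names are illustrative.

```python
def upsample_nearest(feature, factor):
    """Enlarge a 2D map by `factor` via nearest-neighbor pixel replication
    (stands in for the amplifying/upsampling module)."""
    out = []
    for row in feature:
        expanded = [v for v in row for _ in range(factor)]
        out.extend([list(expanded) for _ in range(factor)])
    return out

def quantize_to_uint8(values):
    """Linearly map a flat list of floats onto integers in the range 0-255
    (stands in for the quantization module)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0 for _ in values]  # degenerate constant input
    scale = 255.0 / (hi - lo)
    return [int(round((v - lo) * scale)) for v in values]
```

For example, upsample_nearest([[1, 2]], 2) yields a 2x4 map, and quantize_to_uint8([0.0, 0.5, 1.0]) spans the full 0-255 range.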
However, in the above image reconstruction process, the image features of the input low-resolution image are produced by machine-vision feature extraction (i.e., a feature extraction network), and operations such as padding and convolution are usually introduced during feature extraction. These operations tend to create side effects in the image features, i.e., noise (for example, stripe tail noise, a special noise with some periodicity and directionality that is distributed in stripes across the image). When image features carrying such side effects are input into the image reconstruction network, edge noise is generated in the reconstructed image, degrading its quality. In some conventional technologies, these image features with side effects are denoised by adding a generative adversarial network (GAN) or a stack of cascaded image feature residual modules to the system shown in fig. 1.
However, such noise reduction methods increase the network structure and training cost of the whole image reconstruction model, resulting in high model complexity and low image reconstruction efficiency.
In view of the foregoing, exemplary embodiments of the present disclosure provide an image processing method, which does not need to add a network structure in the image reconstruction system shown in fig. 1, but performs noise detection on an image feature of an image to be processed, so as to perform clipping processing on a noise region in the image feature of the image to be processed, and obtain an intermediate image feature; and finally, carrying out image reconstruction on the intermediate image features to realize the noise reduction processing process of the low-resolution image. The method avoids the network structure added in the image noise reduction process of some technologies, thereby reducing the complexity of the model in the image processing process and further improving the efficiency of image reconstruction.
Fig. 3 schematically shows an architecture diagram of an image processing system in the present exemplary embodiment. Referring to fig. 3, an image processing system 300 includes a plurality of terminal apparatuses 301 and a service apparatus 302. A user may input the image to be processed through a terminal device 301 to initiate an image reconstruction service request. After receiving the request, the service device 302 extracts the image features of the image to be processed, crops the noise region in the image features to obtain intermediate image features, and performs image reconstruction on the intermediate image features to obtain a noise-reduced target image. Finally, the noise-reduced target image (e.g., a reconstructed image) is fed back to the terminal device 301.
It should be noted that the service device 302 may be a single server or a server cluster formed by a plurality of servers. In the system architecture 300 shown in fig. 3, the numbers of terminal devices 301 and service devices 302 are merely exemplary; a greater or smaller number falls within the scope of protection of the present application. In the operational scenario described above, the terminal device may be, for example, a personal computer, a server, a personal digital assistant (PDA), a notebook, or any other computing device with networking capability.
Having knowledge of the system architecture of the present disclosure, a detailed description will be given of the technical solution of the image processing method of the present disclosure with reference to fig. 4.
Fig. 4 schematically illustrates a flowchart of an image processing method in this exemplary embodiment, where the method may be performed by any apparatus for performing an image processing method, and the apparatus may be implemented by software and/or hardware, and the image processing method provided in this exemplary embodiment of the present disclosure may be performed by any electronic device having computing processing capabilities, for example, the terminal device 301 and/or the service device 302.
In the following embodiment, an image processing method performed by the service apparatus 302 is exemplified, but the present disclosure is not limited thereto. Referring to fig. 4, the image processing method provided by the exemplary embodiment of the present disclosure may include steps S401 to S403:
Step S401, extracting image characteristics of the image to be processed.
The image to be processed may be an image collected by the transmitting end through an intelligent device, or an image that has already been reconstructed once by the image reconstruction network model, i.e., a reconstructed image that still contains a noise region.
Step S402, cropping the noise region in the image features to obtain intermediate image features.
Illustratively, the noise region is the region of the image to be processed where noise appears; the noise may be stripe tail noise or another type of noise. After the noise region of the image to be processed is determined, it is cropped away to obtain the intermediate image features.
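The cropping step can be sketched as removing a preset-width border from the feature map once the noise region has been located at its edges. This is a hedged sketch: the 2D list representation and the function name are illustrative assumptions, not the patent's implementation.

```python
def crop_border(feature, width):
    """Remove `width` rows/columns from every edge of a 2D feature map,
    keeping the interior as the intermediate image features."""
    return [row[width:len(row) - width]
            for row in feature[width:len(feature) - width]]
```

For a 4x4 map with a one-pixel noise border, crop_border(feature, 1) keeps only the 2x2 interior.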
Step S403, performing image reconstruction on the intermediate image features to obtain a noise-reduced target image.
Image reconstruction is performed on the intermediate image features from which the noise region has been cropped, thereby obtaining the target image with the noise removed.
In the technical solutions provided by some embodiments of the present disclosure, the image features of the image to be processed are extracted; the noise region in the image features is cropped to obtain intermediate image features; and image reconstruction is performed on the intermediate image features to obtain a noise-reduced target image. In this method, when noise exists in the features of the image to be processed, the noise region is cropped, and the cropped image features are reconstructed by an image reconstruction technique to obtain an image with the noise removed. This avoids adding a large number of network modules, keeping the network structure simple and lightweight, saving computing resources, and greatly improving image processing efficiency. It thus solves the technical problems of high network complexity and low image reconstruction efficiency in the prior art, achieving the technical effects of reducing network complexity and improving image processing efficiency.
According to some embodiments of the present disclosure, when extracting image features of the image to be processed, feature extraction may be performed through a feature extraction network. The feature extraction network is trained from a neural network model and a sample image set, and outputs the image features of the input image.
The process of extracting features of an image to be processed through a feature extraction network to obtain image features corresponding to the image to be processed will be described in detail below with reference to fig. 5.
Fig. 5 schematically shows a feature extraction network diagram in the present exemplary embodiment. Referring to fig. 5, image preprocessing is first performed on the image to be processed. Image preprocessing may include geometric transformation (e.g., resizing), normalization, image enhancement, and the like. Through preprocessing, the image input to the feature extraction network undergoes a series of standardization steps and is converted into a fixed, standard form, facilitating the subsequent feature extraction computations.
The preprocessed image is then passed through the feature extraction network for feature extraction, and the features are encoded and decoded by a feature encoding unit and a feature decoding unit to finally obtain the reconstructed image features. The feature extraction network may include one or more of a convolution unit, a normalization unit, and an activation unit, and may be obtained by training on sample data.
For example, the image preprocessing and the feature extraction network may adopt the image preprocessing of Faster R-CNN X101 FPN in detectron2 and the network architecture from the backbone network ResNet101 stem module onward, with the network weights of Faster R-CNN X101 FPN for object detection taken from the detectron2 model zoo.
Performing feature extraction on the image to be processed before image reconstruction retains only the important information in the image. This avoids the waste of computing resources caused by directly using the storage matrix of the image to be processed as the image features for various operations, and greatly reduces the storage and computational complexity of the image reconstruction process.
In an exemplary embodiment of the present disclosure, when extracting an image feature of an image to be processed, image scaling may be performed on an image size of the image to be processed according to a preset ratio, to obtain the image feature of the image to be processed.
For example, when the feature extraction network is used to extract features of the image to be processed, the image size of the image to be processed (for example, its width and height) may be scaled by a preset ratio, after which the cropping of the noise region is performed. The scaling operation ensures that the image features of the image to be processed are scaled according to the preset ratio without deformation.
For example, if the image height H of the image to be processed is 4, the image width W is 4, and the scale factor upscale is 4, then the image features of the image to be processed have a feature height h = H/upscale = 1 and a feature width w = W/upscale = 1.
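The size relationship above can be sketched as follows; the function name and the divisibility check are illustrative additions, not part of the disclosure.

```python
def feature_size(height, width, upscale):
    """Map an image size (H, W) to its feature-map size under a scale factor.

    Mirrors the worked example in the text: H = 4, W = 4, upscale = 4
    gives a feature height h = 1 and a feature width w = 1.
    """
    if height % upscale or width % upscale:
        raise ValueError("image size must be divisible by the scale factor")
    return height // upscale, width // upscale

print(feature_size(4, 4, 4))  # → (1, 1)
```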
In the process of extracting features of the image to be processed with the feature extraction network, performing the scaling operation on the image size and then extracting the image features converts the high-resolution image into low-resolution image features for transmission, thereby reducing system delay.
Further, when the clipping process is performed on the noise region in the image feature in step S402 to obtain the intermediate image feature, the noise region that needs clipping may be determined.
In an exemplary embodiment of the present disclosure, when cropping a noise region in the image features of the image to be processed to obtain intermediate image features, a region in the image to be processed in which the pixel values equal a preset pixel value may be determined as an initial noise region; the feature region corresponding to the initial noise region in the image features is determined as the noise region in the image features; and image features of a preset width at the edge region corresponding to the noise region are cropped to obtain the remaining intermediate image features.
The preset pixel value may be set according to a specific feature of noise.
For example, when cropping a noise region in the image features of the image to be processed, a region in which all pixel values equal the preset pixel value may be searched for in the image to be processed and determined as the initial noise region. The initial noise region is then mapped to its corresponding feature region in the extracted image features to obtain the target noise region of the image features, on which the subsequent noise-cropping operation is performed.
Taking streak tail noise in the image to be processed as an example: padding and convolution operations are usually introduced when a convolutional neural network extracts image features, and these operations easily produce side effects in the intermediate features of the image. Streak tail noise is noise that gathers in the boundary region of a picture, has directionality, and is distributed in streaks, with the pixel values of the noise region being 0. Therefore, the initial noise direction and the initial noise region of the image to be processed can be determined by judging whether the pixel values of a preset-width or preset-height band in the boundary region of the image to be processed are all 0. The initial noise direction remains unchanged in the image features corresponding to the image to be processed; that is, the target noise direction contained in the image features is the same as the initial noise direction determined in the image to be processed. Finally, image features of a preset width at the edge region corresponding to the target noise region, along the initial noise direction, are cropped from the image features to obtain the remaining intermediate image features.
By judging whether pixel values of the image to be processed equal the preset pixel value, the noise direction and the noise region can be determined rapidly, so that features along the same noise direction are cropped from the image features corresponding to the image to be processed, the noise is removed from the reconstructed image, and the quality of the image after reconstruction is improved.
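As a minimal sketch of the detection step described above, assuming NumPy arrays and a hypothetical boundary-band width k (the text's preset width or height), streak tail noise and its direction can be found by testing whether the boundary band is all zero:

```python
import numpy as np

def detect_streak_noise(image, k=3):
    """Return "horizontal", "vertical", or None depending on whether the
    last k rows or last k columns of `image` are all the preset pixel
    value 0 (the signature of streak tail noise in the text).

    `image` is an (H, W) or (H, W, C) array; k = 3 is an illustrative default.
    """
    img = np.asarray(image)
    if not img[-k:, ...].any():      # last k rows are all zero
        return "horizontal"
    if not img[:, -k:, ...].any():   # last k columns are all zero
        return "vertical"
    return None

img = np.ones((6, 6))
img[-3:, :] = 0                      # zero out the last 3 rows
print(detect_streak_noise(img))      # → horizontal
```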
In an exemplary embodiment of the present disclosure, when image reconstruction is performed on intermediate image features to obtain a noise-reduced target image, feature values of the intermediate image features are input to an image reconstruction network model to perform image reconstruction, so as to obtain the noise-reduced target image.
When the intermediate image features are subjected to image reconstruction, the feature values of the intermediate image features after noise removal can be input into an image reconstruction network model to perform image reconstruction, so that a noise-reduced target image can be obtained.
By inputting the low-resolution intermediate image features, with the noise region cropped out, into the image reconstruction network model for image reconstruction, a high-quality image can be reconstructed from the feature values of the noise-removed region, so that the image quality after reconstruction is further improved without enlarging the model's network structure.
In an exemplary embodiment of the present disclosure, when image reconstruction is performed based on the image reconstruction network model, feature enhancement extraction may be performed on intermediate image features based on a residual error learning module in the image reconstruction network model, so as to obtain feature values after feature enhancement on the intermediate image features.
The residual error learning module may be composed of a convolution module, an activation function module, a residual scaling module, and the like. The residual error learning module performs feature enhancement extraction on the intermediate image features using the residual principle.
For example, since the low-frequency information carried by low-resolution image features is similar to that carried by high-resolution image features, the residual error learning module would spend a lot of time learning this low-frequency information during model training. Therefore, in practical applications, the residual error learning module only needs to learn the high-frequency residual between the high-resolution and low-resolution image features to address the low-resolution problem; this is the residual principle. In the image, the low-frequency information of the image features corresponds to regions where the gray value changes slowly, namely the approximate outline and contour of the image, its approximate information; the high-frequency information corresponds to regions where the gray value changes drastically, reflecting small-scale detail such as the edges, noise, and fine structure of the image.
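A minimal single-channel sketch of such a residual block, assuming one 3x3 convolution, a ReLU activation, and a residual-scaling factor (the kernel and the 0.1 scale are illustrative choices, not values from the disclosure):

```python
import numpy as np

def conv3x3_same(x, kernel):
    """Naive 'same'-padded 3x3 convolution on a single-channel (H, W) map."""
    h, w = x.shape
    padded = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def residual_block(x, kernel, scale=0.1):
    """Residual learning with residual scaling: the branch only models the
    high-frequency residual, which is scaled and added to the identity path."""
    residual = np.maximum(conv3x3_same(x, kernel), 0.0)  # conv + ReLU
    return x + scale * residual

x = np.random.default_rng(0).random((8, 8))
y = residual_block(x, kernel=np.full((3, 3), 1.0 / 9.0))
print(y.shape)  # → (8, 8)
```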
For example, before image feature enhancement using the residual learning module, a morphing (warp) operation may also be used to align highly correlated but spatially misaligned image features for subsequent image detail reconstruction.
The residual error learning module in the image reconstruction network model can enhance the image characteristics extracted from the image to be processed, so that the high-frequency information of the image is learned through the image reconstruction network model, the resolution of the reconstructed image is improved, the complexity of the model is reduced, and the learning rate is accelerated.
In an exemplary embodiment of the disclosure, based on an amplifying module in an image reconstruction network model, corresponding size scaling is performed on intermediate image features output by a residual learning module according to a preset proportion, so as to obtain a target image.
The amplifying module is an up-sampling module. The up-sampling module scales the intermediate image features output by the residual error learning module to a preset feature size according to a certain rule and a preset ratio, so that the size of the input image features is enlarged back to the pre-scaling image size. For example, if the size of the image x' preprocessed for the machine task is (H', W', 3) and the feature size input to the image reconstruction network model is (H, W, N), the output target intermediate image is scaled by the up-sampling module to the same image size (H', W', 3) as x'.
By way of example, the up-sampling module may include a convolution module and a pixel reorganization module (Pixel-Shuffle). The convolution module expands the number of channels of the intermediate image output by the residual error learning module, and the Pixel-Shuffle then interleaves the feature maps across channels, so that the output target image is enlarged to the same size as the machine-task-preprocessed image, realizing sub-pixel reconstruction of the target image.
By performing corresponding size scaling on the intermediate image output by the residual error learning module according to the preset ratio, the up-sampling module enlarges the reduced image features to the same size as the machine-task-preprocessed image, enlarging the feature resolution and improving the quality of the reconstructed image.
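The Pixel-Shuffle rearrangement described above can be sketched in NumPy as follows (the layout (C*r*r, H, W) to (C, H*r, W*r) follows the common sub-pixel convolution convention; the preceding channel-expanding convolution is omitted):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r), i.e. the
    Pixel-Shuffle step that trades channels for spatial resolution."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)     # split the two scale factors out of channels
    x = x.transpose(0, 3, 1, 4, 2)   # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# A 1x1 feature map with 3*4*4 channels becomes a 3-channel 4x4 image,
# matching the text's scale factor upscale = 4.
feat = np.arange(48, dtype=float).reshape(48, 1, 1)
print(pixel_shuffle(feat, 4).shape)  # → (3, 4, 4)
```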
In an exemplary embodiment of the disclosure, based on a quantization module in the image reconstruction network model, the intermediate image features output by the residual learning module are quantized to obtain the target image.
The quantization module performs quantization on the input image features. Quantization converts the continuous variation interval of brightness corresponding to image pixels into single specific values, i.e., maps the continuous gray values of the intermediate image features onto the 0-255 gray-level range to obtain the target image.
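A minimal sketch of such a quantization step, assuming a min-max mapping onto 0-255 (the disclosure does not specify the exact mapping, so the scaling rule here is an assumption):

```python
import numpy as np

def quantize_to_uint8(features):
    """Map continuous feature values onto the 0-255 gray-level range."""
    f = np.asarray(features, dtype=float)
    lo, hi = f.min(), f.max()
    if hi == lo:                      # flat input: avoid division by zero
        return np.zeros_like(f, dtype=np.uint8)
    scaled = (f - lo) / (hi - lo) * 255.0
    return np.clip(np.rint(scaled), 0, 255).astype(np.uint8)

q = quantize_to_uint8(np.linspace(-1.0, 1.0, 5))
print(q.tolist())  # → [0, 64, 128, 191, 255]
```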
In general, the more quantization levels, the richer the image detail, the higher the gray-scale resolution of the image, and the better the image quality; conversely, the fewer the quantization levels, the poorer the image detail and the lower the gray-scale resolution, and contouring artifacts may appear in the image, reducing its quality.
The intermediate image features output by the residual error learning module are quantized by the quantization module, so that the image resolution can be further improved, and the quality of the reconstructed image is improved.
On the basis of any of the above exemplary embodiments, a method flow of determining a noise area and performing a clipping operation will be described in detail with reference to fig. 6 and 7.
Fig. 6 schematically shows a flow chart of a method for clipping a noise region in the present exemplary embodiment. Referring to fig. 6, the image features are taken as input of the reconstructed image network model, wherein the input image features may be image features obtained by performing feature extraction processing on the image to be processed through the method shown in fig. 5.
The quantized image I generated by one image reconstruction will be described below taking the image to be processed as an example, where the size of the quantized image I is (3, h ', w').
Illustratively, let the input image features be (N, H, W), where H is the height of the image features, H = h'/upscale; W is the width of the image features, W = w'/upscale; and N is the number of channels. The image features Z may be feature-aligned to obtain image features (M, H, W).
Then, the features sequentially pass through the residual error learning module, the up-sampling module, and the quantization processing module of the reconstructed image network model, obtaining the image I (3, h', w') generated by the first image reconstruction.
Finally, whether streak tail noise exists in the quantized image is judged, and the noise-reduced target image is obtained according to the judgment result.
Specifically, whether streak tail noise exists in the quantized image can be determined by judging whether the pixel values of the quantized image's boundary region are all 0. If the pixel values of the boundary region are not all 0, the quantized image is free of streak tail noise, and the quantized image I (3, h', w') is output directly as the target image. If the pixel values of a horizontal or vertical band at the quantized image's boundary are all 0, the quantized image contains streak tail noise, and a cropping operation is performed on the region of the image features Z corresponding to the edge region of the quantized image I, yielding the remaining intermediate image features. The intermediate image features are then passed again through the residual error learning module, the up-sampling module, and the quantization processing module of the reconstructed image network model to obtain the quantized image I' (3, h', w') generated by the second image reconstruction, which is taken as the target image.
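The two-pass control flow above can be sketched as follows, with `reconstruct` standing in for the residual-learning/up-sampling/quantization pipeline and K, R, C as hypothetical values of the thresholds named in the text:

```python
import numpy as np

K, R, C = 3, 2, 2   # boundary-band and cropping thresholds (illustrative)

def denoise_reconstruct(reconstruct, features, image):
    """Fig. 6/7 flow: test the first-pass quantized image (3, h', w') for
    streak tail noise, crop the aligned features Z (M, H, W) if noise is
    found, and run the reconstruction pipeline a second time."""
    if not image[:, -K:, :].any():      # last K rows all zero: horizontal noise
        return reconstruct(features[:, :-R, :])   # keep H[0:-R]
    if not image[:, :, -K:].any():      # last K columns all zero: vertical noise
        return reconstruct(features[:, :, :-C])   # keep W[0:-C]
    return image                        # no streak noise: first pass stands

feats = np.ones((4, 8, 8))                         # Z (M, H, W)
noisy = np.ones((3, 16, 16))
noisy[:, -K:, :] = 0                               # horizontal streak noise
print(denoise_reconstruct(lambda z: z, feats, noisy).shape)  # → (4, 6, 8)
```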
Fig. 7 is a detailed process diagram of determining whether streak tail noise exists in the quantized image of fig. 6 and reconstructing the image according to the determination result; fig. 8 and fig. 9 are schematic diagrams of the results of removing transverse streak tail noise and longitudinal streak tail noise, respectively.
Fig. 7 schematically shows another flow chart of a method for clipping a noise region in the present exemplary embodiment.
Referring to fig. 7, if streak tail noise exists in the quantized image, either the pixel values of a preset number of rows are all 0 in the height direction (referred to as transverse streak tail noise for short), or the pixel values of a preset number of columns are all 0 in the width direction (referred to as longitudinal streak tail noise for short). Therefore, when judging whether streak tail noise exists in the quantized image, the height direction and the width direction must be examined separately.
According to some embodiments of the present disclosure, it is first determined whether the pixel values of the quantized image I (3, h', w') in the height-direction band H[-K:] are all 0, i.e., whether the last K rows of the boundary region are all 0 in the height direction. If so, the image features Z (M, H, W) processed by the feature alignment module are cropped, and only the features in the height range H[0:-R] are retained, so that the feature Z1 is Z[:, 0:-R, :]; at this point Z1 is an intermediate image feature. Here K and R are preset thresholds in the height direction, which can be adjusted according to the noise characteristics.
According to some embodiments of the present disclosure, if the pixel values of the boundary region in the height direction are not all 0, it is then determined whether the pixel values of the width-direction boundary band W[-D:] are all 0. If so, the image features Z (M, H, W) processed by the feature alignment module are cropped, and only the features in the width range W[0:-C] are retained, so that the feature Z2 is Z[:, :, 0:-C]; at this point Z2 is an intermediate image feature. Here D and C are preset thresholds in the width direction, which can be adjusted according to the noise characteristics.
After obtaining the intermediate image feature Z1 or Z2, the feature Z1 or Z2 can be input again into the residual error learning module, the up-sampling module, and the quantization processing module of the reconstructed image network model to obtain the quantized image I' (3, h', w'), which is the noise-reduced target image. If no streak tail noise exists in either the height direction or the width direction, the quantized image I (3, h', w') generated by the first image reconstruction is taken as the target image.
For example, it may first be judged whether the last 3 rows of pixel values in the height direction of the quantized image I (3, h', w') are all 0. If so, the image features Z (M, H, W) processed by the feature alignment module are cropped, retaining the feature Z1 consisting of the first H-2 rows from the top in the height direction. Otherwise, it is judged whether the last 3 columns of pixel values in the width direction of the quantized image I (3, h', w') are all 0. If so, the feature Z (M, H, W) is cropped, retaining the feature Z2 consisting of the first W-2 columns from the left in the width direction. Finally, the feature Z1 or Z2 re-enters the residual error learning module, the up-sampling module, and the quantization processing module to output the target image I' (3, h', w').
According to some embodiments of the present disclosure, the training samples of the feature extraction network and the reconstructed image network model are pictures from the COCO train2017 dataset, and after the trained model is obtained, machine-vision-feature-to-image reconstruction inference is performed on the DIV2K-test-LR-bicubic-X4 dataset. The images shown in fig. 8 and fig. 9 are example images from the DIV2K-test-LR-bicubic-X4 dataset.
Fig. 8 schematically shows the effect of removing transverse streak tail noise in the present exemplary embodiment. Referring to fig. 8, fig. 8 (a) is the quantized image I obtained from the first pass of the reconstructed image network model, in which transverse streak tail noise is present; fig. 8 (b) is the target image I' obtained by cropping the image features of image I and passing them through the reconstructed image network model again.
Fig. 9 schematically shows the effect of removing longitudinal streak tail noise in the present exemplary embodiment. Referring to fig. 9, fig. 9 (a) is the quantized image I obtained from the first pass of the reconstructed image network model, in which longitudinal streak tail noise is present; fig. 9 (b) is the target image I' obtained by cropping the image features of image I and passing them through the reconstructed image network model again.
Illustratively, Table 1 compares the image quality of images reconstructed with the cropping scheme of the present disclosure against images reconstructed without it, with the feature extraction network, the reconstructed image network model, and the dataset held the same as in some existing techniques.
TABLE 1
As shown in Table 1, three image quality evaluation indexes are used: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Multi-Scale Structural Similarity (MS-SSIM).
The minimum value of PSNR is 0, and the larger the PSNR, the less the image distortion and the higher the image quality. SSIM is an index measuring the similarity of two images; the larger the SSIM, the more similar the two images and the higher the image quality. MS-SSIM evaluates images at multiple resolutions, and likewise, the larger the MS-SSIM, the higher the image quality. As can be seen from Table 1, compared with techniques that reconstruct the image without the cropping scheme, the technique provided by the present disclosure improves image quality.
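As an illustration of the first metric, a PSNR computation can be sketched as follows (the peak value 255 matches 8-bit images; SSIM and MS-SSIM require windowed statistics and are omitted here):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger means less distortion."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.full((4, 4), 100.0)
b = a + 10.0                             # uniform error of 10 gray levels
print(round(psnr(a, b), 2))  # → 28.13
```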
According to some embodiments of the present disclosure, when training a machine-vision-feature-to-image reconstruction model for pictures acquired by a fixed camera and performing noise reduction during image reconstruction, since the imaging size of a fixed camera is always fixed, only the transverse streak tail noise in the height direction needs to be removed if streak tail noise exists.
On the basis of the flow illustrated in fig. 7, the feature Z (M, H, W) obtained after the scaling operation and feature alignment is cropped, retaining the feature Z1 consisting of the first H-2 rows from the top in the height direction. The feature Z1 is then input again into the residual error learning module, the up-sampling module, and the quantization processing module of the reconstructed image network model to obtain the quantized image I' (3, h', w'), which is the noise-reduced target image.
By the above image processing method, whether streak tail noise exists can be determined, the point where the streak tail noise is introduced can be located rapidly, the denoising is precise, and the quality of the reconstructed image is improved.
Further, in order to implement the above-described image processing method, an image processing apparatus is provided in one embodiment of the present disclosure. Fig. 10 shows a schematic configuration diagram of an image processing apparatus 1000. The image processing apparatus 1000 includes a feature extraction module 1001, a feature clipping module 1002, and an image reconstruction module 1003.
The feature extraction module 1001 is configured to extract image features of an image to be processed; the feature clipping module 1002 is configured to clip a noise region in the image feature to obtain an intermediate image feature; the image reconstruction module 1003 is configured to perform image reconstruction on the intermediate image feature, so as to obtain a noise-reduced target image.
In an alternative embodiment, the feature extraction module 1001 is configured to perform image scaling on an image size of the image to be processed according to a preset ratio, so as to obtain an image feature of the image to be processed.
In an alternative embodiment, the feature clipping module 1002 is configured to determine a region in the image to be processed in which the pixel values equal a preset pixel value as the initial noise region; determine the feature region corresponding to the initial noise region in the image features as the target noise region in the image features; and crop image features of a preset width at the edge region corresponding to the target noise region to obtain the remaining intermediate image features.
In an alternative embodiment, the image reconstruction module 1003 is configured to input the feature value of the intermediate image feature into the image reconstruction network model for image reconstruction, so as to obtain the target image after noise reduction.
In an alternative embodiment, the image reconstruction module 1003 is configured to perform feature enhancement extraction on the intermediate image feature based on a residual learning module in the image reconstruction network model, so as to obtain a feature value after feature enhancement on the intermediate image feature.
In an alternative embodiment, the image reconstruction module 1003 is configured to perform corresponding size scaling on the target intermediate image features output by the residual learning module according to a preset ratio, based on an amplifying module in the image reconstruction network model, so as to obtain the target image.
In an alternative embodiment, the image reconstruction module 1003 is configured to perform quantization processing on the intermediate image feature output by the residual learning module based on a quantization module in the image reconstruction network model, so as to obtain the target image.
The image processing apparatus 1000 provided in the embodiments of the present disclosure may execute the technical scheme of the image processing method in any of the embodiments, and the implementation principle and beneficial effects of the image processing method are similar to those of the image processing method, and reference may be made to the implementation principle and beneficial effects of the image processing method, and no further description is given here.
Exemplary embodiments of the present disclosure also provide a computer readable storage medium, which may be implemented in the form of a program product comprising program code for causing an electronic device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the above section of the "exemplary method" when the program product is run on the electronic device. In one embodiment, the program product may be implemented as a portable compact disc read only memory (CD-ROM) and includes program code and may be run on an electronic device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider). In the embodiments of the present disclosure, any of the steps in the above image processing method may be implemented when the program code stored in the computer-readable storage medium is executed.
Referring to fig. 11, the exemplary embodiment of the present disclosure further provides an electronic device 1100, which may be a background server of an information platform. The electronic device 1100 is described below with reference to fig. 11. It should be understood that the electronic device 1100 shown in fig. 11 is merely an example and should not be construed as limiting the functionality and scope of use of the disclosed embodiments.
As shown in fig. 11, the electronic device 1100 is embodied in the form of a general purpose computing device. Components of electronic device 1100 may include, but are not limited to: at least one processing unit 1110, at least one memory unit 1120, a bus 1130 connecting the different system components (including the memory unit 1120 and the processing unit 1110), a display unit 1140.
Wherein the storage unit stores program code that is executable by the processing unit 1110 such that the processing unit 1110 performs steps according to various exemplary embodiments of the present invention described in the above-described "exemplary methods" section of the present specification. For example, the processing unit 1110 may perform steps S401 to S403 as shown in fig. 4.
The storage unit 1120 may include a readable medium in the form of a volatile storage unit, such as a Random Access Memory (RAM) 11201 and/or a cache memory 11202, and may further include a Read Only Memory (ROM) 11203.
The storage unit 1120 may also include a program/utility 11204 having a set (at least one) of program modules 11205, such program modules 11205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The bus 1130 may be a local bus representing one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a bus using any of a variety of bus architectures.
The electronic device 1100 may also communicate with one or more external devices 1200 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 1100, and/or any devices (e.g., routers, modems, etc.) that enable the electronic device 1100 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 1150. Also, electronic device 1100 can communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 1160. As shown, network adapter 1160 communicates with other modules of electronic device 1100 via bus 1130. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 1100, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (e.g., a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions to cause a computing device (e.g., a personal computer, a server, a terminal device, or a network device) to perform the method according to the embodiments of the present disclosure.
Furthermore, the above-described drawings are only schematic illustrations of processes included in the method according to the exemplary embodiment of the present invention, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, comprising:
extracting image features of an image to be processed;
clipping a noise region in the image features to obtain intermediate image features; and
performing image reconstruction on the intermediate image features to obtain a noise-reduced target image.
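Read as a pipeline, claim 1 describes three stages: feature extraction, noise clipping, and image reconstruction. A minimal NumPy sketch of that flow is given below under simplifying assumptions: the "features" are just a downscaled copy of the image, noise is marked by zero-valued feature cells, and reconstruction is nearest-neighbour upscaling. All function names are illustrative and do not come from the patent.

```python
import numpy as np

def extract_features(image, ratio=0.5):
    """Downscale the image by a preset proportion (nearest-neighbour stride)."""
    stride = round(1 / ratio)
    return image[::stride, ::stride]

def clip_noise_region(features, noise_mask):
    """Zero out feature positions flagged as noise."""
    return np.where(noise_mask, 0.0, features)

def reconstruct(features, ratio=0.5):
    """Upscale features back to the original size (nearest-neighbour)."""
    factor = round(1 / ratio)
    return np.repeat(np.repeat(features, factor, axis=0), factor, axis=1)

image = np.arange(16, dtype=float).reshape(4, 4)
feats = extract_features(image)        # 2x2 intermediate representation
mask = feats == 0                      # assumption: zero-valued cells are noise
target = reconstruct(clip_noise_region(feats, mask))
```

A real implementation would replace the stride-based scaling and masking with learned operators; the sketch only mirrors the order of the three claimed steps.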
2. The image processing method according to claim 1, wherein the extracting the image features of the image to be processed includes:
scaling the image size of the image to be processed according to a preset proportion to obtain the image features of the image to be processed.
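Under claim 2, the "image features" are simply the image scaled by a preset proportion. A hypothetical nearest-neighbour sketch follows; the stride-based downscale is an assumption, since the patent does not specify the scaling algorithm.

```python
import numpy as np

def scale_to_features(image, ratio):
    """Treat the image scaled by a preset proportion as its feature map."""
    stride = round(1 / ratio)          # e.g. ratio 0.5 -> keep every 2nd pixel
    return image[::stride, ::stride]

img = np.arange(36.0).reshape(6, 6)
feat = scale_to_features(img, 0.5)     # 6x6 image -> 3x3 feature map
```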
3. The image processing method according to claim 1, wherein the clipping the noise region in the image feature to obtain an intermediate image feature includes:
determining a region whose pixel values equal a preset pixel value in the image to be processed as an initial noise region;
determining the feature region corresponding to the initial noise region in the image features as a target noise region in the image features; and
clipping the target noise region, together with an edge region of a preset width around it, out of the image features to obtain the remaining intermediate image features.
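The three steps of claim 3 can be sketched as follows, under stated assumptions: the preset pixel value marks noise directly in the image, the image-to-feature mapping reuses the downscaling stride, and the preset-width edge region is approximated by a binary dilation of the noise mask. All names are illustrative.

```python
import numpy as np

def clip_noise(image, features, ratio, preset_value, edge_width):
    """Mask out the feature region matching a preset pixel value,
    expanded by a preset edge width, returning intermediate features."""
    noise = image == preset_value              # initial noise region
    stride = round(1 / ratio)
    target = noise[::stride, ::stride]         # target region in feature space
    # dilate the mask by edge_width via a sliding logical OR
    padded = np.pad(target, edge_width)
    h, w = target.shape
    dilated = np.zeros_like(target)
    for dy in range(2 * edge_width + 1):
        for dx in range(2 * edge_width + 1):
            dilated |= padded[dy:dy + h, dx:dx + w]
    return np.where(dilated, 0.0, features)    # remaining intermediate features

img = np.zeros((4, 4))
img[0, 0] = 255                                # preset pixel value marks noise
feats = np.ones((2, 2))
out = clip_noise(img, feats, ratio=0.5, preset_value=255, edge_width=0)
```

In practice one would use a morphological library (e.g. `scipy.ndimage.binary_dilation`) for the edge expansion; the explicit loop keeps the sketch dependency-free.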
4. The image processing method according to claim 3, wherein the performing image reconstruction on the intermediate image features to obtain the target image after noise reduction includes:
inputting the feature values of the intermediate image features into an image reconstruction network model for image reconstruction to obtain the noise-reduced target image.
5. The image processing method according to claim 4, wherein inputting the feature value of the intermediate image feature into an image reconstruction network model for image reconstruction to obtain the noise-reduced target image comprises:
performing feature enhancement extraction on the intermediate image features based on a residual learning module in the image reconstruction network model to obtain feature values of the intermediate image features after feature enhancement.
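A residual learning module computes an enhancement f(x) and adds it back to its input, y = x + f(x), so the module only has to learn the correction. A toy stand-in is shown below, using a linear layer plus ReLU in place of convolutions; the layer types are an assumption, as the patent does not specify them.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Feature enhancement with an identity skip connection: y = x + f(x)."""
    hidden = np.maximum(0.0, x @ w1)   # stand-in for a conv + ReLU layer
    return x + hidden @ w2             # skip connection preserves the input

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))        # 5 feature vectors of width 8
w1 = rng.standard_normal((8, 16)) * 0.1
w2 = rng.standard_normal((16, 8)) * 0.1
y = residual_block(x, w1, w2)
```

Note the defining property: with zero weights the block is the identity, which is what makes deep stacks of such modules trainable.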
6. The image processing method according to claim 5, characterized by further comprising:
based on an amplifying module in the image reconstruction network model, performing corresponding size scaling on the intermediate image features output by the residual learning module according to the preset proportion to obtain the target image.
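A hypothetical amplifying module that undoes the preset downscaling proportion by nearest-neighbour repetition is sketched below; the interpolation method is an assumption, and a learned upsampler (e.g. sub-pixel convolution) would typically be used instead.

```python
import numpy as np

def amplify(features, ratio):
    """Upscale by the inverse of the preset downscaling proportion."""
    factor = round(1 / ratio)
    return np.repeat(np.repeat(features, factor, axis=0), factor, axis=1)

f = np.array([[1.0, 2.0], [3.0, 4.0]])
out = amplify(f, 0.5)                  # 2x2 features back to a 4x4 image
```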
7. The image processing method according to claim 5, characterized by further comprising:
based on a quantization module in the image reconstruction network model, performing quantization processing on the intermediate image features output by the residual learning module to obtain the target image.
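A minimal sketch of the quantization step, mapping continuous feature values onto uniform 8-bit integer levels; the number of levels and the min-max normalisation are assumptions, as the patent does not define the quantization scheme.

```python
import numpy as np

def quantize(features, levels=256):
    """Map continuous feature values onto uniform integer levels."""
    lo, hi = float(features.min()), float(features.max())
    scale = (levels - 1) / (hi - lo) if hi > lo else 0.0
    return np.round((features - lo) * scale).astype(np.uint8)

f = np.array([[0.0, 0.5], [0.75, 1.0]])
q = quantize(f)                        # pixel values in [0, 255]
```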
8. An image processing apparatus, comprising:
the feature extraction module is used for extracting image features of the image to be processed;
the feature clipping module is used for clipping a noise region in the image features to obtain intermediate image features;
and the image reconstruction module is used for carrying out image reconstruction on the intermediate image features to obtain a noise-reduced target image.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the image processing method of any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the image processing method of any one of claims 1 to 7 via execution of the executable instructions.
CN202210995243.8A 2022-08-18 2022-08-18 Image processing method, device, storage medium and electronic equipment Pending CN117649353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210995243.8A CN117649353A (en) 2022-08-18 2022-08-18 Image processing method, device, storage medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN117649353A true CN117649353A (en) 2024-03-05

Family

ID=90043852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210995243.8A Pending CN117649353A (en) 2022-08-18 2022-08-18 Image processing method, device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117649353A (en)

Similar Documents

Publication Publication Date Title
CN112950471A (en) Video super-resolution processing method and device, super-resolution reconstruction model and medium
CN114140346A (en) Image processing method and device
CN111881920B (en) Network adaptation method of large-resolution image and neural network training device
CN115861131A (en) Training method and device based on image generation video and model and electronic equipment
CN112700460A (en) Image segmentation method and system
CN113724136A (en) Video restoration method, device and medium
CN112188236B (en) Video interpolation frame model training method, video interpolation frame generation method and related device
CN113658073B (en) Image denoising processing method and device, storage medium and electronic equipment
CN113962882B (en) JPEG image compression artifact eliminating method based on controllable pyramid wavelet network
CN112634153A (en) Image deblurring method based on edge enhancement
CN115941966A (en) Video compression method and electronic equipment
CN117649353A (en) Image processing method, device, storage medium and electronic equipment
CN113096019B (en) Image reconstruction method, image reconstruction device, image processing equipment and storage medium
CN115205117A (en) Image reconstruction method and device, computer storage medium and electronic equipment
CN115187455A (en) Lightweight super-resolution reconstruction model and system for compressed image
CN115170807A (en) Image segmentation and model training method, device, equipment and medium
CN115375539A (en) Image resolution enhancement, multi-frame image super-resolution system and method
CN113592723B (en) Video enhancement method and device, electronic equipment and storage medium
CN114359557A (en) Image processing method, system, equipment and computer medium
CN115861048A (en) Image super-resolution method, device, equipment and storage medium
CN115631115B (en) Dynamic image restoration method based on recursion transform
CN116248807A (en) Method and device for optimizing image under call channel, electronic equipment and storage medium
CN117557452A (en) Image restoration method, device, equipment and storage medium
CN118071629A (en) Method and device for eliminating CT artifact and electronic equipment
CN114022361A (en) Image processing method, medium, device and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination