CN111833263B - Image processing method, device, readable storage medium and electronic equipment - Google Patents

Image processing method, device, readable storage medium and electronic equipment

Info

Publication number
CN111833263B
Authority
CN
China
Prior art keywords
matrix
image
filter
transmission rate
target image
Prior art date
Legal status
Active
Application number
CN202010514617.0A
Other languages
Chinese (zh)
Other versions
CN111833263A (en)
Inventor
赵元
沈海峰
吴庆波
任文琦
操晓春
Current Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Original Assignee
Beijing Didi Infinity Technology and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology and Development Co Ltd filed Critical Beijing Didi Infinity Technology and Development Co Ltd
Priority to CN202010514617.0A
Publication of CN111833263A
Application granted
Publication of CN111833263B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiment of the invention discloses an image processing method, an image processing device, a readable storage medium and electronic equipment. An initial target image and an initial transmission rate image are taken as initial inputs for iterative processing. In each iteration, the current target image and the current transmission rate image are input into a first color compression field to output a reference scene graph; the reference scene graph and the current transmission rate image are input into a second color compression field to obtain an updated current target image; and the updated current target image and the current transmission rate image are input into a third color compression field to obtain an updated current transmission rate image. After the iterative process ends, the current target image and the current transmission rate image are output. The embodiment of the invention can convert a target image blurred by the shooting environment into a clear scene graph and improve the visual effect of the target image.

Description

Image processing method, device, readable storage medium and electronic equipment
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image processing method, an image processing device, a readable storage medium, and an electronic apparatus.
Background
When images are captured in environments such as underwater scenes or hazy weather, the captured image often suffers obvious degradation, including color shift, low contrast and noise pollution, because of the influence of the shooting environment. Current methods for recovering a clear scene graph from such degraded images include methods that solve for manually designed features and methods that use a model trained by deep learning. The first kind depends heavily on hand-crafted features and lacks generality; the second kind often fails to achieve the desired enhancement because training samples for deep learning are difficult to obtain.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method, apparatus, readable storage medium and electronic device to recover a clear scene image from a target image degraded by the shooting environment.
In a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
determining an initial target image and an initial transmission rate image;
determining a current target image and a current transmission rate image in an iterative manner with the initial target image and the initial transmission rate image as initial inputs; and
ending the iterative process in response to a preset condition being satisfied, and outputting the current target image and the current transmission rate image;
wherein the iteratively determining the current target image and the current transmission rate image comprises:
determining a current target image and a current transmission rate image;
inputting the current target image and the current transmission rate image into a first color compression field to determine a reference scene graph;
inputting the reference scene graph and the current transmission rate image into a second color compression field to update the current target image;
inputting the updated current target image and the current transmission rate image into a third color compression field to update the current transmission rate image;
the first, second and third color compression fields are each a higher-order color compression field used at least for filtering and compressing an input image over its color channels.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
the image acquisition module is used for determining an initial target image and an initial transmission rate image;
the iteration processing module is used for taking the initial target image and the initial transmission rate image as initial input and determining a current target image and a current transmission rate image in an iteration mode; and
The image output module is used for ending the iterative process in response to a preset condition being satisfied and outputting the current target image and the current transmission rate image;
wherein, the iterative processing module includes:
a current image determining sub-module for determining a current target image and a current transmission rate image;
A first processing sub-module for inputting the current target image and the current transmission rate image into a first color compression field to determine a reference scene graph;
A second processing sub-module for inputting the reference scene graph and a current transmission rate image into a second color compression field to update the current target image;
A third processing sub-module for inputting the updated current target image and the current transmission rate image into a third color compression field to update the current transmission rate image;
the first, second and third color compression fields are each a higher-order color compression field used at least for filtering and compressing an input image over its color channels.
In a third aspect, embodiments of the present invention provide a computer-readable storage medium storing computer program instructions which, when executed by a processor, implement a method according to any one of the first aspects.
In a fourth aspect, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method of any of the first aspects.
According to the embodiment of the invention, iterative processing is performed with the initial target image and the initial transmission rate image as initial inputs. In each iteration, the current target image and the current transmission rate image are input into a first color compression field to output a reference scene graph; the reference scene graph and the current transmission rate image are input into a second color compression field to obtain an updated current target image; and the updated current target image and the current transmission rate image are input into a third color compression field to obtain an updated current transmission rate image. When the iterative process ends, the current target image and the current transmission rate image are output. Therefore, the embodiment of the invention can convert a target image blurred by the shooting environment into a clear scene graph and improve the visual effect of the target image.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an iterative process in an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of determining a first filtered image according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a second filtered image determination according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a third filtered image determination according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an iterative process in an image processing method according to an embodiment of the present invention;
FIG. 7 is a schematic view showing the effect of an initial target image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the effect of the current target image output after image processing according to the embodiment of the invention;
FIG. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
The present invention is described below based on embodiments, but it is not limited to these embodiments. In the following detailed description, certain specific details are set forth; those skilled in the art will fully understand the present invention even without these details. Well-known methods, procedures, flows, components and circuits have not been described in detail so as not to obscure the nature of the invention.
Moreover, those of ordinary skill in the art will appreciate that the drawings are provided herein for illustrative purposes and that the drawings are not necessarily drawn to scale.
Unless the context clearly requires otherwise, the words "comprise," "comprising," and the like in the description are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, in the sense of "including, but not limited to".
In the description of the present invention, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
step S100, determining an initial target image and an initial transmission rate image.
In particular, the initial target image (radiance) and the initial transmission rate image (transmission) may be determined by a server or a terminal device, or may be determined by one of them and then transmitted to the other. The server may be a single server or a server cluster formed by a plurality of servers, and the terminal device may be a general data processing terminal capable of running a computer program and having an image acquisition function, an image processing function, or an information transmission function, such as a smart phone, a tablet computer or a notebook computer. In the embodiment of the invention, the initial target image is an image degraded because the shooting environment, such as haze weather or an underwater environment, affects the light refractive index, and the initial transmission rate image is used to optimize the target image and recover a clear scene image from it. In an optional implementation manner of the embodiment of the present invention, determining the initial target image and the initial transmission rate image may include:
Step S110, determining an initial target image.
Specifically, the initial target image is a degraded image from which a clear scene needs to be restored, and it can be acquired by a camera, a terminal device with a photographing function, and the like. The initial target image may be acquired in an environment that affects the light refractive index and therefore produces color shift, low contrast, noise pollution and the like, such as an outdoor environment in haze or low-temperature weather, or an underwater environment.
Step S120, initializing the initial target image to determine an initial transmission rate image.
Specifically, the transmission rate image used to optimize the target image may be obtained by initializing the initial target image. The initialization process may be implemented in a variety of ways; in an optional implementation of the embodiment of the present invention, it may be implemented by the method described in Peng Y. T., Cosman P. C., "Underwater image restoration based on image blurriness and light absorption," IEEE Transactions on Image Processing, 2017, 26(4): 1579-1594.
Further, in the process of initializing the initial target image, an environment image of the environment in which the initial target image was captured can be acquired together with the initial transmission rate image. In the embodiment of the invention, this environment image is used as a parameter in the subsequent image processing of the initial target image.
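As an illustration only, the Python sketch below shows one way such an initialization step could be organized. The helper name, the brightest-pixel estimate of the environment image, and the crude minimum-channel transmission estimate are assumptions made for this example; they merely stand in for the blurriness-and-light-absorption initialization cited above and are not the method claimed here.

import numpy as np

def initialize(initial_target_image):
    """Derive an initial transmission rate image and an environment image from a
    degraded target image U (H x W x 3, values in [0, 1]).

    Placeholder initializer only; a real implementation would follow the
    blurriness-and-light-absorption method referenced in the description."""
    U = initial_target_image.astype(np.float64)

    # Environment image L: rough ambient light taken from the brightest pixels
    # of the degraded image (an assumption for this sketch).
    flat = U.reshape(-1, 3)
    brightest = flat[np.argsort(flat.sum(axis=1))[-100:]]
    L = np.ones_like(U) * brightest.mean(axis=0)

    # Initial transmission rate image T*: a crude per-pixel estimate from the
    # minimum color channel, kept away from zero (also an assumption).
    T_star = np.clip(1.0 - U.min(axis=2) / (L[..., 0].max() + 1e-6), 0.1, 1.0)
    T_star = np.repeat(T_star[:, :, None], 3, axis=2)

    return T_star, L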
Step S200, determining the current target image and the current transmission rate image in an iterative manner by taking the initial target image and the initial transmission rate image as initial inputs.
Specifically, the image processing of the initial target image is an iterative process: the initial target image and the initial transmission rate image are used as the initial inputs, after each pass the current outputs are used as the inputs of the next pass if the preset condition is not yet satisfied, and the processing ends once the preset end-of-iteration condition is satisfied.
Fig. 2 is a flowchart of an iterative process in the image processing method according to the embodiment of the present invention, as shown in fig. 2, each iterative process in the iterative image processing method according to the embodiment of the present invention includes:
step S210, determining a current target image and a current transmission rate image.
Specifically, since the initial inputs of the iterative process are the initial target image and the initial transmission rate image, the current target image of the first iteration is the initial target image and the current transmission rate image is the initial transmission rate image. For every iteration after the first, the current target image and the current transmission rate image are those output at the end of the previous iteration.
Step S220, inputting the current target image and the current transmission rate image into a first color compression field to determine a reference scene graph.
Specifically, the first color compression field is a higher-order color compression field (color shrinkage field) used at least for filtering and compressing an input image over its color channels. In an embodiment of the present invention, the first color compression field includes a first filter bank composed of a plurality of first filtering channels, and each first filtering channel includes a first filter, a first compression function, and a first transposed filter corresponding to the first filter. Thus, in an embodiment of the invention, inputting the current target image and the current transmission rate image into the first color compression field to determine a reference scene graph (auxiliary radiance) comprises:
Step S221, determining a first reference matrix corresponding to the first filter bank.
Specifically, the first reference matrix corresponding to the first filter bank may be determined according to the first filter and the first transposed filter in each first filtering channel of the first filter bank and the current transmission rate image. For example, the sum over the first filtering channels of the product of the first filter matrix and the first transposed filter matrix may be calculated first, and the first filter bank matrix is then obtained by multiplying this sum by a preset first weight parameter, where the first filter matrix is the transfer matrix of the first filter and the first transposed filter matrix is the transfer matrix of the first transposed filter. Finally, the sum of the square of the current transmission rate image and the first filter bank matrix is calculated to determine the first reference matrix.
The following explanation takes the initial target image as U, the initial transmission rate image as T*, and the preset environment image as L. In the s-th iteration, the current target image is R_{s-1}, the current transmission rate image is T_{s-1}, the number of first filtering channels in the first filter bank is N, the first filter matrix in the j-th first filtering channel is F_j^s, and the corresponding first transposed filter matrix is (F_j^s)^T. The first reference matrix H_s in the s-th iteration can then be determined by the following formula:
H_s = T_{s-1}^2 + α_s · Σ_{j=1}^{N} (F_j^s)^T F_j^s,
where α_s is the first weight parameter preset for the s-th iteration and the square of the current transmission rate image acts pixelwise.
Step S222, inputting the current target image into each first filtering channel to determine a corresponding first convolution matrix.
Specifically, each first filtering channel in the first filter bank includes a first filter and a first transposed filter and has a corresponding first compression function. After the current target image is input into each first filtering channel, a corresponding first convolution matrix is output by each channel. The first convolution matrices are determined channel by channel: a target first filtering channel is selected, the first convolution matrix obtained after the current target image is input into that channel is determined, another first filtering channel is then taken as the new target, and so on until the first convolution matrices of all first filtering channels in the first filter bank have been determined. In the embodiment of the invention, after the current target image is input into the target first filtering channel, it is convolved by the first filter in that channel, the convolved result is input into the first compression function preset for that channel to determine a first compression matrix, and finally the first compression matrix is convolved by the first transposed filter to determine the corresponding first convolution matrix.
Continuing the example with the initial target image U, the initial transmission rate image T*, and the preset environment image L: in the s-th iteration, the current target image is R_{s-1}, the current transmission rate image is T_{s-1}, the number of first filtering channels in the first filter bank is N, the target first filtering channel is the j-th first filtering channel, its first filter matrix is F_j^s, and its first transposed filter matrix is (F_j^s)^T. The first convolution matrix C_j^s determined after the current target image is input into the target first filtering channel in the s-th iteration can then be obtained by the following formula:
C_j^s = (F_j^s)^T ψ_j^s(F_j^s R_{s-1}),
where ψ_j^s is the first compression function preset for the target first filtering channel in the s-th iteration.
Step S223, calculating the sum of the first convolution matrices to determine a first filtered image.
Specifically, after the first convolution matrix corresponding to each first filtering channel in the first filter bank has been determined, the sum of the first convolution matrices is calculated and the first filtered image is determined from that sum. Optionally, in each iteration the first filtered image may be obtained by multiplying the sum of the first convolution matrices by the preset first weight parameter. For example, in the s-th iteration, when the preset first weight parameter is α_s, the first filtered image I_s may be determined by the following formula:
I_s = α_s · Σ_{j=1}^{N} (F_j^s)^T ψ_j^s(F_j^s R_{s-1}).
Fig. 3 is a schematic diagram of determining a first filtered image according to an embodiment of the present invention. As shown in Fig. 3, the first filter bank includes N first filtering channels, and each first filtering channel includes a first filter 30 and a first transposed filter 31, together with a corresponding first compression function 32. After the current target image is input into the first filter bank, it is input into each first filtering channel, that is, it is convolved by the first filter 30, compressed by the first compression function 32, and convolved by the first transposed filter 31, and each channel then outputs a corresponding first convolution matrix. The sum of the first convolution matrices is calculated and multiplied by the preset first weight parameter to obtain the first filtered image.
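For illustration, the NumPy/SciPy sketch below mirrors the per-channel operation of Fig. 3 on a single color channel: each filter convolves the current target image, a pointwise compression (shrinkage) function is applied, the result is convolved with the flipped (transposed) filter, and the weighted sum over all channels gives the first filtered image. The soft-shrinkage function and the filter kernels are placeholders assumed for this example; in the embodiment the filters and compression functions are parameters of the color compression field.

import numpy as np
from scipy.ndimage import convolve

def shrink(x, threshold=0.05):
    """Placeholder pointwise compression (shrinkage) function standing in for psi_j^s."""
    return np.sign(x) * np.maximum(np.abs(x) - threshold, 0.0)

def filtered_image(current_target_image, filters, weight):
    """Compute I_s = weight * sum_j F_j^T psi_j(F_j R) for one color channel.

    current_target_image: 2-D array R, filters: list of 2-D kernels,
    weight: the preset weight parameter (alpha_s) for this iteration."""
    R = current_target_image.astype(np.float64)
    total = np.zeros_like(R)
    for f in filters:
        conv = convolve(R, f, mode="reflect")                   # convolution with the filter
        compressed = shrink(conv)                               # compression function
        flipped = f[::-1, ::-1]                                 # transposed (flipped) filter
        total += convolve(compressed, flipped, mode="reflect")  # transposed-filter convolution
    return weight * total

Running the routine once per color channel (or with stacked color kernels) gives the per-color-channel behaviour assumed for the color compression fields.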
Step S224, determining a first feature matrix according to the first filtered image, the initial target image, the current transmission rate image, and a preset environment image.
Specifically, the first feature matrix may be determined according to the first filtered image, the initial target image, the current transmission rate image, and the preset environment image. Continuing the example with the initial target image U, the initial transmission rate image T*, and the preset environment image L: in the s-th iteration of the embodiment of the present invention, the current target image is R_{s-1}, the current transmission rate image is T_{s-1}, the number of first filtering channels in the first filter bank is N, the first filter matrix in the j-th first filtering channel is F_j^s, and the first transposed filter matrix is (F_j^s)^T. The first feature matrix K_s for the s-th iteration is then determined from the first filtered image I_s together with U, T_{s-1} and L.
and step S225, determining a reference scene graph according to the first reference matrix and the first feature matrix.
Specifically, in each iteration a reference scene graph can further be obtained from the determined first reference matrix and first feature matrix. In the embodiment of the present invention, in the s-th iteration, when the first reference matrix is H_s and the first feature matrix is K_s, the reference scene graph may be determined by inputting the first reference matrix H_s into an inverse operation channel to determine the corresponding first inverse matrix H_s^{-1}, and then calculating the product of the first inverse matrix H_s^{-1} and the first feature matrix K_s. That is, in the s-th iteration, determining the reference scene graph R'_s according to the embodiment of the present invention may be implemented by the following formula:
R'_s = H_s^{-1} K_s.
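In practice H_s never has to be formed explicitly: it can be applied as a linear operator (the pixelwise T_{s-1}^2 term plus the weighted filter-bank term) and the product H_s^{-1} K_s can be obtained by solving the linear system iteratively. The sketch below assumes such a matrix-free conjugate-gradient approach for one color channel, reusing placeholder filters; it illustrates one possible way to evaluate the formula above and is not an implementation taken from the patent.

import numpy as np
from scipy.ndimage import convolve
from scipy.sparse.linalg import LinearOperator, cg

def solve_reference_scene(K, T_prev, filters, weight):
    """Solve H_s r = K_s, where H_s = T_prev**2 (pixelwise) + weight * sum_j F_j^T F_j."""
    shape = K.shape

    def apply_H(v):
        img = v.reshape(shape)
        out = (T_prev ** 2) * img                                # pixelwise data term
        for f in filters:
            fv = convolve(img, f, mode="reflect")                # F_j v
            out = out + weight * convolve(fv, f[::-1, ::-1], mode="reflect")  # F_j^T F_j v
        return out.ravel()

    H = LinearOperator((K.size, K.size), matvec=apply_H, dtype=np.float64)
    r, info = cg(H, K.ravel(), maxiter=200)   # info == 0 means the solver converged
    return r.reshape(shape)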
step S230, inputting the reference scene graph and the current transmission rate image into a second color compression field to update the current target image.
Specifically, the second color compression field is a higher-order color compression field (color shrinkage field) used at least for filtering and compressing an input image over its color channels. In an embodiment of the present invention, the second color compression field includes a second filter bank composed of a plurality of second filtering channels, and each second filtering channel includes a second filter, a second compression function, and a second transposed filter corresponding to the second filter. Thus, in an embodiment of the present invention, inputting the reference scene graph and the current transmission rate image into the second color compression field to update the current target image comprises:
step S231, determining a second reference matrix corresponding to the second filter bank.
Specifically, the second reference matrix corresponding to the second filter bank may be determined according to the second filter and the second transposed filter in each second filtering channel of the second filter bank and the current transmission rate image. For example, the sum over the second filtering channels of the product of the second filter matrix and the second transposed filter matrix may be calculated first, and the second filter bank matrix is then obtained by multiplying this sum by a preset second weight parameter, where the second filter matrix is the transfer matrix of the second filter and the second transposed filter matrix is the transfer matrix of the second transposed filter. Finally, the sum of the square of the current transmission rate image and the second filter bank matrix is calculated to determine the second reference matrix.
Continuing the example with the initial target image U, the initial transmission rate image T*, and the preset environment image L: in the s-th iteration, the current target image is R_{s-1}, the current transmission rate image is T_{s-1}, the number of second filtering channels in the second filter bank is N, the second filter matrix in the j-th second filtering channel is F'_j^s, and the second transposed filter matrix is (F'_j^s)^T. When the reference scene graph determined by inputting the current target image R_{s-1} and the current transmission rate image T_{s-1} into the first color compression field is R'_s, the second reference matrix H'_s in the s-th iteration may be determined by the following formula:
H'_s = T_{s-1}^2 + α'_s · Σ_{j=1}^{N} (F'_j^s)^T F'_j^s,
where α'_s is the second weight parameter preset for the s-th iteration.
Step S232, inputting the reference scene graph into each second filtering channel to determine a corresponding second convolution matrix.
Specifically, each second filtering channel in the second filter bank includes a second filter and a second transposed filter and has a corresponding second compression function. After the reference scene graph is input into each second filtering channel, a corresponding second convolution matrix is output by each channel. The second convolution matrices are determined channel by channel: a target second filtering channel is selected, the second convolution matrix obtained after the reference scene graph is input into that channel is determined, another second filtering channel is then taken as the new target, and so on until the second convolution matrices of all second filtering channels in the second filter bank have been determined. In the embodiment of the invention, after the reference scene graph is input into the target second filtering channel, it is convolved by the second filter in that channel, the convolved result is input into the second compression function preset for that channel to determine a second compression matrix, and finally the second compression matrix is convolved by the second transposed filter to determine the corresponding second convolution matrix.
Continuing the example with the initial target image U, the initial transmission rate image T*, and the preset environment image L: in the s-th iteration, the current target image is R_{s-1}, the current transmission rate image is T_{s-1}, the number of second filtering channels in the second filter bank is N, the target second filtering channel is the j-th second filtering channel, its second filter matrix is F'_j^s, and its second transposed filter matrix is (F'_j^s)^T. When the reference scene graph determined by inputting the current target image R_{s-1} and the current transmission rate image T_{s-1} into the first color compression field is R'_s, the second convolution matrix C'_j^s determined after the reference scene graph is input into the target second filtering channel in the s-th iteration can be obtained by the following formula:
C'_j^s = (F'_j^s)^T ψ'_j^s(F'_j^s R'_s),
where ψ'_j^s is the second compression function corresponding to the target second filtering channel in the s-th iteration.
Step S233, calculating the sum of the second convolution matrices to determine a second filtered image.
Specifically, after the second convolution matrix corresponding to each second filtering channel in the second filter bank has been determined, the sum of the second convolution matrices is calculated and the second filtered image is determined from that sum. Optionally, in each iteration the second filtered image may be obtained by multiplying the sum of the second convolution matrices by the preset second weight parameter. For example, in the s-th iteration, when the preset second weight parameter is α'_s, the second filtered image I'_s may be determined by the following formula:
I'_s = α'_s · Σ_{j=1}^{N} (F'_j^s)^T ψ'_j^s(F'_j^s R'_s).
Fig. 4 is a schematic diagram of determining a second filtered image according to an embodiment of the present invention. As shown in Fig. 4, the second filter bank includes N second filtering channels, and each second filtering channel includes a second filter 40 and a second transposed filter 41, together with a corresponding second compression function 42. After the reference scene graph is input into the second filter bank, it is input into each second filtering channel, that is, it is convolved by the second filter 40, compressed by the second compression function 42, and convolved by the second transposed filter 41, and each channel then outputs a corresponding second convolution matrix. The sum of the second convolution matrices is calculated and multiplied by the preset second weight parameter to obtain the second filtered image.
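The second stage of Fig. 4 performs the same convolve-compress-transpose-convolve operation as the first stage, only with its own filters and weight and with the reference scene graph as input. Assuming the filtered_image helper from the earlier sketch and the illustrative variable names below, it can simply be reused:

# Second filtered image I'_s: the same routine as in the first stage, applied to the
# reference scene graph R'_s with the second filter bank and weight alpha'_s.
second_filtered = filtered_image(reference_scene_graph, second_filters, second_weight)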
Step S234, determining a second feature matrix according to the second filtered image, the initial target image, the current transmission rate image and the preset environment image.
Specifically, the second feature matrix may be determined according to the second filtered image, the initial target image, the current transmission rate image, and the preset environment image. Continuing the example with the initial target image U, the initial transmission rate image T*, and the preset environment image L: in the s-th iteration of the embodiment of the present invention, the current target image is R_{s-1}, the current transmission rate image is T_{s-1}, the number of second filtering channels in the second filter bank is N, the second filter matrix in the j-th second filtering channel is F'_j^s, and the second transposed filter matrix is (F'_j^s)^T. When the reference scene graph determined by inputting R_{s-1} and T_{s-1} into the first color compression field is R'_s, the second feature matrix K'_s for the s-th iteration is determined from the second filtered image I'_s together with U, T_{s-1} and L.
and step S235, determining an updated current target image according to the second reference matrix and the second feature matrix.
Specifically, in each iteration an updated current target image can further be obtained from the determined second reference matrix and second feature matrix. In the embodiment of the present invention, in the s-th iteration, when the second reference matrix is H'_s and the second feature matrix is K'_s, the updated current target image may be determined by inputting the second reference matrix H'_s into an inverse operation channel to determine the corresponding second inverse matrix (H'_s)^{-1}, and then calculating the product of the second inverse matrix (H'_s)^{-1} and the second feature matrix K'_s. That is, in the s-th iteration, determining the updated current target image R_s according to the embodiment of the present invention may be implemented by the following formula:
R_s = (H'_s)^{-1} K'_s.
and step S240, the updated target image and the transmission rate image are subjected to third color compression field so as to update the current transmission rate image.
Specifically, the third Color compression field is a higher-order Color compression field (Color SHRINKAGE FIELD) at least for filtering and compressing the input image through Color channels, respectively. In an embodiment of the present invention, the third color compression field includes a third filter bank, where the third filter bank is composed of a plurality of third filter channels, and each third filter channel includes three third filters, a third compression function, and a third transposed filter corresponding to the third filters. Thus, in an embodiment of the present invention, the third color compressed field of the updated target image and the transmission rate image to update the current transmission rate image includes:
step S241, determining a third reference matrix corresponding to the third filter bank.
Specifically, the third reference matrix corresponding to the third filter bank may be determined according to the third filter and the third transposed filter in each third filtering channel of the third filter bank and the current transmission rate image. For example, the sum over the third filtering channels of the product of the third filter matrix and the third transposed filter matrix may be calculated first, and the third filter bank matrix is then obtained by multiplying this sum by a preset third weight parameter, where the third filter matrix is the transfer matrix of the third filter and the third transposed filter matrix is the transfer matrix of the third transposed filter. Finally, the third reference matrix is determined according to the third filter bank matrix, the updated current target image, the preset environment image, and the identity matrix.
Continuing the example with the initial target image U, the initial transmission rate image T*, and the preset environment image L: in the s-th iteration, the current target image is R_{s-1}, the current transmission rate image is T_{s-1}, the number of third filtering channels in the third filter bank is N, the third filter matrix in the j-th third filtering channel is F''_j^s, and the third transposed filter matrix is (F''_j^s)^T. When the updated current target image determined through the second color compression field is R_s, the third reference matrix H''_s in the s-th iteration is determined from the third filter bank matrix α''_s · Σ_{j=1}^{N} (F''_j^s)^T F''_j^s, the updated current target image R_s, the preset environment image L, and the preset identity matrix E, where α''_s is the third weight parameter preset for the s-th iteration.
Step S242, inputting the current transmission rate image into each third filtering channel to determine a corresponding third convolution matrix.
Specifically, each third filtering channel in the third filter bank includes a third filter and a third transposed filter and has a corresponding third compression function. After the current transmission rate image is input into each third filtering channel, a corresponding third convolution matrix is output by each channel. The third convolution matrices are determined channel by channel: a target third filtering channel is selected, the third convolution matrix obtained after the current transmission rate image is input into that channel is determined, another third filtering channel is then taken as the new target, and so on until the third convolution matrices of all third filtering channels in the third filter bank have been determined. In the embodiment of the present invention, after the current transmission rate image is input into the target third filtering channel, it is convolved by the third filter in that channel, the convolved result is input into the third compression function preset for that channel to obtain a corresponding third compression matrix, and finally the third compression matrix is convolved by the third transposed filter to determine the corresponding third convolution matrix.
Continuing the example with the initial target image U, the initial transmission rate image T*, and the preset environment image L: in the s-th iteration, the current target image is R_{s-1}, the current transmission rate image is T_{s-1}, the number of third filtering channels in the third filter bank is N, the target third filtering channel is the j-th third filtering channel, its third filter matrix is F''_j^s, and its third transposed filter matrix is (F''_j^s)^T. When the reference scene graph determined by inputting R_{s-1} and T_{s-1} into the first color compression field is R'_s and the updated current target image is R_s, the third convolution matrix C''_j^s determined after the current transmission rate image is input into the target third filtering channel in the s-th iteration can be obtained by the following formula:
C''_j^s = (F''_j^s)^T ψ''_j^s(F''_j^s T_{s-1}),
where ψ''_j^s is the third compression function corresponding to the target third filtering channel in the s-th iteration.
Step S243, calculating the sum of the third convolution matrices to determine a third filtered image.
Specifically, after the third convolution matrix corresponding to each third filtering channel in the third filter bank has been determined, the sum of the third convolution matrices is calculated and the third filtered image is determined from that sum. Optionally, in each iteration the third filtered image may be obtained by multiplying the sum of the third convolution matrices by the preset third weight parameter. For example, in the s-th iteration, when the preset third weight parameter is α''_s, the third filtered image I''_s may be determined by the following formula:
I''_s = α''_s · Σ_{j=1}^{N} (F''_j^s)^T ψ''_j^s(F''_j^s T_{s-1}).
Fig. 5 is a schematic diagram of determining a third filtered image according to an embodiment of the present invention. As shown in Fig. 5, the third filter bank includes N third filtering channels, and each third filtering channel includes a third filter 50 and a third transposed filter 51, together with a corresponding third compression function 52. After the current transmission rate image is input into the third filter bank, it is input into each third filtering channel, that is, it is convolved by the third filter 50, compressed by the third compression function 52, and convolved by the third transposed filter 51, and each channel then outputs a corresponding third convolution matrix. The sum of the third convolution matrices is calculated and multiplied by the preset third weight parameter to obtain the third filtered image.
Step S244, determining a third feature matrix according to the third filtered image, the updated target image, the initial transmission rate image, and the preset environment image.
Specifically, the third feature matrix may be determined according to the third filtered image, the updated current target image, the initial transmission rate image, and the preset environment image. Continuing the example with the initial target image U, the initial transmission rate image T*, and the preset environment image L: in the s-th iteration of the embodiment of the present invention, the current target image is R_{s-1}, the current transmission rate image is T_{s-1}, the number of third filtering channels in the third filter bank is N, the third filter matrix in the j-th third filtering channel is F''_j^s, and the third transposed filter matrix is (F''_j^s)^T. When the reference scene graph determined by inputting R_{s-1} and T_{s-1} into the first color compression field is R'_s and the updated current target image determined through the second color compression field is R_s, the third feature matrix K''_s for the s-th iteration is determined from the third filtered image I''_s together with R_s, T* and L.
And step S245, determining an updated current transmission rate image according to the third reference matrix and the third feature matrix.
Specifically, in each iteration an updated current transmission rate image can further be obtained from the determined third reference matrix and third feature matrix. In the embodiment of the present invention, in the s-th iteration, when the third reference matrix is H''_s and the third feature matrix is K''_s, the updated current transmission rate image may be determined by inputting the third reference matrix H''_s into an inverse operation channel to determine the corresponding third inverse matrix (H''_s)^{-1}, and then calculating the product of the third inverse matrix (H''_s)^{-1} and the third feature matrix K''_s. That is, in the s-th iteration, determining the updated current transmission rate image T_s according to the embodiment of the present invention may be implemented by the following formula:
T_s = (H''_s)^{-1} K''_s.
Fig. 6 is a schematic diagram of the iterative process in the image processing method according to the embodiment of the present invention. As shown in Fig. 6, each iteration inputs the current target image and the current transmission rate image into the first color compression field 60, which outputs a reference scene graph. The reference scene graph and the current transmission rate image are then input into the second color compression field 61, which outputs an updated current target image. The updated current target image and the current transmission rate image are input into the third color compression field 62, which outputs an updated current transmission rate image. At the end of each iteration it is judged whether a preset condition is satisfied. If the preset condition is satisfied, step S300 is entered: the iterative process ends and the updated current target image and current transmission rate image are output; if the preset condition is not satisfied, the updated current target image and the updated current transmission rate image are taken as the inputs of the next iteration.
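Putting the three stages together, the loop of Fig. 6 can be organized as in the sketch below. The three color-compression-field callables and the fixed iteration count are assumptions made for this example; their internals would follow steps S220 to S240, and the fixed count is only one possible preset stopping condition.

def process_image(U, T_star, L, first_field, second_field, third_field, max_iterations=5):
    """Iteratively refine the target image R and the transmission rate image T.

    U: initial target image, T_star: initial transmission rate image,
    L: preset environment image; the three *_field arguments are callables
    implementing the first, second and third color compression fields."""
    R, T = U, T_star                           # initial inputs (first iteration)
    for _ in range(max_iterations):            # preset stopping condition
        R_ref = first_field(R, T, U, L)        # step S220: reference scene graph
        R = second_field(R_ref, T, U, L)       # step S230: updated current target image
        T = third_field(R, T, U, L)            # step S240: updated current transmission rate image
    return R, T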
And step S300, responding to the satisfaction of the preset condition, ending the iterative process, and outputting the current target image and the current transmission rate image.
Specifically, the preset condition may be set in advance, for example, that a preset number of iterations has been reached, or that the current target image output by an iteration satisfies a preset requirement. When the preset condition is satisfied, the iterative process ends, and the current target image and the current transmission rate image output by the last iteration are taken as the output results of the image processing method.
Therefore, with the initial target image and the initial transmission rate image as initial inputs, each iteration of the embodiment of the invention inputs the current target image and the current transmission rate image into the first color compression field to output a reference scene graph with a relatively clear scene; the reference scene graph and the current transmission rate image are then input into the second color compression field to further refine and update the current target image; finally, the updated current target image and the current transmission rate image are input into the third color compression field to update the current transmission rate image, and the current target image and the current transmission rate image are output when the iterative process ends. The embodiment of the invention can thus convert a target image blurred by the shooting environment into a clear scene graph and improve the visual effect of the target image.
Fig. 7 is a schematic view of the effect of an initial target image according to an embodiment of the present invention, and Fig. 8 is a schematic view of the effect of the current target image output after image processing according to an embodiment of the invention. As shown in Fig. 7 and Fig. 8, the initial target image is an image shot in a haze environment; due to the environmental influence, the scene in the image is blurred and of poor quality. After the initial target image is processed by the image processing process of the embodiment of the invention, the output image is the current target image shown in Fig. 8; after repeated iterative processing, the current target image has a clear scene and its quality is greatly improved.
Fig. 9 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention, and as shown in fig. 9, the image processing apparatus includes an image acquisition module 90, an iteration processing module 91, and an image output module 92.
Specifically, the image acquisition module 90 is configured to determine an initial target image and an initial transmission rate image. The iterative processing module 91 is configured to determine the current target image and the current transmission rate image in an iterative manner, with the initial target image and the initial transmission rate image as initial inputs. And the image output module 92 is configured to end the iterative process in response to satisfaction of a preset condition, and output a current target image and a current transmission rate image.
Wherein, the iterative processing module includes:
a current image determining sub-module for determining a current target image and a current transmission rate image;
A first processing sub-module for inputting the current target image and the current transmission rate image into a first color compression field to determine a reference scene graph;
A second processing sub-module for inputting the reference scene graph and a current transmission rate image into a second color compression field to update the current target image;
A third processing sub-module for inputting the updated current target image and the current transmission rate image into a third color compression field to update the current transmission rate image;
the first, second and third color compression fields are each a higher-order color compression field used at least for filtering and compressing an input image over its color channels.
Further, the image acquisition module includes:
an image acquisition sub-module for determining an initial target image;
And the initialization sub-module is used for initializing the initial target image to determine an initial transmission rate image.
Further, the first color compression field comprises a first filter bank formed by a plurality of first filtering channels, and each first filtering channel comprises a first filter, a first compression function and a first transposed filter corresponding to the first filter;
The first processing submodule includes:
a first reference matrix determining unit, configured to determine a first reference matrix corresponding to the first filter bank;
A first convolution matrix determining unit, configured to input the current target image into each first filtering channel to determine a corresponding first convolution matrix;
A first filtered image determining unit configured to calculate a sum of the first convolution matrices to determine a first filtered image;
The first feature matrix determining unit is used for determining a first feature matrix according to the first filtered image, the initial target image, the current transmission rate image and the preset environment image;
and the reference scene graph determining unit is used for determining a reference scene graph according to the first reference matrix and the first feature matrix.
Further, the first reference matrix determining unit includes:
A first filter bank matrix determining subunit, configured to calculate a sum of products of a first filter matrix and a first transposed filter matrix in each of the first filtering channels to determine a corresponding first filter bank matrix, where the first filter matrix is a transfer matrix of the first filter, and the first transposed filter matrix is a transfer matrix of the first transposed filter;
a first reference matrix determination subunit for calculating a sum of the square of the current transmission rate image and the first filter bank matrix to determine a first reference matrix.
Further, the first convolution matrix determination unit includes:
a first target channel determining subunit, configured to determine a target first filtering channel;
A first convolution subunit, configured to input the current target image into a first filter in the target first filtering channel for convolution;
the first compression subunit is used for inputting the convolved result into a first compression function preset in the target first filtering channel so as to determine a first compression matrix;
And the first deconvolution subunit is used for inputting the first compression matrix into the first transposed filter for convolution so as to determine a corresponding first convolution matrix.
Further, the reference scene graph determination unit includes:
a first inverse matrix determining subunit, configured to input the first reference matrix into an inverse operation channel, so as to determine a first inverse matrix corresponding to the first reference matrix;
a reference scene graph determination subunit operable to calculate a product of the first inverse matrix and the first feature matrix to determine a reference scene graph.
Further, the second color compression field comprises a second filter bank formed by a plurality of second filter channels, and each second filter channel comprises a second filter, a second compression function and a second transposed filter corresponding to the second filter;
the second processing sub-module includes:
A second reference matrix determining unit, configured to determine a second reference matrix corresponding to the second filter bank;
a second convolution matrix determining unit, configured to input the reference scene graph into each of the second filtering channels, so as to determine a corresponding second convolution matrix;
a second filtered image determining unit for calculating a sum of the second convolution matrices to determine a second filtered image;
The second feature matrix determining unit is used for determining a second feature matrix according to the second filtered image, the initial target image, the current transmission rate image and the preset environment image;
And the current target image updating unit is used for determining an updated current target image according to the second reference matrix and the second feature matrix.
Further, the second reference matrix determining unit includes:
A second filter bank matrix determining subunit, configured to calculate a sum of products of a second filter matrix and a second transposed filter matrix in each second filtering channel to determine a corresponding second filter bank matrix, where the second filter matrix is a transfer matrix of the second filter, and the second transposed filter matrix is a transfer matrix of the second transposed filter;
a second reference matrix determination subunit for calculating a sum of the square of the current transmission rate image and the second filter bank matrix to determine a second reference matrix.
Further, the second convolution matrix determination unit includes:
A second target channel determining subunit, configured to determine a target second filtering channel;
a second convolution subunit, configured to input the reference scene graph into a second filter in the target second filtering channel for convolution;
The second compression subunit is used for inputting the convolved result into a preset second compression function in the target second filtering channel so as to determine a second compression matrix;
and a second deconvolution subunit, configured to input the second compression matrix into the second transpose filter for convolution, so as to determine a corresponding second convolution matrix.
Further, the current target image updating unit includes:
A second inverse matrix determining subunit, configured to input the second reference matrix into an inverse operation channel, so as to determine a second inverse matrix corresponding to the second reference matrix;
and a current target image updating subunit, configured to calculate the product of the second inverse matrix and the second feature matrix to determine an updated current target image (a sketch of this update is given below).
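The second color compression field mirrors the first, except that the filter channels act on the reference scene graph and the result replaces the current target image. Under the same assumptions and helpers as the previous sketches (including the assumed composition of the feature matrix), one illustrative form is:

def second_field_update(reference_scene, current_t, initial_target, ambient,
                        kernels, taus, bank_energy=1.0, eps=1e-6):
    # Sketch of the second color compression field refining the current target image.
    # Second filtered image: the filter channels are applied to the reference scene graph.
    filtered = sum(filtering_channel(reference_scene, k, tau)
                   for k, tau in zip(kernels, taus))
    # Assumed second feature matrix, built from the same haze-model data term.
    feature = current_t * (initial_target - ambient * (1.0 - current_t)) + filtered
    # Assumed second reference matrix: squared transmission rate plus a filter-bank energy term.
    reference = current_t ** 2 + bank_energy
    return feature / (reference + eps)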
Further, the third color compression field comprises a third filter bank formed by a plurality of third filtering channels, and each third filtering channel comprises a third filter, a third compression function and a third transposed filter corresponding to the third filter;
the third processing sub-module includes:
a third reference matrix determining unit, configured to determine a third reference matrix corresponding to the third filter bank;
a third convolution matrix determining unit, configured to input the current transmission rate image into each third filtering channel to determine a corresponding third convolution matrix;
A third filtered image determining unit for calculating a sum of the third convolution matrices to determine a third filtered image;
a third feature matrix determining unit, configured to determine a third feature matrix according to the third filtered image, the updated target image, the initial transmission rate image and the preset environment image;
and a current transmission rate image updating unit, configured to determine an updated current transmission rate image according to the third reference matrix and the third feature matrix.
Further, the third reference matrix determining unit includes:
a third filter bank matrix determining subunit, configured to calculate a sum of products of a third filter matrix and a third transposed filter matrix in each third filtering channel to determine a corresponding third filter bank matrix, where the third filter matrix is a transfer matrix of the third filter, and the third transposed filter matrix is a transfer matrix of the third transposed filter;
and a third reference matrix determining subunit, configured to determine a third reference matrix according to the third filter bank matrix, the updated target image, the preset environment image and the identity matrix.
Further, the third convolution matrix determination unit includes:
A third target channel determining subunit, configured to determine a target third filtering channel;
A third convolution subunit, configured to input the current transmission rate image into a third filter in the target third filtering channel for convolution;
a third compression subunit, configured to input the convolved result into a third compression function preset in the target third filtering channel, so as to determine a third compression matrix;
and a third convolution matrix determining subunit, configured to input the third compression matrix into the third transpose filter to perform convolution, so as to determine a corresponding third convolution matrix.
Further, the current transmission rate image updating unit includes:
A third inverse matrix determining subunit, configured to input the third reference matrix into an inverse operation channel, so as to determine a third inverse matrix corresponding to the third reference matrix;
and a current transmission rate image updating subunit, configured to calculate the product of the third inverse matrix and the third feature matrix to determine an updated current transmission rate image (a sketch of this update is given below).
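The third color compression field is described only through the ingredients of its reference and feature matrices: the third filter bank matrix, the updated target image, the preset environment image, the identity matrix, the third filtered image and the initial transmission rate image. How these are combined is not given, so the sketch below assumes one plausible combination: a contrast term (J − A)² built from the updated target image and the environment image, a scaled identity term gamma, and a feature matrix that pulls the transmission rate toward its initialization while adding the filtered prior term. It reuses filtering_channel from the first sketch and is illustrative only.

def third_field_update(current_t, updated_target, initial_t, ambient,
                       kernels, taus, bank_energy=1.0, gamma=0.1, eps=1e-6):
    # Sketch of the third color compression field refining the current transmission rate image.
    # Third filtered image: the filter channels are applied to the current transmission rate image.
    filtered = sum(filtering_channel(current_t, k, tau)
                   for k, tau in zip(kernels, taus))
    contrast = (updated_target - ambient) ** 2  # contrast of the scene against the environment
    # Assumed third reference matrix: contrast term, filter-bank energy, and a scaled identity.
    reference = contrast + bank_energy + gamma
    # Assumed third feature matrix: pulls t toward its initialization, weighted consistently
    # with the reference matrix, plus the filtered prior term.
    feature = (contrast + gamma) * initial_t + filtered
    return feature / (reference + eps)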
According to the embodiment of the invention, the initial target image and the initial transmission rate image serve as the initial inputs of an iterative process. In each iteration, the current target image and the current transmission rate image are input into the first color compression field, which outputs a reference scene graph with a relatively clear scene; the reference scene graph and the current transmission rate image are then input into the second color compression field to further refine and update the current target image; finally, the updated current target image and the current transmission rate image are input into the third color compression field to update the current transmission rate image. When the iterative process ends, the current target image and the current transmission rate image are output. In this way, the embodiment of the invention can convert a target image blurred by the shooting scene into a clear scene graph and improve the visual effect of the target image.
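Putting the pieces together, the overall iteration can be sketched as follows, reusing the update functions from the previous sketches and assuming, purely for illustration, that the preset condition ending the iteration is a fixed number of iterations (the embodiment does not fix this choice, nor the values of kernels, taus, bank_energy or gamma).

def dehaze(initial_target, initial_t, ambient, kernels, taus, num_iters=5):
    # Sketch of the overall iterative scheme under the assumptions of the previous sketches.
    current_target, current_t = initial_target, initial_t
    for _ in range(num_iters):
        # First color compression field: produce a reference scene graph.
        reference_scene = first_field_update(current_target, current_t,
                                             initial_target, ambient, kernels, taus)
        # Second color compression field: refine the current target image.
        current_target = second_field_update(reference_scene, current_t,
                                             initial_target, ambient, kernels, taus)
        # Third color compression field: refine the current transmission rate image.
        current_t = third_field_update(current_t, current_target,
                                       initial_t, ambient, kernels, taus)
    return current_target, current_t

In such a sketch the kernels could be, for example, small horizontal and vertical gradient filters with per-channel thresholds taus; like bank_energy and gamma above, they stand in for the preset or learned quantities of the embodiment.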
Fig. 10 is a schematic diagram of an electronic device according to an embodiment of the invention. The electronic device shown in Fig. 10 is a general-purpose data processing apparatus comprising a general-purpose computer hardware structure that includes at least a processor 100 and a memory 101. The processor 100 and the memory 101 are connected by a bus 102. The memory 101 is adapted to store instructions or programs executable by the processor 100. The processor 100 may be a stand-alone microprocessor or a collection of one or more microprocessors. Thus, by executing the instructions stored in the memory 101, the processor 100 processes data and controls other devices, thereby carrying out the method flow of the embodiments of the invention described above. The bus 102 connects the above components together and connects them to a display controller 103, a display device, and an input/output (I/O) device 104. The input/output (I/O) device 104 may be a mouse, a keyboard, a modem, a network interface, a touch input device, a somatosensory input device, a printer, or another device well known in the art. Typically, the input/output (I/O) device 104 is connected to the system through an input/output (I/O) controller 105.
The memory 101 may store software components such as an operating system, communication modules, interaction modules, and application programs, among others. Each of the modules and applications described above corresponds to a set of executable program instructions that perform one or more functions and methods described in the embodiments of the invention.
The above-described flow diagrams and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention illustrate various aspects of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
Meanwhile, as will be appreciated by those skilled in the art, aspects of embodiments of the present invention may be implemented as a system, method, or computer program product. Accordingly, aspects of embodiments of the invention may take the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module," or "system." Furthermore, aspects of the invention may take the form of: a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
Any combination of one or more computer readable media may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of embodiments of the present invention, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with the instruction execution system, apparatus, or device.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk, C++, PHP, Python, and the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer; partly on the user's computer as a stand-alone software package; partly on the user's computer and partly on a remote computer; or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The invention also relates to a computer readable storage medium for storing a computer readable program for causing a computer to perform some or all of the above-described method embodiments.
That is, it will be understood by those skilled in the art that all or part of the steps in implementing the methods of the embodiments described above may be implemented by a program stored in a storage medium, where the program includes several instructions for causing a device (which may be a single-chip microcomputer, a chip or the like) or a processor to perform all or part of the steps in the methods of the embodiments described herein. The aforementioned storage medium includes various media capable of storing program code, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, and various modifications and variations may be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (17)

1. An image processing method, comprising:
determining an initial target image and an initial transmission rate image;
determining a current target image and a current transmission rate image in an iterative manner with the initial target image and the initial transmission rate image as initial inputs; and
Ending the iterative process in response to the preset condition, and outputting a current target image and a current transmission rate image;
wherein the iteratively determining the current target image and the current transmission rate image comprises:
determining a current target image and a current transmission rate image;
inputting the current target image and the current transmission rate image into a first color compression field to determine a reference scene graph;
Inputting the reference scene graph and a current transmission rate image into a second color compression field to update the current target image;
inputting the updated target image and the current transmission rate image into a third color compression field to update the current transmission rate image;
The first color compression field, the second color compression field and the third color compression field are high-order color compression fields and are at least used for filtering and compressing an input image through color channels respectively;
Wherein the initial transmission rate image is used to optimize the initial target image.
2. The method of claim 1, wherein the determining an initial target image and an initial transmission rate image comprises:
Determining an initial target image;
The initial target image is initialized to determine an initial transmission rate image.
3. The method of claim 1, wherein the first color compression field comprises a first filter bank comprising a plurality of first filter channels, each of the first filter channels comprising a first filter, a first compression function, and a first transpose filter corresponding to the first filter;
the inputting the current target image and the current transmission rate image into a first color compression field to determine a reference scene graph includes:
determining a first reference matrix corresponding to the first filter bank;
Inputting the current target image into each first filtering channel to determine a corresponding first convolution matrix;
Calculating a sum of the first convolution matrices to determine a first filtered image;
determining a first feature matrix according to the first filtering image, the initial target image, the current transmission rate image and a preset environment image;
And determining a reference scene graph according to the first reference matrix and the first feature matrix.
4. A method according to claim 3, wherein the determining the first reference matrix corresponding to the first filter bank comprises:
Calculating the sum of products of a first filter matrix and a first transposed filter matrix in each first filtering channel to determine a corresponding first filter bank matrix, wherein the first filter matrix is a transfer matrix of the first filter, and the first transposed filter matrix is a transfer matrix of the first transpose filter;
A sum of the square of the current transmission rate image and the first filter bank matrix is calculated to determine a first reference matrix.
5. A method according to claim 3, wherein said inputting the current target image into each of the first filter channels to determine a corresponding first convolution matrix comprises:
Determining a target first filtering channel;
inputting the current target image into a first filter in the target first filtering channel for convolution;
Inputting the convolved result into a first compression function preset in the target first filtering channel to determine a first compression matrix;
The first compression matrix is input to the first transpose filter for convolution to determine a corresponding first convolution matrix.
6. A method according to claim 3, wherein said determining a reference scene graph from said first reference matrix and first feature matrix comprises:
Inputting the first reference matrix into an inverse operation channel to determine a first inverse matrix corresponding to the first reference matrix;
a product of the first inverse matrix and the first feature matrix is calculated to determine a reference scene graph.
7. The method of claim 1, wherein the second color compressed field comprises a second filter bank comprising a plurality of second filter channels, each of the second filter channels comprising a second filter, a second compression function, and a second transpose filter corresponding to the second filter;
Said inputting the reference scene graph and the current transmission rate image into a second color compression field to update the current target image comprises:
determining a second reference matrix corresponding to the second filter bank;
Inputting the reference scene graph into each second filtering channel to determine a corresponding second convolution matrix;
Calculating a sum of the second convolution matrices to determine a second filtered image;
determining a second feature matrix according to the second filtering image, the initial target image, the current transmission rate image and a preset environment image;
And determining an updated current target image according to the second reference matrix and the second feature matrix.
8. The method of claim 7, wherein the determining the second reference matrix corresponding to the second filter bank comprises:
Calculating the sum of products of a second filter matrix and a second transposed filter matrix in each second filtering channel to determine a corresponding second filter bank matrix, wherein the second filter matrix is a transmission matrix of the second filter, and the second transposed filter matrix is a transmission matrix of the second transposed filter;
A sum of the square of the current transmission rate image and the second filter bank matrix is calculated to determine a second reference matrix.
9. The method of claim 7, wherein said inputting the reference scene graph into each of the second filtering channels to determine a corresponding second convolution matrix comprises:
determining a target second filtering channel;
inputting the reference scene graph into a second filter in the target second filtering channel for convolution;
inputting the convolved result into a second compression function preset in the target second filtering channel to determine a second compression matrix;
And inputting the second compression matrix into the second transpose filter for convolution to determine a corresponding second convolution matrix.
10. The method of claim 7, wherein determining the updated current target image from the second reference matrix and the second feature matrix comprises:
Inputting the second reference matrix into an inverse operation channel to determine a second inverse matrix corresponding to the second reference matrix;
And calculating the product of the second inverse matrix and the second feature matrix to determine an updated current target image.
11. The method of claim 1, wherein the third color compressed field comprises a third filter bank comprising a plurality of third filter channels, each of the third filter channels comprising a third filter, a third compression function, and a third transpose filter corresponding to the third filter;
The inputting the updated target image and the current transmission rate image into a third color compression field to update the current transmission rate image comprises:
determining a third reference matrix corresponding to the third filter bank;
inputting the current transmission rate image into each third filtering channel to determine a corresponding third convolution matrix;
calculating a sum of the third convolution matrices to determine a third filtered image;
determining a third feature matrix according to the third filtering image, the updated target image, the initial transmission rate image and the preset environment image;
and determining an updated current transmission rate image according to the third reference matrix and the third feature matrix.
12. The method of claim 11, wherein the determining the third reference matrix corresponding to the third filter bank comprises:
calculating the sum of products of a third filter matrix and a third transposed filter matrix in each third filtering channel to determine a corresponding third filter bank matrix, wherein the third filter matrix is a transmission matrix of the third filter, and the third transposed filter matrix is a transmission matrix of the third transposed filter;
And determining a third reference matrix according to the third filter bank matrix, the updated target image, the preset environment image and the identity matrix.
13. The method of claim 11, wherein said inputting the current transmission rate image into each of the third filter channels to determine a corresponding third convolution matrix comprises:
determining a target third filtering channel;
Inputting the current transmission rate image into a third filter in the target third filtering channel for convolution;
Inputting the convolved result into a third compression function preset in the target third filtering channel to determine a third compression matrix;
and inputting the third compression matrix into the third transpose filter for convolution to determine a corresponding third convolution matrix.
14. The method of claim 11, wherein determining the updated current transmission rate image according to the third reference matrix and the third feature matrix comprises:
inputting the third reference matrix into an inverse operation channel to determine a third inverse matrix corresponding to the third reference matrix;
And calculating the product of the third inverse matrix and the third feature matrix to determine an updated current transmission rate image.
15. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for determining an initial target image and an initial transmission rate image;
the iteration processing module is used for taking the initial target image and the initial transmission rate image as initial input and determining a current target image and a current transmission rate image in an iteration mode; and
The image output module is used for ending the iterative process and outputting a current target image and a current transmission rate image in response to the satisfaction of the preset condition;
wherein, the iterative processing module includes:
a current image determining sub-module for determining a current target image and a current transmission rate image;
A first processing sub-module for inputting the current target image and the current transmission rate image into a first color compression field to determine a reference scene graph;
A second processing sub-module for inputting the reference scene graph and a current transmission rate image into a second color compression field to update the current target image;
A third processing sub-module, configured to input the updated target image and the current transmission rate image into a third color compression field to update the current transmission rate image;
The first color compression field, the second color compression field and the third color compression field are high-order color compression fields and are at least used for filtering and compressing an input image through color channels respectively;
Wherein the initial transmission rate image is used to optimize the initial target image.
16. A computer readable storage medium storing computer program instructions which, when executed by a processor, implement the method of any one of claims 1-14.
17. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method of any of claims 1-14.
CN202010514617.0A 2020-06-08 2020-06-08 Image processing method, device, readable storage medium and electronic equipment Active CN111833263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010514617.0A CN111833263B (en) 2020-06-08 2020-06-08 Image processing method, device, readable storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010514617.0A CN111833263B (en) 2020-06-08 2020-06-08 Image processing method, device, readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111833263A CN111833263A (en) 2020-10-27
CN111833263B true CN111833263B (en) 2024-06-07

Family

ID=72899272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010514617.0A Active CN111833263B (en) 2020-06-08 2020-06-08 Image processing method, device, readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111833263B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430295A (en) * 2015-10-30 2016-03-23 努比亚技术有限公司 Device and method for image processing
CN107730514A (en) * 2017-09-29 2018-02-23 北京奇虎科技有限公司 Scene cut network training method, device, computing device and storage medium
CN107895377A (en) * 2017-11-15 2018-04-10 国光电器股份有限公司 A kind of foreground target extracting method, device, equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8401332B2 (en) * 2008-04-24 2013-03-19 Old Dominion University Research Foundation Optical pattern recognition technique
US10586129B2 (en) * 2018-02-21 2020-03-10 International Business Machines Corporation Generating artificial images for use in neural networks
US10750135B2 (en) * 2018-10-19 2020-08-18 Qualcomm Incorporated Hardware-friendly model-based filtering system for image restoration

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105430295A (en) * 2015-10-30 2016-03-23 努比亚技术有限公司 Device and method for image processing
CN107730514A (en) * 2017-09-29 2018-02-23 北京奇虎科技有限公司 Scene cut network training method, device, computing device and storage medium
CN107895377A (en) * 2017-11-15 2018-04-10 国光电器股份有限公司 A kind of foreground target extracting method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Color fusion method for night vision images based on scene recognition; Qu Zhe; Xiao Gang; Xu Ningwen; Diao Zhuoran; Application Research of Computers (03); full text *
Color fusion of night vision images based on color transfer and contrast enhancement; Xue Mogen; Liu Cuncao; Zhou Pucheng; Journal of Graphics (06); full text *

Also Published As

Publication number Publication date
CN111833263A (en) 2020-10-27

Similar Documents

Publication Publication Date Title
CN111950723B (en) Neural network model training method, image processing method, device and terminal equipment
CN106664467A (en) Real time video summarization
CN113222855B (en) Image recovery method, device and equipment
CN111028006B (en) Service delivery auxiliary method, service delivery method and related device
CN110162657B (en) Image retrieval method and system based on high-level semantic features and color features
CN111079764A (en) Low-illumination license plate image recognition method and device based on deep learning
CN111369450A (en) Method and device for removing Moire pattern
CN112561028A (en) Method for training neural network model, and method and device for data processing
Byun et al. BitNet: Learning-based bit-depth expansion
CN111951192A (en) Shot image processing method and shooting equipment
CN113556442A (en) Video denoising method and device, electronic equipment and computer readable storage medium
CN111814735A (en) Ticket taking method, device and equipment based on face recognition and storage medium
CN115880187A (en) Single-image reflection removing method based on denoising diffusion probability model and related equipment
CN110717864B (en) Image enhancement method, device, terminal equipment and computer readable medium
CN113919479B (en) Method for extracting data features and related device
CN111833263B (en) Image processing method, device, readable storage medium and electronic equipment
CN112801882B (en) Image processing method and device, storage medium and electronic equipment
CN112235598B (en) Video structured processing method and device and terminal equipment
CN112215221A (en) Automatic vehicle frame number identification method
WO2018194611A1 (en) Recommending a photographic filter
CN114862711B (en) Low-illumination image enhancement and denoising method based on dual complementary prior constraints
CN113139490B (en) Image feature matching method and device, computer equipment and storage medium
CN115205157A (en) Image processing method and system, electronic device, and storage medium
CN112288748B (en) Semantic segmentation network training and image semantic segmentation method and device
CN116703731A (en) Image processing method, device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant