CN110136070B - Image processing method, image processing device, computer-readable storage medium and electronic equipment - Google Patents


Info

Publication number
CN110136070B
CN110136070B (application CN201810104756.9A)
Authority
CN
China
Prior art keywords
matrix
gradient
image
reflectivity
elements
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810104756.9A
Other languages
Chinese (zh)
Other versions
CN110136070A (en)
Inventor
马文晔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201810104756.9A priority Critical patent/CN110136070B/en
Publication of CN110136070A publication Critical patent/CN110136070A/en
Application granted granted Critical
Publication of CN110136070B publication Critical patent/CN110136070B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method, an image processing device, a computer-readable storage medium and an electronic device. The image processing method comprises the following steps: acquiring a gradient matrix based on a pixel matrix of an image, wherein elements of the pixel matrix correspond to values of pixels in the image under a predetermined channel, and elements of the gradient matrix correspond to gradient values of the elements in the pixel matrix; truncating the gradient values in the gradient matrix according to a preset truncation parameter to obtain a truncated gradient matrix; and adjusting a total variation model according to the truncated gradient matrix, and acquiring a reflectivity matrix of the image based on the adjusted total variation model, wherein elements of the reflectivity matrix correspond to reflectivity components of pixels in the image under the predetermined channel. By adjusting the total variation model with the truncated gradient matrix and obtaining the reflectivity matrix from the adjusted model, the method achieves a markedly better shadow-removal effect.

Description

Image processing method, image processing device, computer-readable storage medium and electronic equipment
Technical Field
The present invention relates to the field of computer application technologies, and in particular, to an image processing method and apparatus, a computer-readable storage medium, and an electronic device.
Background
In image processing, illumination generally refers to the external lighting condition in which an object is located, such as the degree of shading of the shooting environment. Different illumination parameters produce different degrees of shading on the image, usually in the form of shadows.
Research shows that the image differences caused by illumination changes can even exceed those caused by individual differences between the displayed objects themselves, which can affect or even determine the performance of an image processing system. The illumination problem is therefore a key consideration in designing an image processing system, and is especially prominent in application scenarios such as face recognition and image segmentation. How to perform effective illumination processing to achieve shadow removal has become an important direction of image processing research.
In recent years, researchers have proposed various illumination processing algorithms, typically the light cone method and the quotient image method. The light cone method shows experimentally that the set of images of a face under different illumination conditions forms a light cone in a high-dimensional data space, and that this cone can be determined by convex combinations of images taken under a few extreme lighting conditions. The quotient image method learns from known images under different illuminations: it estimates the lighting direction of the input image, synthesizes an image under the same lighting, and finally uses the quotient of the input image and the synthesized image as an illumination invariant for recognition. However, these methods demand a large amount of modeling data before they can be applied and are computationally intensive, making real-time processing difficult. Furthermore, they commonly assume that the target image has already been correctly detected, segmented and aligned, and these preprocessing operations are themselves unstable under varying illumination, which weakens the practical effect of these methods.
The Retinex method proposed by Land (Land) et al directly extracts the illumination invariant from a single image from the viewpoint of signal analysis, does not need a training set and does not have a complex modeling process, and thus the requirement of real-time processing is easily met. The basic idea of the method is to separate the illumination component in the image, then to find the reflection coefficient of the object surface according to the reflection coefficient model, and to use the reflection coefficient map as the illumination invariant. It follows that the key to the Retinex method is to estimate the illumination component in the image.
However, the processing schemes proposed by the related art for estimating the illumination component when applying the Retinex method all have certain defects: either the shadow-removal effect is insignificant, or useful information in the image is lost.
Disclosure of Invention
The invention provides an image processing method, an image processing device, a computer-readable storage medium and an electronic device, and aims to solve the technical problem that illumination components in an image cannot be effectively estimated in the related art.
According to an embodiment of the present invention, there is provided an image processing method including: acquiring a gradient matrix based on a pixel matrix of an image, wherein elements of the pixel matrix correspond to values of pixels in the image under a predetermined channel, and elements of the gradient matrix correspond to gradient values of the elements in the pixel matrix; truncating the gradient values in the gradient matrix according to a preset truncation parameter to obtain a truncated gradient matrix; and adjusting a total variation model according to the truncated gradient matrix, and acquiring a reflectivity matrix of the image based on the adjusted total variation model, wherein elements of the reflectivity matrix correspond to reflectivity components of pixels in the image under the predetermined channel.
According to an embodiment of the present invention, there is provided an image processing apparatus including: a gradient matrix module configured to obtain a gradient matrix based on a pixel matrix of an image, elements of the pixel matrix corresponding to values of pixels in the image under a predetermined channel, and elements of the gradient matrix corresponding to gradient values of the elements in the pixel matrix; a gradient truncation module configured to truncate the gradient values in the gradient matrix according to a preset truncation parameter to obtain a truncated gradient matrix; and a reflectivity acquisition module configured to adjust a total variation model according to the truncated gradient matrix and to acquire a reflectivity matrix of the image based on the adjusted total variation model, elements of the reflectivity matrix corresponding to reflectivity components of pixels in the image under the predetermined channel.
According to an embodiment of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described image processing method.
According to an embodiment of the present invention, there is provided an electronic apparatus including: a processor; and a memory having computer readable instructions stored thereon which, when executed by the processor, implement the above-described image processing method.
The technical scheme provided by the embodiment of the invention can have the following beneficial effects:
based on the image processing method provided by the embodiments, the total variation model is adjusted by introducing the truncated gradient matrix, and the reflectivity matrix is obtained from the adjusted total variation model. The result therefore reflects the influence of both the reflectivity and illumination components: the obtained reflectivity matrix reflects the true reflectivity as closely as possible while removing the influence of illumination as far as possible, achieving a superior shadow-removal effect.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which an image processing method or an image processing apparatus of an embodiment of the present invention can be applied.
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use with the electronic device to implement an embodiment of the invention.
FIG. 3 is a flowchart illustrating an image processing method according to an exemplary embodiment.
FIG. 4 is a flowchart illustrating the solving of an adjusted total variation model according to an exemplary embodiment.
Fig. 5 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 6 schematically shows an example image to be processed.
Fig. 7-9 schematically illustrate the RGB channels of the image shown in fig. 6, respectively.
Fig. 10 and 11 schematically show the results of the transverse and longitudinal gradients of the R channel shown in fig. 7, respectively.
Fig. 12 and 13 schematically show the results of the gradient cuts made on fig. 10 and 11, respectively.
Fig. 14-16 schematically show the results of the reflectivity component calculations for the RGB channels of the image shown in fig. 6, respectively.
Fig. 17 schematically shows a combined image of the reflectivity components shown in fig. 14-16.
Fig. 18-20 schematically show the illumination component calculation results for the RGB channels of the image shown in fig. 6, respectively.
Fig. 21 schematically illustrates a combined image of the illumination components shown in fig. 18-20.
Fig. 22 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Fig. 23 is a block diagram illustrating an image processing apparatus according to another exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture 100 to which an image processing method or an image processing apparatus of an embodiment of the present invention may be applied.
As shown in fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services. For example, a user uploads an image to be processed to the server 105 using the terminal device 103 (or the terminal device 101 or 102). The server 105 may obtain a gradient matrix based on a pixel matrix of the image, truncate the gradient values in the gradient matrix according to a preset truncation parameter to obtain a truncated gradient matrix, adjust the total variation model according to the truncated gradient matrix, and obtain a reflectivity matrix of the image based on the adjusted total variation model. The elements of the pixel matrix correspond to values of the pixels in the image under a predetermined channel, the elements of the gradient matrix correspond to gradient values of the elements in the pixel matrix, and the elements of the reflectivity matrix correspond to reflectivity components of the pixels in the image under the predetermined channel.
In some embodiments, the image processing method provided by the embodiments of the present invention is generally performed by the server 105, and accordingly the image processing apparatus is generally disposed in the server 105. In other embodiments, some terminals may have functionality similar to that of the server and perform the method themselves. The image processing method provided by the embodiments of the present invention is therefore not limited to execution on the server side.
FIG. 2 illustrates a schematic structural diagram of a computer system suitable for use with the electronic device to implement an embodiment of the invention.
It should be noted that the computer system 200 of the electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of the application of the embodiment of the present invention.
As shown in fig. 2, the computer system 200 includes a Central Processing Unit (CPU) 201 that can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 202 or a program loaded from a storage section 208 into a Random Access Memory (RAM) 203. The RAM 203 also stores various programs and data necessary for system operation. The CPU 201, the ROM 202, and the RAM 203 are connected to each other via a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.
The following components are connected to the I/O interface 205: an input portion 206 including a keyboard, a mouse, and the like; an output section 207 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 208 including a hard disk and the like; and a communication section 209 including a network interface card such as a LAN card, a modem, or the like. The communication section 209 performs communication processing via a network such as the internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 210 as necessary, so that a computer program read out therefrom is mounted into the storage section 208 as necessary.
In particular, according to an embodiment of the present invention, the processes described below with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209 and/or installed from the removable medium 211. When executed by the Central Processing Unit (CPU) 201, the computer program performs the various functions defined in the system of the present application.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present invention, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the embodiments below. For example, the electronic device may implement the steps shown in fig. 3 to 5.
The principle and implementation details of the technical solution of the embodiments of the present invention are explained in detail below.
As described above, the key of the Retinex method is to estimate the illumination component in the image. To this end, the related art proposes a model algorithm based on gradient truncation and a model algorithm based on total variation, both of which start from the gradient matrix of the image. For example, the image may be regarded as a two-dimensional function, with each pixel corresponding to a function value; computing the gradient of the function at each pixel position yields a gradient matrix corresponding to the image. The function value may have different meanings for different image representation models. For example, in the Red-Green-Blue (RGB) model based on the three primary colors, the function value may include the intensity values of a pixel in the three RGB channels; in the Hue-Saturation-Value (HSV) model, the function value may include the hue, saturation and brightness values of a pixel in the three HSV channels.
The principle of the gradient-truncation model algorithm is as follows: the illumination component of the image function values (the shading on the image) is regarded as gentle and smooth, so its gradient is small over the whole image. The parts of the image gradient matrix smaller than a truncation parameter (a positive constant) are therefore removed (set to zero); the pixels with zero gradient correspond to the shadow, and the pixels corresponding to the non-zero part of the truncated gradient matrix can be approximated as the shadow-free image. However, this algorithm does not take the reflectivity component of the image function values into consideration, and therefore does not fully realize the design idea of the Retinex method.
The principle of the total-variation model algorithm is as follows: the reflectivity component of the image function values (the physical color of the image) is regarded as piecewise constant, so its gradient is zero at most pixels over the whole image while the non-zero values are large. An objective function is therefore constructed from a total variation term (the sum of the absolute values of the gradients of a variable matrix) and solved inversely, starting from the image gradient matrix as the initial value of the variable matrix. When the objective function attains its minimum, the solution of the variable matrix can be regarded as the gradient matrix of the reflectivity component of the image function values, and the corresponding image can be approximated as shadow-free. However, this algorithm does not consider the illumination component, and the solution of the objective function easily falls into one of two extremes: either the shadow is not removed, or useful information in the image is removed as well.
Therefore, neither the above gradient-truncation model nor the total-variation model can accurately and effectively estimate the illumination component in the image, and neither achieves a good image shadow-removal effect.
In order to solve the above problem, embodiments of the present invention provide an image processing method and apparatus, a computer-readable storage medium, and an electronic device.
FIG. 3 is a flowchart illustrating an image processing method according to an exemplary embodiment. As shown in FIG. 3, the image processing method may be performed by any computing device and may include the following steps 310-350.
In step 310, a gradient matrix is obtained based on a pixel matrix of the image; the elements of the pixel matrix correspond to values of pixels in the image under a predetermined channel, and the elements of the gradient matrix correspond to gradient values of the elements in the pixel matrix.
The embodiment of the invention is based on the Retinex theory of the human visual system. The theory holds that the human visual system (consisting of the human brain and eyes) automatically weakens the light and shade changes of an observed scene so as to perceive the actual color of an object. If the image is regarded as a two-dimensional function defined on a region Ω ⊂ R², denoted i(x, y) or simply i, where (x, y) is a pixel coordinate, then according to the Retinex theory the value i of each pixel of the image can be divided into two components, a reflectivity component r and an illumination component e:

i = r + e (1)
the value of the function i may have different meanings for different image representation models, for example, in an RGB model, the function value may include intensity values corresponding to pixel points in three RGB channels, respectively; for another example, in the HSV model, the function value may include a chromatic value, a saturation value, and a brightness value corresponding to the pixel point under three HSV channels. The embodiment of the present invention does not limit this, and this embodiment only takes the processing under the current predetermined channel as an example for description, and the processing principle of other channels is similar, which is not described herein again.
Next, the main purpose of the embodiment of the present invention is to obtain the reflectivity r and the illumination e through the input image i. For this reason, on the basis of the above formula (1), the embodiment of the present invention introduces the following two assumptions. Firstly, the reflectivity r is assumed to represent the physical color of an image and is a slice constant value; next, assume that the illumination e represents the shadow of the image, and is gentle and smooth.
Step 310 obtains a gradient matrix ∇i based on the pixel matrix i of the image. Each element of the pixel matrix i corresponds to the value of a pixel of the image under a predetermined channel, and each element of the gradient matrix ∇i corresponds to the gradient value of an element in the pixel matrix i. In one embodiment, the gradient value at each pixel may be obtained by discrete derivation (e.g., differences between adjacent pixels) along the horizontal and vertical directions of the pixel matrix i, yielding two gradient matrices corresponding to the horizontal and vertical directions respectively. Accordingly, the subsequent steps that process the gradient matrix may each be performed twice; since the two passes are similar in principle, the following description uses a single gradient matrix as an example to avoid repetition.
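As a concrete illustration (this sketch is not taken from the patent itself; the forward-difference scheme and the zero padding at the border are assumptions), the horizontal and vertical gradient matrices described above can be computed as differences between adjacent pixels:

```python
def gradients(pixels):
    """Horizontal and vertical gradient matrices of a 2-D pixel matrix,
    computed as forward differences between adjacent pixels.
    The last column/row is padded with 0 so each output matches the input shape."""
    h = len(pixels)
    w = len(pixels[0])
    gx = [[pixels[y][x + 1] - pixels[y][x] if x + 1 < w else 0
           for x in range(w)] for y in range(h)]
    gy = [[pixels[y + 1][x] - pixels[y][x] if y + 1 < h else 0
           for x in range(w)] for y in range(h)]
    return gx, gy

# A vertical intensity edge produces a large horizontal gradient and
# no vertical gradient.
i = [[10, 10, 200],
     [10, 10, 200]]
gx, gy = gradients(i)
# gx == [[0, 190, 0], [0, 190, 0]], gy == [[0, 0, 0], [0, 0, 0]]
```

Running the truncation of step 330 separately on gx and gy corresponds to the two passes mentioned above.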
In step 330, the gradient values in the gradient matrix are truncated according to a preset truncation parameter to obtain a truncated gradient matrix.
Applying formula (1) to the gradient matrix ∇i obtained in step 310 gives:

∇i = ∇r + ∇e (2)

wherein ∇r is the component of ∇i corresponding to the reflectivity, and ∇e is the component corresponding to the illumination.

Based on the above assumption that the reflectivity r is piecewise constant, ∇r is zero at most pixels while its non-zero values are large; based on the assumption that the illumination e is gentle and smooth, ∇e is small at every pixel. Step 330 may therefore use a truncation function δ_t to remove the small part of ∇i; the truncated gradient matrix δ_t(∇i) obtained by this gradient truncation approximates the component ∇r corresponding to the reflectivity:

δ_t(∇i) ≈ ∇r (3)
In one embodiment, the truncation function δ_t with truncation parameter t can be defined as:

δ_t(x) = x, if |x| ≥ t; 0, if |x| < t (4)

The truncation parameter t is an empirical value; for pixel values in the interval [0, 255], t may be chosen from [10, 20].
In step 350, the total variation model is adjusted according to the truncated gradient matrix, and a reflectivity matrix of the image is acquired based on the adjusted total variation model, wherein the elements of the reflectivity matrix correspond to the reflectivity components of the pixels in the image under a predetermined channel.
According to the above principle, for a given image i, if the reflectivity component r can be recovered by inverse solution, the image with shadow removed is obtained; meanwhile, subtracting the reflectivity component r from the image i based on formula (1) gives the illumination component e of the image, i.e., the shadow image.
The inverse solution is conventionally performed with the classical total variation model:

r = argmin_u Σ_Ω ( α|∇u| + |∇u − ∇i| )    (5)

where argmin_u f(u) denotes the set of variables u minimizing f(u), Ω denotes the domain of the image, the summation symbol denotes summation over the entire image domain, u denotes the variable matrix of the model function, ∇u denotes the variable gradient matrix obtained by taking the gradient of the elements of the variable matrix, |∇u| denotes taking the absolute value of ∇u, α is a proportion adjustment parameter, and |∇u − ∇i| denotes taking the absolute value of the difference between the variable gradient matrix ∇u and the gradient matrix ∇i.

To solve the total variation model of equation (5), a Bregman iterative algorithm may be used, with auxiliary variables introduced for the iteration. However, the result of solving r with this model is very sensitive to the value of the parameter α. The larger α is, the more the total variation term |∇u| contributes to the result and the smoother the resulting image, so too large an α removes useful information from the image; the smaller α is, the more |∇u − ∇i| contributes and the closer the result is to the original image, so too small an α achieves no shadow removal effect.
In step 350, the total variation model is adjusted by introducing the truncated gradient matrix, and the reflectivity matrix is solved based on the adjusted model, so that the influence of the two components, reflectivity r and illumination e, on the calculation result is reflected simultaneously: the solved reflectivity matrix reflects the real reflectivity as closely as possible while staying as close as possible to the truncated gradient matrix δ_t(∇i).

In one embodiment, the above total variation model may be adjusted by introducing a gradient truncation measure. The gradient truncation measure here may be, for example, the difference between the gradient ∇u of the variable matrix u and the truncated gradient matrix δ_t(∇i). The adjusted total variation model then comprises both the total variation term and the gradient truncation measure: the total variation term makes the solution reflect the real reflectivity as closely as possible, while the gradient truncation measure keeps the solution as close as possible to the truncated gradient matrix δ_t(∇i), from which the influence of illumination has been removed. A solution closer to the real reflectivity is therefore obtained, achieving a better shadow removal effect.
In one embodiment, the objective function f(u) of the variable matrix may be constructed as the sum of the total variation term and the gradient truncation measure:

f(u) = Σ_Ω ( α|∇u| + |∇u − δ_t(∇i)| )    (6)

where Ω represents the domain of the image, the summation symbol represents summation over the entire image, u represents the variable matrix to be solved, ∇u represents the variable gradient matrix obtained by taking the gradient of the elements of the variable matrix, |∇u| represents taking the absolute value of ∇u, α is a proportion adjustment parameter, δ_t(∇i) represents the truncated gradient matrix, and |∇u − δ_t(∇i)| represents taking the absolute value of the difference between the variable gradient matrix and the truncated gradient matrix. Here α|∇u| can be regarded as the total variation term and |∇u − δ_t(∇i)| as the gradient truncation measure, with α adjusting the relative weight of the two terms in the solution; it may, for example, be set to the constant 1.

In this way, the reflectivity matrix can be obtained by solving the following model:

r = argmin_u f(u)    (7)

where r represents the reflectivity matrix and argmin_u f(u) represents the set of variables u that minimizes f(u).
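For illustration only, the objective of equation (6) can be evaluated numerically as below. This is our own sketch, not the patent's code: the forward-difference gradient and the helper names are assumptions.

```python
import numpy as np

def grad(m):
    # forward differences in x (horizontal) and y (vertical); the last
    # column/row of each gradient matrix is zero (replicate boundary)
    gx = np.diff(m, axis=1, append=m[:, -1:])
    gy = np.diff(m, axis=0, append=m[-1:, :])
    return gx, gy

def objective(u, tx, ty, alpha=1.0):
    """Sum over the image domain of the total variation term alpha*|grad u|
    plus the gradient truncation measure |grad u - truncated gradient|,
    where (tx, ty) are the two truncated gradient matrices."""
    ux, uy = grad(u)
    tv = alpha * (np.abs(ux).sum() + np.abs(uy).sum())
    fit = np.abs(ux - tx).sum() + np.abs(uy - ty).sum()
    return tv + fit
```

For any constant u the total variation term vanishes, so the objective reduces to the total mass of the truncated gradients.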
With the image processing method provided by this embodiment, the total variation model is adjusted by introducing the truncated gradient matrix and the reflectivity matrix is obtained based on the adjusted model, so that the influence of the reflectivity and illumination components on the result is reflected simultaneously: the obtained reflectivity matrix reflects the real reflectivity as closely as possible while the influence of illumination is removed as far as possible, achieving a better shadow removal effect.
Compared with the conventional total variation model, introducing the gradient truncation measure retains the useful gradient information before the total variation is applied, thereby overcoming the sensitivity to the proportion adjustment parameter α. For example, even when α is zero, the objective function f(u) of equation (6) still embodies at least the information of the truncated gradient matrix, whereas the conventional total variation model of equation (5) would in that case have no processing effect on the image at all.
The following describes an exemplary process for solving the adjusted total variation model by way of example, but the process is only for illustrative purposes and is not intended to limit the scope of the present invention.
In one embodiment, to solve the total variation model represented by equations (6)-(7), two auxiliary variables d and g may be introduced:

d = ∇u    (8)
g = ∇u − δ_t(∇i)    (9)
Based on these two auxiliary variables, the calculation can follow the Bregman iterative method. The basic idea of the Bregman iterative algorithm here is that, after the auxiliary variables d and g are introduced, solving the above total variation model is converted into finding the triple of variables (u, d, g) that minimizes the objective function of equation (6); each update fixes two of the variables and updates the third, alternating until a variable u meeting a preset iteration-ending condition is obtained.
An exemplary process for solving the above-described fully-variant model using the Bregman iterative algorithm is shown in FIG. 4, and includes steps 410-450.
In step 410, initialization: u^0 = i, d^0 = g^0 = b^0 = c^0 = 0, k = 0.

Here u represents the variable matrix to be solved and u^0 its initial value, set equal to the pixel matrix i of the image to be processed; b and c represent the dual variables corresponding to the auxiliary variables d and g respectively, and the initial values of the four variables d, g, b and c are all zero; k represents the number of the iterative computation.
In step 420, the following Poisson equation is solved to obtain u^(k+1):

(λ + μ) Δu^(k+1) = λ div(d^k − b^k) + μ div(g^k − c^k + δ_t(∇i))    (10)

Based on the idea of the Bregman iterative algorithm, d^k and g^k are fixed first to obtain the updated u^(k+1); Poisson equation (10) above is derived from equations (6)-(9).

Here Δ is the Laplace operator, and div denotes the standard divergence operator, i.e., the negative transpose (adjoint) of the gradient operator ∇; λ and μ are constants introduced by the Bregman algorithm and may be any positive numbers, for example both set to 1.
It is noted here that, since ∇i has components in both the x and y directions, the four variables d, g, b and c each likewise have two components, e.g., d = (d_x, d_y), and for a vector function p = (p_x, p_y) the divergence is div p = ∂p_x/∂x + ∂p_y/∂y. Solving equation (10) therefore handles the two directional components together and correspondingly yields the updated variable matrix u^(k+1)(x, y).
In one embodiment, Poisson's equation (10) may be solved using a standard discrete cosine transform.
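As a sketch of this step (our own; the helper name and the use of scipy's `dctn`/`idctn` are assumptions, not from the patent), a Poisson equation of the form c·Δu = rhs with Neumann (replicate) boundary conditions is diagonalised by the DCT, so the solve reduces to a pointwise division in the transform domain:

```python
import numpy as np
from scipy.fft import dctn, idctn

def solve_poisson_dct(rhs, coeff):
    """Solve coeff * laplacian(u) = rhs under Neumann (replicate) boundary
    conditions. The orthonormal DCT-II diagonalises this Laplacian; mode
    (ky, kx) has eigenvalue 2cos(pi*ky/h) + 2cos(pi*kx/w) - 4."""
    h, w = rhs.shape
    yy = np.cos(np.pi * np.arange(h) / h)
    xx = np.cos(np.pi * np.arange(w) / w)
    denom = coeff * (2.0 * (yy[:, None] + xx[None, :]) - 4.0)
    denom[0, 0] = 1.0               # zero eigenvalue: the mean is free
    u_hat = dctn(rhs, norm='ortho') / denom
    u_hat[0, 0] = 0.0               # fix the free constant (zero mean)
    return idctn(u_hat, norm='ortho')
```

The solution is determined only up to an additive constant; here the mean is pinned to zero, and the caller may restore whatever mean is appropriate (e.g., the mean of the input image).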
In step 430, the two auxiliary variables d and g are updated by:

d^(k+1) = shrink(∇u^(k+1) + b^k, α/λ)    (11)
g^(k+1) = shrink(∇u^(k+1) − δ_t(∇i) + c^k, 1/μ)    (12)

Based on the idea of the Bregman iterative algorithm, formula (11) is derived by fixing u^(k+1) and g^k to obtain the updated d^(k+1), and formula (12) by fixing u^(k+1) and d^(k+1) to obtain the updated g^(k+1).

Here shrink represents the introduced shrinkage (soft-thresholding) function, whose expression for argument z and threshold τ is:

shrink(z, τ) = (z / |z|) · max(|z| − τ, 0)    (13)
In step 440, the two dual variables b and c are updated by:

b^(k+1) = b^k + ∇u^(k+1) − d^(k+1)
c^(k+1) = c^k + ∇u^(k+1) − δ_t(∇i) − g^(k+1)
In step 450, whether the current iteration satisfies the ending condition is determined by:

||u^(k+1) − u^k||_2 / ||u^(k+1)||_2 ≤ ε    (14)

where the subscript 2 on the double bars denotes the 2-norm of a vector, i.e., the square root of the sum of the squares of its elements; ε is a constant, which may for example be set to 0.00001. Equation (14) states that the relative distance between the matrix u^(k+1) obtained in the (k+1)-th iteration and the matrix u^k obtained in the k-th iteration is no greater than ε.
If u^(k+1) and u^k satisfy the above condition, the iteration stops and the last u^(k+1) is output as the reflectivity matrix r; otherwise k is assigned k + 1 and the process returns to step 420 for the next iteration.
According to the above example calculation process of the present invention, the auxiliary variables d and g and the variable matrix u are updated alternately: d and g are fixed to update u, u and g are fixed to update d, and u and d are fixed to update g. Each update step therefore has an explicit solution, and a variable matrix u meeting the iteration ending condition is finally obtained.
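For illustration only, the alternating updates of steps 410-450 can be sketched end-to-end as follows. This is our own sketch, not the patent's reference implementation: the function and variable names, the forward-difference gradients with replicate boundary, the DCT-based solve of the Poisson equation (10), and the parameter defaults are all assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def grad(u):
    # forward differences with replicate boundary (last column/row -> 0)
    return (np.diff(u, axis=1, append=u[:, -1:]),
            np.diff(u, axis=0, append=u[-1:, :]))

def div(px, py):
    # negative adjoint of grad above
    dx = np.empty_like(px)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy = np.empty_like(py)
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def shrink(z, tau):
    # soft-thresholding function of equation (13)
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def solve_reflectance(i, t=15.0, alpha=0.3, lam=1.0, mu=1.0,
                      eps=1e-5, max_iter=200):
    i = np.asarray(i, dtype=float)
    h, w = i.shape
    ix, iy = grad(i)
    tx = np.where(np.abs(ix) > t, ix, 0.0)   # truncated gradients, eq. (4)
    ty = np.where(np.abs(iy) > t, iy, 0.0)
    u = i.copy()
    dx = dy = gx = gy = bx = by = cx = cy = np.zeros((h, w))
    # DCT eigenvalues of the Neumann Laplacian, for the Poisson solve (10)
    ey = 2.0 * np.cos(np.pi * np.arange(h) / h) - 2.0
    ex = 2.0 * np.cos(np.pi * np.arange(w) / w) - 2.0
    denom = (lam + mu) * (ey[:, None] + ex[None, :])
    denom[0, 0] = 1.0                        # free constant handled below
    for _ in range(max_iter):
        rhs = (lam * div(dx - bx, dy - by)
               + mu * div(gx - cx + tx, gy - cy + ty))
        u_hat = dctn(rhs, norm='ortho') / denom
        u_hat[0, 0] = dctn(i, norm='ortho')[0, 0]   # pin mean to mean of i
        u_new = idctn(u_hat, norm='ortho')
        ux, uy = grad(u_new)
        dx, dy = shrink(ux + bx, alpha / lam), shrink(uy + by, alpha / lam)
        gx, gy = shrink(ux - tx + cx, 1.0 / mu), shrink(uy - ty + cy, 1.0 / mu)
        bx, by = bx + ux - dx, by + uy - dy
        cx, cy = cx + ux - tx - gx, cy + uy - ty - gy
        done = np.linalg.norm(u_new - u) <= eps * np.linalg.norm(u_new)  # (14)
        u = u_new
        if done:
            break
    return u
```

On a synthetic image consisting of a sharp reflectance step plus a gentle illumination ramp, this sketch keeps the step (whose gradient exceeds t) and flattens the ramp (whose gradient is truncated).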
Fig. 5 is a flowchart illustrating an image processing method according to another exemplary embodiment. As shown in fig. 5, the image processing method may be performed by any computing device and may include the following steps 510-560.
In step 510, each channel of the image is computed separately to obtain a corresponding pixel matrix.
The elements of the pixel matrix described here may correspond to the function values i of each pixel in the image. The function value i may have different meanings for different image representation models.
In one embodiment, step 510 may be calculated based on the RGB channels of the image, respectively, resulting in three corresponding intensity value matrices. Taking the image shown in fig. 6 as an example, the three intensity value matrices are linearly adjusted to the interval of [0,255], so as to obtain the three images illustrated in fig. 7-9.
In an embodiment, step 510 may also be performed by separately calculating based on three HSV channels of the image, to obtain a corresponding chromaticity value matrix, a saturation value matrix, and a luminance value matrix.
In step 520, discrete derivative operations are performed on the pixel matrix in both horizontal and vertical directions to obtain two gradient matrices.
The discrete derivation operation here may be taking the difference between the function values of two adjacent pixels. Taking the element i(x, y) of the pixel matrix as an example, the element at the corresponding position in the horizontal gradient matrix takes the value i(x+1, y) − i(x, y), and the element at the corresponding position in the vertical gradient matrix takes the value i(x, y+1) − i(x, y).
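A brief numpy-based sketch of the discrete derivation of step 520 (the helper name is our assumption; the last column/row is padded by replication, so the boundary gradient is zero):

```python
import numpy as np

def gradient_matrices(i):
    """Forward-difference gradients in the horizontal and vertical
    directions; replicate padding makes the last column/row of the
    respective gradient matrix zero."""
    gx = np.diff(i, axis=1, append=i[:, -1:])   # horizontal: i(x+1,y) - i(x,y)
    gy = np.diff(i, axis=0, append=i[-1:, :])   # vertical:   i(x,y+1) - i(x,y)
    return gx, gy

i = np.array([[1.0, 3.0], [6.0, 10.0]])
gx, gy = gradient_matrices(i)
```

Here `gx` is [[2, 0], [4, 0]] and `gy` is [[5, 7], [0, 0]].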
Taking the image corresponding to the R-channel intensity value matrix shown in fig. 7 as an example, the images corresponding to the two gradient matrices obtained after the calculation in step 520 are shown in fig. 10 and 11.
In step 530, the gradient values in the gradient matrix are truncated according to a preset truncation parameter, so as to obtain a truncated gradient matrix.
Step 530 may perform gradient truncation using the truncation function of equation (4), where the value of the parameter t may be set empirically. For example, for pixel values in the range [0,255], the truncation parameter t may be selected from [10,20].
Taking the images shown in fig. 10 and 11 as an example, the images corresponding to the two truncated gradient matrices obtained after performing step 530 on the corresponding gradient matrices are shown in fig. 12 and 13.
In step 540, a reflectivity matrix of the image is obtained based on the full variation model adjusted by the truncated gradient matrix.
Step 540 may solve the reflectivity matrix based on the fully variant models of equations (6) - (7) using Bregman iterative processes corresponding to equations (8) - (14).
In one embodiment, when step 510 is based on separate calculations for the three RGB channels of the image, the calculations of steps 520-540 can be performed for R, G, and B, respectively, and the three solved reflectivity matrices are linearly adjusted to [0,255] to obtain the images shown in fig. 14-16, respectively.
In one embodiment, when step 510 is based on separate calculations for three channels of HSV for an image, the calculations of steps 520-540 may be performed for only the V (luminance value) channel, solving for a reflectivity matrix.
In step 550, the reflectivity matrices for the respective channels are combined to obtain a de-shadowed image.
In one embodiment, when step 540 results in three reflectivity matrices corresponding to the respective RGB channels, the three reflectivity matrices are combined to result in a de-shadowed image, for example as shown in FIG. 17.
In one embodiment, when step 540 only results in the reflectance matrix corresponding to the V-channel, the reflectance matrix is combined with the chrominance value matrix and the saturation value matrix of step 510 to result in a de-shaded image, i.e., the r-component of image i.
In step 560, the reflectivity matrix is subtracted from the pixel matrix of the image to obtain a shadow matrix of the image, and the shadow matrices of the corresponding channels are combined to obtain the shadow of the image; the elements of the shadow matrix correspond to shadow components of pixels in the image under predetermined channels.
Based on the principle of equation (1), the reflectivity matrix r is subtracted from the pixel matrix i of the image to obtain a shadow matrix e of the image under each channel, as shown in fig. 18-20, for example. The shadow matrices of the channels are combined to obtain the illumination components (shadows) of the image, as shown in fig. 21.
With the image processing method provided by this embodiment, the total variation model is adjusted by introducing the truncated gradient matrix and the reflectivity matrix is obtained based on the adjusted model, so that the influence of the reflectivity and illumination components on the result is reflected simultaneously: the obtained reflectivity matrix reflects the real reflectivity as closely as possible while the influence of illumination is removed as far as possible, achieving a better shadow removal effect.
As can be seen from the images respectively illustrated in fig. 6, 17 and 21, by using the image processing method provided by the embodiment of the present invention, a good shadow removing effect can be achieved, and accordingly, a shadow image representing an illumination effect can be obtained.
The following are embodiments of the apparatus of the present invention that may be used to perform the above-described embodiments of the image processing method of the present invention. For details not disclosed in the embodiment of the apparatus of the present invention, please refer to the embodiment of the image processing method of the present invention.
Fig. 22 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. The image processing apparatus, as shown in fig. 22, includes but is not limited to: a gradient matrix module 2210, a gradient truncation module 2220, and a reflectivity acquisition module 2230.
The gradient matrix module 2210 is arranged to obtain a gradient matrix based on a pixel matrix of an image, elements of the pixel matrix corresponding to values of pixels in the image under predetermined channels, elements of the gradient matrix corresponding to gradient values of the elements in the pixel matrix.
The gradient truncation module 2220 is configured to truncate the gradient values in the gradient matrix according to a preset truncation parameter, so as to obtain a truncated gradient matrix.
The reflectivity obtaining module 2230 is configured to adjust a total variation model according to the truncated gradient matrix, and obtain a reflectivity matrix of the image based on the adjusted total variation model, where elements of the reflectivity matrix correspond to reflectivity components of pixels in the image in the predetermined channel.
With the image processing apparatus provided in the above embodiment, the total variation model is adjusted by introducing the truncated gradient matrix and the reflectivity matrix is obtained based on the adjusted model, so that the influence of the reflectivity and illumination components on the result is reflected simultaneously: the obtained reflectivity matrix reflects the real reflectivity as closely as possible while the influence of illumination is removed as far as possible, achieving a better shadow removal effect.
Fig. 23 is a block diagram illustrating an image processing apparatus according to another exemplary embodiment. As shown in fig. 23, based on the embodiment shown in fig. 22, the reflectivity obtaining module 2230 in the image processing apparatus includes, but is not limited to: an objective function unit 2231 and a function solving unit 2232, the image processing apparatus further includes, but is not limited to: a channel calculation module 2240 and a shadow acquisition module 2250.
The objective function unit 2231 is arranged to construct an objective function of the variable matrix based on the total variation term and the gradient truncation measure, wherein the total variation term represents a variable gradient matrix obtained by taking the gradient of the elements of the variable matrix, and the gradient truncation measure represents the difference between the variable gradient matrix and the truncated gradient matrix.
The function solving unit 2232 is arranged to obtain a solution of a variable matrix that makes the above objective function extremal to obtain a reflectivity matrix.
The channel calculating module 2240 is configured to calculate each channel of the image, obtain a corresponding pixel matrix, input the pixel matrix into the gradient matrix module 2210 to process the pixel matrix, and combine the reflectivity matrices of the corresponding channels obtained by the reflectivity obtaining module 2230 to obtain the reflectivity component of the image.
The shadow acquisition module 2250 is arranged to subtract the reflectivity matrix from the pixel matrix of the image to obtain a shadow matrix of the image, the elements of the shadow matrix corresponding to shadow components of pixels in the image under the predetermined channel.
In one embodiment, the objective function unit 2231 constructs the objective function of the variable matrix based on the total variation term and the gradient truncation measure as:

f(u) = Σ_Ω ( α|∇u| + |∇u − δ_t(∇i)| )

wherein Ω represents the domain of the image, the summation symbol represents summation over the domain, u represents the variable matrix, ∇u represents the variable gradient matrix, |∇u| represents taking the absolute value of ∇u, α is a proportion adjustment parameter, ∇i represents the gradient matrix of the image, δ_t(∇i) represents the truncated gradient matrix, and |∇u − δ_t(∇i)| represents taking the absolute value of the difference between the variable gradient matrix and the truncated gradient matrix.
Accordingly, in this embodiment, the function solving unit 2232 finds the reflectivity matrix by solving the following model:

r = argmin_u f(u)

wherein r represents the reflectivity matrix and argmin_u f(u) represents the set of variables u that minimizes f(u).
In one embodiment, the function solving unit 2232 may solve the above model to obtain the reflectivity matrix based on the Bregman iteration corresponding to equations (8) - (14).
In one embodiment, the channel calculating module 2240 is configured to calculate RGB channels of the image respectively, obtain corresponding intensity value matrices, input the intensity value matrices to the gradient matrix module 2210 to process the intensity value matrices, and combine the reflectivity matrices of the three channels obtained by the reflectivity obtaining module 2230 to obtain the image without shadow.
In another embodiment, the channel calculating module 2240 is configured to calculate the HSV channels of the image respectively to obtain a corresponding chrominance pixel matrix, a corresponding saturation pixel matrix, and a corresponding luminance pixel matrix, input the luminance pixel matrix into the gradient matrix module 2210 to be processed, and combine the reflectivity matrix obtained by the reflectivity obtaining module 2230 with the chrominance pixel matrix and the saturation pixel matrix to obtain the shadow-removed image.
With the image processing apparatus provided by this embodiment, the total variation model is adjusted by introducing the truncated gradient matrix and the reflectivity matrix is obtained based on the adjusted model, so that the influence of the reflectivity and illumination components on the result is reflected simultaneously: the obtained reflectivity matrix reflects the real reflectivity as closely as possible while the influence of illumination is removed as far as possible, achieving a better shadow removal effect.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied. The components shown as modules or units may or may not be physical units, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the disclosed solution.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiment of the present invention can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiment of the present invention.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements that have been described above and shown in the drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (8)

1. An image processing method, characterized in that the method comprises:
acquiring a gradient matrix based on a pixel matrix of an image, wherein elements of the pixel matrix correspond to values of pixels in the image under a preset channel, and elements of the gradient matrix correspond to gradient values of the elements in the pixel matrix;
truncating the gradient values in the gradient matrix according to a preset truncation parameter to obtain a truncated gradient matrix;
constructing an objective function of a variable matrix based on a total variation term and a gradient truncation measure, wherein the total variation term represents a variable gradient matrix obtained by taking the gradient of the elements of the variable matrix, the gradient truncation measure represents the difference between the variable gradient matrix and the truncated gradient matrix, and the objective function is:

f(u) = Σ_Ω ( α|∇u| + |∇u − δ_t(∇i)| )

wherein Ω represents the domain of the image, the summation symbol represents summation over the domain, u represents the variable matrix, ∇u represents the variable gradient matrix, |∇u| represents taking the absolute value of ∇u, α is a proportion adjustment parameter, ∇i represents the gradient matrix of the image, δ_t(∇i) represents the truncated gradient matrix, and |∇u − δ_t(∇i)| represents taking the absolute value of the difference between the variable gradient matrix and the truncated gradient matrix;

solving the following model to obtain a reflectivity matrix of the image, wherein elements of the reflectivity matrix correspond to reflectivity components of pixels in the image under the predetermined channel:

r = argmin_u f(u)

wherein r represents the reflectivity matrix and argmin_u f(u) represents the set of variables u that minimizes f(u).
2. The method of claim 1, wherein the predetermined channels comprise three channels, red, green, blue, RGB, the method further comprising:
and combining the reflectivity matrixes respectively obtained under the three channels to obtain a shadow-removed image.
3. The method of claim 1, wherein the predetermined channel comprises a luma channel in a chroma saturation luma (HSV) space, the method further comprising:
converting the image from a red, green and blue (RGB) space into a chroma saturation brightness (HSV) space;
acquiring a chrominance pixel matrix and a saturation pixel matrix of the image, wherein elements of the chrominance pixel matrix correspond to chrominance values of pixels in the image, and elements of the saturation pixel matrix correspond to saturation values of pixels in the image; and
and combining the chrominance pixel matrix, the saturation pixel matrix and the reflectivity matrix obtained under the brightness channel to obtain a shadow-removed image.
4. The method of claim 1, wherein the obtaining a gradient matrix based on the image pixel matrix comprises: and acquiring a transverse gradient matrix and a longitudinal gradient matrix of the image, wherein elements of the transverse gradient matrix correspond to differences between elements in the pixel matrix and transversely adjacent elements, and elements of the longitudinal gradient matrix correspond to differences between elements in the pixel matrix and longitudinally adjacent elements.
5. The method of claim 1, further comprising:
subtracting the reflectivity matrix from a pixel matrix of the image to obtain a shadow matrix of the image, elements of the shadow matrix corresponding to shadow components of pixels in the image under the predetermined channel.
6. An image processing apparatus, characterized in that the apparatus comprises:
a gradient matrix module configured to obtain a gradient matrix based on a pixel matrix of an image, elements of the pixel matrix corresponding to values of pixels in the image under a predetermined channel, and elements of the gradient matrix corresponding to gradient values of the elements in the pixel matrix;
the gradient truncation module is configured to truncate the gradient values in the gradient matrix according to a preset truncation parameter to obtain a truncated gradient matrix;
the reflectivity acquisition module comprises a target function unit and a function solving unit;
the objective function unit is configured to construct an objective function of a variable matrix based on a total variation term and a gradient truncation metric, wherein the total variation term represents a variable gradient matrix obtained by taking the gradient of the elements of the variable matrix, and the gradient truncation metric represents the difference between the variable gradient matrix and the truncated gradient matrix, the objective function being:

E(S) = Σ_Ω ( |∇S| + λ · |∇S − T(∇I)| )

wherein Ω represents the domain of the image and the summation symbol Σ_Ω represents summation over that domain, S represents the variable matrix, ∇S represents the variable gradient matrix, |∇S| represents taking the absolute value of ∇S, λ is a scale adjustment parameter, ∇I represents the gradient matrix of the image, T(∇I) represents the truncated gradient matrix, and |∇S − T(∇I)| represents taking the absolute value of the difference between the variable gradient matrix and the truncated gradient matrix;

the function solving unit is configured to solve the following model to obtain a reflectivity matrix of the image, wherein elements of the reflectivity matrix correspond to reflectivity components of pixels in the image under the predetermined channel:

R = argmin_S E(S)

wherein R represents the reflectivity matrix, and argmin_S E(S) denotes the variable matrix S that minimizes the value of E(S).
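The truncation and minimisation described in claim 6 can be sketched numerically. This is an illustrative solver under stated assumptions, not the patent's prescribed algorithm: the truncation operator T is taken as hard-thresholding (entries below the truncation parameter set to zero), the absolute value is smoothed as |x| ≈ √(x² + δ²), and the model is minimised by plain gradient descent:

```python
import numpy as np

def _fwd_diff(s):
    # Forward differences; the far border row/column stays zero.
    gx = np.zeros_like(s)
    gy = np.zeros_like(s)
    gx[:, :-1] = s[:, 1:] - s[:, :-1]
    gy[:-1, :] = s[1:, :] - s[:-1, :]
    return gx, gy

def _adjoint(vx, vy):
    # Transpose of the forward-difference operator (negative divergence),
    # needed to back-propagate the energy gradient onto S.
    ax = np.zeros_like(vx)
    ay = np.zeros_like(vy)
    ax[:, 0] = -vx[:, 0]
    ax[:, 1:] = vx[:, :-1] - vx[:, 1:]
    ay[0, :] = -vy[0, :]
    ay[1:, :] = vy[:-1, :] - vy[1:, :]
    return ax + ay

def truncate_gradient(g, eps=0.05):
    # Assumed truncation operator T: zero out gradient entries whose
    # magnitude is below the truncation parameter eps, so that small
    # variations (attributed to shadow) do not enter the data term.
    t = g.copy()
    t[np.abs(t) < eps] = 0.0
    return t

def solve_reflectivity(image, lam=2.0, eps=0.05, iters=300, lr=0.005, delta=0.1):
    """Minimise a smoothed surrogate of
        E(S) = sum_Omega( |grad S| + lam * |grad S - T(grad I)| )
    by gradient descent, with |x| approximated by sqrt(x^2 + delta^2)."""
    i = np.asarray(image, dtype=float)
    gx, gy = _fwd_diff(i)
    tx = truncate_gradient(gx, eps)
    ty = truncate_gradient(gy, eps)
    s = i.copy()  # start the variable matrix S at the image itself
    d_abs = lambda x: x / np.sqrt(x * x + delta * delta)  # smoothed d|x|/dx
    for _ in range(iters):
        sx, sy = _fwd_diff(s)
        grad_e = _adjoint(d_abs(sx) + lam * d_abs(sx - tx),
                          d_abs(sy) + lam * d_abs(sy - ty))
        s -= lr * grad_e
    return s  # approximate reflectivity matrix R
```

On a step-edge test image, the large edge gradient survives truncation and is retained in R, while sub-threshold variations are smoothed away; the step size and smoothing parameter are chosen conservatively for stability, not tuned for speed.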
7. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out an image processing method according to any one of claims 1 to 5.
8. An electronic device, comprising:
a processor; and
a memory having computer readable instructions stored thereon which, when executed by the processor, implement the image processing method of any of claims 1 to 5.
CN201810104756.9A 2018-02-02 2018-02-02 Image processing method, image processing device, computer-readable storage medium and electronic equipment Active CN110136070B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810104756.9A CN110136070B (en) 2018-02-02 2018-02-02 Image processing method, image processing device, computer-readable storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN110136070A CN110136070A (en) 2019-08-16
CN110136070B true CN110136070B (en) 2022-10-04

Family

ID=67566978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810104756.9A Active CN110136070B (en) 2018-02-02 2018-02-02 Image processing method, image processing device, computer-readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110136070B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110827339B (en) * 2019-11-05 2022-08-26 北京深测科技有限公司 Method for extracting target point cloud
CN113689354B (en) * 2021-08-30 2022-08-12 广州市保伦电子有限公司 Image shadow removing method and processing terminal
CN117745621A (en) * 2023-12-18 2024-03-22 优视科技(中国)有限公司 Training sample generation method and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679660A (en) * 2013-12-16 2014-03-26 清华大学 Method and system for restoring image
CN104504743A (en) * 2014-12-30 2015-04-08 深圳先进技术研究院 Method and system for reconstructing internal region-of-interest image
CN105608679A (en) * 2016-01-28 2016-05-25 重庆邮电大学 Image denoising method integrated with structure tensor and non-local total variation
CN106056558A (en) * 2016-06-29 2016-10-26 中国人民解放军国防科学技术大学 Target image recovery method based on laser longitudinal chromatographic image sequence
CN106355561A (en) * 2016-08-30 2017-01-25 天津大学 TV (total variation) image noise removal method based on noise priori constraint
CN107239729A (en) * 2017-04-10 2017-10-10 南京工程学院 A kind of illumination face recognition method based on illumination estimation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101572065B1 (en) * 2014-01-03 2015-11-25 현대모비스(주) Method for compensating image distortion and Apparatus for the same


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An L1-based variational model for Retinex theory and its application to medical images; Ma Wenye et al.; CVPR 2011; 2011-08-22; full text *
Infrared image denoising with a total variation model based on edge detection (基于边缘检测的全变分模型红外图像去噪); Xu Fan (许帆); 《中国科技信息》; 2018-01-29; full text *


Similar Documents

Publication Publication Date Title
US9661296B2 (en) Image processing apparatus and method
Nalpantidis et al. Stereo vision for robotic applications in the presence of non-ideal lighting conditions
Nguyen et al. Illuminant aware gamut‐based color transfer
WO2019085838A1 (en) Object rendering method and device, storage medium and electronic device
US9424231B2 (en) Image reconstruction method and system
CN110136070B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
Ding et al. Efficient dark channel based image dehazing using quadtrees
KR20210011322A (en) Video depth estimation based on temporal attention
KR20070090224A (en) Method of electronic color image saturation processing
Vazquez-Corral et al. A fast image dehazing method that does not introduce color artifacts
Yu et al. Underwater image enhancement based on color-line model and homomorphic filtering
Lee et al. Correction of the overexposed region in digital color image
GB2543775A (en) System and methods for processing images of objects
Pierre et al. Luminance-hue specification in the RGB space
Wang et al. Fast smoothing technique with edge preservation for single image dehazing
KR102614906B1 (en) Method and apparatus of image processing
Hong et al. Single image dehazing based on pixel-wise transmission estimation with estimated radiance patches
Si et al. A novel method for single nighttime image haze removal based on gray space
CN113096033B (en) Low-light image enhancement method based on Retinex model self-adaptive structure
JP2012028973A (en) Illumination light estimation device, illumination light estimation method, and illumination light estimation program
CN112927200B (en) Intrinsic image decomposition method and device, readable storage medium and electronic equipment
CN114266803A (en) Image processing method, image processing device, electronic equipment and storage medium
KR20230037953A (en) Method of color transform, electronic apparatus performing the method, and image sensor
Ji et al. An efficient method for scanned images by using color-correction and L0 gradient minimization
Voronin Modified local and global contrast enhancement algorithm for color satellite image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant