CN111275804A - Image illumination removal method and apparatus, storage medium, and computer device

Info

Publication number
CN111275804A
Authority
CN
China
Prior art keywords
image
illumination
processed
map
detail
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010050512.4A
Other languages
Chinese (zh)
Other versions
CN111275804B (en)
Inventor
林祥凯 (Lin Xiangkai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010050512.4A
Publication of CN111275804A
Application granted
Publication of CN111275804B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 5/70 Denoising; Smoothing (under G06T 5/00 Image enhancement or restoration)
    • G06T 15/04 Texture mapping (under G06T 15/00 3D image rendering)
    • G06T 15/80 Shading (under G06T 15/50 Lighting effects)
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/20221 Image fusion; Image merging (under G06T 2207/20 Special algorithmic details)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image illumination removal method and apparatus, a computer-readable storage medium, and computer equipment, wherein the method comprises the following steps: acquiring an image to be processed; filtering the image to be processed with a filter of a first kernel size to obtain a first low-frequency feature map from which illumination is removed; filtering the image to be processed with a filter of a second kernel size to obtain a second low-frequency feature map that retains illumination, the first kernel size being larger than the second kernel size; filtering the information of the second low-frequency feature map out of the image to be processed to obtain a second high-frequency detail map from which illumination is removed; and fusing the first low-frequency feature map and the second high-frequency detail map to obtain a target de-illumination map. The scheme provided by the application balances the illumination removal effect against image sharpness.

Description

Image illumination removal method and apparatus, storage medium, and computer device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image illumination removal method, an image illumination removal apparatus, a storage medium, and a computer device.
Background
In scenes such as three-dimensional object rendering and object recognition, there is often a need to remove the illumination in images. In three-dimensional object rendering, for example, after the texture information of the image to be processed is mapped onto a three-dimensional model, illumination information is applied to the model in the rendering stage. If the image to be processed also carries illumination information, the rendered three-dimensional object carries double illumination information and looks strongly incongruous, so the illumination in the image to be processed needs to be removed. The quality of illumination removal directly affects the effectiveness and reliability of subsequent image processing and analysis.
Conventional methods remove illumination mainly with the spherical harmonic lighting algorithm, but they can only remove certain kinds of illumination from an image. The illumination in real images is far more complicated, including scattered light, highlights, and other kinds of illumination, so the removal effect is poor.
Disclosure of Invention
Accordingly, it is necessary to provide an image illumination removal method, an apparatus, a computer-readable storage medium, and a computer device that address the technical problem of the poor illumination removal effect of the spherical harmonic lighting algorithm.
An image illumination removal method, comprising:
acquiring an image to be processed;
filtering the image to be processed with a filter of a first kernel size to obtain a first low-frequency feature map from which illumination is removed;
filtering the image to be processed with a filter of a second kernel size to obtain a second low-frequency feature map that retains illumination, the first kernel size being larger than the second kernel size;
filtering the information of the second low-frequency feature map out of the image to be processed to obtain a second high-frequency detail map from which illumination is removed; and
fusing the first low-frequency feature map and the second high-frequency detail map to obtain a target de-illumination map.
An image illumination removal apparatus, the apparatus comprising:
an illumination removal module, configured to acquire an image to be processed, and to filter the image to be processed with a filter of a first kernel size to obtain a first low-frequency feature map from which illumination is removed;
a detail extraction module, configured to filter the image to be processed with a filter of a second kernel size to obtain a second low-frequency feature map that retains illumination, the first kernel size being larger than the second kernel size, and to filter the information of the second low-frequency feature map out of the image to be processed to obtain a second high-frequency detail map from which illumination is removed; and
an image fusion module, configured to fuse the first low-frequency feature map and the second high-frequency detail map to obtain a target de-illumination map.
A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the image illumination removal method.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the image illumination removal method.
According to the image illumination removal method and apparatus, the computer-readable storage medium, and the computer device, filtering the image to be processed with a filter of a large kernel size removes as much of the illumination as possible, while filtering it with a filter of a small kernel size preserves as much of the illumination as possible. Filtering the information of the illumination-laden second low-frequency feature map out of the image to be processed therefore yields a second high-frequency detail map that retains only image detail. Blending this detail map into the illumination-free first low-frequency feature map balances illumination removal against image sharpness, so the fused de-illumination map exhibits both a good illumination removal effect and a clear visual appearance.
Drawings
FIG. 1 is a diagram of an application environment of the image illumination removal method in one embodiment;
FIG. 2 is a flowchart of the image illumination removal method in one embodiment;
FIG. 3 is a diagram of the images involved in filtering the image to be processed with filters of different kernel sizes in one embodiment;
FIG. 4 is a diagram of the de-illumination map obtained by fusing the first low-frequency feature map and the second high-frequency detail map in one embodiment;
FIG. 5 is a flowchart of removing illumination from the image to be processed with filters of different kernel sizes in one embodiment;
FIG. 6 is a comparison of illumination removal based on spherical harmonic lighting and based on filters in one embodiment;
FIG. 7 is a diagram of the images involved in matting the target de-illumination map in one embodiment;
FIG. 8 is a diagram of the images involved in extracting a texture map from the image to be processed in one embodiment;
FIG. 9a is a diagram of the images involved in removing illumination from the image to be processed based on the spherical harmonic lighting algorithm in one embodiment;
FIG. 9b is a diagram of the images involved in removing illumination from the image to be processed based on the spherical harmonic lighting algorithm in another embodiment;
FIG. 10 is a flowchart of removing illumination from the image to be processed based on the spherical harmonic lighting algorithm in one embodiment;
FIG. 11 is a flowchart of the image illumination removal method in one embodiment;
FIG. 12 is a flowchart of the image illumination removal method in another embodiment;
FIG. 13 is a flowchart of the image illumination removal method in still another embodiment;
FIG. 14 is a block diagram of the image illumination removal apparatus in one embodiment;
FIG. 15 is a block diagram of the image illumination removal apparatus in another embodiment;
FIG. 16 is a block diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
FIG. 1 is a diagram of an application environment of the image illumination removal method in one embodiment. Referring to fig. 1, the image illumination removal method is applied to an image illumination removal system. The image illumination removal system includes a terminal 110 and a server 120. The terminal 110 and the server 120 are connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers. The terminal 110 and the server 120 can be used separately to execute the image illumination removal method provided in the embodiment of the present application. The terminal 110 and the server 120 may also be cooperatively used to execute the image illumination removal method provided in the embodiment of the present application.
It should be noted that the embodiments of the present application involve two filters of different kernel sizes for filtering the image to be processed. A filter here suppresses noise in the target image while retaining the image's detail features as much as possible. The filter may be a standalone filter. In another embodiment, the filter may also be a convolutional layer of a CNN (Convolutional Neural Network) model. A convolutional neural network comprises convolutional layers and pooling layers. Each convolutional layer may contain several convolution kernels (also called filters), and each feature map of the previous layer is convolved with each convolution kernel to generate a feature map of the next layer. The scheme provided by the embodiments of the present application therefore relates to artificial-intelligence image processing technology. The image is de-illuminated based on artificial intelligence, and the processed image can be used for face recognition, three-dimensional object reconstruction, and the like.
As shown in FIG. 2, in one embodiment, an image illumination removal method is provided. This embodiment is illustrated mainly by applying the method to the computer device in FIG. 1, which may specifically be the terminal 110 or the server 120 above. Referring to FIG. 2, the image illumination removal method specifically comprises the following steps:
and S202, acquiring a graph to be processed.
The graph to be processed is an image to be processed by the image illumination removal method provided in the embodiment of the present application. The graph to be processed may be an image acquired in real time by an image acquisition device, an existing image crawled from a network, or an original graph such as a video frame image separated from a video, or a fusion graph obtained by fusing a plurality of original graphs, or a texture graph UVmap. The texture map is a two-dimensional picture for projecting texture information on a three-dimensional model to give the three-dimensional model a texture effect. The texture map may be drawn by drawing software such as 3Dmax by art workers, or may be extracted from an original map based on spatial information of a three-dimensional model.
Specifically, the computer device may acquire an image generated locally, and take the image as a to-be-processed drawing. The computer device may also crawl images from the network as pending graphs. The computer equipment can also acquire images transmitted by other computer equipment, and the images are taken as images to be processed.
S204: filter the image to be processed with the filter of the first kernel size to obtain a first low-frequency feature map from which illumination is removed.
Image filtering is an operation that suppresses noise in the image to be processed while preserving its detail features. Filtering may be performed in the spatial domain or in the frequency domain, and it can alter or enhance the image: some features can be emphasized, and unwanted components removed. The filter used may be linear, such as a box filter (boxFilter), mean filter (blur), or Gaussian filter (GaussianBlur), or non-linear, such as a median filter (medianBlur) or bilateral filter (bilateralFilter).
A filter has a corresponding filter matrix and kernel function. The filter matrix has three dimensions: length, width, and depth. The length and width are usually equal (both n) and can be specified manually. The depth m of the filter matrix equals the depth of the image to be processed (the number of feature-map channels): if the image is grayscale, m = 1; if it is an RGB color image, m = 3.
Filtering is a neighborhood operation. Given an image to be processed, each pixel of the output image is formed by weighted averaging over a small region of the input, i.e., the final output value of a pixel is determined by the values of the pixels around it. Mean filtering, for example, replaces the original pixel value with the average of the surrounding pixels. The kernel size of a filter defines the size of this neighborhood, specifically length × width (n × n); commonly used kernel sizes are 3 × 3, 5 × 5, and so on. The elements of the filter matrix are the weights used in the weighted average, and these weights are defined by the kernel function. Therefore, specifying a filter only requires specifying the single kernel-size parameter n.
In one embodiment, the filter comprises a bilateral filter, and the bilateral filter comprises a spatial domain kernel and a pixel value domain kernel. The method further comprises: determining the kernel elements of the spatial domain kernel according to the spatial proximity between pixels in the image to be processed; determining the kernel elements of the pixel value domain kernel according to the similarity between pixels in the image to be processed; and combining the kernel elements of the two kernels to obtain the bilateral filter.
The filter used in this embodiment is a bilateral filter, specifically a Gaussian bilateral filter, a weighted least squares (WLS) filter, a guided filter, or the like. A bilateral filter preserves edges while denoising and smoothing. Like other filters, it uses a weighted average: the intensity of a pixel is represented by a weighted average of the brightness values of the surrounding pixels, based on a Gaussian distribution. The difference is that the bilateral weights account not only for the Euclidean distance in the pixel spatial domain but also for the radiometric differences in the pixel range domain (such as the similarity between a pixel in the filter kernel and the central pixel, color intensity, and depth distance); both weights are considered when computing the central pixel.
The kernel function of the bilateral filter is the product of a spatial domain kernel (S) and a pixel range domain kernel (R). In flat regions of the image, pixel values change very little, the pixel-range weight approaches 1, and the spatial weight dominates, which is equivalent to Gaussian blurring. At the edges of the image, pixel values change sharply, the pixel-range weight increases, and edge information is preserved. Filtering with a bilateral filter therefore filters out the image's illumination information while retaining important information such as contours and edges.
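As an illustration of the bilateral filtering described above, the following sketch uses OpenCV, whose bilateralFilter exposes a neighborhood diameter d (playing the role of the kernel size n) together with the widths of the two kernels. The input path and parameter values are illustrative assumptions, not settings prescribed by the patent.

```python
# A minimal sketch, assuming OpenCV; path and parameters are illustrative.
import cv2

img = cv2.imread("face.png")  # hypothetical input image

# d: neighborhood diameter (the "kernel size" n);
# sigmaSpace: width of the spatial domain kernel (distance weight);
# sigmaColor: width of the pixel value domain kernel (similarity weight).
smoothed = cv2.bilateralFilter(img, d=15, sigmaColor=75, sigmaSpace=75)
```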
S206: filter the image to be processed with the filter of the second kernel size to obtain a second low-frequency feature map that retains illumination, the first kernel size being larger than the second kernel size.
The image to be processed can be decomposed into components of different frequencies; that is, it can be represented as Imag = base × light or Imag = base + light, where Imag is the image to be processed, base is the low-frequency part describing large-scale information, and light is the high-frequency component describing specific details. The frequency of an image indicates how strongly the gray level changes, i.e., the gradient of the gray level in the plane. The edges of an image are abrupt, rapidly changing parts, so they appear as high-frequency components in the frequency domain; image noise is mostly high-frequency; and the gently changing parts of the image are low-frequency components.
The computer device can separate the low-frequency and high-frequency components of the image to be processed with a filter. The kernel size of the filter directly determines the quality of the separation and the sharpness of the separated images. In theory, a filter with exactly the right kernel size would separate out the illumination in one pass and yield an illumination-free image. In practice such a kernel size is very hard to find, and a filter with a single kernel size rarely strikes a good balance between illumination removal and image sharpness.
To address this, the embodiments of the present application process the image to be processed with two filters of different kernel sizes, the first kernel size being much larger than the second. The filter of the first kernel size filters out as much of the illumination as possible at the expense of image sharpness; the filter of the second kernel size preserves as much detail as possible at the expense of illumination removal.
The low-frequency feature map is the low-frequency component base separated from the image to be processed by a filter, and the high-frequency detail map mentioned below is the high-frequency component light so separated. In the embodiments of the present application, a low-frequency feature map is the image obtained by filtering the image to be processed with a filter, and a high-frequency detail map is the image obtained by removing the information of the low-frequency feature map from the image to be processed.
The first low-frequency feature map is obtained by filtering the image to be processed with the filter of the first kernel size, and the first high-frequency detail map mentioned below is obtained by removing the information of the first low-frequency feature map from the image to be processed. Likewise, the second low-frequency feature map is obtained by filtering with the filter of the second kernel size, and the second high-frequency detail map is obtained by removing the information of the second low-frequency feature map from the image to be processed.
Specifically, the computer device dynamically determines the first and second kernel sizes according to the pixel size of the image to be processed; alternatively, after converting the image to a preset standard size, it uses preset kernel sizes common to standard-size images. In other words, corresponding first and second kernel sizes are preset for images of different pixel sizes. For example, when the bilateral filter is a Gaussian bilateral filter, the first kernel size may be 15 and the second 3; when it is a WLS filter, the first kernel size may be 10 and the second 0.1.
Further, the computer device filters the image to be processed with the filter of the first kernel size to obtain the first low-frequency feature map, and filters it with the filter of the second kernel size to obtain the second low-frequency feature map.
S208: filter the information of the second low-frequency feature map out of the image to be processed to obtain a second high-frequency detail map from which illumination is removed.
Specifically, the computer device removes the information of the first low-frequency feature map from the image to be processed to obtain the first high-frequency detail map, and removes the information of the second low-frequency feature map to obtain the second high-frequency detail map. The removal method depends on how the low- and high-frequency components are combined in the image. When the image follows the multiplicative model Imag = base × light, the computer device divides the pixel matrix of the image to be processed by that of the low-frequency feature map: light = Imag / base. When the image follows the additive model Imag = base + light, it subtracts the pixel matrix of the low-frequency feature map from that of the image to be processed: light = Imag - base. It should be understood that operations between images here refer to element-wise operations between the images' pixel matrices.
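The two removal rules above reduce to element-wise matrix operations. The following is a minimal sketch, assuming NumPy and an OpenCV bilateral filter produce the low-frequency map; the epsilon guard against division by zero is an added assumption.

```python
import cv2
import numpy as np

def high_frequency_detail(img, kernel_size, multiplicative=True):
    """Separate the high-frequency detail map: light = Imag / base under the
    multiplicative model, light = Imag - base under the additive model."""
    img = img.astype(np.float32)
    base = cv2.bilateralFilter(img, kernel_size, 75, 75)  # low-frequency map
    if multiplicative:
        return img / (base + 1e-6)  # epsilon avoids division by zero
    return img - base
```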
Referring to FIG. 3, which shows the images involved in filtering the image to be processed with filters of different kernel sizes in one embodiment. Taking a face image as an example, filtering the image to be processed a with the filter of the first kernel size yields the first low-frequency feature map b1, and removing the information of b1 from a yields the first high-frequency detail map b2. Filtering a with the filter of the second kernel size yields the second low-frequency feature map c1, and removing the information of c1 from a yields the second high-frequency detail map c2. As can be seen clearly in FIG. 3, the first low-frequency feature map b1 is blurred and the facial details are completely lost, but b1 has essentially removed the illumination information, which remains largely in the first high-frequency detail map b2. The second low-frequency feature map c1 retains as much illumination information as possible, leaving the second high-frequency detail map c2 with only the facial details.
It should be noted that the embodiments of the present application use a face image only as an example; the image illumination removal method provided by the present application applies to all images, and the image content is not limited to faces.
S210: fuse the first low-frequency feature map and the second high-frequency detail map to obtain a target de-illumination map.
Specifically, when the image to be processed follows the multiplicative model Imag = base × light, the computer device multiplies the pixel matrix of the first low-frequency feature map b1 by that of the second high-frequency detail map c2, fusing them into the target de-illumination map Imag_X = base_b1 × light_c2. When the image to be processed follows the additive model Imag = base + light, the computer device adds the pixel matrix of b1 to that of c2, fusing them into the target de-illumination map Imag_X = base_b1 + light_c2.
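Step S210 inverts the same decomposition. A sketch under the same assumptions, chaining the helper above; the kernel sizes are illustrative:

```python
def fuse(base_b1, light_c2, multiplicative=True):
    """Fuse the large-kernel low-frequency map b1 with the small-kernel
    high-frequency detail map c2: Imag_X = base_b1 * light_c2 (or +)."""
    return base_b1 * light_c2 if multiplicative else base_b1 + light_c2

# Usage sketch (multiplicative model, illustrative kernel sizes 15 and 3):
# base_b1 = cv2.bilateralFilter(img.astype(np.float32), 15, 75, 75)
# light_c2 = high_frequency_detail(img, 3)
# target = fuse(base_b1, light_c2)
```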
Referring to FIG. 4, which illustrates the de-illumination map obtained by fusing the first low-frequency feature map and the second high-frequency detail map in one embodiment. As shown in FIG. 4, the target de-illumination map d obtained by fusing the first low-frequency feature map b1 with the second high-frequency detail map c2 has more uniform color than the image to be processed a and is visibly free of illumination.
According to this image illumination removal method, filtering the image to be processed with a filter of a large kernel size removes as much of the illumination as possible, while filtering it with a filter of a small kernel size preserves as much of the illumination as possible. Filtering the information of the illumination-laden second low-frequency feature map out of the image to be processed therefore yields a second high-frequency detail map that retains only image detail. Blending this detail map into the illumination-free first low-frequency feature map balances illumination removal against image sharpness, so the fused de-illumination map exhibits both a good illumination removal effect and a clear visual appearance.
In one embodiment, the image illumination removal method further comprises: determining the pixel size of the image to be processed; determining the first kernel size of the filter according to the pixel size and a preset first proportional threshold; and determining the second kernel size of the filter according to the pixel size and a preset second proportional threshold, the first proportional threshold being greater than the second proportional threshold.
The first proportional threshold is a preset ratio used to determine the minimum value of the first kernel size from the pixel size of the image to be processed; the second proportional threshold is a preset ratio used to determine the maximum value of the second kernel size. Both thresholds lie between 0 and 1, with the first much larger than the second: the first may be close to 1 and the second close to 0.
Specifically, the pixel size of the image to be processed comprises a length and a width. When the image is square, the computer device determines the minimum of the first kernel size from the product of the side length and the preset first proportional threshold, and the maximum of the second kernel size from the product of the side length and the preset second proportional threshold, thereby obtaining the value ranges of both kernel sizes. For example, if the pixel size of the image is 1024 × 1024, the first proportional threshold is 80%, and the second is 20%, then the minimum first kernel size is ⌊1024 × 80%⌋ and the maximum second kernel size is ⌊1024 × 20%⌋, where ⌊·⌋ denotes rounding.
When the length and width of the image differ, the computer device determines the minimum of the first kernel size from the product of min(length, width) and the first proportional threshold, and the maximum of the second kernel size from the product of min(length, width) and the second proportional threshold. For example, for a pixel size of 256 × 1024 with thresholds of 80% and 20%, the minimum first kernel size is ⌊256 × 80%⌋ and the maximum second kernel size is ⌊256 × 20%⌋. It should be understood that the maximum of the first kernel size is min(length, width).
Further, the computer device selects a value within the range of the first kernel size as the final first kernel size, and a value within the range of the second kernel size as the final second kernel size. The selected value may be a random number within the range, or the median; the specific selection principle is not limited here. A sketch of this computation follows.
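A minimal sketch of the range computation, using the 80%/20% thresholds of the example above; taking the midpoint as the final value is just one of the selection principles the text leaves open.

```python
def kernel_size_ranges(height, width, r1=0.80, r2=0.20):
    """Value ranges derived from the pixel size: the first kernel size lies in
    [floor(side * r1), side], the second in [1, floor(side * r2)]."""
    side = min(height, width)
    return (int(side * r1), side), (1, int(side * r2))

(k1_lo, k1_hi), (k2_lo, k2_hi) = kernel_size_ranges(256, 1024)
k1 = (k1_lo + k1_hi) // 2  # midpoint as the final first kernel size
k2 = (k2_lo + k2_hi) // 2  # midpoint as the final second kernel size
```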
In this embodiment, the kernel sizes of the filters are determined dynamically and specifically from the pixel size of the image, so the specified filters better suit the current image to be processed and the filtering effect improves.
In one embodiment, the image illumination removal method further comprises: filtering the information of the first low-frequency feature map out of the image to be processed to obtain a first high-frequency detail map; determining a non-illuminated region in the first high-frequency detail map; matting the detail-clear map within the non-illuminated region of the first high-frequency detail map; and superimposing the detail-clear map onto the de-illumination map to obtain the target image.
The non-illuminated region is the region of the image to be processed that contains little illumination information, i.e., the region without reflections. It may be identified automatically by the computer device from the pixel values of the image, or be a fixed region of a pre-designated target for a specific type of image. The target may be static, such as a building, a tree, or a standing table and chair, or dynamic, such as a person, an animal, or an airplane. It may be a complete object, such as a human body or an entire building, or a local one, such as a face, a hand, or a foot. There may be one or more targets in the image to be processed, and one or more non-illuminated regions. For example, the mouth region and the eyebrow regions of a standard-size face image may each be non-illuminated regions.
The detail-clear map is the image content within the non-illuminated region of the first high-frequency detail map. That content carries little illumination information and good sharpness, and can be used to repair the target de-illumination map obtained by the fusion described above. Referring to FIG. 5, which shows the overall flow of removing illumination from the image to be processed in one embodiment. The flow comprises three steps: separation, fusion, and repair. That is, the first low-frequency feature map and the second high-frequency detail map are separated from the image to be processed, the separated maps are fused, and the fused target de-illumination map is repaired.
In one embodiment, the image illumination removal method further comprises: determining a non-illuminated region in the image to be processed; matting the detail-clear map within the non-illuminated region of the image to be processed; and fusing the detail-clear map with the de-illumination map to obtain the target image. That is, the detail-clear map may also be the image content within the non-illuminated region of the original image to be processed. In other words, the detail-clear map can be extracted either from the first high-frequency detail map or from the image to be processed.
Specifically, the computer device divides the pixels of the first high-frequency detail map into several pixel regions according to how much illumination information they contain. Region division is essentially pixel-level classification: the illumination content of the whole image is labeled by classifying its pixels. The unit of classification is not limited in the embodiments of the present invention; classification may proceed pixel by pixel or image block by image block, where an image block comprises several pixels. The computer device may encode the first high-frequency detail map into a region-division feature matrix and then decode that matrix to obtain the non-illuminated regions of the first high-frequency detail map.
The computer device then cuts out the image content within the non-illuminated regions to obtain the detail-clear map corresponding to each non-illuminated region. The detail-clear map may be an image of the same pixel size as the target de-illumination map. A single detail-clear map may be extracted from the first high-frequency detail map, containing the image content of several non-illuminated regions at once; in other words, the content of different non-illuminated regions may lie on one detail-clear map. As shown in FIG. 4, the detail-clear map b2' extracted from the first high-frequency detail map b2 contains the image content of the regions where the eyebrows and mouth lie. The computer device fuses the target de-illumination map d with the detail-clear map b2' to obtain the final target image e. The embodiments of the present application do not limit the image fusion algorithm; image fusion is in effect a kind of filtering, and different operators give different fusion effects. Common methods include Laplacian pyramid fusion and Poisson fusion.
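Of the fusion operators named above, Poisson fusion is available in OpenCV as seamless cloning. A hedged sketch follows; the file paths, full-patch mask, and insertion center are illustrative assumptions.

```python
import cv2
import numpy as np

detail = cv2.imread("detail_clear.png")  # hypothetical detail-clear map b2'
target = cv2.imread("delit_map.png")     # hypothetical de-illumination map d

# Mask marking the patch content to transfer; here the whole patch.
mask = np.full(detail.shape[:2], 255, dtype=np.uint8)
center = (target.shape[1] // 2, target.shape[0] // 2)  # insertion point

# Poisson fusion (seamless cloning) of the detail patch into the target map.
result = cv2.seamlessClone(detail, target, mask, center, cv2.NORMAL_CLONE)
```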
Referring to FIG. 6, which compares the effect of removing illumination based on spherical harmonic lighting with removal based on filters in one embodiment. Compared with the image a' obtained by removing illumination from the image to be processed a with spherical harmonic lighting, the de-illumination image e obtained with the method provided by the embodiments of the present application further smooths the image and removes the illumination information while retaining the details of the image to be processed.
In this embodiment, after the first low-frequency feature map and the illumination-removed second high-frequency detail map separated from the image to be processed are fused, the fused target de-illumination map is further repaired with the detail-clear map of the non-illuminated region, so the final image achieves both good illumination removal and a clear visual effect.
In one embodiment, determining the non-illuminated region in the first high-frequency detail map comprises: converting the first high-frequency detail map to the same size as a preset region division map; overlapping the first high-frequency detail map and the region division map of equal size, the region division map comprising at least one marked region; and determining each region of the first high-frequency detail map that coincides with a marked region as a non-illuminated region.
The region division map is a preset image used to determine non-illuminated regions. Each region division map applies to a specific type and size of image; that is, determining non-illuminated regions in the image to be processed with a region division map suits images of a specific type and size, such as standard-size face images.
In the region division map, the regions where one or more targets respectively lie are designated in advance and are called marked regions. A region division map may contain several marked regions at once, or only one. When it contains only one marked region, several detail-clear maps may be extracted from the first high-frequency detail map. In this embodiment, each region division map contains exactly one marked region.
Specifically, the computer device converts the first high-frequency detail map to the same size as the preset region division map, ensuring that the image content of the first high-frequency detail map that falls inside the marked region can be located and extracted through the overlapping positions. As shown in FIG. 7, overlapping the image to be processed a with the region division map x1 for the left eyebrow of a face image extracts the detail-clear map a1; overlapping a with the region division map x2 for the mouth extracts the detail-clear map a2; and overlapping a with the region division map x3 for the right eyebrow extracts the detail-clear map a3. The computer device fuses the target de-illumination map d with a1, a2, and a3 to obtain the final target image e.
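When the region division map is a binary mask of the same size, the overlap reduces to a masked copy. A minimal sketch under that assumption:

```python
import numpy as np

def mat_detail_clear(detail_map, region_mask):
    """Keep the content of the marked (non-illuminated) region, zero elsewhere.
    detail_map: H x W x 3; region_mask: H x W, nonzero inside the region."""
    assert detail_map.shape[:2] == region_mask.shape
    return np.where(region_mask[..., None] > 0, detail_map, 0)
```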
In one embodiment, the low-frequency feature maps, high-frequency detail maps, detail-clear maps, de-illumination maps, and so on involved in removing illumination from the image to be processed have a uniform size, which may be a preset standard size.
In this embodiment, determining the non-illuminated region in the first high-frequency detail map with a region division map amounts to matting the first high-frequency detail map according to the pre-specified marked regions, so the processing logic is simple and the efficiency of image illumination removal improves.
In one embodiment, the image to be processed comprises a texture map, and acquiring the image to be processed comprises: acquiring a three-dimensional model and an original image of a target object; determining the texture coordinate index corresponding to each pixel in the original image; projecting the point cloud data of the three-dimensional model onto the corresponding pixels of the original image according to the pose of the camera used to capture the original image; and extracting the texture map from the original image onto which the point cloud data is projected, based on the texture coordinate index.
The target object may be a virtual object in a virtual scene, or an object, animal, person, or the like in a real scene. The three-dimensional model of the target object may be drawn manually with drawing software such as 3Dmax, or built automatically from images of the target object taken from multiple viewing angles. In a specific scene, for example, the terminal may prompt the user to take facial images from multiple angles, generate a corresponding three-dimensional avatar from them, and use the avatar for entertainment interaction. It should be understood that in such a scene the terminal can directly determine the camera pose corresponding to each image when capturing the multi-view images. There may be one or more original images of the target object; this embodiment takes a single original image as an example.
A texture is actually a two-dimensional array whose elements are color values. A single color value is called a texel. Each texel has a unique address in the texture, consisting of a row and a column, denoted U and V respectively, so texture coordinates are also called "UV coordinates". All image files are two-dimensional planes, with a horizontal direction U and a vertical direction V, and any pixel on the image can be located in this planar two-dimensional UV coordinate system. Most current mapping software specifies a uniform UV coordinate range of [0.0, 1.0].
The three-dimensional model consists of many triangular faces, each made up of three-dimensional points, and a point cloud can be generated from the model; the point cloud data comprises the data of all three-dimensional points in the model. The model has a corresponding triangular-face topology and may also carry UV texture colors. For a manually drawn three-dimensional model, the topology is fixed, and the texture coordinate index (UV coordinate index) corresponding to each pixel of the original image can be determined directly from it. The topology of a three-dimensional model built automatically from multi-view images, however, is not fixed, and the lack of a UV coordinate index makes texture extraction difficult.
To address this, the present embodiment provides a texture extraction method applicable to three-dimensional models of arbitrary triangular-face topology. Specifically, the computer device unwraps the three-dimensional model cylindrically, spherically, or the like. A three-dimensional head model, for example, can be viewed as a cylinder; for each three-dimensional point (x, y, z) on it, x may run horizontally from the left eye to the right eye, y vertically from the brow center to the mouth, and z in depth from the back of the head to the tip of the nose. A UV coordinate index can then be computed for each three-dimensional point, such as U = arctan(x/z) and V = the normalized y value. The head model is thereby unwrapped into a planar two-dimensional UV coordinate index.
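A sketch of the cylindrical unwrap; using atan2 instead of a bare arctangent and min-max normalizing U and y into [0, 1] are added assumptions to keep the index well defined.

```python
import numpy as np

def cylindrical_uv(points):
    """Unwrap 3D points (x, y, z) into UV indices: U = arctan(x / z) mapped
    into [0, 1], V = the normalized y value, as described above."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = (np.arctan2(x, z) + np.pi) / (2.0 * np.pi)  # angle around the axis
    v = (y - y.min()) / (y.max() - y.min() + 1e-9)  # normalized height
    return np.stack([u, v], axis=1)
```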
From the known camera pose and camera parameters, the computer device can project the point cloud onto the corresponding original image through a rotation-translation perspective projection. The perspective projection model may specifically be x_c = K (R | T) × P, where (R | T) is the camera pose, R the rotation, T the translation, K the camera parameter matrix, and P a three-dimensional point. It should be understood that many models can project three-dimensional points onto a two-dimensional original image; the model is not limited here. Through this projection, the computer device establishes the correspondence between each three-dimensional point of the point cloud and the pixel coordinates on the original image. Combining this with the UV coordinate index of each three-dimensional point, the computer device obtains the correspondence between each point of the point cloud and each pixel of the texture map, and hence the correspondence between texture-map pixels and original-image pixels, i.e., the texture map corresponding to the original image.
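The projection x_c = K (R | T) × P in matrix form; a sketch assuming NumPy arrays and an undistorted pinhole model.

```python
import numpy as np

def project_points(P, K, R, T):
    """Project (N, 3) world points with x_c = K (R | T) P and return (N, 2)
    pixel coordinates after the perspective divide."""
    cam = P @ R.T + T      # rotate and translate into the camera frame
    pix = cam @ K.T        # apply the camera intrinsic matrix K (3 x 3)
    return pix[:, :2] / pix[:, 2:3]
```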
In this embodiment, the texture coordinate index is generated dynamically from the triangular-face topology of the three-dimensional model, and the corresponding texture map is extracted from the original image based on that index, so the method of this embodiment can extract textures for three-dimensional models of any topology. In addition, tests have shown that the image illumination removal method provided by the present application is particularly suited to texture maps; that is, the illumination removal effect on texture maps is better.
In one embodiment, acquiring the three-dimensional model and the original image of the target object comprises: acquiring the three-dimensional model of the target object and original images from multiple viewing angles. Extracting the texture map from the original image onto which the point cloud data is projected, based on the texture coordinate index, then comprises: determining the marked region in the original image of each viewing angle; extracting the local texture map within the corresponding marked region from each original image onto which the point cloud data is projected, based on the texture coordinate index; and fusing the local texture maps to obtain the complete texture map of the target object.
As noted above, there may be several original images of the target object. This embodiment describes extracting a texture map from multi-view original images. Specifically, the computer device determines the UV coordinate index of each three-dimensional point of the point cloud in the manner above and perspectively projects the point cloud onto each original image. Referring to FIG. 8, which shows the images involved in extracting a texture map from original images based on a three-dimensional model in one embodiment. FIG. 8 shows original images with projected point cloud data 802 from four viewing angles: front, left, right, and bottom.
To facilitate fusion, the computer device extracts only part of the information from the original image at each viewing angle. Specifically, as shown in FIG. 8, the computer device marks a region 806 in the original image according to the positions of pre-specified key pixels 804, and extracts the corresponding local texture map 808 from the marked region of each viewing angle's original image according to the UV coordinate index of each three-dimensional point. The marked region taken in the front view may be the entire face region; in the left view, the relatively bright right side of the face; in the right view, the relatively bright left side of the face; and in the bottom view, the relatively bright chin region. It should be understood that these four viewing angles are only an example, and any other combination of viewing angles may be used without limitation.
Further, the computer device fuses the local texture maps of the several viewing angles together using Laplacian pyramid fusion, Poisson fusion, or a similar scheme to obtain the final texture map 810. The lighting conditions of the original images differ across viewing angles, and color deviations may occur; fusing with Laplacian pyramid or Poisson methods, rather than simply superimposing the local texture maps, forms good transitions between the different marked regions and achieves a good smoothing effect.
In one embodiment, the computer device also samples the average color of the multi-view original images and performs color smoothing on the fused texture map based on that average to achieve a better smoothing effect. For sampling the average color of the multi-view original images, see the description of the embodiments below.
In this embodiment, the texture coordinate index is generated automatically by cylindrical unwrapping, local texture maps are extracted from several viewing angles respectively, and the complete texture map is finally obtained by Laplacian pyramid fusion.
In one embodiment, the image illumination removal method further comprises: generating an initial de-illumination map from the average color parameter of the image to be processed; and iteratively de-illuminating the initial map with the spherical harmonic lighting algorithm until an iteration stop condition is met, obtaining an intermediate de-illumination map. Filtering the image to be processed with the filter of the first kernel size to obtain the first low-frequency feature map from which illumination is removed then comprises: filtering the intermediate de-illumination map with the filter of the first kernel size to obtain the first low-frequency feature map from which illumination is removed.
Before removing illumination from the image to be processed with the filters of different kernel sizes described in the embodiments above, the image may first be de-illuminated preliminarily with the spherical harmonic lighting algorithm to obtain an intermediate de-illumination map. That is, in this embodiment, preliminary illumination removal is performed on the image to be processed with the spherical harmonic lighting algorithm, and filter-based removal is then performed on the resulting intermediate de-illumination map to obtain the target de-illumination map. Unlike conventional illumination removal with the spherical harmonic lighting algorithm, the intermediate de-illumination map of this embodiment is obtained by fusing multi-view images.
When the images to be processed are multi-view original images of the target object, the computer device can remove illumination preliminarily with spherical harmonic lighting. Specifically, it may simulate the illumination with a standard third-order spherical harmonic lighting model, color = albedo ⊙ (H × light), where color is the illuminated color on the original image; albedo is the intermediate de-illumination map the spherical harmonic lighting algorithm is expected to produce; H is the spherical harmonic basis, uniquely determined by the normals of the three-dimensional model; and light is the spherical harmonic lighting. color and albedo may each be an n × 3 pixel matrix, where n is the number of pixels of the image to be processed and 3 the RGB color channels; in the third-order model, H may specifically be an n × 9 matrix and light a 9 × 3 matrix; ⊙ denotes element-wise (per-pixel) multiplication.
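Alternating the two updates of the model color = albedo ⊙ (H × light) can be sketched with plain least squares; this is an assumed solver, not necessarily the patent's exact iteration.

```python
import numpy as np

def estimate_light(color, albedo, H):
    """Solve light (9 x 3) from color = albedo * (H @ light) by least squares.
    color, albedo: (n, 3) pixel matrices; H: (n, 9) spherical harmonic basis."""
    shading = color / (albedo + 1e-6)  # per-pixel value of H @ light
    light, *_ = np.linalg.lstsq(H, shading, rcond=None)
    return light

def update_albedo(color, H, light):
    """Inverse step: albedo = color / (H @ light), iterated until convergence."""
    return color / (H @ light + 1e-6)
```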
The computer device first initializes albdeo to average color, thereby calculating light. In one embodiment, the image illumination removing method further includes: determining one or more sampling areas in a graph to be processed; uniformly sampling in each sampling area of the graph to be processed to obtain a plurality of sampling points; and calculating the average value of the color parameters of the sampling points, and determining the average value as the average color parameter of the graph to be processed.
Specifically, the computer device prestores region division maps suitable for images of certain specific types and sizes. Referring to fig. 9a, fig. 9a is a schematic diagram of the images involved in de-illuminating the image to be processed based on the spherical harmonic illumination algorithm in one embodiment. In fig. 9a, sampling is performed in three view-angle original images 902 (left view, front view, and right view); each view-angle original image has a corresponding region division map 904. The original image 902 and the region division map 904 are overlaid to determine the image content covered by the marked region 906 of the region division map, and uniform sampling can then be performed within the marked region to obtain a sampling map 908 containing a plurality of sampling points.
The computer device takes the color of the uniformly sampled points as the average color of the multi-view original images. That is, when iteratively calculating albedo and the illumination per pixel, the computer device uses only the sampling points; good illumination can be computed from the sampling points alone, and the computed illumination is finally applied to all pixels of each original image, so that spherical-harmonic-based de-illumination is realized over the whole original image and the initial de-illumination map is obtained. Compared with calculating the average color over the entire marked region, this local sampling avoids color interference from regions with obvious color deviation (such as nostrils, eye sockets, or hair in a face image), so the average color well represents the color of most regions of the original image.
After obtaining the initial de-illumination map corresponding to each original image, the computer device may invert the third-order spherical harmonic illumination model to compute albedo from light, iterate several times until convergence, and fuse the de-illumination maps corresponding to the original images obtained by the iterative calculation to obtain the intermediate de-illumination map. As shown in fig. 9b, image 910 shows the effect of the left-view face image after illumination removal based on the spherical harmonic illumination algorithm.
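The alternation between solving light and recomputing albedo can be sketched as follows, reusing the hypothetical solve_light helper from the previous example. The fixed iteration budget and the convergence test on the mean change in albedo are illustrative assumptions; the disclosure only requires iterating until an iteration stop condition is met.

```python
import numpy as np

def iterative_deillumination(color, H, n_iters=20, tol=1e-4):
    """Alternately estimate light and albedo until convergence.

    color: (n, 3) observed pixel colors (per the text, sampling points);
    H: (n, 9) spherical harmonic basis from the model normals.
    Assumes solve_light from the earlier sketch is in scope.
    Returns the de-illuminated albedo as an (n, 3) matrix.
    """
    # Initialize albedo to the average sampled color (broadcast to all pixels).
    albedo = np.tile(color.mean(axis=0, keepdims=True), (color.shape[0], 1))
    for _ in range(n_iters):
        light = solve_light(color, albedo, H)        # fix albedo, solve light
        shading = np.clip(H @ light, 1e-6, None)     # per-pixel shading term
        new_albedo = color / shading                 # fix light, solve albedo
        if np.abs(new_albedo - albedo).mean() < tol: # iteration stop condition
            albedo = new_albedo
            break
        albedo = new_albedo
    return albedo
```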
In this embodiment, preliminary illumination removal is first performed on the image to be processed based on the spherical harmonic illumination algorithm. In that process, the initial de-illumination map is initialized from colors uniformly sampled in local regions of the image; compared with computing the average color over the whole image, this local sampling avoids color interference from regions with obvious color deviation, so the image color is more uniform. Filter-based illumination removal is then performed on the intermediate de-illumination map, which improves the overall illumination removal effect.
In one embodiment, obtaining the intermediate de-illumination map when the iteration stop condition is met includes: when the iteration stop condition is met, extracting from the image to be processed a detail-clear map of the regions outside the sampling region; and fusing the detail-clear map with the de-illumination map obtained when the iteration stops, obtaining the intermediate de-illumination map.
For ease of description, the detail-clear map extracted from the first high-frequency detail map, used to repair the fused image of the first low-frequency feature map and the second high-frequency detail map, is called the first detail-clear map; the detail-clear map extracted from the image to be processed, used to repair the intermediate de-illumination map, is called the second detail-clear map. The second detail-clear map is the image content within the marked region of the image to be processed. The image content in the marked region carries little illumination information and has good definition, so it can be used to repair the intermediate de-illumination map obtained by the fusion described above. The first and second detail-clear maps can be extracted in the same way.
Referring to fig. 10, fig. 10 is a schematic flowchart of the overall process of de-illuminating the image to be processed based on the spherical harmonic illumination algorithm in one embodiment. The full flow comprises three steps: multi-view spherical harmonic illumination, image detail restoration, and color unification and fusion.
Specifically, based on the preset region division map, the computer device fills the regions outside the marked region in each view-angle original image de-illuminated by spherical harmonic lighting with the average color of the multi-view original images, obtaining a filled map. The computer device extracts the image content located in the marked region of the original image based on the region division map, obtaining the second detail-clear map, and overlays the second detail-clear map onto the corresponding position in the filled map to obtain a local de-illumination map. Because the illumination conditions of the original images differ across viewing angles, their color tones are inconsistent; the computer device therefore scales each local de-illumination map according to the average color calculated as above, bringing the local de-illumination maps to a uniform color. The computer device then fuses the color-unified local de-illumination maps to obtain the intermediate de-illumination map.
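A minimal sketch of the color unification step: each local de-illumination map is scaled channel-wise so that its mean color matches the shared average color. Treating the scaling as a per-channel multiplicative gain is an assumption; the disclosure only states that the local maps are scaled to a uniform color.

```python
import numpy as np

def unify_color(local_map, target_avg_color):
    """Scale a local de-illumination map so its mean matches the target color.

    local_map: (h, w, 3) float image in [0, 1].
    target_avg_color: length-3 RGB vector, the average color sampled
    from the multi-view original images.
    """
    current_avg = local_map.reshape(-1, 3).mean(axis=0)
    gain = target_avg_color / np.clip(current_avg, 1e-6, None)
    return np.clip(local_map * gain, 0.0, 1.0)
```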
As shown in fig. 9b, the image fusion stage actually uses the content located in the marked region of the sampling map 910, i.e. the local spherical harmonic illumination map 912; the regions of the local spherical harmonic illumination map outside the marked region are filled with the average color to obtain a filled map 914, and the image content of the right eye socket from the original image is added at the corresponding position in the filled map to obtain a local de-illumination map 916 for the left viewing angle. The local de-illumination maps of the front view and the right view are obtained in the same way. After color unification, the color-unified multi-view local de-illumination maps 918 are fused to obtain the intermediate de-illumination map 920.
In this embodiment, illumination is removed based on the spherical harmonic illumination algorithm, the resulting de-illumination map is repaired with the detail-clear map of the marked region, and the colors of the multi-view images are unified, so that the intermediate de-illumination map obtained by the final fusion has more uniform color, and the final image combines good illumination removal with a clear visual appearance.
In a specific embodiment, as shown in fig. 11, the image illumination removing method provided by the present application includes the following steps:
S1102, acquiring the image to be processed.
S1104, determining the pixel size of the image to be processed.
S1106, determining a first kernel size of the filter used to filter the image to be processed according to the pixel size and a preset first proportional threshold.
S1108, determining a second kernel size of the filter used to filter the image to be processed according to the pixel size and a preset second proportional threshold; the first proportional threshold is greater than the second proportional threshold.
S1110, filtering the image to be processed based on the first-kernel-size filter to obtain the illumination-free first low-frequency feature map.
S1112, filtering the image to be processed based on the second-kernel-size filter to obtain the illumination-containing second low-frequency feature map; the first kernel size is larger than the second kernel size.
S1114, filtering out the information of the second low-frequency feature map from the image to be processed to obtain the illumination-removed second high-frequency detail map.
S1116, fusing the first low-frequency feature map and the second high-frequency detail map to obtain the target de-illumination map.
S1118, filtering out the information of the first low-frequency feature map from the image to be processed to obtain the first high-frequency detail map.
S1120, resizing the first high-frequency detail map to the same size as the preset region division map.
S1122, overlaying the first high-frequency detail map and the region division map of the same size; the region division map includes at least one marked region.
S1124, determining each region of the first high-frequency detail map that coincides with a marked region as an illumination-free region, and extracting from the first high-frequency detail map the detail-clear map located in the illumination-free region.
S1126, overlaying the detail-clear map onto the de-illumination map to obtain the target image.
According to the image illumination removing method above, filtering the image to be processed with the large-kernel filter removes as much of the illumination as possible, while filtering with the small-kernel filter preserves the illumination as much as possible. Removing the information of the illumination-laden second low-frequency feature map from the image to be processed therefore yields a second high-frequency detail map that retains only image detail. Blending this second high-frequency detail map into the illumination-free first low-frequency feature map balances illumination removal against image definition, so the fused de-illumination map has both a good illumination removal effect and a clear visual appearance.
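For illustration, the dual-kernel decomposition of steps S1104-S1116 might be sketched with Gaussian filters as follows. The use of Gaussian blur as the filter, the specific ratio values, and the additive fusion of the low-frequency and high-frequency parts are assumptions for this sketch; the disclosure leaves the filter type and fusion operator open.

```python
import cv2
import numpy as np

def remove_illumination(img, large_ratio=0.25, small_ratio=0.05):
    """Dual-kernel de-illumination: the large-kernel blur drops illumination,
    the small-kernel blur keeps it, and their difference isolates clean detail.

    img: float32 image in [0, 1]; the kernel sizes are derived from the
    image's pixel size and two proportional thresholds (large_ratio > small_ratio).
    """
    h, w = img.shape[:2]

    def odd(n):  # Gaussian kernel sizes must be odd.
        n = max(3, int(n))
        return n if n % 2 == 1 else n + 1

    k1 = odd(min(h, w) * large_ratio)  # first (large) kernel size
    k2 = odd(min(h, w) * small_ratio)  # second (small) kernel size

    low1 = cv2.GaussianBlur(img, (k1, k1), 0)  # illumination-free low-frequency map
    low2 = cv2.GaussianBlur(img, (k2, k2), 0)  # illumination-containing low-frequency map
    detail = img - low2                        # illumination-removed high-frequency detail
    return np.clip(low1 + detail, 0.0, 1.0)    # fused target de-illumination map
```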
In another specific embodiment, as shown in fig. 12, the image illumination removing method provided by the present application includes the following steps:
S1202, acquiring a three-dimensional model of the target object and original images from a plurality of viewing angles.
S1204, determining the texture coordinate index corresponding to each pixel in the original images.
S1206, projecting the point cloud data of the three-dimensional model onto the corresponding pixels in the original images according to the pose of the camera that captured the original images.
S1208, determining the marked region in the original image of each viewing angle.
S1210, extracting a local texture map within the corresponding marked region from each original image onto which the point cloud data has been projected, based on the texture coordinate index.
S1212, fusing the local texture maps to obtain the complete texture map of the target object.
S1214, determining the pixel size of the texture map.
S1216, determining a first kernel size of the filter used to filter the texture map according to the pixel size and a preset first proportional threshold.
S1218, determining a second kernel size of the filter used to filter the texture map according to the pixel size and a preset second proportional threshold; the first proportional threshold is greater than the second proportional threshold.
S1220, filtering the texture map based on the first-kernel-size filter to obtain the illumination-free first low-frequency feature map.
S1222, filtering the texture map based on the second-kernel-size filter to obtain the illumination-containing second low-frequency feature map; the first kernel size is larger than the second kernel size.
S1224, filtering out the information of the second low-frequency feature map from the texture map to obtain the illumination-removed second high-frequency detail map.
S1226, fusing the first low-frequency feature map and the second high-frequency detail map to obtain the target de-illumination map.
S1228, filtering out the information of the first low-frequency feature map from the texture map to obtain the first high-frequency detail map.
S1230, resizing the first high-frequency detail map to the same size as the preset region division map.
S1232, overlaying the first high-frequency detail map and the region division map of the same size; the region division map includes at least one marked region.
S1234, determining each region of the first high-frequency detail map that coincides with a marked region as an illumination-free region, and extracting from the first high-frequency detail map the detail-clear map located in the illumination-free region.
S1236, overlaying the detail-clear map onto the de-illumination map to obtain the target image.
According to the image illumination removing method above, the texture coordinate index is generated dynamically from the structure of the three-dimensional model of the target object, and the texture map is extracted from the original images based on that index, so the method of this embodiment can extract textures for a three-dimensional model of any topology. Filtering the texture map with the large-kernel filter removes as much of the illumination as possible, while filtering with the small-kernel filter preserves it; removing the information of the illumination-laden second low-frequency feature map from the texture map therefore yields a second high-frequency detail map that retains only image detail. Blending this detail map into the illumination-free first low-frequency feature map balances illumination removal against image definition, so the fused de-illumination map has both a good illumination removal effect and a clear visual appearance.
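Steps S1204-S1206 associate model geometry with image pixels; the sketch below projects point cloud data into an original image with a pinhole camera model. The intrinsic matrix K and the world-to-camera pose (R, t) are assumed inputs, and the pinhole model itself is an illustrative assumption; the disclosure does not specify a camera model.

```python
import numpy as np

def project_points(points, K, R, t):
    """Project 3D model points into an original image via the camera pose.

    points: (N, 3) point cloud in world coordinates.
    K: (3, 3) camera intrinsic matrix; R: (3, 3) rotation and t: (3,)
    translation taking world coordinates to camera coordinates.
    Returns (N, 2) pixel coordinates, so each point can be associated
    with its texture coordinate index and its pixel in the original image.
    """
    cam = points @ R.T + t         # world -> camera coordinates
    uv = cam @ K.T                 # camera -> homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]  # perspective divide
```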
In yet another embodiment, as shown in fig. 13, the image illumination removal method provided by the present application includes the following steps:
S1302, acquiring the image to be processed.
S1304, determining one or more sampling regions in the image to be processed.
S1306, uniformly sampling within each sampling region of the image to be processed to obtain a plurality of sampling points.
S1308, calculating the average of the color parameters of the sampling points and taking the average as the average color parameter of the image to be processed.
S1310, generating the initial de-illumination map according to the average color parameter of the image to be processed.
S1312, performing iterative de-illumination processing on the initial de-illumination map based on the spherical harmonic illumination algorithm.
S1314, when the iteration stop condition is met, extracting from the image to be processed a detail-clear map of the regions outside the sampling region.
S1316, fusing the detail-clear map with the de-illumination map obtained when the iteration stops to obtain the intermediate de-illumination map.
S1318, filtering the intermediate de-illumination map based on the first-kernel-size filter to obtain the illumination-free first low-frequency feature map.
S1320, filtering the intermediate de-illumination map based on the second-kernel-size filter to obtain the illumination-containing second low-frequency feature map; the first kernel size is larger than the second kernel size.
S1322, filtering out the information of the second low-frequency feature map from the image to be processed to obtain the illumination-removed second high-frequency detail map.
S1324, fusing the first low-frequency feature map and the second high-frequency detail map to obtain the target de-illumination map.
S1326, filtering out the information of the first low-frequency feature map from the image to be processed to obtain the first high-frequency detail map.
S1328, determining the illumination-free region in the first high-frequency detail map.
S1330, extracting from the first high-frequency detail map the detail-clear map located in the illumination-free region.
S1332, overlaying the detail-clear map onto the de-illumination map to obtain the target image.
According to the image illumination removing method above, preliminary illumination removal is performed on the image to be processed based on the spherical harmonic illumination algorithm, and the resulting intermediate de-illumination map is then filtered with the large-kernel filter to remove as much of the remaining illumination as possible. Filtering with the small-kernel filter preserves the illumination as much as possible, so removing the information of the illumination-laden second low-frequency feature map from the image to be processed yields a second high-frequency detail map that retains only image detail. Blending this detail map into the illumination-free first low-frequency feature map balances illumination removal against image definition, so the fused de-illumination map has both a good illumination removal effect and a clear visual appearance.
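The detail restoration of steps S1326-S1332 amounts to masked copying: pixels of the first high-frequency detail map inside the marked, illumination-free region are overlaid onto the de-illumination map. Representing the region division map as a boolean mask and superposing the detail additively are illustrative assumptions in the sketch below.

```python
import cv2
import numpy as np

def restore_details(deillum_map, detail_map, region_mask):
    """Overlay the detail-clear content onto the de-illumination map.

    deillum_map: (h, w, 3) target de-illumination map in [0, 1].
    detail_map: first high-frequency detail map, any size; it is resized
    to the same size as the region division map / de-illumination map.
    region_mask: (h, w) boolean array, True inside the marked regions.
    """
    h, w = deillum_map.shape[:2]
    detail = cv2.resize(detail_map, (w, h))  # convert to the same size
    out = deillum_map.copy()
    out[region_mask] += detail[region_mask]  # superpose the detail-clear map
    return np.clip(out, 0.0, 1.0)
```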
Fig. 2, 11, 12, and 13 are flow diagrams of an image illumination removal method according to various embodiments. It should be understood that although the steps in the flowcharts of fig. 2, 11, 12, and 13 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 2, 11, 12, and 13 may comprise multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily executed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
As shown in fig. 14, in one embodiment, an image illumination removal apparatus 1400 is provided, comprising an image acquisition module 1402, an illumination removal module 1404, a detail extraction module 1406, and an image fusion module 1408, wherein:
the image acquisition module 1402 is configured to acquire the image to be processed;
the illumination removal module 1404 is configured to filter the image to be processed based on the first-kernel-size filter to obtain the illumination-free first low-frequency feature map;
the detail extraction module 1406 is configured to filter the image to be processed based on the second-kernel-size filter to obtain the illumination-containing second low-frequency feature map, the first kernel size being larger than the second kernel size, and to filter out the information of the second low-frequency feature map from the image to be processed to obtain the illumination-removed second high-frequency detail map;
and the image fusion module 1408 is configured to fuse the first low-frequency feature map and the second high-frequency detail map to obtain the target de-illumination map.
In one embodiment, the image to be processed comprises a texture map; the image acquisition module 1402 is further configured to acquire a three-dimensional model and an original image of the target object; determine the texture coordinate index corresponding to each pixel in the original image; project the point cloud data of the three-dimensional model onto the corresponding pixels in the original image according to the pose of the camera used to capture the original image; and extract the texture map from the original image onto which the point cloud data has been projected, based on the texture coordinate index.
In one embodiment, the image acquisition module 1402 is further configured to acquire a three-dimensional model of the target object and original images from a plurality of viewing angles; determine the marked region in the original image of each viewing angle; extract a local texture map within the corresponding marked region from each original image onto which the point cloud data has been projected, based on the texture coordinate index; and fuse the local texture maps to obtain the complete texture map of the target object.
In one embodiment, as shown in fig. 15, the image illumination removal apparatus 1400 further includes a preliminary illumination removal module 1410, configured to generate the initial de-illumination map according to the average color parameter of the image to be processed, and to perform iterative de-illumination processing on the initial de-illumination map based on the spherical harmonic illumination algorithm until the iteration stop condition is met, obtaining the intermediate de-illumination map; the illumination removal module 1404 then filters the intermediate de-illumination map based on the first-kernel-size filter to obtain the illumination-free first low-frequency feature map.
In one embodiment, the preliminary illumination removal module 1410 is further configured to determine one or more sampling regions in the image to be processed; uniformly sample within each sampling region of the image to be processed to obtain a plurality of sampling points; and calculate the average of the color parameters of the sampling points and take the average as the average color parameter of the image to be processed.
In one embodiment, the preliminary illumination removal module 1410 is further configured to, when the iteration stop condition is met, extract from the image to be processed a detail-clear map of the regions outside the sampling region, and fuse the detail-clear map with the de-illumination map obtained when the iteration stops to obtain the intermediate de-illumination map.
In one embodiment, as shown in fig. 15, the image illumination removal apparatus 1400 further includes a kernel size determination module 1412, configured to determine the pixel size of the image to be processed; determine the first kernel size of the filter used to filter the image to be processed according to the pixel size and a preset first proportional threshold; and determine the second kernel size of the filter used to filter the image to be processed according to the pixel size and a preset second proportional threshold, the first proportional threshold being greater than the second proportional threshold.
In one embodiment, as shown in fig. 15, the image illumination removal apparatus 1400 further includes an image restoration module 1414, configured to filter out the information of the first low-frequency feature map from the image to be processed to obtain the first high-frequency detail map; determine the illumination-free region in the first high-frequency detail map; extract from the first high-frequency detail map the detail-clear map located in the illumination-free region; and overlay the detail-clear map onto the de-illumination map to obtain the target image.
In one embodiment, the image restoration module 1414 is further configured to resize the first high-frequency detail map to the same size as the preset region division map; overlay the first high-frequency detail map and the region division map of the same size, the region division map including at least one marked region; and determine each region of the first high-frequency detail map that coincides with a marked region as an illumination-free region.
In one embodiment, the image restoration module 1414 is further configured to determine the illumination-free region in the image to be processed; extract from the image to be processed the detail-clear map located in the illumination-free region; and fuse the detail-clear map with the de-illumination map to obtain the target image.
According to the image illumination removal apparatus above, filtering the image to be processed with the large-kernel filter removes as much of the illumination as possible, while filtering with the small-kernel filter preserves it; removing the information of the illumination-laden second low-frequency feature map from the image to be processed therefore yields a second high-frequency detail map that retains only image detail. Blending this detail map into the illumination-free first low-frequency feature map balances illumination removal against image definition, so the fused de-illumination map has both a good illumination removal effect and a clear visual appearance.
FIG. 16 is a diagram of the internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 or the server 120 in fig. 1. As shown in fig. 16, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the image illumination removal method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the image illumination removal method.
Those skilled in the art will appreciate that the architecture shown in fig. 16 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, the image illumination removal apparatus provided in the present application may be implemented in the form of a computer program, and the computer program may run on a computer device as shown in fig. 16. The memory of the computer device may store the program modules constituting the image illumination removal apparatus, such as the image acquisition module, illumination removal module, detail extraction module, and image fusion module shown in fig. 14. The computer program constituted by these program modules causes the processor to execute the steps of the image illumination removal method of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 16 may execute step S202 through the image acquisition module of the image illumination removal apparatus shown in fig. 14, step S204 through the illumination removal module, steps S206 and S208 through the detail extraction module, and step S210 through the image fusion module.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the image illumination removal method described above. Here, the steps of the image illumination removal method may be steps in the image illumination removal methods of the above-described respective embodiments.
In one embodiment, a computer readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the image illumination removal method described above. Here, the steps of the image illumination removal method may be steps in the image illumination removal methods of the above-described respective embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above examples express only several embodiments of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within the protection scope of the application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (15)

1. An image illumination removal method, comprising:
acquiring an image to be processed;
filtering the image to be processed based on a first-kernel-size filter to obtain an illumination-free first low-frequency feature map;
filtering the image to be processed based on a second-kernel-size filter to obtain an illumination-containing second low-frequency feature map, the first kernel size being larger than the second kernel size;
filtering out information of the second low-frequency feature map from the image to be processed to obtain an illumination-removed second high-frequency detail map; and
fusing the first low-frequency feature map and the second high-frequency detail map to obtain a target de-illumination map.
2. The method of claim 1, wherein the image to be processed comprises a texture map, and acquiring the image to be processed comprises:
acquiring a three-dimensional model and an original image of a target object;
determining a texture coordinate index corresponding to each pixel in the original image;
projecting point cloud data of the three-dimensional model onto corresponding pixels in the original image according to the pose of the camera used to capture the original image; and
extracting the texture map from the original image onto which the point cloud data has been projected, based on the texture coordinate index.
3. The method of claim 2, wherein acquiring the three-dimensional model and the original image of the target object comprises: acquiring a three-dimensional model of the target object and original images from a plurality of viewing angles; and
extracting the texture map from the original image onto which the point cloud data has been projected based on the texture coordinate index comprises:
determining a marked region in the original image of each viewing angle;
extracting a local texture map within the corresponding marked region from each original image onto which the point cloud data has been projected, based on the texture coordinate index; and
fusing the local texture maps to obtain a complete texture map of the target object.
4. The method of claim 1, further comprising:
generating an initial de-illumination map according to an average color parameter of the image to be processed; and
performing iterative de-illumination processing on the initial de-illumination map based on a spherical harmonic illumination algorithm until an iteration stop condition is met, obtaining an intermediate de-illumination map;
wherein filtering the image to be processed based on the first-kernel-size filter to obtain the illumination-free first low-frequency feature map comprises: filtering the intermediate de-illumination map based on the first-kernel-size filter to obtain the illumination-free first low-frequency feature map.
5. The method of claim 4, further comprising:
determining one or more sampling regions in the image to be processed;
uniformly sampling within each sampling region of the image to be processed to obtain a plurality of sampling points; and
calculating an average of the color parameters of the sampling points and taking the average as the average color parameter of the image to be processed.
6. The method of claim 4, wherein obtaining the intermediate de-illumination map when the iteration stop condition is met comprises:
when the iteration stop condition is met, extracting from the image to be processed a detail-clear map of regions outside the sampling region; and
fusing the detail-clear map with the de-illumination map obtained when the iteration stops to obtain the intermediate de-illumination map.
7. The method of claim 1, further comprising:
determining a pixel size of the image to be processed;
determining the first kernel size of the filter used to filter the image to be processed according to the pixel size and a preset first proportional threshold; and
determining the second kernel size of the filter used to filter the image to be processed according to the pixel size and a preset second proportional threshold, the first proportional threshold being greater than the second proportional threshold.
8. The method of claim 1, further comprising:
filtering out information of the first low-frequency feature map from the image to be processed to obtain a first high-frequency detail map;
determining an illumination-free region in the first high-frequency detail map;
extracting from the first high-frequency detail map a detail-clear map located in the illumination-free region; and
overlaying the detail-clear map onto the de-illumination map to obtain a target image.
9. The method of claim 8, wherein determining the illumination-free region in the first high-frequency detail map comprises:
resizing the first high-frequency detail map to the same size as a preset region division map;
overlaying the first high-frequency detail map and the region division map of the same size, the region division map comprising at least one marked region; and
determining each region of the first high-frequency detail map that coincides with a marked region as an illumination-free region.
10. The method of claim 1, further comprising:
determining an illumination-free region in the image to be processed;
extracting from the image to be processed a detail-clear map located in the illumination-free region; and
fusing the detail-clear map with the de-illumination map to obtain a target image.
11. An image illumination removal apparatus, comprising:
an image acquisition module, configured to acquire an image to be processed;
an illumination removal module, configured to filter the image to be processed based on a first-kernel-size filter to obtain an illumination-free first low-frequency feature map;
a detail extraction module, configured to filter the image to be processed based on a second-kernel-size filter to obtain an illumination-containing second low-frequency feature map, the first kernel size being larger than the second kernel size, and to filter out information of the second low-frequency feature map from the image to be processed to obtain an illumination-removed second high-frequency detail map; and
an image fusion module, configured to fuse the first low-frequency feature map and the second high-frequency detail map to obtain a target de-illumination map.
12. The apparatus of claim 11, further comprising a kernel size determination module configured to determine a pixel size of the image to be processed; determine the first kernel size of the filter used to filter the image to be processed according to the pixel size and a preset first proportional threshold; and determine the second kernel size of the filter used to filter the image to be processed according to the pixel size and a preset second proportional threshold, the first proportional threshold being greater than the second proportional threshold.
13. The apparatus of claim 11, further comprising an image restoration module configured to filter out information of the first low-frequency feature map from the image to be processed to obtain a first high-frequency detail map; determine an illumination-free region in the first high-frequency detail map; extract from the first high-frequency detail map a detail-clear map located in the illumination-free region; and overlay the detail-clear map onto the de-illumination map to obtain a target image.
14. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 10.
15. A computer device comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 10.
CN202010050512.4A 2020-01-17 2020-01-17 Image illumination removing method and device, storage medium and computer equipment Active CN111275804B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010050512.4A CN111275804B (en) 2020-01-17 2020-01-17 Image illumination removing method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN111275804A true CN111275804A (en) 2020-06-12
CN111275804B CN111275804B (en) 2022-09-16

Family

ID=71001735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010050512.4A Active CN111275804B (en) 2020-01-17 2020-01-17 Image illumination removing method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN111275804B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101425179A (en) * 2008-11-18 2009-05-06 Tsinghua University Face image relighting method and device
US20130063349A1 (en) * 2011-09-09 2013-03-14 Stmicroelectronics (Research & Development) Limited Optical navigation device
CN103679651A (en) * 2013-11-29 2014-03-26 Guilin University of Electronic Technology Underwater image enhancement processing method
CN105447890A (en) * 2015-12-08 2016-03-30 Nanjing University of Aeronautics and Astronautics Moving vehicle detection method resistant to illumination effects
CN107516319A (en) * 2017-09-05 2017-12-26 North University of China High-accuracy simple interactive image matting method, storage device and terminal
CN110610525A (en) * 2018-06-15 2019-12-24 ZTE Corporation Image processing method and device and computer readable storage medium
CN109118444A (en) * 2018-07-26 2019-01-01 Southeast University Regularized facial image complex illumination removal method based on feature separation
CN109146814A (en) * 2018-08-20 2019-01-04 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, device, storage medium and electronic equipment
CN110047058A (en) * 2019-03-25 2019-07-23 Hangzhou Dianzi University Image fusion method based on residual pyramid

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
FERNANDEZ-GALLEGO, J.A. et al.: "Wheat ear counting in-field conditions: high throughput and low-cost approach using RGB images", Plant Methods *
YANG Zuobao et al.: "Illumination preprocessing algorithm for face recognition", Journal of Beijing Information Science and Technology University *
WANG Yan et al.: "Anti-halation method for automobiles based on Curvelet transform in IHS color space", Automobile Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112001874A (en) * 2020-08-28 2020-11-27 Sichuan Daman Zhengte Technology Co., Ltd. Image fusion method based on wavelet decomposition and Poisson fusion and application thereof
CN114529490A (en) * 2022-04-24 2022-05-24 Tencent Technology (Shenzhen) Co., Ltd. Data processing method, device, equipment and readable storage medium

Also Published As

Publication number Publication date
CN111275804B (en) 2022-09-16

Similar Documents

Publication Publication Date Title
CN109859098B (en) Face image fusion method and device, computer equipment and readable storage medium
CN111445410B (en) Texture enhancement method, device and equipment based on texture image and storage medium
CN108305312B (en) Method and device for generating 3D virtual image
CN109697688B (en) Method and device for image processing
CN113012293B (en) Stone carving model construction method, device, equipment and storage medium
CN111445564B (en) Face texture image generation method, device, computer equipment and storage medium
US8270704B2 (en) Method and apparatus for reconstructing 3D shape model of object by using multi-view image information
KR101199475B1 (en) Method and apparatus for reconstruction 3 dimension model
Rematas et al. Image-based synthesis and re-synthesis of viewpoints guided by 3d models
CN111063021A (en) Method and device for establishing three-dimensional reconstruction model of space moving target
CN113628327A (en) Head three-dimensional reconstruction method and equipment
EP3756163B1 (en) Methods, devices, and computer program products for gradient based depth reconstructions with robust statistics
JP2015045920A (en) Virtual viewpoint image generation device, virtual viewpoint image generation method, and virtual viewpoint image generation program
CN108463823A (en) A kind of method for reconstructing, device and the terminal of user's Hair model
CN111275804B (en) Image illumination removing method and device, storage medium and computer equipment
Kumar et al. Structure-preserving NPR framework for image abstraction and stylization
Svitov et al. Dinar: Diffusion inpainting of neural textures for one-shot human avatars
CN116012432A (en) Stereoscopic panoramic image generation method and device and computer equipment
KR102358854B1 (en) Apparatus and method for color synthesis of face images
JP6901885B2 (en) Foreground extractor and program
KR101513931B1 (en) Auto-correction method of composition and image apparatus with the same technique
US20210241430A1 (en) Methods, devices, and computer program products for improved 3d mesh texturing
CN115063303A (en) Image 3D method based on image restoration
Herrera et al. A learned joint depth and intensity prior using Markov random fields
Lee et al. Panoramic mesh model generation from multiple range data for indoor scene reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code: ref country code: HK; ref legal event code: DE; ref document number: 40024418; country of ref document: HK
GR01 Patent grant