CN118172623A - Training data construction method, training data construction device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN118172623A
Authority
CN
China
Prior art keywords
image data
color
channel
pseudo
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410381896.6A
Other languages
Chinese (zh)
Inventor
李顼晟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202410381896.6A priority Critical patent/CN118172623A/en
Publication of CN118172623A publication Critical patent/CN118172623A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The present application relates to a training data construction method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: acquiring initial image data; performing brightening processing on the initial image data to obtain brightening image data; performing edge detection on the brightening image data to obtain an edge mask of the brightening image data; fusing the edge mask, the initial image data and the brightening image data to obtain reference image data; performing pseudo-color simulation on the brightening image data to obtain pseudo-color simulated image data; fusing the pseudo-color simulated image data, the reference image data and the edge mask to obtain target image data; and taking the target image data as training data of a de-pseudo-color model. By adopting the method, the accuracy of pseudo-color removal can be improved.

Description

Training data construction method, training data construction device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a training data construction method, apparatus, electronic device, and computer readable storage medium.
Background
With the popularization of mobile intelligent devices, their photographing functions have attracted wide attention, and people's requirements on the resolution, imaging quality and the like of captured images keep increasing. Because of the size limitations of mobile intelligent devices, there is still considerable room for improving image quality. One notable problem is that captured images may exhibit false-color edges such as purple fringing and green fringing, which visually degrade image quality.
In the conventional technology, false colors are detected and removed by conventional detection and filtering methods, and false-color removal errors may occur, so the accuracy of false-color removal is low.
Disclosure of Invention
The embodiment of the application provides a training data construction method, a training data construction device, electronic equipment and a computer readable storage medium, which can improve the accuracy of removing false colors.
In a first aspect, the present application provides a training data construction method. The method comprises the following steps:
acquiring initial image data;
performing brightening processing on the initial image data to obtain brightening image data;
performing edge detection on the brightening image data to obtain an edge mask of the brightening image data;
fusing the edge mask, the initial image data and the brightening image data to obtain reference image data;
performing pseudo-color simulation on the brightening image data to obtain pseudo-color simulated image data;
fusing the pseudo-color simulated image data, the reference image data and the edge mask to obtain target image data;
and taking the target image data as training data of a de-pseudo-color model.
In a second aspect, the application further provides a training data construction device. The device comprises:
an image acquisition module, configured to acquire initial image data;
a brightening processing module, configured to perform brightening processing on the initial image data to obtain brightening image data;
an edge detection module, configured to perform edge detection on the brightening image data to obtain an edge mask of the brightening image data;
a first fusion module, configured to fuse the edge mask, the initial image data and the brightening image data to obtain reference image data;
a pseudo-color simulation module, configured to perform pseudo-color simulation on the brightening image data to obtain pseudo-color simulated image data;
a second fusion module, configured to fuse the pseudo-color simulated image data, the reference image data and the edge mask to obtain target image data;
and a training data obtaining module, configured to take the target image data as training data of a de-pseudo-color model.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the training data construction method described above when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the training data construction method described above.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the training data construction method described above.
According to the training data construction method, apparatus, electronic device, computer-readable storage medium and computer program product, initial image data is acquired; brightening processing is performed on the initial image data to obtain brightening image data; edge detection is performed on the brightening image data to obtain an edge mask of the brightening image data; the edge mask, the initial image data and the brightening image data are fused to obtain reference image data; pseudo-color simulation is performed on the brightening image data to obtain pseudo-color simulated image data; the pseudo-color simulated image data, the reference image data and the edge mask are fused to obtain target image data; and the target image data is used as training data of a de-pseudo-color model. In other words, target image data that matches actual scenes, with pseudo color appearing at highlight edges, can be obtained through simulation; a de-pseudo-color model can then be trained on the constructed target image data in an artificial-intelligence manner, and de-pseudo-color processing can be realized based on that model, which can improve the accuracy of pseudo-color removal.
In a sixth aspect, the present application provides an image processing method. The method comprises the following steps:
Acquiring an image to be processed;
Inputting the image to be processed into a pseudo-color removing model to obtain a pseudo-color removing image; the de-pseudo-color model is obtained through training data, and the training data is obtained through the training data construction method.
In a seventh aspect, the present application also provides an image processing apparatus. The device comprises:
a to-be-processed image acquisition module, configured to acquire an image to be processed;
a de-pseudo-color module, configured to input the image to be processed into a de-pseudo-color model to obtain a de-pseudo-color image; the de-pseudo-color model is obtained through training with training data, and the training data is obtained through the training data construction method.
In an eighth aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the image processing method described above when the processor executes the computer program.
In a ninth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the image processing method described above.
In a tenth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when being executed by a processor, implements the steps of the above-mentioned image processing method.
According to the image processing method, apparatus, electronic device, computer-readable storage medium and computer program product, an image to be processed is acquired and input into a de-pseudo-color model to obtain a de-pseudo-color image. The de-pseudo-color model is trained with training data obtained by the training data construction method, so de-pseudo-color processing can be performed on the image to be processed in an artificial-intelligence manner, which can improve the accuracy of pseudo-color removal.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is an application environment diagram of a training data construction method in one embodiment;
FIG. 2 is a flow chart of a training data construction method in one embodiment;
FIG. 3 is a flow chart of an image processing method in one embodiment;
FIG. 4 is a schematic diagram of a purple training data construction flow in one embodiment;
FIG. 5 is a schematic illustration of a high definition image in one embodiment;
FIG. 6 is a schematic diagram of an edge mask corresponding to a high definition image in one embodiment;
FIG. 7 is a schematic diagram of a purple-fringing simulated image in one embodiment;
FIG. 8 is a schematic diagram of a linear RGB high definition image in one embodiment;
FIG. 9 is a schematic diagram of an edge mask corresponding to a linear RGB high definition image in one embodiment;
FIG. 10 is a schematic diagram of a purple-fringing simulated image in another embodiment;
FIG. 11 is a schematic diagram of an image to be processed in one embodiment;
FIG. 12 is a schematic diagram of a de-purpled image in one embodiment;
FIG. 13 is a block diagram of a training data construction device in one embodiment;
Fig. 14 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The training data construction method provided by the embodiment of the application can be applied to the electronic equipment shown in fig. 1. The electronic device can acquire initial image data, carry out brightness enhancement processing on the initial image data to obtain brightness enhancement image data, carry out edge detection on the brightness enhancement image data to obtain an edge mask of the brightness enhancement image data, fuse the edge mask, the initial image data and the brightness enhancement image data to obtain reference image data, carry out pseudo-color simulation on the brightness enhancement image data to obtain pseudo-color simulation image data, fuse the pseudo-color simulation image data, the reference image data and the edge mask to obtain target image data, and take the target image data as training data of a pseudo-color removal model. The electronic device may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The electronic device may be a terminal or a server.
In one embodiment, as shown in FIG. 2, a training data construction method is provided, comprising the following steps 202 to 214.
Step 202, acquiring initial image data.
The initial image data refers to original image data used to construct training data. The initial image data may be image data in a raw format, an RGB (Red, Green, Blue) format, a YUV format or the like, where the Y component of a YUV image represents brightness (Luminance or Luma), the U and V components represent chromaticity (Chrominance or Chroma), and raw-format data is the original image file acquired by an image sensor.
In an alternative embodiment, the initial image data is high-definition image data, where high-definition image data refers to image data with a resolution of no less than 2K. The high-definition image data may be obtained from a public data set, or may be captured with a single-lens reflex camera.
The electronic device can acquire the initial image data through the man-machine interaction interface, and can acquire the initial image data based on a network. For example, the initial image data is acquired by an application installed on the electronic device.
Step 204, performing a brightening process on the initial image data to obtain brightening image data.
Brightening processing is processing that increases brightness. The luminance of the initial image data is increased to obtain brightening image data, whose brightness is higher than that of the initial image data.
The initial image data may be subjected to linear or gamma brightening to obtain brightening image data. For example, if the initial image data is raw-domain data, brightening processing is performed on the initial image data with a gain of 5-10 to obtain brightening image data; if the initial image data is sRGB-domain data, linear brightening processing is performed on the initial image data with a gain of 1-5 to obtain brightening image data. Because sRGB-domain data has already passed through the basic image processing pipeline, its brightness is higher than that of raw-domain data, so the corresponding brightening gain is smaller. In some practical application scenarios, the gamma value used for gamma brightening may be 1.8-2.5.
It will be appreciated that the purpose of brightening the initial image data is to simulate highlight regions: pseudo colors in real images usually appear at highlight, strong-contrast edges, so simulating pseudo-color data within simulated highlight regions improves the accuracy of the simulated pseudo-color data.
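As an illustrative sketch, the linear and gamma brightening described above might look as follows in Python with NumPy, assuming float images with values in [0, 1]; the helper name brighten and the default gains are assumptions:

```python
import numpy as np
from typing import Optional

def brighten(image: np.ndarray, gain: float = 3.0,
             gamma: Optional[float] = None) -> np.ndarray:
    """Linearly brighten a [0, 1] float image; optionally apply gamma brightening."""
    out = image.astype(np.float32) * gain                # linear brightening by a gain
    if gamma is not None:
        out = np.clip(out, 0.0, 1.0) ** (1.0 / gamma)    # gamma brightening, gamma e.g. 1.8-2.5
    return np.clip(out, 0.0, 1.0)

# Per the ranges above: raw-domain data might use a gain of 5-10, sRGB-domain data a gain of 1-5.
bright = brighten(np.random.rand(64, 64, 3), gain=2.0, gamma=2.2)
```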
And 206, performing edge detection on the brightening image data to obtain an edge mask of the brightening image data.
An edge is the boundary between an image region with one attribute and a region with another attribute, i.e., where region attributes change abruptly. Edges are where uncertainty in an image is greatest and where image information is concentrated, so they carry rich information.
In an alternative embodiment, edge detection may be performed on the brightening image data by an edge detection algorithm to obtain an edge mask of the brightening image data. The edge detection algorithm may include gradient-magnitude operators, the Laplacian of Gaussian operator method, the Canny operator, neural network algorithms, wavelet transform methods and the like, where the gradient-magnitude operators may be the Roberts operator, Sobel operator, Prewitt operator, Kirsch operator and the like. It can be appreciated that an edge detection algorithm matched to the actual application scenario can be selected to perform edge detection and obtain the edge mask of the brightening image data.
As can be appreciated, the brightening image is a higher brightness image, edge detection is performed based on the brightening image data, and the resulting edge mask of the brightening image data can characterize the strong edges of the original image data.
Step 208, fusing the edge mask, the initial image data and the brightening image data to obtain reference image data.
The reference image data represents highlight edge region data corresponding to the initial image data, i.e., highlight data at strong edges.
Alternatively, the initial image data and the brightening image data may be fused to obtain intermediate fused image data, and the reference image data is obtained based on the edge mask and the intermediate fused image data. For example, the initial image data and the brightening image data are subjected to weighted fusion to obtain intermediate fusion image data, and the intermediate fusion image data and pixels at positions corresponding to the edge masks are multiplied to obtain reference image data.
Alternatively, the initial image data may be multiplied by the pixels at corresponding positions of the edge mask to obtain first image data; the brightening image data is multiplied by the pixels at corresponding positions of the edge mask to obtain second image data; and the first image data and the second image data are fused to obtain the reference image data.
Step 210, performing pseudo-color simulation on the brightening image data to obtain pseudo-color simulated image data.
Pseudo-color simulated image data refers to image data obtained by pseudo-color simulation. Pseudo colors, also known as false colors, are not the actual colors of the object itself, but color rendering results caused by various complications during optical imaging, and usually need to be removed during image processing. Pseudo colors easily perceived by the human eye typically appear green or purple; because they form elongated stripe-shaped regions as a whole, they are also called green fringing or purple fringing. It will be appreciated that the specific pseudo color may be determined according to the actual application scenario: for example, some scenarios are prone to purple fringing, so purple simulation may be performed; other scenarios are prone to green fringing, so green simulation may be performed; and in still other scenarios, both purple and green simulation may be performed.
Alternatively, the brightening image data may be pseudo-color simulated by color gains to obtain pseudo-color simulated image data. The color gain of each color channel may be set according to the color value distribution of the color channels of the simulated pseudo color. For example, assuming the simulated pseudo color is purple, the color value distribution corresponding to purple is that the B-channel color value is greater than the R-channel color value, so the B-channel color gain may be greater than the R-channel color gain, and the G-channel color value may be left unchanged or turned down.
In an alternative embodiment, the B-channel component and the R-channel component of each pixel in the brightening image data may be adjusted such that the B-channel component of each pixel is greater than the R-channel component, and the G-channel component may be left unchanged or reduced, thereby obtaining the pseudo-color simulated image data.
Step 212, fusing the pseudo-color analog image data, the reference image data and the edge mask to obtain target image data.
The target image data represents highlight pseudo-color edge data corresponding to the initial image data; that is, the target image data is constructed based on the initial image data and exhibits pseudo color at highlight edges.
Alternatively, the pseudo-color simulated image data and the reference image data may be fused to obtain first fused image data, and the target image data may be obtained based on the edge mask and the first fused image data. For example, the pseudo-color simulated image data and the reference image data are subjected to weighted fusion to obtain first fused image data, and the first fused image data is multiplied by the pixels at corresponding positions of the edge mask to obtain the target image data.
Alternatively, the pseudo-color simulated image data may be multiplied by the pixels at corresponding positions of the edge mask to obtain first image data; the reference image data is multiplied by the pixels at corresponding positions of the edge mask to obtain second image data; and the first image data and the second image data are fused to obtain second fused image data, which is taken as the target image data.
Step 214, the target image data is used as training data of the de-pseudo-color model.
The de-pseudo-color model refers to a model for removing pseudo colors. With the target image data as training data, a basic model can be trained to obtain the de-pseudo-color model. The basic model refers to a general model widely used for various artificial intelligence tasks, such as a neural network model.
In an alternative embodiment, the target image data may be used as training data for the de-pseudo-color model by means of unsupervised training. For example, inputting the target image data into the basic model to obtain a model output result, and adjusting parameters of the basic model based on the difference between the model output result and the real result until the difference between the model output result and the real result is smaller than a training threshold value, thereby obtaining the anti-false-color model.
In an alternative embodiment, the data pair consisting of the target image data and the reference image data may be used as training data for the de-pseudo-color model by means of supervised training. For example, target image data in training data is input into a basic model to obtain initial de-pseudo-color image data, parameters of the basic model are adjusted based on the difference between the initial de-pseudo-color image data and reference image data in the training data until the difference between the initial de-pseudo-color image data and a corresponding reference image is smaller than a training threshold value, and a de-pseudo-color model is obtained, so that the de-pseudo-color model can learn the capability of processing the target image data to obtain the reference image data.
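As a rough sketch of this supervised variant, a minimal PyTorch training loop might look as follows; the tiny convolutional "basic model", the L1 loss, and the stopping threshold are stand-in assumptions rather than choices fixed by this method:

```python
import torch
import torch.nn as nn

# Stand-in "basic model"; no particular architecture is fixed here.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()
train_threshold = 1e-3  # illustrative training threshold

# Each pair: target image data (pseudo-color input) and reference image data (GT).
pairs = [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))]

for target, reference in pairs:
    pred = model(target)                # initial de-pseudo-color image data
    loss = loss_fn(pred, reference)     # difference from the reference image data
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if loss.item() < train_threshold:   # stop once the difference is small enough
        break
```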
According to the training data construction method above, initial image data is acquired and brightened to obtain brightening image data; edge detection is performed on the brightening image data to obtain an edge mask; the edge mask, the initial image data and the brightening image data are fused to obtain reference image data; pseudo-color simulation is performed on the brightening image data to obtain pseudo-color simulated image data; and the pseudo-color simulated image data, the reference image data and the edge mask are fused to obtain target image data, which is used as training data of the de-pseudo-color model. In other words, target image data that matches actual scenes, with pseudo color appearing at highlight edges, can be accurately simulated; a de-pseudo-color model can be trained on the constructed target image data in an artificial-intelligence manner; and de-pseudo-color processing based on that model can improve the accuracy of pseudo-color removal.
In some embodiments, fusing the edge mask, the initial image data, and the enhanced image data to obtain reference image data includes:
the initial image data and the brightening image data are subjected to weighted fusion to obtain fusion image data; and obtaining the reference image data according to the product of the fused image data and the pixels at the corresponding positions of the edge masks.
In this embodiment, the pixels at corresponding positions of the initial image data and the brightening image data are weighted and fused to obtain fused image data. For example, for a target pixel in the initial image data, the target pixel and the pixel at the corresponding position in the brightening image data are weighted and fused to obtain the pixel data at the corresponding position in the fused image data; the target pixel can be a pixel at any position in the initial image data, so fused image data can be obtained by weighted fusion of all pixels in the initial image data with the pixels at corresponding positions in the brightening image data. Weighted fusion may refer to taking the average after a weighted summation. The fused image data is then multiplied by the pixels at corresponding positions of the edge mask, and the product serves as the reference image data.
In an alternative embodiment, since the edge mask consists of values 0 and 1, the fused image data and the pixels at corresponding positions of the edge mask may be combined with an AND operation to obtain the reference image data; that is, the reference image data at positions where the edge mask is 0 is 0, and the reference image data at positions where the edge mask is 1 is the fused image data at those positions, which is equivalent to filtering out the fused image data at the positions where the edge mask is 1 and using it as the reference image data.
In this embodiment, the initial image data and the brightening image data are weighted and fused to obtain fused image data, and the reference image data is obtained as the product of the fused image data and the pixels at corresponding positions of the edge mask. This filters the initial image data and the brightening image data through the edge mask, so the resulting reference image data is the image data of the regions corresponding to the edge mask and fuses the characteristics of the initial image data with those of the brightening image data; that is, data retaining the characteristics of the initial image data together with highlight characteristics is filtered out through the edge mask, making the obtained reference image data more accurate and thereby improving the accuracy of the target image data.
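A minimal sketch of this fusion, assuming float RGB arrays in [0, 1] and a 0/1 edge mask of shape (H, W); the function name and the 0.5 weight are illustrative assumptions. The same pattern applies again in step 212, with the pseudo-color simulated data and the reference data as inputs:

```python
import numpy as np

def fuse_with_mask(img_a: np.ndarray, img_b: np.ndarray,
                   edge_mask: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted fusion of two images, gated by a 0/1 edge mask."""
    fused = w * img_a + (1.0 - w) * img_b    # pixel-wise weighted fusion
    return fused * edge_mask[..., None]      # keep only pixels at strong edges

# Step 208: reference = fuse_with_mask(initial, brightened, mask)
# Step 212: target = fuse_with_mask(pseudo_color_sim, reference, mask)
```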
In some embodiments, performing a pseudo-color simulation on the enhanced image data to obtain pseudo-color simulated image data includes:
obtaining a color gain corresponding to the simulated pseudo color; and adjusting the colors of the corresponding channels in the brightening image data according to the color gains to obtain pseudo-color simulation image data.
In this embodiment, the color of the corresponding channel in the brightening image data may be adjusted by the color gain of each channel of the simulated pseudo color, and when the color channel condition of the pseudo color is reached, the pseudo-color simulated image data is obtained. For example, if the simulated pseudo color is purple, the color channel condition corresponding to purple is that the B-channel color value is greater than or equal to the R-channel color value; if the simulated pseudo color is green, the color channel condition corresponding to green is that the R channel color value is larger than the B channel color value.
The color gain corresponding to the simulated pseudo color may be generated by a random algorithm according to a preset gain range limit, that is, the color gain within the preset gain range limit may be generated by a random algorithm. For example, if the false color is purple, the preset gain range limit may be that the R-channel gain is 1.1-1.2, the B-channel gain is 1.1-1.25, and the G-channel gain is 0.8-1, that is, for the simulation of purple, the randomly generated R-channel gain may be 1.1, 1.15, 1.18, 1.2, etc., and the B-channel gain may be 1.1, 1.15, 1.17, 1.2, 1.25, etc., and the G-channel gain may be 0.8, 0.9, 1, etc.
In an alternative embodiment, adjusting the color of the corresponding channel in the brightening image data according to the color gain may be done by multiplying the color gain by the color value of the corresponding channel in the brightening image data, thereby obtaining the pseudo-color simulated image data.
In this embodiment, the color gain corresponding to the simulated pseudo color is obtained, and the color of the corresponding channel in the brightening image data is adjusted according to the color gain to obtain the pseudo-color simulated image data. This is simple and convenient, and because the color of each channel in the brightening image data is adjusted by its own channel gain, the pseudo-color simulated image data can be obtained accurately.
In some embodiments, the color gain includes an R-channel gain and a B-channel gain, and the pseudo-color simulated image data includes purple simulated image data; adjusting the colors of the corresponding channels in the brightening image data according to the color gains to obtain pseudo-color simulated image data includes:
adjusting the R-channel color in the brightening image data according to the R-channel gain to obtain a candidate R-channel color value; adjusting the B-channel color in the brightening image data according to the B-channel gain to obtain a candidate B-channel color value; and if the candidate R-channel color value is greater than the candidate B-channel color value, adjusting the color value of at least one of the R channel and the B channel so that the candidate R-channel color value is less than or equal to the candidate B-channel color value, thereby obtaining purple simulated image data.
In this embodiment, the brightness enhancement image data is in RGB format, the color gain includes at least an R channel gain and a B channel gain, the R channel color in the brightness enhancement image data is adjusted by the R channel gain to obtain a candidate R channel color value, and the B channel color in the brightness enhancement image data is adjusted by the B channel gain to obtain a candidate B channel color value.
Alternatively, for purple simulation, the G-channel color in the brightening image data may or may not be adjusted. If the G-channel color needs to be adjusted, it can be adjusted according to the G-channel gain to obtain a candidate G-channel color value.
If the candidate R-channel color value is greater than the candidate B-channel color value, pink may appear instead of purple, so the color value of at least one of the R channel and the B channel needs further adjustment; that is, the R-channel color value may be adjusted alone, the B-channel color value may be adjusted alone, or both may be adjusted simultaneously, so that the candidate R-channel color value becomes less than or equal to the candidate B-channel color value and the result can be purple. Purple simulated image data is then obtained from the adjusted channel color values.
In this embodiment, the colors of the corresponding channels are adjusted by the R-channel gain and the B-channel gain respectively, so that the adjusted candidate R-channel color value is less than or equal to the candidate B-channel color value; purple simulated image data can thus be simulated rapidly and accurately.
In some embodiments, if the candidate R-channel color value is greater than the candidate B-channel color value, adjusting the color value of at least one of the R channel and the B channel so that the candidate R-channel color value is less than or equal to the candidate B-channel color value, to obtain purple simulated image data, includes:
if the candidate R-channel color value is greater than the candidate B-channel color value, acquiring at least one of a target R-channel gain and a target B-channel gain; and adjusting at least one of the R-channel color value based on the target R-channel gain and the B-channel color value based on the target B-channel gain, so that the candidate R-channel color value is less than or equal to the candidate B-channel color value, thereby obtaining purple simulated image data.
In this embodiment, if the candidate R-channel color value is greater than the candidate B-channel color value, the current gains cannot yield purple simulated image data. The target R-channel gain may then be obtained and the R-channel color value adjusted based on it; or the target B-channel gain may be obtained and the B-channel color value adjusted based on it; or both target gains may be obtained and both channels adjusted, so that the candidate R-channel color value becomes less than or equal to the candidate B-channel color value, thereby obtaining purple simulated image data. The target R-channel gain or target B-channel gain may be determined according to the difference between the candidate R-channel color value and the candidate B-channel color value.
In an alternative embodiment, the candidate R-channel color value and the candidate B-channel color value are compared. If the candidate R-channel color value is greater, a target R-channel gain less than 1 may be obtained and used to adjust the R-channel color value; or a target B-channel gain may be obtained and used to adjust the B-channel color value; or both may be obtained simultaneously, with the target R-channel gain less than 1 and the target B-channel gain greater than 1, and used to adjust the respective channels, so that the candidate R-channel color value becomes less than or equal to the candidate B-channel color value, giving purple simulated image data. It can be appreciated that this adjustment may be repeated: at least one of the target R-channel gain and the target B-channel gain may be taken multiple times and the corresponding channel color values adjusted each time, until the candidate R-channel color value is less than or equal to the candidate B-channel color value and purple simulated image data is obtained.
In this embodiment, by acquiring at least one of the target R-channel gain and the target B-channel gain and adjusting the color values of the corresponding color channels, the channel color values can be adjusted more quickly and accurately, improving the efficiency and accuracy of obtaining purple simulated image data.
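A possible NumPy sketch of the purple simulation under the gain ranges given above; the RGB channel order, the clamp-style correction used when R exceeds B, and the function name are assumptions:

```python
import numpy as np

rng = np.random.default_rng()

def simulate_purple(bright: np.ndarray) -> np.ndarray:
    """Random per-channel gains (R: 1.1-1.2, G: 0.8-1, B: 1.1-1.25), keeping R <= B."""
    out = bright.astype(np.float32).copy()   # assumes RGB channel order, values in [0, 1]
    out[..., 0] *= rng.uniform(1.1, 1.2)     # R-channel gain
    out[..., 1] *= rng.uniform(0.8, 1.0)     # G-channel gain
    out[..., 2] *= rng.uniform(1.1, 1.25)    # B-channel gain
    pink = out[..., 0] > out[..., 2]         # R > B would look pink rather than purple
    out[..., 0][pink] = out[..., 2][pink]    # one simple correction: clamp R down to B
    return np.clip(out, 0.0, 1.0)
```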
In some embodiments, the color gain comprises a G-channel gain; the method further comprises the following steps:
According to the gain of the G channel, the color of the G channel in the brightness enhancement image data is adjusted, and a candidate G channel color value is obtained; the G channel gain is not greater than 1.
For simulation of purple data, the color gain includes a G-channel gain, and the G-channel color in the brightening image data may be adjusted according to the G-channel gain; for example, the G-channel gain is multiplied by the G-channel color value in the brightening image data to obtain a candidate G-channel color value. The G-channel gain is less than or equal to 1, e.g., 0.8-1, such as 0.8, 0.9 or 1.
In this embodiment, the G-channel color in the brightening image data is adjusted according to the G-channel gain to obtain a candidate G-channel color value; by adjusting the color values of all three RGB channels, purple simulated image data can be obtained more quickly and accurately.
In one embodiment, edge detection is performed on the brightening image data to obtain an edge mask of the brightening image data, including:
Converting the brightness-enhanced image data into YUV domain data and acquiring brightness data of a brightness channel of the YUV domain data; performing sliding filtering on the brightness data to obtain a brightness statistic value corresponding to the sliding window; based on the luminance statistics, an edge mask is obtained that lightens the image data.
In this embodiment, the brightening image data is converted into YUV-domain data, and the brightness data of the brightness channel (the Y channel) in the YUV-domain data is extracted; that is, the Y-channel data is extracted as the brightness data. Sliding filtering is then performed on the brightness data to obtain the brightness statistic of each sliding window, and the edge mask of the brightening image data is obtained based on the brightness statistics. The sliding filter may be a moving average filter, an exponentially weighted moving average filter, a median filter, a Kalman filter, a low-pass filter, a Gaussian filter or the like. The sliding window may also be referred to as a convolution kernel, and its size may be set according to the actual application scenario, for example 3×3, 4×4 or 5×5. The moving step of the sliding window may be 1, 2, 4 or the like, and can be set according to actual scenario requirements.
In one example, the brightness statistic corresponding to a sliding window may be the weighted average or variance of the brightness data within the sliding window; an edge mask of the brightening image data can then be obtained based on the magnitude of the brightness statistic. Variance characterizes how much the data fluctuates: the larger the variance, the greater the fluctuation, and conversely the smaller the variance, the smaller the fluctuation. Greater fluctuation means a greater degree of change at the corresponding position, which is more likely to be an edge, so the edge mask of the brightening image data can be obtained from the relation between the in-window variance and a target threshold.
In this embodiment, by performing sliding filtering on the luminance data of the brightness enhancement image data and calculating the luminance statistic value of the luminance data in the sliding window, the edge mask of the brightness enhancement image data is obtained according to the luminance statistic value, and the position of the luminance data with larger luminance variation can be accurately determined, thereby improving the accuracy of the edge mask.
In some embodiments, deriving an edge mask that lightens the image data based on the luminance statistics includes:
normalizing the brightness statistics to obtain normalized statistics; comparing the normalized statistics with a target threshold; if a normalized statistic is greater than the target threshold, assigning a first gray value to the pixel gray level corresponding to that normalized statistic; if a normalized statistic is not greater than the target threshold, assigning a second gray value to the pixel gray level corresponding to that normalized statistic; and obtaining the edge mask of the brightening image data according to the first gray values and the second gray values, where the first gray value is greater than the second gray value.
In this embodiment, normalization processing is performed on each obtained luminance statistic value, for example, the luminance statistic value is normalized to a value in a [0,1] interval, a normalized statistic value is obtained, each normalized statistic value is compared with a target threshold value, if the normalized statistic value is greater than the target threshold value, a pixel gray level corresponding to the normalized statistic value is assigned to a first gray level, if the normalized statistic value is not greater than the target threshold value, a pixel gray level corresponding to the normalized statistic value is assigned to a second gray level, and an edge mask for the brightness-improving image data is obtained according to the first gray level and the second gray level, and the first gray level is greater than the second gray level. For example, the first gray value is 255, the second gray value is 0, and the resulting edge mask is a binary image. The target threshold may be set according to the requirements of the actual application scenario, for example, the target threshold is 0.05 or 0.1.
In this embodiment, the brightness statistics value is normalized to obtain the normalized statistics value, and the edge mask for brightening the image data is obtained based on the magnitude relation between the normalized statistics value and the target threshold value, so that the calculated amount is small, and the edge mask for brightening the image data can be conveniently and quickly obtained, thereby improving the efficiency of obtaining the edge mask.
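A sketch of this local-variance edge mask using OpenCV box filters; the 5×5 window and 0.05 threshold follow the examples in the text, while the other details are assumptions:

```python
import cv2
import numpy as np

def edge_mask(bright_rgb: np.ndarray, win: int = 5, thresh: float = 0.05) -> np.ndarray:
    """Binary edge mask from the normalized local variance of the Y (luma) channel."""
    yuv = cv2.cvtColor(bright_rgb.astype(np.float32), cv2.COLOR_RGB2YUV)
    y = yuv[..., 0]                                  # brightness data of the Y channel
    mean = cv2.blur(y, (win, win))                   # sliding-window mean
    var = cv2.blur(y * y, (win, win)) - mean ** 2    # local variance in each window
    var = var / (var.max() + 1e-12)                  # normalize statistics into [0, 1]
    return np.where(var > thresh, 255, 0).astype(np.uint8)  # 255 = edge, 0 = flat
```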
In some embodiments, using the target image data as training data for the de-pseudo-color model includes:
Performing mosaic degradation processing on the target image data to obtain intermediate image data; and converting the intermediate image data into raw domain data, and taking the raw domain data as training data of the anti-false color model.
Mosaic degradation refers to dividing an image into a plurality of tiles and processing the tiles so that the image exhibits a mosaic-like effect; this can be achieved, for example, by reducing the resolution of the tiles, reducing the color depth, or adding noise. The purpose of mosaic degradation is to make an image appear blurred, incoherent or pixelated. Mosaic degradation is the opposite concept to demosaicing, whose purpose is to reconstruct a full-color image, i.e., reconstruct the complete RGB three-primary-color combination of each pixel, from the incomplete color samples output by a photosensitive element covered with a color filter array (CFA).
In this embodiment, the target image data is subjected to mosaic degradation processing to obtain intermediate image data, and the intermediate image data is then converted into raw-domain data, which may be in quad-Bayer format. The raw-domain data is used as training data of the de-pseudo-color model; such training data can carry more detailed information and is therefore more accurate, and a model trained with training data obtained through mosaic degradation processing can have stronger processing capability.
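As a simple illustration of mosaic degradation, the sketch below remosaics an RGB image onto a plain RGGB Bayer layout; the text mentions quad-Bayer, so the standard Bayer pattern shown here is a simplifying assumption:

```python
import numpy as np

def mosaic_rggb(rgb: np.ndarray) -> np.ndarray:
    """Degrade an H x W x 3 RGB image to a single-channel RGGB Bayer mosaic."""
    h, w, _ = rgb.shape
    raw = np.empty((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R at even rows, even cols
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G at even rows, odd cols
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G at odd rows, even cols
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B at odd rows, odd cols
    return raw
```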
In one embodiment, the method further comprises: performing first visual processing on the target image data to obtain target visual image data;
The training data for taking the target image data as the de-pseudo-color model comprises the following steps:
And taking the target visual image data as training data of a de-pseudo-color model, wherein the de-pseudo-color model is used for realizing de-pseudo-color and second visual processing, and the second visual processing is opposite to the first visual processing.
After the target image data is obtained, first visual processing is performed on it to obtain a target visual image. The first visual processing includes at least one process that is the inverse of a low-level vision process; that is, the first visual processing may correspond to one or more visual processes. Low-level vision is computer vision that takes pixel-level images as its input, processing and output units and mainly focuses on low-level image information such as pixel-level features, including image enhancement, image restoration, denoising, demosaicing, image compression, super-resolution, color correction, HDR (High Dynamic Range imaging) and the like; its aim is to improve image quality as much as possible without changing the semantic information of the image. Low-level vision processing is processing directed at low-level vision. The target visual image is an image that has undergone pseudo-color simulation and first visual processing: for example, if the first visual processing is noise addition, the target visual image data is image data that has undergone pseudo-color simulation and noise addition; or, if the first visual processing includes noise addition and mosaic degradation, the target visual image data is image data that has undergone pseudo-color simulation, noise addition and mosaic degradation.
The target visual image data is used as training data of the de-pseudo-color model. Since the target visual image has undergone pseudo-color simulation and first visual processing, the de-pseudo-color model trained on the target visual image data can be used for de-pseudo-color together with second visual processing, which is the opposite of the first visual processing: for example, if the first visual processing is noise addition, the second visual processing is denoising; if the first visual processing is mosaic degradation, the second visual processing is demosaicing; and so on.
In this embodiment, first visual processing may be performed on the target image data to obtain target visual image data, which is then used as training data of the de-pseudo-color model. The de-pseudo-color model can thus perform the second visual processing while removing pseudo color, where the second visual processing is the opposite of the first visual processing; multiple capabilities are obtained from a single round of training-data construction, i.e., the de-pseudo-color model can have multiple image processing functions at the same time.
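For instance, if the first visual processing is noise addition, a minimal sketch could be additive Gaussian noise on a [0, 1] float image; the sigma value and function name are assumptions, and a model trained on such data would learn the opposite second visual processing, denoising:

```python
import numpy as np

def add_noise(image: np.ndarray, sigma: float = 0.02) -> np.ndarray:
    """One possible first visual processing: additive Gaussian noise."""
    noisy = image + np.random.normal(0.0, sigma, image.shape).astype(np.float32)
    return np.clip(noisy, 0.0, 1.0)
```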
In some embodiments, the above method further comprises: performing first visual processing on the initial image data to obtain visual processing image data; the visual processing image data is used as training data of a de-pseudo-color model for achieving de-pseudo-color and a second visual processing, which is opposite to the first visual processing.
In this embodiment, a first visual process is performed on the initial image data to obtain visual process image data, where the number of visual processes in the first visual process may be one or more, and the visual process image data is used as training data for removing the false color model. The visual processing image data and the target image data obtained in this embodiment are respectively used as training data of a de-pseudo-color model, and the de-pseudo-color model can be used to simultaneously implement de-pseudo-color and a second visual processing, which is the opposite processing to the first visual processing.
In an alternative embodiment, the base model is trained based on the target image data to obtain an initial de-pseudo-color model, and then the initial de-pseudo-color model is trained based on the visual processing image data to obtain a de-pseudo-color model, which can be used to implement de-pseudo-color and a second visual processing. Or training the basic model based on the vision processing image data to obtain a vision processing model, and training the vision processing model based on the target image data to obtain a pseudo-color removing model.
In an alternative embodiment, different first visual processes may be performed on the initial image data respectively, so as to obtain a plurality of visual processed image data correspondingly, the plurality of visual processed image data are used as training data of the anti-false color model, the anti-false color model may be trained separately based on each visual processed image data and the target image data, and after all training data training is completed, the anti-false color model is obtained.
In this embodiment, first visual processing is performed on the initial image data to obtain visual processing image data, which is used as training data of the de-pseudo-color model; the de-pseudo-color model can then realize de-pseudo-color together with second visual processing, the opposite of the first visual processing. That is, the same model can handle multiple image processing tasks including de-pseudo-color, which reduces power consumption in the image processing process.
In one embodiment, as shown in FIG. 3, an image processing method is provided, comprising the following steps 302 to 304.
Step 302, an image to be processed is acquired.
The image to be processed is an image that needs de-pseudo-color processing; usually, it contains pseudo colors. Pseudo colors, also called false colors, are not the actual colors of the object itself but color rendering results caused by various complications during optical imaging. To make the image better match the actual scene, de-pseudo-color processing needs to be performed on the image to be processed.
Step 304, inputting the image to be processed into a de-pseudo-color model to obtain a de-pseudo-color image; the de-pseudo-color model is obtained through training data, and the training data is obtained through the training data construction method.
In this embodiment, the anti-false color model is used to perform anti-false color processing on the image to be processed, so as to obtain an anti-false color image, where the anti-false color image is the image to be processed after the false color is removed. The de-pseudo-color model is trained based on training data obtained by the data construction method described above.
In an alternative embodiment, the image to be processed is input into a de-pseudo-color model, which identifies and removes the pseudo colors in the image to obtain a de-pseudo-color image. Alternatively, the de-pseudo-color model may perform both de-pseudo-color processing and second visual processing on the image to be processed to obtain the de-pseudo-color image, where the second visual processing is the opposite of the first visual processing; for the explanation of the first visual processing, refer to the description in the training data construction method.
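A minimal inference sketch under the assumption that the de-pseudo-color model is a trained PyTorch network; the placeholder model and tensor shape are assumptions:

```python
import torch
import torch.nn as nn

model = nn.Identity()             # placeholder for the trained de-pseudo-color model
img = torch.rand(1, 3, 256, 256)  # placeholder image to be processed (RGB, values in [0, 1])

model.eval()
with torch.no_grad():
    result = model(img)           # the de-pseudo-color image
```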
In an alternative embodiment, the training mode of the de-pseudo-color model may be a supervised training mode or an unsupervised training mode. The supervised training may take reference image data as a true value and a data pair consisting of target image data and reference image data as training data for the de-pseudo model. For example, target image data in training data is input into a basic model to obtain initial de-pseudo-color image data, parameters of the basic model are adjusted based on the difference between the initial de-pseudo-color image data and reference image data in the training data until the difference between the initial de-pseudo-color image data and a corresponding reference image is smaller than a training threshold value, and a de-pseudo-color model is obtained, so that the de-pseudo-color model can learn the capability of processing the target image data to obtain the reference image data. In the non-supervision training mode, for example, target image data can be input into a basic model to obtain a model output result, and parameters of the basic model are adjusted based on the difference between the model output result and a real result until the difference between the model output result and the real result is smaller than a training threshold value, so that a pseudo-color removing model is obtained.
In an actual application scenario, the de-pseudo-color model can be converted into an executable file and installed on the electronic device; the electronic device can then perform de-pseudo-color processing on images captured by the camera application to obtain de-pseudo-color images, i.e., what the human eye sees is the image after pseudo-color removal.
It will be understood that, for the description and explanation of the terms or steps in the image processing method in this embodiment, reference may be made to the description and explanation of the corresponding terms or steps in the training data construction method, which are not repeated herein.
In this embodiment, the image to be processed is acquired and input into the de-pseudo-color model to obtain a de-pseudo-color image; the de-pseudo-color model is trained with training data obtained by the training data construction method, so de-pseudo-color processing can be performed on the image to be processed in an artificial-intelligence manner, improving the accuracy and efficiency of pseudo-color removal. In addition, following the same idea as the training data construction method, other training data for low-level vision processing can be constructed and used to train the de-pseudo-color model jointly, so that the model can realize de-pseudo-color and other low-level vision processing at the same time; that is, one model can handle multiple image processing tasks, saving processor power consumption.
In one embodiment, as shown in fig. 4, the training data construction method provided by the embodiment of the present application is described through a construction flow for purple-fringing training data. A high-definition image is taken as the initial image data, and random gamma brightening is performed on it to obtain brightened high-definition image data. Sliding filtering is performed on the high-definition image data, the variance of the brightness data within each sliding window is computed as a local variance, the local variances are normalized to obtain normalized local variances, and an edge mask is obtained from the comparison of the normalized local variances against a target threshold. The high-definition image, the brightened high-definition image and the mask are fused to obtain GT (Ground truth), i.e., the reference image data; the GT simulates edge highlights and can serve as the true value during supervised model training. Meanwhile, random color disturbance can be applied to the high-definition image, i.e., purple simulation is performed on it with preset color gains, where the R-channel gain is 1.1-1.2, the B-channel gain is 1.1-1.25, and the G-channel gain is 0.8-1, yielding purple simulated image data. If the corresponding channels of the brightened high-definition image are not purple after adjustment by the randomly generated color gains, yellow or pink may appear, and further color adjustment is needed: a target R-channel gain and a target B-channel gain are obtained, and at least one of the R-channel color value (via the target R-channel gain) and the B-channel color value (via the target B-channel gain) is adjusted so that the R-channel color value is less than or equal to the B-channel color value, giving purple simulated image data. The purple simulated image data, the GT and the edge mask are fused to obtain purple-fringing simulation data.
Illustratively, the obtained high-definition image is shown in fig. 5; the high-definition image in this example may be an image in sRGB (standard Red Green Blue) format, where sRGB is a color language protocol. Brightening is performed on the high-definition image to obtain brightened image data; edge detection is performed on the brightened image data to obtain the edge mask shown in fig. 6; based on the edge mask, the high-definition image and the brightened image data are fused to obtain the reference image data; purple simulation is performed on the brightened image data to obtain purple simulation image data; and the purple simulation image data, the reference image data and the edge mask are fused to obtain purple fringing simulation data. The purple fringing simulation image corresponding to the purple fringing simulation data is shown in fig. 7, where the simulated highlight purple fringe 702 is the part marked by the dotted line. In other words, the simulation produces purple fringes only at highlight edges; no purple fringe is simulated at non-highlight edges.
The purple fringing data can also be simulated on a linear RGB image. As shown in fig. 8, fig. 8 is a linear RGB high-definition image; fig. 9 is the edge mask obtained from the linear RGB high-definition image; and fig. 10 is the purple fringing simulation image corresponding to the purple fringing simulation data obtained by simulation based on the linear RGB high-definition image and the corresponding edge mask, with the purple fringe indicated by the dotted line pointed to by the arrow in fig. 10. Since the linear RGB image has not yet been processed by AWB (auto white balance), gamma and the like, the simulation intensity of the purple fringing should not be too high.
The purple fringing simulation data may be used as training data for a de-purple-fringing model, which removes unwanted purple areas from an image. For example, the image to be processed is input into the de-purple-fringing model to obtain a de-purple-fringed image. In one example, the image to be processed is shown in fig. 11 and contains a purple fringe 1102; the de-purple-fringed image is shown in fig. 12, from which it can be seen that, relative to the image to be processed, the purple fringe at the highlight edge has been removed.
Alternatively, the purple fringing simulation data may first be subjected to a first visual processing to obtain target visual image data, and the target visual image data may then be used as training data for a de-purple-fringing model that performs both purple fringing removal and a second visual processing, where the second visual processing is opposite to the first visual processing.
In the construction flow of the purple fringing training data in this embodiment, purple is simulated at highlight edges to obtain highlight purple fringes, so accurate purple fringing training data can be obtained. Visual processing is then performed on the purple fringing training data to obtain target visual image data, and a de-purple-fringing model is trained with the target visual image data; the trained model can remove purple fringes and, at the same time, perform various low-level vision processing tasks, which reduces processor power consumption.
It should be understood that although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different moments; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or in alternation with at least some of the other steps, sub-steps or stages.
Based on the same inventive concept, an embodiment of the present application further provides a training data construction apparatus for implementing the training data construction method described above. The implementation of the solution provided by the apparatus is similar to the implementation described for the method above, so for the specific limitations in the one or more embodiments of the training data construction apparatus provided below, reference may be made to the limitations of the training data construction method above, and details are not repeated here.
In one embodiment, as shown in fig. 13, there is provided a training data construction apparatus, including: an image acquisition module 1302, a brightening processing module 1304, an edge detection module 1306, a first fusion module 1308, a pseudo-color simulation module 1310, a second fusion module 1312 and a training data obtaining module 1314, wherein:
An image acquisition module 1302 for acquiring initial image data;
A brightening processing module 1304, configured to perform brightening processing on the initial image data to obtain brightening image data;
An edge detection module 1306, configured to perform edge detection on the brightening image data, to obtain an edge mask of the brightening image data;
A first fusing module 1308, configured to fuse the edge mask, the initial image data, and the brightness enhancement image data to obtain reference image data;
a pseudo-color simulation module 1310, configured to perform pseudo-color simulation on the brightening image data to obtain pseudo-color simulated image data;
The second fusing module 1312 is configured to fuse the pseudo-color analog image data, the reference image data, and the edge mask, to obtain target image data;
A training data obtaining module 1314, configured to use the target image data as training data of the de-pseudo-color model.
In one embodiment, the first fusing module 1308 is further configured to perform weighted fusion on the initial image data and the brightening image data to obtain fused image data, and to obtain the reference image data according to the product of the fused image data and the pixels at the corresponding positions of the edge mask.
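The following is a NumPy sketch of one plausible reading of this fusion; the weight alpha and the blending of non-edge regions back to the initial image are assumptions made for illustration, not details fixed by the application.

```python
import numpy as np

def build_reference(init, bright, mask, alpha=0.5):
    # Weighted fusion of the initial and brightened data, then blended
    # back through the edge mask so only edge regions are highlighted.
    # alpha and the blend-back of non-edge regions are assumptions.
    fused = alpha * init + (1.0 - alpha) * bright
    mask3 = mask[..., None]                       # broadcast the H x W mask over channels
    return mask3 * fused + (1.0 - mask3) * init   # per-pixel product with the mask
```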
In one embodiment, the pseudo-color simulation module 1310 is further configured to obtain a color gain corresponding to the simulated pseudo color, and to adjust the colors of the corresponding channels in the brightening image data according to the color gain to obtain the pseudo-color simulation image data.
In one embodiment, the color gain comprises an R channel gain and a B channel gain, and the pseudo-color simulation image data comprises purple simulation image data; the pseudo-color simulation module 1310 is further configured to adjust the color of the R channel in the brightening image data according to the R channel gain to obtain a candidate R channel color value; adjust the color of the B channel in the brightening image data according to the B channel gain to obtain a candidate B channel color value; and, if the candidate R channel color value is greater than the candidate B channel color value, adjust the color value of at least one of the R channel and the B channel so that the candidate R channel color value is smaller than or equal to the candidate B channel color value, obtaining the purple simulation image data.
In one embodiment, the pseudo-color simulation module 1310 is further configured to obtain at least one of a target R channel gain and a target B channel gain if the candidate R channel color value is greater than the candidate B channel color value, and to adjust at least one of the R channel color value based on the target R channel gain and the B channel color value based on the target B channel gain, so that the candidate R channel color value is smaller than or equal to the candidate B channel color value, obtaining the purple simulation image data.
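As a concrete illustration of this adjustment, a minimal NumPy sketch follows. The gains dictionary comes from the sampling sketch above; correcting a violation by clamping the R channel down to the B channel (effectively a target R channel gain no greater than 1) is an assumed realization, not the only one the description allows.

```python
import numpy as np

def simulate_purple(bright, gains):
    # Apply the sampled channel gains, then enforce R <= B so the
    # perturbed color stays purple rather than turning yellow or pink.
    out = bright.copy()
    out[..., 0] = np.clip(out[..., 0] * gains["r"], 0.0, 1.0)  # candidate R
    out[..., 1] = np.clip(out[..., 1] * gains["g"], 0.0, 1.0)  # candidate G
    out[..., 2] = np.clip(out[..., 2] * gains["b"], 0.0, 1.0)  # candidate B
    # Where the candidate R value exceeds the candidate B value,
    # pull R down to B (an assumed target R channel gain <= 1).
    r, b = out[..., 0], out[..., 2]
    violation = r > b
    r[violation] = b[violation]   # r is a view into out, so out is updated
    return out
```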
In one embodiment, the color gain comprises a G-channel gain; the pseudo color simulation module 1310 is further configured to adjust a color of a G channel in the brightening image data according to the G channel gain, so as to obtain a candidate G channel color value; the G channel gain is not greater than 1.
In one embodiment, the edge detection module 1306 is further configured to convert the brightening image data into YUV-domain data and obtain the luminance data of the luminance channel of the YUV-domain data; perform sliding filtering on the luminance data to obtain a luminance statistic corresponding to the sliding window; and obtain the edge mask of the brightening image data based on the luminance statistic.
In one embodiment, the edge detection module 1306 is further configured to normalize the luminance statistic to obtain a normalized statistic and compare the normalized statistic with a target threshold; if the normalized statistic is greater than the target threshold, the corresponding pixel is assigned a first gray value, and otherwise a second gray value; the edge mask of the brightening image data is then obtained from the first gray value and the second gray value, where the first gray value is greater than the second gray value.
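A minimal sketch of this edge detection follows, assuming BT.601 luma weights for the Y channel and illustrative window size and threshold (none of these specific values are given in the application). It uses the identity var(x) = E[x^2] - E[x]^2 within a sliding window.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edge_mask(bright, win=7, thresh=0.05):
    # Luma channel via assumed BT.601 weights.
    y = 0.299 * bright[..., 0] + 0.587 * bright[..., 1] + 0.114 * bright[..., 2]
    # Sliding-window mean and mean of squares give the local variance.
    mean = uniform_filter(y, size=win)
    mean_sq = uniform_filter(y * y, size=win)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    norm = var / (var.max() + 1e-8)            # normalized local variance in [0, 1]
    # First / second gray values realized here as 1 / 0.
    return np.where(norm > thresh, 1.0, 0.0)
```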
In one embodiment, the training data obtaining module is further configured to perform mosaic degradation processing on the target image data to obtain intermediate image data, convert the intermediate image data into raw-domain data, and use the raw-domain data as training data of the de-pseudo-color model.
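A minimal sketch of the mosaic degradation follows, assuming an RGGB Bayer pattern; a full raw-domain conversion would also invert gamma and white balance, which is omitted here.

```python
import numpy as np

def mosaic_rggb(img):
    # Degrade an H x W x 3 image into a single-channel Bayer mosaic
    # with an assumed RGGB layout (H and W assumed even).
    raw = np.empty(img.shape[:2], dtype=img.dtype)
    raw[0::2, 0::2] = img[0::2, 0::2, 0]  # R at even rows, even cols
    raw[0::2, 1::2] = img[0::2, 1::2, 1]  # G at even rows, odd cols
    raw[1::2, 0::2] = img[1::2, 0::2, 1]  # G at odd rows, even cols
    raw[1::2, 1::2] = img[1::2, 1::2, 2]  # B at odd rows, odd cols
    return raw
```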
In one embodiment, the training data construction apparatus further includes a visual image data construction module, configured to perform a first visual processing on the target image data to obtain target visual image data;
the training data obtaining module is further configured to use the target visual image data as training data of the de-pseudo-color model, where the de-pseudo-color model is used to achieve pseudo-color removal and a second visual processing, and the second visual processing is opposite to the first visual processing.
In one embodiment, the training data construction apparatus further includes a vision processing module, configured to perform a first visual processing on the initial image data to obtain vision-processed image data; the vision-processed image data is used as training data of a de-pseudo-color model, where the de-pseudo-color model is used to achieve pseudo-color removal and a second visual processing, and the second visual processing is opposite to the first visual processing.
Based on the same inventive concept, the embodiment of the application also provides an image processing device for realizing the above-mentioned image processing method. The implementation of the solution provided by the device is similar to the implementation described in the above image processing method, so the specific limitation in one or more embodiments of the image processing device provided below may refer to the limitation of the image processing method hereinabove, and will not be repeated herein.
In one embodiment, an image processing apparatus is provided, including a to-be-processed-image acquisition module and a de-pseudo-color module, wherein:
The to-be-processed-image acquisition module is used for acquiring the image to be processed;
The de-pseudo-color module is used for inputting the image to be processed into the de-pseudo-color model to obtain a de-pseudo-color image, where the de-pseudo-color model is trained with training data obtained through the training data construction method; a minimal inference sketch is given below.
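The following inference sketch assumes a PyTorch model and an H x W x 3 float image in [0, 1]; neither the framework nor any identifier below is specified by this application.

```python
import numpy as np
import torch

def remove_pseudo_color(model: torch.nn.Module, img: np.ndarray) -> np.ndarray:
    # img: H x W x 3 float array in [0, 1]; model: a trained
    # de-pseudo-color network (architecture unspecified here).
    x = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0)  # 1 x 3 x H x W
    model.eval()
    with torch.no_grad():
        y = model(x)
    # Back to H x W x 3, clipped to a valid display range.
    return y.squeeze(0).permute(1, 2, 0).clamp(0.0, 1.0).cpu().numpy()
```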
The respective modules in the training data constructing apparatus or the image processing apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 14. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing target image data. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a training data construction method.
It will be appreciated by those skilled in the art that the structure shown in fig. 14 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements are applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of a training data construction method or an image processing method.
Embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of a training data construction method or an image processing method.
It should be noted that, the user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data need to comply with the related laws and regulations and standards.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration, and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application; their descriptions are specific and detailed, but they are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the application, and these all fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (16)

1. A method of training data construction, comprising:
Acquiring initial image data;
carrying out brightening treatment on the initial image data to obtain brightening image data;
performing edge detection on the brightening image data to obtain an edge mask of the brightening image data;
Fusing the edge mask, the initial image data and the brightness enhancement image data to obtain reference image data;
Performing pseudo-color simulation on the brightening image data to obtain pseudo-color simulation image data;
Fusing the pseudo-color simulation image data, the reference image data and the edge mask to obtain target image data;
and taking the target image data as training data of a de-pseudo-color model.
2. The method of claim 1, wherein fusing the edge mask, the initial image data, and the enhanced image data to obtain reference image data comprises:
carrying out weighted fusion on the initial image data and the brightening image data to obtain fused image data;
and obtaining the reference image data according to the product of the fused image data and the pixels at the corresponding positions of the edge mask.
3. The method of claim 1, wherein said performing a pseudo-color simulation on said enhanced image data results in pseudo-color simulated image data, comprising:
Obtaining a color gain corresponding to the simulated pseudo color;
And adjusting the color of the corresponding channel in the brightening image data according to the color gain to obtain the pseudo-color simulation image data.
4. The method of claim 3, wherein the color gain comprises an R-channel gain and a B-channel gain, and the pseudo-color simulation image data comprises purple simulation image data; the adjusting of the colors of the corresponding channels in the brightening image data according to the color gain to obtain the pseudo-color simulation image data comprises:
adjusting the color of the R channel in the brightness enhancement image data according to the R channel gain to obtain a candidate R channel color value;
adjusting the color of the B channel in the brightness enhancement image data according to the B channel gain to obtain a candidate B channel color value;
and if the candidate R channel color value is greater than the candidate B channel color value, adjusting the color value of at least one of the R channel and the B channel so that the candidate R channel color value is smaller than or equal to the candidate B channel color value, to obtain the purple simulation image data.
5. The method of claim 4, wherein, if the candidate R channel color value is greater than the candidate B channel color value, adjusting the color value of at least one of the R channel and the B channel so that the candidate R channel color value is smaller than or equal to the candidate B channel color value, obtaining the purple simulation image data, comprises:
if the candidate R channel color value is greater than the candidate B channel color value, acquiring at least one of a target R channel gain and a target B channel gain;
and adjusting at least one of the R channel color value based on the target R channel gain and the B channel color value based on the target B channel gain, so that the candidate R channel color value is smaller than or equal to the candidate B channel color value, obtaining the purple simulation image data.
6. The method of claim 4, wherein the color gain comprises a G-channel gain; the method further comprises the steps of:
according to the G channel gain, the color of the G channel in the brightness enhancement image data is adjusted, and a candidate G channel color value is obtained; the G-channel gain is not greater than 1.
7. The method of claim 1, wherein edge detecting the brightened image data to obtain an edge mask of the brightened image data comprises:
Converting the brightness enhancement image data into YUV domain data and acquiring brightness data of a brightness channel of the YUV domain data;
performing sliding filtering on the brightness data to obtain a brightness statistic value corresponding to a sliding window;
and obtaining the edge mask of the brightness enhancement image data based on the brightness statistic.
8. The method of claim 7, wherein the deriving an edge mask for the enhanced image data based on the luminance statistic comprises:
Normalizing the brightness statistic value to obtain a normalized statistic value;
Comparing the normalized statistics with a target threshold;
If the normalized statistical value is larger than a target threshold value, the pixel gray corresponding to the normalized statistical value is endowed with a first gray value;
If the normalized statistical value is not greater than a target threshold value, giving a second gray value to the pixel gray corresponding to the normalized statistical value;
Obtaining an edge mask of the brightening image data according to the first gray scale value and the second gray scale value; the first gray value is greater than the second gray value.
9. The method of claim 1, wherein said using said target image data as training data for a de-pseudo-color model comprises:
performing mosaic degradation processing on the target image data to obtain intermediate image data;
and converting the intermediate image data into raw domain data, and taking the raw domain data as training data of a de-pseudo color model.
10. The method according to any one of claims 1 to 9, further comprising:
performing first visual processing on the target image data to obtain target visual image data;
The using of the target image data as training data of a de-pseudo-color model comprises:
And taking the target visual image data as training data of a de-pseudo-color model, wherein the de-pseudo-color model is used for realizing de-pseudo-color and second visual processing, and the second visual processing is opposite to the first visual processing.
11. The method according to any one of claims 1 to 9, further comprising:
Performing first visual processing on the initial image data to obtain visual processing image data;
And taking the visual processing image data as training data of a de-pseudo-color model, wherein the de-pseudo-color model is used for realizing de-pseudo-color and a second visual processing, and the second visual processing is opposite to the first visual processing.
12. An image processing method, comprising:
Acquiring an image to be processed;
Inputting the image to be processed into a pseudo-color removing model to obtain a pseudo-color removing image; wherein the de-pseudo-color model is trained by training data obtained by the training data construction method according to any one of the preceding claims 1 to 11.
13. A training data constructing apparatus, comprising:
The image acquisition module is used for acquiring initial image data;
The brightness enhancement processing module is used for carrying out brightness enhancement processing on the initial image data to obtain brightness enhancement image data;
The edge detection module is used for carrying out edge detection on the brightening image data to obtain an edge mask of the brightening image data;
the first fusion module is used for fusing the edge mask, the initial image data and the brightness enhancement image data to obtain reference image data;
The pseudo-color simulation module is used for performing pseudo-color simulation on the brightening image data to obtain pseudo-color simulation image data;
The second fusion module is used for fusing the pseudo-color simulation image data, the reference image data and the edge mask to obtain target image data;
And the training data obtaining module is used for taking the target image data as training data of the pseudo-color removing model.
14. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 12.
15. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 12.
16. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any one of claims 1 to 12.
CN202410381896.6A, filed 2024-03-29: Training data construction method, training data construction device, electronic equipment and readable storage medium (pending).

Priority Applications (1)

CN202410381896.6A, priority date 2024-03-29, filing date 2024-03-29: Training data construction method, training data construction device, electronic equipment and readable storage medium.

Publications (1)

CN118172623A, published 2024-06-11.

Family ID: 91350305 (one family application: CN202410381896.6A).

Country status: CN, CN118172623A (en).


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination