CN117764877A - Image processing method, apparatus, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number: CN117764877A
Application number: CN202311676106.9A
Authority: CN (China)
Prior art keywords: image, target, pseudo, edge, color
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 李海军
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202311676106.9A

Landscapes

  • Image Processing (AREA)

Abstract

The present application relates to an image processing method and apparatus, a computer device, a storage medium, and a computer program product. The method comprises the following steps: acquiring a first image; performing brightness conversion on the first image to obtain a second image; performing false color edge detection on the second image to obtain a pseudo-edge mask; and performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain a target image. By adopting the method, the accuracy of false color correction can be improved.

Description

Image processing method, apparatus, electronic device, and computer-readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the popularization and rapid development of mobile devices such as smartphones, their photographing functions have attracted wide attention, and people's requirements for image resolution and imaging quality keep rising. Because of the volume limitations of mobile devices, considerable room for improvement remains in aspects of image quality such as chromatic aberration, sharpness, noise control, dynamic range, and color accuracy. A false color edge of an image, for example a purple edge or a green edge, is a local color error; because purple or green fringing is visually obvious, its impact on image quality is immediate, so processing image color edges is one of the important means of improving image quality.
Traditional processing methods mainly correct the false color edges of an image by means of area weighting or chromatic aberration correction; however, the correction accuracy of these traditional color edge correction methods is not high.
Disclosure of Invention
Embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, which can improve the accuracy of color edge correction.
In a first aspect, the present application provides an image processing method. The method comprises the following steps:
acquiring a first image;
performing brightness conversion on the first image to obtain a second image;
performing false color edge detection on the second image to obtain a pseudo-edge mask;
and performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain a target image.
In a second aspect, the present application also provides an image processing apparatus. The device comprises:
the data acquisition module is used for acquiring a first image;
the brightness conversion module is used for carrying out brightness conversion on the first image to obtain a second image;
the mask detection module is used for carrying out false color edge detection on the second image to obtain a pseudo-edge mask;
and the pseudo-edge processing module is used for carrying out pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain a target image.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor that, when executing the computer program, performs the steps of:
acquiring a first image;
performing brightness conversion on the first image to obtain a second image;
performing false color edge detection on the second image to obtain a pseudo-edge mask;
and performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain a target image.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer-readable storage medium has stored thereon a computer program that, when executed by a processor, performs the steps of:
acquiring a first image;
performing brightness conversion on the first image to obtain a second image;
performing false color edge detection on the second image to obtain a pseudo-edge mask;
and performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain a target image.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
acquiring a first image;
performing brightness conversion on the first image to obtain a second image;
performing false color edge detection on the second image to obtain a pseudo-edge mask;
and performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain a target image.
According to the image processing method and apparatus, the electronic device, the computer-readable storage medium, and the computer program product, a first image is acquired, brightness conversion is performed on the first image to obtain a second image, false color edge detection is performed on the second image to obtain a pseudo-edge mask, and pseudo-color removal processing is performed on the first image according to the pseudo-edge mask to obtain a target image. The pseudo-edge mask corresponding to the false color edge can thus be accurately detected, and pseudo-color removal processing is applied only to the corresponding false color area according to the pseudo-edge mask, thereby avoiding erroneous processing of non-false-color areas and improving the accuracy of false color area correction.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic illustration of imaging an object in one embodiment;
FIG. 2 is a schematic diagram showing chromatic aberration of an imaging lens in one embodiment;
FIG. 3 is a schematic diagram of a lamp imaging purple fringing in one embodiment;
FIG. 4 is a flow chart of an image processing method in one embodiment;
FIG. 5 is a flow chart of step 404 in one embodiment;
FIG. 6 is a schematic illustration of a first image in one embodiment;
FIG. 7 is a schematic illustration of a second image in one embodiment;
FIG. 8 is a flow chart of step 406 in one embodiment;
FIG. 9 is a schematic diagram of a pseudo color rendering position in one embodiment;
FIG. 10 is a flowchart of an image processing method according to another embodiment;
FIG. 11 is a schematic view of a first image in another embodiment;
FIG. 12 is a schematic diagram of a highlight green side mask in one embodiment;
FIG. 13 is a schematic diagram of a target image in one embodiment;
FIG. 14 is a schematic diagram of camera output in one embodiment;
FIG. 15 is a block diagram showing the structure of an image processing apparatus in one embodiment;
FIG. 16 is a diagram of the internal architecture of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Purple edges and green edges are false color edges of an image and constitute local color errors. Because a purple or green edge is visually obvious and directly degrades perceived image quality, removing such edges is an important means of improving image quality.
Regarding the cause of purple fringing, it is generally considered that an imaging system usually focuses using the green channel 104, as shown in fig. 1, while the red channel 102 and the blue channel 106 are not in full focus due to lens or microlens aberrations, thereby causing purple fringing to appear at the edges of the object.
Classical chromatic aberration theory explains the formation of purple fringing to some extent, but purple fringing differs from common chromatic aberration fringing. As shown in fig. 2, a lens exhibits a corresponding chromatic aberration pattern in which the chromatic aberration fringing includes both a purple fringe 202 and a green fringe 204, whereas purple fringing in a photograph is not accompanied by a green fringe.
An image with purple fringing mainly contains a highlight region, a purple fringing region, a transition region, a normal color region, and the like; the near-saturation region of each color channel is the highlight region. As shown in fig. 3, a purple fringing region 304 may appear beside the highlight region 302, the purple fringing region 304 is followed by a transition region 306, and a normal object color region 308 lies beside the transition region 306. The color channel values change dramatically between the transition region 306 and the purple fringing region 304; the transition region 306 has no visually distinct purple fringing, but its color channel values still change, and the gradient changes of the color channels are not exactly the same.
Based on the above analysis, the causes of purple/green fringing are complex and difficult to determine accurately. In the related art, purple/green fringing is corrected mainly by methods based on regional weighting or chromatic aberration correction, but false detection or erroneous processing may still occur, so correction accuracy is low.
To address the problems of traditional correction methods, embodiments of the present application provide an image processing method: brightness conversion is performed on an acquired first image to obtain a second image, false color edge detection is performed on the second image to obtain a pseudo-edge mask, and pseudo-color removal processing is performed on the first image according to the pseudo-edge mask to obtain a target image. Accurate detection of the pseudo-edge mask can thus be achieved, and by applying pseudo-color removal only to the region corresponding to the pseudo-edge mask, a target image with high correction accuracy can be obtained.
The image processing method provided by the embodiments of the application is described taking execution on an electronic device as an example. The electronic device may be a terminal or a server, and the terminal may be, but is not limited to, various personal computers, cameras, scanners, notebook computers, smartphones, tablet computers, Internet of Things devices, and portable wearable devices; the Internet of Things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, a smart bracelet, a headset, or the like. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers. It will be appreciated that the method may also be applied to a server, or to a system comprising a terminal and a server and implemented through their interaction.
In one embodiment, as shown in FIG. 4, an image processing method is provided, comprising the following steps 402 to 408.
Step 402, a first image is acquired.
The first image refers to an image to be processed, i.e., an image containing a pseudo color to be removed; the pseudo color refers to a significant color error occurring near an object edge. The pseudo color may be, for example, at least one of purple and green, and it is understood that the pseudo color may also be a color other than purple and green.
Optionally, if the image in the RAW domain acquired by the image sensor is taken as an initial image, and the format of the initial image is converted from the RAW domain to the YUV domain, the image in the YUV domain may be used as the first image. Here RAW denotes an original image file, "Y" denotes brightness (Luminance or Luma), and "U" and "V" denote chrominance (Chroma), describing the color and saturation of a pixel.
Optionally, the initial image data collected by the sensor needs subsequent processing such as filtering, denoising, false color removal, brightness enhancement, and color correction before an image suitable for human viewing is obtained. The image processing flow in this embodiment includes a pixel depth conversion step for converting the pixel depth of the image from a higher to a lower bit count; the first image may then be the image to be converted in the pixel depth conversion step, that is, the image before pixel depth conversion. Pixel depth refers to the number of bits used to store each pixel: the deeper the pixel depth, the more bits represent a pixel and the more color data it can express. Assuming 8 bits are required to store each pixel, the pixel depth of the image is 8.
Step 404, performing brightness conversion on the first image to obtain a second image.
In this embodiment, the electronic device performs brightness conversion on the first image to obtain the second image. The brightness conversion includes increasing the brightness of the first image, so that the brightness of the second image is greater than the brightness of the first image. It can be appreciated that the first image is often dark; brightening it to obtain the second image is beneficial to detecting the image information in the second image.
Alternatively, the first image may be subjected to brightness conversion by brightness level to obtain the second image: the pixel values corresponding to each brightness level in the first image are converted correspondingly to obtain converted brightness levels, and the second image is obtained from the pixels corresponding to the converted brightness levels. The conversion modes corresponding to different brightness levels in the first image may be the same or different. Alternatively, the brightness of the first image may be uniformly increased by a target multiple to obtain the second image.
Optionally, the brightness reference value of the first image may be determined according to the exposure ratio corresponding to the first image, and the second image may be obtained according to the brightness conversion relationship corresponding to the brightness reference value and the first image. The exposure ratio, a camera imaging parameter, represents the proportional difference between the image brightness and a standard brightness, where the standard brightness refers to brightness at which image information can be recognized by human eyes.
Step 406, performing false color edge detection on the second image to obtain a pseudo-edge mask.
Brightness conversion is performed on the first image to obtain the second image, and false color edge detection is performed on the second image to obtain the pseudo-edge mask. A false color edge refers to the edge of a false color area, which is usually a connected region; the false color area is obtained once the false color edge is detected, the position information corresponding to the detected false color edge can be used as the pseudo-edge mask, and the pixels covered by the pseudo-edge mask identify the corresponding false color area.
Alternatively, the second image may be converted into a corresponding RGB image, and the pseudo-edge mask may be obtained by performing pseudo-color edge detection on the corresponding RGB image. Wherein RGB characterizes the colors of three channels red (R), green (G), blue (B).
Optionally, edge detection may be performed on the second image to obtain an initial edge, luminance screening may be performed on pixels in the initial edge according to a luminance threshold to obtain a target edge, and a pseudo-edge mask may be obtained according to the target edge and the second image.
Step 408, performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain a target image.
The false color region in the first image is removed according to the pseudo-edge mask to obtain the target image. Optionally, filtering may be applied to the region of the first image corresponding to the pseudo-edge mask to obtain a target region with the false color removed, and this target region may be combined with the regions of the first image outside the pseudo-edge mask to obtain the target image. For example, the first image is filtered to obtain an intermediate image, the target region with the false color removed is obtained from the intermediate image and the pseudo-edge mask, and the other regions of the first image outside the pseudo-edge mask are fused with that target region to obtain the target image.
Alternatively, the target image may be input into an image processing flow for subsequent processing, such as pixel depth conversion, brightening, sharpening, and the like.
In the above image processing method, a first image is acquired, brightness conversion is performed on it to obtain a second image, false color edge detection is performed on the second image to obtain a pseudo-edge mask, and pseudo-color removal processing is performed on the first image according to the pseudo-edge mask to obtain a target image. Because the pseudo-edge mask is detected from the brightness-converted second image, an accurate mask can be obtained; performing pseudo-color removal on the first image according to this mask avoids erroneous processing of non-false-color areas, achieves more accurate false color removal, and improves the correction accuracy of false color areas.
In some embodiments, the above method further comprises: converting the pixel depth of the first image to obtain a third image, wherein the pixel depth of the third image is shallower than the pixel depth of the first image. In this case, performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain the target image comprises: performing pseudo-color removal processing on the third image according to the pseudo-edge mask to obtain the target image.
In this embodiment, the pixel depth of the first image is converted to obtain a third image whose pixel depth is shallower than that of the first image. That is, the number of bits per pixel of the first image is reduced to obtain the third image; for example, the pixel depth of the first image is 16 bits and the pixel depth of the third image is 10 bits or 12 bits. It will be appreciated that the pixel depth obtained by the conversion may be set according to actual needs. The third image may then be subjected to pseudo-color removal processing according to the pseudo-edge mask to obtain the target image. It can be understood that the deeper the pixel depth, the more bits per pixel and the more data available to represent subtle color features of the image; conversely, the shallower the pixel depth, the fewer bits per pixel, the less data characterizing the image, and the coarser the representable color features. In principle the pixel depth of the first image could be converted to be either deeper or shallower; in this embodiment it is made shallower to obtain the third image.
Optionally, after the electronic device acquires the first image, it may perform brightness conversion on the first image to obtain the second image and perform false color edge detection on the second image to obtain the pseudo-edge mask while, in parallel, converting the pixel depth of the first image to obtain the third image; pseudo-color removal processing is then performed on the third image according to the pseudo-edge mask to obtain the target image. Alternatively, after the first image is acquired, the pixel depth of the first image may first be converted to obtain the third image, whose pixel depth is shallower than that of the first image, while brightness conversion and false color edge detection are performed to obtain the pseudo-edge mask, after which pseudo-color removal is applied to the third image to obtain the target image. Alternatively again, brightness conversion and false color edge detection may be performed first, followed by pixel depth conversion and pseudo-color removal based on the pseudo-edge mask. That is, the step of obtaining the pseudo-edge mask from the brightness-converted second image and the step of converting the pixel depth of the first image into the third image may be performed simultaneously or sequentially in either order.
Optionally, converting the pixel depth of the first image may apply a nonlinear mapping to its brightness. In other words, brightness conversion and false color edge detection are performed on the first image before the nonlinear mapping to obtain the pseudo-edge mask, and pseudo-color removal processing is then performed on the nonlinearly mapped third image to obtain the target image, yielding a more accurate correction effect.
In this embodiment, the pixel depth of the first image is converted to obtain the third image, and pseudo-color removal processing is performed on the third image according to the pseudo-edge mask to obtain the target image. That is, the pseudo-edge mask is detected on the image before pixel depth conversion; because the higher pixel depth preserves more color information, the mask detected from the pre-conversion image is more accurate. In addition, performing pseudo-color removal on the image after pixel depth conversion reduces the amount of data processed and improves processing efficiency, and performing the removal based on a more accurate pseudo-edge mask improves the accuracy of false color removal and thus of false color correction.
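For illustration, a minimal Python/NumPy sketch of reducing a 16-bit first image to a 10-bit third image is given below. The function name and the linear bit-shift are assumptions made for this sketch only; the TMC module described later in this document performs a nonlinear brightness mapping rather than a plain shift.

```python
import numpy as np

def reduce_pixel_depth(first_image: np.ndarray, src_bits: int = 16,
                       dst_bits: int = 10) -> np.ndarray:
    # Hypothetical linear requantization: drop (src_bits - dst_bits) low bits.
    # The TMC module in this document applies a nonlinear mapping instead.
    shift = src_bits - dst_bits
    return (first_image.astype(np.uint32) >> shift).astype(np.uint16)

# A 16-bit first image becomes a 10-bit third image carrying less data per pixel.
first = np.random.randint(0, 2 ** 16, size=(8, 8), dtype=np.uint16)
third = reduce_pixel_depth(first)
assert int(third.max()) <= 2 ** 10 - 1
```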
In some embodiments, as shown in fig. 5, the step 404 of performing brightness conversion on the first image to obtain the second image includes the following steps 502 to 508.
Step 502, an exposure ratio corresponding to the first image is obtained.
The exposure ratio is a camera imaging parameter used for representing the proportion difference of the image brightness compared with the standard brightness, and the standard brightness refers to the brightness of the image information which can be identified by human eyes. The exposure ratio corresponding to the first image is the proportion difference of the brightness of the first image compared with the standard brightness, and can be obtained from a shooting module of the electronic equipment.
At step 504, a luminance reference value is determined based on the exposure ratio and the pixel depth of the first image.
The electronic device may determine the luminance reference value based on the exposure ratio and the pixel depth of the first image. The luminance reference value may be used to characterize the luminance difference between the first image and the initial image, the initial image being the image acquired by the image sensor. A corresponding target luminance interval may be selected based on the luminance reference value, a corresponding luminance conversion relationship determined based on the target luminance interval, and the second image obtained according to the luminance conversion relationship.
Optionally, a pixel depth difference value is obtained based on the pixel depth of the first image and the pixel depth of the initial image, and the luminance reference value is determined according to the pixel depth difference value and the exposure ratio. The pixel depth difference value may characterize the difference between the pixel depth of the initial image acquired by the image sensor and the pixel depth of the first image. For example, the pixel depth difference value may be 2 raised to the power of the difference between the pixel depth of the initial image and the pixel depth of the first image: if the pixel depth of the initial image is 10 bits and the pixel depth of the first image is 16 bits, the difference is 6 bits, and 2^6 = 64 is used as the pixel depth difference value.
In one example, a ratio between the pixel depth difference value and the exposure ratio may be used as the luminance reference value. I.e. luminance reference value = pixel depth difference/exposure ratio.
Step 506, determining a brightness conversion relationship corresponding to the first image based on the target brightness interval to which the brightness reference value belongs.
In this embodiment, different luminance intervals correspond to different luminance conversion relationships, and then the luminance conversion relationship corresponding to the first image may be determined based on the luminance interval corresponding to the luminance reference value, where the different luminance reference value corresponds to the different luminance conversion relationship. It is understood that the specific number of luminance sections and the luminance section end point value may be set as required.
Step 508, obtaining a second image based on the brightness conversion relationship and the first image.
According to the brightness conversion relationship, brightness conversion can be performed on the first image to obtain the second image. The brightness conversion relationship characterizes the mapping from the first image to the second image, and may include a multiplicative relationship.
In one example, the pixel depth of the initial image captured by the image sensor is 10 bits, the pixel depth of the first image is 16 bits, and the pixel depth of the second image is 12 bits. It will be appreciated that the pixel depth of the initial image is determined by the performance parameters of the image sensor itself, while the pixel depths of the first and second images may be selected according to actual needs. Denoting the exposure ratio corresponding to the first image by ratio, the brightness reference value flag is:

flag = (2^16 / 2^10) / ratio    Formula (1)

The first image is denoted by y1 and the second image by y2; the second image y2 is then obtained by formula (2), a piecewise brightness mapping selected by the interval in which flag falls. In formula (2), 2^16 characterizes the 16-bit pixel depth of the first image and 2^12 the 12-bit pixel depth of the second image; the brightness intervals take 0, 8, and 12 as endpoint values, and the brightness conversion relationship corresponding to the first image is determined by the target brightness interval to which the brightness reference value belongs. For example, if the brightness reference value is 13, the branch of formula (2) for the interval above 12 is used as the brightness conversion relationship for the first image to obtain the second image; if the brightness reference value is 6, y2 = y1 × 8 is used as the brightness conversion relationship, and so on. A schematic diagram of the first image is shown in fig. 6, and a schematic diagram of the second image obtained by brightness conversion is shown in fig. 7.
In this embodiment, the exposure ratio corresponding to the first image is obtained, the brightness reference value is determined based on the exposure ratio and the pixel depth of the first image, the brightness conversion relationship is determined based on the target brightness interval to which the brightness reference value belongs, and the second image is obtained from the conversion relationship and the first image. By comprehensively considering the exposure ratio, the pixel depth of the initial image acquired by the image sensor, and the pixel depths of the first and second images when converting image brightness, a second image with more accurate brightness can be obtained, which is beneficial to improving the accuracy of pseudo-edge mask detection based on the second image.
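As an illustrative sketch of formulas (1) and (2): the publication states the flag definition, the interval endpoints 0, 8, and 12, and the y2 = y1 × 8 branch, so the gains of the other intervals and the 16-bit-to-12-bit range rescale below are assumptions.

```python
import numpy as np

def luminance_flag(ratio: float, src_bits: int = 16, init_bits: int = 10) -> float:
    # Formula (1): flag = (2^16 / 2^10) / ratio
    return (2 ** src_bits / 2 ** init_bits) / ratio

def brightness_convert(y1: np.ndarray, ratio: float) -> np.ndarray:
    # Piecewise mapping per formula (2); interval endpoints 0, 8, 12 are stated.
    flag = luminance_flag(ratio)
    if 0 < flag <= 8:
        gain = 8  # branch stated in the text (e.g. flag = 6 gives y2 = y1 * 8)
    elif flag <= 12:
        gain = 4  # assumed gain for the middle interval
    else:
        gain = 2  # assumed gain for flag > 12 (e.g. flag = 13)
    y12 = y1.astype(np.uint32) >> 4  # assumed 2^16 -> 2^12 range rescale
    return np.clip(y12 * gain, 0, 2 ** 12 - 1).astype(np.uint16)

second = brightness_convert(np.full((4, 4), 1024, dtype=np.uint16), ratio=10.0)
```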
In some embodiments, as shown in fig. 8, the step 406 of performing false color edge detection on the second image to obtain a pseudo-edge mask includes the following steps 802 to 808.
Step 802, converting the format of the second image into a gray scale format, so as to obtain a gray scale image.
In this embodiment, the format of the second image is converted into a grayscale format, that is, the second image is converted into a grayscale image. Typically the second image is in YUV or RGB format and needs to be converted into the grayscale image format, a single-channel format in which each pixel contains only a brightness value: the larger the value, the brighter the pixel, and the smaller the value, the darker the pixel. The conversion itself may be realized in an existing manner; for example, if the second image is an RGB image, the R, G, and B components of each pixel may be weighted and fused to obtain the gray value of the corresponding pixel in the grayscale image.
Step 804, gradient detection is performed on the gray scale image to obtain gray scale edge data.
Image gradient can be understood as the rate of change of an image: at edge portions the gray value changes strongly and the gradient value is large, while in smoother portions the gray value changes little and the corresponding gradient value is small. Therefore, in this embodiment, gradient detection is performed on the grayscale image, that is, the edge portions of the grayscale image are detected to obtain grayscale edge data, i.e., the image data corresponding to the grayscale image edges.
Alternatively, gradient detection may be performed on the gray-scale image according to an edge detection operator to obtain gray-scale edge data. The edge detection operator may be, for example, a Roberts operator (cross differentiation algorithm), a Sobel operator, a Prewitt operator, a Laplacian operator, a Canny operator, or the like.
Step 806, filtering the gray edge data according to the target gray threshold to obtain target edge data.
In this embodiment, the grayscale edge data is filtered to remove pixels whose brightness is less than a target brightness as well as overexposed pixels. For example, the gray value of a pixel whose gray value is smaller than the target gray threshold is set to zero, and the gray value of a pixel exceeding the overexposure threshold is assigned a preset gray value, for example 255.
Step 808, determining a pseudo-edge mask based on the target edge data and the second image.
In this embodiment, from the target edge data and the second image, a pseudo-edge mask may be determined, where the pseudo-edge mask is used to point to a pseudo-color region in the image.
Alternatively, the gray level of each pixel may be determined according to whether the pixel in the second image satisfies the target color value condition, together with the target edge data, and the pseudo-edge mask may then be determined according to the gray level of each pixel.
Optionally, the pseudo-edge mask determined according to the target edge data and the second image may be used as an initial pseudo-edge mask, the initial pseudo-edge mask and the gray-scale edge data are fused to obtain a target pseudo-edge mask, and then the first image is subjected to pseudo-color removal processing according to the target pseudo-edge mask to obtain the target image.
In this embodiment, the grayscale image is obtained by converting the format of the second image into the grayscale format, gradient detection is performed on the grayscale image to obtain grayscale edge data, the grayscale edge data is filtered according to the target gray threshold to obtain target edge data, and the pseudo-edge mask is determined from the target edge data and the second image. Through grayscale conversion, gradient detection, brightness filtering, and related processing of the second image, accurate detection of the pseudo-edge mask can be achieved, improving detection accuracy.
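A brief OpenCV sketch of steps 802 to 806 may make the flow concrete; the Sobel operator and the specific threshold value are illustrative choices rather than values fixed by this publication.

```python
import cv2
import numpy as np

def target_edge_data(second_rgb: np.ndarray, target_gray_threshold: int = 16) -> np.ndarray:
    # Step 802: grayscale conversion (single-channel brightness image).
    gray = cv2.cvtColor(second_rgb, cv2.COLOR_RGB2GRAY)
    # Step 804: gradient detection in x and y to find edge portions.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    # Step 806: brightness filtering -- zero out dim pixels, clamp overexposed ones.
    mag[mag < target_gray_threshold] = 0
    mag[mag > 255] = 255
    return mag.astype(np.uint8)

edges = target_edge_data(np.zeros((64, 64, 3), dtype=np.uint8))
```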
In some embodiments, the step 808 of determining the pseudo-edge mask from the target edge data and the second image includes: for a target pixel in the second image, changing the gray level of the target pixel to the target gray level if the target pixel meets the target color value condition; changing the gray level of the target pixel to the gray level of the corresponding pixel in the target edge data if it does not; and determining the pseudo-edge mask according to the gray levels of the target pixels.
In this embodiment, the target color value condition may include a plurality of sub-conditions; satisfying the target color value condition means satisfying at least one of them, and failing it means satisfying none. The target color value condition may include: the R component of a pixel of the second image is greater than its G component; the R component is greater than a target component value; and the pixel value of the corresponding pixel in the target edge data is greater than a target luminance value. The target component value and the target luminance value may be selected according to the actual application scenario, for example a target component value of 0.7 × 255 = 178.5 and a target luminance value of 245. For a target pixel in the second image, if the target pixel meets the target color value condition, its gray level is changed to the target gray level, which may be set according to the application scenario, for example 0, 1, or 10. If the target pixel does not meet the condition, its gray level is changed to the gray level of the corresponding pixel in the target edge data, and the pseudo-edge mask is then determined from the gray levels of the target pixels in the second image. The target pixel may be any pixel in the second image; optionally, the pseudo-edge mask may be determined based on the gray levels of all target pixels in the second image.
In one example, the target color value condition includes three sub-conditions, as shown in formula (3) below:

R(x, y) > G(x, y), or R(x, y) > 0.7 × 255, or Gray(x, y) > 245    Formula (3)

where R(x, y) and G(x, y) characterize the R and G components of the second image, and Gray(x, y) characterizes the gray level of the corresponding pixel in the target edge data.
If a target pixel in the second image meets at least one sub-condition of the target color value condition in formula (3), its gray level is changed to 0; if it meets none, its gray level is changed to that of the corresponding pixel in the target edge data. The corresponding pixel in the target edge data is the pixel at the same position as the target pixel; since the target edge data is obtained by processing the second image, the two have the same number and positions of pixels and differ only in pixel values.
Each pixel in the second image is processed in turn in the same way as the target pixel, so as to obtain the corresponding pseudo-edge mask.
In this embodiment, for a target pixel in the second image, if the target pixel meets the target color value condition its gray level is changed to the target gray level, and otherwise to the gray level of the corresponding pixel in the target edge data; the pseudo-edge mask is then determined from the gray levels of the target pixels. The pseudo-edge mask can thus be determined accurately and quickly.
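A minimal sketch of this judgment, assuming the example values of formula (3) (R > G, R > 0.7 × 255, edge gray > 245) and a target gray of 0; the function name is hypothetical.

```python
import numpy as np

def pseudo_edge_mask(second_rgb: np.ndarray, target_edge: np.ndarray,
                     target_component: float = 0.7 * 255,
                     target_luminance: int = 245,
                     target_gray: int = 0) -> np.ndarray:
    # A pixel meeting ANY sub-condition of formula (3) takes the target gray;
    # every other pixel takes the gray of the corresponding target-edge pixel.
    r = second_rgb[..., 0].astype(np.float32)
    g = second_rgb[..., 1].astype(np.float32)
    meets = (r > g) | (r > target_component) | (target_edge.astype(np.float32) > target_luminance)
    return np.where(meets, target_gray, target_edge).astype(np.uint8)
```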
In some embodiments, the above method further comprises:
fusing the pseudo-edge mask and the grayscale edge data to obtain a target pseudo-edge mask. In this case, performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain the target image comprises: performing pseudo-color removal processing on the first image according to the target pseudo-edge mask to obtain the target image.
In this embodiment, the pseudo-edge mask and the grayscale edge data are fused to obtain the target pseudo-edge mask. Alternatively, the pseudo-edge mask and the grayscale edge data may be ANDed to obtain the target pseudo-edge mask, or they may be weighted and summed. Pseudo-color removal processing is then performed on the first image according to the obtained target pseudo-edge mask to obtain the target image.
Optionally, performing pixel depth conversion on the first image to obtain a third image, wherein the pixel depth of the third image is shallower than that of the first image; and fusing the pseudo-edge mask and the gray edge data to obtain a target pseudo-edge mask, and performing pseudo-color removal processing on the third image according to the target pseudo-edge mask to obtain a target image.
In this embodiment, the pseudo-edge mask and the grayscale edge data are fused to obtain the target pseudo-edge mask, and pseudo-color removal processing is performed on the first image according to the target pseudo-edge mask to obtain the target image. Fusing reinforces the features common to the pseudo-edge mask and the grayscale edge data, highlighting the features of the corresponding false color edges in the first image, so the resulting target pseudo-edge mask is more accurate and the false color correction accuracy of the target image is improved.
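The two fusion options mentioned above can be sketched as follows; the 0.5 weight of the weighted-sum variant is an assumption.

```python
import cv2
import numpy as np

def fuse_masks(pseudo_edge: np.ndarray, gray_edge: np.ndarray,
               use_and: bool = True, w: float = 0.5) -> np.ndarray:
    if use_and:
        # Bitwise AND keeps only the features common to both inputs (cf. formula (12)).
        return cv2.bitwise_and(pseudo_edge, gray_edge)
    # Weighted sum as the alternative fusion mentioned in the text.
    fused = w * pseudo_edge.astype(np.float32) + (1.0 - w) * gray_edge.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```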
In some embodiments, the step 408 of performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain the target image includes: filtering the first image to obtain an intermediate image; and fusing the intermediate image and the first image based on the pseudo-edge mask to obtain the target image.
In this embodiment, the first image is filtered to obtain an intermediate image, where the filtering may be guided filtering, bilateral filtering, Gaussian filtering, or the like. The guided filtering is, for example, YUV guided filtering: the Y channel is used as the guide map to filter the U and V channels, so that the color of the region corresponding to the pseudo-edge mask is removed.
Optionally, the pixel depth of the first image may be converted to obtain a third image, filtering processing is performed on the third image to obtain an intermediate image, and the intermediate image and the third image are fused based on the pseudo-edge mask to obtain the target image.
Alternatively, based on the pseudo-edge mask, the intermediate image and the first image may be weighted and fused to obtain the target image, for example with the pseudo-edge mask used as the weight of the intermediate image; or the first pixel region of the intermediate image corresponding to the pseudo-edge mask may be merged with the second pixel region of the first image outside the pseudo-edge mask to obtain the target image.
Alternatively, based on the pseudo-edge mask, the intermediate image and the third image may be weighted and fused to obtain the target image, for example with the pseudo-edge mask used as the weight of the intermediate image; or the first pixel region of the intermediate image corresponding to the pseudo-edge mask may be merged with the second pixel region of the third image outside the pseudo-edge mask to obtain the target image.
In an alternative embodiment, the third image is downsampled, for example by a factor of 4, to obtain downsampled data; the downsampled data is filtered to obtain an intermediate image; the intermediate image is upsampled by the sampling multiple corresponding to the downsampling to obtain upsampled data; and the upsampled data corresponding to the intermediate image is fused with the first image based on the pseudo-edge mask to obtain the target image.
In this embodiment, the intermediate image is obtained by filtering the first image, and the intermediate image and the first image are fused based on the pseudo-edge mask to obtain the target image. Pseudo-color removal is thus applied to the region corresponding to the pseudo-edge mask while erroneous processing of other, normal-color regions is avoided, improving the accuracy of false color correction.
In some embodiments, fusing the intermediate image and the first image based on the pseudo-edge mask to obtain the target image includes: and taking the pseudo-edge mask as the weight of the intermediate image, and carrying out weighted fusion on the intermediate image and the first image to obtain the target image.
In this embodiment, the pseudo-edge mask is used as the weight of the intermediate image, and the intermediate image and the first image are subjected to weighted fusion to obtain the target image. Alternatively, the pseudo-edge mask may be used as a weight of the intermediate image, and the mask corresponding to the region other than the pseudo-edge mask in the first image may be used as a weight of the first image, so that the intermediate image and the first image are weighted and fused to obtain the target image.
Alternatively, the pseudo-edge mask may be used as the weight of the intermediate image, and the intermediate image and the first image may be weighted and averaged to obtain the target image. For example, the pseudo-edge mask is used as the weight of the intermediate image, the mask corresponding to the region other than the pseudo-edge mask in the first image is used as the weight of the first image, and the intermediate image and the first image are then weighted and averaged to obtain the target image.
Alternatively, the pseudo-edge mask may be used as a weight of the intermediate image, and the intermediate image and the third image may be weighted and fused to obtain the target image. For example, the pseudo-edge mask is used as the weight of the intermediate image, and the intermediate image and the third image are weighted and averaged to obtain the target image.
In this embodiment, the pseudo-edge mask is used as the weight of the intermediate image, and the intermediate image and the first image are weighted and fused to obtain the target image. The de-fringed region corresponding to the pseudo-edge mask and the normal-color regions are thus combined: in the processed target image, the region corresponding to the pseudo-edge mask has had its false color accurately removed, while the other regions are unprocessed normal-color regions whose colors are well preserved. Performing pseudo-color removal only on the region corresponding to the pseudo-edge mask realizes false color correction while avoiding erroneous processing of normal-color regions, thereby improving the accuracy of false color correction to a greater extent.
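The mask-weighted fusion described here can be sketched as below, assuming an 8-bit pseudo-edge mask normalized to [0, 1] and used as the weight of the intermediate image.

```python
import numpy as np

def fuse_with_mask(intermediate: np.ndarray, first: np.ndarray,
                   pseudo_edge_mask: np.ndarray) -> np.ndarray:
    # The mask weights the filtered (intermediate) image; its complement weights
    # the original, so only mask regions receive the de-fringed pixels.
    w = pseudo_edge_mask.astype(np.float32) / 255.0
    if intermediate.ndim == 3:
        w = w[..., None]
    out = w * intermediate.astype(np.float32) + (1.0 - w) * first.astype(np.float32)
    return out.astype(first.dtype)
```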
In one embodiment, although the causes of false colors such as purple fringing and green fringing differ, these artifacts often appear at highlight boundaries. As shown in fig. 9, no false color appears exactly at the highlight boundary 902 or the highlight boundary 906; the areas where false color actually appears correspond to positions 904 and 908. Thus, the location where false color appears is not simply the highlight boundary itself; other locations are possible. The present application can accurately detect the pseudo-edge mask corresponding to false color areas such as purple and green fringes while performing no pseudo-color removal on areas without fringing, avoiding color abnormalities caused by misprocessing normal-color regions.

In an alternative embodiment, as shown in fig. 10, the image processing flow includes a TMC (tone mapping) module for converting an image from a higher pixel depth to a lower pixel depth, implementing a nonlinear mapping of image brightness. The input image of the TMC module is taken as the first image; brightness conversion is performed on the first image in YUV format to obtain a non-overexposed brightness map, which is taken as the second image; false color edges corresponding to highlight green fringing are detected in the second image in YUV format to obtain a corresponding highlight green-edge mask; highlight green-edge removal is performed on the output image of the TMC module based on this mask to obtain the target image; and subsequent steps of the image processing flow, such as pixel depth conversion, brightening, and sharpening, are performed based on the target image.

In one example, the input image of the TMC module, i.e., the first image, is shown schematically in fig. 11; the highlight green-edge mask obtained through highlight green-edge detection is shown in fig. 12; the target image obtained by removing the highlight green edge from the TMC output based on the mask is shown in fig. 13; and the image finally output by the camera after the whole processing flow is shown in fig. 14. The image processing method in this embodiment may specifically include the following steps:
1. Taking the input image of the TMC module, i.e., a first image with a pixel depth of 16 bits, as an example, the process of performing brightness conversion on the first image to obtain the second image is described. The pixel depth of the first image is 16 bits, so the maximum value is 2^16; areas exceeding 65535 are overexposed areas without color detail information. The pixel depth of the second image is 10 bits, and the pixel depth of the initial image acquired by the image sensor is 10 bits. When the image sensor finishes AE (Auto Exposure), an exposure ratio is generated, which represents the proportional difference between the image brightness and the standard brightness. The brightness reference value flag is determined according to the exposure ratio and can be obtained by the following formula (4):
flag = (2^16 / 2^10) / ratio    Formula (4)
The first image is denoted by y1 and the second image by y2; the second image y2 can then be obtained by formula (5). In formula (5), 2^16 characterizes the 16-bit pixel depth of the first image and 2^10 the 10-bit pixel depth of the second image.
2. Green-edge mask detection
(2.1) converting the format of the second image from YUV format to RGB format.
(2.2) downsampling the second image in RGB format by a factor of 4 to obtain RGB_ds4, where RGB_width and RGB_height respectively represent the width and height of the RGB image before downsampling, and ds_width and ds_height respectively represent the width and height of RGB_ds4 after downsampling, i.e., ds_width = RGB_width / 4 and ds_height = RGB_height / 4.
(2.3) converting RGB_ds4 into the corresponding grayscale map Gray_ds4:

Gray_ds4 = 0.2989 × R_ds4 + 0.5870 × G_ds4 + 0.1140 × B_ds4    Formula (7)
(2.4) performing gradient detection on the grayscale image Gray_ds4 in the x and y directions to obtain grayscale edge data Gray_ds4_edge.
(2.5) applying brightness thresholding to Gray_ds4_edge to obtain the target brightness interval: pixels whose brightness is less than the target brightness and pixels exceeding the overexposure threshold are removed (overexposure filtering) to obtain Gray_ds4_edge_luma.
(2.6) performing a 5×5 dilation on Gray_ds4_edge_luma, iterated twice, to obtain Gray_ds4_edge_dilate:

Gray_ds4_edge_dilate = dilate_5×5(Gray_ds4_edge_luma)    Formula (10)
(2.7) judging the green area according to the target color value condition to obtain the green-edge mask Gray_edge_cond_ds4.
(2.8) ANDing the green-edge mask Gray_edge_cond_ds4 with the grayscale edge data Gray_ds4_edge to obtain mask_ds4:

mask_ds4 = bitwise_and(Gray_edge_cond_ds4, Gray_ds4_edge)    Formula (12)
(2.9) upsampling mask_ds4 by a factor of 4 to obtain the green-edge mask.
(2.10) blurring the green-edge mask to obtain the target green-edge mask. The blurring prevents the green-edge mask from being too sharp and showing jagged edges, so that the mask edges are softer.
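Steps (2.6) to (2.10) might look as follows in OpenCV. The 5×5 kernel and the two iterations are stated in the text; the assumption that the dilated map feeds the green-area judgment of step (2.7), the interpolation mode, and the Gaussian blur size are illustrative.

```python
import cv2
import numpy as np

def dilate_edges(gray_ds4_edge_luma: np.ndarray) -> np.ndarray:
    # (2.6) / formula (10): 5x5 dilation iterated twice; the result is assumed
    # to feed the green-area judgment of step (2.7).
    kernel = np.ones((5, 5), np.uint8)
    return cv2.dilate(gray_ds4_edge_luma, kernel, iterations=2)

def finalize_green_mask(gray_edge_cond_ds4: np.ndarray,
                        gray_ds4_edge: np.ndarray) -> np.ndarray:
    # (2.8) / formula (12): AND the green-edge mask with the grayscale edge data.
    mask_ds4 = cv2.bitwise_and(gray_edge_cond_ds4, gray_ds4_edge)
    # (2.9): upsample by 4x back to full resolution.
    h, w = mask_ds4.shape[:2]
    mask = cv2.resize(mask_ds4, (w * 4, h * 4), interpolation=cv2.INTER_LINEAR)
    # (2.10): blur to soften jagged edges of the mask.
    return cv2.GaussianBlur(mask, (5, 5), 0)
```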
The detection steps for the purple fringing mask are similar to those for the green fringing mask; the target color value condition used in the green-area judgment of step (2.7) is replaced with corresponding purple fringing judgment conditions.
3. Green and purple edge removal processing
(3.1) obtaining the output image of the TMC module as the third image, the third image being an image in YUV format.
(3.2) downsampling the third image by a factor of 4 to obtain YUV_ds4, where YUV_width and YUV_height respectively represent the width and height of the YUV image before downsampling, and YUV_ds_width and YUV_ds_height respectively represent the width and height of the downsampled YUV_ds4.
(3.3) performing YUV guided filtering on YUV_ds4 to obtain a filtered image: using the Y channel as the guide map, guided filtering is performed on the U and V channels, eliminating the false color in the chrominance channels.
(3.4) up-sampling the filtered image by 4 times to obtain an intermediate image.
4. Based on the pseudo-edge mask, the intermediate image and the third image are fused to obtain the target image YUV_out:

YUV_out = mask × YUV_filter + (1 − mask) × YUV_in

where YUV_in represents the third image and YUV_filter represents the intermediate image.
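A sketch of steps (3.1) to 4 under stated assumptions: cv2.ximgproc.guidedFilter requires the opencv-contrib-python package, and the radius, eps, and interpolation choices are illustrative.

```python
import cv2
import numpy as np

def defringe_tmc_output(yuv_in: np.ndarray, mask: np.ndarray) -> np.ndarray:
    h, w = yuv_in.shape[:2]
    # (3.2): downsample the third image by a factor of 4.
    yuv_ds4 = cv2.resize(yuv_in, (w // 4, h // 4), interpolation=cv2.INTER_AREA)
    # (3.3): Y-guided filtering of the U and V channels.
    y, u, v = cv2.split(yuv_ds4)
    u_f = cv2.ximgproc.guidedFilter(y, u, 8, 1e-2)
    v_f = cv2.ximgproc.guidedFilter(y, v, 8, 1e-2)
    # (3.4): upsample the filtered image back to full resolution.
    yuv_filter = cv2.resize(cv2.merge([y, u_f, v_f]), (w, h),
                            interpolation=cv2.INTER_LINEAR)
    # Step 4: mask-weighted fusion, YUV_out = mask * YUV_filter + (1 - mask) * YUV_in.
    m = (mask.astype(np.float32) / 255.0)[..., None]
    out = m * yuv_filter.astype(np.float32) + (1.0 - m) * yuv_in.astype(np.float32)
    return out.astype(yuv_in.dtype)
```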
In the above embodiment, pseudo-edge mask detection is performed on the input data of the TMC module: the input data is brightness-converted to obtain a non-overexposed brightness map, and the highlight false color area is then detected from this map. The detected pseudo-edge mask therefore covers not the highlight boundary itself but the false color area inside the highlight boundary, achieving accurate detection of the false color area. Moreover, detecting the pseudo-edge mask on the input data of the TMC module avoids interference from nonlinear modules such as the TMC module and the gamma module, further improving the accuracy of pseudo-edge mask detection. In addition, the highlight false color mask is detected across multiple dimensions such as overexposure filtering, color judgment, and gradient detection, making pseudo-edge mask detection more accurate. Then, in the YUV domain, with the Y channel as the guide map, the UV channels are guide-filtered to filter out the false color region, and the result is fused with the third image based on the pseudo-edge mask, which avoids misprocessing of normal-color regions and improves the accuracy of false color correction.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiments of the present application further provide an image processing apparatus for implementing the above image processing method. The solution provided by the apparatus is implemented in a manner similar to that described for the method above, so for the specific limitations of the one or more image processing apparatus embodiments provided below, reference may be made to the limitations of the image processing method above; details are not repeated here.
In one embodiment, as shown in fig. 15, there is provided an image processing apparatus including: a data acquisition module 1502, a brightness conversion module 1504, a mask detection module 1506, and a pseudo-edge processing module 1508, wherein:
the data acquisition module 1502 is configured to acquire a first image;
a brightness conversion module 1504, configured to perform brightness conversion on the first image to obtain a second image;
the mask detection module 1506 is configured to perform pseudo-color edge detection on the second image to obtain a pseudo-edge mask;
and the pseudo-edge processing module 1508 is configured to perform pseudo-color removal processing on the first image according to the pseudo-edge mask, so as to obtain a target image.
In one embodiment, the image processing apparatus further includes a pixel depth module configured to: converting the pixel depth of the first image to obtain a third image; the pixel depth of the third image is shallower than the pixel depth of the first image;
the pseudo-edge processing module 1508 is further configured to: and performing pseudo-color removal processing on the third image according to the pseudo-edge mask to obtain the target image.
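As a minimal sketch of the pixel depth conversion just described, assuming a 12-bit first image reduced to a 10-bit third image by dropping low-order bits; the concrete depths and rounding behavior are implementation details not fixed by this embodiment.

```python
import numpy as np

def to_shallower_depth(first_image: np.ndarray, src_bits: int = 12, dst_bits: int = 10) -> np.ndarray:
    """Produce a third image with a shallower pixel depth by discarding low-order bits."""
    return first_image.astype(np.uint16) >> (src_bits - dst_bits)
```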
In one embodiment, the brightness conversion module 1504 is further configured to:
acquiring an exposure ratio corresponding to the first image; determining a luminance reference value based on the exposure ratio and a pixel depth of the first image; determining a brightness conversion relation corresponding to the first image based on a target brightness interval to which the brightness reference value belongs; and obtaining the second image based on the brightness conversion relation and the first image.
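The concrete reference value and conversion relation are defined in the embodiments earlier in the specification. Purely as a hedged illustration of the flow, the sketch below assumes the reference value is the full-scale value divided by the exposure ratio and that each brightness interval maps to a fixed linear gain; both are assumptions, not the patented formulas.

```python
import numpy as np

def luminance_convert(first_image: np.ndarray, exposure_ratio: float, bit_depth: int = 10) -> np.ndarray:
    full_scale = float((1 << bit_depth) - 1)
    luminance_ref = full_scale / exposure_ratio          # assumed luminance reference value
    # assumed brightness intervals: apply stronger compression when the
    # reference falls in a higher interval, to keep highlights from clipping
    gain = 0.25 if luminance_ref >= full_scale / 2.0 else 0.5
    return np.clip(first_image.astype(np.float32) * gain, 0.0, full_scale)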
In one embodiment, the mask detection module 1506 is further configured to:
converting the format of the second image into a gray scale format to obtain a gray scale image; performing gradient detection on the gray level image to obtain gray level edge data; filtering the gray edge data according to a target gray threshold value to obtain target edge data; and determining the pseudo-edge mask according to the target edge data and the second image.
In one embodiment, the mask detection module 1506 is further configured to:
for a target pixel in the second image, changing the gray level of the target pixel into a target gray level if the target pixel meets a target color value condition; changing the gray level of the target pixel into the gray level of the corresponding pixel in the target edge data if the target pixel does not meet the target color value condition; and determining the pseudo-edge mask according to the gray levels of the target pixels.
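A minimal sketch combining the two mask-detection embodiments above, assuming the second image is a float32 YUV image normalized to [0, 1]; the Sobel operator, the threshold values, and the chroma test standing in for the target color value condition are illustrative assumptions only.

```python
import cv2
import numpy as np

def detect_pseudo_edge_mask(second_image: np.ndarray,
                            gray_threshold: float = 0.1,
                            target_gray: float = 1.0) -> np.ndarray:
    """Gradient detection plus a per-pixel color test, mirroring the embodiments above."""
    y, u, v = cv2.split(second_image.astype(np.float32))
    # gradient detection on the gray (luma) image
    gx = cv2.Sobel(y, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(y, cv2.CV_32F, 0, 1, ksize=3)
    gray_edge = cv2.magnitude(gx, gy)
    # filter the gray edge data with the target gray threshold
    target_edge = np.where(gray_edge >= gray_threshold, gray_edge, 0.0)
    # illustrative stand-in for the target color value condition (a purple-
    # leaning chroma test); the real determination conditions are those
    # defined in the embodiments of the specification
    color_hit = (u > 0.55) & (v > 0.55)
    mask = np.where(color_hit, target_gray, target_edge)
    return np.clip(mask, 0.0, 1.0)
```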
In one embodiment, the image processing apparatus further comprises a mask optimization module for:
fusing the pseudo-edge mask and the gray-scale edge data to obtain a target pseudo-edge mask;
the pseudo-edge processing module 1508 is further configured to: and performing de-pseudo-color processing on the first image according to the target pseudo-edge mask to obtain the target image.
In one embodiment, the pseudo-edge processing module 1508 is further configured to:
filtering the first image to obtain an intermediate image; and fusing the intermediate image and the first image based on the pseudo-edge mask to obtain the target image.
In one embodiment, the pseudo-edge processing module 1508 is further configured to:
and taking the pseudo-edge mask as the weight of the intermediate image, and carrying out weighted fusion on the intermediate image and the first image to obtain the target image.
The respective modules in the above image processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in the form of hardware, or may be stored in a memory in the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 16. The computer device includes a processor, a memory, an Input/Output interface (I/O) and a communication interface. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing image data correspondingly generated by each step in the image processing flow. The input/output interface of the computer device is used to exchange information between the processor and the external device. The communication interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement an image processing method.
It will be appreciated by those skilled in the art that the structure shown in fig. 16 is merely a block diagram of a portion of the structure associated with the present application and is not limiting of the computer device to which the present application is applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Embodiments of the present application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the image processing method.
Embodiments of the present application also provide a computer program product containing instructions that, when run on a computer, cause the computer to perform an image processing method.
It should be noted that, the user information (including, but not limited to, user equipment information, user personal information, etc.) and the data (including, but not limited to, data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party, and the collection, use and processing of the related data are required to comply with the related laws and regulations and standards of the related countries and regions.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take a variety of forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above embodiments represent only a few implementations of the present application, and although they are described in relative detail, they are not to be construed as limiting the scope of the present application. It should be noted that those of ordinary skill in the art could make various modifications and improvements without departing from the concept of the present application, and these would all fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.

Claims (11)

1. An image processing method, comprising:
acquiring a first image;
performing brightness conversion on the first image to obtain a second image;
performing false color edge detection on the second image to obtain a false edge mask;
and performing pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain a target image.
2. The method according to claim 1, wherein the method further comprises:
converting the pixel depth of the first image to obtain a third image; the pixel depth of the third image is shallower than the pixel depth of the first image;
performing a process of removing the false color on the first image according to the false edge mask to obtain a target image, including:
and performing pseudo-color removal processing on the third image according to the pseudo-edge mask to obtain the target image.
3. The method of claim 1, wherein said luminance converting the first image to obtain a second image comprises:
acquiring an exposure ratio corresponding to the first image;
determining a luminance reference value based on the exposure ratio and a pixel depth of the first image;
determining a brightness conversion relation corresponding to the first image based on a target brightness interval to which the brightness reference value belongs;
and obtaining the second image based on the brightness conversion relation and the first image.
4. The method of claim 1, wherein performing false color edge detection on the second image to obtain a false edge mask comprises:
converting the format of the second image into a gray scale format to obtain a gray scale image;
performing gradient detection on the gray level image to obtain gray level edge data;
filtering the gray edge data according to a target gray threshold value to obtain target edge data;
and determining the pseudo-edge mask according to the target edge data and the second image.
5. The method of claim 4, wherein said determining said pseudo-edge mask from said target edge data and said second image comprises:
for a target pixel in the second image, changing the gray level of the target pixel into a target gray level if the target pixel meets a target color value condition;
if the target pixel does not meet the target color value condition, changing the gray level of the target pixel into the gray level of the corresponding pixel in the target edge data;
and determining the pseudo-edge mask according to the gray scale of the target pixel.
6. The method according to claim 4, wherein the method further comprises:
fusing the pseudo-edge mask and the gray-scale edge data to obtain a target pseudo-edge mask;
performing a process of removing the false color on the first image according to the false edge mask to obtain a target image, including:
and performing de-pseudo-color processing on the first image according to the target pseudo-edge mask to obtain the target image.
7. The method according to claim 1, wherein said performing a de-pseudo-color process on the first image according to the pseudo-edge mask to obtain a target image comprises:
filtering the first image to obtain an intermediate image;
and fusing the intermediate image and the first image based on the pseudo-edge mask to obtain the target image.
8. The method of claim 7, wherein fusing the intermediate image and the first image based on the pseudo-edge mask to obtain the target image comprises:
and taking the pseudo-edge mask as the weight of the intermediate image, and carrying out weighted fusion on the intermediate image and the first image to obtain the target image.
9. An image processing apparatus, comprising:
the data acquisition module is used for acquiring a first image;
the brightness conversion module is used for carrying out brightness conversion on the first image to obtain a second image;
the mask detection module is used for carrying out false color edge detection on the second image to obtain a false edge mask;
and the pseudo-edge processing module is used for carrying out pseudo-color removal processing on the first image according to the pseudo-edge mask to obtain a target image.
10. An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 8.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
CN202311676106.9A 2023-12-07 2023-12-07 Image processing method, apparatus, electronic device, and computer-readable storage medium Pending CN117764877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311676106.9A CN117764877A (en) 2023-12-07 2023-12-07 Image processing method, apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN117764877A true CN117764877A (en) 2024-03-26

Family

ID=90309688



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination