WO2023134235A1 - Image processing method and electronic device - Google Patents

Image processing method and electronic device

Info

Publication number
WO2023134235A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
brightness
curve
target
display
Prior art date
Application number
PCT/CN2022/124254
Other languages
English (en)
French (fr)
Inventor
徐巍炜
文锦松
吴蕾
余全合
钟顺才
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2023134235A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1415: Digital output to display device; Cooperation and interconnection of the display device with other functional units with means for detecting differences between the image stored in the host and the images displayed on the displays

Definitions

  • the embodiments of the present application relate to the technical field of image processing, and in particular, to an image processing method and an electronic device.
  • When an electronic device displays an image or video, it can tone-map the brightness of image pixels to the display brightness of the display screen according to the maximum display brightness of the display screen.
  • However, the current tone mapping method yields a poor image or video display effect in scenarios where the backlight of the display screen is adjustable.
  • The present application provides an image processing method and device.
  • The display information of the image to be displayed on the display screen can be obtained based on both the maximum display brightness of the display screen and the current target backlight brightness, which can improve the image display effect in scenarios where the backlight of the display screen is adjustable.
  • In a first aspect, an embodiment of the present application provides an image processing method.
  • The method includes: acquiring first display information of a target image to be displayed according to the current target backlight brightness of the display screen; acquiring second display information of the target image according to the maximum display brightness of the display screen and the current target backlight brightness; processing the first display information and the second display information according to preset information of the target image to acquire third display information of the target image, where the preset information includes information related to the brightness of the target image; and displaying the target image on the display screen according to the third display information.
  • In this way, the method can combine the backlight brightness to obtain the first display information of the image, combine the maximum display brightness of the display screen and the backlight brightness to obtain the second display information of the image, and process the two kinds of display information to obtain the final display information of the image. The final display information thus reflects both the backlight brightness and the maximum display brightness, improving the adaptation of the image to the display screen and enhancing the display effect of the image.
  • In addition, the second display information of the image can be obtained based on a brightness greater than the current target backlight brightness and less than or equal to the maximum display brightness, which avoids over-brightening the highlight areas when the target image is displayed using the third display information.
  • the target image may be a single image, or a frame image in a video, which is not limited in the present application.
  • The current target backlight brightness is the desired backlight brightness.
  • For example, if the maximum display brightness of the display screen is 500 nit and the current backlight intensity level is set to 10%, the current target backlight brightness is 50 nit.
  • Note that the current backlight brightness of the display screen (i.e., the actual backlight brightness) may differ from this value; the backlight brightness corresponding to the backlight intensity level is called the current target backlight brightness, that is, the backlight brightness expected by the system or the user.
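  • As a minimal illustrative sketch of this relationship (the function name is hypothetical; the 500 nit / 10% figures merely restate the example above):

      def target_backlight_nit(max_display_nit: float, backlight_level: float) -> float:
          """Current target backlight brightness = backlight intensity level x maximum display brightness."""
          return max_display_nit * backlight_level

      print(target_backlight_nit(500.0, 0.10))  # 50.0 nit, matching the example above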
  • In a possible implementation, acquiring the first display information of the target image to be displayed according to the current target backlight brightness of the display screen includes: acquiring a first tone mapping curve of the target image to be displayed; and processing the first tone mapping curve according to the current target backlight brightness to acquire a second tone mapping curve of the target image as the first display information.
  • the first tone mapping curve may be obtained from the source of the target image, or may be generated at the image display end, which is not limited in the present application.
  • Source information may refer to any one or more types of information obtained by analyzing the information carried with the image.
  • the source information can be a kind of metadata information of the image, which is carried and transmitted in the code stream.
  • The first tone mapping curve may be a global initial tone mapping curve carried in the code stream, or a local initial tone mapping curve.
  • When the first tone mapping curve is an initial tone mapping curve of a part of the target image (such as a partial image area), the third display information of that partial image area can be acquired according to the method of the first aspect.
  • When processing the first tone mapping curve according to the current target backlight brightness of the display screen, the processing manner includes but is not limited to scaling processing, translation processing, and the like, which is not limited in the present application.
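  • As an illustrative, non-normative sketch of such processing (linear rescaling is just one possible form of the "scaling processing" mentioned above; the application does not fix a formula, and all names here are hypothetical):

      import numpy as np

      def rescale_curve(curve_nit: np.ndarray, peak_nit: float) -> np.ndarray:
          """Scale a sampled tone mapping curve so that its peak output equals peak_nit."""
          return curve_nit * (peak_nit / curve_nit.max())

      # A sampled curve mapping normalized pixel brightness (the index) to display brightness in nit.
      first_curve = np.linspace(0.0, 500.0, 1024)        # hypothetical initial curve, peak 500 nit
      second_curve = rescale_curve(first_curve, 50.0)    # first display information: peak at the 50 nit backlight
      third_curve = rescale_curve(first_curve, 100.0)    # second display information: peak at a first brightness above the backlight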
  • In a possible implementation, acquiring the second display information of the target image according to the maximum display brightness of the display screen and the current target backlight brightness includes: determining a first brightness based on the maximum display brightness of the display screen and the current target backlight brightness, where the first brightness is higher than the current target backlight brightness; and processing the first tone mapping curve according to the first brightness to acquire a third tone mapping curve of the target image as the second display information.
  • the first brightness may be less than or equal to the maximum display brightness.
  • When processing the first tone mapping curve according to the first brightness, the processing manner includes but is not limited to scaling processing, translation processing, and the like, which is not limited in the present application.
  • In a possible implementation, the preset information includes a second brightness of the target image, and processing the first display information and the second display information according to the preset information of the target image to acquire the third display information of the target image includes: acquiring the second brightness of the target image; extracting, from the second tone mapping curve, a first sub-curve whose pixel brightness is less than or equal to the second brightness; obtaining a first tone mapping value corresponding to the second brightness in the second tone mapping curve; processing the third tone mapping curve based on the first tone mapping value to generate a fourth tone mapping curve, where the tone mapping value corresponding to the second brightness in the fourth tone mapping curve is the first tone mapping value; extracting, from the fourth tone mapping curve, a second sub-curve whose pixel brightness is greater than or equal to the second brightness; and connecting the first sub-curve and the second sub-curve based on the second brightness to obtain a target tone mapping curve of the target image as the third display information.
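  • A minimal sketch of this splicing step, assuming both curves are sampled on the same normalized pixel-brightness grid (the multiplicative adjustment used to make the curves meet at the anchor is one plausible choice; the application leaves the exact processing open):

      import numpy as np

      def splice_at_anchor(second_curve: np.ndarray, third_curve: np.ndarray,
                           grid: np.ndarray, anchor: float) -> np.ndarray:
          """Target curve: follow second_curve up to the anchor (second brightness),
          then follow third_curve adjusted so it passes through the anchor value."""
          k = int(np.searchsorted(grid, anchor))                        # grid sample nearest the anchor
          anchor_value = second_curve[k]                                # first tone mapping value
          fourth_curve = third_curve * (anchor_value / third_curve[k])  # force agreement at the anchor
          return np.where(grid <= anchor, second_curve, fourth_curve)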
  • The second brightness can be called an anchor point.
  • The anchor point (the second brightness here) can be a global anchor point, that is, an anchor point of the whole target image, or a local anchor point, that is, an anchor point of a local image area in the target image.
  • When generating a target tone mapping curve for an image frame or an image region, this can be realized by using a pair consisting of an initial tone mapping curve and an anchor point.
  • generating a target tone mapping curve for a frame image may be realized by using an initial tone mapping curve of the frame image and an anchor point of the frame image.
  • generating a target tone mapping curve for a partial image region in a frame image may be achieved by using an initial tone mapping curve of the partial image region in the frame image and an anchor point of the partial image region.
  • The global anchor point can be used as the anchor point of multiple frames of images or multiple image regions.
  • Similarly, the global initial tone mapping curve can be used as the initial tone mapping curve of a frame sequence (that is, multiple frames of images) or of multiple image regions.
  • When obtaining the anchor point of the target image, it can be obtained from the information source, that is, the anchor point of the video or image can be carried in the code stream.
  • Alternatively, the anchor point of the target image may be generated by the image or video display terminal (i.e., the display device to which the above-mentioned display screen belongs), which is not limited in this application.
  • For example, when the second brightness is the global anchor point of the target image frame, the second brightness may be a value between the minimum brightness value and the maximum brightness value of the image pixels of the target image.
  • For example, when the second brightness is a local anchor point of a local image region in the target image, the second brightness may be a value between the minimum brightness value and the maximum brightness value of the image pixels in that local image region.
  • The tone mapping curves described in the various implementations of the first aspect may be used to represent the mapping relationship between the brightness of a pixel in an image (i.e., pixel brightness) and the display brightness on a display screen.
  • In a possible implementation, acquiring the first tone mapping curve of the target image to be displayed includes: acquiring a respective first tone mapping curve for each of a plurality of local regions constituting the target image. Acquiring the second brightness of the target image includes: acquiring a respective second brightness for each of the plurality of local regions. Connecting the first sub-curve and the second sub-curve based on the second brightness to obtain the target tone mapping curve of the target image as the third display information includes: for each of the plurality of local regions of the target image, connecting the first sub-curve and the second sub-curve of that local region based on its second brightness, and obtaining the respective target tone mapping curves of the plurality of local regions as the third display information of the target image.
  • In this way, when each local area in the target image has its own first tone mapping curve (i.e., a local tone mapping curve) and its own anchor point (i.e., a local anchor point), a respective target tone mapping curve can be generated for each local area. Then, when the target image is displayed, tone mapping is performed on each local area according to its respective target tone mapping curve, so as to realize display adaptation of the target image on a display screen under a certain backlight.
  • In a possible implementation, acquiring the second brightness of the target image includes: acquiring the second brightness of the target image according to the minimum value and the average value of the pixel brightness of the target image.
  • the anchor point may be obtained based on the minimum value of pixel brightness and the average value of pixel brightness in the image pixels in the target image.
  • the anchor point may be obtained based on the minimum value of pixel brightness and the average value of pixel brightness among image pixels in the local image area.
  • In a possible implementation, acquiring the second brightness of the target image according to the minimum value and the average value of the pixel brightness of the target image includes: clustering the image pixels of the target image according to pixel brightness and pixel position to obtain a plurality of image regions; classifying the plurality of image regions according to pixel brightness to obtain first-type image regions and second-type image regions, where the brightness of the first-type image regions is higher than that of the second-type image regions; and determining the second brightness of the target image based on the minimum value and the average value of the pixel brightness in the first-type image regions.
  • In this way, pixel points with close pixel brightness and close spatial position can be grouped into one image region according to the pixel brightness and pixel spatial position in the target image, so as to realize the division of image regions; the division is not limited to the above clustering method.
  • The first type of image area may be called a highlight area, and the second type of image area may be called a non-highlight area.
  • Alternatively, the average brightness value of an image area can be compared with a preset threshold to distinguish highlight areas from non-highlight areas, which is not limited in this application.
  • In a possible implementation, acquiring the first display information of the target image to be displayed according to the current target backlight brightness of the display screen includes: processing the target image to be displayed according to the current target backlight brightness of the display screen, and acquiring a first image as the first display information.
  • The image pixels of the target image may be processed according to the current target backlight brightness, and the processing methods include but are not limited to image processing methods such as image scaling processing.
  • the acquiring the second display information of the target image according to the maximum display brightness of the display screen and the current target backlight brightness includes: determining a third brightness based on the maximum display brightness of the display screen and the current target backlight brightness, the third brightness being higher than the current target backlight brightness; processing the target image according to the third brightness, Acquiring a second image as the second display information.
  • the third brightness may be less than or equal to the maximum display brightness.
  • The processing manner includes but is not limited to image processing manners such as scaling processing, which is not limited in the present application.
  • In a possible implementation, the preset information includes a gain coefficient of the target image, and processing the first display information and the second display information according to the preset information of the target image to obtain the third display information of the target image includes: acquiring the gain coefficient of the target image; and processing the first image and the second image based on the gain coefficient to generate a third image as the third display information.
  • the gain coefficient of the target image may be a pixel-level or down-sampled gain coefficient.
  • the gain coefficient can be used to adjust the brightness of the image pixels of the target image.
  • For example, the gain coefficient of a 5*5 pixel unit may be 1.2, which indicates that the image pixels in that 5*5 image area of the target image are brightened by a factor of 1.2.
  • Areas with large brightness differences in the target image have different gain coefficients, and the gain coefficients of image areas with higher brightness are higher than those of image areas with lower brightness.
  • the gain coefficient of the highlighted area in the target image is greater than the gain coefficient of the non-highlight area in the target image.
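  • A minimal sketch of applying such a gain coefficient, assuming a down-sampled gain map with one coefficient per 5x5 pixel unit as in the example above (the multiplicative brightening and the [0, 1] pixel range are assumptions):

      import numpy as np

      def apply_gain_map(image: np.ndarray, gain_map: np.ndarray, block: int = 5) -> np.ndarray:
          """image: (H, W, 3) floats in [0, 1]; gain_map: one coefficient per block x block unit
          (e.g. 1.2 brightens the corresponding 5x5 area by 1.2x)."""
          gains = np.kron(gain_map, np.ones((block, block)))      # upsample the gain map to pixel resolution
          h, w = image.shape[:2]
          return np.clip(image * gains[:h, :w, None], 0.0, 1.0)   # per-pixel multiply, clamped to [0, 1]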
  • In a possible implementation, acquiring the gain coefficient of the target image includes: clustering the image pixels of the target image according to pixel brightness and pixel position to obtain a plurality of image regions; classifying the plurality of image regions according to pixel brightness to obtain first-type image regions and second-type image regions, where the brightness of the first-type image regions is higher than that of the second-type image regions; and configuring different gain coefficients for the first-type image regions and the second-type image regions, where the gain coefficient of the first-type image regions is greater than that of the second-type image regions.
  • In this way, pixel points with close pixel brightness and close spatial position can be grouped into one image region according to the pixel brightness and pixel spatial position in the target image, so as to realize the division of image regions; the division is not limited to the above clustering method.
  • The first type of image area may be called a highlight area, and the second type of image area may be called a non-highlight area.
  • Alternatively, the average brightness value of an image area can be compared with a preset threshold to distinguish highlight areas from non-highlight areas, which is not limited in this application.
  • In this way, the highlight areas in the target image can be brightened and the non-highlight areas darkened, so as to realize display adaptation of the target image to a display screen with a certain backlight brightness.
  • In a possible implementation, processing the first display information and the second display information according to the preset information of the target image to obtain the third display information of the target image includes: obtaining the preset information of the target image from the source information of the target image to be displayed; and processing the first display information and the second display information based on the preset information to acquire the third display information of the target image.
  • In this way, the preset information can be carried in the code stream and transmitted to the terminal side corresponding to the image processing method, so that by using the preset information, the video or image achieves a better display effect on a display screen with backlight.
  • Carrying the preset information in the source information of the target image allows the image display terminal to directly use the preset information to perform image display adaptation under the display screen backlight, without the terminal having to generate the preset information itself, which reduces the display delay of images or videos.
  • In a second aspect, an embodiment of the present application provides an electronic device.
  • The electronic device includes a memory and a processor that are coupled; the memory stores program instructions, and when the program instructions are executed by the processor, the electronic device implements the method in the first aspect and any one of its implementation manners.
  • the second aspect and any implementation manner of the second aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • For the technical effects corresponding to the second aspect and any one of its implementations, refer to the technical effects corresponding to the first aspect and any one of its implementations; details are not repeated here.
  • In a third aspect, an embodiment of the present application provides a chip.
  • The chip includes one or more interface circuits and one or more processors; the interface circuits are configured to receive signals from the memory of the electronic device and send the signals to the processor, the signals including computer instructions; when the processor executes the computer instructions, the electronic device executes the method in the first aspect and any one of its implementation manners.
  • the third aspect and any implementation manner of the third aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • For the technical effects corresponding to the third aspect and any one of its implementations, refer to the technical effects corresponding to the first aspect and any one of its implementations; details are not repeated here.
  • In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium.
  • The computer-readable storage medium stores a computer program; when the computer program runs on a computer or a processor, the computer or the processor executes the method in the first aspect or any possible implementation of the first aspect.
  • the fourth aspect and any implementation manner of the fourth aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • For the technical effects corresponding to the fourth aspect and any one of its implementations, refer to the technical effects corresponding to the first aspect and any one of its implementations; details are not repeated here.
  • In a fifth aspect, an embodiment of the present application provides a computer program product.
  • the computer program product includes a software program, and when the software program is executed by a computer or a processor, the method in the first aspect or any possible implementation manner of the first aspect is executed.
  • the fifth aspect and any implementation manner of the fifth aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • For the technical effects corresponding to the fifth aspect and any one of its implementations, refer to the technical effects corresponding to the first aspect and any one of its implementations; details are not repeated here.
  • FIG. 1 is a schematic structural diagram of an exemplary system;
  • FIG. 2 is a schematic diagram of an exemplary system application architecture;
  • FIG. 3 is a schematic diagram of an exemplary dynamic range mapping relationship;
  • FIG. 4 is a schematic diagram of a scenario of adjusting the backlight brightness of a mobile phone;
  • FIG. 5a is a schematic structural diagram of an exemplary system;
  • FIG. 5b is a schematic diagram of an exemplary image processing process;
  • FIG. 5c is a schematic diagram of an exemplary data generation process;
  • FIG. 5d is a schematic diagram of an exemplary image display adaptation process;
  • FIG. 5e is a schematic diagram of an exemplary curve generation process;
  • FIG. 6a is a schematic structural diagram of an exemplary system;
  • FIG. 6b is a schematic diagram of an exemplary image processing process;
  • FIG. 6c is a schematic diagram of an exemplary image display adaptation process;
  • FIG. 7 is a schematic structural diagram of a device provided in an embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • The terms "first" and "second" in the description and claims of the embodiments of the present application are used to distinguish different objects, rather than to describe a specific order of objects.
  • For example, a first target object and a second target object are used to distinguish different target objects, rather than describing a specific order of the target objects.
  • Words such as “exemplary” or “for example” are used to present examples or illustrations. Any embodiment or design described as “exemplary” or “for example” in the embodiments of the present application shall not be interpreted as more preferred or advantageous than other embodiments or designs; rather, such words are intended to present related concepts in a concrete manner.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • Fig. 1 is a schematic diagram showing an exemplary system framework structure.
  • the system in Fig. 1 includes a sending end and a receiving end.
  • FIG. 1 is only an example; the system of the present application, and the sending end and receiving end in the system, may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration.
  • the various components shown in FIG. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the sending end may be an image or video generating end
  • the receiving end may be an image or video display end.
  • the sending end and the receiving end can be configured in the same electronic device.
  • the electronic device is a mobile phone
  • the sending end may include a camera
  • the receiving end may include a display screen.
  • the sending end can generate a code stream from the video data collected by the camera, and the code stream carries backlight metadata, and the sending end can send the code stream to the display screen of the receiving end for display.
  • the application scenario of the system shown in FIG. 1 may be a process in which a user shoots a video with a camera of a mobile phone and displays the captured video on a display screen of the mobile phone.
  • The sending end can be integrated as an encoder, and the receiving end can be integrated as a decoder.
  • the sending end and the receiving end can be configured in different electronic devices.
  • the electronic device 1 is a mobile phone
  • the electronic device 2 is a TV.
  • For example, the mobile phone can send the code stream of video data captured by the mobile phone to the TV through a Wi-Fi network, Bluetooth, etc., and the video data is displayed on the display screen of the TV.
  • the electronic device 1 is an electronic device used by a video software manufacturer to shoot and produce videos.
  • the electronic device 1 can transmit the produced video code stream (including backlight metadata) to the application server, and the electronic device 2 is a mobile phone.
  • The mobile phone receives a user operation on the video application installed on the mobile phone.
  • In response, the mobile phone can obtain the video code stream from the electronic device 1 via the application server, and display and play the video data on the display screen of the mobile phone.
  • In a possible implementation, an electronic device may be a terminal, or may be called a terminal device; the terminal may be a cellular phone, a tablet computer (pad), a wearable device, a television, a personal computer (PC), an Internet of Things device, or the like, which is not limited in this application.
  • The application scenarios of this application are not limited to the above example scenarios; the method can also be applied to various other scenarios, for example, scenarios where Huawei Cloud stores or transmits images or videos, video surveillance scenarios, live broadcast scenarios, and the like, which are not limited in this application.
  • the sending end may include an acquisition module, a photoelectric conversion module and an encoding module.
  • the acquisition module can collect ambient light and obtain the image data of the light signal
  • the photoelectric conversion module can perform photoelectric conversion on the image data of the optical signal through the photoelectric transfer function to generate the image data of the electrical signal.
  • the dynamic range can be used to represent the ratio of the maximum value to the minimum value of a certain variable.
  • dynamic range can be used to express the ratio between the maximum gray value and the minimum gray value within the displayable range of the image.
  • The dynamic range in nature is quite large. FIG. 3 shows the brightness of various kinds of ambient light: the brightness of a night scene under a starry sky is about 0.001 cd/m2; the brightness of the sun itself is as high as 1,000,000,000 cd/m2; the brightness of moonlight is about 1 cd/m2; the ambient brightness of an indoor lighting scene is about 100 cd/m2; the ambient brightness of an outdoor cloudy scene is about 500 cd/m2; and the ambient brightness of an outdoor sunny scene is about 2000 cd/m2.
  • The dynamic range is in the range of 10⁻³ to 10⁶.
  • For images, each channel of an RGB (red, green, blue) image is represented with 0 to 255 gray levels, where 0 to 255 is the dynamic range of the image.
  • The dynamic range of a single real-world scene, in the range of 10⁻³ to 10⁶, can be called high dynamic range (HDR).
  • In contrast, the dynamic range of an ordinary picture can be called low dynamic range (LDR).
  • The imaging process of an electronic device can therefore be understood as a non-linear mapping from the high dynamic range of the real world (such as 100 cd/m2 to 2000 cd/m2) to the low dynamic range of an image (such as 1 cd/m2 to 500 cd/m2).
  • The mapping range from the brightness of the ambient light to the display brightness of the display device is not limited to the example mapping range of the shaded area in FIG. 3; the display capability can increase, for example, the maximum display brightness can be higher than 500 nit.
  • the display device in FIG. 3 and described herein may be the electronic device to which the receiving end belongs in the embodiment of the present application.
  • For example, the dynamic range of the optical-signal image data collected by the acquisition module is in the range of 10⁻³ to 10⁶, while the maximum display brightness of the display module at the receiving end is, for example, only 500 nit, which cannot reach the brightness of real-world ambient light.
  • Therefore, the photoelectric conversion module needs to convert the optical signal into an electrical signal through the photoelectric transfer function to obtain an 8-bit or 10-bit image (each pixel in the image is represented with 8 or 10 bits).
  • These correspond to standard dynamic range (SDR) images (images whose dynamic range is generally between 1 nit and 100 nit, where nit is a unit of luminance) and high dynamic range (HDR) images (images whose dynamic range is between 0.001 nit and 10000 nit), respectively.
  • Traditional 8-bit images such as JPEG can be considered SDR images.
  • The image format of the 8-bit or 10-bit electrical-signal image can be any one of a RAW (unprocessed) image, an RGB image, a YUV image ("Y" denotes luminance (Luma), "U" and "V" denote chrominance (Chroma)), or a Lab image, which is not limited in the present application.
  • A Lab image is composed of three elements: L is lightness, and a and b are two color channels; a ranges from dark green (low value) through gray (medium value) to bright pink (high value), and b ranges from bright blue (low value) through gray (medium value) to yellow (high value).
  • The size of each pixel in the image displayed by the display device is not limited to 8 or 10 bits; as the display capability of display devices improves, image pixels can be displayed with sizes higher than 10 bits.
  • The photoelectric transfer functions used in the photoelectric conversion module in FIG. 1 are introduced below.
  • Early display devices were CRT displays, whose photoelectric transfer function is a Gamma function; the photoelectric transfer function shown in formula (1) is defined in the ITU-R Recommendation BT.1886 standard.
  • An image quantized to 8 bits by this function can be called an SDR image, and SDR images performed well on early display devices (with display brightness of about 100 cd/m2 or so).
  • the illuminance range of display devices continues to increase.
  • the illuminance information of common HDR displays can reach 600cd/m2, and the illuminance information of advanced HDR displays can reach 2000cd/m2, far exceeding the illuminance of SDR display devices.
  • The photoelectric transfer function shown in formula (1) of the ITU-R Recommendation BT.1886 standard cannot express the display performance of HDR display devices well, so improved photoelectric transfer functions are needed to adapt to the upgrading of display devices.
  • The idea of the photoelectric transfer function comes from the mapping function in the tone mapping (TM) algorithm; a properly adjusted mapping function serves as the photoelectric transfer function.
  • There are three common photoelectric transfer functions at this stage: PQ (perceptual quantization), HLG (hybrid log-gamma), and SLF. These three photoelectric transfer functions are conversion functions specified by the AVS standard.
  • The PQ photoelectric transfer function represents the conversion relationship of an image pixel from a linear signal value to a nonlinear signal value in the PQ domain, as shown in formula (2); the PQ photoelectric transfer function can be expressed as formula (3):
  • L represents the linear signal value, and its value is normalized to [0, 1].
  • L' represents the nonlinear signal value, and its value range is [0, 1].
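  • Since formulas (2) and (3) are not reproduced in this text, the following reference sketch gives the widely published SMPTE ST 2084 form of the PQ transfer function with its standard constants; L is the linear signal normalized to [0, 1] (1.0 corresponding to 10000 cd/m2):

      def pq_oetf(L: float) -> float:
          """PQ photoelectric transfer: linear L in [0, 1] -> nonlinear L' in [0, 1]."""
          m1 = 2610 / 16384        # 0.1593017578125
          m2 = 2523 / 4096 * 128   # 78.84375
          c1 = 3424 / 4096         # 0.8359375
          c2 = 2413 / 4096 * 32    # 18.8515625
          c3 = 2392 / 4096 * 32    # 18.6875
          Lm1 = L ** m1
          return ((c1 + c2 * Lm1) / (1 + c3 * Lm1)) ** m2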
  • the HLG photoelectric transfer function is improved on the basis of the traditional Gamma curve.
  • the HLG photoelectric transfer function applies the traditional Gamma curve in the low section and supplements the log curve in the high section.
  • the HLG photoelectric transfer function represents the conversion relationship from the linear signal value of the image pixel to the nonlinear signal value in the HLG domain, and the HLG photoelectric transfer function can be expressed as formula (4):
  • L represents the linear signal value, and its value range is [0, 12].
  • L' represents the nonlinear signal value, and its value range is [0, 1].
  • where c = 0.55991073 is a photoelectric transfer coefficient of HLG.
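  • Since formula (4) is not reproduced in this text, the following reference sketch gives the ARIB STD-B67 / BT.2100 form of the HLG transfer function, which matches the description above (L in [0, 12], a square-root segment in the low range, a log segment in the high range, and c = 0.55991073):

      import math

      def hlg_oetf(L: float) -> float:
          """HLG photoelectric transfer: linear L in [0, 12] -> nonlinear L' in [0, 1]."""
          a = 0.17883277
          b = 0.28466892            # 1 - 4a
          c = 0.55991073            # 0.5 - a * ln(4a)
          if L <= 1.0:
              return 0.5 * math.sqrt(L)      # gamma-like square-root segment (low range)
          return a * math.log(L - b) + c     # logarithmic segment (high range)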
  • The SLF photoelectric transfer function is an optimal curve obtained according to the brightness distribution of HDR scenes on the premise of satisfying the optical characteristics of the human eye.
  • the SLF photoelectric transfer curve represents the conversion relationship from the linear signal value of the image pixel to the nonlinear signal value in the SLF domain.
  • the conversion relationship from the linear signal value of the image pixel to the nonlinear signal value in the SLF domain is shown in formula (5):
  • the SLF photoelectric transfer function can be expressed as formula (6):
  • L represents the linear signal value, and its value is normalized to [0,1];
  • L' represents the nonlinear signal value, and its value range is [0,1];
  • the encoding module at the sending end can receive the electrical signal image generated by the photoelectric conversion module, encode the electrical signal image, and generate a code stream.
  • The code stream generated by the sending end of the present application may carry backlight metadata; when generating an image or video, the sending end in the embodiment of the present application may generate backlight metadata for it.
  • the backlight metadata may be a kind of attribute information of video or image.
  • the backlight metadata is used to represent metadata (Metadata) related to image display adaptation when the backlight of the display device changes.
  • the sending end can send the generated code stream to the receiving end.
  • the image processed by the system shown in FIG. 1 of the embodiment of the present application may be one frame of image, or a sequence of frames (that is, multiple frames of images), which is not limited in the present application.
  • the receiving end may include a decoding module, an electro-optic conversion module, a display adaptation module, and a display module.
  • the decoding module can decode the code stream to obtain, for example, an 8-bit electrical signal image and attribute information of the image (which may include backlight metadata).
  • the operation performed by the electro-optic conversion module is the reverse process of the operation performed by the photoelectric conversion module, that is, the electro-optic conversion module can convert the received electrical signal image into an optical signal image corresponding to the illumination intensity of the ambient light of the image.
  • A dynamic range mapping method can be applied to adapt the image (an HDR signal) generated by the sending end to the HDR signal displayed by the display device to which the receiving end belongs.
  • For example, if the collected ambient light is a 4000 nit optical signal, while the maximum display brightness of each pixel (i.e., the HDR display capability) of the display device to which the receiving end belongs is only 500 nit, then to display the 4000 nit optical signal on the display screen of the display device, tone mapping from the brightness of the image to the brightness of the display screen is necessary.
  • Conversely, when the sending end generates an image from a collected 100 nit SDR signal but the maximum display brightness of the display device (such as a TV) to which the receiving end belongs is 2000 nit, mapping the 100 nit signal to the 2000 nit display also requires tone mapping the brightness of the image to the brightness of the display screen.
  • In conventional technology, the code stream generated by the sending end does not include the backlight metadata described in the embodiment of the present application.
  • Current tone mapping curves include the sigmoidal curve proposed by Dolby and the Bezier curve, together with adjustment processes based on them; this tone mapping curve technology corresponds to the ST 2094 standard.
  • In this process, the image brightness can be mapped, according to a global or local tone mapping curve (such as one based on the sigmoidal curve or a Bezier curve), to the range between the maximum and minimum display brightness values of the display.
  • the tone mapping solution in the above-mentioned conventional technology does not take into account the adjustable backlight of the display device.
  • the display screen of the mobile phone may have an arrayed backlight source, which can be used to light up the screen of the display screen.
  • When the backlight intensity level is the highest (for example, 100%), the brightness of the display screen is its maximum display brightness (for example, 500 nit).
  • the backlight intensity level can be described as a percentage of the maximum display brightness, then the product of the backlight intensity level and the maximum display brightness can be the current target backlight brightness of the display screen.
  • This application does not limit the maximum display brightness of the display device, which can continue to increase as display products are upgraded.
  • this application does not limit the layout of the backlight source of the display device. Different electronic devices (such as mobile phones and television sets) have different settings for the backlight source.
  • Fig. 4 is a schematic diagram of a scenario of adjusting the brightness of the backlight of a mobile phone exemplarily shown.
  • The mobile phone may include an ambient light sensor 101, which can sense the ambient brightness value and identify changes in ambient brightness; the mobile phone may set the current target backlight brightness of the screen according to the ambient brightness value collected by the ambient light sensor 101, and adjust the current target backlight brightness according to ambient brightness changes collected by the sensor.
  • the system can adjust the backlight brightness of the mobile phone screen according to the brightness change of the ambient light, for example, the backlight brightness set for the mobile phone during the day is higher than the backlight brightness set for the mobile phone at night.
  • As shown in FIG. 4(1), the mobile phone displays an interface 100, and the user can slide down on the interface 100 from the top of the screen, in the area near the right side, in the direction of the arrow.
  • the present application does not limit the interface 100, which may be an application interface, or an application icon interface, or the like.
  • the mobile phone can switch the display interface of the mobile phone from the interface 100 shown in FIG. 4(1) to the control center interface 102 shown in FIG. 4(2).
  • the control center interface 102 includes one or more controls, and the implementation of the controls may include but not limited to: icons, buttons, windows, layers, and the like.
  • control center interface 102 may include a "Huawei Video" play window, an icon for wireless network, a Bluetooth icon, and a window 103 .
  • Window 103 may include a mobile data icon, a ring/mute icon, an auto-rotate screen icon.
  • the window 103 may also include a brightness progress bar control for controlling the brightness of the mobile phone screen backlight.
  • window 103 may include more or less controls, which is not limited in this application.
  • the brightness progress bar control may include an icon 104 , an icon 105 , a progress bar 106 , and a control 107 .
  • the icon 104 is used to represent the minimum backlight brightness, that is, the backlight intensity level is 0%;
  • the icon 105 is used to indicate the maximum backlight brightness, that is, the backlight intensity level is 100%, and the display brightness of the mobile phone screen at this time is the maximum display brightness of the mobile phone screen (for example, the above-mentioned 500nit).
  • The mobile phone can adjust the current backlight intensity level of the mobile phone screen in response to the user's operation, that is, adjust the backlight intensity level within the range of 0% to 100%, so as to change the current target backlight brightness of the mobile phone.
  • Moving the control 107 along the progress bar 106 toward the icon 105 increases the backlight intensity level of the mobile phone screen to increase the current target backlight brightness; moving the control 107 along the progress bar 106 toward the icon 104 lowers the backlight intensity level to reduce the current target backlight brightness.
  • the user can set the current target backlight brightness of the mobile phone screen according to the usage requirements and actual phone usage conditions.
  • the interface provided by the mobile phone for the user to manually adjust the current target backlight brightness of the screen is not limited to FIG. 4 (2), and the backlight brightness can also be adjusted through other interfaces.
  • In some embodiments, the final displayed brightness of the screen may be the result of the joint action of the current target backlight brightness value and the screen display value; as the backlight changes, the brightness range of the screen changes accordingly.
  • An OLED screen is self-emissive: the brightness of the light source for each pixel can be directly controlled, and the overall change of the screen display value can be expressed in terms of backlight intensity.
  • For example, the backlight of a mobile phone is adjustable: if the maximum display brightness (that is, the display capability) of the mobile phone screen is 500 nit and the backlight intensity level is 10%, then the current target backlight brightness is 50 nit.
  • However, the actual brightness of individual pixels on the screen can be higher than 50 nit, such as 60 nit or 100 nit.
  • Considering the adjustable backlight brightness of the display screens of electronic devices, it can be seen that neither the global nor the local tone mapping schemes of the traditional technology take into account that, under a certain backlight intensity, the final display brightness of single or partial pixels of the screen can be increased from the current target backlight brightness (for example, the above-mentioned 50 nit) up to the brightness reached when the real backlight level of those pixels is the strongest (that is, the above-mentioned maximum display brightness, for example, 100 nit).
  • The traditional scheme, which only aligns the maximum and minimum brightness values of the image with the maximum and minimum brightness values of the screen, does not adapt the display when the backlight is weakened (that is, weakened from the strongest backlight, with the backlight intensity level reduced from 100%), which causes the video or image displayed by the electronic device to have low contrast and dark colors.
  • For this reason, in the embodiment of the present application, the code stream sent by the sending end can carry backlight metadata, and the display adaptation module at the receiving end can dynamically perform dynamic range mapping on the optical signal image based on the backlight metadata, obtain image display information, and send the image display information to the display module.
  • The CPU at the receiving end can, according to the image display information, control the circuits of the display module to generate light of corresponding brightness at the pixels of the display module, so as to realize adaptation between the image (a high HDR signal) generated by the sending end and the low HDR signal displayed by the display device to which the receiving end belongs.
  • the system in the embodiment of the present application can realize adaptive display of an image on a display device in a scene where the backlight is adjustable.
  • the backlight metadata may include a local or global initial gain adjustment curve, and a local or global anchor point.
  • Here, "global" can refer to a frame of image or to a frame sequence; "local" can refer to an image region in a frame of image, an image scene, and the like.
  • The backlight metadata can include at least one of the following combinations of data: local initial gain adjustment curves and local anchor points; global initial gain adjustment curves and global anchor points; global initial gain adjustment curves and local anchor points; or local initial gain adjustment curves and global anchor points.
  • the anchor point described in this article is a brightness value.
  • The local anchor point of a partial image (for example, an image region) is a brightness value smaller than the maximum pixel brightness of the partial image and larger than the minimum pixel brightness of the partial image.
  • The global anchor point is a backlight-control anchor point for the whole (for example, a frame of image): a brightness value smaller than the global maximum pixel brightness and larger than the global minimum pixel brightness.
  • Any method of generating a tone mapping curve in the traditional technology may be used to generate the initial gain adjustment curve of the image, which is not limited in the present application.
  • The tone mapping curve has various forms, including but not limited to sigmoid, cubic spline, gamma, and piecewise cubic spline functions, any of which can be used as the initial gain adjustment curve.
  • FIG. 5a is an exemplary system architecture diagram refining FIG. 1; the parts of FIG. 5a that are the same as FIG. 1 are not repeated here, and only the differences are described in detail.
  • The system shown in FIG. 5a is only an example; the system of the present application, and the sending end and receiving end in the system, may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration.
  • the various components shown in Figure 5a may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the sending end may include a backlight metadata generating module
  • The backlight metadata generation module can generate backlight metadata for the electrical-signal image or image sequence produced by the photoelectric conversion module, where the backlight metadata can include local or global initial gain adjustment curves and local or global anchor points (for their definitions, refer to the above; details are not repeated here).
  • the encoding module can encode the electrical signal image and backlight metadata, generate a code stream and send it to the receiving end.
  • FIG. 5b exemplarily shows the process of generating the initial gain adjustment curve 0 and the anchor point for region A of image 1.
  • The following description takes image 1 as the processing object:
  • The backlight metadata generation module can divide the image 1 to be displayed, grouping pixels with close brightness and close spatial position into the same region, so as to divide image 1 into multiple regions C; it then identifies the highlight regions among the multiple regions C.
  • The division method is not limited to the example shown in FIG. 5b; other methods can also be used to group pixels with similar brightness and close spatial positions into one region, so as to realize region division of the image.
  • The process in FIG. 5b may include the following steps:
  • First, image 1 is divided into regions for the first time.
  • In one way, image 1 may be treated as a single area (that is, no area division is performed); in this case, image 1 itself is area A.
  • In another way, image 1 is divided into multiple rectangular areas to obtain multiple areas A.
  • Next, the backlight metadata generation module generates an initial gain adjustment curve 0 for area A.
  • For example, image 1 is a 64x64 image.
  • the generation method of the initial gain adjustment curve 0 may be realized by any method of generating a tone mapping curve in the conventional technology, which is not limited in the present application.
  • an initial gain adjustment curve for an image area may be generated according to the above method.
  • When the initial gain adjustment curve in the backlight metadata is a global initial gain adjustment curve: if the "global" is a frame of image, the initial gain adjustment curve can be generated for that frame; if the "global" is a frame sequence, an initial gain adjustment curve can be generated from one frame in the sequence and used as the initial gain adjustment curve of the whole frame sequence.
  • Then, the backlight metadata generation module divides area A into equal parts to generate areas B.
  • For example, area A can be divided into 4 equal areas B, each of size 32x32.
  • Dividing area A into four areas B is equivalent to performing an initial clustering of the pixels in area A.
  • The four areas B can be called class 0, class 1, class 2, and class 3, respectively.
  • Then, the backlight metadata generation module performs iterative clustering on the pixels in area A according to the brightness and spatial position of the pixels.
  • This step iteratively updates the clustering of the pixels of the four regions B in area A, so that pixels in area A with close brightness and close spatial position are clustered into one class, forming four new cluster classes, namely the four regions C in FIG. 5b.
  • The four regions C may be referred to as class 0', class 1', class 2', and class 3'.
  • In the first step, the cluster center position (x_centra, y_centra) of each area B is calculated, together with its average color value (for example, the mean of the three RGB channels: R_avg, G_avg, B_avg) or its Lab average (l_avg, a_avg, b_avg); the cluster center position here may be the geometric center of the pixel points in area B.
  • In the second step, a distortion value between each pixel in area A and each area B is computed by combining the spatial distance to the cluster center with the color difference from the average color, for example:
  • distortion = sqrt((x - x_centra)^2 + (y - y_centra)^2) / K + (|R - R_avg| + |G - G_avg| + |B - B_avg|) / M
  • where sqrt(·) represents the square root operation, and K and M are two constants whose function is normalization.
  • Since one region A includes four regions B, each pixel in region A obtains four distortion values.
  • In the third step, for the four distortion values of each pixel in area A, the area B corresponding to the minimum distortion value is selected as the updated cluster class of that pixel (for example, class 0), so that all pixels of area A complete one clustering iteration.
  • After iteration, the updated cluster classes amount to a re-division of area A; the four areas obtained by this division differ from the four regular rectangular areas shown in FIG. 5b, and their shapes may be irregular.
  • After the above processing, each area A in image 1 can be divided into four areas C.
  • The number of regions C generated when dividing region A is not limited to four; for example, when the image is equally divided in S401, it can also be divided into 6 or 9 equal parts, and correspondingly the number of regions C obtained from region A can be 6, 9, and so on.
• the method of dividing image 1 so that pixels with similar luminance and spatial position fall into the same region is not limited to the above scheme; other methods in the field can also be used to realize the region division of image 1.
• the backlight metadata generation module uses a preset algorithm to identify highlighted areas among the divided areas of the image.
• the backlight metadata generation module can use bilateral filtering or guided filtering to filter image 1 (for example, including the four regions C shown in Fig. 5b), and treat pixels whose values differ greatly between image 1 before and after filtering as highlighted pixels. Then, among the 4 regions C obtained by dividing image 1, a region C whose number of highlighted pixels is greater than or equal to a preset threshold is regarded as a highlighted region C, and a region C whose number of highlighted pixels is smaller than the preset threshold is regarded as a non-highlighted region C. In this way, the areas of image 1 can be divided into two types: highlighted areas and non-highlighted areas (sketched below).
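A minimal sketch of this highlight classification, assuming OpenCV's bilateral filter; the difference and count thresholds, and the per-region mask representation, are illustrative choices rather than values from the text.

```python
import cv2
import numpy as np

def classify_highlight_regions(img, region_masks, diff_thresh=16, count_thresh=100):
    """Split regions C into highlighted and non-highlighted ones.

    img: uint8 BGR image (image 1).
    region_masks: list of boolean masks, one per region C.
    """
    filtered = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
    # Per-pixel difference between image 1 before and after filtering; pixels
    # with a larger difference are treated as highlighted pixels.
    diff = np.abs(img.astype(np.int16) - filtered.astype(np.int16)).max(axis=2)
    highlighted_pixels = diff >= diff_thresh
    highlight, non_highlight = [], []
    for mask in region_masks:
        n = int(np.count_nonzero(highlighted_pixels & mask))
        (highlight if n >= count_thresh else non_highlight).append(mask)
    return highlight, non_highlight
```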
  • the four regions C in the region A include two highlighted regions C and two non-highlighted regions C.
  • the attributes of the corresponding photographed object in image 1 may affect whether the image area is a highlighted area.
  • the reflectivity of clouds and glass is high, and the image areas of clouds and glass are likely to be highlighted areas.
  • the image areas of objects with low reflectivity such as rough cotton and linen clothes are easily regarded as non-highlight areas.
• when identifying which of the regions divided from the image are highlighted regions and which are non-highlighted regions, the method is not limited to bilateral filtering or guided filtering; other methods can also be used. For example, the average luminance value of the pixels of each region C in region A can be calculated, and a region C whose average luminance is higher than a preset threshold can be taken as a highlighted region C, thereby classifying region A into highlighted and non-highlighted regions; this application does not limit this.
  • the backlight metadata generation module determines the brightness value of the anchor point of the region A according to the pixel brightness of the highlighted region C in the region A.
  • the minimum value a1 of the pixel brightness and the average value a2 of the pixel brightness may be obtained.
  • the anchor point of the area A is generated by using the minimum value a1 of the pixel brightness and the average value a2 of the pixel brightness.
  • any luminance value greater than a1 and less than a2 may be used as the anchor point of the region A.
  • the average value of a1 and a2 may also be used as the anchor point of the area A.
  • the weighted results of a1 and a2 may also be used as the anchor point of the region A.
  • an area A has an anchor point.
• the minimum value a1 and the average value a2 of pixel brightness may also be obtained over all pixels of the areas C in area A (i.e., all pixels of area A).
  • the anchor point of the area A is generated by using the minimum value a1 of the pixel brightness and the average value a2 of the pixel brightness.
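A small sketch of the anchor computation just described; the weighted combination is one of the listed options (any value between a1 and a2 is allowed), and the function name is illustrative.

```python
import numpy as np

def region_anchor(luminance, weight=0.5):
    """Anchor brightness for region A from the luminances of its highlighted
    regions C (or, in the variant above, of all pixels of region A): any
    value between the minimum a1 and the mean a2 qualifies; weight=0.5
    yields the average of a1 and a2."""
    a1 = float(np.min(luminance))
    a2 = float(np.mean(luminance))
    return a1 + weight * (a2 - a1)
```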
  • the method for generating the anchor point in this application is not limited to the method in the above embodiment, and it can also be implemented in other ways, which is not limited in this application.
  • the global anchor point of the frame image can be generated for the frame image, and the local anchor point can also be generated for the region in the image.
  • the anchor point in the backlight metadata is the anchor point of the frame sequence
• the anchor point can be generated for one frame of image through the above method, and then used as the anchor point of the frame sequence that includes that frame.
  • Fig. 5b describes the process of generating the initial gain adjustment curve and the anchor point in one embodiment.
  • a local or global initial gain adjustment curve and a local or global anchor point can also be generated through the process shown in FIG. 5c(1).
  • the process may include the following procedures:
• Fig. 5c(1) can be executed by the backlight metadata generation module in Fig. 5a; the area A in Fig. 5c(1) can be a frame of image or a partial image area within an image, which is not limited in this application.
  • This process is similar to S400 in FIG. 5b , and will not be repeated here.
  • FIG. 5c(2) shows a cumulative distribution histogram of the luminance of pixels in the region A.
  • the horizontal axis x is used to represent the brightness value of the pixels in the region A
  • the vertical axis y is used to represent the number of pixels in the region A.
  • area A includes 100 pixels
  • the number of pixels with brightness x1 in area A is 20
• the number of pixels with brightness x2 is 30, and the number of pixels with brightness x3 is 50.
  • the preset ratio can be 95%, 99% or 99.5%, etc., the preset ratio is higher than 50%, and the specific value can be flexibly configured according to needs, which is not limited in the present application.
• the preset ratio is 95%; based on the histogram shown in Fig. 5c(2), the brightness value at which the cumulative pixel count reaches 95% of the pixels in area A can be used as area A's anchor point (see the sketch below).
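A sketch of this cumulative-histogram anchor selection, with the preset ratio as a parameter; names are illustrative.

```python
import numpy as np

def histogram_anchor(luminance, preset_ratio=0.95):
    """Smallest brightness value at which the cumulative pixel count reaches
    preset_ratio of all pixels in region A (95% here; 99% or 99.5% work the
    same way)."""
    values = np.sort(np.asarray(luminance).ravel())
    idx = int(np.ceil(preset_ratio * values.size)) - 1
    return float(values[idx])
```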
  • S503 is executed after S502, but this application does not limit the execution order of S501, S502, and S503.
• the sending end in Fig. 5a is used as an example to generate a local or global initial gain adjustment curve and a local or global anchor point, so that the curve and anchor point information can be passed to the receiving end as attributes of the code stream.
  • the receiving end of the embodiment of the present application may also include a backlight metadata generation module to perform the above operations to generate a local or global initial gain adjustment curve and a local or global anchor point.
• the method is similar and will not be repeated here.
  • the display adaptation module in FIG. 1 may include the tone mapping curve generation module shown in FIG. 5a, and the tone mapping module.
  • the process may include the following steps:
  • the receiving end acquires an image 1 to be displayed in a code stream.
  • the decoding module can obtain an image, such as image 1 in FIG. 5b, by decoding the code stream.
  • the receiving end obtains the backlight metadata in the code stream, and the backlight metadata includes the initial gain adjustment curve 0 of the area A in the image 1 and the first brightness of the area A (also known as an anchor point).
  • the decoding module can decode the code stream to obtain backlight metadata, wherein the backlight metadata can include a local or global initial gain adjustment curve of the code stream, and a local or global anchor point.
  • the decoding module can obtain the initial gain adjustment curve of area A in image 1 and the anchor point of area A, and send them to the tone mapping curve generation module.
  • the receiving end acquires the maximum display brightness of the display screen and the current backlight intensity level.
  • the tone mapping curve generation module may acquire the maximum display brightness of the display screen and current backlight intensity information of the display screen from the display module.
  • the current backlight intensity information may be the current backlight intensity level, or the current target backlight brightness.
  • the receiving end can obtain the maximum display brightness (for example, 500nit) of the display screen from the display device to which the receiving end belongs.
• the embodiment of the present application can also display a test sample (a test image) on the display screen of the display device, in which N% (such as 5% or 10%) of the area is white and the other areas are black; the brightness measured on this sample is taken as the maximum display brightness of the display screen.
  • the present application does not limit the manner of obtaining the maximum display brightness of the display device.
• the current backlight intensity information of the display screen obtained by the tone mapping curve generation module may be the current target backlight brightness of the display screen as automatically adjusted by the system, or as manually adjusted by the user, which will not be repeated here.
• the brightness value on which this processing is based is not limited to the maximum display brightness; it can be any brightness value greater than the current target backlight brightness and less than or equal to the maximum display brightness of the display screen (for example, a weighted result of the current target backlight brightness and the maximum display brightness).
  • curve 0 can be any tone mapping curve in the conventional technology
  • curve 1 and subsequent curve 2 are also tone mapping curves
• the argument x of the function f(x) of any tone mapping curve represents the luminance value of an image pixel of the SDR source or HDR source, and f(x) represents the luminance value at which that image pixel is displayed on the display.
• the tone mapping curve generation module can process, for example, the initial gain adjustment curve of area A (for example, curve 0, also called curve0) according to the maximum display brightness maxT, and generate an updated gain adjustment curve (e.g. curve 1, aka curve1).
• the maximum display brightness can be used to scale curve 0 of area A to obtain curve 1. It should be noted that the processing method is not limited to the scaling described in mode 1 and mode 2; it may also include mode 3, mode 4, mode 5, and other modes not listed, which is not limited in this application.
• since curve 0 is the initial gain adjustment curve (a tone mapping curve) of region A, the maximum x value of curve 0 represents the maximum brightness value of the image pixels in region A, and the corresponding y value (i.e. maxY) represents the brightness value at which the image pixel with the maximum brightness in region A is displayed on the display screen.
  • the backlight metadata may include a preset brightness value T3.
• the preset brightness value T3 is the target brightness target_maxdisplay for which curve 0 was originally generated, where the target brightness is the brightness value at which an image pixel of a certain brightness is displayed on the display screen; it may be, for example, the maximum display brightness value (a scaling sketch is given below).
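The patent leaves the exact scaling rule open (modes 1 and 2 are only two of several options), so the sketch below assumes a simple proportional rescale of curve 0 from its original target brightness T3 (target_maxdisplay) to the display's maximum display brightness maxT; everything here is illustrative.

```python
def scale_curve(curve0, maxT, T3):
    """Rescale curve 0, originally generated for target brightness T3, to the
    maximum display brightness maxT (assumed proportional scaling)."""
    def curve1(x):
        return curve0(x) * (maxT / T3)
    return curve1
```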
  • the actual brightness of the display screen can be higher than the backlight brightness.
  • the maximum display brightness of the screen is 500nit
  • the backlight intensity level is 10%
  • the backlight brightness is 50nit
• the electronic device may have a strategy that adjusts the final brightness of the display screen so that the final brightness may be higher than 50 nit; therefore, in this embodiment, curve 1 is the tone mapping curve matched to a brightness value greater than the backlight brightness and less than or equal to the maximum display brightness.
  • the image parameters include, but are not limited to, the maximum value of the brightness of the pixel, the minimum value of the brightness of the pixel, or the average value of the brightness of the pixel.
  • the display brightness may include the maximum display brightness of the display screen or the current target backlight brightness.
  • Curve A is obtained by using image parameters and display brightness as curve parameters of a traditional tone mapping curve.
  • the difference between the maximum display brightness and the target brightness target_maxdisplay may be used as the weight value w, but the manner of obtaining the weight value w is not limited thereto.
• curve A and the above curve 0 of area A are weighted and summed to obtain curve 1 (see the sketch below).
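A sketch of this mode-2 combination. The text only says curve A and curve 0 are weighted and summed, with a weight w derived e.g. from the difference between the maximum display brightness and target_maxdisplay; the complementary (w, 1-w) split below is an assumption.

```python
def blend_curves(curve0, curveA, w):
    """curveA: a traditional tone mapping curve built from image parameters
    and the display brightness; w: weight value; returns curve 1."""
    def curve1(x):
        return w * curveA(x) + (1.0 - w) * curve0(x)
    return curve1
```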
  • the backlight metadata may include each curve parameter representing curve 0, and the curve adjustment parameter B here.
  • the value of the curve adjustment parameter B is 0.2
  • the maximum display brightness of the display screen is 200, and the maximum display brightness of the display screen corresponding to curve 0 is 500;
• the curve parameter adjustment value B1 is superimposed (e.g., added or subtracted) on each curve parameter of curve 0 to obtain each curve parameter of curve 1, and then curve 1 is obtained.
  • each curve parameter of curve 0 can be added or subtracted by 0.6 to obtain a new set of curve parameters to obtain curve 1.
  • mode 4 is to adjust each curve parameter of curve 0 according to the curve parameter adjustment value B1.
• when a certain target parameter of curve 0 has the greatest influence on the shape of the curve, the backlight metadata can include the curve adjustment parameter B of only that target parameter of curve 0; that is, only that target parameter of curve 0 is adjusted according to mode 4.
  • the curve 0 is scaled to obtain the curve 00;
  • the parameters of curve 0 other than the above-mentioned target parameters can be scaled to obtain curve 00 .
• B1 may be added to or subtracted from the target parameter of curve 00 to obtain the new curve 1.
  • the receiving end processes curve 0 according to the maximum display brightness and the current backlight intensity level to generate curve 2.
  • the tone mapping curve generation module may process, for example, the initial gain adjustment curve (curve0) of area A according to the current backlight intensity information (for example, the current backlight intensity level) and the maximum display brightness maxT, to generate The updated gain adjustment curve (eg curve 2, aka curve2).
  • the curve 2 does not need to be generated using the maximum display brightness of the display module.
  • the processing method is not limited to scaling, and may include other methods, which are not limited in this application.
  • the backlight metadata may include a preset brightness value T3.
  • the preset brightness value T3 is the target brightness target_maxdisplay originally generated by the curve 0 .
• a tone mapping curve that matches the backlight brightness (i.e. curve 2) can be generated.
  • the receiving end processes the curve 1 and the curve 2 according to the anchor point of the area A, and generates the tone mapping curve 3 of the area A.
  • FIG. 5 e is a schematic diagram of generating curve 3 from curve 1 and curve 2 exemplarily shown.
  • each x-axis represents the luminance value of the image pixel in region A
  • the y-axis represents the luminance value displayed on the display screen by the image pixel in region A.
  • the value range of x of each tone mapping curve in FIG. 5e is 0-1 as an example, and the anchor point of area A is 0.7 as an example for illustration.
• since curve2 is a tone mapping curve generated based on the current target backlight brightness, most of the tone mapping values in curve2 (that is, the display brightness values, the y values, to which the brightness values of image pixels are mapped) are relatively reasonable, while a few are unreasonable. Therefore, based on the anchor point, part of the tone mapping curve is first intercepted from curve2, and curve splicing is then performed based on curve 1 to generate area A's final tone mapping curve (i.e. curve 3, also known as curve3).
• the y value corresponding to the anchor point 0.7 is y1, and curve2 can be intercepted from the origin (0, 0) to P1 (0.7, y1) to obtain curve2-1; that is, the final tone mapping curve curve3 uses curve2 for the part of the curve whose pixel brightness is below the anchor point (0-0.7).
  • the part of curve2-1 in FIG. 5e(1) is used within the range of pixel brightness 0-0.7.
  • a graph of curve1 is shown.
  • the value of x ranges from 0 to 1
  • the curve below the anchor point has been determined from curve2
• the curve segment P2P3 of curve1 can be translated toward the x-axis along the opposite direction of the y-axis, with the translation direction parallel to the y-axis as shown by the translation arrow in Fig. 5e(2); for the translated curve segment (the segment MN in Fig. 5e(2), also called curve1'), the y value is y1 when the x value is the anchor point 0.7, where P2 becomes M(0.7, y1) after translation and P3 becomes point N after translation.
  • the translation amount of the curve here may be (y2-y1).
  • curve3 is realized by translating curve1, and it can also be realized by scaling or stretching curve1, which is not limited in this application.
• as shown in Fig. 5e(3), below the anchor point (pixel brightness in the range 0-0.7), curve3 uses the curve2-1 part from Fig. 5e(1); above the anchor point (pixel brightness in the range 0.7-1), it uses the MN segment of curve1' obtained in Fig. 5e(2); this yields the tone mapping curve curve3 of area A (that is, the target tone mapping curve in Fig. 5a).
  • point M in FIG. 5e(3) is also point P1 in FIG. 5e(1), and a letter M is used here to indicate this point.
• the entire curve1 can also be translated first, and the curve segment above the anchor point then intercepted from the translated curve1 and connected with curve2-1 at the anchor point to obtain curve3, which is not limited in this application.
• since curve3 is formed by connecting, at the anchor point position, two curve segments originating from two different curves, curve3 is not smooth enough at the anchor point M shown in Fig. 5e(3), which may cause the image to appear distorted on the display after it is tone-mapped based on curve3 (the splicing is sketched below).
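A sketch of the Fig. 5e splicing just described, assuming curve1 and curve2 are available as functions of normalized pixel brightness; smoothing at the junction point M is not included.

```python
def splice_curves(curve1, curve2, anchor):
    """Build curve 3: below the anchor use curve2 (matched to the backlight
    brightness); above it use curve1 translated down by (y2 - y1) so the two
    segments connect at the anchor."""
    y1 = curve2(anchor)          # P1 = (anchor, y1) on curve2
    y2 = curve1(anchor)          # P2 = (anchor, y2) on curve1
    shift = y2 - y1              # translation amount (y2 - y1)
    def curve3(x):
        return curve2(x) if x <= anchor else curve1(x) - shift
    return curve3

# Usage with the example above: curve3 = splice_curves(curve1, curve2, 0.7)
```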
  • the receiving end performs tone mapping processing on the area A according to the tone mapping curve 3, and acquires image display information of the area A.
  • This application does not limit the execution sequence of S204 and S205.
  • the tone mapping curve generating module at the receiving end may send the generated target tone mapping curve (such as the above curve 3) to the tone mapping module.
• the tone mapping module can perform tone mapping processing on area A of image 1 according to curve 3, so as to adjust the image pixels of area A and obtain the display brightness of each pixel of area A on the display module.
• the display module can display area A of image 1 according to the display brightness corresponding to each pixel of area A, and can display image 1 after its image pixels are adjusted, so as to realize the adaptation of image 1 to the backlight brightness of the display.
  • the display module can display each area A according to its own image display information.
  • Performing tone mapping on an image can be understood as dynamic range mapping, such as the dynamic range mapping shown in FIG. 3 .
  • the mapping of the dynamic range can be divided into a static mapping method and a dynamic mapping method.
• the static mapping method can be described as follows: an overall tone mapping process is performed based on a single piece of data for the same video content or the same hard disk content, that is, the same target tone mapping curve is used to tone-map one or more frames of images; the code stream therefore carries less information, the image display-adaptation process is simpler, and the image display delay is lower.
  • the dynamic mapping method can be described as: according to a specific area in the image, or each scene, or the image content of each frame, perform dynamic tone mapping with different tone mapping curves, then each frame or each scene needs to carry relevant scene information.
  • the tone mapping processing of different tone mapping curves can be performed according to a specific area, each scene or each frame of image, and different tone mapping curves can be used for tone mapping for different scenes, different image areas, and different frame images. For example, according to the shooting scene, tone mapping can be performed on an image of a brighter scene using a tone mapping curve that focuses on protecting the display effect of the bright area.
• the static mapping method is equivalent to using the same target tone mapping curve for multiple frames of images to perform tone mapping; the target tone mapping curve generated from one of the multiple frames can then be used as the target tone mapping curve of all of them.
  • the global initial gain adjustment curve and the global anchor point of a frame of image can be used for realization.
• the dynamic mapping method is equivalent to using different target tone mapping curves for different frame images, different image regions in the same frame, and images of different scenes; local anchor points and local initial tone mapping curves can then be used to generate the target tone mapping curve for each part.
  • backlight metadata may include local or global initial gain adjustment curves, and local or global anchor points.
  • the initial gain adjustment curve of the local area of the image may be called a local initial gain adjustment curve
  • the anchor point of the local area of the image may be called a local anchor point
  • the initial gain adjustment curve of a frame of images may be called a global initial gain adjustment curve, and the anchor point of a frame of images may be called a global anchor point.
  • the initial gain adjustment curves and anchor points of different frame images may be different.
  • An initial gain adjustment curve of multiple frames of images may also be called a global initial gain adjustment curve.
  • an initial gain adjustment curve of one frame of images among multiple frames of images may be used as a global initial gain adjustment curve of multiple frames of images.
  • An anchor point shared by multiple frames of images may be called a global anchor point.
  • an anchor point may be generated for one frame of images in the multiple frames of images, and the anchor point may be used as a global anchor point of the multiple frames of images.
• the respective target tone mapping curves of each frame of image can be generated by using the initial gain adjustment curve and anchor point of each frame of image according to the above method.
  • Each frame of image is tone-mapped according to its respective target tone-mapping curve, so as to realize the display adaptation between the image and the display screen.
• according to the above method, the global initial gain adjustment curve and the global anchor point of the multi-frame images can be used to generate a target tone mapping curve suitable for the multiple frames of images, and tone mapping can be performed on the multiple frames according to this same target tone mapping curve, so as to realize display adaptation between the images and the display screen.
• each region of image 1 can generate its own target tone mapping curve (also a local tone mapping curve) according to its initial gain adjustment curve and its anchor point; then the different areas of image 1 are tone-mapped according to their respective target tone mapping curves, realizing display adaptation between the different areas of the image and the display screen, with different areas of the same image tone-mapped according to different tone mapping curves.
• the respective local initial gain adjustment curves and local anchor points can be used to generate the target tone mapping curve of each scene; then images of different scenes are tone-mapped according to the target tone mapping curve of the corresponding scene, realizing display adaptation of images of different scenes to the display screen.
• anchor points can be generated for local regions while the initial gain adjustment curve is generated for the global image (such as a frame or a frame sequence).
• each local image may use its respective local anchor point and the global initial gain adjustment curve to generate its own target tone mapping curve, where each partial image may use the global initial gain adjustment curve as its initial gain adjustment curve when generating its own target tone mapping curve.
  • the process of generating respective target tone mapping curves for each partial image is similar to the process described above in FIG. 5d and FIG. 5e , and will not be repeated here.
• each local image may also generate its own target tone mapping curve using its respective local initial gain adjustment curve and the global anchor point, where each partial image uses the global anchor point as its anchor point when generating its own target tone mapping curve; that is, all partial images multiplex the same anchor point.
  • the process of generating respective target tone mapping curves for each partial image is similar to the process described above in FIG. 5d and FIG. 5e , and will not be repeated here.
• the initial gain adjustment curves and anchor points in the code stream need not correspond one-to-one; when generating the target gain adjustment curve of an image or area to be displayed, one initial gain adjustment curve and one anchor point are used together, that is, an initial gain adjustment curve and an anchor point are used as a pair.
• when the receiving end performs tone mapping processing on region A according to tone mapping curve 3 to acquire the image display information of region A, method 1 implements the tone mapping processing if tone mapping curve 3 is a global tone mapping curve, and method 2 implements the tone mapping processing if tone mapping curve 3 is a local tone mapping curve.
• Method 1: the brightness value displayed on the display screen is acquired for each pixel of the image to be displayed according to the global tone mapping curve.
• gain can be calculated according to formula (7): gain[i] = f(max(Rp, Gp, Bp)[i]) / max(Rp, Gp, Bp)[i];
  • Rp, Gp, Bp are different color components of the current pixel P in the image to be displayed
  • max(Rp, Gp, Bp) is the maximum component value of the three color components.
  • f(x) represents the function of the global tone mapping curve used in the image to be displayed, and [i] indicates that the current pixel P is the i-th pixel in the image to be displayed for tone mapping.
  • the tone mapping value of the current pixel P is calculated according to formula (8), formula (9), and formula (10).
• RpTM[i], GpTM[i], BpTM[i] are the tone mapping values of the red, green, and blue color components of the current pixel P (that is, the brightness values at which the current pixel P of the image to be displayed is displayed on the display screen).
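A sketch of method 1. Formula (7) is given above; formulas (8)-(10) are not reproduced verbatim in the text, so scaling each color component by the same gain is an assumption consistent with formula (7). The curve function f must accept NumPy arrays.

```python
import numpy as np

def global_tone_map(rgb, f):
    """Method 1: per-pixel gain from the global tone mapping curve f.

    rgb: (N, 3) float array of (Rp, Gp, Bp) values for the image to be
    displayed; returns the (RpTM, GpTM, BpTM) tone mapping values.
    """
    maxc = rgb.max(axis=1)                          # max(Rp, Gp, Bp) per pixel
    safe = np.maximum(maxc, 1e-9)                   # avoid division by zero
    gain = np.where(maxc > 0, f(safe) / safe, 1.0)  # formula (7)
    return rgb * gain[:, None]                      # assumed formulas (8)-(10)
```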
• Method 2: for each pixel of a partial image in the image to be displayed, the brightness value displayed on the display screen is obtained according to the local tone mapping curve, where the local tone mapping curve is a local target tone mapping curve generated for that partial image, for example curve3 of area A above.
• each region A can generate its own local target tone mapping curve according to the method of generating the target tone mapping curve described above.
  • each partial image can perform its own tone mapping according to its own partial tone mapping curve, and the shapes of the partial images in Image 1 can be the same or different, which is not limited in the present application.
• the information about each partial image (also referred to as a tone mapping area) divided from image 1 may be size information of the partial image within image 1, such as length and width information, used for determining the partial image.
  • the information describing the size of the partial image can be carried in the above-mentioned backlight metadata, so that the receiving end can perform tone mapping on each partial image according to the target tone mapping curve of each partial image based on the information of each partial image .
• the target areas A nearest to pixel point P in spatial distance can be determined, and the number of such nearest target areas A can be one or more.
• the first tone mapping value of the current pixel P is calculated; if there is one target area A, the first tone mapping value is used as the final tone mapping value of the current pixel P; if there are multiple target areas A, the gain[j] of the j-th target area A can be obtained by formula (7), where j = 1, 2, 3, 4, giving multiple first tone mapping values corresponding to the target areas A; these first tone mapping values are weighted and summed, and the weighted sum is divided by the sum of the weights, as in formulas (12)-(14) below.
• Weight[j] = f(Num[j]/Numtotal), formula (11);
  • j in the formula (11) represents the j-th target area A in the four areas A of image 1
  • Num[j] represents the number of pixels in the j-th target area A in image 1
• Numtotal represents the total number of pixels of all target areas A around the current pixel point P;
  • f(x) in the formula (11) represents a function of the target tone mapping curve of the jth target area A in the image 1 .
• f(x) = power(x, NumStr[j]) in formula (11), where this f(x) is one specific representation of the function used in formula (11); NumStr[j] represents the intensity information of the j-th target area A, and the intensity information indicates the attenuation speed of the weighting coefficient of the j-th target area, being a preset value for each area A.
  • the final tone mapping values Rtm, Gtm, and Btm of the current pixel P can be calculated according to formula (12), formula (13), and formula (14).
• Rtm = Σ(Rp*gain[j]*Weight[j]) / ΣWeight[j], formula (12);
• Gtm = Σ(Gp*gain[j]*Weight[j]) / ΣWeight[j], formula (13);
• Btm = Σ(Bp*gain[j]*Weight[j]) / ΣWeight[j], formula (14);
  • Rp, Gp, Bp are the different color components of the current pixel P of a certain area A (such as the initial area A referred to above) in the image 1 to be displayed;
• gain[j] = f[j](max(Rp, Gp, Bp)) / max(Rp, Gp, Bp);
  • Rp, Gp, Bp are the different color components of the current pixel P
  • max(Rp, Gp, Bp) is the maximum component value of the three color components
• f[j] is the function of the local target tone mapping curve of the j-th target area A in image 1, and j is an integer greater than or equal to 1 and less than or equal to 4.
• when the current pixel P is a pixel point on the common boundary of at least two areas A, the respective target tone mapping curves of at least the two areas A around pixel P can be used to tone-map pixel P separately, and the resulting at least two first tone mapping values are weighted to obtain the final tone mapping value of pixel P; in this way, when different image areas use different tone mapping curves, the display of pixels shared by different image areas can take the display scenes of the different areas into account and avoid abnormally jumping pixels in the displayed image (a per-pixel sketch follows below).
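A sketch of this multi-area weighting for a single pixel P, following formulas (7) and (11)-(14); the per-area dictionary layout ("curve", "num", "num_str") is an illustrative representation, not from the text.

```python
import numpy as np

def local_tone_map_pixel(rgb_p, target_areas):
    """rgb_p: (Rp, Gp, Bp) of the current pixel P (max component > 0);
    target_areas: for each nearby target area A j, its local curve f[j],
    pixel count Num[j], and intensity NumStr[j]."""
    rgb_p = np.asarray(rgb_p, dtype=float)
    maxc = rgb_p.max()
    num_total = sum(a["num"] for a in target_areas)
    acc, wsum = np.zeros(3), 0.0
    for a in target_areas:
        gain = a["curve"](maxc) / maxc                    # formula (7)
        weight = (a["num"] / num_total) ** a["num_str"]   # formula (11)
        acc += rgb_p * gain * weight
        wsum += weight
    return acc / wsum                                     # formulas (12)-(14)
```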
• the receiving end of the embodiment of the present application can also, after performing tone mapping on region A using the target tone mapping curve, make further saturation (color) adjustments for each pixel using a saturation adjustment algorithm.
• the data of the curve may not be directly carried in the backlight metadata; instead, the parameters of the initial gain adjustment curve and the type of the curve may be carried in the backlight metadata, and the receiving end can then obtain the curve type and the parameters from the backlight metadata and obtain a local or global target tone mapping curve based on the HDR source or the SDR source.
  • an HDR source or an SDR source can be understood as a video or image to be processed at the receiving end, and the HDR source or SDR source may include but not limited to: image data, such as pixel data, and the like.
  • the backlight metadata may include but not limited to: the data format of the HDR source or the SDR source, the division information of the image area, the traversal order information of the image area, the image characteristics, and the curve parameters of the local or global initial gain adjustment curve, and one or more metadata information units.
  • the metadata information unit may include coordinate information, the aforementioned image features, and curve parameters of the aforementioned initial gain adjustment curve.
• the backlight metadata can include histogram information and tone mapping curve parameter information as in the ST2094-40 standard, or tone mapping curve parameter information as in the ST2094-10 standard.
  • the initial gain adjustment curve in the backlight metadata can be any traditional tone mapping curve.
  • the receiver can obtain the initial gain adjustment curve from the metadata information unit in the code stream and the HDR source or SDR source.
• the curve parameters a, b, p, m, n are used to obtain the initial gain adjustment curve; then, using the anchor point and the initial gain adjustment curve, as well as the image to be displayed, the target tone mapping curve can be obtained, and the target tone mapping curve can still include the curve parameters described above.
• the function of the target tone mapping curve can be shown as formula (15), which represents the mapping relationship between the normalized HDR source or SDR source data (such as the image to be displayed) and the normalized display data (such as the display values of the pixels of the image to be displayed on the display screen).
  • L and L' may be normalized optical signals or electrical signals, which is not limited in this application.
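Formula (15) itself is not reproduced in the extracted text; the sketch below only shows what such a parametric curve with parameters a, b, p, m, n can look like in code, using an assumed rational-power form, mapping normalized L to normalized L'. It is an illustrative stand-in, not the patent's exact formula.

```python
def target_curve(L, a, b, p, m, n):
    """Illustrative stand-in for formula (15): normalized source data L in
    [0, 1] -> normalized display data L'. The functional form is an assumed
    example of a curve with parameters a, b, p, m, n."""
    base = (p * L ** n) / ((p ** n - 1.0) * L ** n + 1.0)
    return a * base ** m + b
```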
• the pixels of the image or video to be displayed can be YUV or RGB; as another example, in terms of data bit width, each pixel of the image to be displayed can be 8 bits, 10 bits, or 12 bits, etc.
• for an image or video in SDR or HDR format, the receiving end of the embodiment of the present application can perform tone mapping processing on all pixels of the image or video, realizing tone mapping from a high dynamic range to a low dynamic range in scenes where the backlight of the display device is adjustable.
• the initial tone mapping curve (that is, the initial gain adjustment curve described above) and the anchor point used to generate the target tone mapping curve can be carried in the code stream and transmitted to the receiving end; the hardware channel for transmitting curves that is already integrated in the hardware of the electronic device to which the receiving end belongs can be reused, which is more convenient to implement in hardware and can be used across the hardware channels of various electronic devices.
  • the backlight metadata may include pixel-level or downsampled gain adjustment coefficients.
  • the image and each frame of image in the video may have a pixel-level or down-sampled gain adjustment coefficient.
  • the pixel-level or down-sampled gain adjustment coefficient may be expressed in an image with the same size as the original frame of image, and the information of each pixel in the image is a coefficient, which is referred to as the gain adjustment coefficient herein.
  • the pixel-level or down-sampled gain adjustment coefficient can also be represented as: a table corresponding to each pixel in the original frame of image, the table corresponds to the pixel of the original image one-to-one, and the data in the table is the coefficient of each pixel, which is called the gain adjustment coefficient here.
  • the gain adjustment coefficients of the pixels in the highlighted region in each frame of image may be higher than the gain adjustment coefficients of the pixels in the non-highlighted regions.
  • Fig. 6a is an exemplary system architecture diagram after refining Fig. 1.
• for the parts of Fig. 6a that are the same as Fig. 1 and Fig. 5a, refer to the description above; the differences from Fig. 5a are described in detail below.
• the system shown in Fig. 6a is only an example; the system of the present application, and the sending end and receiving end in the system, may have more or fewer components than shown in the figure, may combine two or more components, or may have different component configurations.
  • the various components shown in Figure 6a may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the sending end may include a backlight metadata generation module
  • the backlight metadata generation module can generate backlight metadata for the electrical signal image or image sequence generated by the photoelectric conversion module, where the backlight metadata can include the pixel level of the image or the down-sampled gain adjustment coefficient.
  • the gain adjustment coefficient is a kind of gain coefficient, and the gain coefficient is used to represent the amplification factor.
• the pixel-level gain adjustment coefficient of an image is used to adjust the display pixels of the image at pixel level according to the gain adjustment coefficient; for example, for an image area whose pixel-level unit is 4*4, if the gain adjustment coefficient of the image area is 1.2, the values of the display pixels in that 4*4 image area can be amplified 1.2 times, for example, the brightness can be brightened by a factor of 1.2 (see the sketch below).
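The 4*4 example in the text reduces to a per-pixel multiplication, as in this small sketch (values illustrative):

```python
import numpy as np

area = np.full((4, 4, 3), 100.0)     # a 4*4 image area, RGB value 100 per pixel
gain_adjustment = 1.2                # gain adjustment coefficient of the area
brightened = area * gain_adjustment  # every display pixel amplified 1.2x
```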
  • the encoding module can encode the electrical signal image and backlight metadata, generate a code stream and send it to the receiving end.
• Fig. 6b exemplarily shows the process of generating gain adjustment coefficients for image 1 (expressed as image 2 shown in Fig. 6b).
  • the processing object is image 1 as an example for illustration:
  • the backlight metadata generating module may execute S404:
  • the backlight metadata generating module configures gain adjustment coefficients for image 1 according to the highlighted area and the non-highlighted area, and generates image 2 composed of gain adjustment coefficients.
  • coefficient 1, coefficient 2, coefficient 3 and coefficient 4 are respectively configured for four regions C.
  • the coefficients 1 and 2 corresponding to the two highlighted regions C are greater than the coefficients 3 and 4 corresponding to the two non-highlighted regions C.
• the backlight metadata generation module can configure different gain adjustment coefficients for different highlighted areas C according to the average brightness of the pixels in each highlighted area C; for example, the gain adjustment coefficient of a highlighted area C with a large average luminance is higher than that of a highlighted area C with a small average luminance.
  • the gain adjustment coefficient of the highlighted area C may be greater than 1, and the gain adjustment coefficient of the non-highlighted area C may be less than or equal to 1.
  • an initial gain adjustment coefficient is configured for each area C in the area A, and the initial gain adjustment coefficient may be a first preset coefficient threshold.
  • the first preset coefficient threshold may be 1.0 or 0.8, and the specific threshold may be flexibly configured according to actual needs, which is not limited in the present application.
  • the first preset coefficient threshold is less than or equal to 1.0.
  • each area C (which can be understood as a pixel level) in FIG. 6b can be configured with an initial gain adjustment coefficient, for example, the initial gain coefficients of 64x64 pixels in area A are all 1.0.
• the initial gain adjustment coefficient is configured per area C, that is, the gain adjustment coefficients of all pixels in the same area C are configured as 1.0; therefore, the gain adjustment coefficient in the embodiment of the present application can be called a pixel-level gain adjustment coefficient.
• for highlighted areas, the coefficient is expanded by a coefficient multiple on the basis of the initial gain adjustment coefficient.
• the backlight metadata generation module may calculate the average luminance value of the pixels of each highlighted area C, so that each highlighted area C has its own average luminance value; the coefficient multiple configured for a highlighted area C with a high average luminance value (for example, average luminance greater than or equal to a preset brightness threshold) is a third preset coefficient threshold, and the coefficient multiple configured for a highlighted area C with a low average luminance value (for example, average luminance less than the preset brightness threshold) is a fourth preset coefficient threshold, where the third preset coefficient threshold is greater than the fourth preset coefficient threshold.
• for example, the third preset coefficient threshold is 1.5 and the fourth preset coefficient threshold is 1.2; then coefficient 1 = 1.5, coefficient 2 = 1.2, coefficient 3 = 1, and coefficient 4 = 1.
• for the area C corresponding to coefficient 1, the value of each pixel of that area C in image 2 is no longer the color value from image 1 but coefficient 1, that is, 1.5; in this way, an image 2 of the same size as image 1 is obtained, in which the value of each pixel is a pixel-level gain adjustment coefficient.
• different gain adjustment coefficients can also be configured for the areas C according to the average brightness of their pixels, with the gain adjustment coefficient of an area C with high average brightness larger than that of an area C with low average brightness.
• the above-mentioned method of configuring the gain adjustment coefficients for region A of image 1 is only one implementation; this application can also configure the gain adjustment coefficients of each region A of image 1 through other methods, and the configuration method is not limited to the above examples.
• if image 1 includes multiple areas A, different gain coefficients need to be configured for each area A according to the highlighted and non-highlighted areas within that area A.
  • the method is the same and will not be repeated here.
• this application can also configure gain adjustment coefficients for the pixels of each image area at the pixel level with image areas of other sizes, such as 4*10 or 8*6; what is obtained after S404 is still an image 2 of the same size as image 1, that is, 64*64.
  • the value of each pixel in the image 2 is the gain adjustment coefficient configured by the backlight metadata generation module.
  • the above embodiment is described by taking the sending end in FIG. 6a as an example to generate pixel-level or down-sampling gain adjustment coefficients, so that the pixel-level or down-sampling gain adjustment coefficients can be passed to the receiving end as attributes of the code stream.
• the receiving end of the embodiment of the present application may also include a backlight metadata generation module that performs the above operations to generate pixel-level or down-sampled gain adjustment coefficients.
• the method is similar and will not be repeated here.
  • the display adaptation module in FIG. 1 may include the pixel processing module shown in FIG. 6a.
  • the process may include the following steps:
  • the receiving end acquires the image 1 to be displayed in the code stream.
  • This step is similar to S201 in FIG. 5d , and will not be repeated here.
  • the receiving end acquires backlight metadata in the code stream, where the backlight metadata includes a pixel-level gain adjustment coefficient of image 1 .
  • the decoding module may decode the code stream to obtain backlight metadata, where the backlight metadata may include pixel-level or down-sampled gain adjustment coefficients of images in the code stream.
• the pixel-level or down-sampled gain adjustment coefficients may be the gain adjustment coefficients of some or all images in the code stream, which is not limited in the present application.
  • the decoding module can obtain the pixel-level gain adjustment coefficient of Image 1 (such as Image 2 in Figure 6b), and send it to the pixel processing module.
  • the receiving end acquires the maximum display brightness of the display screen and the current backlight intensity level.
  • This step is similar to S203 in FIG. 5d , which can be referred to above, and will not be repeated here.
• the pixel processing module in the receiving end can obtain the maximum display brightness of the display screen of the display device to which the receiving end belongs, and the current backlight intensity information (which can be the current backlight intensity level or the current target backlight brightness, which is not limited in this application); that is, S303 can be executed by the pixel processing module in Fig. 6a.
  • the receiving end processes the pixels of the image 1 according to the maximum display brightness to generate the first image.
• the brightness value on which this processing is based is not limited to the maximum display brightness maxT of the display screen; it can be any brightness value greater than the current target backlight brightness and less than or equal to the maximum display brightness of the display screen (for example, a weighted result of the current target backlight brightness and the maximum display brightness).
  • the first image is generated by processing the pixels of image 1 according to the maximum display brightness as an example.
• when the brightness value used is not the maximum display brightness maxT, the method is the same and will not be repeated here.
• the maximum display brightness can be used to scale the display pixels (i.e., image pixels) of image 1 to obtain the first image (represented by TMvaluemid1 hereinafter); it should be noted that the processing method is not limited to scaling and may also include other modes, which is not limited in this application.
  • the maximum display brightness of the display screen is maxT.
  • the backlight metadata may include a preset brightness value T3.
  • the preset brightness value T3 may be the target brightness target_maxdisplay originally generated by the curve 0 mentioned in the first embodiment above.
  • the maximum display brightness of the display screen is maxT.
  • the actual brightness of the display screen can be higher than the backlight brightness.
• the maximum display brightness of the screen is 500nit and the backlight intensity level is 10%, so the backlight brightness is 50nit; however, the electronic device may have a strategy that adjusts the final brightness of the display screen so that the final brightness may be higher than 50 nit. Therefore, in this embodiment, a brightness value greater than the backlight brightness and less than or equal to the maximum display brightness (here, the maximum display brightness of the display screen) may be used, and the first image is generated by processing the image 1 to be displayed according to this brightness value.
  • the receiving end processes the pixels of the image 1 according to the maximum display brightness and the current backlight intensity level to generate a second image.
  • the pixel processing module can process the display pixels of image 1 according to the current backlight intensity information (such as the current backlight intensity level) and the maximum display brightness maxT to generate The second image (hereinafter may be represented by TMvaluemid2).
  • the second image does not need to be generated using the maximum display brightness of the display module.
• the processing method is not limited to scaling and may also include other methods, which is not limited in this application.
  • the maximum display brightness of the display screen is maxT
• L = maxT*Level, where Level is the current backlight intensity level of the display screen.
  • the backlight metadata may include a preset brightness value T3.
  • the preset brightness value T3 may be the target brightness target_maxdisplay originally generated by the curve 0 mentioned in the first embodiment above.
  • the display screen has a certain backlight
  • the maximum display brightness of the display screen is 500nit
  • the backlight intensity level is 10%
  • the backlight brightness is 50nit.
• image 1 is processed according to the target backlight brightness to generate the second image.
• when the scaling factor S is used to scale the image pixels of image 1 as a whole to obtain the above-mentioned first image, this can be realized by any one of the following three methods, and is not limited to the three methods listed here:
• Method 1:
• TMvaluemid1.R = scaling factor S * h(R value of the image to be displayed);
• TMvaluemid1.G = scaling factor S * h(G value of the image to be displayed);
• TMvaluemid1.B = scaling factor S * h(B value of the image to be displayed);
• Method 2:
• TMvaluemid1.R = scaling factor S * (h(R value of the image to be displayed) + gain adjustment coefficient 2);
• TMvaluemid1.G = scaling factor S * (h(G value of the image to be displayed) + gain adjustment coefficient 2);
• TMvaluemid1.B = scaling factor S * (h(B value of the image to be displayed) + gain adjustment coefficient 2);
• Method 3:
• TMvaluemid1.R = scaling factor S * h(R value of the image to be displayed) + gain adjustment coefficient 2;
• TMvaluemid1.G = scaling factor S * h(G value of the image to be displayed) + gain adjustment coefficient 2;
• TMvaluemid1.B = scaling factor S * h(B value of the image to be displayed) + gain adjustment coefficient 2;
  • the image to be displayed is the above-mentioned image 1, and TMvaluemid1 represents the above-mentioned first image;
  • TMvaluemid1.R represents the value of the R channel of the first image
  • TMvaluemid1.G represents the value of the G channel of the first image
  • TMvaluemid1.B represents the value of the B channel of the first image
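The three methods above differ only in where gain adjustment coefficient 2 enters; the following sketch applies them per channel, with h the (unspecified) conversion function applied to channel values, so its concrete choice is left to the caller and everything here is illustrative.

```python
def first_image(rgb, S, h, gain2, method=1):
    """Compute TMvaluemid1 from image 1 (array rgb), scaling factor S,
    per-channel conversion h, and gain adjustment coefficient 2 (gain2);
    the same rule is applied to the R, G and B channels."""
    if method == 1:
        return S * h(rgb)            # method 1: S * h(value)
    if method == 2:
        return S * (h(rgb) + gain2)  # method 2: S * (h(value) + gain2)
    return S * h(rgb) + gain2        # method 3: S * h(value) + gain2
```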
  • the pixel-level or down-sampled gain adjustment coefficients in the backlight metadata may include gain adjustment coefficient 1 and gain adjustment coefficient 2, that is, two sets of gain adjustment coefficients.
• each set of gain adjustment coefficients is the same size as the original image.
• the backlight metadata generation module can also generate gain adjustment coefficient 2 as follows: there is an image signal loss (for example, a difference that can be converted into a floating-point number) between the ambient light collected by the acquisition module and the electrical-signal image generated by the photoelectric conversion module, and the backlight metadata generation module can use this image signal loss as gain adjustment coefficient 2, where the pixel value of each pixel in gain adjustment coefficient 2 is the image signal lost by that pixel.
• for example, a 12-bit image may be converted into an 8-bit image through the acquisition module and the photoelectric conversion module; each pixel of the image then loses 4 bits of image data, and the 4 bits of data lost by each pixel can be used as the pixel value of the corresponding pixel in gain adjustment coefficient 2.
  • the first image and the second image are obtained through the gain adjustment coefficient 2 and the gain adjustment coefficient 1, which can improve the contrast of the image.
  • the receiving end processes the first image and the second image according to the gain adjustment coefficient of the image 1, and acquires image display information of the image 1.
  • This application does not limit the execution sequence of S304 and S305.
• the pixel processing module at the receiving end may, according to the pixel-level gain adjustment coefficients of image 1 (such as image 2 in Fig. 6b), process the first image and the second image corresponding to image 1 to obtain the image display information of image 1 on the display screen, that is, the tone mapping of each pixel of image 1.
• Tmvalue = W*TMvaluemid1 + (1-W)*TMvaluemid2, formula (16);
  • Tmvalue in formula (16) represents the tone mapping value of each pixel in Image 1;
  • TMvaluemid1 represents the first image
  • TMvaluemid2 represents the second image
  • W can be used to represent the gain adjustment coefficient of image 1, for example, it can be represented as image 2 generated in FIG. 6b.
• W may also represent a function of the gain adjustment coefficient 1 of image 1, such as its square.
• the gain adjustment coefficient 1 of image 1, or a function of gain adjustment coefficient 1, is used as the weight to perform weighted summation on the first image and the second image to obtain the tone mapping value of image 1 (see the sketch below).
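A sketch of formula (16); W is the per-pixel gain adjustment coefficient of image 1 (or a function of it, e.g. its square) and must be shaped to broadcast against the two images. The optional clamp against the first image follows the remark later in this section and uses an assumed implementation (element-wise minimum).

```python
import numpy as np

def blend_tone_mapping(tm1, tm2, W, clamp_with_first=False):
    """Formula (16): Tmvalue = W*TMvaluemid1 + (1-W)*TMvaluemid2.

    tm1: first image TMvaluemid1; tm2: second image TMvaluemid2;
    W: gain-adjustment-coefficient weights broadcastable against them.
    """
    tm = W * tm1 + (1.0 - W) * tm2
    if clamp_with_first:
        tm = np.minimum(tm, tm1)  # clamp Tmvalue using TMvaluemid1
    return tm
```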
• Tmvalue = W*TMvaluemid2, formula (17);
  • Tmvalue in the formula (17) represents the tone mapping value of each pixel in Image 1;
• TMvaluemid2 represents the second image
  • W may be a gain adjustment coefficient of image 1 (for example, it may be represented as image 2 generated in FIG. 6b);
• Tmvalue is the final image pixel information of each pixel of image 1 under the current target backlight brightness; the tone mapping value of each pixel in Tmvalue is the display value (or the final RGB value) of that pixel of image 1 on the display screen.
• the image 1 to be displayed can be processed according to the current target backlight brightness to obtain the second image; then, according to the respective gain adjustment coefficients of the highlighted and non-highlighted areas of image 1, the highlighted areas in the second image are brightened and the non-highlighted areas (such as dark areas) are darkened, obtaining the display brightness of each pixel of image 1 on the screen.
• if the gain adjustment coefficient of an area is 1.5, the display brightness of the area is increased by 1.5 times when displayed, which may cause the area to be displayed too brightly; optionally, Tmvalue can be clamped using the first image TMvaluemid1, so as to prevent certain areas from being too bright when image 1 is displayed on the display screen.
  • the gain adjustment coefficient in the backlight metadata is a downsampled gain adjustment coefficient
• the downsampled gain coefficients and the pixels of the image to be displayed may be in one-to-one correspondence, or in one-to-many correspondence such as 1 to 4 or 1 to 16.
• the corresponding relationship between the downsampled gain coefficients and the pixels of image 1 can be obtained; then, according to this corresponding relationship, the corresponding pixels of image 1 are processed using the corresponding downsampled gain coefficients.
• the specific processing method is similar in principle to the process of processing image 1 based on the pixel-level gain coefficients described in embodiment 2 above; refer to the introduction above, which will not be repeated here.
  • each pixel can be adjusted according to the respective gain adjustment coefficient of each pixel in the image.
• this method is easier to implement in software; tone mapping can reach the pixel level, brightness adjustment is more refined, flexibility is high, and adjustability is greater.
  • Embodiment 1 and Embodiment 2 can also be combined, that is, the backlight metadata can include local or global initial gain adjustment curves, local or global anchor points, and pixel-level or downsampled gain adjustment coefficients.
• for the parts where Embodiment 1 and Embodiment 2 are similar, reference can be made to each other.
• the receiving end of the embodiment of the present application can perform tone mapping on one part of the images, image areas, or frame sequences using the target tone mapping curve to generate tone mapping values and obtain the image display information, and for another part of the images or frame sequences, obtain the image display information using the gain adjustment coefficients corresponding to the image together with the current target backlight brightness and the maximum display brightness of the display screen.
In conventional dynamic-metadata display adaptation standards or algorithms, the display effect of images or videos on devices with an adjustable backlight is usually not considered: the video is mainly adapted to the maximum display capability of the screen (that is, the maximum display brightness), which is not suitable for displaying images or videos in scenarios where the backlight becomes weak, and the display effect is poor.

In contrast, when the backlight of the display screen is adjustable, the above system of the embodiments of the present application can adapt the brightness of the image or video to be displayed by combining the maximum display brightness of the display screen with the current target backlight brightness, and can automatically adjust the display brightness of the image on the display screen as the backlight intensity level is adjusted, which improves the display effect of the image or video when the backlight becomes weak.
FIG. 7 is a schematic structural diagram of a device provided by an embodiment of the present application. As shown in FIG. 7, the device 500 may include a processor 501, a transceiver 505, and, optionally, a memory 502.

The transceiver 505 may be called a transceiver unit, a transceiver, or a transceiver circuit, and is used to implement a transceiving function. The transceiver 505 may include a receiver and a transmitter; the receiver may be called a receiver or a receiving circuit and is used to implement a receiving function, and the transmitter may be called a transmitter or a sending circuit and is used to implement a sending function.

A computer program or software code or instructions 504 may be stored in the memory 502; such a computer program or software code or instructions may also be referred to as firmware. The processor 501 may implement the methods provided in the embodiments of the present application by running the computer program or software code or instructions 503 therein, or by calling the computer program or software code or instructions 504 stored in the memory 502.
The processor 501 may be a central processing unit (central processing unit, CPU), and the memory 502 may be, for example, a read-only memory (read-only memory, ROM) or a random access memory (random access memory, RAM).

The processor 501 and the transceiver 505 described in this application can be implemented in an integrated circuit (integrated circuit, IC), an analog IC, a radio frequency integrated circuit (RFIC), a mixed-signal IC, an application-specific integrated circuit (application specific integrated circuit, ASIC), a printed circuit board (printed circuit board, PCB), an electronic device, and so on.
The above-mentioned device 500 may further include an antenna 506; the modules included in the device 500 are only illustrative, and this application is not limited thereto.

As mentioned above, the structure of the device described in the above embodiments may not be limited by FIG. 7. The device may be a stand-alone device or may be part of a larger device. For example, the implementation form of the device can be:

(1) an independent integrated circuit (IC), a chip, or a chip system or subsystem; (2) a set of one or more ICs, where, optionally, the IC set may also include storage components for storing data and instructions; (3) a module that can be embedded in other equipment; (4) a vehicle-mounted device, and so on; (5) others.
For the case where the implementation form of the device is a chip or a chip system, refer to the schematic structural diagram of the chip shown in FIG. 8. The chip shown in FIG. 8 includes a processor 601 and an interface 602. The number of processors 601 may be one or more, and the number of interfaces 602 may be multiple. Optionally, the chip or chip system may include a memory 603.
Based on the same technical concept, an embodiment of the present application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program; the computer program includes at least one piece of code, and the at least one piece of code can be executed by a computer to control the computer to implement the above method embodiments.

Based on the same technical concept, an embodiment of the present application further provides a computer program, which is used to implement the foregoing method embodiments when the computer program is executed by a terminal device.

The program may be stored in whole or in part on a storage medium packaged with the processor, or stored in part or in whole in a memory not packaged with the processor.

Based on the same technical concept, an embodiment of the present application further provides a chip, including a network port controller and a processor. The network port controller and the processor can implement the foregoing method embodiments.
The steps of the methods or algorithms described in connection with the disclosure of the embodiments of the present application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions can be composed of corresponding software modules, and the software modules can be stored in a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read Only Memory, ROM), an erasable programmable read-only memory (Erasable Programmable ROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium can be located in an ASIC.
Those skilled in the art should be aware that, in one or more of the above examples, the functions described in the embodiments of the present application may be implemented by hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on, or transmitted over as one or more instructions or code on, a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.


Abstract

An embodiment of the present application provides an image processing method and an electronic device, relating to the technical field of image processing. The method includes: acquiring display information of an image to be displayed under the current target backlight brightness of a display screen, and display information of the image to be displayed under a brightness of the display screen higher than the current target backlight brightness; and acquiring the display information of the image to be displayed based on the two kinds of display information, which can improve the display effect of the image in scenarios where the backlight of the display screen is adjustable.


Claims (15)

  1. An image processing method, characterized by comprising:
    acquiring first display information of a target image to be displayed according to a current target backlight brightness of a display screen;
    acquiring second display information of the target image according to a maximum display brightness of the display screen and the current target backlight brightness;
    processing the first display information and the second display information according to preset information of the target image, to acquire third display information of the target image, wherein the preset information comprises information related to the brightness of the target image; and
    displaying the target image on the display screen according to the third display information.
  2. The method according to claim 1, wherein the acquiring first display information of the target image to be displayed according to the current target backlight brightness of the display screen comprises:
    acquiring a first tone mapping curve of the target image to be displayed; and
    processing the first tone mapping curve according to the current target backlight brightness of the display screen, to acquire a second tone mapping curve of the target image as the first display information.
  3. The method according to claim 2, wherein the acquiring second display information of the target image according to the maximum display brightness of the display screen and the current target backlight brightness comprises:
    determining a first brightness according to the maximum display brightness of the display screen and the current target backlight brightness, the first brightness being higher than the current target backlight brightness; and
    processing the first tone mapping curve according to the first brightness, to acquire a third tone mapping curve of the target image as the second display information.
  4. The method according to claim 3, wherein the preset information comprises a second brightness of the target image, and the processing the first display information and the second display information according to the preset information of the target image to acquire the third display information of the target image comprises:
    acquiring the second brightness of the target image;
    extracting, from the second tone mapping curve, a first sub-curve whose pixel brightness is less than or equal to the second brightness;
    acquiring a first tone mapping value corresponding to the second brightness in the second tone mapping curve;
    processing the third tone mapping curve based on the first tone mapping value to generate a fourth tone mapping curve, wherein the tone mapping value corresponding to the second brightness in the fourth tone mapping curve is the first tone mapping value;
    extracting, from the fourth tone mapping curve, a second sub-curve whose pixel brightness is greater than or equal to the second brightness; and
    connecting the first sub-curve and the second sub-curve based on the second brightness, to acquire a target tone mapping curve of the target image as the third display information.
  5. The method according to claim 4, wherein
    the acquiring a first tone mapping curve of the target image to be displayed comprises:
    acquiring respective first tone mapping curves of a plurality of local regions constituting the target image;
    the acquiring the second brightness of the target image comprises:
    acquiring respective second brightnesses of the plurality of local regions; and
    the connecting the first sub-curve and the second sub-curve based on the second brightness to acquire the target tone mapping curve of the target image as the third display information comprises:
    for the plurality of local regions of the target image, connecting the respective first sub-curve and second sub-curve of each local region based on the respective second brightness of each local region, to acquire respective target tone mapping curves of the plurality of local regions as the third display information of the target image.
  6. The method according to claim 4, wherein the acquiring the second brightness of the target image comprises:
    acquiring the second brightness of the target image according to a minimum pixel brightness and an average pixel brightness of the target image.
  7. The method according to claim 6, wherein the acquiring the second brightness of the target image according to the minimum pixel brightness and the average pixel brightness of the target image comprises:
    clustering image pixels of the target image according to pixel brightness and pixel position in the target image, to obtain a plurality of image regions;
    classifying the plurality of image regions in the target image according to pixel brightness, to obtain first-type image regions and second-type image regions, wherein the brightness of the first-type image regions is higher than the brightness of the second-type image regions; and
    determining the second brightness of the target image based on a minimum pixel brightness and an average pixel brightness within the first-type image regions.
  8. The method according to claim 1, wherein the acquiring first display information of the target image to be displayed according to the current target backlight brightness of the display screen comprises:
    processing the target image to be displayed according to the current target backlight brightness of the display screen, to acquire a first image as the first display information.
  9. The method according to claim 8, wherein the acquiring second display information of the target image according to the maximum display brightness of the display screen and the current target backlight brightness comprises:
    determining a third brightness according to the maximum display brightness of the display screen and the current target backlight brightness, the third brightness being higher than the current target backlight brightness; and
    processing the target image according to the third brightness, to acquire a second image as the second display information.
  10. The method according to claim 9, wherein the preset information comprises a gain coefficient of the target image, and the processing the first display information and the second display information according to the preset information of the target image to acquire the third display information of the target image comprises:
    acquiring the gain coefficient of the target image; and
    processing the first image and the second image based on the gain coefficient, to generate a third image as the third display information.
  11. The method according to claim 10, wherein the acquiring the gain coefficient of the target image comprises:
    clustering image pixels of the target image according to pixel brightness and pixel position in the target image, to obtain a plurality of image regions;
    classifying the plurality of image regions in the target image according to pixel brightness, to obtain first-type image regions and second-type image regions, wherein the brightness of the first-type image regions is higher than the brightness of the second-type image regions; and
    configuring different gain coefficients for the first-type image regions and the second-type image regions;
    wherein the gain coefficient of the first-type image regions is greater than the gain coefficient of the second-type image regions.
  12. The method according to claim 1, wherein the processing the first display information and the second display information according to the preset information of the target image to acquire the third display information of the target image comprises:
    acquiring the preset information of the target image from source information of the target image to be displayed; and
    processing the first display information and the second display information based on the preset information, to acquire the third display information of the target image.
  13. An electronic device, characterized by comprising: a memory and a processor, the memory being coupled to the processor, wherein the memory stores program instructions which, when executed by the processor, cause the electronic device to perform the image processing method according to any one of claims 1 to 12.
  14. A computer-readable storage medium, characterized by comprising a computer program which, when run on an electronic device, causes the electronic device to perform the image processing method according to any one of claims 1 to 12.
  15. A chip, characterized by comprising one or more interface circuits and one or more processors, wherein the interface circuit is configured to receive a signal from a memory of an electronic device and send the signal to the processor, the signal comprising computer instructions stored in the memory; and when the processor executes the computer instructions, the electronic device performs the image processing method according to any one of claims 1 to 12.
PCT/CN2022/124254 2022-01-12 2022-10-10 Image processing method and electronic device WO2023134235A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210033857.8A 2022-01-12 2022-01-12 Image processing method and electronic device CN116466899A (zh)
CN202210033857.8 2022-01-12

Publications (1)

Publication Number Publication Date
WO2023134235A1 true WO2023134235A1 (zh) 2023-07-20

Family

ID=87174055

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/124254 WO2023134235A1 (zh) 2022-01-12 2022-10-10 Image processing method and electronic device

Country Status (2)

Country Link
CN (1) CN116466899A (zh)
WO (1) WO2023134235A1 (zh)

Also Published As

Publication number Publication date
CN116466899A (zh) 2023-07-21

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22919869; Country of ref document: EP; Kind code of ref document: A1)