GB2567668A - Deflickering of a series of images - Google Patents

Deflickering of a series of images

Info

Publication number
GB2567668A
GB2567668A (application GB1717220.6A / GB201717220A)
Authority
GB
United Kingdom
Prior art keywords
frames
periodicity
pixel
frame
recent
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1717220.6A
Other versions
GB201717220D0 (en)
GB2567668B (en)
Inventor
Pawlik Bartek
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to GB1717220.6A priority Critical patent/GB2567668B/en
Publication of GB201717220D0 publication Critical patent/GB201717220D0/en
Publication of GB2567668A publication Critical patent/GB2567668A/en
Application granted granted Critical
Publication of GB2567668B publication Critical patent/GB2567668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/745Detection of flicker frequency or suppression of flicker wherein the flicker is caused by illumination, e.g. due to fluorescent tube illumination or pulsed LED illumination

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)
  • Picture Signal Circuits (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An output image signal comprising a series of image frames 101-104, each corresponding to a respective instance in time, is received. A block of pixels bi neighbouring a particular pixel 105 present in each of the image frames is identified. A fluctuation over time in pixel brightness values of the block of pixels is detected by analysing the pixel brightness values of the block of pixels in each image frame of the series. For each of plural sets of plural consecutive frames in the series, each set including a most recent frame and plural earlier frames, a periodicity of the fluctuation for the set of frames is determined. Based on the periodicity of each set of frames and a pixel brightness value of the particular pixel in the most recent frame of each set of frames, a weighted average of the pixel brightness values of the particular pixel over the most recent frames of the sets of frames is determined. The pixel brightness value of the particular pixel in the most recent frame of the most recent set of frames is then adjusted based on the weighted average.

Description

This specification relates to the field of image processing, particularly that of flicker detection and correction.
Background
Output image quality in image sequences is important for a desirable user experience.
Output image quality of image sequences is known to be adversely affected by flicker effects. Flicker effects are unnatural fluctuations in the brightness of an image from one frame to the next. Flicker reduction is limited due to its complexity and the varying levels of flicker which can be experienced over the spatial extent of an image. Flicker may arise as a result of, for instance, flickering light sources in the image.
Summary
In a first aspect, this specification describes a method comprising: receiving an output image signal comprising a series of image frames, each corresponding to a respective instance in time; identifying a block of pixels neighbouring a particular pixel present in each of the image frames; detecting a fluctuation over time in pixel brightness values of the block of pixels by analysing the pixel brightness values of the block of pixels in each image frame of the series; for each of plural sets of plural consecutive frames in the series, each set including a most recent frame and plural earlier frames, determining a periodicity of the fluctuation for the set of frames; based on the periodicity of each set of frames and a pixel brightness value of the particular pixel in the most recent frame of each set of frames, determining a weighted average of the pixel brightness values of the particular pixel over the most recent frames of the sets of frames; and adjusting the pixel brightness value of the particular pixel in a most recent frame of the most recent set of frames based on the weighted average.
The determining the periodicity of the fluctuation may further comprise: for each of plural candidate periodicities: for each of plural pairs of image frames selected based on the candidate periodicity, determining a similarity measure between the pixel brightness values of the block of pixels in each image frame of the pair; and determining an average of the similarity measures, wherein the periodicity of the fluctuation is determined to be the candidate periodicity for which the average of the similarity measures is the smallest.
Each of the plural pairs of image frames selected based on the candidate periodicity may comprise image frames spaced apart in the set by a number of image frames corresponding to a multiple of the candidate periodicity.
Each candidate periodicity may be denoted by an integer, wherein each frame may be numbered based on its position in the set and wherein, for each candidate periodicity, the plural pairs of image frames selected may include only the pairs of frames which have a first frame having a number below the integer denoting the candidate periodicity and a second frame which may be spaced apart from the first frame by a number of frames corresponding to a multiple of the candidate periodicity.
The sets of frames may partially overlap with one another.
The determining the weighted average may further comprise: for each set of frames in the series, determining a weight based on the determined periodicity for the set and applying the weight to the pixel brightness value of the pixel in the most recent frame of the set, and averaging the weighted pixel brightness values of the pixels to give an adjustment value for the pixel brightness value for the particular pixel in the most recent frame of the most recent set of frames.
The adjusting the pixel brightness value of the particular pixel in a most recent frame of the most recent set of frames may further comprise: adding the adjustment value to an unadjusted pixel brightness value for the particular pixel in the most recent frame of the most recent set of frames to give an adjusted pixel brightness value, and normalising the adjusted pixel brightness value.
The weight for each set may be determined using a measure of approximation error between the fluctuation in pixel brightness values in the set and the determined periodicity for that set.
The measure of approximation error may be the largest of the similarity measures contributing to the smallest average of the similarity measures on the basis of which the periodicity in that set was determined.
A lower value of the measure of approximation error may correspond to a closer approximation and a higher value of the measure of approximation error may correspond to a less close approximation.
A lower value of the measure of approximation error may result in a larger weight than does a higher value of the measure of approximation error.
A distribution of the determined weights may be based on a Gaussian distribution.
If the periodicity is determined to be one image frame for a given set, the weight determined for that set may be substantially equal to zero.
In a second aspect, this specification describes an apparatus configured to perform any method as described with reference to the first aspect.
In a third aspect, this specification describes computer-readable instructions which, when executed by computing apparatus, cause the computing apparatus to perform any method as described with reference to the first aspect.
In a fourth aspect, this specification describes a computer-readable medium having computer-readable code stored thereon, the computer readable code, when executed by at least one processor, causes performance of: receiving an output image signal comprising a series of image frames, each corresponding to a respective instance in time; identifying a block of pixels neighbouring a particular pixel present in each of the image frames;
detecting a fluctuation over time in pixel brightness values of the block of pixels by analysing the pixel brightness values of the block of pixels in each image frame of the series; for each of plural sets of plural consecutive frames in the series, each set including a most recent frame and plural earlier frames, determining a periodicity of the fluctuation for the set of frames; based on the periodicity of each set of frames and a pixel brightness value of the particular pixel in the most recent frame of each set of frames, determining a weighted average of the pixel brightness values of the particular pixel over the most recent frames of the sets of frames; and adjusting the pixel brightness value of the particular pixel in a most recent frame of the most recent set of frames based on the weighted average. The computer-readable code may further, when executed, cause performance of any of the operations described with reference to the method of the first aspect.
In a fifth aspect, this specification describes an apparatus comprising: at least one processor; and at least one memory including computer program code which, when executed by the at least one processor, causes the apparatus to: receive an output image signal comprising a series of image frames, each corresponding to a respective instance in time; identify a block of pixels neighbouring a particular pixel present in each of the image frames; detect a fluctuation over time in pixel brightness values of the block of pixels by analysing the pixel brightness values of the block of pixels in each image frame of the series; for each of plural sets of plural consecutive frames in the series, each set including a most recent frame and plural earlier frames, determine a periodicity of the fluctuation for the set of frames; based on the periodicity of each set of frames and a pixel brightness value of the particular pixel in the most recent frame of each set of frames, determine a weighted average of the pixel brightness values of the particular pixel over the most recent frames of the sets of frames; and adjust the pixel brightness value of the particular pixel in a most recent frame of the most recent set of frames based on the weighted average. The computer program code may further, when executed, cause performance of any of the operations described with reference to the method of the first aspect.
In a sixth aspect, this specification describes apparatus comprising: means for receiving an output image signal comprising a series of image frames, each corresponding to a respective instance in time; means for identifying a block of pixels neighbouring a particular pixel present in each of the image frames; means for detecting a fluctuation over time in pixel brightness values of the block of pixels by analysing the pixel brightness values of the block of pixels in each image frame of the series; means for determining, for each of plural sets of plural consecutive frames in the series, each set including a most recent frame and plural earlier frames, a periodicity of the fluctuation for the set of frames; means for determining, based on the periodicity of each set of frames and a pixel brightness value of the particular pixel in the most recent frame of each set of frames, a weighted average of the pixel brightness values of the particular pixel over the most recent frames of the sets of frames; and means for adjusting the pixel brightness value of the particular pixel in a most recent frame of the most recent set of frames based on the weighted average. The apparatus of the sixth aspect may further comprise means for causing performance of any of the operations described with reference to the method of the first aspect.
Brief Description of the Drawings
For better understanding of the present application, reference will now be made by way of example to the accompanying drawings in which:
Figures 1A and 1B show a series of image frames.
Figures 2A and 2B show fluctuations in a pixel brightness value of a pixel over a series of image frames.
Figures 3A, 3B and 3C are flow charts illustrating various operations which may be performed in order to detect and/or correct flicker.
Figure 4 is a schematic illustration of an example hardware configuration of an image processing apparatus for executing the methods of Figures 3A, 3B and 3C.
Figure 5 is an illustration of a computer-readable medium upon which computer readable code may be stored.
Detailed Description of Embodiments
In the description and drawings, like reference numerals refer to like elements throughout.
Flicker often occurs in image sequences when a flickering light source outputs light which varies with a low frequency. A flickering effect may also be worsened when a low frequency output of a light source is combined with a short exposure time of a video camera.
One method of combatting flicker involves adjusting an exposure time of a video camera during an image capture process. However, flicker can still arise despite this method due to a flickering light source. Additionally, adjusting the exposure time of the video camera can negatively impact the image quality, giving rise to undesirable effects such as motion blur. Also, these methods may only be applied at the image capture stage and do not address flicker which is present in pre-captured image sequences. This is because exposure time is fixed at capture time and thus cannot be changed during post-processing.
Methods of flicker reduction post-processing include global flicker reduction and local flicker reduction. Global flicker reduction applies the same rectification effects across the entire spatial extent of a frame in the image sequence. Local flicker reduction applies different rectification effects at various locations in the image frame.
Methods of global flicker reduction may, for instance, involve applying a contrast change to an entire image in each frame. However, a flickering intensity may differ across the spatial extent of the image.
Methods of local flicker reduction, on the other hand, may involve the use of algorithms based on heavy motion estimation and histogram equalisation. However, although these approaches have been shown to work in certain conditions, their complexity can make them unusable in real-time scenarios. In addition, histogram equalisation can produce severe defects in the image sequence, such as motion blur, ghosting and object deformation. Furthermore, when existing local flicker reduction techniques are applied to non-flickering image sequences, they can cause video quality degradation and severe artefacts of the same kind.
The local flicker reduction described herein is a two-stage flicker detection and correction technique, which ensures, via the flicker detection, that flicker reduction is not performed on parts of image sequences that aren’t flickering, while the flicker correction reduces the above-mentioned problems and artefacts, such as motion blur, ghosting and object deformation, which can result from existing local reduction techniques. Other benefits and advantages over existing methods will also be apparent from the description and drawings, or evident through practice of the described techniques.
Figure 1A shows a series of image frames 100 comprising individual image frames 101, 102, 103, 104. The series of image frames 100 can comprise any number of image frames and this is generalised in Figure 1A to be a series of M image frames. The series of image frames 100 can be included in an output image signal, for example. The series of image frames may be (or may be derived from) a sequence of video frames derived from a video capture device.
The most recent frame 101 of the series of image frames (also referred to as the “current frame”) may correspond to an image capture time t, as shown in Figure 1A. Image frames 102 and 103 are earlier, or “history”, image frames and correspond to an image capture time t-1 and a time t-2 respectively. Image frame 104 is the Mth image frame in the series of image frames and corresponds to a time t-M. Time t-M is prior to time t-2 and time t-2 is prior to time t-1, as indicated by the time arrow in Figure 1A.
The image frames 101, 102, 103 and 104 include a number of pixels, including a particular pixel (pi) 105. The particular pixel pi could be any of the pixels in the image frames. Indeed, the below-described technique may be performed in respect of a plurality (or, in some instances, all) of the pixels of the image frames. The position of the particular pixel is denoted by the index ‘i’. The particular pixel in the current image frame may be referred to as pi,t, with the same pixel in the first history image frame being referred to as pi,t-1.
The image frames 101, 102, 103 and 104 include a block of pixels bi selected based on the particular pixel pi 105 currently under consideration. The block of pixels bi includes a plurality of other pixels 106, including pixels 106a, 106b, 106c neighbouring the particular pixel 105, and the particular pixel 105 itself. As will be appreciated, each of the pixels in the image frames has a corresponding block of neighbouring pixels. In addition, the block of pixels bi corresponding to a particular pixel pi 105 is in the same position in all of the image frames. The particular pixel 105 may be at the centre of the block of pixels (not shown). The block shown in Figure 1A is provided merely as an example and it will be appreciated that the block could be of any size or shape, including any number of pixels neighbouring the particular pixel.
In order to perform flicker reduction on the series of frames, the series may be divided into plural, N, subsets of consecutive frames (as illustrated in Figure 1B). The most recent subset of frames (in this example Set 1) is the subset which includes the most recent frame. The subsets may overlap with one another such that each subset includes frames in common with an adjacent (in time) subset. In the example of Figure 1B, each subset of frames shares all but one frame in common with the adjacent subset. As such, set 2 includes all of the frames of set 1 except the most recent frame of set 1. In such examples, the number of subsets N corresponds with the number N of consecutive frames in each subset. In other examples, however, adjacent subsets may overlap by different numbers of frames.
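As a concrete illustration of the block and subset structure described above, the following Python sketch (not part of the patent; the function names, the representation of frames as greyscale numpy arrays ordered most recent first, and the default block radius are all assumptions) shows one way the block of pixels bi and the overlapping subsets might be extracted:

import numpy as np

def block_around(frame, y, x, radius=1):
    # Block of pixels neighbouring pixel (y, x), clipped at the image border.
    h, w = frame.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    return frame[y0:y1, x0:x1]

def overlapping_subsets(frames, subset_len):
    # Split a series of frames (most recent first) into overlapping subsets.
    # Subset k contains frames k .. k + subset_len - 1, so adjacent subsets
    # share all but one frame, as in Figure 1B.
    n_subsets = len(frames) - subset_len + 1
    return [frames[k:k + subset_len] for k in range(n_subsets)]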
In what follows, a particular pixel and a block of pixels shall be denoted with one index indicating the frame. For simplicity, an index indicating the position of the particular pixel and the block of pixels in the image shall be omitted.
Flicker Detection
The following describes a number of operations which might be performed in order to detect and quantify the flicker in a series of images. Firstly, the block of pixels bi corresponding to a particular pixel 105 is analysed to determine the pixel brightness values of the pixels in the block of pixels bi in each of the image frames. That is, the pixel brightness values of each of the pixels 105, 106a, 106b, 106c in the block of pixels may be determined in each frame.
Throughout this specification, an example will be described in which there are 10 consecutive image frames in each subset (i.e. in the following example, N=10). Using subsets of ten frames is appropriate in many implementations since it may facilitate the detection of flicker that results from many common flickering light sources, which may be expected to have a frequency of 50 Hz or 60 Hz, for example.
These frames are numbered from 0 to 9, in which the most recent image frame (the current image) is denoted by the integer 0 and the nine history image frames are denoted by the integers 1 to 9. The integer 1 is given to the most recent of the history image frames. In what follows, the integer frame numbers are indicated by an index, for instance F0 represents the zeroth frame, I4 represents a pixel brightness value in the fourth frame and b7 represents a block of pixels in the seventh frame. Although the example is described using subsets of ten consecutive frames, it will be appreciated that subsets of different lengths may be used. Indeed, as will become apparent from the below discussion, the number of frames in the subsets may be selected on the basis of a highest expected periodicity.
A number of different possible pixel brightness values for a particular pixel in each image frame of a particular subset of image frames is shown in Figures 2A and 2B. A fluctuation in the pixel brightness value of the particular pixel over time I(t) is denoted by the curves 201, 202, 203, 204 and 205 and the points (denoted by an X) on the curves indicate the pixel brightness values of the pixel in each respective image frame of the subset of image frames. The frames are denoted by the letter ‘F’ with an index which corresponds to the position of the frame in the subset.
The fluctuation in the pixel brightness value over time I(t) of the pixel may be relatively constant in each of the image frames. That is, the pixel brightness values of the pixel may be the same or substantially the same in all of the image frames of the subset. This is shown by the curve 201 in Figure 2A, which has a periodicity of one image frame (P=1) or substantially one image frame. Where the periodicity of the fluctuation of the pixel brightness value is one (or substantially equal to one), it may be said that the pixel does not include any flicker. Although this is described here for a particular pixel, it will be appreciated that the pixel brightness values of a block of pixels may also be relatively constant, indicating that the block does not include any flicker.
Alternatively, in some circumstances the pixel brightness value of the pixel may not be a constant value. A number of different possible fluctuations in the pixel brightness value over time I(t) of the pixel are shown in Figures 2A and 2B by the curves 202, 203, 204 and 205. The curves 202, 203, 204 and 205 denote fluctuations in the pixel brightness value of the pixel which have a periodicity of, or of substantially, two, three, four and five image frames respectively. Put another way, the pixel denoted by curves 202, 203, 204 and 205 may be said to exhibit flicker with a period of 2, 3, 4 and 5 respectively. However, as will be appreciated, the periodicity of the fluctuation in the pixel brightness value of each pixel is not limited to these periodicities.
Next, the periodicity of the fluctuation of the pixel brightness values of the block of pixels is determined for each subset based on the pixel brightness values of the block in each of the frames in the subset. More specifically, the periodicity of the fluctuation is determined by analysing the similarity in the pixel brightness values of the pixels in the block of pixels between image frames of the subset.
In general, the similarity between the pixel brightness values of a block of pixels in one frame and the pixel brightness values of that block of pixels in another frame may be determined as follows.
Firstly, for each pixel in the block of pixels, the pixel brightness value of that pixel in a particular frame may be compared to the pixel brightness value of that pixel in another frame and the absolute difference between these pixel brightness values may be determined. This may be repeated to determine the differences between the pixel brightness values of each of the pixels in the block of pixels in the two frames. Once all the differences of the pixel brightness values for a block in a pair of frames have been determined, the differences may be summed to give a similarity measure between the pixel brightness values of the block of pixels in each frame of the pair. That is, a similarity measure between the pixel brightness values of the block of pixels in each frame of the pair may be determined by calculating the Sum of Absolute Difference (SAD) (also known as the L1 norm) between the pixel brightness values in the block in each frame of the pair.
Alternatively, a similarity measure between the pixel brightness values in the block of pixels in each frame of the pair may be determined by calculating the L2 norm. That is, the differences between the pixel brightness values of the block in each frame may be squared and then summed, known as a Sum of Square Difference (SSD). Calculating the square root of the sum may provide a similarity measure between the pixel brightness values in the block of pixels in a pair of frames and is known as the L2 norm.
Although the similarity measure may be determined by calculating the L1 or L2 norm as above, these are merely provided as examples. It will be appreciated that an L norm of any order may be calculated as the similarity measure. That is, any Lp norm, where p ≥ 1, may be calculated as the similarity measure.
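As an illustration only, the similarity measure described above might be sketched in Python as follows, with p=1 giving the Sum of Absolute Difference (L1 norm) and p=2 the L2 norm; the function name and the use of numpy are assumptions, not part of the patent:

import numpy as np

def block_similarity(block_a, block_b, p=1):
    # Lp-norm similarity measure between two co-located blocks of brightness
    # values. A value near zero indicates that the blocks are very similar.
    diff = np.abs(block_a.astype(float) - block_b.astype(float))
    return float(np.sum(diff ** p) ** (1.0 / p))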
Based on the similarities in the pixel brightness values of the block of pixels between the image frames, it is determined which of plural candidate ideal periodicities is a best fit for the analysed similarities in pixel brightness values. As used herein, an “ideal periodicity” is one for which the periodicity is an integer number of image frames.
To determine the best-fit candidate periodicity, for each of the one or more candidate periodicities, one or more pairs of image frames are selected from the subset of image frames based on the candidate periodicity. More specifically, the pairs of frames are selected for each candidate periodicity such that the image frames are spaced apart in the subset by a number of image frames corresponding to a multiple of the candidate periodicity. As such, for a candidate periodicity of two, the pairs that may be selected may include (but are not limited to) the zeroth (i.e. most recent) and second frames in the subset, the first and third frames in the subset, and the zeroth and fourth frames in the subset. Similarly, for a candidate periodicity of three, the selected pairs may include the zeroth and third frames in the subset, the first and fourth frames in the subset, and the second and fifth frames in the subset.
The reason for selecting the pairs of image frames in this way is that, if the periodicity of the flicker in the subset of images is ideal, it would be expected that the pixel brightness values of frames that are spaced apart by that periodicity (or a multiple of that periodicity) would be the same.
For each candidate periodicity, the plural pairs of image frames selected may include only those pairs of frames which include a first frame having a number below the integer denoting the candidate periodicity and a second frame which is spaced apart from the first frame by a number of frames corresponding to a multiple of the candidate periodicity. For example, each of the pairs selected on the basis of a candidate periodicity of two would only include either of the zeroth and first frames of the subset as the most recent frame in the pair. As such, a pair consisting of the second and fourth frames in the subset may not be selected, since the frame number of the second frame in the subset (i.e. two) is not less than the candidate periodicity. In examples in which the pairs of image frames are selected on this basis, every pair of image frames that satisfies these requirements may be selected.
In other examples, the plural pairs of image frames selected may include all pairs of image frames in which the frames are spaced apart from each other by a number of frames corresponding to a multiple of the candidate periodicity. In other words, there may be no restriction on the number associated with the most recent frame of the pair.
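A sketch of the pair-selection rule might look as follows (illustrative names only; restrict_first switches between the restricted selection, in which the more recent frame number must be below the candidate periodicity, and the unrestricted variant mentioned above). For example, select_pairs(10, 3) yields the seven pairs used for candidate periodicity three in the worked example below.

def select_pairs(n_frames, period, restrict_first=True):
    # Frame pairs (earlier index, later index) for one candidate periodicity.
    # Frames are numbered 0 (most recent) to n_frames - 1; the frames of each
    # pair are spaced apart by a multiple of the candidate periodicity.
    pairs = []
    for j in range(n_frames):
        if restrict_first and j >= period:
            break
        later = j + period
        while later < n_frames:
            pairs.append((j, later))
            later += period
    return pairs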
As a next stage, for each of the pairs of image frames selected based on the candidate periodicity, the similarity between the pixel brightness values of the block of pixels in each image frame of the pair may be determined. In what follows, the similarity between the pixel brightness values of the block in each of the frames is represented by the Sum of Absolute Difference (SAD) as described above. It will be apparent, however, that any Lp norm, where p ≥ 1, can be used.
For instance, taking a candidate periodicity of one, the absolute difference between the pixel brightness value of a pixel in the block of pixels in the zeroth frame in the subset and the pixel brightness value of that pixel in the image frame that is spaced apart from the zeroth image frame by one image frame is determined. This difference is denoted as d(I0, I1). For illustrative purposes, the difference d(I0, I1) between the zeroth and first image frames in the subset is marked on the curves 202, 203, 204 and 205 in Figures 2A and 2B. As will be appreciated from Figures 2A and 2B, the difference in pixel brightness values of a pixel between two frames varies depending on the fluctuation of the brightness. As such, the difference d(I0, I1) is different for each of the curves 202, 203, 204 and 205.
The absolute differences between the pixel brightness values of each of the other pixels in the block of pixels in the zeroth frame and in the first frame of the subset may then be determined. The absolute differences (determined for all pixels in the block) may then be summed to provide a similarity measure between the block of pixels in the zeroth frame b0 and the block of pixels in the first frame b1. The similarity measure between the block in the zeroth frame and the block of pixels in the first frame may be denoted d(b0, b1).
For each candidate periodicity, pairs of frames may be selected and the similarity measure may be determined for each pair of frames. That is, the above determination of the similarity measure may be repeated for each of the selected pairs of frames for each candidate periodicity.
An example will now be described in which the periodicity of flicker present in a subset of image frames is determined. In this example, five candidate periodicities are used. More specifically, periodicities between one and five image frames are used. As the highest candidate periodicity is five image frames, a subset of ten image frames is used.
For the first candidate periodicity (in this example, a periodicity of one image frame), the following pairs of image frames may be selected: (F0, F1), (F0, F2), (F0, F3), (F0, F4), (F0, F5), (F0, F6), (F0, F7), (F0, F8), and (F0, F9).
The similarity measures of the pixel brightness values of the block of pixels in each of the pairs of image frames are determined and these are denoted as d(b0, b1), d(b0, b2), d(b0, b3), d(b0, b4), d(b0, b5), d(b0, b6), d(b0, b7), d(b0, b8) and d(b0, b9).
It will be apparent that the similarity measure d(b0, b3), the similarity measure d(b0, b6), and the similarity measure d(b0, b9) are all substantially equal to zero. However, this is not the case with the other similarity measures (i.e. d(b0, b1), d(b0, b2), d(b0, b4), d(b0, b5), d(b0, b7), d(b0, b8)), which are non-zero. It should be noted that a similarity measure which is substantially equal to zero may indicate a high similarity between the pixel brightness values of the block of pixels.
The mean average of these similarity measures (i.e. the similarity measures between the block of pixels in the frames in each pair) is then determined. This may be referred to as the mean average similarity measure for candidate periodicity 1 (d1). In this example, in order to obtain d1, the similarity measures determined for candidate periodicity 1 are summed and the resultant total is divided by the number of similarity measures included in the sum (in this case, nine).
Next, the pairs of image frames are selected for a second candidate periodicity, which in this example is a periodicity of two image frames. In this example, the selected pairs of frames may be as follows: (F0, F2), (F1, F3), (F0, F4), (F1, F5), (F0, F6), (F1, F7), (F0, F8) and (F1, F9). Note that the integer associated with the most recent frame in each of the pairs is less than the candidate periodicity (in this case 2), and that the frames in each pair are spaced apart by a multiple of the candidate periodicity.
For illustrative purposes, the difference d(I0, I2) between the pixel brightness value of a pixel in a block of pixels in the most recent image frame (F0) in the subset and the pixel brightness value of that pixel in the image frame spaced apart from the most recent image frame by two image frames (F2) is shown in Figure 2A on the curve 203 and in Figure 2B on the curves 204 and 205.
The absolute differences between the pixel brightness values of each of the pixels in the block of pixels in the zeroth frame and the block of pixels in the second frame may then be determined and the absolute differences between the pixel brightness values may be summed to give a similarity measure between the block in the pair of frames. The determination of a similarity measure may then be repeated for each of the selected pairs of frames.
The similarity measures between the pixel brightness values of the block of pixels in the frames in the selected pairs of image frames may be denoted as follows: d(b0, b2), d(b1, b3), d(b0, b4), d(b1, b5), d(b0, b6), d(b1, b7), d(b0, b8) and d(b1, b9).
The mean average of these similarity measures is then determined and shall be referred to as the mean average similarity measure for candidate periodicity 2 (d2). In this example, in order to obtain d2, the similarity measures determined for candidate periodicity 2 are summed and the resultant total is divided by eight due to the eight similarity measures included in the sum.
Next, the pairs of image frames for a third candidate periodicity are selected. In this example, the third candidate periodicity is three image frames. In this example, the selected pairs of frames may be as follows: (F0, F3), (F1, F4), (F2, F5), (F0, F6), (F1, F7), (F2, F8) and (F0, F9). The similarity measures between the pixel brightness values of the block of pixels in the selected pairs of frames may thus be expressed as follows: d(b0, b3), d(b1, b4), d(b2, b5), d(b0, b6), d(b1, b7), d(b2, b8), d(b0, b9). Again, note that the integer associated with the most recent frame in each of the pairs is less than the candidate periodicity (in this case 3), and that the frames in each pair are spaced apart by a multiple of the candidate periodicity.
The mean average of these similarity measures is then determined and may be referred to as the mean average similarity measure for candidate periodicity 3 (d3). In this example, in order to obtain d3, the similarity measures determined for candidate periodicity 3 are summed and the resultant total is divided by seven due to the seven similarity measures included in the sum.
This process is repeated for candidate periodicities in which the periodicity is four image frames and five image frames.
In summary, for the example described above with ten frames in the subset and five candidate periodicities, the average of the similarity measures for each candidate periodicity may be expressed as follows:
d1 = (d(b0, b1) + d(b0, b2) + d(b0, b3) + ... + d(b0, b9)) / 9
d2 = (d(b0, b2) + d(b1, b3) + ... + d(b0, b8) + d(b1, b9)) / 8
d3 = (d(b0, b3) + d(b1, b4) + d(b2, b5) + d(b0, b6) + d(b1, b7) + d(b2, b8) + d(b0, b9)) / 7
d4 = (d(b0, b4) + d(b1, b5) + d(b2, b6) + d(b3, b7) + d(b0, b8) + d(b1, b9)) / 6
d5 = (d(b0, b5) + d(b1, b6) + d(b2, b7) + d(b3, b8) + d(b4, b9)) / 5
A formula which may be used for calculating the mean average similarity measure for any number of image frames and any candidate periodicity is provided below.
For all integers j, k with 0 ≤ j < n, k ≥ 1 and j + k·n ≤ N - 1:

dn = ( Σ d(bj, bj+k·n) ) / Cn

where the sum runs over all such index pairs (j, k) and Cn is the number of terms in the sum. In the above equation, the number of frames is denoted ‘N’, the candidate periodicity is denoted ‘n’, and the indices ‘j’ and ‘k’ are provided as counters. ‘dn’ denotes the average similarity measure for the candidate periodicity ‘n’.
Once the average similarity measures for each of the candidate periodicities have been determined, the periodicity of the subset of images is determined. More specifically, the periodicity for the subset is determined to be the candidate periodicity for which the average similarity measure is the smallest. That is, the periodicity of the fluctuation in the pixel brightness values for the subset may be determined to be the candidate periodicity for which the average similarity measure dn has its smallest value. This may be represented as follows:
dmin = min({dn})
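Combining the pair selection and the SAD similarity measure, the per-subset periodicity detection might be sketched as follows (a sketch only, assuming greyscale numpy blocks; all function names are illustrative):

import numpy as np

def sad(block_a, block_b):
    # Sum of Absolute Difference (L1 norm) between two co-located blocks.
    return float(np.sum(np.abs(block_a.astype(float) - block_b.astype(float))))

def mean_similarity(blocks, period):
    # Mean average similarity measure dn for one candidate periodicity.
    # blocks holds the co-located blocks b0 .. bN-1, most recent first; only
    # pairs whose more recent frame number is below the periodicity are used.
    n_frames = len(blocks)
    measures = [sad(blocks[j], blocks[j + k])
                for j in range(min(period, n_frames))
                for k in range(period, n_frames - j, period)]
    return sum(measures) / len(measures)

def detect_periodicity(blocks, candidate_periods=(1, 2, 3, 4, 5)):
    # The periodicity of the fluctuation is the candidate with the smallest dn.
    averages = {n: mean_similarity(blocks, n) for n in candidate_periods}
    return min(averages, key=averages.get), averages

For the ten-frame example above, mean_similarity reproduces d1 to d5 exactly as written out, and detect_periodicity returns the candidate corresponding to dmin together with all of the averages.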
As will be appreciated, if it is determined that the periodicity is 1 image frame, it may be determined that there is no flicker in that particular pixel.
The determination of the periodicity as described hereinabove is repeated for each of the subsets in the series to provide a periodicity corresponding to each subset.
Once the periodicities for the subsets have been determined, these may be used to determine an adjusted pixel brightness value for the particular pixel in the most recent image frame of the most recent subset of image frames. This is described below.
Flicker Correction
As will be explained below, the periodicity of each subset may be used to determine a weight which corresponds to the most recent image frame in the subset. For instance, referring to Figure 1B, to determine a weight corresponding to the most recent frame in
the series, the periodicity of set 1 is used, whereas to determine the weight corresponding to the next most recent frame in the series, the periodicity of set 2 is used, and so on. Put another way, in order to determine the weight corresponding to a particular frame in the series, the subset whose periodicity is determined comprises that frame and N-1 prior frames, where N is the length of the subset.
More specifically, having determined a periodicity corresponding to each subset, the similarity measures determined for that periodicity are analysed, and the largest of the similarity measures is selected. This similarity measure may be referred to as dselected. The determination of dselected may (for 5 candidate periodicities and a subset of ten frames) be represented as follows:
dselected =
    1                                                            if dmin == d1
    max(d(b0, b2), d(b1, b3))                                    if dmin == d2
    max(d(b0, b3), d(b1, b4), d(b2, b5))                         if dmin == d3
    max(d(b0, b4), d(b1, b5), d(b2, b6), d(b3, b7))              if dmin == d4
    max(d(b0, b5), d(b1, b6), d(b2, b7), d(b3, b8), d(b4, b9))   if dmin == d5
If the periodicity for the subset is determined to be 1, it is determined that the particular pixel does not exhibit any flicker over the image frames of the subset. Accordingly, in this case, the largest of the similarity measures may not be used. Instead, when the periodicity for the subset is determined to be 1, the value of dselected may be set equal to 1. That is, if the candidate periodicity for which the average similarity measure is the smallest is the candidate periodicity 1, the value of dselected is set equal to 1.
The values of dselected may, in some implementations, range from 0 to 1. In some implementations the pixel brightness values may be normalized to be between 0 and 1 and this may lead to a value of dselected in the range from 0 to 1.
As will be appreciated, the value of dselected is a measure of approximation error between fluctuations of the brightness over the subset and the selected one of the candidate periodicities. As such, in addition to performing detection of flicker in each pixel, the techniques described herein also provide an estimation of the periodicity of the flicker and a measure of approximation error of the actual fluctuation to the estimated periodicity.
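A sketch of how dselected might be obtained for a subset, following the piecewise definition above (the maximum is taken over the pairs spaced exactly one period apart, and the value is set to 1 when the periodicity is one frame); similarity stands for a block similarity function such as the sad sketch above:

def selected_error(blocks, period, similarity):
    # Measure of approximation error dselected for a subset whose determined
    # periodicity is `period`. A periodicity of one frame means no flicker,
    # so dselected is set equal to 1.
    if period == 1:
        return 1.0
    return max(similarity(blocks[j], blocks[j + period]) for j in range(period))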
Next, based on the determined periodicity corresponding to each of the subsets of image frames in the series, a weight for the particular pixel in the most recent image frame in that subset may be determined. More specifically, the weight for the particular pixel in the most recent frame of each subset may be determined based on the selected similarity measure, dselected, corresponding to that subset. The values of the weights may, in some implementations, range between 0 and 1.
A distribution of the weights may be based on a Gaussian distribution. For example, the weight for each frame m may be determined using the equation below:
wm = exp(-dselected² / (2·σ²))
In the above equation, σ may denote a filter strength. The value of the filter strength controls the decay of the exponential function in the weight formula above, and therefore the decay of the weights as a function of the similarity measures dselected. The value of the filter strength may be greater than zero. Furthermore, the value of sigma may depend on the amplitude of the flicker, or alternatively may be manually controllable by a user. In some implementations, a suitable value of sigma may be substantially close to 0.01, for example.
Although the weight distribution may, in some examples, be based on a Gaussian distribution, any distribution which provides a continuous transition between 0 and 1 may instead be used.
As will be apparent from the formula above, a larger value of the measure of approximation error, dselected, gives a weight which is smaller than the weight given by a smaller value of the measure of approximation error, dselected. Therefore, a periodicity which is closer to a candidate periodicity results in a weight which is higher than a periodicity which is further from a candidate periodicity.
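The Gaussian weighting might be sketched as follows; the default filter strength of 0.01 is the illustrative value mentioned above and would in practice be tuned or exposed to the user:

import math

def gaussian_weight(d_selected, sigma=0.01):
    # Weight for the most recent frame of a subset. A small approximation
    # error gives a weight close to 1; a large error (or dselected == 1 for a
    # periodicity of one frame) gives a weight close to 0.
    return math.exp(-(d_selected ** 2) / (2.0 * sigma ** 2))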
The weight for the most recent image frame in each of the subsets may then be applied to the pixel brightness value of the particular pixel in that frame. The weighted pixel brightness values of the pixels are averaged over the N most recent frames in the series (which are also the frames in the most recent subset) to give a weighted average.
The weighted average may be added to the pixel brightness value for the pixel in the most recent frame of the series and the resulting sum may be normalised to give an adjusted pixel brightness value in the most recent frame of the series. The normalisation value may, in some examples, include a sum of all of the weights.
The adjusted pixel brightness value p'i for the particular pixel in the most recent one of the image frames in the series may be expressed as follows (where pm,i is the original pixel brightness value of the pixel in a frame m, wm is the weight determined for that pixel in frame m, and p0 is the unadjusted pixel brightness value of the particular pixel in the most recent one of the image frames in the series):
p'i = (p0 + Σm=1..N wm · pm,i) / (1 + Σm=1..N wm)
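A direct transcription of this formula might be as follows (p_current stands for p0, while history_values and weights hold the pm,i and wm terms of the sum; the exact mapping of the summation index to frames is an interpretation of the description above):

def adjusted_pixel_value(p_current, history_values, weights):
    # Adjusted brightness p'i for the particular pixel in the most recent
    # frame: the unadjusted value plus the weighted sum, normalised by 1 plus
    # the sum of the weights.
    weighted_sum = sum(w * p for w, p in zip(weights, history_values))
    return (p_current + weighted_sum) / (1.0 + sum(weights))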
A weighted average of the pixel brightness values with the weight applied may be beneficial when compared to a simple average because it may allow the system to better handle periodicities which change over time. In addition, using the weighted average may allow the system to better deal with transitions between flicker and non-flicker of a particular light source within the image frames (e.g. flickering light being switched off).
As described above, if the periodicity is determined to be a periodicity of one image frame, the value of dselected is set equal to 1. This may result in a weight which is equal to zero. That is, if the fluctuation is determined to have a periodicity of 1, it can be said that there is no flicker and no adjustment is made to the pixel brightness value.
The method described above may be carried out for all of the pixels in each image frame, thereby providing a deflickering effect on the series of image frames.
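For completeness, the sketches above might be combined for a single pixel as follows. This reuses block_around, sad, detect_periodicity, selected_error, gaussian_weight and adjusted_pixel_value from the earlier sketches (all illustrative names), assumes greyscale numpy frames ordered most recent first with at least 2·N-1 frames available, and treats the mapping of weights to frames (one subset, and hence one weight, per frame of the most recent subset) as an interpretation of the description above:

def deflicker_pixel(frames, y, x, subset_len=10, candidates=(1, 2, 3, 4, 5)):
    # Adjusted brightness of pixel (y, x) in the most recent frame.
    blocks = [block_around(f, y, x) for f in frames]
    weights, values = [], []
    for m in range(subset_len):
        # Subset m starts at frame m; its most recent frame is frame m.
        subset_blocks = blocks[m:m + subset_len]
        period, _ = detect_periodicity(subset_blocks, candidates)
        d_sel = selected_error(subset_blocks, period, sad)
        weights.append(gaussian_weight(d_sel))
        values.append(float(frames[m][y, x]))
    return adjusted_pixel_value(float(frames[0][y, x]), values, weights)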
The functionality described above with reference to Figures 1A, 1B, 2A and 2B will now be described with respect to the flow charts of Figures 3A, 3B and 3C.
In operation S300, the series of image frames is received by the image processing apparatus (described below with reference to Figure 4). The method of Figure 3A may be performed post-capture, in which case the series of image frames is captured and stored and, subsequently, the series of image frames is retrieved from storage and provided (in operation S300) for processing. In such examples, an image processing apparatus 600, for instance as illustrated in Figure 4, may be located remotely from an image capture device 700 capable of capturing a series of image frames. Alternatively, the series of image frames may be received from the image capture device 700 in substantially real-time. In such examples, the image processing apparatus 600 may be communicatively coupled with the image capture device 700. For instance, the functionality of the image processing apparatus 600 may be performed by the image capture device 700. For example, the image processing apparatus 600 may store the sequence of image frames in a buffer and the image frames in the buffer may be updated in real time.
In operation S302, a block of pixels neighbouring a particular pixel in the series of image frames is identified. The pixels neighbouring a particular pixel may include pixels which are adjacent to the particular pixel in the image frames.
In operation S304, a fluctuation over time in pixel brightness values of the pixels in the block of pixels may be detected. The fluctuation over time in pixel brightness values of the block of pixels may be detected by analysing the pixel brightness values of the block of pixels in each of the image frames of the series of image frames.
In operation S306, for each of plural subsets of consecutive frames of the series, a periodicity of the fluctuation in the pixel brightness values of the block of pixels over the subset of image frames may be determined. The periodicity of the fluctuation may be determined as described above with reference to Figures 1A, 1B, 2A and 2B, and below with reference to Figure 3B.
Once the periodicity has been determined for each of the subsets, a weighted average may be determined in operation S308 based on the periodicities for each subset and a pixel brightness value of the particular pixel in the most recent frame of each subset. The weighted average may be determined as described above and below with reference to Figure 3C.
Finally, in operation S310, the pixel brightness value of the pixel in the most recent frame of the most recent subset may be adjusted based on the value of the weighted average determined in operation S308.
As discussed above, the operations of Figure 3A may be performed for each of the pixels in a particular image frame. In addition, the operations may be performed for each of a sequence of image frames.
Figure 3B is a flow chart illustrating operations which may be used to determine the periodicity of the fluctuation for each subset of frames.
A number of candidate periodicities may be selected. In some examples, these may be selected based on expected frequencies of sources of flicker and a frame rate. The number of frames in the subset may depend on the candidate periodicities selected. Specifically, the number of frames may be at least twice the highest candidate periodicity.
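As a small restatement of the relationship just described between the candidate periodicities and the subset length (illustrative function name only):

def min_subset_length(candidate_periods):
    # The description above requires at least twice the highest candidate
    # periodicity.
    return 2 * max(candidate_periods)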
In operation S306a, for each candidate periodicity, an average similarity measure between pixel brightness values in image frames of selected pairs of image frames in the subset may be determined. The pairs of image frames for each candidate periodicity may be selected as described above with reference to Figures 2A and 2B. The average similarity measure for each candidate periodicity may be determined as described hereinabove.
In operation S306b, the candidate periodicity with the smallest average similarity measure may be determined to be the periodicity of the fluctuation for the subset. The average similarity measures may provide an indication of the similarity between the candidate periodicity and the actual fluctuation, and therefore, the smallest similarity measure may indicate the candidate periodicity which is most similar to the actual fluctuation.
Figure 3C is a flow chart illustrating various operations which may be used to determine the weighted average of a brightness of a particular pixel over the N most recent frames in the series of frames and adjust the pixel brightness value of the pixel in the most recent frame of the series.
In operation S308a, a measure (dselected) of the approximation error between the actual brightness fluctuation over the subset and the determined periodicity may be obtained. This measure may be obtained by selecting the largest of the similarity measures determined for the determined periodicity (i.e. the candidate periodicity having the smallest average difference, as determined in operation S306b).
In operation S308b, for each subset of image frames in the series, a weight may be determined for the pixel in the most recent image frame of the subset. The weight may be determined based on the measure of approximation error between the actual brightness fluctuation over the subset and the determined periodicity (as determined in operation S306). In other words, the weight may be determined using the similarity measure identified in operation S308a.
A larger value of the similarity measure identified in operation S308a may lead to a determination of a smaller weight in S308b than does a smaller value of the similarity measure identified in operation S308a. Similarly, a smaller value of the similarity measure identified in operation S308a may lead to a determination of a larger weight in S308b than does a larger value of the similarity measure identified in operation S308a.
The distribution of the weights may be based on a Gaussian distribution.
In operation S308c, the weight for each image frame may be applied to the pixel brightness value of the pixel in that image frame. The application of the weight for each image frame to the pixel brightness value of the pixel in that image frame results in a weighted pixel brightness value of the pixel in that image frame, and may involve multiplying the weight and the pixel brightness value of the pixel in that image frame.
In operation S308d, the weighted pixel brightness values of the pixels may be averaged over the N most recent frames in the series of frames to give an adjustment for the pixel brightness value of the particular pixel in the most recent frame of the series.
In operation S308e, the adjustment may be added to the unadjusted pixel brightness value for the particular pixel in the most recent frame of the series and the resulting sum may be suitably normalised to give an adjusted pixel brightness value for the particular pixel in the most recent frame of the series. The adjusted pixel brightness value may then be applied to the particular pixel in the most recent image frame of the series of image frames. For example, referring again to Figure 1A, the adjusted pixel brightness value may be applied to the particular pixel 105 in the most recent image frame 101.
Figure 4 is a schematic illustration of an example hardware configuration with which an image processing apparatus described with reference to Figures 3A, 3B and 3C may be implemented.
The image processing apparatus comprises processing apparatus 40. The processing apparatus 40 is configured to receive the series of image frames and to perform the method as described with reference to Figures 3A, 3B and 3C.
The series of images may be received at the processing apparatus 40 via an input interface 43. In the example in Figure 4, the series of image frames may be received at the processing apparatus 40 via wired communication (e.g. via the input interface 43) or wireless communication (via transceiver 44 and antenna 45) from the image capture device 700 or from a storage medium. In some other examples, the series of image frames may be pre-stored in the memory 41 which forms part of the processing apparatus 40.
After the brightness has been adjusted, the processing apparatus 40 may provide the adjusted brightness output image data via an output interface 46. The adjusted brightness output image data may be provided for display via a display device 800 or to a storage device for storage and later retrieval. In some instances, the adjusted brightness output image data may be transmitted wirelessly via the transceiver 44 and antenna 45 to a display device 800 or a storage device as appropriate. Additionally or alternatively, the adjusted brightness output image data may be stored in local storage 41 at the processing apparatus 40 for later retrieval.
The processing apparatus 40 may comprise processing circuitry 42 and memory 41. Computer-readable code 412A may be stored on the memory 41, which when executed by the processing circuitry 42, causes the processing apparatus 40 to perform any of the operations described herein. Example configurations of the memory 41 and processing circuitry 42 will be discussed in more detail below.
In implementations in which the image processing apparatus 600 is a device designed for human interaction, the user may control the operation of the image processing apparatus 600 by means of a suitable user input interface UII (not shown) such as key pad, voice commands, touch sensitive screen or pad, combinations thereof or the like. A speaker and a microphone (also not shown) may also be provided, for instance in conjunction with the display 800. Furthermore, the image processing apparatus 600 may comprise appropriate connectors (either wired or wireless) to other devices and/or for connecting external accessories thereto.
Some further details of components and features of the above-described apparatus 600 and alternatives for them will now be described.
The processing apparatus 40 may comprise processing circuitry 42 communicatively coupled with memory 41. The memory 41 has computer readable instructions 412A stored thereon, which when executed by the processing circuitry 42 causes the processing apparatus 40 to cause performance of various ones of the operations described with reference to Figures 1 to 3. The processing apparatus 40 may in some instances be referred to, in general terms, as “apparatus”, “computing apparatus” or “processing means”.
The processing circuitry 42 may be of any suitable composition and may include one or more processors 42A of any suitable type or suitable combination of types. Indeed, the term “processing circuitry” should be understood to encompass computers having differing architectures such as single/multi-processor architectures and sequencers/parallel architectures. For example, the processing circuitry 42 may be a programmable processor that interprets computer program instructions 412A and processes data. The processing circuitry 42 may include plural programmable processors. Alternatively, the processing circuitry 42 may be, for example, programmable hardware with embedded firmware. The processing circuitry 42 may alternatively or additionally include one or more specialised circuits such as field programmable gate arrays (FPGAs), Application Specific Integrated Circuits (ASICs), signal processing devices etc.
The processing circuitry 42 is coupled to the memory 41 and is operable to read/write data to/from the memory 41. The memory 41 may comprise a single memory unit or a plurality of memory units, upon which the computer readable instructions (or code) 412A is stored.
For example, the memory 41 may comprise both volatile memory 411 and non-volatile memory 412. In such examples, the computer readable instructions/program code 412A may be stored in the non-volatile memory 412 and may be executed by the processing circuitry 42 using the volatile memory 411 for temporary storage of data or data and instructions. Examples of volatile memory include RAM, DRAM, and SDRAM etc.
Examples of non-volatile memory include ROM, PROM, EEPROM, flash memory, optical storage, magnetic storage, etc.
The memory 41 may be referred to as one or more non-transitory computer readable memory medium or one or more storage devices. Further, the term ‘memory’, in addition to covering memory comprising both one or more non-volatile memory and one or more volatile memory, may also cover one or more volatile memories only, or one or more non-volatile memories only. In the context of this document, a “memory” or “computer-readable medium” may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
The computer readable instructions/program code 412A may be pre-programmed into the processing apparatus 40. Alternatively, the computer readable instructions 412A may arrive at the control apparatus via an electromagnetic carrier signal or may be copied from a physical entity 50 such as a computer program product, a memory device or a record medium such as a CD-ROM or DVD, an example of which is illustrated in Figure 5. The computer readable instructions 412A may provide the logic and routines that enable the apparatus 600 to perform the functionality described above. The combination of computer-readable instructions stored on memory (of any of the types described above) may be referred to as a computer program product. In general, references to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as the programmable content of a hardware device, whether instructions for a processor, or configured or configuration settings for a fixed function device, gate array, programmable logic device, etc.
The transceiver and antenna 44, 45 may be adapted for any suitable type of wireless communication including but not limited to a Bluetooth protocol, a cellular data protocol or a protocol in accordance with IEEE 802.11.
The input and/or output interface 43, 46 may be of any suitable type of wired interface. For instance, when one or both of the interfaces is configured for wired connection with another device, they may be, for instance but not limited to, physical Ethernet or USB interfaces.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined.
Although various aspects of the methods and apparatuses described herein are set out in the independent claims, other aspects may comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while the above describes various examples, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present invention as defined in the appended claims.

Claims (18)

Claims
1. A method comprising:
receiving an output image signal comprising a series of image frames, each corresponding to a respective instance in time;
identifying a block of pixels neighbouring a particular pixel present in each of the image frames;
detecting a fluctuation over time in pixel brightness values of the block of pixels by analysing the pixel brightness values of the block of pixels in each image frame of the series;
for each of plural sets of plural consecutive frames in the series, each set including a most recent frame and plural earlier frames, determining a periodicity of the fluctuation for the set of frames;
based on the periodicity of each set of frames and a pixel brightness value of the particular pixel in the most recent frame of each set of frames, determining a weighted average of the pixel brightness values of the particular pixel over the most recent frames of the sets of frames; and
adjusting the pixel brightness value of the particular pixel in a most recent frame of the most recent set of frames based on the weighted average.
2. The method of claim 1, wherein the determining the periodicity of the fluctuation further comprises:
for each of plural candidate periodicities:
for each of plural pairs of image frames selected based on the candidate periodicity, determining a similarity measure between the pixel brightness values of the block of pixels in each image frame of the pair; and
determining an average of the similarity measures,
wherein the periodicity of the fluctuation is determined to be the candidate periodicity for which the average of the similarity measures is the smallest.
3. The method of claim 2, wherein in each of the plural pairs of image frames selected based on the candidate periodicity, the image frames are spaced apart in the set by a number of image frames corresponding to a multiple of the candidate periodicity.
4. The method of claim 3, wherein each candidate periodicity is denoted by an integer, wherein each frame is numbered based on its position in the set and wherein, for each candidate periodicity, the plural pairs of image frames selected include only the pairs of frames which have a first frame having a number below the integer denoting the candidate periodicity and a second frame which is spaced apart from the first frame by a number of frames corresponding to a multiple of the candidate periodicity.
5. The method of any of claims 2 to 4, wherein the sets of frames partially overlap with one another.
6. The method of any preceding claim, wherein determining the weighted average further comprises:
for each set of frames in the series, determining a weight based on the determined periodicity for the set and applying the weight to the pixel brightness value of the pixel in the most recent frame of the set, and
averaging the weighted pixel brightness values of the pixels to give an adjustment value for the pixel brightness value for the particular pixel in the most recent frame of the most recent set of frames.
7. The method of claim 6, wherein the adjusting the pixel brightness value of the particular pixel in a most recent frame of the most recent set of frames further comprises:
adding the adjustment value to an unadjusted pixel brightness value for the particular pixel in the most recent frame of the most recent set of frames to give an adjusted pixel brightness value, and
normalising the adjusted pixel brightness value.
8. The method of claim 6 or claim 7, wherein the weight for each set is determined using a measure of approximation error between the fluctuation in pixel brightness values in the set and the determined periodicity for that set.
9. The method of claim 8, when dependent on any of claims 2 to 5, wherein the measure of approximation error is the largest of the similarity measures contributing to
the smallest average of the similarity measures on the basis of which the periodicity in that set was determined.
10. The method of claim 9, wherein a lower value of the measure of approximation
error corresponds to a closer approximation and a higher value of the measure of approximation error corresponds to a less close approximation.
11. The method of any of claims 8, 9 and 10, wherein a lower value of the measure of approximation error results in a larger weight than does a higher value of the
measure of approximation error.
12. The method of any of claims 6 to 11, wherein a distribution of the determined weights is based on a Gaussian distribution.
13. The method of any of claims 6 to 12, wherein, if the periodicity is determined to be one image frame for a given set, the weight determined for that set is substantially equal to zero.
14. Apparatus configured to perform a method according to any preceding claim.
15. Computer-readable instructions which, when executed by computing apparatus, cause the computing apparatus to perform a method according to any preceding claim.
16. A computer-readable medium having computer-readable code stored thereon,
the computer readable code, when executed by at least one processor, causing performance of:
receiving an output image signal comprising a series of image frames, each corresponding to a respective instance in time;
identifying a block of pixels neighbouring a particular pixel present in each of
the image frames;
detecting a fluctuation over time in pixel brightness values of the block of pixels by analysing the pixel brightness values of the block of pixels in each image frame of the series;
for each of plural sets of plural consecutive frames in the series, each set including a most recent frame and plural earlier frames, determining a periodicity of the fluctuation for the set of frames;
based on the periodicity of each set of frames and a pixel brightness value of the
particular pixel in the most recent frame of each set of frames, determining a weighted average of the pixel brightness values of the particular pixel over the most recent frames of the sets of frames; and adjusting the pixel brightness value of the particular pixel in a most recent frame of the most recent set of frames based on the weighted average.
17. Apparatus comprising:
at least one processor; and at least one memory including computer program code, which when executed by the at least one processor, causes the apparatus to:
receive an output image signal comprising a series of image frames, each corresponding to a respective instance in time;
identify a block of pixels neighbouring a particular pixel present in each of the image frames;
detect a fluctuation over time in pixel brightness values of the block of
pixels by analysing the pixel brightness values of the block of pixels in each image frame of the series;
for each of plural sets of plural consecutive frames in the series, each set including a most recent frame and plural earlier frames, determine a periodicity of the fluctuation for the set of frames;
based on the periodicity of each set of frames and a pixel brightness value of the particular pixel in the most recent frame of each set of frames, determine a weighted average of the pixel brightness values of the particular pixel over the most recent frames of the sets of frames; and adjust the pixel brightness value of the particular pixel in a most recent
frame of the most recent set of frames based on the weighted average.
18. Apparatus comprising:
means for receiving an output image signal comprising a series of image frames, each corresponding to a respective instance in time;
means for identifying a block of pixels neighbouring a particular pixel present in each of the image frames;
means for detecting a fluctuation over time in pixel brightness values of the block of pixels by analysing the pixel brightness values of the block of pixels in each image frame of the series;
means for determining, for each of plural sets of plural consecutive frames in the series, each set including a most recent frame and plural earlier frames, a periodicity of the fluctuation for the set of frames;
means for determining, based on the periodicity of each set of frames and a
pixel brightness value of the particular pixel in the most recent frame of each set of frames, a weighted average of the pixel brightness values of the particular pixel over the most recent frames of the sets of frames; and means for adjusting the pixel brightness value of the particular pixel in a most recent frame of the most recent set of frames based on the weighted average.
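As a purely illustrative aid to reading the claims, the periodicity search of claims 2 to 4 can be sketched in a few lines of Python. This is not the claimed implementation: the function name estimate_periodicity, the candidate range, the array layout and the choice of mean absolute difference as the similarity measure are all assumptions made for the example (the claims only require a similarity measure whose smaller values indicate that two blocks are more alike).

import numpy as np

def estimate_periodicity(block_series, candidates=range(1, 7)):
    # block_series: array of shape (num_frames, block_h, block_w) holding the
    # brightness values of the block of pixels in each frame of one set,
    # ordered oldest first.
    # Returns (periodicity, approximation_error), where the error is the
    # largest per-pair similarity measure contributing to the winning
    # average (claim 8).
    num_frames = len(block_series)
    best_period, best_avg, best_err = 1, float("inf"), float("inf")

    for p in candidates:
        measures = []
        # Pairs per claims 3 and 4: the first frame's number in the set is
        # below the candidate periodicity and the second frame lies a
        # multiple of the candidate periodicity further along the set.
        for first in range(min(p, num_frames)):
            second = first + p
            while second < num_frames:
                # Assumed similarity measure: mean absolute difference of
                # the block brightness values (smaller = more alike).
                diff = float(np.mean(np.abs(block_series[first].astype(float)
                                            - block_series[second].astype(float))))
                measures.append(diff)
                second += p
        if not measures:
            continue
        avg = float(np.mean(measures))
        if avg < best_avg:              # claim 2: the smallest average wins
            best_period = p
            best_avg = avg
            best_err = max(measures)    # claim 8: largest contributing measure
    return best_period, best_err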
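The weighting and adjustment of claims 1 and 6 to 13 can be sketched in the same spirit. The function below relies on the estimate_periodicity sketch above; the block size, window length, number of overlapping sets, the Gaussian sigma and the particular normalisation are illustrative assumptions, since the claims only require weights based on a Gaussian distribution that grow as the approximation error shrinks, a weight of substantially zero for a periodicity of one frame, and some normalisation of the adjusted value.

import numpy as np

def deflicker_pixel(frames, y, x, block_radius=2, set_length=8,
                    num_sets=4, sigma=10.0):
    # frames: sequence of 2-D brightness arrays, oldest first; the last
    # frame is the one being corrected (the most recent frame of the most
    # recent set of frames).
    frames = np.asarray(frames, dtype=float)
    num_frames, h, w = frames.shape
    y0, y1 = max(0, y - block_radius), min(h, y + block_radius + 1)
    x0, x1 = max(0, x - block_radius), min(w, x + block_radius + 1)

    weights, weighted_values = [], []
    # Claim 5: the sets partially overlap; here they are sliding windows of
    # set_length consecutive frames whose most recent frames are the last
    # num_sets frames of the series.
    for end in range(num_frames - num_sets, num_frames):
        start = end - set_length + 1
        if start < 0:
            continue
        block_series = frames[start:end + 1, y0:y1, x0:x1]
        period, err = estimate_periodicity(block_series)
        if period == 1:
            weight = 0.0                                   # claim 13
        else:
            # Claims 11 and 12: smaller approximation error gives a larger
            # weight; a Gaussian of the error is one assumed realisation.
            weight = float(np.exp(-(err ** 2) / (2.0 * sigma ** 2)))
        weights.append(weight)
        weighted_values.append(weight * frames[end, y, x])  # claim 6

    if not weights:
        return frames[-1, y, x]
    adjustment = float(np.mean(weighted_values))            # claim 6
    adjusted = frames[-1, y, x] + adjustment                 # claim 7
    # Claim 7 only requires "normalising"; dividing by one plus the mean
    # weight keeps the result on the original brightness scale (assumption).
    return adjusted / (1.0 + float(np.mean(weights)))

In use, a caller holding a short history of greyscale frames might correct the newest frame one pixel at a time, e.g. corrected = deflicker_pixel(frames, y, x) for each (y, x); claim 1 applies the same sequence of steps for each particular pixel of the most recent frame.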
GB1717220.6A 2017-10-20 2017-10-20 Deflickering of a series of images Active GB2567668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1717220.6A GB2567668B (en) 2017-10-20 2017-10-20 Deflickering of a series of images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1717220.6A GB2567668B (en) 2017-10-20 2017-10-20 Deflickering of a series of images

Publications (3)

Publication Number Publication Date
GB201717220D0 GB201717220D0 (en) 2017-12-06
GB2567668A true GB2567668A (en) 2019-04-24
GB2567668B GB2567668B (en) 2022-03-02

Family

ID=60481787

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1717220.6A Active GB2567668B (en) 2017-10-20 2017-10-20 Deflickering of a series of images

Country Status (1)

Country Link
GB (1) GB2567668B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007096556A (en) * 2005-09-28 2007-04-12 Pentax Corp Flicker noise reduction unit
JP2013127773A (en) * 2011-10-07 2013-06-27 Zakrytoe Akcionernoe Obshchestvo (Impul's) Noise reduction method in digital x ray frame series
US20150172529A1 (en) * 2013-12-16 2015-06-18 Olympus Corporation Imaging device and imaging method
JP2017184265A (en) * 2017-06-01 2017-10-05 株式会社朋栄 Image processing method of removing flicker and image processing device


Also Published As

Publication number Publication date
GB201717220D0 (en) 2017-12-06
GB2567668B (en) 2022-03-02


Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20200109 AND 20200115