WO2021155549A1 - Method, system, and computer-readable medium for generating stabilized image compositing effects for image sequences - Google Patents

Method, system, and computer-readable medium for generating stabilized image compositing effects for image sequences

Info

Publication number
WO2021155549A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, stabilized, images, image compositing, stabilization
Application number
PCT/CN2020/074458
Other languages
English (en)
Inventor
Hirotake Cho
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority to CN202080095961.9A (CN115066881B)
Priority to PCT/CN2020/074458
Publication of WO2021155549A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • H04N23/683 Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects

Definitions

  • the present disclosure relates to the field of image processing, and more particularly, to a method, a system, and a computer-readable medium for generating stabilized image compositing effects for an image sequence.
  • An image compositing effect is an effect created by combining visual elements from separate sources into a single image, to create the illusion that all of the visual elements are part of the same scene.
  • the image compositing effect may be an image blending effect under which transitions between the visual elements are smooth.
  • An example of the image blending effect is an artificial bokeh effect.
  • a bokeh lens effect is an effect under which an out-of-focus part is blurred in an image.
  • the artificial bokeh effect imitates the bokeh lens effect by blending a region created from an image region corresponding to the out-of-focus part with a region surrounding the image region. In this way, the image region is enlarged and blurred.
  • An example of an image compositing effect for which transitions between the visual elements are abrupt is an artificial face art sticker effect.
  • a face region is composited with a plurality of face art stickers.
  • the artificial face art sticker effect is not limited to have abrupt transitions between the visual elements, and may alternatively have smooth transitions.
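The enlarge-and-blur blending described above can be sketched in Python. This toy example is not part of the disclosure: the box blur, the exclusive region bounds, the blur radius `k`, and the `opacity` parameter are all illustrative assumptions.

```python
def box_blur(img, k=1):
    """Blur a grayscale image (list of rows of floats) with a
    (2k+1) x (2k+1) box filter, clamped at the image borders."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - k), min(h, y + k + 1))
                    for nx in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out


def artificial_bokeh(img, region, opacity=1.0, k=1):
    """Blend a blurred copy of `region` (x0, y0, x1, y1, exclusive bounds)
    over `img`. Blurring spreads the bright region about k pixels outward,
    so the composited region appears both enlarged and blurred."""
    blurred = box_blur(img, k)
    x0, y0, x1, y1 = region
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(max(0, y0 - k), min(h, y1 + k)):
        for x in range(max(0, x0 - k), min(w, x1 + k)):
            out[y][x] = (1 - opacity) * img[y][x] + opacity * blurred[y][x]
    return out
```

The blend region is widened by k pixels on each side, which is what makes the composited region appear enlarged as well as blurred.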
  • An object of the present disclosure is to propose a method, a system, and a computer-readable medium for generating stabilized image compositing effects for an image sequence.
  • In a first aspect of the present disclosure, a computer-implemented method includes: obtaining a first image sequence in which a target is captured, wherein the first image sequence comprises a plurality of original images, and a plurality of corresponding target regions corresponding to the target are in the original images; detecting the corresponding target regions in the original images to obtain a detection result; performing stabilization using the detection result to obtain a stabilization result; and generating a plurality of stabilized image compositing effects using the stabilization result to obtain a second image sequence, so that the stabilized image compositing effects are more smoothened against temporal flickering than a plurality of image compositing effects in a third image sequence, wherein the third image sequence is obtained in the same manner as the second image sequence except that the image compositing effects are generated using the detection result rather than the stabilization result.
  • the step of performing stabilization comprises: for the corresponding target region in a first image of the original images that is successfully or unsuccessfully detected based on the detection result, building the stabilization result to be used to cause a first stabilized image compositing effect of a plurality of corresponding stabilized image compositing effects for the corresponding target regions in the original images to be generated, wherein the first stabilized image compositing effect has a first opacity; and for the corresponding target region in a second image of the original images that is unsuccessfully detected based on the detection result, building the stabilization result to be used to cause a second stabilized image compositing effect of the stabilized image compositing effects to be generated, wherein the second stabilized image compositing effect has a second opacity; wherein the first image and the second image are consecutive; and wherein the second opacity is less than the first opacity.
  • In a second aspect of the present disclosure, a system includes at least one memory and a processor module.
  • the at least one memory is configured to store program instructions.
  • the processor module is configured to execute the program instructions, which cause the processor module to perform steps including: obtaining a first image sequence in which a target is captured, wherein the first image sequence comprises a plurality of original images, and a plurality of corresponding target regions corresponding to the target are in the original images; detecting the corresponding target regions in the original images to obtain a detection result; performing stabilization using the detection result to obtain a stabilization result; and generating a plurality of stabilized image compositing effects using the stabilization result to obtain a second image sequence, so that the stabilized image compositing effects are more smoothened against temporal flickering than a plurality of image compositing effects in a third image sequence, wherein the third image sequence is obtained in the same manner as the second image sequence except that the image compositing effects are generated using the detection result rather than the stabilization result.
  • the step of performing stabilization comprises: for the corresponding target region in a first image of the original images that is successfully or unsuccessfully detected based on the detection result, building the stabilization result to be used to cause a first stabilized image compositing effect of a plurality of corresponding stabilized image compositing effects for the corresponding target regions in the original images to be generated, wherein the first stabilized image compositing effect has a first opacity; and for the corresponding target region in a second image of the original images that is unsuccessfully detected based on the detection result, building the stabilization result to be used to cause a second stabilized image compositing effect of the stabilized image compositing effects to be generated, wherein the second stabilized image compositing effect has a second opacity; wherein the first image and the second image are consecutive; and wherein the second opacity is less than the first opacity.
  • In a third aspect of the present disclosure, a non-transitory computer-readable medium with program instructions stored thereon is provided. When the program instructions are executed by a processor module, the processor module is caused to perform steps including: obtaining a first image sequence in which a target is captured, wherein the first image sequence comprises a plurality of original images, and a plurality of corresponding target regions corresponding to the target are in the original images; detecting the corresponding target regions in the original images to obtain a detection result; performing stabilization using the detection result to obtain a stabilization result; and generating a plurality of stabilized image compositing effects using the stabilization result to obtain a second image sequence, so that the stabilized image compositing effects are more smoothened against temporal flickering than a plurality of image compositing effects in a third image sequence, wherein the third image sequence is obtained in the same manner as the second image sequence except that the image compositing effects are generated using the detection result rather than the stabilization result.
  • the step of performing stabilization comprises: for the corresponding target region in a first image of the original images that is successfully or unsuccessfully detected based on the detection result, building the stabilization result to be used to cause a first stabilized image compositing effect of a plurality of corresponding stabilized image compositing effects for the corresponding target regions in the original images to be generated, wherein the first stabilized image compositing effect has a first opacity; and for the corresponding target region in a second image of the original images that is unsuccessfully detected based on the detection result, building the stabilization result to be used to cause a second stabilized image compositing effect of the stabilized image compositing effects to be generated, wherein the second stabilized image compositing effect has a second opacity; wherein the first image and the second image are consecutive; and wherein the second opacity is less than the first opacity.
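The opacity behavior recited in the three aspects above (full opacity while the target region is detected, a strictly smaller opacity in a consecutive image where detection fails) can be sketched as a per-frame fade-out. The multiplicative `fade` factor and the `floor` are illustrative assumptions, not values from the disclosure.

```python
def stabilized_opacities(detections, fade=0.5, floor=0.0):
    """Per-frame effect opacity for one target: a successful detection
    restores full opacity (1.0); each consecutive miss multiplies the
    previous opacity by `fade`, so the opacity in a missed frame is always
    less than in the frame before it and the effect fades out gradually
    instead of vanishing abruptly."""
    opacities = []
    current = 0.0
    for detected in detections:
        if detected:
            current = 1.0
        else:
            current = max(floor, current * fade)
        opacities.append(current)
    return opacities
```

For example, `stabilized_opacities([True, False, False, True])` yields a fade from 1.0 through 0.5 and 0.25 before detection recovers.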
  • FIG. 1 is a schematic diagram illustrating a terminal taking a first image sequence in which a target set is captured in accordance with an embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram illustrating a temporal flickering problem of a plurality of artificial bokeh effects generated for a light source of the light source set captured in the first image sequence in accordance with an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram illustrating a size jumping problem of the artificial bokeh effects generated for a light source of the light source set captured in the first image sequence in accordance with an embodiment of the present disclosure.
  • FIG. 4 is a block diagram illustrating inputting, processing, and outputting hardware modules in a terminal in accordance with an embodiment of the present disclosure.
  • FIG. 5 is a flowchart illustrating a method of generating stabilized image compositing effect sets for the first image sequence in accordance with an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a plurality of original images resulting from a step of obtaining the first image sequence in FIG. 5 and a plurality of images resulting from a step of detecting a plurality of light source region sets in FIG. 5 in accordance with an embodiment of the present disclosure.
  • FIG. 7 is a flowchart illustrating a step of performing stabilization to obtain a stabilization result and generating a plurality of stabilized image compositing effect sets using the stabilization result in FIG. 5 in accordance with an embodiment of the present disclosure.
  • FIG. 8 is a flowchart illustrating a probability increasing or decreasing step in FIG. 7 in accordance with an embodiment of the present disclosure.
  • FIG. 9 is a flowchart illustrating a probability initializing step in FIG. 7 in accordance with an embodiment of the present disclosure.
  • FIG. 10 is a flowchart illustrating a probability-dependent opacity determining step in FIG. 7 in accordance with an embodiment of the present disclosure.
  • FIG. 11 is a flowchart illustrating an opacity-dependent stabilized image compositing effect generating step in FIG. 7 in accordance with an embodiment of the present disclosure.
  • FIG. 12 is a schematic diagram illustrating a portion of the original images in which a light source being in an image compositing effect desired state is captured in FIG. 6, a portion of the images with an image in which the light source is unsuccessfully detected in FIG. 6, and a plurality of images resulting from performing the probability increasing or decreasing step in FIG. 8, the probability-dependent opacity determining step in FIG. 10, and the opacity-dependent stabilized image compositing effect generating step in FIG. 11 for the light source.
  • FIG. 13 is a schematic diagram illustrating the original images in which a light source being in a captured OFF state in two consecutive images is captured in FIG. 6, a plurality of images with two consecutive images in which the light source is unsuccessfully detected in FIG. 6, and a plurality of images resulting from performing the probability increasing or decreasing step in FIG. 8, the probability-dependent opacity determining step in FIG. 10, and the opacity-dependent stabilized image compositing effect generating step in FIG. 11 for the light source.
  • FIG. 14 is a flowchart illustrating a probability increasing or decreasing step that implements the probability increasing or decreasing step in FIG. 8 using a counter in accordance with an embodiment of the present disclosure.
  • FIG. 15 is a flowchart illustrating a probability initializing step that implements the probability initializing step in FIG. 9 using a counter in accordance with an embodiment of the present disclosure.
  • FIG. 16 is a flowchart illustrating a probability-dependent opacity determining step that implements the probability-dependent opacity determining step in FIG. 10 using a counter in accordance with an embodiment of the present disclosure.
  • FIG. 17 is a schematic diagram illustrating an example of a curve of a relationship between a first variable corresponding to a corresponding probability of each target being in the image compositing effect desired state when a plurality of images up to a current image are captured and a second variable corresponding to a corresponding opacity for the probability-dependent opacity determining step in FIG. 16 in accordance with an embodiment of the present disclosure.
  • FIG. 18 is an exemplary timing diagram for a counter for a target using a first step value for being added to the counter and a second step value for being subtracted from the counter in accordance with an embodiment of the present disclosure, wherein the first step value is less than the second step value.
  • FIG. 19 is a schematic diagram illustrating a probability increasing or decreasing step further including at least one step for size smoothing in addition to the probability increasing or decreasing step in FIG. 8 in accordance with an embodiment of the present disclosure.
  • FIG. 20 is a schematic diagram illustrating a probability initializing step with at least one step modified from at least one corresponding step in the probability initializing step in FIG. 9 to further include at least one corresponding portion for size smoothing in accordance with an embodiment of the present disclosure.
  • FIG. 21 is a schematic diagram illustrating an opacity-dependent stabilized image compositing effect generating step with at least one step modified from at least one corresponding step in the opacity-dependent stabilized image compositing effect generating step in FIG. 11 to further include at least one corresponding portion for size smoothing in accordance with an embodiment of the present disclosure.
  • FIG. 22 is a schematic diagram illustrating the original images in which a plurality of corresponding light source regions corresponding to a light source have a plurality of unstable corresponding depth-related properties and a plurality of images resulting from size smoothing in FIGs. 19, 20, and 21.
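A minimal sketch of the counter-based variant outlined in FIGs. 14 to 18: the counter is increased by a first step value on a successful detection and decreased by a larger second step value on a miss, and the opacity is derived from the counter through a curve as in FIGs. 16 and 17. All numeric values, the clamping range, and the smoothstep curve standing in for the curve of FIG. 17 are illustrative assumptions.

```python
def update_counter(counter, detected, step_up=1, step_down=2, lo=0, hi=10):
    """Per-target counter update as in FIG. 18: add step_up on a successful
    detection, subtract step_down on a miss. With step_up < step_down the
    counter (a proxy for the probability that the target is in the image
    compositing effect desired state) builds slowly and decays quickly.
    The result is clamped to [lo, hi]."""
    delta = step_up if detected else -step_down
    return max(lo, min(hi, counter + delta))


def opacity_from_counter(counter, hi=10):
    """Probability-dependent opacity (FIGs. 16-17): map the counter onto
    [0, 1]; a smoothstep curve t*t*(3 - 2t) stands in for the curve of
    FIG. 17, which is not reproduced here."""
    t = counter / hi
    return t * t * (3 - 2 * t)
```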
  • The term “performing at least one operation using at least one object” refers to a case in which the at least one object is directly employed for performing the at least one operation, or a case in which the at least one object is modified by at least one intervening operation and the modified at least one object is directly employed to perform the at least one operation.
  • The term “performing at least one operation on at least one object” refers to a case in which the at least one object is directly employed for performing the at least one operation.
  • The term “portion” is intended to mean a fragment or an entirety.
  • The term “image sequence” refers to a portion of a video, a portion of a movie, or a plurality of live view images in series.
  • FIG. 1 is a schematic diagram illustrating a terminal 102 taking a first image sequence in which a light source set 1LS is captured in accordance with an embodiment of the present disclosure.
  • the light source set 1LS is physical.
  • the light source set 1LS includes a plurality of light sources 1LSa to 1LSe.
  • the terminal 102 is used to take the first image sequence in which the light source set 1LS is captured.
  • FIG. 2 is a schematic diagram illustrating a temporal flickering problem of a plurality of artificial bokeh effects 26BEc generated for the light source 1LSc of the light source set 1LS (illustrated in FIG. 1) captured in an image sequence 22F in accordance with an embodiment of the present disclosure.
  • the image sequence 22F is the first image sequence captured by the terminal 102 in FIG. 1.
  • In FIG. 2, only the light source 1LSc is illustrated as captured in the image sequence 22F, and the other light sources 1LSa, 1LSb, 1LSd, and 1LSe are omitted.
  • the image sequence 22F includes a plurality of original images 22F 1 to 22F t .
  • Two exemplary original images 22F n and 22F n+1 may be any two consecutive images of the original images 22F 1 to 22F t .
  • the light source 1LSc is in a captured sufficiently illuminating state (i.e., in a sufficiently illuminating state when the light source 1LSc is captured in the two original images 22F n and 22F n+1 by the terminal 102).
  • a plurality of corresponding light source regions 22LSRc n and 22LSRc n+1 in the two original images 22F n and 22F n+1 correspond to the light source 1LSc.
  • the corresponding light source regions 22LSRc n and 22LSRc n+1 need to be detected.
  • the light source region 22LSRc n is successfully detected, which is represented by an image 24F n with the light source region 24LSRc n corresponding to the light source region 22LSRc n .
  • the light source region 22LSRc n+1 is unsuccessfully detected, which is represented by an image 24F n+1 without any light source region.
  • the light source region 22LSRc n+1 is unsuccessfully detected because of, for example, noise of the terminal 102 or a lighting effect. Therefore, in an image sequence 26F generated by the image sequence compositing, a corresponding artificial bokeh effect 26BEc n for the light source region 22LSRc n is successfully generated, and a corresponding artificial bokeh effect 26BEc n+1 for the light source region 22LSRc n+1 is unsuccessfully generated.
  • the artificial bokeh effects 26BEc in the image sequence 26F have the temporal flickering problem.
  • The phrase “a light source is in a sufficiently illuminating state when the light source is captured in a first image by a terminal” refers to a case in which the light source is in an ON state and the terminal receives sufficient illumination such that detection of a first light source region in the first image is successful.
  • the first light source region corresponds to the light source. Because the light source in the ON state may be flickering or blinking, and the flickering or blinking frequency of the light source and the image rate of the terminal may be subject to at least one error, the light source may also be in an insufficiently illuminating state when the light source is captured in a second image by the terminal.
  • the light source is in the ON state and the terminal receives insufficient illumination such that detection of a second light source region in the second image is unsuccessful.
  • the second light source region corresponds to the light source.
  • An example of the light source that is flickering may be a pulse-width-modulation (PWM) controlled light-emitting diode (LED).
  • the temporal flickering problem for a plurality of image compositing effects generated for a target set captured in an image sequence is illustrated, wherein the image compositing effects are image blending effects such as the corresponding artificial bokeh effects 26BEc, and the target set is the light source set 1LS.
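The temporal flickering problem can be made concrete with a toy sketch (illustrative only, not part of the disclosure): generating the effect exactly in the frames where detection succeeds turns every intermittent miss into an abrupt on/off transition.

```python
def naive_effect_presence(detections):
    """Naive compositing: generate the effect only in frames where detection
    succeeded (opacity 1.0), and not at all where it failed (opacity 0.0)."""
    return [1.0 if d else 0.0 for d in detections]


def flicker_count(opacities, eps=0.5):
    """Count abrupt opacity jumps between consecutive frames; each jump is
    one visible flicker. The eps threshold is an illustrative choice."""
    return sum(1 for a, b in zip(opacities, opacities[1:]) if abs(a - b) > eps)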
  • FIG. 3 is a schematic diagram illustrating a size jumping problem of a plurality of artificial bokeh effects 36BEd generated for a light source 1LSd of the light source set 1LS (illustrated in FIG. 1) captured in an image sequence 32F in accordance with an embodiment of the present disclosure.
  • the image sequence 32F is the first image sequence captured by the terminal 102 in FIG. 1.
  • In FIG. 3, only the light source 1LSd is illustrated as captured in the image sequence 32F, and the other light sources 1LSa, 1LSb, 1LSc, and 1LSe are omitted.
  • the image sequence 32F includes a plurality of original images 32F 1 to 32F t .
  • Two exemplary original images 32F n-1 and 32F n may be any two consecutive images of the original images 32F 1 to 32F t .
  • the light source 1LSd is not moving, and the terminal 102 is not moving either.
  • a size of a light source region 32LSRd n-1 in the original image 32F n-1 is larger than a size of a light source region 32LSRd n in the original image 32F n .
  • the light source regions 32LSRd n-1 and 32LSRd n correspond to the light source 1LSd.
  • a depth of the light source 1LSd appears to jump from being greater for the original image 32F n-1 to being less for the original image 32F n .
  • the depth of the light source 1LSd jumps again.
  • a plurality of corresponding artificial bokeh effects 36BEd n-1 and 36BEd n are generated based on a plurality of corresponding depth-related properties of the corresponding light source regions 32LSRd n-1 and 32LSRd n using the image sequence compositing.
  • An example of the artificial bokeh effects 36BEd n-1 and 36BEd n is a plurality of corresponding Bokeh light effects.
  • Each depth-related property may be the corresponding size of the corresponding light source region 32LSRd n-1 or 32LSRd n or the corresponding depth of the light source 1LSd for the corresponding original image 32F n-1 or 32F n . Therefore, in an image sequence 36F generated by the image sequence compositing, a plurality of corresponding sizes of the artificial bokeh effects 36BEd n-1 and 36BEd n appear to be jumping. Then, similarly, a plurality of corresponding sizes of the artificial bokeh effects 36BEd n and 36BEd n+1 appear to be jumping again.
  • the artificial bokeh effects 36BEd in the image sequence 36F have the size jumping problem.
  • the size jumping problem for a plurality of image compositing effects generated for a target set captured in an image sequence is illustrated, wherein the image compositing effects are image blending effects such as the corresponding artificial bokeh effects 36BEd, and the target set is the light source set 1LS.
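One plausible reading of the size smoothing in FIGs. 19 to 22 (the disclosure does not fix a particular smoother) is an exponential moving average over the per-frame depth-related property, e.g. the detected region size; the smoothing factor `alpha` is an assumption.

```python
def smooth_sizes(sizes, alpha=0.3):
    """Exponential moving average over a per-frame depth-related property
    (here, the detected light source region size). `alpha` is an assumed
    smoothing factor; a smaller alpha smooths more strongly."""
    smoothed, ema = [], None
    for size in sizes:
        ema = size if ema is None else alpha * size + (1 - alpha) * ema
        smoothed.append(ema)
    return smoothed
```

The smoothed series swings less between consecutive frames than the raw detections, which suppresses the size jumping of the generated effects.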
  • FIG. 4 is a block diagram illustrating inputting, processing, and outputting hardware modules in a terminal 102 in accordance with an embodiment of the present disclosure.
  • the terminal 102 includes a camera module 402, a processor module 404, a memory module 406, a display module 408, a storage module 410, a wired or wireless communication module 412, and buses 414.
  • the terminal 102 may be a mobile phone, a tablet, a notebook computer, a desktop computer, or any electronic device having enough computing power to perform the image sequence compositing.
  • the camera module 402 is an inputting hardware module and is configured to take the first image sequence (described with reference to FIG. 1) in which the target set is captured.
  • the first image sequence is to be transmitted to the processor module 404 through the buses 414.
  • the camera module 402 includes an RGB camera, or a grayscale camera.
  • the first image sequence may be obtained using another inputting hardware module, such as the storage module 410, or the wired or wireless communication module 412.
  • the storage module 410 is configured to store the first image sequence that is to be transmitted to the processor module 404 through the buses 414.
  • the wired or wireless communication module 412 is configured to receive the first image sequence from a network through wired or wireless communication, wherein the first image sequence is to be transmitted to the processor module 404 through the buses 414.
  • the memory module 406 stores program instructions that, when executed by the processor module 404, cause the processor module 404 to perform the image sequence compositing to generate a second image sequence with a plurality of corresponding stabilized image compositing effect sets for the target set captured in the first image sequence.
  • the memory module 406 may be a transitory or non-transitory computer-readable medium that includes at least one memory.
  • the processor module 404 includes at least one processor that sends signals directly or indirectly to and/or receives signals directly or indirectly from the camera module 402, the memory module 406, the display module 408, the storage module 410, and the wired or wireless communication module 412 via the buses 414.
  • the at least one processor may be central processing units (CPUs), graphics processing units (GPUs), and/or digital signal processors (DSPs).
  • the CPUs may send the first image sequence, some of the program instructions, and other data or instructions to the GPUs and/or DSPs via the buses 414.
  • the display module 408 is an outputting hardware module and is configured to display the second image sequence that is received from the processor module 404 through the buses 414.
  • the second image sequence may be output using another outputting hardware module, such as the storage module 410, or the wired or wireless communication module 412.
  • the storage module 410 is configured to store the second image sequence that is received from the processor module 404 through the buses 414.
  • the wired or wireless communication module 412 is configured to transmit the second image sequence to the network through wired or wireless communication, wherein the second image sequence is received from the processor module 404 through the buses 414.
  • the terminal 102 is one type of computing system, all of the components of which are integrated together by the buses 414.
  • Other types of computing systems, such as a computing system that has a remote camera module instead of the camera module 402, are within the contemplated scope of the present disclosure.
  • FIG. 5 is a flowchart illustrating a method 500 of generating stabilized image compositing effect sets for the first image sequence in accordance with an embodiment of the present disclosure.
  • the method 500 of generating stabilized image compositing effect sets for the first image sequence includes the following steps.
  • In a step 502, a first image sequence is obtained in which a target set including a plurality of targets is captured.
  • the first image sequence includes a plurality of original images.
  • a plurality of corresponding target region sets are in the original images.
  • Each target region set includes a plurality of corresponding target regions corresponding to the targets.
  • In a step 504, the corresponding target region sets in the original images are detected to obtain a detection result.
  • In a step 506, separately or in combination, stabilization is performed using the detection result to obtain a stabilization result, and a plurality of corresponding stabilized image compositing effect sets are generated for the corresponding target region sets in the original images using the stabilization result to obtain a second image sequence, so that the stabilized image compositing effect sets are more smoothened against temporal flickering than a plurality of image compositing effect sets in a third image sequence.
  • Each stabilized image compositing effect set includes a plurality of stabilized image compositing effects corresponding to a corresponding portion of the corresponding target regions of the corresponding target region set.
  • the third image sequence is obtained in the same manner as the second image sequence except that the image compositing effect sets are generated using the detection result rather than the stabilization result.
  • the target set including the targets is a light source set including a plurality of light sources.
  • Each stabilized image compositing effect set including the stabilized image compositing effects is a corresponding stabilized artificial bokeh effect set including a plurality of stabilized artificial bokeh effects.
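The steps of the method 500 can be sketched as a pipeline skeleton; the `detect`, `stabilize`, and `composite` callables are hypothetical stand-ins for the concrete operations of steps 504 and 506, not APIs from the disclosure.

```python
def stabilized_compositing(frames, detect, stabilize, composite):
    """Skeleton of method 500: `frames` is the first image sequence obtained
    in step 502; detection (step 504) runs per frame; stabilization and
    effect generation (step 506) consume the detection result and produce
    the second image sequence."""
    detection_result = [detect(frame) for frame in frames]        # step 504
    stabilization_result = stabilize(detection_result)            # step 506
    return [composite(frame, stab)
            for frame, stab in zip(frames, stabilization_result)]
```

Passing the whole detection result to `stabilize` at once reflects that stabilization may use temporal context across frames rather than each frame in isolation.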
  • FIG. 6 is a schematic diagram illustrating a plurality of original images 62F n-1 to 62F n+1 resulting from the step 502 of obtaining the first image sequence in FIG. 5 and a plurality of images 64F n-1 to 64F n+1 resulting from the step 504 of detecting the light source set captured in the first image sequence in FIG. 5 in accordance with an embodiment of the present disclosure.
  • In the first image sequence, the light source set 1LS including the light sources 1LSa to 1LSe is captured.
  • the first image sequence includes the original images 62F 1 to 62F t .
  • Three exemplary original images 62F n-1 to 62F n+1 may be any three consecutive images of the original images 62F 1 to 62F t .
  • the corresponding light source region sets 62LSR n-1 to 62LSR n+1 are in the original images 62F n-1 to 62F n+1 .
  • Each light source region set 62LSR n-1 , 62LSR n , or 62LSR n+1 includes a plurality of corresponding light source regions 62LSRa n-1 to 62LSRe n-1 , 62LSRa n to 62LSRe n , or 62LSRa n+1 to 62LSRe n+1 corresponding to the light sources 1LSa to 1LSe.
  • the light sources 1LSa to 1LSe are in corresponding captured sufficiently illuminating states when the original image 62F n-1 is captured.
  • the light sources 1LSa to 1LSd are in corresponding captured sufficiently illuminating states and the light source 1LSe is in a captured OFF state when the original image 62F n is captured.
  • the light sources 1LSa to 1LSd are in corresponding captured sufficiently illuminating states and the light source 1LSe is in a captured OFF state when the original image 62F n+1 is captured.
  • the corresponding light source region sets 62LSR n-1 to 62LSR n+1 in the original images 62F n-1 to 62F n+1 are detected to obtain a detection result.
  • the light source regions 62LSRa n-1 , 62LSRc n-1 , 62LSRd n-1 , and 62LSRe n-1 are successfully detected, and the light source region 62LSRb n-1 is unsuccessfully detected, which is represented by an image 64F n-1 having only a plurality of light source regions 64LSRa n-1 , 64LSRc n-1 , 64LSRd n-1 , and 64LSRe n-1 corresponding to the light source regions 62LSRa n-1 , 62LSRc n-1 , 62LSRd n-1 , and 62LSRe n-1 .
  • the detection result includes a plurality of corresponding positions of the light source regions 62LSRa n-1 , 62LSRc n-1 , 62LSRd n-1 , and 62LSRe n-1 in the original image 62F n-1 , as represented by a plurality of corresponding positions of the light source regions 64LSRa n-1 , 64LSRc n-1 , 64LSRd n-1 , and 64LSRe n-1 in the image 64F n-1 .
  • the light source regions 62LSRa n , 62LSRc n , and 62LSRd n are successfully detected, and the light source regions 62LSRb n and 62LSRe n are unsuccessfully detected, which is represented by an image 64F n having only a plurality of light source regions 64LSRa n , 64LSRc n , and 64LSRd n corresponding to the light source regions 62LSRa n , 62LSRc n , and 62LSRd n .
  • the detection result includes a plurality of corresponding positions of the light source regions 62LSRa n , 62LSRc n , and 62LSRd n in the original image 62F n , as represented by a plurality of corresponding positions of the light source regions 64LSRa n , 64LSRc n , and 64LSRd n in the image 64F n .
  • the light source regions 62LSRa n+1 , 62LSRb n+1 , and 62LSRd n+1 are successfully detected, and the light source regions 62LSRc n+1 and 62LSRe n+1 are unsuccessfully detected, which is represented by an image 64F n+1 having only a plurality of light source regions 64LSRa n+1 , 64LSRb n+1 , and 64LSRd n+1 corresponding to the light source regions 62LSRa n+1 , 62LSRb n+1 , and 62LSRd n+1 .
  • the detection result includes a plurality of corresponding positions of the light source regions 62LSRa n+1 , 62LSRb n+1 , and 62LSRd n+1 in the original image 62F n+1 , as represented by a plurality of corresponding positions of the light source regions 64LSRa n+1 , 64LSRb n+1 , and 64LSRd n+1 in the image 64F n+1 .
  • the target set is the light source set 1LS.
  • the target region sets in the corresponding original images are the corresponding light source region sets 62LSR n-1 to 62LSR n+1 in the corresponding original images 62F n-1 to 62F n+1 .
  • Detection successes for the corresponding light source regions (e.g. 62LSRa n-1 ) in the light source region sets 62LSR n-1 to 62LSR n+1 may result from the corresponding light sources (e.g. 1LSa) in the light source set 1LS being in the corresponding captured sufficiently illuminating states (i.e. in the corresponding sufficiently illuminating states when the corresponding light sources (e.g. 1LSa) are captured in the corresponding original images (e.g. 62F n-1 ) ) .
  • a detection failure for the light source region 62LSRc n+1 results from, for example, noise of the terminal 102, a lighting effect, or the like.
  • a detection failure for the light source region 62LSRe n+1 results from the corresponding light source 1LSe being in the captured OFF state (i.e. in the OFF state when the light source 1LSe is captured in the original image (e.g. 62F n+1 ) ) .
  • Other causes, such as the terminal 102 moving such that a light source is no longer in a field of view of the camera module 402 of the terminal 102 when the camera module 402 of the terminal 102 captures, may also cause detection to be unsuccessful.
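The disclosure does not prescribe a particular detection algorithm for light source regions. As a purely hypothetical sketch in Python (the function name and the threshold value are assumptions of this illustration, not taken from the disclosure), bright regions might be located by intensity thresholding followed by connected-component grouping; such a detector can fail in exactly the ways described above when a light source is off, dim, noisy, or out of the field of view:

```python
import numpy as np
from collections import deque

def detect_light_source_regions(gray, threshold=230):
    """Hypothetical light-source detector: returns centroid positions
    (x, y) of connected bright regions in a grayscale image."""
    h, w = gray.shape
    bright = gray >= threshold
    seen = np.zeros_like(bright, dtype=bool)
    positions = []
    for y in range(h):
        for x in range(w):
            if bright[y, x] and not seen[y, x]:
                # BFS over one connected bright region
                queue, pixels = deque([(y, x)]), []
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and bright[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                positions.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return positions
```

A detection result for one original image would then be the list of detected positions, with any light source that is too dim or out of frame simply absent from the list.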
  • the target set is a face set.
  • the target region sets in the corresponding original images are a plurality of corresponding face region sets in a plurality of corresponding original images. Detection successes for a plurality of corresponding face regions in the face region sets may result from a plurality of corresponding faces in the face set being in a plurality of corresponding captured sufficiently illuminated states (i.e. in a plurality of corresponding sufficiently illuminated states when the corresponding faces are captured in the corresponding original images) .
  • a cause such as noise of the terminal 102, a lighting effect, or the like, or the terminal 102 moving such that a face is no longer in a field of view of the camera module 402 of the terminal 102 when the camera module 402 of the terminal 102 captures, may cause detection to be unsuccessful.
  • a face is in a sufficiently illuminated state when the face is captured in a first image by a terminal
  • the face reflects sufficient illumination to the terminal such that detection of a first face region in the first image is successful.
  • the first face region corresponds to the face.
  • because lighting for the face may be unstable, the face may also be in an insufficiently illuminated state when the face is captured in a second image by the terminal. In this case, the face reflects insufficient illumination to the terminal such that detection of a second face region in the second image is unsuccessful.
  • the second face region corresponds to the face.
  • FIG. 7 is a flowchart illustrating the step 506 of performing stabilization to obtain a stabilization result and generating a plurality of stabilized image compositing effect sets using the stabilization result in FIG. 5 in accordance with an embodiment of the present disclosure.
  • the step 506 includes the following steps.
  • the stabilization result for a current image of the original images is set to be the stabilization result for a previous image of the original images.
  • each matching flag of a plurality of matching flags corresponding to a plurality of target regions in the stabilization result for the previous image and a plurality of target regions in the detection result for the current image is initialized to be FALSE.
  • a probability increasing or decreasing step is performed.
  • a probability initializing step is performed.
  • a probability-dependent opacity determining step is performed.
  • an opacity-dependent stabilized image compositing effect generating step is performed. Then, the step 506 loops back to the step 702 until there are no more images to be processed by the steps 702 to 712.
  • FIG. 8 is a flowchart illustrating the probability increasing or decreasing step 706 in FIG. 7 in accordance with an embodiment of the present disclosure.
  • the probability increasing or decreasing step 706 includes the following steps.
  • In a step 802, a corresponding position of a current target region in the stabilization result for the previous image is obtained.
  • In a step 806, a corresponding position of a current target region in the detection result for the current image is obtained.
  • In a step 808, whether the corresponding matching flag for the current target region in the detection result for the current image is TRUE is determined. If so, the step 706 loops back to the step 806. If not, the flow proceeds to a step 810.
  • In the step 810, whether the corresponding position of the current target region in the detection result for the current image matches the corresponding position of the current target region in the stabilization result for the previous image is determined. If so, the flow proceeds to a step 812. If not, the flow returns to the step 806. In the step 812, the corresponding matching flag for the current target region in the stabilization result for the previous image and the current target region in the detection result for the current image is set to be TRUE.
  • a corresponding probability of a first target being in an image compositing effect desired state when a plurality of images up to the current image are captured is caused, in the stabilization result for the current image, to be increased from a corresponding probability of the first target being in the image compositing effect desired state when a plurality of images up to the previous image are captured, wherein the first target corresponds to the current target region in the detection result for the current image.
  • a corresponding position of the first target region corresponding to the current target region in the detection result for the current image is updated, in the stabilization result for the current image, to be the corresponding position of the current target region in the detection result for the current image.
  • the step 706 loops back to the step 806 (i.e. loops within a box 804) here and as mentioned previously to update the current target region to be a next target region until there are no more target region (s) in the detection result for the current image. Then, the flow proceeds to a step 818. In the step 818, whether the corresponding matching flag for the current target region in the stabilization result for the previous image is FALSE is determined. If so, the flow proceeds to a step 820. If not, the step 706 loops back to the step 802.
  • a corresponding probability of a second target being in the image compositing effect desired state when the images up to the current image are captured is caused, in the stabilization result for the current image, to be decreased from a corresponding probability of the second target being in the image compositing effect desired state when a plurality of images up to the previous image are captured, wherein the second target corresponds to the current target region in the stabilization result for the previous image.
  • the step 706 loops back to the step 802 here and as mentioned previously to update the current target region to be a next target region until there are no more target region (s) in the stabilization result for the previous image. Then, the flow proceeds to the step 708.
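The matching and probability updating of the steps 802 to 820 can be sketched as follows. This is a hypothetical Python illustration: the dictionary layout, the distance-based position matching, and the bounded counter realizing the probability (as in the counter-based embodiment described later) are assumptions of this sketch, not requirements of the disclosure:

```python
def probability_increase_or_decrease(prev_stab, cur_det, match_dist=5.0, count_max=10):
    """Hypothetical sketch of the probability increasing or decreasing step 706.
    prev_stab: dict target_id -> {"pos": (x, y), "count": int} (stabilization
               result for the previous image; the counter stands in for the
               probability of being in the image compositing effect desired state).
    cur_det:   list of detected positions for the current image.
    Returns the stabilization result for the current image and the
    matching flags for the detection result."""
    # Step 702: start from the stabilization result for the previous image.
    cur_stab = {tid: dict(entry) for tid, entry in prev_stab.items()}
    # Step 704: all matching flags are initialized to FALSE.
    det_matched = [False] * len(cur_det)
    for tid, entry in cur_stab.items():        # outer loop: step 802
        matched = False
        for i, pos in enumerate(cur_det):      # inner loop: steps 806-816 (box 804)
            if det_matched[i]:
                continue                       # step 808: flag already TRUE
            # Step 810: positions match when sufficiently close.
            if (pos[0] - entry["pos"][0]) ** 2 + (pos[1] - entry["pos"][1]) ** 2 <= match_dist ** 2:
                det_matched[i] = True                                # step 812
                entry["count"] = min(entry["count"] + 1, count_max)  # step 814: increase
                entry["pos"] = pos                                   # step 816: update position
                matched = True
                break
        if not matched:                        # steps 818-820: no match found
            entry["count"] = max(entry["count"] - 1, 0)              # decrease
    return cur_stab, det_matched
```

For example, a target whose region is detected again near its previous position has its counter increased and its position refreshed, while a target with no matching detection (such as a light source that turned off) has its counter decreased.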
  • FIG. 9 is a flowchart illustrating the probability initializing step 708 in FIG. 7 in accordance with an embodiment of the present disclosure.
  • the probability initializing step 708 includes the following steps.
  • In a step 902, a current matching flag is obtained from the matching flags corresponding to the target regions in the detection result for the current image.
  • whether the current matching flag is TRUE is determined.
  • In a step 906, a corresponding probability of a third target being in the image compositing effect desired state when the images up to the current image are captured is caused, in the stabilization result for the current image, to be initialized, wherein the third target corresponds to a target region in the stabilization result for the current image corresponding to the current matching flag.
  • a corresponding position of the target region corresponding to the current matching flag is set, in the stabilization result for the current image, to be the corresponding position of the current target region in the detection result for the current image. Then, the step 708 loops back to the step 902 until there are no more matching flag (s) .
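Continuing the same hypothetical Python sketch, the steps 902 to 908 add a fresh entry, with an initialized probability (here a counter with an assumed initial value of 1), for every detected target region whose matching flag is still FALSE, i.e. a newly appearing target:

```python
def probability_initialize(cur_stab, cur_det, det_matched, init_count=1):
    """Hypothetical sketch of the probability initializing step 708.
    cur_stab:    dict target_id -> {"pos": (x, y), "count": int}.
    cur_det:     list of detected positions for the current image.
    det_matched: matching flags for the detection result (step 904)."""
    for i, pos in enumerate(cur_det):
        if not det_matched[i]:                 # step 904: flag is still FALSE
            new_id = f"new_{i}"                # hypothetical target id scheme
            # Steps 906/908: initialize the probability (counter) and set
            # the position from the detection result for the current image.
            cur_stab[new_id] = {"pos": pos, "count": init_count}
    return cur_stab
```

This is how a light source such as 1LSb, undetected in earlier images but detected in the current one, would enter the stabilization result.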
  • FIG. 10 is a flowchart illustrating the probability-dependent opacity determining step 710 in FIG. 7 in accordance with an embodiment of the present disclosure.
  • the probability-dependent opacity determining step 710 includes the following steps. In a step 1002, a corresponding probability of a fourth target being in the image compositing effect desired state when the images up to the current image are captured is obtained from the stabilization result for the current image, wherein the fourth target corresponds to a current target region in the stabilization result for the current image.
  • a corresponding opacity for the current target region in the stabilization result for the current image is determined in the stabilization result for the current image using the corresponding probability of the fourth target being in the image compositing effect desired state when the images up to the current image are captured. Then, the step 710 loops back to the step 1002 until there are no more target region (s) in the stabilization result for the current image.
  • FIG. 11 is a flowchart illustrating the opacity-dependent stabilized image compositing effect generating step 712 in FIG. 7 in accordance with an embodiment of the present disclosure.
  • the opacity-dependent stabilized image compositing effect generating step 712 includes the following steps.
  • In a step 1102, the corresponding position and the corresponding opacity of a current target region in the stabilization result for the current image are obtained.
  • In a step 1104, a first stabilized image compositing effect of a first stabilized image compositing effect set of the stabilized image compositing effect sets is generated.
  • the first stabilized image compositing effect set corresponds to the current image
  • the first stabilized image compositing effect corresponds to the current target region in the stabilization result for the current image
  • the first stabilized image compositing effect is located at the corresponding position of the current target region in the stabilization result for the current image and has the corresponding opacity of the current target region in the stabilization result for the current image. Then, the step 712 loops back to the step 1102 until there are no more target region (s) in the stabilization result for the current image.
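The steps 1002, 1004, 1102, and 1104 can be sketched together as follows (hypothetical Python; the record layout and the idea of skipping fully transparent effects are assumptions of this illustration). The probability-to-opacity mapping is passed in as a function, in the spirit of the curve described later with reference to FIG. 17:

```python
def generate_stabilized_effects(cur_stab, count_to_opacity):
    """Hypothetical sketch of the probability-dependent opacity determining
    step 710 and the opacity-dependent stabilized image compositing effect
    generating step 712.
    cur_stab: dict target_id -> {"pos": (x, y), "count": int} (stabilization
              result for the current image, with the counter standing in for
              the probability)."""
    effects = []
    for tid, entry in cur_stab.items():        # loop over target regions
        # Steps 1002/1004: determine the opacity from the probability (counter).
        opacity = count_to_opacity(entry["count"])
        # Assumption of this sketch: fully transparent effects are not emitted.
        if opacity > 0.0:
            # Steps 1102/1104: an effect at the region's position with that opacity.
            effects.append({"target": tid, "pos": entry["pos"], "opacity": opacity})
    return effects
```

A target whose counter has decayed yields an effect with reduced opacity rather than an effect that abruptly disappears, which is the stabilization behavior illustrated in FIGs. 12 and 13.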
  • the stabilization result for a current image 62F n of the original images 62F n-1 to 62F n+1 is set to be the stabilization result for a previous image 62F n-1 of the original images 62F n-1 to 62F n+1 .
  • the stabilization result for the previous image 62F n-1 is also represented by the image 64F n-1 .
  • each matching flag of a plurality of matching flags corresponding to a plurality of light source regions 64LSRa n-1 , 64LSRc n-1 , 64LSRd n-1 , and 64LSRe n-1 in the stabilization result for the previous image 62F n-1 and a plurality of light source regions 64LSRa n , 64LSRc n , and 64LSRd n in the detection result for the current image 62F n is initialized to be FALSE.
  • a corresponding position of a current light source region 64LSRa n-1 in the stabilization result for the previous image 62F n-1 is obtained.
  • a corresponding position of a current light source region 64LSRa n in the detection result for the current image 62F n is obtained.
  • In the step 808, it is determined that the corresponding matching flag for the current light source region 64LSRa n in the detection result for the current image 62F n is FALSE. Therefore, the step 810 is performed.
  • In the step 810, it is determined that the corresponding position of the current light source region 64LSRa n in the detection result for the current image 62F n matches the corresponding position of the current light source region 64LSRa n-1 in the stabilization result for the previous image 62F n-1 . Therefore, the step 812 is performed.
  • the corresponding matching flags for the current light source region 64LSRa n-1 in the stabilization result for the previous image 62F n-1 and the current light source region 64LSRa n in the detection result for the current image 62F n are set to be TRUE.
  • a corresponding probability of a first light source being in an image compositing effect desired state when a plurality of images 62F 1 to 62F n up to the current image 62F n are captured is caused, in the stabilization result for the current image 62F n , to be increased from a corresponding probability of the first light source being in the image compositing effect desired state when a plurality of images 62F 1 to 62F n-1 up to the previous image 62F n-1 are captured, wherein the first light source corresponds to the current light source region 64LSRa n in the detection result for the current image 62F n .
  • a first light source being in an image compositing effect desired state when a plurality of images up to the current image are captured refers to a case that the first light source is in a captured ON state when the images up to the current image are captured by the camera module 402 of the terminal 102.
  • the term “the first light source is in a captured ON state when the images up to the current image are captured by the camera module 402 of the terminal 102” refers to a case that the first light source is in an ON state and the first light source is in a field of view of the camera module 402 of the terminal 102 when the camera module 402 of the terminal 102 captures the images up to the current image.
  • a corresponding position of a first light source region corresponding to the current light source region 64LSRa n in the detection result for the current image 62F n is updated, in the stabilization result for the current image 62F n , to be the corresponding position of the current light source region 64LSRa n in the detection result for the current image 62F n. Because there are more light source regions 64LSRc n and 64LSRd n in the detection result for the current image 62F n , the step 706 loops back to the step 806.
  • For each of the remaining light source regions 64LSRc n and 64LSRd n in the detection result for the current image 62F n , it is determined in the step 808 that the corresponding matching flag is FALSE, so the flow proceeds to the step 810, and it is determined in the step 810 that the corresponding positions do not match, so the step 706 loops back to the step 806. Then, because there are no more light source regions in the detection result for the current image 62F n , the flow proceeds to the step 818. In the step 818, it is determined that the corresponding matching flag for the current light source region 64LSRa n-1 in the stabilization result for the previous image 62F n-1 is TRUE.
  • the step 706 loops back to the step 802.
  • steps similar to the steps for the light source region 64LSRa n-1 are performed and are omitted here.
  • the flow proceeds to the step 820.
  • a corresponding probability of a second light source being in the image compositing effect desired state when the images 62F 1 to 62F n up to the current image 62F n are captured is caused, in the stabilization result for the current image 62F n , to be decreased from a corresponding probability of the second light source being in the image compositing effect desired state when a plurality of images 62F 1 to 62F n-1 up to the previous image 62F n-1 are captured, wherein the second light source corresponds to the current light source region 64LSRe n-1 in the stabilization result for the previous image 62F n-1 .
  • the step 906 and the step 908 are not performed for any one of light source regions 64LSRa n , 64LSRc n , and 64LSRd n in the detection result for the current image 62F n .
  • steps 710 and 712 are performed for the current image 62F n .
  • Examples of images resulting from the steps 710 and 712 are to be described with reference to FIGs. 12 and 13.
  • the step 506 loops back to the step 702 and the previous image 62F n-1 is updated to be the current image 62F n , and the current image 62F n is updated to be a next image 62F n+1 .
  • For the light source regions 64LSRa n and 64LSRd n in the stabilization result for the previous image 62F n and the corresponding light source regions 64LSRa n+1 and 64LSRd n+1 in the detection result for the current image 62F n+1 that match each other, steps similar to the steps for the light source region 64LSRa n-1 and the light source region 64LSRa n that match each other are performed.
  • the flow proceeds to the step 820.
  • a corresponding probability of a second light source being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured is caused, in the stabilization result for the current image 62F n+1 , to be decreased from a corresponding probability of the second light source being in the image compositing effect desired state when the images 62F 1 to 62F n up to the previous image 62F n are captured, wherein the second light source corresponds to the current light source region 64LSRc n in the stabilization result for the previous image 62F n .
  • the flow proceeds to the step 820.
  • a corresponding probability of a second light source being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured is caused, in the stabilization result for the current image 62F n+1 , to be decreased from a corresponding probability of the second light source being in the image compositing effect desired state when the images 62F 1 to 62F n up to the previous image 62F n are captured, wherein the second light source corresponds to the current light source region 64LSRe n in the stabilization result for the previous image 62F n .
  • a current matching flag (for the light source region 64LSRb n+1 ) is obtained from the matching flags corresponding to the light source regions 64LSRa n+1 , 64LSRb n+1 , and 64LSRd n+1 in the detection result for the current image 62F n+1 .
  • the flow proceeds to the step 906.
  • a corresponding probability of a third target being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured is caused, in the stabilization result for the current image 62F n+1 , to be initialized, wherein the third target corresponds to a light source region 64LSRb n+1 in the stabilization result for the current image 62F n+1 corresponding to the current matching flag.
  • a corresponding position of the light source region 64LSRb n+1 corresponding to the current matching flag is set, in the stabilization result for the current image 62F n+1 , to be the corresponding position of the current light source region 64LSRb n+1 in the detection result for the current image 62F n+1 .
  • steps 710 and 712 are performed for the current image 62F n+1 .
  • Examples of images resulting from the steps 710 and 712 are to be described with reference to FIGs. 12 and 13.
  • FIG. 12 is a schematic diagram illustrating a portion 62F n and 62F n+1 of the original images 62F n-1 to 62F n+1 in which the light source 1LSc being in an image compositing effect desired state is captured in FIG. 6, a portion 64F n and 64F n+1 of the images 64F n-1 to 64F n+1 with the image 64F n+1 in which the light source 1LSc is unsuccessfully detected in FIG. 6, and a plurality of images 126F n and 126F n+1 resulting from performing the probability increasing or decreasing step 706 in FIG. 8, the probability-dependent opacity determining step 710 in FIG. 10, and the opacity-dependent stabilized image compositing effect generating step 712 in FIG. 11 for the light source 1LSc.
  • In FIG. 12, only portions relevant to the light source 1LSc are illustrated as an example, and portions relevant to the other light sources 1LSa, 1LSb, 1LSd, and 1LSe are omitted.
  • Referring to FIGs. 6, 8, and 10 to 12 as mentioned previously, when the current image is the image 62F n in the step 506 in FIG. 7, for the light source region 64LSRc n-1 in the stabilization result for the previous image 62F n-1 , it is determined that the light source region 64LSRc n-1 matches the light source region 64LSRc n in the detection result for the current image 62F n . Based on the above, the light source region 62LSRc n in the current image 62F n is determined to be successfully detected based on the detection result.
  • the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n up to the current image 62F n are captured is caused, in the stabilization result for the current image 62F n , to be increased from the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n-1 up to the previous image 62F n-1 are captured.
  • the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n up to the current image 62F n are captured is obtained from the stabilization result for the current image 62F n .
  • a corresponding opacity for the light source region 64LSRc n in the stabilization result for the current image 62F n is determined in the stabilization result for the current image 62F n using the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n up to the current image 62F n are captured.
  • the corresponding position and the corresponding opacity of the light source region 64LSRc n in the stabilization result for the current image 62F n are obtained.
  • a first stabilized image compositing effect 126BEc n of a first stabilized image compositing effect set of the stabilized image compositing effect sets is generated.
  • the first stabilized image compositing effect set includes a plurality of stabilized image compositing effects corresponding to the light source regions 64LSRa n , 64LSRc n , 64LSRd n , and 64LSRe n in the stabilization result for the current image 62F n .
  • the first stabilized image compositing effect 126BEc n corresponds to the current light source region 64LSRc n in the stabilization result for the current image 62F n .
  • the first stabilized image compositing effect 126BEc n is located at the corresponding position of the current light source region 64LSRc n in the stabilization result for the current image 62F n and has the corresponding opacity of the current light source region 64LSRc n in the stabilization result for the current image 62F n .
  • the stabilization result to be used to cause the first stabilized image compositing effect 126BEc n to be generated is built in the steps 814, 1002, and 1004.
  • the first stabilized image compositing effect 126BEc n is generated using the stabilization result in the steps 1102 and 1104.
  • the first stabilized image compositing effect having the corresponding opacity may be generated using alpha blending.
  • the first stabilized image compositing effect having the corresponding opacity may be generated using other types of blending, such as additive blending or multiplicative blending.
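A minimal sketch of how an effect's opacity might be realized with alpha blending, alongside the additive alternative mentioned above (hypothetical Python/NumPy; single-channel uint8 image layers are assumed for brevity):

```python
import numpy as np

def alpha_blend(base, effect, opacity):
    """Alpha-blend an effect layer over a base image with the given
    opacity: out = (1 - opacity) * base + opacity * effect."""
    out = (1.0 - opacity) * base.astype(np.float32) + opacity * effect.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

def additive_blend(base, effect, opacity):
    """Additive blending alternative: out = base + opacity * effect."""
    out = base.astype(np.float32) + opacity * effect.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```

With alpha blending, an opacity decreased by the stabilization (as for the effect 126BEc n+1 ) makes the effect fade toward the base image rather than vanish at once.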
  • When the current image is the image 62F n+1 in the step 506 in FIG. 7, for the light source region 64LSRc n in the stabilization result for the previous image 62F n , it is determined that none of the light source regions 64LSRa n+1 , 64LSRb n+1 , and 64LSRd n+1 in the detection result for the current image 62F n+1 match the light source region 64LSRc n . Based on the above, the light source region 62LSRc n+1 in the current image 62F n+1 is determined to be unsuccessfully detected based on the detection result.
  • the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured is caused, in the stabilization result for the current image 62F n+1 , to be decreased from the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n up to the previous image 62F n are captured.
  • the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured is obtained from the stabilization result for the current image 62F n+1 .
  • a corresponding opacity for the light source region 64LSRc n+1 (not illustrated, but having a same position as the position of the light source region 64LSRc n ) in the stabilization result for the current image 62F n+1 is determined in the stabilization result for the current image 62F n+1 using the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured.
  • the corresponding position and the corresponding opacity of the light source region 64LSRc n+1 in the stabilization result for the current image 62F n+1 are obtained.
  • a first stabilized image compositing effect 126BEc n+1 of a first stabilized image compositing effect set of the stabilized image compositing effect sets is generated.
  • the first stabilized image compositing effect set includes a plurality of stabilized image compositing effects corresponding to the light source regions 64LSRa n+1 , 64LSRb n+1 , 64LSRc n+1 , 64LSRd n+1 , and 64LSRe n+1 in the stabilization result for the current image 62F n+1 .
  • the first stabilized image compositing effect 126BEc n+1 corresponds to the current light source region 64LSRc n+1 in the stabilization result for the current image 62F n+1 .
  • the first stabilized image compositing effect 126BEc n+1 is located at the corresponding position of the current light source region 64LSRc n+1 in the stabilization result for the current image 62F n+1 and has the corresponding opacity of the current light source region 64LSRc n+1 in the stabilization result for the current image 62F n+1 .
  • the stabilization result to be used to cause the first stabilized image compositing effect 126BEc n+1 to be generated is built in the steps 820, 1002, and 1004.
  • the first stabilized image compositing effect 126BEc n+1 is generated using the stabilization result in the steps 1102 and 1104.
  • the corresponding opacities for the stabilized image compositing effect 126BEc n and 126BEc n+1 in the step 1104 are determined using the probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n up to the image 62F n are captured and the probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the image 62F n+1 are captured in the step 1004, respectively.
  • a curve of a relationship between a first variable corresponding to the corresponding probabilities and a second variable corresponding to the corresponding opacities is used.
  • the probability increasing or decreasing step 706 is implemented using a counter for a target such as the light source 1LSc.
  • FIG. 17 is a schematic diagram illustrating an example of the curve 1700 of the relationship between the first variable corresponding to the probabilities and the second variable corresponding to the opacities for the embodiment using the counter.
  • a higher value of the counter corresponds to a higher probability of the light source 1LSc being in the image compositing effect desired state when a plurality of images up to a current image are captured and a lower value of the counter corresponds to a lower probability of the light source 1LSc being in the image compositing effect desired state when the images up to the current image are captured.
  • the curve 1700 is non-decreasing.
  • Because the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured is caused to be decreased from the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n up to the previous image 62F n are captured, the corresponding opacity for the stabilized image compositing effect 126BEc n+1 is less than the corresponding opacity for the stabilized image compositing effect 126BEc n , as illustrated in FIG. 12.
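A hypothetical instance of such a non-decreasing counter-to-opacity curve, in the spirit of the curve 1700 (the break points low=2 and high=8 are illustrative assumptions of this sketch, not values from the disclosure):

```python
def counter_to_opacity(count, low=2, high=8):
    """Non-decreasing mapping from a target's counter (standing in for its
    probability of being in the image compositing effect desired state) to
    an opacity: fully transparent at or below `low`, fully opaque at or
    above `high`, and linear in between."""
    if count <= low:
        return 0.0
    if count >= high:
        return 1.0
    return (count - low) / (high - low)
```

Because the mapping is non-decreasing, a decreased counter never yields a larger opacity, which gives the gradual fade-out of the effect 126BEc n+1 relative to 126BEc n.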
  • FIG. 13 is a schematic diagram illustrating the original images 62F n-1 to 62F n+1 in which a light source 1LSe being in a captured OFF state in two consecutive images 62F n and 62F n+1 is captured in FIG. 6, the images 64F n-1 to 64F n+1 with two consecutive images 64F n and 64F n+1 in which the light source 1LSe is unsuccessfully detected in FIG. 6, and a plurality of images resulting from performing the probability increasing or decreasing step 706 in FIG. 8, the probability-dependent opacity determining step 710 in FIG. 10, and the opacity-dependent stabilized image compositing effect generating step 712 in FIG. 11 for the light source 1LSe.
  • the stabilized image compositing effect 136BEe n-1 is similar to the stabilized image compositing effect 126BEc n in the example described with reference to FIG. 12, and a description thereof is omitted here.
  • the example described with reference to FIG. 13 has the following differences.
  • For the light source region 64LSRe n-1 in the stabilization result for the previous image 62F n-1 , it is determined that none of the light source regions 64LSRa n , 64LSRc n , and 64LSRd n in the detection result for the current image 62F n match the light source region 64LSRe n-1 . Based on the above, the light source region 62LSRe n in the current image 62F n is determined to be unsuccessfully detected based on the detection result.
  • the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n up to the current image 62F n are captured is caused, in the stabilization result for the current image 62F n , to be decreased from the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n-1 up to the previous image 62F n-1 are captured.
  • the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n up to the current image 62F n are captured is obtained from the stabilization result for the current image 62F n .
  • a corresponding opacity for the light source region 64LSRe n (not illustrated, but having a same position as the position of the light source region 64LSRe n-1 ) in the stabilization result for the current image 62F n is determined in the stabilization result for the current image 62F n using the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n up to the current image 62F n are captured.
  • the corresponding position and the corresponding opacity of the light source region 64LSRe n in the stabilization result for the current image 62F n are obtained.
  • a first stabilized image compositing effect 136BEe n of a first stabilized image compositing effect set of the stabilized image compositing effect sets is generated.
  • the first stabilized image compositing effect set includes a plurality of stabilized image compositing effects corresponding to the light source regions 64LSRa n , 64LSRc n , 64LSRd n , and 64LSRe n in the stabilization result for the current image 62F n .
  • the first stabilized image compositing effect 136BEe n corresponds to the current light source region 64LSRe n in the stabilization result for the current image 62F n .
  • the first stabilized image compositing effect 136BEe n is located at the corresponding position of the current light source region 64LSRe n in the stabilization result for the current image 62F n and has the corresponding opacity of the current light source region 64LSRe n in the stabilization result for the current image 62F n .
  • the stabilization result to be used to cause the first stabilized image compositing effect 136BEe n to be generated is built in the steps 820, 1002, and 1004.
  • the first stabilized image compositing effect 136BEe n is generated using the stabilization result in the steps 1102 and 1104.
  • When the current image is the image 62F n+1 in the step 506 in FIG. 7, for the light source region 64LSRe n in the stabilization result for the previous image 62F n , it is determined that none of the light source regions 64LSRa n+1 , 64LSRb n+1 , and 64LSRd n+1 in the detection result for the current image 62F n+1 match the light source region 64LSRe n . Based on the above, the light source region 62LSRe n+1 in the current image 62F n+1 is determined to be unsuccessfully detected based on the detection result.
  • the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured is caused, in the stabilization result for the current image 62F n+1 , to be decreased from the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n up to the previous image 62F n are captured.
  • the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured is obtained from the stabilization result for the current image 62F n+1 .
  • a corresponding opacity for the light source region 64LSRe n+1 (not illustrated, but having a same position as the position of the light source region 64LSRe n ) in the stabilization result for the current image 62F n+1 is determined in the stabilization result for the current image 62F n+1 using the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the current image 62F n+1 are captured.
  • the corresponding position and the corresponding opacity of the light source region 64LSRe n+1 in the stabilization result for the current image 62F n+1 are obtained.
  • a first stabilized image compositing effect 136BEe n+1 of a first stabilized image compositing effect set of the stabilized image compositing effect sets is generated.
  • the first stabilized image compositing effect set includes a plurality of stabilized image compositing effects corresponding to the light source regions 64LSRa n+1 , 64LSRb n+1 , 64LSRc n+1 , 64LSRd n+1 , and 64LSRe n+1 in the stabilization result for the current image 62F n+1 .
  • the first stabilized image compositing effect 136BEe n+1 corresponds to the current light source region 64LSRe n+1 in the stabilization result for the current image 62F n+1 .
  • the first stabilized image compositing effect 136BEe n+1 is located at the corresponding position of the current light source region 64LSRe n+1 in the stabilization result for the current image 62F n+1 and has the corresponding opacity of the current light source region 64LSRe n+1 in the stabilization result for the current image 62F n+1 .
  • the stabilization result to be used to cause the first stabilized image compositing effect 136BEe n+1 to be generated is built in the steps 820, 1002, and 1004.
  • the first stabilized image compositing effect 136BEe n+1 is generated using the stabilization result in the steps 1102 and 1104.
  • the corresponding opacities for the stabilized image compositing effect 136BEe n and 136BEe n+1 in the step 1104 are determined using the probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n up to the image 62F n are captured and the probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the image 62F n+1 are captured in the step 1004, respectively.
  • the curve 1700 in FIG. 17 is used.
  • the corresponding opacity for the stabilized image compositing effect 136BEe n+1 is less than the corresponding opacity for the stabilized image compositing effect 136BEe n .
  • the corresponding opacity for the stabilized image compositing effect 136BEe n+1 is less than, rather than the same as, the corresponding opacity for the stabilized image compositing effect 126BEc n+1 .
  • the light source 1LSc is in the image compositing effect desired state for the image 62F n+1 but is accidentally unsuccessfully detected for the image 62F n+1 due to, for example, noise of the terminal 102 or a lighting effect.
  • the light source 1LSe is unsuccessfully detected for the images 62F n and 62F n+1 due to the light source 1LSe being in the captured OFF state when the images 62F n and 62F n+1 are captured.
  • the light source 1LSc has a greater number of images in the images 62F 1 to 62F n with successfully detected light source regions corresponding to the light source 1LSc, while the light source 1LSe has a smaller number of images in the images 62F 1 to 62F n with successfully detected light source regions corresponding to the light source 1LSe. This is exemplarily reflected by the light source 1LSc being successfully detected for the image 62F n and the light source 1LSe being unsuccessfully detected for the image 62F n .
  • the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n up to the image 62F n are captured may be higher than the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n up to the image 62F n are captured.
  • the corresponding probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the image 62F n+1 are captured is decreased from a higher value, and the corresponding probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the image 62F n+1 are captured is decreased from a lower value.
  • the stabilized image compositing effect sets in the second image sequence are more temporal flickering-smoothened than the image compositing effect sets in the third image sequence.
  • An example of the third image sequence is the image sequence 26F including the images 26F n and 26F n+1 described with reference to FIG. 2, and an example of the second image sequence is the image sequence 126F including the images 126F n and 126F n+1 described with reference to FIG. 12.
  • the stabilized image compositing effect 126BEc n+1 is generated for the image 64F n+1 in which the light source 1LSc is unsuccessfully detected.
  • a first curve of a relationship between a complete or partial detection history-dependent variable and an opacity variable is used to determine the corresponding opacity for the stabilized image compositing effect 126BEc n+1 .
  • the first curve is non-decreasing and having an increasing portion.
  • the first curve also has a plurality of points that divide a range between an upper bound and a lower bound of the opacity variable into more than two intervals. A number of the intervals is greater than a maximum number of intervals that one step can go across.
  • the first curve is the curve 1700 used to determine the corresponding opacity of the stabilized image compositing effect 126BEc n+1 using the probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n+1 up to the image 62F n+1 are captured.
  • a second step value described with reference to FIG. 18 is an example for one step that goes across three intervals of a range between an upper bound and a lower bound of the opacity variable.
  • the case where the light source 1LSc is in the image compositing effect desired state for the image 62F n+1 but is accidentally unsuccessfully detected for the image 62F n+1 needs to be differentiated from the case where the light source 1LSe is unsuccessfully detected for the images 62F n and 62F n+1 due to the light source 1LSe being in the captured OFF state when the images 62F n and 62F n+1 are captured, so that the second image sequence (which is also the image sequence 136F in FIG. 13) is not overly temporal flickering-smoothened.
  • the corresponding probabilities of each of the light source 1LSc and the light source 1LSe being in the image compositing effect desired state are tracked image by image based on whether detection is successful or unsuccessful.
  • the probability of the light source 1LSc being in the image compositing effect desired state when the images 62F 1 to 62F n up to the image 62F n are captured reflects that the light source 1LSc is more likely to be in the image compositing effect desired state for the image 62F n+1 , while the probability of the light source 1LSe being in the image compositing effect desired state when the images 62F 1 to 62F n up to the image 62F n are captured reflects that the light source 1LSe is less likely to be in the image compositing effect desired state for the image 62F n+1 .
  • the corresponding opacity for the stabilized image compositing effect 136BEe n+1 is less than the corresponding opacity for the stabilized image compositing effect 126BEc n+1 , which means the stabilized image compositing effects for the light source 1LSe gradually disappear when the light source 1LSe is in the captured OFF state.
  • The image compositing effect desired state is a state that is desired for each target in a target set in order to achieve a corresponding stabilized image compositing effect for each target.
  • the target set is the light source set.
  • Each stabilized image compositing effect set is the corresponding stabilized artificial bokeh effect set.
  • the image compositing effect desired state for any one of the light source set is that the any one of the light source set is in the captured ON state (i.e. in the ON state when the any one of the light source set is captured in a corresponding one of the first image sequence) .
  • Otherwise, the any one of the light source set is in the captured OFF state (i.e. in the OFF state when the any one of the light source set is captured in the corresponding one of the first image sequence) and is not in the image compositing effect desired state.
  • the target set is the face set.
  • Each stabilized image compositing effect set is a corresponding stabilized artificial face art sticker effect set.
  • An image compositing effect desired state for any one of the face set is that the any one of the face set is in a captured present state (i.e. present when the any one of the face set is captured in a corresponding one of the first image sequence) . Otherwise, the any one of the face set is not in the image compositing effect desired state.
  • an alternative embodiment where the probabilities are not used is as follows. As long as detection for a target is successful for the current image, the highest opacity is used for the current image. When detection for the target is unsuccessful for the current image, an opacity lower than the opacity for the previous image is used for the current image.
  • the second image sequence using the alternative embodiment is not as temporal flickering-smoothened as the second image sequence using the embodiment above, due to a less gradual change in opacity from an image for which detection is successful to an image for which detection is unsuccessful, but is still more temporal flickering-smoothened than the third image sequence. Furthermore, because the detection history is reset whenever detection for the target is successful, a determination that a detection failure is accidental, or is instead caused by the target not being in the image compositing effect desired state, is less certain. In this way, the second image sequence using the alternative embodiment is less certain to avoid being overly temporal flickering-smoothened than the second image sequence using the above embodiment.
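The alternative embodiment above can be illustrated by a minimal sketch; the function name, the maximum opacity, and the per-image decay step are assumptions for illustration and are not specified by the disclosure.

```python
def alternative_opacity(prev_opacity, detected, max_opacity=1.0,
                        decay_step=0.25):
    """Alternative scheme without probabilities: whenever detection for
    the target is successful for the current image, the highest opacity
    is used; whenever detection is unsuccessful, an opacity lower than
    the opacity for the previous image is used."""
    if detected:
        return max_opacity  # detection history is effectively reset
    return max(0.0, prev_opacity - decay_step)
```

Because a single success jumps straight back to the maximum opacity, the change in opacity around a detection failure is less gradual than with the probability-based embodiment, matching the comparison above.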
  • an alternative embodiment where the two steps in the step 506 are performed separately is as follows.
  • the stabilization result for all of the original images are first generated.
  • the corresponding stabilized image compositing effect sets are generated for the corresponding target region sets in the original images using the stabilization result.
  • an alternative embodiment where the step 504 is performed in combination with the step 506 is as follows. Detection for a current image is performed prior to the step 704 in FIG. 7, and after the steps 702 to 712 in the step 506 are completed for the current image, detection for a next image which now becomes a current image is performed.
  • FIG. 14 is a flowchart illustrating a probability increasing or decreasing step 706’ that implements the probability increasing or decreasing step 706 in FIG. 8 using a counter in accordance with an embodiment of the present disclosure.
  • an embodiment described with reference to FIG. 14 implements the step 814 using a step 1414 and implements the step 820 using a step 1420.
  • a first value is added to a corresponding counter for a first target to obtain a current value of the corresponding counter.
  • the current value of the counter is in the stabilization result for the current image.
  • the first target corresponds to the current target region in the detection result for the current image.
  • a second value is subtracted from a corresponding counter for a first target to obtain a current value of the corresponding counter.
  • the current value of the counter is in the stabilization result for the current image.
  • the first target corresponds to the current target region in the detection result for the current image.
  • FIG. 15 is a flowchart illustrating a probability initializing step 708’ that implements the probability initializing step 708 in FIG. 9 using a counter in accordance with an embodiment of the present disclosure.
  • the probability initializing step 708’ in FIG. 15 has a step 1506 that implements the step 906 in FIG. 9.
  • a corresponding counter for a third target is initialized in the stabilization result for the current image.
  • the third target corresponds to the target region in the stabilization result for the current image.
  • FIG. 16 is a flowchart illustrating a probability-dependent opacity determining step 710’ that implements the probability-dependent opacity determining step 710 in FIG. 10 using a counter in accordance with an embodiment of the present disclosure.
  • the probability-dependent opacity determining step 710’ in FIG. 16 has a step 1602 that implements the step 1002 in FIG. 10, and a step 1604 that implements the step 1004 in FIG. 10.
  • a corresponding counter for a fourth target is obtained from the stabilization result for the current image.
  • the fourth target corresponds to a current target region in the stabilization result for the current image.
  • a corresponding opacity for the current target region in the stabilization result for the current image is determined, in the stabilization result for the current image, using a value of the corresponding counter of the fourth target.
  • an alternative embodiment where the corresponding probabilities in the step 814 and the step 820 are obtained not using the counter that adds and subtracts is as follows.
  • Each of the corresponding probabilities in the step 814 and the step 820 is obtained by calculating a corresponding ratio of a number of first images for which detection for a target is successful in a plurality of second images up to a current image over a total number of the second images.
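The ratio-based computation above can be sketched as follows; representing the detection history as a list of per-image success flags is an illustrative choice rather than the structure used by the disclosure.

```python
def ratio_probability(detection_history):
    """Corresponding probability of a target being in the image
    compositing effect desired state: the ratio of the number of
    images for which detection of the target is successful over the
    total number of images up to the current image."""
    if not detection_history:
        return 0.0
    successes = sum(1 for detected in detection_history if detected)
    return successes / len(detection_history)
```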
  • FIG. 17 is a schematic diagram illustrating an example of the curve 1700 of the relationship between the first variable corresponding to a corresponding probability of each target being in the image compositing effect desired state when a plurality of images up to a current image are captured and the second variable corresponding to a corresponding opacity for the probability-dependent opacity determining step 710’ in FIG. 16 in accordance with an embodiment of the present disclosure.
  • the first variable corresponds to an axis for the corresponding values of the corresponding counter for each target
  • the second variable corresponds to an axis for the corresponding opacities for each target.
  • the value of the corresponding counter of the fourth target is looked up in the axis for the first variable in the curve 1700, and then the corresponding opacity for the current target region in the stabilization result for the current image is a value in the axis for the second variable in the curve 1700 that corresponds to the found value in the axis for the first variable in the curve 1700.
  • the curve 1700 of the relationship between the first variable corresponding to the corresponding probability of each target being in the image compositing effect desired state when the images up to the current image are captured and the second variable corresponding to the corresponding opacity has an increasing portion that is non-linear.
  • the increasing portion corresponds to a range of the first variable from three to seven.
  • the increasing portion may be linear.
  • the counter is clipped at an upper bound and a lower bound.
  • the first variable in the curve 1700 has a range from zero to eleven. Therefore, the upper bound of the counter is eleven. The lower bound of the counter is zero.
  • the reason for clipping the counter at the upper bound is as follows. When a target is in the image compositing effect desired state when many original images prior to an original image F n+1 (not illustrated in FIG. 17) are captured, and if the counter for the target is not clipped at the upper bound, the value of the counter for an original image F n-1 (not illustrated in FIG. 17) is very high.
  • Then, when the target is not in the image compositing effect desired state starting from the original image F n (not illustrated in FIG. 17) , it takes a long time for the counter to drop to three (which has a corresponding opacity of zero in the curve 1700) , and therefore it takes a long time for an image compositing effect to disappear for an original image in which the target is not in the image compositing effect desired state. A similar reason applies to the case where the counter is clipped at the lower bound. Alternatively, the counter is not clipped, but the first value in the step 1414 is progressively increased for continuous detection successes and the second value in the step 1420 is progressively increased for continuous detection failures.
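The mapping from a clipped counter value to an opacity along the curve 1700 can be sketched as follows. The piecewise-linear shape is an assumption for illustration; only the bounds (zero and eleven), the increasing portion (three to seven), and the zero-opacity point at a counter value of three are taken from the description above.

```python
def opacity_from_counter(counter):
    """Look up an opacity for a counter value on a piecewise-linear
    stand-in for the curve 1700: opacity is zero up to a counter value
    of three, increases linearly between three and seven, and saturates
    at the maximum from seven up to the upper bound eleven."""
    counter = max(0, min(11, counter))  # clip at lower and upper bounds
    if counter <= 3:
        return 0.0
    if counter >= 7:
        return 1.0
    return (counter - 3) / 4.0  # linear increasing portion
```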
  • FIG. 18 is an exemplary timing diagram 1800 for a counter for a target using a first step value for being added to the counter and a second step value for being subtracted from the counter in accordance with an embodiment of the present disclosure, wherein the first step value is less than the second step value.
  • the first value is equal to the first step value which is one.
  • the second value is equal to the second step value which is three, but the counter is clipped at zero. The reason for using the greater second step value is as follows.
  • the image compositing effects for the original images in which the target is not in the image compositing effect desired state need to gradually disappear. Even though the disappearance is gradual, it should not be too slow, because a user may notice the image compositing effects for the original images in which the target is not in the image compositing effect desired state. Therefore, the second step value is greater than the first step value so that the disappearance is not too slow.
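The counter behavior described with reference to FIG. 18 can be sketched as follows, using the first step value of one and the second step value of three stated above; the trajectory shown is illustrative.

```python
def update_counter(counter, detected, first_step=1, second_step=3,
                   lower=0, upper=11):
    """Add the smaller first step value on a detection success and
    subtract the larger second step value on a detection failure, so
    that stabilized image compositing effects appear gradually but do
    not disappear too slowly; the counter is clipped at both bounds."""
    counter = counter + first_step if detected else counter - second_step
    return max(lower, min(upper, counter))

# Eight successes build the counter up one step at a time; two
# failures then pull it down three steps at a time.
counter = 0
trajectory = []
for detected in [True] * 8 + [False, False]:
    counter = update_counter(counter, detected)
    trajectory.append(counter)
```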
  • an alternative embodiment keeps track of a history of a counter, and varies the first value in the step 1414 and the second value in the step 1420 based on the history of the counter.
  • FIG. 19 is a schematic diagram illustrating a probability increasing or decreasing step 706” further including at least one step 1915 for size smoothing in addition to the probability increasing or decreasing step 706 in FIG. 8 in accordance with an embodiment of the present disclosure.
  • the probability increasing or decreasing step 706” in FIG. 19 further includes a step 1915 in the YES path of the step 810.
  • a corresponding depth-related property of a first target region corresponding to the current target region in the detection result for the current image is set, in the stabilization result for the current image, to be a value obtained by averaging a plurality of corresponding depth-related properties of the corresponding target regions of a set of images in the original images, wherein the set of images includes the previous image and the current image.
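The averaging in the step 1915 can be sketched as follows; representing the depth-related property as a region size and using a two-image window (the previous image and the current image) are illustrative choices, and the window may include more images.

```python
def averaged_depth_property(properties):
    """Average the corresponding depth-related properties (e.g. light
    source region sizes) of the matched target region over a set of
    images ending at the current image; the averaged value replaces
    the current detection's property in the stabilization result so
    that rendered effect sizes do not appear to jump."""
    return sum(properties) / len(properties)

# Sizes of the matched region in the previous and current images.
smoothed_size = averaged_depth_property([40.0, 60.0])
```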
  • FIG. 20 is a schematic diagram illustrating a probability initializing step 708” with at least one step 2008 modified from at least one corresponding step 908 in the probability initializing step 708 in FIG. 9 to further include at least one corresponding portion for size smoothing in accordance with an embodiment of the present disclosure.
  • the probability initializing step 708” in FIG. 20 has the step 2008 modified from the step 908 to further include a portion to correspond to the probability increasing or decreasing step 706” in FIG. 19.
  • a corresponding position and a corresponding depth-related property of the target region corresponding to the current matching flag are set, in the stabilization result for the current image, to be the corresponding position and a corresponding depth-related property of the current target region in the detection result for the current image.
  • FIG. 21 is a schematic diagram illustrating an opacity-dependent stabilized image compositing effect generating step 712’ with at least one step 2102 and 2104 modified from at least one corresponding step 1102 and 1104 in the opacity-dependent stabilized image compositing effect generating step 712 in FIG. 11 to further include at least one corresponding portion for size smoothing in accordance with an embodiment of the present disclosure.
  • The opacity-dependent stabilized image compositing effect generating step 712’ in FIG. 21 has the steps 2102 and 2104 modified from the corresponding steps 1102 and 1104 to further include the corresponding portions corresponding to the probability increasing or decreasing step 706” in FIG. 19 and the probability initializing step 708” in FIG. 20.
  • In the step 2102, the corresponding position, the corresponding opacity, and the corresponding depth-related property of a current target region in the stabilization result for the current image are obtained.
  • a first stabilized image compositing effect of a first stabilized image compositing effect set of the stabilized image compositing effect sets is generated.
  • the first stabilized image compositing effect set corresponds to the current image
  • the first stabilized image compositing effect corresponds to the current target region in the stabilization result for the current image
  • the first stabilized image compositing effect is located at the corresponding position of the current target region in the stabilization result for the current image, is generated using the corresponding depth-related property of the current target region in the stabilization result for the current image, and has the corresponding opacity of the current target region in the stabilization result for the current image.
  • the target set including the targets is a light source set including a plurality of light sources.
  • Each stabilized image compositing effect set including the stabilized image compositing effects is a corresponding stabilized artificial bokeh effect set including a plurality of stabilized artificial bokeh effects.
  • the target set including the target is a face set including a plurality of faces.
  • Each stabilized image compositing effect set including the stabilized image compositing effects is a corresponding stabilized artificial face art sticker effect set including a plurality of stabilized artificial face art sticker effects.
  • FIG. 22 is a schematic diagram illustrating the original images 62F n-1 to 62F n+1 in which a plurality of corresponding light source regions 62LSRd n-1 to 62LSRd n+1 corresponding to the light source 1LSd (illustrated in FIG. 1) have a plurality of unstable corresponding depth-related properties (not illustrated in FIG. 6 for simplicity) and a plurality of images 226F n-1 to 226F n+1 resulting from size smoothing in FIGs. 19, 20, and 21.
  • a depth of the light source 1LSd appears to jump from being greater for the original image 62F n-1 to being less for the original image 62F n , and appears to jump again from being less for the original image 62F n to being greater for the original image 62F n+1 .
  • a corresponding depth-related property of a first light source region corresponding to the current light source region 64LSRd n in the detection result for the current image 62F n is set, in the stabilization result for the current image 62F n , to be a value obtained by averaging a plurality of corresponding depth-related properties of the corresponding light source regions 62LSRd n-1 and 62LSRd n of a set of images in a plurality of images 62F 1 to 62F n up to the current image 62F n , wherein the set of images includes the previous image 62F n-1 and the current image 62F n .
  • each depth-related property may be the corresponding size of the corresponding light source region 62LSRd n-1 or 62LSRd n .
  • each depth-related property may be the corresponding depth of the light source 1LSd for the corresponding original frame 62F n-1 or 62F n .
  • a number of images in the set of images is two.
  • a number of images in the set of images may be greater than two.
  • a first stabilized image compositing effect of a first stabilized image compositing effect set of the stabilized image compositing effect sets is generated.
  • the first stabilized image compositing effect set includes a plurality of stabilized image compositing effects corresponding to the light source regions 64LSRa n , 64LSRc n , 64LSRd n , and 64LSRe n in the stabilization result for the current image 62F n .
  • the first stabilized image compositing effect 226BEd n corresponds to the current light source region 64LSRd n in the stabilization result for the current image 62F n .
  • the first stabilized image compositing effect 226BEd n is located at the corresponding position of the current light source region 64LSRd n in the stabilization result for the current image 62F n , is generated using the corresponding depth-related property of the current light source region 64LSRd n in the stabilization result for the current image 62F n , and has the corresponding opacity of the current light source region 64LSRd n in the stabilization result for the current image 62F n .
  • the stabilization result to be used to cause the first stabilized image compositing effect 226BEd n to be generated is built in the step 1915.
  • the first stabilized image compositing effect 226BEd n is generated using the stabilization result in the steps 2102 and 2104.
  • the step 2008 is performed similarly to the step 908 for the light source region 64LSRb n+1 , and its description is omitted here.
  • Because depth-related property averaging is performed in the step 1915, the corresponding sizes of the stabilized image compositing effects 226BEd n-1 and 226BEd n do not appear to jump and are therefore size-smoothened. Similarly, the depth-related property averaging performed in the step 1915 causes the corresponding sizes of the stabilized image compositing effects 226BEd n and 226BEd n+1 not to appear to jump again, and they are therefore size-smoothened.
  • an opacity may be logically equivalently replaced by a transparency.
  • increasing a probability of a target being in an image compositing effect desired state may be logically equivalently replaced by decreasing a probability of a target being not in the image compositing effect desired state.
  • the curve of the relationship between the first variable and the second variable may be non-increasing, in which case the counter is initialized to have the maximum value, and is added to and subtracted from in an opposite manner.
  • the disclosed system, device, and computer-implemented method in the embodiments of the present disclosure can be realized in other ways.
  • the above-mentioned embodiments are exemplary only.
  • the division of the units or modules is merely based on logical functions while other divisions exist in realization.
  • the units or modules may or may not be physical units or modules. It is possible that a plurality of units or modules are combined or integrated into one physical unit or module. It is also possible that any of the units or modules is divided into a plurality of physical units or modules. It is also possible that some characteristics are omitted or skipped.
  • the displayed or discussed mutual coupling, direct coupling, or communicative coupling may operate through certain ports, devices, units, or modules, whether indirectly or communicatively, by way of electrical, mechanical, or other forms.
  • the units or modules as separating components for explanation are or are not physically separated.
  • the units or modules are located in one place or distributed on a plurality of network units or modules. Some or all of the units or modules are used according to the purposes of the embodiments.
  • each of the functional units or modules in each of the embodiments can be integrated in one processing unit or module, physically independent, or integrated in one processing unit or module with two or more than two units or modules.
  • the software function unit or module is realized and used and sold as a product, it can be stored in a computer readable storage medium.
  • the technical plan proposed by the present disclosure can be essentially or partially realized as the form of a software product.
  • one part of the technical plan beneficial to the conventional technology can be realized as the form of a software product.
  • the software product is stored in a computer readable storage medium, including a plurality of commands for a processor module of a computational device (such as a personal computer, a mobile phone) to run all or some of the steps disclosed by the embodiments of the present disclosure.
  • the storage medium includes a USB disk, a mobile hard disk, a read-only memory (ROM) , a random access memory (RAM) , a floppy disk, or other kinds of media capable of storing program instructions.
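
The size-smoothing behavior described in the bullets above (averaging a depth-related property such as effect size over adjacent frames so that it does not appear to jump) can be sketched as follows. This is a minimal illustration only; the function name and the sliding-window-mean formulation are assumptions, not taken from the application itself:

```python
def smooth_effect_sizes(sizes, window=3):
    """Temporally smooth per-frame effect sizes with a sliding-window mean.

    sizes  : list of per-frame sizes of an image compositing effect
    window : number of neighboring frames averaged together (odd)
    """
    smoothed = []
    n = len(sizes)
    for i in range(n):
        lo = max(0, i - window // 2)          # clamp window at sequence start
        hi = min(n, i + window // 2 + 1)      # clamp window at sequence end
        smoothed.append(sum(sizes[lo:hi]) / (hi - lo))
    return smoothed
```

A single-frame spike (e.g. a detection glitch) is spread over its neighbors instead of producing a visible jump in the effect size between consecutive frames.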

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

Method, system, and computer-readable medium for image sequences. The method comprises: obtaining a first image sequence in which a target is captured, the first image sequence comprising a plurality of original images, with a plurality of corresponding target regions corresponding to the target being in the original images; detecting the corresponding target regions in the original images to obtain a detection result; performing stabilization using the detection result to obtain a stabilization result; and generating stabilized image compositing effects using the stabilization result to obtain a second image sequence, such that the stabilized image compositing effects are more temporal-flickering-smoothened than a plurality of image compositing effects in a third image sequence, the third image sequence being obtained in the same way as the second image sequence except that the image compositing effects are generated using the detection result rather than the stabilization result.
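
The claimed pipeline (detect the target regions, stabilize the detections, then generate the compositing effects from the stabilization result rather than from the raw detections) can be sketched as below. All function names are hypothetical placeholders, not identifiers from the application:

```python
def process_sequence(frames, detect, stabilize, composite):
    """Produce a stabilized-effect image sequence from an original one.

    detect    : frame -> target region detected in that frame
    stabilize : list of detections -> temporally smoothed detections
    composite : (frame, stabilized detection) -> frame with effect applied
    """
    detections = [detect(frame) for frame in frames]       # detection result
    stabilized = stabilize(detections)                     # stabilization result
    # Effects are generated from the stabilization result, not the raw
    # per-frame detections, which suppresses frame-to-frame flicker.
    return [composite(frame, s) for frame, s in zip(frames, stabilized)]
```

The third image sequence of the abstract corresponds to calling `composite` on `detections` directly, skipping the `stabilize` step.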
PCT/CN2020/074458 2020-02-06 2020-02-06 Method, system, and computer-readable medium for generating stabilized image compositing effects for image sequences WO2021155549A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080095961.9A CN115066881B (zh) 2020-02-06 2020-02-06 Method, system, and computer-readable medium for generating stabilized image compositing effects for image sequences
PCT/CN2020/074458 WO2021155549A1 (fr) 2020-02-06 2020-02-06 Method, system, and computer-readable medium for generating stabilized image compositing effects for image sequences

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/074458 WO2021155549A1 (fr) 2020-02-06 2020-02-06 Method, system, and computer-readable medium for generating stabilized image compositing effects for image sequences

Publications (1)

Publication Number Publication Date
WO2021155549A1 true WO2021155549A1 (fr) 2021-08-12

Family

ID=77199700

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/074458 WO2021155549A1 (fr) 2020-02-06 2020-02-06 Method, system, and computer-readable medium for generating stabilized image compositing effects for image sequences

Country Status (2)

Country Link
CN (1) CN115066881B (fr)
WO (1) WO2021155549A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106550243A (zh) * 2016-12-09 2017-03-29 Wuhan Douyu Network Technology Co., Ltd. Live streaming video processing method and apparatus, and electronic device
CN107730460A (zh) * 2017-09-26 2018-02-23 Vivo Mobile Communication Co., Ltd. Image processing method and mobile terminal
WO2019070299A1 (fr) * 2017-10-04 2019-04-11 Google Llc Depth estimation using a single camera
CN110363702A (zh) * 2019-07-10 2019-10-22 Oppo (Chongqing) Intelligent Technology Co., Ltd. Image processing method and related product
CN110363814A (zh) * 2019-07-25 2019-10-22 Oppo (Chongqing) Intelligent Technology Co., Ltd. Video processing method and apparatus, electronic device, and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7911513B2 (en) * 2007-04-20 2011-03-22 General Instrument Corporation Simulating short depth of field to maximize privacy in videotelephony
US8306283B2 (en) * 2009-04-21 2012-11-06 Arcsoft (Hangzhou) Multimedia Technology Co., Ltd. Focus enhancing method for portrait in digital image
CN107820019B (zh) * 2017-11-30 2020-03-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Blurred image acquisition method, apparatus, and device


Also Published As

Publication number Publication date
CN115066881A (zh) 2022-09-16
CN115066881B (zh) 2023-11-14

Similar Documents

Publication Publication Date Title
CN109783178B (zh) Color adjustment method, apparatus, device, and medium for an interface component
US8848059B2 (en) Systems and methods for receiving infrared data with a camera designed to detect images based on visible light
US9672764B2 (en) Liquid crystal display device
US20180204524A1 (en) Controlling brightness of an emissive display
EP2245594B1 (fr) Flash detection
KR20190107217A (ko) Image-adaptive tone mapping method and display device employing the same
KR20210006276A (ko) Image processing method for attenuating flicker
CN102111640B (zh) Image effective region detection method and system
US9503609B2 (en) Data-generating device, data-generating method, data-generating program and recording medium
CN109493831B (zh) Image signal processing method and apparatus
CN103856720A (zh) Image processing device and method
CN113053324A (zh) Backlight control method, apparatus, device, system, and storage medium
WO2021155549A1 (fr) Method, system, and computer-readable medium for generating stabilized image compositing effects for image sequences
CN108564923B (zh) High-dynamic-contrast image display method and apparatus based on zoned backlight
US10210816B2 (en) Image display apparatus and method for dimming light source
US9832395B2 (en) Information processing method applied to an electronic device and electronic device having at least two image capturing units that have the same image capturing direction
CN101385027A (zh) Metadata generation method and apparatus
JP6180135B2 (ja) Image display device and control method therefor
WO2020215227A1 (fr) Method and system for detecting non-false motion patterns
CN100454972C (zh) 3D noise reduction method for video images
Kerofsky et al. Improved adaptive video delivery system using a perceptual pre-processing filter
CN113391779B (zh) Parameter adjustment method, apparatus, and device for a paper-like screen
CN115334250A (zh) Image processing method, apparatus, and electronic device
US20210366420A1 (en) Display method and device, and storage medium
CN114005059A (zh) Video transition detection method, apparatus, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20918060

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20918060

Country of ref document: EP

Kind code of ref document: A1