CN117560580B - Smooth filtering method and system applied to multi-camera virtual shooting - Google Patents


Info

Publication number
CN117560580B
CN117560580B
Authority
CN
China
Prior art keywords
filter
images
fused
image
input condition
Prior art date
Legal status: Active
Application number
CN202311510958.0A
Other languages
Chinese (zh)
Other versions
CN117560580A (English)
Inventor
姚平
李子清
谢超平
李怡
钟义啸
蒋涵
佘俊
郑慧明
Current Assignee
Sichuan Xinshi Chuangwei Ultra High Definition Technology Co ltd
Original Assignee
Sichuan Xinshi Chuangwei Ultra High Definition Technology Co ltd
Application filed by Sichuan Xinshi Chuangwei Ultra High Definition Technology Co ltd filed Critical Sichuan Xinshi Chuangwei Ultra High Definition Technology Co ltd
Priority to CN202311510958.0A priority Critical patent/CN117560580B/en
Publication of CN117560580A publication Critical patent/CN117560580A/en
Application granted granted Critical
Publication of CN117560580B publication Critical patent/CN117560580B/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/95: Computational photography systems, e.g. light-field imaging systems
    • H04N 23/951: Computational photography systems by using two or more images to influence resolution, frame rate or aspect ratio
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • H04N 1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/387: Composing, repositioning or otherwise geometrically modifying originals
    • H04N 1/3876: Recombination of partial images to recreate the original image


Abstract

The invention provides a smoothing filtering method and a filtering system applied to multi-camera virtual shooting. Images to be fused are processed to obtain and output fused images that contain fusion areas; the fusion areas of the images are then traversed and filtered according to input conditions to obtain filtered images.

Description

Smooth filtering method and system applied to multi-camera virtual shooting
Technical Field
The invention relates to the technical field of image processing, in particular to a smoothing filtering method and a smoothing filtering system applied to multi-camera virtual shooting.
Background
In multi-camera image fusion, images shot under different viewing angles and conditions differ, so the fusion area must be smoothed by filtering to achieve a continuous, natural result.
Conventional methods typically use a fixed filter, which cannot adapt to the requirements of different fusion areas and may lead to the following problems:
1. Processing every fusion area with the same fixed filter can over-smooth the image, losing detail, producing a blurred appearance, and reducing visual realism.
2. Some fusion areas contain important detail such as texture and edges; a fixed filter struggles to preserve detail while smoothing, so detail is often lost and the image appears less sharp.
3. Excessive smoothing or an improper filter choice can blur edges, especially where the fusion area has pronounced edge structure; a fixed filter can rarely achieve both edge sharpness and smoothness.
Therefore, it is necessary to provide a smoothing filtering method and a filtering system for multi-camera virtual shooting to solve the above technical problems.
Disclosure of Invention
To solve these technical problems, the invention provides a smoothing filtering method and a filtering system applied to multi-camera virtual shooting that dynamically select, according to input conditions, a filter suited to each pixel of the fusion area, achieving a more accurate and efficient image fusion result that preserves detail and edge sharpness while improving visual realism and quality.
The invention provides a smoothing filtering method applied to multi-camera virtual shooting, which comprises the following steps:
S1: processing the images to be fused to obtain fused images and outputting the fused images, wherein the fused images comprise a fusion area;
S2: traversing and filtering the fusion area of the images based on the input conditions to obtain filtered images.
Preferably, the step S1 includes:
S101: selecting the images to be fused, wherein the images to be fused capture the same scene from different shooting viewing angles;
S102: extracting feature information from the images to be fused;
S103: aligning the images from different shooting viewing angles to a reference viewing angle based on the extracted feature information, to obtain aligned images;
S104: fusing the aligned images to obtain fused images.
Preferably, the input conditions include input condition one, input condition two, input condition three, and input condition four, wherein:
input condition one is smoothness;
input condition two is noise type, where the noise types include Gaussian noise, salt-and-pepper noise, and speckle noise;
input condition three is texture requirement, i.e., whether texture details need to be preserved;
input condition four is feature similarity, where the features include spatial distance, color similarity, texture features, and edge information.
Preferably, the step S2 includes:
S201: representing the pixels of the fusion area by a two-dimensional matrix to obtain matrix elements corresponding one-to-one with the pixels;
S202: traversing the matrix elements, and selecting one of smoothness, noise type, and texture requirement as the input condition during the traversal;
S203: based on the selected input condition, matching a filter from the filter tool and processing the matrix elements with the matched filter;
S204: based on feature similarity, comparing already-filtered matrix elements with the current matrix element, and matching the matrix element that satisfies the feature similarity from among the filtered matrix elements;
S205: processing the current matrix element with the filter used by the matched matrix element.
Preferably, the filter tool comprises a Gaussian filter, a bilateral filter, and a mean shift filter.
Preferably, the step S203 includes:
S203a: if the input condition is smoothness and the required smoothness is not less than the smoothness threshold, a Gaussian filter is selected to process the matrix elements,
if the required smoothness is less than the smoothness threshold, a bilateral filter or a mean shift filter is selected to process the matrix elements;
S203b: if the input condition is noise type and the matrix elements are affected by Gaussian noise, a Gaussian filter is selected to process the matrix elements,
if the matrix elements are affected by salt-and-pepper noise and/or speckle noise, a bilateral filter or a mean shift filter is selected to process the matrix elements;
S203c: if the input condition is texture requirement and texture detail needs to be preserved, a mean shift filter is selected to process the matrix elements,
if texture detail does not need to be preserved, a bilateral filter or a Gaussian filter is selected to process the matrix elements.
The invention also provides a smoothing filtering system applied to multi-camera virtual shooting, which comprises:
the image processing module is used for processing the images to be fused, obtaining fused images and outputting the fused images, wherein the fused images comprise fusion areas;
and the filtering processing module is used for traversing and filtering the fusion area of the image based on the input condition to obtain a filtered image.
Preferably, the image processing module includes:
the image selection module is used for selecting images to be fused, wherein the images to be fused have different shooting visual angles for the same scene;
the feature extraction module is used for extracting feature information of the images to be fused;
the image alignment module is used for aligning the images with different shooting angles to a reference angle based on the extracted characteristic information to obtain an aligned image;
And the image fusion module is used for fusing the aligned images to obtain fused images.
Preferably, the filtering processing module includes:
The image matrix conversion module is used for representing the pixels of the fusion area by utilizing a two-dimensional matrix to obtain matrix elements corresponding to the pixels one by one;
the input condition selection module is used for traversing the matrix elements and selecting one of smoothness, noise type and texture requirement as an input condition in the traversing process;
A filter selection module for matching filters from the filter tool based on the selected input conditions and processing the matrix elements with the matched filters;
the characteristic similarity comparison module is used for comparing the filtered matrix elements with the current matrix elements based on the characteristic similarity and matching the matrix elements meeting the characteristic similarity from the filtered matrix elements;
and the equivalent filtering module is used for processing the current matrix element by selecting a filter used by the matched matrix element.
Compared with the related art, the smoothing filtering method and filtering system applied to multi-camera virtual shooting have the following beneficial effects:
1. According to the input conditions of smoothness requirement, noise type, and texture requirement, a suitable filter is matched for each pixel of the image's fusion area from among a Gaussian filter, a bilateral filter, and a mean shift filter, and filtering is performed with the selected filter, so that custom filtering is carried out on demand and different degrees of smoothing are achieved.
2. The invention sets a feature similarity condition: when traversing fusion-area pixels, every pixel except the first is compared with already-filtered pixels before being filtered, and the current pixel is processed with the filter used by a pixel that satisfies the feature similarity condition. The filter type is thus selected dynamically from the inter-pixel feature similarity and the input conditions, improving computational efficiency and processing accuracy while ensuring image quality and consistency.
Drawings
FIG. 1 is a flowchart of a smoothing filtering method applied to multi-camera virtual shooting;
fig. 2 is a flowchart of step S1 of a smoothing filtering method applied to multi-camera virtual shooting provided by the present invention;
fig. 3 is a flowchart of step S2 of a smoothing filtering method applied to multi-camera virtual shooting provided by the present invention;
FIG. 4 is another flowchart of a smoothing filtering method applied to multi-camera virtual shooting according to the present invention;
Fig. 5 is a schematic block diagram of a smoothing filter system applied to multi-camera virtual shooting according to the present invention.
Detailed Description
The invention will be further described with reference to the drawings and embodiments.
Among image filtering methods, Gaussian filtering reduces high-frequency noise by convolving the image with a Gaussian kernel; the filter uses the Gaussian distribution as its weighting coefficients and computes a weighted sum of surrounding pixels, producing a smoothing effect. Gaussian filtering is simple and fast and can effectively blur the image, but may cause some loss of detail.
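As a minimal sketch (not the patent's implementation), the weighted sum described above can be written in Python with NumPy; the kernel size and sigma below are arbitrary illustrative choices:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def gaussian_filter(img, size=5, sigma=1.0):
    """Convolve a 2-D grayscale image with a Gaussian kernel (edge-replicated)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # weighted sum of the neighborhood around pixel (i, j)
            out[i, j] = np.sum(padded[i:i+size, j:j+size] * k)
    return out
```

Applied to a noisy image, the output's variance drops, which is exactly the smoothing (and detail-loss) trade-off the text describes.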
Bilateral filtering is a nonlinear method that considers two factors: spatial distance and pixel-value difference. When averaging pixels, the filter weighs both the distance of surrounding pixels and their gray-level or color similarity, so it can smooth the image while preserving edge information; it is suitable for smoothing images while retaining edge and texture details.
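A naive sketch of the two-factor weighting (spatial and range sigmas are illustrative assumptions, not values from the patent):

```python
import numpy as np

def bilateral_filter(img, size=5, sigma_s=2.0, sigma_r=0.2):
    """Naive bilateral filter for a 2-D grayscale image in [0, 1].

    Each output pixel is a weighted mean of its neighborhood, where the weight
    combines spatial distance (sigma_s) and intensity difference (sigma_r),
    so pixels across a strong edge contribute almost nothing.
    """
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    ax = np.arange(size) - pad
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i+size, j:j+size]
            # range weight: penalize neighbors with very different intensity
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = np.sum(patch * wgt) / np.sum(wgt)
    return out
```

On a step edge (left half 0, right half 1), the output stays close to 0 and 1 on either side instead of blurring across the boundary, which is the edge-preserving behavior described above.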
Mean shift filtering is a non-parametric method based on density estimation. By iteratively shifting each pixel from its current position toward the position of maximum local density, the mean shift filter smooths the image while effectively removing noise and preserving edge information and texture details.
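A simplified sketch of the iterative shift toward the local density mode, operating in intensity space only (a full implementation would work in joint spatial-color space; bandwidth and iteration count are illustrative assumptions):

```python
import numpy as np

def mean_shift_filter(img, size=5, sigma_r=0.15, n_iter=5):
    """Simplified mean shift filtering of a 2-D grayscale image in [0, 1].

    For each pixel, the intensity is repeatedly replaced by the mean of those
    neighborhood pixels lying within sigma_r of the current value, i.e. it is
    shifted toward the local density mode. Flat regions are smoothed while
    values across an edge stay outside the window and are ignored.
    """
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            v = out[i, j]
            patch = padded[i:i+size, j:j+size].ravel()
            for _ in range(n_iter):
                near = patch[np.abs(patch - v) <= sigma_r]
                if near.size == 0:
                    break
                new_v = near.mean()
                if abs(new_v - v) < 1e-6:
                    break
                v = new_v
            out[i, j] = v
    return out
```

On a noisy step image the noise variance within each flat half shrinks while the step itself survives, matching the "remove noise, keep edges and texture" property claimed for this filter.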
Example 1
The invention provides a smoothing filtering method applied to multi-camera virtual shooting, which is shown with reference to fig. 1 and 4 and comprises the following steps:
S1: and processing the images to be fused to obtain fused images and outputting the fused images, wherein the fused images comprise a fusion area.
In step S1, images with different shooting viewing angles are fused into a smooth, continuous scene. The fused scene is divided into independent areas and fusion areas; in this application, smoothing filtering must be applied to the fusion areas so that they transition smoothly, reducing noise and artifacts and improving image quality.
For example: the method comprises the steps of fusing a left view image and a right view image of the same scene, finding corresponding points between the left view image and the right view image by using a feature matching algorithm, and extracting image features such as corner points and edges and matching, wherein the feature matching algorithm can be used for detecting and matching the image feature points by using the algorithms such as SIFT, SURF or ORB.
According to the position relation of the corresponding points, the position of the region to be fused in the left view image in the right view image is calculated, the parallax of the corresponding point between the left view image and the right view image is obtained through parallax calculation, and the parallax is applied to the fused region in the left view image to determine the position in the right view image.
And fusing the fusion area in the left view image and the right view image, and seamlessly embedding the fusion area into the right image by using an image patching algorithm according to the characteristics of the fusion area and the geometric relationship between the corresponding points to form a fused image.
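As a toy illustration of the disparity step (using simple sum-of-squared-differences block matching along a row rather than SIFT/SURF/ORB feature matching; window size and search range are arbitrary choices):

```python
import numpy as np

def find_disparity(left, right, y, x, patch=3, max_d=6):
    """Find the horizontal disparity of pixel (y, x) of the left image in the
    right image by block matching: slide a (2*patch+1)^2 window along the same
    row of the right image and return the offset with the smallest sum of
    squared differences."""
    h, w = left.shape
    tpl = left[y-patch:y+patch+1, x-patch:x+patch+1]
    best_d, best_cost = 0, np.inf
    for d in range(-max_d, max_d + 1):
        xx = x + d
        if xx - patch < 0 or xx + patch + 1 > w:
            continue  # candidate window would leave the image
        cand = right[y-patch:y+patch+1, xx-patch:xx+patch+1]
        cost = np.sum((tpl - cand) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

If the right view is the left view shifted by 3 pixels, the function recovers a disparity of 3, which could then be applied to place the fusion area in the right image as the text describes.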
Specifically, referring to fig. 2, step S1 specifically includes:
S101: and selecting images to be fused, wherein the images to be fused have different shooting visual angles for the same scene.
Specifically, during shooting, multiple images may be captured at the same viewing angle; one image must be selected from them as the image to be fused, and one such image is selected in turn for every viewing angle, yielding the images to be fused for all viewing angles of the scene.
S102: and extracting characteristic information of the images to be fused.
Specifically, the feature extraction is performed on the image of each view angle, and methods such as, but not limited to, corner or edge detection may be used, and these extracted features may be used for subsequent alignment and fusion.
S103: and aligning the images with different shooting angles to a reference angle based on the extracted characteristic information to obtain an aligned image.
Specifically, a reference viewing angle is selected as the alignment target according to the scene; matching points, feature points, or feature descriptors between images are found; a RANSAC image alignment algorithm computes the geometric transformation between the images; and, based on the computed transformation, the images from other viewing angles undergo matrix transformations (rotation, translation, scaling, and other operations) to align them to the reference viewing angle.
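The RANSAC idea can be sketched on the simplest transform, a pure 2-D translation, to keep the example short (a real alignment pipeline would estimate a homography; iteration count and tolerance are illustrative assumptions):

```python
import numpy as np

def ransac_translation(src, dst, n_iter=100, tol=1.0, seed=0):
    """Estimate a 2-D translation mapping src points to dst points with RANSAC.

    Each iteration hypothesizes the translation from one random correspondence,
    counts how many correspondences agree within `tol` (the inliers), and keeps
    the hypothesis with the most support; the result is refit on the inliers so
    that outlier (mismatched) correspondences are ignored.
    """
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_t, best_mask = None, None
    for _ in range(n_iter):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                        # hypothesis from one pair
        err = np.linalg.norm(src + t - dst, axis=1)
        mask = err < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_t, best_mask = t, mask
    best_t = (dst[best_mask] - src[best_mask]).mean(axis=0)  # refine on inliers
    return best_t, best_mask
```

With a handful of corrupted correspondences mixed in, the estimate still recovers the true offset, which is why RANSAC is the standard choice when feature matches contain mismatches.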
S104: and fusing the aligned images to obtain fused images.
Specifically, the aligned images are fused by a weighted average method to generate a composite result; because the smoothness of this fused image does not yet meet requirements, subsequent filtering is needed to obtain a smoother image.
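One common form of the weighted average (shown here as an assumed linear horizontal ramp across the overlap strip, which the patent does not specify):

```python
import numpy as np

def blend_overlap(left, right):
    """Fuse two aligned, same-size overlap strips by weighted averaging with a
    linear horizontal ramp: the weight slides from all-left at the left edge to
    all-right at the right edge, giving a gradual transition."""
    h, w = left.shape
    alpha = np.linspace(1.0, 0.0, w)        # per-column weight of `left`
    return left * alpha + right * (1.0 - alpha)
```

Blending a zero strip with a ones strip of width 5 yields the ramp [0, 0.25, 0.5, 0.75, 1.0] in each row; the seam is softened, but as the text notes, a simple ramp alone is usually not smooth enough, hence the filtering stage that follows.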
S2: traversing and filtering the fusion area of the processed image based on the input condition to obtain a filtered image.
In step S2, the input conditions comprise input condition one (smoothness); input condition two (noise type, where the noise types include Gaussian noise, salt-and-pepper noise, and speckle noise); input condition three (texture requirement, i.e., whether texture details need to be preserved); and input condition four (feature similarity, where the features comprise spatial distance, color similarity, texture features, and edge information).
Specifically, if a stronger smoothing effect is required, gaussian filtering can be selected;
if it is desired to smooth the image while preserving the edge information, bilateral filtering or mean shift filtering may be selected.
If the image is affected by Gaussian noise, gaussian filtering can be selected;
If the image is affected by salt and pepper noise or speckle noise, bilateral filtering or mean shift filtering can be selected.
If the texture details of the image need to be preserved, mean shift filtering may be selected.
Specifically, referring to fig. 3, step S2 specifically includes:
s201: and representing the pixels of the fusion area by using a two-dimensional matrix to obtain matrix elements corresponding to the pixels one by one.
Specifically, a two-dimensional pixel matrix is used to represent the fusion area of the image: the number of rows and columns of the matrix corresponds to the height and width of the area, and each matrix element is a pixel. During filtering, traversing the matrix elements gives access to each pixel's position and color information.
S202: the matrix elements are traversed and one of the smoothness, noise type and texture requirements is selected as an input condition during the traversal.
Specifically, while traversing the matrix elements, one of the input conditions is selected according to need: if the user is concerned with smoothness, the smoothness condition is selected; if concerned with noise type, the noise-type condition is selected; and if concerned with texture, the texture-requirement condition is selected.
S203: based on the selected input conditions, filters are matched from the filter tool and matrix elements are processed with the matched filters.
The filter tool comprises three filter types, namely a Gaussian filter, a bilateral filter, and a mean shift filter, corresponding one-to-one with input conditions one, two, and three.
Specifically, step S203 specifically includes:
S203a: if the input condition is smoothness and the required smoothness is not less than the smoothness threshold, a Gaussian filter is selected to process matrix elements,
Otherwise, if the required smoothness is less than the smoothness threshold, a bilateral filter or a mean shift filter is selected to process the matrix element.
Specifically, while traversing the matrix elements, if the input condition is smoothness and the required smoothness is not less than the smoothness threshold, a Gaussian filter is selected: by adjusting its parameters, the Gaussian filter controls the degree of smoothing, removing noise and reducing detail so that the image or data becomes smoother.
Conversely, if the required smoothness is less than the smoothness threshold, Gaussian filtering is unnecessary and either a bilateral filter or a mean shift filter is selected. The bilateral filter preserves edge information and reduces noise by accounting for both spatial distance and pixel-value difference, retaining detail while denoising; the mean shift filter preserves edge information by computing the mean of sample points in color space and locally adjusting samples toward that mean, thereby achieving smoothing.
S203b: if the input condition is noise type and the matrix element is affected by Gaussian noise, a Gaussian filter is selected to process the matrix element,
And if the matrix element is affected by the salt and pepper noise and/or the speckle noise, selecting a bilateral filter or a mean shift filter to process the matrix element.
Specifically, while traversing the matrix elements, if the input condition is noise type and the matrix elements are affected by Gaussian noise, a Gaussian filter is selected, because it effectively reduces Gaussian noise and makes the image or data clearer and smoother.
On the other hand, if the matrix elements are affected by salt and pepper noise and/or speckle noise, one of the bilateral filter or the mean shift filter is selected to process the matrix elements.
S203c: if the input condition is texture requirement and texture detail needs to be preserved, a mean shift filter is selected to process the matrix elements,
If texture detail is not required to be reserved, a bilateral filter or a Gaussian filter is selected to process matrix elements.
Specifically, while traversing the matrix elements, if the input condition is texture requirement and texture details are to be preserved, the mean shift filter is selected, because it can smooth the image or data while preserving edge and texture details, so the result is smooth yet retains important texture features.
On the other hand, if texture detail is not required to be preserved, a bilateral filter or a gaussian filter is selected to process the matrix elements.
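The selection rules of steps S203a to S203c can be expressed as a small lookup function. Where a rule allows either of two filters, the sketch below simply takes the bilateral filter, an arbitrary tie-break that the patent does not specify:

```python
def select_filter(condition, value):
    """Map an input condition to a filter name per steps S203a-S203c.

    condition: "smoothness" (value = (required, threshold)),
               "noise"      (value = set of noise types present), or
               "texture"    (value = True if texture detail must be kept).
    """
    if condition == "smoothness":
        required, threshold = value
        # S203a: strong smoothing -> Gaussian, otherwise edge-preserving
        return "gaussian" if required >= threshold else "bilateral"
    if condition == "noise":
        # S203b: Gaussian noise -> Gaussian filter; impulse-like noise -> other
        if "gaussian" in value:
            return "gaussian"
        if value & {"salt_and_pepper", "speckle"}:
            return "bilateral"
        raise ValueError("unknown noise type")
    if condition == "texture":
        # S203c: keep texture -> mean shift, otherwise bilateral or Gaussian
        return "mean_shift" if value else "bilateral"
    raise ValueError("unknown input condition")
```

During the traversal of step S202, each matrix element would call this function with whichever single condition the user selected.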
S204: based on the feature similarity, comparing the filtered matrix element with the current matrix element, and matching the matrix element meeting the feature similarity from the filtered matrix elements.
S205: and selecting a filter used by the matched matrix element to process the current matrix element.
In steps S204 and S205, to ensure image quality and consistency while improving computational efficiency and processing accuracy, the feature similarity between the pixel currently to be filtered and the already-filtered pixels is compared; if the feature similarity meets the requirement, the current pixel is simply processed with the filter used by the pixel that satisfies the condition.
In the comparison, several features are compared: spatial distance, color similarity, texture features, and edge information. Specifically:
Spatial distance: compare the spatial distance between two pixels; if they are close, they can be considered to belong to the same region and smoothed with the same filter.
Color similarity: compare the color values of two pixels; if they are very close, the pixels are visually similar and may be smoothed with the same filter.
Texture features: compare the texture features of two pixels; if they are similar, the pixels are consistent in texture and can be smoothed with the same filter.
According to the comparison results, the same filter can be used to process the current pixel as long as any one of these conditions is satisfied.
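This reuse rule can be sketched as a gate function. The thresholds are illustrative assumptions, and edge information is omitted for brevity; each pixel is modeled as a plain dict, which is not the patent's data structure:

```python
import math

def reuse_filter(current, filtered, d_max=2.0, c_max=0.05, t_max=0.1):
    """Decide whether an already-filtered pixel's filter can be reused for the
    current pixel. `filtered` carries position (x, y), intensity, a texture
    score, and the name of the filter applied to it. Reuse is allowed as soon
    as any one similarity condition holds; thresholds are assumptions."""
    dx = current["x"] - filtered["x"]
    dy = current["y"] - filtered["y"]
    if math.hypot(dx, dy) <= d_max:                                  # spatial
        return filtered["filter"]
    if abs(current["intensity"] - filtered["intensity"]) <= c_max:   # color
        return filtered["filter"]
    if abs(current["texture"] - filtered["texture"]) <= t_max:       # texture
        return filtered["filter"]
    return None  # no condition met: fall back to full filter matching (S203)
```

Returning None signals that the current pixel must go through the ordinary condition-based filter matching instead of inheriting a neighbor's filter.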
The working principle of the smoothing filtering method applied to multi-camera virtual shooting is as follows: according to the input conditions of smoothness requirement, noise type, and texture requirement, a suitable filter is matched from among a Gaussian filter, a bilateral filter, and a mean shift filter for each pixel of the image's fusion area, and filtering is performed with the selected filter, enabling custom filtering on demand and different degrees of smoothing. Meanwhile, the invention sets a feature similarity condition: when traversing fusion-area pixels, every pixel except the first is compared with already-filtered pixels before being filtered, and the current pixel is processed with the filter used by a pixel that satisfies the condition. The filter type is thus selected dynamically from the inter-pixel feature similarity and the input conditions, improving computational efficiency and processing accuracy while ensuring image quality and consistency.
Example two
The invention also provides a smoothing filtering system applied to multi-camera virtual shooting, which, referring to fig. 5, comprises:
the image processing module is used for processing the images to be fused, obtaining fused images and outputting the fused images, wherein the fused images comprise fusion areas.
Specifically, the image processing module is specifically configured to fuse images with different shooting angles together to form a smooth and continuous scene, where the fused scene is divided into independent areas and fusion areas.
In this embodiment, the image processing module includes:
The image selection module is used for selecting images to be fused, and the images to be fused have different shooting visual angles for the same scene.
Specifically, during shooting multiple images may be captured at the same viewing angle; the image selection module selects one image to be fused from among them, and does so in turn for every viewing angle, yielding the images to be fused for all viewing angles of the scene.
And the feature extraction module is used for extracting feature information of the images to be fused.
Specifically, the feature extraction module is specifically configured to extract feature information from the image of each viewing angle, using methods such as, but not limited to, corner or edge detection; the extracted features are used for subsequent alignment and fusion.
And the image alignment module is used for aligning the images with different shooting angles to a reference angle based on the extracted characteristic information to obtain an aligned image.
Specifically, the image alignment module is specifically configured to select a reference view angle as an alignment target according to a scene, find matching points, feature points or feature descriptors between images, calculate a geometric transformation relationship between the images by using a RANSAC image alignment algorithm, and perform matrix transformation on images of different views based on the calculated geometric transformation relationship, including operations such as rotation, translation, scaling, and the like, so as to align the images to the reference view angle.
And the image fusion module is used for fusing the aligned images to obtain fused images.
Specifically, the image fusion module is specifically configured to fuse the aligned images by a weighted average method to generate a composite result; because the smoothness of the fused image does not yet meet requirements, subsequent filtering is needed to obtain a smoother image.
And the filtering processing module is used for traversing and filtering the fusion area of the image based on the input condition to obtain a filtered image.
In this embodiment, the input conditions include input condition one (smoothness); input condition two (noise type, where the noise types include Gaussian noise, salt-and-pepper noise, and speckle noise); input condition three (texture requirement, i.e., whether texture details need to be preserved); and input condition four (feature similarity, where the features include spatial distance, color similarity, texture features, and edge information).
Specifically, if a stronger smoothing effect is required, gaussian filtering can be selected;
if it is desired to smooth the image while preserving the edge information, bilateral filtering or mean shift filtering may be selected.
If the image is affected by Gaussian noise, gaussian filtering can be selected;
If the image is affected by salt and pepper noise or speckle noise, bilateral filtering or mean shift filtering can be selected.
If the texture details of the image need to be preserved, mean shift filtering may be selected.
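The selection rules above amount to a small dispatch table. The sketch below encodes them with assumed condition labels and an assumed smoothness threshold; where the text allows "bilateral filtering or mean shift filtering", the bilateral filter is chosen as a representative default.

```python
def select_filter(condition, value, smooth_thresh=0.5):
    """Map an input condition to a filter name, following the rules above.

    condition: 'smoothness' | 'noise' | 'texture' (illustrative labels).
    For 'smoothness', value is the required degree in [0, 1];
    for 'noise', value is the noise-type name;
    for 'texture', value is True when texture details must be preserved.
    """
    if condition == "smoothness":
        # strong smoothing -> Gaussian; otherwise an edge-preserving filter
        return "gaussian" if value >= smooth_thresh else "bilateral"
    if condition == "noise":
        # Gaussian noise -> Gaussian filter; salt-and-pepper/speckle -> bilateral
        return "gaussian" if value == "gaussian" else "bilateral"
    if condition == "texture":
        # preserving texture details favors mean shift filtering
        return "mean_shift" if value else "bilateral"
    raise ValueError(f"unknown condition: {condition}")
```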
Specifically, the filtering processing module includes:
And the image matrix conversion module is used for representing the pixels of the fusion area by utilizing the two-dimensional matrix to obtain matrix elements corresponding to the pixels one by one.
Specifically, the image matrix conversion module is configured to represent the fusion region of the image as a two-dimensional pixel matrix, where the numbers of rows and columns of the matrix correspond to the height and width of the image and each matrix element is one pixel. During filtering, the position and color information of every pixel can be accessed and processed by traversing the elements of the matrix.
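The matrix representation and traversal described above look like this in practice; a tiny grayscale region stands in for the fusion area, with rows mapping to height and columns to width.

```python
import numpy as np

# A tiny fused region as a 2-D pixel matrix: rows = height, cols = width,
# each matrix element one pixel's intensity value.
region = np.array([[10, 20, 30],
                   [40, 50, 60]], dtype=np.uint8)

h, w = region.shape
pixels = []
for r in range(h):                  # traverse every matrix element
    for c in range(w):
        # each visit yields the pixel's position and its color information
        pixels.append(((r, c), int(region[r, c])))
```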
And the input condition selection module is used for traversing the matrix elements and selecting one of smoothness, noise type and texture requirement as an input condition in the traversing process.
Specifically, the input condition selection module is configured to select one of the input conditions according to the user's focus while traversing the matrix elements: if the user focuses on smoothness, the smoothness input condition is selected; if the user focuses on the noise type, the noise-type input condition is selected; and if the user focuses on the texture requirement, the texture-requirement input condition is selected.
A filter selection module for matching filters from the filter tool based on the selected input conditions and processing matrix elements with the matched filters.
The filter tool of the filter selection module contains three filter types: a Gaussian filter, a bilateral filter, and a mean shift filter. These correspond one-to-one to input condition one, input condition two, and input condition three.
Specifically, the filter selection module is configured to select a Gaussian filter to process the matrix element if the input condition is smoothness and the required smoothness is greater than or equal to the smoothness threshold; otherwise, if the required smoothness is less than the smoothness threshold, a bilateral filter or a mean shift filter is selected to process the matrix element.
Specifically, in the process of traversing the matrix elements, if the input condition is smoothness and the required smoothness is not less than the smoothness threshold, a Gaussian filter is selected to process the matrix elements; the Gaussian filter controls the degree of smoothing through its parameters, removing noise and reducing detail so that the image or data becomes smoother.
Conversely, if the required smoothness is less than the smoothness threshold, Gaussian filtering is not needed, and one of the bilateral filter or the mean shift filter is selected to process the matrix elements. The bilateral filter weights both spatial distance and pixel-value differences, so it reduces noise while retaining edge information and details. The mean shift filter retains edge information by computing the mean of sample points in the color space and locally shifting the samples toward that mean, thereby achieving smoothing.
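The Gaussian filter described above can be sketched as a separable convolution; this is a generic illustration (the patent does not fix the kernel radius, so three sigmas is assumed), with larger sigma giving stronger smoothing.

```python
import numpy as np

def gaussian_filter2d(img, sigma=1.0):
    """Separable Gaussian smoothing; sigma controls the smoothness degree."""
    radius = max(1, int(3 * sigma))              # assumed 3-sigma kernel support
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                                 # normalize: preserves mean level
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    # horizontal pass, then vertical pass (separability of the Gaussian)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)
```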
The filter selection module is further configured to select a Gaussian filter to process the matrix element if the input condition is the noise type and the matrix element is affected by Gaussian noise, and to select a bilateral filter or a mean shift filter to process the matrix element if it is affected by salt-and-pepper noise and/or speckle noise.
Specifically, in traversing the matrix elements, if the input condition is a noise type and the matrix elements are affected by gaussian noise, a gaussian filter is selected to process the matrix elements because the gaussian filter can effectively reduce gaussian noise, making the image or data clearer and smoother.
On the other hand, if the matrix elements are affected by salt and pepper noise and/or speckle noise, one of the bilateral filter or the mean shift filter is selected to process the matrix elements.
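The edge-preserving behavior of the bilateral filter invoked above comes from combining a spatial weight with an intensity-difference weight. Below is a brute-force grayscale sketch with assumed parameter defaults; it is illustrative, not the patent's implementation.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter on a grayscale image.

    Each output pixel is a weighted mean of its neighborhood, where weights
    combine spatial distance (sigma_s) and intensity difference (sigma_r);
    pixels across a strong edge get near-zero weight, so edges survive.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # fixed spatial term
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```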
The filter selection module is further configured to select a mean shift filter to process the matrix element if the input condition is the texture requirement and texture details need to be preserved, and to select a bilateral filter or a Gaussian filter to process the matrix element if texture details do not need to be preserved.
Specifically, in traversing the matrix elements, if the input condition is the texture requirement and texture details are to be preserved, the mean shift filter is selected to process the matrix elements, because it can smooth the image or data while preserving edge and texture details, so that the processed result is both smooth and retains important texture features.
On the other hand, if texture detail is not required to be preserved, a bilateral filter or a gaussian filter is selected to process the matrix elements.
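A simplified grayscale version of the mean shift filtering invoked above is sketched below: each pixel iteratively moves its value toward the mean of neighborhood pixels within an intensity bandwidth, so values across a strong edge fall outside the band and edges/texture survive. The bandwidth and iteration count are assumptions.

```python
import numpy as np

def mean_shift_filter(img, radius=2, band=20.0, n_iter=5):
    """Simplified grayscale mean-shift filtering.

    For each pixel, repeatedly replace its current estimate by the mean of
    neighborhood values lying within `band` of that estimate (shift toward
    the local mode); out-of-band values across an edge are ignored.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            v = img[i, j]
            for _ in range(n_iter):
                sel = patch[np.abs(patch - v) <= band]
                v = sel.mean()                 # shift estimate to in-band mean
            out[i, j] = v
    return out
```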
And the characteristic similarity comparison module is used for comparing the filtered matrix elements with the current matrix elements based on the characteristic similarity and matching the matrix elements meeting the characteristic similarity from the filtered matrix elements.
Specifically, to ensure image quality and consistency while improving computational efficiency and processing accuracy, the feature similarity between the current pixel to be filtered and the already-filtered pixels is compared; if the similarity requirement is met, the current pixel is simply processed with the filter that was used for the matching filtered pixel.
The feature similarity comparison module is specifically configured to compare the spatial distance, the color similarity, the texture feature and the edge information, and specifically includes:
spatial distance: comparing the spatial distance between two pixels, if they are closer, they can be considered to belong to the same region, and the same filter can be used for smoothing.
Color similarity: comparing the color similarity of two pixels, if their color values are very close, indicates that they are visually similar, may be smoothed using the same filter.
Texture features: comparing the texture features of two pixels, if they are similar, indicates that they are consistent in texture, can be smoothed using the same filter.
Edge information: compare the edge responses at the two pixels; if they are consistent, the pixels lie in regions of similar structure and may be smoothed with the same filter.
According to the comparison results, the same filter can be used to process the current pixel as long as any one of these conditions is satisfied.
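The "any one criterion suffices" rule above can be sketched as a small predicate. The pixel descriptors, feature choices (e.g. local variance as a texture proxy), and thresholds below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def can_reuse_filter(p, q, max_dist=3.0, max_color=10.0, max_texture=5.0):
    """Decide whether pixel q may reuse the filter already chosen for pixel p.

    p, q: dicts with 'pos' (row, col), 'color' (intensity or RGB tuple), and
    'texture' (e.g. local variance, a stand-in texture feature). Any single
    matching criterion suffices, mirroring the comparison rules above.
    """
    close = np.linalg.norm(np.subtract(p["pos"], q["pos"])) <= max_dist
    similar_color = np.linalg.norm(np.subtract(p["color"], q["color"])) <= max_color
    similar_texture = abs(p["texture"] - q["texture"]) <= max_texture
    return bool(close or similar_color or similar_texture)
```

When the predicate holds, the costlier filter-selection step is skipped and the neighbor's filter is applied directly, which is where the efficiency gain comes from.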
And the equivalent filtering module is used for processing the current matrix element by selecting a filter used by the matched matrix element.
Specifically, the equivalent filtering module is configured to process the current pixel with the same filter as the matched pixel whenever any one of the above comparison conditions is satisfied.
The working principle of the smoothing filtering system for multi-camera virtual shooting provided by the invention is as follows: the image processing module produces a fused image; according to the input conditions of smoothness requirement, noise type, and texture requirement, the filtering processing module matches each pixel of the fusion region with a suitable filter (a Gaussian filter, a bilateral filter, or a mean shift filter) and performs the filtering operation with the selected filter, so that customized filtering and different degrees of smoothing can be carried out as needed. Meanwhile, while traversing the pixels of the fusion region, the filtering processing module compares every pixel after the first against the already-filtered pixels under the configured feature-similarity conditions and processes the current pixel with the filter used by a pixel that satisfies those conditions. The filter type is thus selected dynamically according to the inter-pixel feature similarity and the input conditions, which ensures image quality and consistency while improving computational efficiency and processing accuracy.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware. The program may be stored in a computer-readable storage medium, including Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-Time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, tape memory, or any other computer-readable medium capable of carrying or storing data.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises that element.

Claims (6)

1. The smoothing filtering method applied to the multi-camera virtual shooting is characterized by comprising the following steps of:
s1: processing the images to be fused to obtain fused images and outputting the fused images, wherein the fused images comprise a fusion area;
s2: traversing and filtering the fusion area of the image based on the input condition to obtain a filtered image;
wherein the input conditions comprise input condition one, input condition two, input condition three and input condition four,
The first input condition is smoothness;
The second input condition is the noise type, wherein the noise type comprises Gaussian noise, salt-and-pepper noise and speckle noise;
the third input condition is the texture requirement, i.e., whether texture details need to be preserved;
the input condition IV is feature similarity, wherein the features comprise space distance, color similarity, texture features and edge information;
Specifically, the step S2 includes:
S201: representing pixels of the fusion area by using a two-dimensional matrix to obtain matrix elements corresponding to the pixels one by one;
s202: traversing matrix elements, and selecting one of smoothness, noise type and texture requirement as an input condition in the traversal process;
S203: matching filters from the filter tool and processing the matrix elements with the matched filters based on the selected input conditions;
Specifically, step S203 includes:
S203a: if the input condition is smoothness and the required smoothness is not less than the smoothness threshold, a Gaussian filter is selected to process matrix elements,
If the required smoothness is less than the smoothness threshold value, selecting a bilateral filter or a mean shift filter to process matrix elements;
S203b: if the input condition is noise type and the matrix element is affected by Gaussian noise, a Gaussian filter is selected to process the matrix element,
If the matrix element is affected by the salt and pepper noise and/or the speckle noise, selecting a bilateral filter or a mean shift filter to process the matrix element;
S203c: if the input condition is texture demand and texture detail is required to be reserved, selecting an average shift filter to process matrix elements,
If texture details are not required to be reserved, a bilateral filter or a Gaussian filter is selected to process matrix elements;
s204: based on the feature similarity, comparing the filtered matrix element with the current matrix element, and matching the matrix element meeting the feature similarity from the filtered matrix element;
S205: and selecting a filter used by the matched matrix element to process the current matrix element.
2. The smoothing filtering method applied to multi-camera virtual shooting according to claim 1, wherein the step S1 includes:
S101: selecting images to be fused, wherein the images to be fused have different shooting visual angles for the same scene;
s102: extracting characteristic information of images to be fused;
S103: based on the extracted characteristic information, aligning images with different shooting visual angles to a reference visual angle to obtain an aligned image;
s104: and fusing the aligned images to obtain fused images.
3. The smoothing filter method applied to multi-camera virtual shooting according to claim 2, wherein the filter tool comprises a gaussian filter, a bilateral filter and a mean shift filter.
4. A smoothing filtering system applied to multi-camera virtual shooting, applying the smoothing filtering method according to any one of claims 1 to 3, the system comprising:
the image processing module is used for processing the images to be fused, obtaining fused images and outputting the fused images, wherein the fused images comprise fusion areas;
and the filtering processing module is used for traversing and filtering the fusion area of the image based on the input condition to obtain a filtered image.
5. The smoothing filter system applied to multi-camera virtual shooting of claim 4, wherein the image processing module comprises:
the image selection module is used for selecting images to be fused, wherein the images to be fused have different shooting visual angles for the same scene;
the feature extraction module is used for extracting feature information of the images to be fused;
the image alignment module is used for aligning the images with different shooting angles to a reference angle based on the extracted characteristic information to obtain an aligned image;
And the image fusion module is used for fusing the aligned images to obtain fused images.
6. The smoothing filter system applied to multi-camera virtual shooting according to claim 5, wherein the filter processing module comprises:
The image matrix conversion module is used for representing the pixels of the fusion area by utilizing a two-dimensional matrix to obtain matrix elements corresponding to the pixels one by one;
the input condition selection module is used for traversing the matrix elements and selecting one of smoothness, noise type and texture requirement as an input condition in the traversing process;
A filter selection module for matching filters from the filter tool based on the selected input conditions and processing the matrix elements with the matched filters;
the characteristic similarity comparison module is used for comparing the filtered matrix elements with the current matrix elements based on the characteristic similarity and matching the matrix elements meeting the characteristic similarity from the filtered matrix elements;
and the equivalent filtering module is used for processing the current matrix element by selecting a filter used by the matched matrix element.
CN202311510958.0A 2023-11-13 2023-11-13 Smooth filtering method and system applied to multi-camera virtual shooting Active CN117560580B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311510958.0A CN117560580B (en) 2023-11-13 2023-11-13 Smooth filtering method and system applied to multi-camera virtual shooting

Publications (2)

Publication Number Publication Date
CN117560580A (en) 2024-02-13
CN117560580B (en) 2024-05-03

Family

ID=89815970


Country Status (1)

Country Link
CN (1) CN117560580B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279746A (en) * 2014-05-30 2016-01-27 西安电子科技大学 Multi-exposure image integration method based on bilateral filtering
CN111242137A (en) * 2020-01-13 2020-06-05 江西理工大学 Salt and pepper noise filtering method and device based on morphological component analysis
CN114219740A (en) * 2021-12-20 2022-03-22 重庆理工大学 Edge perception guiding filtering method fusing superpixels and window migration

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6287100B2 (en) * 2013-11-20 2018-03-07 株式会社リコー Image processing apparatus, image processing method, program, and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant