CN115035147A - Matting method, device and system based on virtual shooting and image fusion method - Google Patents

Matting method, device and system based on virtual shooting and image fusion method

Info

Publication number
CN115035147A
CN115035147A (application CN202210760178.0A)
Authority
CN
China
Prior art keywords
image
edge
matting
foreground
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210760178.0A
Other languages
Chinese (zh)
Inventor
何志民 (He Zhimin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Colorlight Cloud Technology Co Ltd
Original Assignee
Colorlight Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Colorlight Cloud Technology Co Ltd
Priority to CN202210760178.0A
Publication of CN115035147A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/14 Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a matting method, device and system based on virtual shooting, and an image fusion method. The matting method comprises: obtaining a first image and a second image captured of a target at the same moment, wherein the first image is shot by a common camera or video camera and the second image is shot by a depth camera; separating the foreground and background in the first image using the second image, and performing image registration on the separated foreground and background; performing edge processing on the second image to obtain a plurality of edge lines, and determining therefrom the target edge line corresponding to the matting target; mapping the target edge line into the first image for display as the edge of the matting target; and, after super-resolution processing of the first image, adjusting the transparency of the edge of the matting target in the first image and extracting the matting target from it. The method, device and system provided by the invention improve matting efficiency and quality and make the resulting image fusion more realistic and natural.

Description

Matting method, matting device and matting system based on virtual shooting and image fusion method
Technical Field
The invention relates to the technical field of matting and image fusion, and in particular to a matting method, device and system based on virtual shooting, and an image fusion method.
Background
In the prior art, many scenes are shot with a green screen as the background. The captured footage is then processed on a computer: the green background is keyed out and replaced with another scene picture, finally producing composite pictures that contain both the person or object and the scene, for example a presenter hosting during a film shoot. After shooting against a green screen, the matting must be done manually with matting software, so a viewer at a display terminal cannot see the matting or image fusion result in real time.
At present, with the wide application of LEDs, LED display screens are also assembled into a three-sided LED stage display screen used in place of the green-screen scenario described above. In such an LED stage, for example, a presenter can stand in the center of the screens while the screens display the corresponding picture content as the background, and the whole stage is then captured by a camera (i.e., virtual shooting).
However, when an LED stage display screen built from LED display screens is used for virtual shooting, problems arise if, at a later stage, the captured foreground picture (for example the presenter) must be matted out and placed into another scene picture, or the captured background picture (for example the content shown on the LED display screen) must be matted out and fused with another foreground picture. Because of algorithm and real-time limitations, the matting quality is poor, and the image obtained after matting looks unnatural and unrealistic once fused with other images. For users watching through a display terminal this results in a poor viewing experience, so those skilled in the art urgently need a new technical solution to the above problems.
Disclosure of Invention
In view of these problems, the invention provides a matting method, device and system based on virtual shooting, and an image fusion method.
The invention provides a matting method based on virtual shooting, which comprises: obtaining a first image and a second image captured of a target at the same moment, wherein the first image is an image shot by a common camera or video camera and the second image is an image shot by a depth camera;
separating the foreground and background in the first image using the second image, performing image registration on the separated foreground and background, and adjusting their positions in the first image;
selecting the foreground or the background in the first image as the matting target according to the matting requirement;
performing edge processing on the second image to obtain a plurality of edge lines, and determining therefrom the target edge line corresponding to the matting target;
mapping the target edge line into the first image for display as the edge of the matting target;
after performing super-resolution processing on the first image, adjusting the transparency of the edge of the matting target in the first image and extracting the matting target from it.
Further, separating the foreground and background in the first image using the second image comprises:
extracting depth information from the second image and binarizing the first image according to the depth information, so as to separate the foreground and background in the first image.
Further, determining therefrom the target edge line corresponding to the matting target comprises:
taking the edge line closest to the matting target as the target edge line corresponding to the matting target.
Further, adjusting the transparency of the edge of the matting target in the first image comprises:
if the matting target is the foreground, adjusting the transparency of the edge of the matting target to lie within [0.5, 1];
if the matting target is the background, adjusting the transparency of the edge of the matting target to lie within [0, 0.5).
Further, the edge of the matting target consists of a plurality of edge pixels, and adjusting the transparency of the edge of the matting target in the first image further comprises:
if the matting target is the foreground, setting the transparency of an edge pixel lower the farther that pixel is from the matting target;
if the matting target is the background, setting the transparency of an edge pixel higher the farther that pixel is from the matting target.
The invention also provides an image fusion method, which comprises the following steps:
extracting a matting target according to the above matting method based on virtual shooting;
fusing the matting target with the image to be fused, and sending the fused image to a display terminal for display.
The invention also provides a matting device based on virtual shooting, which comprises an image acquisition module, a foreground and background separation module, an edge processing module, a mapping module and an extraction module, wherein:
the image acquisition module is connected with the foreground and background separation module and is used for obtaining a first image and a second image captured of a target at the same moment, wherein the first image is an image shot by a common camera or video camera and the second image is an image shot by a depth camera;
the foreground and background separation module is connected with the edge processing module and is used for separating the foreground and background in the first image using the second image, performing image registration on the separated foreground and background, and adjusting their positions in the first image;
the edge processing module is connected with the mapping module and is used for selecting the foreground or the background in the first image as the matting target according to the matting requirement, performing edge processing on the second image to obtain a plurality of edge lines, and determining therefrom the target edge line corresponding to the matting target;
the mapping module is connected with the extraction module and is used for mapping the target edge line into the first image for display as the edge of the matting target;
the extraction module is used for adjusting the transparency of the edge of the matting target in the first image after super-resolution processing of the first image, and extracting the matting target from it.
Further, the foreground and background separation module separating the foreground and background in the first image using the second image comprises: extracting depth information from the second image and binarizing the first image according to the depth information, so as to separate the foreground and background in the first image.
Further, the edge processing module determining therefrom the target edge line corresponding to the matting target comprises: taking the edge line closest to the matting target as the target edge line corresponding to the matting target.
The invention also provides a matting system based on virtual shooting, which comprises a server, a depth camera, and a common camera or video camera;
the common camera or video camera is used for shooting a target at the same moment as the depth camera to obtain a first image;
the depth camera is used for shooting the target at the same moment as the common camera or video camera to obtain a second image;
the server is connected with the depth camera and the common camera or video camera, and is used for extracting the matting target from the first image according to the matting method based on virtual shooting.
The matting method, device and system based on virtual shooting and the image fusion method provided by the invention have at least the following beneficial effects:
(1) They realize automatic matting and automatic image fusion without manual processing, improving the efficiency of matting and image fusion; after automatic matting and fusion, the result can be shown directly and in real time on a display terminal. In addition, because matting and fusion are performed after virtual shooting rather than after green-screen shooting, no green screen or dedicated shooting environment has to be set up.
(2) The matting method, device and system based on virtual shooting extract depth information from the second image shot by the depth camera to separate the foreground and background in the first image shot by the common camera or video camera, adjust the positions of the separated foreground and background in the first image through image registration, extract the target edge line of the matting target from the second image and map it into the first image, and then, after super-resolution processing of the first image, adjust the transparency of the edge of the matting target. This yields an accurate matting target whose edge transparency has been adjusted, which improves the matting quality and allows the matting target to be fused with other images more realistically and naturally afterwards.
(3) The image fusion method provided by the invention uses the above matting method to obtain an accurate matting target with adjusted edge transparency and fuses it with the image to be fused, improving the display quality of the fused image and the user experience.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a matting method based on virtual photography in an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an edge transparency adjustment method according to an embodiment of the present invention;
FIG. 3 is a flowchart of an image fusion method according to an embodiment of the invention;
fig. 4 is a schematic structural diagram of a matting device based on virtual shooting in an embodiment of the present invention;
FIG. 5 is a schematic diagram of a matting system based on virtual shooting in an embodiment of the present invention.
401-image acquisition module, 402-foreground and background separation module, 403-edge processing module, 404-mapping module, 405-extraction module, 501-server, 502-depth camera, 503-common camera or video camera, 504-display terminal.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
In an embodiment of the present invention, as shown in FIG. 1, a matting method based on virtual shooting is disclosed. The method is executed by a server and specifically comprises the following steps:
step S101: a first image and a second image obtained by shooting a target at the same time are obtained.
The target comprises an LED stage display screen built from LED display screens, together with the subject being shot.
The first image is captured by a common camera or video camera, and the second image is captured by a depth camera. A depth camera is a camera that measures the distance (depth) from the photographed object (referred to as the target in the present invention) to the camera. The second image produced by the depth camera is a depth image, also called a range image, in which the distance from the depth camera to each point of the photographed target is stored as the pixel value.
Step S102: and separating the foreground and the background in the first image by using the second image, carrying out image registration on the separated foreground and background, and adjusting the position of the foreground and the background in the first image.
Specifically, in this step, separating the foreground and background in the first image using the second image comprises: extracting depth information from the second image and binarizing the first image according to the depth information, so as to separate the foreground and background in the first image.
The depth information refers to the pixel value of each pixel in the second image. In the second image, the pixel value of each pixel represents the distance from the corresponding point of the photographed target to the depth camera, so extracting the depth information yields the distance from every point of the target to the depth camera. Combining this depth information with the first image makes it possible to binarize the first image according to the depth corresponding to each of its pixels and thereby distinguish foreground from background, i.e., to tell which pixel positions belong to the foreground and which to the background.
In this step, after the depth information is extracted from the second image, the photographed subject in the first image can be distinguished from the LED stage display screen built from LED display screens: the photographed subject (for example a person) is the foreground and the LED stage display screen is the background. Once they are distinguished, the pixel values of the subject pixels and of the LED-stage pixels in the first image can be binarized. Binarization is the simplest image segmentation method: it converts a gray image into a binary image by setting the gray value (pixel value) of every pixel to either 0 or 255, so that the whole image shows only black and white. In this step, therefore, the pixels corresponding to the foreground (foreground pixels) and the pixels corresponding to the background (background pixels) in the first image are binarized, separating the foreground from the background in the first image.
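Purely as an illustration of this step, a minimal sketch of depth-based binarization is shown below, assuming OpenCV and NumPy; the depth threshold value, the nearest-neighbour resizing of the depth map to the first image, and the function name separate_foreground are assumptions made for the example, not details taken from the patent.

```python
import cv2
import numpy as np

def separate_foreground(first_image, depth_image, depth_threshold_mm=2000):
    """Split the first (RGB) image into foreground and background using the
    depth information of the second image (illustrative sketch)."""
    # Resize the depth map so each depth pixel lines up with an RGB pixel
    # (assumes the two views are already roughly registered).
    depth = cv2.resize(depth_image,
                       (first_image.shape[1], first_image.shape[0]),
                       interpolation=cv2.INTER_NEAREST)
    # Binarize: points closer to the camera than the threshold are treated as
    # foreground (255), everything farther away as background (0).
    foreground_mask = np.where(depth < depth_threshold_mm, 255, 0).astype(np.uint8)
    foreground = cv2.bitwise_and(first_image, first_image, mask=foreground_mask)
    background = cv2.bitwise_and(first_image, first_image,
                                 mask=cv2.bitwise_not(foreground_mask))
    return foreground_mask, foreground, background
```

A fixed threshold is only a stand-in here; in practice the split between the photographed subject and the LED stage display screen would be derived from the actual depth distribution of the scene.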
More specifically, because the first image and the second image are captured of the target by different cameras at the same moment, the camera capturing the first image and the camera capturing the second image cannot occupy the same position; there is a positional offset between the common camera or video camera and the depth camera, so the content they capture of the target differs slightly. Therefore, in this step, after the first image has been binarized according to the depth information and its foreground and background separated, the second image is used again to perform image registration, so that points corresponding to the same position of the target in the first and second images are matched one-to-one. This achieves information fusion, giving a more accurate association between the depth information and the pixels of the first image, and the positions of the foreground and background in the first image are adjusted accordingly, identifying their accurate positions.
The image registration between the second image and the first image can use an existing multi-angle image registration method. Multi-angle image registration registers images with different fields of view obtained from different viewpoints (corresponding to the first image and the second image in the present application), so as to reconstruct the depth or shape of the scene.
Furthermore, when performing multi-angle registration between the second image and the first image, a registration method based on the feature information of the images to be registered can be used. Feature information such as points, lines and edges is extracted from the images without any other auxiliary information, which reduces computation, improves efficiency, and gives a certain robustness to changes in image gray level.
Furthermore, depending on the chosen feature information, feature-based image registration methods fall into three categories: (a) Matching based on feature points, where the selected feature points are pixels that show some singularity relative to their neighborhood. Feature points are usually easy to extract, but they carry relatively little information, only their position coordinates in the image, so the key is to find matching feature points in the two images. (b) Matching based on feature regions, where salient regions are searched for in the image and used as feature regions; in practice, once the feature regions are found, their centroids are used for registration. (c) Matching based on feature edges: edges are the most salient feature in an image and also among the easiest to extract, so edge matching methods are robust and widely applicable.
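As a hedged sketch of category (a), the code below registers the depth image onto the first image with ORB feature points and a RANSAC homography in OpenCV; the choice of ORB, the brute-force Hamming matcher, and the min_matches parameter are assumptions made for the example rather than methods prescribed by the patent.

```python
import cv2
import numpy as np

def register_depth_to_rgb(depth_gray, rgb_gray, min_matches=10):
    """Estimate a homography mapping depth-image coordinates into
    first-image coordinates (illustrative sketch, 8-bit grayscale inputs)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(depth_gray, None)
    kp2, des2 = orb.detectAndCompute(rgb_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise ValueError("not enough feature matches for registration")
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects mismatched feature points before fitting the homography.
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return homography
```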
Step S103: and selecting the foreground or background in the first image as a matting target according to the matting requirement.
Step S104: performing edge processing on the second image to obtain a plurality of edge lines, and determining therefrom the target edge line corresponding to the matting target.
Specifically, in this step, a Sobel operator (an edge detection operator) may be used for the edge processing, extracting a plurality of edge lines from the second image. More specifically, the second image is represented at the pixel level, and after edge processing a plurality of edge lines (specifically, the position coordinates corresponding to each edge line) are obtained from the second image.
The Sobel operator is an important processing method in the field of computer vision. It is mainly used to obtain the first-order gradient of a digital image, and its most common application and physical meaning is edge detection. The Sobel operator detects edges by forming a weighted difference of the gray values in the neighborhood above, below, left and right of each pixel, a quantity that reaches an extremum at an edge.
Further, in this step, after the edge lines are obtained, determining therefrom the target edge line corresponding to the matting target comprises: taking the edge line closest to the matting target as the target edge line corresponding to the matting target.
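The sketch below illustrates this step with OpenCV: Sobel gradients give an edge map of the second image, contours of that map serve as candidate edge lines, and the line nearest a given target position is returned. The gradient threshold and the target_centroid argument (standing in for however the matting target's position is known) are assumptions made for the example.

```python
import cv2
import numpy as np

def target_edge_line(depth_gray, target_centroid):
    """Extract edge lines from the depth image with the Sobel operator and
    return the one closest to the matting target (illustrative sketch)."""
    # First-order gradients in x and y, combined into an edge magnitude map.
    gx = cv2.Sobel(depth_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(depth_gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.normalize(cv2.magnitude(gx, gy), None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)
    _, edges = cv2.threshold(magnitude, 50, 255, cv2.THRESH_BINARY)
    # Each contour is one candidate edge line (a list of position coordinates).
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("no edge lines found in the depth image")
    # Distance from the target position to each edge line; keep the nearest one.
    distances = [abs(cv2.pointPolygonTest(c, target_centroid, True))
                 for c in contours]
    return contours[int(np.argmin(distances))]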
Step S105: and mapping the target edge line to the first image for display as the edge of the sectional target.
In general, the resolution of the first image captured by the common camera is significantly higher than that of the second image captured by the depth camera. Suppose the width and height resolution of the first image are integer multiples of those of the second image, i.e., several pixels in the first image correspond to one pixel in the second image (for example, if the number of pixel rows in the first image is 5 times that of the second image, one row of pixels in the second image corresponds to 5 rows of pixels in the first image). Therefore, to map the target edge line into the first image for display, once the position coordinates of the target edge line in the second image are known, the corresponding pixels in the first image are determined from the mapping relationship between the pixels of the second image and those of the first image, and the mapped line is then displayed in the first image as the edge of the matting target.
In another implementation, a common camera (or video camera) and a depth camera with the same CCD specification can be used, ensuring that the first image and the second image have the same resolution; each pixel of the second image then corresponds to exactly one pixel of the first image, which improves matching efficiency.
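A minimal sketch of the coordinate mapping described above is given below; it assumes a pure resolution ratio between the two images (as in the 5x-rows example) and ignores the registration homography, which a real setup might also have to apply.

```python
def map_edge_to_first_image(edge_points, second_shape, first_shape):
    """Map (x, y) edge-line coordinates from the second image onto the
    first image by the resolution ratio (illustrative sketch)."""
    sy = first_shape[0] / second_shape[0]  # height ratio
    sx = first_shape[1] / second_shape[1]  # width ratio
    # Each pixel of the second image corresponds to a block of sx * sy
    # pixels of the first image, so coordinates are simply scaled.
    return [(int(round(x * sx)), int(round(y * sy))) for (x, y) in edge_points]
```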
Step S106: after the super-resolution processing is carried out on the first image, the transparency of the edge of the cutout object in the first image is adjusted, and the cutout object is extracted from the first image.
Specifically, adjusting the transparency of the edge of the matting target means adjusting the transparency of the pixels that make up the edge (also called edge pixels), i.e., adjusting how sharply the edge appears in the image.
Further, in an implementation of this embodiment, as shown in FIG. 2, adjusting the transparency of the edge of the matting target in the first image comprises the following steps:
Step S201: judging whether the matting target is the foreground; if so, executing step S202, otherwise executing step S203.
Step S202: adjusting the transparency of the edge of the matting target to lie within [0.5, 1].
Step S203: adjusting the transparency of the edge of the matting target to lie within [0, 0.5).
Further, the edge of the matting target consists of a plurality of edge pixels, and adjusting the transparency of the edge of the matting target in the first image further comprises: if the matting target is the foreground, setting the transparency of an edge pixel lower the farther that pixel is from the matting target; if the matting target is the background, setting the transparency of an edge pixel higher the farther that pixel is from the matting target.
After the foreground and background are separated in the first image, there is an obvious line-segment boundary between them, the image looks noticeably artificial, and this degrades the fusion with the background image. Therefore, in this step, super-resolution processing is first applied to the first image to raise its resolution, and then the transparency of the edge of the matting target is adjusted to values within the range 0 to 1. Compared with setting the edge transparency to exactly 0 or 1, adjusting it within this range makes the boundary between the processed foreground and background softer, so that the extracted matting target fuses more naturally with the other images to be fused.
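A minimal sketch of this distance-dependent edge transparency is shown below; the width of the edge band, the linear falloff, and the use of a distance transform are illustrative assumptions, and only the [0.5, 1] / [0, 0.5) split follows the rule stated above.

```python
import cv2
import numpy as np

def edge_alpha(target_mask, edge_mask, target_is_foreground=True, band=10):
    """Build an alpha matte whose edge transparency varies with the distance
    of each edge pixel from the matting target (illustrative sketch)."""
    # Distance of every pixel from the matting-target region (mask is 0/255).
    dist = cv2.distanceTransform(cv2.bitwise_not(target_mask), cv2.DIST_L2, 5)
    alpha = (target_mask > 0).astype(np.float32)
    edge = edge_mask > 0
    if target_is_foreground:
        # Foreground target: edge transparency in [0.5, 1], decreasing as the
        # edge pixel gets farther from the target.
        alpha[edge] = np.clip(1.0 - dist[edge] / (2.0 * band), 0.5, 1.0)
    else:
        # Background target: edge transparency in [0, 0.5), increasing with distance.
        alpha[edge] = np.clip(dist[edge] / (2.0 * band), 0.0, 0.499)
    return alpha
```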
The matting method based on virtual shooting provided in this embodiment extracts depth information from the second image shot by the depth camera to separate the foreground and background in the first image shot by the common camera or video camera, adjusts the positions of the separated foreground and background in the first image through image registration, extracts the target edge line of the matting target from the second image and maps it into the first image, performs super-resolution processing on the first image, and adjusts the transparency of the edge of the matting target, finally obtaining the matting target. The matting target obtained in this way can be fused better with other images, so the final fused image is realistic and natural, the display quality of the fused image is improved, the user experience is improved, and real-time matting efficiency is improved (the server can perform automatic matting in real time and display the result, without manual matting in software first).
In another embodiment of the present invention, the present invention further provides an image fusion method, as shown in fig. 3, the method comprising the steps of:
step S301: extracting a cutout target according to the cutout method based on virtual shooting;
step S302: and fusing the matting target and the image to be fused, and sending the fused image to a display terminal for displaying.
Specifically, it should be understood that when the matting target is the foreground, the image to be fused is a scene image in which the scene is to be displayed; the scene image is acquired, the processed foreground is fused into it, and the combined image is sent to the user's display terminal for display. The scene image is a pure background image, and the color difference between it and the foreground should be kept as small as possible. Scene images can come from a preset set of scene images, which can be arranged and displayed on a software interface, with each scene corresponding to one image to be fused. In addition, when the matting target is the background, a foreground picture can be extracted from footage shot against a green screen and used as the image to be fused with the matting target.
More specifically, in this embodiment, the matting target is fused with the pre-extracted image to be fused by adding the matting target into the image to be fused in a pixel-by-pixel scan. During rendering, only the pixels of the processed matting target are written into the image to be fused; pixels outside the matting target (if the matting target is the foreground, these are the background pixels of the first image; if the matting target is the background, these are the foreground pixels of the first image) are not added. This reduces the amount of data processed and improves the processing performance of the software.
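Purely as an illustration, the sketch below composites the extracted matting target onto the image to be fused using an alpha matte like the one from the previous step; the assumption that all three arrays share one resolution, and the vectorized blend standing in for a literal pixel-by-pixel scan, are choices made for the example.

```python
import numpy as np

def fuse(matting_target, alpha, image_to_fuse):
    """Blend the matting target into the image to be fused using its alpha
    matte (illustrative sketch; all arrays share the same height and width)."""
    a = alpha[..., None].astype(np.float32)   # H x W x 1
    fg = matting_target.astype(np.float32)
    bg = image_to_fuse.astype(np.float32)
    # Pixels outside the matting target carry alpha 0, so only target pixels
    # actually modify the scene image, matching the pixel-by-pixel scan above.
    fused = a * fg + (1.0 - a) * bg
    return np.clip(fused, 0, 255).astype(np.uint8)
```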
The invention also provides a matting device based on virtual shooting; as shown in FIG. 4, the device includes an image acquisition module 401, a foreground and background separation module 402, an edge processing module 403, a mapping module 404, and an extraction module 405, wherein:
an image obtaining module 401, connected to the foreground and background separating module 402, configured to obtain a first image and a second image obtained by shooting a target at the same time, where the first image is an image shot by a general camera or a video camera, and the second image is an image shot by a depth camera;
a foreground and background separation module 402, connected to the edge processing module 403, configured to separate a foreground and a background in the first image by using the second image, perform image registration on the separated foreground and background, and adjust the position of the foreground and background in the first image;
an edge processing module 403, connected to the mapping module 404, configured to select the foreground or the background in the first image as the matting target according to the matting requirement, perform edge processing on the second image to obtain a plurality of edge lines, and determine therefrom the target edge line corresponding to the matting target;
a mapping module 404, connected to the extraction module 405, configured to map the target edge line into the first image for display as the edge of the matting target;
an extraction module 405, configured to adjust the transparency of the edge of the matting target in the first image after performing super-resolution processing on the first image, and to extract the matting target from it.
Further, in another embodiment of the present invention, the foreground and background separation module 402 separating the foreground and background in the first image using the second image comprises: extracting depth information from the second image and binarizing the first image according to the depth information, so as to separate the foreground and background in the first image.
Further, in another embodiment of the present invention, the edge processing module 403 determining therefrom the target edge line corresponding to the matting target comprises: taking the edge line closest to the matting target as the target edge line corresponding to the matting target.
The invention also provides a matting system based on virtual shooting; as shown in FIG. 5, the system comprises a server 501, a depth camera 502, and a common camera or video camera 503;
a general camera or video camera 503 for shooting a target at the same time as the depth camera to obtain a first image;
a depth camera 502 for shooting a target at the same time as a general camera or a video camera to obtain a second image;
and the server 501 is connected with the depth camera and the ordinary camera or video camera, and is used for extracting the matting target from the first image according to the matting method based on virtual shooting.
Further, as shown in fig. 5, the system further includes a display terminal 504, where the display terminal 504 is connected to the server 501, and is used to display the matting target in real time or display the merged image in real time after the matting target and the image to be merged are merged by the server.
The matting method, device and system based on virtual shooting and the image fusion method realize automatic matting and automatic image fusion without manual processing, improving the efficiency of matting and image fusion. At the same time, they extract depth information from the second image shot by the depth camera to separate the foreground and background in the first image shot by the common camera or video camera, adjust the positions of the separated foreground and background in the first image through image registration, extract the target edge line of the matting target from the second image and map it into the first image, perform super-resolution processing on the first image, and adjust the transparency of the edge of the matting target, finally obtaining an accurate matting target with adjusted edge transparency; this improves the matting quality and allows the matting target to be fused with other images more realistically and naturally afterwards. In addition, the system also comprises a display terminal; because matting and image fusion are automatic and require no manual participation, the matting target or the fused image can be displayed on the display terminal in real time while matting or fusion is in progress, and the user can observe the matting or fusion result in real time. Furthermore, the image fusion method provided by the invention uses the above matting method to obtain an accurate matting target with adjusted edge transparency and fuses it with the image to be fused, improving the display quality of the fused image and the user experience.
The terms and expressions used in the specification of the present invention have been set forth for illustrative purposes only and are not meant to be limiting. The terms "first" and "second" used herein in the claims and the description of the present invention are for the purpose of convenience of distinction, have no special meaning, and are not intended to limit the present invention. It will be appreciated by those skilled in the art that changes could be made to the details of the above-described embodiments without departing from the underlying principles thereof. The scope of the invention is, therefore, to be determined only by the following claims, in which all terms are to be interpreted in their broadest reasonable sense unless otherwise indicated.

Claims (10)

1. A matting method based on virtual shooting is characterized by comprising the following steps:
acquiring a first image and a second image which are obtained by shooting a target at the same time, wherein the first image is an image shot by a common camera or a video camera, and the second image is an image shot by a depth camera;
separating the foreground and the background in the first image by using the second image, carrying out image registration on the separated foreground and background, and adjusting the position of the foreground and the background in the first image;
selecting a foreground or background in the first image as a matting target according to a matting requirement;
performing edge processing on the second image to obtain a plurality of edge lines, and determining therefrom a target edge line corresponding to the matting target;
mapping the target edge line into the first image for display as the edge of the matting target;
after performing super-resolution processing on the first image, adjusting the transparency of the edge of the matting target in the first image, and extracting the matting target from the first image.
2. The matting method based on virtual shooting according to claim 1, wherein separating the foreground and background in the first image using the second image comprises:
extracting depth information from the second image and binarizing the first image according to the depth information, so as to separate the foreground and background in the first image.
3. The matting method based on virtual shooting according to claim 1, wherein determining therefrom the target edge line corresponding to the matting target comprises:
taking the edge line closest to the matting target as the target edge line corresponding to the matting target.
4. The matting method based on virtual shooting according to claim 1, wherein adjusting the transparency of the edge of the matting target in the first image comprises:
if the matting target is the foreground, adjusting the transparency of the edge of the matting target to lie within [0.5, 1];
if the matting target is the background, adjusting the transparency of the edge of the matting target to lie within [0, 0.5).
5. The matting method based on virtual shooting according to claim 4, wherein the edge of the matting target consists of a plurality of edge pixels, and adjusting the transparency of the edge of the matting target in the first image further comprises:
if the matting target is the foreground, setting the transparency of an edge pixel lower the farther that pixel is from the matting target;
if the matting target is the background, setting the transparency of an edge pixel higher the farther that pixel is from the matting target.
6. An image fusion method, characterized in that the method comprises:
the matting method based on virtual photography according to any one of claims 1-5, extracting a matting object;
and fusing the matting target and the image to be fused, and sending the fused image to a display terminal for displaying.
7. A matting device based on virtual shooting, characterized in that the device comprises an image acquisition module, a foreground and background separation module, an edge processing module, a mapping module and an extraction module, wherein:
the image acquisition module is connected with the foreground and background separation module and is used for acquiring a first image and a second image which are obtained by shooting a target at the same time, wherein the first image is an image shot by a common camera or a video camera, and the second image is an image shot by a depth camera;
the foreground and background separation module is connected with the edge processing module and used for separating the foreground and the background in the first image by using the second image, carrying out image registration on the separated foreground and background and adjusting the position of the foreground and the background in the first image;
the edge processing module is connected with the mapping module and is used for selecting the foreground or the background in the first image as the matting target according to the matting requirement, performing edge processing on the second image to obtain a plurality of edge lines, and determining therefrom the target edge line corresponding to the matting target;
the mapping module is connected with the extraction module and is used for mapping the target edge line into the first image for display as the edge of the matting target;
the extraction module is used for adjusting the transparency of the edge of the matting target in the first image after performing super-resolution processing on the first image, and extracting the matting target from the first image.
8. The matting device based on virtual shooting according to claim 7, wherein the foreground and background separation module separating the foreground and background in the first image using the second image comprises: extracting depth information from the second image and binarizing the first image according to the depth information, so as to separate the foreground and background in the first image.
9. The matting device based on virtual shooting according to claim 7, wherein the edge processing module determining therefrom the target edge line corresponding to the matting target comprises: taking the edge line closest to the matting target as the target edge line corresponding to the matting target.
10. A matting system based on virtual shooting is characterized in that the system comprises a server, a depth camera and a common camera or a video camera;
the common camera or the video camera is used for shooting a target at the same time with the depth camera to obtain a first image;
the depth camera is used for shooting a target at the same time with the common camera or the video camera to obtain a second image;
the server is connected with the depth camera and the common camera or video camera, and is used for extracting the matting target from the first image according to the matting method based on virtual shooting of any one of claims 1 to 5.
CN202210760178.0A 2022-06-29 2022-06-29 Matting method, device and system based on virtual shooting and image fusion method Pending CN115035147A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210760178.0A CN115035147A (en) 2022-06-29 2022-06-29 Matting method, device and system based on virtual shooting and image fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210760178.0A CN115035147A (en) 2022-06-29 2022-06-29 Matting method, device and system based on virtual shooting and image fusion method

Publications (1)

Publication Number Publication Date
CN115035147A true CN115035147A (en) 2022-09-09

Family

ID=83129189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210760178.0A Pending CN115035147A (en) 2022-06-29 2022-06-29 Matting method, device and system based on virtual shooting and image fusion method

Country Status (1)

Country Link
CN (1) CN115035147A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597129A (en) * 2023-05-09 2023-08-15 深圳市松林湾科技有限公司 Object detection method in image recognition
CN116843819A (en) * 2023-07-10 2023-10-03 上海随幻智能科技有限公司 Green curtain infinite extension method based on illusion engine
CN116843819B (en) * 2023-07-10 2024-02-02 上海随幻智能科技有限公司 Green curtain infinite extension method based on illusion engine

Similar Documents

Publication Publication Date Title
KR101121034B1 (en) System and method for obtaining camera parameters from multiple images and computer program products thereof
US8130244B2 (en) Image processing system
US8508580B2 (en) Methods, systems, and computer-readable storage media for creating three-dimensional (3D) images of a scene
CN115035147A (en) Matting method, device and system based on virtual shooting and image fusion method
CN108694741B (en) Three-dimensional reconstruction method and device
GB2465793A (en) Estimating camera angle using extrapolated corner locations from a calibration pattern
AU2017246716A1 (en) Efficient determination of optical flow between images
JP7159384B2 (en) Image processing device, image processing method, and program
CN105869115B (en) A kind of depth image super-resolution method based on kinect2.0
Li et al. HDRFusion: HDR SLAM using a low-cost auto-exposure RGB-D sensor
CN117036641A (en) Road scene three-dimensional reconstruction and defect detection method based on binocular vision
Böhm Multi-image fusion for occlusion-free façade texturing
CN111539311A (en) Living body distinguishing method, device and system based on IR and RGB double photographing
CN110120012B (en) Video stitching method for synchronous key frame extraction based on binocular camera
CN113348489A (en) Image processing method and device
EP3616399B1 (en) Apparatus and method for processing a depth map
CN108564654B (en) Picture entering mode of three-dimensional large scene
US11043019B2 (en) Method of displaying a wide-format augmented reality object
CN116051681B (en) Processing method and system for generating image data based on intelligent watch
CN111630569B (en) Binocular matching method, visual imaging device and device with storage function
KR101718309B1 (en) The method of auto stitching and panoramic image genertation using color histogram
CN113840135A (en) Color cast detection method, device, equipment and storage medium
JP2016071496A (en) Information terminal device, method, and program
CN116185544B (en) Display image fusion method and device based on image feature recognition and storage medium
CN117710868B (en) Optimized extraction system and method for real-time video target

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination