CN114092535A - Depth map reconstruction method, system, device, storage medium and processor - Google Patents


Info

Publication number
CN114092535A
Authority
CN
China
Prior art keywords
depth map
viewpoint
original
mapping
pixel
Prior art date
Legal status (assumption, not a legal conclusion)
Pending
Application number
CN202010857906.0A
Other languages
Chinese (zh)
Inventor
盛骁杰
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010857906.0A priority Critical patent/CN114092535A/en
Publication of CN114092535A publication Critical patent/CN114092535A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a depth map reconstruction method, system, device, storage medium and processor. The method comprises the following steps: acquiring depth maps of a plurality of original viewpoints, performing edge detection on the depth map of each original viewpoint, and obtaining the edge pixel points of each depth map; determining a pixel region to be shielded in the depth map based on the edge pixel points of each depth map, wherein the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point; and, in the process of mapping the depth maps to a virtual viewpoint, performing image mapping on the pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition. The invention solves the technical problem that foreground mapping texture is generated in the original hole regions during virtual viewpoint reconstruction from depth maps.

Description

Depth map reconstruction method, system, device, storage medium and processor
Technical Field
The invention relates to the field of computers, and in particular to a depth map reconstruction method, system, device, storage medium, and processor.
Background
Currently, Depth Image Based Rendering (DIBR) algorithms are designed under the assumption that the depth map is correct, and are not optimized for depth map compression loss. Any pixel of the original image can be mapped to the target virtual viewpoint position so that holes are filled, but this presupposes that the original depth map is sufficiently accurate.
However, a depth map suffers a certain loss after processing such as compression and transmission. When such a depth map is mapped to the virtual viewpoint, the foreground "flies out", so that foreground mapping texture is generated in regions that should remain holes.
For the above technical problem, namely that foreground mapping texture is generated in the original hole regions during virtual viewpoint reconstruction from depth maps, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the invention provide a depth map reconstruction method, system, device, storage medium and processor, so as to at least solve the technical problem that foreground mapping texture is generated in the original hole regions during virtual viewpoint reconstruction from depth maps.
According to one aspect of the embodiments of the present invention, a depth map reconstruction method is provided. The method may comprise the following steps: acquiring depth maps of a plurality of original viewpoints, performing edge detection on the depth map of each original viewpoint, and obtaining the edge pixel points of each depth map; determining a pixel region to be shielded in the depth map based on the edge pixel points of each depth map, wherein the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point; and, in the process of mapping the depth maps to a virtual viewpoint, performing image mapping on the pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition.
According to another aspect of the embodiments of the present invention, another depth map reconstruction method is provided. The method may comprise the following steps: acquiring depth maps of a plurality of original viewpoints, performing edge detection on the depth map of each original viewpoint, and obtaining the edge pixel points of each depth map; and, in the process of mapping the depth maps to a virtual viewpoint, performing image mapping on the pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition.
According to another aspect of the embodiments of the present invention, another depth map reconstruction method is provided. The method may comprise the following steps: displaying depth maps of a plurality of original viewpoints; displaying the edge pixel points extracted after edge detection is performed on each depth map; and displaying the image result obtained after image mapping is performed on the pixel region to be shielded in the depth map of a target viewpoint during virtual viewpoint mapping of the depth maps, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition, and the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point.
According to another aspect of the embodiments of the present invention, another depth map reconstruction method is provided. The method may comprise the following steps: determining a target viewpoint from a plurality of original viewpoints, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition; acquiring the depth map of the target viewpoint and performing edge detection on it to obtain its edge pixel points; determining a pixel region to be shielded based on the edge pixel points of the depth map of the target viewpoint, wherein the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point; and, in the process of mapping the depth maps of the plurality of original viewpoints to a virtual viewpoint, selecting the pixel region to be shielded in the depth map of the target viewpoint for image mapping.
According to another aspect of the embodiments of the present invention, another depth map reconstruction method is provided. The method may comprise the following steps: during a live broadcast, acquiring depth maps of a plurality of original viewpoints of the live picture; performing edge detection on the depth map of each original viewpoint to obtain the edge pixel points of each depth map; determining a pixel region to be shielded in the depth map based on the edge pixel points of each depth map, wherein the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point; and, in the process of mapping the depth maps to a virtual viewpoint, performing image mapping on the pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition.
According to another aspect of the embodiments of the present invention, another depth map reconstruction method is provided. The method may comprise the following steps: acquiring depth maps of a plurality of original viewpoints, and selecting the image of a preset region in the depth map of each original viewpoint to obtain the pixel region to be shielded in the depth map, wherein the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point; and, in the process of mapping the depth maps to a virtual viewpoint, performing image mapping on the pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition.
According to another aspect of the embodiments of the present invention, a depth map reconstruction system is also provided. The system may comprise: a client, configured to display depth maps of a plurality of original viewpoints; and a cloud server in communication with the client, configured to acquire the depth maps of the plurality of original viewpoints and, after performing edge detection on the depth map of each original viewpoint to generate edge pixel points, determine the pixel region to be shielded in each depth map, wherein the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point. In the process of mapping the depth maps to a virtual viewpoint, the cloud server performs image mapping on the pixel region to be shielded in the depth map of a target viewpoint, and returns a reconstructed image generated from the mapping result to the client, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition.
According to another aspect of the embodiments of the present invention, a depth map reconstruction device is also provided. The device may comprise: an acquisition unit, configured to acquire depth maps of a plurality of original viewpoints, perform edge detection on the depth map of each original viewpoint, and obtain the edge pixel points of each depth map; a determining unit, configured to determine a pixel region to be shielded in the depth map based on the edge pixel points of each depth map, wherein the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point; and a prohibition unit, configured to perform image mapping on the pixel region to be shielded in the depth map of a target viewpoint in the process of mapping the depth maps to a virtual viewpoint, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition.
According to another aspect of the embodiments of the present invention, a processor is also provided. The processor is configured to run a program which, when running, executes the depth map reconstruction method of the embodiments of the present invention.
According to another aspect of the embodiments of the present invention, a depth map reconstruction system is also provided. The system comprises: a processor; and a memory coupled to the processor, configured to provide the processor with instructions for the following processing steps: acquiring depth maps of a plurality of original viewpoints, performing edge detection on the depth map of each original viewpoint, and obtaining the edge pixel points of each depth map; determining a pixel region to be shielded in the depth map based on the edge pixel points of each depth map, wherein the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point; and, in the process of mapping the depth maps to a virtual viewpoint, performing image mapping on the pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint among the plurality of original viewpoints that satisfies a mapping condition.
In the embodiments of the present invention, depth maps of a plurality of original viewpoints are acquired, edge detection is performed on the depth map of each original viewpoint, and the edge pixel points of each depth map are obtained; a pixel region to be shielded in the depth map is determined based on the edge pixel points of each depth map, the pixel region to be shielded comprising at least one pixel point located on the foreground side of an edge pixel point; and, in the process of mapping the depth maps to a virtual viewpoint, image mapping is performed on the pixel region to be shielded in the depth map of a target viewpoint, the target viewpoint being a viewpoint among the plurality of original viewpoints that satisfies a mapping condition. That is to say, after the edge pixel points of the depth maps are obtained, image mapping of the pixel region to be shielded is restricted to the target viewpoint during virtual viewpoint mapping. This makes depth-map-based virtual viewpoint reconstruction robust against compression loss and improves the interpolation quality at the foreground edges of the depth map, thereby solving the technical problem that foreground mapping texture is generated in the original hole regions during virtual viewpoint reconstruction, and achieving the technical effect that the original hole regions no longer acquire foreground mapping texture.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1A is a block diagram of a hardware structure of a computer terminal (or mobile device) for implementing a depth map reconstruction method according to an embodiment of the present invention;
FIG. 1B is a structural diagram illustrating reconstruction of a depth map in a specific application scenario according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method for reconstructing a depth map according to an embodiment of the present invention;
FIG. 3 is a flow chart of another depth map reconstruction method according to an embodiment of the present invention;
FIG. 4 is a flow chart of a method for reconstructing a depth map according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a depth map reconstruction system according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of the loss of a depth map according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating the effect of compression loss of a depth map on DIBR quality according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an edge detection and edge mapping mask for a depth map according to an embodiment of the present invention;
FIG. 9 is a diagram illustrating interpolation results according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a depth map reconstruction apparatus according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of another depth map reconstruction apparatus according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of another depth map reconstruction apparatus according to an embodiment of the present invention; and
fig. 13 is a block diagram of a computer terminal according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some terms appearing in the description of the embodiments of the present application are explained as follows:
Depth Image Based Rendering (DIBR): a reference image is first projected into three-dimensional Euclidean space using its depth information, and the three-dimensional points are then projected onto the imaging plane of a virtual camera;
Depth Map: an image or image channel containing information about the distance from the viewpoint to the surfaces of scene objects;
Free viewpoint viewing: to provide a high-freedom viewing experience, the user can adjust the viewing angle interactively during viewing and watch from any free viewpoint desired;
6DoF parameters: the six degrees of freedom of motion, namely translation along three axes and rotation about three axes.
Example 1
In accordance with an embodiment of the present invention, an embodiment of a depth map reconstruction method is provided. It should be noted that the steps illustrated in the flowcharts of the drawings may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be performed in an order different from that described here.
The method provided in the first embodiment of the present application may be executed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1A is a block diagram of the hardware structure of a computer terminal (or mobile device) for implementing the depth map reconstruction method according to an embodiment of the present invention. As shown in fig. 1A, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, ..., 102n; the processors 102 may include, but are not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission device 106 for communication functions. In addition, it may also include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the ports of the I/O interface), a network interface, a power supply, and/or a camera. Those skilled in the art will understand that the structure shown in fig. 1A is only an illustration and does not limit the structure of the electronic device. For example, the computer terminal 10 may include more or fewer components than shown in fig. 1A, or have a different configuration.
It should be noted that the one or more processors 102 and/or other data processing circuitry described above may be referred to generally herein as "data processing circuitry". The data processing circuitry may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Further, the data processing circuit may be a single stand-alone processing module, or incorporated in whole or in part into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a processor control (e.g. selection of a variable resistance termination path connected to the interface).
The memory 104 may be used to store software programs and modules of application software, such as program instructions/data storage devices corresponding to the depth map reconstruction method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by executing the software programs and modules stored in the memory 104, that is, implements the depth map reconstruction method of the application program. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the computer terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
It should be noted here that in some alternative embodiments, the computer device (or mobile device) shown in fig. 1A described above may include hardware elements (including circuitry), software elements (including computer code stored on a computer-readable medium), or a combination of both hardware and software elements. It should be noted that FIG. 1A is only one example of a specific embodiment and is intended to illustrate the types of components that may be present in the computer device (or mobile device) described above.
Fig. 1B is a schematic structural diagram of depth map reconstruction in a specific application scenario according to an embodiment of the present invention, illustrating an arrangement of a depth map reconstruction system 20. The depth map reconstruction system 20 may include an acquisition array 21 composed of multiple acquisition devices, a data processing device 22, a cloud server cluster 23 (which may include servers 231, 232, 233, and 234), a play control device 24, a play terminal 25, and an interaction terminal 26. After the edge pixel points of the depth maps are obtained, the depth map reconstruction system 20 performs image mapping on the pixel region to be shielded in the depth map during virtual viewpoint mapping, which makes depth-map-based virtual viewpoint reconstruction robust against compression loss and improves the interpolation quality at the foreground edges of the depth map.
Specifically, referring to fig. 1B, the collection array 21 may include a plurality of cameras, which may be disposed at different positions of the field collection area in a fan shape according to a preset multi-angle free view range.
The data processing device 22 may send an instruction to each camera in the acquisition array 21 through a wireless local area network, and each acquisition device in the acquisition array 21 transmits an obtained image captured by the camera to the data processing device 22 based on the instruction sent by the data processing device 22.
In this embodiment, the interactive terminal 26 triggers an instruction to reconstruct a depth map based on an interactive operation. When the data processing device 22 detects an interactive operation on the operation interface of the interactive terminal 26, it may respond to the instruction as follows: acquire depth maps of a plurality of original viewpoints; perform edge detection on the depth map of each original viewpoint to obtain the edge pixel points of each depth map; determine a pixel region to be shielded in the depth map based on the edge pixel points of each depth map, the pixel region to be shielded comprising at least one pixel point located on the foreground side of an edge pixel point; and, in the process of mapping the depth maps to a virtual viewpoint, perform image mapping on the pixel region to be shielded in the depth map of a target viewpoint, the target viewpoint being a viewpoint among the plurality of original viewpoints that satisfies a mapping condition. The image result obtained after mapping the depth maps to the virtual viewpoint position is then uploaded to the cloud server cluster 23, and the server cluster 23 may send the image result to the interactive terminal 26 for display.
As another optional implementation, after the data processing device 22 detects an interactive operation on the operation interface of the interactive terminal 26, the depth maps of the plurality of original viewpoints may be uploaded to the cloud server cluster 23. The server cluster 23 performs edge detection on the depth map of each original viewpoint to obtain the edge pixel points of each depth map, determines the pixel region to be shielded in the depth map based on those edge pixel points, performs image mapping on the pixel region to be shielded in the depth map of the target viewpoint during virtual viewpoint mapping, and then sends the mapped image result to the interactive terminal 26 for display.
Then, the playing control device 24 may receive the image result sent by the server cluster 23, and the playing terminal 25 receives the image result from the playing control device 24 and performs real-time playing. The playing control device 24 may be a manual playing control device or a virtual playing control device. In a specific implementation, a director control apparatus such as a director table may be used as a play control apparatus in the embodiment of the present invention.
In this embodiment, with the depth map reconstruction system, on the one hand a user can trigger an instruction to reconstruct a depth map through the interactive terminal 26; on the other hand, the user can directly view, through the play terminal 25, the image result obtained by mapping the depth maps to the virtual viewpoint position. It should be understood that the depth map reconstruction system 20 may also include only the play terminal 25 or only the interactive terminal 26, or the play terminal 25 and the interactive terminal 26 may be the same terminal device.
As those skilled in the art will understand, a depth map suffers a certain loss after processing such as compression and transmission. Such a depth map causes the foreground to fly out when mapped to a virtual viewpoint, so that during virtual viewpoint reconstruction from the depth map, foreground mapping texture is generated in the original hole regions.
In view of this, the present specification provides a solution. In the operating environment shown in fig. 1A or fig. 1B, the present application provides a depth map reconstruction method as shown in fig. 2. It should be noted that the method of this embodiment may be executed by the mobile terminal of the embodiment shown in fig. 1A or by the depth map reconstruction system shown in fig. 1B.
Fig. 2 is a flowchart of a method for reconstructing a depth map according to an embodiment of the present invention. As shown in fig. 2, the method may include the steps of:
step S202, obtaining depth maps of a plurality of original viewpoints, carrying out edge detection on the depth map of each original viewpoint, and obtaining edge pixel points of each depth map.
In the technical solution provided in step S202 of the present invention, there may be a plurality of original viewpoints, each with a corresponding depth map, and the depth maps of the plurality of original viewpoints are acquired. A depth map is an image or image channel containing information about the distance from the viewpoint to the surfaces of scene objects, and here it may be a depth map that has undergone processing such as compression and transmission.
In this embodiment, edge detection is performed on the depth map of each original viewpoint. Edge detection is a concept from image processing and computer vision, and is used here to detect the pixels lying on edges in the depth map of each original viewpoint, that is, the edge pixel points of the depth map.
Optionally, in this embodiment, edge detection on the depth map of each original viewpoint can be performed by a simple comparison such as |Depth_left - Depth_right| > THR.
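A minimal sketch of such a thresholded-difference edge detector is given below; the function name, threshold value, and toy depth map are illustrative assumptions rather than details from the patent.

```python
import numpy as np

def depth_edges(depth, thr=8):
    """Mark pixels where the depth difference between the left and
    right horizontal neighbors exceeds thr, i.e. the comparison
    |Depth_left - Depth_right| > THR."""
    edges = np.zeros(depth.shape, dtype=bool)
    # Compare the left and right neighbors of every interior column.
    diff = np.abs(depth[:, :-2].astype(np.int32) - depth[:, 2:].astype(np.int32))
    edges[:, 1:-1] = diff > thr
    return edges

# Toy 8-bit depth map: a foreground strip (value 200) on a background
# (value 50); the columns flanking the strip are detected as edges.
d = np.full((4, 6), 50, dtype=np.uint8)
d[:, 2:4] = 200
e = depth_edges(d)
```

The cast to a signed integer type before subtracting avoids wrap-around on unsigned 8-bit depth values.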
Step S204, determining a pixel region to be shielded in the depth map based on the edge pixel points of each depth map, wherein the pixel region to be shielded comprises at least one pixel point located on the foreground side of an edge pixel point.
In the technical solution provided in step S204 of the present invention, because the depth map is compressed and transmitted, its edges suffer a certain loss: depth values there change from the larger value of the original foreground to a smaller value. Such a depth map causes the foreground to "fly out" when mapped to the virtual viewpoint, so that mapped foreground texture appears in what should be a hollow (hole) area. In this embodiment, after the depth maps of the multiple original viewpoints are obtained and edge detection yields the edge pixel points of each depth map, a pixel region to be masked can be determined in each depth map based on its edge pixel points. The pixel region to be masked comprises a pixel point set, which may be at least one pre-marked pixel point facing the foreground direction with the edge pixel point as reference, for example pixel points near those whose values suffer loss after the depth-map edge is compressed. The other pixel points in the depth map, outside the pixel region to be masked, may be reconstructed using the depth maps of all the original viewpoints when the mapping operation is performed.
Step S206, in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on pixel areas to be shielded in the depth map of the target viewpoint.
In the technical solution provided in step S206 of the present invention, after determining the pixel region to be masked in the depth map based on the edge pixel point of each depth map, in the process of performing virtual viewpoint mapping on the depth map, image mapping is performed on the pixel region to be masked in the depth map of the target viewpoint, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints.
In this embodiment, performing virtual viewpoint mapping on the depth map refers to mapping the depth map to the target virtual viewpoint position, that is, the depth-image-based rendering (DIBR) process of virtual viewpoint reconstruction. Image mapping is performed on the pixel region to be shielded only in the depth map of the target viewpoint: for example, the masked pixel points of depth maps closer to the virtual viewpoint are image-mapped, while image mapping is prohibited for the corresponding pixel points of depth maps farther from the virtual viewpoint. The target viewpoint is the viewpoint, among the plurality of original viewpoints, that satisfies the mapping condition, that is, the viewpoint used to implement the virtual viewpoint mapping of the depth map. This makes the DIBR algorithm robust against compression loss and significantly improves the interpolation quality at the foreground edges of the depth map.
Through the steps S202 to S206, depth maps of a plurality of original viewpoints are obtained, and edge detection is performed on the depth map of each original viewpoint to obtain edge pixel points of each depth map; determining a pixel area to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces to the foreground direction by taking the edge pixel points as a reference; in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on a pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints. That is to say, after the edge pixel points of the depth map are obtained, in the process of performing virtual viewpoint mapping on the depth map, image mapping is performed on the pixel region to be shielded in the depth map, so that robustness against compression loss of virtual viewpoint reconstruction based on the depth map is ensured, interpolation quality of the foreground edge of the depth map is improved, the technical problem that the original void region generates mapping textures of the foreground in the virtual viewpoint reconstruction process of the depth map is solved, and the technical effect that the original void region does not generate the mapping textures of the foreground in the virtual viewpoint reconstruction process of the depth map is achieved.
The above method of this embodiment is described below.
As an optional implementation, the target viewpoint is: and a viewpoint having a distance from the virtual viewpoint within a predetermined range among the plurality of original viewpoints.
In this embodiment, the target viewpoint is a viewpoint that satisfies the mapping condition among the plurality of original viewpoints, a distance between each original viewpoint and the virtual viewpoint is obtained, whether the distance between each original viewpoint and the virtual viewpoint is within a predetermined range is determined, and if a viewpoint whose distance from the virtual viewpoint is within the predetermined range exists among the plurality of original viewpoints, it is determined that the viewpoint satisfies the mapping condition, and it is determined as the target viewpoint.
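As a minimal sketch of this selection (the 1-D camera positions and the function name are assumptions; a real rig would use 3-D positions and a distance metric over them):

```python
def select_target_viewpoints(viewpoint_positions, virtual_pos, max_dist):
    """Return indices of original viewpoints whose distance to the
    virtual viewpoint lies within the predetermined range."""
    return [i for i, p in enumerate(viewpoint_positions)
            if abs(p - virtual_pos) <= max_dist]

# Illustrative 1-D camera positions; virtual viewpoint at 2.3.
positions = [0.0, 1.0, 2.0, 3.0, 4.0]
targets = select_target_viewpoints(positions, 2.3, max_dist=1.0)
```

Only the viewpoints within `max_dist` of the virtual position are kept as target viewpoints; the rest do not satisfy the mapping condition.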
As an optional implementation manner, marking the pixel points in the pixel region to be shielded, wherein only the pixel points with the marks in the depth map of the target viewpoint are subjected to the mapping operation.
For real-image reconstruction, a number of images are used, all captured by real cameras in a physical layout. In this embodiment, the images of the original cameras within a predetermined range of the current virtual viewpoint are read to complete the image reconstruction of the virtual viewpoint. During the virtual viewpoint mapping of the depth map, image reconstruction is based on the pixel points of images captured by original cameras close to the virtual viewpoint, while images captured by original cameras far from the virtual viewpoint are shielded and do not take part in the mapping operation of the reconstruction process.
The pixel points of images from original cameras far from the virtual viewpoint are shielded mainly because of the physical positions of the original cameras relative to the virtual viewpoint: the farther an original camera is from the virtual viewpoint, the larger the error in the mapped reconstruction result. Therefore, to preserve the quality of the reconstructed image and avoid creating hole areas, this embodiment shields the images of the original cameras that would cause holes or large errors, so that they do not participate in the image reconstruction process.
Therefore, in this embodiment, after the pixel region to be masked is determined in each depth map based on its edge pixel points, not every pixel point in the depth map takes part in the mapping operation. The embodiment marks the pixel points in the pixel region to be masked; the marked pixel points may be pixel points of images captured by original cameras close to the virtual viewpoint, so that the mapping operation is executed only for marked pixel points in the depth map of the target viewpoint.
As an alternative implementation, mapping operation is not performed on the marked pixel points of the original viewpoints except the target viewpoint.
In this embodiment, even if marked pixel points are present in original viewpoints other than the target viewpoint, the mapping operation is not performed on those pixel points; that is, among the plurality of original viewpoints, the viewpoints other than the target viewpoints are masked. For example, the M target viewpoints closest to the virtual viewpoint position are kept, and the marked pixel points are mapped only from these M target viewpoints.
As an alternative embodiment, when performing image mapping on the region except the pixel region to be masked in the depth map, the depth map of all the original viewpoints is used for reconstruction.
In this embodiment, in the process of performing virtual viewpoint mapping on the depth map, an area of the depth map other than the pixel area to be masked may also be subjected to image mapping, where pixel points in the area of the depth map other than the pixel area to be masked are unmarked pixel points. When image mapping is performed on unmarked pixel points of regions except the pixel region to be shielded in the depth map, the depth map of all original viewpoints can be adopted for reconstruction.
As an optional implementation, taking an edge pixel point as the reference point, a predetermined number of pixel points are extracted in the direction toward the foreground and written into the pixel point set to be shielded, where the pixel area to be shielded comprises this pixel point set.
In this embodiment, after edge detection is performed on the depth map of each original viewpoint and the edge pixel points of each depth map are obtained, each edge pixel point may be taken as a reference point and a predetermined number of pixel points extracted in the direction toward the foreground, for example by marking out that number of pixel points in the same row as the edge pixel point along the foreground direction. The predetermined number is a settable parameter, for example 1, 2, 3, 4, 5, and so on.
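A sketch of extracting the to-be-shielded set in one row, assuming, per the text, that the foreground carries the larger depth value; the function name and the bounds handling are illustrative:

```python
import numpy as np

def mask_set_for_row(depth_row, edge_col, n=2):
    """Collect n pixel columns on the foreground side of an edge pixel.

    The foreground side is taken to be the side with the larger depth
    value (nearer objects encoded with larger values, as in the text);
    n plays the role of the settable "predetermined number"."""
    left, right = depth_row[edge_col - 1], depth_row[edge_col + 1]
    step = -1 if left > right else 1        # walk toward the foreground side
    cols, c = [], edge_col
    for _ in range(n):
        c += step
        if 0 <= c < len(depth_row):
            cols.append(c)
    return cols

depth_row = np.array([200, 200, 200, 50, 50, 50])
# Edge sits between columns 2 and 3; reference the degraded pixel at column 3.
to_shield = mask_set_for_row(depth_row, edge_col=3, n=2)
```

The returned columns form the row's contribution to the pixel point set to be shielded.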
As an alternative implementation, the number of extracted pixel points in the direction toward the foreground is determined based on the degree of loss of the depth map, wherein the degree of loss of the depth map is proportional to the number of extracted pixel points.
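One simple way to realize this proportionality, sketched under the assumption of a linear mapping (the application only states that the count grows with the loss degree; the parameters here are invented):

```python
def mask_width_for_loss(loss_level, base=1, per_level=1):
    """Choose how many foreground-side pixels to extract: the count grows
    in proportion to the estimated edge loss of the depth map."""
    return base + per_level * int(loss_level)

# Heavier compression loss -> wider to-be-shielded region.
width_light = mask_width_for_loss(0)
width_heavy = mask_width_for_loss(3)
```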
In this embodiment, when determining how many pixel points to extract in the direction toward the foreground, the degree of loss of the depth map may be determined first, that is, the degree of loss suffered by the depth-map edges during processing such as compression and transmission. Optionally, the number of pixel points extracted in the direction toward the foreground, i.e., the predetermined number written into the pixel point set to be masked, grows with this degree of loss.
As an optional implementation manner, before performing edge detection on the depth map of the original viewpoint in step S202, the method further includes: determining the position of a virtual viewpoint to be mapped; marking a first original viewpoint set with the distance from the virtual viewpoint position exceeding a preset threshold value as an original viewpoint needing to execute edge detection; marking the second set of original viewpoints not exceeding the predetermined threshold from the virtual viewpoint position as original viewpoints not requiring to perform edge detection.
In this embodiment, before edge detection is performed on the depth map of an original viewpoint, the position of the virtual viewpoint to be mapped may be determined, where the virtual viewpoint position may be a 6DoF virtual viewpoint position. The original viewpoints whose distance from the virtual viewpoint position exceeds a predetermined threshold are then determined and combined into a first original viewpoint set, where the predetermined threshold is a critical value for measuring closeness to the virtual viewpoint position, and the first original viewpoint set may be marked as the original viewpoints for which edge detection needs to be performed.
Optionally, in this embodiment, the original viewpoints whose distance from the virtual viewpoint position does not exceed the predetermined threshold are combined into a second original viewpoint set, which is marked as the original viewpoints that do not need to perform edge detection; that is, the second original viewpoint set, being closer to the virtual viewpoint position, may still perform the full mapping operation, unaffected by the marking applied to the viewpoints that need edge detection. The size of this second set is a settable threshold, which may be denoted M.
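A sketch of this split, using the M-nearest reading of the text rather than a fixed distance threshold (the 1-D distance model and the function name are assumptions):

```python
def split_viewpoints(positions, virtual_pos, m=2):
    """Exempt the m viewpoints nearest the virtual position from edge
    detection; all other viewpoints need edge detection before mapping."""
    order = sorted(range(len(positions)),
                   key=lambda i: abs(positions[i] - virtual_pos))
    near = set(order[:m])                      # second set: no edge detection
    needs_edge_detection = [i for i in range(len(positions)) if i not in near]
    return needs_edge_detection, sorted(near)

positions = [0.0, 1.0, 2.0, 3.0, 4.0]
needs, exempt = split_viewpoints(positions, virtual_pos=2.3, m=2)
```

`exempt` corresponds to the second original viewpoint set (full mapping), `needs` to the first set (edge detection and masking before mapping).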
As an optional implementation manner, in the process of performing virtual viewpoint mapping on the depth map, mapping pixel points except for a pixel region to be shielded, and taking values from an image of a first original camera based on the newly mapped depth map to generate a first reconstructed image.
In this embodiment, during the mapping of the depth map to the virtual viewpoint position, image mapping is performed on the pixel region to be masked only in the depth map of the target viewpoint. The pixel points outside the pixel region to be masked may be mapped to the new 6DoF virtual viewpoint position according to the spatial geometric relationship, yielding a newly mapped depth map; after post-processing, pixel values are taken from the image of the first original camera based on this newly mapped depth map, generating a first reconstructed image of the new virtual viewpoint. Here the first original camera is the first of the N cameras; since this reconstructed image is obtained from the first camera image and the depth map, it may be referred to as P1.
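The mapping-and-value-taking step can be sketched with a toy horizontal-disparity warp. This is not the application's full spatial-geometry mapping: the disparity model, the z-buffer rule (larger depth value = nearer, following the text's convention), and the `allow` mask are illustrative assumptions.

```python
import numpy as np

def forward_map(depth, color, baseline_shift, allow):
    """Forward-map a color image to a virtual viewpoint (toy DIBR).

    Disparity is taken proportional to depth (horizontal shift only);
    `allow` is a boolean mask so that only permitted pixels (e.g. the
    marked pixels of the target viewpoint) are mapped. Returns the
    warped image and a validity mask."""
    h, w = depth.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), -1, dtype=np.int32)  # z-buffer keeps the nearest value
    for y in range(h):
        for x in range(w):
            if not allow[y, x]:
                continue
            d = int(depth[y, x])
            x2 = x + (d * baseline_shift) // 255  # toy disparity model
            if 0 <= x2 < w and d > zbuf[y, x2]:
                zbuf[y, x2] = d
                out[y, x2] = color[y, x]
    return out, zbuf >= 0

depth = np.array([[200, 50]], dtype=np.uint8)
color = np.array([[10, 20]], dtype=np.uint8)
allow = np.ones((1, 2), dtype=bool)
warped, valid = forward_map(depth, color, baseline_shift=2, allow=allow)
```

The nearer (larger-depth) pixel wins the z-buffer test, and positions nothing maps to stay invalid — the holes handled in the fusion step.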
As an optional implementation, after generating the first reconstructed image, the method further includes performing image reconstruction of other virtual perspectives on the depth map to obtain a plurality of reconstructed images, wherein the plurality of virtual perspectives are determined based on cameras arranged at different perspective positions; and fusing the first reconstructed image and the plurality of reconstructed images to generate a reconstructed result of the depth map.
In this embodiment, after the first reconstructed image is generated, image reconstruction from the other virtual perspectives may be performed on the depth map, obtaining a plurality of reconstructed images, where the virtual perspectives may be determined by cameras arranged at different perspective positions. Optionally, the embodiment processes the images of all N cameras in the same way as the image of the first original camera, obtaining P1, P2, ..., PN, that is, N reconstructed images.
After image reconstruction from the other virtual viewing angles yields the plurality of reconstructed images, the first reconstructed image and the plurality of reconstructed images may be fused: weighted averaging and a hole-filling algorithm are applied to their pixels. Because of the occlusion relationships in the depth map, a pixel mapped in one reconstructed image may not be mapped in the others; for example, an image pixel mapped in the reconstructed image P1 is not necessarily mapped in the reconstructed image P2. The pixels that are mapped in the reconstructed images P1, P2, and so on are determined and then weighted and averaged to obtain the final image pixels, which form the reconstruction result of the depth map.
Optionally, when the first reconstructed image and the plurality of reconstructed images are fused to generate the reconstruction result of the depth map, for a given pixel position (x, y), all m pixel points that have a mapped value at (x, y) may be collected from the reconstructed images P1, P2, ..., PN (if none of the camera images has a value there, the position (x, y) is marked as a hole pixel and handled in the next step). For the obtained m pixel points (m != 0), a weighted average is taken to obtain the final value, where the weights may be a simple average or may be chosen proportional to the inverse of the distance between the virtual viewpoint position and each camera position: the closer the camera, the greater its weight.
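A sketch of the weighted fusion and hole marking, assuming inverse-distance weights as one of the options named above (function name and array shapes are illustrative):

```python
import numpy as np

def fuse(recons, valid_masks, cam_dists):
    """Fuse reconstructed images P1..PN with inverse-distance weights.

    Positions with no mapped value in any reconstruction are marked as
    hole pixels for the later hole-filling pass."""
    weights = 1.0 / np.asarray(cam_dists, dtype=float)
    acc = np.zeros(recons[0].shape, dtype=float)
    wsum = np.zeros(recons[0].shape, dtype=float)
    for img, mask, w in zip(recons, valid_masks, weights):
        acc += np.where(mask, img * w, 0.0)   # accumulate only mapped values
        wsum += np.where(mask, w, 0.0)
    holes = wsum == 0
    fused = np.where(holes, 0.0, acc / np.where(holes, 1.0, wsum))
    return fused, holes

p1 = np.array([[100.0, 0.0]]); m1 = np.array([[True, False]])
p2 = np.array([[400.0, 0.0]]); m2 = np.array([[True, False]])
fused, holes = fuse([p1, p2], [m1, m2], cam_dists=[1.0, 2.0])
```

The closer camera (distance 1.0) gets twice the weight of the farther one (distance 2.0); the second column, mapped by neither image, is flagged as a hole.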
Optionally, the next step is, for every pixel position (x, y) that has no value in any of the reconstructed images P1, P2, ..., PN, to interpolate from the surrounding pixels that already have values, using a hole-filling algorithm.
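A very simple 1-D stand-in for such a hole-filling pass, interpolating each hole from its nearest valid neighbours (real implementations work in 2-D and are considerably more elaborate):

```python
import numpy as np

def fill_holes_row(row, holes):
    """Fill hole pixels in one row by averaging the nearest valid
    neighbour on each side."""
    out = row.astype(float).copy()
    valid_idx = [i for i in range(len(row)) if not holes[i]]
    for i in range(len(row)):
        if holes[i]:
            left = max([v for v in valid_idx if v < i], default=None)
            right = min([v for v in valid_idx if v > i], default=None)
            vals = [out[v] for v in (left, right) if v is not None]
            out[i] = sum(vals) / len(vals) if vals else 0.0
    return out

row = np.array([100.0, 0.0, 200.0])
holes = np.array([False, True, False])
filled = fill_holes_row(row, holes)
```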
The embodiment of the invention also provides another depth map reconstruction method.
Fig. 3 is a flowchart of another depth map reconstruction method according to an embodiment of the present invention. As shown in fig. 3, the method may include the steps of:
step S302, obtaining depth maps of a plurality of original viewpoints, carrying out edge detection on the depth map of each original viewpoint, and obtaining edge pixel points of each depth map.
In the technical solution provided by step S302 of the present invention, there may be a plurality of original viewpoints, each original viewpoint has a corresponding depth map, which may be a depth map for compression and transmission, and the depth maps of the plurality of original viewpoints are obtained.
During compression and transmission, the edges of the depth map suffer a certain loss. This embodiment therefore performs edge detection on the depth map of the original viewpoint to locate the lossy edge pixel points in the depth map; taking an edge pixel point as the reference, one or more pixel points toward the foreground direction may then be selected, and together they constitute the pixel region to be shielded.
Alternatively, in this embodiment, the edge detection on the depth map of each original viewpoint can be performed by the simple comparison |Depth_left - Depth_right| > THR.
Step S304, in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on pixel areas to be shielded in the depth map of the target viewpoint.
In the technical solution provided in step S304 of the present invention, after performing edge detection on the depth map of each original viewpoint and obtaining lost pixel points in each depth map, in the process of performing virtual viewpoint mapping on the depth map, image mapping is performed on a pixel region to be shielded in the depth map of a target viewpoint, where the target viewpoint is a viewpoint that satisfies mapping conditions in a plurality of original viewpoints.
In this embodiment, the edges of the depth map suffer a certain loss due to compression and transmission, changing from the larger value of the original foreground to a smaller value; such a depth map causes the foreground to fly out when mapped to the virtual viewpoint, producing mapped foreground texture in what should be a hollow area. In the process of performing virtual viewpoint mapping on the depth map, this embodiment performs image mapping on the pixel area to be shielded only in the depth map of the target viewpoint: for example, image mapping is performed for the shielded pixel points of depth maps closer to the virtual viewpoint, and prohibited for the corresponding pixel points of depth maps farther from the virtual viewpoint. This makes the DIBR algorithm robust against compression loss and significantly improves the interpolation quality at the foreground edges of the depth map.
As an optional implementation manner, in step S302, after performing edge detection on the depth map of each original viewpoint and obtaining lost pixel points in each depth map, the method further includes: and determining a pixel area to be shielded in the depth map based on the edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces the foreground direction by taking the edge pixel points as the reference.
In this embodiment, after edge detection is performed on the depth map of each original viewpoint and the lossy pixel points in each depth map are obtained, the pixel region to be shielded may be determined in each depth map based on its edge pixel points. The pixel region to be shielded comprises at least one pre-marked pixel point facing the foreground direction with an edge pixel point as reference, for example pixel points near those whose values are lost after the depth-map edge is compressed, which ensures that the DIBR algorithm is robust against compression loss.
The embodiment of the invention also provides another depth map reconstruction method.
Fig. 4 is a flowchart of a method for reconstructing a depth map according to an embodiment of the present invention. As shown in fig. 4, the method may include the steps of:
step S402, displaying depth maps of a plurality of original viewpoints.
In the technical solution provided by step S402 of the present invention, the number of the original viewpoints may be multiple, each original viewpoint has a corresponding depth map, which may be a depth map for compression and transmission, and optionally, the obtained depth maps of the multiple original viewpoints are displayed on a graphical user interface.
Step S404, showing the edge pixel points extracted after the edge detection is carried out on each depth map.
In the technical solution provided in step S404 of the present invention, after the depth maps of a plurality of original viewpoints are displayed, the edge pixel points extracted after the edge detection is performed on each depth map may be displayed on the graphical user interface.
In this embodiment, the edge detection is performed on the depth map of the original viewpoint, so that pixel points at the edge in the depth map of the original viewpoint can be detected, and then the edge pixel points extracted after the edge detection is performed on each depth map are displayed on the graphical user interface.
Step S406, displaying the image result after image mapping is performed on the pixel area to be shielded in the depth map of the target viewpoint in the process of performing virtual viewpoint mapping on the depth map.
In the technical solution provided in step S406 of the present invention, after the edge pixel points extracted after performing edge detection on each depth map are displayed, an image result obtained after performing image mapping on a to-be-shielded pixel region in the depth map of a target viewpoint may be displayed, where the target viewpoint is a viewpoint that satisfies a mapping condition from a plurality of original viewpoints, and the to-be-shielded pixel region includes at least one pixel point that faces a foreground direction with the edge pixel point as a reference.
In this embodiment, the edges of the depth map suffer a certain loss due to processing such as compression and transmission, changing from the larger value of the original foreground to a smaller value; such a depth map causes the foreground to fly out when mapped to the virtual viewpoint, so that mapped foreground texture appears in the original hollow area. The embodiment may determine the pixel region to be shielded in each depth map based on its edge pixel points, where the pixel region to be shielded comprises a pixel point set; this set may be at least one pre-marked pixel point facing the foreground direction with an edge pixel point as reference, for example pixel points near those that suffer loss after the depth-map edge is compressed, while the other pixel points of the depth map, outside the pixel region to be shielded, may be reconstructed from the depth maps of all original viewpoints during the mapping operation. Thus, in the process of performing virtual viewpoint mapping on the depth map, the embodiment performs image mapping on the pixel region to be shielded only in the depth map of the target viewpoint, obtaining an image result of the virtual viewpoint mapping in which the artifacts of the pixel area to be shielded no longer appear; this image result is displayed on the graphical user interface.
As an optional implementation manner, an embodiment of the present invention further provides another depth map reconstruction method. The method can comprise the following steps: determining a target viewpoint from a plurality of original viewpoints, wherein the target viewpoint is a viewpoint which meets mapping conditions in the plurality of original viewpoints; acquiring a depth map of a target viewpoint, and performing edge detection on the depth map of the target viewpoint to acquire edge pixel points of the depth map of the target viewpoint; determining a pixel area to be shielded based on edge pixel points of the depth map of the target viewpoint, wherein the pixel area to be shielded comprises at least one pixel point facing the foreground direction by taking the edge pixel points as a reference; in the process of carrying out virtual viewpoint mapping on the depth maps of a plurality of original viewpoints, selecting a pixel area to be shielded in the depth map of a target viewpoint to carry out image mapping.
In this embodiment, each original viewpoint has a corresponding depth map, depth maps of a plurality of original viewpoints are obtained, and a target viewpoint is determined from the plurality of original viewpoints, where the target viewpoint is a viewpoint that satisfies a mapping condition among the plurality of original viewpoints, so that the embodiment may perform edge detection only on the depth map of the target viewpoint.
The embodiment performs edge detection on the depth map of the target viewpoint, and can be used for detecting pixel points at the edge in the depth map of the target viewpoint, that is, edge pixel points of the depth map, thereby obtaining the edge pixel points of the depth map of the target viewpoint.
Optionally, in this embodiment, the edge detection on the depth map of the target viewpoint may be performed by the simple comparison |Depth_left - Depth_right| > THR.
In this embodiment, the depth map of the target viewpoint undergoes compression, transmission, and similar processing, so its edges suffer a certain loss, changing from the larger value of the original foreground to a smaller value; such a depth map causes the foreground to fly out when mapped to the virtual viewpoint, producing mapped foreground texture in the original hollow area. After edge detection on the depth map of the target viewpoint yields its edge pixel points, the pixel region to be shielded in the depth map of the target viewpoint may be determined based on those edge pixel points, where the pixel region to be shielded comprises a pixel point set containing at least one pixel point facing the foreground direction with an edge pixel point as reference.
In the process of performing virtual viewpoint mapping on the depth maps of the plurality of original viewpoints, the pixel region to be shielded is selected for image mapping only in the depth map of the target viewpoint; for example, where the pixel region to be shielded contains pixel points closer to the virtual viewpoint in the depth map of the target viewpoint, image mapping is performed on these, and image mapping is prohibited for pixel points farther from the virtual viewpoint. This makes the DIBR algorithm robust against compression loss and significantly improves the interpolation quality at the foreground edges of the depth map.
As an optional implementation manner, an embodiment of the present invention further provides another depth map reconstruction method. The method can comprise the following steps: in the live broadcasting process, acquiring depth maps of a plurality of original viewpoints on a live broadcasting picture; performing edge detection on the depth map of each original viewpoint to obtain edge pixel points of each depth map; determining a pixel area to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces to the foreground direction by taking the edge pixel points as a reference; in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on pixel regions to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints.
The depth map reconstruction method of the embodiment can be applied to transaction scenes, such as shopping live scenes. The live broadcast scene of the embodiment has a plurality of original viewpoints, each original viewpoint has a corresponding depth map, and the depth maps of the plurality of original viewpoints are obtained.
The embodiment performs edge detection on the depth map of each original viewpoint in the live-cast scene, and can be used for detecting pixel points at the edge in the depth map of the original viewpoint, thereby obtaining edge pixel points of each depth map.
Optionally, in this embodiment, the edge detection on the depth map of each original viewpoint in the live scene can be performed by the simple comparison |Depth_left - Depth_right| > THR.
In this embodiment, the depth map in the live broadcast scene has a certain loss in its edge due to the processing such as compression and transmission, and changes from a larger value of the original foreground to a smaller value, and such a depth map causes the foreground to fly out when mapping to the virtual viewpoint in the live broadcast scene, thereby generating the mapping texture of the foreground in the original hollow area. In this embodiment, after obtaining the depth maps of the multiple original viewpoints in the live broadcast scene, performing edge detection on the depth map of each original viewpoint, and obtaining edge pixel points of each depth map, a pixel region to be masked in the depth map may be determined based on the edge pixel points of each depth map in the live broadcast scene, where the pixel region to be masked includes a pixel point set, and the pixel point set may be at least one pixel point that is marked in advance and faces the foreground direction with the edge pixel points as references.
In this embodiment, in the live broadcast scene, performing virtual viewpoint mapping refers to mapping the depth map to the target virtual viewpoint position, that is, the depth-map-based virtual viewpoint reconstruction process in the live broadcast scene. Image mapping is performed on the pixel region to be shielded only in the depth map of the target viewpoint; for example, where the pixel region to be shielded contains pixel points closer to the virtual viewpoint in the depth map, image mapping is performed on these pixel points, and image mapping is prohibited for pixel points farther from the virtual viewpoint. The target viewpoint is the viewpoint among the plurality of original viewpoints that satisfies the mapping condition, that is, the viewpoint used to implement the virtual viewpoint mapping of the depth map in the live broadcast scene. This ensures that the DIBR algorithm is robust against compression loss and significantly improves the interpolation quality at the foreground edges of the depth map.
As an optional implementation manner, an embodiment of the present invention further provides another depth map reconstruction method. The method may include the following steps: obtaining the depth maps of multiple original viewpoints, and selecting an image of a predetermined region in the depth map of each original viewpoint to obtain the pixel region to be masked in the depth map, where the pixel region to be masked includes at least one pixel point extending from an edge pixel point toward the foreground; and, in the process of performing virtual viewpoint mapping on the depth maps, masking image mapping for the pixel region to be masked in the depth map of a target viewpoint, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints.
In this embodiment there may be multiple original viewpoints, each with a corresponding depth map. After the depth maps of the multiple original viewpoints are obtained, only part of the region of each original viewpoint's depth map may be selected for mapping. This embodiment may select an image of a predetermined region in the depth map of each original viewpoint by performing edge detection on the depth map of each original viewpoint in the live broadcast scene, and determine that region as the pixel region to be masked in the depth map.
In this embodiment, the edges of the depth map suffer a certain loss due to processing such as compression and transmission, changing from the larger values of the original foreground to smaller values. Such a depth map causes the foreground to fly out when the depth map is mapped to the virtual viewpoint, so that mapping texture of the foreground is generated in the original hole area. In this embodiment, the depth maps of multiple original viewpoints are obtained, and an image of a predetermined region in each depth map is selected to obtain the pixel region to be masked, where the pixel region to be masked includes a pixel point set, which may be at least one pre-marked pixel point extending from the edge pixel points toward the foreground.
In this embodiment, performing virtual viewpoint mapping refers to a virtual viewpoint reconstruction process based on the depth map in the live broadcast scene, in which image mapping is masked (prohibited) for the pixel region to be masked in the depth map of the target viewpoint. For example, the depth maps of original viewpoints close to the virtual viewpoint are mapped in full, while for original viewpoints farther from the virtual viewpoint, the marked pixel points are excluded from image mapping. The target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints, that is, a viewpoint used for virtual viewpoint mapping of the depth map in the live broadcast scene. This ensures that the DIBR algorithm is robust against compression loss, and the interpolation quality of the foreground edge of the depth map is markedly improved.
As an optional implementation manner, in this embodiment, edge detection may be performed on the depth map of the original viewpoint, and the edge pixel points determined in the depth map form the predetermined region.

In this embodiment, edge detection is performed on the depth map of the original viewpoint to detect the pixel points located at edges in the depth map, thereby obtaining the edge pixel points of each depth map. Based on the edge pixel points of each depth map, the predetermined region of this embodiment can be formed from those edge pixel points, achieving the purpose of selecting only part of the region of the original viewpoint's depth map for mapping.
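The edge detection and foreground-side marking described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: it assumes a single-channel depth map in which the foreground has larger values (as this document states) and uses the simple horizontal difference test |Depth_left − Depth_right| > THR described in the detailed embodiment; the function name and parameter defaults are illustrative.

```python
import numpy as np

def mark_edge_mask(depth, thr=8, p=3):
    """Detect depth edges via a horizontal difference test and mark the
    P pixels on the foreground (larger-value) side of each edge for
    masking during DIBR mapping."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    d = depth.astype(np.int64)
    for y in range(h):
        for x in range(1, w):
            left, right = d[y, x - 1], d[y, x]
            if abs(left - right) > thr:     # |Depth_left - Depth_right| > THR
                if left > right:            # foreground on the left side
                    mask[y, max(0, x - p):x] = True
                else:                       # foreground on the right side
                    mask[y, x:min(w, x + p)] = True
    return mask
```

Pixels where `mask` is `True` would then be skipped in the mapping step; all other pixels are mapped as usual.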
In the DIBR mapping process, this embodiment masks the pixel values that are corrupted after the depth map edges are compressed or transmitted, thereby ensuring that the DIBR algorithm is robust against compression loss. The method of this embodiment is thus a DIBR method robust to depth map compression loss: for the edge loss of the depth map caused by compression or transmission, a strategy of masking the mapping of the depth map edges is adopted. Compared with other algorithms in the related art, this embodiment markedly improves the interpolation quality of the foreground edge of the depth map, solves the technical problem that the original hole area generates mapping texture of the foreground during virtual viewpoint reconstruction of the depth map, and achieves the technical effect that the original hole area no longer generates such mapping texture.
Example 2
An embodiment of the present invention further provides a depth map reconstruction system. It should be noted that the depth map reconstruction system of this embodiment can be used to perform the depth map reconstruction method of embodiment 1 of the present invention.
Fig. 5 is a schematic diagram of a depth map reconstruction system according to an embodiment of the present invention. As shown in fig. 5, the depth map reconstruction system 50 may include: a client 51 and a cloud server 52.
The client 51 is configured to display the depth maps of the multiple original viewpoints.
In this embodiment, there may be multiple original viewpoints, each with a corresponding depth map, which may be a depth map that has undergone compression and transmission. Optionally, the client 51 of this embodiment includes a graphical user interface, on which the depth maps may be displayed.
The cloud server 52 communicates with the client 51 and is configured to obtain the depth maps of the original viewpoints, perform edge detection on the depth map of each original viewpoint to generate edge pixel points, and determine the pixel region to be masked in the depth map, where the pixel region to be masked includes at least one pixel point extending from an edge pixel point toward the foreground.
The depth map reconstruction system of this embodiment may further include the cloud server 52, which communicates with the client 51 and acquires the depth maps of the original viewpoints sent by the client 51. The cloud server may perform edge detection on the depth map of each original viewpoint to detect the pixel points at its edges, and determine the pixel region to be masked in the depth map based on the edge pixel points of each depth map. The pixel region to be masked includes pixel points on which no mapping operation is performed, which may be pre-marked pixel points extending from the edge pixel points toward the foreground; the other pixel points in the depth map outside the pixel region to be masked may still undergo the mapping operation.
In the process of performing virtual viewpoint mapping on the depth map, the cloud server 52 masks image mapping for the pixel region to be masked in the depth map of the target viewpoint, and returns a reconstructed image generated based on the mapping result to the client, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints.
In this embodiment, during virtual viewpoint mapping of the depth map, the cloud server 52 may mask image mapping for the pixel region to be masked in the depth map of the target viewpoint, obtain the mapping result of the virtual viewpoint mapping, and generate a reconstructed image based on that result; the cloud server 52 may further return the generated reconstructed image to the client 51.
In this embodiment, the depth maps of the original viewpoints are presented by the client 51. The cloud server 52 obtains the depth maps of the original viewpoints sent by the client 51, performs edge detection on the depth map of each original viewpoint to generate edge pixel points, and then determines the pixel region to be masked in each depth map, where the pixel region to be masked includes at least one pixel point extending from an edge pixel point toward the foreground. In the process of performing virtual viewpoint mapping on the depth map, the cloud server 52 masks image mapping for the pixel region to be masked in the depth map of the target viewpoint, and returns a reconstructed image generated based on the mapping result to the client 51, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints. That is to say, after the edge pixel points of the depth map are obtained, image mapping is masked for the pixel region to be masked during virtual viewpoint mapping, which ensures the robustness of depth-map-based virtual viewpoint reconstruction against compression loss, improves the interpolation quality of the foreground edge of the depth map, solves the technical problem that the original hole area generates mapping texture of the foreground during virtual viewpoint reconstruction of the depth map, and achieves the technical effect that the original hole area no longer generates such mapping texture.
Example 3
A preferred implementation of the above method of this embodiment is further illustrated below.
In the related art, the DIBR algorithm is designed under the assumption of a correct depth map and is not optimized for the losses the depth map suffers during compression and transmission. The DIBR algorithm proceeds as follows:
1) Map the depth map to the new 6DoF virtual viewpoint position according to the spatial geometric relationship, post-process the newly mapped depth map, and then take values from the original camera image according to the newly mapped depth map, thereby forming a reconstructed image of the new virtual viewpoint. Since this reconstructed image is derived from the first camera image and its depth map, it may be referred to as reconstructed image P1. The same operation is performed on the images of all N cameras, resulting in N reconstructed images P1, P2, …, PN.
2) Fuse the N reconstructed images P1, P2, …, PN, which mainly involves weighted averaging of pixels and a hole filling algorithm. Due to the occlusion relationships in the depth maps, an image pixel mapped in reconstructed image P1 may not be mapped in reconstructed image P2; when a pixel is mapped in both P1 and P2, the mapped values are weighted and averaged to obtain the final pixel. This may be realized by the following steps:
Step a: for a certain pixel position (x, y) in the image, obtain from P1, P2, …, PN all m pixels that have mapping values at (x, y). (If none of the camera images has a value there, the position (x, y) may be marked as a hole pixel; after step a has been completed for all image pixel positions, these hole pixels proceed to step b.)

For the m pixels obtained (m ≠ 0), compute their weighted average as the final value. The weights are typically either a simple average or the reciprocal of the distance between the virtual viewpoint position and each camera position (cameras at closer positions receive larger weights).

Step b: for all positions (x, y) that have no value in any of the reconstructed images P1, P2, …, PN, interpolate from the surrounding pixels already obtained, using a hole filling algorithm.
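The fusion of steps a and b above can be sketched as follows. This is a hedged NumPy illustration, not the patent's exact implementation: unmapped pixels are represented as NaN, the weights may be (for example) the reciprocal of each camera's distance to the virtual viewpoint, and the hole filling here is a deliberately naive row-wise interpolation stand-in for the hole filling algorithm the text refers to.

```python
import numpy as np

def fuse_reconstructions(recons, weights):
    """Weighted-average fusion of N reconstructed images; positions with
    no mapped value in any view are flagged as holes for later filling.
    recons: list of float arrays (H, W) with np.nan where unmapped."""
    stack = np.stack(recons)                       # (N, H, W)
    wts = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    valid = ~np.isnan(stack)
    wsum = np.where(valid, wts, 0.0).sum(axis=0)
    vsum = np.where(valid, np.nan_to_num(stack) * wts, 0.0).sum(axis=0)
    hole = wsum == 0                               # no camera mapped here
    fused = np.where(hole, np.nan, vsum / np.where(hole, 1.0, wsum))
    return fused, hole

def fill_holes_rowwise(img, hole):
    """Naive hole filling: linear interpolation from the nearest valid
    pixels in the same row (a stand-in for the real algorithm)."""
    out = img.copy()
    for y in range(out.shape[0]):
        xs = np.where(~hole[y])[0]
        if xs.size:
            out[y] = np.interp(np.arange(out.shape[1]), xs, out[y, xs])
    return out
```

With weights `1 / dist(camera_i, virtual_viewpoint)`, closer cameras dominate the average, matching the weighting described in the text.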
With the above method, any pixel of the original image can be mapped to the target virtual viewpoint position, thereby filling holes. However, this rests on the premise that the depth map of the original viewpoint is sufficiently accurate, whereas the edges of a depth map after compression or transmission usually suffer a certain compression loss, as shown in fig. 6, which is a schematic diagram of depth map loss according to an embodiment of the present invention. In fig. 6, the depth map edges suffer a certain loss due to compression and transmission; for example, block 1, block 2, and block 3 change from the original larger foreground values to smaller values. When such a depth map is mapped to the virtual viewpoint position, the foreground flies out, so that mapping texture of the foreground is generated in the original hole area. As shown in fig. 7, which illustrates the effect of depth map compression loss on DIBR quality, the DIBR logic needs to be optimized for the images in block 4, block 5, block 6, and block 7, so that the DIBR algorithm is robust to depth map compression loss.
This embodiment further proposes the following algorithm for the above problem.
In this embodiment, the depth map is mapped to the new 6DoF virtual viewpoint position according to the spatial geometric relationship, the newly mapped depth map is post-processed, and values are then taken from the original camera image according to the newly mapped depth map, forming a reconstructed image of the new virtual viewpoint; since this reconstructed image is obtained from the first camera image and its depth map, it may be referred to as reconstructed image P1. The same operation is performed on the images of all N cameras, resulting in N reconstructed images P1, P2, …, PN.
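The mapping step described above can be illustrated with a deliberately simplified, rectified-stereo sketch (an assumption for clarity; real 6DoF DIBR uses full camera intrinsics and extrinsics). Each source pixel is shifted horizontally by a disparity proportional to its depth value, a z-buffer resolves occlusions by keeping the larger (foreground, as in this document) depth value, and unmapped positions remain holes.

```python
import numpy as np

def reconstruct_view(src_img, src_depth, baseline_px):
    """Toy rectified DIBR warp: shift each pixel by a disparity
    proportional to its depth value, taking the color from the source
    camera image; np.nan marks unmapped hole positions. Assumes
    non-negative depth values where larger means nearer."""
    h, w = src_depth.shape
    recon = np.full((h, w), np.nan)
    recon_depth = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            d = src_depth[y, x]
            x_new = x + int(round(baseline_px * d))   # disparity ∝ depth
            if 0 <= x_new < w and d >= recon_depth[y, x_new]:
                recon_depth[y, x_new] = d             # z-buffer: keep nearest
                recon[y, x_new] = src_img[y, x]
    return recon
```

This makes the failure mode in fig. 6 visible: if a compressed edge pixel keeps a spuriously large depth value, it warps far into what should be a hole region, producing the foreground "fly out" texture the embodiment masks against.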
In this embodiment, the following additional logic may be added in the mapping process of the depth map:
Step a: perform edge detection on the depth map of each original viewpoint. The edge detection may be a simple comparison |Depth_left − Depth_right| > THR. For each pixel detected as an edge, P further pixels along the same row in the foreground direction may be marked (P is a settable parameter, e.g., 1, 2, 3, 4, 5 …), as shown in fig. 6.
Step b: the M original viewpoints closest to the virtual viewpoint position (M is a settable threshold) are mapped according to the original method, unaffected by the marked pixels detected in step a.

Step c: for the remaining N − M original viewpoints, the mapping of the pixels marked in step a is masked (i.e., no mapping operation is performed on those pixels), and the remaining unmarked pixels are mapped according to the original method.

The remaining steps may then be processed as before.
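Steps b and c can be sketched as follows, assuming per-viewpoint boolean edge masks from step a (all names are illustrative): the M viewpoints nearest the virtual position map every pixel, while the remaining N − M viewpoints exclude the marked edge pixels from mapping.

```python
import numpy as np

def select_mapping_mask(view_positions, virt_pos, edge_masks, m):
    """Per-viewpoint pixel masks for DIBR mapping: the M viewpoints
    nearest the virtual viewpoint map all pixels (step b); the remaining
    N-M viewpoints skip the edge pixels marked in step a (step c).
    Returns boolean arrays where True means 'map this pixel'."""
    dists = [np.linalg.norm(np.asarray(p) - np.asarray(virt_pos))
             for p in view_positions]
    nearest = set(np.argsort(dists)[:m])
    maps = []
    for i, em in enumerate(edge_masks):
        if i in nearest:
            maps.append(np.ones_like(em, dtype=bool))   # map all pixels
        else:
            maps.append(~em)                            # mask marked pixels
    return maps
```

Each returned mask would gate the per-pixel warp of the corresponding viewpoint before the fusion step.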
FIG. 8 is a diagram of edge detection and edge mapping masking for a depth map, according to an embodiment of the invention. As shown in fig. 8, the N reconstructed images P1, P2, …, PN are fused, which mainly involves weighted averaging of pixels and a hole filling algorithm. Due to the occlusion relationships in the depth maps, an image pixel mapped in reconstructed image P1 may not be mapped in reconstructed image P2; when a pixel is mapped in both P1 and P2, the mapped values are weighted and averaged to obtain the final pixel. Optionally, the method includes the following steps:
Step a: for a certain pixel position (x, y) in the image, obtain from the reconstructed images P1, P2, …, PN all m pixels that have mapping values at (x, y). (If none of the camera images has a value there, the position (x, y) may be marked as a hole pixel; after step a has been completed for all image pixel positions, these hole pixels proceed to step b.)

For the m pixels obtained (m ≠ 0), compute their weighted average as the final value. The weights are typically either a simple average or the reciprocal of the distance between the virtual viewpoint position and each camera position (the closer the camera, the greater the weight).

Step b: for all pixel positions (x, y) that have no value in any of the reconstructed images P1, P2, …, PN, interpolate from the surrounding pixels already obtained, using a hole filling algorithm.
Fig. 9 is a diagram illustrating an interpolation result according to an embodiment of the present invention. As shown in fig. 9, the pixel values lost after the foreground edges of the depth map are compressed are masked in the DIBR mapping process, thereby ensuring that the DIBR algorithm is robust against compression loss.
This embodiment provides a DIBR method robust to depth map compression loss: for the edge loss of the depth map caused by compression, a strategy of masking the depth map mapping of the secondary (farther) viewpoints is adopted. Compared with other algorithms in the related art, this embodiment markedly improves the interpolation quality of the foreground edge, solves the technical problem that the original hole area generates mapping texture of the foreground during virtual viewpoint reconstruction of the depth map, and achieves the technical effect that the original hole area no longer generates such mapping texture.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
Example 4
According to an embodiment of the present invention, a depth map reconstruction apparatus for implementing the above depth map reconstruction method is further provided. It should be noted that the depth map reconstruction apparatus of this embodiment may be used to perform the depth map reconstruction method of the embodiments of the present invention.
Fig. 10 is a schematic diagram of a depth map reconstruction apparatus according to an embodiment of the present invention. As shown in fig. 10, the depth map reconstruction apparatus 100 may include: an acquisition unit 101, a determination unit 102, and a prohibition unit 103.
The obtaining unit 101 is configured to obtain depth maps of multiple original viewpoints, perform edge detection on the depth map of each original viewpoint, and obtain edge pixel points of each depth map.
The determining unit 102 is configured to determine a pixel region to be shielded in the depth map based on edge pixel points of each depth map, where the pixel region to be shielded includes at least one pixel point facing the foreground direction with the edge pixel point as a reference.
The prohibiting unit 103 is configured to prohibit image mapping for the pixel region to be masked in the depth map of a target viewpoint during virtual viewpoint mapping of the depth map, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints.
It should be noted here that the above-mentioned acquiring unit 101, determining unit 102 and prohibiting unit 103 correspond to steps S202 to S206 in embodiment 1, and the three units are the same as the examples and application scenarios realized by the corresponding steps, but are not limited to what is disclosed in the above-mentioned embodiment one. It should be noted that the above units as a part of the apparatus may operate in the computer terminal 10 provided in the first embodiment.
Fig. 11 is a schematic diagram of another depth map reconstruction apparatus according to an embodiment of the present invention. As shown in fig. 11, the depth map reconstructing apparatus 110 may include: a first acquisition unit 111 and a mapping unit 112.
The first obtaining unit 111 is configured to obtain depth maps of multiple original viewpoints, perform edge detection on the depth map of each original viewpoint, and obtain edge pixel points of each depth map.
The mapping unit 112 is configured to mask image mapping for the pixel region to be masked in the depth map of a target viewpoint during virtual viewpoint mapping of the depth map, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints.
It should be noted here that the first acquiring unit 111 and the mapping unit 112 correspond to step S302 and step S304 in embodiment 1, and the two units are the same as the examples and application scenarios realized by the corresponding steps, but are not limited to the disclosure of the first embodiment. It should be noted that the above units, as part of the apparatus, may operate in the computer terminal 10 provided in the first embodiment.
Fig. 12 is a schematic diagram of another depth map reconstruction apparatus according to an embodiment of the present invention. As shown in fig. 12, the depth map reconstruction apparatus 120 may include: a first display unit 121, a presentation unit 122, and a second display unit 123.
A first display unit 121 for displaying depth maps of a plurality of original viewpoints.
The presentation unit 122 is configured to display the edge pixel points extracted after edge detection is performed on each depth map.
The second display unit 123 is configured to display the image result obtained after, during virtual viewpoint mapping of the depth map, image mapping is masked for the pixel region to be masked in the depth map of the target viewpoint, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints, and the pixel region to be masked includes at least one pixel point extending from an edge pixel point toward the foreground.
It should be noted here that the first display unit 121, the presentation unit 122, and the second display unit 123 correspond to steps S402 to S406 in embodiment 1, and the three units are the same as the examples and application scenarios realized by the corresponding steps, but are not limited to the disclosure in the first embodiment. It should be noted that the above units, as part of the apparatus, may operate in the computer terminal 10 provided in the first embodiment.
In the depth map reconstruction apparatus of this embodiment, after the edge pixel points of the depth map are obtained, image mapping is masked for the pixel region to be masked during virtual viewpoint mapping of the depth map. This ensures the robustness of depth-map-based virtual viewpoint reconstruction against compression loss, improves the interpolation quality of the foreground edge of the depth map, solves the technical problem that the original hole area generates mapping texture of the foreground during virtual viewpoint reconstruction of the depth map, and achieves the technical effect that the original hole area no longer generates such mapping texture.
Example 5
Embodiments of the present invention may provide a depth map reconstruction system, which may include a computer terminal, where the computer terminal may be any one computer terminal device in a computer terminal group. Optionally, in this embodiment, the computer terminal may also be replaced with a terminal device such as a mobile terminal.
Optionally, in this embodiment, the computer terminal may be located in at least one network device of a plurality of network devices of a computer network.
In this embodiment, the computer terminal may execute the program code of the following steps of the depth map reconstruction method of an application program: obtaining the depth maps of multiple original viewpoints, performing edge detection on the depth map of each original viewpoint, and obtaining the edge pixel points of each depth map; determining the pixel region to be masked in the depth map based on the edge pixel points of each depth map, where the pixel region to be masked includes at least one pixel point extending from an edge pixel point toward the foreground; and, in the process of performing virtual viewpoint mapping on the depth map, masking image mapping for the pixel region to be masked in the depth map of a target viewpoint, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints.
Alternatively, fig. 13 is a block diagram of a computer terminal according to an embodiment of the present invention. As shown in fig. 13, the computer terminal a may include: one or more processors 132 (only one of which is shown), a memory 134, and a transmission device 136.
The memory may be configured to store a software program and a module, such as program instructions/modules corresponding to the depth map reconstruction method and apparatus in the embodiments of the present invention, and the processor executes various functional applications and data processing by running the software program and the module stored in the memory, that is, implements the depth map reconstruction method described above. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, which may be connected to the computer terminal a via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor may call the information and application program stored in the memory through the transmission device to execute the following steps: obtaining the depth maps of multiple original viewpoints, performing edge detection on the depth map of each original viewpoint, and obtaining the edge pixel points of each depth map; determining the pixel region to be masked in the depth map based on the edge pixel points of each depth map, where the pixel region to be masked includes at least one pixel point extending from an edge pixel point toward the foreground; and, in the process of performing virtual viewpoint mapping on the depth map, masking image mapping for the pixel region to be masked in the depth map of a target viewpoint, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints.
Optionally, the processor may further execute the program code of the following steps: and taking the edge pixel points as reference points, extracting a preset number of pixel points in the direction towards the foreground, and writing the pixel points into a pixel point set to be shielded, wherein the pixel area to be shielded comprises the pixel point set.
Optionally, the processor may further execute the program code of the following steps: and marking the pixel points in the pixel area to be shielded, wherein the marked pixel points represent that the pixel points do not execute the mapping operation of the virtual viewpoint.
Optionally, the processor may further execute the program code of the following steps: and determining the quantity of the pixel points extracted in the direction towards the foreground based on the loss degree of the depth map, wherein the loss degree of the depth map is in direct proportion to the quantity of the extracted pixel points.
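The proportional relationship described above could look like the following hypothetical helper. The exact mapping from loss degree to the pixel count P is an assumption for illustration; the text only states that the number of extracted pixels grows in proportion to the depth map's loss degree.

```python
def pixels_to_mask(loss_degree, base=1, scale=4):
    """Hypothetical mapping from an estimated compression-loss degree
    in [0, 1] to the number P of pixels marked toward the foreground;
    P grows in proportion to the loss, as this embodiment describes."""
    return base + int(round(scale * loss_degree))
```

A heavily compressed stream would thus mask a wider band of edge pixels than a lightly compressed one.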
Optionally, the processor may further execute the program code of the following steps: determining the virtual viewpoint position to be mapped before edge detection is performed on the depth maps of the original viewpoints; marking the first set of original viewpoints whose distance from the virtual viewpoint position exceeds a predetermined threshold as original viewpoints on which edge detection needs to be performed; and marking the second set of original viewpoints whose distance from the virtual viewpoint position does not exceed the predetermined threshold as original viewpoints on which edge detection does not need to be performed.
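The threshold-based marking of the two viewpoint sets can be sketched as follows (a minimal illustration; the function name and threshold value are assumptions):

```python
import numpy as np

def partition_viewpoints(view_positions, virt_pos, threshold):
    """Split original viewpoints by distance to the virtual viewpoint:
    those beyond the threshold need edge detection (and edge masking),
    while the nearer ones are mapped as-is without edge detection."""
    need_edge, skip_edge = [], []
    for i, p in enumerate(view_positions):
        d = np.linalg.norm(np.asarray(p, dtype=float) -
                           np.asarray(virt_pos, dtype=float))
        (need_edge if d > threshold else skip_edge).append(i)
    return need_edge, skip_edge
```

The `skip_edge` set plays the role of the M nearest viewpoints of step b, and `need_edge` the remaining viewpoints whose marked pixels are masked in step c.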
Optionally, the processor may further execute the program code of the following steps: in the process of carrying out virtual viewpoint mapping on the depth map, mapping pixel points except for a pixel region to be shielded, and taking values from an image of a first original camera based on the newly mapped depth map to generate a first reconstructed image.
Optionally, the processor may further execute the program code of the following steps: after the first reconstruction image is generated, performing image reconstruction of other virtual visual angles on the depth map to obtain a plurality of reconstruction images, wherein the plurality of virtual visual angles are determined based on cameras arranged at different visual angle positions; and fusing the first reconstructed image and the plurality of reconstructed images to generate a reconstructed result of the depth map.
As an alternative example, the processor may invoke the information and application program stored in the memory via the transmission device to perform the following steps: obtaining the depth maps of multiple original viewpoints, performing edge detection on the depth map of each original viewpoint, and obtaining the edge pixel points of each depth map; and, in the process of performing virtual viewpoint mapping on the depth map, prohibiting image mapping for the pixel region to be masked in the depth map of a target viewpoint, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints.
Optionally, the processor may further execute the program code of the following steps: and determining a pixel area to be shielded in the depth map based on the edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces the foreground direction by taking the edge pixel points as the reference.
As an alternative example, the processor may invoke the information and application program stored in the memory via the transmission device to perform the following steps: displaying the depth maps of multiple original viewpoints; displaying the edge pixel points extracted after edge detection is performed on each depth map; and displaying the image result obtained after, during virtual viewpoint mapping of the depth map, image mapping is masked for the pixel region to be masked in the depth map of the target viewpoint, where the target viewpoint is a viewpoint that satisfies the mapping condition among the multiple original viewpoints, and the pixel region to be masked includes at least one pixel point extending from an edge pixel point toward the foreground.
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: determining a target viewpoint from a plurality of original viewpoints, wherein the target viewpoint is a viewpoint which meets mapping conditions in the plurality of original viewpoints; acquiring a depth map of a target viewpoint, and performing edge detection on the depth map of the target viewpoint to acquire edge pixel points of the depth map of the target viewpoint; determining a pixel area to be shielded based on edge pixel points of the depth map of the target viewpoint, wherein the pixel area to be shielded comprises at least one pixel point facing the foreground direction by taking the edge pixel points as a reference; in the process of carrying out virtual viewpoint mapping on the depth maps of a plurality of original viewpoints, selecting a pixel area to be shielded in the depth map of a target viewpoint to carry out image mapping.
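A target viewpoint "meeting the mapping condition" is, in one embodiment (cf. claim 2), an original viewpoint whose distance from the virtual viewpoint lies within a preset range. A minimal sketch under that assumption; the function name, coordinate convention, and Euclidean metric are illustrative choices:

```python
import math

def select_target_viewpoints(original_positions, virtual_position, max_dist):
    """Return indices of original viewpoints whose camera position lies
    within max_dist of the virtual viewpoint position."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return [i for i, p in enumerate(original_positions)
            if dist(p, virtual_position) <= max_dist]
```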
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: in the live broadcasting process, acquiring depth maps of a plurality of original viewpoints on a live broadcasting picture; performing edge detection on the depth map of each original viewpoint to obtain edge pixel points of each depth map; determining a pixel area to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces to the foreground direction by taking the edge pixel points as a reference; in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on pixel regions to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints.
As an alternative example, the processor may invoke the information stored in the memory and the application program via the transmission means to perform the following steps: the method comprises the steps of obtaining depth maps of a plurality of original viewpoints, selecting an image of a preset region in the depth map of each original viewpoint, and obtaining a pixel region to be shielded in the depth map, wherein the pixel region to be shielded comprises at least one pixel point which takes an edge pixel point as a reference and faces to the foreground direction; in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on pixel regions to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints.
Optionally, the processor may further execute the program code of the following steps: and determining that edge pixel points in the depth map form a preset area by carrying out edge detection on the depth map of the original viewpoint.
The embodiment of the invention provides a depth map reconstruction scheme. Depth maps of a plurality of original viewpoints are acquired, and edge detection is performed on the depth map of each original viewpoint to obtain the edge pixel points of each depth map; a pixel region to be shielded is determined in the depth map based on the edge pixel points of each depth map, the pixel region to be shielded comprising at least one pixel point located in the foreground direction with the edge pixel points as a reference; and in the process of performing virtual viewpoint mapping on the depth maps, image mapping is performed on the pixel region to be shielded in the depth map of a target viewpoint, the target viewpoint being a viewpoint, among the plurality of original viewpoints, that meets the mapping condition. Because mapping of the pixel regions to be shielded is forbidden once the edge pixel points of the depth maps have been obtained, virtual viewpoint reconstruction based on the depth maps remains robust against compression loss and the interpolation quality of foreground edges is improved. This solves the technical problem that originally vacant (disocclusion) regions acquire mapped foreground textures during virtual viewpoint reconstruction of the depth map, achieving the effect that such regions no longer receive foreground mapping textures.
It can be understood by those skilled in the art that the structure shown in fig. 13 is only illustrative; the computer terminal may also be a terminal device such as a smartphone (e.g., an Android or iOS phone), a tablet computer, a palmtop computer, or a Mobile Internet Device (MID). Fig. 13 does not limit the structure of the computer terminal. For example, the computer terminal A may include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in fig. 13, or have a different configuration from that shown in fig. 13.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Example 6
Embodiments of the present invention also provide a computer-readable storage medium. Optionally, in this embodiment, the computer-readable storage medium may be used to store the program code for executing the depth map reconstruction method provided in the first embodiment.
Optionally, in this embodiment, the computer-readable storage medium may be located in any one of a group of computer terminals in a computer network, or in any one of a group of mobile terminals.
Optionally, in this embodiment, the computer readable storage medium is configured to store program code for performing the following steps: acquiring depth maps of a plurality of original viewpoints, carrying out edge detection on the depth map of each original viewpoint, and acquiring edge pixel points of each depth map; determining a pixel area to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces to the foreground direction by taking the edge pixel points as a reference; in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on a pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: and taking the edge pixel points as reference points, extracting a preset number of pixel points in the direction towards the foreground, and writing the pixel points into a pixel point set to be shielded, wherein the pixel area to be shielded comprises the pixel point set.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: and marking the pixel points in the pixel area to be shielded, wherein the marked pixel points represent that the pixel points do not execute the mapping operation of the virtual viewpoint.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: and determining the quantity of the pixel points extracted in the direction towards the foreground based on the loss degree of the depth map, wherein the loss degree of the depth map is in direct proportion to the quantity of the extracted pixel points.
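The proportionality between the degree of loss of the depth map and the number of pixels extracted toward the foreground might be realized as a simple linear rule. The base width, the scale factor, and the normalized loss measure below are illustrative assumptions, not taken from the disclosure:

```python
def shield_width(loss_degree, base=1, scale=4):
    """More compression loss -> shield more pixels on the foreground side.

    loss_degree is assumed normalized to [0, 1] (0 = lossless).
    """
    return base + round(scale * min(max(loss_degree, 0.0), 1.0))
```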
Optionally, the computer readable storage medium is further arranged to store program code for performing the following steps: determining the position of the virtual viewpoint to be mapped before edge detection is performed on the depth maps of the original viewpoints; marking a first set of original viewpoints, whose distance from the virtual viewpoint position exceeds a predetermined threshold, as original viewpoints that need to perform edge detection; and marking a second set of original viewpoints, whose distance from the virtual viewpoint position does not exceed the predetermined threshold, as original viewpoints that do not need to perform edge detection.
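The partition just described, running edge detection only for original viewpoints far from the virtual viewpoint, can be sketched as follows; the Euclidean distance and the strict/non-strict threshold semantics are assumptions:

```python
import math

def partition_for_edge_detection(original_positions, virtual_position, threshold):
    """Split original viewpoints into those that need edge detection
    (distance > threshold) and those that can skip it."""
    def dist(p):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, virtual_position)))
    need = [i for i, p in enumerate(original_positions) if dist(p) > threshold]
    skip = [i for i, p in enumerate(original_positions) if dist(p) <= threshold]
    return need, skip
```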
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: in the process of carrying out virtual viewpoint mapping on the depth map, mapping pixel points except for a pixel region to be shielded, and taking values from an image of a first original camera based on the newly mapped depth map to generate a first reconstructed image.
Optionally, the computer readable storage medium is further arranged to store program code for performing the following steps: after the first reconstructed image is generated, performing image reconstruction of the depth map at other virtual viewing angles to obtain a plurality of reconstructed images, wherein the plurality of virtual viewing angles are determined based on cameras arranged at different viewing-angle positions; and fusing the first reconstructed image with the plurality of reconstructed images to generate a reconstruction result of the depth map.
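The reconstruction steps, namely mapping the non-shielded pixels of the depth map to the virtual view, sampling colours from the original camera image via the newly mapped depth, and fusing several such reconstructions, can be sketched with a toy one-dimensional horizontal warp. Everything here (the disparity model, z-buffering, averaging fusion, grayscale images) is an illustrative simplification of depth-image-based rendering, not the patented procedure itself:

```python
import numpy as np

def warp_and_sample(color, depth, disp_scale, mask=None):
    """Forward-warp depth horizontally by disp_scale / depth (nearer pixels
    shift more), skipping shielded pixels, then sample the source colours.
    A z-buffer keeps the nearest surface when two pixels land together."""
    h, w = depth.shape
    zbuf = np.full((h, w), np.inf)
    src = np.full((h, w), -1, dtype=int)
    for y in range(h):
        for x in range(w):
            if mask is not None and mask[y, x]:
                continue  # pixel region to be shielded: no mapping
            d = depth[y, x]
            x2 = x + int(round(disp_scale / d))
            if 0 <= x2 < w and d < zbuf[y, x2]:
                zbuf[y, x2] = d
                src[y, x2] = x
    img = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            if src[y, x] >= 0:
                img[y, x] = color[y, src[y, x]]
    return img, src >= 0

def fuse(images, valids):
    """Average the per-viewpoint reconstructions wherever they are defined."""
    acc = np.zeros(images[0].shape, dtype=float)
    cnt = np.zeros(images[0].shape, dtype=float)
    for img, v in zip(images, valids):
        acc[v] += img[v]
        cnt[v] += 1.0
    return np.where(cnt > 0, acc / np.maximum(cnt, 1.0), 0.0)
```

In this toy model, pixels that receive no source remain invalid, which is exactly the disocclusion ("cavity") region the method tries to keep free of wrongly mapped foreground texture.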
As an alternative example, in the present embodiment, the computer readable storage medium is configured to store program codes for performing the steps of: acquiring depth maps of a plurality of original viewpoints, carrying out edge detection on the depth map of each original viewpoint, and acquiring edge pixel points of each depth map; in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on a pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: and determining a pixel area to be shielded in the depth map based on the edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces the foreground direction by taking the edge pixel points as the reference.
As an alternative example, in the present embodiment, the computer readable storage medium is configured to store program code for performing the following steps: displaying depth maps of a plurality of original viewpoints; displaying edge pixel points extracted after edge detection is performed on each depth map; and displaying an image result obtained after image mapping is performed on a pixel region to be shielded in the depth map of a target viewpoint in the process of performing virtual viewpoint mapping on the depth maps, wherein the target viewpoint is a viewpoint that meets the mapping condition among the plurality of original viewpoints, and the pixel region to be shielded comprises at least one pixel point located in the foreground direction with the edge pixel points as a reference.
As an alternative example, in the present embodiment, the computer readable storage medium is configured to store program codes for performing the steps of: determining a target viewpoint from a plurality of original viewpoints, wherein the target viewpoint is a viewpoint which meets mapping conditions in the plurality of original viewpoints; acquiring a depth map of a target viewpoint, and performing edge detection on the depth map of the target viewpoint to acquire edge pixel points of the depth map of the target viewpoint; determining a pixel area to be shielded based on edge pixel points of the depth map of the target viewpoint, wherein the pixel area to be shielded comprises at least one pixel point facing the foreground direction by taking the edge pixel points as a reference; in the process of carrying out virtual viewpoint mapping on the depth maps of a plurality of original viewpoints, selecting a pixel area to be shielded in the depth map of a target viewpoint to carry out image mapping.
As an alternative example, in the present embodiment, the computer readable storage medium is configured to store program codes for performing the steps of: in the live broadcasting process, acquiring depth maps of a plurality of original viewpoints on a live broadcasting picture; performing edge detection on the depth map of each original viewpoint to obtain edge pixel points of each depth map; determining a pixel area to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces to the foreground direction by taking the edge pixel points as a reference; in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on pixel regions to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints.
As an alternative example, in the present embodiment, the computer readable storage medium is configured to store program codes for performing the steps of: the method comprises the steps of obtaining depth maps of a plurality of original viewpoints, selecting an image of a preset region in the depth map of each original viewpoint, and obtaining a pixel region to be shielded in the depth map, wherein the pixel region to be shielded comprises at least one pixel point which takes an edge pixel point as a reference and faces to the foreground direction; in the process of carrying out virtual viewpoint mapping on the depth map, image mapping is carried out on pixel regions to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints.
Optionally, the computer readable storage medium is further arranged to store program code for performing the steps of: and determining that edge pixel points in the depth map form a preset area by carrying out edge detection on the depth map of the original viewpoint.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, a division of a unit is merely a division of a logic function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that it is obvious to those skilled in the art that various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be considered as the protection scope of the present invention.

Claims (22)

1. A method of reconstruction of a depth map, comprising:
acquiring depth maps of a plurality of original viewpoints, and performing edge detection on the depth map of each original viewpoint to acquire edge pixel points of each depth map;
determining a pixel region to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel region to be shielded comprises at least one pixel point facing to the foreground direction by taking the edge pixel points as reference;
and in the process of carrying out virtual viewpoint mapping on the depth map, carrying out image mapping on the pixel area to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in the plurality of original viewpoints.
2. The method of claim 1, wherein the target viewpoint is: a viewpoint, among the plurality of original viewpoints, whose distance from the virtual viewpoint is within a preset range.
3. The method of claim 1, wherein pixel points within the pixel region to be masked are marked, and wherein, among the depth maps, the mapping operation is performed on the marked pixel points only in the depth map of the target viewpoint.
4. The method of claim 3, wherein no mapping operation is performed for the marked pixel points for original viewpoints other than the target viewpoint.
5. The method of claim 1, wherein the depth maps of all original viewpoints are used for reconstruction when image mapping is performed on regions of the depth map other than the pixel region to be masked.
6. The method according to claim 1, wherein a predetermined number of pixel points are extracted in a direction toward the foreground with the edge pixel points as reference points and are written into a set of pixel points to be masked, wherein the pixel region to be masked comprises the set of pixel points.
7. The method of claim 1, wherein the number of extracted pixel points in the direction towards the foreground is determined based on a degree of loss of the depth map, wherein the degree of loss of the depth map is proportional to the number of extracted pixel points.
8. The method of any of claims 1 to 7, wherein prior to edge detection of the depth map of the original viewpoint, the method further comprises:
determining a position of the virtual viewpoint to be mapped;
marking a first original viewpoint set which is more than a preset threshold value away from the position of the virtual viewpoint as an original viewpoint which needs to execute the edge detection;
marking a second set of original viewpoints, whose distance from the virtual viewpoint position does not exceed the predetermined threshold, as original viewpoints not requiring the edge detection.
9. The method of claim 8, wherein in the process of performing virtual viewpoint mapping on the depth map, mapping pixel points except the pixel region to be shielded, and taking values from an image of a first original camera based on the newly mapped depth map to generate a first reconstructed image.
10. The method of claim 9, wherein after generating the first reconstructed image, the method further comprises:
performing image reconstruction of other virtual visual angles on the depth map to obtain a plurality of reconstructed images, wherein the plurality of virtual visual angles are determined based on cameras arranged at different visual angle positions;
and fusing the first reconstructed image and the plurality of reconstructed images to generate a reconstruction result of the depth map.
11. A method of reconstruction of a depth map, comprising:
acquiring depth maps of a plurality of original viewpoints, and performing edge detection on the depth map of each original viewpoint to acquire edge pixel points of each depth map;
and in the process of carrying out virtual viewpoint mapping on the depth map, carrying out image mapping on a pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in the plurality of original viewpoints.
12. The method of claim 11, wherein after performing edge detection on the depth map of each original viewpoint and obtaining the edge pixel points of each depth map, the method further comprises:
and determining a pixel area to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces to the foreground direction by taking the edge pixel points as the reference.
13. A method of reconstruction of a depth map, comprising:
displaying depth maps of a plurality of original viewpoints;
displaying edge pixel points extracted after edge detection is carried out on each depth map;
and displaying an image result obtained after image mapping is performed on a pixel region to be shielded in the depth map of a target viewpoint in the process of performing virtual viewpoint mapping on the depth map, wherein the target viewpoint is a viewpoint which meets mapping conditions in the plurality of original viewpoints, and the pixel region to be shielded comprises at least one pixel point which faces to the foreground direction by taking the edge pixel point as a reference.
14. A depth map processing method comprises the following steps:
determining a target viewpoint from a plurality of original viewpoints, wherein the target viewpoint is a viewpoint which meets mapping conditions in the plurality of original viewpoints;
acquiring a depth map of the target viewpoint, and performing edge detection on the depth map of the target viewpoint to acquire edge pixel points of the depth map of the target viewpoint;
determining a pixel area to be shielded based on edge pixel points of the depth map of the target viewpoint, wherein the pixel area to be shielded comprises at least one pixel point which faces to the foreground direction by taking the edge pixel points as a reference;
and in the process of performing virtual viewpoint mapping on the depth maps of the plurality of original viewpoints, selectively performing image mapping on the pixel region to be shielded in the depth map of the target viewpoint.
15. A method of reconstruction of a depth map, comprising:
in the live broadcasting process, acquiring depth maps of a plurality of original viewpoints on a live broadcasting picture;
performing edge detection on the depth map of each original viewpoint to obtain edge pixel points of each depth map;
determining a pixel region to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel region to be shielded comprises at least one pixel point facing to the foreground direction by taking the edge pixel points as reference;
and in the process of carrying out virtual viewpoint mapping on the depth map, carrying out image mapping on the pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints.
16. A method of reconstruction of a depth map, comprising:
the method comprises the steps of obtaining depth maps of a plurality of original viewpoints, selecting an image of a preset region in the depth map of each original viewpoint, and obtaining a pixel region to be shielded in the depth map, wherein the pixel region to be shielded comprises at least one pixel point which takes an edge pixel point as a reference and faces to the foreground direction;
and in the process of carrying out virtual viewpoint mapping on the depth map, carrying out image mapping on the pixel region to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in a plurality of original viewpoints.
17. The method of claim 16, wherein edge pixels in the depth map are determined to constitute the predetermined region by performing edge detection on the depth map of the original viewpoint.
18. A depth map reconstruction system, comprising:
the client is used for displaying the depth maps of a plurality of original viewpoints;
the cloud server, in communication with the client, is configured to acquire the depth maps of the plurality of original viewpoints, and, after performing edge detection on the depth map of each original viewpoint to generate edge pixel points, determine a pixel region to be shielded in the depth map, wherein the pixel region to be shielded comprises at least one pixel point located in the foreground direction with the edge pixel points as a reference;
the cloud server performs image mapping on the pixel region to be shielded in the depth map of a target viewpoint in the process of performing virtual viewpoint mapping on the depth map, and returns a reconstructed image generated based on a mapping result to the client, wherein the target viewpoint is a viewpoint which meets mapping conditions in the multiple original viewpoints.
19. An apparatus for reconstructing a depth map, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring depth maps of a plurality of original viewpoints, carrying out edge detection on the depth map of each original viewpoint and acquiring edge pixel points of each depth map;
the determining unit is used for determining a pixel area to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel area to be shielded comprises at least one pixel point which faces to the foreground direction by taking the edge pixel points as reference;
and the prohibiting unit is used for performing image mapping on the pixel region to be shielded only in the depth map of a target viewpoint in the process of performing virtual viewpoint mapping on the depth map, wherein the target viewpoint is a viewpoint that meets the mapping conditions among the plurality of original viewpoints.
20. A computer readable storage medium comprising a stored program, wherein the program when executed by a processor controls an apparatus in which the computer readable storage medium is located to perform the method of any of claims 1 to 17.
21. A processor for running a program, wherein the program when running performs the method of any one of claims 1 to 17.
22. A depth map reconstruction system, comprising:
a processor;
a memory coupled to the processor for providing instructions to the processor for processing the following processing steps: acquiring depth maps of a plurality of original viewpoints, and performing edge detection on the depth map of each original viewpoint to acquire edge pixel points of each depth map; determining a pixel region to be shielded in the depth map based on edge pixel points of each depth map, wherein the pixel region to be shielded comprises at least one pixel point facing to the foreground direction by taking the edge pixel points as reference; and in the process of carrying out virtual viewpoint mapping on the depth map, carrying out image mapping on the pixel area to be shielded in the depth map of a target viewpoint, wherein the target viewpoint is a viewpoint which meets mapping conditions in the plurality of original viewpoints.
CN202010857906.0A 2020-08-24 2020-08-24 Depth map reconstruction method, system, device, storage medium and processor Pending CN114092535A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010857906.0A CN114092535A (en) 2020-08-24 2020-08-24 Depth map reconstruction method, system, device, storage medium and processor


Publications (1)

Publication Number Publication Date
CN114092535A true CN114092535A (en) 2022-02-25

Family

ID=80295557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010857906.0A Pending CN114092535A (en) 2020-08-24 2020-08-24 Depth map reconstruction method, system, device, storage medium and processor

Country Status (1)

Country Link
CN (1) CN114092535A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174774A (en) * 2022-06-29 2022-10-11 上海飞机制造有限公司 Depth image compression method, device, equipment and storage medium
CN115174774B (en) * 2022-06-29 2024-01-26 上海飞机制造有限公司 Depth image compression method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination