CN113538694A - Plane reconstruction method and display device - Google Patents

Plane reconstruction method and display device

Info

Publication number
CN113538694A
Authority
CN
China
Prior art keywords
plane
points
reconstructed
distance
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110763333.XA
Other languages
Chinese (zh)
Inventor
王冉冉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202110763333.XA priority Critical patent/CN113538694A/en
Publication of CN113538694A publication Critical patent/CN113538694A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure relates to the field of VR/AR technology and provides a plane reconstruction method and a display device. A depth image captured by a depth camera is acquired, and three-dimensional point cloud data containing a plurality of 3D points is extracted from it. Two 3D points are selected from the plurality of 3D points, and their distance on the axis perpendicular to the plane to be reconstructed is determined. All of the 3D points are traversed in this way; a 3D point set for the plane to be reconstructed is generated from the determined distances, and the plane used for displaying the enhanced image is reconstructed from the 3D point set. Because the point set is generated from the height difference between pairs of 3D points on the axis perpendicular to the plane to be reconstructed, the calculation is simple and efficient, and the accuracy of the reconstructed plane is high.

Description

Plane reconstruction method and display device
Technical Field
The present disclosure relates to the field of Virtual Reality (VR) and Augmented Reality (AR) technologies, and in particular, to a plane reconstruction method and a display device.
Background
Virtual Reality (VR) technology uses computer simulation to generate an interactive, fully digital three-dimensional environment, giving users a sense of immersion. Augmented Reality (AR) technology fuses virtual information with the real world: by applying techniques such as multimedia, three-dimensional modeling, real-time tracking, intelligent interaction, and sensing, computer-generated virtual information such as text, images, three-dimensional models, music, and video is simulated and then overlaid onto the real world, so that the two kinds of information complement each other and the real world is "augmented".
VR/AR displays are widely used in industries such as education and training, fire drills, virtual driving, real estate, and marketing, bringing users an immersive visual experience. By superimposing scene special effects on video content through AR technology, a striking visual impact is presented; a video with personalized special effects can elicit an engaging physical response from the user, and the display exhibits high stability even in extreme environments.
Based on visual tracking and detection technology, the surrounding environment is understood from images captured by the camera, and the plane used for displaying the enhanced image is reconstructed, thereby realizing AR special effects.
At present, all planes are generally reconstructed from point cloud information, and each plane is then classified as horizontal or vertical according to the angle between its normal vector and the gravity direction. However, computing the angle between the normal vector and the gravity direction is computationally complex and time-consuming, and the accuracy of the reconstructed plane is low.
Disclosure of Invention
The embodiments of the present application provide a plane reconstruction method and a display device, which are used to reconstruct a plane quickly and accurately.
In a first aspect, an embodiment of the present application provides a plane reconstruction method, including:
acquiring an acquired depth image, and extracting three-dimensional point cloud data based on the depth image, wherein the three-dimensional point cloud data comprises a plurality of 3D points;
selecting two 3D points from the plurality of 3D points, and determining the distance between the two selected 3D points on an axis perpendicular to a plane to be reconstructed, wherein the distance between the two selected 3D points is smaller than a set distance threshold;
traversing all 3D points in the plurality of 3D points, and generating a 3D point set of the plane to be reconstructed according to each determined distance;
and reconstructing a plane to be reconstructed for displaying the enhanced image according to the 3D point set.
In a second aspect, an embodiment of the present application provides a display device, including a display, a memory, and a controller:
the display, connected with the controller, configured to display the enhanced image on the reconstructed plane;
the memory, coupled to the controller, configured to store computer program instructions;
the controller configured to perform the following operations in accordance with the computer program instructions:
acquiring an acquired depth image, and extracting three-dimensional point cloud data based on the depth image, wherein the three-dimensional point cloud data comprises a plurality of 3D points;
selecting two 3D points from the plurality of 3D points, and determining the distance between the two selected 3D points on an axis perpendicular to a plane to be reconstructed, wherein the distance between the two selected 3D points is smaller than a set distance threshold;
traversing all 3D points in the plurality of 3D points, and generating a 3D point set of the plane to be reconstructed according to each determined distance;
and reconstructing a plane to be reconstructed for displaying the enhanced image according to the 3D point set.
In a third aspect, the present application provides a computer-readable storage medium storing computer-executable instructions for causing a computer to execute a plane reconstruction method provided in an embodiment of the present application.
In the above embodiments of the present application, three-dimensional point cloud data containing a plurality of 3D points is extracted from a depth image captured by a camera. All of the 3D points are traversed: each time, two 3D points are selected from the plurality of 3D points, and their distance on the axis perpendicular to the plane to be reconstructed is determined. After the traversal, a 3D point set for the plane to be reconstructed is generated from the determined distances, and the plane used for displaying the enhanced image is reconstructed from that set. Because the point set is generated from the height difference between pairs of 3D points on the axis perpendicular to the plane to be reconstructed, there is no need to reconstruct every plane and then select the required plane by the angle between its normal vector and the gravity direction; the calculation is simple and efficient, and the reconstructed plane is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 illustrates a system architecture diagram provided by embodiments of the present application;
fig. 2 is a diagram illustrating a hardware structure of a display device provided in an embodiment of the present application;
FIG. 3 illustrates a planar reconstruction method provided by an embodiment of the present application;
fig. 4 illustrates a point cloud diagram of extracted three-dimensional point cloud data provided by an embodiment of the present application;
fig. 5 is a schematic diagram illustrating screening of a horizontal plane corresponding 3D point from a plurality of 3D points provided by an embodiment of the present application;
fig. 6 is a schematic diagram illustrating screening of a 3D point corresponding to a vertical plane from a plurality of 3D points according to an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating a 3D point corresponding to an inclined plane screened from a plurality of 3D points provided by an embodiment of the present application;
FIG. 8 is a diagram illustrating a planar effect of reconstruction provided by an embodiment of the present application;
fig. 9 is a functional block diagram schematically illustrating a display device according to an embodiment of the present application.
Detailed Description
To make the objects, embodiments, and advantages of the present application clearer, exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It should be understood that the described exemplary embodiments are only some, not all, of the embodiments of the present application.
All other embodiments obtained by a person skilled in the art from the exemplary embodiments described herein without inventive effort fall within the scope of the appended claims. In addition, while the disclosure herein is presented in terms of one or more exemplary examples, it should be appreciated that individual aspects of the disclosure may each constitute a complete embodiment.
The terms "first", "second", "third", and the like in the description and claims of this application and in the above-described drawings are used to distinguish between similar objects and are not necessarily intended to define a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances, such that the embodiments described herein can, for example, be implemented in sequences other than those illustrated or described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
Embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 schematically illustrates an application scenario provided in an embodiment of the present application. As shown in fig. 1, the display device has a camera for capturing depth images of the real environment. The embodiment of the present application does not limit the type of camera; it may be, for example, an RGBD camera or a binocular camera. The display device processes the depth image captured by the camera to obtain a 3D point set used for reconstructing a plane, reconstructs the plane based on the obtained 3D point set, and displays an enhanced image on the reconstructed plane. As shown in fig. 1, a dinosaur is displayed on the reconstructed plane, and a "flame" special effect is added to it, which enhances the realism and immersion of the picture, bringing the user a strong and shocking visual impact and eliciting physical reactions, for example instinctively dodging the flame effect.
It should be noted that the reconstructed horizontal plane in fig. 1 is only an example, and the present application is applicable to reconstructing various planes such as a vertical plane, an inclined plane, and the like, and the shape of the reconstructed plane is not limited, and may be a regular shape such as a rectangle, a circle, and the like, or an irregular polygon.
Taking an intelligent electronic device as an example of the display device, fig. 2 exemplarily shows a structure diagram of the display device provided in the embodiment of the present application. As shown in fig. 2, the first terminal 100 includes at least one of a controller 250, a tuner demodulator 210, a communicator 220, a detector 230, an input/output interface 255, a display 275, an audio output interface 285, a memory 260, a power supply 290, a user interface 265, and an external device interface 240.
In some embodiments, the display 275 includes a display screen assembly for presenting pictures and a driver assembly for driving the display of images; it receives image signals output from the first processor and displays video content, images, and a menu manipulation interface.
In some embodiments, display 275 is a projection display and may also include a projection device and a projection screen.
In some embodiments, communicator 220 is a component for communicating with external devices or external servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi chip, a bluetooth communication protocol chip, a wired ethernet communication protocol chip, and other network communication protocol chips or near field communication protocol chips, and an infrared receiver.
In some embodiments, the first terminal 100 may establish control signal and data signal transmission and reception with an external device through the communicator 220.
In some embodiments, the user interface 265 may be used to receive control signals for external devices.
In some embodiments, the detector 230 includes a light receiver, an image collector, a temperature sensor, a sound collector, etc. for collecting signals of an external environment or interaction with the outside.
In some embodiments, the input/output interface 255 is configured to allow data transfer between the controller 250 and external other devices or other controllers 250. Such as receiving video signal data and audio signal data of an external device, or command instruction data, etc.
In some embodiments, the external device interface 240 may include, but is not limited to, the following: the interface can be any one or more of a high-definition multimedia interface (HDMI), an analog or data high-definition component input interface, a composite video input interface, a USB input interface, an RGB port and the like. The plurality of interfaces may form a composite input/output interface.
In some embodiments, the tuner demodulator 210 receives broadcast television signals via wired or wireless reception, performs modulation and demodulation processing such as amplification, mixing, and resonance, and demodulates, from the plurality of received broadcast television signals, the audio-video signals carried on the television channel frequency selected by the user, as well as EPG data signals.
In some embodiments, the frequency points demodulated by the tuner demodulator 210 are controlled by the controller 250, and the controller 250 can send out control signals according to user selection, so that the modem responds to the television signal frequency selected by the user and modulates and demodulates the television signal carried by the frequency.
In some embodiments, the controller 250 controls the operation of the first terminal and responds to user operations through various software control programs stored in the memory. The controller 250 may control the overall operation of the first terminal 100. For example: in response to receiving a user command for selecting a UI object to be displayed on the display 275, the controller 250 may perform an operation related to the object selected by the user command.
As shown in fig. 2, the controller 250 includes at least one of a Random Access Memory 251 (RAM), a Read-Only Memory 252 (ROM), a video processor 270, an audio processor 280, other processors 253 (e.g., a Graphics Processing Unit (GPU)), a Central Processing Unit 254 (CPU), a Communication Interface, and a Communication Bus 256 (Bus) that connects the respective components.
In some embodiments, RAM251 is used to store temporary data for the operating system or other programs that are running.
In some embodiments, ROM 252 is used to store instructions for various system boots.
In some embodiments, the ROM 252 is used to store a Basic Input Output System (BIOS). The system is used for completing power-on self-test of the system, initialization of each functional module in the system, a driver of basic input/output of the system and booting an operating system.
In some embodiments, when the power-on signal is received, the first terminal 100 starts to power on, the CPU executes the system boot instruction in the ROM 252, and copies the temporary data of the operating system stored in the memory into the RAM251 so as to start or run the operating system. After the start of the operating system is completed, the CPU copies the temporary data of the various application programs in the memory to the RAM251, and then, the various application programs are started or run.
In some embodiments, CPU processor 254 is used to execute operating system and application program instructions stored in memory. And executing various application programs, data and contents according to various interactive instructions received from the outside so as to finally display and play various audio and video contents.
In some example embodiments, the CPU processor 254 may comprise a plurality of processors, including a main processor and one or more sub-processors. The main processor performs operations of the first terminal 100 in the pre-power-up mode and/or displays the screen in the normal mode. The one or more sub-processors perform operations in standby mode and the like.
In some embodiments, the graphics processor 253 is used to generate various graphics objects, such as: icons, operation menus, user input instruction display graphics, and the like. The display device comprises an arithmetic unit which carries out operation by receiving various interactive instructions input by a user and displays various objects according to display attributes. And the system comprises a renderer for rendering various objects obtained based on the arithmetic unit, wherein the rendered objects are used for being displayed on a display.
In some embodiments, the video processor 270 is configured to receive an external video signal, and perform video processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, image synthesis, and the like according to a standard codec protocol of the input signal, so as to obtain a signal that can be directly displayed or played on the first terminal 100.
In some embodiments, video processor 270 includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like.
In some embodiments, the audio processor 280 is configured to receive an external audio signal, decompress and decode the received audio signal according to a standard codec protocol of the input signal, and perform noise reduction, digital-to-analog conversion, and amplification processes to obtain an audio signal that can be played in a speaker.
The power supply 290 provides power supply support for the first terminal 100 from the power input from the external power source under the control of the controller 250. The power supply 290 may include a built-in power supply circuit installed inside the first terminal 100, or may be a power supply interface installed outside the first terminal 100 to provide an external power supply in the first terminal 100.
A user interface 265 for receiving an input signal of a user and then transmitting the received user input signal to the controller 250. The user input signal may be a remote controller signal received through an infrared receiver, and various user control signals may be received through the network communication module.
The memory 260 includes a memory for storing various software modules for driving the first terminal 100. Such as: various software modules stored in the first memory, including: at least one of a basic module, a detection module, a communication module, a display control module, a browser module, and various service modules.
It should be noted that fig. 1-2 are only examples, and the display device in this application may be a terminal with video playing and interaction functions, such as a smart phone, a notebook computer, a desktop computer, a tablet, AR glasses, VR glasses, and the like.
Based on the scenario shown in fig. 1, fig. 3 exemplarily shows a flowchart of a plane reconstruction method provided in an embodiment of the present application, and as shown in fig. 3, the flowchart is executed by a display device and mainly includes the following steps:
s301: acquiring an acquired depth image, and extracting three-dimensional point cloud data based on the depth image, wherein the three-dimensional point cloud data comprises a plurality of 3D points.
In this step, the display device starts the camera, captures a depth image of the environment in which the display device is located, and extracts three-dimensional point cloud data from the captured depth image.
Fig. 4 illustrates a point cloud diagram of the extracted three-dimensional point cloud data provided in an embodiment of the present application. As shown in fig. 4, this is the three-dimensional point cloud data extracted from one frame of depth image, and it contains a plurality of 3D points.
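The patent does not give code for this extraction step. As a minimal sketch, assuming a standard pinhole camera model with hypothetical intrinsics fx, fy, cx, cy (real values would come from camera calibration), a depth image can be back-projected into 3D points as follows:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into 3D points using the
    pinhole camera model. fx, fy, cx, cy are assumed intrinsics, not
    values from the patent."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading
```

Each valid pixel then contributes one 3D point to the point cloud that the subsequent steps operate on.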
S302: selecting two 3D points from the plurality of 3D points, and determining the distance between the two selected 3D points on an axis perpendicular to a plane to be reconstructed, wherein the distance between the two selected 3D points is smaller than a set distance threshold value.
In this step, two 3D points whose distance is smaller than a set distance threshold are randomly selected from the plurality of 3D points, and the distance between the two selected 3D points on the axis perpendicular to the plane to be reconstructed is determined. In the embodiments of the present application, this distance is determined in different ways depending on the plane to be reconstructed.
In an optional implementation, when the plane to be reconstructed is a horizontal plane or a vertical plane, in performing S302 the coordinates of the two selected 3D points on a first axis parallel to the plane to be reconstructed are each set to a preset value, and their coordinates on a second axis parallel to the plane to be reconstructed are each reduced by a preset reduction ratio, yielding updated first coordinates of the two 3D points; the distance between the two 3D points on the axis perpendicular to the plane to be reconstructed is then determined from the updated first coordinates.
Taking the plane to be reconstructed as a horizontal plane as an example, as shown in fig. 5, the Y axis is perpendicular to the horizontal plane XOZ. Whether two 3D points correspond to the horizontal plane is determined by their height difference relative to the horizontal plane; that is, the 3D points corresponding to the horizontal plane are screened from the point cloud according to the distance between the two selected 3D points on the axis perpendicular to the plane to be reconstructed. In a specific implementation, two 3D points whose distance is smaller than the set distance threshold are randomly selected from the extracted 3D points and denoted A and B, where the coordinates of point A are (x1, y1, z1) and the coordinates of point B are (x2, y2, z2). Optionally, the two selected 3D points are the two closest 3D points. The X-axis coordinates x1 and x2 of the two points are set to 0, and their Z-axis coordinates are reduced by a factor of 1000, giving updated first coordinates (0, y1, z1/1000) for point A and (0, y2, z2/1000) for point B; because the reduced Z-axis coordinate is small, its influence on whether the two 3D points belong to the horizontal plane is negligible. From the first coordinates of A and B, the distance between the two points on the Y axis is determined, i.e., the height difference between A and B relative to the horizontal plane. The distance dy between A and B on the Y axis is dy = |y2 − y1|; dy is compared with a preset height difference ay. If dy < ay, the two points A and B lie on the horizontal plane; if dy ≥ ay, they do not.
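The horizontal-plane screening described above can be sketched as follows. The threshold ay and the reduction ratio are illustrative values, and the function assumes the two points have already been pre-selected as a nearby pair per the set distance threshold:

```python
def on_same_horizontal_plane(a, b, ay=0.02, shrink=1000.0):
    """Test whether two nearby 3D points a=(x1,y1,z1), b=(x2,y2,z2) lie
    on the same horizontal plane, following the described scheme: zero
    the X coordinates, shrink the Z coordinates by the preset reduction
    ratio, then compare the Y-axis distance against the height-difference
    threshold ay. The values of ay and shrink are assumptions."""
    a1 = (0.0, a[1], a[2] / shrink)  # updated first coordinate of A
    b1 = (0.0, b[1], b[2] / shrink)  # updated first coordinate of B
    dy = abs(b1[1] - a1[1])          # height difference on the Y axis
    return dy < ay
```

The vertical-plane case is symmetric: zero the X coordinates, shrink Y, and compare the Z-axis distance dz against az.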
Taking the plane to be reconstructed as a vertical plane as an example, as shown in fig. 6, the Z axis is perpendicular to the vertical plane XOY. Whether the two selected 3D points correspond to the vertical plane is determined by their height difference relative to the vertical plane; that is, the 3D points corresponding to the vertical plane are screened from the point cloud according to the distance between the two selected 3D points on the axis perpendicular to the plane to be reconstructed. In a specific implementation, two 3D points whose distance is smaller than the set distance threshold are randomly selected from the extracted 3D points and denoted C and D, where the coordinates of point C are (x3, y3, z3) and the coordinates of point D are (x4, y4, z4). Optionally, the two selected 3D points are the two closest 3D points. The X-axis coordinates x3 and x4 of the two points are set to 0, and their Y-axis coordinates are reduced by a factor of 1000, giving updated first coordinates (0, y3/1000, z3) for point C and (0, y4/1000, z4) for point D. From the first coordinates of C and D, the distance between the two points on the Z axis is determined, i.e., the height difference between C and D relative to the vertical plane. The distance dz between C and D on the Z axis is dz = |z4 − z3|; dz is compared with a preset height difference az. If dz < az, the two points C and D lie on the vertical plane; if dz ≥ az, they do not.
In another optional implementation, when the plane to be reconstructed is an inclined plane, in performing S302 the axis corresponding to the plane to be reconstructed is first determined from the inclination angles of the plane relative to the horizontal plane and the vertical plane. Specifically: if the inclination angle relative to the horizontal plane is greater than that relative to the vertical plane, the corresponding axis is the axis perpendicular to the vertical plane; if it is smaller, the corresponding axis is the axis perpendicular to the horizontal plane; and if the two inclination angles are equal, either the axis perpendicular to the horizontal plane or the axis perpendicular to the vertical plane may be chosen as the corresponding axis.
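The axis-selection rule above can be sketched as a small helper; the function name and the string return values are illustrative, not from the patent:

```python
def axis_for_inclined_plane(angle_to_horizontal, angle_to_vertical):
    """Choose the comparison axis for an inclined plane per the stated
    rule: a plane tilted further from the horizontal uses the axis
    perpendicular to the vertical plane (Z); otherwise, including ties,
    the axis perpendicular to the horizontal plane (Y) is used here."""
    if angle_to_horizontal > angle_to_vertical:
        return "Z"  # axis perpendicular to the vertical plane
    return "Y"      # axis perpendicular to the horizontal plane (ties: either is valid)
```

For the equal-angle case the patent allows either axis; this sketch arbitrarily picks Y.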
Further, the coordinates of the two selected 3D points on the two coordinate axes perpendicular to the determined axis are each reduced by the preset reduction ratio, yielding updated second coordinates of the two 3D points, and the distance between the two points on the determined axis is computed from the updated second coordinates.
Taking the inclined plane defined by a diagonal of a cube as the plane to be reconstructed as an example, as shown in fig. 7, the inclined plane has the same inclination angle relative to the horizontal plane XOZ and the vertical plane XOY, so either the Y axis or the Z axis may serve as its corresponding axis. Assuming the Y axis is chosen, whether the two selected 3D points correspond to the inclined plane is determined by their height difference relative to the inclined plane; that is, the 3D points corresponding to the inclined plane are screened from the point cloud according to the distance between the two selected 3D points on the Y axis. In a specific implementation, two 3D points whose distance is smaller than the set distance threshold are randomly selected from the extracted 3D points and denoted E and F, where the coordinates of point E are (x5, y5, z5) and the coordinates of point F are (x6, y6, z6). The X-axis and Z-axis coordinates of the two points are each reduced by a factor of 1000, giving updated second coordinates (x5/1000, y5, z5/1000) for point E and (x6/1000, y6, z6/1000) for point F. From the second coordinates of E and F, the distance dw between the two points on the Y axis is determined. dw is compared with a preset height difference aw: if dw < aw, the two points E and F lie on the inclined plane; if dw ≥ aw, they do not.
The values of ay, az, and aw may be set according to experimental data, and may be the same as or different from one another.
S303: and traversing all 3D points in the plurality of 3D points, and generating a 3D point set of the plane to be reconstructed according to each determined distance.
In this step, after the extracted plurality of 3D points are traversed according to S302, a plurality of distances are obtained. For each of these distances, the following operation is performed: the distance is compared with a preset height difference value, and if the distance is smaller than the preset height difference value, the two 3D points corresponding to that distance are points on the plane to be reconstructed and are therefore taken as two points in the 3D point set of the plane to be reconstructed.
For example, assuming the plane to be reconstructed is a horizontal plane: if the distance between 3D points A and B on the Y axis is less than ay, points A and B are retained; if the distance between 3D points C and D on the Y axis is greater than or equal to ay, points C and D are culled. This continues until all 3D points have been traversed, yielding the set of 3D points corresponding to the horizontal plane.
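The pairwise traversal of S302 and S303 for a horizontal plane might look like the following sketch. The random pairing, the function name, and the omission of the set-distance-threshold pre-check are simplifying assumptions for illustration.

```python
import math
import random

def screen_horizontal_plane_points(points, ay):
    """Traverse 3D points (x, y, z) in pairs and keep those whose Y-axis
    distance (height difference for a horizontal plane) is below ay.
    Simplified sketch: pairs are drawn at random, and the original
    set-distance-threshold pair selection is omitted."""
    kept = []
    pool = list(points)
    random.shuffle(pool)
    while len(pool) >= 2:
        a, b = pool.pop(), pool.pop()
        if abs(a[1] - b[1]) < ay:  # distance on the Y axis
            kept.append(a)
            kept.append(b)
    if pool and kept:  # odd count: judge the leftover by its nearest kept point
        last = pool.pop()
        nearest = min(kept, key=lambda p: math.dist(p, last))
        if abs(nearest[1] - last[1]) < ay:
            kept.append(last)
    return kept
```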
When the plurality of 3D points are traversed in pairs and their number is odd, the plane to which the single remaining 3D point belongs may be determined by pairing it with its nearest 3D point.
S304: and reconstructing a plane to be reconstructed for displaying the enhanced image according to the 3D point set.
In this step, the plane to be reconstructed is reconstructed from its corresponding 3D point set; no other planes need to be reconstructed, which improves both the efficiency and the accuracy of plane reconstruction. A plane to be reconstructed may contain a plurality of planes of different shapes. In S304, clustering is performed according to the distances between the 3D points in the 3D point set to obtain the plane to be reconstructed in at least one shape.
Taking the plane to be reconstructed as a horizontal plane as an example, as shown in fig. 8, after clustering is performed according to the distance between the 3D points in the 3D point set corresponding to the horizontal plane, 4 horizontal planes with different shapes are obtained and are respectively marked as a plane 1, a plane 2, a plane 3, and a plane 4.
After the plane to be reconstructed is reconstructed, the enhanced video image can be displayed on the reconstructed plane, enabling functions such as enlarging, shrinking, effect enhancement, and interactive play of the displayed video image.
The embodiments of the present application do not limit the clustering algorithm, which includes but is not limited to K-means clustering, mean-shift clustering, density-based clustering (DBSCAN), and Gaussian Mixture Model (GMM) clustering.
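As one possible realization of the clustering step (the embodiment does not fix the algorithm), a minimal connected-components grouping by point-to-point distance could look like the sketch below; the eps parameter and function name are assumptions for illustration, not part of the embodiment.

```python
import math

def cluster_plane_points(points, eps):
    """Group the 3D points of one plane into shape clusters: two points fall
    in the same cluster if they are within eps of each other, directly or
    through a chain of such points. Minimal sketch, not a tuned algorithm."""
    clusters = []
    unvisited = list(points)
    while unvisited:
        seed = unvisited.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            p = frontier.pop()
            # collect unvisited points within eps of the current point
            near = [q for q in unvisited if math.dist(p, q) <= eps]
            for q in near:
                unvisited.remove(q)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(cluster)
    return clusters
```

Applied to the horizontal-plane example of fig. 8, such a grouping would separate the point set into the four differently shaped planes.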
In the above embodiments of the application, the distance between any two 3D points on the axis perpendicular to the plane to be reconstructed, that is, the height difference of the two 3D points relative to the plane to be reconstructed, is determined in order to screen the 3D point set corresponding to the plane to be reconstructed out of the plurality of 3D points; the points in the 3D point set are then clustered according to their distances to obtain the plane to be reconstructed in at least one shape, on which an AR special effect can be displayed. The method quickly reconstructs only the required horizontal, vertical, or inclined plane, instead of reconstructing all planes from all 3D points and then screening out the required planes, so it is efficient; and because the 3D point set used for reconstruction contains only the 3D points corresponding to the plane to be reconstructed, the reconstructed plane is more accurate. In AR/VR and similar environments, the enhanced video image can be displayed on the reconstructed plane, bringing strong visual impact to the user and improving the user's sense of immersion and engagement.
Based on the same technical concept, embodiments of the present application provide a display device, which can implement the plane reconstruction method shown in fig. 3 and obtain the same technical effect.
Referring to fig. 9, the display device includes an obtaining module 901, a distance determining module 902, a filtering module 903, and a reconstructing module 904;
an obtaining module 901, configured to obtain an acquired depth image, and extract three-dimensional point cloud data based on the depth image, where the three-dimensional point cloud data includes a plurality of 3D points;
a distance determining module 902, configured to select two 3D points from the multiple 3D points, and determine a distance between the two selected 3D points on an axis perpendicular to the plane to be reconstructed, where the distance between the two selected 3D points is smaller than a set distance threshold;
the screening module 903 is configured to traverse all 3D points in the plurality of 3D points, and generate a 3D point set of a plane to be reconstructed according to each determined distance;
and a reconstruction module 904, configured to reconstruct a plane to be reconstructed for displaying the enhanced image according to the 3D point set.
Optionally, the plane to be reconstructed includes a horizontal plane or a vertical plane, and the distance determining module 902 is specifically configured to:
respectively setting the coordinates of the two selected 3D points on a first axis parallel to a plane to be reconstructed as preset values, and respectively reducing the coordinates of the two 3D points on a second axis parallel to the plane to be reconstructed according to a preset reduction ratio to obtain updated first coordinates of the two 3D points;
and determining the distance between the two 3D points on the axis perpendicular to the plane to be reconstructed according to the updated first coordinate.
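For the horizontal-plane or vertical-plane case just described, the first-coordinate update can be sketched as follows. The choice of X as the first parallel axis and Z as the second, the preset value 0, and the reduction factor 1000 are assumed example values, not values fixed by the embodiment.

```python
import math

def y_axis_distance_horizontal(a, b, preset=0.0, reduction=1000.0):
    """Sketch of the horizontal-plane case: set the coordinates of two 3D
    points on the first axis parallel to the plane (here X) to a preset
    value and reduce the coordinates on the second parallel axis (here Z),
    so the distance between the updated first coordinates is dominated by
    the component on the Y axis perpendicular to the plane."""
    a1 = (preset, a[1], a[2] / reduction)
    b1 = (preset, b[1], b[2] / reduction)
    return math.dist(a1, b1)
```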
Optionally, the plane to be reconstructed includes an inclined plane, and the distance determining module 902 is specifically configured to:
determining an axis corresponding to the plane to be reconstructed according to the inclination angles of the plane to be reconstructed relative to the horizontal plane and the vertical plane;
respectively reducing the coordinates of the two selected 3D points on the two coordinate axes perpendicular to the axis according to a preset reduction ratio to obtain updated second coordinates of the two 3D points;
and determining the distance between the two 3D points on the axis according to the updated second coordinate.
Optionally, the distance determining module 902 is specifically configured to:
if the inclination angle of the plane to be reconstructed relative to the horizontal plane is larger than the inclination angle of the plane to be reconstructed relative to the vertical plane, determining the axis corresponding to the plane to be reconstructed as the axis perpendicular to the vertical plane; or
if the inclination angle of the plane to be reconstructed relative to the horizontal plane is smaller than the inclination angle of the plane to be reconstructed relative to the vertical plane, determining the axis corresponding to the plane to be reconstructed as the axis perpendicular to the horizontal plane; or
if the inclination angle of the plane to be reconstructed relative to the horizontal plane is equal to the inclination angle of the plane to be reconstructed relative to the vertical plane, determining either the axis perpendicular to the horizontal plane or the axis perpendicular to the vertical plane as the axis corresponding to the plane to be reconstructed.
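The three axis-selection rules above can be condensed into a small helper. The labeling of Y as the axis perpendicular to the horizontal plane and Z as the axis perpendicular to the vertical plane, and the choice of Y in the equal-angle case, are assumptions for illustration (the embodiment allows either axis when the angles are equal).

```python
def axis_for_inclined_plane(angle_to_horizontal, angle_to_vertical):
    """Return the axis corresponding to an inclined plane given its
    inclination angles (in degrees) relative to the horizontal and vertical
    planes. 'Y' denotes the axis perpendicular to the horizontal plane and
    'Z' the axis perpendicular to the vertical plane (assumed labeling)."""
    if angle_to_horizontal > angle_to_vertical:
        return "Z"  # axis perpendicular to the vertical plane
    if angle_to_horizontal < angle_to_vertical:
        return "Y"  # axis perpendicular to the horizontal plane
    return "Y"      # equal angles: either axis may be chosen
```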
Optionally, the screening module 903 is specifically configured to:
for each distance, if the distance is smaller than the preset height difference value, taking the two 3D points corresponding to that distance as two points in the 3D point set of the plane to be reconstructed.
Optionally, the reconstruction module 904 is specifically configured to:
clustering according to the distances between the 3D points in the 3D point set to obtain the plane to be reconstructed in at least one shape.
Embodiments of the present application also provide a computer-readable storage medium for storing instructions that, when executed, may implement the methods of the foregoing embodiments.
The embodiments of the present application also provide a computer program product for storing a computer program, where the computer program is used to execute the method of the foregoing embodiments.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (10)

1. A planar reconstruction method, comprising:
acquiring an acquired depth image, and extracting three-dimensional point cloud data based on the depth image, wherein the three-dimensional point cloud data comprises a plurality of 3D points;
selecting two 3D points from the plurality of 3D points, and determining the distance between the two selected 3D points on an axis perpendicular to a plane to be reconstructed, wherein the distance between the two selected 3D points is smaller than a set distance threshold;
traversing all 3D points in the plurality of 3D points, and generating a 3D point set of the plane to be reconstructed according to each determined distance;
and reconstructing a plane to be reconstructed for displaying the enhanced image according to the 3D point set.
2. The method according to claim 1, wherein the plane to be reconstructed comprises a horizontal plane or a vertical plane, and the determining the distance between the two selected 3D points on the axis perpendicular to the plane to be reconstructed comprises:
respectively setting the coordinates of the two selected 3D points on a first axis parallel to a plane to be reconstructed as preset values, and respectively reducing the coordinates of the two 3D points on a second axis parallel to the plane to be reconstructed according to a preset reduction ratio to obtain updated first coordinates of the two 3D points;
and determining the distance between the two 3D points on the axis perpendicular to the plane to be reconstructed according to the updated first coordinate.
3. The method of claim 1, wherein the plane to be reconstructed includes an inclined plane, and the determining the distance between the two selected 3D points on the axis perpendicular to the plane to be reconstructed includes:
determining an axis corresponding to the plane to be reconstructed according to the inclination angles of the plane to be reconstructed relative to a horizontal plane and a vertical plane;
respectively reducing the coordinates of the two selected 3D points on two coordinate axes perpendicular to the axis according to a preset reduction ratio to obtain updated second coordinates of the two 3D points;
determining the distance between the two 3D points on the axis according to the updated second coordinate.
4. The method of claim 3, wherein determining the axis corresponding to the plane to be reconstructed according to the inclination angle of the plane to be reconstructed relative to a horizontal plane and a vertical plane comprises:
if the inclination angle of the plane to be reconstructed relative to the horizontal plane is larger than the inclination angle of the plane to be reconstructed relative to the vertical plane, determining the axis corresponding to the plane to be reconstructed as the axis perpendicular to the vertical plane; or
if the inclination angle of the plane to be reconstructed relative to the horizontal plane is smaller than the inclination angle of the plane to be reconstructed relative to the vertical plane, determining the axis corresponding to the plane to be reconstructed as the axis perpendicular to the horizontal plane; or
if the inclination angle of the plane to be reconstructed relative to the horizontal plane is equal to the inclination angle of the plane to be reconstructed relative to the vertical plane, determining either the axis perpendicular to the horizontal plane or the axis perpendicular to the vertical plane as the axis corresponding to the plane to be reconstructed.
5. The method of any one of claims 1-4, wherein said traversing all 3D points of the plurality of 3D points, generating the set of 3D points of the plane to be reconstructed from the determined respective distances, comprises:
for each distance, if the distance is smaller than a preset height difference value, taking the two 3D points corresponding to that distance as two points in the 3D point set of the plane to be reconstructed.
6. The method of any one of claims 1-4, wherein said reconstructing the plane to be reconstructed from the set of 3D points comprises:
clustering according to the distances between the 3D points in the 3D point set to obtain the plane to be reconstructed in at least one shape.
7. A display device, comprising a display, a memory, a controller:
the display, connected with the controller, configured to display the enhanced image on the reconstructed plane;
the memory, coupled to the controller, configured to store computer program instructions;
the controller configured to perform the following operations in accordance with the computer program instructions:
acquiring an acquired depth image, and extracting three-dimensional point cloud data based on the depth image, wherein the three-dimensional point cloud data comprises a plurality of 3D points;
selecting two 3D points from the plurality of 3D points, and determining the distance between the two selected 3D points on an axis perpendicular to a plane to be reconstructed, wherein the distance between the two selected 3D points is smaller than a set distance threshold;
traversing all 3D points in the plurality of 3D points, and generating a 3D point set of the plane to be reconstructed according to each determined distance;
and reconstructing a plane to be reconstructed for displaying the enhanced image according to the 3D point set.
8. The display device of claim 7, wherein the plane to be reconstructed comprises a horizontal plane or a vertical plane, and the controller is configured to:
respectively setting the coordinates of the two selected 3D points on a first axis parallel to a plane to be reconstructed as preset values, and respectively reducing the coordinates of the two 3D points on a second axis parallel to the plane to be reconstructed according to a preset reduction ratio to obtain updated first coordinates of the two 3D points;
and determining the distance between the two 3D points on the axis perpendicular to the plane to be reconstructed according to the updated first coordinate.
9. The display device of claim 7, wherein the plane to be reconstructed includes an inclined plane, the controller is configured to:
determining an axis corresponding to the plane to be reconstructed according to the inclination angles of the plane to be reconstructed relative to a horizontal plane and a vertical plane;
respectively reducing the coordinates of the two selected 3D points on two coordinate axes perpendicular to the axis according to a preset reduction ratio to obtain updated second coordinates of the two 3D points;
determining the distance between the two 3D points on the axis according to the updated second coordinate.
10. The display device according to any of claims 7-9, wherein the controller reconstructs the plane to be reconstructed from the set of 3D points, in particular being configured to:
clustering according to the distances between the 3D points in the 3D point set to obtain the plane to be reconstructed in at least one shape.
CN202110763333.XA 2021-07-06 2021-07-06 Plane reconstruction method and display device Pending CN113538694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110763333.XA CN113538694A (en) 2021-07-06 2021-07-06 Plane reconstruction method and display device

Publications (1)

Publication Number Publication Date
CN113538694A true CN113538694A (en) 2021-10-22

Family

ID=78126873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110763333.XA Pending CN113538694A (en) 2021-07-06 2021-07-06 Plane reconstruction method and display device

Country Status (1)

Country Link
CN (1) CN113538694A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570507A (en) * 2016-10-26 2017-04-19 北京航空航天大学 Multi-angle consistent plane detection and analysis method for monocular video scene three dimensional structure
US20180089887A1 (en) * 2012-08-02 2018-03-29 Here Global B.V. Three-Dimentional Plane Panorama Creation Through Hough-Based Line Detection
CN108205820A (en) * 2018-02-02 2018-06-26 浙江商汤科技开发有限公司 Method for reconstructing, fusion method, device, equipment and the storage medium of plane
CN108564652A (en) * 2018-03-12 2018-09-21 中国科学院自动化研究所 Efficiently utilize the high-precision three-dimensional method for reconstructing of memory and system and equipment
CN110189399A (en) * 2019-04-26 2019-08-30 浙江大学 A kind of method and system that interior three-dimensional layout rebuilds
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system
CN112562082A (en) * 2020-08-06 2021-03-26 长春理工大学 Three-dimensional face reconstruction method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination