CN118138741A - Naked eye 3D data communication method based on meta universe

Naked eye 3D data communication method based on meta universe

Info

Publication number
CN118138741A
CN118138741A (application CN202410559034.8A)
Authority
CN
China
Prior art keywords
data
naked eye
depth
color
point cloud
Prior art date
Legal status
Granted
Application number
CN202410559034.8A
Other languages
Chinese (zh)
Other versions
CN118138741B (en)
Inventor
罗翼鹏
袁梁
易洁
Current Assignee
Sichuan Wutong Technology Co ltd
Original Assignee
Sichuan Wutong Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Wutong Technology Co ltd
Priority to CN202410559034.8A
Publication of CN118138741A
Application granted
Publication of CN118138741B
Legal status: Active

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The invention relates to the technical field of the metaverse, and in particular to a naked eye 3D data communication method based on the metaverse, which comprises the following steps. Step 1: generating three-dimensional point cloud data corresponding to the naked eye 3D data; Step 2: generating a depth map based on the three-dimensional point cloud data; Step 3: calculating the normal vector of each point in the three-dimensional point cloud data; Step 4: performing color correction on the color of each point in the three-dimensional point cloud data using an illumination model; Step 5: mapping the quantized color information onto the depth map to generate a texture image; Step 6: extracting features from the generated texture image, applying Huffman coding, and compressing and transmitting the result to a receiving end; Step 7: the receiving end restores the feature vectors to their pre-compression form, reconstructs the texture image represented by the features, and synthesizes the texture image and the depth map into naked eye 3D data. The method improves the display effect of naked eye 3D data and the efficiency of data transmission.

Description

Naked eye 3D data communication method based on meta universe
Technical Field
The invention relates to the technical field of the metaverse, and in particular to a naked eye 3D data communication method based on the metaverse.
Background
With the continuous development of technology, naked eye 3D technology has emerged as an attractive technology that allows a user to view 3D images or videos with the naked eye, without wearing glasses or other devices. This technology provides a more comfortable and natural viewing experience and is therefore well received. However, conventional naked eye 3D display technology still has problems such as a limited viewing angle, a limited viewing distance, and visual distortion when the viewing angle or distance changes.
In existing naked eye 3D display technologies, common approaches include parallax-based autostereoscopic imaging and grating-based autostereoscopic imaging. Parallax-based autostereoscopic imaging captures images with imaging systems at two different positions, left and right, and produces a stereoscopic effect based on the parallax perceived by the observer. Grating-based autostereoscopic imaging separates the images seen by the left and right eyes by arranging a series of tiny lenses or gratings on the display panel, thereby realizing naked eye 3D display. Both techniques solve the usability problems of conventional 3D glasses to some extent, but still present many challenges and limitations.
First, existing naked eye 3D display technologies are limited in viewing distance and angle. The viewer must be within a specific distance and angle range to obtain the best viewing effect; beyond this range, visual distortion or loss of the 3D effect may occur, which is inconvenient for the user. Second, depth information is insufficiently processed in the prior art. Conventional naked eye 3D display technology generally provides only a surface-level stereoscopic effect, is limited in its representation of depth information, and lacks realism and a sense of depth. As a result, viewers find it difficult to perceive depth and stereoscopy in the picture, which weakens immersion and realism. In addition, color and texture are inadequately handled in the prior art. Conventional naked eye 3D display technology often performs poorly when processing the colors and textures of images, easily causing color deviation or texture blurring, which affects the sharpness and fidelity of the picture and reduces viewing quality. Finally, conventional naked eye 3D display technology also faces challenges in processing dynamic scenes. Because scenes change over time, conventional techniques struggle to capture and process images in real time, and stuttering, blurring, or delay readily occur, affecting the smoothness and continuity of viewing.
Therefore, in view of the problems in existing naked eye 3D display technology, new technical means need to be researched and developed to improve the effect and quality of naked eye 3D display. In particular, more efficient and accurate algorithms and methods are needed for processing depth information, optimizing color and texture, and handling dynamic scenes, so as to achieve a more vivid, clear, and smooth naked eye 3D display effect.
Disclosure of Invention
The invention mainly aims to provide a naked eye 3D data communication method based on the metaverse, which improves the display effect of naked eye 3D data and the efficiency of data transmission.
In order to solve the above technical problems, the invention provides a naked eye 3D data communication method based on the metaverse, comprising the following steps:
Step 1: performing three-dimensional scene modeling on the naked eye 3D data to generate three-dimensional point cloud data corresponding to the naked eye 3D data; receiving demand data from a user side; the demand data includes: viewing distance, viewing angle, brightness, contrast, and color saturation;
Step 2: generating a depth map based on the three-dimensional point cloud data; adjusting the depth values of the depth map according to the viewing distance and viewing angle in the demand data;
Step 3: calculating the normal vector of each point in the three-dimensional point cloud data;
Step 4: performing color correction on the color of each point in the three-dimensional point cloud data using an illumination model; quantizing the color of each point and applying a nearest color matching algorithm to map the continuous color space to a limited set of colors;
Step 5: mapping the quantized color information onto the depth map to generate a texture image; generating texture images for a plurality of viewpoints according to the viewing distance and viewing angle in the demand data;
Step 6: extracting features from the generated texture image and encoding the extracted features into vector form to obtain feature vectors; quantizing the extracted feature vectors using hash codes and mapping the high-dimensional features of the feature vectors to a low-dimensional space; then applying Huffman coding to compress the result and transmitting it to the receiving end;
Step 7: the receiving end restores the feature vectors to their pre-compression form; decodes the feature vectors to obtain the feature representation of the texture image; finally, reconstructs the texture image represented by the features using a reverse texture mapping method, synthesizes the texture image and depth map of each viewpoint into naked eye 3D data, and dynamically selects and displays the naked eye 3D data closest to the user's viewpoint according to the user's viewpoint; and optimizes the display of the naked eye 3D data according to the brightness, contrast, and color saturation in the demand data.
Further, the three-dimensional point cloud data corresponding to the naked eye 3D data generated in step 1 is P = {p_i = (x_i, y_i, z_i, r_i, g_i, b_i) | i = 1, 2, ..., N}, where (x_i, y_i, z_i) are the spatial coordinates of the i-th point in the three-dimensional point cloud data, with x_i the X-axis coordinate, y_i the Y-axis coordinate, and z_i the Z-axis coordinate; (r_i, g_i, b_i) is the color information of the point, with r_i, g_i, and b_i the color values of the red, green, and blue channels, respectively; and N is the number of points in the three-dimensional point cloud data.
Further, the method for generating the depth map based on the three-dimensional point cloud data in the step 2 includes:
Step 2.1: for each point p_i = (x_i, y_i, z_i) in the three-dimensional point cloud data, calculate its normalized plane coordinates (x_n, y_n) in the camera coordinate system;
Step 2.2: correct the normalized plane coordinates (x_n, y_n) for distortion according to the radial distortion parameters (k_1, k_2) and the tangential distortion parameters (p_1, p_2);
Step 2.3: convert the distortion-corrected normalized plane coordinates (x_d, y_d) into coordinates (u, v) in the pixel coordinate system;
Step 2.4: transform the point p_i in the three-dimensional point cloud data into the camera coordinate system according to the camera extrinsic matrix [R | t], and take its Z component as the depth value, obtaining the depth value D(u, v) at pixel (u, v) of the depth map.
Further, the method for calculating the normal vector of each point in the three-dimensional point cloud data in step 3 includes: constructing a weight matrix W, in which the element w_j represents the weight of the distance from the point p_i to its j-th nearest neighbor q_j; constructing a design matrix A and an observation matrix B, where A contains the coordinates of each nearest neighbor and B contains the normal vectors of the corresponding points; solving the equation A·beta = B in the weighted least squares sense, i.e., (A^T W A)·beta = (A^T W)·B, where beta is the parameter vector of the fitted surface; and calculating the normal vector n_i at the point p_i from the parameter vector beta of the fitted surface, where W, A, and B are assembled from the nearest neighbors of p_i as described above.
Further, constructing the illumination model includes: let the observer position in the illumination model be P_o, the illumination direction be L, the reflectivity be k, the ambient light intensity be I_a, and the light source intensity be I_L; calculate the diffuse reflection correction coefficient f_d, the specular reflection correction coefficient f_s, and the ambient correction coefficient f_a, respectively; and perform color correction on the color of each point in the three-dimensional point cloud data using the following formula:
C_i' = (f_d + f_s + f_a) * C_i
where C_i is the original color of the i-th point and C_i' is the color of the point after color correction.
Further, the diffuse reflection correction coefficient is calculated using the following formula:
f_d = k_d * I_L * (n · L)
where k_d is the diffuse reflection coefficient, I_L is the intensity of the light source, L is the unit vector of the illumination direction, and n is the normal vector at the point.
Further, the specular reflection correction coefficient is calculated using the following formula:
f_s = k_s * I_L * (R · V)^alpha
where alpha is the specular highlight index, a set value in the range 1 to 128; R is the reflection vector of the illumination direction with respect to the normal vector; V is the unit vector of the observer direction; and k_s is the specular reflection coefficient.
The ambient correction coefficient is calculated using the following formula:
f_a = k_a * I_a
where k_a is the ambient reflection coefficient and I_a is the ambient light intensity.
Further, in step 7, the method for synthesizing the texture image and the depth map into naked eye 3D data includes: let the texture image be expressed as T(u, v) and the depth map be expressed as D(u, v), where (u, v) are normalized plane coordinates; the disparity map d(u, v) is calculated from the depth map, the disparity value being the inverse of the depth, d(u, v) = 1 / D(u, v); the corrected disparity map d'(u, v) is then calculated by applying the cubic spline interpolation function S(·) over the neighborhood of each pixel, where i and j are the neighborhood subscripts; a depth enhancement map D_e(u, v) is constructed; a depth edge map E(u, v) is generated from the depth map; a depth edge enhancement map E'(u, v) is generated based on the depth enhancement map and the depth edge map by morphological dilation, where ⊕ denotes the morphological dilation operation, K_s denotes the dilation kernel at scale s, T_c(·) is a color space conversion function, D_e(u, v) is the depth enhancement map representing the enhanced depth information at position (u, v), and s is the scale; finally, the naked eye 3D data are synthesized by a weighted combination of the corrected disparity map, the depth edge enhancement map, and the texture image under the illumination model, where R is the direction vector of specular reflection and m is the Phong index.
Further, a depth enhancement map is constructed using the following formula:
D_e(u, v) = (1 / W_n) * Σ_{i = -w}^{w} Σ_{j = -w}^{w} G(i, j) * f(|D(u, v) - D(u + i, v + j)|) * D(u + i, v + j)
where D_e(u, v) is the depth enhancement map, representing the enhanced depth information at position (u, v); W_n is the normalization coefficient, a set value; the double summation traverses a (2w + 1) x (2w + 1) window covering the pixels surrounding position (u, v); G(i, j) is the Gaussian filter kernel, with i and j the indices of the kernel; f(·) is the weight function of the depth difference, representing the weight assigned to the depth difference between pixel (u, v) and its surrounding pixel (u + i, v + j), with larger depth differences receiving smaller weights, and is taken as a standard Gaussian function; and w is the parameter defining the window size.
The metaverse-based naked eye 3D data communication method of the invention has the following beneficial effects. First, the naked eye 3D data are processed with a three-dimensional scene modeling technique and represented as three-dimensional point cloud data, so that the spatial position and shape information of the objects in the scene are captured and represented. This step provides the basic data for subsequent processing and optimization and lays the foundation for the transmission and display of the naked eye 3D data. A depth map is then generated from the three-dimensional point cloud data, realizing accurate extraction and representation of the depth information. The depth map is crucial for naked eye 3D display: it helps the system accurately capture the near-far relationships and stereoscopic structure of objects, so that the viewer perceives the depth and stereoscopy of the picture more clearly. Calculating the normal vector of each point further improves the realism and fidelity of the image, because the geometric features and illumination effects of object surfaces can be simulated more accurately, making the naked eye 3D image more realistic and finer. The invention also uses an illumination model to perform color correction on the naked eye 3D data, further optimizing the color and texture of the image. The illumination model simulates real-world lighting and adjusts the image according to the observer position and light source intensity, so that colors are clearer and textures sharper, enhancing the viewer's visual enjoyment. The invention further applies feature extraction and encoding to the image, optimizing its compression and transmission: feature extraction and encoding effectively reduce the data volume and improve transmission efficiency, so that the naked eye 3D data are more stable and smooth during transmission. Finally, the invention adopts depth enhancement and image synthesis, further improving the naked eye 3D display effect: through depth enhancement and image synthesis, the naked eye 3D data become clearer, more stereoscopic, and more realistic, enhancing the viewer's immersion and visual enjoyment.
Drawings
Fig. 1 is a schematic flow chart of the metaverse-based naked eye 3D data communication method provided by an embodiment of the invention.
Detailed Description
The method of the present invention will be described in further detail with reference to the accompanying drawings.
Example 1: referring to fig. 1, a metaverse-based naked eye 3D data communication method, the method comprising:
step 1: carrying out three-dimensional scene modeling representation on the naked eye 3D data to generate three-dimensional point cloud data corresponding to the naked eye 3D data; receiving demand data from a user side; the demand data includes: viewing distance, viewing angle, brightness, contrast, and color saturation;
And converting the naked eye 3D data into three-dimensional point cloud data, so that the data is easier to process and express. The three-dimensional point cloud data has a concise and clear structure, can accurately express the shape and position information of objects in a scene, and provides a basis for subsequent data processing. By using the three-dimensional point cloud data as the intermediate representation form, the cross-platform compatibility of the system can be improved. The three-dimensional point cloud data is a universal data format, which is convenient for data exchange and sharing, whether on different operating systems, hardware devices or in different application programs.
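As an illustrative sketch only (not part of the claimed method), the point cloud structure described above can be held in a simple N x 6 array; the function name make_point_cloud and the use of NumPy are assumptions made for this example.

```python
import numpy as np

# Minimal sketch: an N x 6 array holding (x, y, z, r, g, b) per point,
# matching the point cloud structure described in step 1.
def make_point_cloud(xyz: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """xyz: (N, 3) float spatial coordinates; rgb: (N, 3) color values."""
    assert xyz.shape[0] == rgb.shape[0], "one color per point"
    return np.hstack([xyz.astype(np.float64), rgb.astype(np.float64)])

# Example: three points with colors.
cloud = make_point_cloud(
    np.array([[0.1, 0.2, 1.5], [0.3, -0.1, 2.0], [-0.2, 0.4, 1.8]]),
    np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]]),
)
print(cloud.shape)  # (3, 6): N points, each (x, y, z, r, g, b)
```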
The viewing distance and viewing angle are important factors in determining the stereoscopic effect. Different viewing distances and viewing angles can lead to different visual experiences, so that display parameters of naked eye 3D data are adjusted according to the viewing distances and the viewing angles of users, the display effect can be more in line with the expectations of the users, and the stereoscopic experience is enhanced. These parameters directly affect the quality and realism of the image. The user may have different preferences and environmental requirements, such as increased brightness in brightly lit environments or increased contrast and color saturation in color rich scenes. Therefore, the parameters are adjusted according to the requirement data of the user, so that the naked eye 3D data can keep good display effect in different environments.
Step 2: generating a depth map based on the three-dimensional point cloud data; adjusting the depth values of the depth map according to the viewing distance and viewing angle in the demand data;
Specifically, the depth map reflects distance information of each point in the scene, providing depth perception of the scene. This is important for tasks such as distance measurement, construction of virtual reality environment, and pose estimation of objects. The depth map provides an important basis for subsequent processing. For example, in naked eye 3D data communication, the depth map may be used for view angle synthesis, simulation of depth effect, and the like, so as to improve the sense of reality and the sense of quality of the image. The generation of the depth map may enhance the visual effect of the image. In naked eye 3D data, the depth map can be used for depth perception image enhancement, so that a receiving end user can better perceive the position relationship and distance of objects in the image. Compared with the original three-dimensional point cloud data, the depth map generally has lower data volume, so that the bandwidth requirement can be reduced in the data transmission process, and the transmission efficiency is improved.
The viewing distance determines the distance between the viewer and the display screen, and different viewing distances place different demands on depth perception. By adjusting the depth values of the depth map according to the viewing distance, viewing better matches the viewer's habits and the depth perception effect is enhanced. The viewing angle determines the angle of the viewer relative to the display screen, and different viewing angles produce different perspective effects. By adjusting the depth values of the depth map according to the viewing angle, the stability of the stereoscopic effect can be maintained at different viewing angles, so that the viewer obtains a good stereoscopic effect regardless of the angle from which the display is viewed. The farther the viewing distance, the lower the demand on depth perception; in this case the depth values of the depth map can be appropriately amplified, so that the depth differences between distant objects become more apparent. Conversely, the closer the viewing distance, the greater the demand on depth perception; in this case the depth values of the depth map can be appropriately spread apart, so that the depth differences between nearby objects are more apparent. Different viewing angles also lead to different perspective effects, so the depth values of the depth map need to be adjusted according to the viewing angle to maintain the stability of the stereoscopic effect. For example, for high-angle viewing it may be necessary to expand the depth range of the depth map to keep the stereoscopic effect stable.
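The patent does not give a concrete formula for this adjustment, so the following sketch assumes a simple rescaling of depth values around their mean, with a gain derived from the viewing distance and angle; the function name, the gain law, and the parameter values are all illustrative assumptions.

```python
import numpy as np

def adjust_depth(depth: np.ndarray,
                 viewing_distance: float,
                 viewing_angle_deg: float,
                 ref_distance: float = 2.0) -> np.ndarray:
    """Rescale depth values around their mean by a gain derived from the
    viewing conditions. The gain law here is an assumption for illustration,
    not the patent's (unstated) rule."""
    # Assumed rule: stronger depth contrast for closer viewing, with a small
    # extra widening for shallower viewing angles.
    gain = ref_distance / max(viewing_distance, 1e-6)
    gain *= 1.0 + 0.2 * abs(np.cos(np.radians(viewing_angle_deg)))
    mean = depth.mean()
    return mean + gain * (depth - mean)

depth_map = np.random.uniform(1.0, 5.0, size=(4, 4))
adjusted = adjust_depth(depth_map, viewing_distance=3.0, viewing_angle_deg=20.0)
```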
Step 3: calculating the normal vector of each point in the three-dimensional point cloud data;
In three-dimensional space, normal vector refers to a vector perpendicular to a surface. For points on a plane, its normal vector can be calculated from the position information of its surrounding points. For a point in the three-dimensional point cloud data, the normal vector represents the direction information of the surface on which the point is located. The calculation of the normal vector may be performed by various methods, wherein common methods include a least squares method, principal Component Analysis (PCA), and the like. These methods estimate the normal vector for each point based on the geometry of the point cloud data, thereby determining the normal vector direction for each point in the point cloud data.
By calculating the normal vector for each point, the direction of the surface where each point in the point cloud data is located can be estimated, thereby determining the geometry of the surface. This is important for subsequent illumination model calculation, rendering and visual effect enhancement. The normal vector is one of the important parameters for calculating the lighting effect. In calculating illumination, the normal vector can be used to determine the angle between the light and the surface, thereby affecting the reflection and refraction effects of the light on the surface. In the graphic rendering process, the normal vector can be used for determining the smoothness and curvature of the object surface, so that the generation of illumination effect and shadow is affected, and the rendering sense of reality is improved.
Step 4: performing color correction on the color of each point in the three-dimensional point cloud data using an illumination model; quantizing the color of each point and applying a nearest color matching algorithm to map the continuous color space to a limited set of colors;
The illumination model is a mathematical model describing illumination and color, which generally takes into account the influence of factors such as the light source, surface texture, and observer on color. In this step, the illumination model is used to correct the color of each point in the point cloud data so that the color better matches the illumination in the real scene. Color quantization refers to the process of mapping a continuous color space to a limited set of colors. In this step, the color of each point is quantized and represented as a limited number of color values to reduce the amount of data and improve the efficiency of subsequent processing. The nearest color matching algorithm is used to map quantized colors into a predefined limited set of colors. The algorithm typically calculates the distance between the quantized color and each color in the set and selects the closest color as the mapping result.
And correcting the color in the point cloud data through the illumination model, so that the color is closer to the illumination effect in the real scene, and the sense of reality and the visual effect of the data are improved. The color quantization and mapping can reduce the data amount and improve the data transmission efficiency. By mapping a continuous color space to a limited set of colors, the color information in the original data can be represented in a more compact form. The quantized color set has a simpler structure than the original continuous color space, so that the complexity and the calculation cost of data processing can be reduced in the subsequent processing process. Through color quantization and mapping, data from different sources can be provided with the same color representation, so that the consistency and comparability of the data are enhanced.
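A minimal sketch of the nearest color matching step described above, assuming Euclidean distance in RGB space and a small example palette (both assumptions for illustration):

```python
import numpy as np

def quantize_colors(colors: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Map each (r, g, b) row in `colors` to the nearest entry of `palette`
    by Euclidean distance in RGB space (a simple nearest-color matching rule)."""
    # (N, 1, 3) - (1, K, 3) -> (N, K) squared distances
    d2 = ((colors[:, None, :].astype(float) - palette[None, :, :].astype(float)) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)           # index of the closest palette color per point
    return palette[nearest]

palette = np.array([[0, 0, 0], [255, 255, 255], [255, 0, 0], [0, 255, 0], [0, 0, 255]])
points_rgb = np.array([[250, 10, 10], [12, 240, 30], [128, 128, 128]])
print(quantize_colors(points_rgb, palette))
```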
Step 5: mapping the quantized color information onto the depth map to generate a texture image; generating texture images for a plurality of viewpoints according to the viewing distance and viewing angle in the demand data;
The mapping of color information onto the depth map is essentially filling in corresponding color values at each pixel location of the depth map, thereby forming a texture image. Such texture mapping may be implemented by a simple interpolation algorithm or a projection algorithm. By mapping the quantized color information onto the depth map, a texture image is generated. A texture image is a visual representation of a scene that contains color information for each point in the scene for subsequent data transmission and reconstruction. The generation of texture images may enhance the visualization effect of the data. By mapping the color information onto the depth map, colors can be superimposed on the basis of the depth map, thereby improving the visual quality and realism of the data. The texture image contains not only depth information but also color information, so that the texture image has a richer information amount. This is important for subsequent data processing and reconstruction and can provide more details and accurate information. The texture image is used as a part of naked eye 3D data, and provides a basis for data transmission and reconstruction. The receiving end can reconstruct the color and texture information of the scene through the texture image, so that the original naked eye 3D data is restored.
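The following sketch illustrates one simple way to fill a texture image from projected point colors (nearest-pixel splatting with no interpolation); the function name and array layout are assumptions, and the interpolation or projection algorithm mentioned above could be substituted.

```python
import numpy as np

def build_texture_image(pixels_uv: np.ndarray, colors: np.ndarray,
                        height: int, width: int) -> np.ndarray:
    """Fill an H x W x 3 texture image by writing each point's quantized color
    at its projected pixel location (u, v); unfilled pixels stay black."""
    texture = np.zeros((height, width, 3), dtype=np.uint8)
    u = np.clip(pixels_uv[:, 0].astype(int), 0, width - 1)
    v = np.clip(pixels_uv[:, 1].astype(int), 0, height - 1)
    texture[v, u] = colors                # later points overwrite earlier ones
    return texture
```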
The multi-viewpoint texture image can simulate a human eye binocular vision system, so that viewers at different viewpoints can obtain stereoscopic effects at respective positions. The stereoscopic effect can enhance the realistic sense of naked eye 3D display, and enables the audience to feel more vivid visual experience. A texture image of a single viewpoint may have a viewing angle limitation when viewed, and an optimal stereoscopic effect can be obtained only at a specific angle. And generating texture images for multiple viewpoints may reduce such limitations so that viewers can view and obtain stereoscopic effects over a wider range of angles. Different audiences can be positioned at different positions and angles, and the texture images with multiple viewpoints can be generated to better adapt to the requirements of different audience positions, so that the audiences can obtain good naked eye 3D display effect wherever they are. The multi-view texture image can also provide more selectivity, and the audience can select the most suitable view according to the position and the angle of the audience, so that the best viewing experience is obtained.
Step 6: extracting features from the generated texture image and encoding the extracted features into vector form to obtain feature vectors; quantizing the extracted feature vectors using hash codes and mapping the high-dimensional features of the feature vectors to a low-dimensional space; then applying Huffman coding to compress the result and transmitting it to the receiving end;
The feature extraction process on the texture image aims to extract representative feature information from the image. These features may be edges, textures, color distributions, and so on. Common feature extraction methods include the Local Binary Pattern (LBP), the Histogram of Oriented Gradients (HOG), and the like. The extracted feature information is typically high-dimensional, and these features need to be encoded for ease of processing and transmission. The purpose of encoding is to map the high-dimensional features into a low-dimensional space while retaining as much of the original information as possible. Common feature encoding methods include hash encoding, Principal Component Analysis (PCA), and the like. After encoding, the feature data may be further compressed to reduce the bandwidth and storage space required during transmission. The purpose of compression is to reduce the redundancy of the data as much as possible while preserving its important information. Common compression methods include Huffman coding, entropy coding, and the like.
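As an illustration of the compression step, the sketch below builds a Huffman code table for a sequence of quantized feature symbols; the particular feature extractor and hash quantizer are not shown, and the symbol values are made up for the example.

```python
import heapq
from collections import Counter

def huffman_code_table(symbols):
    """Build a Huffman code table (symbol -> bit string) from a symbol sequence."""
    freq = Counter(symbols)
    # Each heap item: (frequency, tie_breaker, tree), where tree is a symbol or a (left, right) pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate case: a single distinct symbol
        return {heap[0][2]: "0"}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    table = {}
    def walk(tree, prefix=""):
        # Internal nodes are (left, right) tuples; leaves are symbols and get their prefix as code word.
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            table[tree] = prefix
    walk(heap[0][2])
    return table

# Example: quantized (hashed) feature values as small integers.
features = [3, 3, 3, 1, 2, 2, 7, 3, 1]
table = huffman_code_table(features)
bitstream = "".join(table[s] for s in features)
```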
Step 7: the receiving end restores the feature vectors to their pre-compression form; decodes the feature vectors to obtain the feature representation of the texture image; finally, reconstructs the texture image represented by the features using a reverse texture mapping method, synthesizes the texture image and depth map of each viewpoint into naked eye 3D data, and dynamically selects and displays the naked eye 3D data closest to the user's viewpoint according to the user's viewpoint; and optimizes the display of the naked eye 3D data according to the brightness, contrast, and color saturation in the demand data.
The receiving end first needs to decode the received compressed feature vectors to recover the original feature vectors. The decoding process is the reverse of the encoding process, ensuring that the recovered feature vectors are as close as possible to the original ones. The receiving end then reconstructs the texture image represented by the features using the reverse texture mapping method. This involves a back-mapping or back-projection from the feature vectors to the texture image, the aim of which is to restore the original texture image. The main function of step 7 is to restore the original texture image: by decoding the compressed feature vectors and mapping them back to the texture image, the original texture image is recovered, ensuring the integrity and accuracy of the data. The resulting texture image can be displayed and examined, and further information can be extracted from it, such as texture characteristics and color distribution, providing additional support for subsequent applications. The texture image obtained in step 7 can be visually presented at the receiving end, so that the user can intuitively observe and understand the original naked eye 3D data, improving user experience and interactivity. Step 7 also ensures the integrity and reliability of the data transmission process: through decoding and reconstruction, it is possible to verify whether data was lost or corrupted during transmission and to make the necessary repairs and corrections.
The nearest naked eye 3D data is dynamically selected for display according to the viewpoint of the user, so that the user can obtain the best stereoscopic effect when watching. Thus, the inconsistency of stereoscopic feeling can be reduced, and the consistency and fluency of watching can be improved. The naked eye 3D data can be dynamically selected, and display content can be adjusted according to the viewpoint of the user, so that the display content is more in line with the watching habit and the expectations of the user, and the realism of the naked eye 3D display are improved. By dynamically selecting the optimal naked eye 3D data, the satisfaction degree of the user on the naked eye 3D content can be improved. The user can obtain more consistent and good viewing effects at different positions and angles, thereby enhancing his viewing experience.
Optimizing the display of the naked eye 3D data according to parameters such as brightness, contrast, and color saturation in the demand data makes the displayed image clearer and brighter and improves its contrast and color vividness. Optimizing the display parameters allows the naked eye 3D display to better meet the viewer's visual perception and comfort requirements and reduces eye fatigue and discomfort during viewing, thereby improving viewing comfort and viewing duration. Optimizing the display of naked eye 3D data also makes the viewing effect more striking and attractive, enhancing the appeal of the displayed content and the viewer's focus on it.
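A minimal sketch of such display optimization, assuming simple global gains for brightness, contrast, and saturation (the gain model is not specified in the patent and is an assumption here):

```python
import numpy as np

def optimize_display(image: np.ndarray, brightness: float, contrast: float,
                     saturation: float) -> np.ndarray:
    """Apply simple brightness/contrast/saturation gains to an RGB uint8 image.
    The gain model is an assumed placeholder for the patent's unstated rule."""
    img = image.astype(np.float32)
    # Contrast: scale around mid-gray; brightness: additive offset.
    img = (img - 128.0) * contrast + 128.0 + brightness
    # Saturation: blend each pixel with its grayscale value.
    gray = img.mean(axis=2, keepdims=True)
    img = gray + saturation * (img - gray)
    return np.clip(img, 0, 255).astype(np.uint8)

frame = (np.random.rand(8, 8, 3) * 255).astype(np.uint8)
tuned = optimize_display(frame, brightness=10.0, contrast=1.1, saturation=1.2)
```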
Example 2: the three-dimensional point cloud data corresponding to the naked eye 3D data generated in step 1 is P = {p_i = (x_i, y_i, z_i, r_i, g_i, b_i) | i = 1, 2, ..., N}, where (x_i, y_i, z_i) are the spatial coordinates of the i-th point in the three-dimensional point cloud data, with x_i the X-axis coordinate, y_i the Y-axis coordinate, and z_i the Z-axis coordinate; (r_i, g_i, b_i) is the color information of the point, with r_i, g_i, and b_i the color values of the red, green, and blue channels, respectively; and N is the number of points in the three-dimensional point cloud data.
Example 3: the method for generating the depth map based on the three-dimensional point cloud data in the step 2 comprises the following steps:
Step 2.1: for each point p_i = (x_i, y_i, z_i) in the three-dimensional point cloud data, calculate its normalized plane coordinates (x_n, y_n) in the camera coordinate system;
Step 2.2: correct the normalized plane coordinates (x_n, y_n) for distortion according to the radial distortion parameters (k_1, k_2) and the tangential distortion parameters (p_1, p_2);
Step 2.3: convert the distortion-corrected normalized plane coordinates (x_d, y_d) into coordinates (u, v) in the pixel coordinate system;
Step 2.4: transform the point p_i in the three-dimensional point cloud data into the camera coordinate system according to the camera extrinsic matrix [R | t], and take its Z component as the depth value, obtaining the depth value D(u, v) at pixel (u, v) of the depth map.
Specifically, let the intrinsic matrix of the camera be K, the extrinsic matrix be [R | t], the coordinates of a point in the three-dimensional point cloud data be p_i = (x_i, y_i, z_i), and the radial and tangential distortion parameters be (k_1, k_2, p_1, p_2). The point is first transformed into the camera coordinate system, (X_c, Y_c, Z_c)^T = R · (x_i, y_i, z_i)^T + t, and the depth value at the corresponding pixel (u, v) in the depth image is D(u, v) = Z_c, where the pixel coordinates (u, v) are obtained by applying the distortion model to the normalized coordinates (X_c / Z_c, Y_c / Z_c) and then projecting with the intrinsics, u = f_x · x_d + c_x, v = f_y · y_d + c_y.
Here (x_i, y_i, z_i) are the coordinates of the point in the three-dimensional point cloud data, [R | t] is the extrinsic matrix of the camera, K is the intrinsic matrix of the camera, (k_1, k_2, p_1, p_2) are the radial and tangential distortion parameters, f_x and f_y are the focal lengths of the camera in the X-axis and Y-axis directions, respectively, and c_x and c_y are the optical center coordinates of the camera.
Each point p_i in the three-dimensional point cloud data is converted into normalized plane coordinates (x_n, y_n) in the camera coordinate system, which determines the position of each point on the camera imaging plane and provides the basis for subsequent distortion correction and depth calculation. The radial distortion parameters (k_1, k_2) and tangential distortion parameters (p_1, p_2) are then taken into account to correct the coordinates on the normalized plane; this eliminates the distortion produced during camera imaging and improves the accuracy and reliability of the depth map. The distortion-corrected normalized plane coordinates (x_d, y_d) are converted into coordinates (u, v) in the pixel coordinate system. This step uses the camera intrinsic matrix, including parameters such as pixel size and principal point offset, to convert the normalized coordinates into actual pixel coordinates, providing accurate pixel positions for the subsequent depth calculation. Finally, according to the camera extrinsic matrix [R | t], the point p_i in the three-dimensional point cloud data is transformed into the camera coordinate system, and its Z component is taken as the depth value D(u, v). The depth values obtained in this way represent the depth information at pixel (u, v) of the depth map. Each pixel in the depth map corresponds to a point in three-dimensional space, so the depth map provides the object-to-camera distance at each pixel location.
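The sketch below illustrates steps 2.1 to 2.4 under a standard pinhole camera with Brown-Conrady distortion, which is an assumed concrete model; symbol names such as fx, fy, cx, cy, k1, k2, p1, p2 follow that convention rather than the patent's unlabeled notation.

```python
import numpy as np

def project_to_depth_map(points: np.ndarray, R: np.ndarray, t: np.ndarray,
                         fx: float, fy: float, cx: float, cy: float,
                         k1: float, k2: float, p1: float, p2: float,
                         height: int, width: int) -> np.ndarray:
    """Project 3D points into a depth map using extrinsics [R | t], a pinhole
    intrinsic model (fx, fy, cx, cy) and the radial/tangential distortion
    parameters (k1, k2, p1, p2). Keeps the nearest depth per pixel."""
    depth = np.full((height, width), np.inf)
    cam = (R @ points.T).T + t                               # points in camera coordinates
    cam = cam[cam[:, 2] > 1e-6]                              # keep points in front of the camera
    xn, yn = cam[:, 0] / cam[:, 2], cam[:, 1] / cam[:, 2]    # normalized plane coordinates
    r2 = xn ** 2 + yn ** 2
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn ** 2)
    yd = yn * radial + p1 * (r2 + 2 * yn ** 2) + 2 * p2 * xn * yn
    u = np.round(fx * xd + cx).astype(int)
    v = np.round(fy * yd + cy).astype(int)
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[ok], v[ok], cam[ok, 2]):
        depth[vi, ui] = min(depth[vi, ui], zi)               # z-buffer: keep the closest point
    return depth                                             # unseen pixels remain +inf
```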
Example 4: the method for calculating the normal vector of each point in the three-dimensional point cloud data in step 3 includes: constructing a weight matrix W, in which the element w_j represents the weight of the distance from the point p_i to its j-th nearest neighbor q_j; constructing a design matrix A and an observation matrix B, where A contains the coordinates of each nearest neighbor and B contains the normal vectors of the corresponding points; solving the equation A·beta = B in the weighted least squares sense, i.e., (A^T W A)·beta = (A^T W)·B, where beta is the parameter vector of the fitted surface; and calculating the normal vector n_i at the point p_i from the parameter vector beta of the fitted surface.
Specifically, the weighted least squares method fits the data by minimizing the weighted sum of squared residuals to find the parameters of the best-fit surface. In this example, weighted least squares is used to fit a local surface in the three-dimensional point cloud data and obtain the parameter vector of that surface. The design matrix A contains the coordinates of each nearest neighbor, and the observation matrix B contains the normal vectors of the corresponding points. Solving the equation A·beta = B by weighted least squares yields the parameter vector beta of the fitted surface; this parameter vector describes the characteristics of the fitted surface, such as its normal direction and curvature. The normal vector of each point in the point cloud data is then calculated from the parameter vector beta of the fitted surface; typically the normal vector can be computed directly from the surface parameters. In embodiment 4, weighted least squares is therefore used to describe the shape of the surface underlying the point cloud data so that the normal vector of each point can be calculated accurately. The normal vector describes the orientation of the surface at the point, which is critical for subsequent model reconstruction, surface analysis, and rendering. The calculated normal vectors can be used for surface analysis, such as detecting local features of curved surfaces and edge detection. At the same time, the normal vector is important information for rendering, as it determines the reflection and refraction directions when light strikes the surface and thus produces realistic surface effects. The normal vector is also very important for model reconstruction from point cloud data: by calculating the normal vector of each point, the geometric structure of the point cloud data can be better understood, enabling model reconstruction and surface reconstruction. The normal vector can further be used for analysis of point cloud data, such as curvature calculation of surfaces and topological analysis of the point cloud. Through analysis of the normal vectors, feature information in the point cloud data can be extracted, providing support for subsequent data processing and analysis.
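A small sketch of the weighted least squares normal estimation, assuming the local surface is parameterized as the plane z = a*x + b*y + c and that the weights are Gaussian in the neighbor distance; both choices are assumptions, since the patent does not fix them.

```python
import numpy as np

def point_normal_wls(p: np.ndarray, neighbors: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Estimate the normal at point p by a weighted least-squares plane fit
    z = a*x + b*y + c over its nearest neighbors, with Gaussian distance weights."""
    d = np.linalg.norm(neighbors - p, axis=1)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))          # diagonal of the weight matrix W
    A = np.column_stack([neighbors[:, 0], neighbors[:, 1], np.ones(len(neighbors))])
    B = neighbors[:, 2]
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ B)  # (A^T W A) beta = A^T W B
    n = np.array([-beta[0], -beta[1], 1.0])           # normal of the plane z = a*x + b*y + c
    return n / np.linalg.norm(n)
```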
Example 5: constructing the illumination model includes: let the observer position in the illumination model be P_o, the illumination direction be L, the reflectivity be k, the ambient light intensity be I_a, and the light source intensity be I_L; calculate the diffuse reflection correction coefficient f_d, the specular reflection correction coefficient f_s, and the ambient correction coefficient f_a, respectively; and perform color correction on the color of each point in the three-dimensional point cloud data using the following formula:
C_i' = (f_d + f_s + f_a) * C_i
where C_i is the original color of the i-th point and C_i' is the color of the point after color correction.
Specifically, the diffuse reflection correction coefficient indicates the degree to which light incident on the surface from the light source direction is reflected to the observer direction. It is generally determined by the angle between the direction of illumination and the normal to the surface and the diffuse reflectance of the surface. The specular reflection correction factor indicates the extent to which light incident on the surface from the light source direction is reflected to the observer direction, taking into account the specular reflectance of the surface and the angle between the illumination direction and the line-of-sight direction. The ambient correction factor indicates the extent to which ambient light affects the color of the surface and is related to the ambient reflectivity of the surface as well as the intensity of the ambient light. The formula weights the diffuse, specular, and ambient light components and multiplies them by the original color to obtain a color corrected color. This process can be understood as adjusting the original color to reflect the actual illumination situation based on the illumination effect calculated by the illumination model. The diffuse reflection, specular reflection, and ambient light component correction coefficients in the formula are used to simulate the lighting effects in the real world. The diffuse reflectance correction coefficient represents the process by which light is incident on the surface from the light source and reflected toward the observer, the specular reflectance correction coefficient represents the process by which light is reflected toward the observer by the surface, and the ambient correction coefficient represents the effect of ambient light on the surface. And adjusting the original color according to the correction coefficient in the formula. Different correction factors affect the brightness, saturation and hue of the final color, thereby adjusting the color of each point in the point cloud data to make it more realistic and realistic. The color correction process enhances the visualization effect of the point cloud data. After the illumination effect is considered, the color of the point cloud data has a stereoscopic impression and a sense of reality, so that the data can be more easily understood and analyzed in the visualization process. By simulating a real lighting effect, color correction can make the point cloud data more vivid and expressive. Different lighting conditions and scenes can produce different color effects, thereby better presenting the features and details of the data.
Example 6: the diffuse reflection correction coefficient is calculated using the following formula:
f_d = k_d * I_L * (n · L)
where k_d is the diffuse reflection coefficient, I_L is the intensity of the light source, L is the unit vector of the illumination direction, and n is the normal vector at the point.
Specifically, n · L in the formula reflects the angle between the normal vector n and the illumination direction L. This angle determines the angle of incidence of the light on the surface and thus affects the intensity of the diffuse reflection: the smaller the angle, the greater the diffuse reflection intensity, and vice versa. The value is obtained by the dot product of the normal vector and the illumination direction vector, and the angle between the two vectors can be derived from the dot-product result. The dot product is then multiplied by the light source intensity I_L and the diffuse reflection coefficient k_d to obtain the diffuse reflection correction coefficient. This coefficient represents the extent to which light from the light source, after striking the surface, is reflected toward the observer. The diffuse reflection coefficient k_d represents the reflectivity of the surface to diffusely reflected light, and I_L represents the intensity of the light source. Through this calculation, the diffuse reflection intensity of each point can be obtained. First, the normal vector n and illumination direction L of each point are obtained; this information typically comes from the illumination model and the point cloud data. The angle between n and L is computed via their dot product, and the diffuse reflection correction coefficient of each point is then calculated from the diffuse reflection coefficient k_d, the light source intensity I_L, and the dot-product result. The coefficient is applied to the color of each point in the point cloud data to realize diffuse-reflection color correction. The final result is that the color of each point in the point cloud data is corrected for diffuse reflection so that it better matches the real illumination conditions. This process improves the realism and visual effect of the point cloud data, making it more lifelike and three-dimensional, which is important for application scenarios such as virtual reality, augmented reality, and three-dimensional modeling, improving the user experience and the expressiveness of the data. The main purpose of this formula is to simulate real-world lighting and adjust the color of each point in the point cloud data according to the illumination conditions, thereby enhancing the visualization effect and realism of the data. In the illumination model, diffuse reflection is the scattered light produced when a ray meets a surface and is an important part of the lighting effect. By calculating the diffuse reflection correction coefficient, the reflection intensity of light on the surface can be determined and the color of the point cloud data adjusted accordingly.
Example 7: the specular reflection correction coefficient is calculated using the following formula:
f_s = k_s * I_L * (R · V)^alpha
In practice, the purpose of the formula is to calculate the specular reflection intensity for each point in the point cloud data based on the lighting conditions and the surface properties. First, by calculating the reflection vector and the observer direction, the reflection of the light on the surface and the observation angle are determined. Then, a specular reflection correction factor is calculated from the angle between the reflection vector and the observer direction, and the specular reflection factor and the specular index. The coefficient reflects the specular reflection intensity of light generated when the light is reflected on the surface, and is used for adjusting the color of each point in the point cloud data, so that the color effect of the point cloud data is more consistent with the color effect under the real illumination condition. By the formula, the sense of reality and the visual effect of the point cloud data can be enhanced. The calculation of the specular reflection correction factor takes into account factors such as the incident angle, the reflection angle, the specular reflectivity of the surface, and the like of the light, thereby more accurately simulating the illumination effect in the real world. Therefore, the data can be more vivid and real in the visualization process, and the expressive force and the understandability of the data are improved.
where alpha is the specular highlight index, a set value in the range 1 to 128; R is the reflection vector of the illumination direction with respect to the normal vector; V is the unit vector of the observer direction; and k_s is the specular reflection coefficient.
The purpose of the formula is to calculate the ambient light component for each point in the point cloud data based on the ambient lighting conditions and the surface properties. The color of each point in the point cloud data is adjusted by considering the ambient illumination intensity and the ambient reflectivity of the surface, so that the color effect of the point cloud data is more accordant with the color effect under the real illumination condition. By the formula, the sense of reality and the visual effect of the point cloud data can be enhanced. The calculation of the environmental correction coefficient takes into account factors such as the ambient light intensity and the ambient reflectivity of the surface, thereby more accurately simulating the ambient light effect in the real world. Therefore, the data can be more vivid and real in the visualization process, and the expressive force and the understandability of the data are improved.
The ambient correction coefficient is calculated using the following formula:
f_a = k_a * I_a
where k_a is the ambient reflection coefficient and I_a is the ambient light intensity.
In particular, in naked eye 3D data communication, the application of the illumination model is to maintain the sense of realism of the data during transmission and presentation, so that the user can better understand and process the data. The formula provided in example 7 is largely divided into two parts: the specular reflection intensity of each point in the point cloud data is determined by calculating the angle between the reflection vector and the observer direction, and taking into account the specular reflection coefficient and the specular reflection index. Therefore, the reflection effect of the surface under the irradiation of light can be simulated, so that the data has more stereoscopic impression and luster. And adjusting the color of each point in the point cloud data to reflect the real ambient light effect by considering the ambient light intensity and the ambient reflectivity of the surface. Therefore, the data can be presented with different colors under different illumination conditions, and the sense of reality and fidelity of the data are increased. By the correction method, the naked eye 3D data can better reflect the illumination condition in the real world in the transmission and display process, so that a user can more intuitively perceive the data and more accurately understand the information in the data. This is of great importance for many fields of application, such as virtual reality, augmented reality, medical image processing, etc. In these fields, requirements for data realism and visual effects are very high, and the illumination model provided in embodiment 7 can effectively meet these requirements, and improve the visual effects and user experience of the data.
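For illustration, the following sketch applies the Phong-style correction C' = (f_d + f_s + f_a) * C from examples 5 to 7; the clamping of the dot products to non-negative values, the output clipping to [0, 255], and the default shininess value are robustness assumptions added here.

```python
import numpy as np

def correct_color(color: np.ndarray, normal: np.ndarray, light_dir: np.ndarray,
                  view_dir: np.ndarray, kd: float, ks: float, ka: float,
                  light_intensity: float, ambient_intensity: float,
                  shininess: float = 32.0) -> np.ndarray:
    """Phong-style correction: C' = (f_d + f_s + f_a) * C, with diffuse,
    specular and ambient coefficients as in examples 5 to 7."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    f_d = kd * light_intensity * max(0.0, float(n @ l))      # diffuse term
    r = 2.0 * (n @ l) * n - l                                # reflection of l about n
    f_s = ks * light_intensity * max(0.0, float(r @ v)) ** shininess
    f_a = ka * ambient_intensity                             # ambient term
    return np.clip((f_d + f_s + f_a) * color, 0, 255)

corrected = correct_color(np.array([200.0, 120.0, 80.0]),
                          normal=np.array([0.0, 0.0, 1.0]),
                          light_dir=np.array([0.0, 0.5, 1.0]),
                          view_dir=np.array([0.0, 0.0, 1.0]),
                          kd=0.7, ks=0.2, ka=0.1,
                          light_intensity=1.0, ambient_intensity=0.5)
```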
Example 8: the method for synthesizing the texture image and the depth map into naked eye 3D data in step 7 includes: let the texture image be expressed as T(u, v) and the depth map be expressed as D(u, v), where (u, v) are normalized plane coordinates; the disparity map d(u, v) is calculated from the depth map, the disparity value being the inverse of the depth, d(u, v) = 1 / D(u, v); the corrected disparity map d'(u, v) is then calculated by applying the cubic spline interpolation function S(·) over the neighborhood of each pixel, where i and j are the neighborhood subscripts; a depth enhancement map D_e(u, v) is constructed; a depth edge map E(u, v) is generated from the depth map; a depth edge enhancement map E'(u, v) is generated based on the depth enhancement map and the depth edge map by morphological dilation, where ⊕ denotes the morphological dilation operation, K_s denotes the dilation kernel at scale s, T_c(·) is a color space conversion function, D_e(u, v) is the depth enhancement map representing the enhanced depth information at position (u, v), and s is the scale; finally, the naked eye 3D data are synthesized by a weighted combination of the corrected disparity map, the depth edge enhancement map, and the texture image under the illumination model, where R is the direction vector of specular reflection and m is the Phong index.
Specifically, the concrete expression of the color space conversion function T_c(·) depends on the selected color space conversion method. Common color space conversion methods include RGB to grayscale, RGB to HSV, and so on. HSV is a color space describing colors, where H represents hue, S represents saturation, and V represents value (brightness). The RGB to HSV conversion may use the following formulas: with M = max(R, G, B) and m = min(R, G, B), V(u, v) = M; S(u, v) = 0 if M = 0, otherwise (M - m) / M; and H(u, v) = 60 * (G - B) / (M - m) if M = R, 60 * (2 + (B - R) / (M - m)) if M = G, and 60 * (4 + (R - G) / (M - m)) if M = B, with H taken modulo 360.
where H(u, v), S(u, v), and V(u, v) respectively represent the hue, saturation, and brightness at position (u, v), and R(u, v), G(u, v), and B(u, v) respectively represent the pixel values of the red, green, and blue channels at position (u, v).
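For reference, the standard-library colorsys module implements the same RGB-to-HSV formulas; the helper below merely rescales its output so that hue is in degrees (the 8-bit input range is an assumption).

```python
import colorsys

def rgb_to_hsv_pixel(r: int, g: int, b: int):
    """Convert one 8-bit RGB pixel to (H in degrees, S, V in [0, 1]) using the
    standard-library colorsys implementation of the formulas above."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

print(rgb_to_hsv_pixel(255, 0, 0))    # (0.0, 1.0, 1.0): pure red
print(rgb_to_hsv_pixel(0, 128, 128))  # hue 180 degrees, S = 1.0, V about 0.5
```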
Morphological dilation is defined as the convolution of an image (or a portion of an image, such as a particular region) with a structuring element, which is a binary template of a specified shape. For each pixel, the dilation operation aligns the origin of the structural element with that pixel, then "overlays" all pixels contained in the structural element with the image, and if at least one pixel in the structural element matches the pixel in the corresponding location in the image, then the value of that pixel in the output image is set to 1 (or the original gray value); otherwise, the value of the pixel in the output image is set to 0.
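A direct (unoptimized) sketch of the binary dilation just described; the structuring element and test image are illustrative.

```python
import numpy as np

def dilate(image: np.ndarray, structure: np.ndarray) -> np.ndarray:
    """Binary morphological dilation as described above: a pixel is set in the
    output if the structuring element, centered on it, overlaps any set pixel."""
    h, w = image.shape
    sh, sw = structure.shape
    oy, ox = sh // 2, sw // 2                     # origin of the structuring element
    padded = np.pad(image, ((oy, sh - 1 - oy), (ox, sw - 1 - ox)))
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            window = padded[y:y + sh, x:x + sw]   # neighborhood covered by the element
            out[y, x] = 1 if np.any(window & structure) else 0
    return out

img = np.zeros((5, 5), dtype=np.uint8); img[2, 2] = 1
kernel = np.ones((3, 3), dtype=np.uint8)
print(dilate(img, kernel))   # the single set pixel grows into a 3 x 3 block
```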
The cubic spline interpolation function is typically expressed as a piecewise cubic polynomial, with each piecewise cubic polynomial defined by four control points. Let the control points be (x_k, y_k), k = 0, 1, ..., n. On each segment [x_k, x_{k+1}], the cubic spline interpolation function can be expressed as:
S_k(x) = a_k + b_k (x - x_k) + c_k (x - x_k)^2 + d_k (x - x_k)^3
where x is the abscissa of the interpolation point, S_k(x) is the corresponding ordinate, x_k is the starting abscissa of the segment, x_{k+1} is the ending abscissa, and a_k, b_k, c_k, and d_k are the undetermined coefficients. The spline is required to pass through the control points and to be continuous, with a continuous first derivative, at each interior point, i.e.:
S_k(x_{k+1}) = S_{k+1}(x_{k+1}) = y_{k+1} and S_k'(x_{k+1}) = S_{k+1}'(x_{k+1}).
By solving this system of equations, the values of a_k, b_k, c_k, and d_k are obtained.
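In practice the coefficient system can be solved with scipy.interpolate.CubicSpline, which builds the piecewise cubic polynomials and additionally enforces second-derivative continuity and boundary conditions; the control points below are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Control points (x_k, y_k); CubicSpline builds the piecewise cubic polynomials
# and enforces continuity of the first and second derivatives at the knots.
x_k = np.array([0.0, 1.0, 2.0, 3.0])
y_k = np.array([0.0, 2.0, 1.0, 3.0])
spline = CubicSpline(x_k, y_k)

# Evaluate the interpolant between the control points.
xs = np.linspace(0.0, 3.0, 7)
print(np.round(spline(xs), 3))
```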
First, the disparity map is obtained by calculating the disparity value of each pixel in the depth map. The disparity value is the inverse of the depth; it represents the actual depth corresponding to each pixel and is an important parameter in the naked eye 3D data synthesis process. Then, the texture image is corrected using the disparity map, and a corrected disparity map is generated taking into account the influence of the depth information on the visual effect. This preserves the authenticity of the texture image while making it better conform to the depth information, improving the stereoscopic impression and realism of the data. Next, a depth edge map, i.e., the edge information of depth variations, is calculated from the depth map. This step sharpens the edges of the depth information, making the data clearer and more definite when presented. A depth edge enhancement map is then generated from the depth edge map, the depth enhancement map, and the dilation kernel; the morphological dilation operation, combined with the depth and texture information, further enhances the sharpness and three-dimensionality of the depth edges. Finally, taking the illumination conditions into account, the naked eye 3D data are synthesized from the corrected disparity map and the depth edge enhancement map by weighted summation. In this step, the texture information, depth information, and illumination conditions are considered together to generate 3D data with greater realism and stereoscopic impression. The synthesized naked eye 3D data better reflect the authenticity of the data and also improve its visual effect and quality, allowing the user to understand and process the data more intuitively.
Example 9: the depth enhancement map is constructed using the following formula:
D_e(u, v) = (1 / W_n) * Σ_{i = -w}^{w} Σ_{j = -w}^{w} G(i, j) * f(|D(u, v) - D(u + i, v + j)|) * D(u + i, v + j)
where D_e(u, v) is the depth enhancement map, representing the enhanced depth information at position (u, v); W_n is the normalization coefficient, a set value; the double summation traverses a (2w + 1) x (2w + 1) window covering the pixels surrounding position (u, v); G(i, j) is the Gaussian filter kernel, with i and j the indices of the kernel; f(·) is the weight function of the depth difference, representing the weight assigned to the depth difference between pixel (u, v) and its surrounding pixel (u + i, v + j), with larger depth differences receiving smaller weights, and is taken as a standard Gaussian function; and w is the parameter defining the window size.
Specifically, first, for each pixel (u, v) in the image, the double summation traverses a (2w + 1) x (2w + 1) window covering the pixels surrounding position (u, v). The purpose of this step is to take into account the influence of the surrounding pixels on the current pixel so as to integrate their depth information. Then, the Gaussian filter kernel G(i, j) is used to perform a weighted summation over the pixels in the window to obtain a weighted average. The Gaussian kernel applies weighted smoothing to the surrounding pixels, making the depth map spatially smoother, reducing noise and interference in the image, and improving the clarity and readability of the data; the kernel parameters can be adjusted as needed to suit image processing requirements in different scenes. Next, the depth difference between the current pixel (u, v) and each surrounding pixel (u + i, v + j), expressed as |D(u, v) - D(u + i, v + j)|, is calculated. The depth difference reflects how the depth changes between different pixels in the image and is used to measure the degree of variation of the depth information. After the depth difference is calculated, the weight function f(·) of the depth difference is used to weight it: the larger the depth difference, the smaller the weight, meaning that pixels whose depth differs greatly from the current pixel have less influence on it. This effectively reduces the influence of regions with large depth changes on the depth enhancement map and preserves the smoothness and continuity of the image. Finally, the weighted sum is normalized by the normalization coefficient W_n to obtain the final depth enhancement map D_e(u, v). The normalization coefficient ensures that the values of the depth enhancement map lie within a reasonable range and are consistent in scale with the other parameters and pixel values, so that the depth enhancement map better reflects the variation of depth information in the image and improves the clarity and stereoscopic impression of the data.
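A compact sketch of this depth enhancement as a bilateral-style filter; the kernel widths sigma_s and sigma_d, the window radius, and the edge padding are assumptions for illustration.

```python
import numpy as np

def depth_enhance(depth: np.ndarray, w: int = 2,
                  sigma_s: float = 1.5, sigma_d: float = 0.1) -> np.ndarray:
    """Bilateral-style depth enhancement as described above: a Gaussian spatial
    kernel G(i, j) weighted by a Gaussian of the depth difference, normalized
    per pixel."""
    h, wd = depth.shape
    ii, jj = np.meshgrid(np.arange(-w, w + 1), np.arange(-w, w + 1), indexing="ij")
    G = np.exp(-(ii ** 2 + jj ** 2) / (2 * sigma_s ** 2))    # spatial Gaussian kernel
    out = np.zeros_like(depth, dtype=np.float64)
    padded = np.pad(depth, w, mode="edge")
    for u in range(h):
        for v in range(wd):
            window = padded[u:u + 2 * w + 1, v:v + 2 * w + 1]
            f = np.exp(-((depth[u, v] - window) ** 2) / (2 * sigma_d ** 2))  # depth-difference weights
            weights = G * f
            out[u, v] = (weights * window).sum() / weights.sum()             # normalized weighted average
    return out

d = np.random.uniform(1.0, 3.0, size=(6, 6))
d_enhanced = depth_enhance(d)
```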
While specific embodiments of the present invention have been described above, it will be understood by those skilled in the art that these specific embodiments are by way of example only, and that various omissions, substitutions, and changes in the form and details of the methods and systems described above may be made by those skilled in the art without departing from the spirit and scope of the invention. For example, it is within the scope of the present invention to combine the above-described method steps to perform substantially the same function in substantially the same way to achieve substantially the same result. Accordingly, the scope of the invention is limited only by the following claims.

Claims (9)

1. The naked eye 3D data communication method based on the meta universe is characterized by comprising the following steps:
Step 1: carrying out three-dimensional scene modeling and representation of the naked eye 3D data to generate three-dimensional point cloud data corresponding to the naked eye 3D data; receiving demand data from a user side, the demand data including: viewing distance, viewing angle, brightness, contrast, and color saturation;
Step 2: generating a depth map based on the three-dimensional point cloud data; adjusting the depth values of the depth map according to the viewing distance and viewing angle in the demand data;
Step 3: calculating the normal vector of each point in the three-dimensional point cloud data;
Step 4: performing color correction on the color of each point in the three-dimensional point cloud data by using an illumination model; quantizing the color of each point and applying a nearest-color matching algorithm to map the continuous color space to a limited set of colors;
Step 5: mapping the quantized color information to the depth map to generate a texture image; generating texture images for a plurality of viewpoints according to the viewing distance and viewing angle in the demand data;
Step 6: extracting features from the generated texture image and coding the extracted features into vector form to obtain feature vectors; quantizing the extracted feature vectors by hash coding, mapping the high-dimensional features of the feature vectors to a low-dimensional space; and applying Huffman coding to compress the result before transmitting it to the receiving end;
Step 7: the receiving end restores the feature vectors to their pre-compression form; decoding the feature vectors to obtain the feature representation of the texture image; finally, reconstructing the texture image represented by the features using a reverse texture mapping method, synthesizing the texture image and the depth map of each viewpoint into naked eye 3D data, dynamically selecting and displaying the naked eye 3D data closest to the user's viewpoint according to that viewpoint; and optimizing the display of the naked eye 3D data according to the brightness, contrast and color saturation in the demand data.
2. The meta-universe-based naked eye 3D data communication method according to claim 1, wherein the three-dimensional point cloud data corresponding to the naked eye 3D data generated in step 1 is $P=\{p_i=(x_i,y_i,z_i,r_i,g_i,b_i)\}_{i=1}^{N}$, wherein $(x_i,y_i,z_i)$ represents the spatial coordinates of the $i$-th point in the three-dimensional point cloud data; $x_i$ is the X-axis coordinate; $y_i$ is the Y-axis coordinate; $z_i$ is the Z-axis coordinate; $(r_i,g_i,b_i)$ represents the color information of the point in the point cloud, where $r_i$, $g_i$ and $b_i$ are the color values of the red, green and blue channels, respectively; and $N$ is the number of points in the three-dimensional point cloud data.
3. The meta-universe-based naked eye 3D data communication method according to claim 2, wherein the method for generating the depth map based on the three-dimensional point cloud data in step 2 comprises:
Step 2.1: for each point in the three-dimensional point cloud data Calculating normalized plane coordinates/>, under a camera coordinate system
Step 2.2: according to radial distortion parametersAnd tangential distortion parameter/>For the coordinates on the normalized planePerforming distortion correction;
step 2.3: normalized plane coordinates to correct distortion Conversion to coordinates in a pixel coordinate System/>
Step 2.4: according to the external parameter matrix of the cameraPoint/>, in three-dimensional point cloud dataTransformed into the camera coordinate system and then taken to be/>The component is used as a depth value to obtain a pixel/>, in the depth mapDepth value at/>
4. The meta-universe-based naked eye 3D data communication method according to claim 3, wherein the method of calculating the normal vector of each point in the three-dimensional point cloud data in step 3 comprises: constructing a weight matrix $W$, wherein $w_{ij}$ represents the weight of the distance from point $p_i$ to its nearest neighbor $p_j$; constructing a design matrix $A$ and an observation matrix $b$, wherein $A$ contains the coordinates of each nearest neighbor and $b$ contains the normal vectors of the corresponding points; solving the equation $A^{T}WA\,\theta=A^{T}Wb$ using weighted least squares, wherein $\theta$ is the parameter vector of the fitted surface; and calculating the normal vector $n_i$ at point $p_i$ according to the parameter vector $\theta$ of the fitted surface.
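A hedged sketch of the weighted least-squares fit of claim 4 follows. It adopts the common reading in which a local plane z = a*x + b*y + c is fitted over a point's nearest neighbors with inverse-distance weights, so the observation vector holds the neighbors' z-coordinates rather than, as the claim's wording has it, their normal vectors; the weight definition and the neighborhood size are likewise assumptions.

```python
import numpy as np

def point_normal(p, neighbors, eps=1e-8):
    """Estimate the unit normal at p from a weighted least-squares plane fit.

    p         : (3,) query point
    neighbors : (k, 3) its nearest neighbors
    Solves (A^T W A) theta = A^T W b for theta = (a, b, c) of z = a*x + b*y + c.
    """
    d = np.linalg.norm(neighbors - p, axis=1)
    W = np.diag(1.0 / (d + eps))                     # weight matrix (assumed form)
    A = np.column_stack([neighbors[:, 0], neighbors[:, 1],
                         np.ones(len(neighbors))])   # design matrix
    b = neighbors[:, 2]                              # observations (z values)
    # Weighted least squares via an equivalent ordinary least-squares problem.
    theta, *_ = np.linalg.lstsq(np.sqrt(W) @ A, np.sqrt(W) @ b, rcond=None)
    a_coef, b_coef, _ = theta
    n = np.array([-a_coef, -b_coef, 1.0])            # normal of z = a*x + b*y + c
    return n / np.linalg.norm(n)
```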
5. The meta-universe-based naked eye 3D data communication method of claim 4, wherein constructing the illumination model comprises: letting the observer position in the illumination model be $O$, the illumination direction be $L$, the reflectivity be $\rho$, the ambient light intensity be $I_a$ and the light source intensity be $I_s$; calculating the diffuse reflection correction coefficient $F_d$, the specular reflection correction factor $F_s$ and the environmental correction factor $F_a$, respectively; and carrying out color correction on the color of each point in the three-dimensional point cloud data by using the following formula:
$$C_i'=C_i\cdot\left(F_d+F_s+F_a\right)$$
wherein $C_i'$ is the color of each point after the color correction and $C_i$ is the color of the point before correction.
6. The meta-universe based naked eye 3D data communication method according to claim 5, wherein the diffuse reflection correction coefficient is calculated using the following formula:
$$F_d=k_d\,I_s\,(n_i\cdot L)$$
wherein $k_d$ is the diffuse reflection coefficient; $I_s$ is the intensity of the light source; $L$ is the unit vector of the illumination direction; and $n_i$ is the normal vector of the point obtained in step 3.
7. The meta-universe based naked eye 3D data communication method according to claim 6, wherein the specular reflection correction factor is calculated using the formula:
$$F_s=k_s\,(R\cdot V)^{\alpha}$$
wherein $\alpha$ is the specular highlight index, a set value with a value range of 1 to 128; $R$ is the reflection vector of the illumination direction with respect to the normal vector; $V$ is the unit vector of the observer direction; and $k_s$ is the specular reflection coefficient;
the environmental correction factor is calculated using the following formula:
$$F_a=k_a\,I_a$$
wherein $k_a$ is the ambient reflection coefficient and $I_a$ is the ambient light intensity.
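Claims 5 to 7 describe correction factors in the style of the Phong illumination model. The sketch below implements that reading with the standard diffuse, specular and ambient terms; the coefficient values are illustrative, and the final combination, scaling each point's color by the summed correction factors, is an assumption, since the claim's own combining formula is not reproduced in the text above.

```python
import numpy as np

def phong_corrected_color(color, normal, light_dir, view_dir,
                          k_d=0.7, k_s=0.3, k_a=0.2,
                          I_s=1.0, I_a=0.3, alpha=32):
    """color: (3,) RGB of the point; normal, light_dir, view_dir: 3D vectors.
    All coefficients (k_d, k_s, k_a, I_s, I_a, alpha) are illustrative values."""
    n = normal / np.linalg.norm(normal)
    L = light_dir / np.linalg.norm(light_dir)
    V = view_dir / np.linalg.norm(view_dir)

    # Diffuse correction factor (claim 6 style): k_d * I_s * (n . L), clamped at 0.
    F_d = k_d * I_s * max(np.dot(n, L), 0.0)

    # Specular correction factor (claim 7 style): Phong lobe around the
    # reflection R of the light direction about the normal.
    R = 2.0 * np.dot(n, L) * n - L
    F_s = k_s * max(np.dot(R, V), 0.0) ** alpha

    # Ambient correction factor: k_a * I_a.
    F_a = k_a * I_a

    # Assumed combination: scale the original color by the summed factors.
    return np.clip(color * (F_d + F_s + F_a), 0.0, 1.0)
```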
8. The meta-universe-based naked eye 3D data communication method according to claim 7, wherein the method for synthesizing the texture image and the depth map into naked eye 3D data in step 7 comprises: letting the texture image be expressed as $T(x,y)$ and the depth map be expressed as $D(x,y)$, wherein $(x,y)$ are normalized plane coordinates; calculating the parallax map $d(x,y)$ using the following formula:
$$d(x,y)=\frac{1}{D(x,y)}$$
then calculating the corrected parallax map $d'(x,y)$ by the following formula:
wherein $i$ and $j$ are subscripts; $S(\cdot)$ is the cubic spline interpolation function; constructing a depth enhancement map $D_{\mathrm{enh}}(x,y)$; and generating the depth edge map $E(x,y)$ using the following formula:
Generating a depth edge enhancement map based on the depth enhancement map and the depth edge map
wherein $\oplus$ represents the morphological dilation operation; $B_s$ represents the dilation kernel at scale $s$; $C(\cdot)$ is a color space conversion function; $D_{\mathrm{enh}}(x,y)$ is the depth enhancement map, representing the enhanced depth information at position $(x,y)$; and $s$ is the scale level; finally, the naked eye 3D data are synthesized by the following formula:
wherein $R$ is the direction vector of the specular reflection and $\beta$ is the Phong index.
9. The meta-universe based naked eye 3D data communication method of claim 8, wherein the depth enhancement map is constructed using the following formula:
$$D_{\mathrm{enh}}(x,y)=\frac{1}{W}\sum_{i=-k}^{k}\sum_{j=-k}^{k} G(i,j)\,f\!\left(\left|D(x,y)-D(x+i,y+j)\right|\right)\,D(x+i,y+j)$$
wherein $W$ is the normalization coefficient, a set value; the double summation traverses a $(2k+1)\times(2k+1)$ window covering the pixels surrounding position $(x,y)$; $G(i,j)$ is the Gaussian filter kernel, with $i$ and $j$ the indices of the kernel; $f(\cdot)$ is the weight function of the depth difference between pixel $(x,y)$ and its surrounding pixel $(x+i,y+j)$: the larger the depth difference, the smaller the weight; $g(\cdot)$ is a standard Gaussian function; and $k$ is the parameter defining the window size.
CN202410559034.8A 2024-05-08 2024-05-08 Naked eye 3D data communication method Active CN118138741B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410559034.8A CN118138741B (en) 2024-05-08 2024-05-08 Naked eye 3D data communication method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410559034.8A CN118138741B (en) 2024-05-08 2024-05-08 Naked eye 3D data communication method

Publications (2)

Publication Number Publication Date
CN118138741A true CN118138741A (en) 2024-06-04
CN118138741B CN118138741B (en) 2024-07-09

Family

ID=91234156

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410559034.8A Active CN118138741B (en) 2024-05-08 2024-05-08 Naked eye 3D data communication method

Country Status (1)

Country Link
CN (1) CN118138741B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102568026A (en) * 2011-12-12 2012-07-11 浙江大学 Three-dimensional enhancing realizing method for multi-viewpoint free stereo display
US20200380765A1 (en) * 2017-11-23 2020-12-03 Interdigital Vc Holdings, Inc. Method, apparatus and stream for encoding/decoding volumetric video
WO2019229293A1 (en) * 2018-05-31 2019-12-05 Nokia Technologies Oy An apparatus, a method and a computer program for volumetric video
US20220217314A1 (en) * 2019-05-24 2022-07-07 Lg Electronics Inc. Method for transmitting 360 video, method for receiving 360 video, 360 video transmitting device, and 360 video receiving device
WO2021083174A1 (en) * 2019-10-28 2021-05-06 阿里巴巴集团控股有限公司 Virtual viewpoint image generation method, system, electronic device, and storage medium
CN113873264A (en) * 2021-10-25 2021-12-31 北京字节跳动网络技术有限公司 Method and device for displaying image, electronic equipment and storage medium
CN113989432A (en) * 2021-10-25 2022-01-28 北京字节跳动网络技术有限公司 3D image reconstruction method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LUO Wei; YANG Zhicheng: "Reconstruction Algorithm Based on Three-Dimensional Point Cloud", Modern Computer (Professional Edition), no. 02, 15 January 2017 (2017-01-15) *

Also Published As

Publication number Publication date
CN118138741B (en) 2024-07-09

Similar Documents

Publication Publication Date Title
CN111656407B (en) Fusing, texturing and rendering views of a dynamic three-dimensional model
US10839591B2 (en) Stereoscopic rendering using raymarching and a virtual view broadcaster for such rendering
US10474227B2 (en) Generation of virtual reality with 6 degrees of freedom from limited viewer data
US20180033209A1 (en) Stereo image generation and interactive playback
US6205241B1 (en) Compression of stereoscopic images
US7463269B2 (en) Texture data compression and rendering in 3D computer graphics
US20130071012A1 (en) Image providing device, image providing method, and image providing program for providing past-experience images
EP3276578A1 (en) Method for depicting an object
JP6778163B2 (en) Video synthesizer, program and method for synthesizing viewpoint video by projecting object information onto multiple surfaces
KR101885090B1 (en) Image processing apparatus, apparatus and method for lighting processing
US8094148B2 (en) Texture processing apparatus, method and program
US20180329602A1 (en) Vantage generation and interactive playback
CN112233165A (en) Baseline extension implementation method based on multi-plane image learning view synthesis
JP7344988B2 (en) Methods, apparatus, and computer program products for volumetric video encoding and decoding
CN111612878A (en) Method and device for making static photo into three-dimensional effect video
Lochmann et al. Real-time Reflective and Refractive Novel-view Synthesis.
US20220084300A1 (en) Image processing apparatus and image processing method
KR20210147626A (en) Apparatus and method for synthesizing 3d face image using competitive learning
US20220222842A1 (en) Image reconstruction for virtual 3d
Fachada et al. Chapter View Synthesis Tool for VR Immersive Video
CN117557721A (en) Method, system, equipment and medium for reconstructing detail three-dimensional face of single image
CN118138741B (en) Naked eye 3D data communication method
Jin et al. From capture to display: A survey on volumetric video
CN114998514A (en) Virtual role generation method and equipment
Cho et al. Depth image processing technique for representing human actors in 3DTV using single depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant