CN113242384B - Panoramic video display method and display equipment - Google Patents

Panoramic video display method and display equipment

Info

Publication number
CN113242384B
Authority
CN
China
Prior art keywords
definition video
definition
target
panoramic video
viewpoint position
Legal status
Active
Application number
CN202110501270.0A
Other languages
Chinese (zh)
Other versions
CN113242384A (en)
Inventor
任子健 (Ren Zijian)
刘帅 (Liu Shuai)
Current Assignee
Juhaokan Technology Co Ltd
Original Assignee
Juhaokan Technology Co Ltd
Application filed by Juhaokan Technology Co Ltd
Priority to CN202110501270.0A
Publication of CN113242384A
Application granted
Publication of CN113242384B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/174 Segmentation; Edge detection involving the use of two or more images
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/268 Signal distribution or switching

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The application relates to the technical field of panoramic video, and provides a panoramic video display method and a display device. Specifically, a target panoramic video and configuration information of the target panoramic video are obtained in response to a panoramic video playing request. For each target panoramic video frame in the target panoramic video, the identifier of at least one target high-definition video block is determined from the identifiers of the high-definition video blocks in the frame corresponding to the first viewpoint position and to the second viewpoint position, respectively. The at least one target high-definition video block is then obtained according to the configuration information and its identifier, a pre-created spherical grid is rendered in combination with a low-definition panoramic video frame, and the rendered panoramic video frame is obtained and displayed.

Description

Panoramic video display method and display equipment
Technical Field
The present application relates to the field of panoramic video technologies, and in particular, to a panoramic video display method and a display device.
Background
Panoramic video is a new multimedia form developed from 360-degree panoramic images: a series of static panoramic images is played continuously to form a dynamic panoramic video. A panoramic video is generally produced by stitching, in software, video images captured in all directions by a panoramic camera; it is played by a dedicated player that projects the planar video into a 360-degree panoramic mode and presents it to the viewer as a fully enclosed field of view spanning 360 degrees horizontally and 180 degrees vertically. The viewer can control playback through head motion, eye movement, a remote controller, and the like, giving the experience of being personally on the scene. As a new heterogeneous multimedia service, a panoramic video service stream contains multiple data types such as audio, video, text, interaction, and control commands, and has diversified Quality of Service (QoS) requirements.
Compared with traditional video, panoramic video is characterized by high resolution, large data volume, and high bitrate, and therefore places heavy demands on network bandwidth. In recent years panoramic video has kept moving toward higher resolutions, from 4K to 8K and even 12K and 16K, a trend that existing network conditions can hardly sustain. To reduce the bandwidth requirement of panoramic video transmission, reduce data redundancy, and raise the supportable video resolution, a Field of View (FOV) transmission scheme is often adopted.
An FOV-based panoramic video transmission scheme divides each high-resolution panoramic video frame into video blocks by region and plays the high-resolution panoramic video by selectively displaying the blocks that fall within the field of view. However, the visual range of human eyes is limited, and images at the periphery of the field angle are not seen clearly; displaying the whole field of view with high-definition video blocks therefore wastes resources and increases network bandwidth pressure.
Disclosure of Invention
The application provides a panoramic video display method and display equipment, which are used for reducing the data volume of panoramic video transmission and reducing the bandwidth pressure.
In a first aspect, an embodiment of the present application provides a display device for displaying a panoramic video, including:
a display, coupled to the graphics processor, configured to display the panoramic video;
a memory coupled to the graphics processor and configured to store computer instructions;
the graphics processor configured to perform the following operations in accordance with the computer instructions:
responding to a panoramic video playing request, acquiring a target panoramic video and configuration information of the target panoramic video, wherein the configuration information comprises address information of high-definition video blocks contained in each target panoramic video frame in the target panoramic video;
acquiring a first viewpoint position and a second viewpoint position for each target panoramic video frame in the target panoramic video, and determining an identifier of at least one target high-definition video block according to the identifier of each first high-definition video block in the target panoramic video frame corresponding to the first viewpoint position and the identifier of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position;
acquiring at least one target high-definition video block according to the address information of the high-definition video block contained in the target panoramic video frame and the identification of the at least one target high-definition video block;
and rendering a pre-created spherical grid according to the acquired at least one target high-definition video block and the low-definition panoramic video frame to obtain and display a rendered panoramic video frame, wherein the low-definition panoramic video frame is obtained by down-sampling the target panoramic video frame.
In a second aspect, an embodiment of the present application provides a panoramic video display method, including:
responding to a panoramic video playing request, acquiring a target panoramic video and configuration information of the target panoramic video, wherein the configuration information comprises address information of high-definition video blocks contained in each target panoramic video frame in the target panoramic video;
acquiring a first viewpoint position and a second viewpoint position for each target panoramic video frame in the target panoramic video, and determining an identifier of at least one target high-definition video block according to the identifier of each first high-definition video block in the target panoramic video frame corresponding to the first viewpoint position and the identifier of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position;
acquiring at least one target high-definition video block according to the address information of the high-definition video block contained in the target panoramic video frame and the identification of the at least one target high-definition video block;
and rendering a pre-created spherical grid according to the acquired at least one target high-definition video block and the low-definition panoramic video frame to obtain and display a rendered panoramic video frame, wherein the low-definition panoramic video frame is obtained by down-sampling the target panoramic video frame.
In the above embodiments of the application, a target panoramic video selected by a user is obtained interactively. For each target panoramic video frame in the target panoramic video, the identifier of each first high-definition video block in the frame corresponding to a first viewpoint position (the viewpoint position of the display device) and the identifier of each second high-definition video block in the frame corresponding to a second viewpoint position (the viewpoint position of the user) are obtained; the identifier of at least one target high-definition video block is determined from these identifiers, and the at least one target high-definition video block is acquired in combination with the configuration information of the target panoramic video frame. Because the target high-definition video blocks are determined jointly by the high-definition video blocks corresponding to the viewpoint position of the display device and those corresponding to the viewpoint position of the user, their number is smaller than in a conventional FOV scheme, where the blocks are determined by the viewpoint position of the display device alone; the network bandwidth occupied by the video blocks is therefore reduced. During rendering, the high-definition region given by the at least one target high-definition video block is rendered and displayed together with the low-definition panoramic video obtained by down-sampling. Since the target high-definition video blocks take the user's viewpoint position into account, the region the eyeballs look at remains high-definition as the eyes rotate, which improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 illustrates a block diagram of a VR head mounted display device 100 provided by an embodiment of the present application;
fig. 2 schematically illustrates a visual range diagram of a field angle of a human eye provided by an embodiment of the present application;
fig. 3 is a plan view schematically illustrating a field angle of a VR device and a field angle of human eyes provided by an embodiment of the present application;
fig. 4 schematically illustrates a dividing manner of a high-definition panoramic video provided by an embodiment of the present application;
FIG. 5 illustrates a created spherical mesh provided by embodiments of the present application;
fig. 6 is a schematic diagram illustrating a division manner of a visible region provided by an embodiment of the present application;
fig. 7 is a flowchart illustrating a panoramic video display method provided by an embodiment of the present application;
fig. 8 is a schematic diagram schematically illustrating a target high definition video blocking determination manner provided by an embodiment of the present application;
fig. 9 is a flowchart illustrating a complete process of displaying a panoramic video according to an embodiment of the present application;
fig. 10 is a functional diagram illustrating a structure of a display device provided in an embodiment of the present application;
fig. 11 is a diagram illustrating an example of a hardware structure of a display device according to an embodiment of the present application.
Detailed Description
To make the objects, embodiments, and advantages of the present application clearer, exemplary embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is to be understood that the described exemplary embodiments are only a part, and not all, of the embodiments of the present application.
All other embodiments that a person skilled in the art can derive from the exemplary embodiments described herein without inventive effort are intended to fall within the scope of the appended claims. In addition, while the disclosure herein is presented in terms of one or more exemplary examples, it should be appreciated that individual aspects of the disclosure may also constitute a complete embodiment on their own.
It should be noted that the brief descriptions of the terms in the present application are only for the convenience of understanding the embodiments described below, and are not intended to limit the embodiments of the present application. These terms should be understood in their ordinary and customary meaning unless otherwise indicated.
The terms "first," "second," and the like in the description and claims of this application and in the above-described drawings are used for distinguishing between similar or analogous objects or entities and are not necessarily intended to limit the order or sequence Unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein.
Furthermore, the terms "comprises" and "comprising," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or device that comprises a list of elements is not necessarily limited to those elements explicitly listed, but may include other elements not expressly listed or inherent to such product or device.
The term "module," as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and/or software code that is capable of performing the functionality associated with that element.
The embodiment of the application provides a panoramic video display method and display equipment. The display device may be a Virtual Reality (VR) head-mounted display device, a smart phone, a tablet computer, a notebook computer, a smart television, and other devices having a panoramic video playing function and an interactive function.
Taking a VR head-mounted display device as an example, fig. 1 exemplarily shows a structure diagram of a VR head-mounted display device 100 provided in an embodiment of the present application. As shown in fig. 1, the VR head mounted display device 100 includes a lens group 101 and a display screen 102 disposed directly in front of the lens group 101, where the lens group 101 is composed of a left display lens 101_1 and a right display lens 101_2. When a user wears the VR head-mounted display device 100, human eyes can watch panoramic video frames displayed by the display screen 102 through the lens group 101, and experience VR effects.
The FOV transmission scheme transmits panoramic video differentially based on the viewing angle, focusing on high-quality transmission of the picture within the current field-of-view region. Typically, the high-resolution panoramic video is divided spatially into blocks and encoded at multiple bitrates to generate a plurality of video streams; the display device requests the video streams of the corresponding blocks according to the user's viewpoint position, decodes them to obtain the blocks, combines the blocks, and finally presents the result to the user.
Generally, the horizontal field angle of a single human eye can reach 156 degrees at most, and the combined horizontal field angle of both eyes can reach 188 degrees at most. Referring to fig. 2, the visual field of the human eye can be divided into a central region (0-5 degrees), a paracentral region (5-10 degrees), a near peripheral region (10-30 degrees), a middle peripheral region (30-60 degrees), and a far peripheral region (greater than 60 degrees). The comfortable visual field of a single eye is about 60 degrees, indicated by the thick dotted line in fig. 2; objects within this 60-degree range can be seen clearly by the user. The part beyond 60 degrees belongs to the insensitive range of the human eye (the far peripheral region), commonly described as seeing out of the corner of the eye, and content in that range cannot be seen clearly.
Taking the display device as a VR head-mounted display device as an example, the size of the field angle (denoted as FOV 1) of the VR device is determined by the hardware structure of the VR device, and the three-dimensional scene content within the field angle can be displayed on the display screen of the VR device. The ray from a virtual camera (equivalent to eyes) in the VR device to the center of the display screen is the current sight line (namely, the central ray of the FOV 1) of the VR device, and when the head of a user rotates, the sight line of the VR device changes, and the content of a three-dimensional scene displayed in the field angle of the VR device also changes. The human eye has a comfortable viewing range, the field angle corresponding to the comfortable viewing range is recorded as FOV2, the size of the FOV2 can be set according to the actual situation, and the central ray of the FOV2 is the current sight line of the human eye. As the eyeballs of the user can rotate independently of the head of the user, the current sight line of the eyes of the user is changed, and the three-dimensional scene content in the visual field range comfortable for the eyes of the user is also changed.
The relationship of the field angle of the VR device to the field angle of the human eye is shown in fig. 3. As shown in fig. 3, which is a schematic plan view of FOV1 and FOV2, generally, the field angle FOV1 of the VR device is greater than 90 degrees, and is greater than the field angle FOV2 corresponding to a comfortable view range of a human eye, and is distinguished from FOV1, where a dotted line in fig. 3 indicates that a central ray of FOV1 is a current sight line of the VR device, a central ray of FOV2 is a current sight line of the human eye, an intersection point of the current sight line of the VR device and a panoramic video rendering carrier is a viewpoint position P1 (hereinafter also referred to as a first viewpoint position) of the VR device, and an intersection point of the current sight line of the human eye and the panoramic video rendering carrier is a viewpoint position P2 (hereinafter also referred to as a second viewpoint position) of the human eye (user). The panoramic video rendering carrier is a spherical grid created in advance by a rendering engine, and the spherical grid can be used as a display screen to display the panoramic video.
Because the field angle of the display device is larger than what human eyes can actually see, and the peripheral part of the display device's field angle cannot be seen clearly, displaying the entire field angle of the display device with high-definition video blocks, as the traditional FOV transmission scheme does, wastes resources. If instead the high-definition video blocks are displayed only within the eye-comfort range, then, because the human eyes rotate with the head and can also rotate independently of it, a low-definition image region outside the field angle of the display device may fall within the eye-comfort range; the resolution of the image displayed there is low, the user cannot see it clearly, and the viewing experience is affected.
The embodiments of the application provide a panoramic video display method and a display device that jointly consider the field angle of the display device and the visual range of the human-eye field angle: the intersection of the high-definition video blocks corresponding to the user viewpoint position (determined by the human-eye field angle) and those corresponding to the display-device viewpoint position (determined by the display device's field angle) is determined as the at least one target high-definition video block, which is rendered and displayed in combination with a low-definition panoramic video frame. The method reduces the bandwidth occupied by high-definition video blocks in the FOV transmission scheme, dynamically changes the high-definition display area as the eyes rotate, ensures that the eyes see high-definition images, and improves the user experience.
The terms used in the present application are explained for the sake of clarity in describing the embodiments of the present application.
In a three-dimensional rendering pipeline, geometric vertices are assembled into primitives, which include points, line segments, and polygons. After a primitive is rasterized, a sequence of fragments is output. A fragment is not a true pixel but a collection of states used to calculate the final color of each pixel. These states include, but are not limited to, the screen coordinates of the fragment, depth information, and other vertex information output from the geometry stage, such as the normal and texture coordinates.
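For illustration only (the patent does not define any data structure for this), the fragment state described above can be pictured as a small record; all field names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    """Per-fragment state output by rasterization (illustrative sketch only)."""
    screen_xy: tuple[float, float]        # screen coordinates of the fragment
    depth: float                          # depth information
    normal: tuple[float, float, float]    # interpolated vertex normal
    uv: tuple[float, float]               # interpolated texture coordinates
```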
In an embodiment of the application, the high definition and the low definition are relative to the resolution of the panoramic video, and the resolution of the high definition panoramic video is higher than that of the low definition panoramic video.
In the FOV transmission scheme for panoramic video, the high-definition panoramic video needs to be divided into a plurality of video blocks. The embodiments of the present application are described taking as an example each high-definition panoramic video frame divided into 32 video blocks; fig. 4 shows the division schematically. Each high-definition video block corresponds to a unique block identifier, and the spatial range of each block is recorded, which can be represented by longitude and latitude coordinates. The low-definition panoramic video is obtained by down-sampling the high-definition panoramic video.
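As a hedged illustration of this bookkeeping (an 8 x 4 grid is assumed, since the patent only states that there are 32 blocks, and the dict layout is not from the patent), the block table can be built as follows:

```python
def build_block_table(cols: int = 8, rows: int = 4) -> list[dict]:
    """Record a unique identifier and a longitude/latitude span per block."""
    blocks = []
    for r in range(rows):
        for c in range(cols):
            blocks.append({
                "id": r * cols + c,  # unique block identifier, 0..31
                # longitude span in degrees, covering [-180, 180]
                "lon": (-180 + c * 360 / cols, -180 + (c + 1) * 360 / cols),
                # latitude span in degrees, covering [-90, 90], top row first
                "lat": (90 - (r + 1) * 180 / rows, 90 - r * 180 / rows),
            })
    return blocks
```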
It should be noted that the method in the embodiment of the present application may be applicable to displaying a local panoramic video, and may also be applicable to displaying an online (including two modes, i.e., an on-demand mode and a live broadcast mode) panoramic video, where in the on-demand mode, the blocking operation and the downsampling operation are performed as a preprocessing process, and in the live broadcast mode, the blocking operation and the downsampling operation are performed as a real-time processing process.
In the embodiment of the application, when a video playing program of a display device is started, a spherical grid serving as a rendering carrier is created according to a partition rule of high-definition video blocks, as shown in fig. 5, the number of sub-grids of the spherical grid is equal to the number of high-definition video blocks, the longitude and latitude spans are the same, and each sub-grid corresponds to one high-definition video block.
In the embodiment of the application, the spherical mesh serves as the rendering carrier of the panoramic video and is used to display it. As can be seen from fig. 3, the intersection point of the spherical grid with the central ray of the display device's field angle is the display device viewpoint position, referred to as the first viewpoint position, and the intersection point of the spherical grid with the central ray of the human-eye field angle is the user (human eye) viewpoint position, referred to as the second viewpoint position. The high-definition panoramic video is divided in advance into a plurality of sub-regions, the identifiers of the high-definition video blocks corresponding to each sub-region are determined, and viewpoint mapping data are generated for the display device (recorded as first viewpoint mapping data) and for the human eyes (recorded as second viewpoint mapping data), respectively. A sub-region is the region range on the panoramic video frame to which a given viewpoint belongs.
Referring to fig. 6, in the embodiment of the present application, for example, a high-definition panoramic video is divided into 16 sub-areas, each sub-area corresponds to a unique area identifier, and a longitude and latitude range of each sub-area is recorded.
For ease of distinction, the region range corresponding to the first viewpoint position on the panoramic video frame is recorded as the first visual region, and the region range corresponding to the second viewpoint position is recorded as the second visual region. The first and second visual regions are divided in the same way as the high-definition panoramic video and each comprises a plurality of sub-regions; the sub-region of the first visual region in which the first viewpoint position is located is recorded as the first sub-region, and the sub-region of the second visual region in which the second viewpoint position is located is recorded as the second sub-region.
Further, using the correspondence between the sub-grids of the spherical grid and the high-definition video blocks, first viewpoint mapping data are generated, according to the first viewpoint position, for each sub-region of the first visual region in which the first viewpoint position is located; the first viewpoint mapping data contain the identifiers of the high-definition video blocks corresponding to each such sub-region. Likewise, second viewpoint mapping data are generated, according to the second viewpoint position, for each sub-region of the second visual region in which the second viewpoint position is located; the second viewpoint mapping data contain the identifiers of the high-definition video blocks corresponding to each such sub-region. The first and second viewpoint mapping data may be stored locally on the display device or on the server.
The embodiment of the application does not restrict the manner of determining the identifiers of the high-definition video blocks corresponding to a sub-region. Take the determination of the identifiers of the high-definition video blocks corresponding to an arbitrary sub-region i in the first visual region as an example:
for example, assuming that the central point of a sub-area i in the first visual area is a first viewpoint position, determining the visual range of the display device according to the size of the FOV1, and determining the identifier of each high definition video partition included in the first visual range of the display device according to the recorded spatial range of each high definition video partition, if the spatial range of a high definition video partition is within the first visual range, or there is an overlapping area with the first visual range, determining that the high definition video partition is within the first visual range, and determining the identifier of each high definition video partition included in the first visual range of the display device as the identifier of the high definition video partition corresponding to the sub-area i.
For another example, the four vertexes of sub-region i in the first visual region (top-left, bottom-left, top-right, and bottom-right) are each taken as a first viewpoint position; the first visual range at each of the four vertexes is determined according to the size of FOV1, the identifiers of the high-definition video blocks contained in each of the four first visual ranges are determined from the recorded spatial ranges of the blocks, and the union of these identifiers is determined as the identifiers of the high-definition video blocks corresponding to sub-region i. The blocks contained in a first visual range are determined in the same way as in the previous example, and the details are not repeated here.
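A minimal sketch of this second example follows (illustrative only; `visible_block_ids` stands in for the visibility test described above, a version of which is sketched further below, and the sub-region dict layout is an assumption):

```python
from typing import Callable

def mapping_entry_for_subregion(
    subregion: dict,
    visible_block_ids: Callable[[tuple[float, float]], set[int]],
) -> set[int]:
    """Union of the block identifiers visible when each of the four
    vertexes of the sub-region is taken as the first viewpoint position."""
    lon0, lon1 = subregion["lon"]
    lat0, lat1 = subregion["lat"]
    vertexes = [(lon0, lat1), (lon1, lat1), (lon0, lat0), (lon1, lat0)]
    ids: set[int] = set()
    for vertex in vertexes:
        ids |= visible_block_ids(vertex)  # blocks within FOV1 at this vertex
    return ids
```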
Similarly, the determination manner of the identifier of the high-definition video block corresponding to each sub-region in the second visual region where the second viewpoint position is located is the same as the determination manner of the identifier of the high-definition video block corresponding to each sub-region in the first visual region where the first viewpoint position is located, and is not repeated here.
The first sub-region in which the first viewpoint position is located and the second sub-region in which the second viewpoint position is located may be the same or different. Because the field angles used for the first viewpoint and the second viewpoint differ, the identifiers of the high-definition video blocks corresponding to the first sub-region and those corresponding to the second sub-region also differ.
It should be noted that, in the embodiment of the present application, there is no limitation on a storage manner of the identifier of the high definition video partition corresponding to each sub-region, for example, the identifier of the high definition video partition corresponding to each sub-region may be stored in a list manner, or may be stored in a set or array manner.
Fig. 7 is a flowchart of the panoramic video display method provided by an embodiment of the present application. The process may be implemented in software, or in a combination of software and hardware. As shown in the figure, the process is executed by the display device and mainly includes the following steps:
s701: and responding to the panoramic video playing request, and acquiring the target panoramic video and the configuration information of the target panoramic video.
In this step, the user interacts with the display device: the user selects the target panoramic video to be played through a function key or the display screen and triggers a panoramic video playing request carrying identification information of the selected video (such as its URL, ID number, or name). The display device sends a panoramic video acquisition request to the server according to this identification information, and the server returns the target panoramic video and its configuration information, which contains the address information of the high-definition video blocks contained in each target panoramic video frame of the target panoramic video.
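The shape of such configuration information might look as follows (a sketch only; the field names and JSON-like layout are assumptions, not the patent's actual format, and the URLs are placeholders):

```python
config_info = {
    "video_id": "example-panorama",  # hypothetical identifier of the video
    "frames": [
        {
            "frame_index": 0,
            # block identifier -> address of that high-definition video block
            "blocks": {
                0: "https://example.com/pano/frame0/block0.bin",
                1: "https://example.com/pano/frame0/block1.bin",
            },
        },
    ],
}
```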
To improve the loading efficiency of the target panoramic video, the display device may first attempt to obtain the target panoramic video and its configuration information locally, and obtain them from the server only if they are not available locally.
S702: the method comprises the steps of acquiring a first viewpoint position and a second viewpoint position aiming at each target panoramic video frame in a target panoramic video, and determining the identification of at least one target high-definition video block according to the identification of each first high-definition video block in the target panoramic video frame corresponding to the first viewpoint position and the identification of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position.
In this step, the display device is provided with a device gaze tracking apparatus and an eyeball tracking apparatus; the former acquires the gaze direction of the display device and the latter acquires the gaze direction of the human eyes. The display device determines the longitude and latitude coordinates of the first projection point of the device's gaze direction on the pre-created spherical grid as the first viewpoint position, and the longitude and latitude coordinates of the second projection point of the human eyes' gaze direction on the spherical grid as the second viewpoint position.
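Assuming the gaze direction is given as a unit vector from the center of the spherical grid (the axis conventions below are assumptions, not from the patent), projecting it onto the sphere reduces to converting the direction into longitude and latitude:

```python
import math

def gaze_to_viewpoint(direction: tuple[float, float, float]) -> tuple[float, float]:
    """Longitude/latitude (degrees) of the gaze ray's projection point,
    assuming y points up and -z points forward."""
    x, y, z = direction
    lon = math.degrees(math.atan2(x, -z))
    lat = math.degrees(math.asin(max(-1.0, min(1.0, y))))
    return lon, lat
```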
It should be noted that the embodiment of the present application does not limit the device gaze tracking apparatus; for example, it may be a gyroscope. The Software Development Kit (SDK) of the display device acquires gaze data from the gyroscope to track the gaze direction of the display device.
After the first viewpoint position and the second viewpoint position are determined, further, the identifier of each first high-definition video block in the target panoramic video frame corresponding to the first viewpoint position is determined, and the identifier of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position is determined.
In a specific implementation, the first visual range of the display device is determined from the first viewpoint position of the device's gaze direction on the spherical grid together with the size and direction of the device's field angle (horizontal and vertical); whether each high-definition video block lies within the first visual range is determined from the pre-recorded spatial range of each block, and the identifiers of the blocks within the first visual range are determined as the identifiers of the first high-definition video blocks in the target panoramic video frame corresponding to the first viewpoint position. Likewise, the second visual range of the human eyes is determined from the second viewpoint position of the eyes' gaze direction on the spherical grid together with the size and direction of the human-eye field angle (horizontal and vertical); whether each high-definition video block lies within the second visual range is determined from the pre-recorded spatial ranges, and the identifiers of the blocks within the second visual range are determined as the identifiers of the second high-definition video blocks in the target panoramic video frame corresponding to the second viewpoint position.
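A hedged sketch of this visibility test (rectangular longitude/latitude ranges are assumed, and longitude wrap-around at +/-180 degrees is ignored for brevity):

```python
def visible_block_ids(viewpoint: tuple[float, float],
                      fov: tuple[float, float],
                      blocks: list[dict]) -> set[int]:
    """Identifiers of the blocks inside, or overlapping, the visual range."""
    lon, lat = viewpoint
    h, v = fov  # horizontal and vertical field angle in degrees
    lon_min, lon_max = lon - h / 2, lon + h / 2
    lat_min, lat_max = lat - v / 2, lat + v / 2
    ids = set()
    for b in blocks:
        if (b["lon"][0] < lon_max and b["lon"][1] > lon_min
                and b["lat"][0] < lat_max and b["lat"][1] > lat_min):
            ids.add(b["id"])
    return ids
```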
In order to improve the efficiency of data acquisition, when S702 is executed, the identifier of each first high definition video partition in the target panoramic video frame corresponding to the first viewpoint position may be directly acquired from the pre-generated first viewpoint mapping data, and the identifier of each second high definition video partition in the target panoramic video frame corresponding to the second viewpoint position may be directly acquired from the pre-generated second viewpoint mapping data.
In specific implementation, according to longitude and latitude coordinates of a first viewpoint position and longitude and latitude space ranges of all sub-areas in a first visual area, a first sub-area where the first viewpoint position is located is determined, identifiers of all high-definition video blocks corresponding to the first sub-area are obtained from pre-generated first viewpoint mapping data, and the identifiers of all high-definition video blocks corresponding to the first sub-area are determined as the identifiers of all first high-definition video blocks in a target panoramic video frame corresponding to the first viewpoint position; and determining a second subarea where the second viewpoint position is located according to the longitude and latitude coordinates of the second viewpoint position and the longitude and latitude spatial range of each subarea in the second visual area, acquiring the identification of each high-definition video block corresponding to the second subarea from the pre-generated second viewpoint mapping data, and determining the identification of each high-definition video block corresponding to the second subarea as the identification of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position.
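The lookup itself can be sketched as follows (illustrative; the mapping data are modeled as a plain dict keyed by sub-region identifier, which is only one of the storage layouts the embodiments allow):

```python
def lookup_block_ids(viewpoint: tuple[float, float],
                     subregions: list[dict],
                     mapping_data: dict[int, set[int]]) -> set[int]:
    """Find the sub-region containing the viewpoint and return its
    pre-generated set of high-definition video block identifiers."""
    lon, lat = viewpoint
    for sr in subregions:
        if (sr["lon"][0] <= lon < sr["lon"][1]
                and sr["lat"][0] <= lat < sr["lat"][1]):
            return mapping_data[sr["id"]]
    return set()  # viewpoint outside every recorded sub-region
```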
The manner of generating the first view mapping data and the second view mapping data is referred to the foregoing embodiments, and will not be repeated here.
Further, in S702, an identifier of at least one target high definition video partition is determined according to the identifier of each first high definition video partition in the target panoramic video frame corresponding to the first viewpoint position and the identifier of each second high definition video partition in the target panoramic video frame corresponding to the second viewpoint position.
Since the second viewpoint position may lie in an edge area of the display device's first visual range, some of the high-definition video blocks identified for the second viewpoint position may fall outside the display range of the display screen. Requesting and loading all of the blocks identified for the second viewpoint position would therefore waste data and increase network transmission pressure. Accordingly, only the high-definition video blocks common to the first and second visual ranges need to be loaded.
In a specific implementation, the identifiers appearing both among the identifiers of the first high-definition video blocks in the target panoramic video frame corresponding to the first viewpoint position and among the identifiers of the second high-definition video blocks corresponding to the second viewpoint position are determined as the identifiers of the at least one target high-definition video block. The target high-definition video blocks are thus the intersection of the high-definition video blocks contained in the target panoramic video frame within the visual range determined by the human-eye field angle and within the visual range determined by the field angle of the display device.
For example, fig. 8 is a schematic diagram of the way target high-definition video blocks are determined. The high-definition video blocks in the target panoramic video frame corresponding to the first viewpoint position are identified by {10, 11, 12, 18, 19, 20, 26, 27, 28} and filled with solid oblique lines in fig. 8; the high-definition video blocks corresponding to the second viewpoint position are identified by {3, 4, 11, 12} and filled with dotted lines; the at least one target high-definition video block is then identified by {11, 12}, circled with a thick solid line in fig. 8.
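Expressed as code, the example of fig. 8 is a plain set intersection:

```python
first_ids = {10, 11, 12, 18, 19, 20, 26, 27, 28}  # display device viewpoint
second_ids = {3, 4, 11, 12}                       # human eye viewpoint
target_ids = first_ids & second_ids               # {11, 12}
```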
S703: and acquiring at least one target high-definition video block according to the address information of the high-definition video block contained in the target panoramic video frame and the identification of the at least one target high-definition video block.
In the step, the display device sends a block acquisition request to the server according to the address information of the high-definition video blocks contained in the target panoramic video frame in the configuration information and the identification of at least one target high-definition video block, and the server returns the target high-definition video blocks corresponding to the identification of the at least one target high-definition video block to the display device after receiving the block acquisition request.
S704: and rendering a pre-created spherical grid according to the obtained at least one target high-definition video block and the low-definition panoramic video frame to obtain and display the rendered panoramic video frame.
In the step, the low-definition panoramic video frame is obtained by down-sampling the target panoramic video frame, wherein the low-definition panoramic video frame can be obtained by down-sampling the target panoramic video frame in real time, or a low-definition panoramic video of each high-definition panoramic video can be generated in advance, and when the target panoramic video is obtained, the low-definition panoramic video corresponding to the target panoramic video is obtained at the same time.
In the embodiment of the application, after the spherical mesh is created, the vertices of each mesh in the spherical mesh are rasterized to generate each fragment, and the global UV coordinate of each fragment is determined. The embodiment of the present application does not impose any limiting requirement on the determination manner of the global UV coordinate of each fragment.
For example, after each fragment is generated by rasterization, the three-dimensional coordinates of each fragment on the spherical grid are converted into longitude and latitude coordinates, since the longitude and latitude coordinates and the UV coordinates of the image are both two-dimensional plane coordinates, a mapping relation between the longitude and latitude coordinates and the UV coordinates can be established, and the longitude and latitude coordinates of each fragment are converted into global UV coordinates according to the mapping relation.
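One possible form of this mapping relation for an equirectangular frame (the patent does not fix a formula, so the conventions below are assumptions):

```python
def latlon_to_global_uv(lon: float, lat: float) -> tuple[float, float]:
    """Map longitude [-180, 180] to u in [0, 1] and
    latitude [-90, 90] to v in [0, 1]."""
    return (lon + 180.0) / 360.0, (lat + 90.0) / 180.0
```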
For another example, in the process of creating the spherical mesh, the UV coordinates of each fragment are obtained by interpolation according to the UV coordinates of the vertex of each sub-mesh in the spherical mesh.
In S704, for each fragment generated by spherical-grid rasterization, whether the fragment lies within a target high-definition video block is determined from its UV coordinate and the pre-recorded spatial range of each high-definition video block. If it does, the fragment's global UV coordinate is converted into a local UV coordinate with the target high-definition video block as the coordinate system, and the fragment's color value is obtained from that block according to the local UV coordinate; otherwise, the fragment's color value is obtained from the low-definition panoramic video frame according to its global UV coordinate. The spherical grid is then rendered according to the color value of each fragment, and the rendered panoramic video frame is obtained and displayed. The local UV coordinate of each fragment is converted from its global UV coordinate.
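The per-fragment sampling rule can be sketched as below (an illustration under assumptions: textures are modeled as nested lists, `sample` is a stand-in nearest-pixel fetch, and each block record carries its global-UV extent derived from its recorded longitude/latitude span):

```python
def sample(texture, u: float, v: float):
    """Stand-in texture fetch: nearest pixel of a 2-D array."""
    h, w = len(texture), len(texture[0])
    return texture[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

def fragment_color(global_uv: tuple[float, float],
                   target_blocks: list[dict], low_def_frame):
    u, v = global_uv
    for block in target_blocks:
        (u0, u1), (v0, v1) = block["uv"]
        if u0 <= u < u1 and v0 <= v < v1:
            # Inside a target high-definition block: convert the global UV
            # into a local UV with the block as the coordinate system.
            local_u = (u - u0) / (u1 - u0)
            local_v = (v - v0) / (v1 - v0)
            return sample(block["texture"], local_u, local_v)
    # Not covered by any target block: fall back to the low-definition frame.
    return sample(low_def_frame, u, v)
```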
In the above embodiments of the application, the gaze direction of the device is obtained by the device gaze tracking apparatus and the gaze direction of the human eyes by the eyeball tracking apparatus. The first viewpoint position and the identifiers of the high-definition video blocks in the target panoramic video frame corresponding to it are determined, as are the identifiers of the blocks corresponding to the second viewpoint position; the identifiers common to both are determined as the identifiers of the at least one target high-definition video block, which is then acquired. Here the first viewpoint position is the first projection point on the spherical grid of the central ray of the display device's field angle, and the second viewpoint position is the second projection point of the central ray of the human-eye field angle. Because the human-eye field angle is smaller than that of the display device, the number of video blocks in the visual range corresponding to the human-eye field angle is smaller than in the visual range corresponding to the device's field angle, so fewer high-definition video blocks are requested and the bandwidth occupied by the FOV transmission scheme is reduced. Further, during rendering the spherical grid is rendered from the at least one target high-definition video block and the low-definition panoramic video frame, and the rendered panoramic video frame is obtained and displayed. Because the eyes and the display device rotate independently, and the at least one target high-definition video block lies in the visual range determined jointly by the field angle of the display device and that of the human eyes, the corresponding high-definition video blocks can be loaded dynamically as the eyes rotate, ensuring that the region the eyes look at is displayed in high definition while other regions are displayed in low definition, which improves the user experience.
In this embodiment of the present application, taking a VR head-mounted display device as an example, fig. 9 exemplarily shows a complete flow chart for displaying a panoramic video provided in this embodiment of the present application, and as shown in fig. 9, the flow chart mainly includes the following steps:
s901: and starting a panoramic video playing program by the VR equipment to create the spherical grid.
In the step, a user starts a panoramic video playing program of VR equipment through a power-on key, and a spherical grid is created, wherein the spherical grid is used as a rendering carrier of the panoramic video and comprises a plurality of sub-grids, and each sub-grid corresponds to a high-definition video block. The description of the spherical mesh refers to the foregoing embodiments and will not be repeated here.
S902: and the VR equipment responds to the panoramic video playing request and acquires the target panoramic video and the configuration information of the target panoramic video.
The detailed description of this step is referred to S701 and will not be repeated here.
S903: aiming at a target panoramic video frame i in a target panoramic video, the VR equipment obtains the sight direction of the VR equipment through the equipment sight tracking device, and a first viewpoint position is determined according to the sight direction of the VR equipment.
In this step, i is an integer greater than 0 and no greater than N (the total number of target panoramic video frames). Since the gaze direction of the VR device is the central ray of the device's field angle, the longitude and latitude coordinates of the first projection point of that gaze direction on the spherical grid are acquired and determined as the first viewpoint position.
S904: the VR equipment acquires the sight orientation of human eyes through the eyeball tracking device, and determines a second viewpoint position according to the sight orientation of the human eyes.
In this step, since the gaze direction of the human eyes is the central ray of the human-eye field angle, the longitude and latitude coordinates of the second projection point of that gaze direction on the spherical grid are acquired and determined as the second viewpoint position.
S905: and the VR equipment determines whether a first high-definition video block list in a target panoramic video frame i corresponding to the first viewpoint position and a second high-definition video block list in the target panoramic video frame i corresponding to the second viewpoint position need to be calculated in real time.
In this step, the first high-definition video block list contains the identifier of each high-definition video block in the target panoramic video frame i corresponding to the first viewpoint position, and the second high-definition video block list contains the identifier of each high-definition video block in the target panoramic video frame i corresponding to the second viewpoint position.
In S905, the VR device first attempts to obtain the first and second viewpoint mapping data locally. If this succeeds, the first and second high-definition video block lists do not need to be calculated in real time, and S908 is executed. If it fails, the device requests the first and second viewpoint mapping data from the server; if that succeeds, the lists likewise need not be calculated in real time and S908 is executed; otherwise, S906 is executed.
The description of the first view mapping data and the second view mapping data refers to the foregoing embodiments, and will not be repeated here.
S906: and the VR device determines a first visual range according to the first viewpoint position and the size of the field angle of the VR device, determines each high-definition video block contained in the first visual range according to the space range of each high-definition video block recorded in advance, determines a second visual range according to the second viewpoint position and the field angle of human eyes, and determines each high-definition video block contained in the second visual range according to the space range of each high-definition video block recorded in advance.
In this step, since the field angle of human eyes is smaller than that of the VR device, the number of high definition video blocks included in the second visual range is smaller than that included in the first visual range.
S907: and the VR equipment generates a first high-definition video block list in the target panoramic video frame i corresponding to the first viewpoint position according to the identification of each high-definition video block contained in the first visual range, and generates a second high-definition video block list in the target panoramic video frame i corresponding to the second viewpoint position according to the identification of each high-definition video block contained in the second visual range.
S908: and the VR equipment directly acquires a first high-definition video block list in a target panoramic video frame i corresponding to the first viewpoint position from the first viewpoint mapping data, and directly acquires a second high-definition video block list in a target panoramic video frame i corresponding to the second viewpoint position from the second viewpoint mapping data.
In this step, the VR device determines a first high definition video block list corresponding to a first visual area where a first viewpoint position is located as a first high definition video block list in a target panoramic video frame i corresponding to the first viewpoint position, and determines a second high definition video block list corresponding to a second visual area where a second viewpoint position is located as a second high definition video block list in the target panoramic video frame i corresponding to the second viewpoint position.
For how the first high-definition video block list corresponding to the first visual area and the second high-definition video block list corresponding to the second visual area are determined, refer to the foregoing embodiments; the details are not repeated here.
S909: and the VR equipment determines the identifier of at least one target high-definition video block according to the identifiers of the high-definition video blocks in the first high-definition video block list and the same high-definition video blocks in the second high-definition video block list to obtain a target high-definition video block list.
In this step, the identifiers in the first high-definition video block list are determined according to the field angle of the display device, the identifiers in the second high-definition video block list are determined according to the field angle of the human eyes, and the intersection of the two lists is determined as the identifiers of the at least one target high-definition video block.
S910: and the VR equipment acquires at least one high-definition video block corresponding to each identifier in the target high-definition video block list from the server and decodes the high-definition video block according to the address information of each high-definition video block contained in the target panoramic video frame.
In this step, since the configuration information of the target panoramic video includes address information of each high-definition video block included in each target panoramic video frame, the corresponding target high-definition video block is obtained and decoded according to each identifier in the target high-definition video block list and the corresponding address information.
S911: The VR device acquires a low-definition panoramic video frame i corresponding to the target panoramic video frame i.
In this step, the low-definition panoramic video frame i may be obtained by down-sampling the target panoramic video frame i, or may be taken from a pre-generated low-definition panoramic video, in which case the low-definition panoramic video frame i and the target panoramic video frame i have the same timestamp.
S912: The VR device determines, according to the global UV coordinate of each fragment generated by rasterizing the spherical grid, whether the fragment is located in a target high-definition video block; if so, S913 is executed, otherwise S914 is executed.
For the description of the global UV coordinates of the fragments, refer to S704; it is not repeated here.
In S912, whether each fragment is located in a target high-definition video block is determined according to the fragment's global UV coordinate and the mapping relationship between the longitude and latitude coordinates of the high-definition video blocks and the UV coordinates of the image; the longitude and latitude coordinates of a high-definition video block are its pre-recorded spatial range.
S913: The VR device converts the global UV coordinate of each fragment into a local UV coordinate defined with respect to the target high-definition video block, and acquires the color value of the corresponding fragment from that target high-definition video block according to the local UV coordinate.
In this step, the global UV coordinate can be converted into the local UV coordinate according to the horizontal and vertical pixel proportions of the target high-definition video block relative to the target panoramic video frame; the color value of the corresponding fragment is then obtained from the target high-definition video block according to the fragment's local UV coordinate.
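A minimal sketch of the conversion (hypothetical names): the block's position inside the panoramic frame is expressed as a UV sub-rectangle proportional to its pixel extent, and the global coordinate is remapped linearly into it:

```python
def global_to_local_uv(global_uv, block_rect):
    """block_rect: (u0, v0, u1, v1), the block's sub-rectangle in the panoramic
    frame's UV space, proportional to its pixel extent in the frame."""
    u0, v0, u1, v1 = block_rect
    return ((global_uv[0] - u0) / (u1 - u0),
            (global_uv[1] - v0) / (v1 - v0))
```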
S914: The VR device acquires the color value of the corresponding fragment from the low-definition panoramic video frame i according to the fragment's global UV coordinate.
In this step, under the FOV transmission scheme, any area whose high-definition video has not finished loading is displayed in low definition; that is, the color value of the corresponding fragment is obtained from the low-definition panoramic video frame i according to the fragment's global UV coordinate.
S915: The VR device renders the spherical grid according to the color value of each fragment, obtaining and displaying the rendered panoramic video frame.
In this step, each fragment corresponds to a pixel point to be rendered on the display screen, and rendering the spherical grid according to the color value of each fragment yields the rendered panoramic video frame.
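Tying S912-S915 together, the per-fragment color selection can be sketched as follows (hypothetical names; the sample() helper stands in for a GPU texture fetch, and the UV sub-rectangles play the role of the pre-recorded spatial ranges):

```python
def shade_fragment(global_uv, target_blocks, low_def_frame):
    """target_blocks: {block_id: (uv_rect, texture)} for the decoded target
    high-definition video blocks; uv_rect = (u0, v0, u1, v1), the block's
    sub-rectangle in the panoramic frame's UV space."""
    u, v = global_uv
    for uv_rect, texture in target_blocks.values():
        u0, v0, u1, v1 = uv_rect
        if u0 <= u <= u1 and v0 <= v <= v1:                      # S912: in a block?
            local_uv = ((u - u0) / (u1 - u0), (v - v0) / (v1 - v0))  # S913
            return sample(texture, local_uv)                     # high definition
    return sample(low_def_frame, global_uv)                      # S914: fallback

def sample(texture, uv):
    """Nearest-neighbour fetch from a row-major 2D list of colour values;
    a real renderer would do this on the GPU with filtering."""
    h, w = len(texture), len(texture[0])
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return texture[y][x]
```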
Based on the same technical concept, the embodiments of the present application provide a display device for displaying a panoramic video. The display device can execute the panoramic video display method and flow provided by the embodiments of the present application and achieve the same technical effects, which are not repeated here.
Referring to fig. 10, the display device includes an obtaining module 1001, a processing module 1002, and a rendering display module 1003;
an obtaining module 1001, configured to obtain, in response to a panoramic video playing request, a target panoramic video and configuration information of the target panoramic video, where the configuration information includes address information of the high-definition video blocks contained in each target panoramic video frame in the target panoramic video; and to acquire at least one target high-definition video block according to the address information of the high-definition video blocks contained in the target panoramic video frame and the identifier of the at least one target high-definition video block;
the processing module 1002 is configured to obtain a first viewpoint position and a second viewpoint position for each target panoramic video frame in a target panoramic video, and determine an identifier of at least one target high-definition video block according to an identifier of each first high-definition video block in the target panoramic video frame corresponding to the first viewpoint position and an identifier of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position;
and a rendering display module 1003, configured to render a pre-created spherical grid according to the acquired at least one target high-definition video block and a low-definition panoramic video frame, to obtain and display a rendered panoramic video frame, where the low-definition panoramic video frame is obtained by down-sampling the target panoramic video frame.
Optionally, the processing module 1002 is specifically configured to:
determine, as the identifier of the at least one target high-definition video block, the identifier of each high-definition video block that appears both among the identifiers of the first high-definition video blocks in the target panoramic video frame corresponding to the first viewpoint position and among the identifiers of the second high-definition video blocks in the target panoramic video frame corresponding to the second viewpoint position.
Optionally, the processing module 1002 is further configured to:
determine the identifiers of the first high-definition video blocks in the target panoramic video frame corresponding to the first viewpoint position according to the high-definition video blocks contained in the first visual range, which is determined by the first viewpoint position and the size and direction of the display device's field angle; and determine the identifiers of the second high-definition video blocks in the target panoramic video frame corresponding to the second viewpoint position according to the high-definition video blocks contained in the second visual range, which is determined by the second viewpoint position and the size and direction of the human eye's field angle; or
acquire, according to the pre-divided first sub-region in which the first viewpoint position is located, the identifiers of the high-definition video blocks corresponding to the first sub-region from pre-generated first viewpoint mapping data, and determine them as the identifiers of the first high-definition video blocks in the target panoramic video frame corresponding to the first viewpoint position; and acquire, according to the pre-divided second sub-region in which the second viewpoint position is located, the identifiers of the high-definition video blocks corresponding to the second sub-region from pre-generated second viewpoint mapping data, and determine them as the identifiers of the second high-definition video blocks in the target panoramic video frame corresponding to the second viewpoint position. A sketch of how such mapping data could be pre-generated follows.
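A minimal offline-generation sketch, assuming the 4-vertex union rule described later in this application (each vertex of a sub-region is treated as a viewpoint, and the per-vertex visible block lists are unioned); visible_fn can be the blocks_in_visible_range sketch shown earlier, and the output matches the mapping_data layout assumed in the lookup sketch:

```python
def build_viewpoint_mapping(region_size, fov, blocks, visible_fn):
    """Pre-generate {(col, row): sorted block ids} for every sub-region of the
    viewpoint area, using the 4-vertex union rule.
    region_size: (width, height) of one sub-region in degrees;
    visible_fn(viewpoint, fov, blocks) -> [ids], e.g. blocks_in_visible_range."""
    w, h = region_size
    mapping = {}
    for col in range(int(360 // w)):
        for row in range(int(180 // h)):
            lon0, lat0 = -180 + col * w, -90 + row * h
            vertices = [(lon0, lat0), (lon0 + w, lat0),
                        (lon0, lat0 + h), (lon0 + w, lat0 + h)]
            ids = set()
            for vertex in vertices:          # union over the 4 vertexes
                ids.update(visible_fn(vertex, fov, blocks))
            mapping[(col, row)] = sorted(ids)
    return mapping
```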
Optionally, the rendering and displaying module 1003 is specifically configured to:
for each fragment generated by rasterizing the spherical grid, if the fragment is located in a target high-definition video block, acquire the color value of the corresponding fragment from that high-definition video block according to the fragment's local UV coordinate; otherwise, acquire the color value of the fragment from the low-definition panoramic video frame according to the fragment's global UV coordinate; the local UV coordinate is obtained by converting the global UV coordinate, and the global UV coordinate is obtained by converting the three-dimensional coordinates of the fragment or by interpolating the UV coordinates of the sub-grid vertexes in the spherical grid;
and rendering the spherical grid according to the color value of each fragment.
Optionally, the display device further includes a tracking module, configured to acquire a gaze direction of the display device and acquire a gaze direction of human eyes;
the obtaining module 1001 is specifically configured to:
determine the longitude and latitude coordinates of the first projection point of the acquired sight orientation of the display device on the spherical grid as the first viewpoint position;
and determine the longitude and latitude coordinates of the second projection point of the acquired sight orientation of the human eyes on the spherical grid as the second viewpoint position.
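A minimal sketch of this projection (the axis conventions are an assumption; this application does not fix a coordinate frame): a unit gaze-direction vector from the sphere centre converts directly to the longitude and latitude of its intersection with the spherical grid:

```python
import math

def gaze_to_viewpoint(direction):
    """direction: (x, y, z) unit vector from the sphere centre; returns the
    (longitude, latitude) in degrees of its projection on the spherical grid."""
    x, y, z = direction
    lon = math.degrees(math.atan2(x, -z))  # -z taken as 'forward' (assumption)
    lat = math.degrees(math.asin(max(-1.0, min(1.0, y))))
    return (lon, lat)
```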
Based on the same technical concept, the embodiments of the present application provide a display device for displaying a panoramic video. The display device can execute the processes shown in fig. 7 and fig. 10 provided in the embodiments of the present application and achieve the same technical effects, which are not repeated here.
Referring to fig. 11, the display device includes a display 1101, a memory 1102, and a graphics processor 1103. The display 1101 is connected to the graphics processor 1103 and configured to display VR video; the memory 1102 is coupled to the graphics processor 1103 and configured to store computer instructions; and the graphics processor 1103 is configured to perform the panoramic video display method according to the computer instructions stored in the memory 1102.
Embodiments of the present application also provide a computer-readable storage medium for storing instructions that, when executed, may implement the methods of the foregoing embodiments.
The embodiments of the present application also provide a computer program product comprising a computer program, where the computer program, when executed, performs the methods of the foregoing embodiments.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (8)

1. A display device for displaying panoramic video, comprising:
a display, coupled to the graphics processor, configured to display the panoramic video;
a memory coupled to the graphics processor and configured to store computer instructions;
the graphics processor configured to perform the following operations in accordance with the computer instructions:
responding to a panoramic video playing request, acquiring a target panoramic video and configuration information of the target panoramic video, wherein the configuration information comprises address information of high-definition video blocks contained in each target panoramic video frame in the target panoramic video;
acquiring a first viewpoint position and a second viewpoint position for each target panoramic video frame in the target panoramic video, and determining an identifier of at least one target high-definition video block according to the identifier of each first high-definition video block in the target panoramic video frame corresponding to the first viewpoint position and the identifier of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position; the first viewpoint position is a viewpoint position of display equipment, the second viewpoint position is a viewpoint position of human eyes, and the identifier of the at least one target high-definition video block is the identifier of the same high-definition video block in each first high-definition video block and each second high-definition video block;
acquiring at least one target high-definition video block according to the address information of the high-definition video block contained in the target panoramic video frame and the identification of the at least one target high-definition video block;
rendering a pre-created spherical grid according to at least one acquired target high-definition video block and a low-definition panoramic video frame to obtain and display a rendered panoramic video frame, wherein the low-definition panoramic video frame is obtained by down-sampling the target panoramic video frame;
the determination method of the identifier of each high-definition video block in the target panoramic video frame corresponding to each viewpoint position at least comprises the following steps:
acquiring the identifiers of the high-definition video blocks corresponding to a sub-region from pre-generated viewpoint mapping data according to the pre-divided sub-region in which the viewpoint position is located, and determining the identifiers of the high-definition video blocks corresponding to the sub-region as the identifiers of the high-definition video blocks in the target panoramic video frame corresponding to the viewpoint position, wherein the viewpoint mapping data comprises the identifiers of the high-definition video blocks corresponding to each sub-region in the viewpoint area where the viewpoint position is located;
the determining mode of the identifiers of the high-definition video blocks corresponding to each sub-region comprises:
setting the central point of a sub-region as the viewpoint position, determining a visible area of the viewpoint in combination with the size of the field angle corresponding to the viewpoint, and determining the identifiers of the high-definition video blocks contained in the visible area as the identifiers of the high-definition video blocks corresponding to the sub-region, wherein each such high-definition video block falls wholly or partially into the visible area; or
setting the 4 vertexes of a sub-region as the viewpoint positions, determining the visible area of each vertex in combination with the size of the field angle corresponding to the viewpoint, and determining the union of the identifiers of the high-definition video blocks contained in the visible areas as the identifiers of the high-definition video blocks corresponding to the sub-region.
2. The display device of claim 1, wherein the manner of determining the identifiers of the high-definition video blocks in the target panoramic video frame corresponding to each viewpoint position further comprises:
determining the identifier of each first high-definition video block in the target panoramic video frame corresponding to the first viewpoint position according to the first viewpoint position and the high-definition video blocks contained in the first visual area determined by the size and the direction of the field angle of the display device; and the number of the first and second groups,
and determining the identifier of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position according to the second viewpoint position and the high-definition video blocks contained in the second visual area determined by the size and the direction of the field angle of the human eyes.
3. The display device of claim 1, wherein the graphics processor renders a pre-created spherical grid according to the acquired at least one target high-definition video block and the low-definition panoramic video frame, and is specifically configured to:
for each fragment generated by the spherical grid rasterization, if the fragment is located in the target high-definition video block, obtaining a color value of the corresponding fragment from the high-definition video block according to a local UV coordinate of the fragment, otherwise, obtaining the color value of the fragment from the low-definition panoramic video frame according to a global UV coordinate of the fragment; the local UV coordinate is obtained by converting the global UV coordinate, and the global UV coordinate is obtained by converting the three-dimensional coordinate of each fragment or by interpolating the UV coordinate of each sub-grid vertex in the spherical grid;
and rendering the spherical grid according to the color value of each fragment.
4. The display device according to any one of claims 1-3, wherein the display device comprises a device gaze tracking arrangement configured to acquire the sight orientation of the display device, and an eye tracking arrangement configured to acquire the sight orientation of the human eye;
the graphics processor obtains a first viewpoint position and a second viewpoint position, and is specifically configured to:
determining the longitude and latitude coordinates of the first projection point of the acquired sight orientation of the display device on the spherical grid as the first viewpoint position;
and determining the longitude and latitude coordinates of the second projection point of the acquired sight orientation of the human eyes on the spherical grid as the second viewpoint position.
5. A panoramic video display method, comprising:
responding to a panoramic video playing request, and acquiring a target panoramic video and configuration information of the target panoramic video, wherein the configuration information contains address information of high-definition video blocks contained in each target panoramic video frame in the target panoramic video;
acquiring a first viewpoint position and a second viewpoint position for each target panoramic video frame in the target panoramic video, and determining an identifier of at least one target high-definition video block according to the identifier of each first high-definition video block in the target panoramic video frame corresponding to the first viewpoint position and the identifier of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position; the first viewpoint position is a viewpoint position of display equipment, the second viewpoint position is a viewpoint position of human eyes, and the identifier of the at least one target high-definition video block is the identifier of the same high-definition video block in each first high-definition video block and each second high-definition video block;
acquiring at least one target high-definition video block according to the address information of the high-definition video block contained in the target panoramic video frame and the identification of the at least one target high-definition video block;
rendering a pre-created spherical grid according to at least one acquired target high-definition video block and a low-definition panoramic video frame to obtain and display a rendered panoramic video frame, wherein the low-definition panoramic video frame is obtained by down-sampling the target panoramic video frame;
the determination method of the identifier of each high-definition video block in the target panoramic video frame corresponding to each viewpoint position at least comprises the following steps:
acquiring the identifiers of the high-definition video blocks corresponding to a sub-region from pre-generated viewpoint mapping data according to the pre-divided sub-region in which the viewpoint position is located, and determining the identifiers of the high-definition video blocks corresponding to the sub-region as the identifiers of the high-definition video blocks in the target panoramic video frame corresponding to the viewpoint position, wherein the viewpoint mapping data comprises the identifiers of the high-definition video blocks corresponding to each sub-region in the viewpoint area where the viewpoint position is located;
the determining mode of the identifiers of the high-definition video blocks corresponding to each sub-region comprises:
setting the central point of a sub-region as the viewpoint position, determining a visible area of the viewpoint in combination with the size of the field angle corresponding to the viewpoint, and determining the identifiers of the high-definition video blocks contained in the visible area as the identifiers of the high-definition video blocks corresponding to the sub-region, wherein each such high-definition video block falls wholly or partially into the visible area; or
setting the 4 vertexes of a sub-region as the viewpoint positions, determining the visible area of each vertex in combination with the size of the field angle corresponding to the viewpoint, and determining the union of the identifiers of the high-definition video blocks contained in the visible areas as the identifiers of the high-definition video blocks corresponding to the sub-region.
6. The method of claim 5, wherein the manner of determining the identifiers of the high-definition video blocks in the target panoramic video frame corresponding to each viewpoint position further comprises:
determining the identifier of each first high-definition video block in the target panoramic video frame corresponding to the first viewpoint position according to the first viewpoint position and the high-definition video blocks contained in the first visual area determined by the size and the direction of the field angle of the display device; and the number of the first and second groups,
and determining the identifier of each second high-definition video block in the target panoramic video frame corresponding to the second viewpoint position according to the second viewpoint position and the high-definition video blocks contained in the second visual area determined by the size and the direction of the field angle of the human eyes.
7. The method of claim 5, wherein rendering the pre-created spherical grid according to the acquired at least one target high-definition video block and the low-definition panoramic video frame comprises:
for each fragment generated by the spherical grid rasterization, if the fragment is located in the target high-definition video block, obtaining a color value of the corresponding fragment from the high-definition video block according to a local UV coordinate of the fragment, otherwise, obtaining the color value of the fragment from the low-definition panoramic video frame according to a global UV coordinate of the fragment; the local UV coordinate is obtained by converting the global UV coordinate, and the global UV coordinate is obtained by converting the three-dimensional coordinate of each fragment or by interpolating the UV coordinate of each sub-grid vertex in the spherical grid;
and rendering the spherical grid according to the color value of each fragment.
8. The method of any of claims 5-7, wherein the obtaining the first viewpoint location and the second viewpoint location comprises:
acquiring the sight orientation of the display device, and determining the longitude and latitude coordinates of the first projection point of the acquired sight orientation of the display device on the spherical grid as the first viewpoint position;
and acquiring the sight orientation of the human eyes, and determining the longitude and latitude coordinates of the second projection point of the acquired sight orientation of the human eyes on the spherical grid as the second viewpoint position.
CN202110501270.0A 2021-05-08 2021-05-08 Panoramic video display method and display equipment Active CN113242384B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110501270.0A CN113242384B (en) 2021-05-08 2021-05-08 Panoramic video display method and display equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110501270.0A CN113242384B (en) 2021-05-08 2021-05-08 Panoramic video display method and display equipment

Publications (2)

Publication Number Publication Date
CN113242384A CN113242384A (en) 2021-08-10
CN113242384B true CN113242384B (en) 2023-04-18

Family

ID=77132760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110501270.0A Active CN113242384B (en) 2021-05-08 2021-05-08 Panoramic video display method and display equipment

Country Status (1)

Country Link
CN (1) CN113242384B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115103114A (en) * 2022-06-16 2022-09-23 京东方科技集团股份有限公司 Panoramic video view tracking method, device, equipment and medium
CN116320551B (en) * 2023-05-25 2023-08-29 南方科技大学 Multi-view video self-adaptive transmission method based on multiple multi-spherical images

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112367479A (en) * 2020-10-14 2021-02-12 聚好看科技股份有限公司 Panoramic video image display method and display equipment
CN112770051A (en) * 2021-01-04 2021-05-07 聚好看科技股份有限公司 Display method and display device based on field angle

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017033777A1 (en) * 2015-08-27 2017-03-02 株式会社コロプラ Program for controlling head-mounted display system
US10110935B2 (en) * 2016-01-29 2018-10-23 Cable Television Laboratories, Inc Systems and methods for video delivery based upon saccadic eye motion
WO2018076202A1 (en) * 2016-10-26 2018-05-03 中国科学院深圳先进技术研究院 Head-mounted display device that can perform eye tracking, and eye tracking method
KR102650572B1 (en) * 2016-11-16 2024-03-26 삼성전자주식회사 Electronic apparatus and method for controlling thereof
CN107516335A (en) * 2017-08-14 2017-12-26 歌尔股份有限公司 The method for rendering graph and device of virtual reality
CN107396077B (en) * 2017-08-23 2022-04-08 深圳看到科技有限公司 Virtual reality panoramic video stream projection method and equipment
EP3672251A1 (en) * 2018-12-20 2020-06-24 Koninklijke KPN N.V. Processing video data for a video player apparatus
CN112188303A (en) * 2020-09-03 2021-01-05 北京火眼目测科技有限公司 VR (virtual reality) streaming media playing method and device based on visual angle
CN112672131B (en) * 2020-12-07 2024-02-06 聚好看科技股份有限公司 Panoramic video image display method and display device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112367479A (en) * 2020-10-14 2021-02-12 聚好看科技股份有限公司 Panoramic video image display method and display equipment
CN112770051A (en) * 2021-01-04 2021-05-07 聚好看科技股份有限公司 Display method and display device based on field angle

Also Published As

Publication number Publication date
CN113242384A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
US20180189980A1 (en) Method and System for Providing Virtual Reality (VR) Video Transcoding and Broadcasting
JP7181233B2 (en) Processing 3D image information based on texture maps and meshes
CN113242384B (en) Panoramic video display method and display equipment
KR102204212B1 (en) Apparatus and method for providing realistic contents
CN112367479B (en) Panoramic video image display method and display equipment
EP3646284A1 (en) Screen sharing for display in vr
CN112672131B (en) Panoramic video image display method and display device
JP6553184B2 (en) Digital video rendering
KR20200014281A (en) Image processing apparatus and method, file generating apparatus and method, and program
CN113286138A (en) Panoramic video display method and display equipment
CN112218132A (en) Panoramic video image display method and display equipment
CN114500970B (en) Panoramic video image processing and displaying method and equipment
JP7479386B2 (en) An image signal representing a scene
CN112130667A (en) Interaction method and system for ultra-high definition VR (virtual reality) video
CN114788287A (en) Encoding and decoding views on volumetric image data
WO2020193703A1 (en) Techniques for detection of real-time occlusion
EP3564905A1 (en) Conversion of a volumetric object in a 3d scene into a simpler representation model
KR101773929B1 (en) System for processing video with wide viewing angle, methods for transmitting and displaying vide with wide viewing angle and computer programs for the same
CN114051090B (en) Method for releasing resources in panoramic video and display equipment
CN114051089B (en) Method for releasing resources in panoramic video and display equipment
US20220174259A1 (en) Image signal representing a scene
Pintore et al. Deep synthesis and exploration of omnidirectional stereoscopic environments from a single surround-view panoramic image
WO2023217867A1 (en) Variable resolution variable frame rate video coding using neural networks
CN113115018A (en) Self-adaptive display method and display equipment for image
WO2019193011A1 (en) Region description for 360 or spherical video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant