WO2019218783A1 - Image rendering method, apparatus, system, storage medium, image display method, and computer device - Google Patents


Info

Publication number
WO2019218783A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sampling
area
resolution
original
Prior art date
Application number
PCT/CN2019/080728
Other languages
English (en)
French (fr)
Inventor
王雪丰
孙玉坤
苗京花
陈丽莉
张浩
赵斌
王立新
李茜
索健文
李文宇
彭金豹
范清文
陆原介
王晨如
刘亚丽
孙建康
Original Assignee
京东方科技集团股份有限公司
北京京东方光电科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 and 北京京东方光电科技有限公司
Priority to US16/607,224 (granted as US11392197B2)
Publication of WO2019218783A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 - Eye tracking input arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4092 - Image resolution transcoding, e.g. by using client-server architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/80 - Geometric correction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/32 - Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 - Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 - Indexing scheme for editing of 3D models
    • G06T2219/2016 - Rotation, translation, scaling

Definitions

  • Embodiments of the present disclosure relate to a virtual-reality-oriented image rendering method, image rendering apparatus, image rendering system, computer-readable storage medium, computer device, and image display method.
  • At least some embodiments of the present disclosure provide an image rendering method for virtual reality, including:
  • the first display area and the second display area are spliced to obtain an output image to be transmitted to the virtual reality device.
  • performing first resolution sampling on the first sampling area and second resolution sampling on the image to be displayed includes:
  • the rendering model includes an original resolution sampling region, a compressed resolution sampling region, and a resolution compression multiple of the compressed resolution sampling region; the original resolution sampling region corresponds to the first sampling area, and the compressed resolution sampling region corresponds to the second sampling area;
  • determining a rendering model according to the gaze point location includes:
  • the resolution compression multiple of the original compressed resolution sampling region is preset and adjusted based on the positional relationship between the original compressed resolution sampling region and the original original resolution sampling region.
  • the resolution compression multiple of the compressed resolution sampling region includes a horizontal resolution compression multiple and/or a vertical resolution compression multiple.
  • the original resolution sampling region and the compressed resolution sampling region constitute a nine-grid structure;
  • the nine-grid structure includes nine regions arranged in three rows and three columns.
  • the original resolution sampling area is located in the second row and the second column of the nine-grid structure.
  • a size of the first sampling area, a size of the original original resolution sampling area, and a size of the original resolution sampling area are the same.
  • adjusting a center point position of the original original resolution sampling area includes:
  • the center point of the original original resolution sampling area is adjusted to the position closest to the gaze point position at which the original original resolution sampling area does not exceed the boundary of the image to be displayed.
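The adjustment described above can be sketched as follows. This is an illustrative sketch, not the claimed implementation; the function name and the pixel-coordinate convention are assumptions:

```python
def clamp_region_center(gaze_x, gaze_y, region_w, region_h, image_w, image_h):
    """Place the sampling-region center as close to the gaze point as
    possible while keeping the whole region inside the image boundary."""
    half_w, half_h = region_w / 2, region_h / 2
    center_x = min(max(gaze_x, half_w), image_w - half_w)
    center_y = min(max(gaze_y, half_h), image_h - half_h)
    return center_x, center_y
```

When the gaze point is near a corner or edge, the center is clamped so the region stays in bounds; otherwise the center coincides with the gaze point.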
  • performing second resolution sampling on the image to be displayed according to the rendering model includes:
  • acquiring an image to be displayed includes: acquiring an original image; and performing anti-distortion processing on the original image to obtain the image to be displayed.
  • a resolution of the first sampling area is equal to a resolution of the first display area.
  • At least some embodiments of the present disclosure also provide an image display method, including:
  • the stretched image is displayed on a display screen of the virtual reality device.
  • the output image includes the first display area and the second display area
  • the output image is stretched by the virtual reality device
  • obtaining a stretched image includes: stretching, by the virtual reality device, the second display area in the output image to obtain a stretched display area; and determining the stretched image according to the first display area and the stretched display area.
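The device-side stretching of a compressed sub-region can be sketched with nearest-neighbor interpolation; the choice of interpolation is an assumption here, since the disclosure does not fix the stretching algorithm:

```python
def stretch_nearest(region, out_h, out_w):
    """Stretch a compressed sub-region (2-D list of pixel values) back to
    its on-screen size using nearest-neighbor interpolation."""
    in_h, in_w = len(region), len(region[0])
    return [[region[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```

Each sub-region of the second display area would be stretched this way and then recombined with the unmodified first display area to form the stretched image.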
  • Some embodiments of the present disclosure further provide an image rendering apparatus for virtual reality, including: a gaze point projection module, a rendering engine, and a splicing module;
  • the gaze point projection module is configured to obtain, according to a gaze point of a human eye on a display screen of the virtual reality device, a gaze point position corresponding to the gaze point on the image to be displayed;
  • the rendering engine is used to:
  • the splicing module is configured to splice the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device.
  • the rendering engine is further configured to load a rendering model according to the gaze point location, that is, to determine a rendering model, where the rendering model includes an original resolution sampling region, a compressed resolution sampling region, and a resolution compression factor of the compressed resolution sampling region; the original resolution sampling region corresponds to the first sampling area, and the compressed resolution sampling region corresponds to the second sampling area
  • the image rendering apparatus provided by at least some embodiments of the present disclosure further includes an adjustment module, where the rendering engine is further configured to acquire an original rendering model, wherein the original rendering model includes an original original resolution sampling area and an original compressed resolution sampling area.
  • the adjusting module is configured to adjust a center point position of the original original resolution sampling area and a resolution compression multiple of the original compressed resolution sampling area according to the gaze point position to determine the rendering model.
  • Some embodiments of the present disclosure further provide an image rendering system for virtual reality, comprising: a virtual reality device and the image rendering device of any of the above embodiments;
  • the virtual reality device is configured to acquire a gaze point of a human eye on a display screen of the virtual reality device and receive the output image transmitted by the image rendering device.
  • Some embodiments of the present disclosure further provide a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements a virtual reality oriented image rendering method provided by any of the embodiments of the present disclosure.
  • Some embodiments of the present disclosure also provide a computer device including a memory configured to store a computer program and a processor configured to execute the computer program, wherein, when executing the computer program, the processor implements the virtual-reality-oriented image rendering method provided by any embodiment of the present disclosure.
  • FIG. 1A is a flow chart showing a method for rendering a virtual reality-oriented image provided by at least some embodiments of the present disclosure
  • FIG. 1B illustrates another flow chart of a virtual reality-oriented image rendering method provided by at least some embodiments of the present disclosure
  • FIG. 2A shows an implementation schematic diagram of a rendering process provided by at least some embodiments of the present disclosure
  • FIG. 2B shows a schematic diagram of a rendering model provided by at least some embodiments of the present disclosure
  • FIG. 3 is a diagram showing the change process of the image in the flow of a virtual reality-oriented image rendering method provided by at least some embodiments of the present disclosure
  • FIG. 4A is a schematic diagram of an original rendering model provided by some embodiments of the present disclosure.
  • FIG. 4B is a schematic diagram of a rendering model provided by some embodiments of the present disclosure.
  • FIG. 5A is a schematic diagram of a positional relationship between a gaze point position and an original resolution sampling area on an image to be displayed according to some embodiments of the present disclosure.
  • FIG. 5B is a schematic diagram of another positional relationship between a gaze point position and an original resolution sampling area according to some embodiments of the present disclosure.
  • FIG. 5C is a schematic diagram of adjusting the center of an original resolution sampling area according to some embodiments of the present disclosure.
  • FIG. 6 is a schematic flowchart of an image display method according to some embodiments of the present disclosure.
  • FIG. 7 is a schematic diagram of a rendering model and an output image corresponding to different gaze point positions provided by some embodiments of the present disclosure
  • FIG. 8 is a schematic structural diagram of an image rendering apparatus for virtual reality provided by at least some embodiments of the present disclosure.
  • FIG. 9 is a schematic diagram of a virtual reality oriented image rendering system provided by at least some embodiments of the present disclosure.
  • FIG. 10 is a block diagram showing the structure of a computer device according to at least some embodiments of the present disclosure.
  • Some embodiments of the present disclosure provide an image rendering method that can be applied to a virtual reality device. For example, as shown in FIG. 1B, some embodiments of the present disclosure provide an image rendering method for virtual reality, including:
  • the position of the gaze point of the human eye on the display screen of the virtual reality device is obtained; the gaze point position on the display screen can be obtained by the virtual reality device to which the display screen belongs, through corresponding hardware or software based on gaze tracking technology;
  • the original resolution sampling area of the image is subjected to original resolution sampling and the compression resolution sampling area is subjected to compression resolution sampling;
  • the sampled original resolution sampling area and the compressed resolution sampling area are spliced to obtain an image to be transmitted to the virtual reality device.
  • As shown in FIG. 1A, other embodiments of the present disclosure provide an image rendering method for virtual reality, including:
  • S12: determining a first sampling area and a second sampling area of the image to be displayed according to the gaze point position;
  • S15: splicing the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device.
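The overall flow of steps S10-S15 can be sketched end to end as follows. The function name, the crop-based shortcut, and the uniform compression factor are illustrative assumptions; the claimed method splices per the nine-grid layout rather than returning two separate pieces:

```python
import numpy as np

def foveated_render(image, gaze_y, gaze_x, region=4, factor=2):
    """Sketch of the pipeline: clamp the gaze region to the image (S11-S12),
    sample it at original resolution (S13), sample the whole image at a
    compressed resolution (S14), and return both pieces for splicing (S15)."""
    h, w = image.shape
    half = region // 2
    y0 = min(max(gaze_y - half, 0), h - region)
    x0 = min(max(gaze_x - half, 0), w - region)
    first_display = image[y0:y0 + region, x0:x0 + region]  # original-resolution crop
    second_display = image[::factor, ::factor]             # compressed whole image
    return first_display, second_display
```

The first display area keeps full detail around the gaze point while the second carries the whole scene at reduced resolution, which is what makes the bandwidth saving possible.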
  • the virtual reality-oriented image rendering method provided by the embodiments of the present disclosure may be implemented in a rendering engine; by adjusting the rendering model in the rendering engine, the non-high-definition region (i.e., the non-gaze region) of the image can be compressed.
  • reducing the bandwidth of the data output by the software to the display device saves transmission bandwidth during image transmission and alleviates the problems that, under bandwidth limitations, directly transmitting a 4K image puts too much pressure on the hardware and high-resolution, high-refresh-rate display cannot be achieved.
  • the image rendering method provided by the present disclosure has high real-time performance, fast calculation speed, low computational resource consumption, and high computational efficiency, and can realize high-resolution, high-refresh-rate display in real time.
  • In step S10, the image to be displayed may be acquired by a rendering engine.
  • step S10 may include: acquiring an original image; and performing anti-distortion processing on the original image to obtain the image to be displayed.
  • the rendering engine can obtain the rendered image of the current scene, that is, the original image, and then perform anti-distortion processing on the rendered image to obtain the image to be displayed; that is, the image to be displayed is an anti-distortion image.
  • the method is applied to image transmission from a computer device to a virtual reality device.
  • the size of the image to be displayed and the size of the original image may be the same.
  • the original image may be a color image or a grayscale image.
  • the original image can have a variety of different shapes, such as rectangular, circular, trapezoidal, and the like.
  • an embodiment of the present disclosure will be described by taking an original image and an image to be displayed each having a rectangular shape as an example.
  • In step S11, according to the gaze point of the human eye on the display screen of the virtual reality device, the gaze point position corresponding to the gaze point on the image to be displayed is obtained; for example, obtaining the gaze point position on the display screen can be implemented by the virtual reality device to which the display screen belongs, through corresponding hardware or software based on gaze tracking technology.
  • the virtual reality device can track the human eye's line of sight based on changes in the characteristics of the eyeball and the periphery of the eyeball.
  • the virtual reality device can also track the human eye's line of sight based on changes in iris angle.
  • the virtual reality device can also track the human eye by actively projecting a beam of infrared light or the like to the iris to extract the eyeball feature.
  • the virtual reality device implements tracking of the human eye's line of sight based on gaze tracking techniques through corresponding software. As shown in FIG. 1B, after starting the running of the program, the virtual reality device can run an eye tracking program to acquire the gaze point of the current eyeball.
  • the size of the first sampling area may be preset by the user, and during the running of the program, the size of the first sampling area remains unchanged for different images to be displayed.
  • the second sampling area may be determined according to the gaze point position and the size of the first sampling area.
  • performing first resolution sampling on the first sampling area and performing second resolution sampling on the image to be displayed includes: loading a rendering model according to the gaze point position, that is, determining a rendering model, where the rendering model includes an original resolution sampling region, a compressed resolution sampling region, and a resolution compression multiple of the compressed resolution sampling region, the original resolution sampling region corresponding to the first sampling area and the compressed resolution sampling region corresponding to the second sampling area; and performing, according to the rendering model, first resolution sampling on the first sampling area and second resolution sampling on the image to be displayed.
  • the resolution of the first sampling area is equal to the resolution of the first display area, that is, the size of the first display area is the same as the size of the first sampling area.
  • the resolution of the second sampling area is greater than the resolution of the second display area, that is, the size of the second sampling area is larger than the size of the second display area. That is, in steps S13 and S14, the first sampling area is subjected to original resolution sampling to obtain a first display area, and the second sampling area is subjected to compression resolution sampling to obtain a second display area.
  • compression resolution sampling can be implemented by an interpolation algorithm.
  • Interpolation algorithms include, for example, Lagrangian interpolation, Newton interpolation, and Hermite interpolation.
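As a minimal sketch of compression-resolution sampling by interpolation, the following downsamples one row of pixels using degree-1 (linear) interpolation; the disclosure equally allows Lagrange, Newton, or Hermite variants, and this helper is an illustration rather than the claimed algorithm:

```python
def downsample_linear(samples, factor):
    """Downsample a 1-D row of pixel values by `factor`, evaluating a
    linear interpolant at each new sample position."""
    n_out = int(len(samples) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor            # position in the original row
        j = int(pos)
        t = pos - j                 # fractional offset between neighbors
        right = samples[min(j + 1, len(samples) - 1)]
        out.append((1 - t) * samples[j] + t * right)
    return out
```

Applying this along rows and then columns with the per-region compression multiples yields the compressed second display area.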
  • an anti-distortion image of the high-definition region may be sampled according to the gaze point position (ie, an image to be displayed); in step S14, the entire image to be displayed may be Low-resolution sampling is performed to obtain a compressed anti-distortion image of the entire region (ie, the entire region of the image to be displayed).
  • the resolution of the image to be displayed 20 may be 4320*4800.
  • the resolution of the output image 30 obtained by processing the image to be displayed 20 by the embodiment of the present disclosure is 2160*2400.
  • the output image 30 includes a first display area and a second display area.
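A quick check of the figures quoted above shows the bandwidth saving: the spliced output image carries one quarter of the pixels of the image to be displayed.

```python
full_pixels = 4320 * 4800   # image to be displayed
out_pixels = 2160 * 2400    # spliced output image
reduction = full_pixels / out_pixels  # transmitted pixel count drops 4x
```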
  • the output image 30 includes nine output sub-regions; the first display region is the output sub-region 15, and the second display region includes the other eight output sub-regions, that is, the output sub-regions 11, 12, 13, 14, 16, 17, 18, and 19.
  • the output sub-region 15 of the output image 30 corresponds to the sub-region 5 to be displayed of the image to be displayed 20, the output sub-region 11 corresponds to the sub-region 1 to be displayed, the output sub-region 12 corresponds to the sub-region 2 to be displayed, the output sub-region 13 corresponds to the sub-region 3 to be displayed, the output sub-region 14 corresponds to the sub-region 4 to be displayed, the output sub-region 16 corresponds to the sub-region 6 to be displayed, the output sub-region 17 corresponds to the sub-region 7 to be displayed, the output sub-region 18 corresponds to the sub-region 8 to be displayed, and the output sub-region 19 corresponds to the sub-region 9 to be displayed.
  • the resolution of the output sub-area 15 of the output image 30 is the same as the resolution of the sub-area 5 to be displayed of the image 20 to be displayed.
  • the resolution of each output sub-area in the second display area is smaller than the resolution of the corresponding sub-area to be displayed in the second sampling area.
  • the resolution of the output sub-area 11 of the output image 30 is smaller than the resolution of the sub-area 1 to be displayed of the image 20 to be displayed.
  • the image to be displayed 20 has a rectangular shape; the first sampling area may be located at any one of the corners of the rectangle, on any side of the rectangle, or in the middle of the rectangle, that is to say, not in contact with the sides or corners of the image 20 to be displayed.
  • Embodiments of the present disclosure do not limit the specific location of the second sampling region.
  • the second sampling area may include a plurality of sub-areas to be displayed.
  • the first sampling area 21 may be located at an intermediate portion of the image to be displayed; in this case, the image to be displayed 20 is divided into nine sub-regions to be displayed, the first sampling area 21 is the sub-area 5 to be displayed, and the second sampling area 22 includes eight sub-areas to be displayed, that is, the second sampling area 22 may include the sub-area 1 to be displayed, the sub-area 2 to be displayed, the sub-area 3 to be displayed, the sub-area 4 to be displayed, the sub-area 6 to be displayed, the sub-area 7 to be displayed, the sub-area 8 to be displayed, and the sub-area 9 to be displayed.
  • the first sampling area 21 and the second sampling area 22 constitute a nine-grid structure; the nine-grid structure includes nine sub-areas arranged in three rows and three columns, and the first sampling area 21 is located at the center of the nine-grid structure, that is, in the second row and second column of the nine-grid structure.
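The nine-grid division induced by a gaze-centered first sampling area can be sketched as follows; the cell ordering and boundary convention are illustrative assumptions:

```python
def nine_grid(image_w, image_h, center_x, center_y, region_w, region_h):
    """Return the nine (top, bottom, left, right) cells of the 3x3 grid
    induced by the gaze-centered sampling area; index 4 is the center cell
    (the first sampling area), the other eight form the second sampling area."""
    x0 = center_x - region_w // 2
    y0 = center_y - region_h // 2
    cols = [0, x0, x0 + region_w, image_w]
    rows = [0, y0, y0 + region_h, image_h]
    return [(rows[r], rows[r + 1], cols[c], cols[c + 1])
            for r in range(3) for c in range(3)]
```

Cells that share a side with the center cell are adjacent to the first sampling area in the sense used below; the four corner cells touch it only at a point.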
  • each sub-region to be displayed can continue to be divided.
  • the sub-area 8 to be displayed is adjacent to the first sampling area 21 in the first direction.
  • the first direction and the second direction are perpendicular to each other.
  • the first sampling area 21 is located at any one of the corners of the rectangle.
  • the image to be displayed 20 can be divided into four sub-regions to be displayed, the first sampling region 21 is the sub-region A to be displayed, and the second sampling region 22 includes three sub-regions to be displayed, that is, the second sampling area 22 may include the sub-area B to be displayed, the sub-area C to be displayed, and the sub-area D to be displayed.
  • the present disclosure is not limited thereto, and the first sampling area 21 may also be the sub-area C to be displayed.
  • in that case, the second sampling area 22 includes the sub-area A to be displayed, the sub-area B to be displayed, and the sub-area D to be displayed.
  • the sub-region B to be displayed of the second sampling region 22 is adjacent to the first sampling region 21 (i.e., the sub-region A to be displayed) in the first direction, the sub-region C to be displayed of the second sampling region 22 is adjacent to the first sampling region 21 in the second direction, and the sub-region D to be displayed of the second sampling region 22 is not adjacent to the first sampling region 21.
  • “Adjacent” may mean that the sub-region to be displayed in the second sampling region 22 (for example, the sub-region B to be displayed and the sub-region C to be displayed in FIG. 2B) shares at least one side with the first sampling region 21.
  • “Not adjacent” means that the sub-region to be displayed in the second sampling region 22 (for example, the sub-region D to be displayed in FIG. 2B) does not share any side with the first sampling region 21.
  • step S14 may include: performing second resolution sampling on the image to be displayed according to the rendering model to obtain an intermediate image to be displayed; and segmenting the intermediate image to be displayed according to the position and proportional relationship of the first sampling region and the second sampling region to obtain a first intermediate display area corresponding to the first sampling area and a second intermediate display area corresponding to the second sampling area, where the second display area includes the second intermediate display area.
  • As shown in FIG. 3, the image to be displayed 20 is subjected to second resolution sampling to obtain the intermediate image 26 to be displayed.
  • the intermediate image to be displayed 26 is an image obtained by compressing the entire image to be displayed 20; then, according to the position and proportional relationship of the first sampling region and the second sampling region, the intermediate image to be displayed 26 can be divided.
  • for example, the intermediate image to be displayed 26 may also be divided into nine intermediate sub-areas, and the nine intermediate sub-areas are arranged in three rows and three columns.
  • the first intermediate display area includes the intermediate sub-area located in the second row and second column, and the second intermediate display area includes the other eight intermediate sub-areas; the eight intermediate sub-areas correspond one-to-one to the eight sub-regions to be displayed of the second sampling area.
  • the intermediate sub-area of the second intermediate display area located in the first row and first column corresponds to the sub-area 1 to be displayed of the second sampling area; that is, the position of this intermediate sub-region in the intermediate image to be displayed 26 and its proportional relationship with the remaining intermediate sub-regions are the same as the position of the sub-area 1 to be displayed in the image to be displayed 20 and its proportional relationship with the remaining sub-areas to be displayed.
  • FIG. 4A is a schematic diagram of an original rendering model according to some embodiments of the present disclosure
  • FIG. 4B is a schematic diagram of a rendering model according to some embodiments of the present disclosure.
  • In step S15, after the rendering model is determined, the first display area is placed at the position corresponding to the original resolution sampling area, and the second display area is placed at the position corresponding to the compressed resolution sampling area, to obtain the output image.
  • the high-definition map (i.e., the first display area) and the area corresponding to the second display area in the original image are scaled onto the multi-resolution rendering model, thereby obtaining the output image.
  • determining the rendering model according to the gaze point position includes: acquiring an original rendering model, wherein the original rendering model includes an original original resolution sampling region and an original compressed resolution sampling region; and adjusting the center point position of the original original resolution sampling region and the resolution compression factor of the original compressed resolution sampling region according to the gaze point position to obtain the rendering model.
  • a multi-resolution original rendering model can be created by the modeling software before starting the running of the program, and then the original rendering model is imported into the rendering engine for subsequent use.
  • the rendering engine can modify the shape of the original rendering model according to the position of the gaze point to get the required rendering model.
  • modifying the shape of the original rendered model may include adjusting the center point position of the original original resolution sampling area and the resolution compression factor of the original compressed resolution sampling area.
  • the shape of the rendered model and the original rendered model can also be rectangular.
  • the size of the rendered model is the same as the size of the original rendered model.
  • the size of the rendered model and the size of the output image are also the same.
  • the resolution compression multiple of the original compressed resolution sampling region may be preset and adjusted according to the positional relationship between the original compressed resolution sampling region and the original original resolution sampling region.
  • the resolution compression factor of the compressed resolution sampling region may include a horizontal resolution compression multiple and/or a vertical resolution compression multiple.
  • the vertical resolution compression multiple may represent the resolution compression multiple along the first direction, and the horizontal resolution compression multiple may represent the resolution compression multiple along the second direction.
  • when the resolution compression multiple is greater than 1, the original compressed resolution sampling area is compressed; when the resolution compression multiple is less than 1, the original compressed resolution sampling area is stretched.
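The effect of the compression multiple can be captured in a small hypothetical helper; the disclosure only fixes the greater-than-1-compresses, less-than-1-stretches semantics:

```python
def sampled_size(width, height, kx, ky):
    """Size of a region after sampling with horizontal multiple kx and
    vertical multiple ky: a multiple > 1 compresses the region along that
    direction, a multiple < 1 stretches it."""
    return width / kx, height / ky
```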
  • the original resolution sampling area is the area corresponding to the gaze point position on the image to be displayed, and the other areas the user does not focus on (i.e., non-gaze areas) are set as the compressed resolution sampling area; that is, the original resolution sampling area corresponds to the first display area of the output image (the output sub-area 15 shown in FIG. 2A), and the compressed resolution sampling area corresponds to the second display area of the output image.
  • the virtual camera samples the original resolution sampling area of the image.
  • the positional relationship between the original resolution sampling area and the compressed resolution sampling area is used to achieve local resolution compression of the image while ensuring that the shape of the compressed overall image is the same as that of the original image (for example, if the original image is a rectangular image, the compressed image is also a rectangular image).
  • the size of the original resolution sampling area of the original rendering model, the size of the original resolution sampling area of the rendering model, the size of the first display area, and the size of the first sampling area are all the same.
  • the original resolution sampling area and the compressed resolution sampling area form a nine-square grid structure.
  • the original resolution sampling area is located in the middle of the nine-square grid.
  • such a rule facilitates adjustment of the center point position of the original resolution sampling area and of the lateral and/or longitudinal resolution compression factors of the compressed resolution sampling area.
  • the correspondence between the original resolution sampling area and the fixation point is also more accurate.
  • as shown in FIG. 4B, the rendering model 100 includes nine sampling sub-regions, and the nine sampling sub-regions are arranged in three rows and three columns; that is, the original resolution sampling region and the compressed resolution sampling region constitute a nine-square grid structure.
  • the nine-square grid structure includes nine sampling sub-areas of three rows and three columns.
  • the original resolution sampling area is located in the middle of the nine-square grid, that is, the original resolution sampling area is located in the second row and the second column of the nine-square grid structure.
  • the original resolution sampling area includes the sampling sub-area 105, that is, the sampling sub-area 105 is the original resolution sampling area;
  • the compressed resolution sampling area includes eight sampling sub-areas; that is, the compressed resolution sampling area includes the sampling sub-area 101, the sampling sub-area 102, the sampling sub-area 103, the sampling sub-area 104, the sampling sub-area 106, the sampling sub-area 107, the sampling sub-area 108, and the sampling sub-area 109.
  • each of the sampling sub-area 101, the sampling sub-area 103, the sampling sub-area 107, and the sampling sub-area 109 may have both a lateral resolution compression factor and a longitudinal resolution compression factor; that is, each of them can be compressed in both the first direction and the second direction.
  • the sampling sub-region 102 and the sampling sub-region 108 may have only a longitudinal resolution compression factor; that is, the sampling sub-region 102 and the sampling sub-region 108 may be compressed in the first direction but are not compressed in the second direction.
  • the sampling sub-region 104 and the sampling sub-region 106 may have only a lateral resolution compression factor; that is, the sampling sub-region 104 and the sampling sub-region 106 are not compressed in the first direction but may be compressed in the second direction.
  • the longitudinal resolution compression factors of the sampling sub-area 101, the sampling sub-area 102, and the sampling sub-area 103 are all the same.
  • the longitudinal resolution compression factors of the sampling sub-region 107, the sampling sub-region 108, and the sampling sub-region 109 are also the same.
  • the lateral resolution compression factors of the sampling sub-region 101, the sampling sub-region 104, and the sampling sub-region 107 are all the same.
  • the lateral resolution compression factors of the sampling sub-region 103, the sampling sub-region 106, and the sampling sub-region 109 are also the same.
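  • the four consistency rules above amount to saying that each row of the nine-square grid shares one longitudinal factor and each column shares one lateral factor, with the middle row and column uncompressed. A minimal sketch (the dictionary names and the example value 4 are assumptions for illustration):

```python
# Each row shares a longitudinal factor; each column shares a lateral
# factor. The middle row/column (holding sub-area 105) stay at 1.
row_factors = {"top": 4, "mid": 1, "bottom": 4}   # longitudinal (first direction)
col_factors = {"left": 4, "mid": 1, "right": 4}   # lateral (second direction)

factors = {(r, c): (row_factors[r], col_factors[c])
           for r in row_factors for c in col_factors}

# e.g. sub-area 101 (top-left) is compressed in both directions,
# 102 (top-mid) only longitudinally, 105 (center) not at all.
```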
  • the original rendering model 200 may include nine original sampling sub-regions, which are the original sampling sub-region 201, the original sampling sub-region 202, the original sampling sub-region 203, the original sampling sub-region 204, the original sampling sub-region 205, the original sampling sub-region 206, the original sampling sub-region 207, the original sampling sub-region 208, and the original sampling sub-region 209; the nine original sampling sub-regions are arranged in three rows and three columns. That is, the original resolution sampling area and the compressed resolution sampling area of the original rendering model may also constitute a nine-square grid structure, and the original resolution sampling area of the original rendering model is located in the middle of the nine-square grid, that is, it is the original sampling sub-region 205 in the second row and the second column.
  • the compressed resolution sampling region of the original rendering model includes eight original sampling sub-regions, that is, the original sampling sub-region 201, the original sampling sub-region 202, the original sampling sub-region 203, the original sampling sub-region 204, the original sampling sub-region 206, the original sampling sub-region 207, the original sampling sub-region 208, and the original sampling sub-region 209.
  • the original sampling sub-regions of the compressed resolution sampling region of the original rendering model correspond one-to-one with the sampling sub-regions of the compressed resolution sampling region of the rendering model.
  • for example, the original sampling sub-region 201 located in the first row and first column of the compressed resolution sampling region of the original rendering model corresponds to the sampling sub-region 101 located in the first row and first column of the compressed resolution sampling region of the rendering model, the original sampling sub-region 202 located in the first row and second column corresponds to the sampling sub-region 102 located in the first row and second column, and so on.
  • the original resolution sampling area of the original rendering model corresponds to the original resolution sampling area of the rendering model, and the two have the same size. That is, the original sampling sub-region 205 in FIG. 4A corresponds to, and has the same size as, the sampling sub-region 105 in FIG. 4B.
  • the shape of the original resolution sampling area of the original rendering model and the shape of the original resolution sampling area of the rendering model may both be rectangular.
  • the center of the original resolution sampling area 205 of the original rendering model coincides with the center of the original rendering model.
  • the center of the original resolution sampling area 105 of the rendering model may not coincide with the center of the rendering model.
  • adjusting a center point position of the original resolution sampling area includes:
  • under the condition that the original resolution sampling area does not exceed the boundary of the image to be displayed, the center point of the original resolution sampling area is adjusted to the position closest to the fixation point position.
  • the center point position of the original resolution sampling area corresponds to the center point position of the overall model.
  • when the center point position of the original resolution sampling area is moved away from the center point position of the overall model as the fixation point moves, the lateral and/or longitudinal resolution compression factors of the compressed resolution sampling area are adjusted accordingly.
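  • the adjustment rule above (keep the center as close to the gaze point as possible without letting the area leave the image) is an ordinary clamp. A sketch in pixel coordinates with hypothetical names; the 4320*4800 image and 1440*1600 HD-region sizes are taken from the worked example later in the text:

```python
def clamp_center(gaze_x, gaze_y, half_w, half_h, img_w, img_h):
    """Clamp the HD-region center so the region stays inside the image."""
    cx = min(max(gaze_x, half_w), img_w - half_w)
    cy = min(max(gaze_y, half_h), img_h - half_h)
    return cx, cy

# Gaze near the lower-right corner of a 4320*4800 image with a
# 1440*1600 HD region (half sizes 720 and 800):
print(clamp_center(4300, 4790, 720, 800, 4320, 4800))  # (3600, 4000)
```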
  • FIG. 5A is a schematic diagram of a positional relationship between a gaze point position and the original resolution sampling area on an image to be displayed according to some embodiments of the present disclosure;
  • FIG. 5B is a schematic diagram of another positional relationship between a gaze point position and the original resolution sampling area provided by some embodiments of the present disclosure;
  • FIG. 5C is a schematic diagram of adjusting the center of the original resolution sampling area according to some embodiments of the present disclosure.
  • the gaze point position is indicated by a point E
  • the center point of the original resolution sampling area 105 coincides with the gaze point position E, and the original resolution sampling area 105 does not exceed the boundary of the image 20 to be displayed.
  • the center point of the original resolution sampling area 105 is the fixation point position E.
  • the center of the compressed resolution sampling area of the original rendering model coincides with the center of the original rendering model, and the center of the original rendering model corresponds to the center of the image to be displayed.
  • the fixation point position E does not coincide with the center point C of the image to be displayed 20; with respect to the center point C, the fixation point position E is closer to the right boundary and to the upper boundary of the image 20 to be displayed.
  • the center of the original resolution sampling area 205 of the original rendering model may be the center point C of the image 20 to be displayed, and the center of the original resolution sampling area 105 is the fixation point position E. In this case, the three original sampling sub-regions in the first row of the original rendering model are compressed in the longitudinal direction, the three original sampling sub-regions in the third row of the original rendering model are stretched in the longitudinal direction, the three original sampling sub-regions in the third column of the original rendering model are compressed in the lateral direction, and the three original sampling sub-regions in the first column of the original rendering model are stretched in the lateral direction.
  • the rendering model can be obtained after performing the above adjustment on the original rendering model. For example, after the original sampling sub-area 201 shown in FIG. 4A is compressed in the first direction and stretched in the second direction, the sampling sub-area 101 shown in FIG. 4B is obtained.
  • the rendering model can still include nine sample sub-regions.
  • the first sampling area of the image to be displayed 20 corresponds to the sampling sub-area 105, so that the center point of the first sampling area is the fixation point position E.
  • in some other examples, the rendering model may include only four or six sampling sub-regions.
  • as shown in FIG. 5B, the gaze point position is indicated by a point E.
  • the original resolution sampling area 105 exceeds the boundary of the image to be displayed 20; for example, the original resolution sampling area 105 exceeds the right and lower boundaries of the image 20 to be displayed, that is, with respect to the center point C of the image 20 to be displayed, the gaze point position E is located in the lower right corner of the image 20 to be displayed. In this case, the center point of the original resolution sampling area 105 needs to be adjusted.
  • the center point of the original resolution sampling area 105 is adjusted to the position closest to the gaze point position E. For example, as shown in FIG. 5C, when two adjacent sides of the original resolution sampling area 105 respectively coincide with two adjacent sides of the image to be displayed 20 (i.e., the right-side and lower-side boundaries), the position of the center point of the original resolution sampling area 105 is the position closest to the fixation point position E under the condition that the original resolution sampling area 105 does not exceed the boundary of the image to be displayed 20.
  • in this case, the rendering model may include four sampling sub-regions; that is, in FIG. 4B, the areas of the sampling sub-region 103, the sampling sub-region 106, the sampling sub-region 107, the sampling sub-region 108, and the sampling sub-region 109 are all zero.
  • the two original sampling sub-regions in the first row, first column and the first row, second column of the original rendering model (i.e., the original sampling sub-region 201 and the original sampling sub-region 202 in FIG. 4A) are stretched accordingly.
  • when the rendering model includes nine sampling sub-regions, the total area of the nine sampling sub-regions is SW; when the rendering model includes four sampling sub-regions, the total area of the four sampling sub-regions is also SW. That is, the size of the rendering model does not change with the number of sampling sub-regions.
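  • the collapse of sub-regions described above falls out of the grid geometry: once the HD region's center is clamped toward a corner, the adjacent row and column widths go to zero while the total model size stays fixed. A sketch in normalized model coordinates (the function name and the [0, 1] normalization are assumptions):

```python
def grid_layout(cx, cy, hd_w, hd_h):
    """Column widths and row heights of the nine-square grid, with the
    HD region of size (hd_w, hd_h) centered at (cx, cy); the model is
    normalized to span [0, 1] on each axis."""
    cols = [cx - hd_w / 2, hd_w, 1 - (cx + hd_w / 2)]
    rows = [cy - hd_h / 2, hd_h, 1 - (cy + hd_h / 2)]
    return cols, rows

# An HD region covering half the model, pushed into a corner: the
# adjacent column and row collapse to zero width/height, while the
# widths and heights still sum to 1 (the model size is unchanged).
```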
  • FIG. 6 is a schematic flowchart of an image display method according to some embodiments of the present disclosure. As shown in FIG. 6, the image display method may include the following steps:
  • S62 Determine a first sampling area and a second sampling area of the image to be displayed according to the position of the fixation point;
  • S63 Perform first resolution sampling on the first sampling area to obtain a first display area.
  • S64 performing second resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, where a resolution of the second sampling area is greater than a resolution of the second display area;
  • the above steps S60-S66 are all implemented in the rendering engine, and steps S67-S68 are implemented in the virtual reality device. Therefore, the image display method provided by the embodiments of the present disclosure can perform compressed rendering of the image on the rendering engine side and then transmit the compressed output image to the virtual reality device, which performs the display. This reduces the transmission bandwidth from the rendering side to the display device during image transmission, realizes real-time sampling and transmission of images, and meets the requirement of processing large amounts of data in real time in virtual reality technology.
  • a detailed description of each of the steps S60-S66 performed in the rendering engine may refer to the description of the image rendering method in the above embodiment.
  • the detailed description of step S60 may refer to the description of step S10 above, the detailed description of step S61 may refer to the description of step S11 above, the detailed description of step S62 may refer to the description of step S12 above, the detailed description of step S63 may refer to the description of step S13 above, the detailed description of step S64 may refer to the description of step S14 above, and the detailed descriptions of steps S65 and S66 may refer to the description of step S15 above.
  • the display screen may include a liquid crystal display panel or the like.
  • the output image includes a first display area and a second display area
  • step S67 includes: stretching, by the virtual reality device, the second display area in the output image to obtain a stretched display area; and determining the stretched image according to the first display area and the stretched display area. That is, the size of the stretched display area is larger than that of the second display area, and the stretched image is obtained by stitching the first display area and the stretched display area; in other words, the stretched image includes the first display area and the stretched display area.
  • in step S67, the received output image is stretched by an integrated circuit (IC) of the virtual reality device to obtain a stretched image, which is then displayed on the display screen.
  • the size of the stretched image may be the same as the size of the image to be displayed.
  • the stretching ratio of each sub-area in the second display area is the reciprocal of the compression ratio of the corresponding sub-area in the second sampling area. For example, in the example shown in FIG. 2A, the sub-area 1 to be displayed corresponds to the output sub-area 11 in the output image 30. When the sub-area 1 to be displayed is compressed in the first direction by a ratio of 1/F1 (i.e., reduced by F1 times) and compressed in the second direction by a ratio of 1/F2 (i.e., reduced by F2 times), the output sub-area 11 needs to be stretched in the first direction by the ratio F1 (i.e., enlarged by F1 times) and stretched in the second direction by the ratio F2 (i.e., enlarged by F2 times).
  • the remaining output sub-areas in the output image are similar to the output sub-area 11 and will not be described here.
  • both F1 and F2 are greater than 1. According to actual needs, F1 and F2 can be the same or different, and no limitation is imposed on this.
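  • under the convention above (the first direction is longitudinal, the second lateral), the stretch on the device side simply multiplies back the factors divided out during compression. A small sketch with hypothetical function names:

```python
def compress_subarea(width, height, f1, f2):
    """Compress by 1/F1 in the first (longitudinal) direction and by
    1/F2 in the second (lateral) direction."""
    return width / f2, height / f1

def stretch_subarea(width, height, f1, f2):
    """Stretch by F1 and F2, undoing compress_subarea exactly."""
    return width * f2, height * f1

# A 1440*1600 sub-area compressed with F1 = F2 = 4 becomes 360*400;
# stretching the output sub-area restores 1440*1600.
```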
  • the virtual-reality-oriented image rendering method and image display method provided by the embodiments are further described below in combination with a specific scene.
  • the display screen may be the display screen of a VR (Virtual Reality)/AR (Augmented Reality) head-mounted display device, and an image (i.e., output image) transmission process occurs between the computer device and the VR/AR head-mounted display device.
  • if the computer device directly transmits a 4K image to the VR/AR head-mounted display device, the pressure on the hardware is too great to achieve real-time display at high resolution and a high refresh rate. Based on the acuity of human-eye observation and the availability of eye-tracking technology, the image quality of the non-high-definition regions of the 4K image can be compressed to save transmission bandwidth.
  • the image to be displayed 20 is divided into nine sub-regions to be displayed in the form of a nine-square grid, wherein the sub-region 5 to be displayed is the original resolution sampling area (HD area) corresponding to the fixation point.
  • the other eight sub-regions to be displayed are processed as follows: the horizontal resolution and the vertical resolution of the sub-region 1 to be displayed, the sub-region 3 to be displayed, the sub-region 7 to be displayed, and the sub-region 9 to be displayed are both compressed by 4 times; the horizontal resolution of the sub-region 2 to be displayed and the sub-region 8 to be displayed is unchanged, and their vertical resolution is compressed by 4 times; the vertical resolution of the sub-region 4 to be displayed and the sub-region 6 to be displayed is unchanged, and their horizontal resolution is compressed by 4 times.
  • the resolution of the image to be displayed 20 is 4320*4800
  • the resolution of the compressed output image 30 is 2160*2400.
  • the horizontal resolution of the compressed output image 30 is 50% of the horizontal resolution of the original image 20 to be displayed, and the vertical resolution of the compressed output image 30 is 50% of the vertical resolution of the original image 20 to be displayed, so that 75% of the bandwidth can be saved in the process of transmitting the output image 30.
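  • the 50% and 75% figures above can be checked arithmetically. A worked sketch assuming the nine sub-regions split the 4320*4800 image into equal thirds per axis, with 4x compression outside the gaze region, as described above:

```python
hd_w, hd_h = 4320 // 3, 4800 // 3            # sub-area 5 (HD): 1440 * 1600
out_w = hd_w + 2 * (hd_w // 4)               # side columns compressed 4x: 2160
out_h = hd_h + 2 * (hd_h // 4)               # top/bottom rows compressed 4x: 2400
saved = 1 - (out_w * out_h) / (4320 * 4800)  # fraction of bandwidth saved

print(out_w, out_h, saved)  # 2160 2400 0.75
```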
  • the range and position of the original resolution sampling area (i.e., the sub-area 5 to be displayed);
  • the range and size of the other eight sub-areas to be displayed (the size here corresponds to the resolution).
  • Step 1 is an overall sampling of the scene to obtain an original image 15, and the resolution of the original image 15 is 4320*4800.
  • Step 2 is to perform inverse distortion processing on the original image 15 obtained in Step 1 to obtain an image to be displayed 20, and the resolution of the image to be displayed 20 is also 4320*4800.
  • the image to be displayed 20 includes a first sampling area and a second sampling area, and the first sampling area is an area corresponding to the fixation point position.
  • Step 3 is to sample the anti-distortion result (ie, the image to be displayed 20).
  • in Step 3, according to the rendering model, the high-definition area (i.e., the first sampling area) of the image to be displayed 20 is sampled at the original resolution (high-definition resolution) to obtain an image 25 corresponding to the first display area, and the resolution of the sampled image 25 is 1440*1600.
  • the entire image to be displayed 20 is also subjected to compressed-resolution (low-resolution) sampling according to the rendering model to obtain an intermediate image 26, and the resolution of the sampled intermediate image 26 is 1080*1200; the intermediate image is then divided according to the position and proportional relationship of the first sampling area and the second sampling area to obtain the second display area.
  • Step 4 is to obtain the output image 30 by splicing the high-definition area (i.e., the first display area) and the non-high-definition area (i.e., the second display area), and the resolution of the output image 30 is 2160*2400.
  • Step 4 may include: determining a rendering model according to the position of the fixation point, where the rendering model includes an original resolution sampling area and a compressed resolution sampling area; placing the first display area at the position corresponding to the original resolution sampling area, and placing the second display area at the position corresponding to the compressed resolution sampling area, to obtain the output image 30.
  • Step 5 is to obtain a stretched image 35 by stretching the output image 30 by a VR/AR head-mounted display device, and the resolution of the stretched image 35 is 4320*4800.
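  • the resolutions quoted in Steps 1-5 are mutually consistent; a bookkeeping sketch (the equal-thirds split and the 4x off-gaze compression are the assumptions used in this scene):

```python
full = (4320, 4800)                # Steps 1-2: original / anti-distorted image
hd = (full[0] // 3, full[1] // 3)  # Step 3: HD sample -> (1440, 1600)
low = (full[0] // 4, full[1] // 4) # Step 3: whole image at 1/4 -> (1080, 1200)
out = (hd[0] + 2 * (hd[0] // 4),   # Step 4: spliced output -> (2160, 2400)
       hd[1] + 2 * (hd[1] // 4))
restored = (hd[0] + 2 * (hd[0] // 4) * 4,  # Step 5: non-HD parts stretched 4x
            hd[1] + 2 * (hd[1] // 4) * 4)  # -> back to (4320, 4800)
```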
  • the original rendering model 200 is built with the gaze point at the center, and the original rendering model 200 includes a plurality of original sampling sub-regions;
  • the reference position of each original sampling sub-area in the original rendering model 200 is a point whose position remains unchanged during compression.
  • the reference position of the original sampling sub-area 201 is the upper left corner of the original sampling sub-area 201, that is, the point Q1. When the original rendering model changes, only the size of the original sampling sub-area 201 is modified, and its reference position Q1 is not modified.
  • the original sampling sub-regions 203, 207, and 209 are also similarly processed, that is, the reference position of the original sampling sub-region 203 is the upper right corner of the original sampling sub-region 203, that is, the point Q3, and the reference position of the original sampling sub-region 207 is The lower left corner of the original sampling sub-region 207, that is, the point Q7, the reference position of the original sampling sub-region 209 is the lower right corner of the original sampling sub-region 209, that is, the point Q9.
  • the reference position of the original sampling sub-region 202 is placed at the upper edge of the original sampling sub-region 202, such as the midpoint of the upper edge, that is, the point Q2; the reference position of the original sampling sub-region 208 is placed at the lower edge of the original sampling sub-region 208, such as the midpoint of the lower edge, that is, the point Q8.
  • the reference position of the original sampling sub-region 204 may be placed at the left edge of the original sampling sub-region 204, such as the midpoint of the left edge, that is, the point Q4, and the reference position of the original sampling sub-region 206 may be placed at the right edge of the original sampling sub-region 206, such as the midpoint of the right edge, that is, the point Q6. When the original rendering model changes, the ordinates of the point Q4 and the point Q6 are modified, while the abscissas of the point Q4 and the point Q6 remain unchanged.
  • the reference position of the original sampling sub-area 205 is at the center of the original sampling sub-area 205, that is, the point Q5. When the original rendering model changes, the size of the original sampling sub-area 205 remains the same, and only the abscissa and the ordinate of the point Q5 are modified.
  • the original rendering model is located in a Cartesian coordinate system x-o-y, and the coordinate origin o of the Cartesian coordinate system x-o-y coincides with the point Q5.
  • the abscissas of the point Q1, the point Q4, and the point Q7 are the same; the abscissas of the point Q2, the point Q5, and the point Q8 are the same; the abscissas of the point Q3, the point Q6, and the point Q9 are the same. The ordinates of the point Q1, the point Q2, and the point Q3 are the same; the ordinates of the point Q4, the point Q5, and the point Q6 are the same; the ordinates of the point Q7, the point Q8, and the point Q9 are the same.
  • when the original rendering model 200 is adjusted to obtain the rendering model 100, the rendering model 100 includes a plurality of sampling sub-regions, and the plurality of sampling sub-regions correspond one-to-one with the plurality of original sampling sub-regions shown in FIG. 4A.
  • the reference position of the sampling sub-area 101 is the upper left corner of the sampling sub-area 101, that is, the point P1.
  • the coordinates of the point P1 are the same as the coordinates of the reference position Q1 of the original sampling sub-area 201, while the size of the sampling sub-area 101 is different from the size of the original sampling sub-area 201.
  • the sampling sub-regions 103, 107, and 109 are similar to the sampling sub-region 101; that is, the reference position of the sampling sub-region 103 is the upper right corner of the sampling sub-region 103, that is, the point P3; the reference position of the sampling sub-region 107 is the lower left corner of the sampling sub-region 107, that is, the point P7; and the reference position of the sampling sub-region 109 is the lower right corner of the sampling sub-region 109, that is, the point P9.
  • the reference position of the sampling sub-region 102 is placed at the upper edge of the sampling sub-region 102, such as the midpoint of the upper edge, that is, the point P2, and the reference position of the sampling sub-region 108 is placed at the lower edge of the sampling sub-region 108, such as the midpoint of the lower edge, that is, the point P8.
  • the ordinate of the point P2 is the same as the ordinate of the reference position Q2 of the original sample sub-region 202, and the abscissa of the point P2 is different from the abscissa of the reference position Q2 of the original sample sub-region 202;
  • the ordinate of the point P8 is the same as the ordinate of the reference position Q8 of the original sample sub-area 208, and the abscissa of the point P8 is different from the abscissa of the reference position Q8 of the original sample sub-area 208.
  • the reference position of the sampling sub-region 104 can be placed at the left edge of the sampling sub-region 104, such as the midpoint of the left edge, that is, the point P4, and the reference position of the sampling sub-region 106 can be placed at the right edge of the sampling sub-region 106, such as the midpoint of the right edge, that is, the point P6.
  • the abscissa of the point P4 is the same as the abscissa of the reference position Q4 of the original sample sub-area 204, and the ordinate of the point P4 is not the same as the ordinate of the reference position Q4 of the original sample sub-area 204;
  • the abscissa of the point P6 is the same as the abscissa of the reference position Q6 of the original sample sub-area 206, and the ordinate of the point P6 is different from the ordinate of the reference position Q6 of the original sample sub-area 206.
  • the reference position of the sampling sub-area 105 is at the center of the sampling sub-area 105, that is, the point P5.
  • the abscissa of the point P5 is different from the abscissa of the reference position Q5 of the original sampling sub-area 205, and the ordinate of the point P5 is also different from the ordinate of the reference position Q5 of the original sampling sub-area 205.
  • the size of the sampling sub-area 105 is the same as the size of the original sampling sub-area 205.
  • the reference position P5 of the sampling sub-area 105 does not coincide with the coordinate origin o of the Cartesian coordinate system x-o-y.
  • the abscissas of the point P1, the point P4, and the point P7 are the same; the abscissas of the point P2, the point P5, and the point P8 are the same; the abscissas of the point P3, the point P6, and the point P9 are the same. The ordinates of the point P1, the point P2, and the point P3 are the same; the ordinates of the point P4, the point P5, and the point P6 are the same; the ordinates of the point P7, the point P8, and the point P9 are the same.
  • FIG. 7 is a schematic diagram of a rendering model and an output image corresponding to different gaze point positions according to some embodiments of the present disclosure.
  • the positions where the fixation point can fall are projected into the two-dimensional coordinate system x'-o'-y' ranging from (-1, -1) to (1, 1); for example, in FIG. 2A, the center point of the sub-area 1 to be displayed in the image to be displayed 20, the center point of the sub-area 3 to be displayed, the center point of the sub-area 7 to be displayed, and the center point of the sub-area 9 to be displayed are all points where the gaze point can fall.
  • the coordinates of the center point of the sub-area 1 to be displayed are (-1, 1), and the coordinates of the center point of the sub-area 2 to be displayed are (0, 1).
  • the coordinates of the center point of the sub-area 3 to be displayed are (1, 1), the coordinates of the center point of the sub-area 4 to be displayed are (-1, 0), and the coordinates of the center point of the sub-area 5 to be displayed are (0, 0), the coordinates of the center point of the sub-region 6 to be displayed are (1, 0), the coordinates of the center point of the sub-region 7 to be displayed are (-1, -1), and the coordinates of the center point of the sub-region 8 to be displayed are (0, -1), the coordinates of the center point of the sub-area 9 to be displayed are (1, -1).
  • the gaze point positions corresponding to the four cases in FIG. 7 are, in the two-dimensional coordinate system x'-o'-y', (0, 0); (0.5, 0.5); (1, 1), in which case the areas of the sampling sub-areas 101, 102, 103, 106, and 109 are 0; and (1, 0), in which case the areas of the sampling sub-areas 103, 106, and 109 are 0.
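  • the projection into the (-1, -1) to (1, 1) system can be sketched as a linear map from pixel coordinates (the function name is hypothetical; x' growing rightward and y' upward is an assumed convention consistent with the corner coordinates listed above):

```python
def normalize_gaze(px, py, img_w, img_h):
    """Map a pixel-space gaze point (origin at the top-left, y down)
    into the x'-o'-y' system spanning (-1, -1) to (1, 1)."""
    return 2 * px / img_w - 1, 1 - 2 * py / img_h

# The image center maps to (0, 0); the top-right pixel maps to (1, 1),
# matching the sub-area center coordinates listed above.
```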
  • the original sampling sub-areas 201, 203, 207, and 209 modify their area size and corresponding range according to the gaze point position, without modifying their reference positions; the original sampling sub-areas 202 and 208 modify the abscissa of the reference position, the size of the region in the vertical direction, and the corresponding range according to the gaze point position; the original sampling sub-regions 204 and 206 modify the ordinate of the reference position, the size of the region in the horizontal direction, and the corresponding range according to the gaze point position; the original sampling sub-area 205 modifies the horizontal and vertical coordinates of its reference position according to the gaze point position while the position of the virtual camera is adjusted, thereby obtaining an anti-distortion image of the high-definition area.
  • the sample sub-region 101 is obtained by adjusting the original sample sub-region 201, and the calculation result of the sample sub-region 101 is as follows.
  • the first side of the original sampling sub-region 201 is T1
  • the second side of the original sampling sub-region 201 is T2
  • the first side and the second side are two adjacent sides of the original sample sub-region 201
  • the first side represents the edge in the first direction
  • the second side represents the edge in the second direction.
  • the sampling sub-area 101 includes a third side and a fourth side; the third side of the sampling sub-area 101 corresponds to the first side of the original sampling sub-area 201, and the fourth side of the sampling sub-area 101 corresponds to the second side of the original sampling sub-area 201; the third side is T1*(1-y) and the fourth side is T2*(x+1).
  • the sampling sub-area 101 corresponds to a mapping area of the rendering model as follows:
  • the shape of the rendering model can be a rectangle; the rendering model can be placed in a two-dimensional coordinate system x"-o"-y", with the lower left corner of the rendering model at the coordinate origin of the two-dimensional coordinate system x"-o"-y", and the rendering model is projected to the area of (-1, -1) to (1, 1);
  • in the x"-axis direction, the length of the rendering model is R1, and in the y"-axis direction, the length of the rendering model is R2.
  • in the x"-axis direction, the length of the sampling sub-region 101 is R1*(x+1)/3, and in the y"-axis direction, the length of the sampling sub-region 101 is R2*(1-y)/3.
  • the coordinates of the lower left corner of the sampling sub-region 101 in the two-dimensional coordinate system x"-o"-y" are (0, (y+2)/3).
  • the area sizes, reference positions, and ranges of the other sampling sub-areas are calculated in a similar manner.
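The side-length and placement formulas for sampling sub-region 101 above (third side T1*(1-y), fourth side T2*(x+1), mapped lengths R1*(x+1)/3 and R2*(1-y)/3, lower-left corner (0, (y+2)/3)) can be checked numerically. A minimal sketch, with all names hypothetical:

```python
def sample_subregion_101(x, y, T1, T2, R1, R2):
    """Geometry of sampling sub-region 101 (top-left tile) for a gaze
    point (x, y) in [-1, 1] x [-1, 1]. T1/T2 are the first and second
    sides of original sub-region 201; R1/R2 are the rendering-model
    side lengths along x" and y"."""
    third_side = T1 * (1 - y)        # extent along the first direction
    fourth_side = T2 * (x + 1)       # extent along the second direction
    len_x = R1 * (x + 1) / 3         # mapped length along x"
    len_y = R2 * (1 - y) / 3         # mapped length along y"
    lower_left = (0.0, (y + 2) / 3)  # lower-left corner in x"-o"-y"
    return third_side, fourth_side, len_x, len_y, lower_left

# Gaze at the center (0, 0): the sub-region keeps its original proportions.
t3, t4, lx, ly, ll = sample_subregion_101(0, 0, 1.0, 1.0, 3.0, 3.0)
assert (t3, t4) == (1.0, 1.0) and (lx, ly) == (1.0, 1.0)
```

At gaze (1, 1) the third side collapses to 0, which is consistent with the FIG. 7 case in which sub-area 101 has zero area.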
  • FIG. 8 is a schematic structural diagram of an image rendering apparatus for virtual reality provided by at least some embodiments of the present disclosure. As shown in FIG. 8 , further embodiments of the present disclosure provide an image rendering apparatus for virtual reality, including: a gaze point projection module, an adjustment module, a rendering engine, and a splicing module.
  • the gaze point projection module is configured to obtain a gaze point position corresponding to the gaze point on the image to be displayed according to the gaze point of the human eye on the display screen of the virtual reality device.
  • the rendering engine is configured to determine a first sampling area and a second sampling area of the image to be displayed according to the gaze point position; perform first resolution sampling on the first sampling area to obtain a first display area; and perform second resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area. For example, the resolution of the second sampling area is greater than the resolution of the second display area.
  • the rendering engine includes a virtual camera, and performs, according to the rendering model, first resolution sampling on the first sampling region and second resolution sampling on the image to be displayed.
  • the rendering engine can be further configured to load a rendering model based on the gaze point position, i.e., to determine a rendering model, where the rendering model includes an original resolution sampling region, a compressed resolution sampling region, and a resolution compression multiple of the compressed resolution sampling region.
  • the rendering engine is also used to obtain the original rendering model, which includes the original native resolution sampling region and the original compressed resolution sampling region.
  • the adjustment module is configured to adjust a center point position of the original original resolution sampling area and a resolution compression multiple of the original compression resolution sampling area according to the gaze point position to determine a rendering model.
  • the splicing module is configured to stitch the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device.
  • the components of the image rendering apparatus shown in FIG. 8 are merely exemplary and not limiting, and the image rendering apparatus may have other components depending on actual application needs.
  • FIG. 9 shows a schematic diagram of a virtual reality oriented image rendering system provided by at least some embodiments of the present disclosure.
  • an image rendering system for virtual reality including: a virtual reality device and an image rendering device; the image rendering device includes: a gaze point projection module, an adjustment module, a rendering engine, and a splicing module.
  • the gaze point projection module is configured to obtain a position of the gaze point corresponding to the image to be displayed on the display screen according to the gaze point position of the human eye on the display screen of the virtual reality device.
  • the rendering engine is used to load the rendering model, wherein the rendering model is pre-configured with an original resolution sampling area, a compressed resolution sampling area, and a horizontal and/or vertical resolution compression multiple of the compressed resolution sampling area.
  • the adjustment module is configured to adjust a center point position of the original resolution sampling area and a horizontal and/or vertical resolution compression multiple of the compression resolution sampling area according to the position of the fixation point corresponding to the image to be displayed on the display screen.
  • the rendering engine is configured to perform original resolution sampling on the original resolution sampling area of the image and perform compression resolution sampling on the compressed resolution sampling area according to the adjusted rendering model.
  • the splicing module is configured to stitch the sampled original resolution sampling area and the compressed resolution sampling area to obtain an image to be transmitted to the virtual reality device.
  • As shown in FIG. 9, further embodiments of the present disclosure provide an image rendering system for virtual reality, including a virtual reality device and an image rendering device.
  • the image rendering device is the image rendering device of any of the above embodiments of the present disclosure.
  • the virtual reality device may be a head mounted display device or the like.
  • the head mounted display device is configured to acquire a gaze point of the human eye on the display screen of the virtual reality device and receive an output image transmitted by the image rendering device.
  • the virtual reality device is also configured to acquire an original image of the scene.
  • the virtual reality device or the image rendering device may perform an anti-distortion process on the original image to obtain an image to be displayed.
  • FIG. 10 is a block diagram showing the structure of a computer device according to at least some embodiments of the present disclosure. Another embodiment of the present disclosure provides a computer device.
  • a computer device includes a memory and a processor.
  • the memory is configured to store a computer program.
  • the processor is configured to run a computer program.
  • The image rendering method described in any of the above embodiments is implemented when the computer program is executed by the processor.
  • the processor can be a central processing unit (CPU) or other form of processing unit with data processing capabilities and/or program execution capabilities, such as a field programmable gate array (FPGA) or a tensor processing unit (TPU).
  • the memory can include any combination of one or more computer program products, which can include various forms of computer readable storage media such as volatile memory and/or nonvolatile memory.
  • Volatile memory can include, for example, random access memory (RAM) and/or caches and the like.
  • the non-volatile memory may include, for example, a read only memory (ROM), a hard disk, an erasable programmable read only memory (EPROM), a portable compact disk read only memory (CD-ROM), a USB memory, a flash memory, and the like.
  • One or more computer instructions can be stored on the memory, and the processor can execute the computer instructions to perform various functions.
  • Various applications and various data used and/or generated by the applications, and the like, can also be stored in the computer readable storage medium.
  • a computer device includes a central processing unit (CPU) that can perform various appropriate actions and processes according to a program stored in a read only memory (ROM) or a program loaded from a storage portion into a random access memory (RAM).
  • the CPU, ROM, and RAM are connected to each other via a bus.
  • An input/output (I/O) interface is also connected to the bus.
  • the following components are connected to the I/O interface: an input portion including a keyboard, a mouse, and the like; an output portion including a liquid crystal display (LCD), a speaker, and the like; a storage portion including a hard disk and the like; and a communication portion including a network interface card such as a LAN card, a modem, and the like. The communication portion performs communication processing via a network such as the Internet.
  • the drive is also connected to the I/O interface as needed.
  • a removable medium, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive as needed, so that a computer program read therefrom is installed into the storage portion as needed.
  • the process described in the above flowchart can be implemented as a computer software program.
  • the present embodiment includes a computer program product comprising a computer program tangibly embodied on a computer readable medium, the computer program comprising program code for executing the method illustrated in the flowchart.
  • the computer program can be downloaded and installed from the network via a communication portion, and/or installed from a removable medium.
  • each block in the flowchart or diagram may represent a module, a program segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may also occur in an order different from that illustrated in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending upon the functionality involved.
  • each block of the schematic and/or flow diagrams, as well as combinations of blocks in the schematic and/or flowcharts, can be implemented in a dedicated hardware-based system that performs the specified functions or operations. Or it can be implemented by a combination of dedicated hardware and computer instructions.
  • the units described in this embodiment may be implemented by software or by hardware.
  • the described units may also be provided in a processor, for example, described as: a processor including a gaze point projection module, an adjustment module, a rendering engine, and a splicing module.
  • the names of these units do not in any way constitute a limitation on the unit itself.
  • the gaze point projection module can also be described as an "image gaze point acquisition module".
  • Some embodiments of the present disclosure further provide a non-volatile computer storage medium, which may be the non-volatile computer storage medium included in the computer device of the above embodiment, or may be a non-volatile computer storage medium that exists alone and is not assembled into a terminal.
  • the above non-volatile computer storage medium stores one or more programs, and the image rendering method described in any of the above embodiments may be implemented when the one or more programs are executed by a device.
  • the orientation or positional relationship indicated by terms such as "upper" and "lower" is based on the orientation or positional relationship shown in the drawings, and is merely for convenience and simplicity of describing the present disclosure; it is not intended to limit the present disclosure.
  • the terms "mounted," "connected," and "coupled" are used in a broad sense and may denote, for example, a fixed connection, a detachable connection, or an integral connection; a mechanical connection or an electrical connection; a direct connection, an indirect connection through an intermediate medium, or an internal communication between two components.
  • the specific meanings of the above terms in the present disclosure can be understood by those skilled in the art on a case-by-case basis.


Abstract

The present disclosure provides an image rendering method, apparatus, system, computer-readable storage medium, and image display method. The image rendering method includes: acquiring an image to be displayed (S10); obtaining, according to the gaze point of a human eye on the display screen of a virtual reality device, the gaze point position corresponding to the gaze point on the image to be displayed (S11); determining a first sampling area and a second sampling area of the image to be displayed according to the gaze point position (S12); performing first resolution sampling on the first sampling area to obtain a first display area (S13); performing second resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, where the resolution of the second sampling area is greater than the resolution of the second display area (S14); and stitching the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device (S15).

Description

Image rendering method, apparatus, system, storage medium, image display method, and computer device

This application claims priority to Chinese Patent Application No. 201810466112.4, filed on May 16, 2018, the entire disclosure of which is incorporated herein by reference as part of this application.

Technical Field

Embodiments of the present disclosure relate to a virtual-reality-oriented image rendering method, an image rendering apparatus, an image rendering system, a computer-readable storage medium, a computer device, and an image display method.

Background

At present, requirements on display definition, especially for virtual reality (VR)/augmented reality (AR), are becoming ever higher, and the amount of image information that a computer device outputs to a display device is growing accordingly. Rendering a scene at high-definition resolution places heavy demands on software computing speed, computing resource consumption, and the amount of image data to be transmitted. As for the human eye, because the concentration of cone cells, which perceive color and detail, varies across the retina, the eye only takes in detail at the center of the gaze point; in an image, any region more than 5° away from the gaze area gradually loses perceived sharpness.

With the development of display technology, algorithm-based image compression can no longer meet demand because of its poor real-time performance and heavy consumption of computing resources. While saving transmission bandwidth during image transmission, how to improve the real-time performance and computational efficiency of compressed image transmission has become an urgent problem.
Summary

At least some embodiments of the present disclosure provide a virtual-reality-oriented image rendering method, including:

acquiring an image to be displayed;

obtaining, according to the gaze point of a human eye on a display screen of a virtual reality device, a gaze point position corresponding to the gaze point on the image to be displayed;

determining a first sampling area and a second sampling area of the image to be displayed according to the gaze point position;

performing first resolution sampling on the first sampling area to obtain a first display area;

performing second resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, where the resolution of the second sampling area is greater than the resolution of the second display area;

stitching the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device.
For example, in the image rendering method provided by some embodiments of the present disclosure, performing first resolution sampling on the first sampling area and performing second resolution sampling on the image to be displayed include:

determining a rendering model according to the gaze point position, where the rendering model includes an original resolution sampling area, a compressed resolution sampling area, and a resolution compression multiple of the compressed resolution sampling area, the original resolution sampling area corresponds to the first sampling area, and the compressed resolution sampling area corresponds to the second sampling area;

performing, according to the rendering model, first resolution sampling on the first sampling area and second resolution sampling on the image to be displayed.

For example, in the image rendering method provided by some embodiments of the present disclosure, determining the rendering model according to the gaze point position includes:

acquiring an original rendering model, where the original rendering model includes an original original resolution sampling area and an original compressed resolution sampling area;

adjusting, according to the gaze point position, the center point position of the original original resolution sampling area and the resolution compression multiple of the original compressed resolution sampling area, to obtain the rendering model.

For example, in the image rendering method provided by at least some embodiments of the present disclosure, the resolution compression multiple of the original compressed resolution sampling area is preset and adjusted according to the positional relationship between the original compressed resolution sampling area and the original original resolution sampling area.

For example, in the image rendering method provided by at least some embodiments of the present disclosure, the resolution compression multiple of the compressed resolution sampling area includes a horizontal resolution compression multiple and/or a vertical resolution compression multiple.

For example, in the image rendering method provided by at least some embodiments of the present disclosure, the original resolution sampling area and the compressed resolution sampling area form a nine-grid structure, the nine-grid structure includes a plurality of regions in three rows and three columns, and the original resolution sampling area is located in the second row and second column of the nine-grid structure.

For example, in the image rendering method provided by at least some embodiments of the present disclosure, the size of the first sampling area, the size of the original original resolution sampling area, and the size of the original resolution sampling area are the same.
For example, in the image rendering method provided by at least some embodiments of the present disclosure, adjusting the center point position of the original original resolution sampling area includes:

when the center point of the original original resolution sampling area is at the gaze point position, judging whether the original original resolution sampling area exceeds the boundary of the image to be displayed:

if not, setting the center point of the original original resolution sampling area to the gaze point position;

if so, setting the center point of the original original resolution sampling area to the position closest to the gaze point position under the condition that the original original resolution sampling area does not exceed the boundary of the image to be displayed.

For example, in the image rendering method provided by at least some embodiments of the present disclosure, performing second resolution sampling on the image to be displayed according to the rendering model includes:

performing the second resolution sampling on the image to be displayed according to the rendering model, to obtain an intermediate image to be displayed;

dividing the intermediate image to be displayed according to the positions and proportional relationship of the first sampling area and the second sampling area, to obtain a first intermediate display area corresponding to the first sampling area and a second intermediate display area corresponding to the second sampling area, the second display area including the second intermediate display area.

For example, in the image rendering method provided by at least some embodiments of the present disclosure, acquiring the image to be displayed includes: acquiring an original image; and performing anti-distortion processing on the original image to obtain the image to be displayed.

For example, in the image rendering method provided by at least some embodiments of the present disclosure, the resolution of the first sampling area is equal to the resolution of the first display area.
At least some embodiments of the present disclosure further provide an image display method, including:

in a rendering engine:

acquiring an image to be displayed;

obtaining, according to the gaze point of a human eye on a display screen of a virtual reality device, a gaze point position corresponding to the gaze point on the image to be displayed;

determining a first sampling area and a second sampling area of the image to be displayed according to the gaze point position;

performing first resolution sampling on the first sampling area to obtain a first display area;

performing second resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, where the resolution of the second sampling area is greater than the resolution of the second display area;

stitching the first display area and the second display area to obtain an output image; and

outputting the output image to a virtual reality device; and

in the virtual reality device,

stretching the output image by the virtual reality device to obtain a stretched image;

displaying the stretched image on the display screen of the virtual reality device.

For example, in the image display method provided by at least some embodiments of the present disclosure, the output image includes the first display area and the second display area, and stretching the output image by the virtual reality device to obtain the stretched image includes: stretching the second display area in the output image by the virtual reality device to obtain a stretched display area; and determining the stretched image according to the first display area and the stretched display area.
Some embodiments of the present disclosure further provide a virtual-reality-oriented image rendering apparatus, including a gaze point projection module, a rendering engine, and a splicing module;

the gaze point projection module is configured to obtain, according to the gaze point of a human eye on a display screen of a virtual reality device, a gaze point position corresponding to the gaze point on an image to be displayed;

the rendering engine is configured to:

determine a first sampling area and a second sampling area of the image to be displayed according to the gaze point position;

perform first resolution sampling on the first sampling area to obtain a first display area;

perform second resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, where the resolution of the second sampling area is greater than the resolution of the second display area;

the splicing module is configured to stitch the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device.

For example, in the image rendering apparatus provided by at least some embodiments of the present disclosure, the rendering engine is further configured to load a rendering model according to the gaze point position, i.e., to determine a rendering model, where the rendering model includes an original resolution sampling area, a compressed resolution sampling area, and a resolution compression multiple of the compressed resolution sampling area, the original resolution sampling area corresponds to the first sampling area, and the compressed resolution sampling area corresponds to the second sampling area; and

to perform, according to the rendering model, first resolution sampling on the first sampling area and second resolution sampling on the image to be displayed.

For example, the image rendering apparatus provided by at least some embodiments of the present disclosure further includes an adjustment module; the rendering engine is further configured to acquire an original rendering model, where the original rendering model includes an original original resolution sampling area and an original compressed resolution sampling area, and the adjustment module is configured to adjust, according to the gaze point position, the center point position of the original original resolution sampling area and the resolution compression multiple of the original compressed resolution sampling area, to determine the rendering model.

Some embodiments of the present disclosure further provide a virtual-reality-oriented image rendering system, including a virtual reality device and the image rendering apparatus of any of the above embodiments;

the virtual reality device is configured to acquire the gaze point of a human eye on the display screen of the virtual reality device and receive the output image transmitted by the image rendering apparatus.

Some embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the virtual-reality-oriented image rendering method provided by any embodiment of the present disclosure.

Some embodiments of the present disclosure further provide a computer device, including a memory configured to store a computer program, and a processor configured to run the computer program, where the processor, when executing the computer program, implements the virtual-reality-oriented image rendering method provided by any embodiment of the present disclosure.
Brief Description of the Drawings

To describe the technical solutions of the embodiments of the present disclosure more clearly, the drawings of the embodiments are briefly introduced below. Obviously, the drawings described below relate only to some embodiments of the present disclosure and do not limit the present disclosure.

FIG. 1A shows a flowchart of a virtual-reality-oriented image rendering method provided by at least some embodiments of the present disclosure;

FIG. 1B shows another flowchart of a virtual-reality-oriented image rendering method provided by at least some embodiments of the present disclosure;

FIG. 2A shows a schematic diagram of the implementation principle of a rendering process provided by at least some embodiments of the present disclosure;

FIG. 2B shows a schematic diagram of a rendering model provided by at least some embodiments of the present disclosure;

FIG. 3 shows how an image changes during a virtual-reality-oriented image rendering method provided by at least some embodiments of the present disclosure;

FIG. 4A shows a schematic diagram of an original rendering model provided by some embodiments of the present disclosure;

FIG. 4B shows a schematic diagram of a rendering model provided by some embodiments of the present disclosure;

FIG. 5A is a schematic diagram of the positional relationship between a gaze point position on an image to be displayed and the original resolution sampling area, provided by some embodiments of the present disclosure;

FIG. 5B is a schematic diagram of another positional relationship between a gaze point position and the original resolution sampling area, provided by some embodiments of the present disclosure;

FIG. 5C is a schematic diagram after the center of the original resolution sampling area is adjusted, provided by some embodiments of the present disclosure;

FIG. 6 is a schematic flowchart of an image display method provided by some embodiments of the present disclosure;

FIG. 7 shows a schematic diagram of rendering models and output images corresponding to different gaze point positions, provided by some embodiments of the present disclosure;

FIG. 8 shows a schematic structural diagram of a virtual-reality-oriented image rendering apparatus provided by at least some embodiments of the present disclosure;

FIG. 9 shows a schematic diagram of a virtual-reality-oriented image rendering system provided by at least some embodiments of the present disclosure;

FIG. 10 shows a schematic structural diagram of a computer device provided by at least some embodiments of the present disclosure.
Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. Based on the described embodiments, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.

Unless otherwise defined, the technical or scientific terms used in the present disclosure shall have the ordinary meaning understood by a person with general skill in the field to which the present disclosure belongs. "First", "second", and similar words used in the present disclosure do not denote any order, quantity, or importance, but are merely used to distinguish different components. "Include", "comprise", and similar words mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. "Connect", "connected", and similar words are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Up", "down", "left", "right", and the like are only used to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.

To explain the present disclosure more clearly, the present disclosure is further described below with reference to some embodiments and the drawings. Similar components in the drawings are denoted by the same reference numerals. A person skilled in the art should understand that the content specifically described below is illustrative rather than restrictive and shall not limit the protection scope of the present disclosure.
FIG. 1A shows a flowchart of a virtual-reality-oriented image rendering method provided by at least some embodiments of the present disclosure; FIG. 1B shows another flowchart of a virtual-reality-oriented image rendering method provided by at least some embodiments of the present disclosure; FIG. 2A shows a schematic diagram of the implementation principle of a rendering process provided by at least some embodiments of the present disclosure; FIG. 2B shows a schematic diagram of a rendering model provided by at least some embodiments of the present disclosure; and FIG. 3 shows how an image changes during a virtual-reality-oriented image rendering method provided by at least some embodiments of the present disclosure.

Some embodiments of the present disclosure provide an image rendering method that can be applied to a virtual reality device. For example, as shown in FIG. 1B, some embodiments of the present disclosure provide a virtual-reality-oriented image rendering method including:

obtaining, according to the gaze point position of a human eye on the display screen of a virtual reality device, the position corresponding to the gaze point on the image to be displayed on the display screen, where acquiring the position of the gaze point on the display screen can be implemented by the virtual reality device to which the display screen belongs, through corresponding hardware or software based on eye-tracking technology;

loading a rendering model, where the rendering model is preset with an original resolution sampling area, a compressed resolution sampling area, and horizontal and/or vertical resolution compression multiples of the compressed resolution sampling area;

adjusting, according to the position corresponding to the gaze point on the image to be displayed on the display screen, the center point position of the original resolution sampling area and the horizontal and/or vertical resolution compression multiples of the compressed resolution sampling area;

performing, according to the adjusted rendering model, original resolution sampling on the original resolution sampling area of the image and compressed resolution sampling on the compressed resolution sampling area;

stitching the sampled original resolution sampling area and compressed resolution sampling area to obtain an image to be transmitted to the virtual reality device.
For example, as shown in FIG. 1A, other embodiments of the present disclosure provide a virtual-reality-oriented image rendering method including:

S10: acquiring an image to be displayed;

S11: obtaining, according to the gaze point of a human eye on the display screen of a virtual reality device, the gaze point position corresponding to the gaze point on the image to be displayed;

S12: determining a first sampling area and a second sampling area of the image to be displayed according to the gaze point position;

S13: performing first resolution sampling on the first sampling area to obtain a first display area;

S14: performing second resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area;

S15: stitching the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device.

The virtual-reality-oriented image rendering method provided by embodiments of the present disclosure can be implemented in a rendering engine. By adjusting the rendering model in the rendering engine, the non-high-definition region of the image (i.e., the non-gaze region) can be compressed within the rendering engine, reducing the transmission bandwidth from the software side to the display device and saving bandwidth during image transmission. This solves the problem that, due to bandwidth limitations, directly transmitting a 4K image puts too much pressure on the hardware to achieve real-time display at high resolution and high refresh rate. Compared with algorithm-based image compression, the image rendering method provided by the present disclosure offers high real-time performance, fast computation, low computing resource consumption, and high computational efficiency, enabling real-time display at high resolution and high refresh rate.
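Steps S10-S15 can be sketched end-to-end for a toy image. This is an illustrative approximation only: it stands in for the rendering engine with plain Python lists, uses a uniform 3×3 tiling, and substitutes nearest-neighbour decimation for the interpolation-based compressed sampling; `render_frame` and its parameters are assumptions, not the patent's API:

```python
def render_frame(image, gaze_tile, grid=3, k=4):
    """Foveated-rendering sketch: keep the gaze tile at full
    resolution (S13) and downsample the whole image k-fold (S14).
    image: 2D list of pixel values; gaze_tile: (row, col) of the
    high-definition tile in a grid x grid layout."""
    h, w = len(image), len(image[0])
    th, tw = h // grid, w // grid
    r0, c0 = gaze_tile[0] * th, gaze_tile[1] * tw
    # S13: full-resolution crop of the gaze tile.
    fovea = [row[c0:c0 + tw] for row in image[r0:r0 + th]]
    # S14: nearest-neighbour decimation of the entire image.
    low = [row[::k] for row in image[::k]]
    return fovea, low

img = [[10 * r + c for c in range(6)] for r in range(6)]
tile, low = render_frame(img, (1, 1), grid=3, k=2)
assert tile == [[22, 23], [32, 33]]   # centre tile kept at full resolution
assert len(low) == 3 and len(low[0]) == 3
```

The output image of S15 would then be assembled by pasting `fovea` over the corresponding tile of `low` according to the rendering model.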
For example, in step S10, the image to be displayed may be acquired by the rendering engine. In some embodiments, step S10 may include: acquiring an original image; and performing anti-distortion processing on the original image to obtain the image to be displayed. As shown in FIG. 1B, after the program starts running, the rendering engine can obtain the rendered picture of the current scene, i.e., the original image; the rendering engine can then perform anti-distortion processing on the rendered picture to obtain the image to be displayed. In other words, the image to be displayed is an anti-distortion image.

When applied to a virtual reality device, especially a virtual reality head-mounted display device, the image needs to be anti-distorted for normal display because the display screen of the device is usually provided with a lens. By adding the above anti-distortion step to the virtual-reality-oriented image rendering method provided in this embodiment, the method can be applied to image transmission from a computer apparatus to a virtual reality device.

For example, the size of the image to be displayed may be the same as the size of the original image.

For example, the original image may be a color image or a grayscale image, and may have various shapes, such as a rectangle, a circle, or a trapezoid. The embodiments of the present disclosure are described below taking the case where both the original image and the image to be displayed are rectangular as an example.

For example, in step S11, the position corresponding to the gaze point on the image to be displayed on the display screen is obtained according to the gaze point position of the human eye on the display screen of the virtual reality device. For example, acquiring the gaze point position on the display screen can be implemented by the virtual reality device to which the display screen belongs, through corresponding hardware or software based on eye-tracking technology. The virtual reality device may track the human eye's line of sight according to feature changes of the eyeball and its surroundings, or according to changes of the iris angle, or by actively projecting light beams such as infrared rays onto the iris to extract eyeball features.

For example, in some examples, the virtual reality device tracks the human eye's line of sight through corresponding software based on eye-tracking technology. As shown in FIG. 1B, after the program starts running, the virtual reality device can run an eye-tracking program to acquire the current gaze point of the eyeball.

For example, in step S12, the size of the first sampling area may be preset by the user, and during program execution, the size of the first sampling area remains unchanged for different images to be displayed.

For example, the second sampling area may be determined according to the gaze point position and the size of the first sampling area.
For example, in steps S13 and S14, performing first resolution sampling on the first sampling area and second resolution sampling on the image to be displayed includes: loading, i.e., determining, a rendering model according to the gaze point position, where the rendering model includes an original resolution sampling area, a compressed resolution sampling area, and a resolution compression multiple of the compressed resolution sampling area, the original resolution sampling area corresponds to the first sampling area, and the compressed resolution sampling area corresponds to the second sampling area; and performing, according to the rendering model, first resolution sampling on the first sampling area and second resolution sampling on the image to be displayed.

For example, in steps S13 and S14, the resolution of the first sampling area is equal to the resolution of the first display area, i.e., the size of the first display area is the same as the size of the first sampling area. The resolution of the second sampling area is greater than the resolution of the second display area, i.e., the size of the second sampling area is larger than the size of the second display area. In other words, in steps S13 and S14, original resolution sampling is performed on the first sampling area to obtain the first display area, and compressed resolution sampling is performed on the second sampling area to obtain the second display area.

For example, compressed resolution sampling may be implemented by an interpolation algorithm, such as Lagrange interpolation, Newton interpolation, or Hermite interpolation.

For example, as shown in FIG. 1B, in step S13, the anti-distortion image (i.e., the image to be displayed) can be sampled in the high-definition region (i.e., the first sampling area) according to the gaze point position; in step S14, low-resolution sampling can be performed on the entire image to be displayed, to obtain a compressed anti-distortion image of the whole region (i.e., the region of the entire image to be displayed).

For example, as shown in FIG. 2A, in some embodiments, the resolution of the image to be displayed 20 may be 4320*4800, and the resolution of the output image 30 obtained by processing the image to be displayed 20 according to the embodiments of the present disclosure is 2160*2400.

For example, the output image 30 includes a first display area and a second display area. As shown in FIG. 2A, in some examples, the output image 30 includes nine output sub-areas; the first display area is output sub-area 15, and the second display area includes eight output sub-areas, i.e., output sub-areas 11, 12, 13, 14, 16, 17, 18, and 19.

For example, output sub-area 15 of the output image 30 corresponds to sub-area 5 to be displayed of the image to be displayed 20, output sub-area 11 corresponds to sub-area 1 to be displayed, output sub-area 12 corresponds to sub-area 2 to be displayed, output sub-area 13 corresponds to sub-area 3 to be displayed, output sub-area 14 corresponds to sub-area 4 to be displayed, output sub-area 16 corresponds to sub-area 6 to be displayed, output sub-area 17 corresponds to sub-area 7 to be displayed, output sub-area 18 corresponds to sub-area 8 to be displayed, and output sub-area 19 corresponds to sub-area 9 to be displayed. The resolution of output sub-area 15 of the output image 30 is the same as the resolution of sub-area 5 to be displayed of the image to be displayed 20. The resolution of each output sub-area in the second display area is smaller than the resolution of the corresponding sub-area to be displayed in the second sampling area; for example, the resolution of output sub-area 11 of the output image 30 is smaller than the resolution of sub-area 1 to be displayed of the image to be displayed 20.
For example, the image to be displayed 20 has a rectangular shape. The first sampling area may be located at any corner of the rectangle, or on any side of the rectangle; alternatively, the first sampling area may be located in the middle of the rectangle, i.e., not in contact with the sides or corners of the image to be displayed 20. The embodiments of the present disclosure do not limit the specific position of the second sampling area.

For example, the second sampling area may include a plurality of sub-areas to be displayed.

For example, as shown in FIG. 2A, in some examples, the first sampling area 21 may be located in the middle of the image to be displayed. In this case, the image to be displayed 20 is divided into nine sub-areas to be displayed; the first sampling area 21 is sub-area 5 to be displayed, and the second sampling area 22 includes eight sub-areas to be displayed, i.e., sub-areas 1, 2, 3, 4, 6, 7, 8, and 9 to be displayed. The first sampling area 21 and the second sampling area 22 form a nine-grid structure; the nine-grid structure includes a plurality of regions in three rows and three columns, and the first sampling area 21 is located at the center of the nine-grid structure, i.e., in the second row and second column.

It is worth noting that each sub-area to be displayed can be further divided.

For example, as shown in FIG. 2A, sub-areas 1, 3, 7, and 9 to be displayed of the second sampling area 22 are not adjacent to the first sampling area 21 (i.e., sub-area 5 to be displayed); sub-areas 4 and 6 to be displayed of the second sampling area 22 are adjacent to the first sampling area 21 in the second direction; and sub-areas 2 and 8 to be displayed of the second sampling area 22 are adjacent to the first sampling area 21 in the first direction.

For example, the first direction and the second direction are perpendicular to each other.

For example, as shown in FIG. 2B, in other examples, the first sampling area 21 may be located at any corner of the rectangle. In this case, as shown in FIG. 2B, the image to be displayed 20 may be divided into four sub-areas to be displayed; the first sampling area 21 is sub-area A to be displayed, and the second sampling area 22 includes three sub-areas to be displayed, i.e., sub-areas B, C, and D to be displayed. However, the present disclosure is not limited thereto; the first sampling area 21 may also be sub-area C to be displayed, in which case the second sampling area 22 includes sub-areas A, B, and D to be displayed.

For example, as shown in FIG. 2B, sub-area B to be displayed of the second sampling area 22 is adjacent to the first sampling area 21 (i.e., sub-area A to be displayed) in the first direction, sub-area C to be displayed is adjacent to the first sampling area 21 in the second direction, and sub-area D to be displayed is not adjacent to the first sampling area 21.

It should be noted that, in the present disclosure, "adjacent" may mean that a sub-area to be displayed in the second sampling area 22 (for example, sub-area B or sub-area C to be displayed in FIG. 2B) shares at least one side with the first sampling area 21, and "not adjacent" means that a sub-area to be displayed in the second sampling area 22 (for example, sub-area D to be displayed in FIG. 2B) shares no side with the first sampling area 21.

For example, step S14 may include: performing second resolution sampling on the image to be displayed according to the rendering model, to obtain an intermediate image to be displayed; and dividing the intermediate image to be displayed according to the positions and proportional relationship of the first sampling area and the second sampling area, to obtain a first intermediate display area corresponding to the first sampling area and a second intermediate display area corresponding to the second sampling area, the second display area including the second intermediate display area.

For example, as shown in FIG. 3, in Step 3, second resolution sampling is performed on the image to be displayed 20 to obtain the intermediate image to be displayed 26, which is the image obtained by compressing the image to be displayed 20 as a whole. Then, according to the positions and proportional relationship of the first sampling area and the second sampling area, the intermediate image to be displayed 26 can be divided. For example, for the example shown in FIG. 2A, i.e., where the second sampling area includes sub-areas 1, 2, 3, 4, 6, 7, 8, and 9 to be displayed and the first sampling area is sub-area 5 to be displayed, the intermediate image to be displayed 26 can also be divided into nine intermediate sub-areas arranged in three rows and three columns; the first intermediate display area includes the intermediate sub-area in the second row and second column, and the second intermediate display area includes the remaining eight intermediate sub-areas, which correspond one-to-one to the eight sub-areas to be displayed of the second sampling area. For example, the intermediate sub-area of the second intermediate display area in the first row and first column corresponds to sub-area 1 to be displayed of the second sampling area; that is, its position in the intermediate image to be displayed 26 and its proportional relationship to the other intermediate sub-areas are the same as the position of sub-area 1 to be displayed in the image to be displayed 20 and the proportional relationship of sub-area 1 to be displayed to the other sub-areas to be displayed.
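The division of the intermediate image by the same positions and proportions as the sampling areas can be sketched as follows; `split_nine` and the explicit cut positions are hypothetical illustration, not the patent's implementation:

```python
def split_nine(image, row_bounds, col_bounds):
    """Split a 2D pixel list into a 3x3 grid of tiles using the same
    proportions as the sampling areas. row_bounds/col_bounds are the
    interior cut positions, e.g. (2, 4) for a 6-row image."""
    r1, r2 = row_bounds
    c1, c2 = col_bounds
    rows = [slice(0, r1), slice(r1, r2), slice(r2, None)]
    cols = [slice(0, c1), slice(c1, c2), slice(c2, None)]
    # tiles[i][j] is the tile in row i, column j of the nine-grid.
    return [[[line[cs] for line in image[rs]] for cs in cols]
            for rs in rows]

img = [[10 * r + c for c in range(6)] for r in range(6)]
tiles = split_nine(img, (2, 4), (2, 4))
assert tiles[1][1] == [[22, 23], [32, 33]]   # centre tile
```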
FIG. 4A is a schematic diagram of an original rendering model provided by some embodiments of the present disclosure; FIG. 4B is a schematic diagram of a rendering model provided by some embodiments of the present disclosure.

For example, in some embodiments, in step S15, once the rendering model is determined, the first display area is placed at the position corresponding to the original resolution sampling area, and the second display area is placed at the position corresponding to the compressed resolution sampling area, to obtain the output image. For example, as shown in FIG. 1B, the high-definition picture (i.e., the first display area) and the region of the original image corresponding to the second display area are pasted onto the multi-resolution rendering model in proportion, to obtain the output image.

For example, determining the rendering model according to the gaze point position includes: acquiring an original rendering model, where the original rendering model includes an original original resolution sampling area and an original compressed resolution sampling area; and adjusting, according to the gaze point position, the center point position of the original original resolution sampling area and the resolution compression multiple of the original compressed resolution sampling area, to obtain the rendering model.

For example, as shown in FIG. 1B, before the program starts running, a multi-resolution original rendering model can be created with modeling software and then imported into the rendering engine for later use. While the program runs, the rendering engine can modify the shape of the original rendering model according to the gaze point position, to obtain the required rendering model. For example, modifying the shape of the original rendering model can include adjusting the center point position of the original original resolution sampling area and the resolution compression multiple of the original compressed resolution sampling area. For example, the rendering model and the original rendering model may both be rectangular.

For example, the size of the rendering model is the same as the size of the original rendering model, and the size of the rendering model is also the same as the size of the output image.

For example, in some optional implementations of this embodiment, the resolution compression multiple of the original compressed resolution sampling area can be preset and adjusted according to the positional relationship between the original compressed resolution sampling area and the original original resolution sampling area.

For example, the resolution compression multiple of the compressed resolution sampling area may include a horizontal resolution compression multiple and/or a vertical resolution compression multiple. For example, as shown in FIG. 4A and FIG. 4B, the vertical resolution compression multiple may denote the resolution compression multiple along the first direction, and the horizontal resolution compression multiple may denote the resolution compression multiple along the second direction.

It should be noted that, in the embodiments of the present disclosure, a resolution compression multiple greater than 1 means that the original compressed resolution sampling area is compressed, while a resolution compression multiple smaller than 1 means that the original compressed resolution sampling area is stretched.

For example, in the rendering model, the original resolution sampling area is the region corresponding to the position of the gaze point on the image to be displayed, and the other regions that the user is not focusing on (i.e., non-gaze regions) are set as the compressed resolution sampling area. In other words, the original resolution sampling area corresponds to the first display area of the output image, i.e., output sub-area 15 shown in FIG. 2A, and the compressed resolution sampling area corresponds to the second display area of the output image, i.e., output sub-areas 11, 12, 13, 14, 16, 17, 18, and 19 shown in FIG. 2A. To ensure that the virtual camera performs original resolution sampling on the original resolution sampling area of the image, when presetting and adjusting the horizontal and vertical resolution compression multiples of each compressed resolution sampling area, the positional relationship between each compressed resolution sampling area and the original resolution sampling area must be taken into account, so as to achieve normal local image resolution compression and ensure that the compressed overall image has the same shape as the original image (for example, if the original image is rectangular, the compressed image is also rectangular).

For example, the size of the original original resolution sampling area, the size of the original resolution sampling area, the size of the first display area, and the size of the first sampling area are all the same.
For example, in some optional implementations of this embodiment, the original resolution sampling area and the compressed resolution sampling area form a nine-grid structure; for example, the original resolution sampling area is located in the middle cell of the nine-grid. Such a rule makes it convenient to adjust the center point position of the original resolution sampling area and the horizontal and/or vertical resolution compression multiples of the compressed resolution sampling area. Moreover, in this case, the correspondence between the original resolution sampling area and the gaze point is also more precise.

For example, as shown in FIG. 4B, in some examples, the rendering model 100 includes nine sampling sub-areas arranged in three rows and three columns; that is, the original resolution sampling area and the compressed resolution sampling area form a nine-grid structure of nine sampling sub-areas in three rows and three columns. The original resolution sampling area is located in the middle cell of the nine-grid, i.e., in the second row and second column of the nine-grid structure. In the example shown in FIG. 4B, the original resolution sampling area includes sampling sub-area 105, i.e., sampling sub-area 105 is the original resolution sampling area; the compressed resolution sampling area includes eight sampling sub-areas, i.e., sampling sub-areas 101, 102, 103, 104, 106, 107, 108, and 109.

For example, each of sampling sub-areas 101, 103, 107, and 109 may have both a horizontal resolution compression multiple and a vertical resolution compression multiple; that is, each of them can be compressed in both the first direction and the second direction. Sampling sub-areas 102 and 108 may only have a vertical resolution compression multiple; that is, they can be compressed in the first direction but not in the second direction. Sampling sub-areas 104 and 106 may only have a horizontal resolution compression multiple; that is, they are not compressed in the first direction but can be compressed in the second direction.

For example, the vertical resolution compression multiples of sampling sub-areas 101, 102, and 103 are all the same, and the vertical resolution compression multiples of sampling sub-areas 107, 108, and 109 are also all the same.

For example, the horizontal resolution compression multiples of sampling sub-areas 101, 104, and 107 are all the same, and the horizontal resolution compression multiples of sampling sub-areas 103, 106, and 109 are also all the same.
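The directions in which each sub-area of the nine-grid may be compressed can be summarised as a lookup table. A sketch with hypothetical identifiers keyed by the figure's reference numerals:

```python
# Which directions each sub-area may be compressed in, per the
# nine-grid layout: corner tiles both ways, edge tiles one way,
# the native-resolution centre tile never.
COMPRESS = {
    101: {"horizontal", "vertical"}, 103: {"horizontal", "vertical"},
    107: {"horizontal", "vertical"}, 109: {"horizontal", "vertical"},
    102: {"vertical"}, 108: {"vertical"},
    104: {"horizontal"}, 106: {"horizontal"},
    105: set(),  # original resolution sampling area: never compressed
}
assert COMPRESS[102] == {"vertical"}
```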
For example, as shown in FIG. 4A, the original rendering model 200 may include nine original sampling sub-areas, namely original sampling sub-areas 201, 202, 203, 204, 205, 206, 207, 208, and 209, arranged in three rows and three columns; that is, the original original resolution sampling area and the original compressed resolution sampling area may also form a nine-grid structure, with the original original resolution sampling area located in the middle cell of the nine-grid, i.e., original sampling sub-area 205 in the second row and second column. The original compressed resolution sampling area includes eight original sampling sub-areas, i.e., original sampling sub-areas 201, 202, 203, 204, 206, 207, 208, and 209.

For example, the original sampling sub-areas of the original compressed resolution sampling area correspond one-to-one to the sampling sub-areas of the compressed resolution sampling area. For example, original sampling sub-area 201 in the first row and first column of the original compressed resolution sampling area corresponds to sampling sub-area 101 in the first row and first column of the compressed resolution sampling area, original sampling sub-area 202 in the first row and second column corresponds to sampling sub-area 102 in the first row and second column, and so on.

For example, the original original resolution sampling area corresponds to the original resolution sampling area, and the size of the original original resolution sampling area is the same as the size of the original resolution sampling area. That is, original sampling sub-area 205 in FIG. 4A corresponds to and is identical to sampling sub-area 105 in FIG. 4B.

For example, the original original resolution sampling area and the original resolution sampling area may both be rectangular in shape.

For example, as shown in FIG. 4A, the center of the original original resolution sampling area 205 coincides with the center of the original rendering model, while in FIG. 4B, the center of the original resolution sampling area 105 does not coincide with the center of the rendering model.
For example, in some optional implementations of this embodiment, adjusting the center point position of the original resolution sampling area includes:

when the center point of the original resolution sampling area is at the gaze point position, judging whether the original resolution sampling area exceeds the boundary of the image to be displayed:

if not, setting the center point of the original resolution sampling area to the gaze point position;

if so, setting the center point of the original resolution sampling area to the position closest to the gaze point position under the condition that the original resolution sampling area does not exceed the boundary of the image to be displayed.
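The boundary check above amounts to clamping the desired centre into the range where the high-definition area still fits inside the image. A minimal sketch, assuming pixel coordinates and a fixed half-width/half-height of the native-resolution area (all names hypothetical):

```python
def clamp_center(gx, gy, half_w, half_h, img_w, img_h):
    """Place the original resolution sampling area's centre at the
    gaze point (gx, gy), clamped so that the area stays entirely
    inside an img_w x img_h image."""
    cx = min(max(gx, half_w), img_w - half_w)
    cy = min(max(gy, half_h), img_h - half_h)
    return cx, cy

# Gaze near the bottom-right corner of a 4320 x 4800 image with a
# 1440 x 1600 high-definition area: the centre is pulled back inside,
# so two sides of the area coincide with the image boundary (FIG. 5C).
assert clamp_center(4300, 4790, 720, 800, 4320, 4800) == (3600, 4000)
```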
It should be noted that, in the preset of the rendering model, the center point position of the original resolution sampling area corresponds to the center point position of the overall model. When the center point position of the original resolution sampling area is adjusted away from the center point position of the overall model following the gaze point, the horizontal and/or vertical resolution compression multiples of the compressed resolution sampling area are adjusted accordingly.
FIG. 5A is a schematic diagram of the positional relationship between a gaze point position on the image to be displayed and the original resolution sampling area, provided by some embodiments of the present disclosure; FIG. 5B is a schematic diagram of another positional relationship between a gaze point position and the original resolution sampling area, provided by some embodiments of the present disclosure; FIG. 5C is a schematic diagram after the center of the original resolution sampling area is adjusted, provided by some embodiments of the present disclosure.

For example, as shown in FIG. 5A, in some examples, the gaze point position on the image to be displayed 20 is indicated by point E. When the center point of the original resolution sampling area 105 coincides with the gaze point position E, the original resolution sampling area 105 does not exceed the boundary of the image to be displayed 20; in this case, the center point of the original resolution sampling area 105 is the gaze point position E.

It should be noted that, in the embodiments of the present disclosure, the center of the original original resolution sampling area of the original rendering model coincides with the center of the original rendering model, and the center of the original rendering model corresponds to the center of the image to be displayed.

For example, as shown in FIG. 5A, the gaze point position E does not coincide with the center point C of the image to be displayed 20; relative to the center point C, the gaze point position E is closer to the right boundary of the image to be displayed 20 and closer to its upper boundary. For example, in FIG. 5A, the center of the original original resolution sampling area 205 may be the center point C of the image to be displayed 20, while the center of the original resolution sampling area 105 is the gaze point position E. In this case, the three original sampling sub-areas in the first row of the original rendering model are compressed vertically, the three original sampling sub-areas in the third row are stretched vertically, the three original sampling sub-areas in the third column are compressed horizontally, and the three original sampling sub-areas in the first column are stretched horizontally; as shown in FIG. 4B, the rendering model is obtained after the above adjustments to the original rendering model. For example, compressing original sampling sub-area 201 shown in FIG. 4A along the first direction and stretching it along the second direction yields sampling sub-area 101 shown in FIG. 4B; that is, in the first direction, the length of original sampling sub-area 201 is greater than the length of sampling sub-area 101, while in the second direction, the length of original sampling sub-area 201 is smaller than the length of sampling sub-area 101. In this case, the rendering model can still include nine sampling sub-areas.

For example, in the example shown in FIG. 5A, the first sampling area of the image to be displayed 20 is the area corresponding to sampling sub-area 105, so the center point of the first sampling area is the gaze point position E.

It is worth noting that the rendering model may include only four or six sampling sub-areas.

For example, as shown in FIG. 5B, in other examples, the gaze point position on the image to be displayed 20 is indicated by point E. When the center point of the original resolution sampling area 105 coincides with the gaze point position E, the original resolution sampling area 105 exceeds the boundary of the image to be displayed 20; for example, the original resolution sampling area 105 exceeds the right and lower boundaries of the image to be displayed 20; that is, relative to the center point C of the image to be displayed 20, the gaze point position E is located at the lower right corner of the image to be displayed 20. In this case, the center point of the original resolution sampling area 105 needs to be adjusted: the position closest to the gaze point position E under the condition that the original resolution sampling area 105 does not exceed the boundary of the image to be displayed 20 becomes the center point of the original resolution sampling area 105. For example, as shown in FIG. 5C, when two adjacent sides of the original resolution sampling area 105 coincide with two adjacent sides (i.e., the right side and the lower side) of the image to be displayed 20, the position of the center point of the original resolution sampling area 105 is the position closest to the gaze point position E under the condition that the original resolution sampling area 105 does not exceed the boundary of the image to be displayed 20.

For example, in FIG. 5C, two adjacent sides of the original resolution sampling area 105 coincide with two adjacent sides of the image to be displayed 20; that is, the first sampling area corresponding to the original resolution sampling area 105 can be located at one corner (the lower right corner) of the rectangle. In this case, the rendering model can include four sampling sub-areas; that is, the areas of sampling sub-areas 103, 106, 107, 108, and 109 in FIG. 4B are all zero. In this case, the two original sampling sub-areas in the first row, first column and the first row, second column of the original rendering model (i.e., original sampling sub-areas 201 and 202 in FIG. 4A) are stretched vertically (i.e., along the first direction), and the two original sampling sub-areas in the first row, first column and the second row, first column (i.e., original sampling sub-areas 201 and 204 in FIG. 4A) are stretched horizontally; the rendering model is obtained after the above adjustments to the original rendering model.

It should be noted that when the rendering model includes nine sampling sub-areas, the total area of the nine sampling sub-areas is SW, and when the rendering model includes four sampling sub-areas, the total area of the four sampling sub-areas is also SW. In other words, the size of the rendering model does not change with the number of sampling sub-areas.
At least some embodiments of the present disclosure further provide an image display method. FIG. 6 is a schematic flowchart of an image display method provided by some embodiments of the present disclosure. As shown in FIG. 6, the image display method may include the following steps:

S60: acquiring an image to be displayed;

S61: obtaining, according to the gaze point of a human eye on the display screen of a virtual reality device, the gaze point position corresponding to the gaze point on the image to be displayed;

S62: determining a first sampling area and a second sampling area of the image to be displayed according to the gaze point position;

S63: performing first resolution sampling on the first sampling area to obtain a first display area;

S64: performing second resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, the resolution of the second sampling area being greater than the resolution of the second display area;

S65: stitching the first display area and the second display area to obtain an output image;

S66: outputting the output image to the virtual reality device;

S67: stretching the output image by the virtual reality device to obtain a stretched image;

S68: displaying the stretched image on the display screen of the virtual reality device.

For example, steps S60-S66 above are all implemented in the rendering engine, while steps S67-S68 are implemented in the virtual reality device. Thus, the image display method provided by embodiments of the present disclosure can perform compressed image rendering on the rendering engine side; the compressed output image is then transmitted to the virtual reality device, which displays it. This reduces the transmission bandwidth from the software side to the display device, saves bandwidth during image transmission, achieves real-time sampling and transmission of images, and meets the requirement of processing large amounts of data in real time in virtual reality technology.

For example, in the image display method, for detailed descriptions of steps S60-S66 executed in the rendering engine, reference may be made to the descriptions of the image rendering method in the above embodiments. For example, for step S60, refer to the description of step S10; for step S61, refer to step S11; for step S62, refer to step S12; for step S63, refer to step S13; for step S64, refer to step S14; and for steps S65 and S66, refer to step S15.
For example, the display screen may include a liquid crystal display panel or the like.

For example, the output image includes the first display area and the second display area, and step S67 includes: stretching the second display area in the output image by the virtual reality device to obtain a stretched display area; and determining the stretched image according to the first display area and the stretched display area. In other words, the size of the stretched display area is larger than the size of the second display area, and the stretched image is obtained by stitching the first display area and the stretched display area; that is, the stretched image includes the stretched display area and the first display area.

For example, in some optional implementations of this embodiment, in step S67, the virtual reality device stretches the received output image through an integrated circuit (IC) to obtain the stretched image, which is then displayed on the display screen.

For example, the size of the stretched image may be the same as the size of the image to be displayed. For example, the stretch multiple of each sub-area in the second display area is the same as the compression multiple of the corresponding sub-area in the second sampling area. For example, in the example shown in FIG. 2A, for sub-area 1 to be displayed, which corresponds to output sub-area 11 in the output image 30: sub-area 1 to be displayed is compressed by a ratio of 1/F1 along the first direction and by a ratio of 1/F2 along the second direction, i.e., it is reduced by a factor of F1 in the first direction and by a factor of F2 in the second direction. During stretching, output sub-area 11 therefore needs to be stretched by a ratio of F1 along the first direction and by a ratio of F2 along the second direction, i.e., it is enlarged by a factor of F1 in the first direction and by a factor of F2 in the second direction. The other output sub-areas in the output image are processed similarly to output sub-area 11, and this is not repeated here.

For example, F1 and F2 are both greater than 1. Depending on actual needs, F1 and F2 may be the same or different; this is not limited.
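The per-sub-area stretch by factors F1 and F2 can be illustrated with nearest-neighbour pixel replication; `stretch` is a hypothetical sketch, and a real device IC would likely use smoother interpolation:

```python
def stretch(tile, f1, f2):
    """Nearest-neighbour stretch of an output sub-area by integer
    factors f1 (along the first, vertical direction) and f2 (along
    the second, horizontal direction)."""
    return [[p for p in row for _ in range(f2)]   # widen each row f2x
            for row in tile for _ in range(f1)]   # repeat each row f1x

# A 1x2 tile stretched by F1 = F2 = 2 becomes 2x4:
assert stretch([[1, 2]], 2, 2) == [[1, 1, 2, 2], [1, 1, 2, 2]]
```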
The virtual-reality-oriented image rendering method and the image display method provided by this embodiment are further described below with a concrete scenario.

In one scenario, the display screen may be the display screen of a VR (virtual reality)/AR (augmented reality) head-mounted display device, and the image (i.e., output image) transmission takes place between a computer device and the VR/AR head-mounted display device.

Due to transmission bandwidth limitations, directly transmitting a 4K image from the computer device to the VR/AR head-mounted display device puts too much pressure on the hardware to achieve real-time display at high resolution and high refresh rate. Based on the acuity of human vision and the availability of eye-tracking technology, the image quality of the non-high-definition regions of a 4K image can be compressed to save transmission bandwidth.

For example, as shown in FIG. 2A, a high-resolution image to be displayed 20 is divided into nine sub-areas to be displayed in the form of a nine-grid, where sub-area 5 to be displayed is the original resolution sampling area (high-definition region) corresponding to the gaze point. The other eight sub-areas to be displayed are processed as follows: the horizontal and vertical resolutions of sub-areas 1, 3, 7, and 9 to be displayed are both compressed by a factor of 4; the horizontal resolution of sub-areas 2 and 8 to be displayed is unchanged while their vertical resolution is compressed by a factor of 4; and the vertical resolution of sub-areas 4 and 6 to be displayed is unchanged while their horizontal resolution is compressed by a factor of 4. The resolution of the image to be displayed 20 is 4320*4800, and the resolution of the compressed output image 30 is 2160*2400. Compared with the image to be displayed 20, the horizontal resolution of the compressed output image 30 is 50% of the horizontal resolution of the original image to be displayed 20, and its vertical resolution is 50% of the vertical resolution of the original image to be displayed 20, so 75% of the bandwidth can be saved in transmitting the output image 30. When the gaze point position changes, the correct compressed output can be obtained by adjusting the range and position of the original resolution sampling area (i.e., sub-area 5 to be displayed) and the ranges and sizes (size here corresponds to resolution) of the other eight sub-areas to be displayed accordingly.
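The 75% bandwidth figure follows directly from the pixel counts. A quick check:

```python
full = 4320 * 4800        # pixels of the image to be displayed
compressed = 2160 * 2400  # pixels of the output image
saving = 1 - compressed / full
# Halving each dimension keeps 25% of the pixels,
# so 75% of the transmission bandwidth is saved.
assert saving == 0.75
```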
As shown in FIG. 3, in the process of executing the virtual-reality-oriented image rendering method provided by the embodiments of the present disclosure: Step 1 samples the scene as a whole to obtain the original image 15, whose resolution is 4320*4800. Step 2 performs anti-distortion processing on the original image 15 obtained in Step 1 to obtain the image to be displayed 20, whose resolution is likewise 4320*4800. For example, the image to be displayed 20 includes a first sampling area and a second sampling area, the first sampling area being the region corresponding to the gaze point position. Step 3 samples the anti-distortion result (i.e., the image to be displayed 20): in Step 3, according to the rendering model, original resolution (high-definition) sampling is performed on the high-definition region (i.e., the first sampling area) of the image to be displayed 20 to obtain the image 25 corresponding to the first display area, with a sampled resolution of 1440*1600; also in Step 3, according to the rendering model, compressed resolution (low-resolution) sampling is performed on the entire image to be displayed 20 to obtain the intermediate image to be displayed 26, with a sampled resolution of 1080*1200; the intermediate image to be displayed is then divided according to the positions and proportional relationship of the first sampling area and the second sampling area, to obtain the second intermediate display area corresponding to the second sampling area, the second display area including the second intermediate display area. It should be noted that the entire image to be displayed 20 is sampled at the compressed (low) resolution here, rather than only the second sampling area, for ease of implementation; in fact, in some examples, sampling only the second sampling area is sufficient. Step 4 yields the output image 30 obtained by stitching the high-definition region (i.e., the first display area) and the non-high-definition region (i.e., the second display area); the resolution of the output image 30 is 2160*2400. For example, Step 4 may include: determining the rendering model according to the gaze point position, the rendering model including an original resolution sampling area and a compressed resolution sampling area; and placing the first display area at the position corresponding to the original resolution sampling area and the second display area at the position corresponding to the compressed resolution sampling area, to obtain the output image 30. Step 5 stretches the output image 30 through the VR/AR head-mounted display device to obtain the stretched image 35, whose resolution is 4320*4800.
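The resolutions quoted for Steps 1-5 are mutually consistent: the output image is the 1440*1600 high-definition tile plus a border of tiles taken from the 1080*1200 compressed full image. A small bookkeeping check (the dictionary keys are just labels):

```python
steps = {
    "Step1 original image": (4320, 4800),
    "Step3 high-definition sample": (1440, 1600),
    "Step3 compressed full image": (1080, 1200),
    "Step4 output image": (2160, 2400),
    "Step5 stretched image": (4320, 4800),
}
# The stitched output keeps the full-resolution centre tile and takes
# the left/right (top/bottom) border tiles from the compressed image,
# each border tile being one third of the compressed image's extent.
w_out = 1440 + 2 * (1080 // 3)   # 1440 + 720 = 2160
h_out = 1600 + 2 * (1200 // 3)   # 1600 + 800 = 2400
assert (w_out, h_out) == steps["Step4 output image"]
```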
For example, as shown in FIG. 4A, a (multi-resolution) original render model is created with modeling software; the original render model 200 is built for the case where the gaze point is at the center and includes a plurality of original sampling sub-regions. The reference position of each original sampling sub-region in the original render model 200 is then adjusted (to make changes of the model shape convenient). It should be noted that the reference position of each original sampling sub-region here is the point whose position stays fixed during compression.
For example, the reference position of original sampling sub-region 201 is its top-left corner, point Q1; when the original render model changes, only the size of sub-region 201 is modified, and its reference position need not be modified. Original sampling sub-regions 203, 207 and 209 are treated likewise; that is, the reference position of sub-region 203 is its top-right corner, point Q3, the reference position of sub-region 207 is its bottom-left corner, point Q7, and the reference position of sub-region 209 is its bottom-right corner, point Q9. The reference position of original sampling sub-region 202 is placed on its top edge, for example at the midpoint of the top edge, point Q2; the reference position of sub-region 208 is placed on its bottom edge, for example at the midpoint of the bottom edge, point Q8. For sub-regions 202 and 208, when the original render model changes, only the abscissas of points Q2 and Q8 need to be modified while their ordinates stay unchanged. The reference position of sub-region 204 may be placed on its left edge, for example at the midpoint of the left edge, point Q4, and the reference position of sub-region 206 may be placed on its right edge, for example at the midpoint of the right edge, point Q6; when the original render model changes, the ordinates of points Q4 and Q6 are modified while their abscissas stay unchanged. The reference position of sub-region 205 remains at its exact center, point Q5; when the original render model changes, the size of sub-region 205 stays fixed and only the abscissa and ordinate of point Q5 are modified.
For example, the original render model lies in a rectangular coordinate system x-o-y whose origin o coincides with point Q5.
For example, in the rectangular coordinate system x-o-y, points Q1, Q4 and Q7 share the same abscissa, points Q2, Q5 and Q8 share the same abscissa, and points Q3, Q6 and Q9 share the same abscissa; points Q1, Q2 and Q3 share the same ordinate, points Q4, Q5 and Q6 share the same ordinate, and points Q7, Q8 and Q9 share the same ordinate.
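The per-point update rules around FIG. 4A can be tabulated in a small sketch (the dictionary and function names are ours, not the patent's):

```python
# Which coordinates of each reference point Q1..Q9 are modified when the
# render model is resized: corners stay fully fixed, the top/bottom edge
# midpoints move only in x, the left/right edge midpoints only in y, and
# the center point moves in both (its sub-region's size staying fixed).
MOVABLE_COORDS = {
    "Q1": (), "Q3": (), "Q7": (), "Q9": (),   # corners: fully fixed
    "Q2": ("x",), "Q8": ("x",),               # top/bottom edge midpoints
    "Q4": ("y",), "Q6": ("y",),               # left/right edge midpoints
    "Q5": ("x", "y"),                         # center: both coordinates move
}

def is_fixed(point, coord):
    """True if the given coordinate of a reference point stays unchanged."""
    return coord not in MOVABLE_COORDS[point]
```

This encodes the invariant that makes the model adjustment cheap: only five of the eighteen coordinates ever need to be recomputed when the gaze point moves.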
For example, as shown in FIG. 4B, in some examples, adjusting the original render model 200 yields the render model 100, which includes a plurality of sampling sub-regions in one-to-one correspondence with the original sampling sub-regions shown in FIG. 4A.
For example, the reference position of sampling sub-region 101 is its top-left corner, point P1; in the rectangular coordinate system x-o-y, the coordinates of point P1 are the same as those of the reference position Q1 of original sampling sub-region 201, while the size of sub-region 101 differs from that of sub-region 201. Sampling sub-regions 103, 107 and 109 are analogous to sub-region 101; that is, the reference position of sub-region 103 is its top-right corner, point P3, the reference position of sub-region 107 is its bottom-left corner, point P7, and the reference position of sub-region 109 is its bottom-right corner, point P9.
For example, the reference position of sampling sub-region 102 is placed on its top edge, for example at the midpoint of the top edge, point P2, and the reference position of sub-region 108 is placed on its bottom edge, for example at the midpoint of the bottom edge, point P8. In the rectangular coordinate system x-o-y, the ordinate of point P2 equals that of the reference position Q2 of original sub-region 202, while their abscissas differ; similarly, the ordinate of point P8 equals that of the reference position Q8 of original sub-region 208, while their abscissas differ.
For example, the reference position of sampling sub-region 104 may be placed on its left edge, for example at the midpoint of the left edge, point P4, and the reference position of sub-region 106 may be placed on its right edge, for example at the midpoint of the right edge, point P6. In the rectangular coordinate system x-o-y, the abscissa of point P4 equals that of the reference position Q4 of original sub-region 204, while their ordinates differ; similarly, the abscissa of point P6 equals that of the reference position Q6 of original sub-region 206, while their ordinates differ.
For example, the reference position of sampling sub-region 105 is at its exact center, point P5. In the rectangular coordinate system x-o-y, the abscissa of point P5 differs from that of the reference position Q5 of original sub-region 205, and their ordinates differ as well. The size of sub-region 105 equals that of original sub-region 205. As shown in FIG. 4B, the reference position P5 of sub-region 105 does not coincide with the origin o of the coordinate system x-o-y.
For example, in the rectangular coordinate system x-o-y, points P1, P4 and P7 share the same abscissa, points P2, P5 and P8 share the same abscissa, and points P3, P6 and P9 share the same abscissa; points P1, P2 and P3 share the same ordinate, points P4, P5 and P6 share the same ordinate, and points P7, P8 and P9 share the same ordinate.
FIG. 7 is a schematic diagram, provided by some embodiments of the present disclosure, of the render models and output images corresponding to different gaze positions.
For example, as shown in FIG. 7, the positions on which the gaze point can fall are projected into a two-dimensional coordinate system x'-o'-y' ranging from (-1,-1) to (1,1); for example, in the image to be displayed 20 of FIG. 2A, the region enclosed by the center points of display sub-regions 1, 3, 7 and 9 is where the gaze point can fall. In this coordinate system x'-o'-y', the center points of display sub-regions 1 through 9 have coordinates (-1,1), (0,1), (1,1), (-1,0), (0,0), (1,0), (-1,-1), (0,-1) and (1,-1), respectively.
For example, the four cases in FIG. 7 (Pic1, Pic2, Pic3 and Pic4) correspond to gaze positions (0,0), (0.5,0.5), (1,1) (the areas of sampling sub-regions 101, 102, 103, 106 and 109 are 0) and (1,0) (the areas of sampling sub-regions 103, 106 and 109 are 0) in the coordinate system x'-o'-y'. When the gaze position changes, original sampling sub-regions 201, 203, 207 and 209 modify their region sizes and corresponding ranges according to the gaze position without modifying their reference positions; original sub-regions 202 and 208 modify the abscissas of their reference positions and their vertical region sizes and ranges according to the gaze position; original sub-regions 204 and 206 modify the ordinates of their reference positions and their horizontal region sizes and ranges according to the gaze position; original sub-region 205 modifies both the abscissa and the ordinate of its reference position according to the gaze position, while the position of the virtual camera is adjusted at the same time, so as to obtain the anti-distorted image of the high-definition region.
For example, adjusting original sampling sub-region 201 yields sampling sub-region 101, which is computed as follows.
Assume the gaze position has coordinates (x, y) in the two-dimensional coordinate system x'-o'-y', with x∈[-1,1] and y∈[-1,1].
Then the size of sampling sub-region 101 is: localScale=(x+1, 1-y). For example, in some examples, original sampling sub-region 201 and sampling sub-region 101 are rectangles, the first side of original sub-region 201 is T1 and its second side is T2, the first side and the second side being two adjacent sides of original sub-region 201, with the first side running along the first direction and the second side along the second direction. Sampling sub-region 101 includes a third side and a fourth side, the third side corresponding to the first side of original sub-region 201 and the fourth side corresponding to its second side; the third side is then expressed as T1*(1-y) and the fourth side as T2*(x+1).
For example, the texture region corresponding to sampling sub-region 101 is:
Figure PCTCN2019080728-appb-000001
For example, in some examples, the render model may be rectangular and may be placed in a two-dimensional coordinate system x''-o''-y'' with the bottom-left corner of the render model at the coordinate origin, the render model being projected onto the region from (-1,-1) to (1,1). Along the x'' axis, the length of the render model is R1, and along the y'' axis it is R2. Along the x'' axis, the length of sampling sub-region 101 is R1*(x+1)/3, and along the y'' axis it is R2*(1-y)/3. The bottom-left corner of sampling sub-region 101 has coordinates (0, (y+2)/3) in the coordinate system x''-o''-y''.
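The placement of sub-region 101 can be sketched directly from these formulas (the function name is ours; R1 and R2 default to a normalized 1x1 model):

```python
def region101_rect(x, y, r1=1.0, r2=1.0):
    """Bottom-left corner and size of sampling sub-region 101 (top-left)
    inside an r1 x r2 render model whose bottom-left corner sits at the
    origin, for a gaze point (x, y) in [-1, 1] x [-1, 1]."""
    w = r1 * (x + 1) / 3            # length along the x'' axis
    h = r2 * (1 - y) / 3            # length along the y'' axis
    corner = (0.0, r2 * (y + 2) / 3)  # bottom-left corner of the sub-region
    return corner, (w, h)

corner, size = region101_rect(0, 0)
# Gaze at the center: the top-left sub-region occupies exactly one ninth
# of the model, with its bottom-left corner at height 2/3.
```

Note that the corner's ordinate (y+2)/3 plus the height (1-y)/3 always equals 1, so sub-region 101 stays flush with the top edge of the model as the gaze moves.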
The region sizes, reference positions and ranges of the other sampling sub-regions are computed in a similar way. For example, the size of sampling sub-region 102 is: localScale=(1, 1-y), and the texture region corresponding to sub-region 102 is:
Figure PCTCN2019080728-appb-000002
When the gaze position changes, the values of x and y change and the render model changes accordingly, thereby generating an output image matched to the current gaze position.
FIG. 8 shows a schematic structural diagram of a virtual-reality-oriented image rendering apparatus provided by at least some embodiments of the present disclosure. As shown in FIG. 8, further embodiments of the present disclosure provide a virtual-reality-oriented image rendering apparatus that includes a gaze-point projection module, an adjustment module, a rendering engine and a stitching module.
For example, the gaze-point projection module is configured to obtain, according to the gaze point of a human eye on the display screen of the virtual reality device, the gaze position corresponding to the gaze point on the image to be displayed.
For example, the rendering engine is configured to determine the first sampling area and the second sampling area of the image to be displayed according to the gaze position; to perform first-resolution sampling on the first sampling area to obtain a first display area; and to perform second-resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area. For example, the resolution of the second sampling area is greater than that of the second display area.
For example, as shown in FIG. 8, the rendering engine includes a virtual camera and a render model; the virtual camera is configured to perform, according to the render model, the first-resolution sampling on the first sampling area and the second-resolution sampling on the image to be displayed.
For example, in some examples, the rendering engine may further be configured to load, i.e., determine, the render model according to the gaze position, where the render model includes a full-resolution sampling region, compressed-resolution sampling regions, and the resolution compression factors of the compressed-resolution sampling regions; the full-resolution sampling region corresponds to the first sampling area, and the compressed-resolution sampling regions correspond to the second sampling area.
For example, the rendering engine is further configured to obtain an original render model, which includes an original full-resolution sampling region and original compressed-resolution sampling regions.
For example, the adjustment module is configured to adjust, according to the gaze position, the center position of the original full-resolution sampling region and the resolution compression factors of the original compressed-resolution sampling regions, so as to determine the render model.
For example, the stitching module is configured to stitch the first display area and the second display area to obtain the output image to be transmitted to the virtual reality device.
It should be noted that the components of the image rendering apparatus shown in FIG. 8 are merely exemplary, not limiting; according to the needs of the actual application, the image rendering apparatus may have other components as well.
It should be noted that the principle and workflow of the virtual-reality-oriented image rendering apparatus provided by this embodiment are similar to those of the image rendering method described above; for related points, refer to the description above, which is not repeated here.
FIG. 9 shows a schematic diagram of a virtual-reality-oriented image rendering system provided by at least some embodiments of the present disclosure.
For example, further embodiments of the present disclosure provide a virtual-reality-oriented image rendering system that includes a virtual reality device and an image rendering apparatus; the image rendering apparatus includes a gaze-point projection module, an adjustment module, a rendering engine and a stitching module.
The gaze-point projection module is configured to obtain, according to the position of the human eye's gaze point on the display screen of the virtual reality device, the corresponding position of the gaze point on the image to be displayed on the display screen.
The rendering engine is configured to load a render model in which a full-resolution sampling region, compressed-resolution sampling regions, and the horizontal and/or vertical resolution compression factors of the compressed-resolution sampling regions are preset.
The adjustment module is configured to adjust, according to the position of the gaze point on the image to be displayed on the display screen, the center position of the full-resolution sampling region and the horizontal and/or vertical resolution compression factors of the compressed-resolution sampling regions.
The rendering engine is configured to perform, according to the adjusted render model, full-resolution sampling on the full-resolution sampling region of the image and compressed-resolution sampling on the compressed-resolution sampling regions.
The stitching module is configured to stitch the sampled full-resolution sampling region and the sampled compressed-resolution sampling regions to obtain the image to be transmitted to the virtual reality device.
For example, as shown in FIG. 9, further embodiments of the present disclosure also provide a virtual-reality-oriented image rendering system that includes a virtual reality device and an image rendering apparatus, the image rendering apparatus being that described in any of the embodiments of the present disclosure above.
For example, the virtual reality device may be a head-mounted display device or the like. The head-mounted display device is configured to obtain the gaze point of a human eye on the display screen of the virtual reality device and to receive the output image transmitted by the image rendering apparatus.
For example, the virtual reality device is further configured to obtain the original image of the scene. The virtual reality device or the image rendering apparatus may apply anti-distortion processing to the original image to obtain the image to be displayed.
It should be noted that the principle and workflow of the virtual-reality-oriented image rendering system provided by this embodiment are similar to those of the image rendering method described above; for related points, refer to the description above, which is not repeated here.
FIG. 10 shows a schematic structural diagram of a computer device provided by at least some embodiments of the present disclosure. Another embodiment of the present disclosure provides a computer device.
For example, the computer device includes a memory and a processor. For example, the memory is configured to store a computer program, and the processor is configured to run the computer program; when run by the processor, the computer program implements one or more steps of the image rendering method described in any of the embodiments above.
For example, the processor may be a central processing unit (CPU) or another form of processing unit with data processing capability and/or program execution capability, such as a field-programmable gate array (FPGA) or a tensor processing unit (TPU).
For example, the memory may include any combination of one or more computer program products, which may include computer-readable storage media in various forms, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, random-access memory (RAM) and/or cache. Non-volatile memory may include, for example, read-only memory (ROM), a hard disk, erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), USB memory, flash memory, and the like. One or more computer instructions may be stored in the memory, and the processor may run these instructions to realize various functions. The computer-readable storage medium may also store various applications and various data, including data used and/or generated by the applications.
For example, as shown in FIG. 10, in some examples the computer device includes a central processing unit (CPU), which can perform various appropriate actions and processing according to a program stored in read-only memory (ROM) or a program loaded from a storage section into random-access memory (RAM). The RAM also stores the various programs and data needed for the operation of the computer system. The CPU, the ROM and the RAM are connected to each other through a bus, and an input/output (I/O) interface is also connected to the bus.
The following components are connected to the I/O interface: an input section including a keyboard, a mouse and the like; an output section including a liquid crystal display (LCD), a speaker and the like; a storage section including a hard disk and the like; and a communication section including a network interface card such as a LAN card or a modem. The communication section performs communication processing via a network such as the Internet. A drive is also connected to the I/O interface as needed. A removable medium, such as a magnetic disk, an optical disc, a magneto-optical disc or a semiconductor memory, is mounted on the drive as needed, so that the computer program read from it can be installed into the storage section as needed.
For example, according to this embodiment, the processes described by the flowcharts above may be implemented as computer software programs. For example, this embodiment includes a computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section, and/or installed from the removable medium.
The flowcharts and schematic diagrams in the figures illustrate possible architectures, functions and operations of the systems, methods and computer program products of this embodiment. In this regard, each block in a flowchart or schematic diagram may represent a module, a program segment or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. Note also that each block of the schematic diagrams and/or flowcharts, and combinations of such blocks, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in this embodiment may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including a gaze-point projection module, an adjustment module, a rendering engine and a stitching module. The names of these units do not, in certain cases, limit the units themselves; for example, the gaze-point projection module may also be described as an "image gaze-point acquisition module".
Some embodiments of the present disclosure further provide a non-volatile computer storage medium, which may be the non-volatile computer storage medium contained in the computer device of the embodiments above, or a non-volatile computer storage medium that exists separately and is not assembled into a terminal. The non-volatile computer storage medium stores one or more programs which, when executed by a device, can implement the image rendering method described in any of the embodiments above.
In the description of the present disclosure, it should be noted that orientation or positional terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the figures, are used only for convenience and simplicity of description, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be construed as limiting the present disclosure. Unless otherwise expressly specified and limited, the terms "mounted", "connected" and "coupled" are to be understood broadly; for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, indirect through an intermediary, or an internal communication between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present disclosure can be understood according to the specific circumstances.
It should also be noted that, in the description of the present disclosure, relational terms such as first and second are used only to distinguish one entity or operation from another and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" and any variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device that comprises it.
Obviously, the above embodiments of the present disclosure are merely examples given for clearly illustrating the present disclosure, not limitations on its implementations. For those of ordinary skill in the art, other changes or variations in different forms can be made on the basis of the above description; it is impossible to exhaustively list all implementations here, and any obvious change or variation derived from the technical solutions of the present disclosure remains within the scope of protection of the present disclosure.

Claims (19)

  1. A virtual-reality-oriented image rendering method, comprising:
    obtaining an image to be displayed;
    obtaining, according to a gaze point of a human eye on a display screen of a virtual reality device, a gaze position corresponding to the gaze point on the image to be displayed;
    determining a first sampling area and a second sampling area of the image to be displayed according to the gaze position;
    performing first-resolution sampling on the first sampling area to obtain a first display area;
    performing second-resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, wherein a resolution of the second sampling area is greater than a resolution of the second display area;
    stitching the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device.
  2. The image rendering method according to claim 1, wherein performing the first-resolution sampling on the first sampling area and the second-resolution sampling on the image to be displayed comprises:
    determining a render model according to the gaze position, wherein the render model comprises a full-resolution sampling region, a compressed-resolution sampling region and a resolution compression factor of the compressed-resolution sampling region, the full-resolution sampling region corresponding to the first sampling area and the compressed-resolution sampling region corresponding to the second sampling area;
    performing, according to the render model, the first-resolution sampling on the first sampling area and the second-resolution sampling on the image to be displayed.
  3. The image rendering method according to claim 2, wherein determining the render model according to the gaze position comprises:
    obtaining an original render model, wherein the original render model comprises an original full-resolution sampling region and an original compressed-resolution sampling region;
    adjusting, according to the gaze position, a center position of the original full-resolution sampling region and a resolution compression factor of the original compressed-resolution sampling region, to obtain the render model.
  4. The image rendering method according to claim 3, wherein the resolution compression factor of the original compressed-resolution sampling region is preset and adjusted according to a positional relationship between the original compressed-resolution sampling region and the original full-resolution sampling region.
  5. The image rendering method according to claim 3 or 4, wherein the resolution compression factor of the compressed-resolution sampling region comprises a horizontal resolution compression factor and/or a vertical resolution compression factor.
  6. The image rendering method according to any one of claims 2-5, wherein the full-resolution sampling region and the compressed-resolution sampling region constitute a nine-square-grid structure,
    the nine-square-grid structure comprising a plurality of regions in three rows and three columns, the full-resolution sampling region being located in the second row and second column of the nine-square-grid structure.
  7. The image rendering method according to any one of claims 3-6, wherein a size of the first sampling area, a size of the original full-resolution sampling region and a size of the full-resolution sampling region are the same.
  8. The image rendering method according to any one of claims 3-6, wherein adjusting the center position of the original full-resolution sampling region comprises:
    judging, with the center point of the original full-resolution sampling region at the gaze position, whether the original full-resolution sampling region exceeds a boundary of the image to be displayed:
    if not, setting the center point of the original full-resolution sampling region to the gaze position;
    if so, setting the center point of the original full-resolution sampling region to the position closest to the gaze position at which the original full-resolution sampling region does not exceed the boundary of the image to be displayed.
  9. The image rendering method according to any one of claims 2-8, wherein performing the second-resolution sampling on the image to be displayed according to the render model comprises:
    performing, according to the render model, the second-resolution sampling on the image to be displayed to obtain an intermediate image to be displayed;
    splitting the intermediate image to be displayed according to positions and proportions of the first sampling area and the second sampling area, to obtain a first intermediate display area corresponding to the first sampling area and a second intermediate display area corresponding to the second sampling area, the second display area comprising the second intermediate display area.
  10. The image rendering method according to any one of claims 1-9, wherein obtaining the image to be displayed comprises:
    obtaining an original image;
    applying anti-distortion processing to the original image to obtain the image to be displayed.
  11. The image rendering method according to any one of claims 1-10, wherein a resolution of the first sampling area equals a resolution of the first display area.
  12. An image display method, comprising:
    in a rendering engine:
    obtaining an image to be displayed;
    obtaining, according to a gaze point of a human eye on a display screen of a virtual reality device, a gaze position corresponding to the gaze point on the image to be displayed;
    determining a first sampling area and a second sampling area of the image to be displayed according to the gaze position;
    performing first-resolution sampling on the first sampling area to obtain a first display area;
    performing second-resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, wherein a resolution of the second sampling area is greater than a resolution of the second display area;
    stitching the first display area and the second display area to obtain an output image; and
    outputting the output image to the virtual reality device; and
    in the virtual reality device:
    stretching the output image by the virtual reality device to obtain a stretched image;
    displaying the stretched image on the display screen of the virtual reality device.
  13. The image display method according to claim 12, wherein the output image comprises the first display area and the second display area,
    and stretching the output image by the virtual reality device to obtain the stretched image comprises:
    stretching the second display area in the output image by the virtual reality device to obtain a stretched display area;
    determining the stretched image according to the first display area and the stretched display area.
  14. A virtual-reality-oriented image rendering apparatus, comprising a gaze-point projection module, a rendering engine and a stitching module;
    the gaze-point projection module being configured to obtain, according to a gaze point of a human eye on a display screen of a virtual reality device, a gaze position corresponding to the gaze point on an image to be displayed;
    the rendering engine being configured to:
    determine a first sampling area and a second sampling area of the image to be displayed according to the gaze position;
    perform first-resolution sampling on the first sampling area to obtain a first display area;
    perform second-resolution sampling on the image to be displayed to obtain a second display area corresponding to the second sampling area, wherein a resolution of the second sampling area is greater than a resolution of the second display area;
    the stitching module being configured to stitch the first display area and the second display area to obtain an output image to be transmitted to the virtual reality device.
  15. The image rendering apparatus according to claim 14, wherein the rendering engine is further configured to:
    determine a render model according to the gaze position, wherein the render model comprises a full-resolution sampling region, a compressed-resolution sampling region and a resolution compression factor of the compressed-resolution sampling region, the full-resolution sampling region corresponding to the first sampling area and the compressed-resolution sampling region corresponding to the second sampling area;
    perform, according to the render model, the first-resolution sampling on the first sampling area and the second-resolution sampling on the image to be displayed.
  16. The image rendering apparatus according to claim 15, further comprising an adjustment module,
    wherein the rendering engine is further configured to obtain an original render model, the original render model comprising an original full-resolution sampling region and an original compressed-resolution sampling region,
    and the adjustment module is configured to adjust, according to the gaze position, a center position of the original full-resolution sampling region and a resolution compression factor of the original compressed-resolution sampling region, to determine the render model.
  17. A virtual-reality-oriented image rendering system, comprising a virtual reality device and the image rendering apparatus according to any one of claims 14-16;
    the virtual reality device being configured to obtain a gaze point of a human eye on a display screen of the virtual reality device and to receive the output image transmitted by the image rendering apparatus.
  18. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image rendering method according to any one of claims 1-11.
  19. A computer device, comprising:
    a memory configured to store a computer program;
    a processor configured to run the computer program, wherein the computer program, when run by the processor, implements the image rendering method according to any one of claims 1-11.
PCT/CN2019/080728 2018-05-16 2019-04-01 Image rendering method, apparatus, system, storage medium, image display method, and computer device WO2019218783A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/607,224 US11392197B2 (en) 2018-05-16 2019-04-01 Image rendering method, device, system, storage medium, image display method and computer device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810466112.4 2018-05-16
CN201810466112.4A CN108665521B (zh) 2018-05-16 2018-05-16 Image rendering method, apparatus, system, computer-readable storage medium and device

Publications (1)

Publication Number Publication Date
WO2019218783A1 true WO2019218783A1 (zh) 2019-11-21

Family

ID=63779748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/080728 WO2019218783A1 (zh) 2018-05-16 2019-04-01 图像渲染方法、装置、***、存储介质、图像显示方法、计算机设备

Country Status (3)

Country Link
US (1) US11392197B2 (zh)
CN (1) CN108665521B (zh)
WO (1) WO2019218783A1 (zh)



Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107317987A (zh) * 2017-08-14 2017-11-03 Goertek Inc. Display data compression method, device and system for virtual reality
CN107516335A (zh) * 2017-08-14 2017-12-26 Goertek Inc. Graphics rendering method and apparatus for virtual reality
CN108076384A (zh) * 2018-01-02 2018-05-25 BOE Technology Group Co., Ltd. Image processing method, apparatus, device and medium based on virtual reality
CN108665521A (zh) * 2018-05-16 2018-10-16 BOE Technology Group Co., Ltd. Image rendering method, apparatus, system, computer-readable storage medium and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8493390B2 (en) * 2010-12-08 2013-07-23 Sony Computer Entertainment America, Inc. Adaptive displays using gaze tracking
US11010956B2 (en) * 2015-12-09 2021-05-18 Imagination Technologies Limited Foveated rendering
US9965899B2 (en) * 2016-04-28 2018-05-08 Verizon Patent And Licensing Inc. Methods and systems for minimizing pixel data transmission in a network-based virtual reality media delivery configuration
US11551602B2 (en) * 2016-09-01 2023-01-10 Innovega Inc. Non-uniform resolution, large field-of-view headworn display
CN107065197B (zh) 2017-06-20 2020-02-18 Hefei University of Technology Eye-tracking remote rendering real-time display method and system for VR glasses


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2595872A (en) * 2020-06-09 2021-12-15 Sony Interactive Entertainment Inc Gaze tracking apparatus and systems
GB2595872B (en) * 2020-06-09 2023-09-20 Sony Interactive Entertainment Inc Gaze tracking apparatus and systems

Also Published As

Publication number Publication date
US11392197B2 (en) 2022-07-19
US20210333870A1 (en) 2021-10-28
CN108665521B (zh) 2020-06-02
CN108665521A (zh) 2018-10-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19803775

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04.05.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19803775

Country of ref document: EP

Kind code of ref document: A1