WO2024113360A1 - Method, apparatus and electronic device for displaying an image - Google Patents

Method, apparatus and electronic device for displaying an image

Info

Publication number
WO2024113360A1
WO2024113360A1 (PCT/CN2022/136207, CN2022136207W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
group
complete
images
image group
Prior art date
Application number
PCT/CN2022/136207
Other languages
English (en)
French (fr)
Inventor
段鑫慧
Original Assignee
北京小米移动软件有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 filed Critical 北京小米移动软件有限公司
Priority to PCT/CN2022/136207 priority Critical patent/WO2024113360A1/zh
Publication of WO2024113360A1 publication Critical patent/WO2024113360A1/zh

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/02Viewing or reading apparatus

Definitions

  • the present disclosure relates to the field of augmented reality display technology, and in particular to a method, device and electronic device for displaying images.
  • AR (Augmented Reality) is a technology that generates a corresponding virtual scene by computing the position and angle of an image in real time. This technology projects the image of a micro display screen in front of the human eye through a series of optical elements, so that the human eye can observe the superposition of the virtual world and the real world through the worn AR device, and the operator can interact with the virtual scene and the real scene at the same time through the AR device.
  • the present disclosure provides a method, device and electronic device for displaying an image.
  • a method for displaying an image comprising:
  • acquiring an image sequence, the image sequence sequentially comprising a plurality of image groups, wherein each image group includes N images, each image group corresponds to a complete image, the N images are the N partial images constituting the complete image, and N is an integer greater than 1;
  • Each image in each image group is projected to a corresponding sub-area in a display area, and the projection time of each image is controlled to be less than a preset time, so that the complete image corresponding to the image group is displayed in the display area, wherein different images in each image group correspond to different sub-areas, and the preset time meets the requirement that the complete image can be viewed due to the persistence of vision effect.
  • acquiring the image sequence includes: acquiring an image sequence input at a first frame rate;
  • the complete image corresponding to the image group is displayed in the display area, comprising: the complete image corresponding to the image group is displayed in the display area at a second frame rate;
  • the first frame rate is N times the second frame rate.
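  • As a minimal illustration of the frame-rate relation above (a Python sketch; the function and its names are illustrative assumptions, not part of the patent, which does not prescribe any implementation):

```python
# Illustrative sketch of the first/second frame-rate relation described above.
# Names and structure are assumptions; the patent prescribes no implementation.

def projection_schedule(first_rate_hz: float, n_partial: int) -> dict:
    """Derive the complete-image (second) frame rate and per-image time budget.

    first_rate_hz -- rate at which partial images are input (first frame rate)
    n_partial     -- N, the number of partial images per image group
    """
    second_rate_hz = first_rate_hz / n_partial  # complete images shown per second
    per_image_ms = 1000.0 / first_rate_hz       # projection slot per partial image
    return {"second_rate_hz": second_rate_hz, "per_image_ms": per_image_ms}

print(projection_schedule(100.0, 4))
# {'second_rate_hz': 25.0, 'per_image_ms': 10.0} -- matches the 100 fps example below
```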
  • projecting each image in each image group to a corresponding sub-area in the display area includes:
  • determining, according to the sub-area corresponding to each image in the image group, the projection angle corresponding to the image, and projecting the image onto the sub-area corresponding to the projection angle through the projection angle.
  • determining the projection angle corresponding to each image in the image group according to the sub-region corresponding to the image includes:
  • the sub-region corresponding to the image is determined according to the position of the image in the image group.
  • projecting each image in each image group to a corresponding sub-area in the display area includes:
  • a grating incident angle is determined according to the position of the image in the image group, and the image is projected onto the sub-region corresponding to the image through the grating exit angle corresponding to the grating incident angle.
  • the grating incident angle and the grating exit angle conform to degeneracy logic, and the degeneracy logic includes: the end points of the light vectors corresponding to different grating incident angles lie on a first ring, the end points of the light vectors corresponding to different grating exit angles lie on a second ring, and the light-vector differences between different grating incident angles and the corresponding grating exit angles are equal in length and identical in direction.
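  • Written out in standard notation, the degeneracy condition above amounts to the following (a restatement of the bullet, with k_i^(m) and k_s^(m) denoting the light vectors of the m-th grating incident and exit angles):

```latex
% Restatement of the degeneracy logic: the incident-angle light vectors end on one
% ring, the exit-angle light vectors end on a second ring, and every incident/exit
% pair differs by the same constant grating vector K (equal length and direction).
\[
\mathbf{K} \;=\; \mathbf{k}_i^{(1)} - \mathbf{k}_s^{(1)}
         \;=\; \mathbf{k}_i^{(2)} - \mathbf{k}_s^{(2)}
         \;=\; \cdots
         \;=\; \mathbf{k}_i^{(n)} - \mathbf{k}_s^{(n)}
\]
```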
  • N is an even value.
  • a device for displaying an image comprising an input unit and a projection unit;
  • the input unit is configured to acquire an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponds to a complete image, and the N images are N partial images constituting the complete image;
  • the projection unit is configured to project each image in each image group to a corresponding sub-area in the display area, and to control the projection time of each image to be less than a preset time so that the complete image corresponding to the image group is displayed in the display area, wherein different images in each image group correspond to different sub-areas, and the preset time meets the requirement that the complete image can be viewed due to the persistence of vision effect.
  • an electronic device including a processor and a memory, wherein:
  • the memory is used to store computer programs
  • the processor is used to execute the computer program to implement the first aspect or any possible design of the first aspect.
  • a device for displaying an image comprising, along the propagation direction of light: a light emitting unit, an image input device, a lens, a degenerate grating, and a display;
  • the image input device is configured to input an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to a complete image, and the N images are N partial images constituting the complete image;
  • the light emitting unit is configured to project each image in each image group to a corresponding sub-area in the display area, and to control the projection time of each image to be less than a preset time so that the complete image corresponding to the image group is displayed in the display area, wherein different images in each image group correspond to different sub-areas, and the preset time meets the requirement of viewing the complete image due to the persistence of vision effect.
  • augmented reality (AR) glasses, wherein the glasses include one or two sets of display devices, and each display device includes a temple and a frame;
  • the temples include a projection device, which includes a light emitting unit, an image input device, a lens and a degenerate grating along the propagation direction of light; a display is fixed in the frame;
  • the image input device is configured to input an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to a complete image, and the N images are N partial images constituting the complete image;
  • the light emitting unit is configured to project each image in each image group to a corresponding sub-area within the frame, and to control the projection time of each image to be less than a preset time so that the complete image corresponding to the image group is displayed within the frame, wherein different images in each image group correspond to different sub-areas, and the preset time meets the requirement of viewing the complete image due to the persistence of vision effect.
  • the technical solution provided by the embodiments of the present disclosure may include the following beneficial effects: changing the method of outputting a complete image through an image source to outputting partial images in sequence through the image source, projecting different partial images to different positions on the display with a sufficiently short projection time, so that the user can see the complete image due to the persistence of vision effect, thereby achieving the effect of splicing multiple partial images into a larger image.
  • the user can see images with higher resolution, thereby improving the user's visual experience.
  • Fig. 1 is a flow chart showing a method for displaying an image according to an exemplary embodiment.
  • Fig. 2 is a schematic diagram showing a correspondence between a position of an image in an image group and a splicing position according to an exemplary embodiment.
  • Fig. 3 is a schematic diagram showing the principle of grating degeneration according to an exemplary embodiment.
  • Fig. 4 is a schematic diagram showing the projection of a writing wave vector of a grating on an xy plane according to an exemplary embodiment.
  • Fig. 5 is a structural diagram of a device for displaying an image according to an exemplary embodiment.
  • Fig. 6 is a structural diagram of AR glasses showing a method for displaying images according to an exemplary embodiment.
  • Fig. 7 is a block diagram of an augmented reality display device according to an exemplary embodiment.
  • Fig. 8 is a block diagram of another augmented reality display device according to an exemplary embodiment.
  • Fig. 9 is a block diagram of another augmented reality display device according to an exemplary embodiment.
  • Fig. 10 is a schematic diagram showing an augmented reality display device implementing image stitching in glasses according to an exemplary embodiment.
  • FIG1 is a flow chart of a method for displaying an image according to an exemplary embodiment. As shown in FIG1 , the method includes S1 to S2, specifically:
  • the image sequence includes a plurality of image groups in sequence, each image group includes N images, each image group corresponds to a complete image, the N images are N partial images constituting the complete image, and N is an integer greater than 1;
  • Different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement of viewing the complete image due to the persistence of vision effect.
  • since most displays are rectangular areas, the value of N may be an even value so as to occupy as much of the display area as possible; according to the usage requirements of different scenarios, the value of N may also be an odd value.
  • the method of projecting each image in each image group to a corresponding sub-area in the display area in S2 may be:
  • the image source outputs images at a rate of 100 frames per second, and the 100 frames of images output per second correspond to 25 image groups in sequence, and each image group includes 4 images.
  • the 4 images in each image group are spliced into a complete spliced image in a 2×2 grid (a 田-shaped structure).
  • the first image in each image group corresponds to the upper left splicing position
  • the second image corresponds to the upper right splicing position
  • the third image corresponds to the lower left splicing position
  • the fourth image corresponds to the lower right splicing position.
  • the upper left position corresponds to the first angle
  • the upper right position corresponds to the second angle
  • the lower left position corresponds to the third angle
  • the lower right position corresponds to the fourth angle.
  • the sub-region corresponding to each image can be determined according to the sorting position of the image in the image group to which it belongs, and the projection angle corresponding to the image can be determined according to the sub-region.
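  • A minimal sketch of that lookup for the 2×2 example above (Python; the dictionaries and angle labels are illustrative placeholders, since the patent only fixes the one-to-one mappings):

```python
# Position-in-group -> sub-area -> projection angle, for the 2x2 (N = 4) example.
# The concrete angle labels are placeholders; only the one-to-one mapping matters.

SUBAREA_BY_POSITION = {1: "upper left", 2: "upper right",
                       3: "lower left", 4: "lower right"}
ANGLE_BY_SUBAREA = {"upper left": "first angle", "upper right": "second angle",
                    "lower left": "third angle", "lower right": "fourth angle"}

def subarea_and_angle(frame_index: int, n: int = 4) -> tuple[str, str]:
    """Map a 0-based index in the input sequence to its sub-area and angle."""
    position = frame_index % n + 1  # 1..N, the order position within the group
    subarea = SUBAREA_BY_POSITION[position]
    return subarea, ANGLE_BY_SUBAREA[subarea]

for i in range(4):
    print(i, subarea_and_angle(i))
# 0 ('upper left', 'first angle') ... 3 ('lower right', 'fourth angle')
```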
  • the method of projecting each image in each image group to a corresponding sub-area in the display area in S2 includes:
  • the grating incident angle and the grating exit angle conform to degeneracy logic, and the degeneracy logic includes: the end points of the light vectors corresponding to different grating incident angles lie on a first ring, the end points of the light vectors corresponding to different grating exit angles lie on a second ring, and the light-vector differences between different grating incident angles and the corresponding grating exit angles are equal in length and identical in direction.
  • K is a constant vector: the grating vector formed by one pair of writing wave vectors k i and k s.
  • the loci of the end points of these writing wave vectors are two rings on the K-vector sphere, formed by rotating the writing wave vectors k i and k s around the diameter of the vector sphere parallel to K.
  • each generatrix of the cylinder whose bases are the two rings represents a degenerate grating; that is, every readout beam on the conical surface containing k i can reconstruct this grating.
  • Fig. 4 is a schematic diagram of the projection of the grating's writing wave vectors on the xy plane according to an exemplary embodiment. Under degeneracy, the end points of the writing wave vectors k i1, k i2, k i3, …, k in lie on one ring, the end points of the corresponding writing wave vectors k s1, k s2, k s3, …, k sn lie on the other ring, and K = k i1 - k s1 = k i2 - k s2 = k i3 - k s3 = ⋯ = k in - k sn.
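  • To make the rotation picture concrete, the following numeric sketch (assumed geometry chosen only for illustration, not values from the patent) rotates a writing pair about the axis parallel to K and checks that the pair difference stays equal to K, as the degeneracy condition requires:

```python
import numpy as np

# Numeric illustration of grating degeneracy: rotating a writing pair (k_i, k_s)
# about the axis parallel to K = k_i - k_s sweeps the two rings on the K-vector
# sphere while the pair difference stays exactly K. The starting vectors are
# arbitrary equal-length assumptions (same wavelength), chosen for illustration.

def rot(axis: np.ndarray, theta: float) -> np.ndarray:
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    a = axis / np.linalg.norm(axis)
    m = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * m + (1 - np.cos(theta)) * (m @ m)

k_i = np.array([0.3, 0.0, np.sqrt(1 - 0.09)])   # writing wave vector, |k_i| = 1
k_s = np.array([-0.3, 0.0, np.sqrt(1 - 0.09)])  # its pair, |k_s| = 1
K = k_i - k_s                                   # constant grating vector

for theta in np.linspace(0.0, 2 * np.pi, 8, endpoint=False):
    R = rot(K, theta)
    assert np.allclose(R @ k_i - R @ k_s, K)    # same length, same direction

print("every rotated pair shares the grating vector K =", K)
```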
  • the embodiments of the present disclosure are also applicable to screen-projection application scenarios, in which hardware with different functions in the projector performs the different processing functions in the above method, or an existing projector is modified so that the modified projector can realize the display function in the above method.
  • the method of outputting a complete image from an image source is changed to outputting partial images in sequence through the image source, and different partial images are projected to different positions on the display with a sufficiently short projection time, so that the user can see the complete image due to the persistence of vision effect, thereby achieving the effect of splicing multiple partial images into a larger image.
  • the user can see images with higher resolution, thereby improving the user's visual experience.
  • FIG5 is a structural diagram of a device for displaying an image according to an exemplary embodiment.
  • the device 500 includes an input unit and a projection unit;
  • the input unit is configured to acquire an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponds to a complete image, and the N images are N partial images constituting the complete image;
  • the projection unit is configured to project each image in each image group to a corresponding sub-area in the display area, and to control the projection time of each image to be less than a preset time so that the complete image corresponding to the image group is displayed in the display area, wherein different images in each image group correspond to different sub-areas, and the preset time meets the requirement that the complete image can be viewed due to the persistence of vision effect.
  • the input unit and the projection unit in the apparatus for displaying an image in the embodiment of the present disclosure may be implemented by hardware products in different forms.
  • AR products can be divided into three categories, namely head-mounted displays, handheld displays, and spatial displays represented by PC monitors and HUDs (Head-Up Displays).
  • a head-mounted display is composed of a head-mounted device and one or more (micro) displays, such as smart AR glasses.
  • Head-mounted displays are usually equipped with multi-degree-of-freedom sensors, allowing users to move their heads freely in six directions: front and back, up and down, left and right, pitch, yaw and roll, and head-mounted display AR products can adjust the screen accordingly based on the user's head movement.
  • Handheld displays are the representative products of AR mobile terminals, with smartphones as the mainstay. After continuous updates and functional optimization, smartphone displays have higher resolution, more powerful processors, better camera imaging quality, and more sensors. These advantages make smartphones the easiest platform for AR applications at this stage. However, handheld displays cannot provide users with direct visual contact with the virtual and real world.
  • Spatial displays are AR products that project virtual content directly into the real world. Spatial displays are often fixed in the physical world, and any physical surface around them, such as walls, desks, and even the human body, can become an interactive AR display.
  • AR glasses can break through the limitations of the screen and use the entire physical world as an AR interactive interface, which is considered to be the mainstream technology route for future AR products.
  • display systems of AR head-mounted displays are mainly divided into two categories:
  • one type is based on traditional geometric-optics superposition techniques, including the coaxial side-view prism method, the geometric optical waveguide method and the freeform mirror method; AR head-mounted display systems based on geometric optics have limitations such as heavy optical components, a small field of view, low external-light transmittance or low utilization of image-source light energy, and great processing difficulty, resulting in poor display effects in actual applications.
  • the other type is a head-mounted display system based on diffractive optical elements, which transmits and deflects light by designing diffractive optical waveguide devices or diffraction gratings, effectively reducing the size and weight of the optical components.
  • head-mounted display systems based on diffractive optical elements still have problems, including: a small system field of view, which leads to loss of field of view during observation; low transmission efficiency of the diffractive waveguide, which leads to low utilization of image-source light energy; and the constrained volume of the optical engine carried by the diffractive waveguide display, which leads to insufficient display image resolution, while improving the display resolution makes the AR display system excessively large.
  • Augmented reality display devices can be applied to head-mounted displays, handheld displays and spatial displays.
  • FIG6 is a structural diagram of AR glasses applying the method for displaying images according to an exemplary embodiment.
  • the glasses include two sets of display devices, and each set of display devices includes a temple and a frame.
  • the temples include a projection device, which includes, along the propagation direction of light, a light emitting unit, an image input device, a lens and a degenerate grating; a display is fixed in the frame.
  • the image input device is configured to input an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to a complete image, and the N images are N partial images constituting the complete image;
  • the light emitting unit is configured to project each image in each image group to a corresponding sub-area in the frame, and control the projection time of each image to be less than a preset time so that the complete image corresponding to the image group is displayed in the frame.
  • Different images in each image group correspond to different sub-areas, and the preset time satisfies the requirement that the complete image can be viewed due to the persistence of vision effect.
  • optionally, each frame is used to fix one display; the glasses shown in FIG6 include two frames, and each frame fixes one display.
  • when the first set of display devices corresponds to the left-eye side of the glasses and the second set to the right-eye side, the image sequence input to the first set of display devices is the image sequence captured by a left-eye camera, and the image sequence input to the second set of display devices is the image sequence captured by a right-eye camera; when the two sets of display devices display simultaneously, the user can see video with a 3D effect.
  • the AR glasses may also include only one set of display devices, where the display device includes one temple and one frame, that is, the AR glasses may include only one temple and one frame.
  • FIG7 is a block diagram of an augmented reality display device according to an exemplary embodiment. As shown in FIG7 , the device includes along the propagation direction of light: a light emitting unit 101, an image source (also called an image input device) 102, a lens group 103, a grating 104, and a display 105.
  • the grating 104 may be a degenerate grating.
  • the light emitting unit 101 , the image source 102 , the lens group 103 , and the grating 104 constitute the projection device in the embodiment corresponding to FIG. 6 .
  • the light transmission process in this device is as follows: the light emitting unit 101 emits light, which is modulated by the lens group 103 and the grating 104 and then reaches the display 105.
  • the display 105 reflects the image output by the image source 102 into the human eye.
  • in one implementation, as shown in FIG8, the light emitting unit 101 includes only a light source 101-1, and the light source can control the emission angle of the emitted light.
  • in another implementation, as shown in FIG9, the light emitting unit 101 includes a light source 101-1 and a reflector 101-2.
  • the light source 101-1 is configured to output light at a fixed angle; the reflector 101-2 is configured to reflect the input light at different angles.
  • FIG10 is a schematic diagram of light being modulated by the grating according to an exemplary embodiment.
  • Incident light at different angles corresponding to different output images reaches the grating 104 after passing through the lens group 103.
  • light at different angles corresponds to directions obtained by rotating the k i vector along the vector ring of the degenerate grating's K-vector sphere.
  • the grating 104 modulates the light at different angles.
  • the modulation satisfies the change of the k s vector shown in FIG3, thereby mapping the different output images to different positions in the display area on the display 105.
  • the image source outputs images at a rate of 100 frames per second.
  • the 100 frames of images output per second correspond to 25 image groups in sequence, and each image group includes 4 images.
  • the 4 images in each image group are spliced into a complete spliced image in a 2×2 grid (a 田-shaped structure).
  • the first image in each image group corresponds to the upper left splicing position
  • the second image in each image group corresponds to the upper right splicing position
  • the third image in each image group corresponds to the lower left splicing position
  • the fourth image in each image group corresponds to the lower right splicing position.
  • the upper left position corresponds to the first angle
  • the upper right position corresponds to the second angle
  • the lower left position corresponds to the third angle
  • the lower right position corresponds to the fourth angle.
  • according to the position of each image in its image group and the above mapping, the light input angle for the grating can be determined; according to the parameters of the lens group 103, the light emission angle corresponding to the light input angle for the grating can be determined; the light emitting unit 101 then emits visible light at this emission angle, so that the corresponding image is mapped to the area corresponding to the corresponding splicing position. There are likewise four light emission angles, in one-to-one correspondence with the above four angles: the first angle corresponds to angle A, the second angle to angle B, the third angle to angle C, and the fourth angle to angle D.
  • from the one-to-one mapping between the position of an image in its image group and the light input angle for the grating, together with the one-to-one mapping between the light input angle for the grating and the light emission angle, a one-to-one mapping between the position of the image in its image group and the light emission angle is obtained, so that the light emission angle can be determined directly from the position of the image in its image group.
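  • A sketch of that composition (Python; the dictionaries stand in for the two mappings, and the angle names follow the example below):

```python
# Composing the two one-to-one mappings into a direct
# position-in-group -> light-emission-angle lookup.

GRATING_INPUT_BY_POSITION = {1: "first angle", 2: "second angle",
                             3: "third angle", 4: "fourth angle"}
EMISSION_BY_GRATING_INPUT = {"first angle": "angle A", "second angle": "angle B",
                             "third angle": "angle C", "fourth angle": "angle D"}

# Precomposed once, so the light emitting unit 101 can index it directly.
EMISSION_BY_POSITION = {pos: EMISSION_BY_GRATING_INPUT[g]
                        for pos, g in GRATING_INPUT_BY_POSITION.items()}

assert EMISSION_BY_POSITION == {1: "angle A", 2: "angle B",
                                3: "angle C", 4: "angle D"}
```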
  • when the image source 102 outputs the first image within one second, the light emitting unit 101 determines that the corresponding light emission angle is angle A according to the position of the image in the image group to which it belongs (i.e., the first position), and the light emitting unit 101 emits visible light at angle A during the display period of the first image, i.e., 10 milliseconds, so that the display time of the first image in the upper left corner of the display is 10 milliseconds.
  • when the image source 102 outputs the second image, the light emitting unit 101 determines that the corresponding light emission angle is angle B according to the position of the image in the image group to which it belongs (i.e., the second position), and emits visible light at angle B during the display period of the second image, i.e., 10 milliseconds, so that the display time of the second image in the upper right corner of the display is 10 milliseconds.
  • the light emitting unit 101 determines that the corresponding light emission angle is angle C according to the position of the image in the image group to which it belongs (i.e., the third position).
  • the light emitting unit 101 emits visible light at angle C during the display period of the third image, i.e., 10 milliseconds, so that the display time of the third image in the lower left corner of the display is 10 milliseconds.
  • when the image source 102 outputs the fourth image, the light emitting unit 101 determines that the corresponding light emission angle is angle D according to the position of the image in the image group to which it belongs (i.e., the fourth position), and emits visible light at angle D during the display period of the fourth image, i.e., 10 milliseconds, so that the display time of the fourth image in the lower right corner of the display is 10 milliseconds.
  • at this point the image source 102 has output one image group, and the display process is: the first image is displayed at the upper left position of the display for 10 milliseconds, the second image at the upper right position for 10 milliseconds, the third image at the lower left position for 10 milliseconds, and the fourth image at the lower right position for 10 milliseconds. Because of the visual persistence mechanism of the human eye, the user can see a complete large image composed of these four images. Compared with the display effect of the prior art, in which the images output by the image source 102 are tiled on the display, the resolution of an image of the same size displayed by this method is higher.
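  • A compact simulation of that per-group schedule under the numbers of this example (100 fps input, N = 4, 10 ms slots; the function itself is an illustrative assumption):

```python
# Per-group display schedule for Example 1 (100 fps input, N = 4).
# Positions and the 10 ms slot follow the text; the code structure is illustrative.

POSITIONS = ["upper left", "upper right", "lower left", "lower right"]
SLOT_MS = 10  # 1000 ms / 100 input frames

def group_schedule(group_index: int) -> list[tuple[int, str, str]]:
    """Return (start time in ms, image, position) for one image group."""
    start = group_index * len(POSITIONS) * SLOT_MS
    return [(start + i * SLOT_MS, f"image {i + 1}", POSITIONS[i])
            for i in range(len(POSITIONS))]

for t, img, pos in group_schedule(0):
    print(f"t = {t:3d} ms: project {img} at {pos} for {SLOT_MS} ms")
# Four 10 ms flashes per group; persistence of vision fuses them into one image.
```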
  • the method in which the light emitting unit 101 determines the light emission angle according to the position of the output image of the image source in the image group to which it belongs includes: determining the light input angle for the grating according to the position of the output image of the image source in the image group to which it belongs, and determining the light emission angle according to the light input angle for the grating.
  • Example 2 differs from Example 1 in that the light input angle for the grating corresponding to the image is first determined from the one-to-one mapping between the position of the image in its image group and the light input angle for the grating, and the light emission angle is then determined from the one-to-one mapping between the light input angle for the grating and the light emission angle.
  • the light emitting unit 101 determines the corresponding light input angle for the grating as the first angle according to the position of the image in the image group to which it belongs (i.e., the first position), and determines the light emission angle as angle A according to the first angle.
  • the light emitting unit 101 emits visible light at angle A during the display period of the first image, i.e., 10 milliseconds, so that the display time of the first image in the upper left corner of the display is 10 milliseconds.
  • the light emitting unit 101 determines the corresponding light input angle for the grating as the second angle according to the position of the image in the image group to which it belongs (i.e., the second position), and determines the light emission angle as angle B according to the second angle.
  • the light emitting unit 101 emits visible light at angle B during the display period of the second image, i.e., 10 milliseconds, so that the display time of the second image in the upper right corner of the display is 10 milliseconds.
  • the light emitting unit 101 determines that the corresponding light input angle for the grating is a third angle according to the position of the image in the image group to which it belongs (i.e., the third position), and determines the light emission angle as angle C according to the third angle.
  • the light emitting unit 101 emits visible light at angle C during the display period of the third image, i.e., 10 milliseconds, so that the display time of the third image in the lower left corner of the display is 10 milliseconds.
  • when the image source 102 outputs the fourth image, the light emitting unit 101 determines that the corresponding light input angle for the grating is the fourth angle according to the position of the image in the image group to which it belongs (i.e., the fourth position), and determines the light emission angle to be angle D according to the fourth angle.
  • the light emitting unit 101 emits visible light at angle D during the display period of the fourth image, i.e., 10 milliseconds, so that the display time of the fourth image in the lower right corner of the display is 10 milliseconds.
  • determining the light input angle for the grating according to the position of the output image of the image source in the image group to which it belongs includes: determining a stitching position in the stitched image according to the position of the output image of the image source in the image group to which it belongs, and determining the light input angle for the grating according to the stitching position.
  • for example, when determining the light input angle for the grating corresponding to an image, the stitching position in the stitched image is first determined to be the upper left according to the position of the image in its image group, and the light input angle for the grating is then determined to be the first angle according to that stitching position; for another image whose stitching position is determined to be the upper right, the light input angle for the grating is determined to be the second angle, and so on.
  • given that most current displays are rectangular, the value of N can be an even value, which results in a better stitching effect.
  • this is not limited to the above, and the value of N can also be an odd value.
  • the value of N is 3, and three small images are stitched into a long strip-shaped large image. The display of the glasses may not be completely occupied, and the image display effect can still be achieved.
  • the method of outputting a complete image through the image source is changed to outputting partial images in sequence through the image source, and different partial images are projected to different positions on the display with a sufficiently short projection time, so that the user can see the complete image due to the visual persistence effect, thereby achieving the effect of splicing multiple partial images into a larger image.
  • the user can see an image with higher resolution, thereby improving the user's visual experience.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

The present disclosure provides a method, apparatus and device for displaying an image. The method includes: acquiring an image sequence (S1), the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, and the N images being the N partial images that constitute the complete image; projecting each image in each image group onto a corresponding sub-area within a display area, and controlling the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the display area (S2), wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.

Description

Method, Apparatus and Electronic Device for Displaying an Image
Technical Field
The present disclosure relates to the field of augmented reality display technology, and in particular to a method, apparatus and electronic device for displaying images.
Background Art
AR (Augmented Reality) is a technology that generates a corresponding virtual scene by computing the position and angle of an image in real time. This technology projects the image of a micro display screen in front of the human eye through a series of optical elements, so that the human eye can observe the superposition of the virtual world and the real world through the worn AR device. The operator can interact with the virtual scene and the real scene at the same time through the AR device.
Summary of the Invention
To overcome the problems existing in the related art, the present disclosure provides a method, apparatus and electronic device for displaying an image.
In a first aspect, a method for displaying an image is provided, the method comprising:
acquiring an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, the N images being the N partial images that constitute the complete image, and N being an integer greater than 1;
projecting each image in each image group onto a corresponding sub-area within a display area, and controlling the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the display area, wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
In some possible implementations, acquiring the image sequence includes: acquiring an image sequence input at a first frame rate;
the complete image corresponding to the image group being displayed within the display area includes: the complete image corresponding to the image group being displayed within the display area at a second frame rate;
the first frame rate being N times the second frame rate.
In some possible implementations, projecting each image in each image group onto the corresponding sub-area within the display area includes:
determining, according to the sub-area corresponding to each image in the image group, the projection angle corresponding to the image;
projecting the image onto the sub-area corresponding to the projection angle through the projection angle.
In some possible implementations, determining, according to the sub-area corresponding to each image in the image group, the projection angle corresponding to the image includes:
determining the sub-area corresponding to the image according to the position of the image within the image group.
In some possible implementations, projecting each image in each image group onto the corresponding sub-area within the display area includes:
determining a grating incident angle according to the position of the image within the image group;
projecting the image onto the sub-area corresponding to the image through the grating exit angle corresponding to the grating incident angle.
In some possible implementations, the grating incident angle and the grating exit angle conform to degeneracy logic, the degeneracy logic including: the end points of the light vectors corresponding to different grating incident angles lie on a first ring, the end points of the light vectors corresponding to different grating exit angles lie on a second ring, and the light-vector differences between the different grating incident angles and the corresponding grating exit angles are equal in length and identical in direction.
In some possible implementations, N is an even number.
In a second aspect, a device for displaying an image is provided, the device comprising an input unit and a projection unit;
the input unit being configured to acquire an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, and the N images being the N partial images that constitute the complete image;
the projection unit being configured to project each image in each image group onto a corresponding sub-area within a display area, and to control the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the display area, wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
In a third aspect, an electronic device is provided, comprising a processor and a memory, wherein
the memory is configured to store a computer program;
the processor is configured to execute the computer program to implement the first aspect or any possible design of the first aspect.
In a fourth aspect, a device for displaying an image is provided, the device comprising, along the propagation direction of light: a light emitting unit, an image input device, a lens, a degenerate grating, and a display;
the image input device being configured to input an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, and the N images being the N partial images that constitute the complete image;
the light emitting unit being configured to project each image in each image group onto a corresponding sub-area within a display area, and to control the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the display area, wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
In a fifth aspect, augmented reality (AR) glasses are provided, wherein the glasses include one or two sets of display devices, each display device including a temple and a frame;
the temple includes a projection device which comprises, along the propagation direction of light: a light emitting unit, an image input device, a lens and a degenerate grating; a display is fixed in the frame;
the image input device being configured to input an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, and the N images being the N partial images that constitute the complete image;
the light emitting unit being configured to project each image in each image group onto a corresponding sub-area within the frame, and to control the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the frame, wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
The technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: the manner of outputting a complete image from the image source is changed to outputting partial images in sequence from the image source, and the different partial images are projected to different positions on the display with a sufficiently short projection duration, so that the user can view the complete image owing to the persistence-of-vision effect, thereby achieving the effect of splicing multiple partial images into one larger image; compared with the prior-art manner of projecting every image output by the image source onto the display in a tiled fashion, the user can see an image of higher resolution, improving the user's visual experience.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings herein are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
FIG. 1 is a flow chart of a method for displaying an image according to an exemplary embodiment.
FIG. 2 is a schematic diagram of the correspondence between the position of an image within an image group and its splicing position according to an exemplary embodiment.
FIG. 3 is a schematic diagram of the principle of grating degeneracy according to an exemplary embodiment.
FIG. 4 is a schematic diagram of the projection of the grating's writing wave vectors on the xy plane according to an exemplary embodiment.
FIG. 5 is a structural diagram of a device for displaying an image according to an exemplary embodiment.
FIG. 6 is a structural diagram of AR glasses applying the method for displaying images according to an exemplary embodiment.
FIG. 7 is a block diagram of an augmented reality display device according to an exemplary embodiment.
FIG. 8 is a block diagram of another augmented reality display device according to an exemplary embodiment.
FIG. 9 is a block diagram of another augmented reality display device according to an exemplary embodiment.
FIG. 10 is a schematic diagram of an augmented reality display device implementing image splicing in glasses according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are shown in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
An embodiment of the present disclosure provides a method for displaying an image. FIG. 1 is a flow chart of a method for displaying an image according to an exemplary embodiment. As shown in FIG. 1, the method includes S1 to S2, specifically:
S1: acquire an image sequence.
The image sequence sequentially comprises a plurality of image groups, each image group comprises N images, each image group corresponds to one complete image, the N images are the N partial images that constitute the complete image, and N is an integer greater than 1.
S2: project each image in each image group onto a corresponding sub-area within a display area, and control the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the display area.
Different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
In a possible implementation, since most displays are rectangular areas, the value of N may be an even number in order to occupy as much of the display area as possible; depending on the usage requirements of different scenarios, the value of N may also be an odd number.
In a possible implementation, acquiring the image sequence may be acquiring an image sequence input at a first frame rate, and the complete image corresponding to the image group being displayed within the display area may be: the complete image corresponding to the image group being displayed within the display area at a second frame rate, the first frame rate being N times the second frame rate. In this implementation, the manner of outputting a complete image from the image source is changed to outputting partial images in sequence from the image source while sacrificing display frame rate, so that the user can view the complete image owing to the persistence-of-vision effect; compared with the prior-art manner of projecting every image output by the image source onto the display in a tiled fashion, the user can see an image of higher resolution.
In a possible implementation, the method in S2 of projecting each image in each image group onto the corresponding sub-area within the display area may be:
S2-1: determine, according to the sub-area corresponding to each image in the image group, the projection angle corresponding to the image;
S2-2: project the image onto the sub-area corresponding to the projection angle through the projection angle.
In a possible implementation, determining in S2-1, according to the sub-area corresponding to each image in the image group, the projection angle corresponding to the image includes: determining the sub-area corresponding to the image according to the position of the image within the image group.
In one example, as shown in FIG. 2, the image source outputs images at a rate of 100 frames per second; the 100 frames output per second correspond in sequence to 25 image groups, each of which includes 4 images. The 4 images in each image group are spliced into one complete spliced image in a 2×2 grid (a 田-shaped structure). In every image group, the 1st image corresponds to the upper-left splicing position, the 2nd image to the upper-right splicing position, the 3rd image to the lower-left splicing position, and the 4th image to the lower-right splicing position.
The order position of any image within its image group has only 4 possibilities: upper-left, upper-right, lower-left and lower-right. Correspondingly, there are 4 projection angles, e.g. a first angle, a second angle, a third angle and a fourth angle. There is a fixed one-to-one mapping between the two: the upper-left position corresponds to the first angle, the upper-right position to the second angle, the lower-left position to the third angle, and the lower-right position to the fourth angle.
Thus, the sub-area corresponding to each image can be determined from the order position of the image within its image group, and the projection angle corresponding to the image can be determined from that sub-area.
In a possible implementation, the method in S2 of projecting each image in each image group onto the corresponding sub-area within the display area includes:
S2-A: determine a grating incident angle according to the position of the image within the image group;
S2-B: project the image onto the sub-area corresponding to the image through the grating exit angle corresponding to the grating incident angle.
In one example, in S2 the grating incident angle and the grating exit angle conform to degeneracy logic, the degeneracy logic including: the end points of the light vectors corresponding to different grating incident angles lie on a first ring, the end points of the light vectors corresponding to different grating exit angles lie on a second ring, and the light-vector differences between the different grating incident angles and the corresponding grating exit angles are equal in length and identical in direction.
The degeneracy logic is explained as follows:
In three-dimensional space, when a series of paired reference wave vectors $\{k_p\}$ and object wave vectors $\{k_q\}$ satisfy the condition $\{k_p - k_q = K\}$, the wave vectors recorded by the paired writing waves belong to the same grating. As shown in FIG. 3, $K$ is a constant vector: the grating vector formed by one pair of writing wave vectors $k_i$ and $k_s$. The loci of the end points of these wave vectors are two rings on the K-vector sphere, formed by rotating the writing wave vectors $k_i$ and $k_s$ around the diameter of the vector sphere parallel to $K$. The grating represented by each generatrix of the cylinder whose bases are the two rings is degenerate; that is, every readout beam on the conical surface containing $k_i$ can reconstruct this grating.
FIG. 4 is a schematic diagram of the projection of the grating's writing wave vectors on the xy plane according to an exemplary embodiment. Under grating degeneracy, the end points of the writing wave vectors $k_{i1}, k_{i2}, k_{i3}, \ldots, k_{in}$ lie on one ring, the end points of the corresponding writing wave vectors $k_{s1}, k_{s2}, k_{s3}, \ldots, k_{sn}$ lie on the other ring, and $K = k_{i1} - k_{s1} = k_{i2} - k_{s2} = k_{i3} - k_{s3} = \cdots = k_{in} - k_{sn}$.
The embodiments of the present disclosure are also applicable to screen-projection application scenarios, in which hardware with different functions in a projector performs the different processing functions of the above method, or an existing projector is modified so that the modified projector can realize the display function of the above method.
In the embodiments of the present disclosure, the manner of outputting a complete image from the image source is changed to outputting partial images in sequence from the image source, and the different partial images are projected to different positions on the display with a sufficiently short projection duration, so that the user can view the complete image owing to the persistence-of-vision effect, thereby achieving the effect of splicing multiple partial images into one larger image. Compared with the prior-art manner of projecting every image output by the image source onto the display in a tiled fashion, the user can see an image of higher resolution, improving the user's visual experience.
An embodiment of the present disclosure provides a device for displaying an image. FIG. 5 is a structural diagram of a device for displaying an image according to an exemplary embodiment. As shown in FIG. 5, the device 500 includes an input unit and a projection unit;
the input unit is configured to acquire an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, and the N images being the N partial images that constitute the complete image;
the projection unit is configured to project each image in each image group onto a corresponding sub-area within a display area, and to control the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the display area, wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
The input unit and the projection unit in the device for displaying an image in the embodiments of the present disclosure may be implemented by hardware products of different forms.
The ideas in the present disclosure can also be applied to AR products. AR products can be divided into three categories: head-mounted displays, handheld displays, and spatial displays represented by PC monitors and HUDs (Head-Up Displays).
A head-mounted display consists of a head-mounted device and one or more (micro) display screens paired with it, for example smart AR glasses. Head-mounted displays are usually equipped with multi-degree-of-freedom sensors, allowing the user to move the head freely in six directions: forward-backward, up-down, left-right, pitch, yaw and roll; head-mounted AR products can adjust the picture accordingly based on the movement of the user's head.
The handheld display, represented by the smartphone, is the representative product of AR mobile terminals. After continuous iteration and functional optimization, smartphone displays have ever higher resolution, more powerful processors, better camera imaging quality and richer sensors; these advantages make the smartphone the platform on which AR applications can land most easily at this stage. However, handheld displays cannot give the user direct visual contact with the virtual and real worlds.
Spatial displays are AR products that project virtual content directly into the real world. Spatial displays are often fixed in the physical world, and any surrounding physical surface, such as a wall, a desktop or even the human body, can become an interactive AR display screen.
Among the above three types of AR products, AR glasses can break through the limitation of the screen and use the entire physical world as the AR interactive interface, and are therefore considered the mainstream technical route for future AR products. In the related art, the display systems of AR head-mounted displays fall into two main categories:
One category is based on traditional geometric-optics superposition techniques, including the coaxial side-view prism method, the geometric optical waveguide method and the freeform mirror method. AR head-mounted display systems based on geometric optics suffer from limitations such as thick and heavy optical components, a small field of view, low transmittance of external light or low utilization of image-source light energy, and high processing difficulty, and their display performance in practical applications is poor.
The other category is head-mounted display systems based on diffractive optical elements, which transmit and deflect light by designing diffractive optical waveguide devices or diffraction gratings, effectively reducing the size and weight of the optical components. However, head-mounted display systems based on diffractive optical elements still have problems, including: a small system field of view, which causes loss of field of view during observation; low transmission efficiency of the diffractive waveguide, which causes low utilization of image-source light energy; the constrained volume of the optical engine carried by the diffractive waveguide display, which causes insufficient display image resolution; and the fact that raising the display resolution makes the overall AR display system excessively large.
Under the premise that neither the image size nor the total number of pixels of the image source can be increased, how to improve the user's viewing experience is a problem that needs to be solved.
Augmented reality display devices can be applied to head-mounted displays, handheld displays and spatial displays.
An embodiment of the present disclosure provides augmented reality (AR) glasses. FIG. 6 is a structural diagram of AR glasses according to an exemplary embodiment. As shown in FIG. 6, the glasses include two sets of display devices, and each set of display devices includes a temple and a frame.
The temple includes a projection device which includes, along the propagation direction of light: a light emitting unit, an image input device, a lens and a degenerate grating; a display is fixed in the frame.
The image input device is configured to input an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, and the N images being the N partial images that constitute the complete image.
The light emitting unit is configured to project each image in each image group onto a corresponding sub-area within the frame, and to control the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the frame. Different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
Optionally, each frame fixes one display. The glasses shown in FIG. 6 include frames, each of which fixes one display.
In the AR glasses shown in FIG. 6, when the first set of display devices corresponds to the left-eye side of the glasses and the second set of display devices corresponds to the right-eye side, the image sequence input to the first set of display devices is the image sequence captured by a left-eye camera, and the image sequence input to the second set of display devices is the image sequence captured by a right-eye camera; when the two sets of display devices display simultaneously, the user can see video with a 3D effect.
In some application scenarios, the AR glasses may also include only one set of display devices, the display device including one temple and one frame; that is, the AR glasses may include only one temple and one frame.
An exemplary embodiment of the present disclosure provides an augmented reality display device. FIG. 7 is a block diagram of an augmented reality display device according to an exemplary embodiment. As shown in FIG. 7, along the propagation direction of light the device includes: a light emitting unit 101, an image source (also called an image input device) 102, a lens group 103, a grating 104, and a display 105.
In one implementation, the grating 104 may be a degenerate grating.
In one implementation, the light emitting unit 101, the image source 102, the lens group 103 and the grating 104 constitute the projection device in the embodiment corresponding to FIG. 6.
The light transmission process in this device is as follows: the light emitting unit 101 emits light, which is modulated by the lens group 103 and the grating 104 and then reaches the display 105; the display 105 reflects the image output by the image source 102 into the human eye.
In one implementation, as shown in FIG. 8, the light emitting unit 101 includes only a light source 101-1, and the light source can control the emission angle of the emitted light.
In one implementation, as shown in FIG. 9, the light emitting unit 101 includes a light source 101-1 and a reflector 101-2. The light source 101-1 is configured to output light at a fixed angle; the reflector 101-2 is configured to reflect the input light at different angles.
FIG. 10 is a schematic diagram of light being modulated by the grating according to an exemplary embodiment. Incident light at the different angles corresponding to different output images passes through the lens group 103 and reaches the grating 104; light at different angles corresponds to directions obtained by rotating the $k_i$ vector of the degenerate grating's K-vector sphere along the vector ring. The grating 104 modulates the light at the different angles, and the modulation satisfies the change of the $k_s$ vector shown in FIG. 3, thereby mapping the different output images to different positions in the display area on the display 105.
Two specific examples are described in detail below.
Example 1:
The image source outputs images at a rate of 100 frames per second; the 100 frames output per second correspond in sequence to 25 image groups, each of which includes 4 images. The 4 images in each image group are spliced into one complete spliced image in a 2×2 grid (a 田-shaped structure). In every image group, the 1st image corresponds to the upper-left splicing position, the 2nd image to the upper-right splicing position, the 3rd image to the lower-left splicing position, and the 4th image to the lower-right splicing position.
It can be seen that the position of any image within its image group has only 4 possibilities: upper-left, upper-right, lower-left and lower-right. Correspondingly, there are 4 light input angles for the grating, e.g. a first angle, a second angle, a third angle and a fourth angle. There is a fixed one-to-one mapping between the two: the upper-left position corresponds to the first angle, the upper-right position to the second angle, the lower-left position to the third angle, and the lower-right position to the fourth angle.
Thus, according to the position of each image within its image group and the above one-to-one mapping, the light input angle for the grating can be determined; according to the parameters of the lens group 103, the light emission angle corresponding to that light input angle can be determined; the light emitting unit 101 then emits visible light at this emission angle, and the corresponding image is mapped to the area corresponding to the corresponding splicing position. There are likewise four light emission angles, in one-to-one correspondence with the above four angles: the first angle corresponds to angle A, the second angle to angle B, the third angle to angle C, and the fourth angle to angle D.
From the one-to-one mapping between an image's position within its image group and the light input angle for the grating, together with the one-to-one mapping between the light input angle for the grating and the light emission angle, a one-to-one mapping between the image's position within its image group and the light emission angle is obtained, so the light emission angle can be determined directly from the image's position within its image group.
Specifically, when the image source 102 outputs the 1st image within one second, the light emitting unit 101 determines the corresponding light emission angle to be angle A according to the position of the image within its image group (i.e. the 1st position); during the display period of the 1st image, i.e. 10 milliseconds, the light emitting unit 101 emits visible light at angle A, so that the 1st image is displayed at the upper left of the display for 10 milliseconds.
When the image source 102 goes on to output the 2nd image, the light emitting unit 101 determines the corresponding light emission angle to be angle B according to the position of the image within its image group (i.e. the 2nd position); during the display period of the 2nd image, i.e. 10 milliseconds, the light emitting unit 101 emits visible light at angle B, so that the 2nd image is displayed at the upper right of the display for 10 milliseconds.
When the image source 102 goes on to output the 3rd image, the light emitting unit 101 determines the corresponding light emission angle to be angle C according to the position of the image within its image group (i.e. the 3rd position); during the display period of the 3rd image, i.e. 10 milliseconds, the light emitting unit 101 emits visible light at angle C, so that the 3rd image is displayed at the lower left of the display for 10 milliseconds.
When the image source 102 goes on to output the 4th image, the light emitting unit 101 determines the corresponding light emission angle to be angle D according to the position of the image within its image group (i.e. the 4th position); during the display period of the 4th image, i.e. 10 milliseconds, the light emitting unit 101 emits visible light at angle D, so that the 4th image is displayed at the lower right of the display for 10 milliseconds.
At this point the image source 102 has output one image group, and the display process is: the 1st image is displayed at the upper-left position of the display for 10 milliseconds, the 2nd image at the upper-right position for 10 milliseconds, the 3rd image at the lower-left position for 10 milliseconds, and the 4th image at the lower-right position for 10 milliseconds. Because of the persistence-of-vision mechanism of the human eye, the user can see one complete large image spliced from these four images. Compared with the prior-art display effect in which the images output by the image source 102 are tiled onto the display, an image of the same size displayed by this method has a higher resolution.
In some possible implementations, the method by which the light emitting unit 101 determines the light emission angle according to the position of the image-source output image within its image group includes: determining the light input angle for the grating according to the position of the output image within its image group, and determining the light emission angle according to the light input angle for the grating.
Example 2:
Example 2 differs from Example 1 in that the light input angle for the grating corresponding to the image is first determined from the one-to-one mapping between the image's position within its image group and the light input angle for the grating, and the light emission angle is then determined from the one-to-one mapping between the light input angle for the grating and the light emission angle.
Specifically:
When the image source 102 outputs the 1st image within one second, the light emitting unit 101 determines the corresponding light input angle for the grating to be the first angle according to the position of the image within its image group (i.e. the 1st position), and determines the light emission angle to be angle A according to this first angle; during the display period of the 1st image, i.e. 10 milliseconds, the light emitting unit 101 emits visible light at angle A, so that the 1st image is displayed at the upper left of the display for 10 milliseconds.
When the image source 102 goes on to output the 2nd image, the light emitting unit 101 determines the corresponding light input angle for the grating to be the second angle according to the position of the image within its image group (i.e. the 2nd position), and determines the light emission angle to be angle B according to this second angle; during the display period of the 2nd image, i.e. 10 milliseconds, the light emitting unit 101 emits visible light at angle B, so that the 2nd image is displayed at the upper right of the display for 10 milliseconds.
When the image source 102 goes on to output the 3rd image, the light emitting unit 101 determines the corresponding light input angle for the grating to be the third angle according to the position of the image within its image group (i.e. the 3rd position), and determines the light emission angle to be angle C according to this third angle; during the display period of the 3rd image, i.e. 10 milliseconds, the light emitting unit 101 emits visible light at angle C, so that the 3rd image is displayed at the lower left of the display for 10 milliseconds.
When the image source 102 goes on to output the 4th image, the light emitting unit 101 determines the corresponding light input angle for the grating to be the fourth angle according to the position of the image within its image group (i.e. the 4th position), and determines the light emission angle to be angle D according to this fourth angle; during the display period of the 4th image, i.e. 10 milliseconds, the light emitting unit 101 emits visible light at angle D, so that the 4th image is displayed at the lower right of the display for 10 milliseconds.
In some possible implementations, determining the light input angle for the grating according to the position of the image-source output image within its image group includes: determining the splicing position within the spliced image according to the position of the output image within its image group, and determining the light input angle for the grating according to the splicing position.
For example: when determining the light input angle for the grating corresponding to one image, the splicing position within the spliced image is first determined to be the upper left according to the image's position within its image group, and the light input angle for the grating is then determined to be the first angle according to this splicing position.
When determining the light input angle for the grating corresponding to another image, the splicing position within the spliced image is first determined to be the upper right according to the image's position within its image group, and the light input angle for the grating is then determined to be the second angle according to this splicing position.
And so on.
In some possible implementations, given that most current displays are rectangular, the value of N may be an even number, which yields a better splicing effect. This is not limiting, however: the value of N may also be an odd number. For example, as shown in FIG. 10, the value of N is 3 and three small images are spliced into one long strip-shaped large image; the display of the glasses may not be completely occupied, yet the image display effect can still be achieved. A sketch of one possible partition for even and odd N follows.
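As an illustration of how a display could be partitioned for even and odd values of N (a sketch; the partition rule below is an assumption for illustration and is not prescribed by the present disclosure):

```python
# Partition a display of size (w, h) into N sub-areas: a two-row grid for even N
# (e.g. the 2x2 "tian"-shaped layout for N = 4) and a single horizontal strip for
# odd N (e.g. the N = 3 case of FIG. 10). The rule itself is only an illustration.

def subareas(w: int, h: int, n: int) -> list[tuple[int, int, int, int]]:
    """Return (x, y, width, height) for each of the N sub-areas."""
    if n % 2 == 0:
        cols, rows = n // 2, 2
    else:
        cols, rows = n, 1  # strip layout; the display need not be fully occupied
    cw, ch = w // cols, h // rows
    return [(c * cw, r * ch, cw, ch) for r in range(rows) for c in range(cols)]

print(subareas(1920, 1080, 4))  # four quadrants
print(subareas(1920, 1080, 3))  # three side-by-side strips
```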
In this method, under the premise that neither the image size nor the total number of pixels of the image source can be increased, the manner of outputting a complete image from the image source is changed to outputting partial images in sequence from the image source, and the different partial images are projected to different positions on the display with a sufficiently short projection duration, so that the user can view the complete image owing to the persistence-of-vision effect, thereby achieving the effect of splicing multiple partial images into one larger image. Compared with the prior-art manner of projecting every image output by the image source onto the display in a tiled fashion, the user can see an image of higher resolution, improving the user's visual experience.
Those skilled in the art will readily conceive of other embodiments of the present disclosure after considering the specification and practicing the invention disclosed herein. The present disclosure is intended to cover any variations, uses or adaptations of the present disclosure that follow its general principles and include common general knowledge or customary technical means in the art not disclosed in the present disclosure. The specification and embodiments are to be regarded as exemplary only, and the true scope and spirit of the present disclosure are indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures that have been described above and shown in the accompanying drawings, and various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
Industrial Applicability
Under the premise that neither the image size nor the total number of pixels of the image source can be increased, the manner of outputting a complete image from the image source is changed to outputting partial images in sequence from the image source, and the different partial images are projected to different positions on the display with a sufficiently short projection duration, so that the user can view the complete image owing to the persistence-of-vision effect, thereby achieving the effect of splicing multiple partial images into one larger image. Compared with the prior-art manner of projecting every image output by the image source onto the display in a tiled fashion, the user can see an image of higher resolution, improving the user's visual experience.

Claims (11)

  1. A method for displaying an image, wherein the method comprises:
    acquiring an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, the N images being the N partial images that constitute the complete image, and N being an integer greater than 1;
    projecting each image in each image group onto a corresponding sub-area within a display area, and controlling the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the display area, wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
  2. The method for displaying an image according to claim 1, wherein acquiring the image sequence comprises: acquiring an image sequence input at a first frame rate;
    the complete image corresponding to the image group being displayed within the display area comprises: the complete image corresponding to the image group being displayed within the display area at a second frame rate;
    the first frame rate is N times the second frame rate.
  3. The method for displaying an image according to claim 1 or 2, wherein projecting each image in each image group onto the corresponding sub-area within the display area comprises:
    determining, according to the sub-area corresponding to each image in the image group, the projection angle corresponding to the image;
    projecting the image onto the sub-area corresponding to the projection angle through the projection angle.
  4. The method for displaying an image according to claim 3, wherein determining, according to the sub-area corresponding to each image in the image group, the projection angle corresponding to the image comprises:
    determining the sub-area corresponding to the image according to the position of the image within the image group.
  5. The method for displaying an image according to claim 1 or 2, wherein projecting each image in each image group onto the corresponding sub-area within the display area comprises:
    determining a grating incident angle according to the position of the image within the image group;
    projecting the image onto the sub-area corresponding to the image through the grating exit angle corresponding to the grating incident angle.
  6. The method for displaying an image according to claim 5, wherein the grating incident angle and the grating exit angle conform to degeneracy logic, the degeneracy logic comprising: the end points of the light vectors corresponding to different grating incident angles lie on a first ring, the end points of the light vectors corresponding to different grating exit angles lie on a second ring, and the light-vector differences between the different grating incident angles and the corresponding grating exit angles are equal in length and identical in direction.
  7. The method for displaying an image according to claim 1, wherein N is an even number.
  8. A device for displaying an image, wherein the device comprises an input unit and a projection unit;
    the input unit is configured to acquire an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, and the N images being the N partial images that constitute the complete image;
    the projection unit is configured to project each image in each image group onto a corresponding sub-area within a display area, and to control the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the display area, wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
  9. An electronic device, comprising a processor and a memory, wherein
    the memory is configured to store a computer program;
    the processor is configured to execute the computer program to implement the method according to any one of claims 1 to 7.
  10. A device for displaying an image, wherein along the propagation direction of light the device comprises: a light emitting unit, an image input device, a lens, a degenerate grating, and a display;
    the image input device is configured to input an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, and the N images being the N partial images that constitute the complete image;
    the light emitting unit is configured to project each image in each image group onto a corresponding sub-area within a display area, and to control the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the display area, wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
  11. Augmented reality (AR) glasses, wherein the glasses comprise one or two sets of display devices, each display device comprising a temple and a frame;
    the temple comprises a projection device which comprises, along the propagation direction of light: a light emitting unit, an image input device, a lens and a degenerate grating; a display is fixed in the frame;
    the image input device is configured to input an image sequence, the image sequence sequentially comprising a plurality of image groups, each image group comprising N images, each image group corresponding to one complete image, and the N images being the N partial images that constitute the complete image;
    the light emitting unit is configured to project each image in each image group onto a corresponding sub-area within the frame, and to control the projection duration of each image to be less than a preset duration, so that the complete image corresponding to the image group is displayed within the frame, wherein different images in each image group correspond to different sub-areas, and the preset duration satisfies the requirement that the complete image can be viewed owing to the persistence-of-vision effect.
PCT/CN2022/136207 2022-12-02 2022-12-02 Method, apparatus and electronic device for displaying an image WO2024113360A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/136207 WO2024113360A1 (zh) Method, apparatus and electronic device for displaying an image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/136207 WO2024113360A1 (zh) Method, apparatus and electronic device for displaying an image

Publications (1)

Publication Number Publication Date
WO2024113360A1 true WO2024113360A1 (zh) 2024-06-06

Family

ID=91322815

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/136207 WO2024113360A1 (zh) Method, apparatus and electronic device for displaying an image

Country Status (1)

Country Link
WO (1) WO2024113360A1 (zh)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101498812A (zh) * 2008-01-30 2009-08-05 黄峰彪 Method for combining multiple images into one displayed image, and display assembly thereof
CN105527789A (zh) * 2016-02-29 2016-04-27 青岛海信电器股份有限公司 Projection display method and system
CN107092093A (zh) * 2017-06-16 2017-08-25 北京灵犀微光科技有限公司 Waveguide display device
CN108873332A (zh) * 2018-05-24 2018-11-23 成都理想境界科技有限公司 Monocular large-field-of-view near-eye display module, display method and head-mounted display device
CN109298542A (zh) * 2018-12-12 2019-02-01 深圳创维新世界科技有限公司 Time-sequential three-dimensional projection display system
US20190113755A1 (en) * 2017-10-18 2019-04-18 Seiko Epson Corporation Virtual image display device
US20200110361A1 (en) * 2018-10-09 2020-04-09 Microsoft Technology Licensing, Llc Holographic display system


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22966948

Country of ref document: EP

Kind code of ref document: A1