CN112929636A - 3D display device and 3D image display method - Google Patents

3D display device and 3D image display method

Info

Publication number
CN112929636A
CN112929636A (application number CN201911231149.XA)
Authority
CN
China
Prior art keywords
user
eye
image
display
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911231149.XA
Other languages
Chinese (zh)
Inventor
刁鸿浩
黄玲溪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vision Technology Venture Capital Pte Ltd
Beijing Ivisual 3D Technology Co Ltd
Original Assignee
Vision Technology Venture Capital Pte Ltd
Beijing Ivisual 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vision Technology Venture Capital Pte Ltd, Beijing Ivisual 3D Technology Co Ltd filed Critical Vision Technology Venture Capital Pte Ltd
Priority to CN201911231149.XA priority Critical patent/CN112929636A/en
Priority to EP20895613.6A priority patent/EP4068768A4/en
Priority to PCT/CN2020/133332 priority patent/WO2021110038A1/en
Priority to US17/781,058 priority patent/US20230007228A1/en
Priority to TW109142887A priority patent/TWI788739B/en
Publication of CN112929636A publication Critical patent/CN112929636A/en
Pending legal-status Critical Current

Classifications

    • H04N13/00: Stereoscopic video systems; multi-view video systems; details thereof (Section H: Electricity; Class H04: Electric communication technique; Subclass H04N: Pictorial communication, e.g. television)
    • H04N13/20: Image signal generators
    • H04N13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279: Image signal generators from 3D object models, the virtual viewpoint locations being selected by the viewers or determined by tracking
    • H04N13/30: Image reproducers
    • H04N13/302: Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/366: Image reproducers using viewer tracking
    • H04N13/368: Image reproducers using viewer tracking for two or more viewers
    • H04N13/383: Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • H04N13/398: Synchronisation thereof; control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Liquid Crystal (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application relates to the technical field of 3D display, and discloses a 3D display device, which comprises: a multi-view naked-eye 3D display screen including a plurality of composite pixels, each of the plurality of composite pixels including a plurality of composite sub-pixels, each of the plurality of composite sub-pixels including a plurality of sub-pixels corresponding to a plurality of views of the 3D display device; a perspective determination device configured to determine a user perspective of a user; a 3D processing device configured to render respective sub-pixels of the plurality of composite sub-pixels in accordance with depth information of the 3D model based on a user perspective. The device can solve the problem of naked eye 3D display distortion. The application also discloses a 3D image display method.

Description

3D display device and 3D image display method
Technical Field
The present application relates to the field of 3D display technologies, and for example, to a 3D display device and a 3D image display method.
Background
Naked-eye 3D display technology is a research hotspot in imaging technology because it can present a vivid visual experience to users.
In the process of implementing the embodiments of the present disclosure, it was found that the related art has at least the following problem: users at different positions all see the same 3D image, so only users within a certain range perceive a realistic effect, while users outside that range perceive display distortion.
This background is only for convenience in understanding the relevant art in this field and is not to be taken as an admission of prior art.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delineate the scope of such embodiments; rather, it serves as a prelude to the more detailed description presented later.
The embodiment of the disclosure provides 3D display equipment and a 3D image display method, and aims to solve the technical problem of naked eye 3D display distortion.
In some embodiments, there is provided a 3D display device including: a multi-view naked-eye 3D display screen including a plurality of composite pixels, each of the plurality of composite pixels including a plurality of composite sub-pixels, each of the plurality of composite sub-pixels including a plurality of sub-pixels corresponding to a plurality of views of the 3D display device; a perspective determination device configured to determine a user perspective of a user; a 3D processing device configured to render respective sub-pixels of the plurality of composite sub-pixels in accordance with depth information of the 3D model based on a user perspective.
In some embodiments, the 3D processing device is configured to generate a 3D image from the depth information based on the user perspective and render corresponding sub-pixels from the 3D image.
In some embodiments, the 3D display device further comprises: an eye tracking device configured to determine a spatial position of a user's eye; the 3D processing device is configured to determine a viewpoint of eyes of the user based on the spatial position of the human eyes, and render sub-pixels corresponding to the viewpoint of the eyes based on the 3D image.
In some embodiments, a human eye tracking device includes: a human eye tracker configured to capture a user image of a user; an eye tracking image processor configured to determine an eye spatial position based on the user image; and an eye tracking data interface configured to transmit eye spatial position information indicative of a spatial position of the eye.
In some embodiments, an eye tracker includes: a first camera configured to capture a first image; and a second camera configured to capture a second image; wherein the eye-tracking image processor is configured to identify the presence of a human eye based on at least one of the first and second images and to determine a spatial position of the human eye based on the identified human eye.
In some embodiments, an eye tracker includes: a camera configured to capture an image; and a depth detector configured to acquire eye depth information of a user; wherein the eye-tracking image processor is configured to identify the presence of a human eye based on the image and to determine a spatial position of the human eye based on the identified position of the human eye and the eye depth information.
In some embodiments, the user viewing angle is an angle between the user and a display plane of the multi-view naked eye 3D display screen.
In some embodiments, the user viewing angle is the angle between the user's line of sight and the display plane of the multi-view naked-eye 3D display screen, wherein the user's line of sight is the line connecting the midpoint between the user's two eyes and the center of the multi-view naked-eye 3D display screen.
In some embodiments, the user perspective is: an angle between the user's sight line and at least one of the horizontal, vertical and depth directions of the display plane; or the angle between the user's gaze and the projection of the user's gaze in the display plane.
In some embodiments, the 3D display device further comprises: a 3D signal interface configured to receive a 3D model.
In some embodiments, a 3D image display method is provided, including: determining a user viewing angle of a user; and rendering, based on the user viewing angle, corresponding sub-pixels among the composite sub-pixels of the composite pixels in the multi-view naked-eye 3D display screen according to the depth information of a 3D model.
In some embodiments, rendering, based on the user viewing angle, corresponding sub-pixels among the composite sub-pixels of the composite pixels in the multi-view naked-eye 3D display screen according to the depth information of the 3D model includes: generating a 3D image according to the depth information based on the user viewing angle, and rendering the corresponding sub-pixels according to the 3D image.
In some embodiments, the 3D image display method further includes: determining a spatial position of the user's eyes; determining the viewpoints at which the user's eyes are located based on the spatial position of the eyes; and rendering the sub-pixels corresponding to the viewpoints of the eyes based on the 3D image.
In some embodiments, determining the spatial position of the user's eyes includes: capturing a user image of the user; determining the spatial position of the eyes based on the user image; and transmitting eye spatial position information indicating the spatial position of the eyes.
In some embodiments, capturing a user image of the user and determining the spatial position of the eyes based on the user image includes: capturing a first image; capturing a second image; identifying the presence of human eyes based on at least one of the first image and the second image; and determining the spatial position of the eyes based on the identified eyes.
In some embodiments, capturing a user image of the user and determining the spatial position of the eyes based on the user image includes: capturing an image; acquiring eye depth information of the user; identifying the presence of human eyes based on the image; and determining the spatial position of the eyes based on the identified eye position and the eye depth information.
In some embodiments, the user viewing angle is an angle between the user and a display plane of the multi-view naked eye 3D display screen.
In some embodiments, the user viewing angle is the angle between the user's line of sight and the display plane of the multi-view naked-eye 3D display screen, wherein the user's line of sight is the line connecting the midpoint between the user's two eyes and the center of the multi-view naked-eye 3D display screen.
In some embodiments, the user perspective is: an angle between the user's sight line and at least one of the horizontal, vertical and depth directions of the display plane; or the angle between the user's gaze and the projection of the user's gaze in the display plane.
In some embodiments, the 3D image display method further includes: a 3D model is received.
In some embodiments, there is provided a 3D display device including: a processor; and a memory storing program instructions; the processor is configured to perform the method as described above when executing the program instructions.
The 3D display device and the 3D image display method provided by the embodiment of the disclosure can realize the following technical effects:
A viewing-angle-based follow-up 3D display effect is provided to the user: users at different angles can see different 3D display pictures, and the display effect is realistic. The display effect at different angles is adjusted as the user's viewing angle changes, presenting a good visual effect to the user. In addition, the 3D display device that realizes this follow-up 3D display effect can adopt a multi-view naked-eye 3D display screen whose display resolution is defined in terms of composite pixels. Because the display resolution defined by composite pixels is what is considered during transmission and display, the amount of computation for transmission and rendering is reduced while a high-definition display effect is ensured, realizing high-quality naked-eye 3D display.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, which are not limiting and in which elements having the same reference numerals denote like elements, and wherein:
fig. 1A to 1C are schematic views of a 3D display device according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a human eye tracking device according to an embodiment of the present disclosure;
FIG. 3 is a geometric relationship model for determining the spatial position of a human eye using two cameras according to an embodiment of the present disclosure;
FIG. 4 is a schematic view of a human eye tracking device according to another embodiment of the present disclosure;
FIG. 5 is a geometric relational model for determining the spatial position of a human eye using a camera and a depth detector in accordance with an embodiment of the disclosure;
FIG. 6 is a schematic diagram of a user perspective according to an embodiment of the present disclosure;
FIG. 7 is a schematic illustration of a user perspective according to another embodiment of the present disclosure;
FIG. 8 is a schematic diagram of generating 3D images corresponding to different user perspectives, according to an embodiment of the present disclosure;
fig. 9A to 9E are schematic diagrams illustrating a correspondence relationship between a viewpoint and a sub-pixel according to an embodiment of the disclosure;
fig. 10 is a flowchart of a display method of a 3D display device according to an embodiment of the present disclosure; and
fig. 11 is a schematic diagram of a 3D display device according to an embodiment of the present disclosure.
Reference numerals:
100: a 3D display device; 110: a multi-view naked-eye 3D display screen; 120: a processor; 121: a register; 130: a 3D processing device; 131: a buffer; 140: a 3D signal interface; 150: a human eye tracking device; 151: a human eye tracker; 151a: a first camera; 151b: a second camera; 152: an eye tracking image processor; 153: an eye tracking data interface; 154: an infrared emitting device; 155: a camera; 156: a buffer; 157: a comparator; 158: a depth detector; 160: a viewing angle determining device; 300: a 3D display device; 310: a memory; 320: a processor; 330: a bus; 340: a communication interface; 400: a composite pixel; 410: a red composite sub-pixel; 420: a green composite sub-pixel; 430: a blue composite sub-pixel; 500: a composite pixel; 510: a red composite sub-pixel; 520: a green composite sub-pixel; 530: a blue composite sub-pixel; f: focal length; Za: optical axis of the first camera; Zb: optical axis of the second camera; 401a: focal plane of the first camera; 401b: focal plane of the second camera; Oa: lens center of the first camera; Ob: lens center of the second camera; XRa: X-axis coordinate of the user's right eye imaged in the focal plane of the first camera; XRb: X-axis coordinate of the user's right eye imaged in the focal plane of the second camera; XLa: X-axis coordinate of the user's left eye imaged in the focal plane of the first camera; XLb: X-axis coordinate of the user's left eye imaged in the focal plane of the second camera; T: distance between the first camera and the second camera; DR: distance between the right eye and the plane in which the first and second cameras lie; DL: distance between the left eye and the plane in which the first and second cameras lie; α: inclination angle between the line connecting the user's two eyes and the plane in which the first and second cameras lie; P: interocular distance or interpupillary distance of the user; Z: optical axis; FP: focal plane; XR: X-axis coordinate of the user's right eye imaged in the focal plane of the camera; XL: X-axis coordinate of the user's left eye imaged in the focal plane of the camera; O: lens center; MCP: camera plane; βR: inclination angle, relative to the X axis, of the projection in the XZ plane of the line connecting the right eye and the lens center; βL: inclination angle, relative to the X axis, of the projection in the XZ plane of the line connecting the left eye and the lens center; α: angle between the projection of the line connecting the user's eyes in the XZ plane and the X axis; P: interpupillary distance of the user's eyes.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings.
Herein, "naked-eye three-dimensional (or 3D) display" relates to a technique in which a user can observe a 3D image on a display without wearing glasses for 3D display.
In this context, "multi-view" has its conventional meaning in the art, meaning that different images displayed by different pixels or sub-pixels of the display screen can be viewed at different positions (viewpoints) in space. In this context, multi-view shall mean at least 3 views.
In this context, "grating" has a broad interpretation in the art, including but not limited to "parallax barrier" gratings and "lenticular" gratings, such as "lenticular" gratings.
Herein, "lens" or "lenticular" has the conventional meaning in the art, and includes, for example, cylindrical lenses and spherical lenses.
A conventional "pixel" means a 2D display or the smallest display unit in terms of its resolution when displayed as a 2D display.
However, in some embodiments herein, the term "composite pixel" when applied to multi-view technology in the field of naked eye 3D display refers to the smallest unit of display when a naked eye 3D display device provides multi-view display, but does not exclude that a single composite pixel for multi-view technology may comprise or appear as a plurality of 2D display pixels. Herein, unless specifically stated as a composite pixel or 3D pixel for "3D display" or "multi-view" applications, a pixel will refer to the smallest unit of display in 2D display. Also, when describing a "composite subpixel" for a multi-view naked eye 3D display, it will refer to a composite subpixel of a single color present in the composite pixel when the naked eye 3D display device provides multi-view display. Herein, a sub-pixel in a "composite sub-pixel" will refer to the smallest display unit of a single color, which tends to correspond to a viewpoint.
According to an embodiment of the disclosure, a 3D display device is provided, which includes a multi-view naked eye 3D display screen, a viewing angle determining device configured to determine a user viewing angle of a user, and a 3D processing device configured to render a corresponding sub-pixel in a composite pixel included in the multi-view naked eye 3D display screen based on the user viewing angle and according to depth information of a 3D model or a 3D video.
In some embodiments, the 3D processing device generates a 3D image based on the user viewing angle and from the depth information of the 3D model or 3D video, i.e., generates a 3D image corresponding to the user viewing angle. The correspondence between the user viewing angle and the generated 3D image is analogous to viewing a real scene: from different angles the user sees the representation of the scene that corresponds to each angle. Because the 3D images generated from the depth information of the 3D model or 3D video differ for different user viewing angles, the generated 3D images follow the user viewing angle and the user sees different 3D images at different viewing angles, so the user gets a feeling similar to watching a real object through the multi-view naked-eye 3D display screen, which improves the display effect and the user experience.
Fig. 1A shows a schematic diagram of a 3D display device 100 according to an embodiment of the present disclosure. As shown in fig. 1A, the 3D display device 100 includes a multi-view naked eye 3D display screen 110, a 3D processing device 130, a human eye tracking device 150, a viewing angle determining device 160, a 3D signal interface 140, and a processor 120.
In some embodiments, the multi-view naked-eye 3D display screen 110 may include a display panel and a grating (not shown) covering the display panel. The display panel may include m columns and n rows (m × n) of composite pixels 400 and thus define an m × n display resolution. The m × n display resolution may be, for example, a resolution of Full High Definition (FHD) or above, including but not limited to: 1920 × 1080, 1920 × 1200, 2048 × 1280, 2560 × 1440, 3840 × 2160, and the like. Each composite pixel comprises a plurality of composite sub-pixels, each composite sub-pixel comprising i same-color sub-pixels corresponding to i viewpoints, where i ≥ 3.
Fig. 1A schematically shows one composite pixel 400 of the m × n composite pixels, including a red composite sub-pixel 410 composed of i = 6 red sub-pixels R, a green composite sub-pixel 420 composed of i = 6 green sub-pixels G, and a blue composite sub-pixel 430 composed of i = 6 blue sub-pixels B. The 3D display device 100 accordingly has i = 6 viewpoints (V1 to V6). Other values of i greater or less than 6 are contemplated in other embodiments, such as 10, 30, 50, 100, etc.
In some embodiments, each composite pixel is square. The plurality of composite sub-pixels in each composite pixel may be arranged parallel to each other. The i sub-pixels in each composite sub-pixel may be arranged in rows.
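As a concrete illustration of this composite-pixel structure, the following is a minimal Python sketch of a screen built from composite pixels, each holding red, green and blue composite sub-pixels with one sub-pixel per viewpoint; the class and variable names are illustrative assumptions, not terms defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List

NUM_VIEWPOINTS = 6  # i = 6 viewpoints (V1 to V6), matching the example of Fig. 1A


@dataclass
class CompositeSubPixel:
    """A single-color composite sub-pixel: one same-color sub-pixel per viewpoint."""
    color: str  # 'R', 'G' or 'B'
    subpixels: List[float] = field(default_factory=lambda: [0.0] * NUM_VIEWPOINTS)


@dataclass
class CompositePixel:
    """A composite pixel made of red, green and blue composite sub-pixels."""
    red: CompositeSubPixel = field(default_factory=lambda: CompositeSubPixel('R'))
    green: CompositeSubPixel = field(default_factory=lambda: CompositeSubPixel('G'))
    blue: CompositeSubPixel = field(default_factory=lambda: CompositeSubPixel('B'))


# An m x n screen is a grid of composite pixels; a real panel would use e.g.
# 1920 x 1080 composite pixels, but a small grid is used here for illustration.
m, n = 8, 4
screen = [[CompositePixel() for _ in range(m)] for _ in range(n)]

# Lighting up the sub-pixel of the red composite sub-pixel that corresponds to
# viewpoint V3 in the composite pixel at row 0, column 0:
screen[0][0].red.subpixels[2] = 1.0
```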
In some embodiments, the 3D processing device is an FPGA or ASIC chip or an FPGA or ASIC chipset. In some embodiments, the 3D display device 100 may also be provided with more than one 3D processing device 130, which process the rendering of the sub-pixels of each composite sub-pixel of each composite pixel of the naked-eye 3D display screen 110 in parallel, in series, or in a combination of series and parallel. Those skilled in the art will appreciate that more than one 3D processing device may be allocated in other ways to process rows and columns of composite pixels or composite sub-pixels of the naked-eye 3D display screen 110 in parallel, which also falls within the scope of the embodiments of the present disclosure. As in the embodiment shown in fig. 1A, the 3D processing device 130 may further optionally include a buffer 131 to buffer received images of the 3D video.
In some embodiments, the processor is included in a computer or a smart terminal, such as a mobile terminal. Alternatively, the processor may be a processor unit of a computer or an intelligent terminal. It is contemplated that in some embodiments, the processor 120 may be disposed external to the 3D display device 100, for example, the 3D display device 100 may be a multi-view naked-eye 3D display with 3D processing means, such as a non-smart naked-eye 3D television.
In some embodiments, the 3D display device includes a processor internally. Based on this, the 3D signal interface 140 is an internal interface connecting the processor 120 and the 3D processing device 130. Such a 3D display device 100 may be, for example, a mobile terminal, and the 3D signal interface 140 may be a MIPI interface, a mini-MIPI interface, an LVDS interface, a mini-LVDS interface, or a DisplayPort interface.
As shown in fig. 1A, the processor 120 of the 3D display device 100 may further include a register 121. The register 121 may be configured to temporarily store instructions, data, and addresses. In some embodiments, the register 121 may be configured to receive information about display requirements of the multi-view naked-eye 3D display screen 110. In some embodiments, the 3D display device 100 may further include a codec configured to decompress and codec the compressed 3D video signal and transmit the decompressed 3D video signal to the 3D processing apparatus 130 via the 3D signal interface 140.
In some embodiments, the 3D display device 100 may include a human eye tracking device configured to acquire or determine eye tracking data. For example, in the embodiment shown in fig. 1B, the 3D display device 100 includes a human eye tracking device 150 communicatively connected to the 3D processing device 130, so that the 3D processing device 130 can directly receive eye tracking data. In some embodiments, the eye tracking device 150 may be connected to both the processor 120 and the 3D processing device 130, so that, on the one hand, the 3D processing device 130 can obtain eye tracking data directly from the eye tracking device 150 and, on the other hand, other information obtained by the eye tracking device 150 can be processed by the 3D processing device 130 via the processor 120.
In some embodiments, the eye tracking data includes eye spatial position information indicating the spatial position of the user's eyes. The eye spatial position information may be expressed in three-dimensional coordinates, for example including the distance between the user's eyes or face and the multi-view naked-eye 3D display screen or the eye tracking device (i.e., the depth information of the user's eyes or face), the position of the user's eyes or face in the lateral direction of the multi-view naked-eye 3D display screen or the eye tracking device, and the position of the user's eyes or face in the vertical direction of the multi-view naked-eye 3D display screen or the eye tracking device. The eye spatial position may also be expressed in two-dimensional coordinates containing any two of the distance information, the lateral position information, and the vertical position information. The eye tracking data may also include the viewpoints (viewpoint positions) at which the user's eyes (e.g., both eyes) are located, the user viewing angle, and the like.
In some embodiments, an eye tracking device includes an eye tracker configured to capture an image of a user (e.g., an image of a face of the user), an eye tracking image processor configured to determine a spatial position of an eye based on the captured image of the user, and an eye tracking data interface configured to transmit eye spatial position information indicative of the spatial position of the eye.
In some embodiments, the eye tracker includes a first camera configured to capture a first image and a second camera configured to capture a second image, and the eye tracking image processor is configured to identify the presence of a human eye based on at least one of the first image and the second image and to determine a spatial position of the human eye based on the identified human eye.
Fig. 2 shows an example in which the eye tracker in the eye tracking device is configured with two cameras. As shown, the eye tracking device 150 includes an eye tracker 151, an eye tracking image processor 152, and an eye tracking data interface 153. The eye tracker 151 includes a first camera 151a, for example a black-and-white camera, and a second camera 151b, for example a black-and-white camera. The first camera 151a is configured to capture a first image, for example a black-and-white image, and the second camera 151b is configured to capture a second image, for example a black-and-white image. The eye tracking device 150 may be arranged at the front of the 3D display device 100, for example at the front of the multi-view naked-eye 3D display screen 110. The subject photographed by the first camera 151a and the second camera 151b may be the user's face. In some embodiments, at least one of the first camera and the second camera may be a color camera configured to capture color images.
In some embodiments, the eye tracking data interface 153 of the eye tracking apparatus 150 is communicatively connected to the 3D processing apparatus 130 of the 3D display device 100, so that the 3D processing apparatus 130 can directly receive the eye tracking data. In other embodiments, the eye-tracking image processor 152 of the eye-tracking device 150 may be communicatively connected to or integrated with the processor 120, whereby eye-tracking data may be transmitted from the processor 120 to the 3D processing device 130 through the eye-tracking data interface 153.
Optionally, the eye tracker 151 is further provided with an infrared emitting device 154. When the first camera or the second camera is operating, the infrared emitting device 154 is configured to selectively emit infrared light to supplement illumination when ambient light is insufficient, for example when shooting at night, so that first and second images in which the user's face and eyes can be identified can still be captured under weak ambient light.
In some embodiments, the display device may be configured to, when the first camera or the second camera is operating, turn on the infrared emitting device or adjust its emission intensity based on the received light-sensing signal, for example when the light-sensing signal is detected to be below a predetermined threshold. In some embodiments, the light-sensing signal is received by an ambient light sensor integrated in the processing terminal or the display device. The above operation of the infrared emitting device may also be performed by the eye tracking device or by a processing terminal integrated with the eye tracking device.
Optionally, the infrared emitting device 154 is configured to emit infrared light with a wavelength greater than or equal to 1.5 micrometers, i.e., long-wavelength infrared light. Long wave infrared light is less able to penetrate the skin and thus less harmful to the human eye than short wave infrared light.
The captured first image and second image are transmitted to the eye tracking image processor 152. The eye tracking image processor 152 may be configured with a visual recognition function (e.g., a face recognition function) and may be configured to recognize human eyes based on at least one of the first image and the second image and to determine the spatial position of the eyes based on the recognized eyes. For example, a human face may first be recognized based on at least one of the first image and the second image, and the eyes then recognized within the recognized face.
In some embodiments, the eye-tracking image processor 152 may determine the viewpoint at which the user's eyes are located based on the spatial position of the eyes. In other embodiments, the viewpoint of the user's eyes is determined by the 3D processing device 130 based on the acquired spatial position of the human eyes.
In some embodiments, the first camera and the second camera may be the same camera, e.g., the same black and white camera, or the same color camera. In other embodiments, the first camera and the second camera may be different cameras, such as different black and white cameras, or different color cameras. In case the first camera and the second camera are different cameras, the first image and the second image may be calibrated or rectified in order to determine the spatial position of the human eye.
In some embodiments, at least one of the first camera and the second camera is a wide-angle camera.
Fig. 3 schematically shows a geometric relationship model for determining the spatial position of the human eye using two cameras. In the embodiment shown in fig. 3, the first camera and the second camera are the same camera and therefore have the same focal length f. The optical axis Za of the first camera 151a is parallel to the optical axis Zb of the second camera 151b, while the focal plane 401a of the first camera 151a and the focal plane 401b of the second camera 151b are in the same plane and perpendicular to the optical axes of the two cameras. Based on the above arrangement, the line connecting the lens centers Oa and Ob of the two cameras is parallel to the focal planes of the two cameras. In the embodiment shown in fig. 3, the geometric relationship model of the XZ plane is shown with the direction of the line connecting the lens centers Oa to Ob of the two cameras as the X-axis direction and with the optical axis directions of the two cameras as the Z-axis direction. In some embodiments, the X-axis direction is also a horizontal direction, the Y-axis direction is also a vertical direction, and the Z-axis direction is a direction perpendicular to the XY plane (may also be referred to as a depth direction).
In the embodiment shown in fig. 3, the lens center Oa of the first camera 151a is the origin of its coordinate system, and the lens center Ob of the second camera 151b is the origin of its coordinate system. R and L represent the user's right and left eyes, respectively; XRa and XRb are the X-axis coordinates at which the user's right eye R is imaged in the focal planes 401a and 401b of the two cameras, respectively; and XLa and XLb are the X-axis coordinates at which the user's left eye L is imaged in the focal planes 401a and 401b of the two cameras, respectively. In addition, the spacing T between the two cameras and their focal length f are known. According to the geometric relationship of similar triangles, the distances DR and DL between the right eye R and the left eye L, respectively, and the plane in which the two cameras lie are:
DR = T × f / (XRa − XRb)
DL = T × f / (XLa − XLb)
The X-axis coordinates of the left and right eyes in the coordinate system of the first camera can then be recovered as XL′ = XLa × DL / f and XR′ = XRa × DR / f, and the inclination angle α between the line connecting the user's two eyes and the plane in which the two cameras lie, together with the interocular distance or interpupillary distance P of the user, are respectively:
α = arctan((DL − DR) / (XL′ − XR′))
P = √((XL′ − XR′)² + (DL − DR)²)
in the embodiment shown in fig. 3, the line connecting the eyes of the user (or the face of the user) and the plane where the two cameras are arranged are mutually inclined and the inclination angle is alpha. When the face of the user and the plane on which the two cameras are arranged are parallel to each other (i.e., when the user looks straight at the two cameras), the tilt angle α is zero.
In some embodiments, the 3D display device 100 may be a computer or an intelligent terminal, such as a mobile terminal. It is also contemplated that in some embodiments the 3D display device 100 may be a non-intelligent display terminal, such as a non-intelligent naked-eye 3D television. In some embodiments, the eye tracking device 150, which includes the two cameras 151a and 151b, is placed in front of, or substantially in the same plane as, the display plane of the multi-view naked-eye 3D display screen. Therefore, the distances DR and DL obtained by way of example in the embodiment shown in fig. 3, between the user's right eye R and left eye L and the plane in which the two cameras lie, are the distances between the user's right eye R and left eye L and the multi-view naked-eye 3D display screen (i.e., the depths of the user's right and left eyes), and the inclination angle α between the user's face and the plane in which the two cameras lie is the inclination angle between the user's face and the multi-view naked-eye 3D display screen.
In some embodiments, eye-tracking data interface 153 is configured to transmit the tilt angle or parallelism of the user's eyes relative to eye-tracking device 150 or multi-view autostereoscopic display screen 110. This may facilitate more accurate rendering of the 3D image.
In some embodiments, the eye spatial position information DR, DL, α, and P exemplarily derived as above is transmitted to the 3D processing device 130 through the eye-tracking data interface 153. The 3D processing device 130 determines the viewpoint of the eyes of the user based on the received spatial position information of the eyes. In some embodiments, the 3D processing device 130 may store a correspondence table between the spatial position of the human eye and the viewpoint of the 3D display apparatus in advance. After the spatial position information of the eyes is obtained, the viewpoint of the eyes of the user can be determined based on the corresponding relation table. Alternatively, the correspondence table may be received/read by the 3D processing device from another component (e.g., a processor) having a memory function.
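A minimal sketch of such a correspondence lookup follows; the table values and the purely lateral (one-dimensional) mapping are invented for illustration, since the patent leaves the contents of the correspondence table open and, in practice, the viewing zones also depend on the eye depth.

```python
import bisect

# Hypothetical correspondence table: lateral positions (in mm, in the screen
# coordinate system) separating the viewing zones of viewpoints V1 to V6 at one
# fixed viewing distance. The values are invented for illustration only.
VIEWPOINT_BOUNDARIES_MM = [-150.0, -90.0, -30.0, 30.0, 90.0, 150.0]


def viewpoint_for_eye(x_mm: float) -> int:
    """Map an eye's lateral position to one of the i = 6 viewpoints (1-based)."""
    idx = bisect.bisect_left(VIEWPOINT_BOUNDARIES_MM, x_mm)
    return min(max(idx, 1), 6)


left_viewpoint = viewpoint_for_eye(-40.0)   # -> viewpoint 2 with this toy table
right_viewpoint = viewpoint_for_eye(20.0)   # -> viewpoint 3 with this toy table
```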
In some embodiments, the human eye spatial position information DR, DL, α and P exemplarily derived as above may also be directly transmitted to the processor of the 3D display device 100, and the 3D processing apparatus 130 receives/reads the human eye spatial position information from the processor through the human eye tracking data interface 153.
In some embodiments, the first camera 151a is configured to capture a first image sequence including a plurality of first images arranged temporally one behind the other, and the second camera 151b is configured to capture a second image sequence including a plurality of second images arranged temporally one behind the other. The eye-tracking image processor 152 may include a synchronizer 155. The synchronizer 155 is configured to determine time-synchronized first and second images in the first and second image sequences. The first and second images determined to be time-synchronized are used for the recognition of the human eye and for the determination of the spatial position of the human eye.
In some embodiments, the eye tracking image processor 152 includes a buffer 156 and a comparator 157. The buffer 156 is configured to buffer the first image sequence and the second image sequence. The comparator 157 is configured to compare the plurality of first images and second images in the first and second image sequences. By comparison it can be judged, for example, whether the spatial position of the eyes has changed and whether the eyes are still within the viewing range. The judgment of whether the eyes are still within the viewing range may also be performed by the 3D processing device.
In some embodiments, the eye tracking image processor 152 is configured such that, when the presence of human eyes is not recognized in the current first and second images of the first and second image sequences but is recognized in previous or subsequent first and second images, the eye spatial position information determined from those previous or subsequent first and second images is taken as the current eye spatial position information. This situation may occur, for example, when the user briefly turns the head; in this case the user's face and eyes may be momentarily unrecognizable.
In some embodiments, the eye spatial position information determined based on the previous and subsequent first and second images capable of identifying the human face and the human eyes may be averaged, subjected to data fitting, interpolated or otherwise processed, and the obtained result may be used as the current eye spatial position information.
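One possible realization of this fallback is sketched below; averaging the two most recent valid detections is only one of the options the text lists (averaging, data fitting, interpolation), so the choice here is an assumption.

```python
def current_eye_position(history, detected):
    """Return eye spatial position data for the current frame, falling back to
    earlier detections when the face is briefly lost (e.g. the user turns the head).

    history:  list of (DR, DL, alpha, P) tuples or None, oldest first.
    detected: result for the current frame, or None if no eyes were recognized.
    """
    if detected is not None:
        return detected
    known = [h for h in history if h is not None]
    if not known:
        return None
    if len(known) == 1:
        return known[-1]
    # simple average of the two most recent valid detections
    a, b = known[-2], known[-1]
    return tuple((x + y) / 2 for x, y in zip(a, b))
```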
In some embodiments, the first camera and the second camera are configured to capture the first sequence of images and the second sequence of images at a frequency of 24 frames/second or more, such as at a frequency of 30 frames/second, or such as at a frequency of 60 frames/second.
In some embodiments, the first camera and the second camera are configured to take pictures at the same frequency as a refresh frequency of a multi-view naked eye 3D display screen of the 3D display device.
In some embodiments, the eye tracker includes at least one camera configured to capture at least one image and a depth detector configured to obtain eye depth information of the user, and the eye tracking image processor is configured to identify the presence of a human eye based on the captured at least one image and to determine a spatial position of the human eye based on the identified human eye and the eye depth information.
Fig. 4 shows an example in which the human eye tracker in the human eye tracking apparatus is configured with a single camera and a depth detector. As shown, the eye tracking device 150 includes a eye tracker 151, an eye tracking image processor 152, and an eye tracking data interface 153. Eye tracker 151 includes a camera 155, such as a black and white camera, and a depth detector 158. The camera 155 is configured to take at least one image, for example, a black and white image, and the depth detector 158 is configured to acquire eye depth information of the user. The eye tracking apparatus 150 may be placed in front of the 3D display device 100, for example in front of the multi-view naked eye 3D display screen 110. The subject of the camera 155 is the face of the user, and a human face or eyes are recognized based on the captured image. The depth detector acquires eye depth information, and may also acquire face depth information and acquire eye depth information based on the face depth information. In some embodiments, camera 155 may be a color camera and configured to capture color images. In some embodiments, two or more cameras 155 may also be employed in conjunction with depth detector 158 to determine the spatial position of the human eye.
In some embodiments, the eye tracking data interface 153 of the eye tracking apparatus 150 is communicatively connected to the 3D processing apparatus 130 of the 3D display device 100, so that the 3D processing apparatus 130 can directly receive the eye tracking data. In other embodiments, the eye-tracking image processor 152 may be communicatively connected to or integrated with the processor 120 of the 3D display device 100, whereby eye-tracking data may be transmitted from the processor 120 to the 3D processing apparatus 130 through the eye-tracking data interface 153.
Optionally, the eye tracker 151 is further provided with an infrared emitting device 154. When the camera 155 is in operation, the infrared emitting device 154 is configured to selectively emit infrared light to supplement light when ambient light is insufficient, for example, when shooting at night, so that an image that can identify the face and eyes of the user can be shot under the condition that the ambient light is weak.
In some embodiments, the display device may be configured to control the infrared emitting device to turn on or adjust its size based on the received light sensing signal when the camera is in operation, for example, when the light sensing signal is detected to be lower than a predetermined threshold. In some embodiments, the light sensing signal is received by an ambient light sensor integrated with the processing terminal or the display device. The above-mentioned operation for the infrared emitting device may also be performed by a human eye tracking device or a processing terminal integrated with the human eye tracking device.
Optionally, the infrared emitting device 154 is configured to emit infrared light with a wavelength greater than or equal to 1.5 micrometers, i.e., long-wavelength infrared light. Long wave infrared light is less able to penetrate the skin and thus less harmful to the human eye than short wave infrared light.
The photographed image is transmitted to the eye-tracking image processor 152. The eye-tracking image processor may be configured to have a visual recognition function (e.g., a face recognition function), and may be configured to recognize a face of a person based on the captured image and determine a spatial position of the eye of the person based on the recognized eye position and eye depth information of the user, and determine a viewpoint at which the eyes of the user are located based on the spatial position of the eye of the person. In other embodiments, the 3D processing device determines a viewpoint at which the eyes of the user are located based on the acquired spatial positions of the eyes. In some embodiments, the camera is a wide-angle camera. In some embodiments, the depth detector 158 is configured as a structured light camera or a TOF camera.
Fig. 5 schematically shows a geometric relationship model for determining the spatial position of the human eyes using a camera and a depth detector. In the embodiment shown in FIG. 5, the camera 155 has a focal length f, an optical axis Z and a focal plane FP; R and L represent the user's right and left eyes, respectively; and XR and XL are the X-axis coordinates at which the user's right eye R and left eye L are imaged in the focal plane FP of the camera 155.
By way of explanation and not limitation, from an image including the user's left and right eyes captured by the camera 155, the X-axis (horizontal) and Y-axis (vertical) coordinates at which the left and right eyes are imaged in the focal plane FP of the camera 155 can be known. As shown in fig. 5, with the lens center O of the camera 155 as the origin, the X axis and a Y axis (not shown) perpendicular to the X axis form a camera plane MCP, which is parallel to the focal plane FP. The optical axis direction Z of the camera 155 is also the depth direction. That is, in the XZ plane shown in fig. 5, the X-axis coordinates XR and XL at which the left and right eyes are imaged in the focal plane FP are known, and the focal length f of the camera 155 is known. In this case, the inclination angles βR and βL, with respect to the X axis, of the projections in the XZ plane of the lines connecting the right and left eyes to the lens center O can be calculated. Similarly, in the YZ plane (not shown), the Y-axis coordinates at which the left and right eyes are imaged in the focal plane FP are known, and, combined with the known focal length f, the inclination angles, with respect to the Y axis of the camera plane MCP, of the projections in the YZ plane of the lines connecting the left and right eyes to the lens center O can be calculated.
By way of explanation and not limitation, the spatial coordinates (X, Y, Z) of the left and right eyes within the coordinate system of the camera 155 are known from the images taken by the camera 155 including the left and right eyes of the user and the depth information of the left and right eyes acquired by the depth detector 158, wherein the Z-axis coordinate is the depth information. Accordingly, as shown in fig. 5, an angle α between the projection of the line connecting the left eye and the right eye in the XZ plane and the X axis can be calculated. Similarly, in a YZ plane (not shown), an angle between a projection of a line connecting the left and right eyes in the YZ plane and the Y axis can be calculated.
As shown in FIG. 5, knowing the focal length f of the camera 155 and the X-axis coordinates XR and XL of the two eyes in the focal plane FP, the inclination angles βR and βL, with respect to the X axis, of the projections in the XZ plane of the lines connecting the user's right eye R and left eye L to the lens center O can be derived respectively as:
βR = arctan(f / XR)
βL = arctan(f / XL)
On this basis, the distances DR and DL of the user's right eye R and left eye L from the camera plane MCP (i.e., from the display plane of the multi-view naked-eye 3D display screen) can be obtained from the depth information of the right eye R and the left eye L acquired by the depth detector 158, and the X-axis coordinates of the eyes follow as XR′ = DR / tan βR and XL′ = DL / tan βL. Accordingly, the angle α between the projection of the line connecting the user's eyes in the XZ plane and the X axis, and the interpupillary distance P, are respectively:
α = arctan((DL − DR) / (XL′ − XR′))
P = √((XL′ − XR′)² + (DL − DR)²)
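The single-camera-plus-depth-detector geometry can be sketched in the same way; the sign conventions and the restriction to the XZ plane are again assumptions made for illustration.

```python
import math


def eyes_from_depth(x_r, x_l, DR, DL, f):
    """Follow the geometry of Fig. 5: one camera plus a depth detector
    (e.g. a structured-light or TOF camera) supplying the eye depths.

    x_r, x_l: X coordinates of the right/left eye imaged in the focal plane FP.
    DR, DL:   depths of the right/left eye reported by the depth detector.
    f:        focal length of the camera.
    """
    # inclination of each eye ray's XZ projection with respect to the X axis
    beta_r = math.atan2(f, x_r)
    beta_l = math.atan2(f, x_l)

    # X coordinates of the eyes recovered from the rays and the depths
    XR = DR / math.tan(beta_r)  # equivalently DR * x_r / f
    XL = DL / math.tan(beta_l)

    alpha = math.atan2(DL - DR, XL - XR)  # angle of the eye line's XZ projection vs. the X axis
    P = math.hypot(XL - XR, DL - DR)      # interpupillary distance (XZ-plane projection)
    return beta_r, beta_l, alpha, P
```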
the above calculation methods and mathematical representations are merely illustrative and other calculation methods and mathematical representations may be contemplated by those skilled in the art to obtain the desired spatial position of the human eye. The skilled person may also think of transforming the coordinate system of the camera with the coordinate system of the display device or the multi-view naked eye 3D display screen if necessary.
In some embodiments, when the distances DR and DL are unequal and the included angle α is not zero, the user may be considered to view the display plane of the multi-view naked-eye 3D display screen obliquely. When the distances DR and DL are equal and the included angle α is zero, the user may be considered to view the display plane of the multi-view naked-eye 3D display screen head-on. In other embodiments, a threshold may be set for the included angle α; when α does not exceed the threshold, the user may be considered to view the display plane of the multi-view naked-eye 3D display screen head-on.
In some embodiments, based on the recognized eyes or the determined spatial position of the eyes, the user viewing angle can be obtained, and a 3D image corresponding to that viewing angle is generated from the 3D model or the 3D video containing depth information, so that the 3D effect displayed according to the 3D image follows the user, giving the user an experience as if viewing a real object or scene from the corresponding angle.
In some embodiments, the user perspective is the angle the user makes with respect to the camera.
In some embodiments, the user viewing angle may be the angle, in the camera coordinate system, of the line connecting the user's eye (single eye) to the lens center O of the camera. In some embodiments, the angle is, for example, the angle θX between this line and the X axis (lateral direction) of the camera coordinate system, or the angle θY between this line and the Y axis (vertical direction), or is denoted θ(X,Y). In some embodiments, the angle is, for example, the angle between the line and its projection in the XY plane of the camera coordinate system. In some embodiments, the angle is, for example, the angle θX between the projection of the line in the XY plane of the camera coordinate system and the X axis, or the angle θY between this projection and the Y axis, or is denoted θ(X,Y).
In some embodiments, the user viewing angle may be the angle, in the camera coordinate system, of the line connecting the midpoint between the user's two eyes to the lens center O of the camera (i.e., the user's line of sight). In some embodiments, the angle is, for example, the angle θX between the user's line of sight and the X axis (lateral direction) of the camera coordinate system, or the angle θY between it and the Y axis (vertical direction), or is denoted θ(X,Y). In some embodiments, the angle is, for example, the angle between the user's line of sight and its projection in the XY plane of the camera coordinate system. In some embodiments, the angle is, for example, the angle θX between the projection of the user's line of sight in the XY plane of the camera coordinate system and the X axis (lateral direction), or the angle θY between this projection and the Y axis (vertical direction), or is denoted θ(X,Y).
In some embodiments, the user viewing angle may be the angle, in the camera coordinate system, of the line connecting the user's two eyes. In some embodiments, the angle is, for example, the angle θX between the binocular line and the X axis of the camera coordinate system, or the angle θY between it and the Y axis, or is denoted θ(X,Y). In some embodiments, the angle is, for example, the angle between the binocular line and its projection in the XY plane of the camera coordinate system. In some embodiments, the angle is, for example, the angle θX between the projection of the binocular line in the XY plane of the camera coordinate system and the X axis, or the angle θY between this projection and the Y axis, or is denoted θ(X,Y).
In some embodiments, the user viewing angle may be an angle of a plane in which a face of the user is located with respect to a camera coordinate system. In some embodiments, the included angle is, for example, the included angle between the plane of the human face and the XY plane in the camera coordinate system. The plane where the face is located can be determined by extracting a plurality of face features, and the face features can be forehead, eyes, ears, mouth corners, chin and the like.
In some embodiments, the user viewing angle may be an angle of the user with respect to a display plane of the multi-view naked eye 3D display screen or the multi-view naked eye 3D display screen. A coordinate system of the multi-view naked-eye 3D display screen or the display plane is defined herein, in which a center of the multi-view naked-eye 3D display screen or a center o of the display plane is taken as an origin, a horizontal direction (lateral direction) straight line is taken as an x-axis, a vertical direction straight line is taken as a y-axis, and a straight line perpendicular to the xy-plane is taken as a z-axis (depth direction).
In some embodiments, the user viewing angle may be an angle between a connecting line of the user's eyes (single eye) and the center o of the multi-view naked eye 3D display screen or display plane with respect to a coordinate system of the multi-view naked eye 3D display screen or display plane. In some embodiments, the angle is, for example, θ, the angle between the line and the x-axis in the coordinate systemxOr the angle theta between the connecting line and the y-axis in the coordinate systemyOr at θ(x,y)And (4) showing. In some embodiments, the angle is, for example, an angle between a projection of the connection line in the xy-plane of the coordinate system and the connection line. In some embodiments, the angle is, for example, the angle θ between the projection of the connecting line in the xy-plane of the coordinate system and the x-axisxOr the included angle theta between the projection of the connecting line in the xy plane of the coordinate system and the y axisyOr at θ(x,y)And (4) showing.
In some embodiments, the user viewing angle may be the angle, relative to the coordinate system of the multi-view naked eye 3D display screen or display plane, of the line connecting the midpoint of the line between the user's two eyes and the center o of the multi-view naked eye 3D display screen or display plane (i.e., the user's line of sight). In some embodiments, as shown in FIG. 6, the angle is, for example, the angle θ_x between the user's line of sight and the x-axis of the coordinate system, or the angle θ_y between the user's line of sight and the y-axis of the coordinate system, or is denoted by θ_(x,y); in the figure, R denotes the user's right eye and L denotes the user's left eye. In some embodiments, as shown in FIG. 7, the angle is, for example, the angle θ_k between the projection k of the user's line of sight in the xy plane of the coordinate system and the user's line of sight. In some embodiments, the angle is, for example, the angle θ_x between the projection of the user's line of sight in the xy plane of the coordinate system and the x-axis, or the angle θ_y between that projection and the y-axis, or is denoted by θ_(x,y).
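By way of illustration only, the sketch below computes such angles from the two eye positions, assuming the eye tracking data are already expressed in the display-plane coordinate system defined above (origin o at the screen center, x lateral, y vertical, z depth); the function name and the example values are hypothetical and not part of the disclosure.

```python
import numpy as np

def user_view_angles(left_eye, right_eye):
    """Sketch: angles of the user's line of sight (midpoint of the two eyes
    to the screen center o, i.e. the origin) in the display coordinate system."""
    mid = (np.asarray(left_eye, dtype=float) + np.asarray(right_eye, dtype=float)) / 2.0
    sight = -mid                        # vector from the eye midpoint to the origin o
    norm = np.linalg.norm(sight)
    theta_x = np.degrees(np.arccos(abs(sight[0]) / norm))   # angle to the x-axis
    theta_y = np.degrees(np.arccos(abs(sight[1]) / norm))   # angle to the y-axis
    # angle between the line of sight and its projection k in the xy plane
    theta_k = np.degrees(np.arcsin(abs(sight[2]) / norm))
    return theta_x, theta_y, theta_k

# e.g. eyes ~60 mm apart, 0.5 m in front of the screen, slightly off to one side
print(user_view_angles((0.10, 0.0, 0.5), (0.16, 0.0, 0.5)))   # roughly (75.4, 90.0, 75.4)
```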
In some embodiments, the user viewing angle may be the angle of the line connecting the user's two eyes relative to the coordinate system of the multi-view naked eye 3D display screen or display plane. In some embodiments, the angle is, for example, the angle θ_x between this connecting line and the x-axis of the coordinate system, or the angle θ_y between this connecting line and the y-axis of the coordinate system, or is denoted by θ_(x,y). In some embodiments, the angle is, for example, the angle between the connecting line and its projection in the xy plane of the coordinate system. In some embodiments, the angle is, for example, the angle θ_x between the projection of the connecting line in the xy plane of the coordinate system and the x-axis, or the angle θ_y between that projection and the y-axis, or is denoted by θ_(x,y).
In some embodiments, the user viewing angle may be the angle of the plane in which the user's face lies relative to the coordinate system of the multi-view naked eye 3D display screen or display plane. In some embodiments, the angle is, for example, the angle between the plane of the face and the xy plane of the coordinate system. The plane of the face can be determined by extracting several facial features, such as the forehead, eyes, ears, mouth corners, chin and the like.
In some embodiments, the camera is arranged at the front of the multi-view naked eye 3D display screen. In this case, the camera coordinate system may be regarded as the coordinate system of the multi-view naked eye 3D display screen or display plane.
To determine the user viewing angle (also referred to herein as the user perspective), the 3D display device may be provided with a perspective determination device. The perspective determination device may be implemented in software, for example as a computing module or program instructions, or in hardware. The perspective determination device may be integrated in the 3D processing device or in the eye tracking device, or it may transmit the user perspective data to the 3D processing device.
In the embodiment illustrated in fig. 1A, the perspective determination device 160 is communicatively connected to the 3D processing device 130. The 3D processing device may receive the user perspective data, generate a 3D image corresponding to the user perspective based on that data, and, based on the viewpoints at which the user's eyes (e.g., both eyes) are located as determined from the eye tracking data, render the viewpoint-dependent sub-pixels of the composite sub-pixels according to the generated 3D image. In some embodiments, as shown in fig. 1B, the 3D processing device may receive the eye spatial position information determined by the eye tracking device 150 and the user perspective data determined by the perspective determination device 160. In some embodiments, as shown in fig. 1C, the perspective determination device 160 may be integrated in the eye tracking device 150, for example in the eye tracking image processor 152; the eye tracking device 150 is then communicatively connected to the 3D processing device and transmits eye tracking data, including the user perspective data and the eye spatial position information, to the 3D processing device. In other embodiments, the perspective determination device may be integrated in the 3D processing device, which receives the eye spatial position information and determines the user perspective data based on it. In some embodiments, the eye tracking device is communicatively connected to both the 3D processing device and the perspective determination device and sends the eye spatial position information to both; the perspective determination device determines the user perspective data based on the eye spatial position information and sends the user perspective data to the 3D processing device.
After the 3D processing device receives or determines the user perspective data, it can generate, based on that data, a 3D image corresponding to the perspective from the received 3D model or from the 3D video that includes depth information. In this way, 3D images reflecting different depth information, and the images rendered from them, are presented to the user at different user perspectives, giving the user a visual experience similar to viewing a real object from different angles.
Fig. 8 schematically shows different 3D images generated from the same 3D model for different user perspectives. As shown in fig. 8, the 3D processing device receives the 3D model 600 carrying depth information and also receives or determines a plurality of different user perspectives. The 3D processing device generates a different 3D image 601, 602 from the 3D model 600 for each user perspective. In the figure, R denotes the user's right eye and L denotes the user's left eye. The sub-pixels corresponding to the relevant viewpoints are rendered according to the different 3D images 601 and 602 generated from the depth information for the different user perspectives, where the relevant viewpoints are the viewpoints, determined from the eye tracking data, at which the user's eyes are located. For the user, the resulting 3D display effect follows the user perspective. Depending on how the user perspective changes, this follow-up effect may be, for example, a follow-up in the horizontal direction, in the vertical direction, in the depth direction, or in components along the horizontal, vertical and depth directions.
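One possible way to turn the user perspective into a per-perspective view of the 3D model is sketched below; the mapping of θ_x/θ_y to a camera rotation and the orthographic projection are simplifying assumptions made for this example only and are not prescribed by the disclosure.

```python
import numpy as np

def view_rotation(theta_x_deg, theta_y_deg):
    """Sketch: turn the user perspective into a virtual-camera rotation.
    Assumption for this example: theta_x/theta_y are the angles between the
    user's line of sight and the x-/y-axes, so 90/90 means viewing head-on."""
    yaw = np.radians(90.0 - theta_x_deg)    # horizontal deviation from head-on
    pitch = np.radians(90.0 - theta_y_deg)  # vertical deviation from head-on
    ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    return ry @ rx

def render_view(vertices, theta_x_deg, theta_y_deg):
    """Rotate the model into the per-perspective view and project it
    (a stand-in for generating the depth-aware 3D images 601/602 of FIG. 8)."""
    rotated = np.asarray(vertices, dtype=float) @ view_rotation(theta_x_deg, theta_y_deg).T
    return rotated[:, :2]   # simple orthographic projection for illustration

cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)])
print(render_view(cube, theta_x_deg=75.0, theta_y_deg=90.0))   # user off to one side
```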
The multiple different user perspectives may be generated based on multiple users or based on the motion or action of the same user.
In some embodiments, the user perspective is detected and determined in real time. In some embodiments, the change in the user perspective is detected and determined in real time, and when the change in the user perspective is less than a predetermined threshold, the 3D image is generated based on the user perspective before the change. This may occur, for example, when the user briefly shakes the head within a small amplitude or range, or adjusts posture while sitting in a fixed seat. In that case, the user perspective before the change is still used as the current user perspective, and a 3D image is generated from the depth information corresponding to the current user perspective.
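A minimal sketch of this thresholding behaviour, assuming the user perspective is reported as a pair (θ_x, θ_y) in degrees; the threshold value and the class name are illustrative assumptions only.

```python
class PerspectiveFilter:
    """Keep the previous user perspective unless it changed by more than a threshold."""

    def __init__(self, threshold_deg=3.0):
        self.threshold_deg = threshold_deg
        self.current = None            # (theta_x, theta_y) used for 3D image generation

    def update(self, measured):
        if self.current is None:
            self.current = measured
        else:
            dx = abs(measured[0] - self.current[0])
            dy = abs(measured[1] - self.current[1])
            if max(dx, dy) >= self.threshold_deg:   # small head shakes are ignored
                self.current = measured
        return self.current

f = PerspectiveFilter(threshold_deg=3.0)
print(f.update((90.0, 90.0)))   # (90.0, 90.0)
print(f.update((91.0, 90.5)))   # still (90.0, 90.0): change below threshold
print(f.update((80.0, 90.0)))   # (80.0, 90.0): perspective updated, new 3D image generated
```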
In some embodiments, the viewpoints at which the user's eyes are located may be determined from the identified human eyes or from the determined spatial positions of the human eyes. The correspondence between the eye spatial position information and the viewpoints may be stored in the processor in the form of a correspondence table and received by the 3D processing device. Alternatively, this correspondence may be stored in the 3D processing device in the form of a correspondence table.
The display behaviour of a 3D display device according to an embodiment of the present disclosure is described below. As described above, the 3D display device may have a plurality of viewpoints. At each viewpoint position (spatial position), a user's eye sees the display of the corresponding sub-pixel in the composite sub-pixels of each composite pixel in the multi-view naked eye 3D display screen. The two different images seen by the user's two eyes at different viewpoint positions form a parallax, and the brain fuses them into a 3D image.
In some embodiments, based on the generated 3D image and the determined viewpoints of the user's eyes, the 3D processing device may render respective ones of the composite sub-pixels. The correspondence of viewpoints and sub-pixels may be stored in the processor in the form of a correspondence table and received by the 3D processing device. Alternatively, the correspondence of the viewpoints to the sub-pixels may be stored in the 3D processing device in the form of a correspondence table.
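Purely as an illustration of such correspondence tables, the sketch below expresses both look-ups — eye spatial position to viewpoint, and viewpoint to sub-pixel — as plain Python structures; the 8 viewpoints follow the embodiments of fig. 9, and all table contents, bin edges and function names are hypothetical.

```python
# Viewpoint lookup from the eye's lateral position (metres in the display
# coordinate system); the bin edges stand in for a calibrated correspondence table.
EYE_X_TO_VIEWPOINT = [(-0.24, "V1"), (-0.16, "V2"), (-0.08, "V3"), (0.0, "V4"),
                      (0.08, "V5"), (0.16, "V6"), (0.24, "V7"), (0.32, "V8")]

# For one composite sub-pixel: which of its 8 same-colour sub-pixels serves each viewpoint.
VIEWPOINT_TO_SUBPIXEL = {f"V{i + 1}": i for i in range(8)}

def viewpoint_for_eye(eye_x):
    for upper, viewpoint in EYE_X_TO_VIEWPOINT:
        if eye_x <= upper:
            return viewpoint
    return "V8"

left, right = viewpoint_for_eye(-0.12), viewpoint_for_eye(0.10)
print(left, right)                                                 # e.g. V3 V6
print(VIEWPOINT_TO_SUBPIXEL[left], VIEWPOINT_TO_SUBPIXEL[right])   # sub-pixel indices 2 and 5
```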
In some embodiments, based on the generated 3D image, two juxtaposed images, for example a left-eye parallax image and a right-eye parallax image, are generated by the processor or the 3D processing device. In some embodiments, the generated 3D image is taken as one of the two juxtaposed images, for example as one of the left-eye and right-eye parallax images, and the other of the two juxtaposed images is generated based on the 3D image. Based on one of the two images, the 3D processing device renders at least one sub-pixel in each composite sub-pixel according to the determined viewpoint position of one of the user's two eyes; based on the other of the two images, it renders at least another sub-pixel in each composite sub-pixel according to the determined viewpoint position of the user's other eye.
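As an aside, the step of deriving the second juxtaposed image from the first can be sketched as a simple disparity-based horizontal shift; this is only one possible approach under the assumption that a per-pixel disparity map is available, and is not how the disclosure prescribes generating the parallax images.

```python
import numpy as np

def right_from_left(left_img, disparity):
    """Sketch: derive the second juxtaposed (right-eye) parallax image from the
    first one by shifting each pixel horizontally by its disparity (in pixels).
    Holes left by this naive forward warp are kept as zeros for simplicity."""
    h, w = left_img.shape[:2]
    right = np.zeros_like(left_img)
    for y in range(h):
        for x in range(w):
            xr = x - int(disparity[y, x])      # larger disparity = nearer object
            if 0 <= xr < w:
                right[y, xr] = left_img[y, x]
    return right

left = np.arange(12, dtype=np.uint8).reshape(3, 4)      # toy 3x4 "image"
disp = np.ones((3, 4), dtype=np.int32)                  # uniform 1-pixel disparity
print(right_from_left(left, disp))
```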
The rendering of sub-pixels according to viewpoint is described in detail below with reference to the embodiments shown in fig. 9A to 9E. In the illustrated embodiment, the 3D display device has 8 viewpoints V1-V8. Each composite pixel 500 in the multi-view naked eye 3D display screen of the 3D display device is composed of three composite sub-pixels 510, 520 and 530. Each composite sub-pixel is made up of 8 same-color sub-pixels corresponding to the 8 viewpoints. As shown, composite sub-pixel 510 is a red composite sub-pixel consisting of 8 red sub-pixels R, composite sub-pixel 520 is a green composite sub-pixel consisting of 8 green sub-pixels G, and composite sub-pixel 530 is a blue composite sub-pixel consisting of 8 blue sub-pixels B. The composite pixels are arranged in an array in the multi-view naked eye 3D display screen. For clarity, only one composite pixel 500 of the multi-view naked eye 3D display screen is shown in the figure. The construction of the other composite pixels and the rendering of their sub-pixels can be understood from the description of the composite pixel shown.
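For illustration, a composite pixel of this kind can be modelled with a small data structure; the class names, the brightness representation and the 0-based viewpoint indices are assumptions made for this sketch only.

```python
NUM_VIEWPOINTS = 8   # V1..V8 in the illustrated embodiment

class CompositeSubPixel:
    """8 same-colour sub-pixels, one per viewpoint (e.g. the red sub-pixels of 510)."""

    def __init__(self, colour):
        self.colour = colour
        self.values = [0] * NUM_VIEWPOINTS     # brightness rendered for each viewpoint

    def render(self, viewpoint_index, value):
        self.values[viewpoint_index] = value

class CompositePixel:
    """One composite pixel 500: red (510), green (520), blue (530) composite sub-pixels."""

    def __init__(self):
        self.sub = {c: CompositeSubPixel(c) for c in ("R", "G", "B")}

    def render_for_eye(self, viewpoint_index, rgb):
        for colour, value in zip("RGB", rgb):
            self.sub[colour].render(viewpoint_index, value)

# Left eye at V2 (index 1), right eye at V5 (index 4), as in FIG. 9A.
pixel = CompositePixel()
pixel.render_for_eye(1, (255, 0, 0))    # this pixel of the left-eye parallax image is red
pixel.render_for_eye(4, (200, 0, 0))    # the right-eye parallax image differs slightly
print(pixel.sub["R"].values)            # [0, 255, 0, 0, 200, 0, 0, 0]
```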
In some embodiments, when it is determined based on the human eye spatial position information that each of the user's two eyes corresponds to one viewpoint, the 3D processing device may render the corresponding sub-pixels of the composite sub-pixels according to the 3D image that corresponds to the user perspective and is generated from the depth information of the 3D model or 3D video.
Referring to fig. 9A, in the illustrated embodiment, the user's left eye is at viewpoint V2 and the right eye is at viewpoint V5. Left-eye and right-eye parallax images corresponding to the two viewpoints V2 and V5 are generated based on the 3D image, and the sub-pixels of the composite sub-pixels 510, 520, 530 corresponding to V2 and V5 are rendered.
In some embodiments, when it is determined based on the human eye spatial position information that each of the user's two eyes corresponds to one viewpoint, the 3D processing device may render the sub-pixels of the composite sub-pixels corresponding to those two viewpoints and also render the sub-pixels corresponding to viewpoints adjacent to those two viewpoints, according to the 3D image that corresponds to the user perspective and is generated from the depth information of the 3D model or 3D video.
Referring to fig. 9B, in the illustrated embodiment, the user's left eye is at viewpoint V2 and the right eye is at viewpoint V6. Left-eye and right-eye parallax images corresponding to the two viewpoints V2 and V6 are generated based on the 3D image, and the sub-pixels of the composite sub-pixels 510, 520, 530 corresponding to V2 and V6 are rendered, while the sub-pixels corresponding to the viewpoints adjacent on both sides of each of V2 and V6 are also rendered. In some embodiments, only the sub-pixels corresponding to the viewpoint adjacent on a single side of each of V2 and V6 may be rendered in addition.
In some embodiments, when it is determined based on the human eye spatial position information that each of the user's two eyes lies between two viewpoints, the 3D processing device may render the sub-pixels of the composite sub-pixels corresponding to these four viewpoints, according to the 3D image that corresponds to the user perspective and is generated from the depth information of the 3D model or 3D video.
Referring to fig. 9C, in the illustrated embodiment, the user's left eye is between viewpoints V2 and V3 and the right eye is between viewpoints V5 and V6. Left-eye and right-eye parallax images corresponding to viewpoints V2, V3 and V5, V6 are generated based on the 3D image, and the sub-pixels of the composite sub-pixels 510, 520, 530 corresponding to V2, V3 and V5, V6 are rendered.
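The viewpoint-selection behaviour of fig. 9A to 9C can be summarised in a short sketch; the fractional viewpoint coordinate, the 0-based indices and the function name are illustrative assumptions, not part of the disclosure.

```python
NUM_VIEWPOINTS = 8   # V1..V8

def viewpoints_to_render(eye_viewpoint, include_neighbours=False):
    """Sketch: which viewpoint indices (0-based) to render for one eye.
    eye_viewpoint is a possibly fractional position on the viewpoint axis,
    e.g. 1.0 means exactly V2 and 1.5 means between V2 and V3."""
    lower = int(eye_viewpoint)
    selected = {lower}
    if eye_viewpoint != lower:                 # the eye sits between two viewpoints
        selected.add(lower + 1)
    elif include_neighbours:                   # optionally widen to adjacent viewpoints
        selected.update({lower - 1, lower + 1})
    return sorted(v for v in selected if 0 <= v < NUM_VIEWPOINTS)

print(viewpoints_to_render(1.0))                           # [1]       -> V2 only (fig. 9A)
print(viewpoints_to_render(1.0, include_neighbours=True))  # [0, 1, 2] -> V1, V2, V3 (fig. 9B)
print(viewpoints_to_render(1.5))                           # [1, 2]    -> V2 and V3 (fig. 9C)
```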
In some embodiments, when it is determined based on the eye spatial position information that the viewpoint position of at least one of the user's two eyes has changed, the 3D processing device may switch from rendering the sub-pixels of the composite sub-pixels corresponding to the viewpoint position before the change to rendering the sub-pixels corresponding to the viewpoint position after the change, according to the 3D image that corresponds to the user perspective and is generated from the depth information of the 3D model or 3D video.
Referring to fig. 9D, when the left eye of the user moves from viewpoint V1 to viewpoint V3 and the right eye moves from viewpoint V5 to viewpoint V7, the sub-pixels of the composite sub-pixels 510, 520, and 530 that are rendered are adjusted accordingly to adapt to the changing viewpoint positions.
In some embodiments, when it is determined based on the human eye spatial position information that there is more than one user, the 3D processing device may render, for each user, the sub-pixels of the composite sub-pixels corresponding to the viewpoints at which that user's two eyes are located, according to the 3D image that corresponds to that user's perspective and is generated from the depth information of the 3D model or 3D video.
Referring to fig. 9E, two users face the 3D display device: the first user's eyes are at viewpoints V2 and V4, and the second user's eyes are at viewpoints V5 and V7. A first 3D image corresponding to the first user's perspective and a second 3D image corresponding to the second user's perspective are generated from the depth information of the 3D model or 3D video, left-eye and right-eye parallax images corresponding to viewpoints V2 and V4 are generated based on the first 3D image, and left-eye and right-eye parallax images corresponding to viewpoints V5 and V7 are generated based on the second 3D image. The 3D processing device renders the sub-pixels of the composite sub-pixels 510, 520, 530 corresponding to viewpoints V2, V4, V5 and V7.
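A minimal sketch of this multi-user case, assuming the per-user perspectives and viewpoints have already been determined; the callbacks stand in for the 3D image generation and sub-pixel rendering described above and are purely hypothetical.

```python
# Sketch of the multi-user case of fig. 9E; viewpoint indices are 0-based (V2 -> 1, ...).
users = [
    {"perspective": (75.0, 90.0), "left_vp": 1, "right_vp": 3},   # first user: V2, V4
    {"perspective": (105.0, 90.0), "left_vp": 4, "right_vp": 6},  # second user: V5, V7
]

def render_all_users(users, generate_3d_image, render_subpixels):
    for user in users:
        image_3d = generate_3d_image(user["perspective"])          # per-perspective 3D image
        render_subpixels(image_3d, user["left_vp"], user["right_vp"])

# Stand-in callbacks so the sketch runs on its own.
render_all_users(users,
                 generate_3d_image=lambda p: f"3D image for perspective {p}",
                 render_subpixels=lambda img, l, r: print(img, "-> viewpoints", l, r))
```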
In some embodiments, there is a theoretical correspondence between the sub-pixels of the 3D display device and the viewpoints. This theoretical correspondence may be set or calibrated uniformly when the 3D display device comes off the production line, and may be stored in the 3D display device in the form of a correspondence table, for example in the processor or in the 3D processing device. Because of the installation, the materials or the alignment of the grating, in actual use of the 3D display device the sub-pixels seen in space from a viewpoint position may not correspond to the theoretical sub-pixels. This affects the correct display of the 3D image. It is therefore advantageous for the 3D display device to calibrate or correct the correspondence between sub-pixels and viewpoints that exists during its actual use. In the embodiments provided by the present disclosure, such a correspondence between viewpoints and sub-pixels existing during actual use of the 3D display device is referred to as the "corrected correspondence". The "corrected correspondence" may deviate from, or coincide with, the "theoretical correspondence".
The process of obtaining the "corrected correspondence" is the process of finding the correspondence between viewpoints and sub-pixels in the actual display process. In some embodiments, in order to determine the corrected correspondence between the viewpoints and the sub-pixels in the composite sub-pixels of each composite pixel of the multi-view naked eye 3D display screen, the multi-view naked eye 3D display screen or display panel may be divided into a plurality of correction regions, the corrected correspondence between the sub-pixels and the viewpoints is determined for each correction region, and the corrected correspondence data are then stored region by region, for example in the form of a correspondence table in the processor or in the 3D processing device.
In some embodiments, the corrected correspondence between at least one sub-pixel in each correction region and the viewpoints is obtained by detection, and the corrected correspondence between the other sub-pixels in that correction region and the viewpoints is derived or estimated by mathematical methods with reference to the detected corrected correspondence. The mathematical methods include: linear interpolation, linear extrapolation, nonlinear interpolation, nonlinear extrapolation, Taylor-series approximation, linear transformation of the reference coordinate system, nonlinear transformation of the reference coordinate system, exponential models, trigonometric transformations, and the like.
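As one illustration of the linear-interpolation case, the sketch below assumes the corrected correspondence is expressed as a viewpoint offset per sub-pixel column and that only the first and last columns of a correction region were measured; this representation is an assumption made for the example, not the disclosed calibration procedure.

```python
def interpolate_offsets(first_offset, last_offset, num_columns):
    """Sketch: linearly interpolate the measured viewpoint offsets of the first
    and last sub-pixel column of a correction region to the columns in between."""
    if num_columns == 1:
        return [first_offset]
    step = (last_offset - first_offset) / (num_columns - 1)
    return [first_offset + step * i for i in range(num_columns)]

# Measured: the leftmost column is shifted by +0.2 viewpoints, the rightmost by +1.0.
print(interpolate_offsets(0.2, 1.0, num_columns=5))   # approx. [0.2, 0.4, 0.6, 0.8, 1.0]
```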
In some embodiments, the multi-view naked eye 3D display screen is defined with a plurality of correction regions, and the combined area of all the correction regions is 90% to 100% of the area of the multi-view naked eye 3D display screen. In some embodiments, the plurality of correction regions are arranged in an array in the multi-view naked eye 3D display screen. In some embodiments, each correction region may be defined by one composite pixel comprising three composite sub-pixels. In some embodiments, each correction region may be defined by two or more composite pixels. In some embodiments, each correction region may be defined by two or more composite subpixels. In some embodiments, each correction region may be defined by two or more composite sub-pixels that do not belong to the same composite pixel.
In some embodiments, the deviation of the corrected correspondence between sub-pixels and viewpoints from the theoretical correspondence in one correction region may or may not coincide with the corresponding deviation in another correction region.
Embodiments according to the present disclosure provide a method of 3D image display for the above-described 3D display device. As shown in fig. 10, the method of displaying a 3D image includes:
S10, determining a user perspective of a user; and
S20, rendering corresponding sub-pixels in the composite sub-pixels of the composite pixels in the multi-view naked eye 3D display screen according to the depth information of the 3D model based on the user perspective.
In some embodiments, corresponding sub-pixels in composite sub-pixels of a composite pixel in a multi-view naked-eye 3D display screen may also be rendered according to depth information of the 3D video.
In some embodiments, a display method of a 3D image includes:
S100, determining a user perspective of a user;
S200, determining the viewpoints at which the user's two eyes are located;
S300, receiving a 3D model or a 3D video including depth information;
S400, generating a 3D image from the 3D model or the 3D video including depth information, based on the determined user perspective; and
S500, rendering corresponding sub-pixels in the composite sub-pixels of the composite pixels in the multi-view naked eye 3D display screen according to the generated 3D image, based on the determined viewpoints of the user's two eyes, wherein the corresponding sub-pixels are the sub-pixels in the composite sub-pixels that correspond to the determined viewpoints of the user.
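Purely as an illustration, steps S100 to S500 can be wired together as follows; every helper passed in is a hypothetical stand-in for the corresponding device or step described above.

```python
def display_3d_frame(determine_perspective, determine_viewpoints,
                     receive_3d_model, generate_3d_image, render_subpixels):
    """Sketch of S100-S500 for one frame, wired together from stand-in callbacks."""
    perspective = determine_perspective()                 # S100
    left_vp, right_vp = determine_viewpoints()            # S200
    model = receive_3d_model()                            # S300 (3D model or 3D video frame)
    image_3d = generate_3d_image(model, perspective)      # S400
    render_subpixels(image_3d, left_vp, right_vp)         # S500

display_3d_frame(
    determine_perspective=lambda: (80.0, 90.0),
    determine_viewpoints=lambda: (1, 4),                  # V2 and V5, 0-based
    receive_3d_model=lambda: "3D model with depth information",
    generate_3d_image=lambda m, p: f"3D image of {m} at perspective {p}",
    render_subpixels=lambda img, l, r: print(img, "-> sub-pixels for viewpoints", l, r),
)
```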
In some embodiments, determining the user perspective comprises: detecting the user perspective in real time.
In some embodiments, generating the 3D image from the depth information of the 3D model or the 3D video based on the determined user perspective comprises: determining a change of a user perspective detected in real time; and generating a 3D image based on the user perspective before the change when the change of the user perspective is less than a predetermined threshold.
An embodiment of the present disclosure provides a 3D display device 300. Referring to fig. 11, the 3D display device 300 includes a processor 320 and a memory 310. In some embodiments, the 3D display device 300 may also include a communication interface 340 and a bus 330, wherein the processor 320, the communication interface 340 and the memory 310 communicate with one another via the bus 330. The communication interface 340 may be configured to transmit information. The processor 320 may call logic instructions in the memory 310 to perform the 3D image display method of the above-described embodiments, in which the displayed 3D image follows the user viewing angle.
Furthermore, the logic instructions in the memory 310 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 310 may be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. By running the program instructions/modules stored in the memory 310, the processor 320 executes functional applications and performs data processing, that is, it implements the 3D image display method in the above-described method embodiments.
The memory 310 may include a program storage area and a data storage area; the program storage area may store an operating system and application programs required for at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 310 may include a high-speed random access memory and may also include a non-volatile memory.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product that is stored in a storage medium and includes one or more instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk or an optical disk, and may also be a transitory storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. The scope of the disclosed embodiments includes the full ambit of the claims, as well as all available equivalents of the claims. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. Furthermore, the terms "comprises" and "comprising," when used in this application, specify the presence of at least one stated feature, integer, step, operation, element, or component, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof. In this document, each embodiment is described with emphasis on its differences from the other embodiments, and for the parts that are the same or similar between embodiments, reference may be made to one another. For the methods, products and the like disclosed in the embodiments, where they correspond to the method sections disclosed herein, reference may be made to the description of those method sections.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit may be merely a division of a logical function, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (21)

1. A3D display device, comprising:
a multi-view naked-eye 3D display screen comprising a plurality of composite pixels, each of the plurality of composite pixels comprising a plurality of composite sub-pixels, each of the plurality of composite sub-pixels comprising a plurality of sub-pixels corresponding to a plurality of viewpoints of the 3D display device;
a perspective determination device configured to determine a user perspective of a user;
a 3D processing device configured to render respective sub-pixels of the plurality of composite sub-pixels in accordance with depth information of a 3D model based on the user perspective.
2. The 3D display device according to claim 1, wherein the 3D processing device is configured to generate a 3D image from the depth information based on the user perspective and to render the respective sub-pixels from the 3D image.
3. The 3D display device according to claim 2, further comprising:
an eye tracking device configured to determine a spatial position of a user's eye;
wherein the 3D processing device is configured to determine the viewpoint at which the user's eye is located based on the spatial position of the human eye, and to render the sub-pixel corresponding to the viewpoint at which the eye is located based on the 3D image.
4. The 3D display device according to claim 3, wherein the human eye tracking means comprises:
a human eye tracker configured to capture a user image of the user;
an eye tracking image processor configured to determine the eye spatial position based on the user image; and
an eye tracking data interface configured to transmit eye spatial position information indicative of the eye spatial position.
5. The 3D display device of claim 4, wherein the eye tracker comprises:
a first camera configured to capture a first image; and
a second camera configured to capture a second image;
wherein the eye-tracking image processor is configured to identify the presence of a human eye based on at least one of the first and second images and to determine the spatial position of the human eye based on the identified human eye.
6. The 3D display device of claim 4, wherein the eye tracker comprises:
a camera configured to capture an image; and
a depth detector configured to acquire eye depth information of a user;
wherein the eye-tracking image processor is configured to identify the presence of a human eye based on the image and determine the human eye spatial position based on the identified human eye position and the eye depth information.
7. The 3D display device according to any one of claims 1 to 6, wherein the user viewing angle is an angle between the user and a display plane of the multi-view naked eye 3D display screen.
8. The 3D display device according to claim 7, wherein the user viewing angle is an included angle between a user sight line and a display plane of the multi-view naked eye 3D display screen, and the user sight line is a connection line between a midpoint of a user binocular connection line and a center of the multi-view naked eye 3D display screen.
9. The 3D display device of claim 8, wherein the user perspective is:
an angle between the user's line of sight and at least one of a lateral direction, a vertical direction, and a depth direction of the display plane; or
an angle between the user's line of sight and the projection of the user's line of sight in the display plane.
10. The 3D display device according to any of claims 1 to 6, further comprising: a 3D signal interface configured to receive the 3D model.
11. A 3D image display method, comprising:
determining a user perspective of a user; and
rendering corresponding sub-pixels in composite sub-pixels of composite pixels in a multi-view naked-eye 3D display screen according to depth information of a 3D model based on the user perspective.
12. The 3D image display method according to claim 11, wherein rendering, based on the user perspective, respective ones of composite sub-pixels of a composite pixel in a multi-view naked-eye 3D display screen in accordance with depth information of a 3D model comprises:
generating a 3D image according to the depth information based on the user perspective, and rendering the corresponding sub-pixels according to the 3D image.
13. The 3D image display method according to claim 12, further comprising:
determining a spatial position of a human eye of a user;
determining a viewpoint of the eye of the user based on the spatial position of the human eye; and
rendering sub-pixels corresponding to the viewpoint where the eyes are located based on the 3D image.
14. The 3D image display method according to claim 13, wherein determining the spatial position of the human eye of the user comprises:
capturing a user image of the user;
determining the spatial position of the human eye based on the user image; and
transmitting eye spatial position information indicative of the eye spatial position.
15. The 3D image display method according to claim 14, wherein capturing a user image of the user and determining the spatial position of the human eye based on the user image comprises:
shooting a first image;
shooting a second image;
identifying the presence of a human eye based on at least one of the first image and the second image; and
determining the spatial position of the human eye based on the identified human eye.
16. The 3D image display method according to claim 14, wherein capturing a user image of the user and determining the spatial position of the human eye based on the user image comprises:
shooting an image;
acquiring eye depth information of a user;
identifying a presence of a human eye based on the image; and
determining the spatial position of the human eye based on the identified position of the human eye and the eye depth information together.
17. The 3D image display method according to any one of claims 11 to 16, wherein the user viewing angle is an angle between the user and a display plane of the multi-view naked eye 3D display screen.
18. The 3D image display method according to claim 17, wherein the user viewing angle is an included angle between a user sight line and a display plane of the multi-view naked eye 3D display screen, and the user sight line is a connection line between a midpoint of a user binocular connection line and a center of the multi-view naked eye 3D display screen.
19. The 3D image display method according to claim 18, wherein the user view angle is:
an angle between the user's line of sight and at least one of a lateral direction, a vertical direction, and a depth direction of the display plane; or
an angle between the user's line of sight and the projection of the user's line of sight in the display plane.
20. The 3D image display method according to any one of claims 11 to 16, further comprising:
receiving a 3D model.
21. A 3D display device comprising:
a processor; and
a memory storing program instructions;
characterized in that the processor, when executing the program instructions, is configured to perform the 3D image display method according to any one of claims 11 to 20.
CN201911231149.XA 2019-12-05 2019-12-05 3D display device and 3D image display method Pending CN112929636A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201911231149.XA CN112929636A (en) 2019-12-05 2019-12-05 3D display device and 3D image display method
EP20895613.6A EP4068768A4 (en) 2019-12-05 2020-12-02 3d display apparatus and 3d image display method
PCT/CN2020/133332 WO2021110038A1 (en) 2019-12-05 2020-12-02 3d display apparatus and 3d image display method
US17/781,058 US20230007228A1 (en) 2019-12-05 2020-12-02 3d display device and 3d image display method
TW109142887A TWI788739B (en) 2019-12-05 2020-12-04 3D display device, 3D image display method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911231149.XA CN112929636A (en) 2019-12-05 2019-12-05 3D display device and 3D image display method

Publications (1)

Publication Number Publication Date
CN112929636A true CN112929636A (en) 2021-06-08

Family

ID=76160804

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911231149.XA Pending CN112929636A (en) 2019-12-05 2019-12-05 3D display device and 3D image display method

Country Status (5)

Country Link
US (1) US20230007228A1 (en)
EP (1) EP4068768A4 (en)
CN (1) CN112929636A (en)
TW (1) TWI788739B (en)
WO (1) WO2021110038A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114040184A (en) * 2021-11-26 2022-02-11 京东方科技集团股份有限公司 Image display method, system, storage medium and computer program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101002253A (en) * 2004-06-01 2007-07-18 迈克尔·A.·韦塞利 Horizontal perspective simulator
CN102056003A (en) * 2009-11-04 2011-05-11 三星电子株式会社 High density multi-view image display system and method with active sub-pixel rendering

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030063383A1 (en) * 2000-02-03 2003-04-03 Costales Bryan L. Software out-of-focus 3D method, system, and apparatus
US20050275914A1 (en) * 2004-06-01 2005-12-15 Vesely Michael A Binaural horizontal perspective hands-on simulator
KR101694821B1 (en) * 2010-01-28 2017-01-11 삼성전자주식회사 Method and apparatus for transmitting digital broadcasting stream using linking information of multi-view video stream, and Method and apparatus for receiving the same
CN102693065A (en) * 2011-03-24 2012-09-26 介面光电股份有限公司 Method for processing visual effect of stereo image
KR102192986B1 (en) * 2014-05-23 2020-12-18 삼성전자주식회사 Image display apparatus and method for displaying image
CN105323573B (en) * 2014-07-16 2019-02-05 北京三星通信技术研究有限公司 3-D image display device and method
KR101975246B1 (en) * 2014-10-10 2019-05-07 삼성전자주식회사 Multi view image display apparatus and contorl method thereof
KR102415502B1 (en) * 2015-08-07 2022-07-01 삼성전자주식회사 Method and apparatus of light filed rendering for plurality of user
EP3261328B1 (en) * 2016-06-03 2021-10-13 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer-readable storage medium
CN207320118U (en) * 2017-08-31 2018-05-04 昆山国显光电有限公司 Dot structure, mask plate and display device
CN109993823B (en) * 2019-04-11 2022-11-25 腾讯科技(深圳)有限公司 Shadow rendering method, device, terminal and storage medium
KR20210030072A (en) * 2019-09-09 2021-03-17 삼성전자주식회사 Multi-image display apparatus using holographic projection
CN211128024U (en) * 2019-12-05 2020-07-28 北京芯海视界三维科技有限公司 3D display device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101002253A (en) * 2004-06-01 2007-07-18 迈克尔·A.·韦塞利 Horizontal perspective simulator
CN102056003A (en) * 2009-11-04 2011-05-11 三星电子株式会社 High density multi-view image display system and method with active sub-pixel rendering

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079765A (en) * 2021-11-17 2022-02-22 京东方科技集团股份有限公司 Image display method, device and system
CN114079765B (en) * 2021-11-17 2024-05-28 京东方科技集团股份有限公司 Image display method, device and system
CN115278200A (en) * 2022-07-29 2022-11-01 北京芯海视界三维科技有限公司 Processing apparatus and display device
CN115278201A (en) * 2022-07-29 2022-11-01 北京芯海视界三维科技有限公司 Processing apparatus and display device

Also Published As

Publication number Publication date
EP4068768A1 (en) 2022-10-05
TWI788739B (en) 2023-01-01
US20230007228A1 (en) 2023-01-05
WO2021110038A1 (en) 2021-06-10
TW202123694A (en) 2021-06-16
EP4068768A4 (en) 2023-08-02

Similar Documents

Publication Publication Date Title
CN211128024U (en) 3D display device
EP2445221B1 (en) Correcting frame-to-frame image changes due to motion for three dimensional (3-d) persistent observations
EP1168852B1 (en) Stereoscopic TV apparatus
CN101636747B (en) Two dimensional/three dimensional digital information acquisition and display device
JP5014979B2 (en) 3D information acquisition and display system for personal electronic devices
CN112929639A (en) Human eye tracking device and method, 3D display equipment and method and terminal
WO2021110038A1 (en) 3d display apparatus and 3d image display method
CN108093244B (en) Remote follow-up stereoscopic vision system
JP2014045473A (en) Stereoscopic image display device, image processing apparatus, and stereoscopic image processing method
US11961250B2 (en) Light-field image generation system, image display system, shape information acquisition server, image generation server, display device, light-field image generation method, and image display method
US20120069004A1 (en) Image processing device and method, and stereoscopic image display device
CA3086592A1 (en) Viewer-adjusted stereoscopic image display
CN112929638B (en) Eye positioning method and device and multi-view naked eye 3D display method and device
CN211531217U (en) 3D terminal
JP2842735B2 (en) Multi-viewpoint three-dimensional image input device, image synthesizing device, and image output device thereof
CN214756700U (en) 3D display device
JPH08116556A (en) Image processing method and device
Kang Wei et al. Three-dimensional scene navigation through anaglyphic panorama visualization
CN111684517B (en) Viewer adjusted stereoscopic image display
CN112929634A (en) Multi-view naked eye 3D display device and 3D image display method
CN112925430A (en) Method for realizing suspension touch control, 3D display equipment and 3D terminal
KR100400209B1 (en) Apparatus for generating three-dimensional moving pictures from tv signals
CN112929632A (en) 3D terminal
CN115695771A (en) Display device and display method thereof
KR20160042694A (en) Alignment device for stereoscopic camera and method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination