CN111683238B - 3D image fusion method and device based on observation and tracking - Google Patents

3D image fusion method and device based on observation and tracking Download PDF

Info

Publication number
CN111683238B
CN111683238B (application number CN202010554091.9A)
Authority
CN
China
Prior art keywords
display screen
area
viewpoint
intersection
observation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010554091.9A
Other languages
Chinese (zh)
Other versions
CN111683238A (en)
Inventor
赵飞
陆小松
蒲天发
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Thredim Optoelectronics Co ltd
Original Assignee
Ningbo Thredim Optoelectronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Thredim Optoelectronics Co ltd filed Critical Ningbo Thredim Optoelectronics Co ltd
Priority to CN202010554091.9A priority Critical patent/CN111683238B/en
Publication of CN111683238A publication Critical patent/CN111683238A/en
Application granted granted Critical
Publication of CN111683238B publication Critical patent/CN111683238B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/139Format conversion, e.g. of frame-rate or size
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The embodiment of the invention discloses a 3D image fusion method and device based on observation and tracking, which relate to the technical field of 3D display. The main technical scheme is: acquiring an observation area of an observer on a 3D display screen, wherein the observation area is the projection clipping area of the observer's binocular field of view onto the plane in which the 3D display screen lies; determining whether the intersection area of the observation area and the 3D display screen meets a preset intersection threshold; if it does, calculating the viewpoint serial number sampled by the current viewpoint-image interleaving in the intersection area to obtain the viewpoint serial number and its mantissa; and calculating the viewpoint sequence numbers to be fused and their fusion weights from the viewpoint serial number and its mantissa, then fusing and outputting the viewpoint images in the intersection area accordingly until all viewpoint images in the intersection area have been traversed and fused. The method is mainly applied to tracking the 3D display image according to eye movement.

Description

3D image fusion method and device based on observation and tracking
Technical Field
The embodiment of the invention relates to the technical field of 3D display, in particular to a 3D image fusion method and device based on observation and tracking.
Background
3D image display on a 2D display device is generally realized by time-division multiplexing (e.g., shutter-type 3D display) or space-division multiplexing (e.g., polarized 3D display and lenticular 3D display). Space-division-multiplexed 3D display requires selecting a suitable viewpoint-image interleaving and fusion algorithm according to the realization principle (polarized light, lenticular grating, barrier grating, etc.) and the design parameters of the 3D display device. The left-eye and right-eye parallax image contents are interleaved and fused according to a certain rule, the interleaved and fused image is displayed on the 2D display plane by the 3D display device, and a light-splitting device (polarized glasses, lenticular grating, slit grating, etc.) redistributes the fused image to the left and right eyes of the observer, so that the observer perceives 3D vision.
At present, common 3D image interleaving and fusion algorithms are implemented in a global manner: the displayed image content is ignored and only the viewpoint-image (two-view or multi-view) interleaving and fusion computation is considered, which must be performed pixel by pixel according to the pixel arrangement of the display screen. Naked-eye 3D display (e.g., lenticular-grating naked-eye 3D display) generally requires 4K display resolution and a refresh rate above 60 Hz, so the interleaving and fusion workload is large and consumes considerable computing resources and processing time. For 3D display applications that need tracking display, it is not enough to interleave and fuse the left and right viewpoint images according to the design parameters: changes in the position of the observer's eyes also affect the viewing effect. For example, the observer may enter the inverse (pseudoscopic) viewing zone of the parallax images (the left eye receives the right-eye viewpoint image and the right eye receives the left-eye viewpoint image), causing spatial confusion and dizziness. Eye-movement tracking of the observer therefore has to be added, and the viewpoint-image arrangement parameters adjusted according to the change of the three-dimensional eye coordinates to achieve tracking display. Tracking display further increases the system computation, and no good optimization scheme has been available to reduce it.
Disclosure of Invention
In view of this, embodiments of the present invention provide a 3D image fusion method and apparatus based on observation and tracking, which mainly aim to achieve tracking display while reducing system resource consumption.
In order to solve the above problems, embodiments of the present invention mainly provide the following technical solutions:
in a first aspect, an embodiment of the present invention provides an observation and tracking based 3D image fusion method, including:
acquiring an observation area of an observer on a 3D display screen, wherein the observation area is the projection clipping area of the observer's binocular field of view onto the plane in which the 3D display screen lies;
determining whether an intersection area of the observation area and the 3D display screen meets a preset intersection threshold;
if the intersection area meets the preset intersection threshold, calculating the viewpoint serial number sampled by the current viewpoint-image interleaving in the intersection area to obtain the viewpoint serial number and its mantissa;
and calculating the viewpoint sequence numbers to be fused and their fusion weights according to the viewpoint serial number and its mantissa, and fusing and outputting the viewpoint images in the intersection area according to the viewpoint sequence numbers to be fused and their fusion weights until all the viewpoint images in the intersection area are traversed and fused.
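For orientation only, a minimal Python sketch of this flow is given below. The function signature, the data layout (NumPy HxWx3 viewpoint images), the threshold form and the helper viewpoint_ordinal are illustrative assumptions rather than the claimed implementation.

```python
import numpy as np

def fuse_frame(view_imgs, roi, screen_wh, viewpoint_ordinal, min_frac=0.001):
    """view_imgs: list of HxWx3 arrays (viewpoint images); roi: (x0, y0, x1, y1)
    observation area on the screen plane, in pixels; viewpoint_ordinal(i, j, k) -> float
    viewpoint number as in formula (1). Returns the interleaved/fused output image."""
    w_px, h_px = screen_wh
    x0, y0 = max(0, int(roi[0])), max(0, int(roi[1]))
    x1, y1 = min(w_px, int(roi[2])), min(h_px, int(roi[3]))
    out = np.asarray(view_imgs[0], dtype=np.float32).copy()      # 2D fallback content
    if max(0, x1 - x0) * max(0, y1 - y0) < min_frac * w_px * h_px:
        return out                                               # no qualifying intersection
    n = len(view_imgs)
    for j in range(y0, y1):                                      # only the intersection area
        for i in range(x0, x1):
            for k in range(3):                                   # R, G, B sub-pixels
                v = viewpoint_ordinal(i, j, k) % n
                vn, frac = int(v), v - int(v)                    # serial number and mantissa
                w, vn2, w2 = 1.0 - frac, (int(v) + 1) % n, frac  # fusion pair, w2 = 1 - w
                out[j, i, k] = w * view_imgs[vn][j, i, k] + w2 * view_imgs[vn2][j, i, k]
    return out
```

Restricting the triple loop to the intersection rectangle is what reduces the computation relative to the global interleaving described in the background.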
Optionally, determining whether the intersection area of the observation area and the 3D display screen meets a preset intersection threshold includes:
when the distance between the observer and the 3D display screen is smaller than a preset observation range and the 3D display screen is in the observation area, determining that a first intersection area exists between the observation area and the 3D display screen;
or when the distance between the observer and the 3D display screen is smaller than a preset observation range and the observation area is smaller than the 3D display screen, determining that the observation area and the 3D display screen have a second intersection area.
Optionally, the viewing area intersects the 3D display screen at the second intersection area,
before fusion output is performed on the viewpoint images in the intersection region according to the viewpoint sequence numbers to be fused and the fused weights thereof, the method further comprises the following steps:
dividing a display area of a 3D display screen into three parts: a region of interest, a transition region, and a region of no interest;
performing transition processing on the viewpoint images of the transition area;
the fusion output of the viewpoint images in the intersection area according to the viewpoint sequence number to be fused and the fused weight thereof comprises the following steps:
aiming at the viewpoint images in the attention area, performing interval fusion output according to the viewpoint sequence numbers to be fused and the fusion weight thereof;
and performing linear viewpoint image interpolation processing on the viewpoint images which are not subjected to interval fusion output processing in the attention area.
Optionally, the method further includes:
outputting a 2D viewpoint image in the 3D display screen when it is determined that the viewing area does not have an intersection area with the 3D display screen.
Optionally, the obtaining of the observation area of the observer on the 3D display screen includes:
determining the three-dimensional coordinates of human eyes and the coordinates of pupils of human eyes of an observer through a TOF camera, an RGB camera and an infrared lamp on a 3D display device;
acquiring the distance of the human eyes relative to the plane of the 3D display screen and the intersection point coordinate of the human eyes and the plane of the 3D display screen according to the three-dimensional coordinates of the human eyes;
acquiring the fixation point coordinate of human eyes on the plane of the 3D display screen through the pupil coordinate of the human eyes;
and acquiring the field angles of human eyes in the vertical direction and the horizontal direction, and calculating the vertex coordinates of the projection clipping area of the observer on the plane of the 3D display screen so as to confirm the observation area of the observer on the 3D display screen.
Optionally, the viewpoint serial number sampled by the current viewpoint-image interleaving in the intersection area is calculated with formula (1), which is reproduced as an image in the original publication, where:
i is the LCD pixel column index, from 0 to the horizontal-resolution maximum Hmax-1;
j is the LCD pixel row index, from 0 to the vertical-resolution maximum Vmax-1;
k is the LCD sub-pixel index, taking values 0, 1, 2 for an RGB arrangement and 2, 1, 0 for a BGR arrangement;
p0 is a parameter related to the initial offset phase between the grating and the LCD pixels;
p1 is a parameter related to the tangent of the grating tilt;
p2 is a parameter related to the viewpoint-image arrangement period;
p3 is a parameter related to the initial offset phase between the grating and the pixels and to the observer's eye position;
n is the number of viewpoints.
Optionally, the calculating the vertex coordinates of the projection clipping area of the observer on the plane of the 3D display screen includes:
(The four vertex-coordinate formulas are reproduced as images in the original publication.)
Here tl is the top-left vertex coordinate, tr the top-right, bl the bottom-left, and br the bottom-right; α is the horizontal field angle of the human eye and β the vertical field angle.
the step of acquiring the fixation point coordinate of the human eye on the 3D display screen plane through the human eye pupil coordinate comprises the following steps:
(The two gaze-point formulas are reproduced as images in the original publication.)
Here (x, y) is the gaze-point coordinate, (x1, y1) is the pupil coordinate of the human eye, and a0~a5, b0~b5 are unknown parameters obtained by calibration.
In a second aspect, an embodiment of the present invention further provides an observation and tracking based 3D image fusion apparatus, including:
an acquiring unit, configured to acquire an observation area of an observer on a 3D display screen, wherein the observation area is the projection clipping area of the observer's binocular field of view onto the plane in which the 3D display screen lies;
the determining unit is used for determining whether the intersection area of the observation area and the 3D display screen acquired by the acquiring unit meets a preset intersection threshold value;
a first calculation unit, configured to calculate, when the determining unit determines that the intersection area meets the preset intersection threshold, the viewpoint serial number sampled by the current viewpoint-image interleaving in the intersection area, so as to obtain the viewpoint serial number and its mantissa;
the second calculation unit is used for calculating the viewpoint sequence number to be fused and the fusion weight thereof according to the viewpoint sequence number and the mantissa thereof;
and the processing unit is used for fusing and outputting the viewpoint images in the intersection area according to the viewpoint sequence number to be fused and the fused weight thereof until all the viewpoint images in the intersection area are traversed and fused.
Optionally, the determining unit includes:
the first determining module is used for determining that a first intersection area exists between the observation area and the 3D display screen when the distance between the observer and the 3D display screen is smaller than a preset observation range and the 3D display screen is in the observation area;
and the second determining module is used for determining that a second intersection area exists between the observation area and the 3D display screen when the distance between the observer and the 3D display screen is smaller than a preset observation range and the observation area is smaller than the 3D display screen.
Optionally, the viewing area intersects the 3D display screen at the second intersection area,
the device further comprises:
the dividing unit is used for dividing the display area of the 3D display screen into three parts before the processing unit performs fusion output on the viewpoint images in the intersection area according to the viewpoint sequence number to be fused and the fusion weight thereof: a region of interest, a transition region, and a region of no interest;
a transition unit, configured to perform transition processing on the viewpoint image of the transition region;
the processing unit includes:
the first processing module is used for performing interval fusion output on the viewpoint images in the attention area according to the viewpoint sequence numbers to be fused and the fusion weight thereof;
and the second processing module is used for performing linear viewpoint image interpolation processing on the viewpoint images which are not subjected to interval fusion output processing in the attention area.
Optionally, the apparatus further comprises:
an output unit to output a 2D view image in the 3D display screen when it is determined that the observation area does not have an intersection area with the 3D display screen.
Optionally, the obtaining unit includes:
the determining module is used for determining the three-dimensional coordinates of human eyes and the coordinates of human pupils of an observer through a TOF camera, an RGB camera and an infrared lamp on the 3D display device;
the first acquisition module is used for acquiring the distance between the human eyes and the plane of the 3D display screen and the intersection point coordinate of the human eyes and the plane of the 3D display screen according to the three-dimensional coordinates of the human eyes;
the second acquisition module is used for acquiring the fixation point coordinate of human eyes on the plane of the 3D display screen through the pupil coordinate of the human eyes;
the third acquisition module is used for acquiring the field angles of the human eyes in the vertical direction and the horizontal direction;
and the calculation module is used for calculating the vertex coordinates of the projection clipping area of the observer on the plane of the 3D display screen so as to confirm the observation area of the observer on the 3D display screen.
Optionally, the first calculating unit calculates the viewpoint serial number sampled by the current viewpoint-image interleaving in the intersection area with the following formula (reproduced as an image in the original publication), where:
i is the LCD pixel column index, from 0 to the horizontal-resolution maximum Hmax-1;
j is the LCD pixel row index, from 0 to the vertical-resolution maximum Vmax-1;
k is the LCD sub-pixel index, taking values 0, 1, 2 for an RGB arrangement and 2, 1, 0 for a BGR arrangement;
p0 is a parameter related to the initial offset phase between the grating and the LCD pixels;
p1 is a parameter related to the tangent of the grating tilt;
p2 is a parameter related to the viewpoint-image arrangement period;
p3 is a parameter related to the initial offset phase between the grating and the pixels and to the observer's eye position;
n is the number of viewpoints.
Optionally, the calculating module is further configured to calculate vertex coordinates of the projection clipping area of the observer on the 3D display screen plane, where the vertex coordinates include:
(The four vertex-coordinate formulas are reproduced as images in the original publication.)
Here tl is the top-left vertex coordinate, tr the top-right, bl the bottom-left, and br the bottom-right; α is the horizontal field angle of the human eye and β the vertical field angle.
the step of acquiring the fixation point coordinate of the human eye on the 3D display screen plane through the human eye pupil coordinate comprises the following steps:
(The two gaze-point formulas are reproduced as images in the original publication.)
Here (x, y) is the gaze-point coordinate, (x1, y1) is the pupil coordinate of the human eye, and a0~a5, b0~b5 are unknown parameters obtained by calibration.
By the technical scheme, the technical scheme provided by the embodiment of the invention at least has the following advantages:
the 3D image fusion method and device based on observation and tracking, provided by the embodiment of the invention, are used for acquiring an observation area of an observer on a 3D display screen, wherein the observation area is a projection shearing area of a binocular vision field of the observer and a plane where the 3D display screen is located; determining whether an intersection area of the observation area and the 3D display screen meets a preset intersection threshold value; if the intersection area meets the preset intersection threshold value, calculating the serial number of the current viewpoint image interleaved sampling viewpoint image in the intersection area to obtain the viewpoint serial number and the mantissa thereof; and calculating the viewpoint sequence number to be fused and the fused weight thereof according to the viewpoint sequence number and the mantissa thereof, and fusing and outputting the viewpoint images in the intersection region according to the viewpoint sequence number to be fused and the fused weight thereof until all the viewpoint images in the intersection region are traversed and fused.
The foregoing description is only an overview of the technical solutions of the embodiments of the present invention, and the embodiments of the present invention can be implemented according to the content of the description in order to make the technical means of the embodiments of the present invention more clearly understood, and the detailed description of the embodiments of the present invention is provided below in order to make the foregoing and other objects, features, and advantages of the embodiments of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the embodiments of the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flow chart of a 3D image fusion method based on observation and tracking according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a usage scenario when an observer is far away from a 3D display screen according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating a usage scenario in which the observer is at a suitable distance from the 3D display screen according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a usage scenario in which the observer views only part of the 3D display screen according to an embodiment of the present invention;
fig. 5 is a schematic diagram illustrating a viewpoint image interleaving fusion output (local detail) provided by an embodiment of the present invention;
FIG. 6 is a flow chart of another 3D image fusion method based on observation tracking according to an embodiment of the present invention;
FIG. 7 is a flow chart for acquiring a viewing area of a viewer on a 3D display screen according to an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating a 3D display device according to an embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating an optimized partitioning of a display screen according to an embodiment of the present invention;
FIG. 10 is a block diagram illustrating a 3D image fusion apparatus based on observation tracking according to an embodiment of the present invention;
FIG. 11 is a block diagram illustrating another 3D image fusion device based on observation tracking according to an embodiment of the present invention;
fig. 12 is a diagram illustrating an architecture of an electronic device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiment of the invention provides a 3D image fusion method based on observation and tracking, as shown in FIG. 1, comprising the following steps:
101. an observation area of an observer on a 3D display screen is acquired.
The observation area described in the embodiment of the present invention is the projection clipping area of the observer's binocular field of view onto the plane in which the 3D display screen lies, and three cases are distinguished. For better understanding, the three cases are illustrated: fig. 2 shows a usage scenario in which the observer is far from the 3D display screen, fig. 3 shows a usage scenario in which the observer is at a suitable distance from the 3D display screen, and fig. 4 shows a usage scenario in which the observer views only part of the 3D display screen.
In fig. 2, the observer is far from the 3D display screen (beyond the preset observation range, i.e., the optimal viewing distance); in this application scenario the observer cannot see the 3D composite effect. In fig. 3, the observer is at a suitable distance from the 3D display screen, i.e., the eyes are within the preset observation range (the optimal viewing distance) and the 3D display screen lies entirely within the projection clipping area of the observer's binocular field of view. In fig. 4, the projection clipping area of the observer's binocular field of view is smaller than the 3D display screen, and the observation area intersects only part of the display screen.
102. And determining whether the intersection area of the observation area and the 3D display screen meets a preset intersection threshold value.
The preset intersection threshold in the embodiment of the present invention is an experimental value, determined specifically by the size of the display screen and/or the distance between the observer and the display screen.
Illustratively, referring again to fig. 2: because the distance between the observer and the display screen exceeds the preset observation range, the observation area is far larger than the display screen, and therefore in this application scenario it is determined that the intersection area of the observation area and the 3D display screen does not satisfy the preset intersection threshold.
There is also an application scenario opposite to the above one: when the observer is too close to the display screen, the observation area occupies only a small proportion of the display screen (for example, one thousandth of it), and the observer cannot see the full screen; it can therefore also be determined that the intersection area of the observation area and the 3D display screen does not satisfy the preset intersection threshold.
Fig. 3 and 4 show application scenarios in which the intersection area of the observation area and the 3D display screen does meet the preset intersection threshold. For fig. 4, the positional relationship between the observation area and the 3D display screen is not limited in a specific application: besides the position shown in fig. 4, the observation area may lie at the upper-right corner, lower-left corner, etc. of the 3D display screen, and it may be larger or smaller than shown; in all these cases it can be determined that the intersection area of the observation area and the 3D display screen meets the preset intersection threshold.
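As an illustration, the threshold test could be encoded as below; the distance gate and the minimum area fraction are assumed experimental values, since the patent leaves the threshold to be determined by the screen size and the observer distance.

```python
def intersection_meets_threshold(roi, screen_w, screen_h, eye_distance,
                                 max_distance, min_frac=0.001):
    """roi: (x0, y0, x1, y1) projection clipping area on the screen plane (pixels);
    eye_distance: measured distance l of the eyes from the screen plane.
    min_frac and the distance gate are illustrative experimental values."""
    if eye_distance >= max_distance:          # observer beyond the preset observation range
        return False
    x0, y0 = max(0.0, roi[0]), max(0.0, roi[1])
    x1, y1 = min(float(screen_w), roi[2]), min(float(screen_h), roi[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    return inter / (screen_w * screen_h) >= min_frac   # rules out the "too close" case
```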
103. If the intersection area meets the preset intersection threshold, calculate the viewpoint serial number sampled by the current viewpoint-image interleaving in the intersection area to obtain the viewpoint serial number and its mantissa.
In the embodiment of the present invention, only the viewpoint images in the intersection region are fused (step 103 to step 104), so as to save system resource consumption. Taking the lenticular grating as an example, the process of the viewpoint image interweaving and fusing is as follows:
1) performing image sampling, interweaving and fusion pixel by pixel according to the resolution supported by the 3D display screen;
2) acquiring the viewpoint serial number sampled by the current viewpoint-image interleaving according to formula (1), which is reproduced as an image in the original publication;
where i is the LCD pixel column index, from 0 to the horizontal-resolution maximum Hmax-1; j is the LCD pixel row index, from 0 to the vertical-resolution maximum Vmax-1; k is the LCD sub-pixel index, taking values 0, 1, 2 for an RGB arrangement and 2, 1, 0 for a BGR arrangement; p0 is a parameter related to the initial offset phase between the grating and the LCD pixels; p1 is a parameter related to the tangent of the grating tilt; p2 is a parameter related to the viewpoint-image arrangement period; p3 is a parameter related to the initial offset phase between the grating and the pixels and to the observer's eye position; and n is the number of viewpoints.
Formula (1) yields the viewpoint ordinal Vn and its mantissa (fractional part).
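Because formula (1) is reproduced only as an image in the published text, the sketch below substitutes a generic slanted-lenticular interleaving expression that merely reuses the parameter names i, j, k, p0~p3 and n; its algebraic form is an assumption and must not be read as the published formula.

```python
import math

def viewpoint_ordinal(i, j, k, p0, p1, p2, p3, n):
    """Stand-in for formula (1): i is the LCD column, j the row, k the sub-pixel (0..2).
    Returns a float whose integer part is the viewpoint serial number Vn and whose
    fractional part is the mantissa. The algebraic form below is an assumption only."""
    phase = (i * 3 + k + p0 + j * p1 + p3) / p2   # sub-pixel phase under the slanted grating
    v = math.fmod(phase, n)
    return v + n if v < 0 else v
```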
104. And calculating the viewpoint sequence number to be fused and the fused weight thereof according to the viewpoint sequence number and the mantissa thereof, and fusing and outputting the viewpoint images in the intersection area according to the viewpoint sequence number to be fused and the fused weight thereof until all the viewpoint images in the intersection area are traversed and fused.
From the viewpoint ordinal Vn and its mantissa (fractional part) calculated by formula (1), the viewpoint-image sequence number and the sampling weight w of the current sample are determined; the viewpoint-image sequence number to be fused (Vn2 = Vn + 1) and its fusion weight (w2 = 1 - w) are then calculated, and the two viewpoint images are fused. The fusion output is the gray-scale output of the current pixel. Traversing the pixels in sequence completes the interleaving and fusion of the whole viewpoint image.
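In code, this per-sub-pixel fusion might look as follows (a sketch; taking w = 1 - mantissa for the current viewpoint is an assumption consistent with w2 = 1 - w):

```python
def fuse_subpixel(view_imgs, i, j, k, vn, frac):
    """Blend the current viewpoint image Vn with Vn2 = Vn + 1. view_imgs: list of
    HxWx3 arrays; vn: viewpoint serial number; frac: its mantissa."""
    n = len(view_imgs)
    w = 1.0 - frac                      # sampling weight of the current viewpoint (assumed)
    vn2, w2 = (vn + 1) % n, 1.0 - w     # viewpoint to be fused and its fusion weight
    return w * view_imgs[vn % n][j, i, k] + w2 * view_imgs[vn2][j, i, k]
```

Applying this to every sub-pixel of the intersection area completes the traversal of step 104.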
For convenience of understanding, fig. 5 is a schematic diagram illustrating a viewpoint image interleaved fusion output (local detail) provided by an embodiment of the present invention.
Further, when the step 102 is executed to determine whether the intersection area of the observation area and the 3D display screen meets the preset intersection threshold, the following two methods may be included, but not limited to:
when the distance between the observer and the 3D display screen is smaller than a preset observation range and the 3D display screen is in the observation area, determining that a first intersection area exists between the observation area and the 3D display screen; (application scenario represented in FIG. 3)
Or when the distance between the observer and the 3D display screen is smaller than a preset observation range and the observation area is smaller than the 3D display screen, determining that the observation area and the 3D display screen have a second intersection area (the application scene represented by fig. 4).
In practical applications, the scenario of fig. 3 occurs less frequently than that of fig. 4: when an observer uses the 3D display device normally, in most cases attention is paid only to a certain area of the display screen (the scenario of fig. 4), and for that scenario the calculation amount is much smaller than for the scenario shown in fig. 3.
It should be noted that the first and second mentioned above are only for distinguishing different intersection areas, and do not represent concepts such as priority.
In the following embodiments, taking an example that the observation area and the 3D display screen intersect in the second intersection area, before performing fusion output on the viewpoint images in the intersection area according to the to-be-fused viewpoint sequence number and the fusion weight thereof, as shown in fig. 6, the method further includes:
201. Acquiring an observation area of an observer on the 3D display screen, wherein the observation area is the projection clipping area of the observer's binocular field of view onto the plane in which the 3D display screen lies.
Including but not limited to the following methods, as shown in fig. 7, including:
2011. and determining the three-dimensional coordinates of the eyes and the coordinates of the pupils of the eyes of the observer through a TOF camera, an RGB camera and an infrared lamp on the 3D display device.
This viewpoint-image interleaving and fusion calculation is applicable to autostereoscopic (naked-eye) 3D display devices; the device needs to support eye tracking, three-dimensional eye-coordinate measurement and eye-movement tracking. For a specific installation, refer to fig. 8, which shows a schematic diagram of the 3D display device provided by the embodiment of the invention. The TOF camera and the RGB camera are used for face detection of the observer, three-dimensional eye-coordinate measurement and measurement of the eye pupil coordinates (x1, y1); the infrared lamp and the RGB camera are used for eye-movement tracking.
The specific implementation of determining the three-dimensional eye coordinates and the eye pupil coordinates of the observer may follow any implementation in the prior art, and is not repeated here.
2012. And acquiring the distance of the human eyes relative to the 3D display screen plane and the intersection point coordinate of the human eyes and the 3D display screen plane according to the three-dimensional coordinates of the human eyes.
Whether a person is using the 3D display device can be obtained through face detection; the gaze-point coordinates (x, y) of the eyes on the display-screen plane can be obtained from the eye pupil coordinates; and the distance l of the eyes relative to the 3D display-screen plane is obtained from the three-dimensional eye coordinates.
2013. Acquiring the fixation point coordinate of human eyes on the plane of the 3D display screen through the pupil coordinate of the human eyes;
the calculation method of the fixation point coordinate (x, y) comprises the following steps:
(The two gaze-point formulas are reproduced as images in the original publication.)
Here (x1, y1) is the pupil coordinate of the human eye, and a0~a5, b0~b5 are unknown parameters obtained by calibration.
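The published gaze-point formulas appear only as images; the sketch below assumes a second-order polynomial in the pupil coordinates, which is one common mapping consistent with six calibration parameters per axis, but the exact terms should be taken from the original formulas.

```python
def gaze_point(x1, y1, a, b):
    """Map pupil coordinates (x1, y1) to a gaze point (x, y) on the screen plane.
    a, b: sequences a0..a5 and b0..b5 obtained by calibration. The polynomial terms
    are an assumed form, not the published expression."""
    terms = (1.0, x1, y1, x1 * y1, x1 * x1, y1 * y1)
    x = sum(ai * t for ai, t in zip(a, terms))
    y = sum(bi * t for bi, t in zip(b, terms))
    return x, y
```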
2014. Acquiring the field angles of human eyes in the vertical direction and the horizontal direction, and calculating the vertex coordinates of a projection clipping area of an observer on the plane of the 3D display screen to confirm the observation area of the observer on the 3D display screen;
assuming that the horizontal field angle of the human eye is alpha, the vertical field angle is beta, and the distance measured between the human eye and the plane of the display screen is l, the vertex coordinates of the observation region ROI can be calculated by the following formula,
(The four vertex-coordinate formulas are reproduced as images in the original publication.)
Here tl is the top-left vertex coordinate, tr the top-right, bl the bottom-left, and br the bottom-right; α is the horizontal field angle of the human eye and β the vertical field angle.
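The four vertex formulas are likewise published as images; the sketch below assumes the natural construction of offsetting the on-screen gaze/intersection point by l·tan(α/2) horizontally and l·tan(β/2) vertically, which matches the quantities named in the text but is an assumption, not the published expression.

```python
import math

def observation_area_vertices(x, y, l, alpha, beta):
    """x, y: gaze/intersection point on the screen plane; l: eye-to-screen distance;
    alpha, beta: horizontal and vertical field angles in radians.
    Returns tl, tr, bl, br under the assumed half-angle tangent construction."""
    dx = l * math.tan(alpha / 2.0)
    dy = l * math.tan(beta / 2.0)
    tl = (x - dx, y + dy)   # top-left vertex
    tr = (x + dx, y + dy)   # top-right vertex
    bl = (x - dx, y - dy)   # bottom-left vertex
    br = (x + dx, y - dy)   # bottom-right vertex
    return tl, tr, bl, br
```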
202. And determining that the observation area and the 3D display screen intersect at the second intersection area.
For more intuitive understanding, please refer to fig. 9, where fig. 9 illustrates a schematic diagram of optimally dividing a display screen according to an embodiment of the present invention, and a display area of a 3D display screen is divided into three parts: a region of interest, a transition region, and a region of no interest;
the reason why the non-attention area exists may be that an observer is not detected (a human face is not detected), or the observer is far away from the 3D display device, or a cut-out area of the observer's eyes in the plane of the display screen is outside the display screen, that is, when it is determined that the observation area does not have an intersection area with the 3D display screen, the 3D display apparatus does not perform any image interleaving and blending process, and directly outputs a 2D viewpoint image in the 3D display screen.
In the attention area, the method shown in fig. 1 can be directly adopted to perform global viewpoint image interleaving and fusion calculation.
In the transition region, in order to adapt to the difference between different observers and reduce the influence of the line of sight movement, the optimization of the viewpoint image in the region needs to be continued, specifically step 203.
203. And performing transition processing on the viewpoint images of the transition area.
A transition area is inserted between the non-attention area and the attention area to buffer the image; the buffering includes a gradual, decreasing color change.
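One way to realize this buffering, assuming a simple linear fade across the transition band:

```python
def transition_weight(dist_to_attention_edge, band_width):
    """Weight of the attention-area (3D) content inside the transition band: 1.0 at the
    attention-area edge, decaying linearly to 0.0 at the non-attention edge."""
    if band_width <= 0:
        return 1.0
    t = min(max(dist_to_attention_edge / float(band_width), 0.0), 1.0)
    return 1.0 - t
```

A pixel in the band is then output as w·(fused 3D content) + (1 - w)·(2D content), so the image decays smoothly toward the non-attention area.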
204. And aiming at the viewpoint images in the attention area, performing interval fusion output according to the viewpoint sequence number to be fused and the fusion weight thereof.
Unlike the global fusion process shown in fig. 1, pixel-by-pixel traversal is not performed in the attention area; instead, processing is performed at intervals, and the closer a region is to the non-attention area, the more the weight of the non-attention-area output viewpoint image is increased.
205. And performing linear viewpoint image interpolation processing on the viewpoint images which are not subjected to interval fusion output processing in the attention area.
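A sketch of the interval (strided) fusion with linear interpolation of the skipped columns follows; the stride of 2 and the column-wise interpolation are illustrative assumptions.

```python
def interval_fuse_row(out, j, x0, x1, fuse_subpixel_fn, stride=2):
    """out: HxWx3 NumPy array; fuses every `stride`-th column of row j in [x0, x1)
    via fuse_subpixel_fn(i, j, k) and linearly interpolates the skipped columns."""
    cols = list(range(x0, x1, stride))
    for i in cols:
        for k in range(3):
            out[j, i, k] = fuse_subpixel_fn(i, j, k)
    for a, b in zip(cols, cols[1:]):          # fill the gap between fused columns
        for i in range(a + 1, b):
            t = (i - a) / float(b - a)
            out[j, i, :] = (1.0 - t) * out[j, a, :] + t * out[j, b, :]
    return out
```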
Since the observation and tracking based 3D image fusion device described in this embodiment is a device that can execute the observation and tracking based 3D image fusion method of the embodiment of the present invention, a person skilled in the art can, based on that method, understand the specific implementation of the device and its variations; how the device implements the method is therefore not described in detail here. Any device used by a person skilled in the art to implement the observation and tracking based 3D image fusion method in the embodiments of the present invention falls within the scope intended to be protected by the present application.
An embodiment of the present invention further provides an observation and tracking based 3D image fusion apparatus, as shown in fig. 10, including:
the acquiring unit 31 is configured to acquire an observation area of an observer on the 3D display screen, where the observation area is the projection clipping area of the observer's binocular field of view onto the plane in which the 3D display screen lies;
a determining unit 32, configured to determine whether the intersection area of the observation area acquired by the acquiring unit 31 and the 3D display screen satisfies a preset intersection threshold;
a first calculating unit 33, configured to calculate, when the determining unit 32 determines that the intersection area meets the preset intersection threshold, the viewpoint serial number sampled by the current viewpoint-image interleaving in the intersection area, so as to obtain the viewpoint serial number and its mantissa;
a second calculating unit 34, configured to calculate, according to the viewpoint sequence numbers and their mantissas, viewpoint sequence numbers to be fused and their fusion weights;
and the processing unit 35 is configured to perform fusion output on the viewpoint images in the intersection region according to the to-be-fused viewpoint sequence number and the fused weight thereof until all the viewpoint images in the intersection region are traversed and fused.
Further, as shown in fig. 11, the determining unit 32 includes:
a first determining module 321, configured to determine that a first intersection region exists between the observation region and the 3D display screen when a distance between the observer and the 3D display screen is smaller than a preset observation range and the 3D display screen is within the observation region;
a second determining module 322, configured to determine that the observation area and the 3D display screen have a second intersection area when the distance between the observer and the 3D display screen is smaller than a preset observation range and the observation area is smaller than the 3D display screen.
Further, as shown in fig. 11, the viewing area intersects the 3D display screen at the second intersection area,
the device further comprises:
a dividing unit 36, configured to divide the display area of the 3D display screen into three parts before the processing unit 35 performs fusion output on the viewpoint images in the intersection area according to the to-be-fused viewpoint sequence number and the fusion weight thereof: a region of interest, a transition region, and a region of no interest;
a transition unit 37, configured to perform transition processing on the viewpoint images of the transition region;
the processing unit 35 includes:
the first processing module 351 is configured to perform interval fusion output on the viewpoint images in the attention area according to the viewpoint sequence numbers to be fused and the fusion weights thereof;
and a second processing module 352, configured to apply linear viewpoint image interpolation processing to the viewpoint images that are not subjected to the interval fusion output processing in the attention area.
Further, as shown in fig. 11, the apparatus further includes:
an output unit 38 for outputting a 2D view image in the 3D display screen when it is determined that the observation area does not have an intersection area with the 3D display screen.
Further, as shown in fig. 11, the acquiring unit 31 includes:
the determining module 311 is configured to determine three-dimensional coordinates of human eyes and coordinates of pupils of human eyes of an observer through a TOF camera, an RGB camera, and an infrared lamp on the 3D display device;
the first obtaining module 312 is configured to obtain, according to the three-dimensional coordinates of the human eyes, a distance between the human eyes and the plane of the 3D display screen and coordinates of an intersection point between the human eyes and the plane of the 3D display screen;
the second obtaining module 313 is configured to obtain a fixation point coordinate of a human eye on the 3D display screen plane through the human eye pupil coordinate;
a third obtaining module 314, configured to obtain the angles of field of the human eyes in the vertical direction and the horizontal direction;
and the calculating module 315 is configured to calculate vertex coordinates of the projection clipping area of the observer on the plane of the 3D display screen, so as to confirm the observation area of the observer on the 3D display screen.
Further, as shown in fig. 11, the first calculating unit 33 calculates the viewpoint serial number sampled by the current viewpoint-image interleaving in the intersection area with the following formula (reproduced as an image in the original publication), where:
i is the LCD pixel column index, from 0 to the horizontal-resolution maximum Hmax-1;
j is the LCD pixel row index, from 0 to the vertical-resolution maximum Vmax-1;
k is the LCD sub-pixel index, taking values 0, 1, 2 for an RGB arrangement and 2, 1, 0 for a BGR arrangement;
p0 is a parameter related to the initial offset phase between the grating and the LCD pixels;
p1 is a parameter related to the tangent of the grating tilt;
p2 is a parameter related to the viewpoint-image arrangement period;
p3 is a parameter related to the initial offset phase between the grating and the pixels and to the observer's eye position;
n is the number of viewpoints.
Further, as shown in fig. 11, the calculating module 315 is further configured to calculate vertex coordinates of the projection clipping area of the observer on the plane of the 3D display screen, including:
(The four vertex-coordinate formulas are reproduced as images in the original publication.)
Here tl is the top-left vertex coordinate, tr the top-right, bl the bottom-left, and br the bottom-right; α is the horizontal field angle of the human eye and β the vertical field angle.
the step of acquiring the fixation point coordinate of the human eye on the 3D display screen plane through the human eye pupil coordinate comprises the following steps:
(The two gaze-point formulas are reproduced as images in the original publication.)
Here (x, y) is the gaze-point coordinate, (x1, y1) is the pupil coordinate of the human eye, and a0~a5, b0~b5 are unknown parameters obtained by calibration.
An embodiment of the present invention provides an electronic device (3D display apparatus), as shown in fig. 12, including: at least one processor 41; at least one memory 42; and a bus 43 connected to the processor 41; wherein:
the processor 41 and the memory 42 complete mutual communication through the bus 43;
the processor 41 is configured to call program instructions in the memory 42 to perform the steps in the above-described method embodiments.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the methods provided by the method embodiments described above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (9)

1. A3D image fusion method based on observation tracking is characterized by comprising the following steps:
acquiring an observation area of an observer on a 3D display screen, wherein the observation area is the projection clipping area of the observer's binocular field of view onto the plane in which the 3D display screen lies;
determining whether an intersection area of the observation area and the 3D display screen meets a preset intersection threshold value;
if the intersection area meets the preset intersection threshold, calculating the viewpoint serial number sampled by the current viewpoint-image interleaving in the intersection area to obtain the viewpoint serial number and the mantissa thereof;
calculating the viewpoint sequence number to be fused and the fused weight thereof according to the viewpoint sequence number and the mantissa thereof, and fusing and outputting the viewpoint images in the intersection area according to the viewpoint sequence number to be fused and the fused weight thereof until all the viewpoint images in the intersection area are traversed and fused;
outputting a 2D viewpoint image in the 3D display screen when it is determined that the viewing area does not have an intersection area with the 3D display screen.
2. The method of claim 1, wherein determining whether an intersection area of the viewing area and the 3D display screen meets a preset intersection threshold comprises:
when the distance between the observer and the 3D display screen is smaller than a preset observation range and the 3D display screen is in the observation area, determining that a first intersection area exists between the observation area and the 3D display screen;
or when the distance between the observer and the 3D display screen is smaller than a preset observation range and the observation area is smaller than the 3D display screen, determining that the observation area and the 3D display screen have a second intersection area.
3. The method of claim 2, wherein the observation area and the 3D display screen intersect at the second intersection area,
before the viewpoint images in the intersection area are fused and output according to the viewpoint sequence numbers to be fused and their fusion weights, the method further comprises:
dividing the display area of the 3D display screen into three parts: a region of interest, a transition region, and a region of no interest;
performing transition processing on the viewpoint images of the transition region;
and the fusing and outputting of the viewpoint images in the intersection area according to the viewpoint sequence numbers to be fused and their fusion weights comprises:
for the viewpoint images in the region of interest, performing interval fusion output according to the viewpoint sequence numbers to be fused and their fusion weights;
and performing linear viewpoint image interpolation processing on the viewpoint images in the region of interest that are not processed by the interval fusion output.
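For illustration, the sketch below divides the screen into a region of interest around the gaze point, a transition band and a non-interest region, and shows an "interval fusion" of every second pixel of a row followed by linear interpolation of the skipped pixels. The circular regions, the band widths and the every-other-pixel interval are assumptions, not the patent's parameters.

import numpy as np

def region_masks(height, width, gaze_rc, roi_radius, band):
    """Boolean masks for the region of interest, the transition band and the rest."""
    rows, cols = np.indices((height, width))
    dist = np.hypot(rows - gaze_rc[0], cols - gaze_rc[1])
    roi = dist <= roi_radius
    transition = (dist > roi_radius) & (dist <= roi_radius + band)
    return roi, transition, ~(roi | transition)

def interval_fuse_row(mantissa, view_a, view_b):
    """Fuse every second pixel of a row, then linearly interpolate the skipped pixels."""
    out = np.empty_like(view_a, dtype=float)
    out[::2] = (1.0 - mantissa[::2]) * view_a[::2] + mantissa[::2] * view_b[::2]
    out[1:-1:2] = 0.5 * (out[0:-2:2] + out[2::2])   # linear interpolation of skipped pixels
    if out.size % 2 == 0:
        out[-1] = out[-2]                           # last pixel has no right-hand neighbour
    return out

roi, transition, rest = region_masks(9, 9, gaze_rc=(4, 4), roi_radius=2, band=2)
print(int(roi.sum()), int(transition.sum()), int(rest.sum()))   # pixel counts of the three regions
print(interval_fuse_row(np.full(9, 0.25), np.zeros(9), np.ones(9)))   # 0.25 everywhere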
4. The method of any one of claims 1 to 3, wherein acquiring the observation area of the observer on the 3D display screen comprises:
determining the three-dimensional coordinates of human eyes and the coordinates of pupils of human eyes of an observer through a TOF camera, an RGB camera and an infrared lamp on a 3D display device;
acquiring the distance of the human eyes relative to the plane of the 3D display screen and the intersection point coordinate of the human eyes and the plane of the 3D display screen according to the three-dimensional coordinates of the human eyes;
acquiring the fixation point coordinate of human eyes on the plane of the 3D display screen through the pupil coordinate of the human eyes;
and acquiring the field angles of the human eye in the vertical and horizontal directions, and calculating the vertex coordinates of the observer's projection clipping area on the plane of the 3D display screen, so as to determine the observation area of the observer on the 3D display screen.
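Purely as a geometric illustration of the quantities listed in claim 4, the sketch below assumes the 3D display screen lies in the plane z = 0 of the tracking camera's coordinate frame, so that the eye-to-screen distance is the absolute z coordinate of the eye and the intersection point is the eye position projected onto that plane. The coordinate convention and all names are assumptions of this sketch.

def eye_plane_geometry(eye_xyz):
    """Distance from the eye to the screen plane z = 0 and the perpendicular intersection point."""
    ex, ey, ez = eye_xyz
    distance = abs(ez)          # perpendicular distance to the screen plane
    intersection = (ex, ey)     # foot of the perpendicular on the screen plane
    return distance, intersection

print(eye_plane_geometry((0.12, -0.05, 0.65)))   # (0.65, (0.12, -0.05))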
5. The method of claim 4, wherein the interleaved-sampling viewpoint image sequence number of the current viewpoint image in the intersection area is calculated using the following formula:
[Formula — published in the patent as image FDA0003396916780000021; not reproduced in this text]
wherein i represents the LCD pixel column index, ranging from 0 to the maximum horizontal resolution Hmax-1;
j represents the LCD pixel row index, ranging from 0 to the maximum vertical resolution Vmax-1;
k represents the LCD sub-pixel index, taking the values 0, 1, 2 for an RGB sub-pixel arrangement and 2, 1, 0 for a BGR arrangement;
p0 represents a parameter related to the initial offset phase between the grating and the LCD pixels;
p1 represents a parameter related to the tangent of the grating tilt;
p2 represents a parameter related to the viewpoint map arrangement period;
p3 represents a parameter related to the initial offset phase between the grating and the pixels, which depends on the observer's eye position;
n represents the number of viewpoints.
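The formula of claim 5 is published only as an image and is not reproduced in this text. For illustration, the sketch below uses a conventional slanted-lenticular interleaving expression that is consistent with the parameters listed above (column i, row j, sub-pixel k, phase parameters p0, p1, p3, period parameter p2, and n viewpoints); the exact expression in the patent may differ, so this is an assumed reconstruction, not the claimed formula.

def viewpoint_number(i, j, k, p0, p1, p2, p3, n):
    """Real-valued viewpoint number for LCD column i, row j, sub-pixel k (assumed form)."""
    phase = (3 * i + k + p0 + p1 * j + p3) % p2   # sub-pixel phase under the slanted grating
    return n * phase / p2                         # map the phase onto the n viewpoints

q = viewpoint_number(i=100, j=50, k=1, p0=0.0, p1=0.3, p2=6.0, p3=0.0, n=28)
print(int(q), round(q - int(q), 3))               # viewpoint sequence number and its mantissa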
6. The method of claim 4, wherein calculating the vertex coordinates of the observer's projection clipping area on the plane of the 3D display screen comprises:
[Formulas for the four vertex coordinates tl, tr, bl and br — published in the patent as images FDA0003396916780000022, FDA0003396916780000031, FDA0003396916780000032 and FDA0003396916780000033; not reproduced in this text]
the system comprises a display screen, a tl vertex coordinate at the upper left corner, a tr vertex coordinate at the upper right corner, a bl vertex coordinate at the lower left corner, a br vertex coordinate at the lower right corner, a human eye horizontal field angle alpha, a human eye vertical field angle beta, a measured distance between a human eye and the plane of the display screen l, and a human eye pupil coordinate (x)1,y1);
The step of acquiring the fixation point coordinate of the human eye on the 3D display screen plane through the human eye pupil coordinate comprises the following steps:
[Formulas for the fixation point coordinates x and y — published in the patent as images FDA0003396916780000034 and FDA0003396916780000035; not reproduced in this text]
wherein (x, y) are the fixation point coordinates, and a0~a5 and b0~b5 are unknown parameters obtained through correction and calibration.
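The formulas of claim 6 are likewise published only as images. The sketch below shows one plausible reading that is consistent with the quantities named in the claim: the four clipping-area vertices placed symmetrically around the fixation point using the half field angles and the eye-to-screen distance, and a second-order polynomial with coefficients a0~a5 and b0~b5 mapping pupil coordinates to the fixation point. Both expressions are assumptions for illustration and may differ from the patented formulas.

import math

def fixation_point(x1, y1, a, b):
    """Map pupil coordinates (x1, y1) to a screen fixation point with calibrated coefficients."""
    terms = [1.0, x1, y1, x1 * y1, x1 * x1, y1 * y1]
    x = sum(ai * t for ai, t in zip(a, terms))    # a = (a0, ..., a5)
    y = sum(bi * t for bi, t in zip(b, terms))    # b = (b0, ..., b5)
    return x, y

def clipping_vertices(x, y, l, alpha, beta):
    """Vertices of the projection clipping area around the fixation point (x, y)."""
    dx = l * math.tan(alpha / 2.0)                # half-width from the horizontal field angle
    dy = l * math.tan(beta / 2.0)                 # half-height from the vertical field angle
    tl, tr = (x - dx, y + dy), (x + dx, y + dy)
    bl, br = (x - dx, y - dy), (x + dx, y - dy)
    return tl, tr, bl, br

x, y = fixation_point(0.4, -0.2, a=(0.1, 1.0, 0.0, 0.0, 0.0, 0.0), b=(0.0, 0.0, 1.0, 0.0, 0.0, 0.0))
print(clipping_vertices(x, y, l=0.6, alpha=math.radians(60), beta=math.radians(40)))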
7. An observation tracking-based 3D image fusion apparatus, comprising:
an acquisition unit, used for acquiring an observation area of an observer on a 3D display screen, wherein the observation area is the projection clipping area of the observer's binocular field of view on the plane in which the 3D display screen is located;
a determining unit, used for determining whether the intersection area of the observation area acquired by the acquisition unit and the 3D display screen meets a preset intersection threshold, and for outputting a 2D viewpoint image on the 3D display screen when it is determined that the observation area has no intersection area with the 3D display screen;
a first calculation unit, used for calculating, when the determining unit determines that the intersection area meets the preset intersection threshold, the interleaved-sampling viewpoint image sequence number of the current viewpoint image in the intersection area to obtain the viewpoint sequence number and its mantissa;
a second calculation unit, used for calculating the viewpoint sequence numbers to be fused and their fusion weights according to the viewpoint sequence number and its mantissa;
and a processing unit, used for fusing and outputting the viewpoint images in the intersection area according to the viewpoint sequence numbers to be fused and their fusion weights, until all viewpoint images in the intersection area have been traversed and fused.
8. The apparatus of claim 7, wherein the determining unit comprises:
the first determining module is used for determining that a first intersection area exists between the observation area and the 3D display screen when the distance between the observer and the 3D display screen is smaller than a preset observation range and the 3D display screen is in the observation area;
and the second determining module is used for determining that a second intersection area exists between the observation area and the 3D display screen when the distance between the observer and the 3D display screen is smaller than a preset observation range and the observation area is smaller than the 3D display screen.
9. The apparatus of claim 8, wherein the observation area and the 3D display screen intersect at the second intersection area,
the device further comprises:
the dividing unit is used for dividing the display area of the 3D display screen into three parts before the processing unit performs fusion output on the viewpoint images in the intersection area according to the viewpoint sequence number to be fused and the fusion weight thereof: a region of interest, a transition region, and a region of no interest;
a transition unit, configured to perform transition processing on the viewpoint image of the transition region;
the processing unit includes:
the first processing module is used for performing interval fusion output on the viewpoint images in the region of interest according to the viewpoint sequence numbers to be fused and their fusion weights;
and the second processing module is used for performing linear viewpoint image interpolation processing on the viewpoint images in the region of interest that are not processed by the interval fusion output.
CN202010554091.9A 2020-06-17 2020-06-17 3D image fusion method and device based on observation and tracking Active CN111683238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010554091.9A CN111683238B (en) 2020-06-17 2020-06-17 3D image fusion method and device based on observation and tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010554091.9A CN111683238B (en) 2020-06-17 2020-06-17 3D image fusion method and device based on observation and tracking

Publications (2)

Publication Number Publication Date
CN111683238A CN111683238A (en) 2020-09-18
CN111683238B (en) 2022-02-18

Family

ID=72436054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010554091.9A Active CN111683238B (en) 2020-06-17 2020-06-17 3D image fusion method and device based on observation and tracking

Country Status (1)

Country Link
CN (1) CN111683238B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113079364A (en) * 2021-03-24 2021-07-06 纵深视觉科技(南京)有限责任公司 Three-dimensional display method, device, medium and electronic equipment for static object

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106531073A (en) * 2017-01-03 2017-03-22 京东方科技集团股份有限公司 Processing circuit of display screen, display method and display device
CN109472855A (en) * 2018-11-16 2019-03-15 青岛海信电器股份有限公司 A kind of object plotting method, device and smart machine
CN111290581A (en) * 2020-02-21 2020-06-16 京东方科技集团股份有限公司 Virtual reality display method, display device and computer readable medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10805530B2 (en) * 2017-10-30 2020-10-13 Rylo, Inc. Image processing for 360-degree camera


Also Published As

Publication number Publication date
CN111683238A (en) 2020-09-18

Similar Documents

Publication Publication Date Title
US9826225B2 (en) 3D image display method and handheld terminal
CN103595986B (en) Stereoscopic image display device, image processing device, and image processing method
EP3350989B1 (en) 3d display apparatus and control method thereof
EP2693759B1 (en) Stereoscopic image display device, image processing device, and stereoscopic image processing method
CN108090942B (en) Three-dimensional rendering method and apparatus for eyes of user
US20050265619A1 (en) Image providing method and device
KR20190029331A (en) Image processing method and apparatus for autostereoscopic three dimensional display
US9154765B2 (en) Image processing device and method, and stereoscopic image display device
KR101663672B1 (en) Wide viewing angle naked eye 3d image display method and display device
CN108989785B (en) Naked eye 3D display method, device, terminal and medium based on human eye tracking
JP5248709B2 (en) 3D image display apparatus and method
KR20150121127A (en) Binocular fixation imaging method and apparatus
CN108881893A (en) Naked eye 3D display method, apparatus, equipment and medium based on tracing of human eye
KR980004175A (en) Stereoscopic Computer Graphics Video Generator
US20200134912A1 (en) Three-dimensional (3d) image rendering method and apparatus
US20180184066A1 (en) Light field retargeting for multi-panel display
EP3526639A1 (en) Display of visual data with a virtual reality headset
US20170359562A1 (en) Methods and systems for producing a magnified 3d image
CN109978945B (en) Augmented reality information processing method and device
CN111683238B (en) 3D image fusion method and device based on observation and tracking
CN107483915B (en) Three-dimensional image control method and device
JP2011211551A (en) Image processor and image processing method
US20130342536A1 (en) Image processing apparatus, method of controlling the same and computer-readable medium
WO2012176526A1 (en) Stereoscopic image processing device, stereoscopic image processing method, and program
EP3691249A1 (en) Image signal representing a scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant