WO2022142146A1 - Image quality adjustment method and device, projector, and computer-readable storage medium - Google Patents

Image quality adjustment method and device, projector, and computer-readable storage medium

Info

Publication number
WO2022142146A1
WO2022142146A1, PCT/CN2021/098979, CN2021098979W
Authority
WO
WIPO (PCT)
Prior art keywords
pupil
center
distance
human eye
position information
Prior art date
Application number
PCT/CN2021/098979
Other languages
English (en)
French (fr)
Inventor
陈昌陶
冉鹏
王鑫
Original Assignee
成都极米科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 成都极米科技股份有限公司
Publication of WO2022142146A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Definitions

  • the present invention relates to the field of image processing, and in particular, to an image quality adjustment method, device, projector and computer-readable storage medium.
  • the purpose of the embodiments of the present invention is to provide an image quality adjustment method, device, projector, and computer-readable storage medium, so as to improve the image quality of the projector and enable users to obtain an ultra-high-resolution visual experience.
  • in the first aspect, an embodiment of the present invention provides an image quality adjustment method, which is applied to a projector, where the projector includes at least two front cameras and at least two rear cameras, and the method includes: determining position information of a projection point according to the projection image collected by the front cameras; determining pupil center position information of a face image according to the face image collected by the rear cameras; calculating, according to the position information of the projection point, preset parameters and the pupil center position information, the distance from the center of the area of interest of the human eye to the projection point, wherein the area of interest of the human eye is the area that the human eye pays attention to in the projection screen; and adjusting the display resolution of the area of interest of the human eye using super-resolution image generation technology.
  • in the second aspect, an embodiment of the present invention provides an image quality adjustment device, which is applied to a projector.
  • the projector includes at least two front cameras and at least two rear cameras, and the device includes a determination module, a calculation module and an adjustment module.
  • the determination module is used to determine the position information of the projection point according to the projection image collected by the front cameras, and is also used to determine the pupil center position information of the face image according to the face image collected by the rear cameras; the calculation module is used to calculate the distance from the center of the area of interest of the human eye to the projection point according to the position information of the projection point, the preset parameters and the pupil center position information, wherein the area of interest of the human eye is the area that the human eye pays attention to in the projection screen; and the adjustment module is used to adjust the display resolution of the area of interest of the human eye by adopting super-resolution image generation technology.
  • in the third aspect, an embodiment of the present invention further provides a projector, the projector includes: a front camera, a rear camera, a processor and a memory; the memory is used to store a program, and when the program is executed by the processor, the processor is made to implement the image quality adjustment method described in the first aspect.
  • in the fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the image quality adjustment method described in the first aspect is implemented.
  • the embodiments of the present invention provide an image quality adjustment method, device, projector and computer-readable storage medium.
  • first, the position information of the projection point is determined according to the projection image collected by the front cameras; then, the pupil center position information of the face image is determined according to the face image collected by the rear cameras; next, the distance from the center of the area of interest of the human eye to the projection point is calculated according to the position information of the projection point, the preset parameters and the pupil center position information; and finally, super-resolution image generation technology is used to adjust the display resolution of the area of interest of the human eye.
  • the position of the center of the area of interest of the human eye is determined by the distance from the center of the area of interest of the human eye to the projection point, thereby further determining the area of interest of the human eye, and by improving the resolution of the area of interest of the human eye, the image quality of the projector is improved, enabling users to obtain the technical effect of an ultra-high-resolution visual experience.
  • FIG. 1 shows a schematic block diagram of the steps of the image quality adjustment method provided by the embodiment of the present application
  • FIG. 2 shows a schematic block diagram of a sub-step flow of determining pupil center position information provided by an embodiment of the present application
  • Fig. 3 shows the geometrical relationship diagram of the pupil center, the projector and the projection screen provided by the embodiment of the present application
  • FIG. 4 shows a schematic block diagram of a sub-step flow for calculating the distance from the center of the area of interest to the human eye to the projection point provided by an embodiment of the present application;
  • FIG. 5 shows a schematic block diagram of a sub-step flow of adjusting display resolution provided by an embodiment of the present application
  • FIG. 6 shows a schematic block diagram of the structure of an image quality adjustment apparatus provided by an embodiment of the present application.
  • 100-projector; 101-first front camera; 102-second front camera; 103-first rear camera; 104-second rear camera; 105-first front camera imaging plane; 106-second front camera imaging plane; 107-first rear camera imaging plane; 108-second rear camera imaging plane; 109-left pupil center; 110-right pupil center; 111-center of the area of interest of the human eye; 112-projection point; 200-image quality adjustment device; 210-determination module; 220-calculation module; 230-adjustment module.
  • FIG. 1 shows a schematic block diagram of the steps of the image quality adjustment method provided by the embodiment of the present application.
  • the image quality adjustment method provided by the embodiment of the present application is applied to a projector, where the projector includes at least two front cameras and at least two rear cameras, and the method includes S110 to S140 .
  • S110 Determine the position information of the projection point according to the projection image collected by the front camera.
  • the projection point is any point on the screen projected by the projector, and should not be understood as the light source point.
  • preferably, the projection point is the point where the perpendicular bisector of the line connecting the two front cameras intersects the projection screen.
  • the projection image collected by the front camera is two-dimensional lattice data, and the position information of the projection point is its row and column indices in the projection image.
  • S120 Determine the pupil center position information of the face image according to the face image collected by the rear camera.
  • the face image is input into the deep learning model to obtain the position information of the human eye; according to the position information of the human eye and the face image, a pupil center positioning algorithm is used to determine the pupil center position information.
  • the deep learning model can use algorithms such as the RetinaFace face recognition algorithm.
  • the pupil center position information includes the center coordinates of the pupil area and the distance between the center of the left pupil and the center of the right pupil. As shown in Figure 2, determining the pupil center position information specifically includes the following steps:
  • S121 Obtain a human eye image according to the human face image and the human eye position information.
  • specifically, the image of the human eye region is obtained according to the face image collected by the rear camera and the human eye position information.
  • S122 Perform the first image binarization segmentation on the human eye image to determine the eyeball region.
  • specifically, an eyeball pixel grayscale threshold and a pupil pixel grayscale threshold are set, where the eyeball pixel grayscale threshold is greater than the pupil pixel grayscale threshold; the first image binarization segmentation is performed on the human eye image, and the region whose pixel grayscale is lower than the eyeball pixel grayscale threshold is determined as the eyeball region, where the eyeball region includes the pupil and iris regions in the eyeball.
  • S123 Perform a second image binarization segmentation on the eyeball region of the human eye image to determine the pupil region.
  • the second image binarization segmentation is performed on the eyeball area, and the area where the pixel grayscale is lower than the pupil pixel grayscale threshold is determined as the pupil area in the center of the eyeball.
  • S124 Perform contour extraction and ellipse fitting on the image of the pupil area to obtain the center coordinates of the pupil area.
  • S125 Obtain the distance between the center of the left pupil and the center of the right pupil according to the center coordinates of the corresponding pupil regions of the left pupil and the right pupil.
  • the center coordinates of the left pupil area and the center coordinates of the right pupil area are substituted into the distance formula between two points to obtain the distance between the center of the left pupil and the center of the right pupil.
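  • For illustration, a minimal Python/OpenCV sketch of the two-stage binarization, contour extraction and ellipse fitting of steps S121 to S125 is given below; it assumes the grayscale eye region has already been cropped, and the threshold values and function names are illustrative assumptions rather than values given in this disclosure.

```python
import cv2
import numpy as np

def pupil_center(eye_gray, eyeball_thresh=90, pupil_thresh=40):
    """Return the (x, y) pupil center of a grayscale eye-region image, or None."""
    # S122: first binarization - pixels darker than the eyeball threshold form
    # the eyeball (pupil + iris) region.
    _, eyeball_mask = cv2.threshold(eye_gray, eyeball_thresh, 255, cv2.THRESH_BINARY_INV)

    # S123: second binarization restricted to the eyeball region - pixels darker
    # than the pupil threshold form the pupil region.
    _, dark_mask = cv2.threshold(eye_gray, pupil_thresh, 255, cv2.THRESH_BINARY_INV)
    pupil_mask = cv2.bitwise_and(dark_mask, eyeball_mask)

    # S124: contour extraction and ellipse fitting; the ellipse center is taken
    # as the pupil center.
    contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:  # cv2.fitEllipse needs at least 5 contour points
        return None
    (cx, cy), _, _ = cv2.fitEllipse(largest)
    return (cx, cy)

def pupil_distance(left_center, right_center):
    # S125: two-point distance formula between the fitted left and right centers.
    return float(np.hypot(left_center[0] - right_center[0],
                          left_center[1] - right_center[1]))
```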
  • according to the distance from the pupil center to the left border of the eyeball and the distance from the pupil center to the right border of the eyeball, the corrected pupil center position information is obtained through a fifth preset formula, in which Y' represents the corrected pupil center position information, Y represents the pupil center position information, Q1 represents the distance from the pupil center to the left border of the eyeball, and Q2 represents the distance from the pupil center to the right border of the eyeball.
  • the uncorrected pupil center position information is the eyeball center position information, and the pupil center position information is corrected according to the deviation distance from the pupil center to the eyeball center.
  • the midpoint between the center of the left pupil and the center of the right pupil is used as the focus position of the rear camera, and the focal length of the rear camera is adjusted by calculating the sharpness of the human eye region in the face image.
  • the method for calculating the sharpness of the human eye region in the face image includes but is not limited to: Brenner gradient function, Tenengrad gradient function, Laplacian gradient function, SMD (grayscale variance) function, SMD2 (grayscale variance product) function, variance function, energy gradient function, Vollath function, entropy function.
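  • The sketch below illustrates two of the listed sharpness measures (the Laplacian gradient function and the Brenner gradient function) and how a focus setting could be scored on the eye region; the scoring helper and its names are assumptions for illustration, not part of this disclosure.

```python
import cv2
import numpy as np

def laplacian_sharpness(gray):
    # Laplacian gradient measure: variance of the Laplacian response, larger = sharper.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def brenner_sharpness(gray):
    # Brenner gradient: sum of squared differences between pixels two columns apart.
    g = gray.astype(np.float64)
    return float(np.sum((g[:, 2:] - g[:, :-2]) ** 2))

def pick_best_focus(eye_crops_by_focus_setting):
    """Return the focus setting whose captured eye region is sharpest."""
    return max(eye_crops_by_focus_setting,
               key=lambda s: laplacian_sharpness(eye_crops_by_focus_setting[s]))
```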
  • S130 Calculate the distance from the center of the area of interest of the human eye to the projection point according to the location information of the projection point, the preset parameters, and the location information of the pupil center, where the area of interest of the human eye is the area that the human eye pays attention to in the projection screen.
  • the projector 100 includes a first front camera 101 , a second front camera 102 , a first rear camera 103 and a second rear camera 104 .
  • Each camera has a corresponding imaging plane, that is, the first front camera 101 corresponds to the first front camera imaging plane 105, the second front camera 102 corresponds to the second front camera imaging plane 106, the first rear camera 103 corresponds to the first rear camera imaging plane 107, and the second rear camera 104 corresponds to the second rear camera imaging plane 108.
  • the left pupil center 109 and the right pupil center 110 are the positions of the pupil centers when the user watches the projection screen, the center 111 of the area of interest of the human eye is the center of the user's area of interest on the projector 100, and the projection point 112 is any point on the projection screen.
  • the front camera includes a first front camera 101 and a second front camera 102
  • the rear camera includes a first rear camera 103 and a second rear camera 104
  • the focal lengths of the first front camera 101 and the second front camera 102 are the same, the focal lengths of the first rear camera 103 and the second rear camera 104 are the same, and the distance from the first front camera 101 to the projection screen is equal to the distance from the second front camera 102 to the projection screen.
  • a first connecting line is determined according to the first front camera 101 and the second front camera 102, a second connecting line is determined according to the first rear camera 103 and the second rear camera 104, and the first connecting line is parallel to the second connecting line.
  • the preset parameters include a front camera distance parameter, a rear camera distance parameter and the focal length of the front camera, wherein the front camera distance parameter is the distance between the first front camera and the second front camera, and the rear camera distance parameter is the distance between the first rear camera and the second rear camera.
  • the front camera distance parameter T is preset, that is, the distance between the first front camera and the second front camera is preset as T; the rear camera distance parameter T1 is preset, that is, the distance between the first rear camera and the second rear camera is T1; and the focal length of the front camera is preset to be f.
  • calculating the distance from the center of the area of interest of the human eye to the projection point includes the following steps:
  • S131 Using the first preset formula, calculate the distance from the projection screen to the front camera: Z = fT / (Xl - Xr), where Z represents the distance from the projection screen to the front camera, f represents the focal length of the front camera, T represents the distance between the first front camera and the second front camera, Xl represents the abscissa of the projection point in the projection image obtained by the first front camera, that is, the abscissa of the projection point on the imaging plane of the first front camera, and Xr represents the abscissa of the projection point in the projection image obtained by the second front camera, that is, the abscissa of the projection point on the imaging plane of the second front camera.
  • S132 Using the second preset formula, calculate the perpendicular distance from each pupil center to the second connecting line: Z1 = f1T1 / (Yl1 - Yr1), where Z1 represents the perpendicular distance from the pupil center to the second connecting line, f1 represents the focal length of the rear camera, T1 represents the distance between the first rear camera and the second rear camera, Yl1 represents the abscissa of the human eye in the projection image obtained by the first rear camera, that is, the abscissa of the human eye on the imaging plane of the first rear camera, and Yr1 represents the abscissa of the human eye in the projection image obtained by the second rear camera, that is, the abscissa of the human eye on the imaging plane of the second rear camera.
  • S133 Using the third preset formula, calculate the inclination angle θ of the human eye, where Z11 represents the perpendicular distance from the center of the left pupil to the second connecting line, Z12 represents the perpendicular distance from the center of the right pupil to the second connecting line, and Q represents the distance from the center of the left pupil to the center of the right pupil.
  • S134 Using the fourth preset formula, calculate the distance M from the center of the area of interest of the human eye to the projection point, where W represents the distance from the first connecting line to the second connecting line.
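  • The sketch below illustrates the geometry of S131 to S134. The first and second preset formulas follow the two-camera similar-triangle (stereo disparity) relation described above; the third and fourth preset formulas are reproduced in the source only as images, so the theta and M expressions below are one plausible geometric reading and should be treated as assumptions rather than the published formulas.

```python
import math

def depth_from_disparity(focal, baseline, x_left, x_right):
    """Z = f*T / (Xl - Xr): distance from the camera connecting line to the point."""
    return focal * baseline / (x_left - x_right)

# First preset formula: projection screen to the front cameras.
#   Z = depth_from_disparity(f, T, Xl, Xr)
# Second preset formula, evaluated once per pupil with the rear cameras:
#   Z11 = depth_from_disparity(f1, T1, Yl1_left_pupil, Yr1_left_pupil)
#   Z12 = depth_from_disparity(f1, T1, Yl1_right_pupil, Yr1_right_pupil)

def eye_tilt_angle(z11, z12, q):
    # Assumed form of the third preset formula: the interpupillary segment of
    # length Q is tilted so that its endpoints differ in depth by Z11 - Z12.
    return math.asin((z11 - z12) / q)

def gaze_offset(w, z, z1, theta):
    # Assumed form of the fourth preset formula: project the gaze direction over
    # the eye-to-screen distance (Z + W + Z1) to get the offset M of the center
    # of the area of interest from the projection point.
    return (z + w + z1) * math.tan(theta)
```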
  • S140 Using a super-resolution image generation technology, adjust the display resolution of the region of interest to the human eye.
  • the position information of the center of the area of interest of the human eye is determined by the distance from the center of the area of interest of the human eye to the projection point, and a preset range around the center of the area of interest of the human eye is determined as the area of interest of the human eye; the preset range can be set manually according to actual needs.
  • the super-resolution image generation technology is used to adjust the display resolution of the area of interest of the human eye, which specifically includes the following steps:
  • S141 Preprocess the image of the region of interest of the human eye to obtain a preprocessed image.
  • the preprocessing performed on the image of the region of interest of the human eye includes, but is not limited to, histogram equalization processing, median filtering processing, and normalization processing.
  • S142 Input the preprocessed image into a super-resolution reconstruction network model to obtain a super-resolution reconstructed preprocessed image.
  • super-resolution reconstruction network models include but are not limited to the super-resolution convolutional neural network model (Super-Resolution Convolutional Neural Network, SRCNN), the fast super-resolution convolutional neural network model (Fast Super-Resolution Convolutional Neural Networks, FSRCNN), the efficient sub-pixel convolutional neural network model (Efficient Sub-Pixel Convolutional Neural Network, ESPCN) and the very deep convolutional network model (Very Deep Convolutional Networks, VDSR).
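  • As an illustration of the kind of super-resolution reconstruction network listed above, a minimal SRCNN-style model (the 9-1-5 layout of the original SRCNN paper) is sketched below in PyTorch; training, weights and pre/post-processing are out of scope here and the class name is an assumption.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer 9-1-5 SRCNN: patch extraction, non-linear mapping, reconstruction."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # SRCNN refines an input that has already been upscaled to the target
        # size (for example by bicubic interpolation).
        return self.net(x)
```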
  • S143 Fill the preprocessed image after super-resolution reconstruction into the region of interest of the human eye.
  • the preprocessed image reconstructed by the super-resolution is filled into the area of interest of the human eye, the resolution of the image in the area of interest of the human eye is improved, and the user can obtain a super-high-resolution visual experience.
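  • The sketch below strings steps S141 to S143 together on a grayscale frame: preprocess the gaze region, run a super-resolution model, and fill the result back. The sr_model argument stands in for any of the listed networks, and the scale factor, ROI layout and resize-back step are illustrative assumptions.

```python
import cv2
import numpy as np
import torch

def enhance_gaze_region(frame_gray, roi, sr_model, scale=2):
    x, y, w, h = roi                                   # gaze region in pixels
    patch = frame_gray[y:y + h, x:x + w]

    # S141: preprocessing - histogram equalization, median filter, normalization.
    patch = cv2.equalizeHist(patch)
    patch = cv2.medianBlur(patch, 3)
    patch = patch.astype(np.float32) / 255.0

    # S142: SRCNN-style models expect a bicubic-upscaled input to refine.
    upscaled = cv2.resize(patch, (w * scale, h * scale), interpolation=cv2.INTER_CUBIC)
    with torch.no_grad():
        tensor = torch.from_numpy(upscaled)[None, None]          # 1 x 1 x H x W
        refined = sr_model(tensor).clamp(0, 1)[0, 0].numpy()

    # S143: fill the reconstructed patch back into the gaze region; here it is
    # resized to the original ROI footprint because this display buffer keeps its size.
    out = frame_gray.copy()
    out[y:y + h, x:x + w] = (cv2.resize(refined, (w, h)) * 255).astype(np.uint8)
    return out
```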
  • the position information of the projection point is determined according to the projection image collected by the front camera, and the pupil center position information of the face image is determined according to the face image collected by the rear camera.
  • the distance from the center of the area of interest of the human eye to the projection point is calculated according to the position information of the projection point, the preset parameters and the pupil center position information, and super-resolution image generation technology is used to adjust the display resolution of the area of interest of the human eye. That is, the position of the center of the area of interest of the human eye is determined by the distance from the center of the area of interest of the human eye to the projection point, thereby further determining the area of interest of the human eye, and by improving the resolution of the area of interest of the human eye, the image quality of the projector is improved, enabling users to obtain the technical effect of an ultra-high-resolution visual experience.
  • FIG. 6 shows a schematic block diagram of the structure of an image quality adjustment apparatus provided by an embodiment of the present application.
  • the image quality adjustment apparatus 200 is applied to a projector, and the projector includes at least two front cameras and at least two rear cameras.
  • the image quality adjustment apparatus 200 includes a determination module 210 , a calculation module 220 and an adjustment module 230 .
  • the determining module 210 is used for determining the position information of the projection point according to the projection image collected by the front camera, and is also used for determining the pupil center position information of the face image according to the face image collected by the rear camera;
  • the calculation module 220 is configured to calculate the distance from the center of the area of interest of the human eye to the projection point according to the position information of the projection point, the preset parameters and the position information of the pupil center, wherein the area of interest of the human eye is the area that the human eye pays attention to within the projection screen;
  • the adjustment module 230 is configured to adjust the display resolution of the area of interest of the human eye by adopting the super-resolution image generation technology.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures.
  • each block of the block diagrams and/or flow diagrams, and combinations of blocks in the block diagrams and/or flow diagrams, can be implemented using dedicated hardware-based systems that perform the specified functions or actions, or may be implemented using a combination of special purpose hardware and computer instructions.
  • each functional module or unit in each embodiment of the present invention may be integrated to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
  • the functions, if implemented in the form of software function modules and sold or used as independent products, may be stored in a computer-readable storage medium.
  • the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a smart phone, a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present invention.
  • the aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the present invention disclose an image quality adjustment method and device, a projector, and a computer-readable storage medium. The image quality adjustment method is applied to a projector that includes at least two front cameras and at least two rear cameras, and the method includes: determining position information of a projection point according to a projection image captured by the front cameras; determining pupil center position information of a face image according to the face image captured by the rear cameras; calculating, according to the position information of the projection point, preset parameters and the pupil center position information, the distance from the center of the area of interest of the human eye to the projection point, wherein the area of interest of the human eye is the area within the projection screen that the human eye pays attention to; and adjusting the display resolution of the area of interest of the human eye using super-resolution image generation technology. The present invention achieves the technical effect of improving the image quality of the projector so that users obtain an ultra-high-resolution visual experience.

Description

Image quality adjustment method and device, projector, and computer-readable storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image quality adjustment method and device, a projector, and a computer-readable storage medium.
Background Art
Limited by network bandwidth, the resolution of film and television resources, and the resolution of the projection light engine, it is difficult for existing projectors to reach 8K or higher image clarity. Therefore, how to improve the image quality of a projector so that users obtain an ultra-high-resolution visual experience is a problem that urgently needs to be solved.
Summary of the Invention
The purpose of the embodiments of the present invention is to provide an image quality adjustment method and device, a projector, and a computer-readable storage medium, so as to improve the image quality of the projector and enable users to obtain an ultra-high-resolution visual experience.
In order to achieve the above purpose, the technical solutions adopted by the embodiments of the present invention are as follows:
In a first aspect, an embodiment of the present invention provides an image quality adjustment method applied to a projector, the projector including at least two front cameras and at least two rear cameras, the method including: determining position information of a projection point according to a projection image captured by the front cameras; determining pupil center position information of a face image according to the face image captured by the rear cameras; calculating, according to the position information of the projection point, preset parameters and the pupil center position information, the distance from the center of the area of interest of the human eye to the projection point, wherein the area of interest of the human eye is the area within the projection screen that the human eye pays attention to; and adjusting the display resolution of the area of interest of the human eye using super-resolution image generation technology.
In a second aspect, an embodiment of the present invention provides an image quality adjustment device applied to a projector, the projector including at least two front cameras and at least two rear cameras, the device including a determination module, a calculation module and an adjustment module. The determination module is configured to determine position information of a projection point according to a projection image captured by the front cameras, and is further configured to determine pupil center position information of a face image according to the face image captured by the rear cameras; the calculation module is configured to calculate, according to the position information of the projection point, preset parameters and the pupil center position information, the distance from the center of the area of interest of the human eye to the projection point, wherein the area of interest of the human eye is the area within the projection screen that the human eye pays attention to; and the adjustment module is configured to adjust the display resolution of the area of interest of the human eye using super-resolution image generation technology.
In a third aspect, an embodiment of the present invention further provides a projector, the projector including: front cameras, rear cameras, a processor and a memory; the memory is configured to store a program which, when executed by the processor, causes the processor to implement the image quality adjustment method described in the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the image quality adjustment method described in the first aspect is implemented.
Compared with the prior art, the embodiments of the present invention provide an image quality adjustment method and device, a projector, and a computer-readable storage medium. First, the position information of a projection point is determined according to the projection image captured by the front cameras; then, the pupil center position information of a face image is determined according to the face image captured by the rear cameras; next, the distance from the center of the area of interest of the human eye to the projection point is calculated according to the position information of the projection point, the preset parameters and the pupil center position information; and finally, the display resolution of the area of interest of the human eye is adjusted using super-resolution image generation technology. That is, the position of the center of the area of interest of the human eye is determined from the distance between that center and the projection point, so that the area of interest of the human eye is further determined, and by increasing the resolution of the image in that area, the technical effect of improving the image quality of the projector and giving the user an ultra-high-resolution visual experience is achieved.
Brief Description of the Drawings
In order to explain the technical solutions of the present invention more clearly, the drawings required in the embodiments are briefly introduced below. It should be understood that the following drawings only show certain embodiments of the present invention and therefore should not be regarded as limiting the scope of protection of the present invention. In the drawings, similar components are denoted by similar reference numerals.
FIG. 1 is a schematic block flowchart of the steps of the image quality adjustment method provided by an embodiment of the present application;
FIG. 2 is a schematic block flowchart of the sub-steps of determining the pupil center position information provided by an embodiment of the present application;
FIG. 3 is a diagram of the geometric relationship among the pupil centers, the projector and the projection screen provided by an embodiment of the present application;
FIG. 4 is a schematic block flowchart of the sub-steps of calculating the distance from the center of the area of interest of the human eye to the projection point provided by an embodiment of the present application;
FIG. 5 is a schematic block flowchart of the sub-steps of adjusting the display resolution provided by an embodiment of the present application;
FIG. 6 is a schematic structural block diagram of the image quality adjustment device provided by an embodiment of the present application.
Description of the main reference numerals:
100-projector; 101-first front camera; 102-second front camera; 103-first rear camera; 104-second rear camera; 105-first front camera imaging plane; 106-second front camera imaging plane; 107-first rear camera imaging plane; 108-second rear camera imaging plane; 109-left pupil center; 110-right pupil center; 111-center of the area of interest of the human eye; 112-projection point; 200-image quality adjustment device; 210-determination module; 220-calculation module; 230-adjustment module.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, rather than all of them.
The components of the embodiments of the present invention generally described and illustrated in the drawings herein may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Hereinafter, the terms "include", "have" and their cognates that may be used in the various embodiments of the present invention are only intended to indicate particular features, numbers, steps, operations, elements, components or combinations of the foregoing, and should not be understood as first excluding the existence of, or the possibility of adding, one or more other features, numbers, steps, operations, elements, components or combinations of the foregoing.
In addition, the terms "first", "second", "third", etc. are used only to distinguish the descriptions and cannot be understood as indicating or implying relative importance.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the various embodiments of the present invention belong. These terms (such as those defined in commonly used dictionaries) will be interpreted as having the same meaning as their contextual meaning in the relevant technical field and will not be interpreted as having an idealized or overly formal meaning, unless clearly defined as such in the various embodiments of the present invention.
Embodiment 1
Referring to FIG. 1, FIG. 1 is a schematic block flowchart of the steps of the image quality adjustment method provided by an embodiment of the present application.
As shown in FIG. 1, the image quality adjustment method provided by the embodiment of the present application is applied to a projector that includes at least two front cameras and at least two rear cameras, and the method includes S110 to S140.
S110: Determine the position information of the projection point according to the projection image captured by the front cameras.
It can be understood that the numbers of front cameras and rear cameras can be set according to requirements; in this embodiment, the case where there are two front cameras and two rear cameras is taken as an example for further description. The projection point is any point on the picture projected by the projector and should not be understood as the light source point. Preferably, the projection point is the point where the perpendicular bisector of the line connecting the two front cameras intersects the projection screen. The projection image captured by a front camera is two-dimensional lattice data, and the position information of the projection point is its row and column indices in the projection image.
S120: Determine the pupil center position information of the face image according to the face image captured by the rear cameras.
In this embodiment, after the rear cameras capture the face image, the face image is input into a deep learning model to obtain human eye position information; then, according to the human eye position information and the face image, a pupil center localization algorithm is used to determine the pupil center position information.
It can be understood that the deep learning model may use algorithms such as the RetinaFace face recognition algorithm.
Specifically, the pupil center position information includes the center coordinates of the pupil regions and the distance from the left pupil center to the right pupil center. As shown in FIG. 2, determining the pupil center position information specifically includes the following steps:
S121: Obtain a human eye image according to the face image and the human eye position information.
Specifically, the image of the human eye region is obtained according to the face image captured by the rear cameras and the human eye position information.
S122: Perform a first image binarization segmentation on the human eye image to determine the eyeball region.
Specifically, an eyeball pixel grayscale threshold and a pupil pixel grayscale threshold are set, the eyeball pixel grayscale threshold being greater than the pupil pixel grayscale threshold. The first image binarization segmentation is performed on the human eye image, and the region whose pixel grayscale is lower than the eyeball pixel grayscale threshold is determined as the eyeball region; the eyeball region includes the pupil and iris regions of the eyeball.
S123: Perform a second image binarization segmentation on the eyeball region of the human eye image to determine the pupil region.
Specifically, the second image binarization segmentation is performed on the eyeball region, and the region whose pixel grayscale is lower than the pupil pixel grayscale threshold is determined as the pupil region at the center of the eyeball.
S124: Perform contour extraction and ellipse fitting on the image of the pupil region to obtain the center coordinates of the pupil region.
S125: Obtain the distance from the left pupil center to the right pupil center according to the center coordinates of the corresponding pupil regions of the left pupil and the right pupil.
Specifically, the center coordinates of the left pupil region and the center coordinates of the right pupil region are substituted into the two-point distance formula to obtain the distance from the left pupil center to the right pupil center.
In this embodiment, according to the distance from the pupil center to the left border of the eyeball and the distance from the pupil center to the right border of the eyeball, the corrected pupil center position information is obtained through a fifth preset formula, the fifth preset formula being:
(fifth preset formula, given in the original only as image PCTCN2021098979-appb-000001)
where Y' represents the corrected pupil center position information, Y represents the pupil center position information, Q1 represents the distance from the pupil center to the left border of the eyeball, and Q2 represents the distance from the pupil center to the right border of the eyeball.
Specifically, before correction, the pupil center position information is the eyeball center position information, and the pupil center position information is corrected according to the deviation distance from the pupil center to the eyeball center.
The midpoint between the left pupil center and the right pupil center is used as the focus position of the rear cameras, and the focal length of the rear cameras is adjusted by calculating the sharpness of the human eye region in the face image.
Specifically, methods for calculating the sharpness of the human eye region in the face image include, but are not limited to: the Brenner gradient function, the Tenengrad gradient function, the Laplacian gradient function, the SMD (grayscale variance) function, the SMD2 (grayscale variance product) function, the variance function, the energy gradient function, the Vollath function, and the entropy function.
S130: Calculate the distance from the center of the area of interest of the human eye to the projection point according to the position information of the projection point, the preset parameters and the pupil center position information, wherein the area of interest of the human eye is the area within the projection screen that the human eye pays attention to.
Referring to FIG. 3, FIG. 3 is a diagram of the geometric relationship among the pupil centers, the projector and the projection screen in this embodiment. The projector 100 includes a first front camera 101, a second front camera 102, a first rear camera 103 and a second rear camera 104. Each camera has a corresponding imaging plane, that is, the first front camera 101 corresponds to the first front camera imaging plane 105, the second front camera 102 corresponds to the second front camera imaging plane 106, the first rear camera 103 corresponds to the first rear camera imaging plane 107, and the second rear camera 104 corresponds to the second rear camera imaging plane 108. The left pupil center 109 and the right pupil center 110 are the positions of the pupil centers when the user watches the projection screen, the center 111 of the area of interest of the human eye is the center of the user's area of interest on the projector 100, and the projection point 112 is any point on the projection screen.
In this embodiment, the front cameras include the first front camera 101 and the second front camera 102, and the rear cameras include the first rear camera 103 and the second rear camera 104. The focal lengths of the first front camera 101 and the second front camera 102 are equal, and the focal lengths of the first rear camera 103 and the second rear camera 104 are equal. The distance from the first front camera 101 to the projection screen is equal to the distance from the second front camera 102 to the projection screen. A first connecting line is determined according to the first front camera 101 and the second front camera 102, a second connecting line is determined according to the first rear camera 103 and the second rear camera 104, and the first connecting line is parallel to the second connecting line.
Preset parameters are set in advance, the preset parameters including a front camera distance parameter, a rear camera distance parameter and the focal length of the front cameras, wherein the front camera distance parameter is the distance between the first front camera and the second front camera, and the rear camera distance parameter is the distance between the first rear camera and the second rear camera.
Specifically, the front camera distance parameter T is preset, that is, the distance between the first front camera and the second front camera is preset as T; the rear camera distance parameter T1 is preset, that is, the distance between the first rear camera and the second rear camera is T1; and the focal length of the front cameras is preset as f.
The distance from the center of the area of interest of the human eye to the projection point is calculated according to the position information of the projection point, the preset parameters and the pupil center position information. As shown in FIG. 4, calculating the distance from the center of the area of interest of the human eye to the projection point specifically includes the following steps:
S131: Calculate the distance from the projection screen to the front cameras using a first preset formula, where the first preset formula is:
Z = fT / (Xl - Xr)
Z represents the distance from the projection screen to the front cameras, f represents the focal length of the front cameras, T represents the distance between the first front camera and the second front camera, Xl represents the abscissa of the projection point in the projection image obtained by the first front camera, that is, the abscissa of the projection point on the imaging plane of the first front camera, and Xr represents the abscissa of the projection point in the projection image obtained by the second front camera, that is, the abscissa of the projection point on the imaging plane of the second front camera.
Specifically, according to the properties of similar triangles, the following can be obtained:
(T - (Xl - Xr)) / (Z - f) = T / Z
which, after rearrangement, gives
Z = fT / (Xl - Xr)
S132: Calculate the perpendicular distance from the left pupil center to the second connecting line and the perpendicular distance from the right pupil center to the second connecting line using a second preset formula, where the second preset formula is:
Z1 = f1T1 / (Yl1 - Yr1)
Z1 represents the perpendicular distance from a pupil center to the second connecting line, f1 represents the focal length of the rear cameras, T1 represents the distance between the first rear camera and the second rear camera, Yl1 represents the abscissa of the human eye in the projection image obtained by the first rear camera, that is, the abscissa of the human eye on the imaging plane of the first rear camera, and Yr1 represents the abscissa of the human eye in the projection image obtained by the second rear camera, that is, the abscissa of the human eye on the imaging plane of the second rear camera.
Specifically, according to the properties of similar triangles, the following can be obtained:
(T1 - (Yl1 - Yr1)) / (Z1 - f1) = T1 / Z1
which, after rearrangement, gives
Z1 = f1T1 / (Yl1 - Yr1)
S133: Calculate the tilt angle of the human eyes using a third preset formula, where the third preset formula is:
(third preset formula, given in the original only as image PCTCN2021098979-appb-000008)
θ represents the tilt angle of the human eyes, Z11 represents the perpendicular distance from the left pupil center to the second connecting line, Z12 represents the perpendicular distance from the right pupil center to the second connecting line, and Q represents the distance from the left pupil center to the right pupil center.
S134: Calculate the distance from the center of the area of interest of the human eye to the projection point using a fourth preset formula, where the fourth preset formula is:
(fourth preset formula, given in the original only as image PCTCN2021098979-appb-000009)
M represents the distance from the center of the area of interest of the human eye to the projection point, and W represents the distance between the first connecting line and the second connecting line.
S140: Adjust the display resolution of the area of interest of the human eye using super-resolution image generation technology.
Specifically, the position information of the center of the area of interest of the human eye is determined from the distance between the center of the area of interest of the human eye and the projection point, and a preset range around that center is determined as the area of interest of the human eye; the preset range can be set manually according to actual needs.
Further, in order to increase the display resolution of the area of interest of the human eye, as shown in FIG. 5, adjusting the display resolution of the area of interest of the human eye using super-resolution image generation technology specifically includes the following steps:
S141: Preprocess the image of the area of interest of the human eye to obtain a preprocessed image.
Specifically, the preprocessing performed on the image of the area of interest of the human eye includes, but is not limited to, histogram equalization, median filtering and normalization.
S142: Input the preprocessed image into a super-resolution reconstruction network model to obtain a super-resolution reconstructed preprocessed image.
Specifically, super-resolution reconstruction network models include, but are not limited to, the super-resolution convolutional neural network model (Super-Resolution Convolutional Neural Network, SRCNN), the fast super-resolution convolutional neural network model (Fast Super-Resolution Convolutional Neural Networks, FSRCNN), the efficient sub-pixel convolutional neural network model (Efficient Sub-Pixel Convolutional Neural Network, ESPCN) and the very deep convolutional network model (Very Deep Convolutional Networks, VDSR).
S143: Fill the super-resolution reconstructed preprocessed image into the area of interest of the human eye.
Specifically, the super-resolution reconstructed preprocessed image is filled into the area of interest of the human eye, which increases the resolution of the image in the area of interest of the human eye and enables the user to obtain an ultra-high-resolution visual experience.
In this embodiment, the position information of the projection point is determined according to the projection image captured by the front cameras, the pupil center position information of the face image is determined according to the face image captured by the rear cameras, the distance from the center of the area of interest of the human eye to the projection point is calculated according to the position information of the projection point, the preset parameters and the pupil center position information, and the display resolution of the area of interest of the human eye is adjusted using super-resolution image generation technology. That is, the position of the center of the area of interest of the human eye is determined from the distance between that center and the projection point, so that the area of interest of the human eye is further determined, and by increasing the resolution of the image in that area, the technical effect of improving the image quality of the projector and giving the user an ultra-high-resolution visual experience is achieved.
Embodiment 2
Referring to FIG. 6, FIG. 6 is a schematic structural block diagram of the image quality adjustment device provided by an embodiment of the present application. The image quality adjustment device 200 is applied to a projector that includes at least two front cameras and at least two rear cameras. The image quality adjustment device 200 includes a determination module 210, a calculation module 220 and an adjustment module 230.
The determination module 210 is configured to determine the position information of the projection point according to the projection image captured by the front cameras, and is further configured to determine the pupil center position information of the face image according to the face image captured by the rear cameras;
the calculation module 220 is configured to calculate the distance from the center of the area of interest of the human eye to the projection point according to the position information of the projection point, the preset parameters and the pupil center position information, wherein the area of interest of the human eye is the area within the projection screen that the human eye pays attention to;
the adjustment module 230 is configured to adjust the display resolution of the area of interest of the human eye using super-resolution image generation technology.
In the several embodiments provided by this application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show possible architectures, functions and operations of the devices, methods and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a portion of code, which contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or can be implemented by a combination of dedicated hardware and computer instructions.
In addition, the functional modules or units in the embodiments of the present invention may be integrated together to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a smart phone, a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention.

Claims (10)

  1. An image quality adjustment method, applied to a projector, the projector comprising at least two front cameras and at least two rear cameras, the method comprising:
    determining position information of a projection point according to a projection image captured by the front cameras;
    determining pupil center position information of a face image according to the face image captured by the rear cameras;
    calculating, according to the position information of the projection point, preset parameters and the pupil center position information, a distance from a center of an area of interest of the human eye to the projection point, wherein the area of interest of the human eye is the area within a projection screen that the human eye pays attention to; and
    adjusting a display resolution of the area of interest of the human eye using super-resolution image generation technology.
  2. The image quality adjustment method according to claim 1, wherein the step of determining the pupil center position information of the face image comprises:
    inputting the face image into a deep learning model to obtain human eye position information; and
    determining the pupil center position information according to the human eye position information and the face image using a pupil center localization algorithm.
  3. The image quality adjustment method according to claim 2, wherein the pupil center position information comprises center coordinates of pupil regions and a distance from the left pupil center to the right pupil center, and the step of determining the pupil center position information according to the human eye position information and the face image using the pupil center localization algorithm comprises:
    obtaining a human eye image according to the face image and the human eye position information;
    performing a first image binarization segmentation on the human eye image to determine an eyeball region;
    performing a second image binarization segmentation on the eyeball region of the human eye image to determine a pupil region;
    performing contour extraction and ellipse fitting on the image of the pupil region to obtain the center coordinates of the pupil region; and
    obtaining the distance from the left pupil center to the right pupil center according to the center coordinates of the corresponding pupil regions of the left pupil and the right pupil.
  4. The image quality adjustment method according to claim 1, wherein the front cameras comprise a first front camera and a second front camera, the rear cameras comprise a first rear camera and a second rear camera, the focal lengths of the first front camera and the second front camera are equal, the focal lengths of the first rear camera and the second rear camera are equal, the distance from the first front camera to the projection screen is equal to the distance from the second front camera to the projection screen, a first connecting line is determined according to the first front camera and the second front camera, a second connecting line is determined according to the first rear camera and the second rear camera, and the first connecting line is parallel to the second connecting line, and wherein the step of calculating, according to the position information of the projection point, the preset parameters and the pupil center position information, the distance from the center of the area of interest of the human eye to the projection point comprises:
    calculating the distance from the projection screen to the front cameras using a first preset formula, wherein the first preset formula is:
    Z = fT / (Xl - Xr)
    Z represents the distance from the projection screen to the front cameras, f represents the focal length of the front cameras, T represents the distance between the first front camera and the second front camera, Xl represents the abscissa of the projection point in the projection image obtained by the first front camera, and Xr represents the abscissa of the projection point in the projection image obtained by the second front camera;
    calculating the perpendicular distance from the left pupil center to the second connecting line and the perpendicular distance from the right pupil center to the second connecting line using a second preset formula, wherein the second preset formula is:
    Z1 = f1T1 / (Yl1 - Yr1)
    Z1 represents the perpendicular distance from a pupil center to the second connecting line, f1 represents the focal length of the rear cameras, T1 represents the distance between the first rear camera and the second rear camera, Yl1 represents the abscissa of the human eye in the projection image obtained by the first rear camera, and Yr1 represents the abscissa of the human eye in the projection image obtained by the second rear camera;
    calculating the tilt angle of the human eyes using a third preset formula, wherein the third preset formula is:
    (third preset formula, given in the original only as image PCTCN2021098979-appb-100003)
    θ represents the tilt angle of the human eyes, Z11 represents the perpendicular distance from the left pupil center to the second connecting line, Z12 represents the perpendicular distance from the right pupil center to the second connecting line, and Q represents the distance from the left pupil center to the right pupil center; and
    calculating the distance from the center of the area of interest of the human eye to the projection point using a fourth preset formula, wherein the fourth preset formula is:
    (fourth preset formula, given in the original only as image PCTCN2021098979-appb-100004)
    M represents the distance from the center of the area of interest of the human eye to the projection point, and W represents the distance from the first connecting line to the second connecting line.
  5. The image quality adjustment method according to claim 4, further comprising:
    setting preset parameters in advance, the preset parameters comprising a front camera distance parameter, a rear camera distance parameter and the focal length of the front cameras, wherein the front camera distance parameter is the distance between the first front camera and the second front camera, and the rear camera distance parameter is the distance between the first rear camera and the second rear camera.
  6. The image quality adjustment method according to claim 1, wherein after the determining of the pupil center position information of the face image according to the face image captured by the rear cameras, and before the calculating, according to the position information of the projection point, the preset parameters and the pupil center position information, of the distance from the center of the area of interest of the human eye to the projection point, the method further comprises:
    obtaining corrected pupil center position information through a fifth preset formula according to the distance from the pupil center to the left border of the eyeball and the distance from the pupil center to the right border of the eyeball, wherein the fifth preset formula is:
    (fifth preset formula, given in the original only as image PCTCN2021098979-appb-100005)
    wherein Y' represents the corrected pupil center position information, Y represents the pupil center position information, Q1 represents the distance from the pupil center to the left border of the eyeball, and Q2 represents the distance from the pupil center to the right border of the eyeball; and
    using the midpoint between the left pupil center and the right pupil center as the focus position of the rear cameras, and adjusting the focal length of the rear cameras by calculating the sharpness of the human eye region in the face image.
  7. The image quality adjustment method according to claim 1, wherein the step of adjusting the display resolution of the area of interest of the human eye using super-resolution image generation technology comprises:
    preprocessing the image of the area of interest of the human eye to obtain a preprocessed image;
    inputting the preprocessed image into a super-resolution reconstruction network model to obtain a super-resolution reconstructed preprocessed image; and
    filling the super-resolution reconstructed preprocessed image into the area of interest of the human eye.
  8. An image quality adjustment device, applied to a projector, the projector comprising at least two front cameras and at least two rear cameras, the device comprising:
    a determination module, configured to determine position information of a projection point according to a projection image captured by the front cameras, and further configured to determine pupil center position information of a face image according to the face image captured by the rear cameras;
    a calculation module, configured to calculate, according to the position information of the projection point, preset parameters and the pupil center position information, the distance from the center of the area of interest of the human eye to the projection point, wherein the area of interest of the human eye is the area within the projection screen that the human eye pays attention to; and
    an adjustment module, configured to adjust the display resolution of the area of interest of the human eye using super-resolution image generation technology.
  9. A projector, comprising:
    front cameras, rear cameras, a processor and a memory;
    the memory being configured to store a program which, when executed by the processor, causes the processor to implement the method according to any one of claims 1-7.
  10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method according to any one of claims 1-7 is implemented.
PCT/CN2021/098979 2020-12-31 2021-06-08 Image quality adjustment method and device, projector, and computer-readable storage medium WO2022142146A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011618299.9A CN112804504B (zh) 2020-12-31 2020-12-31 Image quality adjustment method and device, projector, and computer-readable storage medium
CN202011618299.9 2020-12-31

Publications (1)

Publication Number Publication Date
WO2022142146A1 true WO2022142146A1 (zh) 2022-07-07

Family

ID=75805854

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/098979 WO2022142146A1 (zh) 2020-12-31 2021-06-08 Image quality adjustment method and device, projector, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN112804504B (zh)
WO (1) WO2022142146A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112804504B (zh) * 2020-12-31 2022-10-04 成都极米科技股份有限公司 Image quality adjustment method and device, projector, and computer-readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000305481A (ja) * 1999-04-21 2000-11-02 Seiko Epson Corp 投写型表示装置及び情報記憶媒体
CN106531073A (zh) * 2017-01-03 2017-03-22 京东方科技集团股份有限公司 显示屏的处理电路、显示方法及显示器件
CN107526237A (zh) * 2016-06-22 2017-12-29 卡西欧计算机株式会社 投影装置、投影***、投影方法
CN109302594A (zh) * 2017-07-24 2019-02-01 三星电子株式会社 包括眼睛***的投影显示装置
JP2019216344A (ja) * 2018-06-12 2019-12-19 日本放送協会 全天周立体映像表示装置及びそのプログラム、全天周立体映像撮影装置、並びに、全天周立体映像システム
CN111046744A (zh) * 2019-11-21 2020-04-21 深圳云天励飞技术有限公司 一种关注区域检测方法、装置、可读存储介质及终端设备
CN112804504A (zh) * 2020-12-31 2021-05-14 成都极米科技股份有限公司 画质调整方法、装置、投影仪及计算机可读存储介质

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6407777B1 (en) * 1997-10-09 2002-06-18 Deluca Michael Joseph Red-eye filter method and apparatus
JP4538924B2 (ja) * 2000-08-30 2010-09-08 沖電気工業株式会社 眼画像撮影装置
US20050131535A1 (en) * 2003-12-15 2005-06-16 Randall Woods Intraocular lens implant having posterior bendable optic
CN102473033B (zh) * 2009-09-29 2015-05-27 阿尔卡特朗讯 一种注视点检测方法及其装置
CN101840509B (zh) * 2010-04-30 2013-01-02 深圳华昌视数字移动电视有限公司 人眼观察视角的测量方法及装置
JP5602708B2 (ja) * 2011-11-09 2014-10-08 楽天株式会社 注視位置推定システム、注視位置推定システムの制御方法、注視位置推定装置、注視位置推定装置の制御方法、プログラム、及び情報記憶媒体
CN103747183B (zh) * 2014-01-15 2017-02-15 北京百纳威尔科技有限公司 一种手机拍摄对焦方法
US9867693B2 (en) * 2014-03-10 2018-01-16 Amo Groningen B.V. Intraocular lens that improves overall vision where there is a local loss of retinal function
US9485414B2 (en) * 2014-06-20 2016-11-01 John Visosky Eye contact enabling device for video conferencing
CN104683786B (zh) * 2015-02-28 2017-06-16 上海玮舟微电子科技有限公司 裸眼3d设备的人眼跟踪方法及装置
US20170263017A1 (en) * 2016-03-11 2017-09-14 Quan Wang System and method for tracking gaze position
CN105763810B (zh) * 2016-03-28 2019-04-16 努比亚技术有限公司 基于人眼的拍照装置及方法
CN109522775B (zh) * 2017-09-19 2021-07-20 杭州海康威视数字技术股份有限公司 人脸属性检测方法、装置及电子设备
CN108334870A (zh) * 2018-03-21 2018-07-27 四川意高汇智科技有限公司 Ar设备数据服务器状态的远程监控***
CN108665521B (zh) * 2018-05-16 2020-06-02 京东方科技集团股份有限公司 图像渲染方法、装置、***、计算机可读存储介质及设备
CN109359512A (zh) * 2018-08-28 2019-02-19 深圳壹账通智能科技有限公司 眼球位置追踪方法、装置、终端及计算机可读存储介质
CN110263657B (zh) * 2019-05-24 2023-04-18 亿信科技发展有限公司 一种人眼追踪方法、装置、***、设备和存储介质
CN110225252B (zh) * 2019-06-11 2021-07-23 Oppo广东移动通信有限公司 拍照控制方法及相关产品
CN110780739B (zh) * 2019-10-18 2023-11-03 天津理工大学 基于注视点估计的眼控辅助输入方法
CN111290581B (zh) * 2020-02-21 2024-04-16 京东方科技集团股份有限公司 虚拟现实显示方法、显示装置及计算机可读介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000305481A (ja) * 1999-04-21 2000-11-02 Seiko Epson Corp 投写型表示装置及び情報記憶媒体
CN107526237A (zh) * 2016-06-22 2017-12-29 卡西欧计算机株式会社 投影装置、投影***、投影方法
CN106531073A (zh) * 2017-01-03 2017-03-22 京东方科技集团股份有限公司 显示屏的处理电路、显示方法及显示器件
CN109302594A (zh) * 2017-07-24 2019-02-01 三星电子株式会社 包括眼睛***的投影显示装置
JP2019216344A (ja) * 2018-06-12 2019-12-19 日本放送協会 全天周立体映像表示装置及びそのプログラム、全天周立体映像撮影装置、並びに、全天周立体映像システム
CN111046744A (zh) * 2019-11-21 2020-04-21 深圳云天励飞技术有限公司 一种关注区域检测方法、装置、可读存储介质及终端设备
CN112804504A (zh) * 2020-12-31 2021-05-14 成都极米科技股份有限公司 画质调整方法、装置、投影仪及计算机可读存储介质

Also Published As

Publication number Publication date
CN112804504A (zh) 2021-05-14
CN112804504B (zh) 2022-10-04

Similar Documents

Publication Publication Date Title
Kuster et al. Gaze correction for home video conferencing
US10380421B2 (en) Iris recognition via plenoptic imaging
US10210660B2 (en) Removing occlusion in camera views
WO2018201809A1 (zh) 基于双摄像头的图像处理装置及方法
US20190251675A1 (en) Image processing method, image processing device and storage medium
JP4811462B2 (ja) 画像処理方法、画像処理プログラム、画像処理装置、及び撮像装置
US20170366804A1 (en) Light field collection control methods and apparatuses, light field collection devices
CN108234858B (zh) 图像虚化处理方法、装置、存储介质及电子设备
JP7101269B2 (ja) ポーズ補正
CN107798704B (zh) 一种用于增强现实的实时图像叠加方法及装置
CN113965664B (zh) 一种图像虚化方法、存储介质以及终端设备
CN112384928A (zh) 对图像执行对象照明操纵的方法和装置
WO2022142146A1 (zh) 画质调整方法、装置、投影仪及计算机可读存储介质
CN109166178B (zh) 一种视觉特性与行为特性融合的全景图像显著图生成方法及***
CN111105370A (zh) 图像处理方法、图像处理装置、电子设备和可读存储介质
US20180322689A1 (en) Visualization and rendering of images to enhance depth perception
CN108830804B (zh) 基于线扩展函数标准差的虚实融合模糊一致性处理方法
JP2017021430A (ja) パノラマビデオデータの処理装置、処理方法及びプログラム
Chang et al. R2p: Recomposition and retargeting of photographic images
WO2022036338A2 (en) System and methods for depth-aware video processing and depth perception enhancement
CN111062904B (zh) 图像处理方法、图像处理装置、电子设备和可读存储介质
CN113938578A (zh) 一种图像虚化方法、存储介质及终端设备
TW201828691A (zh) 視訊成像方法及其電子裝置
KR101995985B1 (ko) 영상회의 시스템에서 스테레오 영상을 이용한 참여자들 눈맞춤 제공 방법 및 장치
CN113395434A (zh) 一种预览图像虚化方法、存储介质及终端设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21912905

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21912905

Country of ref document: EP

Kind code of ref document: A1