WO2021227740A1 - Image processing method and image display device - Google Patents

Image processing method and image display device

Info

Publication number
WO2021227740A1
WO2021227740A1 (PCT/CN2021/086593)
Authority
WO
WIPO (PCT)
Prior art keywords
image
lens
vertices
areas
distortion
Prior art date
Application number
PCT/CN2021/086593
Other languages
English (en)
French (fr)
Inventor
范清文
王龙辉
苗京花
郝帅
何惠东
张硕
李文宇
陈丽莉
张浩
Original Assignee
京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
北京京东方光电科技有限公司 (Beijing BOE Optoelectronics Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.) and 北京京东方光电科技有限公司 (Beijing BOE Optoelectronics Technology Co., Ltd.)
Priority to US 17/765,424 (published as US20220351340A1)
Publication of WO2021227740A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/80: Geometric correction
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/181: Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20021: Dividing image into blocks, subimages or windows
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present disclosure relates to the field of display technology, and in particular, to an image processing method and an image display device.
  • The image display device may, for example, employ virtual reality (VR) technology: a computer simulation system that can create and let the user experience a virtual world, using a computer to generate a simulated environment that immerses the user in it.
  • Virtual reality technology mainly covers the simulated environment, perception, natural interaction skills, and sensing equipment.
  • The simulated environment is a real-time, dynamic, three-dimensional realistic image generated by a computer.
  • Perception means that an ideal virtual reality system should provide all the senses a person has: in addition to vision, this includes hearing, touch, force feedback, motion, and even smell and taste, collectively called multi-sensing. Therefore, during a virtual reality display, the user can be immersed in the environment the technology provides, giving the user a more realistic experience.
  • In one aspect, an image processing method is provided. The image processing method is applied to an image display device; the image display device includes a lens and a display screen, and the lens has a plurality of distortion coefficients.
  • The image processing method includes: dividing the display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, such that the outer boundary line of each image area encloses a polygon, the geometric center of the polygon coincides with the geometric center of the lens, and the distortion coefficients at the positions where the vertices of the polygon map onto the lens are all the same; and performing anti-distortion processing on the coordinates of the vertices of each image area to obtain the texture coordinates of those vertices.
  • In some embodiments, along the direction from the geometric center of the lens to its edge, the distortion coefficients of the lens gradually increase; of two adjacent image areas, the one relatively close to the geometric center of the lens has a number of vertices less than or equal to that of the one relatively far from the geometric center.
  • In some embodiments, the lens includes a plurality of distortion parts; the geometric centers of the distortion parts coincide with the geometric center of the lens, and each distortion part has at least one distortion coefficient. The outer boundary line of at least one image area is mapped into the same distortion part; along the direction from the geometric center of the lens to the edge of the lens, the number of vertices of the image areas mapped into each successive distortion part increases, and the numbers of vertices of the image areas mapped into the same distortion part are equal.
  • the distortion coefficients corresponding to the vertices of the different image areas are not equal; the number of image areas into which the display area of the display screen is divided is equal to the number of distortion coefficients of the lens.
  • In some embodiments, the lens has 10 distortion coefficients; along the direction from the geometric center of the lens to the edge of the lens, the numbers of vertices of the image areas are 3, 6, 6, 12, 12, 12, 24, 24, 24, and 24, respectively, or 3, 6, 6, 12, 12, 24, 24, 48, 48, and 48, respectively.
  • In some embodiments, the image processing method further includes: sorting the coordinates of the vertices of the image areas according to the texture-mapping mode to obtain index coordinates; connecting the vertices of the image areas according to the index coordinates, thereby dividing the image areas into multiple non-overlapping regions to be mapped; mapping the target image onto the corresponding regions to be mapped in sequence according to the texture coordinates and the index coordinates, to obtain the anti-distortion-processed target image; and displaying the anti-distortion-processed target image.
  • the texture mode is a triangle texture mode.
  • Connecting the vertices of the plurality of image areas according to the index coordinates and dividing the image areas into regions to be mapped includes: connecting the vertices of the image areas according to the index coordinates so that each image area is divided into at least one triangular area, and every image area except the one closest to the geometric center of the lens is divided into a plurality of triangular areas. The triangular areas do not overlap one another, and the triangular areas of an image area include all of that area's vertices. Among the vertices of each triangular area, at least one vertex belongs to the adjacent image area that is closer to the geometric center of the lens. The areas of the triangular areas within an image area are not all equal; along the direction from the geometric center of the lens to the edge of the lens, the number of triangular areas per image area gradually increases and their area gradually decreases.
  • In another aspect, an image display device is provided. The image display device includes a body, a display screen arranged on the body, a lens arranged on the body, and an image processor arranged on the body. When the image display device is worn, the lens is closer than the display screen to the side near the human eye, and the lens has a plurality of distortion coefficients.
  • The image processor is configured to divide the display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, so that the outer boundary line of each image area encloses a polygon, the geometric center of the polygon enclosed by the outer boundary line of each image area coincides with the geometric center of the lens, and the distortion coefficient at the position where each vertex of the polygon maps onto the lens is one of the plurality of distortion coefficients; and to perform anti-distortion processing on the coordinates of the vertices of each image area to obtain the texture coordinates of those vertices.
  • The image processor is further configured to sort the coordinates of the vertices of the image areas according to the texture-mapping mode to obtain index coordinates; to connect the vertices of the image areas according to the index coordinates, thereby dividing the image areas into multiple non-overlapping regions to be mapped; and to map the target image onto the corresponding regions to be mapped in sequence according to the texture coordinates and the index coordinates, obtaining the anti-distortion-processed target image.
  • The image processor is configured to connect the vertices of the image areas according to the index coordinates so that each image area is divided into at least one triangular area, with every image area except the one closest to the geometric center of the lens divided into a plurality of triangular areas. The triangular areas do not overlap one another, and the triangular areas of an image area include all of that area's vertices. Among the vertices of each triangular area, at least one vertex belongs to the adjacent image area that is closer to the geometric center of the lens. The areas of the triangular areas within an image area are not all equal; along the direction from the geometric center of the lens to the edge of the lens, the number of triangular areas per image area gradually increases and their area gradually decreases.
  • the image display device further includes a controller provided on the body.
  • the controller is coupled with the image processor.
  • In yet another aspect, a computer-readable storage medium stores computer program instructions that, when run on a processor, cause the processor to perform one or more steps of the image processing method according to any of the above embodiments.
  • In yet another aspect, a computer program product includes computer program instructions that, when executed on a computer, cause the computer to perform one or more steps of the image processing method according to any of the above embodiments.
  • In yet another aspect, a computer program, when executed on a computer, causes the computer to perform one or more steps of the image processing method according to any of the above embodiments.
  • FIG. 1 is a structural diagram of an image display device according to some embodiments of the present disclosure;
  • FIG. 2 is a comparison diagram of a normal target image and a distorted target image of the image display device according to some embodiments;
  • FIG. 3 is a distribution diagram of multiple image areas according to the related art;
  • FIG. 4 is a comparison diagram of the normal target image and the target image after anti-distortion processing of the image display device according to some embodiments;
  • FIG. 5 is a distribution diagram of multiple image areas according to some embodiments of the present disclosure;
  • FIG. 6 is a flowchart of an image processing method according to some embodiments of the present disclosure;
  • FIG. 7 is another distribution diagram of multiple image areas according to some embodiments of the present disclosure;
  • FIG. 8 is another flowchart of an image processing method according to some embodiments of the present disclosure;
  • FIG. 9 is another distribution diagram of multiple image areas according to some embodiments of the present disclosure;
  • FIG. 10 is yet another distribution diagram of multiple image areas according to some embodiments of the present disclosure;
  • FIG. 11 is yet another flowchart of an image processing method according to some embodiments of the present disclosure;
  • FIG. 12 is yet another flowchart of an image processing method according to some embodiments of the present disclosure;
  • FIG. 13 is yet another flowchart of an image processing method according to some embodiments of the present disclosure.
  • The terms "first" and "second" are used for descriptive purposes only and should not be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present disclosure, unless otherwise specified, "plurality" means two or more.
  • the expressions “coupled” and “connected” and their extensions may be used.
  • the term “connected” may be used when describing some embodiments to indicate that two or more components are in direct physical or electrical contact with each other.
  • the term “coupled” may be used when describing some embodiments to indicate that two or more components have direct physical or electrical contact.
  • the term “coupled” or “communicatively coupled” may also mean that two or more components are not in direct contact with each other, but still cooperate or interact with each other.
  • the embodiments disclosed herein are not necessarily limited to the content of this document.
  • the image display device 100 includes a main body 110, a display screen 102 disposed on the main body 110, and a lens 101 disposed on the main body 110.
  • The main body 110 of the image display device 100 is the structure that implements image display in the image display device 100 and can be worn close to the human face.
  • the image display device may be a virtual reality device.
  • the main body 110 is a head-mounted display.
  • the display screen 102 may be a liquid crystal display screen.
  • the display screen 102 may be a display screen of the mobile phone.
  • When the image display device 100 is worn, the lens 101 is closer than the display screen 102 to the side near the human eye.
  • Through the lens 101, the user can see the target image displayed on the display screen 102 under a larger field of view.
  • the lens 101 is a convex lens.
  • However, viewing the target image through the lens 101 distorts the image, and the larger the field of view under which the user views the target image, the greater the distortion. Therefore, when the user views a normal target image (as shown in part (a) of FIG. 2), the parts of the actual image far from its geometric center appear stretched; what the user sees is similar to the virtual image of the target image magnified by the lens 101, and the target image exhibits a distortion effect (for example, the pincushion distortion shown in part (b) of FIG. 2).
  • To counteract this, the target image can be subjected to anti-distortion processing: the display area of the display screen 102 is divided into multiple image areas, the vertices of each image area are obtained according to the distortion coefficients of the lens 101, and anti-distortion processing is performed on them to obtain the anti-distortion-processed target image. This reduces the amount of calculation in the image-processing flow and lowers the configuration requirements of the device running the anti-distortion process.
  • The normal target image shown in part (a) of FIG. 4 is anti-distortion processed to obtain the target image shown in part (b) of FIG. 4 (exhibiting, for example, a barrel-distortion effect). When the user views this anti-distortion-processed target image through the lens 101, the lens's action restores it, so that the image the user sees is the normal target image shown in part (c) of FIG. 4, thereby reducing the distortion of the target image.
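As a rough illustration of the pre-warp idea above, a common polynomial radial-distortion model can be applied with barrel-type sign so that the lens's pincushion distortion cancels it. This is a minimal sketch; the coefficients `k1` and `k2` are illustrative placeholders, not values from this disclosure.

```python
def predistort(x, y, k1=-0.22, k2=0.024):
    """Barrel-type radial pre-warp intended to cancel a lens's pincushion
    distortion.  k1 and k2 are illustrative values only; real coefficients
    come from lens calibration."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point near the edge is pulled toward the centre (barrel effect),
# while the centre itself is unchanged:
print(predistort(0.9, 0.0))
print(predistort(0.0, 0.0))
```

Viewed through the lens, the pulled-in edge points are stretched back outward, which is why the user in part (c) of FIG. 4 sees an approximately undistorted image.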
  • In the related art, the display area of the display screen 102 is divided, according to the shape of the display screen 102, into multiple grid-like image areas (for example, the 24 × 24 image areas S' in FIG. 3). However, the shape of the lens 101 (for example, a circle) does not match the shape of the display screen 102, so the distortion coefficients at the positions where the vertices of the grid S' map onto the lens 101 do not coincide with the distortion coefficients K of the lens 101: some vertices of the grid S' fall into transition regions between different distortion coefficients. When the texture coordinates of those vertices are mapped, the image is not smoothly connected at the junctions between different distortion coefficients, which degrades the anti-distortion effect.
  • Moreover, the 24 × 24 grid-like image areas S' in FIG. 3 have 25 × 25 vertices; the more image areas S', the more vertices, which increases the computation of anti-distortion processing, lengthens the anti-distortion and texture-mapping time, and causes frame dropping during display.
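The vertex budgets being compared can be checked in a couple of lines; the ring counts below are the ones given as an example later in this disclosure for a lens with 10 distortion coefficients.

```python
# Vertex budget of the related-art uniform grid versus the ring-based division.
grid_vertices = 25 * 25           # a 24 x 24 grid has 25 x 25 = 625 vertices
ring_counts = [3, 6, 6, 12, 12, 12, 24, 24, 24, 24]
ring_vertices = sum(ring_counts)  # 147 vertices in total

print(grid_vertices, ring_vertices)  # 625 147
```

Each vertex requires an anti-distortion computation, so the ring-based division cuts that work to roughly a quarter of the grid's.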
  • the embodiment of the present disclosure provides an image processing method. This image processing method is applied to the image display device 100.
  • the image display device 100 includes a lens 101 and a display screen 102.
  • the lens 101 has a plurality of distortion coefficients K.
  • the image processing method includes:
  • the display area A of the display screen 102 is divided into multiple image areas S; the outer boundary line L of each image area S encloses a polygon; the geometric center O2 of the polygon enclosed by the outer boundary line L of each image area S coincides with the geometric center O1 of the lens 101; and the distortion coefficients at the positions where the vertices of the polygon enclosed by the outer boundary line L of each image area S map onto the lens 101 are all the same.
  • the multiple distortion coefficients K of the lens 101 are uniformly distributed.
  • the distortion coefficient at the position where each vertex of the polygon surrounded by the outer boundary line L of the image area S is mapped on the lens 101 is one distortion coefficient among the multiple distortion coefficients K.
  • the embodiment of the present disclosure ignores the deviation of the geometric center O1 of the lens 101 from the geometric center O2 of the polygon enclosed by the outer boundary line L of the image area S due to fluctuations in the actual product assembly process.
  • the vertices of the image area S mentioned in the text are the vertices of the polygon surrounded by the outer boundary line L of the image area S.
  • the display area A of the display screen 102 is an area where the display screen 102 can normally display images.
  • The outer boundary line L of an image area S refers to the boundary line of the image area away from its geometric center O2.
  • the number of sides of the polygon enclosed by the outer boundary line L is greater than or equal to 3.
  • The distortion coefficients K at the positions where the vertices of the polygon enclosed by the outer boundary line L of each image area S map onto the lens 101 are equal. On the lens 101, the figure enclosed by connecting positions of the same distortion coefficient has approximately the same shape as the lens 101 (for example, a circle or an ellipse); therefore, the polygon enclosed by the outer boundary line L of each image area S also has approximately the same shape as the lens 101.
  • In the related art, the shapes of the lens 101 and of the image areas do not match; the distortion coefficient at the position where a vertex of an image area maps onto the lens 101 can differ significantly from the distortion coefficients of the lens 101, falling into a transition region between different distortion coefficients K. As a result, during subsequent image correction, the image carries distortion that does not match the distortion coefficients of the lens 101 and is not smoothly connected, which degrades the image display effect after anti-distortion processing. In the embodiments of the present disclosure, by contrast, the distortion coefficient K at the position where each vertex of the polygon enclosed by the outer boundary line L of each image area S maps onto the lens 101 is one of the plurality of distortion coefficients K, so no vertex falls into a transition region between different distortion coefficients K of the lens 101.
  • In addition, the number of vertices of the image areas S in the embodiments of the present disclosure is relatively reduced, which lowers the amount of computation for anti-distortion processing on each vertex and saves image-processing time.
  • Moreover, the distortion coefficient corresponding to each vertex of each image area S is one of the plurality of distortion coefficients K and is therefore known. In FIG. 3, by contrast, the positions where the vertices of the image areas S' map onto the lens 101 can have distortion coefficients other than the known coefficients K, so those coefficients must first be computed (for example, by interpolation) before anti-distortion processing. The embodiments of the present disclosure therefore reduce the computation of anti-distortion processing per vertex, shorten image-processing time, and improve image-processing efficiency.
  • According to the distortion coefficients K corresponding to the vertices of an image area S, the coordinates of those vertices are anti-distortion processed to obtain their texture coordinates; that is, the coordinates of each vertex are interpolated according to the distortion coefficient K corresponding to that vertex (for example, using the Catmull-Rom interpolation algorithm) to obtain the texture coordinates of each vertex of the image area S.
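The disclosure names Catmull-Rom interpolation without giving its form; below is a minimal sketch of the standard uniform Catmull-Rom segment, applied to a hypothetical distortion-scale table (the four sample values are illustrative, not from the patent).

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a uniform Catmull-Rom spline segment between p1 and p2
    at parameter t in [0, 1] (returns p1 at t=0 and p2 at t=1)."""
    t2, t3 = t * t, t * t * t
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

# Interpolating an illustrative distortion-scale table sampled at four radii;
# t = 0.5 lands halfway between the 2nd and 3rd samples:
scales = [1.00, 1.02, 1.08, 1.20]
print(catmull_rom(*scales, 0.5))
```

Catmull-Rom passes through its control samples, so the interpolated distortion scale agrees exactly with the calibrated coefficients at the vertices while varying smoothly between them.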
  • In summary, the display area A of the display screen 102 is divided into a plurality of image areas S; the outer boundary line L of each image area S encloses a polygon whose geometric center O2 coincides with the geometric center O1 of the lens 101; and the distortion coefficients K at the positions where the vertices of each polygon map onto the lens 101 are all the same. This reduces the number of vertices across the image areas S, lowers the computation of anti-distortion processing per vertex, shortens image-processing time, and improves the image-processing effect.
  • In addition, the polygon enclosed by the outer boundary line L of each image area S has approximately the same shape as the lens 101, which avoids the mismatch between the shapes of the lens 101 and the image areas S that would otherwise cause vertices to fall into transition regions between different distortion coefficients K during subsequent mapping, yielding images that are not smoothly connected and degrading the display effect after anti-distortion processing.
  • In some embodiments, along the direction from the geometric center O1 of the lens 101 to its edge, the distortion coefficients K of the lens 101 gradually increase; the number of vertices of an image area S relatively close to O1 is less than or equal to the number of vertices of an image area S relatively far from O1.
  • At the geometric center of the lens 101 the distortion coefficient K is smallest, and at the edge of the lens 101 it is largest. Along the direction from the geometric center O1 to the edge, the figures enclosed by connecting positions of the same distortion coefficient become progressively denser: near the geometric center O1 the distortion coefficients K of the lens 101 are sparsely distributed, and near the edge they are densely distributed.
  • Correspondingly, the vertices of the image areas S relatively close to the geometric center O1 of the lens 101 are sparsely distributed, and the vertices of the image areas S relatively close to the edge of the lens 101 are densely distributed; that is, the distribution of the distortion coefficients K of the lens 101 and the distribution of the vertices of the image areas S are approximately the same.
  • This sparse-to-dense vertex distribution in the embodiments of the present disclosure reduces the number of vertices, lowers the anti-distortion computation per vertex, shortens image-processing time, and improves the image-processing effect.
  • the lens 101 includes a plurality of distortion parts Q.
  • the geometric center O3 of the plurality of distortion parts Q coincides with the geometric center O1 of the lens 101, and each distortion part Q has at least one distortion coefficient K.
  • The outer boundary line L of at least one image area S is mapped into the same distortion part Q. Along the direction from the geometric center O1 of the lens 101 to its edge, the number of vertices of the image areas S mapped into each successive distortion part Q increases, and the numbers of vertices of the image areas S mapped into the same distortion part are equal.
  • the density of the vertices of the image region S corresponding to each distortion part Q gradually increases, which can reduce the number of vertices of the multiple image regions S, It reduces the amount of calculation for anti-distortion processing on each vertex, shortens the time of image processing, and improves the effect of image processing.
  • the distortion coefficients K corresponding to the vertices of the different image regions S are not equal.
  • the number of image areas S into which the display area A of the display screen 102 is divided is equal to the number of distortion coefficients K possessed by the lens 101.
  • For example, the lens 101 has 10 distortion coefficients K. Along the direction from the geometric center O1 of the lens 101 to its edge, the numbers of vertices of the image areas S are 3, 6, 6, 12, 12, 12, 24, 24, 24, and 24, respectively (as shown in FIG. 7), or 3, 6, 6, 12, 12, 24, 24, 48, 48, and 48, respectively (not shown).
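As a sketch of how such a ring-based vertex layout might be generated: the per-ring counts follow the example above, while the radii are illustrative, evenly spaced values rather than calibrated ones.

```python
import math

def ring_vertices(counts, radii):
    """Place counts[i] vertices evenly on the concentric ring of radius
    radii[i], centred on the lens's geometric center (taken as the origin)."""
    rings = []
    for n, r in zip(counts, radii):
        rings.append([(r * math.cos(2 * math.pi * k / n),
                       r * math.sin(2 * math.pi * k / n)) for k in range(n)])
    return rings

counts = [3, 6, 6, 12, 12, 12, 24, 24, 24, 24]
radii = [0.1 * (i + 1) for i in range(len(counts))]  # illustrative spacing
mesh = ring_vertices(counts, radii)
print([len(r) for r in mesh])     # per-ring vertex counts
print(sum(len(r) for r in mesh))  # 147 vertices in total
```

In practice the radii would be chosen so that each ring maps onto a position of one known distortion coefficient K, which is what keeps every vertex out of the transition regions.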
  • the image processing method further includes:
  • the vertices of the multiple image areas S are connected, and the multiple image areas S are divided into multiple regions T to be mapped, and there is no overlap between the regions T to be mapped.
  • After the anti-distortion-processed target image passes through the lens 101, it is distorted in the direction opposite to its pre-applied distortion, so that the image the human eye observes on the display screen 102 through the lens 101 tends toward the normal, undistorted target image, thereby reducing the degree of distortion of the target image.
  • In addition, the anti-distortion-processed target image obtained by mapping is close to the shape of the lens 101, so the image at the junctions between different distortion coefficients K is smoother, improving the display effect of the anti-distortion-processed target image.
  • In some embodiments, the texture mode is a triangle texture mode. Connecting the vertices of the multiple image areas S according to the index coordinates and dividing the image areas S into regions T to be mapped, as shown in FIG. 11, includes: dividing each image area S into at least one triangular area, with every image area except the one closest to the geometric center O1 of the lens 101 divided into a plurality of triangular areas. Among the vertices of each triangular area, at least one vertex is a vertex of the adjacent image area S that is closer to the geometric center O1 of the lens 101. The areas of the triangular areas within an image area S are not all equal; along the direction from the geometric center O1 of the lens 101 to the edge of the lens 101, the number of triangular areas per image area S gradually increases and their area gradually decreases.
  • The shape of each region T to be mapped is a triangle.
  • the embodiments of the present disclosure can reduce the number of triangular regions, shorten the time required for mapping, avoid prolonging the mapping time and cause frame drop during the display process, and improve the display effect.
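A minimal sketch of such a triangulation between two adjacent rings, under the assumption that the outer ring's vertex count is a multiple of the inner ring's (as in the 3, 6, 6, 12, ... example); the index-generation scheme is illustrative, not the patent's exact ordering.

```python
def triangulate_between(inner, outer):
    """Split the annular region between two rings of vertex indices into
    non-overlapping triangles.  Every triangle uses at least one vertex of
    the inner (closer-to-centre) ring, matching the rule described above.
    Assumes len(outer) is a multiple of len(inner)."""
    tris = []
    step = len(outer) // len(inner)
    for i, iv in enumerate(inner):
        base = i * step
        # fan from the inner vertex over its arc of outer-ring vertices
        for j in range(step):
            a = outer[(base + j) % len(outer)]
            b = outer[(base + j + 1) % len(outer)]
            tris.append((iv, a, b))
        # bridge to the next inner-ring vertex
        tris.append((iv, inner[(i + 1) % len(inner)],
                     outer[(base + step) % len(outer)]))
    return tris

# Innermost ring (3 vertices, indices 0-2) to the next ring (6 vertices):
tris = triangulate_between([0, 1, 2], [3, 4, 5, 6, 7, 8])
print(len(tris))  # 9 triangles: len(inner) + len(outer)
```

Because each annulus yields only inner-count plus outer-count triangles, the total triangle count stays far below that of a uniform grid, which is the source of the mapping-time saving described above.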
  • embodiments of the present disclosure may adopt any texture mode in the field that can implement the above-mentioned image processing method, which is not limited herein.
  • For example, the texture mode may be a quadrilateral texture mode. In that case, each image area S is divided into at least one quadrilateral area, and every image area S except the one closest to the geometric center O1 of the lens 101 is divided into a plurality of quadrilateral areas.
  • Among the vertices of each quadrilateral area, at least two are vertices of the adjacent image area S that is closer to the geometric center O1 of the lens 101. Along the direction from the geometric center O1 of the lens 101 to the edge of the lens 101, the number of quadrilateral areas per image area S gradually increases and their area gradually decreases. In this way, the number of quadrilateral areas is reduced, the time required for mapping is shortened, frame dropping caused by prolonged mapping is avoided, and the display effect is improved.
  • the image processing method further includes:
  • The texture to be mapped is the target image obtained by rendering.
  • a plurality of distortion coefficients K of the lens 101 are also obtained.
  • Otherwise, step S80 is not executed, and the flow returns to step S70 to receive the vertical synchronization signal Vsync again.
  • the image processing method further includes:
  • The texture mapping can be applied directly according to the previously obtained texture coordinates, without repeating the anti-distortion calculation, which reduces computation, saves image-processing time, and improves image-processing efficiency.
  • the image processing method further includes:
  • judging whether to end the application means judging whether there is target image data of a subsequent frame to be updated; if not, the image processing process is ended.
  • An embodiment of the present disclosure provides an image display device 100, and the image display device 100 is the image display device 100 shown in FIG. 1 described above.
  • the image display device 100 includes an image processor 200.
  • the image processor 200 is disposed on the main body 110 of the image display device 100 in FIG. 1.
  • the display screen 102 is coupled to the image processor 200.
  • the image processor 200 may be a GPU (Graphics Processing Unit, graphics processor).
  • the image processor 200 is configured to divide the display area A of the display screen 102 into a plurality of image areas S according to a plurality of distortion coefficients K of the lens 101 (referring to FIG. 5), such that the outer boundary line L of each image area S encloses a polygon, and the geometric center O2 of the polygon enclosed by the outer boundary line L of the image area S coincides with the geometric center O1 of the lens 101.
  • the distortion coefficient at the position where each vertex of the polygon surrounded by the outer boundary line L of the image area S is mapped on the lens 101 is one distortion coefficient among the plurality of distortion coefficients.
  • anti-distortion processing is performed on the coordinates of the vertices of the image area S according to the distortion coefficients corresponding to the vertices, to obtain the texture coordinates of the vertices of the image area S.
  • since the rendering capability of a mobile terminal differs considerably from that of a PC (Personal Computer), for the image processor of an all-in-one image display device, when the number of image areas during anti-distortion processing is large — for example, when the number of grid-like image areas S' in FIG. 3 reaches (32×32) — the mapping time of the image processor is too long, resulting in frequent frame dropping during the display process.
  • the image processor 200 provided by the embodiment of the present disclosure divides the display area A of the display screen 102 into a plurality of image areas S such that the outer boundary line L of each image area S encloses a polygon, the geometric center O2 of the polygon enclosed by the outer boundary line L of the image area S coincides with the geometric center O1 of the lens 101, and the distortion coefficient at the position where each vertex of that polygon is mapped onto the lens 101 is one distortion coefficient among the plurality of distortion coefficients K; this reduces the number of vertices of the plurality of image areas S.
  • consequently, the amount of calculation for performing anti-distortion processing on each vertex is reduced, the image-processing time of the image processor 200 is shortened, and the performance of the image processor 200 is improved. In this way, when the image processor 200 is applied to a mobile-terminal image display device, the load of the image processor 200 can be reduced, and frame dropping can be avoided.
  • the shape of the polygon enclosed by the outer boundary line L of each image area S is approximately the same as the shape of the lens 101, which avoids a mismatch between the shape of the lens 101 and the image areas S; such a mismatch would cause non-smooth image transitions, during the subsequent mapping of the image areas S, in the transition regions between different distortion coefficients K of the lens 101, degrading the image display effect after anti-distortion processing.
  • the image processor 200 is further configured to sort the coordinates of the vertices of the multiple image regions S according to the mapping mode to obtain index coordinates; according to the index coordinates, referring to FIG. 9, connect the multiple image regions At each vertex of S, the multiple image regions S are divided into multiple regions to be mapped T without overlapping each other; and, according to the texture coordinates and index coordinates, the target image is sequentially mapped on the corresponding multiple regions to be mapped T Inside, the target image after anti-distortion processing is obtained.
  • when the texture mode is a triangle texture mode, the image processor 200 is configured to connect the vertices of the multiple image regions S according to the index coordinates (referring to FIG. 9), dividing each image region S into at least one triangular area; except for the image area closest to the geometric center O1 of the lens 101, each remaining image area S is divided into a plurality of triangular areas.
  • among the vertices of each triangular area, at least one vertex is a vertex of the image area S that is adjacent to that image area S and closer to the geometric center O1 of the lens 101.
  • the areas of the plurality of triangular regions included in each image region S are not completely equal.
  • along the direction from the geometric center O1 of the lens 101 to the edge of the lens 101, the number of triangular areas in image areas S with different numbers of vertices gradually increases, and the area of the triangular areas in image areas S with different numbers of vertices gradually decreases.
  • the image processor 200 is further configured to acquire the target image data and the vertical synchronization signal Vsync before dividing the display area of the display screen 102 into a plurality of image areas.
  • the target image is obtained by rendering according to the target image data.
  • before dividing the display area of the display screen 102 into multiple image areas, the image processor 200 also obtains the multiple distortion coefficients K of the lens 101.
  • the image processor 200 is further configured to update the target image data and the vertical synchronization signal Vsync of the subsequent frames after displaying the target image of the first frame after the anti-distortion processing.
  • the image processor 200 is further configured to sequentially map the target image of the subsequent frame according to the texture coordinates obtained after performing the anti-distortion processing on the target image of the first frame.
  • the above-mentioned image processor 200 has the same beneficial effects as the above-mentioned image processing method, so it will not be repeated here.
  • the specific structure of the image processor 200 of the embodiment of the present disclosure may adopt any circuit or module capable of realizing corresponding functions in the field. In practical applications, technicians can make selections according to the situation, and the present disclosure is not limited herein.
  • the image display device 100 further includes a controller 103 provided on the main body 110.
  • the controller 103 is coupled with the image processor 200.
  • the controller 103 is configured to control the image processor 200 to perform image processing, so that the display screen 102 displays the target image after anti-distortion processing.
  • controller 103 transmits the vertical synchronization signal Vsync to the image processor 200.
  • the specific structure of the controller 103 can adopt any circuit or module capable of realizing corresponding functions in the field. In practical applications, technicians can make selections according to the situation, and the present disclosure is not limited herein.
  • the controller 103 may be applied to a mobile terminal (such as a mobile phone, etc.).
  • Some embodiments of the present disclosure provide a computer-readable storage medium (for example, a non-transitory computer-readable storage medium), the computer-readable storage medium stores computer program instructions, and when the computer program instructions run on a processor , Causing the processor to execute one or more steps in the image processing method described in any one of the foregoing embodiments.
  • the foregoing computer-readable storage medium may include, but is not limited to: magnetic storage devices (for example, hard disks, floppy disks, or magnetic tapes, etc.), optical disks (for example, CD (Compact Disk), DVD (Digital Versatile Disk, Digital universal disk), etc.), smart cards and flash memory devices (for example, EPROM (Erasable Programmable Read-Only Memory), cards, sticks or key drives, etc.).
  • Various computer-readable storage media described in this disclosure may represent one or more devices and/or other machine-readable storage media for storing information.
  • the term "machine-readable storage medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
  • Some embodiments of the present disclosure also provide a computer program product.
  • the computer program product includes computer program instructions, and when the computer program instructions are executed on a computer, the computer program instructions cause the computer to execute one or more steps in the image processing method described in the foregoing embodiments.
  • Some embodiments of the present disclosure also provide a computer program.
  • when the computer program is executed on a computer, it causes the computer to execute one or more steps in the image processing method described in the above-mentioned embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Geometry (AREA)

Abstract

An image processing method, applied to an image display device that includes a lens and a display screen, the lens having a plurality of distortion coefficients. The image processing method includes: dividing a display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, where the outer boundary line of each image area encloses a polygon, the geometric center of the polygon enclosed by the outer boundary line of the image area coincides with the geometric center of the lens, and the positions at which the vertices of the polygon enclosed by the outer boundary line of the image area are mapped onto the lens all have the same distortion coefficient; and performing anti-distortion processing on the coordinates of the vertices of the image area according to the distortion coefficients corresponding to those vertices, to obtain the texture coordinates of the vertices of the image area.

Description

Image processing method and image display device
This application claims priority to Chinese Patent Application No. 202010388616.6, filed on May 9, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of display technology, and in particular to an image processing method and an image display device.
Background
An image display device — for example one using virtual reality (VR) technology, a computer simulation technology that can create and let users experience a virtual world — uses a computer to generate a simulated environment into which the user is immersed.
Virtual reality technology mainly involves a simulated environment, perception, natural skills, and sensing devices. The simulated environment is a computer-generated, real-time, dynamic, three-dimensional realistic image. Perception means that an ideal virtual reality system should provide all the kinds of perception a human has: in addition to the visual perception generated by computer graphics, it also includes hearing, touch, force feedback, motion, and even smell and taste — so-called multi-perception. Thus, when viewing a virtual reality display, the user can be immersed in the environment the technology provides, which gives a fairly realistic experience.
Summary
In one aspect, an image processing method is provided. The image processing method is applied to an image display device that includes a lens and a display screen, the lens having a plurality of distortion coefficients. The image processing method includes: dividing the area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, where the outer boundary line of each image area encloses a polygon, and the positions at which the vertices of the polygon enclosed by the outer boundary line of the image area are mapped onto the lens all have the same distortion coefficient; and performing anti-distortion processing on the coordinates of the vertices of the image area according to the distortion coefficients corresponding to the vertices, to obtain the texture coordinates of the vertices of the image area.
In some embodiments, along the direction from the geometric center of the lens toward the edge of the lens, the distortion coefficients of the lens gradually increase; and of two adjacent image areas, the number of vertices of the image area relatively closer to the geometric center of the lens is less than or equal to the number of vertices of the image area relatively farther from the geometric center of the lens.
In some embodiments, the lens includes a plurality of distortion portions whose geometric centers coincide with the geometric center of the lens, each distortion portion having at least one distortion coefficient; the outer boundary line of at least one image area is mapped into a same distortion portion; and along the direction from the geometric center of the lens toward the edge of the lens, the number of vertices of the image areas mapped into the respective distortion portions increases successively.
In some embodiments, in a case where the outer boundary lines of a plurality of image areas are mapped into one distortion portion, the image areas mapped into the same distortion portion have equal numbers of vertices.
In some embodiments, the distortion coefficients corresponding to the vertices of different image areas are not equal; and the number of image areas into which the display area of the display screen is divided equals the number of distortion coefficients of the lens.
In some embodiments, the lens has 10 distortion coefficients; along the direction from the geometric center of the lens toward the edge of the lens, the numbers of vertices of the plurality of image areas are respectively 3, 6, 6, 12, 12, 12, 24, 24, 24 and 24, or respectively 3, 6, 6, 12, 12, 24, 24, 48, 48 and 48.
In some embodiments, the image processing method further includes: sorting the coordinates of the vertices of the plurality of image areas according to a texture-mapping mode to obtain index coordinates; connecting the vertices of the plurality of image areas according to the index coordinates to divide the plurality of image areas into a plurality of regions to be mapped, with no overlap between the regions to be mapped; mapping a target image sequentially onto the corresponding regions to be mapped according to the texture coordinates and the index coordinates, to obtain the target image after anti-distortion processing; and displaying the target image after anti-distortion processing.
In some embodiments, the texture-mapping mode is a triangle mode. Connecting the vertices of the plurality of image areas according to the index coordinates to divide the plurality of image areas into a plurality of regions to be mapped includes: connecting the vertices of the plurality of image areas according to the index coordinates to divide each image area into at least one triangular region; except for the image area closest to the geometric center of the lens, each remaining image area is divided into a plurality of triangular regions; the triangular regions do not overlap, and the triangular regions in an image area include all the vertices of that image area; among the vertices of each triangular region, at least one vertex is a vertex of the image area that is adjacent to that image area and closer to the geometric center of the lens; the areas of the triangular regions included in each image area are not all equal; and along the direction from the geometric center of the lens toward the edge of the lens, the number of triangular regions in image areas with different numbers of vertices gradually increases, and the area of the triangular regions in image areas with different numbers of vertices gradually decreases.
In another aspect, an image display device is provided. The image display device includes a body, a display screen disposed on the body, a lens disposed on the body, and an image processor disposed on the body. The lens is closer than the display screen to the side near the human eye when the image display device is worn; the lens has a plurality of distortion coefficients. The image processor is configured to: divide the display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, such that the outer boundary line of each image area encloses a polygon whose geometric center coincides with the geometric center of the lens, and such that the distortion coefficient at the position where each vertex of that polygon is mapped onto the lens is one of the plurality of distortion coefficients; and perform anti-distortion processing on the coordinates of the vertices of the image area according to the distortion coefficients corresponding to the vertices, to obtain the texture coordinates of the vertices of the image area.
In some embodiments, the image processor is further configured to: sort the coordinates of the vertices of the plurality of image areas according to a texture-mapping mode to obtain index coordinates; connect the vertices of the plurality of image areas according to the index coordinates to divide the plurality of image areas into a plurality of mutually non-overlapping regions to be mapped; and map the target image sequentially onto the corresponding regions to be mapped according to the texture coordinates and the index coordinates, to obtain the target image after anti-distortion processing.
In some embodiments, in a case where the texture-mapping mode is a triangle mode, the image processor is configured to connect the vertices of the plurality of image areas according to the index coordinates to divide each image area into at least one triangular region, such that except for the image area closest to the geometric center of the lens, each remaining image area is divided into a plurality of triangular regions; the triangular regions do not overlap, and the triangular regions in an image area include all the vertices of that image area; among the vertices of each triangular region, at least one vertex is a vertex of the image area that is adjacent to that image area and closer to the geometric center of the lens; the areas of the triangular regions included in each image area are not all equal; and along the direction from the geometric center of the lens toward the edge of the lens, the number of triangular regions in image areas with different numbers of vertices gradually increases, and the area of the triangular regions in image areas with different numbers of vertices gradually decreases.
In some embodiments, the image display device further includes a controller disposed on the body. The controller is coupled to the image processor.
In yet another aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores computer program instructions that, when run on a processor, cause the processor to execute one or more steps of the image processing method described in any of the above embodiments.
In yet another aspect, a computer program product is provided. The computer program product includes computer program instructions that, when executed on a computer, cause the computer to execute one or more steps of the image processing method described in any of the above embodiments.
In yet another aspect, a computer program is provided. When executed on a computer, the computer program causes the computer to execute one or more steps of the image processing method described in any of the above embodiments.
Brief Description of the Drawings
To describe the technical solutions in the present disclosure more clearly, the accompanying drawings used in some embodiments of the present disclosure are briefly introduced below. Obviously, the drawings described below are merely drawings of some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from them. In addition, the drawings in the following description may be regarded as schematic diagrams and are not limitations on the actual size of the products, the actual flow of the methods, or the actual timing of the signals involved in the embodiments of the present disclosure.
FIG. 1 is a structural diagram of an image display device according to some embodiments of the present disclosure;
FIG. 2 is a comparison of a normal target image and a distorted target image of an image display device according to some embodiments;
FIG. 3 is a distribution diagram of a plurality of image areas according to the related art;
FIG. 4 is a comparison of a normal target image and a target image after anti-distortion processing according to some embodiments;
FIG. 5 is a distribution diagram of a plurality of image areas according to some embodiments of the present disclosure;
FIG. 6 is a flowchart of an image processing method according to some embodiments of the present disclosure;
FIG. 7 is another distribution diagram of a plurality of image areas according to some embodiments of the present disclosure;
FIG. 8 is another flowchart of an image processing method according to some embodiments of the present disclosure;
FIG. 9 is yet another distribution diagram of a plurality of image areas according to some embodiments of the present disclosure;
FIG. 10 is yet another distribution diagram of a plurality of image areas according to some embodiments of the present disclosure;
FIG. 11 is yet another flowchart of an image processing method according to some embodiments of the present disclosure;
FIG. 12 is yet another flowchart of an image processing method according to some embodiments of the present disclosure;
FIG. 13 is yet another flowchart of an image processing method according to some embodiments of the present disclosure.
Detailed Description
The technical solutions in some embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present disclosure fall within the protection scope of the present disclosure.
Unless the context requires otherwise, throughout the specification and claims, the term "comprise" and its other forms, such as the third-person singular "comprises" and the present participle "comprising", are construed in an open, inclusive sense, i.e., "including, but not limited to". In the description of the specification, terms such as "one embodiment", "some embodiments", "exemplary embodiments", "example", "specific example" or "some examples" are intended to indicate that a particular feature, structure, material or characteristic related to the embodiment or example is included in at least one embodiment or example of the present disclosure. Schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials or characteristics may be included in any one or more embodiments or examples in any suitable manner.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present disclosure, unless otherwise stated, "a plurality of" means two or more.
In describing some embodiments, the expressions "coupled" and "connected" and their derivatives may be used. For example, the term "connected" may be used to indicate that two or more components are in direct physical or electrical contact with each other. As another example, the term "coupled" may be used to indicate that two or more components are in direct physical or electrical contact. However, "coupled" or "communicatively coupled" may also mean that two or more components are not in direct contact with each other but still cooperate or interact with each other. The embodiments disclosed herein are not necessarily limited to the content herein.
The use of "applicable to" or "configured to" herein implies open and inclusive language that does not exclude devices applicable to or configured to perform additional tasks or steps.
As used herein, "about" or "approximately" includes the stated value as well as an average value within an acceptable range of deviation from the particular value, as determined by a person of ordinary skill in the art in view of the measurement in question and the error associated with measurement of the particular quantity (i.e., the limitations of the measurement system).
Some embodiments of the present disclosure provide an image display device 100. As shown in FIG. 1, the image display device 100 includes a body 110, a display screen 102 disposed on the body 110, and a lens 101 disposed on the body 110.
It should be noted that the body 110 of the image display device 100 is the structure in the image display device 100 that realizes image display, and this structure may be placed close to the human face.
For example, the image display device may be a virtual reality device. In a case where the image display device 100 is a head-mounted virtual reality device, the body 110 is a head-mounted display.
Illustratively, the display screen 102 may be a liquid crystal display.
In a case where the image display device 100 is used together with a mobile phone, the display screen 102 may be the display screen of the mobile phone.
The lens 101 is closer than the display screen 102 to the side near the human eye when the image display device 100 is worn.
It can be understood that, through the lens 101, the user can see the target image displayed by the display screen 102 at a relatively large field-of-view angle.
Illustratively, the lens 101 is a convex lens.
In this case, since the focal length differs across the spherical surface of the lens 101, the image is distorted, and the larger the field-of-view angle at which the user views the target image, the greater the distortion of the target image the user sees. Therefore, when the user views a normal target image (as shown in part (a) of FIG. 2), in the image actually seen, the portions far from the geometric center of the target image appear stretched; the image the user sees is similar to the virtual image presented after the lens 101 magnifies the target image, and the target image thus exhibits a distortion effect (for example, the pincushion distortion effect shown in part (b) of FIG. 2).
On this basis, anti-distortion processing may be performed on the target image: the display area of the display screen 102 is divided into a plurality of image areas, and the vertices of each image area are anti-distorted according to the distortion coefficients of the lens 101 to obtain the target image after anti-distortion processing. Compared with performing anti-distortion processing on every pixel of the target image, this reduces the amount of computation in image processing and lowers the configuration requirements of the device running the anti-distortion process.
Illustratively, anti-distortion processing is performed on the normal target image shown in part (a) of FIG. 4 to obtain the anti-distorted target image shown in part (b) of FIG. 4 (for example, exhibiting a barrel distortion effect). When the user views the anti-distorted target image through the lens 101, the lens acts on it so that the image the user sees is the normal target image shown in part (c) of FIG. 4, thereby reducing the degree of distortion of the target image.
Since the display screen 102 is generally rectangular, during anti-distortion processing the display area of the display screen 102 is divided, according to the shape of the display screen 102, into a plurality of image areas arranged in a grid (for example, the (24×24) image areas S' in FIG. 3). However, the shape of the lens 101 (for example, circular) does not match the shape of the display screen 102, so the distortion coefficients at the positions where the vertices of the grid S' are mapped onto the lens 101 do not match the distortion coefficients K of the lens 101. That is, vertices of the grid S' lie at the boundaries between different distortion coefficients, so that when texture mapping is performed according to the texture coordinates of the grid vertices, the image transitions at those boundaries are not smooth, which degrades the anti-distortion effect.
In addition, when the plurality of image areas form a grid — for example, the (24×24) grid-like image areas S' in FIG. 3, which have (25×25) vertices — the numbers of image areas S' and of vertices are large, so the anti-distortion computation is heavy and time-consuming, the mapping time is prolonged, and frames are dropped during display.
Embodiments of the present disclosure provide an image processing method. The image processing method is applied to the image display device 100.
As shown in FIG. 1, the image display device 100 includes a lens 101 and a display screen 102.
As shown in FIG. 5, the lens 101 has a plurality of distortion coefficients K.
It should be noted that the plurality of distortion coefficients K correspond to different positions of the lens 101. As shown in FIG. 6, the image processing method includes:
S10: As shown in FIG. 5, according to the plurality of distortion coefficients K of the lens 101, divide the display area A of the display screen 102 into a plurality of image areas S, where the outer boundary line L of each image area S encloses a polygon, the geometric center O2 of the polygon enclosed by the outer boundary line L of the image area S coincides with the geometric center O1 of the lens 101, and the positions at which the vertices of the polygon enclosed by the outer boundary line L of the image area S are mapped onto the lens 101 all have the same distortion coefficient.
The plurality of distortion coefficients K of the lens 101 are evenly distributed.
It can be understood that the distortion coefficient at the position where each vertex of the polygon enclosed by the outer boundary line L of the image area S is mapped onto the lens 101 is one of the plurality of distortion coefficients K.
It should be noted that the embodiments of the present disclosure ignore the offset between the geometric center O1 of the lens 101 and the geometric center O2 of the polygon enclosed by the outer boundary line L of the image area S that may be caused by fluctuations in the actual product assembly process.
S20: According to the distortion coefficients corresponding to the vertices of the image area S, perform anti-distortion processing on the coordinates of the vertices of the image area S to obtain the texture coordinates of the vertices of the image area S.
It should be noted that the vertices of an image area S mentioned herein are the vertices of the polygon enclosed by the outer boundary line L of the image area S.
It can be understood that the display area A of the display screen 102 is the area in which the display screen 102 can normally display images.
It should be noted that the outer boundary line L of an image area S refers to the boundary line of the image area that is far from the geometric center O2 of that image area. The polygon enclosed by the outer boundary line L has at least 3 sides.
It can be understood that the distortion coefficients K at the positions where the vertices of the polygon enclosed by the outer boundary line L of each image area S are mapped onto the lens 101 are equal.
It should be noted that, in the lens 101, the shape of the figure enclosed by the line connecting the positions having the same distortion coefficient is approximately the same as the shape of the lens 101 (for example, circular or elliptical). Therefore, the shape of the polygon enclosed by the outer boundary line L of each image area S is approximately the same as the shape of the lens 101.
In this case, it is possible to avoid a mismatch between the shapes of the lens 101 and the image areas S — that is, a large difference between the distortion coefficients at the positions where the vertices of the image areas S are mapped onto the lens 101 and the distortion coefficients of the lens 101 — which would cause, in the subsequent mapping of the image areas, distortion that does not conform to the lens's distortion coefficients in the transition regions between different distortion coefficients of the lens 101, non-smooth image transitions, and a degraded display of the anti-distorted image.
Moreover, compared with FIG. 3, in which the distortion coefficients at the positions where the vertices of the image areas S' are mapped onto the lens 101 lie between different distortion coefficients K of the lens 101, in the embodiments of the present disclosure the distortion coefficient at the position where each vertex of the polygon enclosed by the outer boundary line L of each image area S is mapped onto the lens 101 is one of the plurality of distortion coefficients K, so no vertices exist in the transition regions between different distortion coefficients K of the lens 101. Therefore, when performing anti-distortion processing on the coordinates of the vertices of the image areas S according to the corresponding distortion coefficients K, the number of vertices of the plurality of image areas S in the embodiments of the present disclosure is relatively reduced, which reduces the amount of computation for the anti-distortion processing of the vertices and saves image-processing time.
Furthermore, since the distortion coefficient K at each vertex of each image area S is one of the known distortion coefficients K — unlike FIG. 3, where the distortion coefficients at the positions where the vertices of the image areas S' are mapped onto the lens 101 may lie outside the plurality of distortion coefficients K and must therefore be solved by computation (for example, by interpolation) before anti-distortion processing — the embodiments of the present disclosure reduce the amount of computation for the anti-distortion processing of the vertices, shorten the image-processing time, and improve the image-processing effect.
Illustratively, performing anti-distortion processing on the coordinates of the vertices of an image area S according to the distortion coefficients K corresponding to the vertices to obtain the texture coordinates of the vertices means performing an interpolation operation on the vertex coordinates (for example, using the Catmull-Rom interpolation algorithm) according to the distortion coefficients K corresponding to the vertices, to obtain the texture coordinates of the vertices of the image area S.
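The interpolation step can be sketched as follows. The patent names Catmull-Rom interpolation but does not specify the sampling layout; the sketch below assumes, purely for illustration, that the lens's distortion coefficients are sampled at evenly spaced normalized radii, and evaluates a coefficient at an arbitrary radius:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom spline segment between p1 and p2 at t in [0, 1]."""
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3
    )

def interp_coefficient(coeffs, r):
    """Interpolate a distortion coefficient at normalized radius r in [0, 1],
    assuming coeffs are sampled at evenly spaced radii (a hypothetical layout)."""
    n = len(coeffs) - 1
    x = min(max(r, 0.0), 1.0) * n
    i = min(int(x), n - 1)
    t = x - i
    # Clamp the neighbour samples so the spline is defined at both boundaries.
    p0 = coeffs[max(i - 1, 0)]
    p1, p2 = coeffs[i], coeffs[i + 1]
    p3 = coeffs[min(i + 2, n)]
    return catmull_rom(p0, p1, p2, p3, t)
```

The spline passes exactly through the sampled coefficients (at t = 0 it returns p1), so the vertices that coincide with sample radii keep their original coefficients while intermediate radii get a smooth blend.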
Therefore, in the image processing method provided by the embodiments of the present disclosure, by dividing the display area A of the display screen 102 into a plurality of image areas S such that the outer boundary line L of each image area S encloses a polygon whose geometric center O2 coincides with the geometric center O1 of the lens 101, and such that the distortion coefficients K at the positions where the vertices of that polygon are mapped onto the lens 101 are all the same, the number of vertices of the plurality of image areas S can be reduced, the amount of computation for the anti-distortion processing of the vertices is reduced, the image-processing time is shortened, and the image-processing effect is improved.
Moreover, the shape of the polygon enclosed by the outer boundary line L of each image area S is approximately the same as the shape of the lens 101, which avoids a shape mismatch between the lens 101 and the image areas S that would otherwise cause non-smooth image transitions, during the subsequent mapping of the image areas S, in the transition regions between different distortion coefficients K of the lens 101, degrading the display of the anti-distorted image.
In some embodiments, along the direction from the geometric center O1 of the lens 101 toward the edge of the lens 101, the distortion coefficients K of the lens 101 gradually increase.
Of two adjacent image areas S, the number of vertices of the image area S relatively closer to the geometric center O1 of the lens 101 is less than or equal to the number of vertices of the image area S relatively farther from the geometric center O1 of the lens 101.
It can be understood that the distortion coefficient K of the lens 101 is smallest at the geometric center O1 of the lens 101 and largest at the edge of the lens 101. Along the direction from the geometric center O1 of the lens 101 toward the edge of the lens 101, the density of the figures enclosed by the lines connecting positions of equal distortion coefficient gradually increases; that is, near the geometric center O1 of the lens 101 the distortion coefficients K are sparsely distributed, and near the edge of the lens 101 they are densely distributed.
In this case, along the direction from the geometric center O1 of the lens 101 toward the edge of the lens 101, the vertices of the image areas S relatively close to the geometric center O1 of the lens 101 are sparsely distributed, and the vertices of the image areas S relatively close to the edge of the lens 101 are densely distributed. That is, the distribution of the distortion coefficients K of the lens 101 is approximately the same as the distribution of the vertices of the image areas S.
Thus, compared with the uniformly distributed vertices of the image areas S' in FIG. 3, the vertices of the image areas S in the embodiments of the present disclosure are sparse on the inside and dense on the outside, which reduces the number of vertices, reduces the amount of computation for the anti-distortion processing of the vertices, shortens the image-processing time, and improves the image-processing effect.
In some embodiments, as shown in FIG. 7, the lens 101 includes a plurality of distortion portions Q. The geometric centers O3 of the distortion portions Q coincide with the geometric center O1 of the lens 101, and each distortion portion Q has at least one distortion coefficient K.
The outer boundary line L of at least one image area S is mapped into a same distortion portion Q.
Along the direction from the geometric center O1 of the lens 101 toward the edge of the lens 101, the number of vertices of the image areas S mapped into the respective distortion portions Q increases successively.
In a case where the outer boundary lines L of a plurality of image areas S are mapped into one distortion portion Q, the image areas S mapped into the same distortion portion have equal numbers of vertices.
In this case, the shapes of the figures enclosed by the outer boundary lines L of the image areas S corresponding to the same distortion portion are the same.
In this case, along the direction from the geometric center O1 of the lens 101 toward the edge of the lens 101, the density of the vertices of the image areas S corresponding to the distortion portions Q gradually increases, which can reduce the number of vertices of the plurality of image areas S, reduce the amount of computation for the anti-distortion processing of the vertices, shorten the image-processing time, and improve the image-processing effect.
In some embodiments, the distortion coefficients K corresponding to the vertices of different image areas S are not equal. The number of image areas S into which the display area A of the display screen 102 is divided equals the number of distortion coefficients K of the lens 101.
Illustratively, the lens 101 has 10 distortion coefficients K; along the direction from the geometric center of the lens 101 toward the edge of the lens 101, the numbers of vertices of the plurality of image areas S are respectively 3, 6, 6, 12, 12, 12, 24, 24, 24 and 24 (as shown in FIG. 7), or respectively 3, 6, 6, 12, 12, 24, 24, 48, 48 and 48 (not shown).
In this case, as shown in FIG. 3, with 10 distortion coefficients K the number of grid-like image areas S' is (24×24=576) and the number of vertices is (25×25=625); in the embodiments of the present disclosure, as shown in FIG. 7, the number of image areas S is 10 and the number of vertices is (3+6+6+12+12+12+24+24+24+24=147). Therefore, the embodiments of the present disclosure can reduce the number of image areas S and the number of their vertices, thereby reducing the amount of computation for the anti-distortion processing of the vertices, shortening the image-processing time, and improving the image-processing effect.
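The vertex-count arithmetic in this comparison can be checked directly; the counts below are the ones given in the embodiment:

```python
# Ring vertex counts from the embodiment: 10 distortion coefficients,
# one ring of image-area vertices per coefficient (innermost ring has 3).
ring_vertex_counts = [3, 6, 6, 12, 12, 12, 24, 24, 24, 24]

ring_total = sum(ring_vertex_counts)   # vertices in the concentric layout
grid_total = 25 * 25                   # vertices of the related-art 24x24 grid

print(ring_total, grid_total)          # 147 625
print(f"vertex reduction: {1 - ring_total / grid_total:.1%}")
```

With the same 10 coefficients, the concentric layout carries fewer than a quarter of the vertices of the grid layout, which is the source of the reduced anti-distortion workload.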
It should be noted that a person skilled in the art may set the numbers of vertices of the plurality of image areas according to the actual image-processing time, the desired display effect, and the distortion coefficients of the lens, which is not limited here.
In some embodiments, as shown in FIG. 8, the image processing method further includes:
S30: According to the texture-mapping mode, sort the coordinates of the vertices of the plurality of image areas S to obtain index coordinates.
S40: As shown in FIG. 9, according to the index coordinates, connect the vertices of the plurality of image areas S to divide the plurality of image areas S into a plurality of regions to be mapped T, with no overlap between the regions to be mapped T.
S50: According to the texture coordinates and the index coordinates, map the target image sequentially onto the corresponding regions to be mapped T to obtain the target image after anti-distortion processing.
S60: Display the target image after anti-distortion processing.
In this case, after passing through the lens 101, the anti-distorted target image is distorted in the direction opposite to its own distortion, so that the image observed by the human eye through the lens 101 on the display screen 102 tends toward a normal, undistorted target image, thereby reducing the degree of distortion of the target image.
Moreover, the anti-distorted target image obtained by mapping conforms closely to the shape of the lens 101, so the image at the junctions between different distortion coefficients K is smoother, which improves the display of the anti-distorted target image.
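The cancellation between the pre-distortion and the lens can be illustrated with a toy radial model; the polynomial form and the coefficient below are assumptions made for the sketch, not values from the patent:

```python
def lens_distort(r, k=0.2):
    """Toy radial lens model (an assumption): the lens maps a normalized
    radius r to r * (1 + k * r**2), i.e. pincushion stretching for k > 0."""
    return r * (1 + k * r * r)

def pre_distort(r, k=0.2):
    """Numerically invert lens_distort by bisection, so that
    lens_distort(pre_distort(r)) ~= r (the barrel pre-distortion)."""
    # lens_distort is increasing and lens_distort(r) >= r for k >= 0,
    # so the pre-image of r lies in [0, r].
    lo, hi = 0.0, r
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if lens_distort(mid, k) < r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Applying `pre_distort` to the vertex radii and then viewing through `lens_distort` returns (to numerical precision) the original radius, which is the effect the displayed barrel-distorted image relies on.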
In some embodiments, the texture-mapping mode is a triangle mode. As shown in FIG. 11, connecting the vertices of the plurality of image areas S according to the index coordinates to divide the plurality of image areas S into a plurality of regions to be mapped T includes:
S41: As shown in FIG. 9, according to the index coordinates, connect the vertices of the plurality of image areas S to divide each image area S into at least one triangular region; except for the image area S closest to the geometric center O1 of the lens 101, each remaining image area S is divided into a plurality of triangular regions; the triangular regions do not overlap, and the triangular regions in an image area S include all the vertices of that image area S; among the vertices of each triangular region, at least one vertex is a vertex of the image area S that is adjacent to that image area S and closer to the geometric center O1 of the lens 101; the areas of the triangular regions included in each image area S are not all equal; and along the direction from the geometric center O1 of the lens 101 toward the edge of the lens 101, the number of triangular regions in image areas S with different numbers of vertices gradually increases, and the area of the triangular regions in image areas S with different numbers of vertices gradually decreases.
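One possible way to connect two adjacent rings of vertices into non-overlapping triangles, each sharing at least one vertex with the inner (more central) ring, is an angular merge. This is a minimal sketch; the even angular spacing of the ring vertices is an assumption, and the exact layout of FIG. 9 may differ:

```python
import math

def ring(n, radius):
    """n vertices evenly spaced on a circle (one ring of image-area vertices)."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def triangulate_annulus(n_in, n_out):
    """Index triples for non-overlapping triangles between an inner ring of
    n_in vertices and an outer ring of n_out vertices, advancing whichever
    ring's next vertex comes first in angle (a simple merge scheme)."""
    tris, i, j = [], 0, 0
    while i < n_in or j < n_out:
        a_in = (i + 1) / n_in    # fractional angle of the next inner vertex
        a_out = (j + 1) / n_out  # fractional angle of the next outer vertex
        if j >= n_out or (i < n_in and a_in <= a_out):
            # Triangle with two inner vertices and one outer vertex.
            tris.append((("in", i % n_in), ("in", (i + 1) % n_in),
                         ("out", j % n_out)))
            i += 1
        else:
            # Triangle with one inner vertex and two outer vertices.
            tris.append((("in", i % n_in), ("out", (j + 1) % n_out),
                         ("out", j % n_out)))
            j += 1
    return tris
```

The scheme yields `n_in + n_out` triangles per ring pair, so rings farther from the center (with more vertices) contain more, smaller triangles, and every triangle touches the inner ring — consistent with the properties step S41 requires.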
That is, as shown in FIG. 9, the regions to be mapped T are triangular.
In this case, compared with FIG. 3 — where the diagonal of each image area S' divides it into two triangular regions, and the triangular regions in every image area S' are equal in area and number — the embodiments of the present disclosure can reduce the number of triangular regions, shorten the time required for mapping, avoid dropped frames during display caused by prolonged mapping time, and improve the display effect.
It should be noted that the embodiments of the present disclosure may adopt any texture-mapping mode in the art capable of implementing the above image processing method, which is not limited here.
For example, as shown in FIG. 10, the texture-mapping mode may be a quadrilateral mode. In this case, each image area S is divided into at least one quadrilateral region; except for the image area S closest to the geometric center O1 of the lens 101, each remaining image area S is divided into a plurality of quadrilateral regions. Among the vertices of each quadrilateral region, at least two vertices are vertices of the image area S that is adjacent to that image area S and closer to the geometric center O1 of the lens 101; the areas of the quadrilateral regions included in each image area S are not all equal; and along the direction from the geometric center O1 of the lens 101 toward the edge of the lens 101, the number of quadrilateral regions in image areas S with different numbers of vertices gradually increases, and the area of the quadrilateral regions gradually decreases. In this way, the number of quadrilateral regions is reduced, the time required for mapping is shortened, dropped frames caused by prolonged mapping time are avoided, and the display effect is improved.
In some embodiments, as shown in FIG. 12, the image processing method further includes:
S70: Before dividing the display area A of the display screen 102 into the plurality of image areas S, acquire the target image data and the vertical synchronization signal Vsync.
S80: After the vertical synchronization signal Vsync is received, render the target image according to the target image data.
It can be understood that rendering the target image means rendering the texture of the target image.
Moreover, before the display area A of the display screen 102 is divided into the plurality of image areas S, the plurality of distortion coefficients K of the lens 101 are also acquired.
It should be noted that if the vertical synchronization signal Vsync is not received, step S80 is not executed, and the process returns to step S70 to receive the vertical synchronization signal Vsync again.
In some embodiments, as shown in FIG. 13, the image processing method further includes:
S90: After displaying the first frame of the target image after anti-distortion processing, update the target image data and the vertical synchronization signal Vsync of subsequent frames.
S100: Map the target images of the subsequent frames sequentially according to the texture coordinates obtained from the anti-distortion processing of the first frame of the target image.
In this case, anti-distortion processing needs to be performed only on the first frame of the target image to obtain the texture coordinates; during the display of subsequent frames, those texture coordinates can be applied directly without further anti-distortion computation, which reduces the amount of computation, saves image-processing time, and improves image-processing efficiency.
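The first-frame-only anti-distortion flow can be sketched as a display loop that caches the texture coordinates; the function names here are hypothetical placeholders for steps S10–S60 and S90–S100:

```python
def run_display_loop(frames, compute_texture_coords, map_frame):
    """Display loop: the first frame triggers the anti-distortion processing
    (S10-S60) and the resulting texture coordinates are cached; subsequent
    frames (S90-S100) are only mapped with the cached coordinates."""
    texture_coords = None
    displayed = []
    for frame in frames:
        if texture_coords is None:
            # The anti-distortion computation runs exactly once.
            texture_coords = compute_texture_coords()
        displayed.append(map_frame(frame, texture_coords))
    return displayed
```

Because the lens and the image-area layout do not change between frames, caching the coordinates trades one computation for all subsequent frames, which is the efficiency claim of this paragraph.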
In some embodiments, after displaying the current anti-distorted target image, as shown in FIG. 13, the image processing method further includes:
S110: Determine whether to end the application.
If not, update the target image data and the vertical synchronization signal Vsync of subsequent frames; if so, end the application.
It can be understood that determining whether to end the application means determining whether there is updatable target image data for a subsequent frame; if not, the image processing process ends.
Embodiments of the present disclosure provide an image display device 100, which is the image display device 100 shown in FIG. 1 above.
The image display device 100 includes an image processor 200. The image processor 200 is disposed on the body 110 of the image display device 100 in FIG. 1, and the display screen 102 is coupled to the image processor 200.
It can be understood that the image processor 200 may be a GPU (Graphics Processing Unit).
The image processor 200 is configured to: divide the display area A of the display screen 102 into a plurality of image areas S according to the plurality of distortion coefficients K of the lens 101 (see FIG. 5), such that the outer boundary line L of each image area S encloses a polygon whose geometric center O2 coincides with the geometric center O1 of the lens 101, and such that the distortion coefficient at the position where each vertex of that polygon is mapped onto the lens 101 is one of the plurality of distortion coefficients; and perform anti-distortion processing on the coordinates of the vertices of the image area S according to the distortion coefficients corresponding to the vertices, to obtain the texture coordinates of the vertices of the image area S.
Since the rendering capability of a mobile terminal differs considerably from that of a PC (Personal Computer), for the image processor of an all-in-one image display device, when the number of image areas during anti-distortion processing is large — for example, when the number of grid-like image areas S' in FIG. 3 reaches (32×32) — the mapping time of the image processor is too long, causing frequent dropped frames during display.
In this case, the image processor 200 provided by the embodiments of the present disclosure divides the display area A of the display screen 102 into a plurality of image areas S such that the outer boundary line L of each image area S encloses a polygon, the geometric center O2 of the polygon enclosed by the outer boundary line L of the image area S coincides with the geometric center O1 of the lens 101, and the distortion coefficient K at the position where each vertex of that polygon is mapped onto the lens 101 is one of the plurality of distortion coefficients K, which reduces the number of vertices of the plurality of image areas S. Consequently, when the coordinates of the vertices of the image areas S are anti-distorted, the amount of computation is reduced, the image-processing time of the image processor 200 is shortened, and the performance of the image processor 200 is improved. Thus, when the image processor 200 is applied to a mobile-terminal image display device, the load of the image processor 200 can be reduced and dropped frames avoided.
Moreover, the shape of the polygon enclosed by the outer boundary line L of each image area S is approximately the same as the shape of the lens 101, which avoids a shape mismatch between the lens 101 and the image areas S that would otherwise cause non-smooth image transitions, during the subsequent mapping of the image areas S, in the transition regions between different distortion coefficients K of the lens 101, degrading the displayed image after anti-distortion processing.
In some embodiments, the image processor 200 is further configured to: sort the coordinates of the vertices of the plurality of image areas S according to the texture-mapping mode to obtain index coordinates; connect the vertices of the plurality of image areas S according to the index coordinates (see FIG. 9) to divide the plurality of image areas S into a plurality of mutually non-overlapping regions to be mapped T; and map the target image sequentially onto the corresponding regions to be mapped T according to the texture coordinates and the index coordinates, to obtain the target image after anti-distortion processing.
In some embodiments, in a case where the texture-mapping mode is a triangle mode, the image processor 200 is configured to connect the vertices of the plurality of image areas S according to the index coordinates (see FIG. 9) to divide each image area S into at least one triangular region, such that except for the image area closest to the geometric center O1 of the lens 101, each remaining image area S is divided into a plurality of triangular regions.
Among the vertices of each triangular region, at least one vertex is a vertex of the image area S that is adjacent to that image area S and closer to the geometric center O1 of the lens 101.
The areas of the triangular regions included in each image area S are not all equal.
Along the direction from the geometric center O1 of the lens 101 toward the edge of the lens 101, the number of triangular regions in image areas S with different numbers of vertices gradually increases, and the area of the triangular regions in image areas S with different numbers of vertices gradually decreases.
In some embodiments, the image processor 200 is further configured to acquire the target image data and the vertical synchronization signal Vsync before dividing the display area of the display screen 102 into the plurality of image areas.
And, after the vertical synchronization signal Vsync is received, to render the target image according to the target image data.
It can be understood that, before dividing the display area of the display screen 102 into the plurality of image areas, the image processor 200 also acquires the plurality of distortion coefficients K of the lens 101.
In some embodiments, the image processor 200 is further configured to update the target image data and the vertical synchronization signal Vsync of subsequent frames after the first frame of the anti-distorted target image is displayed.
The image processor 200 is further configured to map the target images of subsequent frames sequentially according to the texture coordinates obtained from the anti-distortion processing of the first frame of the target image.
The image processor 200 described above has the same beneficial effects as the image processing method described above, which will not be repeated here.
It should be noted that the specific structure of the image processor 200 in the embodiments of the present disclosure may adopt any circuit or module in the art capable of realizing the corresponding functions; in practice, a skilled person may make a selection as appropriate, and the present disclosure is not limited here.
In some embodiments, as shown in FIG. 1, the image display device 100 further includes a controller 103 disposed on the body 110.
The controller 103 is coupled to the image processor 200.
The controller 103 is configured to control the image processor 200 to perform image processing, so that the display screen 102 displays the target image after anti-distortion processing.
It can be understood that the controller 103 transmits the vertical synchronization signal Vsync to the image processor 200.
It should be noted that, in the image display device 100 provided by the embodiments of the present disclosure, the specific structure of the controller 103 may adopt any circuit or module in the art capable of realizing the corresponding functions; in practice, a skilled person may make a selection as appropriate, and the present disclosure is not limited here.
Illustratively, the controller 103 may be applied to a mobile terminal (such as a mobile phone).
Some embodiments of the present disclosure provide a computer-readable storage medium (for example, a non-transitory computer-readable storage medium) storing computer program instructions that, when run on a processor, cause the processor to execute one or more steps of the image processing method described in any of the above embodiments.
Illustratively, the above computer-readable storage medium may include, but is not limited to: magnetic storage devices (for example, hard disks, floppy disks or magnetic tapes), optical disks (for example, CD (Compact Disk), DVD (Digital Versatile Disk)), smart cards and flash memory devices (for example, EPROM (Erasable Programmable Read-Only Memory), cards, sticks or key drives). Various computer-readable storage media described in the present disclosure may represent one or more devices and/or other machine-readable storage media for storing information. The term "machine-readable storage medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing and/or carrying instructions and/or data.
Some embodiments of the present disclosure further provide a computer program product. The computer program product includes computer program instructions that, when executed on a computer, cause the computer to execute one or more steps of the image processing method described in the above embodiments.
Some embodiments of the present disclosure further provide a computer program. When executed on a computer, the computer program causes the computer to execute one or more steps of the image processing method described in the above embodiments.
The beneficial effects of the above computer-readable storage medium, computer program product and computer program are the same as those of the image processing method described in some of the above embodiments, and will not be repeated here.
The above are merely specific implementations of the present disclosure, but the protection scope of the present disclosure is not limited thereto. Any changes or substitutions conceivable to a person skilled in the art within the technical scope disclosed herein shall be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (13)

  1. An image processing method, applied to an image display device, the image display device comprising a lens and a display screen, the lens having a plurality of distortion coefficients;
    the image processing method comprising:
    dividing a display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, wherein an outer boundary line of each image area encloses a polygon, a geometric center of the polygon enclosed by the outer boundary line of the image area coincides with a geometric center of the lens, and positions at which vertices of the polygon enclosed by the outer boundary line of the image area are mapped onto the lens all have a same distortion coefficient; and
    performing anti-distortion processing on coordinates of the vertices of the image area according to the distortion coefficients corresponding to the vertices of the image area, to obtain texture coordinates of the vertices of the image area.
  2. The image processing method according to claim 1, wherein along a direction from the geometric center of the lens toward an edge of the lens, the distortion coefficients of the lens gradually increase;
    of two adjacent image areas, a number of vertices of the image area relatively closer to the geometric center of the lens is less than or equal to a number of vertices of the image area relatively farther from the geometric center of the lens.
  3. The image processing method according to claim 1 or 2, wherein the lens comprises a plurality of distortion portions, geometric centers of the plurality of distortion portions coincide with the geometric center of the lens, and each distortion portion has at least one distortion coefficient;
    the outer boundary line of at least one image area is mapped into a same distortion portion;
    along the direction from the geometric center of the lens toward the edge of the lens, numbers of vertices of the image areas mapped into the respective distortion portions increase successively.
  4. The image processing method according to claim 3, wherein in a case where the outer boundary lines of a plurality of image areas are mapped into one distortion portion, the image areas mapped into the same distortion portion have equal numbers of vertices.
  5. The image processing method according to any one of claims 1 to 4, wherein distortion coefficients corresponding to the vertices of different image areas are not equal;
    a number of image areas into which the display area of the display screen is divided is equal to a number of distortion coefficients of the lens.
  6. The image processing method according to any one of claims 1 to 5, wherein the number of distortion coefficients of the lens is 10;
    along the direction from the geometric center of the lens toward the edge of the lens, numbers of vertices of the plurality of image areas are respectively 3, 6, 6, 12, 12, 12, 24, 24, 24 and 24, or respectively 3, 6, 6, 12, 12, 24, 24, 48, 48 and 48.
  7. The image processing method according to claim 1, further comprising:
    sorting the coordinates of the vertices of the plurality of image areas according to a texture-mapping mode to obtain index coordinates;
    connecting the vertices of the plurality of image areas according to the index coordinates to divide the plurality of image areas into a plurality of regions to be mapped, with no overlap between the regions to be mapped;
    mapping a target image sequentially onto the corresponding regions to be mapped according to the texture coordinates and the index coordinates, to obtain a target image after anti-distortion processing; and
    displaying the target image after anti-distortion processing.
  8. The image processing method according to claim 7, wherein the texture-mapping mode is a triangle mode;
    connecting the vertices of the plurality of image areas according to the index coordinates to divide the plurality of image areas into the plurality of regions to be mapped comprises:
    connecting the vertices of the plurality of image areas according to the index coordinates to divide each image area into at least one triangular region, wherein except for the image area closest to the geometric center of the lens, each remaining image area is divided into a plurality of triangular regions, the triangular regions do not overlap, and the triangular regions in an image area include all the vertices of that image area;
    wherein among the vertices of each triangular region, at least one vertex is a vertex of an image area that is adjacent to that image area and closer to the geometric center of the lens;
    areas of the triangular regions included in each image area are not all equal; and
    along the direction from the geometric center of the lens toward the edge of the lens, a number of triangular regions in image areas with different numbers of vertices gradually increases, and an area of the triangular regions in image areas with different numbers of vertices gradually decreases.
  9. An image display device, comprising:
    a body;
    a display screen disposed on the body;
    a lens disposed on the body, the lens being closer than the display screen to a side near a human eye when the image display device is worn, the lens having a plurality of distortion coefficients; and an image processor disposed on the body, the image processor being coupled to the display screen; wherein the image processor is configured to: divide a display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, such that an outer boundary line of each image area encloses a polygon and a geometric center of the polygon enclosed by the outer boundary line of the image area coincides with a geometric center of the lens, and such that a distortion coefficient at a position where each vertex of the polygon enclosed by the outer boundary line of the image area is mapped onto the lens is one distortion coefficient among the plurality of distortion coefficients; and perform anti-distortion processing on coordinates of the vertices of the image area according to the distortion coefficients corresponding to the vertices, to obtain texture coordinates of the vertices of the image area.
  10. The image display device according to claim 9, wherein the image processor is further configured to: sort the coordinates of the vertices of the plurality of image areas according to a texture-mapping mode to obtain index coordinates; connect the vertices of the plurality of image areas according to the index coordinates to divide the plurality of image areas into a plurality of mutually non-overlapping regions to be mapped; and map a target image sequentially onto the corresponding regions to be mapped according to the texture coordinates and the index coordinates, to obtain a target image after anti-distortion processing.
  11. The image display device according to claim 10, wherein in a case where the texture-mapping mode is a triangle mode,
    the image processor is configured to connect the vertices of the plurality of image areas according to the index coordinates to divide each image area into at least one triangular region,
    such that except for the image area closest to the geometric center of the lens, each remaining image area is divided into a plurality of triangular regions, the triangular regions do not overlap, and the triangular regions in an image area include all the vertices of that image area;
    among the vertices of each triangular region, at least one vertex is a vertex of an image area that is adjacent to that image area and closer to the geometric center of the lens;
    areas of the triangular regions included in each image area are not all equal; and
    along the direction from the geometric center of the lens toward the edge of the lens, a number of triangular regions in image areas with different numbers of vertices gradually increases, and an area of the triangular regions in image areas with different numbers of vertices gradually decreases.
  12. The image display device according to any one of claims 9 to 11, further comprising:
    a controller disposed on the body, the controller being coupled to the image processor.
  13. A computer-readable storage medium storing computer program instructions that, when run on a processor, cause the processor to execute one or more steps of the image processing method according to any one of claims 1 to 8.
PCT/CN2021/086593 2020-05-09 2021-04-12 图像处理方法及图像显示装置 WO2021227740A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/765,424 US20220351340A1 (en) 2020-05-09 2021-04-12 Image processing method and image display device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010388616.6A CN111539898B (zh) 2020-05-09 2020-05-09 图像处理方法及图像显示装置
CN202010388616.6 2020-05-09

Publications (1)

Publication Number Publication Date
WO2021227740A1 true WO2021227740A1 (zh) 2021-11-18

Family

ID=71975730

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086593 WO2021227740A1 (zh) 2020-05-09 2021-04-12 图像处理方法及图像显示装置

Country Status (3)

Country Link
US (1) US20220351340A1 (zh)
CN (1) CN111539898B (zh)
WO (1) WO2021227740A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539898B (zh) * 2020-05-09 2023-08-01 京东方科技集团股份有限公司 图像处理方法及图像显示装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050213159A1 (en) * 2004-03-29 2005-09-29 Seiji Okada Distortion correction device and image sensing device provided therewith
CN105869142A (zh) * 2015-12-21 2016-08-17 乐视致新电子科技(天津)有限公司 虚拟现实头盔的成像畸变测试方法及装置
CN108596854A (zh) * 2018-04-28 2018-09-28 京东方科技集团股份有限公司 图像畸变校正方法及装置、计算机可读介质、电子设备
CN109754380A (zh) * 2019-01-02 2019-05-14 京东方科技集团股份有限公司 一种图像处理方法及图像处理装置、显示装置
CN111539898A (zh) * 2020-05-09 2020-08-14 京东方科技集团股份有限公司 图像处理方法及图像显示装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3429280B2 (ja) * 2000-09-05 2003-07-22 理化学研究所 画像のレンズ歪みの補正方法
US10102668B2 (en) * 2016-05-05 2018-10-16 Nvidia Corporation System, method, and computer program product for rendering at variable sampling rates using projective geometric distortion
CN109961402A (zh) * 2017-12-22 2019-07-02 中科创达软件股份有限公司 一种显示设备目镜反畸变方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050213159A1 (en) * 2004-03-29 2005-09-29 Seiji Okada Distortion correction device and image sensing device provided therewith
CN105869142A (zh) * 2015-12-21 2016-08-17 乐视致新电子科技(天津)有限公司 虚拟现实头盔的成像畸变测试方法及装置
CN108596854A (zh) * 2018-04-28 2018-09-28 京东方科技集团股份有限公司 图像畸变校正方法及装置、计算机可读介质、电子设备
CN109754380A (zh) * 2019-01-02 2019-05-14 京东方科技集团股份有限公司 一种图像处理方法及图像处理装置、显示装置
CN111539898A (zh) * 2020-05-09 2020-08-14 京东方科技集团股份有限公司 图像处理方法及图像显示装置

Also Published As

Publication number Publication date
CN111539898A (zh) 2020-08-14
US20220351340A1 (en) 2022-11-03
CN111539898B (zh) 2023-08-01

Similar Documents

Publication Publication Date Title
CN110049351B (zh) 视频流中人脸变形的方法和装置、电子设备、计算机可读介质
CN110807836B (zh) 三维人脸模型的生成方法、装置、设备及介质
US20060152579A1 (en) Stereoscopic imaging system
CN107452049B (zh) 一种三维头部建模方法及装置
CA2314714C (en) Morphing image processing system using polygon reduction processing
CN114820905B (zh) 虚拟形象生成方法、装置、电子设备及可读存储介质
US11893705B2 (en) Reference image generation apparatus, display image generation apparatus, reference image generation method, and display image generation method
JP2008306377A (ja) 色調整装置および色調整プログラム
JP4234089B2 (ja) エンタテインメント装置、オブジェクト表示装置、オブジェクト表示方法、プログラム、およびキャラクタ表示方法
WO2021227740A1 (zh) 图像处理方法及图像显示装置
US11288774B2 (en) Image processing method and apparatus, storage medium, and electronic apparatus
CN113808249B (zh) 图像处理方法、装置、设备和计算机存储介质
EP1745440A1 (en) Graphics pipeline for rendering graphics
CN110751026B (zh) 视频处理方法及相关装置
US10152818B2 (en) Techniques for stereo three dimensional image mapping
CN115965735B (zh) 纹理贴图的生成方法和装置
CN110502305B (zh) 一种动态界面的实现方法、装置及相关设备
CN112686978B (zh) 表情资源的加载方法、装置和电子设备
US10636210B2 (en) Dynamic contour volume deformation
US20230237611A1 (en) Inference model construction method, inference model construction device, recording medium, configuration device, and configuration method
CN116385705B (zh) 用于对三维数据进行纹理融合的方法、设备和存储介质
KR20190013146A (ko) 모바일 환경에서 3d 객체의 실시간 대량 처리를 위한 랜더링 최적화 방법
CN114119633A (zh) 一种基于mr应用场景的显示融合切割算法
CN115063516A (zh) 一种数字人的处理方法及装置
CN118053189A (zh) 一种稀疏多视角动态人脸重建方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21804261

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21804261

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.06.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21804261

Country of ref document: EP

Kind code of ref document: A1