WO2021227740A1 - Image processing method and image display device - Google Patents
Image processing method and image display device
- Publication number
- WO2021227740A1 (PCT application PCT/CN2021/086593)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- lens
- vertices
- areas
- distortion
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00 Image enhancement or restoration; G06T5/80 Geometric correction
- G06T7/00 Image analysis; G06T7/10 Segmentation; Edge detection; G06T7/181 Segmentation or edge detection involving edge growing or edge linking
- G06T2207/00 Indexing scheme for image analysis or image enhancement; G06T2207/20 Special algorithmic details; G06T2207/20021 Dividing image into blocks, subimages or windows
- Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- the present disclosure relates to the field of display technology, and in particular, to an image processing method and an image display device.
- the image display device employs, for example, virtual reality (VR) technology: a computer simulation system that can create and let the user experience a virtual world, using a computer to generate a simulated environment that immerses the user in it.
- Virtual reality technology mainly includes aspects such as simulated environment, perception, natural skills and sensing equipment.
- the simulation environment is a real-time dynamic three-dimensional realistic image generated by a computer.
- Perception means that ideal virtual reality technology should provide every sense a person has: in addition to vision, there are hearing, touch, force, motion, and even smell and taste, which is called multi-sensing. Therefore, during a virtual reality display, the user can be immersed in the environment the technology provides, giving the user a more realistic experience.
- an image processing method is provided.
- the image processing method is applied to an image display device, the image display device includes a lens and a display screen, and the lens has a plurality of distortion coefficients.
- the image processing method includes: dividing the display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, such that the outer boundary line of each image area encloses a polygon and the distortion coefficients at the positions on the lens to which the vertices of that polygon map are all the same.
- the coordinates of the vertices of the image area are subjected to anti-distortion processing to obtain the texture coordinates of the vertices of the image area.
- along the direction from the geometric center of the lens to the edge of the lens, the distortion coefficients of the lens gradually increase; of two adjacent image areas, the number of vertices in the image area relatively close to the geometric center of the lens is less than or equal to the number of vertices in the image area relatively far from the geometric center of the lens.
- the lens includes a plurality of distortion parts, the geometric centers of the plurality of distortion parts coincide with the geometric center of the lens, and each distortion part has at least one distortion coefficient;
- the outer boundary line of at least one image area is mapped into the same distortion part; along the direction from the geometric center of the lens to the edge of the lens, the number of vertices of the image areas mapped into each distortion part increases in turn.
- the numbers of vertices of the image regions mapped into the same distortion part are equal.
- the distortion coefficients corresponding to the vertices of the different image areas are not equal; the number of image areas into which the display area of the display screen is divided is equal to the number of distortion coefficients of the lens.
- the number of distortion coefficients of the lens is 10; along the direction from the geometric center of the lens to the edge of the lens, the numbers of vertices of the multiple image regions are 3, 6, 6, 12, 12, 12, 24, 24, 24, and 24 respectively, or 3, 6, 6, 12, 12, 24, 24, 48, 48, and 48 respectively.
- the image processing method further includes: sorting the coordinates of the vertices of the multiple image areas according to the mapping mode to obtain index coordinates; connecting the vertices of the multiple image areas according to the index coordinates to divide the multiple image areas into multiple areas to be mapped, with no overlap between the areas to be mapped; mapping the target image onto the corresponding areas to be mapped in turn according to the texture coordinates and the index coordinates, to obtain the target image after anti-distortion processing; and displaying the target image after anti-distortion processing.
- the texture mode is a triangle texture mode.
- connecting the vertices of the plurality of image areas according to the index coordinates and dividing the plurality of image areas into a plurality of areas to be mapped includes: connecting the vertices of the plurality of image areas according to the index coordinates and dividing each image area into at least one triangular area, where, except for the image area closest to the geometric center of the lens, each remaining image area is divided into a plurality of triangular areas; the triangular areas do not overlap, and the triangular areas in an image area include the vertices of that image area; among the vertices of each triangular area, at least one vertex is a vertex of the adjacent image area closer to the geometric center of the lens; the areas of the triangular areas in each image area are not all equal; and along the direction from the geometric center of the lens to the edge of the lens, the number of triangular areas in image areas with different numbers of vertices gradually increases, while the area of those triangular areas gradually decreases.
- in another aspect, an image display device is provided, which includes a body, a display screen arranged on the body, a lens arranged on the body, and an image processor arranged on the body.
- relative to the display screen, the lens is closer to the side of the image display device that faces the human eye when the image display device is worn; the lens has a plurality of distortion coefficients.
- the image processor is configured to divide the display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, so that the outer boundary line of each image area encloses a polygon, the geometric center of the polygon enclosed by the outer boundary line of the image area coincides with the geometric center of the lens, and the distortion coefficient at the position on the lens to which each vertex of the polygon maps is one of the plurality of distortion coefficients.
- the image processor is further configured to perform anti-distortion processing on the coordinates of the vertices of the image areas according to the distortion coefficients corresponding to the vertices, to obtain the texture coordinates of the vertices.
- the image processor is further configured to sort the coordinates of the vertices of the multiple image areas according to the mapping mode to obtain index coordinates; connect the vertices of the multiple image areas according to the index coordinates to divide the multiple image areas into multiple non-overlapping regions to be mapped; and map the target image onto the corresponding regions to be mapped in turn according to the texture coordinates and the index coordinates, to obtain the target image after anti-distortion processing.
- the image processor is configured to connect the vertices of the multiple image areas according to the index coordinates and divide each image area into at least one triangular area, where, except for the image area closest to the geometric center of the lens, each remaining image area is divided into multiple triangular areas; the triangular areas do not overlap, and the triangular areas in an image area include each vertex of that image area; among the vertices of each triangular area, at least one vertex is a vertex of the adjacent image area closer to the geometric center of the lens; the areas of the triangular areas in each image area are not all equal; and along the direction from the geometric center of the lens to the edge of the lens, the number of triangular areas in image areas with different numbers of vertices gradually increases, while the area of those triangular areas gradually decreases.
- the image display device further includes a controller provided on the body.
- the controller is coupled with the image processor.
- a computer-readable storage medium stores computer program instructions that, when run on a processor, cause the processor to perform one or more steps of the image processing method according to any of the above embodiments.
- a computer program product includes computer program instructions that, when executed on a computer, cause the computer to execute one or more steps of the image processing method according to any one of the above embodiments.
- a computer program, when executed on a computer, causes the computer to execute one or more steps of the image processing method described in any of the above embodiments.
- FIG. 1 is a structural diagram of an image display device according to some embodiments of the present disclosure;
- FIG. 2 is a comparison diagram of a normal target image and a distorted target image of the image display device according to some embodiments;
- FIG. 3 is a distribution diagram of multiple image regions according to the related art;
- FIG. 4 is a comparison diagram of the normal target image and the target image after anti-distortion processing of the image display device according to some embodiments;
- FIG. 5 is a distribution diagram of multiple image regions according to some embodiments of the present disclosure;
- FIG. 6 is a flowchart of an image processing method according to some embodiments of the present disclosure;
- FIG. 7 is another distribution diagram of multiple image regions according to some embodiments of the present disclosure;
- FIG. 8 is another flowchart of an image processing method according to some embodiments of the present disclosure;
- FIG. 9 is another distribution diagram of multiple image regions according to some embodiments of the present disclosure;
- FIG. 10 is another distribution diagram of multiple image regions according to some embodiments of the present disclosure;
- FIG. 11 is still another flowchart of an image processing method according to some embodiments of the present disclosure;
- FIG. 12 is still another flowchart of an image processing method according to some embodiments of the present disclosure;
- FIG. 13 is another flowchart of an image processing method according to some embodiments of the present disclosure.
- the terms “first” and “second” are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with “first” or “second” may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present disclosure, unless otherwise specified, “plurality” means two or more.
- the expressions “coupled” and “connected” and their extensions may be used.
- the term “connected” may be used when describing some embodiments to indicate that two or more components are in direct physical or electrical contact with each other.
- the term “coupled” may be used when describing some embodiments to indicate that two or more components have direct physical or electrical contact.
- the term “coupled” or “communicatively coupled” may also mean that two or more components are not in direct contact with each other, but still cooperate or interact with each other.
- the embodiments disclosed herein are not necessarily limited to the content of this document.
- the image display device 100 includes a main body 110, a display screen 102 disposed on the main body 110, and a lens 101 disposed on the main body 110.
- the main body 110 of the image display device 100 is a structure that implements image display in the image display device 100, and the structure can be close to a human face.
- the image display device may be a virtual reality device.
- the main body 110 is a head-mounted display.
- the display screen 102 may be a liquid crystal display screen.
- the display screen 102 may be a display screen of the mobile phone.
- the lens 101 is closer to the side of the image display device 100 that is closer to the human eye when worn relative to the display screen 102.
- the user can see the target image displayed on the display screen 102 through the lens 101 under a larger field of view.
- the lens 101 is a convex lens.
- because the lens 101 is a convex lens, the image is distorted, and the larger the field of view under which the user views the target image, the greater the distortion of the target image the user sees. Therefore, when the user views a normal target image (as shown in part (a) of FIG. 2), in the actual target image the portions far from the geometric center appear stretched: what the user sees is the enlarged virtual image of the target image formed by the lens 101, and the target image exhibits a distortion effect (for example, the pincushion distortion effect shown in part (b) of FIG. 2).
- to counter this, the target image can be subjected to anti-distortion processing: the display area of the display screen 102 is divided into multiple image areas, the vertices of each image area are obtained according to the distortion coefficients of the lens 101, and anti-distortion processing is performed on those vertices to obtain the anti-distortion-processed target image. Processing only the vertices reduces the amount of calculation in the image processing process and lowers the configuration requirements of the device that runs the anti-distortion process.
- the normal target image shown in part (a) of FIG. 4 is subjected to anti-distortion processing to obtain the target image shown in part (b) of FIG. 4 (with, for example, a barrel distortion effect). When the user views the anti-distortion-processed target image through the lens 101, the action of the lens cancels the pre-applied distortion, so that the image the user sees is the normal target image shown in part (c) of FIG. 4, thereby reducing the distortion of the target image.
- in the related art, the display area of the display screen 102 is divided, according to the shape of the display screen 102, into multiple grid-shaped image areas (for example, the 24 × 24 image areas S' in FIG. 3). However, the shape of the lens 101 (for example, a circle) does not match the shape of the display screen 102, so the distortion coefficients at the positions on the lens 101 to which the vertices of the grid S' map do not coincide with the distortion coefficients K of the lens 101; that is, some vertices of the grid S' fall in the transition regions between different distortion coefficients. When the texture coordinates of the vertices of each mesh are mapped, the image is then not smoothly connected at the junctions of different distortion coefficients, which degrades the anti-distortion effect. Moreover, the 24 × 24 grid of image areas S' in FIG. 3 has 25 × 25 vertices; the more image areas S', the more vertices, which makes the anti-distortion processing more computationally expensive, lengthens both the anti-distortion and texture-mapping time, and can cause dropped frames during display.
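To put the saving in concrete terms, a quick back-of-the-envelope comparison of vertex counts (the ring vertex counts 3, 6, 6, … come from the embodiment described later; the extra shared center vertex is an assumption suggested by the figures, not stated in the text):

```python
# Vertex count of the related-art uniform grid vs. the ring-based
# division of this disclosure. A 24x24 grid of image areas S' has
# 25x25 vertices; the ring-based layout has one vertex per polygon
# corner plus (assumed) one shared center vertex.
grid_vertices = 25 * 25

ring_counts = [3, 6, 6, 12, 12, 12, 24, 24, 24, 24]
ring_vertices = 1 + sum(ring_counts)

print(grid_vertices)  # 625
print(ring_vertices)  # 148
```

Roughly a fourfold reduction in the number of vertices that must be anti-distortion processed per frame.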
- the embodiment of the present disclosure provides an image processing method. This image processing method is applied to the image display device 100.
- the image display device 100 includes a lens 101 and a display screen 102.
- the lens 101 has a plurality of distortion coefficients K.
- the image processing method includes:
- the display area A of the display screen 102 is divided into multiple image areas S; the outer boundary line L of each image area S encloses a polygon, the geometric center O2 of the polygon enclosed by the outer boundary line L of the image area S coincides with the geometric center O1 of the lens 101, and the distortion coefficients at the positions on the lens 101 to which the vertices of the polygon enclosed by the outer boundary line L map are all the same.
- the multiple distortion coefficients K of the lens 101 are uniformly distributed.
- the distortion coefficient at the position where each vertex of the polygon surrounded by the outer boundary line L of the image area S is mapped on the lens 101 is one distortion coefficient among the multiple distortion coefficients K.
- the embodiment of the present disclosure ignores the deviation of the geometric center O1 of the lens 101 from the geometric center O2 of the polygon enclosed by the outer boundary line L of the image area S due to fluctuations in the actual product assembly process.
- the vertices of the image area S mentioned in the text are the vertices of the polygon surrounded by the outer boundary line L of the image area S.
- the display area A of the display screen 102 is an area where the display screen 102 can normally display images.
- the outer boundary line L of the image area S refers to the boundary line of the image area that is away from the geometric center O2 of the image area.
- the number of sides of the polygon enclosed by the outer boundary line L is greater than or equal to 3.
- the distortion coefficient K of the position where each vertex of the polygon surrounded by the outer boundary line L of each image area S is mapped on the lens 101 is equal.
- on the lens 101, the figure enclosed by connecting the positions having the same distortion coefficient is approximately the same shape as the lens 101 (for example, a circle or an ellipse). Therefore, the polygon enclosed by the outer boundary line L of each image area S is approximately the same shape as the lens 101.
- in the related art, by contrast, the shapes of the lens 101 and the image areas S do not match: the distortion coefficient at the position on the lens 101 to which a vertex of an image area S maps differs significantly from the distortion coefficients K of the lens 101. In the subsequent image correction, the image then carries distortion that does not match the distortion coefficients of the lens 101 and is not smoothly connected, which degrades the image display effect after anti-distortion processing.
- that is, in the related art a vertex may map to a position whose distortion coefficient lies between different distortion coefficients K of the lens 101. In the embodiment of the present disclosure, the distortion coefficient K at the position on the lens 101 to which each vertex of the polygon enclosed by the outer boundary line L of each image area S maps is one of the plurality of distortion coefficients K, so that no vertex falls in a transition area between different distortion coefficients K of the lens 101.
- the number of vertices of the multiple image regions S in the embodiment of the present disclosure is relatively reduced. Therefore, the amount of calculation for anti-distortion processing on each vertex is reduced, and the time of image processing is saved.
- moreover, because the distortion coefficient at each vertex is one of the plurality of distortion coefficients K, the distortion coefficient K corresponding to each vertex is known in advance. By contrast, some vertices of the image areas S' in FIG. 3 map to positions on the lens 101 whose distortion coefficients are not among the plurality of distortion coefficients K, so those coefficients must first be calculated (for example, by an interpolation operation) before anti-distortion processing can be performed. The embodiments of the present disclosure therefore reduce the amount of calculation for anti-distortion processing of each vertex, shorten the image processing time, and improve the efficiency of image processing.
- according to the distortion coefficients K corresponding to the vertices of the image area S, the coordinates of the vertices of the image area S are anti-distortion processed to obtain the texture coordinates of the vertices of the image area S; that is, according to the distortion coefficient K corresponding to each vertex, interpolation is performed on the coordinates of that vertex (for example, using the Catmull-Rom interpolation algorithm) to obtain the texture coordinates of each vertex of the image area S.
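The disclosure interpolates (e.g. Catmull-Rom) over the known coefficients; as a simpler illustration of the per-vertex anti-distortion step itself, the sketch below uses a single-term radial model. The function name and the model r' = r / (1 + k·r²) are assumptions for illustration, not the disclosure's exact formula:

```python
import math

def undistort_vertex(x, y, cx, cy, k):
    """Pull a vertex toward the lens center (cx, cy) by its known
    distortion coefficient k, pre-compensating a pincushion lens.
    Simplified radial model r' = r / (1 + k * r**2); the disclosed
    method instead interpolates over the lens's coefficient set."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (x, y)  # the center vertex is unchanged
    scale = 1.0 / (1.0 + k * r * r)
    return (cx + dx * scale, cy + dy * scale)

# A vertex on an outer ring moves inward; the resulting coordinate
# serves as that vertex's texture coordinate when the mesh is mapped.
u, v = undistort_vertex(0.8, 0.0, 0.0, 0.0, 0.2)
```

Because the coefficient k at every ring vertex is known in advance, this step runs once per vertex with no extra coefficient lookup, which is exactly the saving the preceding paragraphs describe.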
- in summary, the display area A of the display screen 102 is divided into a plurality of image areas S such that the outer boundary line L of each image area S encloses a polygon, the geometric center O2 of the polygon enclosed by the outer boundary line L coincides with the geometric center O1 of the lens 101, and the distortion coefficients K at the positions on the lens 101 to which the vertices of the polygon map are all the same. This reduces the number of vertices across the multiple image areas S, reduces the amount of calculation for anti-distortion processing of each vertex, shortens the image processing time, and improves the image processing effect.
- in addition, the polygon enclosed by the outer boundary line L of each image area S is approximately the same shape as the lens 101, which avoids the mismatch between the shape of the lens 101 and the image areas S; such a mismatch would otherwise cause the image in the transition areas between different distortion coefficients K of the lens 101 to be connected unsmoothly during the subsequent mapping process, degrading the image display effect after anti-distortion processing.
- along the direction from the geometric center of the lens 101 to the edge of the lens 101, the distortion coefficients K of the lens 101 gradually increase.
- the number of vertices of the image region S relatively close to the geometric center O1 of the lens 101 is less than or equal to the number of vertices of the image region S relatively far away from the geometric center O1 of the lens 101.
- at the geometric center of the lens 101, the distortion coefficient K of the lens 101 is the smallest, and at the edge of the lens 101, the distortion coefficient K of the lens 101 is the largest.
- on the lens 101, the figures enclosed by connecting the positions with the same distortion coefficient grow denser from the center outward: near the geometric center O1 of the lens 101 the distortion coefficients K are sparsely distributed, while near the edge of the lens 101 they are densely distributed. Correspondingly, the vertices of the image areas S relatively close to the geometric center O1 of the lens 101 are sparsely distributed, and the vertices of the image areas S relatively close to the edge of the lens 101 are densely distributed; that is, the distribution of the distortion coefficients K of the lens 101 approximately matches the distribution of the vertices of the image areas S.
- in this way, the vertices of the image areas S in the embodiment of the present disclosure go from sparse to dense, which reduces the number of vertices, reduces the amount of anti-distortion calculation for each vertex, shortens the image processing time, and improves the image processing effect.
- the lens 101 includes a plurality of distortion parts Q.
- the geometric centers O3 of the plurality of distortion parts Q coincide with the geometric center O1 of the lens 101, and each distortion part Q has at least one distortion coefficient K.
- the outer boundary line L of at least one image area S is mapped in the same distortion part Q.
- along the direction from the geometric center O1 of the lens 101 to the edge of the lens 101, the number of vertices of the image areas S mapped into each distortion part Q increases in turn.
- the numbers of vertices of the image regions S mapped into the same distortion part are equal.
- in this way, the density of the vertices of the image areas S corresponding to each distortion part Q gradually increases outward, which reduces the number of vertices across the multiple image areas S, reduces the amount of calculation for anti-distortion processing of each vertex, shortens the image processing time, and improves the image processing effect.
- the distortion coefficients K corresponding to the vertices of the different image regions S are not equal.
- the number of image areas S into which the display area A of the display screen 102 is divided is equal to the number of distortion coefficients K possessed by the lens 101.
- the number of distortion coefficients K of the lens 101 is 10. Along the direction from the geometric center O1 of the lens 101 to the edge of the lens 101, the numbers of vertices of the multiple image regions S are 3, 6, 6, 12, 12, 12, 24, 24, 24, and 24 respectively (as shown in FIG. 7), or 3, 6, 6, 12, 12, 24, 24, 48, 48, and 48 respectively (not shown in the figure).
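The ring-shaped vertex layout described above can be generated as in the following sketch. The radii here are placeholders; in the disclosure, each ring's radius is fixed by where the corresponding distortion coefficient K sits on the lens:

```python
import math

def ring_layout(counts, radii, cx=0.0, cy=0.0):
    """Return, for each image area S, the vertices of the polygon
    enclosed by its outer boundary line L: counts[i] vertices evenly
    spaced on a circle of radius radii[i] around the shared center."""
    rings = []
    for n, r in zip(counts, radii):
        rings.append([(cx + r * math.cos(2 * math.pi * j / n),
                       cy + r * math.sin(2 * math.pi * j / n))
                      for j in range(n)])
    return rings

counts = [3, 6, 6, 12, 12, 12, 24, 24, 24, 24]  # from the embodiment
radii = [(i + 1) / 10 for i in range(10)]        # placeholder radii
rings = ring_layout(counts, radii)
```

Each distortion coefficient thus contributes exactly one ring of polygon vertices, so every vertex maps onto the lens at a position whose coefficient is known.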
- the image processing method further includes:
- the vertices of the multiple image areas S are connected, and the multiple image areas S are divided into multiple regions T to be mapped, with no overlap between the regions T to be mapped.
- after the anti-distortion-processed target image passes through the lens 101, it is distorted in the direction opposite to its pre-applied distortion, so that the image the human eye observes on the display screen 102 through the lens 101 tends to the normal, undistorted target image, thereby reducing the degree of distortion of the target image.
- the anti-distortion processed target image obtained by mapping is close to the shape of the lens 101, so that the image at the junction of different distortion coefficients K is smoother, thereby improving the display effect of the anti-distortion processed target image.
- in some embodiments, the texture mode is a triangle texture mode. Connecting the vertices of the multiple image regions S according to the index coordinates and dividing the multiple image regions S into multiple regions T to be mapped, as shown in FIG. 11, includes: connecting the vertices of the multiple image regions S according to the index coordinates and dividing each image region S into at least one triangular area, where, except for the image region S closest to the geometric center O1 of the lens 101, each remaining image region S is divided into multiple triangular areas; the triangular areas do not overlap, and the triangular areas in an image region S include the vertices of that image region; among the vertices of each triangular area, at least one vertex is a vertex of the adjacent image region S closer to the geometric center O1 of the lens 101; the areas of the triangular areas in each image region S are not all equal; and along the direction from the geometric center O1 of the lens 101 to the edge of the lens 101, the number of triangular areas in image regions S with different numbers of vertices gradually increases, while the area of those triangular areas gradually decreases.
- the shape of the area T to be mapped is a triangle.
- in this way, the embodiments of the present disclosure can reduce the number of triangular regions, shorten the time required for mapping, avoid the dropped frames that a prolonged mapping time would cause during display, and improve the display effect.
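A minimal sketch of one way to triangulate the annular region between two adjacent rings under the stated rule. It assumes the outer ring's vertex count is a multiple of the inner's and that outer vertex i·step is radially aligned with inner vertex i; the function name is hypothetical:

```python
def triangulate_between(inner, outer):
    """Split the annular region between two adjacent image areas into
    non-overlapping triangles. Every triangle keeps at least one vertex
    on the inner ring (the one closer to the lens center), matching the
    rule in the disclosure. inner/outer are vertex index lists, with
    len(outer) a multiple of len(inner)."""
    m, n = len(inner), len(outer)
    step = n // m
    tris = []
    for i in range(m):
        a, b = inner[i], inner[(i + 1) % m]
        base = i * step
        # fan from inner vertex a across its arc of outer vertices
        for j in range(step):
            tris.append((a, outer[(base + j) % n], outer[(base + j + 1) % n]))
        # close the segment toward the next inner vertex
        tris.append((a, outer[(base + step) % n], b))
    return tris

# 3-vertex inner ring to 6-vertex outer ring
tris = triangulate_between([0, 1, 2], [3, 4, 5, 6, 7, 8])
```

Each annulus yields m + n triangles, so the triangle count grows outward ring by ring while the individual triangles shrink, as the preceding paragraph describes.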
- embodiments of the present disclosure may adopt any texture mode in the field that can implement the above-mentioned image processing method, which is not limited herein.
- the texture mode may be a quadrilateral texture mode.
- in the quadrilateral texture mode, each image area S is divided into at least one quadrilateral area; except for the image area S closest to the geometric center O1 of the lens 101, each remaining image area S is divided into a plurality of quadrilateral areas. Among the vertices of each quadrilateral area, at least two vertices are vertices of the adjacent image area S closer to the geometric center O1 of the lens 101, and the areas of the quadrilateral areas in each image area S are not all equal. Along the direction from the geometric center O1 of the lens 101 to the edge of the lens 101, the number of quadrilateral regions in image regions S with different numbers of vertices gradually increases, and the area of those quadrilateral regions gradually decreases. In this way, the number of quadrilateral areas is reduced, the time required for mapping is shortened, dropped frames caused by a prolonged mapping time are avoided, and the display effect is improved.
- the image processing method further includes:
- the target image obtained by rendering serves as the texture used for mapping.
- a plurality of distortion coefficients K of the lens 101 are also obtained.
- step S80 is not executed, and the process returns to step S70 to receive the vertical synchronization signal Vsync again.
- the image processing method further includes:
- the previously obtained texture coordinates can be applied directly, and there is no need to repeat the anti-distortion calculation. This reduces the amount of calculation, saves image processing time, and improves image processing efficiency.
- the image processing method further includes:
- judging whether to end the application means judging whether there is target image data of a subsequent frame to be updated; if there is none, the image processing process ends.
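The frame loop described above (wait for Vsync, update the target image data of the next frame, reuse the stored texture coordinates, and exit when no further frame data can be updated) can be sketched as follows. All function names here are hypothetical stand-ins for illustration, not APIs taken from the patent.

```python
def run_display_loop(get_next_frame, wait_for_vsync, draw,
                     texture_coords, index_coords):
    """Hypothetical main loop for the image processing method.

    The anti-distortion texture coordinates and index coordinates are
    computed once (for the first frame) and reused for every subsequent
    frame, so no per-frame anti-distortion calculation is needed.
    """
    while True:
        frame = get_next_frame()   # target image data of the next frame
        if frame is None:          # nothing left to update: end processing
            break
        wait_for_vsync()           # wait for the Vsync signal (step S70)
        # map the target image using the cached coordinates (step S80)
        draw(frame, texture_coords, index_coords)
```

A trivial driver with stub callbacks shows the loop draws every available frame and stops when the frame source is exhausted.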
- An embodiment of the present disclosure provides an image display device 100, and the image display device 100 is the image display device 100 shown in FIG. 1 described above.
- the image display device 100 includes an image processor 200.
- the image processor 200 is disposed on the main body 110 of the image display device 100 in FIG. 1.
- the display screen 102 is coupled to the image processor 200.
- the image processor 200 may be a GPU (Graphics Processing Unit).
- the image processor 200 is configured to divide the display area A of the display screen 102 into a plurality of image areas S according to the plurality of distortion coefficients K of the lens 101. The outer boundary line L of each image area S encloses a polygon, and the geometric center O2 of the polygon enclosed by the outer boundary line L of the image area S coincides with the geometric center O1 of the lens 101.
- the distortion coefficient at the position where each vertex of the polygon surrounded by the outer boundary line L of the image area S is mapped on the lens 101 is one distortion coefficient among the plurality of distortion coefficients.
- anti-distortion processing is performed on the coordinates of the vertices of the image area S according to the distortion coefficients corresponding to those vertices, obtaining the texture coordinates of the vertices of the image area S.
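As a rough illustration of that step, the sketch below applies a per-vertex radial scaling toward the lens center. The patent does not specify the anti-distortion formula, so the simple division by the distortion coefficient k is an assumed model chosen for clarity; only the structure (one coefficient per vertex, output used as texture coordinates) follows the text.

```python
def anti_distort_vertex(x, y, cx, cy, k):
    """Scale one vertex radially about the lens center (cx, cy).

    k is the distortion coefficient sampled at the position where this
    vertex maps onto the lens. Dividing the radial offset by k is an
    illustrative stand-in for the real anti-distortion formula.
    """
    dx, dy = x - cx, y - cy
    return cx + dx / k, cy + dy / k

def texture_coords(vertices, coeffs, cx=0.5, cy=0.5):
    """Anti-distort every image-area vertex to get its texture coordinate.

    `vertices` are (x, y) vertex coordinates in the display area and
    `coeffs` holds the matching per-vertex distortion coefficients.
    """
    return [anti_distort_vertex(x, y, cx, cy, k)
            for (x, y), k in zip(vertices, coeffs)]
```

Because every vertex of one ring polygon shares the same coefficient, the lookup per vertex is trivial, which is part of what keeps the calculation small.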
- because the rendering capability of a mobile terminal differs considerably from that of a PC (Personal Computer), for the image processor of an all-in-one image display device, when the number of image areas is large during anti-distortion processing (for example, when the grid-like image areas S' shown in FIG. 3 number 32×32), the image processing time of the image processor becomes too long, resulting in frequent frame drops during display.
- the image processor 200 provided by the embodiments of the present disclosure divides the display area A of the display screen 102 into a plurality of image areas S, where the outer boundary line L of each image area S encloses a polygon whose geometric center O2 coincides with the geometric center O1 of the lens 101. Because the distortion coefficient at the position where each vertex of the polygon is mapped onto the lens 101 is one of the plurality of distortion coefficients K, the number of vertices of the plurality of image areas S can be reduced.
- in this way, the amount of calculation for performing anti-distortion processing on each vertex is reduced, the time the image processor 200 spends on image processing is shortened, and the performance of the image processor 200 is improved. When the image processor 200 is applied to a mobile-terminal image display device, its load can be reduced and frame dropping can be avoided.
- in addition, the shape of the polygon enclosed by the outer boundary line L of each image area S is approximately the same as the shape of the lens 101. This avoids a mismatch between the shape of the lens 101 and the image areas S, which in the subsequent mapping process could cause unsmooth image transitions in the regions of the lens 101 where different distortion coefficients K meet, degrading the display effect after anti-distortion processing.
- the image processor 200 is further configured to sort the coordinates of the vertices of the multiple image areas S according to the mapping mode to obtain index coordinates; to connect the vertices of the multiple image areas S according to the index coordinates (referring to FIG. 9), dividing the multiple image areas S into multiple non-overlapping areas T to be mapped; and, according to the texture coordinates and the index coordinates, to map the target image sequentially into the corresponding areas T to be mapped, obtaining the target image after anti-distortion processing.
- when the texture mode is a triangle texture mode, the image processor 200 is configured to connect the vertices of the multiple image areas S according to the index coordinates (referring to FIG. 9) and to divide each image area S into at least one triangular area; except for the image area closest to the geometric center O1 of the lens 101, each of the remaining image areas S is divided into a plurality of triangular areas.
- among the vertices of each triangular area, at least one vertex is a vertex of the image area S that is adjacent to this image area S and closer to the geometric center O1 of the lens 101.
- the areas of the plurality of triangular areas included in each image area S are not all equal.
- along the direction from the geometric center O1 of the lens 101 to the edge of the lens 101, the number of triangular areas in the image areas S with different numbers of vertices gradually increases, and the area of those triangular areas gradually decreases.
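The triangle-mode division of one annular image area between two adjacent vertex rings can be sketched as follows. The greedy pairing below is an assumed implementation, not taken from the patent, but it satisfies the stated rules: the triangles do not overlap, every triangle has at least one vertex on the ring belonging to the adjacent, more central image area, and the triangle areas within one image area are not all equal.

```python
def ring_triangles(inner, outer):
    """Triangulate the annular area between two rings of vertex indices.

    `inner` has no more vertices than `outer`. Walk both rings by
    angular fraction, always advancing whichever ring lags, so the
    strip closes after exactly len(inner) + len(outer) triangles.
    Every triangle keeps a vertex on the inner (more central) ring.
    """
    tris = []
    n, m = len(inner), len(outer)
    i = j = 0
    while i < n or j < m:
        # fraction of the way around each ring (2.0 marks exhaustion)
        fi = (i + 1) / n if i < n else 2.0
        fj = (j + 1) / m if j < m else 2.0
        if fj <= fi:
            # advance along the outer ring
            tris.append((inner[i % n], outer[j % m], outer[(j + 1) % m]))
            j += 1
        else:
            # advance along the inner ring
            tris.append((inner[i % n], outer[j % m], inner[(i + 1) % n]))
            i += 1
    return tris
```

Between a 3-vertex inner ring and a 6-vertex outer ring this produces 9 triangles, so the triangle count per annular area stays proportional to its vertex count rather than to a dense grid.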
- the image processor 200 is further configured to acquire the target image data and the vertical synchronization signal Vsync before dividing the display area of the display screen 102 into a plurality of image areas.
- the target image is obtained by rendering according to the target image data.
- before dividing the display area of the display screen 102 into multiple image areas, the image processor 200 also obtains the multiple distortion coefficients K of the lens 101.
- the image processor 200 is further configured to, after the anti-distortion-processed target image of the first frame is displayed, update the target image data and the vertical synchronization signal Vsync for subsequent frames.
- the image processor 200 is further configured to sequentially map the target image of the subsequent frame according to the texture coordinates obtained after performing the anti-distortion processing on the target image of the first frame.
- the above-mentioned image processor 200 has the same beneficial effects as the above-mentioned image processing method, so it will not be repeated here.
- the specific structure of the image processor 200 in the embodiments of the present disclosure may adopt any circuit or module in the art capable of realizing the corresponding functions. In practical applications, those skilled in the art can make a selection as appropriate; the present disclosure is not limited herein.
- the image display device 100 further includes a controller 103 provided on the main body 110.
- the controller 103 is coupled with the image processor 200.
- the controller 103 is configured to control the image processor 200 to perform image processing, so that the display screen 102 displays the target image after anti-distortion processing.
- controller 103 transmits the vertical synchronization signal Vsync to the image processor 200.
- the specific structure of the controller 103 may adopt any circuit or module in the art capable of realizing the corresponding functions. In practical applications, those skilled in the art can make a selection as appropriate; the present disclosure is not limited herein.
- the controller 103 may be applied to a mobile terminal (such as a mobile phone, etc.).
- Some embodiments of the present disclosure provide a computer-readable storage medium (for example, a non-transitory computer-readable storage medium) that stores computer program instructions. When the computer program instructions are run on a processor, they cause the processor to execute one or more steps of the image processing method described in any of the foregoing embodiments.
- the foregoing computer-readable storage medium may include, but is not limited to: magnetic storage devices (for example, hard disks, floppy disks, or magnetic tapes), optical disks (for example, CD (Compact Disk), DVD (Digital Versatile Disk)), smart cards, and flash memory devices (for example, EPROM (Erasable Programmable Read-Only Memory) cards, sticks, or key drives).
- Various computer-readable storage media described in this disclosure may represent one or more devices and/or other machine-readable storage media for storing information.
- the term "machine-readable storage medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
- Some embodiments of the present disclosure also provide a computer program product.
- the computer program product includes computer program instructions that, when executed on a computer, cause the computer to execute one or more steps of the image processing method described in the foregoing embodiments.
- Some embodiments of the present disclosure also provide a computer program.
- when the computer program is executed on a computer, it causes the computer to execute one or more steps of the image processing method described in the above-mentioned embodiments.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Geometry (AREA)
Claims (13)
- An image processing method, applied to an image display device, where the image display device includes a lens and a display screen, and the lens has a plurality of distortion coefficients. The image processing method includes: dividing the display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, where the outer boundary line of each image area encloses a polygon, and the geometric center of the polygon enclosed by the outer boundary line of the image area coincides with the geometric center of the lens; the distortion coefficients at the positions where the vertices of the polygon enclosed by the outer boundary line of the image area are mapped onto the lens are all the same; and performing anti-distortion processing on the coordinates of the vertices of the image area according to the distortion coefficients corresponding to the vertices, to obtain the texture coordinates of the vertices of the image area.
- The image processing method according to claim 1, wherein, along the direction from the geometric center of the lens to the edge of the lens, the distortion coefficients of the lens gradually increase; and, in two adjacent image areas, the number of vertices of the image area relatively closer to the geometric center of the lens is less than or equal to the number of vertices of the image area relatively farther from the geometric center of the lens.
- The image processing method according to claim 1 or 2, wherein the lens includes a plurality of distortion portions, the geometric centers of the plurality of distortion portions coincide with the geometric center of the lens, and each distortion portion has at least one distortion coefficient; the outer boundary line of at least one image area is mapped within a same distortion portion; and, along the direction from the geometric center of the lens to the edge of the lens, the numbers of vertices of the image areas mapped in the distortion portions increase successively.
- The image processing method according to claim 3, wherein, in a case where the outer boundary lines of a plurality of image areas are mapped in one distortion portion, the numbers of vertices of the image areas mapped in that same distortion portion are equal.
- The image processing method according to any one of claims 1 to 4, wherein the distortion coefficients corresponding to the vertices of different image areas are not equal; and the number of image areas into which the display area of the display screen is divided is equal to the number of distortion coefficients of the lens.
- The image processing method according to any one of claims 1 to 5, wherein the lens has 10 distortion coefficients; and, along the direction from the geometric center of the lens to the edge of the lens, the numbers of vertices of the plurality of image areas are 3, 6, 6, 12, 12, 12, 24, 24, 24, and 24, respectively, or 3, 6, 6, 12, 12, 24, 24, 48, 48, and 48, respectively.
- The image processing method according to claim 1, further comprising: sorting the coordinates of the vertices of the plurality of image areas according to a mapping mode to obtain index coordinates; connecting the vertices of the plurality of image areas according to the index coordinates to divide the plurality of image areas into a plurality of areas to be mapped, with no overlap between the areas to be mapped; mapping a target image sequentially into the corresponding areas to be mapped according to the texture coordinates and the index coordinates, to obtain the target image after anti-distortion processing; and displaying the target image after anti-distortion processing.
- The image processing method according to claim 7, wherein the mapping mode is a triangle mapping mode; and connecting the vertices of the plurality of image areas according to the index coordinates to divide the plurality of image areas into a plurality of areas to be mapped includes: connecting the vertices of the plurality of image areas according to the index coordinates to divide each image area into at least one triangular area, where, except for the image area closest to the geometric center of the lens, each of the remaining image areas is divided into a plurality of triangular areas; the plurality of triangular areas do not overlap one another, and the plurality of triangular areas in an image area contain the vertices of that image area; among the vertices of each triangular area, at least one vertex is a vertex of the image area that is adjacent to this image area and closer to the geometric center of the lens; the areas of the plurality of triangular areas included in each image area are not all equal; and, along the direction from the geometric center of the lens to the edge of the lens, the number of triangular areas in the image areas with different numbers of vertices gradually increases, and the area of the triangular areas in the image areas with different numbers of vertices gradually decreases.
- An image display device, comprising: a body; a display screen disposed on the body; a lens disposed on the body, where the lens is closer than the display screen to the side of the image display device that is near the human eye when the image display device is worn, and the lens has a plurality of distortion coefficients; and an image processor disposed on the body and coupled to the display screen, where the image processor is configured to: divide the display area of the display screen into a plurality of image areas according to the plurality of distortion coefficients of the lens, so that the outer boundary line of each image area encloses a polygon, the geometric center of the polygon enclosed by the outer boundary line of the image area coincides with the geometric center of the lens, and the distortion coefficient at the position where each vertex of the polygon enclosed by the outer boundary line of the image area is mapped onto the lens is one of the plurality of distortion coefficients; and perform anti-distortion processing on the coordinates of the vertices of the image area according to the distortion coefficients corresponding to the vertices, to obtain the texture coordinates of the vertices of the image area.
- The image display device according to claim 9, wherein the image processor is further configured to: sort the coordinates of the vertices of the plurality of image areas according to a mapping mode to obtain index coordinates; connect the vertices of the plurality of image areas according to the index coordinates to divide the plurality of image areas into a plurality of non-overlapping areas to be mapped; and map a target image sequentially into the corresponding areas to be mapped according to the texture coordinates and the index coordinates, to obtain the target image after anti-distortion processing.
- The image display device according to claim 10, wherein, in a case where the mapping mode is a triangle mapping mode, the image processor is configured to connect the vertices of the plurality of image areas according to the index coordinates to divide each image area into at least one triangular area, so that, except for the image area closest to the geometric center of the lens, each of the remaining image areas is divided into a plurality of triangular areas; the plurality of triangular areas do not overlap one another, and the plurality of triangular areas in an image area contain the vertices of that image area; among the vertices of each triangular area, at least one vertex is a vertex of the image area that is adjacent to this image area and closer to the geometric center of the lens; the areas of the plurality of triangular areas included in each image area are not all equal; and, along the direction from the geometric center of the lens to the edge of the lens, the number of triangular areas in the image areas with different numbers of vertices gradually increases, and the area of the triangular areas in the image areas with different numbers of vertices gradually decreases.
- The image display device according to any one of claims 9 to 11, further comprising: a controller disposed on the body, where the controller is coupled to the image processor.
- A computer-readable storage medium storing computer program instructions that, when run on a processor, cause the processor to execute one or more steps of the image processing method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/765,424 US20220351340A1 (en) | 2020-05-09 | 2021-04-12 | Image processing method and image display device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010388616.6A CN111539898B (zh) | 2020-05-09 | 2020-05-09 | Image processing method and image display device |
CN202010388616.6 | 2020-05-09 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021227740A1 true WO2021227740A1 (zh) | 2021-11-18 |
Family
ID=71975730
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/086593 WO2021227740A1 (zh) | Image processing method and image display device | 2020-05-09 | 2021-04-12 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20220351340A1 (zh) |
CN (1) | CN111539898B (zh) |
WO (1) | WO2021227740A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539898B (zh) * | 2020-05-09 | 2023-08-01 | BOE Technology Group Co., Ltd. | Image processing method and image display device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050213159A1 (en) * | 2004-03-29 | 2005-09-29 | Seiji Okada | Distortion correction device and image sensing device provided therewith |
CN105869142A (zh) * | 2015-12-21 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | 虚拟现实头盔的成像畸变测试方法及装置 |
CN108596854A (zh) * | 2018-04-28 | 2018-09-28 | 京东方科技集团股份有限公司 | 图像畸变校正方法及装置、计算机可读介质、电子设备 |
CN109754380A (zh) * | 2019-01-02 | 2019-05-14 | 京东方科技集团股份有限公司 | 一种图像处理方法及图像处理装置、显示装置 |
CN111539898A (zh) * | 2020-05-09 | 2020-08-14 | 京东方科技集团股份有限公司 | 图像处理方法及图像显示装置 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3429280B2 (ja) * | 2000-09-05 | 2003-07-22 | RIKEN | Method for correcting lens distortion of an image |
US10102668B2 (en) * | 2016-05-05 | 2018-10-16 | Nvidia Corporation | System, method, and computer program product for rendering at variable sampling rates using projective geometric distortion |
CN109961402A (zh) * | 2017-12-22 | 2019-07-02 | Thunder Software Technology Co., Ltd. | Eyepiece anti-distortion method and apparatus for a display device |
-
2020
- 2020-05-09 CN CN202010388616.6A patent/CN111539898B/zh active Active
-
2021
- 2021-04-12 WO PCT/CN2021/086593 patent/WO2021227740A1/zh active Application Filing
- 2021-04-12 US US17/765,424 patent/US20220351340A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050213159A1 (en) * | 2004-03-29 | 2005-09-29 | Seiji Okada | Distortion correction device and image sensing device provided therewith |
CN105869142A (zh) * | 2015-12-21 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | 虚拟现实头盔的成像畸变测试方法及装置 |
CN108596854A (zh) * | 2018-04-28 | 2018-09-28 | 京东方科技集团股份有限公司 | 图像畸变校正方法及装置、计算机可读介质、电子设备 |
CN109754380A (zh) * | 2019-01-02 | 2019-05-14 | 京东方科技集团股份有限公司 | 一种图像处理方法及图像处理装置、显示装置 |
CN111539898A (zh) * | 2020-05-09 | 2020-08-14 | 京东方科技集团股份有限公司 | 图像处理方法及图像显示装置 |
Also Published As
Publication number | Publication date |
---|---|
CN111539898A (zh) | 2020-08-14 |
US20220351340A1 (en) | 2022-11-03 |
CN111539898B (zh) | 2023-08-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21804261 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21804261 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.06.2023) |