WO2024040398A1 - Generation of Correction Function, Image Correction Method and Device - Google Patents

Generation of Correction Function, Image Correction Method and Device

Info

Publication number
WO2024040398A1
WO2024040398A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
sample
display
coordinates
correction function
Prior art date
Application number
PCT/CN2022/114002
Other languages
English (en)
French (fr)
Inventor
白家荣
董瑞君
韩娜
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 filed Critical 京东方科技集团股份有限公司
Priority to PCT/CN2022/114002 priority Critical patent/WO2024040398A1/zh
Priority to CN202280002785.9A priority patent/CN117918019A/zh
Publication of WO2024040398A1 publication Critical patent/WO2024040398A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration

Definitions

  • the present disclosure relates to the field of image processing technology, and specifically to the generation of correction functions, image correction methods and devices.
  • VR Virtual Reality, virtual reality
  • AR Augmented Reality, augmented reality
MR Mixed Reality, mixed reality
  • XR Extended Reality, extended reality
  • the VR glasses worn by users usually include a screen and a lens.
  • the light emitted by the screen when displaying images can enter the user's eyes through the lens.
  • the reverse extension of the above-mentioned light will form a corresponding virtual image.
  • the lens is usually a non-planar lens such as a convex lens or a concave lens, and it is difficult to ensure that the lens itself is strictly parallel to the image plane, so the above-mentioned virtual image will inevitably exhibit distortion, such as radial distortion and/or tangential distortion;
  • as a result, the content in the image is deformed and distorted, affecting the user's sense of presence and even causing users to feel dizzy. In this regard, it is necessary to correct the distortion of the above-mentioned distorted images.
  • distortion correction is often achieved through pre-distortion.
  • a pre-distorted image is displayed on the screen to form a normal image without distortion through the optical system.
  • mathematical fitting is performed based on known device parameters in multiple dimensions, and enough data is finally obtained to form a pre-distorted image.
  • this type of fitting algorithm has complex logic, requires many types of input parameters, and the fitting process is cumbersome and inefficient.
  • the above-mentioned fitting algorithms are usually only able to correct symmetrical distortion, but cannot effectively correct asymmetrical distortion that may be caused by various reasons such as processing or assembly errors, and have a small scope of application.
  • embodiments of the present disclosure propose a correction function generation method, an image correction method and corresponding devices to address the deficiencies in the related art.
  • a method for generating a correction function including:
  • a display device including an optical system and a display component, the normal image displayed by the display component forms a distorted image through the optical system, and the distorted image is a virtual image;
  • sample object-space position coordinates of the display device and the corresponding sample image-space angle coordinates.
  • the sample object-space position coordinates are used to characterize the display position of the sample pixels in the normal image in the display component.
  • sample image side visual field angle coordinates are used to characterize the visual field angle of the sample virtual image point corresponding to the sample pixel point in the distorted image;
  • a correction function is generated according to the sample object-space position coordinates and the sample image-space angle coordinates, and the correction function is used to correct the distorted image.
  • an image correction method is proposed, which is applied to a display device including an optical system and a display component.
  • the method includes:
  • the image field angle coordinates are used to represent the field of view angle of the virtual image point corresponding to the pixel point,
  • where the virtual image is formed through the optical system when the display component displays the target image;
  • the object-space position coordinates corresponding to the image-side field of view angle coordinates are determined according to a correction function.
  • the object-space position coordinates are used to represent the expected display position of the pixel point in the display component.
  • the correction function is generated by the method described in the aforementioned first aspect;
  • Control the display component to display the color value of the pixel according to the expected display position.
  • a device for generating a correction function includes one or more processors, and the processor is configured to:
  • a display device including an optical system and a display component, the normal image displayed by the display component forms a distorted image through the optical system, and the distorted image is a virtual image;
  • sample object-space position coordinates of the display device and the corresponding sample image-space angle coordinates.
  • the sample object-space position coordinates are used to characterize the display position of the sample pixels in the normal image in the display component.
  • sample image side visual field angle coordinates are used to characterize the visual field angle of the sample virtual image point corresponding to the sample pixel point in the distorted image;
  • a correction function is generated according to the sample object-space position coordinates and the sample image-space angle coordinates, and the correction function is used to correct the distorted image.
  • an image correction device is proposed.
  • the device is applied to a display device including an optical system and a display component.
  • the device includes one or more processors, and the processor is configured to :
  • the image field angle coordinates are used to represent the field of view angle of the virtual image point corresponding to the pixel point,
  • where the virtual image is formed through the optical system when the display component displays the target image;
  • the object-space position coordinates corresponding to the image-side field of view angle coordinates are determined according to a correction function.
  • the object-space position coordinates are used to represent the expected display position of the pixel point in the display component.
  • the correction function is generated by the method described in any one of the aforementioned aspects;
  • Control the display component to display the color value of the pixel according to the expected display position.
  • an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to implement the correction function generation method described in the first aspect.
  • a display device, including: an optical system and a display component; a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to implement the image correction method described in the above-mentioned second aspect.
  • a non-transitory computer-readable storage medium on which a computer program is stored.
  • when the program is executed by a processor, the method for generating the correction function described in the first aspect is implemented, or the steps in the image correction method described in the second aspect are implemented.
  • a correction function corresponding to the display component can be generated, and the function can be used to calculate the pre-distorted image that the display component should display, thereby achieving effective correction of the distorted image.
  • the sample object-space position coordinates and the corresponding sample image-side field angle coordinates of the display device including the display component and the optical system can be obtained first.
  • the sample object-space position coordinates are used to characterize the display position, in the display component, of the sample pixel points in the normal image, and the sample image-side field angle coordinates are used to characterize the field of view angle of the sample virtual image point corresponding to the sample pixel point in the distorted image; a correction function is then generated based on the sample object-space position coordinates and the sample image-side field angle coordinates.
  • the display device can first determine the color value of the pixel point contained in the target image and the corresponding image field angle coordinates.
  • the image field angle coordinates are used to characterize the image field angle corresponding to the pixel point.
  • the virtual image is formed by the optical system when the display component displays the target image; and then the object-space position coordinates corresponding to the image-side field-of-view angle coordinates are determined according to the aforementioned correction function,
  • the object-space position coordinates are used to represent the expected display position of the pixel point in the display component; finally, the display component is controlled to display the color value of the pixel point according to the expected display position.
  • the expected display position of any pixel calculated through the above method is the display position of that pixel in the display component. After each pixel is displayed at its corresponding expected display position, the display component displays the pre-distorted image corresponding to the target image; a distortion-free virtual image can then be formed through the optical system, thereby achieving distortion correction of the distorted image.
  • the aforementioned method can calculate, for pixel points of the target image lying in any direction relative to the optical axis, the position coordinates corresponding to their field angle coordinates; that is, the correction function is isotropic with respect to pixel position. Therefore, this method can correct not only symmetrical distortion but also asymmetrical distortion; in other words, it can correct images with distortion in any direction, and has a wider range of applications.
  • FIG. 1 is a flow chart of a method for generating a correction function according to an embodiment of the present disclosure.
  • FIG. 2 is a schematic diagram of the imaging principle of a display device according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of a distorted image according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a viewing angle of a pixel according to an embodiment of the present disclosure.
  • Figure 5 is a flow chart of an image correction method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a geometric relationship of a maximum field of view according to an embodiment of the present disclosure.
  • Figure 7 is a schematic diagram of a pre-distorted image according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic block diagram of a device for generating a correction function according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic block diagram of an image correction device according to an embodiment of the present disclosure.
  • to this end, embodiments of the present disclosure propose an improved image correction solution. Specifically, a correction function suitable for the display device is generated, and the display device corrects the distorted image based on the correction function.
  • the image correction scheme will be described in detail below with reference to the accompanying drawings.
  • FIG. 1 is a flow chart of a method for generating a correction function according to an embodiment of the present disclosure.
  • This method can be applied to any form of electronic equipment, such as a computer. For example, designers of display devices can use this method on their own computers to generate correction functions for display devices. Alternatively, this method can also be applied to the display device itself: during the use stage after the design and production of the display device are completed, the device can collect relevant data on its own and generate a correction function suitable for itself. As shown in Figure 1, the method may include the following steps 102-106.
  • Step 102 Determine a display device including an optical system and a display component.
  • the normal image displayed by the display component forms a distorted image through the optical system, and the distorted image is a virtual image.
  • the display device described in the embodiments of the present disclosure may be a near-eye display device, which may be in the form of glasses, clothing, accessories, etc.
  • the user can wear the display device in front of the eyes to observe the image displayed by the display device. Since the inherent characteristics of the optical system inevitably cause the virtual image to be distorted (in fact, a real image observed by the human eye would also be distorted), when the display component displays a normal image, the virtual image corresponding to that image is distorted; this is the distorted image described in the embodiments of the present disclosure.
  • the display component located on the right side of the optical system emits corresponding light during the display of an image.
  • the light passes through the optical system and then enters the human eye (ie, the user's eye) located on the left side of the optical system.
  • the light ray will form a virtual image of the image on the right side of the optical system.
  • the corresponding distorted image may be an irregular quadrilateral.
  • the shape of the distorted image shown in FIG. 3 is only exemplary. In practical applications, the distorted image can be of any shape, which is not limited in this embodiment of the present invention.
  • any sample pixel in the normal image (that is, any pixel of the normal image) has a corresponding sample virtual image point in the distorted image (that is, a pixel point of the distorted image).
  • for example, any sample pixel point (x_i, y_i) in the normal image has a corresponding sample virtual image point (u_i, v_i) in the distorted image, where 1 ≤ i ≤ n and n is the total number of pixels in the normal image (or the distorted image).
  • Step 104 Obtain the sample object-space position coordinates and the corresponding sample image-space angle coordinates of the display device.
  • the sample object-space position coordinates are used to characterize the sample pixels in the normal image in the display component.
  • the sample image-side field angle coordinates are used to represent the field of view angle of the sample virtual image point corresponding to the sample pixel point in the distorted image.
  • the embodiments of the present disclosure creatively generate the correction function of the display device based on the position coordinates of the sample pixel points in the normal image (i.e., the sample object-space position coordinates) and the field of view angles of the sample virtual image points in the distorted image (i.e., the sample image-side field angle coordinates).
  • this can significantly reduce the parameters required to generate the correction function and simplify its generation logic. Therefore, before generating the correction function, sample data must be obtained, that is, the sample object-space position coordinates and the sample image-side field angle coordinates.
  • the position coordinates of the sample pixel point in the display component can be used as the sample object-space position coordinates.
  • the position coordinates of any sample pixel point can be the components of the relative position of the pixel point in the horizontal and vertical directions.
  • a rectangular coordinate system can be established with the lower left corner vertex of the normal image as the origin, and then the coordinates of the sample pixel points in the rectangular coordinate system can be used as the sample object-space position coordinates.
  • the above rectangular coordinate system can be established in various ways, and this is not limited in the embodiment of the present invention.
  • each pixel in the normal image and the distorted image has a corresponding field of view.
  • the field of view angle of any pixel in any image is the angle between the optical axis and the line connecting the pixel to the human eye.
  • the embodiments of the present disclosure focus only on the field of view angle of the sample virtual image point in the distorted image, which is explained here.
  • the outline of the virtual image is a rectangle and the optical axis passes through the center point O of the rectangle.
  • the field of view angle of any vertex B in the virtual image is θ (that is, ∠OEB).
  • the field of view angle θ can be split into two angles u (i.e. ∠OEC) and v (i.e. ∠OEA) along the horizontal and vertical directions. It can be understood that for any two pixels in the virtual image, their field of view angles may happen to be equal, but the angle pairs (u, v) obtained after splitting cannot be exactly the same; therefore, the array composed of the split u and v can be used to represent θ unambiguously. Based on this, the array composed of the two angles obtained by splitting the field of view angle can be used as the sample image-side field angle coordinates. As shown in Figure 4, the sample image-side field angle coordinate corresponding to point B can be expressed either as the field of view angle θ or as (u, v).
  • the sample image-side field angle coordinates of any sample virtual image point are the components of its field of view angle in the horizontal and vertical directions. Based on the above analysis, for each sample pixel point in the normal image, the sample image-side field angle coordinates of the corresponding sample virtual image point can be determined, as sketched below.
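  • As an illustrative sketch (not part of the patent text itself), the splitting of a field of view angle into horizontal and vertical components can be computed as follows, assuming the point's position in the virtual image plane and the eye-to-plane distance are known; all function and parameter names are hypothetical.

```python
import math

def split_field_angle(px: float, py: float,
                      cx: float, cy: float,
                      eye_dist: float) -> tuple[float, float]:
    """Split the field of view angle of a virtual image point into
    horizontal/vertical components (u, v), in degrees.

    (px, py): position of the point in the virtual image plane
    (cx, cy): point O, where the optical axis meets that plane
    eye_dist: distance from the observation point E to the plane
    """
    u = math.degrees(math.atan2(px - cx, eye_dist))  # analogous to angle OEC
    v = math.degrees(math.atan2(py - cy, eye_dist))  # analogous to angle OEA
    return u, v
```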
  • various methods can be used to obtain the sample object-side position coordinates and the sample image-side viewing angle coordinates of the display device.
  • for example, the optical parameters of the optical system can be queried, and the display positions of the sample pixels in the normal image, together with the field of view angles of the corresponding sample virtual image points, can be determined based on the optical parameters.
  • the optical parameters may include the focal length, optical power, magnification and/or optical aperture of the optical system, which are not limited in the embodiments of the present disclosure. Normally, designers will test the display component and the optical system during the design stage of the display device, and calculate benchmark data for the optical system based on the test results and the corresponding optical parameters.
  • the benchmark data includes data such as the position coordinates of the pixels displayed in the display component and the field of view angles of the corresponding virtual image points in the virtual image. Therefore, the above-mentioned benchmark data can also be queried directly in the design manual of the display device; in this case, the sample object-space position coordinates and the sample image-side field angle coordinates can be determined without calculation.
  • alternatively, the distorted image can also be captured by a camera, where the camera can be a calibrated standard camera with preset parameters.
  • the preset parameters of the camera can be used to calculate the field of view angle of the sample virtual image point in the distorted image; and the sample pixel point corresponding to the sample virtual image point in the normal image is determined,
  • and the preset parameters of the display component are used to calculate the display position of the sample pixel point.
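  • As a hedged sketch of the camera-based measurement described above, a pinhole model with calibrated intrinsics can convert a pixel in the captured photo of the distorted image into field angle components. The function below is illustrative only and assumes the camera sits at the observation point with its axis on the optical axis; the patent does not prescribe this exact computation.

```python
import math

def photo_pixel_to_field_angle(px: float, py: float,
                               fx: float, fy: float,
                               cx: float, cy: float) -> tuple[float, float]:
    """Convert a pixel (px, py) in the calibrated camera's photo into
    field angle components (u, v) in degrees, using the camera's
    intrinsic parameters: focal lengths (fx, fy) and principal
    point (cx, cy), all in pixel units."""
    u = math.degrees(math.atan((px - cx) / fx))
    v = math.degrees(math.atan((py - cy) / fy))
    return u, v
```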
  • the number of sample virtual image points used when obtaining the sample image-side field angle coordinates in the aforementioned steps can be multiple, and the field angle range formed by the field of view angles of the multiple sample virtual image points can be no less than a preset range threshold.
  • the field of view range of the sample virtual image points corresponding to the sample data may be determined or the position range of the sample pixel points may be determined.
  • for example, a position range can be determined first in the normal image, where the position range should include as many pixels of the normal image as possible; then, each sample pixel point and its sample object-space position coordinates can be determined sequentially within the position range; and the sample virtual image point corresponding to each sample pixel point in the distorted image, together with its respective sample image-side field angle coordinates, can be determined.
  • alternatively, the field of view range can also be determined first in the distorted image, where the field of view range should include as many pixels of the distorted image as possible; for example, the field of view range should be no less than a preset range threshold.
  • in this way, the sample object-space position coordinates of multiple sample pixel points in the normal image can be obtained, as well as the sample image-side field angle coordinates of the sample virtual image point corresponding to each sample pixel point in the distorted image; in other words, sample data for multiple pairs of sample pixel points and sample virtual image points are obtained.
  • Step 106 Generate a correction function based on the sample object-space position coordinates and the sample image-space angle coordinates, where the correction function is used to correct the distorted image.
  • a correction function can be generated based on these sample data.
  • the correction function described in the embodiments of the present disclosure can take various forms such as a polynomial function or a trigonometric function, which is not limited in the embodiments of the present disclosure.
  • both the sample object-space position coordinates of the sample pixel points and the sample image-side field angle coordinates of the sample virtual image points can be expressed in binary form; moreover, when the sample object-space position coordinates are regarded as a function of the sample image-side field angle coordinates,
  • the relationship between them conforms to the form of a binary polynomial surface.
  • thus, the object-image relationship between the sample pixel points (equivalent to objects) and the sample virtual image points (equivalent to images) can be represented by a binary polynomial surface function; that is, the correction function can take the form of a binary polynomial function.
  • the sample object-space position coordinates of the sample pixel points and the sample image-side field angle coordinates of the corresponding sample virtual image points can be substituted into a binary polynomial to generate binary polynomial equations including polynomial coefficients; the binary polynomial equations are then solved to determine the values of the polynomial coefficients, and the values are back-substituted into the binary polynomial to obtain the correction function.
  • a_mn and b_mn are polynomial coefficients;
  • p and q are the highest powers of the variables (i.e. u and v) plus 1;
  • for example, p ≤ 6 and q ≤ 6 can be set.
  • the previous steps have obtained the sample object-space position coordinates of N sample pixel points in the normal image, and the sample image-side field angle coordinates of N sample virtual image points in the distorted image (where the N sample pixel points and the N sample virtual image points are in one-to-one correspondence).
  • the N pairs of sample data (x_i, y_i) and (u_i, v_i) can be substituted in turn into the above expansion equation (2).
  • in this way, polynomial equations whose unknowns (that is, the polynomial coefficients) are a_11, a_12, ..., a_pq and b_11, b_12, ..., b_pq can be obtained.
  • the values of the polynomial coefficients can be obtained by solving these polynomial equations; the values then only need to be back-substituted into equation (2) to obtain the correction function whose independent variables are u and v, as sketched below.
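  • The following minimal sketch shows one way such polynomial equations can be solved in the least-squares sense. The exact expansion of equation (2) is not reproduced in this text, so the power basis below (x = Σ a_mn·u^m·v^n and y = Σ b_mn·u^m·v^n, with m < p and n < q) is an assumed reading consistent with the surrounding description; all names are illustrative.

```python
import numpy as np

def fit_correction_function(uv: np.ndarray, xy: np.ndarray,
                            p: int = 6, q: int = 6):
    """Least-squares fit of a binary polynomial correction function.

    uv: (N, 2) array of sample image-side field angle coordinates (u_i, v_i)
    xy: (N, 2) array of sample object-space position coordinates (x_i, y_i)
    Returns coefficient matrices A, B with shape (p, q) such that
    x ~ sum_mn A[m, n] * u**m * v**n and y ~ sum_mn B[m, n] * u**m * v**n.
    """
    u, v = uv[:, 0], uv[:, 1]
    # Design matrix: one column per monomial u**m * v**n.
    M = np.column_stack([u**m * v**n for m in range(p) for n in range(q)])
    a, *_ = np.linalg.lstsq(M, xy[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(M, xy[:, 1], rcond=None)
    return a.reshape(p, q), b.reshape(p, q)
```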
  • the correction function is obtained. Because the sample data used in the generation process of the function are all collected from the display device, the correction function can be used to correct distortion in the display device.
  • the sample object-space position coordinates and the corresponding sample image-space angle coordinates of the display device including the display component and the optical system can be obtained first.
  • the sample object-space position coordinates are used to characterize the display position, in the display component, of the sample pixel points in the normal image,
  • and the sample image-side field angle coordinates are used to characterize the field of view angle of the sample virtual image point corresponding to the sample pixel point in the distorted image; a correction function is then generated based on the sample object-space position coordinates and the sample image-side field angle coordinates.
  • Figure 5 is a flow chart of an image correction method according to an embodiment of the present disclosure.
  • the method can be applied to display devices including optical systems and display components. As shown in Figure 5, the method may include the following steps 502-506.
  • Step 502 Determine the color value of the pixel point contained in the target image and the corresponding image field angle coordinates.
  • the image field angle coordinates are used to represent the field of view angle of the virtual image point corresponding to the pixel point, where the virtual image is formed through the optical system when the display component displays the target image.
  • the correction function generated through the foregoing embodiments can be stored in the display device in advance; or the display device can also temporarily generate the correction function by collecting sample data during use. It can be understood that if the display component in the display device directly displays the target image, the image will produce a distorted virtual image after passing through the optical system.
  • the correction function can be used to perform distortion correction on the distorted image.
  • the distortion correction process is a process of generating a pre-distorted image based on the correction function and image field angle data and controlling the display component to display the image.
  • the color values of the pixels contained in the target image can be represented by any color model; for example, an RGB model, an RGBA model, a CMYK model, a YUV model, etc. can be used, and the embodiments of the present disclosure are not limited in this respect.
  • the image field angle data corresponding to any pixel point is used to represent the field of view angle of the virtual image point corresponding to that pixel point, where the virtual image point is the point corresponding to the pixel point in the normal image displayed after distortion correction is completed. It can be seen that the field of view angle of the virtual image point is actually an expected theoretical value; based on the principle of reversibility of light, this field of view angle can be determined.
  • the correction function is specific to the optical system and display component in the display device; that is, a correction function generated based on the sample data of one set of optical system and display component can usually only be directly applied to that set.
  • if the parameters of the optical system or display component change, it is difficult for the original correction function to remain directly applicable to the changed combination.
  • for example, if the correction function is generated in the design stage of the display device, the optical system or display component may be temporarily replaced during the production stage for various reasons, so that the actual parameters of the combination of optical system and display component in the finished display device differ from the optical parameters of the combination determined during the design stage.
  • in this case, if the correction function generated in the design stage is applied directly to the display device, the calculation result of the correction function will have a large error, or the function may even be unable to produce a meaningful result, leading to a poor distortion correction effect.
  • to address this, the embodiments of the present disclosure propose a range adaptation solution.
  • due to the aforementioned replacement, the display range of the display component in the current display device may differ from the imaging range of the optical system.
  • the above-mentioned display range and imaging range can be expressed by parameters such as size or maximum field of view angle.
  • the imaging range of the optical system can be calculated through optical parameters such as focal length, optical power, magnification and/or optical aperture of the optical system.
  • the above optical parameters are usually recorded in the local storage space of the display device, so the display device can read them locally to determine the imaging range of the optical system.
  • the specific process of calculating the size of the imaging range or its corresponding maximum field of view according to the above optical parameters can be found in the related art, and will not be described again here.
  • the display range of the display component can be determined by the imaging range of the optical system.
  • the display device may first determine the optical origin of the optical system and the display origin of the display component, where the optical origin, the display origin and the observation point are all located on the optical axis of the display device.
  • then, the first maximum field of view angle of the first edge point farthest from the optical origin in the optical system can be determined, as well as the first farthest distance between the first edge point and the optical origin; and the second farthest distance between the second edge point farthest from the display origin in the display component and the display origin can be determined; finally, according to the first maximum field of view angle, the first farthest distance and the second farthest distance, the second maximum field of view angle of the second edge point is calculated based on the geometric relationship between the optical system and the display component; the second maximum field of view angle can then be used to characterize the display range of the display component.
  • the display device includes a screen and an optical system (not shown in the figure).
  • the display range of the screen is smaller than the imaging range of the optical system.
  • the virtual image shown in Figure 6 is the same size as the imaging range of the optical system.
  • specifically, the display device may first determine the optical origin O of the optical system and the display origin O' of the screen, wherein the optical origin O, the display origin O' and the observation point E are all located on the optical axis of the display device (that is, the straight line on which the line segment EO lies).
  • the optical origin O, the display origin O' and the observation point E shown in Figure 6 are located on the same straight line. It can be understood that this spatial position relationship is only exemplary; in practical applications, the display component, the optical system and the observation point may not be located on the same straight line.
  • for example, the optical system may be a decentered optical system, etc., which is not limited in the embodiments of the present disclosure.
  • then, the display device may determine the first maximum field of view angle θ_1 (i.e., ∠BEO) of the first edge point B farthest from the optical origin O in the optical system, as well as the first farthest distance r_1 between the first edge point B and the optical origin O (i.e., the length of the line segment OB); and determine the second farthest distance r_2 between the second edge point B' farthest from the display origin O' in the screen and the display origin O' (that is, the length of the line segment O'B'); finally, according to the first maximum field of view angle θ_1, the first farthest distance r_1 and the second farthest distance r_2, the second maximum field of view angle θ_2 of the second edge point is calculated based on the geometric relationship between the optical system and the screen.
  • the calculation formula can be found in Equation (3).
  • the calculated second maximum field of view angle θ_2 can be used to characterize the display range of the screen.
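  • Equation (3) itself is not reproduced in this text. Under the coaxial geometry of Figure 6 and a similar-triangles assumption (the screen edge maps proportionally into the virtual-image plane, so tan θ_2 / tan θ_1 = r_2 / r_1), one plausible reconstruction is the following sketch; it is an assumption, not the patent's confirmed formula, and does not apply to decentered systems.

```python
import math

def second_max_field_angle(theta1_deg: float, r1: float, r2: float) -> float:
    """Assumed reconstruction of equation (3):
    tan(theta2) = (r2 / r1) * tan(theta1).

    theta1_deg: first maximum field of view angle (angle BEO), in degrees
    r1: first farthest distance (length of segment OB)
    r2: second farthest distance (length of segment O'B')
    Returns the second maximum field of view angle theta2 in degrees.
    """
    theta1 = math.radians(theta1_deg)
    return math.degrees(math.atan((r2 / r1) * math.tan(theta1)))
```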
  • if the optical system is a decentered system, the geometric relationship between the above parameters will change.
  • in that case, the calculation formula for the second maximum field of view angle θ_2 can be adjusted accordingly according to the actual geometric relationship, which will not be described again here.
  • after obtaining the display range and the imaging range through the above methods, the display device can further determine whether the two are the same.
  • if they differ, the scaling ratio between the display range and the imaging range can be determined; then, based on the scaling ratio, the virtual image point in the imaging range corresponding to each pixel included in the target image is determined,
  • and the field of view angle of that virtual image point is used as the image field angle coordinate corresponding to the pixel point.
  • the maximum field of view angle can be used to represent the display range and the imaging range.
  • in the scenario of Figure 6, the scaling ratio between the two is θ_1/θ_2. Taking any pixel point P' with field of view angle θ_x on the screen as an example, the field of view angle of the virtual image point P corresponding to this pixel point is θ_x·θ_1/θ_2, which can be used as its image field angle data.
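  • For illustration, the angle scaling described above reduces to a one-line helper; this is a sketch assuming the linear ratio θ_1/θ_2 holds across the whole field, with all names hypothetical.

```python
def image_side_field_angle(theta_x: float, theta1: float, theta2: float) -> float:
    """Scale a pixel's screen-side field angle theta_x into the imaging
    range using the ratio theta1/theta2 of the two maximum field angles."""
    return theta_x * theta1 / theta2
```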
  • through the foregoing steps, the display device can separately determine the color value and image field angle coordinates of each of the pixels included in the target image. On this basis, the display device can use the correction function to calculate in turn the object-space position coordinates corresponding to each pixel, that is, determine the expected display position of each pixel in the display component, thereby determining the color and display position of each pixel in the pre-distorted image.
  • the process of substituting the image-side field angle coordinates into the correction function to calculate the corresponding object-space position coordinates consumes computing resources of the display device.
  • therefore, the display device can also first determine key pixels among all the pixels contained in the target image, then determine the color value and image field angle coordinates of each key pixel, and then use the correction function to calculate the object-space position coordinates corresponding to each key pixel. In this way, only the image field angle coordinates of the key pixels in the target image (that is, a subset of all pixels) need to be substituted into the correction function to calculate the corresponding object-space position coordinates, while the remaining non-key pixels do not need this processing, thus reducing the amount of calculation for the correction function, helping to reduce the computing resource consumption of the display device, and also helping to increase processing speed and avoid display freezes.
  • the display device can determine the key pixels in the target image in various ways. For example, multiple key pixels can be randomly selected in the target image, and the selection logic of this method is simple. For another example, multiple key pixels may be selected sequentially in the target image according to preset viewing angle intervals.
  • the field of view angle range of the key pixel points can be [0, θ_max], where θ_max is the maximum field of view angle of the target image; for example, it can be the angle between the optical axis and the line connecting the human eye to the pixel of the target image that is located at the edge of the image and farthest from the optical axis.
  • the display device can sequentially determine the pixel points corresponding to each field of view angle within the field of view angle range in steps of 2° (or 1°, 10°, etc.), that is, the preset field angle interval, and determine these pixels as the key pixels.
  • multiple key pixels can be selected in sequence according to preset distance intervals in the target image.
  • the point-taking range in which the key pixel points are located can be a rectangle no larger than the size of the target image itself.
  • the display device can determine pixels sequentially within the point-taking range in steps of 5 (or 2, 10, etc.) pixels, that is, the preset distance interval, and use them as the key pixels in the target image.
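  • A minimal sketch of key-pixel selection by a preset distance interval might look as follows; the stride value and names are illustrative, and the patent equally allows random selection or selection by field angle interval.

```python
def select_key_pixels(width: int, height: int, step: int = 5) -> list[tuple[int, int]]:
    """Pick key pixels on a regular grid inside the point-taking range,
    stepping `step` pixels in each direction. Returns (x, y) pairs."""
    return [(x, y) for y in range(0, height, step)
                   for x in range(0, width, step)]
```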
  • Step 504 Determine the object-space position coordinates corresponding to the image-side field of view angle coordinates according to a correction function.
  • the object-space position coordinates are used to represent the expected display position of the pixel point in the display component.
  • the correction function is generated by the aforementioned correction function generation method.
  • the display device can use the correction function generated by the foregoing solution to calculate the corresponding object position coordinates.
  • the pixels in the target image other than the key pixels are non-key pixels; that is, the pixels in the target image can be divided into two categories: key pixels and non-key pixels.
  • through the foregoing steps, the display device has determined the image field angle coordinates of each key pixel, so at this point the object-space position coordinates corresponding to the image field angle coordinates of the key pixels can be determined according to the correction function.
  • then, an interpolation algorithm can be used to determine the image-side field angle coordinates of each non-key pixel and the corresponding object-space position coordinates.
  • in this way, the display device only needs to use the correction function to calculate the object-space position coordinates corresponding to some of the pixels in the target image (that is, the key pixels), while the object-space position coordinates corresponding to the remaining non-key pixels can be calculated directly through the interpolation algorithm, thereby reducing the calculation workload of the correction function and simplifying the logic of determining the object-space position coordinates.
  • an interpolation algorithm may be used to determine the color value of each non-key pixel based on the color value of each key pixel. As shown in Figure 7, each black point in the pre-distorted image corresponds to a key pixel point in the target image, and the blank area between the black points corresponds to a non-key pixel point in the target image.
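  • The patent does not name a specific interpolation algorithm; as one hedged example, linear interpolation over the key-pixel results could be implemented as follows (scipy is an assumed dependency, and all names are illustrative).

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_positions(key_uv: np.ndarray, key_xy: np.ndarray,
                          query_uv: np.ndarray) -> np.ndarray:
    """Interpolate object-space positions for non-key pixels from the
    key-pixel results.

    key_uv: (K, 2) image-side field angle coordinates of key pixels
    key_xy: (K, 2) their object-space position coordinates
    query_uv: (M, 2) field angle coordinates of non-key pixels
    """
    x = griddata(key_uv, key_xy[:, 0], query_uv, method="linear")
    y = griddata(key_uv, key_xy[:, 1], query_uv, method="linear")
    return np.column_stack([x, y])
```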
  • in one embodiment, the display device can directly substitute the image-side field angle coordinates into the correction function and determine the corresponding object-space position coordinates by solving the function.
  • each polynomial coefficient in the correction function (i.e., the aforementioned a_11, a_12, ..., a_pq and b_11, b_12, ..., b_pq) is already a known value,
  • and the variables are only u and v, so after substituting the image field angle coordinates of a pixel (that is, that pixel's specific values of u and v) into the function, the corresponding x and y can be obtained directly, thus yielding the object-space position coordinates (x, y).
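  • Continuing the earlier fitting sketch, direct evaluation of the correction function at given (u, v) would then look as follows, under the same assumed power basis; names remain illustrative.

```python
import numpy as np

def eval_correction(A: np.ndarray, B: np.ndarray,
                    u: float, v: float) -> tuple[float, float]:
    """Evaluate the fitted correction function at image-side field angle
    coordinates (u, v), returning object-space coordinates (x, y).
    A and B are the (p, q) coefficient matrices from the fitting step."""
    p, q = A.shape
    basis = np.array([[u**m * v**n for n in range(q)] for m in range(p)])
    return float((A * basis).sum()), float((B * basis).sum())
```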
  • in another embodiment, the display device can also pre-calculate the corresponding object-space position coordinates based on the correction function and preset image-side field angle data when idle, and compile the preset image-side field angle coordinates and the corresponding object-space position coordinates into an object-image mapping table. In this way, after the image-side field angle coordinates are determined through the aforementioned method, the corresponding object-space position coordinates can be queried directly in the object-image mapping table, further improving the speed of determining the object-space position coordinates and greatly speeding up distortion correction.
  • of course, if a query fails, the correction function can still be used for temporary calculation, which is not limited in the embodiments of the present disclosure. It is understandable that the more data recorded in the above object-image mapping table, the more comprehensive the queries it supports, but the mapping table will also occupy more storage space and queries may take longer; therefore, the amount of data recorded in the object-image mapping table can be set reasonably based on the actual situation to achieve a balance between storage space and query efficiency. For example, it is not necessary to record the data corresponding to all pixels; only the data corresponding to each of the aforementioned key pixels may be recorded, which will not be described again here.
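  • The object-image mapping table described above amounts to a precomputed lookup structure; a minimal sketch, reusing the hypothetical eval_correction helper above, could be:

```python
def build_object_image_table(A, B, angle_grid):
    """Precompute an object-image mapping table from image-side field
    angle coordinates to object-space position coordinates, so that
    table lookups replace per-frame evaluation of the correction function.

    angle_grid: iterable of (u, v) pairs, e.g. only the key pixels'
    coordinates, chosen to balance table size against query coverage.
    """
    return {(u, v): eval_correction(A, B, u, v) for (u, v) in angle_grid}
```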
  • Step 506 Control the display component to display the color value of the pixel according to the expected display position.
  • the object-space position coordinates corresponding to any pixel point in the target image can be used to characterize the expected display position of that pixel point in the display component. It can be seen from the generation process of the aforementioned correction function that if each pixel point in the target image is displayed at its corresponding expected display position, the display component displays the pre-distorted image, and a corresponding distortion-free corrected image can then be formed through the optical system. Therefore, once the display device knows the color value of each pixel point in the target image and its object-space position coordinates, it can control the display component to display the color value of the pixel point at the expected display position represented by the object-space position coordinates, thereby displaying the pre-distorted image and achieving distortion correction of the distorted image.
  • the display device can first determine the color values of the pixels contained in the target image and the corresponding image field angle coordinates.
  • the image field angle coordinates are used to characterize the field of view angle of the virtual image point corresponding to the pixel point,
  • where the virtual image is formed through the optical system when the display component displays the target image; the object-space position coordinates corresponding to the image-side field angle coordinates are then determined according to the aforementioned correction function,
  • where the object-space position coordinates are used to represent the expected display position of the pixel point in the display component; finally, the display component is controlled to display the color value of the pixel point according to the expected display position.
  • the expected display position of any pixel calculated through the above method is the display position of that pixel in the display component. After each pixel is displayed at its corresponding expected display position, the display component displays the pre-distorted image corresponding to the target image; a distortion-free virtual image can then be formed through the optical system, thereby achieving distortion correction of the distorted image.
  • the aforementioned method can calculate, for pixel points of the target image lying in any direction relative to the optical axis, the position coordinates corresponding to their field angle coordinates; that is, the correction function is isotropic with respect to pixel position. Therefore, this method can correct not only symmetrical distortion but also asymmetrical distortion; in other words, it can correct images with distortion in any direction, and has a wider range of applications.
  • the present disclosure also provides an embodiment of a device for generating a correction function.
  • An embodiment of the present disclosure proposes a device for generating a correction function.
  • the device includes one or more processors, and the processor is configured to:
  • a display device including an optical system and a display component, the normal image displayed by the display component forms a distorted image through the optical system, and the distorted image is a virtual image;
  • sample object-space position coordinates of the display device and the corresponding sample image-space angle coordinates.
  • the sample object-space position coordinates are used to characterize the display position of the sample pixels in the normal image in the display component.
  • sample image side visual field angle coordinates are used to characterize the visual field angle of the sample virtual image point corresponding to the sample pixel point in the distorted image;
  • a correction function is generated according to the sample object-space position coordinates and the sample image-space angle coordinates, and the correction function is used to correct the distorted image.
  • the processor is further configured to:
  • the distorted image is captured by a camera, and the field of view angle of the sample virtual image point in the distorted image is calculated using the preset parameters of the camera; and the sample pixel point corresponding to the sample virtual image point in the normal image is determined, and the preset parameters of the display component are used to calculate the display position of the sample pixel point.
  • the number of the sample virtual image points is multiple, and the field of view range formed by the field of view angles of the plurality of sample virtual image points is not less than a preset range threshold.
  • the processor is further configured to:
  • the sample object-space position coordinates of the sample pixel points and the sample image-side field angle coordinates of the corresponding sample virtual image points are substituted into a binary polynomial to generate binary polynomial equations including polynomial coefficients; the binary polynomial equations are solved to determine the values of the polynomial coefficients, and the values are back-substituted into the binary polynomial to obtain the correction function.
  • the present disclosure also provides embodiments of an image correction device.
  • An embodiment of the present disclosure proposes an image correction device, which is applied to a display device including an optical system and a display component.
  • the device includes one or more processors, and the processor is configured to:
  • the image field angle coordinates are used to represent the field of view angle of the virtual image point corresponding to the pixel point,
  • where the virtual image is formed through the optical system when the display component displays the target image;
  • the object-space position coordinates corresponding to the image-side field of view angle coordinates are determined according to the correction function.
  • the object-space position coordinates are used to represent the expected display position of the pixel point in the display component.
  • the correction function is generated by the method for generating a correction function according to any of the foregoing embodiments;
  • Control the display component to display the color value of the pixel according to the expected display position.
  • the processor is further configured to:
  • the display range of the display component is different from the imaging range of the optical system, determine the scaling ratio between the display range and the imaging range;
  • the virtual image point corresponding to the pixel point contained in the target image in the imaging range is determined according to the scaling ratio, and the field of view angle of the virtual image point is used as the image field angle coordinate corresponding to the pixel point.
  • the processor is further configured to:
  • the optical origin of the optical system and the display origin of the display component are determined; the first maximum field of view angle of the first edge point farthest from the optical origin in the optical system and the first farthest distance between the first edge point and the optical origin are determined; the second farthest distance between the second edge point farthest from the display origin in the display component and the display origin is determined; and, according to these quantities, the second maximum field of view angle of the second edge point is calculated based on the geometric relationship between the optical system and the display component, where the second maximum field of view angle is used to characterize the display range of the display component.
  • the processor is further configured to:
  • the processor is further configured to:
  • a plurality of key pixels are sequentially selected in the target image according to a preset viewing angle interval or a preset distance interval.
  • all pixels in the target image include non-key pixels and the key pixels
  • the processor is further configured to: determine the object-space position coordinates corresponding to the image-side field angle coordinates of the key pixels according to the correction function, and determine the image-side field angle coordinates of the non-key pixels and their corresponding object-space position coordinates using an interpolation algorithm; or,
  • the processor is further configured to: based on the color value of each key pixel point, use an interpolation algorithm to determine the color value of each non-key pixel point.
  • the processor is also configured to:
  • the object-space position coordinates corresponding to the image-side field-of-view angle coordinates are queried in the object-image mapping table.
  • the object-image mapping table is calculated based on the correction function and the preset image-side field-of-view angle data.
  • An embodiment of the present disclosure also proposes an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to implement the correction function generation method described in any of the above embodiments.
  • Embodiments of the present disclosure also provide a display device, including: an optical system and a display component; a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to implement the image correction method described in any of the above embodiments.
  • Embodiments of the present disclosure also provide a non-transitory computer-readable storage medium on which a computer program is stored;
  • when the program is executed by a processor, the steps in the method for generating a correction function or in the image correction method described in any of the above embodiments are implemented.
  • FIG. 8 is a schematic block diagram of a device 800 according to an embodiment of the present disclosure.
  • the device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and communications component 816.
  • Processing component 802 generally controls the overall operations of device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the above correction function generation method.
  • processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • Memory 804 is configured to store various types of data to support operations at device 800 . Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 804 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
  • Power supply component 806 provides power to the various components of device 800.
  • Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • multimedia component 808 includes a front-facing camera and/or a rear-facing camera.
  • the front camera and/or the rear camera may receive external multimedia data.
  • Each front-facing camera and rear-facing camera can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • audio component 810 includes a microphone (MIC) configured to receive external audio signals when device 800 is in an operating mode, such as a call mode, recording mode, or speech recognition mode. The received audio signal may be further stored in memory 804 or sent via communication component 816.
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 814 includes one or more sensors for providing various aspects of status assessment for device 800.
  • the sensor component 814 can detect the open/closed state of the device 800 and the relative positioning of components (for example, the display and keypad of the device 800); the sensor component 814 can also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and temperature changes of the device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between apparatus 800 and other devices.
  • the device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G LTE, 5G NR, or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 816 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above correction function generation method.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions executable by the processor 820 of the device 800 to complete the above correction function generation method, is also provided.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
  • FIG. 9 is a schematic block diagram of a device 900 for image correction according to an embodiment of the present disclosure.
  • the device 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
  • the device 900 may include one or more of the following components: a processing component 902, a memory 904, a power supply component 906, a multimedia component 908, an audio component 910, an input/output (I/O) interface 912, a sensor component 914, a communication component 916, an optical system 922, and a display component 924.
  • the display component 924 is used to display an image, which forms a corresponding virtual image through the optical system 922.
  • When the display component 924 displays a normal image, the optical system 922 produces a distorted virtual image; when the display component 924 displays a pre-distorted image corrected by the above image correction method, the optical system 922 produces a distortion-free virtual image.
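As a minimal sketch of this pre-distortion display path (an illustration added for readability, not part of the disclosure; `render_predistorted`, `correction_fn`, and all other names are hypothetical), a fitted correction function maps each target pixel's image-side field-of-view angles to the panel position where its color should be drawn:

```python
import numpy as np

def render_predistorted(target_rgb, angle_u, angle_v, correction_fn, panel_shape):
    """Draw each target-image pixel at the panel position that the correction
    function assigns to its image-side angles, yielding the pre-distorted
    frame that the display component should show."""
    frame = np.zeros(panel_shape + (3,), dtype=target_rgb.dtype)
    xs, ys = correction_fn(angle_u.ravel(), angle_v.ravel())
    cols = np.clip(np.round(xs).astype(int), 0, panel_shape[1] - 1)
    rows = np.clip(np.round(ys).astype(int), 0, panel_shape[0] - 1)
    frame[rows, cols] = target_rgb.reshape(-1, 3)
    return frame
```

Displaying such a frame on the display component 924 would then, per the passage above, produce a distortion-free virtual image through the optical system 922.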
  • Processing component 902 generally controls the overall operations of device 900, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 902 may include one or more processors 920 to execute instructions to complete all or part of the steps of the above image correction method.
  • processing component 902 may include one or more modules that facilitate interaction between processing component 902 and other components.
  • processing component 902 may include a multimedia module to facilitate interaction between multimedia component 908 and processing component 902.
  • Memory 904 is configured to store various types of data to support operation at device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, etc.
  • Memory 904 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • Power supply component 906 provides power to the various components of device 900.
  • Power supply components 906 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for device 900.
  • Multimedia component 908 includes a screen that provides an output interface between the device 900 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide action.
  • multimedia component 908 includes a front-facing camera and/or a rear-facing camera.
  • When the device 900 is in an operation mode, such as a shooting mode or a video mode, the front-facing camera and/or the rear-facing camera can receive external multimedia data.
  • Each front-facing and rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capability.
  • Audio component 910 is configured to output and/or input audio signals.
  • audio component 910 includes a microphone (MIC) configured to receive external audio signals when device 900 is in an operating mode, such as a call mode, recording mode, or speech recognition mode. The received audio signals may be further stored in memory 904 or sent via communication component 916.
  • audio component 910 also includes a speaker for outputting audio signals.
  • the I/O interface 912 provides an interface between the processing component 902 and a peripheral interface module, which may be a keyboard, a click wheel, a button, etc. These buttons may include, but are not limited to: Home button, Volume buttons, Start button, and Lock button.
  • Sensor component 914 includes one or more sensors for providing various aspects of status assessment for device 900.
  • the sensor component 914 can detect the open/closed state of the device 900 and the relative positioning of components (for example, the display and keypad of the device 900); the sensor component 914 can also detect a change in position of the device 900 or of a component of the device 900, the presence or absence of user contact with the device 900, the orientation or acceleration/deceleration of the device 900, and temperature changes of the device 900.
  • Sensor assembly 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 916 is configured to facilitate wired or wireless communication between apparatus 900 and other devices.
  • the device 900 can access a wireless network based on a communication standard, such as WiFi, 2G, 3G, 4G LTE, 5G NR, or a combination thereof.
  • the communication component 916 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communications component 916 also includes a near field communications (NFC) module to facilitate short-range communications.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • apparatus 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for executing the above image correction method.
  • In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 904 including instructions executable by the processor 920 of the device 900 to complete the above image correction method, is also provided.
  • the non-transitory computer-readable storage medium may be ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A method and apparatus for generating a correction function, and an image correction method and apparatus. The method for generating a correction function includes: determining a display device that includes an optical system and a display component, where a normal image displayed by the display component forms a distorted image through the optical system, and the distorted image is a virtual image; obtaining sample object-side position coordinates of the display device and corresponding sample image-side field-of-view angle coordinates, where the sample object-side position coordinates represent the display position, in the display component, of a sample pixel in the normal image, and the sample image-side field-of-view angle coordinates represent the field-of-view angle of the sample virtual image point, in the distorted image, corresponding to the sample pixel; and generating a correction function according to the sample object-side position coordinates and the sample image-side field-of-view angle coordinates, the correction function being used to correct the distorted image. The method simplifies the logic of generating the correction function, improves generation efficiency, and can be used to correct asymmetric distortion.


Claims (16)

  1. A method for generating a correction function, comprising:
    determining a display device including an optical system and a display component, wherein a normal image displayed by the display component forms a distorted image through the optical system, and the distorted image is a virtual image;
    obtaining sample object-side position coordinates of the display device and corresponding sample image-side field-of-view angle coordinates, wherein the sample object-side position coordinates represent a display position, in the display component, of a sample pixel in the normal image, and the sample image-side field-of-view angle coordinates represent a field-of-view angle of a sample virtual image point, in the distorted image, corresponding to the sample pixel;
    generating a correction function according to the sample object-side position coordinates and the sample image-side field-of-view angle coordinates, the correction function being used to correct the distorted image.
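By way of illustration only (not part of the claims), the sample data that claim 1 collects can be held as parallel arrays pairing panel positions with measured virtual-image angles; the values below are invented placeholders and the variable names are hypothetical:

```python
import numpy as np

# Object-side positions (x, y), in panel pixels, of five sample pixels in the
# normal image, and the field-of-view angles (u, v), in degrees, of the
# corresponding sample virtual image points in the distorted image.
sample_x = np.array([0.0, 300.0, 600.0, 900.0, 1199.0])
sample_y = np.array([0.0, 270.0, 540.0, 810.0, 1079.0])
sample_u = np.array([-40.0, -19.7, 0.1, 20.2, 40.0])
sample_v = np.array([-40.0, -20.1, 0.0, 19.8, 40.0])
samples = np.stack([sample_x, sample_y, sample_u, sample_v], axis=1)  # one row per pair
```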
  2. The method according to claim 1, wherein obtaining the sample object-side position coordinates of the display device and the corresponding sample image-side field-of-view angle coordinates comprises:
    querying optical parameters of the optical system, and determining, according to the optical parameters, the display position of the sample pixel in the normal image and the field-of-view angle of the sample virtual image point corresponding to the sample pixel; or,
    capturing the distorted image with a camera, and calculating the field-of-view angle of the sample virtual image point in the distorted image using preset parameters of the camera; and determining the sample pixel in the normal image corresponding to the sample virtual image point, and calculating the display position of the sample pixel using preset parameters of the display component.
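A minimal sketch of the camera-based route in claim 2, assuming a standard pinhole model with calibrated intrinsics (an assumption of this sketch; the claim does not prescribe a particular camera model, and all names are hypothetical):

```python
import numpy as np

def view_angles_from_camera(px, py, fx, fy, cx, cy):
    # Horizontal/vertical field-of-view angle components (degrees) of an
    # image point (px, py) under a pinhole model: fx and fy are focal
    # lengths in pixels, (cx, cy) is the principal point, all preset
    # parameters obtained from camera calibration.
    u = np.degrees(np.arctan2(px - cx, fx))
    v = np.degrees(np.arctan2(py - cy, fy))
    return u, v
```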
  3. The method according to claim 1, wherein there are multiple sample virtual image points, and a field-of-view angle range formed by the field-of-view angles of the multiple sample virtual image points is not smaller than a preset range threshold.
  4. The method according to claim 1, wherein generating the correction function according to the sample object-side position coordinates and the sample image-side field-of-view angle coordinates comprises:
    substituting the sample object-side position coordinates of the sample pixels and the sample image-side field-of-view angle coordinates of the corresponding sample virtual image points into a bivariate polynomial to generate bivariate polynomial equations containing polynomial coefficients;
    solving the bivariate polynomial equations to determine values of the polynomial coefficients, and substituting the values back into the bivariate polynomial to obtain the correction function.
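For illustration, claim 4's fitting step admits a straightforward least-squares reading: substitute each sample pair into a bivariate polynomial x = sum over (m, n) of a_mn * u^m * v^n and y = sum over (m, n) of b_mn * u^m * v^n, then solve for the coefficients. A sketch assuming NumPy (the degree and the names are choices of this sketch, not of the claim):

```python
import numpy as np

def fit_correction_function(u, v, x, y, deg=6):
    """Least-squares fit of x = sum(a_mn u^m v^n), y = sum(b_mn u^m v^n)
    from sample image-side angles (u, v) and matching sample object-side
    positions (x, y). Needs at least (deg + 1) ** 2 well-spread samples."""
    powers = [(m, n) for m in range(deg + 1) for n in range(deg + 1)]
    A = np.stack([u**m * v**n for m, n in powers], axis=1)  # design matrix
    a, *_ = np.linalg.lstsq(A, x, rcond=None)               # coefficients for x
    b, *_ = np.linalg.lstsq(A, y, rcond=None)               # coefficients for y

    def correction_fn(uq, vq):
        # Evaluate the fitted polynomials at query angles (uq, vq).
        Aq = np.stack([uq**m * vq**n for m, n in powers], axis=1)
        return Aq @ a, Aq @ b

    return correction_fn
```

With the placeholder samples shown after claim 1, `fit_correction_function(sample_u, sample_v, sample_x, sample_y, deg=1)` would return a callable mapping angles back to panel positions; a higher degree needs correspondingly more samples.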
  5. An image correction method, applied to a display device including an optical system and a display component, the method comprising:
    determining color values of pixels included in a target image and corresponding image-side field-of-view angle coordinates, wherein the image-side field-of-view angle coordinates represent a field-of-view angle of a virtual image point corresponding to the pixel, and the virtual image is formed through the optical system when the display component displays the target image;
    determining, according to a correction function, object-side position coordinates corresponding to the image-side field-of-view angle coordinates, wherein the object-side position coordinates represent an expected display position of the pixel in the display component, and the correction function is generated by the method according to any one of claims 1-4;
    controlling the display component to display the color value of the pixel at the expected display position.
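Tying the sketches together, a toy end-to-end pass over claim 5's three steps (illustrative only; the correction function below is an invented stand-in for one fitted from real samples, and all values are made up):

```python
import numpy as np

def toy_correction_fn(u, v):
    # Hypothetical stand-in for a fitted correction function: maps angles
    # in degrees to panel coordinates with a mild radial stretch.
    r2 = (u / 40.0) ** 2 + (v / 40.0) ** 2
    return 600 + 12.0 * u * (1 + 0.1 * r2), 540 + 12.0 * v * (1 + 0.1 * r2)

H, W = 8, 8                                   # tiny target image for the demo
target = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)
uu, vv = np.meshgrid(np.linspace(-40, 40, W), np.linspace(-40, 40, H))
xs, ys = toy_correction_fn(uu.ravel(), vv.ravel())   # expected display positions
rows = np.clip(np.round(ys).astype(int), 0, 1079)
cols = np.clip(np.round(xs).astype(int), 0, 1199)
frame = np.zeros((1080, 1200, 3), dtype=np.uint8)
frame[rows, cols] = target.reshape(-1, 3)     # the pre-distorted frame to display
```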
  6. The method according to claim 5, wherein determining the image-side field-of-view angle coordinates of the pixels included in the target image comprises:
    when a display range of the display component is different from an imaging range of the optical system, determining a scaling ratio between the display range and the imaging range;
    determining, according to the scaling ratio, the virtual image point in the imaging range corresponding to a pixel included in the target image, and taking the field-of-view angle of the virtual image point as the image-side field-of-view angle coordinates corresponding to the pixel.
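A one-line sketch of the scaling in claim 6, assuming each range is characterized by its maximum field-of-view angle (an assumption of this sketch; the names are hypothetical):

```python
def rescale_view_angle(theta_pixel, theta_imaging_max, theta_display_max):
    # When the display range (max angle theta_display_max) differs from the
    # imaging range (max angle theta_imaging_max), a pixel at angle
    # theta_pixel on the display side maps to a virtual image point at
    # theta_pixel * theta_imaging_max / theta_display_max.
    return theta_pixel * theta_imaging_max / theta_display_max
```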
  7. The method according to claim 6, wherein determining the display range of the display component comprises:
    determining an optical origin of the optical system and a display origin of the display component, wherein the optical origin, the display origin, and an observation point of the optical device are all located on an optical axis of the display device;
    determining a first maximum field-of-view angle of a first edge point, in the optical system, farthest from the optical origin, and a first farthest distance between the first edge point and the optical origin; and determining a second farthest distance between a second edge point, in the display component, farthest from the display origin, and the display origin;
    calculating a second maximum field-of-view angle of the second edge point according to the first maximum field-of-view angle, the first farthest distance, and the second farthest distance, based on a geometric relationship between the optical system and the display component, the second maximum field-of-view angle being used to represent the display range of the display component.
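An illustrative computation for claim 7, under the simplifying assumption of a coaxial layout in which both edge points are referred to a common viewing distance d, so that tan(theta1) = r1 / d and tan(theta2) = r2 / d (an assumption of this sketch, not a statement of the claim; names are hypothetical):

```python
import math

def second_max_view_angle(theta1_deg, r1, r2):
    # theta1_deg: first maximum field-of-view angle, in degrees;
    # r1: first farthest distance; r2: second farthest distance.
    # With a shared viewing distance, theta2 = atan(r2 * tan(theta1) / r1).
    t1 = math.radians(theta1_deg)
    return math.degrees(math.atan(r2 * math.tan(t1) / r1))
```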
  8. The method according to claim 5, wherein determining the color values of the pixels included in the target image and the corresponding image-side field-of-view angle coordinates comprises:
    determining the color value and image-side field-of-view angle coordinates of each of all the pixels included in the target image; or,
    determining key pixels among all the pixels included in the target image, and determining the color value and image-side field-of-view angle coordinates of each key pixel respectively.
  9. The method according to claim 8, wherein determining the key pixels among all the pixels included in the target image comprises:
    randomly selecting multiple key pixels in the target image; or,
    sequentially selecting multiple key pixels in the target image at preset field-of-view angle intervals or preset distance intervals.
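A sketch of the preset-distance-interval option in claim 9 (illustrative; the step size and names are choices of this sketch):

```python
import numpy as np

def key_pixels_by_distance(width, height, step=5):
    # Select key pixels on a regular grid every `step` pixels (a preset
    # distance interval); all remaining pixels are non-key pixels.
    gx, gy = np.meshgrid(np.arange(0, width, step), np.arange(0, height, step))
    return np.stack([gx.ravel(), gy.ravel()], axis=1)
```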
  10. The method according to claim 8, wherein all the pixels in the target image include non-key pixels and the key pixels,
    determining, according to the correction function, the object-side position coordinates corresponding to the image-side field-of-view angle coordinates comprises: determining, according to the correction function, the object-side position coordinates corresponding to the image-side field-of-view angle coordinates of the key pixels, and determining the image-side field-of-view angle coordinates of the non-key pixels and their corresponding object-side position coordinates using an interpolation algorithm; or,
    the method further comprises: determining the color value of each non-key pixel using an interpolation algorithm based on the color values of the key pixels.
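One possible interpolation step for claim 10, sketched with SciPy's `griddata` (an assumption of this sketch; the claim does not name a particular interpolation algorithm):

```python
import numpy as np
from scipy.interpolate import griddata

def positions_for_non_key(key_angles, key_positions, non_key_angles):
    # key_angles: (K, 2) image-side (u, v) of key pixels; key_positions:
    # (K, 2) object-side (x, y) already obtained from the correction
    # function. Non-key pixels get positions by linear interpolation
    # between the key results instead of evaluating the correction
    # function for every pixel.
    xs = griddata(key_angles, key_positions[:, 0], non_key_angles, method="linear")
    ys = griddata(key_angles, key_positions[:, 1], non_key_angles, method="linear")
    return np.stack([xs, ys], axis=1)
```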
  11. The method according to claim 5, wherein determining, according to the correction function, the object-side position coordinates corresponding to the image-side field-of-view angle coordinates comprises:
    substituting the image-side field-of-view angle coordinates into the correction function, and determining the corresponding object-side position coordinates by solving the correction function; or,
    querying, in an object-image mapping table, the object-side position coordinates corresponding to the image-side field-of-view angle coordinates, the object-image mapping table being calculated according to the correction function and preset image-side field-of-view angle data.
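A sketch of the lookup-table option in claim 11 (illustrative; the grid, rounding precision, and names are choices of this sketch):

```python
import numpy as np

def build_object_image_table(correction_fn, u_grid, v_grid, decimals=2):
    # Precompute object-side positions for a preset grid of image-side
    # angles, so runtime correction is a table lookup rather than a solve.
    uu, vv = np.meshgrid(u_grid, v_grid)
    xs, ys = correction_fn(uu.ravel(), vv.ravel())
    keys = zip(np.round(uu.ravel(), decimals), np.round(vv.ravel(), decimals))
    return {(float(u), float(v)): (float(x), float(y))
            for (u, v), x, y in zip(keys, xs, ys)}

def lookup_position(table, u, v, decimals=2):
    # Returns None on a miss; the caller may then fall back to solving the
    # correction function directly, as the claim permits.
    return table.get((round(u, decimals), round(v, decimals)))
```

A larger grid makes lookups more complete but costs more storage, so the table size is a balance between memory and query coverage.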
  12. A device for generating a correction function, the device comprising one or more processors, wherein the processors are configured to:
    determine a display device including an optical system and a display component, wherein a normal image displayed by the display component forms a distorted image through the optical system, and the distorted image is a virtual image;
    obtain sample object-side position coordinates of the display device and corresponding sample image-side field-of-view angle coordinates, wherein the sample object-side position coordinates represent a display position, in the display component, of a sample pixel in the normal image, and the sample image-side field-of-view angle coordinates represent a field-of-view angle of a sample virtual image point, in the distorted image, corresponding to the sample pixel;
    generate a correction function according to the sample object-side position coordinates and the sample image-side field-of-view angle coordinates, the correction function being used to correct the distorted image.
  13. An image correction device, applied to a display device including an optical system and a display component, the image correction device comprising one or more processors, wherein the processors are configured to:
    determine color values of pixels included in a target image and corresponding image-side field-of-view angle coordinates, wherein the image-side field-of-view angle coordinates represent a field-of-view angle of a virtual image point corresponding to the pixel, and the virtual image is formed through the optical system when the display component displays the target image;
    determine, according to a correction function, object-side position coordinates corresponding to the image-side field-of-view angle coordinates, wherein the object-side position coordinates represent an expected display position of the pixel in the display component, and the correction function is generated by the method according to any one of claims 1-4;
    control the display component to display the color value of the pixel at the expected display position.
  14. An electronic device, comprising:
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to implement the method according to any one of claims 1 to 4.
  15. A display device, comprising:
    an optical system and a display component;
    a processor;
    a memory for storing processor-executable instructions;
    wherein the processor is configured to implement the method according to any one of claims 5 to 11.
  16. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein when the program is executed by a processor, the steps in the method according to any one of claims 1 to 11 are implemented.
PCT/CN2022/114002 2022-08-22 2022-08-22 Generation of correction function, image correction method and device WO2024040398A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2022/114002 WO2024040398A1 (zh) Generation of correction function, image correction method and device
CN202280002785.9A CN117918019A (zh) Generation of correction function, image correction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/114002 WO2024040398A1 (zh) Generation of correction function, image correction method and device

Publications (1)

Publication Number Publication Date
WO2024040398A1 true WO2024040398A1 (zh) 2024-02-29

Family

ID=90012108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/114002 WO2024040398A1 (zh) Generation of correction function, image correction method and device

Country Status (2)

Country Link
CN (1) CN117918019A (zh)
WO (1) WO2024040398A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10327373A * 1997-05-26 1998-12-08 Mitsubishi Electric Corp Eyepiece video display device
CN108876725A * 2017-05-12 2018-11-23 深圳市魔眼科技有限公司 Virtual image distortion correction method and system
CN109688392A * 2018-12-26 2019-04-26 联创汽车电子有限公司 AR-HUD optical projection system, mapping relationship calibration method, and distortion correction method
CN111127365A * 2019-12-26 2020-05-08 重庆矢崎仪表有限公司 HUD distortion correction method based on cubic spline curve fitting
CN112258399A * 2020-09-10 2021-01-22 江苏泽景汽车电子股份有限公司 HUD image optical correction method using reverse modeling
CN113240592A * 2021-04-14 2021-08-10 重庆利龙科技产业(集团)有限公司 Distortion correction method for computing the virtual image plane under AR-HUD dynamic eye positions


Also Published As

Publication number Publication date
CN117918019A (zh) 2024-04-23


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202280002785.9

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22955947

Country of ref document: EP

Kind code of ref document: A1