WO2019179168A1 - Projection distortion correction method, device, system and storage medium - Google Patents

Projection distortion correction method, device, system and storage medium

Info

Publication number
WO2019179168A1
WO2019179168A1 (application PCT/CN2018/118837)
Authority
WO
WIPO (PCT)
Prior art keywords
projection
image
target surface
center
projection target
Prior art date
Application number
PCT/CN2018/118837
Other languages
English (en)
French (fr)
Inventor
余新
李屹
Original Assignee
深圳光峰科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 深圳光峰科技股份有限公司
Publication of WO2019179168A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179 Video signal processing therefor
    • H04N 9/3185 Geometric adjustment, e.g. keystone or convergence

Definitions

  • the present invention relates to the field of projection interaction, and in particular to a projection distortion correction method, device, projection distortion correction system and storage medium.
  • The existing method performs 3D modeling of the projected object and uses texture rendering to locally compensate the projected image, generating a new projected image. This requires producing, before the presentation, an image or video that is pre-compensated for the shape of the projected surface.
  • The process of compensating for distortion requires manual modeling, and installing the projector requires complex calibration to match the results of the distortion correction.
  • The position of the projector relative to the projected surface must remain fixed during projection. If the projection surface needs to move during the presentation, the existing technology cannot meet this requirement.
  • supporting a moving projection surface requires a device capable of measuring the geometry of the projection target surface and automatically compensating according to the shape of the projection surface.
  • the invention provides a projection distortion correction method, the method comprising:
  • a projection distortion correction method comprising:
  • The boundary of the projection picture is cropped so that it coincides with the boundary of the projection target surface, to obtain a distortion corrected projection image.
  • the method further comprises: performing a three-dimensional mapping on the projection target surface by using the projection image to generate a texture mapping function, and then performing inverse transformation on the projection image by using the texture mapping function to generate the distortion-compensated image.
  • a center of the projection target curved surface is determined by a center of gravity, a geometric center, or a preset feature matching center of the 3D point cloud of the projection target curved surface.
  • The boundary of the projection picture is determined by the intersection of the 3D point cloud of the projection target curved surface with the boundary of the conical space bounded by the projection rays of the pixels corresponding to the four corners of the projection image.
  • a projection distortion correcting device comprising:
  • an image acquisition module configured to acquire a depth image of the projection target surface captured by the depth image acquisition system;
  • a projection target surface model establishing module configured to process the depth image acquired by the depth image acquisition system to obtain a 3D point cloud, extract the 3D point cloud of the projection target surface from it, and build a model of the projection target surface based on that 3D point cloud; and
  • a distortion correction module configured to determine the center and boundary of the projection target surface from the model of the projection target surface, determine the center and boundary of the projection picture from the three-dimensional position of the projection target surface, calculate the direction and distance by which the center of the projection target surface deviates from the center of the projected picture, translate the projected image by that direction and distance so that the center of the projection target curved surface is aligned with the center of the projected picture, and crop the edge of the projected picture so that the boundary of the projection picture coincides with the boundary of the projection target curved surface, thereby obtaining a distortion corrected projection image.
  • the projection distortion correction module is further configured to perform a three-dimensional mapping on the projection target curved surface by using the projection image to generate a texture mapping function, and then inversely transform the projection image by using the texture mapping function to generate the distortion-compensated image.
  • a projection distortion correction system comprising: a projection system coupled to a projection source device for projecting content of the projection source device onto a projection target curved surface; a depth image acquisition system fixedly disposed on the projection system for acquiring a depth image including the projection target curved surface; and a projection distortion correction device as described above.
  • The depth image acquisition system is capable of acquiring depth information in real time at a certain frame rate and comprises one or more depth acquisition devices; if there are multiple devices, they are disposed at different positions on the projection system.
  • the projection distortion correction device is integrated within the projection system or within the projection source device.
  • a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements a projection distortion correction method as described above.
  • The projection distortion correction method calculates the model and the three-dimensional position of the projection target surface from the depth image information acquired by the depth image acquisition system, and performs correction operations such as center alignment and edge cropping on the projection image source, so that the image projected onto the projection surface coincides with the projection surface; distortion compensation can thus be performed automatically according to the shape of the projection surface.
  • FIG. 1 is a schematic structural view of a projection distortion correction system according to an embodiment of the present invention.
  • FIG. 2 is a block diagram of a projection distortion correcting apparatus according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a point cloud fast grouping algorithm according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural view of a projection distortion correcting apparatus according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of a projection distortion correction method according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of a mapping algorithm according to an embodiment of the present invention.
  • Fig. 7 is a comparison diagram of a projection image according to an embodiment of the present invention before and after distortion correction.
  • FIG. 8 is a comparison diagram of still another projection image before and after distortion correction according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of an off-plane-projection active electronic display board to which a projection distortion correction system according to an embodiment of the present invention is applied.
  • FIG. 10 is a schematic diagram of projection onto a moving cylinder on a stage, to which a projection distortion correction system according to an embodiment of the present invention is applied.
  • Distortion correction module 304
  • When a component is referred to as being "fixed" to another component, it can be directly on the other component, or an intervening component may also be present.
  • When a component is considered to be "connected" to another component, it can be directly connected to the other component, or an intervening component may be present.
  • When a component is considered to be "set on" another component, it can be placed directly on the other component, or an intervening component may be present.
  • the terms “vertical,” “horizontal,” “left,” “right,” and the like, as used herein, are for illustrative purposes only.
  • FIG. 1 is a schematic structural diagram of a projection distortion correction system 1000 according to an embodiment of the present invention.
  • the projection distortion correction system 1000 includes a projection system 1, a depth image acquisition system 2, and a projection distortion correction device 3.
  • the depth image acquisition system 2 is fixedly disposed on the projection system 1 and its position relative to the projection system 1 is fixed and known.
  • the depth image acquisition system 2 is configured to acquire depth image information of the projection target curved surface 4 in the projection direction of the projection system 1.
  • the depth image acquisition system 2 can be any device capable of generating spatial depth information, such as a ToF camera, a structured light 3D scanner, a binocular parallax 3D scanning device, and the like.
  • the depth image acquisition system 2 can be mounted facing the projection direction of the projection system 1 and its image acquisition range includes at least the projection target surface.
  • the depth image acquisition system 2 may include one or more depth image acquisition devices.
  • Multiple depth image acquisition devices are provided so that the depth image acquired by the depth image acquisition system 2 is more accurate.
  • A single depth image acquisition device is described as an example in the following embodiments.
  • the projection system 1 is capable of being communicatively coupled to a projection source device (not shown) for projecting the content of the projection source device onto the projection target curved surface 4.
  • the projection system 1 and the projection source device can be connected by wire or wirelessly.
  • the projection system 1 can also have a built-in storage device in which the projection content is stored.
  • the projection system 1 can also be provided with a connection interface for connecting an external storage device to read and project content in the external storage device.
  • the projection system 1 is provided with a memory card slot, and the contents of the memory card can be read and played by inserting a memory card.
  • The projection distortion correction device 3 can be communicably connected to the depth image acquisition system 2 and to the projection source device. It is configured to acquire from the depth image acquisition system 2 the depth image of the projection target curved surface 4, process and analyze that image to obtain a 3D point cloud, extract the 3D point cloud of the projection target curved surface 4, and perform coordinate conversion to obtain a mathematical representation of the projection target curved surface 4.
  • Coordinate transformation is also performed on the coordinates of the projected image, so that the projection image and the projection target curved surface are expressed in the same coordinate system, and distortion correction processing of the image is then carried out; this processing includes center alignment of the projection image with the projection target surface and edge cropping.
  • the distortion correction process further comprises performing distortion correction on the projection picture according to the fluctuation of the projection target curved surface.
  • the distortion corrected image is sent to the projection system 1 to project the corrected image to the projection target curved surface 4.
  • The projection distortion correcting device 3 may be a control unit integrated in the projection source device, or an electronic device with data processing and communication capability that is independent of the projection source device's control device, such as a computer, mobile phone, tablet, or personal digital assistant (PDA).
  • the projection distortion correcting device 3 may also be a control device integrated in the projection system 1.
  • the projection source device is communicably coupled to the projection system 1 for outputting projected content to the projection system 1.
  • The projection source device can be any electronic device with data processing capability, such as a computer, a mobile phone, a tablet, or a Personal Digital Assistant (PDA). It can be understood that the projection source device can also be another content-outputting electronic device, such as a television, a set-top box, a video player, a game machine, or a video recorder.
  • The wired mode includes connecting through a communication port, such as a Universal Serial Bus (USB) interface, a High-Definition Multimedia Interface (HDMI), a Video Graphics Array (VGA) interface, a Controller Area Network (CAN), serial and/or other standard network connections, an Inter-Integrated Circuit (I2C) bus, etc.
  • the wireless method may employ any type of wireless communication system, such as Bluetooth, infrared, Wireless Fidelity (WiFi), cellular technology, satellite, and broadcast.
  • the cellular technology may include mobile communication technologies such as second generation (2G), third generation (3G), fourth generation (4G) or fifth generation (5G).
  • FIG. 2 is a functional block diagram of the projection distortion correcting device 3 according to an embodiment of the present invention.
  • the projection distortion correction device 3 includes an image acquisition module 300, a projection target surface model creation module 302, and a distortion correction module 304.
  • the image acquisition module 300 is configured to acquire a depth image of the projection target curved surface 4 acquired by the depth image acquisition system 2 from the depth image acquisition system 2 .
  • The projection target surface model establishing module 302 is configured to process the depth image and perform coordinate conversion to obtain a mathematical representation of a three-dimensional model of the projection target surface (e.g., a triangular mesh).
  • When the depth image acquisition system 2 includes two or more depth image acquisition devices, the image acquisition module 300 acquires a depth image from each device and processes each device's image (e.g., noise reduction, down-sampling, outlier filtering) to generate a 3D point cloud in that device's local coordinate system;
  • then, using pre-configured information such as each device's position, orientation, and imaging aspect ratio, the point clouds in the local coordinate systems are converted to global coordinates and merged into a single total point cloud.
  • The coordinates, in the local coordinate system of the depth image acquisition device, of any pixel (i-th row, j-th column) with depth value z in the device's output depth image can be expressed as (z·tan((j/n − 1/2)·θh), z·tan((i/m − 1/2)·θv), z), where θh and θv are the horizontal and vertical viewing angles of the depth image acquisition device, and n×m is its imaging resolution.
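The pixel-to-point conversion above can be sketched as follows (a minimal pinhole-model sketch; the function name and the exact angular parameterization are assumptions, not the patent's notation):

```python
import math

def depth_pixel_to_local_point(i, j, z, theta_h, theta_v, n, m):
    """Back-project pixel (row i, column j) with depth reading z into the
    depth camera's local coordinate system, assuming a pinhole model.

    theta_h, theta_v: horizontal and vertical viewing angles (radians);
    n x m: imaging resolution of the depth image acquisition device.
    """
    # Offset of the pixel from the optical axis, expressed as a fraction
    # of the field of view, scaled by the depth along the Z axis.
    x = z * math.tan((j / n - 0.5) * theta_h)
    y = z * math.tan((i / m - 0.5) * theta_v)
    return (x, y, z)
```

A pixel on the optical axis (the image center) maps to (0, 0, z), as expected.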
  • the projection target surface model building module 302 extracts a 3D point cloud of the projection target surface and constructs a model of the projection target surface.
  • The projection distortion correcting device 3 classifies the points according to the distribution density characteristics of the point cloud; any suitable classification algorithm may be adopted, such as k-nearest neighbors or a k-d tree.
  • A point cloud converted from a depth image is a structured point cloud: the neighboring points of any point can be found quickly from that point's subscripts (its row and column indices), so a grouping acceleration algorithm specific to structured point clouds can also be used.
  • a flow chart as shown in FIG. 3 describes a fast classification algorithm.
  • step 401 it is determined whether the point cloud is empty, that is, whether there is a point in the point cloud. If not, proceed to step 402, otherwise, the process ends.
  • step 402 a point is randomly selected from the point cloud as a seed point.
  • step 403 the seed point is placed on the stack.
  • step 404 it is determined whether the stack is empty, that is, whether there is a point in the stack. If not, the process proceeds to step 405. Otherwise, the process proceeds to step 412.
  • step 405 a point is taken from the stack.
  • step 406 the extracted point is added to the cluster group.
  • Step 407 Acquire all neighboring points of the point, where the neighboring point refers to a point where the distance from the point is within a predetermined threshold range.
  • step 408 it is determined whether all neighboring points have been traversed. If yes, go back to step 404, if no, go to step 409.
  • Step 409 Select a neighboring point to determine whether the neighboring point has been accessed, that is, whether the neighboring point has joined the stack. If yes, go back to step 408. If no, go to step 410.
  • Step 410 Determine whether the distance between the neighboring point and the point taken in step 405 is less than a predetermined threshold. If yes, proceed to step 411. If no, return to step 408.
  • step 411 the neighboring point is added to the stack.
  • Step 412 creating a cluster of points from the cluster group.
  • Step 413 Delete points in the cluster group from the point cloud.
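Steps 401–413 above amount to a stack-based region-growing (flood-fill) clustering. A minimal sketch, using a brute-force neighbour search in place of the structured-cloud index lookup (function and variable names are illustrative):

```python
def cluster_point_cloud(points, threshold):
    """Group a point cloud into clusters by region growing: pick a seed
    (steps 401-403), then repeatedly pop a point off the stack and absorb
    every unvisited neighbour within `threshold` (steps 404-411), emitting
    one cluster per seed (steps 412-413). `points` is a list of (x, y, z)."""
    def near(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= threshold ** 2

    remaining = list(points)
    clusters = []
    while remaining:                       # step 401: point cloud not empty
        seed = remaining[0]                # step 402: choose a seed point
        stack, visited = [seed], {seed}    # step 403: push the seed
        group = []
        while stack:                       # step 404: stack not empty
            p = stack.pop()                # step 405: take a point
            group.append(p)                # step 406: add it to the group
            for q in remaining:            # steps 407-410: scan neighbours
                if q not in visited and near(p, q):
                    visited.add(q)
                    stack.append(q)        # step 411: push the neighbour
        clusters.append(group)             # step 412: emit the cluster
        remaining = [r for r in remaining if r not in visited]  # step 413
    return clusters
```

Two well-separated groups of points come out as two clusters; the threshold plays the role of the predetermined distance of steps 407 and 410.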
  • the projection target surface model establishing module 302 extracts a point cloud of the projection target curved surface 4 from the classified 3D point cloud.
  • Based on this point cloud, a model of the projection target surface (e.g., a triangular mesh model) is established.
  • Any existing modeling method can be used, for example, a triangular meshing process based on a Delaunay triangulation algorithm.
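For a structured point cloud (one point per depth-image pixel), a triangular mesh can be generated directly from the pixel grid without a general Delaunay pass, since the grid already gives the connectivity; a minimal index-generation sketch (names are illustrative):

```python
def grid_mesh_triangles(rows, cols):
    """Mesh a structured point cloud (one point per depth-image pixel)
    by splitting each grid cell into two triangles. Returned triples are
    indices into a row-major list of points."""
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            a = r * cols + c          # top-left corner of the cell
            b = a + 1                 # top-right
            d = a + cols              # bottom-left
            e = d + 1                 # bottom-right
            tris.append((a, b, d))    # upper triangle of the cell
            tris.append((b, e, d))    # lower triangle of the cell
    return tris
```

An unstructured cloud would instead need a true Delaunay triangulation (e.g., scipy.spatial.Delaunay); the grid shortcut is valid only because the depth image imposes a regular sampling.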
  • the distortion correction module 304 is configured to perform distortion correction on the projected image according to the model of the projection target curved surface.
  • the distortion correction module 304 first converts the projected image into a three-dimensional global coordinate system that is unified with the projection target surface.
  • The distortion correction module 304 establishes a projection space conversion function using pre-configured projection conversion parameters, such as the projection aspect ratio and the divergence angles in the vertical and horizontal directions.
  • the spatial transformation function converts the two-dimensional local coordinates of the projected image into spatial global coordinates.
  • The two-dimensional local coordinates of the projected image may first be converted to three-dimensional local coordinates in the projection system coordinate system, and then converted to the global coordinate system according to the position of the lens of the projection system 1.
  • the projection system coordinate system is a three-dimensional coordinate system constructed with the optical center of the projection system 1 as an origin.
  • After the distortion correction module 304 has converted the projection image and the projection target curved surface into the same coordinate system (three-dimensional global coordinates), it first determines the position of the center of the projection target curved surface 4 from the three-dimensional point cloud of the projection target curved surface.
  • The center of the projection target curved surface 4 can be determined by calculating the center of gravity or the geometric center of its point cloud, or by a preset feature-matching center.
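The two point-cloud-based options named above can be sketched as follows (feature matching omitted; the function name and mode strings are illustrative):

```python
def surface_center(points, mode="gravity"):
    """Center of the projection target surface's point cloud: either the
    center of gravity (mean of all points) or the geometric center
    (midpoint of the axis-aligned bounding box)."""
    xs, ys, zs = zip(*points)
    if mode == "gravity":
        n = len(points)
        return (sum(xs) / n, sum(ys) / n, sum(zs) / n)
    # Geometric center: midpoint of the bounding box on each axis.
    return tuple((min(v) + max(v)) / 2 for v in (xs, ys, zs))
```

The two centers coincide for symmetric clouds but differ for skewed ones, which is why the patent lists them as alternatives.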
  • The projection picture is the curved surface defined by the intersection of the projection surface with the conical space bounded by the boundary projection rays in the local coordinate system of the projection system.
  • The boundary of the projected picture is determined by the boundary of the conical space bounded by the projection rays of the pixels corresponding to the four corners of the projected image.
  • the center of the projected picture may be determined as the position at which the center of the projected image is mapped to the projected picture.
  • The boundary of the projection target surface 4 is determined by the point cloud characterizing the projection target surface 4. If the point cloud of the projection target surface 4 lies completely within the projection cone volume, the boundary of the projection target surface is determined by the boundary of the point cloud; otherwise, it is determined by the part of the point cloud that intersects the projection cone volume.
  • The distortion correction module 304 translates the projected image according to the direction and distance by which the center of the projection target curved surface 4 (in three-dimensional global coordinates) deviates from the center of the projected picture, so that the center of the projection target curved surface 4 is aligned with the center of the projected picture; the projection distortion correcting device 3 then crops the translated projected image according to the boundary of the projected picture, so that the boundary of the projected picture coincides with the boundary of the projection target curved surface 4 (i.e., the boundary of the projection target surface point cloud).
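In two dimensions, this translate-then-crop step reduces to an offset plus a rectangle intersection; a minimal sketch (function names and the (xmin, ymin, xmax, ymax) box convention are assumptions):

```python
def center_align_offset(surface_center, picture_center):
    """Direction and distance by which the projected image must be shifted
    so the picture center lands on the surface center."""
    return tuple(s - p for s, p in zip(surface_center, picture_center))

def crop_to_boundary(picture_box, surface_box):
    """Intersect the (already translated) picture rectangle with the
    target-surface bounding rectangle; boxes are (xmin, ymin, xmax, ymax).
    A 2D sketch of the edge-cropping step."""
    return (max(picture_box[0], surface_box[0]),
            max(picture_box[1], surface_box[1]),
            min(picture_box[2], surface_box[2]),
            min(picture_box[3], surface_box[3]))
```

The real pipeline performs both operations in three-dimensional global coordinates against the point-cloud boundary, but the shift-then-intersect structure is the same.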
  • After the distortion correction module 304 completes the alignment of the projection image with the center of the projection target surface, a distortion transformation function is generated according to the position of the projection target curved surface, and the projection image is converted into three-dimensional global coordinates. Then, using 3D texture algorithms and principles, the coordinate-transformed projection image is mapped in 3D onto the projection target surface to generate a texture mapping function. The image processing system then inversely transforms the projected image using the texture mapping function to generate a distortion-compensated image.
  • the texture algorithm assumes that the area on the screen corresponds to the area on the surface.
  • The projection target surface is represented by a triangular mesh, and each triangle on it has a corresponding normal direction. If the normal direction of a triangle is perpendicular to the projected picture (spatial light modulator), the pattern on the picture can be projected onto the triangle without distortion, as shown in (a) of FIG. 6. If the normal direction of the triangle is not perpendicular to the projected picture, distortion will occur, as shown in (b) of FIG. 6.
  • The first triangle 70 in (b) of FIG. 6 is the region obtained when the picture is cast directly onto a tilted triangle of the projection target curved surface.
  • the algorithm maps the picture within the second triangle 72 to the region of the first triangle 70, which is essentially the same as the 3D texture algorithm in the 3D display.
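Carrying the picture in the second triangle 72 onto the first triangle 70 is the standard texture-mapping trick: transport each point's barycentric coordinates from one triangle to the other. A 2D sketch (function names are illustrative):

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates (w_a, w_b, w_c) of 2D point p in triangle abc."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return wa, wb, 1.0 - wa - wb

def map_between_triangles(p, src, dst):
    """Send p from triangle `src` to triangle `dst`, preserving its
    barycentric coordinates -- the core of the 3D texture mapping step."""
    w = barycentric(p, *src)
    return (sum(wi * v[0] for wi, v in zip(w, dst)),
            sum(wi * v[1] for wi, v in zip(w, dst)))
```

Vertices map to vertices and the centroid maps to the centroid, so the picture region deforms linearly over each triangle of the mesh.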
  • The depth image acquisition system 2 and the projection distortion correction device operate in a real-time mode: the depth images acquired by the depth image acquisition system 2 at a certain frame rate are collected and analyzed, the model of the projection target surface is generated in real time, and the projected image is corrected in real time. The projection distortion correcting device 3 and system of the present invention can therefore track the projection target curved surface in real time, so that the projected image always coincides with the projection target curved surface.
  • the projection distortion correcting device 3 is also capable of performing distortion correction on the projection image according to the fluctuation of the projection target curved surface 4.
  • the projection transformation function is generated according to the position of the projection target curved surface and the projection image is converted into the three-dimensional global coordinates.
  • the coordinate transformation of the projection image is used to perform 3D mapping on the projection target surface to generate a texture mapping function.
  • the image processing system inversely transforms the projected image by using a texture mapping function to generate a distortion-compensated image.
  • the distortion-compensated image projected onto the projection target surface can eliminate the distortion caused by the surface undulation, as shown in (d) of FIG. 7 and (d) of FIG.
  • FIG. 4 is a schematic structural diagram of a projection distortion correcting device 3 according to an embodiment of the present invention.
  • the projection distortion correcting device 3 includes a processor 31, a memory 32, and a communication device 33.
  • The memory 32 can be used to store computer programs and/or modules (such as the image acquisition module 300, the projection target surface model building module 302, and the distortion correction module 304 described above); the processor 31 implements the various functions of the projection distortion correcting device 3 by running or executing the computer programs and/or modules stored in the memory 32.
  • the memory 32 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a projection function of a projector, a social function of a mobile phone, etc.), and the like;
  • the data area can store data (such as depth image data, 3D point cloud data, etc.) created according to the use of the projection distortion correcting device 3, and the like.
  • The memory 32 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Memory Card (SMC), a Secure Digital (SD) card, a flash card, at least one disk storage device, a flash memory device, or other solid-state storage device.
  • the processor 31 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc.
  • The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 31 is the control center of the projection distortion correcting device 3 and connects the various parts of the entire device using various interfaces and lines.
  • the projection distortion correcting device 3 further includes at least one communication device 33.
  • the communication device 33 may be a wired communication device or a wireless communication device.
  • The wired communication device includes a communication port, such as a Universal Serial Bus (USB) interface, a Controller Area Network (CAN), serial and/or other standard network connections, an Inter-Integrated Circuit (I2C) bus, etc.
  • the wireless communication device can employ any type of wireless communication system, such as Bluetooth, infrared, Wireless Fidelity (WiFi), cellular technology, satellite, and broadcast.
  • the cellular technology may include mobile communication technologies such as second generation (2G), third generation (3G), fourth generation (4G) or fifth generation (5G).
  • The projection distortion correction device 3 is configured to communicate with the depth image acquisition system 2 through the communication device 33 to acquire the images captured by the depth image acquisition system 2, analyze those images to obtain a 3D point cloud, extract the 3D point cloud of the projection target surface, calculate the model of the projection target surface and the center and boundary of the projection picture, and perform distortion correction processing on the image according to the model of the target surface and the center and boundary of the projection picture.
  • the projection distortion correcting device 3 is further configured to communicate with the projection system 1 via the communication device 33 to transmit a distortion-corrected projected image to the projection system 1 to effect projection of the projected image.
  • The schematic diagram is only an example of the projection distortion correcting device 3 and does not constitute a limitation on it; the device may include more or fewer components than illustrated, combine certain components, or use different components. For example, the projection distortion correcting device 3 may also include an input/output device, a display device, and the like according to actual needs.
  • the input and output device can include any suitable input device including, but not limited to, a mouse, a keyboard, a touch screen, or a contactless input, such as gesture input, voice input, and the like.
  • The display device may be a Liquid Crystal Display (LCD), a Light-Emitting Diode (LED) display, an Organic Light-Emitting Diode (OLED) display, or other suitable display.
  • FIG. 5 is a flowchart of a method for correcting projection distortion according to an embodiment of the present invention.
  • a depth image is acquired.
  • the projection distortion correcting device 3 acquires a depth image of the projection target curved surface 4 acquired by the depth image acquisition system 2 from the depth image acquisition system.
  • The depth image acquisition system 2 acquires depth images of the projection target surface 4 at a predetermined frame rate (e.g., 30-60 frames/second). For a fixed projection surface there is no particular frame-rate requirement; for a moving projection surface, the frame rate should be high enough that the projection surface does not change significantly between adjacent frames. It can be understood that the acquisition frame rate of the depth image acquisition system 2 can be adjusted according to actual needs; when applied to a moving projection surface, it suffices that the captured scene does not change significantly between any two adjacent frames.
  • the depth image acquired by the depth image acquisition system 2 is transmitted to the projection distortion correcting device 3 by wire or wirelessly.
  • the depth image is processed to obtain a 3D point cloud.
  • the projection distortion correcting device 3 processes the depth image (including, but not limited to, noise reduction, down-sampling, and outlier filtering) to generate a 3D point cloud in the local coordinate system of each depth image acquisition device.
  • the image plane coordinate system is a Cartesian coordinate system whose origin is the intersection of the optical axis of the depth image capturing device with the image plane. The local coordinate system of the depth image capturing device is a three-dimensional coordinate system whose X and Y axes are parallel to the axes of the image plane coordinate system, whose Z axis is the optical axis direction, and whose origin is the optical center of the depth image capturing device.
  • the depth information of the depth image represents the Z-axis coordinate value of each pixel of the depth image in the local coordinate system of the depth image acquisition device.
  • the projection distortion correcting device 3 also converts the coordinates of each pixel in the depth image in the local coordinate system of the depth image acquisition device to the coordinates in the global coordinate system.
  • the global coordinate system is defined relative to the local coordinate systems of the depth image acquisition device and of the projection system; it may be any reference frame with its origin at an arbitrary position, for example a Cartesian coordinate system whose origin is defined at some point in space.
  • the coordinates of any pixel (i-th row, j-th column) of the depth image output by the depth image capturing device can be expressed, in the device's local coordinate system, in terms of the measured depth of that pixel together with θ_h and θ_v, the horizontal and vertical viewing angles of the depth image acquisition device; n×m is the imaging resolution of the depth image acquisition device. (The original formulas are given as images in the source and are not reproduced here.)
  • the transformation from the local coordinate system of the depth image acquisition device to global coordinates uses the unit column vectors of the three main axes of that local coordinate system, expressed in global coordinates, together with the global coordinates of the origin of that local coordinate system.
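The back-projection described above can be sketched as follows. Since the original formula images are not reproduced in this text, the exact pixel-to-angle mapping below (linear in the pixel index, symmetric about the image centre) is an assumption; the structure (depth plus the viewing angles θ_h, θ_v and the n×m resolution) follows the surrounding description.

```python
import math

def depth_to_local_points(depth, theta_h, theta_v):
    """Back-project an n x m depth image into the capture device's local
    frame: origin at the optical centre, Z along the optical axis.
    theta_h / theta_v are the full horizontal / vertical viewing angles
    in radians; each depth value is the pixel's Z coordinate."""
    n, m = len(depth), len(depth[0])
    points = []
    for i in range(n):
        for j in range(m):
            z = depth[i][j]
            if z <= 0:  # no valid return from the sensor at this pixel
                continue
            u = (2 * j - (m - 1)) / (m - 1)  # column offset in [-1, 1]
            v = (2 * i - (n - 1)) / (n - 1)  # row offset in [-1, 1]
            points.append((z * math.tan(theta_h / 2) * u,
                           z * math.tan(theta_v / 2) * v,
                           z))
    return points
```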
  • when the depth image acquisition system 2 includes at least two depth image acquisition devices, the images acquired by the at least two devices are combined into one total point cloud.
  • the point cloud in the local coordinate system of each depth image capturing device is converted to global coordinates according to pre-configured information such as the position, direction, and imaging aspect ratio of each device; the point clouds in the global coordinate system corresponding to the at least two depth image acquisition devices are then merged into one total point cloud.
  • the point clouds in the global coordinate system respectively corresponding to the at least two depth image acquisition devices may also be aligned to further correct and eliminate distortion of the acquired depth image.
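The local-to-global conversion and merge described above amounts to one rigid transform per device. A sketch under the patent's conventions (three unit axis vectors plus a device origin, all pre-configured in global coordinates); the function names are illustrative.

```python
def local_to_global(point, axes, origin):
    """Map a point from a device's local frame to the global frame.
    axes   : the device's X, Y, Z unit vectors expressed in global coords
    origin : the device origin (x_c, y_c, z_c) in global coords"""
    x, y, z = point
    ex, ey, ez = axes
    return tuple(x * ex[k] + y * ey[k] + z * ez[k] + origin[k] for k in range(3))

def merge_clouds(clouds, poses):
    """Convert each device's local point cloud to global coordinates and
    concatenate everything into one total point cloud."""
    total = []
    for cloud, (axes, origin) in zip(clouds, poses):
        total.extend(local_to_global(p, axes, origin) for p in cloud)
    return total
```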
  • Step 503: extract the 3D point cloud of the projection target surface and construct a model of the projection target surface.
  • the projection distortion correcting device 3 classifies the points according to the distribution characteristics of the point cloud; the classification algorithm may be any suitable algorithm, such as k-nearest neighbors or a k-d tree.
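The fast grouping the patent describes for structured point clouds (seed point, stack, neighbour test, cluster, repeat; see the Fig. 3 flow later in this text) is a region-growing pass. A simplified sketch with a naive O(n²) neighbour search standing in for the index-based lookup a structured depth-image cloud allows; `radius` plays the role of the predetermined distance threshold.

```python
def cluster_points(points, radius):
    """Region growing: pick a seed, push it on a stack, repeatedly pop a
    point, add it to the current cluster, and push unvisited neighbours
    within `radius`; when the stack empties, emit the cluster and repeat
    until the cloud is exhausted."""
    remaining = set(range(len(points)))
    clusters = []

    def near(a, b):
        return sum((points[a][k] - points[b][k]) ** 2 for k in range(3)) <= radius ** 2

    while remaining:
        seed = next(iter(remaining))
        stack, group, visited = [seed], [], {seed}
        while stack:
            p = stack.pop()
            group.append(p)
            for q in remaining:
                if q not in visited and near(p, q):
                    visited.add(q)
                    stack.append(q)
        clusters.append([points[k] for k in group])
        remaining -= set(group)
    return clusters
```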
  • the projection target surface model establishing module 302 extracts the point cloud of the projection target curved surface 4 from the classified 3D point cloud.
  • a model of the projection target surface (e.g., a triangular mesh model) is then constructed from this point cloud.
  • any existing modeling method can be used, for example triangular meshing based on a Delaunay triangulation algorithm.
  • Step 504: calculate the center of the projection target surface and the center and boundary of the projected picture.
  • the projection distortion correcting device 3 first converts the projected image into a three-dimensional global coordinate system unified with the projection target curved surface.
  • the projection distortion correcting device 3 establishes a projection space conversion function using pre-configured projection conversion parameters such as the projection aspect ratio and the divergence angles in the vertical and horizontal directions.
  • the spatial transformation function converts the two-dimensional local coordinates of the projected image into spatial global coordinates.
  • the two-dimensional local coordinates of the projected image may be first converted to the three-dimensional local coordinates in the projected coordinate system, and then the three-dimensional local coordinates in the projected coordinate system are converted to the global coordinate system according to the position of the lens of the projection system 1.
  • the projection system coordinate system is a three-dimensional coordinate system constructed with the optical center of the projection system 1 as an origin.
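The conversion described above turns a 2D image coordinate into a ray in global coordinates: first a direction in the projector's local frame from the divergence angles, then the projector's pose. A sketch; the normalised [-1, 1] image coordinates and the function name are assumptions.

```python
import math

def image_to_global_ray(u, v, fov_h, fov_v, axes, origin):
    """Convert normalised image coordinates (u, v) in [-1, 1] into a ray
    (origin, direction) in global coordinates, given the projector's
    horizontal/vertical divergence angles and its pose (unit axes plus
    optical-centre origin in global coordinates)."""
    # direction in the projector's local frame; Z is the projection axis
    d = (math.tan(fov_h / 2) * u, math.tan(fov_v / 2) * v, 1.0)
    ex, ey, ez = axes
    direction = tuple(d[0] * ex[k] + d[1] * ey[k] + d[2] * ez[k] for k in range(3))
    return origin, direction
```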
  • the center position of the projection target curved surface 4 is determined according to the three-dimensional point cloud of the projection target curved surface.
  • the center of the projection target curved surface 4 can be determined by computing the gravity center or geometric center of its point cloud, or by matching against a preset feature center.
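The two purely geometric options mentioned (gravity center and geometric center of the point cloud) can be sketched as below; feature-matching against a preset template is application-specific and omitted.

```python
def cloud_center(points, mode="gravity"):
    """Centre of a target-surface point cloud.
    'gravity'   : mean of all points (the gravity center)
    'geometric' : midpoint of the axis-aligned bounding box"""
    if mode == "gravity":
        n = len(points)
        return tuple(sum(p[k] for p in points) / n for k in range(3))
    lo = [min(p[k] for p in points) for k in range(3)]
    hi = [max(p[k] for p in points) for k in range(3)]
    return tuple((lo[k] + hi[k]) / 2 for k in range(3))
```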
  • the center of the projected picture may be determined as the position at which the center of the projected image is mapped to the projected picture.
  • the boundary of the projected picture is determined as follows: it is the boundary of the cone-shaped space bounded by the projection rays of the pixels corresponding to the four corners of the projected image; for any known plane or curved surface, its cross-section with this cone-shaped space determines the boundary of the projected picture.
  • the boundary of the projection target surface 4 is determined by the point cloud characterizing the projection target surface 4: if the point cloud lies completely within the projection cone volume, the boundary of the projection target surface is determined by the boundary of the point cloud; otherwise it is determined by the intersection of the projection target surface 4 with the projection cone volume.
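Whether the target cloud lies completely within the projection cone volume reduces, in the projector's local frame, to a per-point angular test against the corner rays. A sketch; symmetric divergence about the optical axis is an assumption.

```python
import math

def inside_projection_cone(point_local, fov_h, fov_v):
    """True when a point, given in the projector's local frame, lies inside
    the pyramidal cone bounded by the projection rays of the four image
    corners: in front of the projector and within half the divergence
    angle on each axis."""
    x, y, z = point_local
    if z <= 0:
        return False
    return abs(x) <= z * math.tan(fov_h / 2) and abs(y) <= z * math.tan(fov_v / 2)
```

If every point of the target-surface cloud passes this test, the surface's own boundary bounds the picture; otherwise the cone clips it, as described above.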
  • Step 505: align the center of the projected picture with the center of the projection target surface, and trim the edges of the projected picture.
  • the distortion correction module 304 translates the projected image according to the direction and distance by which, in three-dimensional global coordinates, the center of the projection target curved surface 4 deviates from the center of the projected image, so that the center of the projection target curved surface 4 is aligned with the center of the projected image; the projection distortion correcting device 3 then crops the translated projected image according to the boundary of the projected picture so that the boundary of the projected picture coincides with the boundary of the projection target curved surface 4 (i.e., the boundary of the projection target surface point cloud).
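In the picture plane, this align-and-crop step amounts to one 2D translation followed by discarding content that leaves the frame. A minimal 2D sketch; the rectangular frame extents and the function name are illustrative assumptions, while the actual device works with the 3D boundary described above.

```python
def align_and_crop(image_points, image_center, target_center, half_w, half_h):
    """Translate 2D image content so its centre lands on the target-surface
    centre as mapped into the picture, then drop points outside the frame
    [-half_w, half_w] x [-half_h, half_h]."""
    dx = target_center[0] - image_center[0]
    dy = target_center[1] - image_center[1]
    shifted = ((x + dx, y + dy) for x, y in image_points)
    return [(x, y) for x, y in shifted if abs(x) <= half_w and abs(y) <= half_h]
```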
  • Step 506: perform distortion compensation on the projected image using a mapping algorithm to generate a new projected image.
  • after the projected picture is centered on the projection target surface, a projection transformation function is generated according to the position of the projection target curved surface and the projected image is converted into three-dimensional global coordinates. Then, using the algorithms and principles of 3D texture mapping, the coordinate-transformed projected image is mapped onto the projection target surface in 3D to generate a texture mapping function. The projection distortion correcting device 3 inversely transforms the projected image using the texture mapping function to generate a distortion-compensated image.
  • the texture-mapping algorithm assumes a one-to-one correspondence between areas on the picture and areas on the surface.
  • the projection target surface is represented by a triangular mesh, and each triangle on it has a corresponding normal direction. If the normal direction of a triangle is perpendicular to the projected picture (spatial light modulator), the pattern on the picture can be projected onto that triangle without distortion, as shown in (a) of FIG. 6. If the normal direction of the triangle is not perpendicular to the projected picture, distortion will occur, as shown in (b) of FIG. 6.
  • the first triangle 70 in (b) of FIG. 6 is the region cast directly onto the tilted triangle of the projection target curved surface; under the equal-area principle, the picture region that should actually correspond to this triangle is the second triangle 72.
  • to compensate for this distortion, the algorithm compresses and maps the picture within the second triangle 72 into the region of the first triangle 70; this transformation is essentially the same as the 3D texture-mapping algorithm used in 3D display.
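The compression that maps the second triangle's content into the first can be realised by preserving barycentric coordinates between the two triangles, as in ordinary texture mapping. A 2D sketch of that per-triangle map; the function name is illustrative.

```python
def barycentric_map(src_tri, dst_tri, point):
    """Map `point` from src_tri into dst_tri by keeping its barycentric
    coordinates, i.e. the standard affine texture-mapping step applied
    per mesh triangle."""
    (ax, ay), (bx, by), (cx, cy) = src_tri
    px, py = point
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    l3 = 1.0 - l1 - l2
    (dax, day), (dbx, dby), (dcx, dcy) = dst_tri
    return (l1 * dax + l2 * dbx + l3 * dcx,
            l1 * day + l2 * dby + l3 * dcy)
```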
  • Step 507: project the distortion-compensated projected image.
  • the projection distortion correcting device transmits the distortion-corrected projected image to the projection system for projection.
  • FIG. 7 and FIG. 8 each show a projected image before and after distortion is eliminated.
  • (a) in FIG. 7 and (a) in FIG. 8 are the source projection images;
  • (b) in FIG. 7 and (b) in FIG. 8 show the projection effect of the uncorrected images projected on the projection target curved surface;
  • (c) in FIG. 7 and (c) in FIG. 8 are the projection images after distortion correction;
  • (d) in FIG. 7 and (d) in FIG. 8 show the projection effect of the distortion-corrected images projected on the projection target surface.
  • as can be seen from (b) in FIG. 7 and (b) in FIG. 8, the image projected on the projection target curved surface is severely distorted in the curved portion; as can be seen from (d) in FIG. 7 and (d) in FIG. 8, this distortion is substantially eliminated.
  • the depth image acquisition system 2 and the projection distortion correction device 3 operate in real-time mode: depth images acquired by the depth image acquisition system 2 are collected and analyzed at a certain frame rate, the model of the projection target surface is generated in real time, and the projected image is corrected in real time. The projection distortion correcting device 3 and system of the present invention can therefore track the projection target curved surface in real time so that the projected picture always coincides with it.
  • Application Example 1: automatic screen alignment and distortion correction for a laser TV.
  • laser TVs use ultra-short-focus projection to achieve a large picture at a very short projection distance.
  • for better contrast and color performance, laser TVs are generally equipped with an ambient-light-rejecting screen; careful adjustment between the TV and the screen is required so that the picture projected by the laser TV matches the screen exactly and achieves a satisfactory display.
  • the host of a laser TV is usually not fixedly installed, and both the cabinet it sits on and the TV itself may be moved for various reasons; having to realign and recalibrate the screen after every move is very inconvenient.
  • a laser TV using the projection distortion correcting device of the present invention does not need to be positioned precisely; it only needs to be roughly aimed so that the projection screen lies within the collection range of the depth image acquisition system.
  • the laser TV itself can then complete screen recognition, automatic distortion correction, and screen alignment.
  • Application Example 2: irregular-surface active electronic display board. Active electronic display boards have traditionally been LED, OLED, or LCD screens. LED and LCD displays cannot be made into irregular-surface structures, and although OLEDs are flexible, arbitrary-shaped irregular-surface displays built from flexible OLEDs have not yet appeared.
  • with the technology proposed in the present invention, an irregular-surface active electronic display board can be realized using irregular-surface projection together with automatic feature-matching tracking and calibration. As shown in FIG. 9, a plurality of projectors embodying the technology of the present invention jointly project onto a rabbit-shaped display board. For different festivals or events, the rabbit can be given different themed costumes and textures by updating the projected content.
  • Application Example 3: a display cylinder moving on a stage. Multiple projectors project simultaneously onto a white cylinder to form the desired pattern.
  • the cylinder may be raised, lowered, or translated as needed.
  • the projection range of the multiple projectors is much larger than the cylinder and covers the trajectory of the cylinder's movement, so the projected picture can stay synchronized with the cylinder.
  • the module/unit integrated by the projection distortion correcting device described in the above embodiments may be stored in a computer readable storage medium if it is implemented in the form of a software functional unit and sold or used as a stand-alone product.
  • the present invention may implement all or part of the flow of the projection distortion correction method described in the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium.
  • when executed by a processor, the computer program implements the steps described in the method embodiments above.
  • the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form.
  • the computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Projection Apparatus (AREA)
  • Image Processing (AREA)

Abstract

一种投影畸变校正方法，包括：获取深度图像采集系统采集的投影目标曲面的深度图像；对所述深度图像进行处理得到3D点云；从所述3D点云中提取出投影目标曲面的3D点云，创建所述投影目标曲面的模型、确定所述投影目标曲面的中心及边界，确定投影画面的中心及投影画面的边界，计算所述投影目标曲面的中心偏离所述投影画面的中心的方向及距离，平移投影图像以对齐投影目标曲面的中心与所述投影画面的中心，并裁剪所述投影画面的边缘以使得所述投影画面的边界与所述投影目标曲面的边界吻合，得到畸变校正后的投影图像。还提供一种投影畸变校正装置及系统。所述投影畸变校正方法、装置及系统能对投影图像进行实时的畸变校正。

Description

投影畸变校正方法、装置、系统及存储介质 技术领域
本发明涉及投影交互领域，尤其涉及一种投影畸变校正方法、装置、投影畸变校正系统及存储介质。
背景技术
在投影应用中,有将图像或纹理投影到不规则曲面的需求以达到特定的艺术效果。一个实例就是世博会上的地球投影。用多台投影仪将图像投射并拼接到一个球面上。由于被投射表面是不规则的,因此投影面的不同部位到投影仪的距离和角度都不相同。这样导致了投影的图像在投影面不同部位产生不同的畸变,这种畸变随投影目标曲面法线方向与投影中心轴方向的差距增加而增加。由于曲面上不同位置的单位面积投影到投影面上的面积不相同,导致投影的画面产生畸变。为了让投影达到令人满意的效果,投影的图像需要根据被投射物体的形状进行畸变补偿。现有的方法对被投射物体进行3D建模,并利用贴图渲染技术对投影图像进行局部畸变补偿以生成新的投影图像。这需要在演示之前就制作好根据投影曲面补偿好的图像或视频。补偿畸变的过程需要手工进行复杂的建模,且投影仪的安装需要复杂的调试,以期与畸变校正的结果匹配。投影的过程中投影仪相对被投射面的相对位置必须保持固定。如果展示中需要移动投影面,则现有的技术将无法满足要求。为了减少这种情况下投影仪的设置和安装难度,支持移动的投影面,需要一种装置能够测量投影目标曲面的几何形状并能自动的根据投影面的形状进行补偿。
发明内容
鉴于此，有必要提供一种投影畸变校正方法、装置、投影畸变校正系统及存储介质，能够测量投影目标曲面的几何形状并能自动的根据投影面的形状进行补偿。
本发明提供一种投影畸变校正方法,所述方法包括:
一种投影畸变校正方法,所述方法包括:
获取深度图像采集系统采集的投影目标曲面的深度图像；
对所述深度图像采集系统采集的深度图像进行处理得到3D点云；
从所述3D点云中提取出投影目标曲面的3D点云,并基于投影目标曲面的3D点云创建所述投影目标曲面的模型;
根据所述投影目标曲面的模型确定所述投影目标曲面的中心及边界,并根据投影目标曲面的三维位置确定投影画面的中心及投影画面的边界,计算所述投影目标曲面的中心偏离所述投影画面的中心的方向及距离,根据偏离的方向及距离平移投影图像以使得投影目标曲面的中心与所述投影画面的中心对齐,并裁剪所述投影画面的边缘以使得所述投影画面的边界与所述投影目标曲面的边界吻合,得到畸变校正后的投影图像。
进一步地,所述方法还包括利用投影图像对投影目标曲面进行三维贴图从而生成贴图映射函数,再利用贴图映射函数对投影图像进行逆变换从而生成畸变补偿后的图像。
进一步地,所述投影目标曲面的中心通过所述投影目标曲面的3D点云的重心、几何中心或预先设定的特征匹配中心确定。
进一步地,所述投影画面的边界通过以投影图像四个角所对应的像素的投影射线为边界的锥形空间的边界与所述投影目标曲面的3D点云的交点决定。
一种投影畸变校正装置,所述投影畸变校正装置包括:
图像获取模块，用于获取深度图像采集系统采集的投影目标曲面的深度图像；
投影目标曲面模型建立模块，用于对所述深度图像采集系统采集的深度图像进行处理得到3D点云；从所述3D点云中提取出投影目标曲面的3D点云，并基于投影目标曲面的3D点云创建所述投影目标曲面的模型；及
畸变校正模块,用于根据所述投影目标曲面的模型确定所述投影目标曲面的中心及边界,并根据投影目标曲面的三维位置确定投影画面的中心及投影画面的边界,计算所述投影目标曲面的中心偏离所述投影画面的中心的方向及距离,根据偏离的方向及距离平移投影图像以使得投影目标曲面的中心与所述投影画面的中心对齐,并裁剪所述投影画面的边缘以使得所述投影画面的边界与所述投影目标曲面的边界吻合,得到畸变校正后的投影图像。
进一步地,所述投影畸变校正模块还用于利用投影图像对投影目标曲面进行三维贴图从而生成贴图映射函数,再利用贴图映射函数对投影图像进行逆变换从而生成畸变补偿后的图像。
一种投影畸变校正系统，所述投影畸变校正系统包括：投影系统，所述投影系统与投影源设备相连接，用于将投影源设备的内容投射至投影目标曲面；深度图像采集系统，所述深度图像采集系统固定设置在所述投影系统上，用于采集包括所述投影目标曲面在内的深度图像；及如上所述的投影畸变校正装置。
进一步地，所述深度图像采集系统能够实时以一定的帧率采集图像的深度信息，包括一个或多个深度采集装置，若为多个，则分别设置在所述投影系统上不同位置。
进一步地，所述投影畸变校正装置集成在所述投影系统内或所述投影源设备内。
一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现如上所述的投影畸变校正方法。
与现有技术相比较,所述投影畸变校正方法能够根据深度图像采集***采集的投影目标曲面的深度图像信息计算得到投影目标曲面的模型及三维位置信息,对投影图像源进行中心对正及边缘裁剪等校正操作,使得投影到投影曲面上的画面正好和投影表面吻合从而能够自动的根据投影曲面的形状进行畸变补偿。
附图说明
图1是本发明实施例的投影畸变校正系统的结构示意图。
图2是本发明实施例的投影畸变校正装置的模块示意图。
图3为本发明实施例的点云快速分组算法流程图。
图4是本发明实施例的投影畸变校正装置的结构示意图。
图5是本发明实施例的投影畸变校正方法的流程图。
图6是本发明实施例的贴图算法示意图。
图7是本发明实施例的投影图像在畸变校正前后的对比图。
图8是本发明实施例的又一投影图像在畸变校正前后的对比图。
图9是应用本发明实施例的投影畸变校正系统的异面投影活动电子展示牌。
图10是应用本发明实施例的投影畸变校正系统的舞台移动圆柱投影示意图。
主要元件符号说明
投影畸变校正系统          1000
投影系统                  1
深度图像采集系统          2
投影畸变校正装置          3
图像获取模块              300
投影目标曲面模型建立模块  302
畸变校正模块              304
处理器                    31
存储器                    32
通信装置                  33
第一三角形                70
第二三角形                72
如下具体实施方式将结合上述附图进一步说明本发明。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
需要说明的是,当组件被称为“固定于”另一个组件,它可以直接在另一个组件上或者也可以存在居中的组件。当一个组件被认为是“连接”另一个组件,它可以是直接连接到另一个组件或者可能同时存在居中组件。当一个组件被认为是“设置于”另一个组件,它可以是直接设置在另一个组件上或者可能同时存在居中组件。本文所使用的术语“垂直的”、“水平的”、“左”、“右”以及类似的表述只是为了说明的目的。
以下所描述的系统实施方式仅仅是示意性的，所述模块或电路的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式。此外，显然"包括"一词不排除其他单元或步骤，单数不排除复数。系统权利要求中陈述的多个单元或装置也可以由同一个单元或装置通过软件或者硬件来实现。第一，第二等词语用来表示名称，而并不表示任何特定的顺序。
除非另有定义,本文所使用的所有的技术和科学术语与属于本发明的技术领域的技术人员通常理解的含义相同。本文中在本发明的说明书中所使用的术语只是为了描述具体的实施例的目的,不是旨在于限制本发明。本文所使用的术语“及/或”包括一个或多个相关的所列项目的任意的和所有的组合。
请参阅图1,图1是本发明实施例的投影畸变校正***1000的结构示意图。所述投影畸变校正***1000包括投影***1、深度图像采集***2及投影畸变校正装置3。
所述深度图像采集***2固定设置在所述投影***1上,其相对所述投影***1的位置固定不变且为已知。所述深度图像采集***2 用于采集所述投影***1的投影方向上的投影目标曲面4的深度图像信息。所述深度图像采集***2可为任何能够生成空间深度信息的设备,如ToF相机,结构光3D扫描仪,双眼视差3D扫描设备等。所述深度图像采集***2可以面向所述投影***1的投影方向安装,且其图像采集范围至少包括所述投影目标曲面在内。所述深度图像采集***2可以包括一个或多个深度图像采集装置。多个深度图像采集装置的设置是为了使得所述深度图像采集***2所采集的深度图像更精准。为便于描述,如下实施例中以一个深度图像采集装置为例进行说明。
所述投影***1能够与投影源设备(未示出)通信连接,将所述投影源设备的内容投射在投影目标曲面4上。所述投影***1与所述投影源设备可通过有线或无线的方式连接。可以理解的是,在一些实施例中,所述投影***1也可以内置存储装置,所述内置存储装置中存储有投影内容。在一些实施例中,所述投影***1也可以设置连接接口,连接外接存储装置以读取及投影所述外界存储装置中的内容。例如,所述投影***1设置存储卡插槽,通过***存储卡即可读取及播放所述存储卡内的内容。
所述投影畸变校正装置3能够分别与所述深度图像采集***2及所述投影源设备通信连接,用于从所述深度图像采集***2获取所述深度图像采集***2采集的所述投影目标曲面4的深度图像,并对所述深度图像进行处理分析得到3D点云,并从所述3D点云中提取出投影目标曲面4的3D点云,进行坐标转换,得到所述投影目标曲面4的数学函数表示,再对投影图像的坐标进行坐标转换,将投影图像与所述投影目标曲面转换至同一坐标系下,进行图像的畸变校正处理,所述畸变校正处理包括对投影画面和投影目标曲面中心对齐和边缘裁剪。在进一步的实施例中,所述畸变校正处理还包括对投影画面根据投影目标曲面的起伏进行畸变校正。所述畸变校正后的图像发送至所述投影***1以投影校正后的图像至所述投影目标曲面4。所述投影畸变校正装置3可以是集成在所述投影源设备中的控制单元,也可以 是独立于所述投影源设备的控制装置且具有数据处理能力及通信连接能力的电子装置,例如电脑、手机、平板、个人数字助理(Personal Digital Assistant,PDA)等。在一些实施例中,所述投影畸变校正装置3也可以是集成在所述投影***1中的控制装置。
所述投影源设备能够与所述投影系统1通信连接，用于输出投影内容至所述投影系统1。所述投影源设备可以为任何具有数据处理能力的电子装置，例如电脑、手机、平板、个人数字助理（Personal Digital Assistant，PDA）等。可以理解的是，所述投影源设备还可以是其他电子装置，例如电视、机顶盒、影音播放器、游戏机、录像机等可以输出内容的电子装置。
所述深度图像采集***2与所述投影畸变校正装置3之间的通信、所述投影源设备与所述投影***1之间的通信及所述投影源设备与所述投影畸变校正装置3之间的通信均可采用有线或无线的方式。其中所述有线方式包括通过通信端口连接,例如通用串行总线(universal serial bus,USB)、高清晰度多媒体(High-Definition Multimedia Interface,HDMI)接口、视频图形阵列(Video Graphics Array,VGA)接口、控制器局域网(Controller area network,CAN)、串行及/或其他标准网络连接、集成电路间(Inter-Integrated Circuit,I2C)总线等。所述无线方式可采用任意类别的无线通信***,例如,蓝牙、红外线、无线保真(Wireless Fidelity,WiFi)、蜂窝技术,卫星,及广播。其中所述蜂窝技术可包括第二代(2G)、第三代(3G)、***(4G)或第五代(5G)等移动通信技术。
请参阅图2所示,为本发明实施例的投影畸变校正装置3的功能模块图。
所述投影畸变校正装置3包括图像获取模块300、投影目标曲面模型建立模块302、畸变校正模块304。所述图像获取模块300用于从所述深度图像采集***2获取所述深度图像采集***2采集的所述投影目标曲面4的深度图像。
所述投影目标曲面模型建立模块302用于对所述深度图像进行处理及坐标转换得到投影目标曲面的三维模型数学表示(例如三角网格)。在一些实施例中,所述深度图像采集***2包括两个或两个以上的深度图像采集装置,所述图像获取模块300分别从每个深度图像采集装置获取深度图像,然后对每个深度图像采集装置的图像进行处理,如降噪,下采样(down sampling),异常值过滤(outlier filter)等,生成对应于每个深度图像采集装置的局部坐标系下的3D点云;然后根据各个深度图像采集装置的位置,方向以及成像长宽比等预先配置的信息将局部坐标系下的点云转换到全局坐标下,并合并成一个总的点云。
其中，深度图像采集装置的输出的深度图像中的任一像素（第i行，第j列）在深度图像采集装置局部坐标系下的坐标，可由该像素的深度值连同水平、垂直视角 θ_h、θ_v 表示（原公式在源文本中为图片，此处未复原）；n×m为深度图像采集装置的成像分辨率。
从深度图像采集装置的局部坐标系坐标 (x, y, z) 到全局坐标系坐标 (x_0, y_0, z_0) 的变换为刚体变换：
(x_0, y_0, z_0)^T = x·ê_1 + y·ê_2 + z·ê_3 + (x_c, y_c, z_c)^T
其中，ê_1、ê_2、ê_3 分别为深度图像采集装置局部坐标系的三个主轴方向在新坐标系下的单位列向量；(x_c, y_c, z_c)为深度图像采集装置局部坐标系原点在全局坐标下的坐标。
所述投影目标曲面模型建立模块302提取投影目标曲面的3D点云,构建投影目标曲面的模型。所述投影畸变校正装置3根据点云的分布密度特性进行分类,分类算法可采用任何适宜的分类算法,例如k近邻分类(k-nearest neighbor),k维数(k-dTree)等。基于深度图 像变换得到的点云是结构性的点云,在此点云中的任意一点的邻近点都能通过点的下标快速找出,因而还可以有针对结构化点云的分组加速算法,如图3所示的流程图描述了一种快速分类算法。
步骤401,判断点云是否为空,即判断点云中是否存在点。若不为空,进入步骤402,否则,流程结束。
步骤402,从点云中随机选取一个点作为种子点。
步骤403,将该种子点置入堆栈。
步骤404,判断堆栈是否为空,即堆栈中是否存在点,若不为空,进入步骤405,否则,进入步骤412。
步骤405,从堆栈取点。
步骤406,将该取出的点加入集群组。
步骤407,获取所述点的所有邻近点,所述邻近点是指与所述点的距离位于预定阈值范围内的点。
步骤408,判断是否已遍历所有邻近点。若是,返回步骤404,若否,进入步骤409。
步骤409,选取一个邻近点,判断该邻近点是否已被访问过,即该邻近点是否已加入所述堆栈。若是,则返回步骤408,若否,进入步骤410。
步骤410,判断该邻近点与步骤405中所取点的距离是否小于预定阈值,若是,则进入步骤411,若否,返回步骤408。
步骤411,将该邻近点加入堆栈。
步骤412,从所述集群组创建点的集群。
步骤413,将所述集群组中的点从所述点云中删除。
所述投影目标曲面模型建立模块302从分类后的3D点云中提取出所述投影目标曲面4的点云。然后根据所述投影目标曲面4的3D 点云构建所述投影目标曲面的模型(例如三角网格模型)。在构建所述投影目标曲面4的模型时,可采用现有的任何建模方法来实现,例如基于三角剖分算法(delaunay)的三角网格化处理等。
所述畸变校正模块304用于根据所述投影目标曲面的模型对投影图像进行畸变校正。所述畸变校正模块304首先将投影图像转换至与所述投影目标曲面统一的三维全局坐标系下。所述畸变校正模块304利用预先配置的投影转换参数(如投影长宽比,投影竖直、水平方向的发散角度等)建立投影空间转换函数。空间转换函数将投影图像的二维局部坐标转换为空间全局坐标。
具体地,可以先将所述投影图像的二维局部坐标转换至投影坐标系下的三维局部坐标,然后再根据投影***1的镜头的位置将投影坐标系下的三维局部坐标转换至全局坐标系,其中投影***坐标系是以所述投影***1的光心为原点构建的三维坐标系。
其中，假设某点在全局坐标下的坐标定义为 (x_0, y_0, z_0)^T，则其转换为投影坐标系下的坐标 (x', y', z')^T 为逆刚体变换：
x' = ê_p1·((x_0, y_0, z_0)^T − (x_p, y_p, z_p)^T)，y' 与 z' 同理分别取与 ê_p2、ê_p3 的内积，
其中，ê_p1、ê_p2、ê_p3 分别为投影系统局部坐标系的三个主轴方向在新坐标系下的单位列向量；(x_p, y_p, z_p)为投影系统局部坐标系原点在全局坐标下的坐标。
所述畸变校正模块304将所述投影图像与所述投影目标曲面转换至同一坐标系下(三维全局坐标)后,先根据所述投影目标曲面的三维点云确定所述投影目标曲面4的中心位置。所述投影目标曲面4的中心可以通过计算所述投影目标曲面4的点云的重心(gravity center)、几何中心(geometric center),与预先设定的特征匹配中心等方法来实现。
投影画面是投影图像的四个角所对应的像素在投影***局部坐标系下的投影射线为边界的锥形空间与投影曲面的交集所限定的曲面。其中确定所述投影画面的边界的方法为:投影画面的边界是以投影图像四个角所对应的像素的投影射线为边界的锥形空间的边界。所述投影画面的中心可确定为投影图像的中心映射至投影画面的位置。所述投影目标曲面4的边界由表征所述投影目标曲面4的点云决定。如果所述投影目标曲面4的点云完全处在投影锥形体积内,则投影目标曲面的边界由点云的边界决定,否则由所述投影目标曲面4的点云和投影锥形体积相交的部分决定。
所述畸变校正模块304根据所述投影目标曲面4的中心在三维全局坐标中偏离所述投影图像的中心的方向和距离来平移投影图像,以使得所述投影目标曲面4的中心与所述投影图像的中心对齐;然后,所述投影畸变校正装置3根据所述投影画面的边界对平移后的投影图像进行裁剪以使得投影画面的边界与所述投影目标曲面4的边界(即投影目标曲面点云的边界)吻合。
所述畸变校正模块304在完成投影画面与投影目标曲面中心对准以后,根据投影目标曲面的位置生成投影转换函数并将投影图像转换到三维全局坐标中。再利用三维贴图的算法和原理,利用坐标转换后的投影图像对投影目标曲面进行三维贴图从而生成贴图映射函数。图像处理***利用贴图映射函数对投影图像进行逆变换从而生成畸变补偿后的图像。
其中,贴图的算法假设画面上的面积和曲面上的面积一一对应。在一些实施例中,投影目标曲面由三角形网格表示,其上的每个三角形都有一个对应的法线方向。如果三角形的法线方向和投影画面(空间光调制器)垂直,则画面上的图案能够无畸变的投影到该三角形上, 如图6中的(a)所示。如果三角形的法线方向和投影画面不垂直,则会产生畸变,如图6中的(b)所示。图6中的(b)中的第一三角形70为直投到偏移的投影目标曲面的三角形上的区域。而如果按照面积相等原则,则实际应该对应到投影目标曲面的该三角形的区域为第二三角形72。为了补偿这个畸变,算法会将第二三角形72内的画面压缩映射到第一三角形70的区域内,这个变换跟3D显示里面3D贴图算法基本一样。
所述深度图像采集***2和所述投影畸变校正装置工作在实时模式,按照一定的帧率采集分析所述深度图像采集***2采集的深度图像,实时生成所述投影目标曲面的模型,对投影图像进行实时的校正。因而采用本发明的投影畸变校正装置3和***能够实现对投影目标曲面的实时跟踪从而使得投影画面一直和投影目标曲面吻合。
在一些实施例中,所述投影畸变校正装置3还能够对投影图像根据投影目标曲面4的起伏进行畸变校正。所述投影畸变校正装置3在完成投影画面与投影目标曲面中心对准以后,根据投影目标曲面的位置生成投影转换函数并将投影图像转换到三维全局坐标中。再利用三维贴图的算法和原理,利用坐标转换后的投影图像对投影目标曲面进行三维贴图从而生成贴图映射函数。图像处理***利用贴图映射函数对投影图像进行逆变换从而生成畸变补偿后的图像。畸变补偿后的图像投影到投影目标曲面上能消除由于曲面起伏造成的畸变,如图7中的(d)和图8中的(d)所示。
请参阅图4所示,为本发明实施例的投影畸变校正装置3的结构示意图。
所述投影畸变校正装置3包括处理器31、存储器32及通信装置33。
所述存储器32可用于存储计算机程序和/或模块(例如上所述的图像获取模块300、投影目标曲面模型建立模块302及畸变校正模块304),所述处理器31通过运行或执行存储在所述存储器32内的计算机程序和/或模块,以及调用存储在存储器32内的数据,实现所述投影畸变校正装置3的各种功能(例如对投影图像的畸变校正处理)。所述存储器32可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作***、至少一个功能所需的应用程序(比如投影仪的投影功能、手机的社交功能等)等;存储数据区可存储根据投影畸变校正装置3的使用所创建的数据(比如深度图像数据、3D点云数据等)等。此外,存储器32可以包括高速随机存取存储器,还可以包括非易失性存储器,例如硬盘、内存、插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)、至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
所述处理器31可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等,所述处理器31是所述投影畸变校正装置3的控制中心,利用各种接口和线路连接整个投影畸变校正装置3的各个部分。
所述投影畸变校正装置3还包括至少一个通信装置33。
所述通信装置33可以是有线通信装置也可以是无线通信装置。其中所述有线通信装置包括通信端口,例如通用串行总线(universal serial bus,USB)、控制器局域网(Controller area network,CAN)、串 行及/或其他标准网络连接、集成电路间(Inter-Integrated Circuit,I2C)总线等。所述无线通信装置可采用任意类别的无线通信***,例如,蓝牙、红外线、无线保真(Wireless Fidelity,WiFi)、蜂窝技术,卫星,及广播。其中所述蜂窝技术可包括第二代(2G)、第三代(3G)、***(4G)或第五代(5G)等移动通信技术。
在本发明实施例中,所述投影畸变校正装置3用于通过所述通信装置33与所述深度图像采集***2通信以获取所述深度图像采集***2采集的图像,并对所述图像进行分析处理得到3D点云,并从3D点云中提取出所述投影目标曲面的3D点云,计算出投影目标曲面的模型、投影画面中心及边界,并根据所述目标曲面的模型、投影画面中心及边界对图形进行畸变校正处理。所述投影畸变校正装置3还用于通过所述通信装置33与所述投影***1通信连接以传送完成畸变校正后的投影图像至所述投影***1以实现对投影图像的投影。
本领域技术人员可以理解,所述示意图仅仅是所述投影畸变校正装置3的示例,并不构成对所述投影畸变校正装置3的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述投影畸变校正装置3还可以根据实际需要包括输入输出设备、显示装置等。所述输入输出设备可包括任意适宜的输入设备,包括但不限于,鼠标、键盘、触摸屏、或非接触式输入,例如,手势输入、声音输入等。所述显示装置可以是触液晶显示屏(Liquid Crystal Display,LCD)、发光二极管(Light Emitting Diode,LED)显示屏、有机电激光显示屏(Organic Light-Emitting Diode,OLED)或其他适宜的显示屏。
图5为本发明一实施例的投影畸变校正方法流程图。
步骤501,获取深度图像。所述投影畸变校正装置3从所述深度 图像采集***获取所述深度图像采集***2所采集的所述投影目标曲面4的深度图像。在一些实施例中,所述深度图像采集***2以预定帧率(例如30~60帧/秒)采集所述投影目标曲面4的深度图像。如果是固定的投影面,则对帧率没有要求;但是如果是移动的投影面,则要求相邻帧之间投影面没有明显的变化。可以理解的是,深度图像采集***2的采集帧率可根据实际需要进行调整,当应用于移动的投影面时,只要能保证所采集的图像在相邻两帧之间投影面没有明显的变化即可。所述深度图像采集***2采集的深度图像通过有线或无线的方式传送至所述投影畸变校正装置3。
步骤502,处理深度图像得到3D点云。所述投影畸变校正装置3对所述深度图像进行处理,所述处理包括,但不限于,降噪、down sampling、outlier filter等,生成对应于每个深度图像采集装置局部坐标系下的3D点云。其中,图像平面坐标系是以所述深度图像采集装置的光轴与图像平面的交点为图像坐标系的原点所构成的直角坐标系,所述深度图像采集装置的局部坐标系是以分别平行于所述图像平面坐标系的平面坐标轴作为X、Y轴,以光轴方向作为Z轴,以所述深度图像采集装置的光心为原点构建的三维坐标系。所述深度图像的深度信息即代表所述深度图像中的每一像素点在所述深度图像采集装置的局部坐标系下的Z轴坐标值。在一些实施例中,所述投影畸变校正装置3还将所述深度图像中的每一像素点在所述深度图像采集装置的局部坐标系下的坐标转换至全局坐标系下的坐标。所述全局坐标系是相对所述深度图像采集装置的局部坐标系和投影***的局部坐标系来说的,可以是原点定义在任意位置的任意参考系,例如可以是原点定义在空间某点的笛卡尔坐标系。
其中，假定深度图像采集装置的输出的深度图像中的任一像素（第i行，第j列）在深度图像采集装置局部坐标系下的坐标，可由该像素的深度值连同水平、垂直视角 θ_h、θ_v 表示（原公式在源文本中为图片，此处未复原）；n×m为深度图像采集装置的成像分辨率。
从深度图像采集装置的局部坐标系坐标 (x, y, z) 到全局坐标系坐标 (x_0, y_0, z_0) 的变换为刚体变换：
(x_0, y_0, z_0)^T = x·ê_1 + y·ê_2 + z·ê_3 + (x_c, y_c, z_c)^T
其中，ê_1、ê_2、ê_3 分别为深度图像采集装置局部坐标系的三个主轴方向；(x_c, y_c, z_c)为深度图像采集装置局部坐标系原点在全局坐标下的坐标。
在一些实施方式中,若所述深度图像采集***2包括至少两个深度图像采集装置,则还包括该至少两个深度图像采集装置采集的图像合并成一个总的点云。具体地,根据各个深度图像采集装置的位置,方向以及成像长宽比等预先配置的信息将各深度图像采集装置的局部坐标系下的点云转换到全局坐标下,然后再将分别对应于该至少两个深度图像采集装置的全局坐标系下的点云合并成一个总的点云。在一些实施例中,还可以对该分别对应于该至少两个深度图像采集装置的全局坐标系下的点云进行对齐以进一步校正及消除所采集的深度图像的畸变。
步骤503,提取投影目标曲面的3D点云,构建投影目标曲面的模型。所述投影畸变校正装置3根据点云的分布特性进行分类,分类算法可采用任何适宜的分类算法,例如k近邻分类(k-nearest neighbor),k维数(k-dTree)等。所述投影目标曲面模型建立模块302从分类后的3D点云中提取出所述投影目标曲面4的点云。然后根据所述投影目标曲面4的3D点云构建所述投影目标曲面的模型(例如三角网格 模型)。在构建所述投影目标曲面4的模型时,可采用现有的任何建模方法来实现,例如基于三角剖分算法(delaunay)的三角网格化处理等。
步骤504,计算投影目标曲面的中心及投影画面中心与边界。所述投影畸变校正装置3首先将投影图像转换至与所述投影目标曲面统一的三维全局坐标系下。所述投影畸变校正装置3利用预先配置的投影转换参数(如投影长宽比,投影竖直、水平方向的发散角度等)建立投影空间转换函数。空间转换函数将投影图像的二维局部坐标转换为空间全局坐标。
具体地,可以先将所述投影图像的二维局部坐标转换至投影坐标系下的三维局部坐标,然后再根据投影***1的镜头的位置将投影坐标系下的三维局部坐标转换至全局坐标系,其中投影***坐标系是以所述投影***1的光心为原点构建的三维坐标系。
其中，假设某点在全局坐标下的坐标定义为 (x_0, y_0, z_0)^T，则其转换为投影坐标系下的坐标 (x', y', z')^T 为逆刚体变换：
x' = ê_p1·((x_0, y_0, z_0)^T − (x_p, y_p, z_p)^T)，y' 与 z' 同理分别取与 ê_p2、ê_p3 的内积，
其中，ê_p1、ê_p2、ê_p3 分别为投影系统局部坐标系的三个主轴方向；(x_p, y_p, z_p)为投影系统局部坐标系原点在全局坐标下的坐标。
所述投影畸变校正装置3将所述投影图像与所述投影目标曲面转换至同一坐标系下（三维全局坐标）后，先根据所述投影目标曲面的三维点云确定所述投影目标曲面4的中心位置。所述投影目标曲面4的中心可以通过计算所述投影目标曲面4的点云的重心（gravity center）、几何中心（geometric center），与预先设定的特征匹配中心等方法来实现。所述投影画面的中心可确定为投影图像的中心映射至投影画面的位置。
确定所述投影画面的边界的方法为:投影画面的边界是投影图像 四个角所对应的像素的投影射线为边界的锥形空间的边界。已知任一平面或曲面,其与前述锥形空间的截面决定了投影画面的边界。所述投影目标曲面4的边界由表征所述投影目标曲面4的点云决定。如果所述投影目标曲面4的点云完全处在投影锥形体积内,则投影目标曲面的边界由点云的边界决定,否则由所述投影目标曲面4和投影锥形体积相交的部分决定。
步骤505,对齐投影画面中心与投影目标曲面中心,并剪裁投影画面边缘。所述畸变校正模块304根据所述投影目标曲面4的中心在三维全局坐标中偏离所述投影图像的中心的方向和距离来平移投影图像,以使得所述投影目标曲面4的中心与所述投影图像的中心对齐;然后,所述投影畸变校正装置3根据所述投影画面的边界对平移后的投影图像进行裁剪以使得投影画面的边界与所述投影目标曲面4的边界(即投影目标曲面点云的边界)吻合。
步骤506,利用贴图算法对投影图像进行畸变补偿以生成新的投影图像。
所述投影畸变校正装置3在完成投影画面与投影目标曲面中心对准以后,根据投影目标曲面的位置生成投影转换函数并将投影图像转换到三维全局坐标中。再利用三维贴图的算法和原理,利用坐标转换后的投影图像对投影目标曲面进行三维贴图从而生成贴图映射函数。所述投影畸变校正装置3利用贴图映射函数对投影图像进行逆变换从而生成畸变补偿后的图像。
其中,贴图的算法假设画面上的面积和曲面上的面积一一对应。在一些实施例中,投影目标曲面由三角形网格表示,其上的每个三角形都有一个对应的法线方向。如果三角形的法线方向和投影画面(空间光调制器)垂直,则画面上的图案能够无畸变的投影到该三角形上, 如图6中的(a)所示。如果三角形的法线方向和投影画面不垂直,则会产生畸变,如图6中的(b)所示。图6中的(b)中的第一三角形70为直投到偏移的投影目标曲面的三角形上的区域。而如果按照面积相等原则,则实际应该对应到投影目标曲面的该三角形的区域为第二三角形72。为了补偿这个畸变,算法会将第二三角形72内的画面压缩映射到第一三角形70的区域内,这个变换跟3D显示里面3D贴图算法基本一样。
步骤507,投影畸变补偿后的投影图像。所述投影畸变校正装置将完成畸变校正后的投影图像发送至投影***进行投影。
请参阅图7和图8所示,分别展示了投影图像在消除畸变前后的变化。其中图7中的(a)和图8中的(a)为源投影图像;图7中的(b)和图8中的(b)为未消除畸变的投影图像投射在投影目标曲面上的投影效果;图7中的(c)和图8中的(c)为完成畸变校正后的投影图像;图7中的(d)和图8中的(d)为完成畸变校正后的投影图像投射在投影目标曲面上的投影效果。从图7中的(b)和图8中的(b)中可看出,投影在投影目标曲面上的图像在曲面部分畸变严重;从图7中的(d)和图8中的(d)中可看出,投影在投影目标曲面上的图像的畸变基本消除。
所述深度图像采集***2和所述投影畸变校正装置3工作在实时模式,按照一定的帧率采集分析所述深度图像采集***2采集的深度图像,实时生成所述投影目标曲面的模型,对投影图像进行实时的校正。因而采用本发明的投影畸变校正装置3和***能够实现对投影目标曲面的实时跟踪从而使得投影画面一直和投影目标曲面吻合。
下面列举几个本发明投影畸变校正***的应用举例。
超短焦投影的电子白板。
应用实例1:
激光电视的自动屏幕对齐与畸变校正。激光电视利用超短焦实现在很近的投影距离上的大尺寸投影。同时为了更好的对比度和色彩表现，激光电视一般配置了抗光屏幕。电视和屏幕之间需要仔细的调试以使得激光电视投影的画面和抗光屏幕完美的吻合以达到满意的显示效果。一般激光电视的主机并不是固定安装，其所放置的柜子和电视本身都会因为各种原因发生移动。每次移动以后都需要重新进行屏幕的对准和校正，很不方便使用。采用本发明投影畸变校正装置的激光电视，不需要精确的放置激光电视的位置，只需要对激光电视进行大致的对准（使得投影屏幕位于深度图像采集系统的采集范围内即可）。激光电视自身就可以完成屏幕的识别和自动的畸变校正和屏幕对齐。
应用实例2:
异面活动电子展示牌。活动的电子展示牌以往都是LED、OLED或者LCD屏幕的。LED和LCD显示屏无法做成异面结构的,OLED虽然是柔性的,但是目前还没有看到利用柔性OLED做任意形状的异面显示。利用本发明提出的技术,能够利用不规则曲面投影和自动特征匹配追踪校准技术实现异面活动电子展示牌。如图9所示多台实现本发明技术的投影仪共同向兔子外形的展示牌投影。针对不同的节日或事件,可以通过更新投影内容的方式给兔子更换不同的主题服装和纹理。
应用实例3:
舞台移动的显示圆柱。多架投影仪同时投影到白色圆柱上,形成需要的图案。圆柱可能根据需要升降或平移。多架投影仪的投射范围远大于圆柱的大小,并能覆盖圆柱移动的轨迹。利用投影曲面中心检测和边缘检测能力,能够根据圆柱的位置校正投影内容的位置和由于 圆柱表面起伏造成的畸变,实现投影到圆柱上的画面和圆柱始终保持同步,如图10所示。
上述实施例中所述的投影畸变校正装置集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实现上述实施例所述的投影畸变校正方法中的全部或部分流程,也可以通过计算机程序指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上文方法实施例所述的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。需要说明的是,所述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读介质不包括电载波信号和电信信号。
以上所述仅为本发明的实施方式,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。

Claims (10)

  1. 一种投影畸变校正方法,其特征在于,所述方法包括:
    获取深度图像采集系统采集的投影目标曲面的深度图像；
    对所述深度图像采集系统采集的深度图像进行处理得到3D点云；
    从所述3D点云中提取出投影目标曲面的3D点云,并基于投影目标曲面的3D点云创建所述投影目标曲面的模型;
    根据所述投影目标曲面的模型确定所述投影目标曲面的中心及边界,并根据投影目标曲面的三维位置确定投影画面的中心及投影画面的边界,计算所述投影目标曲面的中心偏离所述投影画面的中心的方向及距离,根据偏离的方向及距离平移投影图像以使得投影目标曲面的中心与所述投影画面的中心对齐,并裁剪所述投影画面的边缘以使得所述投影画面的边界与所述投影目标曲面的边界吻合,得到畸变校正后的投影图像。
  2. 如权利要求1所述的方法,其特征在于,所述方法还包括利用投影图像对投影目标曲面进行三维贴图从而生成贴图映射函数,再利用贴图映射函数对投影图像进行逆变换从而生成畸变补偿后的图像。
  3. 如权利要求1所述的方法,其特征在于,所述投影目标曲面的中心通过所述投影目标曲面的3D点云的重心、几何中心或预先设定的特征匹配中心确定。
  4. 如权利要求1所述的方法,其特征在于,所述投影画面的边界通过以投影图像四个角所对应的像素的投影射线为边界的锥形空间的边界与所述投影目标曲面的3D点云的交点决定。
  5. 一种投影畸变校正装置,其特征在于,所述投影畸变校正装置包括:
    图像获取模块，用于获取深度图像采集系统采集的投影目标曲面的深度图像；
    投影目标曲面模型建立模块，用于对所述深度图像采集系统采集的深度图像进行处理得到3D点云；从所述3D点云中提取出投影目标曲面的3D点云，并基于投影目标曲面的3D点云创建所述投影目标曲面的模型；及
    畸变校正模块,用于根据所述投影目标曲面的模型确定所述投影目标曲面的中心及边界,并根据投影目标曲面的三维位置确定投影画面的中心及投影画面的边界,计算所述投影目标曲面的中心偏离所述投影画面的中心的方向及距离,根据偏离的方向及距离平移投影图像以使得投影目标曲面的中心与所述投影画面的中心对齐,并裁剪所述投影画面的边缘以使得所述投影画面的边界与所述投影目标曲面的边界吻合,得到畸变校正后的投影图像。
  6. 如权利要求5所述的装置,其特征在于,所述投影畸变校正模块还用于利用投影图像对投影目标曲面进行三维贴图从而生成贴图映射函数,再利用贴图映射函数对投影图像进行逆变换从而生成畸变补偿后的图像。
  7. A projection distortion correction system, characterized in that the projection distortion correction system comprises:
    a projection system connected to a projection source device and configured to project the content of the projection source device onto a projection target surface;
    a depth image acquisition system fixedly mounted on the projection system and configured to capture depth images including the projection target surface; and
    the projection distortion correction apparatus of claim 6.
  8. The projection distortion correction system of claim 7, characterized in that the depth image acquisition system is capable of capturing depth information of images in real time at a certain frame rate and comprises one or more depth acquisition devices; when there are multiple devices, they are arranged at different positions on the projection system.
  9. The projection distortion correction system of claim 7, characterized in that the projection distortion correction apparatus is integrated in the projection system or in the projection source device.
  10. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the method of any one of claims 1 to 4.
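The compensation step of claim 2 (texture-map the surface, then invert the mapping) can be sketched as a simple inverse warp. The following Python fragment is a sketch under stated assumptions: it presumes the forward mapping has already been sampled into per-pixel lookup coordinates `map_x`, `map_y`, and the nearest-neighbour sampling and function name are illustrative choices, not the patent's implementation:

```python
import numpy as np

def inverse_warp(src, map_x, map_y):
    """Resample src so that pixel (i, j) of the output takes its
    value from src at (map_y[i, j], map_x[i, j]); projecting the
    result through the forward texture mapping then yields an
    undistorted picture on the curved surface.  Nearest-neighbour
    sampling is used for brevity."""
    xs = np.clip(np.rint(map_x).astype(int), 0, src.shape[1] - 1)
    ys = np.clip(np.rint(map_y).astype(int), 0, src.shape[0] - 1)
    return src[ys, xs]
```

A production system would typically use bilinear interpolation (e.g. an OpenCV-style remap) instead of nearest-neighbour lookup, but the inversion structure is the same.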
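The boundary construction of claim 4 — intersecting the pyramid of corner-pixel projection rays with the target's point cloud — can be approximated by picking, for each corner ray, the cloud point nearest to that ray. In the Python sketch below, the pinhole model with intrinsic matrix `K` and the nearest-point approximation are assumptions made for illustration, not the disclosed implementation:

```python
import numpy as np

def corner_boundary_points(cloud_xyz, K, img_w, img_h):
    """For each of the four image corners, cast the projection ray
    from the projector origin and return the cloud point closest to
    that ray -- an approximation of the frustum/surface intersection."""
    Kinv = np.linalg.inv(K)
    corners = [(0, 0), (img_w - 1, 0), (0, img_h - 1), (img_w - 1, img_h - 1)]
    hits = []
    for (u, v) in corners:
        d = Kinv @ np.array([u, v, 1.0])    # ray direction for this corner pixel
        d /= np.linalg.norm(d)
        t = cloud_xyz @ d                   # projection length of each point onto the ray
        perp = cloud_xyz - np.outer(t, d)   # perpendicular component w.r.t. the ray
        i = np.argmin(np.linalg.norm(perp, axis=1))
        hits.append(cloud_xyz[i])
    return np.array(hits)
```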
PCT/CN2018/118837 2018-03-22 2018-12-03 Projection distortion correction method, apparatus and system, and storage medium WO2019179168A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810240367.9 2018-03-22
CN201810240367.9A CN110300292B (zh) 2018-03-22 2018-03-22 Projection distortion correction method, apparatus and system, and storage medium

Publications (1)

Publication Number Publication Date
WO2019179168A1 (zh)

Family ID: 67986655

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/118837 WO2019179168A1 (zh) Projection distortion correction method, apparatus and system, and storage medium

Country Status (2)

Country Link
CN (1) CN110300292B (zh)
WO (1) WO2019179168A1 (zh)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110830781B (zh) 2019-10-30 2021-03-23 歌尔科技有限公司 Automatic projection image correction method and system based on binocular vision
TWI722703B (zh) 2019-12-09 2021-03-21 財團法人工業技術研究院 Projection device and projection correction method
CN111366906A (zh) * 2020-02-01 2020-07-03 上海鲲游光电科技有限公司 Projection apparatus and zoned TOF apparatus, manufacturing methods thereof, and electronic device
CN111669557B (zh) * 2020-06-24 2022-05-13 歌尔光学科技有限公司 Projection image correction method and correction apparatus
CN112348939A (zh) * 2020-11-18 2021-02-09 北京沃东天骏信息技术有限公司 Texture optimization method and apparatus for three-dimensional reconstruction
CN112884898B (zh) * 2021-03-17 2022-06-07 杭州思看科技有限公司 Reference device for measuring texture mapping accuracy
CN115146745B (zh) * 2022-09-01 2022-12-02 深圳市城市公共安全技术研究院有限公司 Method, apparatus, device, and storage medium for correcting coordinate point positions in point cloud data
CN118042592A (zh) * 2024-02-21 2024-05-14 安徽中杰信息科技有限公司 RFID-based asset location management system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101221658A (zh) * 2007-12-20 2008-07-16 四川川大智胜软件股份有限公司 Software-based geometric correction method using frame-buffer texture re-mapping for circular screens
CN101815188A (zh) * 2009-11-30 2010-08-25 四川川大智胜软件股份有限公司 Multi-projector image correction method for irregular smooth curved-surface display walls
CN102129680A (zh) * 2010-01-15 2011-07-20 精工爱普生株式会社 Real-time geometry-aware projection and fast re-calibration
CN105227881A (zh) * 2015-09-15 2016-01-06 海信集团有限公司 Projection picture correction method and projection device
CN106604003A (zh) * 2016-11-10 2017-04-26 Tcl集团股份有限公司 Method and system for achieving curved-screen projection with a short-throw projector
EP3273413A1 (en) * 2016-07-21 2018-01-24 Christie Digital Systems USA, Inc. System and method for geometric warping correction in projection mapping


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738955A (zh) * 2020-06-23 2020-10-02 安徽海微电光电科技有限责任公司 Distortion correction method and apparatus for projected images, and computer-readable storage medium
CN113610916A (zh) * 2021-06-17 2021-11-05 同济大学 Method and system for measuring the volume of irregular objects based on point cloud data
CN113610916B (zh) * 2021-06-17 2024-04-12 同济大学 Method and system for measuring the volume of irregular objects based on point cloud data
WO2023029277A1 (zh) * 2021-09-01 2023-03-09 广景视睿科技(深圳)有限公司 Projection method and apparatus, projection device, and storage medium
CN114286069A (zh) * 2021-12-31 2022-04-05 深圳市火乐科技发展有限公司 Projection picture processing method and apparatus, storage medium, and projection device
CN114286069B (zh) * 2021-12-31 2024-04-02 深圳市火乐科技发展有限公司 Projection picture processing method and apparatus, storage medium, and projection device
CN114827562A (zh) * 2022-03-11 2022-07-29 深圳海翼智新科技有限公司 Projection method and apparatus, projection device, and computer storage medium
CN116859829A (zh) * 2023-09-04 2023-10-10 天津天石休闲用品有限公司 Cutter motion control method and device based on projection of material edge curves
CN116859829B (zh) * 2023-09-04 2023-11-03 天津天石休闲用品有限公司 Cutter motion control method and device based on projection of material edge curves

Also Published As

Publication number Publication date
CN110300292B (zh) 2021-11-19
CN110300292A (zh) 2019-10-01

Similar Documents

Publication Publication Date Title
WO2019179168A1 (zh) Projection distortion correction method, apparatus and system, and storage medium
US11501507B2 (en) Motion compensation of geometry information
CN106796718B (zh) 用于高效深度图像变换的方法和设备
WO2020035002A1 (en) Methods and devices for acquiring 3d face, and computer readable storage media
WO2021208933A1 (zh) Image correction method and apparatus for a camera
US9846960B2 (en) Automated camera array calibration
US9652849B2 (en) Techniques for rapid stereo reconstruction from images
US9842423B2 (en) Systems and methods for producing a three-dimensional face model
CN108604379A (zh) 用于确定图像中的区域的***及方法
KR20170017700A (ko) 360도 3d 입체 영상을 생성하는 전자 장치 및 이의 방법
US11527014B2 (en) Methods and systems for calibrating surface data capture devices
JP2015212849A (ja) 画像処理装置、画像処理方法および画像処理プログラム
US11580616B2 (en) Photogrammetric alignment for immersive content production
JP2020144864A (ja) 画像処理方法、装置及びコンピュータ読み取り可能な記憶媒体
CN111694528A (zh) 显示墙的排版辨识方法以及使用此方法的电子装置
WO2019179342A1 (zh) Image processing method, image processing apparatus, image processing system, and medium
US11062422B2 (en) Image processing apparatus, image communication system, image processing method, and recording medium
US20230062973A1 (en) Image processing apparatus, image processing method, and storage medium
US9536133B2 (en) Display apparatus and control method for adjusting the eyes of a photographed user
WO2024055531A1 (zh) Illuminometer reading recognition method, electronic device, and storage medium
WO2019100547A1 (zh) Projection control method and apparatus, projection interaction system, and storage medium
US20130208976A1 (en) System, method, and computer program product for calculating adjustments for images
US11902502B2 (en) Display apparatus and control method thereof
CN114723923B (zh) Transmission solution simulation display system and method
US20240046576A1 (en) Video See-Through Augmented Reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18910780

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18910780

Country of ref document: EP

Kind code of ref document: A1