WO2018196472A1 - A projection method, apparatus, system, and storage medium - Google Patents

A projection method, apparatus, system, and storage medium

Info

Publication number
WO2018196472A1
Authority
WO
WIPO (PCT)
Prior art keywords
projection
image
projected
area
devices
Prior art date
Application number
PCT/CN2018/076995
Other languages
English (en)
French (fr)
Inventor
赵冬晓
尹志良
Original Assignee
中兴通讯股份有限公司 (ZTE Corporation)
Priority date
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 (ZTE Corporation)
Publication of WO2018196472A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141: Constructional details thereof
    • H04N 9/3147: Multi-projection systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/74: Projection arrangements for image reproduction, e.g. using eidophor
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Definitions

  • the present disclosure relates to the field of terminal projection, and in particular, to a projection method, apparatus, system, and storage medium.
  • the specific method of using multiple projectors for spliced projection in the related art is as follows: first, the user divides the image to be projected on a computer according to the splicing requirement; second, the segmented images are transmitted to a plurality of projectors; then, the plurality of projectors are controlled to project the corresponding segmented images. Thus, in the prior art, the collaborative projection process of multiple projectors must be manually controlled, and the projection result must also be manually adjusted. When the number of projectors is large, manually controlling the projection of multiple projectors is time-consuming and labor-intensive, so the collaborative projection of multiple projectors is inefficient.
  • embodiments of the present disclosure are intended to provide a projection method, apparatus and system, and storage medium.
  • an embodiment of the present disclosure provides a projection method, including: acquiring an initial projection image formed by the initial projection of at least two projection devices; determining, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and determining a maximum inscribed rectangular area formed by the at least two projection areas as the target projection area; determining, from the image to be projected, a sub-image to be projected corresponding to each of the at least two projection devices according to the relative positional relationship between the at least two projection areas and the target projection area; and respectively controlling the at least two projection devices to project their corresponding sub-images to be projected.
  • an embodiment of the present disclosure provides a projection apparatus, including: an acquisition module configured to acquire, by an image acquisition unit, an initial projection image formed by the initial projection of at least two projection devices; a first processing module configured to determine, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and to determine a maximum inscribed rectangular area formed by the at least two projection areas as the target projection area; a second processing module configured to determine, from the image to be projected, a sub-image to be projected corresponding to each of the at least two projection devices according to the relative positional relationship between the at least two projection areas and the target projection area; and a control module configured to respectively control the at least two projection devices to project their corresponding sub-images to be projected.
  • an embodiment of the present disclosure provides a projection system, the system comprising: a camera configured to acquire an initial projection image formed by the initial projection of at least two projection devices; a control device configured to determine, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, to determine a maximum inscribed rectangular area formed by the at least two projection areas as the target projection area, to determine, from the image to be projected, a sub-image to be projected corresponding to each of the at least two projection devices according to the relative positional relationship between the at least two projection areas and the target projection area, and to respectively control the at least two projection devices to project their corresponding sub-images to be projected; and at least two projection devices configured to perform the initial projection and to project the corresponding sub-images to be projected.
  • with the projection method, apparatus, system, and storage medium provided by the embodiments of the present disclosure, an initial projection image formed by the initial projection of at least two projection devices is first acquired; second, the projection area corresponding to each projection device and the target projection area are determined from the initial projection image; then, according to the relative positional relationship between each projection area and the target projection area, the sub-image to be projected corresponding to each projection device is determined from the image to be projected; finally, each projection device is controlled to project its corresponding sub-image. The process requires no manual operation, and the cooperative projection of multiple projection devices is realized entirely by software control, thereby improving the degree of intelligence of the collaborative projection of multiple projection devices and further improving its projection efficiency.
  • FIG. 1 is a first structural schematic view of a projection system in an embodiment of the present disclosure.
  • FIG. 2 is a second schematic structural diagram of a projection system in an embodiment of the present disclosure.
  • FIG. 3 is a schematic structural diagram of a main projection device in a projection system according to an embodiment of the present disclosure.
  • FIG. 4 is a schematic diagram of a first flow of a projection method in an embodiment of the present disclosure.
  • FIG. 5 is a schematic diagram of a first projection image of a projection method in an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a second projection image of a projection method in an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of a third projection image of a projection method in an embodiment of the present disclosure.
  • FIG. 8 is a second schematic flowchart of a projection method in an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of a fourth projection image of a projection method in an embodiment of the present disclosure.
  • FIG. 10 is a third schematic flowchart of a projection method in an embodiment of the present disclosure.
  • FIG. 11 is a fourth schematic flowchart of a projection method in an embodiment of the present disclosure.
  • FIG. 12 is a schematic structural diagram of a projection apparatus in an embodiment of the present disclosure.
  • Embodiment 1:
  • the embodiment of the present disclosure provides a projection method applied to a projection system.
  • the projection system 10 includes a camera 11, a control device 12, and at least two projection devices 13.
  • the control device 12 and the projection device 13 may be physically separated or physically combined.
  • the camera 11 can be disposed on the control device 12.
  • when the control device 12 and the projection device 13 are physically separated, the projection process of each projection device 13 is controlled by the control device 12, and the camera 11 is controlled to collect the initial projection image formed by the initial projection of each projection device, where the control device 12 can be a mobile phone, a tablet computer, a notebook computer, etc.; when the control device 12 and the projection device 13 are physically combined, the camera 11 is disposed on one projection device 13, which can be referred to as the main projection device, and the other projection devices are referred to as slave projection devices.
  • the projection system includes four projection devices, wherein D0 represents a first projection device, D1 represents a second projection device, D2 represents a third projection device, and D3 represents a fourth projection device.
  • the control device and D0 are physically combined, the camera is set on D0, and the camera can be located in the same plane as the optical engine of D0 (FIG. 2 shows the front view of each projection device, where the circle on the front view of D0 represents the camera and the rectangle represents the optical engine).
  • D0 can be used as the main projection device and D1, D2, and D3 as the slave projection devices; the master and slave devices are connected, D0 controls its own projection process and that of each slave projection device, and D0 controls the camera to acquire the initial projection image.
  • the multiple projection devices in this embodiment can be connected wirelessly, for example, through a wireless mirror connection technology, to realize the sharing of information between the projection devices.
  • D0 is used as the main projection device, and is connected to each of the slave projection devices D1, D2, and D3 by wireless mirror connection technology.
  • the main projection device is the transmitting end, and each slave projection device is the receiving end.
  • the main projection device controls itself and the output image of each of the slave projection devices.
  • the projection method of this embodiment is further described below, taking the case where the control device and the projection device are physically combined, that is, where the control device is physically combined with D0.
  • the CPU (Central Processing Unit) module 121, the display control module 122, and the communication interface 123 can be regarded as a whole equivalent to the control device, where the CPU module 121 serves as the overall control module and is responsible for controlling the projection of each device.
  • the display control module 122 controls the optical engine 124 of D0 to perform projection.
  • the communication interface of D0 may be a wireless communication interface, which communicates with each slave projection device through the wireless network to control the projection of each slave projection device; D0 is connected to D1, D2, and D3 via the mirror connection mode, the images divided by the CPU module of D0 are transmitted to D1, D2, and D3, and the delay between D0 and D1, D2, and D3 is controlled by the CPU module to ensure that the collaborative projection pictures of the projection devices are displayed synchronously.
  • FIG. 4 is a schematic flowchart of a projection method provided by an embodiment of the present disclosure. The method can be applied to scenes in which data or images need to be displayed on a large screen, such as displaying images related to military simulation, industrial design, virtual manufacturing, engineering projection, and complex monitoring. The method includes:
  • S40: Acquire an initial projection image formed by the initial projection of at least two projection devices.
  • first, a plurality of projection devices are turned on; then, the plurality of projection devices perform initial projection; and then the initial projection image formed by the initial projection of the plurality of projection devices is acquired by an image acquisition module.
  • At least two projection devices can be used for initial projection.
  • the number of projection devices can also be the square of a natural number; for example, 4 or 9 projection devices are used for initial projection, which improves the projection effect of the collaborative projection of multiple projection devices.
  • as shown in FIG. 3, four projection devices D0, D1, D2, and D3 perform initial projection, and FIG. 5 shows the initial projection image of the above four projection devices collected by the camera, where the line type indicated by 501 represents the projection edge of D0, the line type indicated by 502 represents the projection edge of D1, the line type indicated by 503 represents the projection edge of D2, and the line type indicated by 504 represents the projection edge of D3.
  • S41: Determine at least two projection areas corresponding to the at least two projection devices from the initial projection image, and determine a maximum inscribed rectangular area formed by the at least two projection areas as the target projection area.
  • the control device may identify, by using an image recognition algorithm, the at least two projection areas corresponding to the at least two projection devices in the initial projection image, and may calculate, by an edge recognition algorithm or a mathematical algorithm for finding a maximum rectangle, the maximum inscribed rectangular area formed by the at least two projection areas, which is used as the target projection area.
  • the control device and D0 are physically combined, and D0 is used as the main projection device. The CPU module of D0 processes the initial projection image as follows: first, the projection area D0' corresponding to D0, the projection area D1' corresponding to D1, the projection area D2' corresponding to D2, and the projection area D3' corresponding to D3 are identified in the initial projection image by the image recognition algorithm; second, edge recognition is performed on D0', D1', D2', and D3' respectively to obtain the four edges of each projection area; then, the lower of the upper edges of D0' and D1' is extended into a straight line, the higher of the lower edges of D2' and D3' is extended into a straight line, the more rightward of the left edges of D0' and D2' is extended into a straight line, and the more leftward of the right edges of D1' and D3' is extended into a straight line; finally, the rectangular area enclosed by the four extension lines is determined as the target projection area.
  • a person skilled in the art can also design a calculation method according to actual needs to identify the at least two projection areas corresponding to the at least two projection devices in the initial projection image and to obtain the maximum inscribed rectangular area in the initial projection image, which is not specifically limited by the embodiments of the present disclosure.
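Under the assumption that each identified projection area has been reduced to an axis-aligned bounding box (left, top, right, bottom) in image coordinates, the inner-edge construction described above can be sketched as follows (function and parameter names are illustrative, not from the patent):

```python
def target_projection_area(d0, d1, d2, d3):
    """Maximum inscribed rectangle for a 2x2 layout (image coordinates,
    y growing downward): d0 = top-left, d1 = top-right, d2 = bottom-left,
    d3 = bottom-right, each given as (left, top, right, bottom)."""
    top = max(d0[1], d1[1])      # the lower of the upper edges of D0', D1'
    bottom = min(d2[3], d3[3])   # the higher of the lower edges of D2', D3'
    left = max(d0[0], d2[0])     # the more rightward left edge of D0', D2'
    right = min(d1[2], d3[2])    # the more leftward right edge of D1', D3'
    return (left, top, right, bottom)
```

The returned box is the target projection area P; any general algorithm for the maximum inscribed rectangle of the union of the four areas would serve equally, as the text notes.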
  • S42: Determine, according to the relative positional relationship between the at least two projection areas and the target projection area, a sub-image to be projected corresponding to each of the at least two projection devices from the image to be projected.
  • the control device may acquire, from the image to be projected, the sub-image to be projected corresponding to each projection device based on the overlapping areas and the non-overlapping areas between the target projection area P and the respective projection areas D0', D1', D2', and D3'.
  • S42 may include the following steps:
  • S801: Divide the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain a first area corresponding to each of the at least two projection devices.
  • the control device first acquires the area where each projection area overlaps the target projection area: the area where D0' overlaps P is C0, the area where D1' overlaps P is C1, the area where D2' overlaps P is C2, and the area where D3' overlaps P is C3; then, the control device divides the image to be projected according to C0, C1, C2, and C3, and obtains the first areas corresponding to the four projection devices, that is, the divided images corresponding to the four projection devices.
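Each overlap Ci between a projection area and the target projection area P is an ordinary rectangle intersection; a minimal sketch, assuming both regions are axis-aligned (left, top, right, bottom) boxes:

```python
def intersect(a, b):
    """Overlap of two axis-aligned rectangles given as (left, top, right,
    bottom); returns None when the rectangles do not overlap."""
    left, top = max(a[0], b[0]), max(a[1], b[1])
    right, bottom = min(a[2], b[2]), min(a[3], b[3])
    if left >= right or top >= bottom:
        return None
    return (left, top, right, bottom)
```

Applying this to each of D0'..D3' against P yields C0..C3.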
  • S802: Determine the areas where the at least two projection areas do not overlap the target projection area as the second areas corresponding to the at least two projection devices, where the image parameter values of the pixels in each second area are the same.
  • image parameters may include pixel value, saturation, hue, and the like.
  • the pixel value of each pixel in the second area may be set to black, so that the projection content of the second area is concise and consistent and has no visual impact on the content projected in the target projection area, thereby improving the effect of the collaborative projection.
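A minimal sketch of the black fill for the second area, assuming each device's frame is an H x W x 3 array and its overlap with P (in the device's own coordinates) is an axis-aligned box; names are illustrative:

```python
import numpy as np

def blacken_second_region(frame, overlap_box):
    """Set every pixel a device would project outside its overlap with the
    target area to black. frame: H x W x 3 array for one projector;
    overlap_box: (left, top, right, bottom) in that device's coordinates."""
    l, t, r, b = overlap_box
    out = np.zeros_like(frame)                # second region: uniform black
    out[t:b, l:r] = frame[t:b, l:r]           # keep only the overlap region
    return out
```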
  • S803: Obtain a sub-image to be projected corresponding to each of the at least two projection devices according to the first region and the second region.
  • the control device divides the image to be projected according to C0, C1, C2, and C3, and obtains a first divided image corresponding to D0, a second divided image corresponding to D1, a third divided image corresponding to D2, and a fourth divided image corresponding to D3; then, according to each divided image and the corresponding second region, the sub-images to be projected corresponding to D0, D1, D2, and D3 are obtained.
  • S43: Control the at least two projection devices to respectively project the corresponding sub-images to be projected.
  • the control device controls each projection device to project its respective divided image and to uniformly fill the second region where its projection area does not overlap the target projection area, so that each projection device projects its corresponding sub-image to be projected.
  • that is, each divided image is spliced with the corresponding second region to obtain the sub-image to be projected corresponding to each projection device, and then the collaborative projection is performed.
  • S801 may include:
  • S1001: Determine the areas in the target projection area where at least two projection areas overlap each other as the third regions.
  • the control device determines the areas in the target projection area where the projection areas overlap each other as the third regions; a third region is an area jointly projected by multiple projection devices, and the pixel value of each pixel in that area is the superposition of the pixel values projected by the multiple projection devices.
  • the areas where the four projection areas D0', D1', D2', and D3' overlap each other include: S1, S2, S3, S4, S5, and S6.
  • the control device acquires the number of projection devices corresponding to each third region, from which it follows that the pixel value of a pixel in a third region is the pixel value obtained by the superimposed projection of multiple projection devices. For example: S1 is the area jointly projected by D0 and D2, so the pixel values in S1 are obtained by the projection of 2 projection devices; S2 is the area jointly projected by D0 and D1, so the pixel values in S2 are obtained by the projection of 2 projection devices; S3 is the area jointly projected by D1 and D3, so the pixel values in S3 are obtained by the projection of 2 projection devices; S4 is the area jointly projected by D2 and D3, so the pixel values in S4 are obtained by the projection of 2 projection devices; S5 is the area jointly projected by D1, D2, and D3, so the pixel values in S5 are obtained by the projection of 3 projection devices; and S6 is the area jointly projected by D0, D2, and D3, so the pixel values in S6 are obtained by the projection of 3 projection devices.
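The number of devices jointly projecting a region can be recovered by testing which projection areas contain a sample point of that region; a small sketch, again assuming axis-aligned (left, top, right, bottom) boxes:

```python
def count_projecting_devices(point, areas):
    """Number of projection areas that contain the given (x, y) point;
    this is the N used when splitting pixel values in an overlap region."""
    x, y = point
    return sum(l <= x < r and t <= y < b for (l, t, r, b) in areas)
```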
  • the control device may process the image to be projected according to the number of projection devices, as follows:
  • Step 1: Determine, according to a mapping relationship between the image to be projected and the target projection area, a fourth region corresponding to each third region from the image to be projected.
  • the mapping relationship between the image to be projected and the target projection area is first established; according to the established mapping relationship, the fourth region corresponding to each third region is determined from the image to be projected, the fourth region being part of the image to be projected; that is, the fourth regions corresponding to S1, S2, S3, S4, S5, and S6 are respectively determined from the image to be projected.
  • Step 2: Based on the number of projection devices, equally divide the image parameter values of the pixels in the fourth region corresponding to each third region.
  • the pixel values of each fourth region may be equally divided according to the number of projection devices. For example, for S5, since the number of projection devices projecting S5 is 3, after the fourth region corresponding to S5 is acquired, the pixel values of that fourth region are divided into three equal parts, which is equivalent to each of the three projection devices projecting 1/3 of the pixel value of each pixel of the fourth region; as a result, the pixel value of each pixel obtained by the three projection devices jointly projecting S5 is equal to the pixel value of the corresponding pixel in the fourth region of the image to be projected.
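The equal division of Step 2 amounts to dividing each RGB value in a fourth region by the number N of devices that jointly project it, so that the N superimposed projections reproduce the original values; a sketch assuming 8-bit RGB arrays:

```python
import numpy as np

def split_overlap_pixels(fourth_region, n_devices):
    """Share the pixel values of an overlap (fourth) region among the
    n_devices that jointly project it. fourth_region: H x W x 3 uint8
    array; each device projects the returned share."""
    return (fourth_region.astype(np.uint16) // n_devices).astype(np.uint8)
```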
  • S1004: Divide the processed image to be projected to obtain a first region corresponding to each of the at least two projection devices.
  • the processed image to be projected is divided to obtain the first region corresponding to each projection device, which is then combined with the corresponding second region to obtain the sub-image to be projected for each projection device; each projection device is then controlled to project its sub-image to be projected. With this method, the projection obtained by the collaborative projection of the multiple projection devices shows no visible overlapping regions; compared with the related art, the software-controlled method improves the efficiency of the collaborative projection of the multiple projection devices and ensures its accuracy.
  • S801 may further include:
  • S1101: Divide the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain a fifth region corresponding to each of the at least two projection devices.
  • the control device first determines the regions C0, C1, C2, and C3 where the four projection regions overlap the target projection region P, and then divides the image to be projected according to C0, C1, C2, and C3 to obtain the fifth region corresponding to each of the four projection devices; a fifth region is the divided image corresponding to a projection device, that is, the first divided image corresponding to D0, the second divided image corresponding to D1, the third divided image corresponding to D2, and the fourth divided image corresponding to D3.
  • S1102: Determine the areas in the target projection area where at least two projection areas overlap each other as the sixth regions.
  • a plurality of sixth regions can be identified, namely S1, S2, S3, S4, S5, and S6.
  • the control device acquires that S1 is the area jointly projected by D0 and D2, so the number of projection devices corresponding to S1 is 2; S2 is the area jointly projected by D0 and D1, so the number of projection devices corresponding to S2 is 2; S3 is the area jointly projected by D1 and D3, so the number of projection devices corresponding to S3 is 2; S4 is the area jointly projected by D2 and D3, so the number of projection devices corresponding to S4 is 2; S5 is the area jointly projected by D1, D2, and D3, so the number of projection devices corresponding to S5 is 3; and S6 is the area jointly projected by D0, D2, and D3, so the number of projection devices corresponding to S6 is 3.
  • S1104: Process, according to the number of projection devices, the fifth region corresponding to each of the at least two projection devices, to obtain the first region corresponding to each of the at least two projection devices.
  • first, the control device establishes a mapping relationship between each projection area in the target projection area and the corresponding fifth region, for example, respectively establishing the mapping relationships between D0' and C0, D1' and C1, D2' and C2, and D3' and C3; second, according to the established mapping relationships, the regions corresponding to S1, S2, S5, and S6 are determined from C0, the regions corresponding to S2, S3, and S6 are determined from C1, the corresponding regions are determined from C2, and the regions corresponding to S1, S4, and S5 are determined from C3; then, according to the number of projection devices for S1, S2, S3, S4, S5, and S6, the pixel values of the pixels in the regions corresponding to S1, S2, S3, S4, S5, and S6 in C0, C1, C2, and C3 are equally divided, obtaining the first region corresponding to each projection device; further, for each projection device, the corresponding sub-image to be projected can be obtained from the first region and the second region. Finally, by controlling each projection device to project its sub-image to be projected, the multiple projection devices can project cooperatively, achieving a projection effect without overlapping projections.
  • in this way, the control device controls the multiple projection devices to perform initial projection to obtain the projection regions; then, when overlapping regions occur between the projection regions, the control device can quickly process the image to be projected to obtain the sub-image to be projected corresponding to each projection device; finally, the control device controls each projection device to project its corresponding sub-image, thereby avoiding the overlapping problem of the collaborative projection of multiple projection devices and improving both the projection effect and the projection accuracy of the collaborative projection.
  • Embodiment 2:
  • the present embodiment performs collaborative projection with four projection devices, and describes in detail the process in S42 of determining, from the image to be projected, the sub-image to be projected corresponding to each of the at least two projection devices according to the relative positional relationship between the at least two projection areas and the target projection area.
  • any pixel in the target projection area can be represented by the following array format: (X, Y, N, X0, Y0, X1, Y1, X2, Y2, X3, Y3), where X and Y are the coordinate values of the pixel in the target projection area, and N indicates that the pixel is jointly projected by N projectors; when N is 1, the pixel is projected by one projector, and when N is 2, the pixel is shared by two projectors. (Xi, Yi) are the coordinate values of the pixel in projector Di, and the coordinates for a projector that does not project the pixel are filled with (-1, -1), as in the following examples.
  • the pixel (0, 0) in the target projection area is projected by the projector D2, and the coordinate value corresponding to D2 is (0, 50), so the corresponding array of the pixel is (0, 0, 1, -1, -1, -1, -1, 0, 50, -1, -1).
  • the pixel (1280, 0) in the target projection area is projected by the projector D3, and the coordinate value corresponding to D3 is (800, 0), so the corresponding array of the pixel is (1280, 0, 1, -1, -1, -1, -1, -1, -1, 800, 0).
  • the pixel (0, 800) in the target projection area is projected by the projector D0, and the coordinate value corresponding to D0 is (60, 600), so the corresponding array of the pixel is (0, 800, 1, 60, 600, -1, -1, -1, -1, -1, -1).
  • the pixel (1280, 800) in the target projection area is projected by the projector D1, and the coordinate value corresponding to D1 is (800, 500), so the corresponding array of the pixel is (1280, 800, 1, -1, -1, 800, 500, -1, -1, -1, -1).
  • the pixel (320, 400) in the target projection area is projected by the projectors D0 and D2; the coordinate value corresponding to D0 is (410, 0) and the coordinate value corresponding to D2 is (380, 290), so the corresponding array of the pixel is (320, 400, 2, 410, 0, -1, -1, 380, 290, -1, -1).
  • the pixel (640, 400) in the target projection area is projected by the projectors D0, D1, D2, and D3; the coordinate value corresponding to D0 is (700, 50), the coordinate value corresponding to D1 is (50, 50), the coordinate value corresponding to D2 is (760, 580), and the coordinate value corresponding to D3 is (50, 560), so the corresponding array of the pixel is (640, 400, 4, 700, 50, 50, 50, 760, 580, 50, 560).
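The array format in the examples above can be built as in the following sketch, where device_coords maps a device index to the pixel's coordinates in that device and absent devices default to (-1, -1); the helper name is illustrative:

```python
def pixel_record(x, y, device_coords):
    """Build the (X, Y, N, X0, Y0, X1, Y1, X2, Y2, X3, Y3) record for one
    pixel of the target projection area. device_coords: dict mapping the
    device index (0..3) to the pixel's (Xi, Yi) in that device."""
    record = [x, y, len(device_coords)]
    for i in range(4):
        record.extend(device_coords.get(i, (-1, -1)))
    return tuple(record)
```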
  • the control device reads the image to be projected cached by the display system; the image to be projected can be represented by the coordinate value of each pixel and its corresponding color value, where the color value is represented by the three primary colors (RGB), so each pixel can be expressed as (X, Y, R, G, B). The coordinate value (X, Y) of each pixel in the image to be projected corresponds to the coordinate value (X, Y) of the pixel in the target projection area; on this basis, each pixel of the image to be projected is assigned to the corresponding projection devices, yielding all the pixels of the sub-image to be projected for each projection device.
  • the following array format may be used:
  • D0: (X0, Y0, R/N, G/N, B/N)
  • if X0 > -1 and Y0 > -1, the pixel at coordinates (X0, Y0) in D0 needs to project (R/N, G/N, B/N). The projection device D0 initializes the display array [X0, Y0, 0, 0, 0], that is, each pixel that is assigned no color is given (0, 0, 0); after performing the above device assignment for each pixel of the target projection area and filling the pixel values into the initialized array, the final array [X0, Y0, R, G, B] is obtained, and the coordinate value of the array and its corresponding color value are the pixel value with which D0 projects the pixel.
  • D1: (X1, Y1, R/N, G/N, B/N)
  • if X1 > -1 and Y1 > -1, the pixel at coordinates (X1, Y1) in D1 needs to display (R/N, G/N, B/N). The projection device D1 initializes the display array [X1, Y1, 0, 0, 0], that is, each pixel that is assigned no color is given (0, 0, 0); after performing the above device assignment for each pixel of the target projection area and filling the color values into the initialized array, the final array [X1, Y1, R, G, B] is obtained, and the coordinate value of the array and its corresponding color value are the pixel value with which D1 projects the pixel.
  • for D2 (X2, Y2, R/N, G/N, B/N): if X2 = -1 and Y2 = -1, the pixel is not projected by D2; if X2 > -1 and Y2 > -1, the pixel at coordinates (X2, Y2) in D2 is to be displayed with the pixel value (R/N, G/N, B/N). Projection device D2 initializes the display array [X2, Y2, 0, 0, 0], so any pixel not assigned a color defaults to (0, 0, 0); after performing the above projection-device division for every pixel of the target projection area and filling the color values into the initialized array, the final array [X2, Y2, R, G, B] is obtained, whose coordinate values and corresponding color values are the pixel values that D2 is to project for that pixel.
  • for D3 (X3, Y3, R/N, G/N, B/N): if X3 = -1 and Y3 = -1, the pixel is not projected by D3; if X3 > -1 and Y3 > -1, the pixel at coordinates (X3, Y3) in D3 is to be displayed with the color value (R/N, G/N, B/N). Projection device D3 initializes the display array [X3, Y3, 0, 0, 0], so any pixel not assigned a color defaults to (0, 0, 0); after performing the above projection-device division for every pixel of the target projection area and filling the color values into the initialized array, the final array [X3, Y3, R, G, B] is obtained, whose coordinate values and corresponding color values are the pixel values that D3 is to project for that pixel.
  • finally, the control device controls D0, D1, D2 and D3 to project their respective corresponding sub-images to be projected.
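The per-device array filling described above can be sketched as follows. This Python snippet is illustrative only; `split_pixel` is a hypothetical helper, and integer division stands in for the exact R/N, G/N, B/N values.

```python
# Sketch (assumed helper, not from the patent): filling each projector's
# display entry with the shared color value R/N, G/N, B/N, so that the
# superimposed projections reproduce the original pixel value.

def split_pixel(rgb, device_coords):
    """rgb: (R, G, B) of one pixel of the image to be projected.
    device_coords: per device, its local (x, y), or None if the device
    does not cover this pixel. Returns {device_index: [x, y, r, g, b]}."""
    n = sum(1 for c in device_coords if c is not None)
    out = {}
    for dev, c in enumerate(device_coords):
        if c is not None:
            x, y = c
            out[dev] = [x, y, rgb[0] // n, rgb[1] // n, rgb[2] // n]
    return out

# Pixel (640, 400) with value (210, 120, 90), covered by all four devices:
entries = split_pixel((210, 120, 90), [(700, 50), (50, 50), (760, 580), (50, 560)])
print(entries[0])  # [700, 50, 52, 30, 22]
```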
  • in practical applications, the projection device in this embodiment may be a digital light processing (DLP) projector, or another type of projector.
  • taking a DLP projector as an example, the projector includes the following key parts. Light source: the light source of a DLP system consists of three LED bulbs that emit red (RED), green (GREEN) and blue (BLUE) light respectively; the higher the brightness of the LEDs, the brighter the projected picture.
  • digital micromirror device (DMD): the DMD is the core display device in a DLP projection system. It is composed of many small rotatable mirrors arranged in rows and columns of pixels; each small mirror corresponds to one pixel of the image, or in other words, each pixel of the image controls the deflection angle of one small mirror.
  • after the RGB data of each pixel of the image is decomposed, the R, G and B LEDs are switched on and off separately. While the R lamp is on, each small mirror deflects according to its pixel's R value: the larger the R value, the more light the mirror reflects; the G and B lamps work in the same way, so that within one frame of the image the correct RGB brightness is reflected out.
  • lens: the lens converges the light reflected by the small mirrors and projects it onto the screen according to the focal length; different focal lengths are realized through the lens group inside the lens, and the focal length adjusts the sharpness and size of the image on the screen.
  • to summarize the principle of the DLP projection system: the picture data on the DMD changes the flipping of the DMD's mirrors and thereby the intensity of the RGB light, so that each pixel of the picture differs in color and brightness; by changing the brightness of the three R, G and B LEDs, the overall color balance of the picture can be controlled.
  • since a DLP projector can achieve a 1:1 superposition of pixel values, when multiple projection devices project cooperatively, the pixel values of the image to be projected that correspond to an overlap region can be divided equally according to the number of projection devices covering that region, thereby controlling the color balance of the cooperatively projected image.
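A quick numeric check of this equal-division idea, assuming an idealized 1:1 additive superposition (the helper name `superimpose` is hypothetical, not from the patent):

```python
# Sketch: dividing an overlap pixel's value equally among the N covering
# projectors makes the superimposed result approximate the original value.

def superimpose(rgb, n):
    share = tuple(c // n for c in rgb)   # each projector's share
    return tuple(s * n for s in share)   # what the screen receives

print(superimpose((210, 120, 90), 3))  # (210, 120, 90), the original value
```

Integer rounding can lose a little brightness (e.g. 25 split two ways gives back 24), which a real implementation would distribute more carefully.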
  • Embodiment 3:
  • based on the foregoing embodiments, this embodiment takes cooperative projection by four projection devices as an example and details the process in S41 of determining the projection area corresponding to each projection device and determining the target projection area.
  • here, to simplify the image recognition algorithm, each projection device may perform its initial projection with a rectangular image of a different color.
  • D0 denotes the first projection device, whose projected color is a red rectangular area; D1 denotes the second projection device, whose projected color is a green rectangular area; D2 denotes the third projection device, whose projected color is a yellow rectangular area; D3 denotes the fourth projection device, whose projected color is a blue rectangular area.
  • after the four projection devices perform their initial projection, the initial projection image captured by the camera under the control of the control device is as shown in FIG. 5; the figure contains ten colors in total: red, green, yellow, blue and six mixed colors (not shown in FIG. 5).
  • the process by which the control device determines the projection areas corresponding to the four projection devices is described below with reference to FIG. 5 and FIG. 6 (the areas of the six mixed colors in FIG. 5 correspond to S1-S6 in FIG. 6).
  • to determine the projection area corresponding to D0, it is determined whether each of the areas S1-S6 is adjacent to the red area; the adjacent areas are taken to form the projection area corresponding to D0, as in the part inside the D0' frame in FIG. 6.
  • to determine the projection area corresponding to D1, it is determined whether each of the areas S1-S6 is adjacent to the green area; the adjacent areas are taken to form the projection area corresponding to D1, as in the part inside the D1' frame in FIG. 6.
  • to determine the projection area corresponding to D2, it is determined whether each of the areas S1-S6 is adjacent to the yellow area; the adjacent areas are taken to form the projection area corresponding to D2, as in the part inside the D2' frame in FIG. 6.
  • to determine the projection area corresponding to D3, it is determined whether each of the areas S1-S6 is adjacent to the blue area; the adjacent areas are taken to form the projection area corresponding to D3, as in the part inside the D3' frame in FIG. 6.
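The adjacency test described above might be sketched as follows; the data layout and the helper `projection_region` are assumptions for illustration, not part of the patent.

```python
# Sketch: composing a projector's region from its solid-color area plus the
# mixed-color areas S1-S6 adjacent to it.

def projection_region(base_area, mixed_areas, adjacent):
    """base_area: name of the projector's solid-color area, e.g. 'red'.
    mixed_areas: iterable of mixed-area names, e.g. ['S1', ..., 'S6'].
    adjacent: set of (area_a, area_b) pairs known to share a border."""
    region = {base_area}
    for s in mixed_areas:
        if (base_area, s) in adjacent or (s, base_area) in adjacent:
            region.add(s)
    return region

# In the FIG. 6 layout, D0's red area borders S1, S2, S5 and S6:
adj = {('red', 'S1'), ('red', 'S2'), ('red', 'S5'), ('red', 'S6')}
print(projection_region('red', ['S1', 'S2', 'S3', 'S4', 'S5', 'S6'], adj))
```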
  • next, the target projection area is determined based on the projection areas D0', D1', D2' and D3' corresponding to the four projection devices D0, D1, D2 and D3.
  • as shown in FIG. 7, the control device performs image recognition on the projection edges of the initial projection image and recognizes all the horizontal and vertical lines of the different colors; each color has two horizontal lines and two vertical lines.
  • determining the upper boundary of the target projection area: of the two red horizontal lines in the D0' area, take the higher one; of the two green horizontal lines in the D1' area, take the higher one; compare the two lines and take the lower one as the upper boundary of the target projection area, then extend it.
  • determining the lower boundary: of the two yellow horizontal lines in the D2' area, take the lower one; of the two blue horizontal lines in the D3' area, take the lower one; compare the two lines and take the higher one as the lower boundary of the target projection area, then extend it.
  • determining the left boundary: of the two red vertical lines in the D0' area, take the left one; of the two yellow vertical lines in the D2' area, take the left one; compare the two lines and take the right one as the left boundary of the target projection area, then extend it.
  • determining the right boundary: of the two green vertical lines in the D1' area, take the right one; of the two blue vertical lines in the D3' area, take the right one; compare the two lines and take the left one as the right boundary of the target projection area, then extend it.
  • finally, the rectangular area formed by the four extension lines is the target projection area, as indicated by P in FIG. 7.
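Under the assumption of axis-aligned bounding boxes in image coordinates (y growing downward, so "higher" means a smaller y value), the four boundary rules above reduce to max/min comparisons. The helper below is an illustrative sketch, not the patent's algorithm verbatim:

```python
# Sketch: computing the target rectangle's boundaries from each region's
# edge lines, following the upper/lower/left/right rules described above.

def target_rectangle(d0, d1, d2, d3):
    """Each argument is a region's bounding box (left, top, right, bottom)."""
    top = max(d0[1], d1[1])      # lower of the two top edges of D0', D1'
    bottom = min(d2[3], d3[3])   # higher of the two bottom edges of D2', D3'
    left = max(d0[0], d2[0])     # rightmost of the two left edges of D0', D2'
    right = min(d1[2], d3[2])    # leftmost of the two right edges of D1', D3'
    return left, top, right, bottom

# Four overlapping regions roughly arranged as in FIG. 6:
print(target_rectangle((0, 10, 700, 500), (600, 0, 1300, 480),
                       (20, 420, 720, 900), (580, 400, 1280, 880)))
# (20, 10, 1280, 880)
```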
  • further, after the projection area corresponding to each projection device and the target projection area have been determined in S41, cooperative projection by the four projection devices is still taken as an example to describe S42. Meanwhile, in this embodiment, the case in which, when overlap areas exist between the projection areas, the image to be projected is first divided and the divided images are then processed, finally achieving overlap-free projection, is taken as an example.
  • first, the control device divides the image to be projected according to the relative positional relationship between the four projection areas and the target projection area, obtaining the fifth areas corresponding to the four projection devices respectively.
  • next, the control device establishes a mapping relationship between each projection area within the target projection area and its corresponding fifth area; determines, according to the established mapping relationships, the image to be projected corresponding to each overlapping part from each fifth area; and processes the pixel values of the corresponding images to be projected according to the number of projection devices covering each overlap area, obtaining the sub-image to be projected corresponding to each projection device.
  • referring to FIG. 9, the sub-images to be projected corresponding to the respective projection devices are described as follows.
  • the sub-image to be projected corresponding to D0 includes the following parts: K0, projected as black; S1, which overlaps with D2: after the image to be projected corresponding to S1 is acquired, its pixel values are halved to obtain the processed image for S1; S2, which overlaps with D1: its pixel values are likewise halved; S5, which overlaps with D2 and D3: its pixel values are reduced to 1/3; S6, which overlaps with D1 and D3: its pixel values are reduced to 1/3; the pixel values of the images to be projected corresponding to the other parts remain unchanged.
  • the sub-image to be projected corresponding to D1 includes the following parts: K1, projected as black; S2, which overlaps with D0: pixel values halved; S3, which overlaps with D3: pixel values halved; S6, which overlaps with D0 and D3: pixel values reduced to 1/3; the pixel values of the other parts remain unchanged.
  • the sub-image to be projected corresponding to D2 includes the following parts: K2, projected as black; S1, which overlaps with D0: pixel values halved; S4, which overlaps with D3: pixel values halved; S5, which overlaps with D0 and D3: pixel values reduced to 1/3; the pixel values of the other parts remain unchanged.
  • the sub-image to be projected corresponding to D3 includes the following parts: K3, projected as black; S3, which overlaps with D1: pixel values halved; S4, which overlaps with D2: pixel values halved; S5, which overlaps with D2 and D0: pixel values reduced to 1/3; S6, which overlaps with D1 and D0: pixel values reduced to 1/3; the pixel values of the other parts remain unchanged.
  • finally, the control device controls each projection device to project its corresponding sub-image to be projected.
  • Embodiment 4:
  • based on the same inventive concept, this embodiment provides a projection apparatus, which is disposed on a control device and configured to control the projection process of cooperative projection by multiple projection devices.
  • referring to FIG. 12, the apparatus 100 includes: an acquisition module 101 configured to acquire, through an image acquisition unit, an initial projection image formed by the initial projection of at least two projection devices; a first processing module 102 configured to determine, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and to determine the maximum inscribed rectangular area formed by the at least two projection areas as the target projection area; a second processing module 103 configured to determine, from the image to be projected, the sub-images to be projected corresponding to the at least two projection devices respectively according to the relative positional relationship between the at least two projection areas and the target projection area; and a control module 104 configured to respectively control the at least two projection devices to project their corresponding sub-images to be projected.
  • the second processing module 103 is further configured to divide the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area to obtain first areas corresponding to the at least two projection devices respectively; determine the areas in which the at least two projection areas do not overlap the target projection area as second areas corresponding to the at least two projection devices respectively, the image parameter values of the pixels within the second areas being identical; and, for each projection device, obtain the corresponding sub-image to be projected according to the first area and the second area.
  • the second processing module 103 is further configured to determine the areas in which the at least two projection areas within the target projection area overlap one another as third areas; acquire the number of projection devices corresponding to each third area; process the image to be projected based on the number of projection devices; and divide the processed image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area to obtain the first areas corresponding to the at least two projection devices respectively.
  • the second processing module 103 is further configured to determine, from the image to be projected, the fourth area corresponding to each third area according to the mapping relationship between the image to be projected and the target projection area, and to equally divide, based on the number of projection devices, the image parameter values corresponding to the pixels in the fourth area corresponding to each third area.
  • the second processing module 103 is further configured to divide the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area to obtain fifth areas corresponding to the at least two projection devices respectively; determine the areas in which the at least two projection areas within the target projection area overlap one another as sixth areas; acquire the number of projection devices corresponding to each sixth area; and process, based on the number of projection devices, the fifth areas corresponding to the at least two projection devices respectively to obtain the first areas corresponding to the at least two projection devices respectively.
  • in practical applications, the acquisition module 101, the first processing module 102, the second processing module 103 and the control module 104 may be implemented by a processor in the projection apparatus.
  • those skilled in the art should understand that embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
  • these computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • these computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • based on this, an embodiment of the present disclosure further provides a storage medium, specifically a computer-readable storage medium, on which a computer program is stored; when the computer program is executed by a processor, the steps of the methods of the embodiments of the present disclosure are implemented.
  • in the solution provided by the embodiments of the present disclosure, an initial projection image formed by the initial projection of at least two projection devices is first acquired; next, the projection area corresponding to each projection device and the target projection area are determined from the initial projection image; then, according to the relative positional relationship between the projection areas and the target projection area, the sub-image to be projected corresponding to each projection device is determined from the image to be projected; finally, each projection device is controlled to project its corresponding sub-image to be projected. This process requires no manual operation: cooperative projection by multiple projection devices is achieved purely through software control, which increases the degree of intelligence of cooperative multi-device projection and improves its projection efficiency.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Projection Apparatus (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

A projection method, including: acquiring an initial projection image formed by the initial projection of at least two projection devices; determining, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and determining the maximum inscribed rectangular area formed by the at least two projection areas as a target projection area; determining, from an image to be projected, sub-images to be projected corresponding to the at least two projection devices respectively according to the relative positional relationship between the at least two projection areas and the target projection area; and respectively controlling the at least two projection devices to project their corresponding sub-images to be projected. A projection apparatus, a projection system and a storage medium are also disclosed.

Description

A projection method, apparatus, system and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based on, and claims priority to, Chinese patent application No. 201710271996.3 filed on April 24, 2017, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of terminal projection, and in particular to a projection method, apparatus, system and storage medium.
BACKGROUND
In recent years, with the rapid growth of the amount of information and the rapid development of information carrying and transmission technologies, people's requirements for the resolution, display efficiency and visual effect of display devices have been increasing. A traditional single ordinary projector is reasonably priced but has a relatively low resolution and cannot meet market demand; a multi-screen spliced display achieves high-resolution image display but is very expensive and cannot be widely popularized and applied. Therefore, multi-projector splicing, as a comprehensive technology that balances price against high-resolution display, in which several ordinary projectors perform spliced projection to project the image to be projected, is highly favored.
In the related art, spliced projection with multiple projectors is performed as follows: first, according to the splicing requirements, the user divides an image to be projected on a computer; next, the divided images are transmitted to the multiple projectors; then, the multiple projectors are controlled to project the corresponding divided images. In the existing technology, therefore, the cooperative projection process of multiple projectors must be controlled manually, and the projection result must also be adjusted manually. When the number of projectors is large, this manual control of multi-projector projection is time-consuming and laborious, resulting in low projection efficiency for cooperative multi-projector projection.
SUMMARY
In view of this, embodiments of the present disclosure are expected to provide a projection method, apparatus, system and storage medium.
The technical solutions of the embodiments of the present disclosure are implemented as follows.
In a first aspect, an embodiment of the present disclosure provides a projection method, including: acquiring an initial projection image formed by the initial projection of at least two projection devices; determining, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and determining the maximum inscribed rectangular area formed by the at least two projection areas as a target projection area; determining, from an image to be projected, sub-images to be projected corresponding to the at least two projection devices respectively according to the relative positional relationship between the at least two projection areas and the target projection area; and respectively controlling the at least two projection devices to project their corresponding sub-images to be projected.
In a second aspect, an embodiment of the present disclosure provides a projection apparatus, including: an acquisition module configured to acquire, through an image acquisition unit, an initial projection image formed by the initial projection of at least two projection devices; a first processing module configured to determine, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and to determine the maximum inscribed rectangular area formed by the at least two projection areas as a target projection area; a second processing module configured to determine, from an image to be projected, sub-images to be projected corresponding to the at least two projection devices respectively according to the relative positional relationship between the at least two projection areas and the target projection area; and a control module configured to respectively control the at least two projection devices to project their corresponding sub-images to be projected.
In a third aspect, an embodiment of the present disclosure provides a projection system, including: a camera configured to capture an initial projection image formed by the initial projection of at least two projection devices; a control device configured to determine, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, determine the maximum inscribed rectangular area formed by the at least two projection areas as a target projection area, determine, from an image to be projected, sub-images to be projected corresponding to the at least two projection devices respectively according to the relative positional relationship between the at least two projection areas and the target projection area, and respectively control the at least two projection devices to project their corresponding sub-images to be projected; and the at least two projection devices, configured to perform the initial projection and further configured to project their corresponding sub-images to be projected.
In the projection method, apparatus, system and storage medium provided by the embodiments of the present disclosure, an initial projection image formed by the initial projection of at least two projection devices is first acquired; next, the projection area corresponding to each projection device and the target projection area are determined from the initial projection image; then, according to the relative positional relationship between the projection areas and the target projection area, the sub-image to be projected corresponding to each projection device is determined from the image to be projected; finally, each projection device is controlled to project its corresponding sub-image to be projected. This process requires no manual operation: cooperative projection by multiple projection devices is achieved purely through software control, which increases the degree of intelligence of cooperative multi-device projection and further improves its projection efficiency.
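The four steps above can be sketched end to end as a toy pipeline. Every class and function name here is an illustrative placeholder rather than an API defined by the disclosure, and the projection areas are simplified to axis-aligned rectangles (left, top, right, bottom) with y growing downward:

```python
class StubDevice:
    def __init__(self, area):
        self.area = area          # rectangle this device covers: (l, t, r, b)
        self.projected = None
    def project(self, sub_image):
        self.projected = sub_image

def target_area(regions):
    # Simplified stand-in for the maximum inscribed rectangle of the union:
    # here, just the rectangle common to all regions.
    return (max(r[0] for r in regions), max(r[1] for r in regions),
            min(r[2] for r in regions), min(r[3] for r in regions))

def split_image(image, regions, target):
    # image: {(x, y): (r, g, b)}; a pixel inside the target area goes to
    # every device whose region contains it, its value divided by the
    # number of covering devices (the overlap handling of the embodiments).
    l, t, r, b = target
    subs = [dict() for _ in regions]
    for (x, y), rgb in image.items():
        if not (l <= x < r and t <= y < b):
            continue  # outside the target projection area
        covering = [i for i, (rl, rt, rr, rb) in enumerate(regions)
                    if rl <= x < rr and rt <= y < rb]
        n = len(covering)
        for i in covering:
            subs[i][(x, y)] = tuple(c // n for c in rgb)
    return subs

devices = [StubDevice((0, 0, 10, 4)), StubDevice((4, 0, 14, 4))]
regions = [d.area for d in devices]                     # step 2a: per-device areas
target = target_area(regions)                           # step 2b: target area
image = {(5, 1): (90, 60, 30), (8, 2): (10, 20, 30)}
for dev, sub in zip(devices, split_image(image, regions, target)):
    dev.project(sub)                                    # step 4: projection
print(devices[0].projected)  # {(5, 1): (45, 30, 15), (8, 2): (5, 10, 15)}
```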
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a first schematic structural diagram of a projection system in an embodiment of the present disclosure;
FIG. 2 is a second schematic structural diagram of a projection system in an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a master projection device in a projection system in an embodiment of the present disclosure;
FIG. 4 is a first schematic flowchart of a projection method in an embodiment of the present disclosure;
FIG. 5 is a first schematic diagram of a projected image of a projection method in an embodiment of the present disclosure;
FIG. 6 is a second schematic diagram of a projected image of a projection method in an embodiment of the present disclosure;
FIG. 7 is a third schematic diagram of a projected image of a projection method in an embodiment of the present disclosure;
FIG. 8 is a second schematic flowchart of a projection method in an embodiment of the present disclosure;
FIG. 9 is a fourth schematic diagram of a projected image of a projection method in an embodiment of the present disclosure;
FIG. 10 is a third schematic flowchart of a projection method in an embodiment of the present disclosure;
FIG. 11 is a fourth schematic flowchart of a projection method in an embodiment of the present disclosure;
FIG. 12 is a schematic structural diagram of a projection apparatus in an embodiment of the present disclosure.
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings of the embodiments of the present disclosure.
Embodiment 1:
An embodiment of the present disclosure provides a projection method applied to a projection system. Referring to FIG. 1, the projection system 10 includes a camera 11, a control device 12 and at least two projection devices 13.
The control device 12 and a projection device 13 may be physically separate or physically combined. When they are physically separate, the camera 11 may be disposed on the control device 12; in this case, the control device 12 controls the projection process of each projection device 13 and controls the camera 11 to capture the initial projection image formed by the initial projection of the projection devices. Here, the control device 12 may be a mobile phone, a tablet computer, a notebook computer, or the like. When the control device 12 and a projection device 13 are physically combined, the camera 11 is disposed on that projection device 13, which may then be called the master projection device, while the other projection devices are called slave projection devices.
Referring to FIG. 2, a projection system including four projection devices is taken as an example, where D0 denotes the first projection device, D1 the second, D2 the third and D3 the fourth. Here the control device and D0 are physically combined and the camera is disposed on D0; the camera may lie in the same plane as D0's optical engine (FIG. 2 is a front view of the projection devices, in which the circle on D0's front view represents the camera and the rectangle represents the optical engine). D0 then serves as the master projection device and D1, D2 and D3 as the slave projection devices; the master and slave devices are connected, D0 controls the projection process of itself and of each slave device, and D0 controls the camera to capture the initial projection image.
In practical applications, multiple projection devices are mostly connected by wire, which may lead to problems such as loose plugs or poor contact that prevent the devices from projecting cooperatively. For this reason, the projection devices in this embodiment may be connected wirelessly, for example through a wireless mirroring connection technology, to share information among the devices. Taking four projection devices as an example again, D0, as the master projection device, is connected to the slave projection devices D1, D2 and D3 respectively through wireless mirroring; the master projection device is the sending end, the slave projection devices are the receiving ends, and the master projection device controls the output images of itself and of each slave device.
Further, still taking the physically combined control device and projection device as an example, the projection method of this embodiment is further described. Referring to FIG. 3, the control device and D0 are physically combined; the central processing unit (CPU) module 121, display control module 122 and communication interface 123 in D0 can be regarded as a whole equivalent to the control device. The CPU module 121, as the overall control module, is responsible for controlling the projection content of each device; the display control module 122 controls D0's optical engine 124 to project; and D0's communication interface may be a wireless communication interface that communicates with each slave projection device over a wireless network to control its projection. Here, D0 is connected to D1, D2 and D3 respectively through wireless mirroring; the images divided by D0's CPU module are transmitted to D1, D2 and D3, and the CPU module controls the time delay between D0 and D1, D2 and D3 to ensure synchronized display of the cooperatively projected pictures.
The projection method provided by the embodiments of the present disclosure is described below with reference to the above system.
Referring to FIG. 4, which shows a schematic flowchart of the projection method provided by an embodiment of the present disclosure, the method can be applied to scenarios that require a large screen to display data or images, such as displaying images related to military simulation, industrial design, virtual manufacturing, engineering projection and complex monitoring. The method includes the following steps.
S40: acquiring an initial projection image formed by the initial projection of at least two projection devices.
Taking cooperative projection by multiple projection devices in a large conference as an example: after placing the projection devices, the user first turns them on; the projection devices then perform their initial projection; and the initial projection image formed by the initial projection is then acquired through an image acquisition module.
In this embodiment, at least two projection devices may perform the initial projection. In one embodiment, since the aspect ratios of commonly used images mostly involve squares of natural numbers, the number of projection devices may also be the square of a natural number; for example, four or nine projection devices perform the initial projection, improving the projection effect of cooperative multi-device projection. Here, four projection devices are taken as an example with reference to FIG. 3 and FIG. 5: FIG. 3 shows the four projection devices D0, D1, D2 and D3 performing the initial projection, and FIG. 5 shows the initial projection image, captured by the camera, formed by the initial projection of these four devices. The edges of the initial projection image are composed of the edges of the projection areas of the devices; for ease of description, FIG. 5 distinguishes the projection edges of the devices with different line types: the line type indicated by 501 represents D0's projection edge, 502 represents D1's, 503 represents D2's, and 504 represents D3's.
S41: determining, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and determining the maximum inscribed rectangular area formed by the at least two projection areas as the target projection area.
After the initial projection image is acquired in S40, the control device can recognize, through an image recognition algorithm, the at least two projection areas corresponding to the at least two projection devices in the initial projection image, calculate the maximum inscribed rectangular area formed by the at least two projection areas through an edge recognition algorithm or a mathematical algorithm for finding the maximum rectangle, and take this maximum inscribed rectangular area as the target projection area.
Referring to FIG. 3 and FIG. 6, the control device and D0 are physically combined and D0 serves as the master projection device. After the camera captures the initial projection image, D0's CPU module processes it as follows: first, the projection area D0' corresponding to D0, D1' corresponding to D1, D2' corresponding to D2 and D3' corresponding to D3 within the initial projection area are recognized through an image recognition algorithm; next, edge recognition is performed on D0', D1', D2' and D3' respectively to obtain the four edges of each projection area; then, the line on which the lower of the upper edges of D0' and D1' lies is extended, the line on which the higher of the lower edges of D2' and D3' lies is extended, the line on which the more rightward of the left edges of D0' and D2' lies is extended, and the line on which the more leftward of the right edges of D1' and D3' lies is extended; finally, the rectangular area formed by the four extension lines is determined as the target projection area, which, referring to FIG. 7, is the rectangular area P.
Of course, those skilled in the art may also devise their own calculation methods as actually needed to recognize the at least two projection areas corresponding to the at least two projection devices in the initial projection image and to find the maximum inscribed rectangular area in the initial projection image; the embodiments of the present disclosure impose no specific limitation.
S42: determining, from the image to be projected, the sub-images to be projected corresponding to the at least two projection devices respectively according to the relative positional relationship between the at least two projection areas and the target projection area.
After the four projection areas D0', D1', D2' and D3' corresponding to the four projection devices D0, D1, D2 and D3 and the target projection area P are determined in S41, the control device can acquire the sub-image to be projected corresponding to each projection device from the image to be projected based on the overlapping and non-overlapping areas between the target projection area P and the projection areas D0', D1', D2' and D3'.
In one embodiment, referring to FIG. 8, S42 may include the following steps.
S801: dividing the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area to obtain first areas corresponding to the at least two projection devices respectively.
Here, the control device first acquires the areas in which each projection area overlaps the target projection area. Referring to FIG. 9, the area where D0' overlaps P is C0, where D1' overlaps P is C1, where D2' overlaps P is C2, and where D3' overlaps P is C3; the control device then divides the image to be projected according to C0, C1, C2 and C3 to obtain the first areas corresponding to the four projection devices, that is, the divided images corresponding to the four projection devices respectively.
S802: determining the areas in which the at least two projection areas do not overlap the target projection area as second areas corresponding to the at least two projection devices respectively, the image parameter values of the pixels within a second area being identical.
Here, after the control device has determined the areas C0, C1, C2 and C3 in which the four projection areas overlap the target projection area P, the areas in which the four projection areas do not overlap the target projection area P can be determined as the second areas; referring to FIG. 9, K0, K1, K2 and K3 in the figure are all second areas. Since the image to be projected will ultimately be projected into the target projection area, the image parameter values of the pixels in the second areas, which the projection devices project outside the target projection area, can be set uniformly; the image parameters here may include pixel value, saturation, hue and so on. For example, the pixel values of the pixels in a second area may all be set to black, so that the projected content of the second areas is plain and uniform and has no visual impact on the projected content within the target projection area, improving the projection effect of the cooperative projection.
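The uniform fill of the second areas can be sketched as a simple mask; the helper name `mask_outside_target` and the dict-of-pixels image layout are illustrative assumptions, not part of the disclosure:

```python
# Sketch: every pixel of a device's divided image that falls outside the
# target projection area (its "second area") is set to black (0, 0, 0).
# Rectangles are (left, top, right, bottom) in image coordinates.

def mask_outside_target(sub_image, target):
    l, t, r, b = target
    return {(x, y): (rgb if l <= x < r and t <= y < b else (0, 0, 0))
            for (x, y), rgb in sub_image.items()}

sub = {(2, 2): (200, 100, 50), (9, 2): (10, 10, 10)}
print(mask_outside_target(sub, (0, 0, 8, 8)))
# {(2, 2): (200, 100, 50), (9, 2): (0, 0, 0)}
```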
S803: obtaining the sub-images to be projected corresponding to the at least two projection devices respectively according to the first areas and the second areas.
Here, the control device divides the image to be projected according to C0, C1, C2 and C3 to obtain the first divided image corresponding to D0, the second corresponding to D1, the third corresponding to D2 and the fourth corresponding to D3, and then obtains the sub-images to be projected corresponding to D0, D1, D2 and D3 respectively according to the divided images and the second areas.
S43: respectively controlling the at least two projection devices to project their corresponding sub-images to be projected.
Understandably, on the basis of S803, the process in S43 in which the control device controls each projection device to project its corresponding sub-image to be projected can be realized in the following two ways.
First, while the control device controls each projection device to project its divided image, the second areas in which the projection areas do not overlap the target projection area are uniformly filled, achieving the purpose of each projection device projecting its corresponding sub-image to be projected.
Second, for each projection device, its divided image is spliced with its corresponding second area to obtain the sub-image to be projected corresponding to that device, and cooperative projection is then performed.
In practical applications, when overlap areas appear between the projection areas of the initial projection, if the sub-image to be projected corresponding to each projection device is determined from the image to be projected merely according to the relative positional relationship between the projection areas and the target projection area and each projection device is then controlled to project its corresponding sub-image, overlap areas will also appear in the cooperatively projected image, impairing the projection effect. For this reason, in this embodiment, when overlap areas appear between the projection areas of the initial projection, the image to be projected is processed first and the processed image is then divided, finally achieving the technical effect of overlap-free projection. Accordingly, in this embodiment, referring to FIG. 10, S801 may include the following steps.
S1001: determining the areas in which the at least two projection areas within the target projection area overlap one another as third areas.
Here, the control device determines the areas in which the projection areas within the target projection area overlap one another as the third areas. A third area is an area projected jointly by multiple projection devices, and the pixel value of each pixel within it is the pixel value obtained by superimposing the projections of those devices. Referring to FIG. 9, the areas in which the four projection areas D0', D1', D2' and D3' overlap one another include S1, S2, S3, S4, S5 and S6.
S1002: acquiring the number of projection devices corresponding to each third area.
Here, the control device acquires the number of projection devices corresponding to each third area and can thereby learn by how many devices' superimposed projections the pixel values in that third area are produced. For example, the control device learns that S1 is an area projected jointly by D0 and D2, so the pixel values in S1 result from the superimposed projections of two devices; S2 is projected jointly by D0 and D1, S3 by D1 and D3, and S4 by D2 and D3, so the pixel values in S2, S3 and S4 likewise each result from the superimposed projections of two devices; S5 is projected jointly by D1, D2 and D3 and S6 by D0, D2 and D3, so the pixel values in S5 and S6 each result from the superimposed projections of three devices.
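The S1-S6 device counts above can be derived from a small table; the dictionary layout is an illustrative assumption, not a data structure defined by the disclosure:

```python
# Sketch: deriving the projector count for each overlap (third) area from
# the jointly-projecting device sets described above.

overlap_devices = {
    'S1': {'D0', 'D2'}, 'S2': {'D0', 'D1'}, 'S3': {'D1', 'D3'},
    'S4': {'D2', 'D3'}, 'S5': {'D1', 'D2', 'D3'}, 'S6': {'D0', 'D2', 'D3'},
}
device_count = {area: len(devs) for area, devs in overlap_devices.items()}
print(device_count['S1'], device_count['S5'])  # 2 3
```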
S1003: processing the image to be projected based on the number of projection devices.
Here, after obtaining the number of projection devices corresponding to each third area, the control device can process the image to be projected according to that number, as follows.
Step 1: determining, from the image to be projected, the fourth area corresponding to each third area according to the mapping relationship between the image to be projected and the target projection area.
Here, the mapping relationship between the image to be projected and the target projection area is first established; then, according to the established mapping relationship, the fourth area corresponding to each third area is determined from the image to be projected. A fourth area is an area of the image to be projected that will be assigned to multiple projection devices for overlapping projection. For example, according to the established mapping relationship, the fourth areas corresponding to S1, S2, S3, S4, S5 and S6 are determined from the image to be projected respectively.
Step 2: equally dividing, based on the number of projection devices, the image parameter values corresponding to the pixels in the fourth area corresponding to each third area.
Here, when the image parameter values corresponding to the pixels in a fourth area are divided equally, the pixel values of the fourth area can be divided equally according to the number of projection devices. For example, for S5, since the number of projection devices projecting S5 is three, after the fourth area corresponding to S5 is acquired, its pixel values are divided into three equal parts; each of the three projection devices then projects 1/3 of the pixel value of each pixel of the fourth area corresponding to S5, so that the pixel values obtained by the three devices jointly projecting S5 equal the pixel values of the pixels within the fourth area of the image to be projected.
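Step 2 can be sketched as a one-line scaling of the overlap area's pixels; the helper name `divide_area` is an illustrative assumption, and integer division stands in for the exact 1/N values:

```python
# Sketch: scaling the pixels of an overlap ("fourth") area by 1/N, where
# N projectors cover it; N projections of the scaled value then superimpose
# back to approximately the original pixel value.

def divide_area(pixels, n):
    return {p: tuple(c // n for c in rgb) for p, rgb in pixels.items()}

s5 = {(100, 40): (240, 90, 30)}
print(divide_area(s5, 3))  # {(100, 40): (80, 30, 10)}
```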
S1004: dividing the processed image to be projected to obtain the first areas corresponding to the at least two projection devices respectively.
Here, after the control device performs the above processing on the image to be projected based on the overlap areas S1, S2, S3, S4, S5 and S6, the image to be projected is divided to obtain the first area corresponding to each projection device; combined with the second areas, the sub-image to be projected corresponding to each projection device is obtained, and each projection device is then controlled to project its corresponding sub-image. No overlap areas appear in the projection area obtained by cooperative multi-device projection with this method. Compared with the related-art scheme of manually adjusting the overlap areas, in this embodiment the software-controlled method improves the efficiency of cooperative multi-device projection and ensures its accuracy.
When overlap areas appear in the cooperatively projected image, it is also possible to first divide the image to be projected and then process the divided images, finally achieving the technical effect of overlap-free projection. Accordingly, in this embodiment, referring to FIG. 11, S801 may also include the following steps.
S1101: dividing the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area to obtain fifth areas corresponding to the at least two projection devices respectively.
Here, the control device first determines the areas C0, C1, C2 and C3 in which the four projection areas overlap the target projection area P, and then divides the image to be projected according to C0, C1, C2 and C3 to obtain the fifth area corresponding to each of the four projection devices; the fifth areas are the divided images corresponding to the devices, namely the first divided image corresponding to D0, the second corresponding to D1, the third corresponding to D2 and the fourth corresponding to D3.
S1102: determining the areas in which the at least two projection areas within the target projection area overlap one another as sixth areas.
Here, according to the areas in which the four projection areas D0', D1', D2' and D3' overlap one another, the following sixth areas can be determined: S1, S2, S3, S4, S5 and S6.
S1103: acquiring the number of projection devices corresponding to each sixth area.
Here, the control device learns that S1 is projected jointly by D0 and D2, so the number of projection devices corresponding to S1 is two; S2 is projected jointly by D0 and D1, S3 by D1 and D3, and S4 by D2 and D3, so the numbers corresponding to S2, S3 and S4 are likewise two; S5 is projected jointly by D1, D2 and D3 and S6 by D0, D2 and D3, so the numbers corresponding to S5 and S6 are three.
S1104: processing, based on the number of projection devices, the fifth areas corresponding to the at least two projection devices respectively to obtain the first areas corresponding to the at least two projection devices respectively.
Here, first, the control device establishes the mapping relationship between each projection area within the target projection area and its corresponding fifth area, for example between D0' and C0, D1' and C1, D2' and C2, and D3' and C3; next, according to the established mapping relationships, the areas corresponding to S1, S2, S5 and S6 are determined from C0, those corresponding to S2, S3 and S6 from C1, those corresponding to S1, S4 and S5 from C2, and those corresponding to S3, S4, S5 and S6 from C3; then, according to the numbers of projection devices in S1, S2, S3, S4, S5 and S6, the pixel values of the pixels within the areas of C0, C1, C2 and C3 corresponding to S1, S2, S3, S4, S5 and S6 are divided equally to obtain the first area corresponding to each projection device; further, for each projection device, the corresponding sub-image to be projected can be obtained from the first area and the second area; finally, each projection device is controlled to project its corresponding sub-image to be projected, realizing cooperative multi-device projection with an overlap-free projection effect.
It should be noted that, in actual projection, when multiple projection devices are controlled manually, the overlap areas between the images projected by the projectors must also be adjusted by hand. The workload of manual adjustment is heavy, rapid adjustment is impossible when there are many projection devices, and manually adjusting the positions of multiple projectors can only achieve a rough adjustment that cannot precisely stagger the overlapping parts of the projection areas; the overlap areas of the cooperative projection therefore cannot be avoided, resulting in a poor projection effect.
In the embodiments of the present disclosure, however, after the projection devices are turned on, the control device first controls them to perform the initial projection to obtain the projection areas; then, when overlap areas appear between the projection areas, the control device can quickly process the image to be projected to obtain the sub-image to be projected corresponding to each projection device; finally, the control device controls each projection device to project its corresponding sub-image, thereby avoiding the overlap problem of cooperative multi-device projection and improving both its projection effect and its projection accuracy.
Embodiment 2:
Based on the foregoing embodiment, this embodiment takes cooperative projection by four projection devices as an example and details the process in S42 of determining, from the image to be projected, the sub-images to be projected corresponding to the at least two projection devices respectively according to the relative positional relationship between the at least two projection areas and the target projection area.
After the control device controls the initial projection of D0, D1, D2 and D3 and determines the target projection area, the coordinate value of any pixel in the target projection area, the number of projectors projecting that pixel and the corresponding projection coordinate values can be obtained. Any pixel in the target projection area can then be represented in the following array format: (X, Y, N, X0, Y0, X1, Y1, X2, Y2, X3, Y3), where X, Y are the coordinate values of the pixel in the target projection area and N means that the pixel is projected jointly by N projectors: when N is 1 the pixel is projected by one projector, when N is 2 by two projectors jointly, when N is 3 by three projectors jointly, and when N is 4 by four projectors jointly. (X0, Y0) are D0's projection coordinate values: if the pixel is projected by D0, (X0, Y0) are the specific coordinate values of D0's projection; if not, X0 = -1 and Y0 = -1 are set for the pixel. Likewise, (X1, Y1) are D1's projection coordinate values, set to X1 = -1, Y1 = -1 if the pixel is not projected by D1; (X2, Y2) are D2's projection coordinate values, set to X2 = -1, Y2 = -1 if the pixel is not projected by D2; and (X3, Y3) are D3's projection coordinate values, set to X3 = -1, Y3 = -1 if the pixel is not projected by D3.
When the resolution of the target projection area is 1280×800 and the resolution of each projector device is 800×600, taking cooperative projection by four projection devices as an example:
The pixel (0, 0) in the target projection area is projected by projector D2, with corresponding coordinate values (0, 50) in D2, so the array corresponding to this pixel is (0, 0, 1, -1, -1, -1, -1, 0, 50, -1, -1).
The pixel (1280, 0) in the target projection area is projected by projector D3, with corresponding coordinate values (800, 0) in D3, so the array corresponding to this pixel is (1280, 0, 1, -1, -1, -1, -1, -1, -1, 800, 0).
The pixel (0, 800) in the target projection area is projected by projector D0, with corresponding coordinate values (60, 600) in D0, so the array corresponding to this pixel is (0, 800, 1, 60, 600, -1, -1, -1, -1, -1, -1).
The pixel (1280, 800) in the target projection area is projected by projector D1, with corresponding coordinate values (800, 500) in D1, so the array corresponding to this pixel is (1280, 800, 1, -1, -1, 800, 500, -1, -1, -1, -1).
The pixel (320, 400) in the target projection area is projected by projectors D0 and D2, with corresponding coordinate values (410, 0) in D0 and (380, 290) in D2, so the array corresponding to this pixel is (320, 400, 2, 410, 0, -1, -1, 380, 290, -1, -1).
目标投影区域内的像素点(640,400)由投影仪D0,D1,D2和D3进行投影,在D0对应的坐标值为(700,50),D1对应的坐标值为(50,50),D2对应的坐标值为(760,580),D3对应的坐标值为(50,560),则该像素点对应的数组为(640,400,4,700,50,50,50,760,580,50,560)。
控制设备读取显示***缓存的待投影图像,待投影图像可以用每个像素点的坐标值与其对应的颜色值表示,其中,颜色值以三原色(RGB)表示,进而,每个像素点的坐标值可以表示为(X,Y,R,G,B),待投影图像中每个像素点的坐标值(X,Y),与目标投影区域中每个像素点的坐标值(X,Y)一一对应,进而,将待投影图像上的每个像素点划分到每个投影设备中,得到每个投影设备对应的待投影子图像中的所有像素点,这里,可以采用如下的数组格式表示:
(X,Y,N,X0,Y0,R/N,G/N,B/N,X1,Y1,R/N,G/N,B/N,X2,Y2,R/N,G/N,B/N,X3,Y3,R/N,G/N,B/N,)。
关于D0(X0,Y0,R/N,G/N,B/N),如果X0=-1,Y0=-1,那么该像素点不被D0投影,如果X0>-1,Y0>-1,那么D0中坐标为(X0,Y0)的像素点需要被投影的像素值为(R/N,G/N,B/N),投影设备D0,初始化显示数组[X0,Y0,0,0,0],没有给像素点的舒适化颜色均为(0,0,0),对目标投影区域的每一个像素点做如上的投影设备的划分,将像素值填充到初始化数组中,可以得到最终数组[X0,Y0,R,G,B],该数组的坐标值及其对应的颜色值就是D0投影该像素点所要投影的像素值。
关于D1(X1,Y1,R/N,G/N,B/N),如果X1=-1,Y1=-1,那么该像素点不被D1投影,如果X0>-1,Y0>-1,那么D1中坐标为(X1,Y1)的像素点需要显示的像素为(R/N,G/N,B/N),投影设备D1,初始化显示数组[X1,Y1,0,0,0],没有给像素点的舒适化颜色均为(0,0,0),对目标投影区域的每一个像素点做如上的投影设备的划分,将颜色值填充到初始化数组中,可以得到最终数组[X1,Y1,R,G,B],该数组的坐标值及其对应的颜色值就是D1投影该像素点所要投影的像素值。
关于D2(X2,Y2,R/N,G/N,B/N),如果X2=-1,Y2=-1,那么该像素点不被D2上投影,如果X2>-1,Y2>-1,那么D2中坐标为(X2,Y2)的像素点需 要显示的像素值为(R/N,G/N,B/N),投影设备D2,初始化显示数组[X2,Y2,0,0,0],没有给像素点的舒适化颜色均为(0,0,0),对目标投影区域的每一个像素点做如上的投影设备的划分,将颜色值填充到初始化数组中,可以得到最终数组[X2,Y2,R,G,B],该数组的坐标值及其对应的颜色值就是D2投影该像素点所要投影的像素值。
关于D3(X3,Y3,R/N,G/N,B/N),如果X3=-1,Y3=-1,那么该像素点不被D3上投影,如果X3>-1,Y3>-1,那么D3中坐标为(X3,Y3)的点需要显示的颜色值为(R/N,G/N,B/N),投影设备D3,初始化显示数组[X3,Y3,0,0,0],没有给像素点的舒适化颜色均为(0,0,0),对目标投影区域的每一个像素点做如上的投影设备的划分,将颜色值填充到初始化数组中,可以得到最终数组[X3,Y3,R,G,B],该数组的坐标值及其对应的颜色值就是D3投影该像素点所要投影的像素值。
最终,控制设备控制D0、D1、D2以及D3投影各自对应的待投影子图像。
在实际应用中,本实施例中的投影设备可以是数字光处理(DLP,Digital Light Processing)投影仪,也可以是其他的投影仪。这里,以DLP投影仪为例,该投影仪包括如下几个关键部分:光源,DLP***的光源是由三盏LED灯泡组成,分别发出红RED,绿GREEN,蓝BLUE颜色的光,LED的亮度越高投影的画面亮度就越高;数字微镜元件(DMD,Digital Micro mirror Device),DMD是DLP投影***中的核心显示器件,它是由许多可以旋转的小镜子组成,小镜子按照像素的横列排列,每个小镜子和图像的每个像素对应,或者说图像的每个像素控制一个小镜子的偏转角度。当图像的每个像素的RGB数据分解出来后,控制LED的RGB灯分别开关,在R灯开启的时候小镜子根据R的数值做偏转,R数值越大镜子反射的光就越多,G,B灯打开同理,通过这个过程在一帧的图像中把RGB的正确亮度反射出去;镜头,把小镜子反射的光汇聚,然后根据焦距投射到幕布上,通过镜头中的镜片组实现不同的焦距,通过焦距可以调节在幕布的成像清晰度和大小;综上DLP投影***的原理,DMD上的图片数据会改变DMD的镜子翻转从而改变RGB光线的强弱,实现画面的各个像素颜色和亮度不 同,通过改变R,G,B三个LED的亮度,可以总体控制画面的色彩平衡。
在本实施例中,由于采用DLP投影仪可以实现像素值1:1的叠加效果,所以,通过多投影设备协同投影时,对于重叠区域对应的待投影图像的像素值,可以基于该区域内的投影设备数量进行均分处理,进而控制协同投影的图像的色彩平衡。
实施例三:
基于前述实施例,本实施以4个投影设备进行协同投影举例,详细说明S41中确定各个投影设备对应的投影区域,以及确定目标投影区域的过程。
这里,为了简化图像识别算法,各投影设备可以采用不同颜色的矩形图像进行初始化投影,例如,针对4个投影设备,D0表示第一投影设备,其投影颜色为红色的矩形区域;D1表示第二投影设备,其投影颜色为绿色的矩形区域;D2表示第三投影设备,其投影颜色为黄色的矩形区域;D3表示第四投影设备,其投影颜色为蓝色的矩形区域。在4个投影设备进行初始投影之后,控制设备控制摄像头采集到的初始投影图像如图5所示。其中,图中共有10种颜色,包括红色、绿色、黄色、蓝色以及六种混合颜色(图5中未示出)。
首先,结合图5和图6(图5中的六种混合颜色对应的区域分别为图6中的S1、S2、S3、S4、S5、S6),对控制设备确定四个投影设备对应的投射区域的过程进行说明。
确定D0对应的投影区域,判断S1-S6区域是否与红色区域相邻,取相邻区域,组成D0对应的投射区域,如图6中的D0'框内部分。
确定D1对应的投影区域,判断S1-S6区域是否与绿色区域相邻,取相邻区域,组成D1对应的投射区域,如图6中的D1'框内部分。
确定D2对应的投影区域,判断S1-S6区域是否与黄色区域相邻,取相邻区域,组成D2对应的投射区域,如图6中的D2'框内部分。
确定D3对应的投影区域,判断S1-S6区域是否与蓝色区域相邻,取相 邻区域,组成D3对应的投射区域,如图6中的D3'框内部分。
其次,根据四个投影设备D0、D1、D2以及D3对应的投影区域D0'、D1'、D2'以及D3',确定目标投影区域。
结合图7所示,控制设备对初始投影图像的投影边缘进行图像识别,识别出不同颜色所有的水平线和竖直线,每种颜色的水平线为两条,竖直线为两条。
目标投影区域上边界的确认:D0'区域中红色的两条水平线,取较高的一条。D1'区域中绿色的两条水平线,取较高的一条。两条线进行比较,取较低的一条作为目标投影区域的上边界,进行延长。
目标投影区域下边界的确认:D2'区域中颜色黄色的两条水平线,取较低的一条。D3'区域中颜色蓝色的两条水平线,取较低的一条。两条线进行比较,取较高的一条作为目标投影的区域下边界,进行延长。
目标投影区域左边界的确认:D0'区域中颜色红色的两条竖直线,取靠左的一条。D2'区域中颜色黄色的两条竖直线,取靠左的一条。两条线进行比较,取靠右的一条作为目标投影区域的左边界,进行延长。
目标投影区域右边界的确认:D1'区域中颜色绿色的两条竖直线,取靠右的一条。D3'区域中颜色蓝色的两条竖直线,取靠右的一条。两条线进行比较,取靠左的一条作为目标投影区域的右边界,进行延长。
最后,四条延长线组成的矩形区域就是目标投影区域,如图7中的P所指的框内部分。
进一步地,在S41确定了各个投影设备对应的投影区域,以及确定目标投影区域之后,仍旧以4个投影设备协同投影为例,对S42进行说明,同时,在本实施例中,当各投影区域之间出现重叠区域时,以先对待投影图像进行划分,再对划分后的图像进行处理,最终实现无重叠投影为例进行说明。
首先,控制设备根据四个投影区域与目标投影区域的相对位置关系,对待投影图像进行划分,获得四个投影设备各自对应的第五区域。
其次,控制设备建立目标投影区域中的各个投影区域与对应的各个第 五区域之间的映射关系;根据已建立的映射关系,从各个第五区域中确定各个重叠部分对应的待投影图像;根据重叠区域的投影设备数量,对相应的待投影图像的像素值进行处理,得到各个投影设备对应的待投影子图像。
参见图9所示,对各个投影设备对应的待投影子图像进行说明:
D0对应的待投影子图像包括下部分:
K0:投影为黑色;
S1:与D2重叠部分,获取到S1对应的待投影图像之后,将S1对应的待投影图像的像素值减半,得到处理后的S1对应的待投影图像;
S2:与D1重叠部分,获取到S2对应的待投影图像之后,将S2对应的待投影图像的像素值减半,得到处理后的S2对应的待投影图像;
S5:与D2、D3重叠部分,获取到S5对应的待投影图像之后,将S5对应的待投影图像的像素值减少至1/3,得到处理后的S5对应的待投影图像;
S6:与D1、D3重叠部分,获取到S6对应的待投影图像之后,将S6对应的待投影图像的像素值减少至1/3,得到处理后的S6对应的待投影图像;
其他部分对应的待投影图像的像素值不变。
D1对应的待投影子图像包括以下部分:
K1:投影为黑色;
S2:与D0重叠部分,获取到S2对应的待投影图像之后,将S2对应的待投影图像的像素值减半,得到处理后的S2对应的待投影图像;
S3:与D3重叠部分,获取到S3对应的待投影图像之后,将S3对应的待投影图像的像素值减半,得到处理后的S3对应的待投影图像;
S6:与D0、D3重叠部分,获取到S6对应的待投影图像之后,将S6对应的待投影图像的像素值减少至1/3,得到处理后的S6对应的待投影图像;
其他部分对应的待投影图像的像素值不变。
D2对应的待投影子图像包括以下部分:
K2:投影为黑色;
S1:与D0重叠部分,获取到S1对应的待投影图像之后,将S1对应的待投影图像的像素值减半,得到处理后的S1对应的待投影图像;
S4:与D3重叠部分,获取到S4对应的待投影图像之后,将S4对应的待投影图像的像素值减半,得到处理后的S4对应的待投影图像;
S5:与D0、D3重叠部分,获取到S5对应的待投影图像之后,将S5对应的待投影图像的像素值减少至1/3,得到处理后的S5对应的待投影图像;
其他部分对应的待投影图像的像素值不变。
D3对应的待投影子图像包括以下部分:
K3:投影为黑色;
S3:与D1重叠部分,获取到S3对应的待投影图像之后,将S3对应的待投影图像的像素值减半,得到处理后的S3对应的待投影图像;
S4:与D2重叠部分,获取到S4对应的待投影图像之后,将S4对应的待投影图像的像素值减半,得到处理后的S4对应的待投影图像;
S5:与D2、D0重叠部分,获取到S5对应的待投影图像之后,将S5对应的待投影图像的像素值减少至1/3,得到处理后的S5对应的待投影图像;
S6:与D1、D0重叠部分,获取到S6对应的待投影图像之后,将S6对应的待投影图像的像素值减少至1/3,得到处理后的S6对应的待投影图像;
其他部分对应的待投影图像的像素值不变。
最后,控制设备控制各个投影设备投影各自对应的待投影图像。
实施例四:
基于同一发明构思,在本实施例中,提供了一种投影装置,该投影装 置设置在控制设备上,配置为控制多投影设备协同投影的投影过程。
这里,参见图12所示,该装置100包括:获取模块101,配置为通过图像采集单元获取至少两个投影设备初始投影所形成的初始投影图像;第一处理模块102,配置为从初始投影图像中确定至少两个投影设备对应的至少两个投影区域,并将至少两个投影区域所组成的最大内接矩形区域确定为目标投影区域;第二处理模块103,配置为根据至少两个投影区域与目标投影区域的相对位置关系,从待投影图像中确定至少两个投影设备各自对应的待投影子图像;控制模块104,配置为分别控制至少两个投影设备投影各自对应的待投影子图像。
在本实施例中,第二处理模块103,还配置为根据至少两个投影区域与目标投影区域的相对位置关系,对待投影图像进行划分,得到至少两个投影设备各自对应的第一区域;分别将至少两个投影区域与目标投影区域之间未重叠的区域确定为至少两个投影设备各自对应的第二区域,第二区域内各个像素点的图像参数相同;针对各个投影设备,根据第一区域和第二区域得到各自对应的待投影子图像。
在本实施例中,第二处理模块103,还配置为将目标投影区域中的至少两个投影区域之间相互重叠的区域确定为各个第三区域;获取各个第三区域对应的投影设备数量;基于投影设备数量,对第一待投影图像进行处理;根据至少两个投影区域与目标投影区域的相对位置关系,对处理后的待投影图像进行划分,得到至少两个投影设备各自对应的第一区域。
在本实施例中,第二处理模块103,还配置为根据待投影图像与目标投影区域之间的映射关系,从待投影图像中,确定各个第三区域对应的第四区域;基于投影设备数量,对各个第三区域对应的第四区域中各个像素点对应的图像参数值进行均分。
在本实施例中,第二处理模块103,还配置为根据至少两个投影区域与目标投影区域的相对位置关系,对待投影图像进行划分,获得至少两个投影设备各自对应的第五区域;将目标投影区域中的至少两个投影区域之间相互重叠的区域确定为各个第六区域;获取各个第六区域对应的投影设备数量;基于投影设备数量,对至少两个投影设备各自对应的第五区域进行 处理,得到至少两个投影设备各自对应的第一区域。
实际应用时,所述获取模块101、第一处理模块102、第二处理模块103、控制模块104可由投影装置中的处理器实现。
这里需要指出的是:以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果,因此不做赘述。对于本公开装置实施例中未披露的技术细节,请参照本公开方法实施例的描述而理解,为节约篇幅,因此不再赘述。
Those skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, a system, or a computer program product. Accordingly, the present disclosure may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage and the like) containing computer-usable program code.
The present disclosure is described with reference to flowcharts and/or block diagrams of the method, the device (system) and the computer program product according to the embodiments of the present disclosure. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
On this basis, an embodiment of the present disclosure further provides a storage medium, specifically a computer-readable storage medium, on which a computer program is stored; when executed by a processor, the computer program implements the steps of the method of the embodiments of the present disclosure.
The above are merely preferred embodiments of the present disclosure and are not intended to limit the protection scope of the present disclosure.
Industrial Applicability
In the solution provided by the embodiments of the present disclosure, an initial projection image formed by the initial projections of at least two projection devices is first acquired; next, the projection area corresponding to each projection device and a target projection area are determined from the initial projection image; then, the sub-image to be projected corresponding to each projection device is determined from the image to be projected according to the relative positional relationship between each projection area and the target projection area; finally, each projection device is controlled to project its corresponding sub-image. This process requires no manual operation: cooperative projection by multiple projection devices is achieved purely through software control, which improves both the intelligence and the projection efficiency of cooperative multi-device projection.
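The third and fourth steps of this pipeline — splitting the image to be projected into per-device sub-images and equalising brightness in the overlaps — can be sketched as follows. This is a minimal illustration assuming the image has already been warped into target-area coordinates and each device's projection area is an axis-aligned rectangle inside the target area; the function name and rectangle representation are assumptions for illustration, not the disclosed implementation.

```python
import numpy as np

def split_sub_images(image, device_rects):
    """Divide an image (already mapped onto the target projection area)
    into per-device sub-images, equalising brightness where devices overlap.

    image        : (H, W) float array in target-area coordinates
    device_rects : per-device (top, left, height, width) inside that area
    """
    H, W = image.shape
    count = np.zeros((H, W), dtype=int)
    for t, l, h, w in device_rects:          # how many devices cover each pixel
        count[t:t + h, l:l + w] += 1
    # scale covered pixels by 1/count so the summed projection is uniform
    equalised = np.where(count > 0, image / np.maximum(count, 1), 0.0)
    return [equalised[t:t + h, l:l + w].copy() for t, l, h, w in device_rects]

# Two devices whose 2x2 areas share the middle column of a 2x3 target area:
# each device projects the shared column at half brightness (45.0), and the
# two contributions sum back to 90.0 on the screen.
subs = split_sub_images(np.full((2, 3), 90.0), [(0, 0, 2, 2), (0, 1, 2, 2)])
```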

Claims (12)

  1. A projection method, comprising:
    acquiring an initial projection image formed by initial projections of at least two projection devices;
    determining, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and determining a maximum inscribed rectangular area composed of the at least two projection areas as a target projection area;
    determining, from an image to be projected, sub-images to be projected respectively corresponding to the at least two projection devices according to a relative positional relationship between the at least two projection areas and the target projection area; and
    respectively controlling the at least two projection devices to project their corresponding sub-images to be projected.
  2. The method according to claim 1, wherein the determining, from the image to be projected, the sub-images to be projected respectively corresponding to the at least two projection devices according to the relative positional relationship between the at least two projection areas and the target projection area comprises:
    dividing the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain first areas respectively corresponding to the at least two projection devices;
    respectively determining areas where the at least two projection areas do not overlap the target projection area as second areas respectively corresponding to the at least two projection devices, image parameter values of all pixels within the second areas being identical; and
    obtaining the sub-images to be projected respectively corresponding to the at least two projection devices from the first areas and the second areas.
  3. The method according to claim 2, wherein the dividing the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain the first areas respectively corresponding to the at least two projection devices, comprises:
    determining areas within the target projection area where the at least two projection areas overlap one another as third areas;
    obtaining a number of projection devices corresponding to each of the third areas;
    processing the image to be projected based on the number of projection devices; and
    dividing the processed image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain the first areas respectively corresponding to the at least two projection devices.
  4. The method according to claim 3, wherein the processing the image to be projected based on the number of projection devices comprises:
    determining, from the image to be projected, fourth areas corresponding to the third areas according to a mapping relationship between the image to be projected and the target projection area; and
    evenly dividing, based on the number of projection devices, image parameter values of pixels in the fourth areas corresponding to the third areas.
  5. The method according to claim 2, wherein the dividing the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain the first areas respectively corresponding to the at least two projection devices, comprises:
    dividing the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain fifth areas respectively corresponding to the at least two projection devices;
    determining areas within the target projection area where the at least two projection areas overlap one another as sixth areas;
    obtaining a number of projection devices corresponding to each of the sixth areas; and
    processing, based on the number of projection devices, the fifth areas respectively corresponding to the at least two projection devices, to obtain the first areas respectively corresponding to the at least two projection devices.
  6. A projection apparatus, comprising:
    an acquisition module, configured to acquire, through an image capture unit, an initial projection image formed by initial projections of at least two projection devices;
    a first processing module, configured to determine, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and to determine a maximum inscribed rectangular area composed of the at least two projection areas as a target projection area;
    a second processing module, configured to determine, from an image to be projected, sub-images to be projected respectively corresponding to the at least two projection devices according to a relative positional relationship between the at least two projection areas and the target projection area; and
    a control module, configured to respectively control the at least two projection devices to project their corresponding sub-images to be projected.
  7. The apparatus according to claim 6, wherein the second processing module is further configured to: divide the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain first areas respectively corresponding to the at least two projection devices; respectively determine areas where the at least two projection areas do not overlap the target projection area as second areas respectively corresponding to the at least two projection devices, image parameter values of all pixels within the second areas being identical; and obtain the sub-images to be projected respectively corresponding to the at least two projection devices from the first areas and the second areas.
  8. The apparatus according to claim 7, wherein the second processing module is further configured to: determine areas within the target projection area where the at least two projection areas overlap one another as third areas; obtain a number of projection devices corresponding to each of the third areas; process the image to be projected based on the number of projection devices; and divide the processed image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain the first areas respectively corresponding to the at least two projection devices.
  9. The apparatus according to claim 8, wherein the second processing module is further configured to: determine, from the image to be projected, fourth areas corresponding to the third areas according to a mapping relationship between the image to be projected and the target projection area; and evenly divide, based on the number of projection devices, image parameter values of pixels in the fourth areas corresponding to the third areas.
  10. The apparatus according to claim 7, wherein the second processing module is further configured to: divide the image to be projected according to the relative positional relationship between the at least two projection areas and the target projection area, to obtain fifth areas respectively corresponding to the at least two projection devices; determine areas within the target projection area where the at least two projection areas overlap one another as sixth areas; obtain a number of projection devices corresponding to each of the sixth areas; and process, based on the number of projection devices, the fifth areas respectively corresponding to the at least two projection devices, to obtain the first areas respectively corresponding to the at least two projection devices.
  11. A projection system, comprising:
    a camera, configured to capture an initial projection image formed by initial projections of at least two projection devices; a control device, configured to determine, from the initial projection image, at least two projection areas corresponding to the at least two projection devices, and to determine a maximum inscribed rectangular area composed of the at least two projection areas as a target projection area, to determine, from an image to be projected, sub-images to be projected respectively corresponding to the at least two projection devices according to a relative positional relationship between the at least two projection areas and the target projection area, and to respectively control the at least two projection devices to project their corresponding sub-images to be projected; and the at least two projection devices, configured to perform the initial projections and further configured to project their respective corresponding sub-images to be projected.
  12. A storage medium, on which a computer program is stored, wherein when the computer program is executed by a processor, steps of the method according to any one of claims 1 to 5 are implemented.
PCT/CN2018/076995 2017-04-24 2018-02-23 Projection method, apparatus and system, and storage medium WO2018196472A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710271996.3 2017-04-24
CN201710271996.3A CN108737799A (zh) 2017-04-24 2017-04-24 Projection method, apparatus and system

Publications (1)

Publication Number Publication Date
WO2018196472A1 true WO2018196472A1 (zh) 2018-11-01

Family

ID=63919403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/076995 WO2018196472A1 (zh) 2017-04-24 2018-02-23 Projection method, apparatus and system, and storage medium

Country Status (2)

Country Link
CN (1) CN108737799A (zh)
WO (1) WO2018196472A1 (zh)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090033874A1 * 2007-07-31 2009-02-05 Richard Aufranc System and method of projecting an image using a plurality of projectors
CN104516482A * 2013-09-26 2015-04-15 北京天盛世纪科技发展有限公司 Shadowless projection system and method
CN105681772A * 2014-12-04 2016-06-15 Canon Inc. Display control apparatus and control method therefor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533276B * 2013-10-21 2017-01-18 Beijing Institute of Technology Fast stitching method for planar multi-projection
JP6456086B2 * 2014-09-25 2019-01-23 Canon Inc. Projection-type image display apparatus and control method thereof, and projector and control method thereof
CN105912101B * 2016-03-31 2020-08-25 Lenovo (Beijing) Co., Ltd. Projection control method and electronic device


Also Published As

Publication number Publication date
CN108737799A (zh) 2018-11-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18790934; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18790934; Country of ref document: EP; Kind code of ref document: A1)