WO2022011817A1 - A three-dimensional sphere-oriented visualization system - Google Patents

A three-dimensional sphere-oriented visualization system

Info

Publication number
WO2022011817A1
WO2022011817A1 (PCT/CN2020/114880)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
sphere
image
area
dimensional
Prior art date
Application number
PCT/CN2020/114880
Other languages
English (en)
French (fr)
Inventor
范湘涛
朱俊杰
杜小平
简洪登
刘健
阎福礼
Original Assignee
Aerospace Information Research Institute, Chinese Academy of Sciences (中国科学院空天信息创新研究院)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aerospace Information Research Institute, Chinese Academy of Sciences (中国科学院空天信息创新研究院)
Publication of WO2022011817A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/296 Synchronisation thereof; Control thereof
    • H04N 13/30 Image reproducers
    • H04N 13/363 Image reproducers using image projection screens
    • H04N 13/388 Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141 Constructional details thereof
    • H04N 9/3147 Multi-projection systems

Definitions

  • The present invention relates to the technical field of virtual projection, and in particular to a three-dimensional sphere-oriented visualization system.
  • A 3D earth display system generally consists of a software environment and a hardware environment. The software environment is mainly responsible for functions such as digital prototype display, product assemblability analysis, data visualization, and human-machine efficacy analysis, while the hardware environment mainly provides multi-channel stereoscopic display, human-computer interaction data collection, and similar functions.
  • Projector-based spherical projection has become a popular display mode: multiple projectors project images onto a sphere from different angular positions, forming a three-dimensional earth display system.
  • Current large-scale three-dimensional sphere projection systems have mature solutions. Their common feature is that multiple projectors are placed on the globe's equatorial plane, with the principal projection rays lying in that plane, so that image data is projected onto the spherical surface. This is the simplest and most practical design: after the computer renders the earth data, it is projected onto the three-dimensional sphere with no projection-induced image deformation, rendering is simple, and complicated earth orientations need not be considered. In practice, however, many physical constraints can prevent the projectors from being placed on the equatorial plane, and for this situation the projection systems currently on the market are powerless.
  • To overcome the above problem, embodiments of the present application provide a three-dimensional sphere-oriented visualization system.
  • In a first aspect, the present application provides a three-dimensional sphere-oriented visualization system, comprising: a projection display unit, including a physical three-dimensional sphere and at least one physical projector, wherein the at least one physical projector is arranged at an arbitrary position around the physical sphere; a graphics generator unit, configured to obtain a virtual three-dimensional scene from the received spherical parameters of the physical sphere and the position information of the physical sphere and the at least one physical projector, wherein the virtual scene includes a virtual three-dimensional sphere and at least one virtual camera, the volume ratio of the virtual sphere to the physical sphere is the same as the ratio of each virtual camera's distance from the virtual sphere's surface to the corresponding physical projector's distance from the physical sphere's surface, and the positional relationship between the virtual cameras and the virtual sphere is the same as that between the physical projectors and the physical sphere; the graphics generator unit, after generating a first image on the virtual sphere, controls the at least one virtual camera to photograph the virtual sphere, obtaining at least one second image; and a fusion display unit, configured to perform edge-overlap fusion processing on the second images obtained by adjacent virtual cameras and, after obtaining at least one third image, to send it to the at least one physical projector corresponding to the at least one virtual camera, so that each physical projector projects the image it receives onto the physical three-dimensional sphere.
  • In another possible implementation, the fusion display unit is further configured to receive the second images sent by the at least one virtual camera through multiple channels, and to send the at least one third image through multiple channels to the at least one physical projector corresponding to the at least one virtual camera.
  • In another possible implementation, the fusion display unit is specifically configured to determine whether the second images captured by any two adjacent virtual cameras contain a region with identical image content; when such a region exists, to split it along its midline into a first region and a second region; and to delete the first region from the second image captured by one of the two adjacent virtual cameras and the second region from the second image captured by the other.
  • In a second aspect, the present application further provides a three-dimensional sphere-oriented visualization system for the case where at least one physical projector is arranged at an arbitrary position around a physical three-dimensional sphere, comprising: a communication unit, configured to receive the spherical parameters of the physical sphere and the position information of the physical sphere and the at least one physical projector; and a processing unit, configured to obtain, from those parameters and position information, the first image that each physical projector is to project; the communication unit is further configured to send each first image to the corresponding physical projector.
  • In another possible implementation, the processing unit is specifically configured to obtain a virtual three-dimensional scene from the received spherical parameters of the physical sphere and the position information of the physical sphere and the at least one physical projector, wherein the virtual scene includes a virtual three-dimensional sphere and at least one virtual camera, the volume ratio of the virtual sphere to the physical sphere is the same as the ratio of each virtual camera's distance from the virtual sphere's surface to the corresponding physical projector's distance from the physical sphere's surface, and the positional relationship between the virtual cameras and the virtual sphere is the same as that between the physical projectors and the physical sphere; and, after a second image is generated on the virtual sphere, to control the at least one virtual camera to photograph the virtual sphere, obtaining at least one first image.
  • In another possible implementation, the processing unit is further configured to determine whether the second images captured by any two adjacent virtual cameras contain a region with identical image content; when such a region exists, to split it along its midline into a first region and a second region; and to delete the first region from the second image captured by one of the two adjacent virtual cameras and the second region from the second image captured by the other.
  • In another possible implementation, the communication unit is configured to receive a user instruction for zooming in on a partial area of the second image; the processing unit is further configured to display a third image on the virtual sphere, the third image being the second image after the partial area has been enlarged.
  • In another possible implementation, the communication unit is configured to receive display data associated with a partial area of the second image; the processing unit is further configured to display those data on the corresponding partial area of the image.
  • In another possible implementation, the communication unit is configured to receive a viewpoint operation instruction, namely an instruction clicking a first area of the second image, the second image including the first area; the processing unit is further configured to display on the first area content different from what was displayed there before the click.
  • In another possible implementation, the processing unit is further configured to add N virtual cameras to the virtual scene to photograph the blind areas of the virtual sphere not covered by the at least one virtual camera, where N is a positive integer greater than zero.
  • In a third aspect, the present application further provides a method for three-dimensional sphere visualization, comprising: acquiring the spherical parameters of a physical three-dimensional sphere and the position information of the physical sphere and of at least three physical projectors; obtaining, from those parameters and position information, a virtual three-dimensional scene, the virtual scene including a virtual three-dimensional sphere and at least three virtual cameras, wherein the volume ratio of the virtual sphere to the physical sphere is the same as the ratio of each virtual camera's distance from the virtual sphere's surface to the corresponding physical projector's distance from the physical sphere's surface, and the positional relationship between the virtual cameras and the virtual sphere is the same as that between the physical projectors and the physical sphere; generating a first image on the virtual sphere; controlling the three virtual cameras to photograph the virtual sphere; and transmitting the second image captured by each virtual camera to the corresponding physical projector, wherein the second images captured by the virtual cameras together constitute the first image.
  • In another possible implementation, before the images captured by the virtual cameras are transmitted to the corresponding physical projectors, the method includes: determining whether the second images captured by a first virtual camera and a second virtual camera, the two cameras being adjacent, contain a region with identical image content; when such a region exists, splitting it along its midline into a first region and a second region; and deleting the first region from the second image captured by the first virtual camera and the second region from the second image captured by the second virtual camera.
  • In another possible implementation, the method further includes receiving a first instruction for zooming in or out on the first image on the virtual sphere; before the three virtual cameras photograph the virtual sphere, the method includes enlarging or shrinking, according to the first instruction, the content displayed on part or all of the first image.
  • In another possible implementation, the method further includes receiving first data associated with at least one area of the first image on the virtual sphere; before the three virtual cameras photograph the virtual sphere, the method includes displaying the corresponding data on that at least one area of the first image according to the first data.
  • In a fourth aspect, the embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to execute any of the possible implementations of the third aspect.
  • In a fifth aspect, the embodiments of the present application further provide a computing device, including a memory and a processor, wherein executable code is stored in the memory; when the processor executes the executable code, any of the possible implementations of the third aspect is carried out.
  • By acquiring the spherical parameters and position information of the physical three-dimensional sphere and the positions of the multiple physical projectors, the present application simulates a virtual scene identical to the real one, has the virtual cameras photograph the image on the virtual sphere, obtains the image each physical projector is to project, and sends it to that projector; finally, each physical projector projects the image it receives onto the physical sphere, yielding an accurate and clear three-dimensional image. There is therefore no strict requirement on the positional relationship between the physical projectors and the physical sphere: they can be placed freely, which greatly broadens the application scenarios and flexibility of three-dimensional projection technology.
  • FIG. 1 is an architecture diagram of a three-dimensional sphere-oriented visualization system provided by an embodiment of the present application;
  • FIG. 2 is an architecture diagram of a projection display scene provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the overall dimensions of a projection display scene provided by an embodiment of the present application;
  • FIG. 4 is a side view of an optical-path design scene provided by an embodiment of the present application;
  • FIG. 5 is a top view of an optical-path design scene provided by an embodiment of the present application;
  • FIG. 6 is a structural diagram of a three-dimensional sphere-oriented visualization system provided by an embodiment of the present application;
  • FIG. 7 is a working flowchart of the three-dimensional sphere-oriented visualization system provided by an embodiment of the present application;
  • FIG. 8 is a working flowchart of the communication unit provided by an embodiment of the present application;
  • FIG. 9 is a schematic flowchart of a method for three-dimensional sphere visualization provided by an embodiment of the present application.
  • FIG. 1 is an architecture diagram of a three-dimensional sphere-oriented visualization system provided by an embodiment of the present application. As shown in FIG. 1, the system includes: a projection display unit 11, a graphics generator unit 12, and a fusion display unit 13.
  • The projection display unit 11 mainly includes at least one projector and a three-dimensional sphere. Preferably, the present application adopts a four-channel front-projection fusion display equipped with four high-resolution, high-brightness laser-light-source projectors, which can meet the needs of different working modes, as shown in FIG. 2.
  • The most important aspect of the projection display unit 11 is the optical path design. This application uses professional design-tool software, combined with the site constraints and the technical requirements of the bidding documents, to produce an interactive three-dimensional preliminary design of this display system; all system data of this design are reliable, accurate, and usable in engineering.
  • Among the system's main components, the projector is a laser projector. The choice of the three-dimensional sphere is very important: professional spherical display places very high demands on many aspects of image quality. For example, displaying static image detail requires the system to provide very high dynamic contrast, and the key factor determining the system's dynamic contrast is the screen's properties.
  • It should be noted that system dynamic contrast is different from the commonly quoted full-black/full-white contrast of a stand-alone projector. Stand-alone full-black/full-white contrast is obtained by measuring the pure-white and pure-black signal luminance separately, at different times, under ideal conditions (a particular lens, a particular brightness setting, front projection, a very small image, and so on) and dividing the two; it has little practical significance. What really matters to the user is the maximum contrast the human eye can actually perceive, and the degree of detail rendered, under real engineering constraints. These require a suitable screen. Therefore, as the three-dimensional sphere we use Baixue's high-contrast projection dome made of a polymer carbon-fiber resin matrix material.
  • The projection dome's features are as follows:
  • 1. High-contrast screen. Under given conditions, high system dynamic contrast can display richer image detail, and the key factors in a display system's dynamic contrast are the screen type and its properties. In practical engineering, system contrast and system brightness are interrelated and mutually constraining, so a reasonable trade-off between the two must be made at the system design stage.
  • 2. Resistance to deformation. Because screen flatness affects display accuracy, high-end display systems require the highest possible screen flatness. The Baixue projection dome uses polymer carbon-fiber resin as its matrix material, with a flatness error of 1‰, which ensures long-term flatness even at relatively large screen sizes. In long-term use the screen's flatness is also affected by the soundness of the mechanical structure design and by the environment, and further deformation may occur; a customized mechanical support structure ensures no deformation under long-term use, and the screen's properties are optimally matched to the system's projectors.
  • This application adopts a four-channel front-projection fusion display equipped with four high-resolution, high-brightness laser-light-source projectors, which can meet the needs of different working modes. For example, a single DP E-Vision Laser 8500 WUXGA laser projector delivers a maximum luminous flux of 9000 lumens at a physical resolution of 1920x1200. The light source's life reaches 20,000 hours, with brightness decaying by less than 50% of the initial brightness over that life.
  • As shown in FIG. 3, the projection display unit 11 uses four projectors for front-projection fusion display, with the following specific parameters: size: three-dimensional sphere of 2600 mm diameter; blind-area size: 44.6 mm height at the bottom; fusion zone: 40%; system resolution: 5376x1200.
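  • As a quick check (not part of the patent text), the stated system resolution is consistent with reading "fusion zone: 40%" as each adjacent pair of 1920-pixel channels overlapping by 40% of one channel's width:

```python
# A minimal sketch verifying the stated system resolution under the assumed
# reading of the 40% fusion zone: adjacent 1920-px channels overlap by
# 0.40 * 1920 = 768 px.

def blended_width(channel_px: int, channels: int, overlap_fraction: float) -> int:
    """Total horizontal pixels after edge-blending adjacent channels."""
    overlap_px = overlap_fraction * channel_px
    return int(channels * channel_px - (channels - 1) * overlap_px)

print(blended_width(1920, 4, 0.40))  # -> 5376, matching the stated 5376x1200
```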
  • The optical path of the projection display unit 11 is the most critical link in system integration: the optical path design determines the projection positions, the blending of the projections, their brightness, and their coverage. By combining spatial geometric calculation, computer three-dimensional simulation, and experiment, the present application proposes a scheme that mounts the projectors above 60 degrees north latitude, as shown in FIG. 4 and FIG. 5, breaking through the limitation of existing commercial systems, which can only mount the projectors at equatorial height.
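  • The patent gives no formulas for this mounting scheme, but a minimal geometric sketch of the idea, placing each projector on a "latitude 60 degrees north" circle of an assumed mounting sphere and reading off its Cartesian coordinates, might look as follows; the mounting radius, the sphere-centred frame, and the four-projector spacing are illustrative assumptions:

```python
import math

# Sketch: Cartesian position of a projector mounted on a sphere of radius
# mount_radius_m centred at the globe's centre, at geographic-style
# latitude/longitude angles. All specific values are assumptions.

def projector_position(lat_deg: float, lon_deg: float, mount_radius_m: float):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = mount_radius_m * math.cos(lat) * math.cos(lon)
    y = mount_radius_m * math.cos(lat) * math.sin(lon)
    z = mount_radius_m * math.sin(lat)
    return (x, y, z)

# Four projectors spaced 90 degrees apart in longitude, above 60 degrees north:
positions = [projector_position(60.0, lon, 3.0) for lon in (0.0, 90.0, 180.0, 270.0)]
```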
  • The graphics generator unit 12 refers primarily to a graphics workstation. A graphics workstation generates the graphics and image signals and provides the computing power required for real-time rendering. The graphics generator unit 12 is a key hardware part of the platform and the software support platform of the whole system: all software the user needs for design must be installed on it. As an example, this application uses an HP high-end Z8G4 custom workstation configured with Nvidia's current high-end graphics cards, a Quadro RTX6000 (24 GB of video memory) and a Quadro P2000 (5 GB of video memory).
  • The graphics generator unit 12 receives the spherical parameters of the three-dimensional sphere (data such as its radius, surface area, and shape) together with the position information of the sphere and of each projector, including: each projector's projection pitch angle; the angle between the projection direction of each projector's light-source centre and the line from that centre to the sphere's centre; each projector's coordinates in a coordinate system whose origin is the sphere's centre; the distance from each light-source centre to the sphere's surface; the positional relationships among the projectors; and the azimuth and heading angles of each light-source centre relative to the sphere's centre. From these data it simulates a three-dimensional scene identical to the real scene at a different scale: the distance-ratio values (the ratio between the physical and virtual spheres, the ratio between each projector's distance to the physical sphere and the corresponding camera's distance to the virtual sphere, and the ratio between inter-projector and inter-camera distances) are all the same, while angular values such as the included angles, azimuth angles, and heading angles of the projectors and cameras are kept identical.
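  • A minimal sketch of this scale-preserving construction, assuming positions are expressed in a frame centred on the sphere (the function name, units, and example values are illustrative, not from the patent): uniform scaling by a single factor keeps every distance ratio and every angle unchanged.

```python
import numpy as np

# Sketch: map physical projector positions (sphere-centred frame, metres)
# to virtual camera positions with one uniform scale factor, preserving all
# distance ratios and angles.

def to_virtual(physical_pos: np.ndarray, r_physical: float, r_virtual: float) -> np.ndarray:
    return physical_pos * (r_virtual / r_physical)

projectors_m = np.array([[3.0, 0.0, 2.6], [0.0, 3.0, 2.6]])        # assumed layout
cameras = to_virtual(projectors_m, r_physical=1.3, r_virtual=1.0)  # virtual units
```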
  • The graphics generator unit 12 then generates the image to be presented on the surface of the virtual sphere and controls each virtual camera to photograph that surface; each acquired image is a part of the image generated on the virtual sphere's surface.
  • Optionally, multiple virtual slave cameras are added to the virtual three-dimensional scene, each observing the virtual sphere from a different orientation; the slave cameras' positions are adjusted so that no blind angle remains, and the images they observe are rendered into multiple windows. The frame image in each window is then transmitted to a projector, which projects the picture onto the dome screen, synchronized for display on the physical sphere.
  • When performing image stitching and fusion, the fusion display unit 13 must support multi-channel, arbitrary-edge, and multi-edge overlay fusion processing, unaffected by the size of the fusion zone.
  • During blanking fusion, blanking adjustments affect only the edges of the projected image; they are used to place the projected image within a frame on the screen and to hide or mask unwanted information (or noise). The blanking image is divided into four parts: top blanking, bottom blanking, left blanking, and right blanking.
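  • A minimal sketch of the four blanking regions (the margin widths and array layout are assumptions for illustration):

```python
import numpy as np

# Sketch: black out pixel margins at the top, bottom, left and right of a
# projected frame, mirroring the four blanking parts named above.

def apply_blanking(frame: np.ndarray, top=0, bottom=0, left=0, right=0) -> np.ndarray:
    """Zero the given pixel margins of an H x W x C frame in place."""
    h, w = frame.shape[:2]
    if top:
        frame[:top] = 0
    if bottom:
        frame[h - bottom:] = 0
    if left:
        frame[:, :left] = 0
    if right:
        frame[:, w - right:] = 0
    return frame

frame = np.full((1200, 1920, 3), 255, dtype=np.uint8)
apply_blanking(frame, top=8, bottom=8, left=12, right=12)
```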
  • The configured fusion display system consists mainly of Daishi Electronics fusion machines. Preferably, the fusion display unit 13 in this application consists mainly of the Daishi Electronics MM5000-B Rainbow series fusion machine. The MM5000-B Rainbow series fusion machine uses a 2U chassis and is fitted with four DP input/output boards supporting four DisplayPort inputs and four DisplayPort outputs. The four outputs are connected to the four projectors; the output board supports four further outputs that can optionally be connected to monitoring displays, and four monitors are normally connected to the four outputs of the Nvidia Quadro P2000 graphics card in the image-generation (IG) subsystem. The workstation's two graphics cards together can be extended to output eight channels at WUXGA 1920x1200 resolution, with matching output and pre-operations specified by the user-side software.
  • The images of the virtual sphere's surface captured by adjacent virtual cameras necessarily share a certain overlapping area, which must be processed; otherwise the overlapping area of adjacent images will appear blurred when the projectors project them. After two adjacent virtual cameras acquire two images (image 1 and image 2), the midline method is used: the overlapping area of the two images is determined and split down the middle into two regions, after which region 2 is deleted from image 1 and region 1 is deleted from image 2, yielding the processed image 1 and image 2.
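  • A minimal sketch of the midline method just described, under the simplifying assumption that the overlap is a vertical strip of known pixel width on the facing edges of two side-by-side images (a real system would derive the overlap from the calibrated camera frusta):

```python
import numpy as np

# Sketch: img1's right edge overlaps img2's left edge by overlap_px columns.
# Split the shared strip at its midline: img1 keeps its half (deleting
# "region 2"), img2 keeps its half (deleting "region 1").

def midline_crop(img1: np.ndarray, img2: np.ndarray, overlap_px: int):
    half = overlap_px // 2
    out1 = img1[:, : img1.shape[1] - half]
    out2 = img2[:, half:]
    return out1, out2
```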
  • Finally, after the graphics generator unit 12 sends the processed images to the corresponding projectors, the projectors project them onto the physical sphere, realizing spherical-screen projection and a three-dimensional spherical presentation of the images.
  • In addition, the graphics generator unit 12 can output to the display ports through multiple channels without introducing delay, fully achieving a 60 Hz refresh rate. Technically, parallel computing and processing are adopted, and the device parameters are calculated and saved in advance, which reduces the amount of computation and thereby raises the processing speed and refresh rate.
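  • A minimal sketch of the precompute-and-cache idea (the parameter set and names are hypothetical; the patent does not specify which quantities are cached):

```python
from functools import lru_cache
import math

# Sketch: derive fixed per-projector quantities once and cache them, so the
# per-frame loop does no redundant computation.

@lru_cache(maxsize=None)
def fixed_params(projector_id: int, distance_m: float, fov_deg: float):
    """Run-once derivation of quantities that never change at runtime,
    e.g. the half-width of the projected footprint at the throw distance."""
    half_width = distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return (projector_id, half_width)

# Per-frame code simply looks the cached values up:
params = [fixed_params(i, 3.0, 30.0) for i in range(4)]
```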
  • In the embodiments of the present application, the system may further include a line integration unit. Many signal sources need to be displayed (image signals, network signals, monitor signals, and so on), and these signals are not all displayed at once but selectively, according to particular procedures or requirements. Likewise, the system's various working modes are realized by switching. A large-scale sphere display platform therefore requires professional design and planning, for example:
  • 1. Electromagnetic compatibility design. Electromagnetic interference sources, coupling paths, and sensitive equipment are the keys to solving a system's electromagnetic compatibility problems; interference requires all three elements, so the approach starts from them, and eliminating one or two of the three satisfies the electromagnetic compatibility requirements. EMC system design means carefully anticipating, during product design, the electromagnetic compatibility problems that may arise and taking measures from the outset to avoid them, so combined circuit and structural measures can be adopted; this approach usually resolves 90% of EMC issues before the final product is complete. In the EMC design here, the main task is electromagnetic shielding, that is, cutting off the coupling path.
  • 2. Design of the system cabling. The cable network must not only meet the system's connection requirements and ensure signal transmission, but also respect the system's electromagnetic compatibility by reducing unnecessary coupling. In this application the cable network is divided mainly into network, data, control, and video cables, with the different types laid separately within the same duct. According to the equipment's actual usage requirements, signal wiring diagrams and the specifications of the various signal lines are provided, and signal cable ducts are reserved.
  • Grounding wire: the grounding wire is RV4.0 polyvinyl-chloride-insulated flexible wire with a DC resistance of 5 mΩ/m, which fully meets the military grounding-resistance standard of 0.1 Ω (at 5 mΩ/m, a run of up to 20 m stays within the 0.1 Ω budget).
  • Grounding of the power supply: grounding the power supply and connecting the workbench and cabinet shells to ground ensures personal electrical safety. With a grounding resistance of 4 Ω it fully meets the protective-grounding safety requirements for low-voltage systems; and because the neutral line is insulated from ground in this scheme, an ordinary chassis fault will not cause a power cut, satisfying the requirement that power must not be interrupted during operation.
  • With the above measures, the system's electromagnetic compatibility and radiation-protection performance meet the requirements.
  • By combining the above three hardware units, the present application, after acquiring the spherical parameters and position information of the physical three-dimensional sphere and the positions of the multiple physical projectors, simulates a virtual scene identical to the real one and has the virtual cameras photograph the image on the virtual sphere, obtaining the image each physical projector is to project and sending it to that projector. Each physical projector then projects the image it receives onto the physical sphere, producing an accurate and clear display. There is therefore no strict requirement on the positional relationship between the physical projectors and the physical sphere: they can be placed freely, which greatly broadens the application scenarios and flexibility of three-dimensional projection technology.
  • FIG. 6 is a structural diagram of a three-dimensional sphere-oriented visualization system provided by an embodiment of the present application. As shown in FIG. 6 , the system 600 includes: a communication unit 601 and a processing unit 602 .
  • The communication unit 601 is used to receive the spherical parameters of the three-dimensional sphere (data such as its radius, surface area, and shape) and the position information of the sphere and of each projector, including: each projector's projection pitch angle; the angle between the projection direction of each projector's light-source centre and the line from that centre to the sphere's centre; each projector's coordinates in a coordinate system whose origin is the sphere's centre; the distance from each light-source centre to the sphere's surface; the positional relationships among the projectors; and the azimuth and heading angles of each light-source centre relative to the sphere's centre.
  • The processing unit 602 is configured to simulate, from the data obtained above, a three-dimensional scene identical to the real scene at a different scale. That is, the distance-ratio values (the ratio between the physical and virtual spheres, the ratio between each projector's distance to the physical sphere and the corresponding camera's distance to the virtual sphere, and the ratio between inter-projector and inter-camera distances) are all the same, while angular values such as included angles, azimuth angles, and heading angles of the projectors and cameras are kept identical. It then generates the image to be presented on the surface of the virtual sphere and controls each virtual camera to photograph that surface; each acquired image is part of the image generated on the virtual sphere's surface.
  • Optionally, since the images of the virtual sphere's surface captured by adjacent virtual cameras necessarily share an overlapping area, that area must be processed, or it will appear blurred during projection. After two adjacent virtual cameras acquire two images (image 1 and image 2), the processing unit 602 uses the midline method: it determines the overlapping area of the two images, splits it down the middle into two regions, deletes region 2 from image 1 and region 1 from image 2, and obtains the processed image 1 and image 2.
  • Finally, the processing unit 602 sends the processed images to the corresponding projectors, which project them onto the physical sphere, realizing spherical-screen projection and a three-dimensional spherical presentation of the images.
  • Optionally, the communication unit 601 can output to the display ports through multiple channels without introducing delay, fully achieving a 60 Hz refresh rate; technically, parallel computing and processing are used, and device parameters are calculated and saved in advance, reducing the amount of computation and thereby raising processing speed and refresh rate.
  • In one possible embodiment, the communication unit 601 also receives a user instruction to zoom in or out on the virtual image; the processing unit 602 enlarges or shrinks the image on the virtual sphere accordingly, and after the virtual cameras capture the resized image it is sent to the corresponding projectors, so that the resized image is displayed synchronously on the physical sphere.
  • In one possible embodiment, the communication unit 601 also receives data input by a server, another terminal, or a user, namely data that explains or labels a partial area of the image displayed on the virtual sphere. Based on these data, the processing unit 602 adds explanatory or identifying text, patterns, and the like to the corresponding partial area of the image on the virtual sphere. The virtual cameras then capture the annotated image and send it to the corresponding projectors, so that the annotated image is displayed synchronously on the physical sphere.
  • In one possible embodiment, the communication unit 601 also receives a user instruction to operate on the image, and the processing unit 602 changes the image displayed on the virtual sphere according to that instruction. After the virtual cameras capture the changed image, it is sent to the corresponding projectors, so that the changed image is displayed synchronously on the physical sphere.
  • As shown in FIG. 7, the specific implementation process of the system is as follows:
  • Step S701: create network communication. Specifically, at startup a Qt UDP communication socket is created, bound to the network data receiving and sending ports, and joined to the network multicast group.
  • Step S702: construct the virtual three-dimensional scene.
  • Specifically, the system receives the spherical parameters of the three-dimensional sphere (radius, surface area, shape, and similar data) and the position information of the sphere and each projector, including: each projector's projection pitch angle; the angle between the projection direction of each projector's light-source centre and the line from that centre to the sphere's centre; each projector's coordinates in a coordinate system whose origin is the sphere's centre; the distance from each light-source centre to the sphere's surface; the positional relationships among the projectors; and the azimuth and heading angles of each light-source centre relative to the sphere's centre. From these data it simulates a three-dimensional scene identical to the real scene at a different scale: the distance-ratio values (physical sphere to virtual sphere, each projector's distance to the physical sphere against each camera's distance to the virtual sphere, and inter-projector against inter-camera distances) are all the same, while angular values such as included angles, azimuth angles, and heading angles are kept identical. It then generates the image to be presented on the surface of the virtual sphere and controls each virtual camera to photograph that surface; each acquired image is part of the image generated on the virtual sphere's surface.
  • Step S703: load the 3D digital earth data.
  • Specifically, a 3D digital globe is displayed on the surface of the virtual sphere, and each virtual camera is controlled to photograph the virtual sphere's surface, obtaining multiple images.
  • Optionally, multiple virtual slave cameras are added to the virtual three-dimensional scene, each observing the virtual sphere from a different orientation; their positions are adjusted so that no blind angle remains, and the images they observe are rendered into multiple windows. The frame image in each window is then transmitted to a projector, which projects the picture onto the dome screen, synchronized for display on the physical sphere.
  • Step S704: output the images obtained by each virtual camera to the display ports through multiple channels.
  • Specifically, the system outputs the images through multiple channels to the display ports and transmits them to the corresponding physical projectors; each projector then projects its image onto the physical sphere, realizing spherical-screen projection and a three-dimensional spherical presentation of the fused images. In this application, multi-channel output to the display ports introduces no delay, and a 60 Hz refresh rate is fully achievable; parallel computing and processing are adopted, and device parameters are calculated and saved in advance, thereby raising processing speed and refresh rate.
  • Step S705: after fetching the data in the socket buffer, determine whether the data is network data.
  • The network data is data input by the user or sent by other devices, and is generally text, instructions, icons, and the like.
  • Step S706: parse the network data to determine its type.
  • Step S707: when the parsed network data is a user instruction to operate on the image, change the image displayed on the virtual sphere according to the instruction, then execute step S704.
  • Step S708: when the parsed network data is data that explains or labels a partial area of the image displayed on the virtual sphere, add the corresponding explanatory or identifying text, patterns, and the like to that partial area of the image, then execute step S704.
  • Step S709: when the parsed network data is an instruction to enlarge or shrink the virtual image, enlarge or shrink the image on the virtual sphere according to the instruction, then execute step S704.
  • As shown in FIG. 8, the communication unit works as follows:
  • Step S801: initialize the network module; that is, at startup create the Qt UDP communication socket, bind the network data receiving and sending ports, and join the network multicast group.
  • Step S802: monitor the devices in the system.
  • Step S803: determine whether a message has been generated; if so, go to step S804; if not, repeat step S803.
  • Step S804: parse the monitored message.
  • Step S805: when the monitored message is a user instruction to operate on the image, classify it as a viewpoint control message, then execute step S808.
  • Step S806: when the monitored message explains or labels a partial area of the image displayed on the virtual sphere, classify it as a layer management message, then execute step S808.
  • Step S807: when the monitored message is an instruction to enlarge or shrink the virtual image, classify it as a data loading message, then execute step S808.
  • Step S808: encapsulate the obtained message into an instruction.
  • Step S809: send the instruction over UDP.
  • Step S810: determine whether the instruction has been sent; if so, stop message monitoring; if not, execute step S802.
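  • As a concrete illustration of steps S801 and S809 (a sketch only: Python's standard socket module stands in for the Qt UDP socket named above, and the multicast address, port, and JSON message format are assumptions):

```python
import json
import socket
import struct

MCAST_GROUP, PORT = "239.0.0.1", 45454  # hypothetical multicast endpoint

# S801: create a UDP socket, bind the receiving port, join the multicast group.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# S808/S809: encapsulate a monitored message as an instruction and send it.
instruction = json.dumps({"type": "viewpoint_control", "action": "rotate", "deg": 15})
sock.sendto(instruction.encode("utf-8"), (MCAST_GROUP, PORT))
```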
  • FIG. 9 is a schematic flowchart of a method for 3D sphere visualization provided by an embodiment of the present application. The specific implementation process of the method shown in FIG. 9 is as follows:
  • Step S901: acquire the spherical parameters of the physical three-dimensional sphere and the position information of the physical sphere and of at least three physical projectors.
  • Step S902: obtain a virtual three-dimensional scene from the spherical parameters of the physical sphere and the position information of the physical sphere and the at least three physical projectors.
  • The virtual scene includes a virtual three-dimensional sphere and at least three virtual cameras; the volume ratio of the virtual sphere to the physical sphere is the same as the ratio of each virtual camera's distance from the virtual sphere's surface to the corresponding physical projector's distance from the physical sphere's surface, and the positional relationship between each virtual camera and the virtual sphere is the same as that between each physical projector and the physical sphere.
  • Step S903: generate a first image on the virtual sphere.
  • Step S904: control the at least three virtual cameras to photograph the virtual sphere.
  • Step S905: transmit the second image captured by each virtual camera to the corresponding physical projector.
  • The second images captured by the virtual cameras together constitute the first image.
  • In one possible implementation, before the images captured by the virtual cameras are transmitted to the corresponding physical projectors, the method includes: determining whether the second images captured by a first virtual camera and a second virtual camera, the two cameras being adjacent, contain a region with identical image content; when such a region exists, splitting it along its midline into a first region and a second region; and deleting the first region from the second image captured by the first virtual camera and the second region from the second image captured by the second virtual camera.
  • In one possible implementation, the method further includes receiving a first instruction for zooming in or out on the first image on the virtual sphere; before the three virtual cameras photograph the virtual sphere, the method includes enlarging or shrinking, according to the first instruction, the content displayed on part or all of the first image.
  • In one possible implementation, the method further includes receiving first data associated with at least one area of the first image on the virtual sphere; before the three virtual cameras photograph the virtual sphere, the method includes displaying the corresponding data on that at least one area of the first image according to the first data.
  • By acquiring the spherical parameters and position information of the physical three-dimensional sphere and the positions of the multiple physical projectors, the present application simulates a virtual scene identical to the real one, has the virtual cameras photograph the image on the virtual sphere, obtains the image each physical projector is to project, and sends it to that projector; finally, each physical projector projects the image it receives onto the physical sphere, thereby obtaining an accurate and clear three-dimensional image for each physical projector.
  • The present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed in a computer, the computer is made to execute any one of the above methods.
  • The present invention also provides a computing device, including a memory and a processor, where executable code is stored in the memory; when the processor executes the executable code, any one of the above methods is implemented.
  • Various aspects or features of the embodiments of the present application may be implemented as a method, an apparatus, or an article of manufacture using standard programming and/or engineering techniques.
  • The term "article of manufacture" encompasses a computer program accessible from any computer-readable device, carrier, or medium.
  • Computer-readable media may include, but are not limited to: magnetic storage devices (e.g., hard disks, floppy disks, or magnetic tapes), optical discs (e.g., compact discs (CDs), digital versatile discs (DVDs)), smart cards, and flash memory devices (e.g., erasable programmable read-only memory (EPROM), cards, sticks, or key drives).
  • The various storage media described herein can represent one or more devices and/or other machine-readable media for storing information.
  • The term "machine-readable medium" may include, but is not limited to, wireless channels and various other media capable of storing, containing, and/or carrying instructions and/or data.
  • The above-mentioned embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When software is used, the embodiments can be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions; when the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are produced.
  • The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave).
  • The computer-readable storage medium can be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVD), or semiconductor media (e.g., a solid-state disk (SSD)), and the like.
  • In the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
  • The disclosed system, apparatus, and method may be implemented in other manners. The apparatus embodiments described above are only illustrative: the division of the units is only a logical functional division, and in actual implementation there may be other ways of dividing them; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • The mutual coupling, direct coupling, or communication connections shown or discussed may be implemented through interfaces, and the indirect coupling or communication connections between devices or units may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of this embodiment.
  • The functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. The essence of the technical solutions of the embodiments of the present application, or the parts that contribute to the prior art, or parts of the technical solutions, can thus be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, an access-network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Transforming Electric Information Into Light Information (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present application provides a three-dimensional sphere-oriented visualization system, relating to the technical field of virtual projection. The system includes: a projection display unit, including a physical three-dimensional sphere and at least one physical projector, the at least one physical projector being arranged at an arbitrary position around the physical sphere; a graphics generator unit, used to obtain a virtual three-dimensional scene from the received spherical parameters of the physical sphere and the position information of the physical sphere and the at least one physical projector, and, after generating a first image on the virtual sphere, to control at least one virtual camera to photograph the virtual sphere, obtaining at least one second image; and a fusion display unit, used to perform edge-overlap fusion processing on the second images obtained by adjacent virtual cameras and, after obtaining at least one third image, to send it to the at least one physical projector corresponding to the at least one virtual camera; the at least one physical projector projects the image it receives onto the physical three-dimensional sphere.

Description

A three-dimensional sphere-oriented visualization system
This application claims priority to Chinese patent application No. 202010694406.X, entitled "A three-dimensional sphere-oriented visualization system" and filed with the Chinese Patent Office on 17 July 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of virtual projection, and in particular to a three-dimensional sphere-oriented visualization system.
Background
In recent years, digital-earth three-dimensional sphere display systems have developed rapidly in the field of multimedia processing. A 3D earth display system generally consists of a software environment and a hardware environment: the software environment is mainly responsible for functions such as digital prototype display, product assemblability analysis, data visualization, and human-machine efficacy analysis, while the hardware environment mainly provides multi-channel stereoscopic display, human-computer interaction data collection, and similar functions. Projector-based spherical projection has become a popular display mode: multiple projectors project images onto a sphere from different angular positions, forming a three-dimensional earth display system.
Current large-scale three-dimensional sphere projection systems have mature solutions. Their common feature is that multiple projectors are placed on the globe's equatorial plane, with the principal projection rays lying in that plane, so that image data is projected onto the spherical surface. This is the simplest and most practical design: after the computer renders the earth data, it is projected onto the three-dimensional sphere with no projection-induced image deformation, rendering is simple, and complicated earth orientations need not be considered. In practice, however, many physical constraints can prevent the projectors from being placed on the equatorial plane, and for this situation the projection systems currently on the market are powerless.
Summary of the Invention
To overcome the above problems, the embodiments of the present application provide a three-dimensional sphere-oriented visualization system.
To achieve the above objective, the embodiments of the present application adopt the following technical solutions:
In a first aspect, the present application provides a three-dimensional sphere-oriented visualization system, comprising: a projection display unit, including a physical three-dimensional sphere and at least one physical projector, wherein the at least one physical projector is arranged at an arbitrary position around the physical sphere; a graphics generator unit, configured to obtain a virtual three-dimensional scene from the received spherical parameters of the physical sphere and the position information of the physical sphere and the at least one physical projector, wherein the virtual scene includes a virtual three-dimensional sphere and at least one virtual camera, the volume ratio of the virtual sphere to the physical sphere is the same as the ratio of each virtual camera's distance from the virtual sphere's surface to the corresponding physical projector's distance from the physical sphere's surface, and the positional relationship between the virtual cameras and the virtual sphere is the same as that between the physical projectors and the physical sphere; and, after generating a first image on the virtual sphere, to control the at least one virtual camera to photograph the virtual sphere, obtaining at least one second image; and a fusion display unit, configured to perform edge-overlap fusion processing on the second images obtained by adjacent virtual cameras and, after obtaining at least one third image, to send it to the at least one physical projector corresponding to the at least one virtual camera, so that each physical projector projects the image it receives onto the physical three-dimensional sphere.
In another possible implementation, the fusion display unit is further configured to receive the second images sent by the at least one virtual camera through multiple channels, and to send the at least one third image through multiple channels to the at least one physical projector corresponding to the at least one virtual camera.
In another possible implementation, the fusion display unit is specifically configured to determine whether the second images captured by any two adjacent virtual cameras contain a region with identical image content; when such a region exists, to split it along its midline into a first region and a second region; and to delete the first region from the second image captured by one of the two adjacent virtual cameras and the second region from the second image captured by the other.
In a second aspect, the present application further provides a three-dimensional sphere-oriented visualization system for the case where at least one physical projector is arranged at an arbitrary position around a physical three-dimensional sphere, comprising: a communication unit, configured to receive the spherical parameters of the physical sphere and the position information of the physical sphere and the at least one physical projector; and a processing unit, configured to obtain, from those parameters and position information, the first image each physical projector is to project; the communication unit is further configured to send each first image to the corresponding physical projector.
In another possible implementation, the processing unit is specifically configured to obtain a virtual three-dimensional scene from the received spherical parameters of the physical sphere and the position information of the physical sphere and the at least one physical projector, wherein the virtual scene includes a virtual three-dimensional sphere and at least one virtual camera, the volume ratio of the virtual sphere to the physical sphere is the same as the ratio of each virtual camera's distance from the virtual sphere's surface to the corresponding physical projector's distance from the physical sphere's surface, and the positional relationship between the virtual cameras and the virtual sphere is the same as that between the physical projectors and the physical sphere; and, after a second image is generated on the virtual sphere, to control the at least one virtual camera to photograph the virtual sphere, obtaining at least one first image.
In another possible implementation, the processing unit is further configured to determine whether the second images captured by any two adjacent virtual cameras contain a region with identical image content; when such a region exists, to split it along its midline into a first region and a second region; and to delete the first region from the second image captured by one of the two adjacent virtual cameras and the second region from the second image captured by the other.
In another possible implementation, the communication unit is configured to receive a user instruction for zooming in on a partial area of the second image; the processing unit is further configured to display a third image on the virtual sphere, the third image being the second image after the partial area has been enlarged.
In another possible implementation, the communication unit is configured to receive display data associated with a partial area of the second image; the processing unit is further configured to display those data on the corresponding partial area of the image.
In another possible implementation, the communication unit is configured to receive a viewpoint operation instruction, namely an instruction clicking a first area of the second image, the second image including the first area; the processing unit is further configured to display on the first area content different from what was displayed there before the click.
In another possible implementation, the processing unit is further configured to add N virtual cameras to the virtual scene to photograph the blind areas of the virtual sphere not covered by the at least one virtual camera, where N is a positive integer greater than zero.
In a third aspect, the present application further provides a method for three-dimensional sphere visualization, comprising: acquiring the spherical parameters of a physical three-dimensional sphere and the position information of the physical sphere and of at least three physical projectors; obtaining, from those parameters and position information, a virtual three-dimensional scene, the virtual scene including a virtual three-dimensional sphere and at least three virtual cameras, wherein the volume ratio of the virtual sphere to the physical sphere is the same as the ratio of each virtual camera's distance from the virtual sphere's surface to the corresponding physical projector's distance from the physical sphere's surface, and the positional relationship between the virtual cameras and the virtual sphere is the same as that between the physical projectors and the physical sphere; generating a first image on the virtual sphere; controlling the three virtual cameras to photograph the virtual sphere; and transmitting the second image captured by each virtual camera to the corresponding physical projector, wherein the second images captured by the virtual cameras together constitute the first image.
In another possible implementation, before the images captured by the virtual cameras are transmitted to the corresponding physical projectors, the method includes: determining whether the second images captured by a first virtual camera and a second virtual camera, the two being adjacent, contain a region with identical image content; when such a region exists, splitting it along its midline into a first region and a second region; and deleting the first region from the second image captured by the first virtual camera and the second region from the second image captured by the second virtual camera.
In another possible implementation, the method further includes receiving a first instruction for zooming in or out on the first image on the virtual sphere; before the three virtual cameras photograph the virtual sphere, the method includes enlarging or shrinking, according to the first instruction, the content displayed on part or all of the first image.
In another possible implementation, the method further includes receiving first data associated with at least one area of the first image on the virtual sphere; before the three virtual cameras photograph the virtual sphere, the method includes displaying the corresponding data on that at least one area of the first image according to the first data.
In a fourth aspect, the embodiments of the present application further provide a computer-readable storage medium on which a computer program is stored; when the computer program is executed in a computer, the computer is caused to execute any of the possible implementations of the third aspect.
In a fifth aspect, the embodiments of the present application further provide a computing device, including a memory and a processor, wherein executable code is stored in the memory; when the processor executes the executable code, any of the possible implementations of the third aspect is carried out.
By acquiring the spherical parameters and position information of the physical three-dimensional sphere and the positions of the multiple physical projectors, the present application simulates a virtual scene identical to the real one, has the virtual cameras photograph the image on the virtual sphere, obtains the image each physical projector is to project, and sends it to that projector; finally, each physical projector projects the image it receives onto the physical sphere, yielding an accurate and clear three-dimensional image. There is therefore no strict requirement on the positional relationship between the physical projectors and the physical sphere: they can be placed freely, which greatly broadens the application scenarios and flexibility of three-dimensional projection technology.
Brief Description of the Drawings
The drawings required for describing the embodiments or the prior art are briefly introduced below.
FIG. 1 is an architecture diagram of a three-dimensional sphere-oriented visualization system provided by an embodiment of the present application;
FIG. 2 is an architecture diagram of a projection display scene provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of the overall dimensions of a projection display scene provided by an embodiment of the present application;
FIG. 4 is a side view of an optical-path design scene provided by an embodiment of the present application;
FIG. 5 is a top view of an optical-path design scene provided by an embodiment of the present application;
FIG. 6 is a structural diagram of a three-dimensional sphere-oriented visualization system provided by an embodiment of the present application;
FIG. 7 is a working flowchart of the three-dimensional sphere-oriented visualization system provided by an embodiment of the present application;
FIG. 8 is a working flowchart of the communication unit provided by an embodiment of the present application;
FIG. 9 is a schematic flowchart of a method for three-dimensional sphere visualization provided by an embodiment of the present application.
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。
图1为本申请实施例提供的一种面向三维球体可视化***的架构图。如图1所示,该***包括:投影显示单元11、图形发生器单元12和融合显示单元13。
投影显示单元11主要包括:至少一个投影机和一个三维球。优选地,本申请采用了四通道正投融合显示,配置四台高分辨率高亮度激光光源投影机,能满足不同工作模式的需要,如图2所示。
投影显示单元11中最重要的是光路设计,本申请采用专业设计工具软件,结合现场条件限制和招标文件技术要求,提供了此显示***的可交互式三维初步设计,此设计的所有***数据都是可靠精确的、工程上可用的。
***主要构成单元组成中,投影机为激光投影机。三维球的选择非常重要,专业的三维球显示对图像多方面质量都有非常高的要求,例如静态图像细节特征显示,这要求***能提供非常高的动态对比度,而能决定***动态对比度的关键是屏幕特性。
需要说明的是,***动态对比度与通常所说的投影机单机全黑全白对比度是两回事。单机全黑全白对比度指的是不同时刻在理想条件下(只能用某一款特定镜头、在特定亮度下、采用采用正投、图像规格非常小等等。)分别测量纯白和纯黑信号亮度,将二者相除得到的,没有什么实际意义。对用户真正有价值的是,实际工程约束条件下,人眼真正能看到的最大对比度以及细节特征表现程度。这些都需合适的屏幕来实现。因此,我们这里使用了白雪公司的高分子碳纤维树脂基质材料高对比度投影球幕最为三维球。
投影球幕特点如下:
1、高对比度屏幕。在一定的条件下,高***动态比度能够显示更丰富的图像细节,显示***动态对比度的关键因素是屏幕类型和属性。实际工程中,***对比度和***亮度之间是相互关联和制约的,因此需要在***设计阶段对两者做合理的选择。
2、第二是抗形变能力。由于屏幕平整度会对显示精度产生影响,因此高端显示***需要尽可能高的屏幕平整度。白雪投影球幕采用高分子碳纤维树脂为基质材料平整度误差为1‰,在屏幕规格比较大时也能保证其长期平整度。屏幕在长期使用过程中,平整度还会受到机械结构设计合理性和环境的影响,有可能发生进一步的形变,采用定制的机械支撑结构能够保证长期使用条件下不变形,屏幕属性与***中的投影机有最好的匹配效果。
本申请采用了四通道正投融合显示,配置四台高分辨率高亮度激光光源投影机,能满足不同工作模式的需要。例如,配置了DP E-Vision Laser 8500 WUXGA激光投影机单机最 大光通量输出为9000流明,物理分辨率1920x1200。光源寿命达到20000小时,从起始亮度到寿命终结,亮度衰减低于50%以内。如图3所示,投影显示单元11采用4台投影机正投融合显示,具体参数如下:尺寸:直径2600mm三维球;盲区尺寸:下部高度44.6mm;融合区:40%;***分辨率:5376*1200。
投影显示单元11的光路是***集成最关键的环境,光路设计决定了投影的位置、投影的融合效果、投影的亮度和投影覆盖度。本申请通过结合空间几何计算、计算机三维模拟以及实验的方式,提出了一种将投影仪架设在北纬60度之上的方案,如图4和图5所示,突破了现有商业***只能将投影仪架设于赤道高度的限制。
图形发生器单元12主要是指图形工作站。图形工作站是图形图像信号的发生器,提供实时渲染所需的运算能力保障。图形发生器单元12是本平台***中一个关键的硬件部分。图形发生器单元12是整个***的软件支撑平台,用户进行设计所需的各种软件都要安装在这个平台之上。示例性地,本申请采用了HP高端Z8G4定制性工作站,配置目前Nvidia公司推出的高端图卡Quadro RTX6000(24GB显存)和Quadro P2000(5GB显存)。
示例性地,当图形发生器单元12通过三维球的球体参数,包括三维球的半径、表面积、球体形状等数据、三维球和各个投影机的位置信息,包括各个投影机投影的俯仰角、各个投影机光源中心投影方向与各个投影机光源中心与三维球中心之间的连线方向之间的夹角、各个投影机相对与以三维球的中心为原点的坐标系下的坐标点、各个投影机的光源中心距三维球表面距离、各个投影机之间的位置关系、各个投影机光源中心相对于三维球的中心的方位角、航向角等数据后,根据上述得到的数据,模拟出一个与真实场景相同的不同比例的模拟三维场景。也即实体三维球与虚拟三维球之间的比例、各个投影机到实体三维球的距离与各个相机到虚拟三维球的距离之间的比例、以及各个投影机之间的距离与各个相机之间的距离之间的比例等距离比例值相同,但是各个投影机与各个相机的夹角、方位角、航向角等角度值保持相同。
然后,图形发生器单元12在虚拟三维球的表明上生成将要呈现的图像,并控制各个虚拟相机,对虚拟三维球的表明进行拍照,获取的图像为虚拟三维球表面生成的图像的一部分。
可选地,在虚拟三维场景中加入多个虚拟从相机,其分别从不同的方位观察虚拟三维球,调整虚拟从相机位置使得观察无死角,将虚拟从相机观察到的图像渲染到多个窗口中,然后将每个窗口中的帧图像传输给投影机,投影机将图片投影在球幕上,同步到实体三维球上显示。
When performing image stitching and fusion, the fusion display unit 13 must support multi-channel processing with blending on arbitrary edges and on multiple overlapping edges, unaffected by the size of the blend zone. During blanking fusion, blanking adjustment affects only the edges of the projected image; it is used to fit the projected image into the frame on the screen and to hide or mask unwanted information (or noise). The blanking image is divided into four parts: top blanking, bottom blanking, left blanking and right blanking. The configured fusion display system consists mainly of Dashi Electronics (大视电子) fusion processors.
Preferably, the fusion display unit 13 configured in the present application consists mainly of a Dashi Electronics MM5000-B Rainbow-series fusion processor. The MM5000-B Rainbow-series fusion processor uses a 2U chassis fitted with four DP input/output cards supporting four DisplayPort inputs and four DisplayPort outputs. The four outputs feed the four projectors; the output cards support four further outputs that can optionally be connected to monitoring displays, with four monitors normally connected to the four outputs of the Nvidia Quadro P2000 card in the image generation subsystem (IG). In total, the workstation's two graphics cards can scale output to eight channels of WUXGA 1920*1200 resolution, matched and pre-configured as specified by the user's software.
Illustratively, the images of the virtual sphere's surface captured by adjacent virtual cameras necessarily overlap to some extent, and the overlap must be processed, otherwise the overlapping region of adjacent images appears blurred during projection. When two virtual cameras have captured two images (image 1 and image 2), the center-line method is used: the overlapping region of the two images is determined and split down the middle into two regions, then region 2 is deleted from image 1 and region 1 is deleted from image 2, yielding the processed images 1 and 2.
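A minimal sketch of the center-line split, assuming for simplicity that the shared region occupies the same column interval in both frames and that discarded pixels are blanked rather than cropped, so frame sizes stay fixed; the array shapes and the overlap interval are illustrative:

```python
import numpy as np

def split_overlap(img1, img2, start, end):
    """Center-line method: image 1 keeps the half of the shared region on
    its side, image 2 keeps the other half; discarded columns are blanked."""
    mid = (start + end) // 2
    out1, out2 = img1.copy(), img2.copy()
    out1[:, mid:end] = 0     # delete "region 2" from image 1
    out2[:, start:mid] = 0   # delete "region 1" from image 2
    return out1, out2

# Two 1200x1920 frames whose contents overlap in columns 1152..1920:
a = np.full((1200, 1920, 3), 255, dtype=np.uint8)
b = np.full((1200, 1920, 3), 255, dtype=np.uint8)
a2, b2 = split_overlap(a, b, start=1152, end=1920)
```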
Finally, the graphics generator unit 12 sends each processed image to the corresponding projector, and the projectors project the images onto the physical three-dimensional sphere, realizing dome projection and a three-dimensional spherical presentation of the imagery.
In addition, the graphics generator unit 12 can output through multiple channels to the display ports without introducing latency and can fully sustain a 60 Hz refresh rate. Technically it uses parallel computation and processing, and it computes and stores the device parameters in advance, which reduces the computational load and thus increases processing speed and refresh rate.
In an embodiment of the present application, the system may further include a line integration unit. Many signal sources need to be presented, such as image signals, network signals and monitor signals; these complex signals are not displayed all at once but selectively, according to a certain procedure or requirement. Likewise, the system's different working modes are realized by switching. A large spherical display platform therefore requires professional design and planning, for example:
1. Electromagnetic-compatibility design: the electromagnetic interference source, the coupling path and the susceptible equipment are the keys to solving a system's EMC problems. All three elements must be present for a problem to arise, and they are also where the work starts: eliminating one or two of the three is enough to meet EMC requirements. EMC system design means carefully anticipating, during product design, the various EMC problems that may arise and taking measures from the very beginning to avoid them, combining circuit-level and structural measures. This approach typically solves 90% of EMC problems before the final product is complete. In the EMC design here we mainly address electromagnetic shielding, i.e. cutting off the coupling path.
2. System cable design: the cable network must not only satisfy the system's connection requirements and ensure signal transfer but also respect the system's electromagnetic compatibility and reduce unnecessary coupling. In the present application the cable network is divided mainly into network, data, control and video cables, with different kinds routed in the same trunking but in separate compartments. According to the actual requirements of the equipment, signal wiring diagrams and specifications for each kind of signal line are provided, and signal ducts are reserved.
(1) Grounding wire: the grounding wire is RV4.0 PVC-insulated flexible wire with a DC resistance of 5 mΩ/m, fully meeting the military grounding-resistance standard of 0.1 Ω.
(2) Console and cabinet grounding: since consoles and cabinets use an equipotential grounding plane internally, their combined grounding wires are connected in parallel to a bus bar, and the consoles and cabinets are in turn jointly connected to the machine-room grounding terminal, forming a composite single-point ground.
(3) Power-supply grounding: grounding the console and cabinet enclosures ensures personal electrical safety. With a grounding resistance of 4 Ω, the safety requirements for protective grounding of low-voltage systems are fully met; since the neutral is insulated from ground in this scheme, an ordinary chassis fault does not cut the power, meeting the requirement that power must not be interrupted during operation.
(4) Anti-static grounding: for electronic products, the maximum allowable grounding resistance within the safe anti-static voltage range is 1.28×10⁹ Ω, and for personnel, with a safe current of 5 mA, the safe resistance is 1.0×10⁵ Ω; the 0.1 Ω grounding resistance designed above safely satisfies both requirements, so protection against ESD hazards is assured.
With the above measures, the system's electromagnetic compatibility and radiation protection meet the requirements.
Through the combination of the above three hardware units, the present application, after acquiring the sphere parameters and position information of the physical three-dimensional sphere and the position information of the multiple physical projectors, simulates a virtual scene identical to the real scene, has the virtual cameras photograph the image on the virtual three-dimensional sphere to obtain the image each physical projector is to project, and sends those images to the respective physical projectors, which finally project them onto the physical three-dimensional sphere. An accurate, sharp three-dimensional image is obtained without strict requirements on the positional relationship between the physical projectors and the physical sphere; the projectors can be placed freely, which greatly broadens the application scenarios and flexibility of three-dimensional projection technology.
Fig. 6 is a structural diagram of a three-dimensional sphere visualization system provided by an embodiment of the present application. As shown in Fig. 6, the system 600 comprises a communication unit 601 and a processing unit 602.
The communication unit 601 is configured to receive the sphere parameters of the three-dimensional sphere, including its radius, surface area, shape and other data, and the position information of the sphere and of each projector, including each projector's projection pitch angle, the angle between the projection direction of each projector's light-source center and the line from that light-source center to the sphere center, each projector's coordinates in a coordinate system with the sphere center as origin, the distance from each projector's light-source center to the sphere surface, the positional relationships among the projectors, and the azimuth and heading angles of each projector's light-source center relative to the sphere center.
The processing unit 602 is configured to simulate, from the data above, a three-dimensional scene identical to the real scene up to a uniform scale: the ratio between the physical and virtual spheres, the ratio between each projector's distance to the physical sphere and the corresponding camera's distance to the virtual sphere, and the ratio between inter-projector and inter-camera distances are all the same, while all angle values (included angles, azimuth angles, heading angles) of the cameras are kept identical to those of the projectors. It then generates the image to be presented on the surface of the virtual sphere and controls each virtual camera to photograph the virtual sphere's surface, each captured image being a part of the image generated on that surface.
Optionally, since the images of the virtual sphere's surface captured by adjacent virtual cameras necessarily overlap to some extent, the overlap must be processed, otherwise the overlapping region of adjacent images appears blurred during projection. When two virtual cameras have captured two images (image 1 and image 2), the processing unit 602 uses the center-line method: it determines the overlapping region of the two images, splits it down the middle into two regions, deletes region 2 from image 1 and region 1 from image 2, and obtains the processed images 1 and 2.
Finally, the processing unit 602 sends each processed image to the corresponding projector, and the projectors project the images onto the physical three-dimensional sphere, realizing dome projection and a three-dimensional spherical presentation of the imagery.
Optionally, the communication unit 601 can output through multiple channels to the display ports without introducing latency and can fully sustain a 60 Hz refresh rate; it uses parallel computation and processing and computes and stores the device parameters in advance, reducing the computational load and increasing processing speed and refresh rate.
In a possible embodiment, the communication unit 601 also receives a user instruction to enlarge or reduce the virtual image; the processing unit 602 enlarges or reduces the image on the virtual three-dimensional sphere accordingly, the virtual cameras capture the enlarged or reduced image, and it is sent to the corresponding projectors so that the enlarged or reduced image is displayed synchronously on the physical sphere.
In a possible embodiment, the communication unit 601 also receives data input by a server, another terminal or the user, the data explaining or labeling parts of the image displayed on the virtual sphere. Based on these data, the processing unit 602 adds explanatory or labeling text, graphics and so on to the corresponding parts of the image displayed on the virtual sphere. The virtual cameras then capture the image with the added text and graphics, and it is sent to the corresponding projectors so that the annotated image is displayed synchronously on the physical sphere.
In a possible embodiment, the communication unit 601 also receives a user instruction to manipulate the image; the processing unit 602 changes the image displayed on the virtual sphere according to the instruction, the virtual cameras capture the changed image, and it is sent to the corresponding projectors so that the changed image is displayed synchronously on the physical sphere.
Illustratively, as shown in Fig. 7, the system is implemented as follows:
Step S701: create the network communication.
Specifically, a Qt UDP communication socket is created at startup, the network data receive and send ports are bound, and the socket joins a network multicast group. When data arrive in the socket buffer, they are fetched and parsed, the parsed data are forwarded to the three-dimensional digital earth, and a response is made according to the message content.
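A minimal sketch of the same receive path, using Python's standard socket module in place of the Qt UDP socket described above; the multicast group, port and wire format are illustrative assumptions:

```python
import socket
import struct

GROUP, PORT = "239.0.0.1", 45454   # illustrative multicast group and port

# Bind a UDP socket and join the multicast group, mirroring step S701.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    payload, addr = sock.recvfrom(65536)   # data waiting in the buffer
    message = payload.decode("utf-8", errors="replace")
    # parse the message and respond according to its content ...
```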
Step S702: construct the virtual three-dimensional scene.
Specifically, when the system receives the sphere parameters of the three-dimensional sphere (including radius, surface area, shape and other data) and the position information of the sphere and of each projector (including each projector's projection pitch angle, the angle between the projection direction of each projector's light-source center and the line from that light-source center to the sphere center, each projector's coordinates in a coordinate system with the sphere center as origin, the distance from each projector's light-source center to the sphere surface, the positional relationships among the projectors, and the azimuth and heading angles of each projector's light-source center relative to the sphere center), it simulates from these data a three-dimensional scene identical to the real scene up to a uniform scale: all distance ratios (physical to virtual sphere, projector-to-sphere to camera-to-sphere, inter-projector to inter-camera) are the same, while all angle values (included angles, azimuths, headings) are kept identical. It then generates the image to be presented on the surface of the virtual sphere and controls each virtual camera to photograph that surface, each captured image being a part of the generated image.
Step S703: load the three-dimensional digital earth data.
Specifically, according to the content of the message received in step S701, the three-dimensional digital earth is displayed on the surface of the virtual sphere, and the virtual cameras are controlled to photograph the surface of the virtual sphere, producing multiple images.
Optionally, several virtual slave cameras are added to the virtual three-dimensional scene to observe the virtual sphere from different directions; their positions are adjusted so that no blind spot remains, the images they observe are rendered into multiple windows, and the frame image of each window is transmitted to a projector, which projects the picture onto the dome screen for synchronous display on the physical three-dimensional sphere.
Step S704: output the images obtained by the virtual cameras to the display ports through multiple channels.
Specifically, once the system has obtained the images, it outputs them through multiple channels to the display ports and transmits them to the corresponding physical projectors, and each projector projects its image onto the physical three-dimensional sphere, realizing dome projection and a three-dimensional spherical presentation of the imagery. Outputting through multiple channels to the display ports introduces no latency and fully sustains a 60 Hz refresh rate; technically, parallel computation and processing are used, and the device parameters are computed and stored in advance, reducing the computational load and increasing processing speed and refresh rate.
Step S705: after fetching the data in the socket buffer, determine whether the data are network data. Network data are data transmitted by user input, sent by other devices and so on, and are generally text, instructions, icons and the like.
Step S706: parse the network data to determine which type of data they are.
Step S707: if the parsed network data are a user instruction to manipulate the image, change the image displayed on the virtual sphere according to the instruction, then go to step S704.
Step S708: if the parsed network data are data explaining or labeling parts of the image displayed on the virtual sphere, add explanatory or labeling text, graphics and so on to the corresponding parts of the image according to the data, then go to step S704.
Step S709: if the parsed network data are an instruction to enlarge or reduce the virtual image, enlarge or reduce the image on the virtual sphere according to the instruction, then go to step S704.
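A minimal sketch of the S705-S709 branching, assuming messages arrive as "kind:payload" strings; the message kinds, wire format and handler names are illustrative:

```python
def parse(message: str):
    """S706: split an illustrative 'kind:payload' wire format."""
    kind, _, payload = message.partition(":")
    return kind, payload

def on_manipulate(payload): print("S707 transform:", payload)
def on_annotate(payload):   print("S708 annotate:", payload)
def on_zoom(payload):       print("S709 zoom:", payload)

HANDLERS = {"manipulate": on_manipulate,
            "annotate": on_annotate,
            "zoom": on_zoom}

def handle_network_data(message: str):
    kind, payload = parse(message)
    handler = HANDLERS.get(kind)
    if handler:
        handler(payload)
    # step S704 follows every branch: re-render and output to projectors

handle_network_data("zoom:factor=2.0")
```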
Illustratively, as shown in Fig. 8, the communication unit is implemented as follows:
Step S801: initialize the network module, i.e. re-create the Qt UDP communication socket at startup, bind the network data receive and send ports, and join the network multicast group.
Step S802: listen to the devices in the system.
Step S803: determine whether a message has been generated; if so, go to step S804; if not, repeat step S803.
Step S804: parse the message that was heard.
Step S805: if the message is a user instruction to manipulate the image, classify it as a viewpoint-control message, then go to step S808.
Step S806: if the message explains or labels parts of the image displayed on the virtual sphere, classify it as a layer-management message, then go to step S808.
Step S807: if the message is an instruction to enlarge or reduce the virtual image, classify it as a data-loading message, then go to step S808.
Step S808: encapsulate the resulting message into an instruction.
Step S809: send the instruction over UDP.
Step S810: determine whether the instruction has been sent completely; if so, stop listening for messages; if not, go to step S802.
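A minimal sketch of the S801-S810 loop, reusing a multicast socket like the one in the earlier sketch; the JSON message layout and the classification keys are illustrative assumptions:

```python
import json

def classify(msg: dict) -> str:
    """S805-S807: map a parsed message to its instruction type."""
    if "operate" in msg:
        return "viewpoint_control"   # S805
    if "annotation" in msg:
        return "layer_management"    # S806
    if "zoom" in msg:
        return "data_loading"        # S807
    return "unknown"

def comm_unit_loop(sock, group, port):
    while True:                                    # S802: listen
        payload, _ = sock.recvfrom(65536)          # S803: wait for a message
        msg = json.loads(payload)                  # S804: parse
        instruction = {"type": classify(msg), "body": msg}        # S808
        sock.sendto(json.dumps(instruction).encode(), (group, port))  # S809
        # S810: in this sketch one datagram is one instruction, so each
        # send completes before the loop returns to listening.
```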
Fig. 9 is a schematic flowchart of a method for three-dimensional sphere visualization provided by an embodiment of the present application. The method shown in Fig. 9 is implemented as follows:
Step S901: acquire sphere parameters of the physical three-dimensional sphere and position information of the physical sphere and of at least three physical projectors.
Step S902: obtain a virtual three-dimensional scene from the sphere parameters of the physical sphere and the position information of the physical sphere and the at least three physical projectors. The virtual scene comprises a virtual three-dimensional sphere and at least three virtual cameras; the volume ratio of the virtual sphere to the physical sphere is the same as the ratio of each virtual camera's distance to the virtual sphere's surface to the corresponding physical projector's distance to the physical sphere's surface, and the positional relationship between each virtual camera and the virtual sphere is the same as that between the corresponding physical projector and the physical sphere.
Step S903: generate a first image on the virtual three-dimensional sphere.
Step S904: control the at least three virtual cameras to photograph the virtual three-dimensional sphere.
Step S905: transmit the second image captured by each virtual camera to the corresponding physical projector, the second images captured by the virtual cameras together making up the first image.
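Steps S901-S905 compose into one pipeline. A minimal sketch that wires together the illustrative helpers from the earlier sketches (Pose, virtual_scene, poses), with placeholder rendering and output functions standing in for the real renderer and projector channels:

```python
def render_from(cam, radius, texture):
    # Placeholder: a real system rasterizes the part of the sphere surface
    # visible to this virtual camera (a portion of the first image).
    return texture

def send_to_projector(pose, frame):
    # Placeholder for the multi-channel output to one physical projector.
    print("frame for projector at azimuth", pose.azimuth)

def visualize(sphere_radius, projector_poses, first_image):
    radius, cams = virtual_scene(sphere_radius, projector_poses)  # S901-S902
    texture = first_image                                         # S903
    shots = [render_from(c, radius, texture) for c in cams]       # S904
    for shot, pose in zip(shots, projector_poses):                # S905
        send_to_projector(pose, shot)

visualize(1300.0, poses, first_image="digital-earth texture")
```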
In another possible implementation, before the transmitting of the second images captured by each virtual camera, being portions of the first image, to the corresponding physical projectors, the method comprises: determining whether the second images captured by a first virtual camera and a second virtual camera contain a region with identical image content, the first virtual camera and the second virtual camera being adjacent to each other; when a region with identical image content exists, splitting the region along its center line into a first region and a second region; and deleting the first region from the second image captured by the first virtual camera and deleting the second region from the second image captured by the second virtual camera.
In another possible implementation, the method further comprises: receiving a first instruction, the first instruction being an instruction to enlarge or reduce the first image on the virtual three-dimensional sphere; and, before the controlling of the three virtual cameras to photograph the virtual three-dimensional sphere: enlarging or reducing, according to the first instruction, the content displayed on part or all of the first image.
In another possible implementation, the method further comprises: receiving first data, the first data being data associated with at least one region of the first image on the virtual three-dimensional sphere; and, before the controlling of the three virtual cameras to photograph the virtual three-dimensional sphere: displaying, according to the first data, the corresponding data on the at least one region of the first image.
After acquiring the sphere parameters and position information of the physical three-dimensional sphere and the position information of the multiple physical projectors, the present application simulates a virtual scene identical to the real scene, has the virtual cameras photograph the image on the virtual three-dimensional sphere to obtain the image each physical projector is to project, and sends those images to the respective physical projectors, which finally project them onto the physical three-dimensional sphere. An accurate, sharp three-dimensional image is thus obtained; consequently there are no strict requirements on the positional relationship between the physical projectors and the physical sphere, the projectors can be placed freely, and the application scenarios and flexibility of three-dimensional projection technology are greatly broadened.
The present invention provides a computer-readable storage medium storing a computer program which, when executed on a computer, causes the computer to perform any of the methods above.
The present invention provides a computing device comprising a memory and a processor, the memory storing executable code which, when executed by the processor, implements any of the methods above.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the embodiments of the present application.
Furthermore, aspects or features of the embodiments of the present application may be implemented as a method, an apparatus, or an article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used in this application covers a computer program accessible from any computer-readable device, carrier or medium. For example, computer-readable media may include, but are not limited to: magnetic storage devices (e.g. hard disks, floppy disks or magnetic tapes), optical discs (e.g. compact discs (CD), digital versatile discs (DVD)), smart cards and flash memory devices (e.g. erasable programmable read-only memory (EPROM), cards, sticks or key drives). In addition, the various storage media described herein may represent one or more devices and/or other machine-readable media for storing information. The term "machine-readable medium" may include, without limitation, wireless channels and various other media capable of storing, containing and/or carrying instructions and/or data.
In the embodiments above, implementation may be wholly or partly by software, hardware, firmware or any combination thereof. When software is used, implementation may be wholly or partly in the form of a computer program product. The computer program product comprises one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another by wire (e.g. coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g. infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g. floppy disks, hard disks, magnetic tapes), optical media (e.g. DVD) or semiconductor media (e.g. solid state disks (SSD)).
It should be understood that in the various embodiments of the present application, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic and should not constitute any limitation on the implementation of the embodiments of the present application.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is only a logical functional division, and there may be other divisions in actual implementation, e.g. multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
If the functions are implemented as software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, an access-network device, etc.) to perform all or some of the steps of the methods described in the various embodiments of the present application. The aforementioned storage media include various media capable of storing program code, such as USB flash drives, removable hard disks, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disks and optical discs.
The foregoing is merely a specific implementation of the embodiments of the present application, but the protection scope of the embodiments of the present application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the embodiments of the present application shall be covered by the protection scope of the embodiments of the present application.

Claims (16)

  1. A three-dimensional sphere visualization system, characterized by comprising:
    a projection display unit, comprising a physical three-dimensional sphere and at least one physical projector, the at least one physical projector being placed at any position around the physical three-dimensional sphere;
    a graphics generator unit, configured to obtain a virtual three-dimensional scene from the received sphere parameters of the physical three-dimensional sphere and the position information of the physical three-dimensional sphere and the at least one physical projector, wherein the virtual three-dimensional scene comprises a virtual three-dimensional sphere and at least one virtual camera, the volume ratio of the virtual three-dimensional sphere to the physical three-dimensional sphere is the same as the ratio of the distance from each virtual camera to the surface of the virtual three-dimensional sphere to the distance from the corresponding physical projector to the surface of the physical three-dimensional sphere, and the positional relationship between each virtual camera and the virtual three-dimensional sphere is the same as the positional relationship between the corresponding physical projector and the physical three-dimensional sphere; and, after generating a first image on the virtual three-dimensional sphere, to control the at least one virtual camera to photograph the virtual three-dimensional sphere, obtaining at least one second image;
    a fusion display unit, configured to perform edge-overlap fusion processing on the second images obtained by adjacent virtual cameras to obtain at least one third image, and to send the at least one third image to the at least one physical projector corresponding to the at least one virtual camera, so that each physical projector projects the image it receives onto the physical three-dimensional sphere.
  2. The system according to claim 1, characterized in that the fusion display unit is further configured to receive, through multiple channels, the second images sent by the at least one virtual camera; and
    to send, through multiple channels, the at least one third image to the at least one physical projector corresponding to the at least one virtual camera.
  3. The system according to claim 1, characterized in that the fusion display unit is specifically configured to
    determine whether the second images captured by any two adjacent virtual cameras contain a region with identical image content; when a region with identical image content exists, split the region along its center line into a first region and a second region; and delete the first region from the second image captured by one of the two adjacent virtual cameras and delete the second region from the second image captured by the other of the two adjacent virtual cameras.
  4. A three-dimensional sphere visualization system, characterized in that, when at least one physical projector is placed at any position around a physical three-dimensional sphere, the system comprises:
    a communication unit, configured to receive sphere parameters of the physical three-dimensional sphere and position information of the physical three-dimensional sphere and the at least one physical projector;
    a processing unit, configured to obtain, from the sphere parameters of the physical three-dimensional sphere and the position information of the physical three-dimensional sphere and the at least one physical projector, a first image for each physical projector to project;
    the communication unit being further configured to send each first image to the corresponding physical projector.
  5. The system according to claim 4, characterized in that the processing unit is specifically configured to
    obtain a virtual three-dimensional scene from the received sphere parameters of the physical three-dimensional sphere and the position information of the physical three-dimensional sphere and the at least one physical projector, wherein the virtual three-dimensional scene comprises a virtual three-dimensional sphere and at least one virtual camera, the volume ratio of the virtual three-dimensional sphere to the physical three-dimensional sphere is the same as the ratio of the distance from each virtual camera to the surface of the virtual three-dimensional sphere to the distance from the corresponding physical projector to the surface of the physical three-dimensional sphere, and the positional relationship between each virtual camera and the virtual three-dimensional sphere is the same as the positional relationship between the corresponding physical projector and the physical three-dimensional sphere; and
    after generating a second image on the virtual three-dimensional sphere, to control the at least one virtual camera to photograph the virtual three-dimensional sphere, obtaining at least one first image.
  6. The system according to any one of claims 4-5, characterized in that the processing unit is further configured to determine whether the second images captured by any two adjacent virtual cameras contain a region with identical image content; when a region with identical image content exists, split the region along its center line into a first region and a second region; and delete the first region from the second image captured by one of the two adjacent virtual cameras and delete the second region from the second image captured by the other of the two adjacent virtual cameras.
  7. The system according to any one of claims 4-6, characterized in that
    the communication unit is configured to receive a user instruction, the instruction being for enlarging a partial region of the second image;
    the processing unit is further configured to display a third image on the virtual three-dimensional sphere, the third image being the second image after the partial region has been enlarged.
  8. The system according to any one of claims 4-6, characterized in that
    the communication unit is configured to receive display data, the display data being data associated with a partial region of the second image;
    the processing unit is further configured to display the display data on the partial region of the second image.
  9. The system according to any one of claims 4-6, characterized in that
    the communication unit is configured to receive a viewpoint operation instruction, the viewpoint operation instruction being an instruction for clicking a first region of the second image, the second image comprising the first region;
    the processing unit is further configured to display, on the first region, content different from the content displayed on the first region before the click.
  10. The system according to any one of claims 4-9, characterized in that the processing unit is further configured to
    add N virtual cameras to the virtual scene to photograph the blind zones left when the at least one virtual camera photographs the virtual three-dimensional sphere, N being a positive integer greater than zero.
  11. A method for three-dimensional sphere visualization, characterized by comprising:
    acquiring sphere parameters of a physical three-dimensional sphere and position information of the physical three-dimensional sphere and of at least three physical projectors;
    obtaining a virtual three-dimensional scene from the sphere parameters of the physical three-dimensional sphere and the position information of the physical three-dimensional sphere and the at least three physical projectors, wherein the virtual three-dimensional scene comprises a virtual three-dimensional sphere and at least three virtual cameras, the volume ratio of the virtual three-dimensional sphere to the physical three-dimensional sphere is the same as the ratio of the distance from each virtual camera to the surface of the virtual three-dimensional sphere to the distance from the corresponding physical projector to the surface of the physical three-dimensional sphere, and the positional relationship between each virtual camera and the virtual three-dimensional sphere is the same as the positional relationship between the corresponding physical projector and the physical three-dimensional sphere;
    generating a first image on the virtual three-dimensional sphere;
    controlling the at least three virtual cameras to photograph the virtual three-dimensional sphere;
    transmitting the second image captured by each virtual camera to the corresponding physical projector, wherein the second images captured by the virtual cameras together make up the first image.
  12. The method according to claim 11, characterized in that, before the transmitting of the second images captured by each virtual camera, being portions of the first image, to the corresponding physical projectors, the method comprises:
    determining whether the second images captured by a first virtual camera and a second virtual camera contain a region with identical image content, the first virtual camera and the second virtual camera being adjacent to each other;
    when a region with identical image content exists, splitting the region along its center line into a first region and a second region;
    deleting the first region from the second image captured by the first virtual camera and deleting the second region from the second image captured by the second virtual camera.
  13. The method according to claim 11, characterized in that the method further comprises:
    receiving a first instruction, the first instruction being an instruction to enlarge or reduce the first image on the virtual three-dimensional sphere;
    and, before the controlling of the three virtual cameras to photograph the virtual three-dimensional sphere:
    enlarging or reducing, according to the first instruction, the content displayed on part or all of the first image.
  14. The method according to claim 13, characterized in that the method further comprises:
    receiving first data, the first data being data associated with at least one region of the first image on the virtual three-dimensional sphere;
    and, before the controlling of the three virtual cameras to photograph the virtual three-dimensional sphere:
    displaying, according to the first data, the corresponding data on the at least one region of the first image.
  15. A computer-readable storage medium storing a computer program which, when executed on a computer, causes the computer to perform the method of any one of claims 11-14.
  16. A computing device comprising a memory and a processor, characterized in that the memory stores executable code, and when the processor executes the executable code, the method of any one of claims 11-14 is implemented.
PCT/CN2020/114880 2020-07-17 2020-09-11 Three-dimensional sphere visualization system WO2022011817A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010694406.XA CN112203079A (zh) 2020-07-17 2020-07-17 Three-dimensional sphere visualization system
CN202010694406.X 2020-07-17

Publications (1)

Publication Number Publication Date
WO2022011817A1 (zh)

Family

ID=74005517

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/114880 WO2022011817A1 (zh) 2020-07-17 2020-09-11 Three-dimensional sphere visualization system

Country Status (2)

Country Link
CN (1) CN112203079A (zh)
WO (1) WO2022011817A1 (zh)



Also Published As

Publication number Publication date
CN112203079A (zh) 2021-01-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20945385; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20945385; Country of ref document: EP; Kind code of ref document: A1)