CN110035275B - Urban panoramic dynamic display system and method based on large-screen fusion projection - Google Patents


Info

Publication number
CN110035275B
CN110035275B (application CN201910237297.6A)
Authority
CN
China
Prior art keywords
image
points
coordinate system
projection
equation
Prior art date
Legal status
Active
Application number
CN201910237297.6A
Other languages
Chinese (zh)
Other versions
CN110035275A (en)
Inventor
Wang Xiaokui (王孝奎)
Current Assignee
Suzhou Huaheng Exhibition Design And Construction Co ltd
Original Assignee
Suzhou Huaheng Exhibition Design And Construction Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Huaheng Exhibition Design And Construction Co ltd filed Critical Suzhou Huaheng Exhibition Design And Construction Co ltd
Priority to CN201910237297.6A priority Critical patent/CN110035275B/en
Publication of CN110035275A publication Critical patent/CN110035275A/en
Application granted granted Critical
Publication of CN110035275B publication Critical patent/CN110035275B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/08: Projecting images onto non-planar surfaces, e.g. geodetic screens
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05: Geographic models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30: Image reproducers
    • H04N13/363: Image reproducers using image projection screens

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an urban panoramic dynamic display system and method based on large-screen fusion projection, relates to the technical field of urban panoramic dynamic display, and aims to solve the problem that existing methods for displaying three-dimensional images give a poor display effect when applied to urban panoramic display. The system comprises a building module for building a three-dimensional model of a city, a splicing module for splicing the projection images of a plurality of projectors into an overall projection image, a conversion module for generating a conversion equation, a matching module for generating a matching equation, a first transformation module for generating a first transformation equation, a second transformation module for generating a second transformation equation, and a projection module for acquiring vehicle feature points and performing pixel transformation calculation on the overall projection image. The projection on the display screen includes a real-time projection of the vehicles in the city, so that a dynamic display of the city panorama is realized, the immersion of viewers is improved, and the system has the advantage of a good city panorama display effect.

Description

Urban panoramic dynamic display system and method based on large-screen fusion projection
Technical Field
The invention relates to the technical field of urban panoramic dynamic display, in particular to an urban panoramic dynamic display system and method based on large-screen fusion projection.
Background
Large-screen fusion projection is a display technique that projects a three-dimensional scene onto a large screen in real time. Its principle is that the pictures projected by a plurality of projectors undergo edge overlapping, geometric correction and brightness blending to produce a single picture that is free of gaps, uniform in color, extremely bright, oversized and of ultra-high resolution, and that visually appears to be projected by one projector.
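The edge-overlap-and-blend principle described above can be sketched for the simplest case of two horizontally adjacent projector frames. The linear alpha ramp used here is an illustrative assumption; real fusion systems additionally perform geometric correction and gamma-aware brightness blending:

```python
import numpy as np

def blend_edges(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Fuse two horizontally adjacent projector frames whose last/first
    `overlap` columns show the same scene, using a linear alpha ramp so
    the seam fades from the left frame into the right one."""
    alpha = np.linspace(1.0, 0.0, overlap)  # weight of the left frame per seam column
    seam = (left[:, -overlap:] * alpha[None, :, None]
            + right[:, :overlap] * (1.0 - alpha)[None, :, None])
    # Final picture: left body + blended seam + right body, gap-free
    return np.concatenate([left[:, :-overlap], seam, right[:, overlap:]], axis=1)
```

With an n-projector wall the same ramp is applied at every seam, which is what makes the composite look as if one projector produced it.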
In the field of urban panoramic display, the existing display mode is generally the digital sand table. The digital sand table is a revolutionary innovation: various multimedia audio-visual means, chiefly dynamic projection, are added to the traditional solid sand table, breaking the impression of the sand table as a monotonous static carving. In a city planning hall the sand table is indispensable; it is a miniature city, giving the audience a bird's-eye view from which to observe the city and take in the whole at a glance.
However, most digital sand tables on the market use a single display surface composed of a plurality of display screens. This structure has the disadvantage that a large screen assembled from several small screens inevitably has physical gaps between the screens, so the displayed image cannot remain intact and carries artificial seams; during the display of the digital sand table, these gaps easily contaminate the image and spoil the appearance.
To solve the above problems, Chinese patent publication CN101800907A discloses a method for displaying a three-dimensional image by means of a three-dimensional image display system. The system comprises: a database for storing and outputting the three-dimensional image data to be projected; a plurality of projectors; a plurality of node machines, each of which receives three-dimensional image projection parameters from a main control machine and projects the three-dimensional image data to be displayed; and the main control machine, which generates the three-dimensional image projection parameters and distributes them to the node machines. The three-dimensional image data controlled and projected by each node machine converge on the display screen to form a complete three-dimensional image. The method comprises the following steps: a display-screen orientation and user-observation-point setting step, in which a ground main screen is laid horizontally on the floor of the display space and the user observation point is set in the air, so that the user looks down at the screen; a step of determining and sending three-dimensional image projection parameters, in which the main control machine determines the parameters and distributes them to the node machines; and a three-dimensional image projection step, in which each node machine extracts three-dimensional image data according to the received parameters, pre-processes the data for projection and instructs the projectors to project.
However, the above prior-art solution has the following drawback: it can only project static three-dimensional image data, such as city buildings and roads, onto the display screen; it cannot capture the vehicles travelling on the city roads in real time and project them onto the screen. The displayed city panorama is therefore static, lacks vitality, gives viewers a poor sense of immersion, and yields a poor city display effect.
Disclosure of Invention
The invention aims to provide an urban panoramic dynamic display system and method based on large-screen fusion projection.
The first object of the invention is an urban panoramic dynamic display system based on large-screen fusion projection, which has the advantages of displaying a dynamic urban panoramic image, improving the immersion of viewers, and achieving a good urban panoramic display effect.
The second object of the invention is an urban panoramic dynamic display method based on large-screen fusion projection, which has the advantage of improving the urban panoramic display effect.
The above object of the present invention is achieved by the following technical solutions:
the utility model provides a city panorama dynamic display system based on large screen fusion projection, includes the show screen, installs the show room and many cameras that are used for monitoring city traffic flow of show screen still include:
the building module is used for building a three-dimensional model of the city with a reduced corresponding proportion by adopting a three-dimensional modeling method;
the splicing module is used for splicing the projection images of the plurality of projectors into an overall projection image by an image splicing method, and for storing the overall projection image as an overall projection image result image reduced by a set ratio;
the conversion module is used for simulating a display model of the whole projection image result graph projected on a display screen and generating a conversion equation between the display model and the three-dimensional model;
the matching module is used for acquiring the image shot by the camera and generating a matching equation between the image shot by the camera and the three-dimensional model;
a first transformation module for generating a first transformation equation between the overall projection image result map and a presentation model;
the second transformation module is used for generating a second transformation equation between the overall projection image result image and a picture shot by the camera based on the matching equation, the conversion equation and the first transformation equation;
and the projection module is used for acquiring and storing vehicle characteristic points corresponding to vehicles in the picture according to the picture shot by the camera, and is also used for carrying out pixel transformation calculation on the whole projection image based on the second transformation equation and the acquired vehicle characteristic points, so that the projection image is projected on a display screen and then has moving and/or static vehicle projection.
By adopting the technical scheme, the pixel coordinates of the vehicle characteristic points in the picture shot by the camera in the projection image can be obtained, and the colors of the corresponding pixels in the projection image are changed according to the obtained pixel coordinates, so that the projection projected on the display screen comprises the real-time projection of the vehicles in the city, the dynamic display of the city panorama is realized, the immersion feeling of viewers is improved, and the display method has the advantage of good city panorama display effect.
The invention is further configured to: the conversion module includes:
the simulation unit is used for simulating a display model of the whole projection image result image projected on a display screen, constructing a local three-dimensional coordinate system according to the display model and constructing a whole three-dimensional coordinate system according to the three-dimensional model;
the first extraction unit is used for extracting a plurality of observation points in the local three-dimensional coordinate system and acquiring equivalent points of the observation points in the overall three-dimensional coordinate system;
and the conversion unit is used for establishing a conversion equation between the local three-dimensional coordinate system and the whole three-dimensional coordinate system based on the coordinate values of the multiple groups of observation points and the equivalent points.
By adopting the technical scheme, the conversion equation between the local three-dimensional coordinate system and the whole three-dimensional coordinate system is fitted by combining the coordinate values of the multiple groups of observation points and the equivalent points, and the effect of quickly obtaining the corresponding relation between the points in the display model and the points in the three-dimensional model is achieved.
The invention is further configured to: the matching module includes:
the first acquisition unit is used for acquiring the image shot by the camera and constructing a first image coordinate system according to the acquired image;
the second extraction unit is used for extracting a plurality of reference points in the first image coordinate system and acquiring the comparison points corresponding to the reference points in the overall three-dimensional coordinate system;
and the fitting unit is used for obtaining a matching equation between the image shot by the camera and the three-dimensional model by least-squares fitting, based on the coordinate values of the multiple groups of reference points and comparison points.
By adopting the technical scheme, the matching equation between the image shot by the camera and the three-dimensional model is fitted from the coordinate values of the multiple groups of reference points and comparison points, so that the correspondence between coordinate points in the image shot by the camera and points in the three-dimensional model can be obtained quickly.
The invention is further configured to: the first transformation module comprises:
the pixel unit is used for constructing a second image coordinate system based on the whole projection image result image and obtaining pixel points corresponding to the observation points in the second image coordinate system;
and the construction unit is used for establishing a first transformation equation between the whole projection image result graph and the display model based on the coordinate values of the plurality of groups of observation points and the pixel points.
By adopting the technical scheme, the first transformation equation capable of reflecting the corresponding relation between the coordinate points in the whole projection image result graph and the coordinate points in the display model is obtained, and subsequent transformation calculation is facilitated.
The invention is further configured to: the projection module includes:
the second acquisition unit is used for acquiring and storing vehicle characteristic points corresponding to the vehicles in the picture according to the picture shot by the camera;
the matching unit is used for matching a corresponding vehicle model according to the vehicle feature points acquired by the second acquisition unit, and for acquiring a plurality of image points of the vehicle model in the first image coordinate system;
a mapping unit for obtaining mapping points corresponding to the image points in a second image coordinate system based on the second transformation equation;
and the scaling unit is used for performing pixel transformation calculation on the overall projection image according to the mapping points and the ratio between the overall projection image and the overall projection image result image, so that the projection image, once projected on the display screen, carries a moving and/or static vehicle projection.
By adopting the technical scheme, after the vehicle characteristic points in the picture shot by the camera are captured, the corresponding vehicle model can be correspondingly matched (namely the vehicle contour is corrected), so that a plurality of image points capable of reflecting the shape of the vehicle can be obtained. Finally, the mapping point corresponding to the image point in the second image coordinate system can be obtained according to the second transformation equation, and after the pixel color of the pixel corresponding to the mapping point is changed, the projection image can be projected on the display screen and then is provided with moving and/or static vehicle projection.
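As an illustration of the pixel transformation calculation described above, the following sketch recolors the projection-image pixels that correspond to detected vehicle image points. `paint_vehicles`, the callable `second_transform` and the marker color are hypothetical stand-ins for the patent's actual equations and data:

```python
import numpy as np

def paint_vehicles(projection_img, image_points, second_transform, scale):
    """Recolor the pixels of the full projection image corresponding to
    vehicle image points detected in one camera frame.

    second_transform : maps a camera-frame point (u, v) to a result-image
                       pixel (px, py) -- the second transformation equation,
                       passed in here as a callable.
    scale            : result image : full projection ratio (e.g. 1/3000),
                       used to scale the mapping point back up."""
    out = projection_img.copy()
    for (u, v) in image_points:
        px, py = second_transform(u, v)                # mapping point
        x, y = int(round(px / scale)), int(round(py / scale))
        if 0 <= y < out.shape[0] and 0 <= x < out.shape[1]:
            out[y, x] = (255, 0, 0)                    # mark vehicle pixel
    return out
```

In practice every image point of the matched vehicle model would be painted, so the projected screen shows the whole vehicle silhouette rather than isolated dots.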
The second aim of the invention is realized by the following technical scheme:
a city panoramic dynamic display method based on large screen fusion projection comprises a display screen, a display hall provided with the display screen and a plurality of cameras used for monitoring city traffic flow, and further comprises the following steps:
Step S1: constructing a three-dimensional model of the city, reduced by a corresponding ratio, by a three-dimensional modeling method;
Step S2: splicing the projection images of the projectors into an overall projection image by an image splicing method, and storing the overall projection image as an overall projection image result image reduced by a set ratio;
Step S3: simulating a display model of the overall projection image result image projected on the display screen, and generating a conversion equation between the display model and the three-dimensional model;
Step S4: acquiring the image shot by the camera, and generating a matching equation between the image shot by the camera and the three-dimensional model;
Step S5: generating a first transformation equation between the overall projection image result image and the display model;
Step S6: generating a second transformation equation between the overall projection image result image and the picture shot by the camera, based on the matching equation, the conversion equation and the first transformation equation;
Step S7: acquiring and storing vehicle feature points corresponding to a vehicle according to the picture shot by the camera, and performing pixel transformation calculation on the overall projection image based on the second transformation equation and the acquired vehicle feature points, so that the projection image, once projected on the display screen, carries a moving and/or static vehicle projection.
By adopting the above technical scheme, the pixel coordinates in the projection image that correspond to the vehicle feature points captured by the camera can be found quickly, and the colors of the corresponding pixels changed, so that the vehicles in the city are also projected onto the display screen in real time, improving the effect of the urban panoramic display.
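Step S6 composes the three previously derived relations into one mapping from a camera-frame point to a result-image pixel. A minimal functional sketch, with all three input equations passed in as callables whose concrete forms come from steps S3 to S5:

```python
def make_second_transform(matching, overall_to_local, local_to_pixel):
    """Compose the second transformation equation of step S6 from the
    relations of steps S3-S5:
      camera point (u, v) -> overall 3D coords     (matching equation, S4)
                          -> display-model coords  (conversion equation, S3)
                          -> result-image pixel    (first transformation, S5)."""
    def second_transform(u, v):
        X, Y, Z = matching(u, v)              # camera frame -> 3D city model
        x, y, z = overall_to_local(X, Y, Z)   # 3D model -> display model
        return local_to_pixel(x, y, z)        # display model -> pixel coords
    return second_transform
```

The composition order (matching, then conversion, then first transformation) follows the dependency stated in step S6; the actual equation forms are whatever the fitting steps produced.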
The invention is further configured to: the step S3 specifically includes:
Step S31: simulating a display model of the overall projection image result image projected on the display screen, constructing a local three-dimensional coordinate system from the display model, and constructing an overall three-dimensional coordinate system from the three-dimensional model;
Step S32: extracting a plurality of observation points in the local three-dimensional coordinate system, and acquiring the equivalent points of the observation points in the overall three-dimensional coordinate system;
Step S33: establishing a conversion equation between the local three-dimensional coordinate system and the overall three-dimensional coordinate system, based on the coordinate values of the multiple groups of observation points and equivalent points.
By adopting the technical scheme, the corresponding relation between the display model and the three-dimensional model can be quickly obtained, and subsequent calculation is facilitated.
The invention is further configured to: the step S4 specifically includes:
Step S41: acquiring an image shot by the camera, and constructing a first image coordinate system from the acquired image;
Step S42: extracting a plurality of reference points in the first image coordinate system, and acquiring their corresponding comparison points in the overall three-dimensional coordinate system;
Step S43: obtaining a matching equation between the image shot by the camera and the three-dimensional model by least-squares fitting, based on the coordinate values of the multiple groups of reference points and comparison points.
By adopting the technical scheme, after the matching equation between the image shot by the camera and the three-dimensional model is established, the three-dimensional coordinates of all points in the image corresponding to the three-dimensional model can be quickly found.
The invention is further configured to: the step S5 specifically includes:
step S51, constructing a second image coordinate system based on the whole projection image result graph, and obtaining pixel points corresponding to the observation points in the second image coordinate system;
and step S52, establishing a first transformation equation between the overall projection image result graph and the display model based on the coordinate values of the multiple groups of observation points and the pixel points.
By adopting the technical scheme, the corresponding relation between the whole projection image result graph and the display model can be quickly fitted based on the coordinate values of the multiple groups of observation points and the pixel points.
The invention is further configured to: the step S7 specifically includes:
step S71, acquiring and storing vehicle characteristic points corresponding to the vehicle according to the pictures shot by the camera;
step S72, matching a corresponding vehicle model according to the acquired vehicle characteristic points, and acquiring a plurality of image points of the vehicle model corresponding to the first image coordinate system;
step S73, obtaining a mapping point corresponding to the image point in the second image coordinate system based on the second transformation equation;
and step S74, performing pixel transformation calculation on the overall projection image according to the obtained mapping points and the proportion between the overall projection image and the overall projection image result image, so that the projection image is projected on the display screen and then is projected by a moving and/or static vehicle.
By adopting the technical scheme, after the corresponding vehicle feature points are obtained according to the picture shot by the camera, the corresponding vehicle model can be matched according to the obtained vehicle feature points, so that the corrected vehicle contour is obtained and corresponds to the picture shot by the camera, and a plurality of image points reflecting the shape of the vehicle are obtained. Finally, a mapping point corresponding to the image point in a second image coordinate system can be obtained through a second transformation equation, and color transformation is carried out on a pixel corresponding to the mapping point, so that the projection image is projected on a display screen and then has moving and/or static vehicle projection.
In conclusion, the beneficial technical effects of the invention are as follows:
1. through the arrangement of the conversion module, the matching module, the first conversion module, the second conversion module and the projection module, a projection image can be projected on a display screen and then has a movable and/or static vehicle projection, so that the immersion feeling of viewers is improved, and the urban panoramic display effect is also improved;
2. through the setting of the second acquisition unit and the matching unit, the captured vehicle contour can be corrected, and the vehicle projection effect can be improved.
Drawings
Fig. 1 is a schematic overall structure diagram of an urban panoramic dynamic display system based on large-screen fusion projection according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of another urban panoramic dynamic display system based on large-screen fusion projection according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for dynamically displaying an urban panoramic image based on large-screen fusion projection according to a second embodiment of the present invention;
fig. 4 is a flowchart of step S3 in the method for dynamically displaying an urban panoramic image based on large-screen fusion projection according to the second embodiment of the present invention;
fig. 5 is a flowchart of step S4 in the method for dynamically displaying an urban panoramic image based on large-screen fusion projection according to the second embodiment of the present invention;
fig. 6 is a flowchart of step S5 in the method for dynamically displaying an urban panoramic image based on large-screen fusion projection according to the second embodiment of the present invention;
fig. 7 is a flowchart of step S7 in the method for dynamically displaying an urban panoramic image based on large-screen fusion projection according to the second embodiment of the present invention.
In the figure: 1. building module; 2. splicing module; 3. conversion module; 31. simulation unit; 32. first extraction unit; 33. conversion unit; 4. matching module; 41. first acquisition unit; 42. second extraction unit; 43. fitting unit; 5. first transformation module; 51. pixel unit; 52. construction unit; 6. second transformation module; 7. projection module; 71. second acquisition unit; 72. matching unit; 73. mapping unit; 74. scaling unit.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Example one
Referring to fig. 1, the urban panoramic dynamic display system based on large-screen fusion projection disclosed by the invention comprises a display screen, a display hall provided with the display screen and a plurality of cameras for monitoring urban traffic flow. The system further comprises a construction module 1, a splicing module 2, a conversion module 3, a matching module 4, a first transformation module 5, a second transformation module 6 and a projection module 7.
Specifically, the building module 1 is configured to build a three-dimensional model of the city, reduced by a corresponding ratio, using a three-dimensional modeling method (including modeling methods such as primitive bodies, extended bodies, Boolean operations, rotation, extrusion, lofting, and the like). In this embodiment, the ratio between the three-dimensional model and the city is 1:500,000. The splicing module 2 is configured to splice the projection images of the plurality of projectors into an overall projection image by an image splicing method, and to store it as an overall projection image result image reduced by a set ratio; preferably, the ratio between the overall projection image result image and the overall projection image is 1:3,000.
the conversion module 3 is used for simulating a display model of the whole projection image result image projected on the display screen and generating a conversion equation between the display model and the three-dimensional model. The matching module 4 is configured to acquire an image captured by the camera and process the acquired image, so as to generate a matching equation between the image captured by the camera and the three-dimensional model. The first transformation module 5 is configured to generate a first transformation equation between the overall projection image result map and the presentation model. The second transformation module 6 is used for calculating a second transformation equation between the overall projection image result image and the shooting picture taken by the camera based on the matching equation generated by the matching module 4, the transformation equation generated by the transformation module 3 and the first transformation equation generated by the first transformation module 5.
The projection module 7 is used for acquiring and storing vehicle characteristic points corresponding to vehicles in the picture according to the picture shot by the camera, and is also used for performing pixel transformation calculation on the whole projection image based on the second transformation equation and the acquired vehicle characteristic points, so that the projection image is projected on the display screen and then has a moving and/or static vehicle projection.
Referring to fig. 2, the conversion module 3 comprises a simulation unit 31, a first extraction unit 32 and a conversion unit 33. Specifically, the simulation unit 31 is configured to simulate the display model in which the overall projection image result image is projected on the display screen, to construct a local three-dimensional coordinate system from the simulated display model, and to construct an overall three-dimensional coordinate system from the three-dimensional model established by the building module 1. The first extraction unit 32 is configured to extract a plurality of observation points A (X1, Y1, Z1) in the local three-dimensional coordinate system, where X1, Y1 and Z1 differ between any two observation points A. The first extraction unit 32 is further configured to acquire the equivalent point B of each observation point A in the overall three-dimensional coordinate system; specifically, points with salient features in the three-dimensional model, such as rooftop corners and crossroad corners, may be selected as equivalent points B, with the corresponding building/road feature points in the display model serving as observation points A. The conversion unit 33 is configured to establish a conversion equation between the local three-dimensional coordinate system and the overall three-dimensional coordinate system based on the three-dimensional coordinate values of the multiple groups of observation points and equivalent points. Taking observation point A1 and equivalent point B1 as an example, if the coordinates of equivalent point B1 in the overall three-dimensional coordinate system are (X2, Y2, Z2), the coordinates of the corresponding observation point A1 in the local three-dimensional coordinate system are (X2/500000, Y2/500000, Z2/500000).
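The conversion equation in this embodiment reduces to a uniform scaling by the model ratio. A sketch under that assumption (axis-aligned scaling at 1:500,000; a real deployment might additionally need a rotation and translation between the two coordinate systems):

```python
SCALE = 500_000  # city : model ratio of this embodiment (1:500,000)

def equivalent_to_observation(X2, Y2, Z2):
    """Conversion equation of unit 33: overall coordinates of an equivalent
    point B -> local coordinates of its observation point A."""
    return X2 / SCALE, Y2 / SCALE, Z2 / SCALE

def observation_to_equivalent(x1, y1, z1):
    """Inverse direction: observation point A -> equivalent point B."""
    return x1 * SCALE, y1 * SCALE, z1 * SCALE
```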
Referring to fig. 2, the matching module 4 includes a first obtaining unit 41, a second extracting unit 42, and a fitting unit 43. The first obtaining unit 41 is configured to obtain the images captured by the cameras and to construct a first image coordinate system from each obtained image. Note that, since each camera captures a different picture, the first obtaining unit 41 establishes multiple first image coordinate systems, i.e. one per camera. The second extracting unit 42 is configured to extract a plurality of reference points G (X3, Y3) in the first image coordinate system and to obtain, for each reference point G, the corresponding comparison point H (X4, Y4, Z4) in the overall three-dimensional coordinate system. Specifically, a traffic light, a billboard, or a roadside marker (e.g. a trash can or bus stop) in the picture captured by a camera may be selected as the reference point G1, and the corresponding comparison point H1 is found in the three-dimensional model according to the geographic location of that camera. The fitting unit 43 is configured to obtain, by least-squares fitting, a polynomial matching equation between the image captured by the camera and the three-dimensional model, based on the coordinate values of the multiple groups of reference points and comparison points.
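The patent specifies least-squares fitting but not the polynomial's degree. The sketch below assumes a second-degree polynomial in the image coordinates and solves for its coefficients with NumPy; all function names are illustrative assumptions:

```python
import numpy as np

def design_matrix(pts):
    """Second-degree polynomial terms of the image coordinates (X3, Y3)."""
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_matching_equation(ref_pts, cmp_pts):
    """Least-squares fit mapping reference points G -> comparison points H.

    ref_pts: (N, 2) image coordinates; cmp_pts: (N, 3) model coordinates.
    Returns a (6, 3) coefficient matrix; requires N >= 6 point pairs.
    """
    A = design_matrix(np.asarray(ref_pts, dtype=float))
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(cmp_pts, dtype=float), rcond=None)
    return coeffs

def apply_matching_equation(coeffs, pts):
    """Evaluate the fitted polynomial matching equation at new image points."""
    return design_matrix(np.asarray(pts, dtype=float)) @ coeffs
```

One coefficient matrix would be fitted per camera, since each camera has its own first image coordinate system.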
Referring to fig. 2, the first transformation module 5 includes a pixel unit 51 and a construction unit 52. The pixel unit 51 is configured to construct, based on the overall projection image result graph, a second image coordinate system whose unit is a single pixel, to find in it the pixel point corresponding to each observation point selected on the display model, and to record the pixel coordinate values of the found pixel points. The construction unit 52 is configured to fit a first transformation equation between the overall projection image result graph and the display model, based on the correlation between the coordinate values of the multiple groups of observation points and pixel points.
Referring to fig. 2, the projection module 7 includes a second acquisition unit 71, a matching unit 72, a mapping unit 73, and a scaling unit 74. The second acquisition unit 71 is configured to acquire and store, from the picture captured by the camera, the vehicle feature points corresponding to the vehicles in the picture. The matching unit 72 is configured to match a corresponding vehicle model according to the vehicle feature points acquired by the second acquisition unit 71, and to obtain a plurality of image points of the vehicle model in the first image coordinate system. Specifically, after acquiring the vehicle feature points from the camera image, the second acquisition unit 71 analyzes them and outputs a piece of vehicle contour information; the matching unit 72 then matches the corresponding vehicle model (i.e. vehicle image) to this contour information and, according to the matched model, corrects the feature points acquired from the camera image into a plurality of image points that reflect the vehicle's form (this is the process of obtaining the image points of the vehicle model in the first image coordinate system). The mapping unit 73 is configured to obtain, based on the second transformation equation, the mapping points of these image points in the second image coordinate system. The scaling unit 74 is configured to perform the pixel transformation calculation on the overall projection image (i.e. to transform the colors of the pixels within the area enclosed by the corresponding mapping points) according to the mapping points and the ratio between the overall projection image and the overall projection image result graph, so that the projected image, once shown on the display screen, contains projections of moving and/or stationary vehicles.
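The scaling unit's use of the 1:3000 ratio between the result graph and the full projection image can be sketched as follows; the polygon fill is simplified here to an axis-aligned bounding region, and all names are illustrative assumptions:

```python
# Mapping points live in the second image coordinate system (pixels of the
# reduced result graph); the pixels actually recolored live in the full-size
# overall projection image, which in this embodiment is 3000x larger.

RESULT_TO_FULL = 3000  # overall projection image : result graph ratio

def scale_mapping_points(mapping_pts):
    """Scale result-graph pixel coordinates up to full projection-image pixels."""
    return [(round(u * RESULT_TO_FULL), round(v * RESULT_TO_FULL))
            for u, v in mapping_pts]

def bounding_region(pts):
    """Axis-aligned pixel region enclosed by the scaled mapping points
    (a stand-in for the polygon fill the patent describes)."""
    us = [p[0] for p in pts]
    vs = [p[1] for p in pts]
    return (min(us), min(vs), max(us), max(vs))
```

In the full system the pixels inside this region would be recolored on every camera frame, which is what makes the projected vehicles appear to move.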
Example two
Referring to fig. 3, the urban panorama dynamic display method based on large-screen fusion projection disclosed by the invention relies on a display screen, a display hall in which the display screen is installed, and a plurality of cameras for monitoring urban traffic flow. It comprises the following steps:
S1, constructing a three-dimensional model of the city, reduced in scale, by a three-dimensional modeling method (including primitive-body, extended-body, Boolean-operation, rotation, stretching (or extrusion), lofting and similar modeling methods), where the scale between the three-dimensional model and the city is 1:500000.
S2, stitching the projection images of the plurality of projectors into an overall projection image by an image stitching method, and storing the overall projection image, reduced by a set scale, as an overall projection image result graph; preferably, in this embodiment the ratio between the overall projection image result graph and the overall projection image is 1:3000.
S3, simulating a display model in which the overall projection image result graph is projected on the display screen, and generating a conversion equation between the display model and the three-dimensional model.
S4, acquiring the images shot by the cameras, and generating a matching equation between the images shot by the cameras and the three-dimensional model.
S5, generating a first transformation equation between the overall projection image result graph and the display model.
S6, generating a second transformation equation between the overall projection image result graph and the pictures shot by the cameras, based on the matching equation, the conversion equation and the first transformation equation.
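Conceptually, step S6 obtains the second transformation by composing the three relations fitted earlier: camera picture to overall three-dimensional model (matching equation), overall to local coordinates (conversion equation), and display model to result-graph pixels (the first transformation equation, applied in the display-model-to-pixel direction). A minimal sketch of that composition, assuming each relation is available as a callable (all names are illustrative):

```python
def make_second_transformation(matching_eq, conversion_eq, first_eq_inv):
    """Compose the three fitted relations into the second transformation:
    a camera-picture point -> a pixel in the overall projection result graph.

    matching_eq:   (X3, Y3) image point -> (X4, Y4, Z4) overall-3D point
    conversion_eq: overall-3D point     -> local-3D (display-model) point
    first_eq_inv:  display-model point  -> result-graph pixel (u, v)
    """
    def second_transformation(image_point):
        overall_pt = matching_eq(image_point)
        local_pt = conversion_eq(overall_pt)
        return first_eq_inv(local_pt)
    return second_transformation
```

Because the matching equation differs per camera, one composed second transformation would exist per camera as well.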
S7, acquiring and storing the vehicle characteristic points corresponding to the vehicles according to the pictures shot by the cameras, and performing the pixel transformation calculation on the overall projection image based on the second transformation equation and the acquired vehicle characteristic points, so that the projected image, once displayed on the display screen, contains projections of moving and/or stationary vehicles.
Referring to fig. 4, step S3 specifically includes:
Step S31, simulating a display model in which the overall projection image result graph is projected on the display screen, constructing a local three-dimensional coordinate system from the display model, and constructing an overall three-dimensional coordinate system from the three-dimensional model.
Step S32, extracting a plurality of observation points A in the local three-dimensional coordinate system, and acquiring the equivalent points B corresponding to the observation points in the overall three-dimensional coordinate system. Specifically, let the three-dimensional coordinates of an observation point A in the local three-dimensional coordinate system be (X1, Y1, Z1); X1, Y1 and Z1 differ between any two observation points A. For the equivalent points B, points with salient features in the three-dimensional model, such as building-top inflection points and crossroad corners, may be selected, with the corresponding building/road-surface feature points in the display model taken as the observation points A.
Step S33, establishing a conversion equation between the local and overall three-dimensional coordinate systems based on the coordinate values of the multiple groups of observation points and equivalent points. Taking observation point A1 and equivalent point B1 as an example: if the coordinates of equivalent point B1 in the overall three-dimensional coordinate system are (X2, Y2, Z2), then the coordinates of observation point A1 corresponding to B1 in the local three-dimensional coordinate system are (X2/500000, Y2/500000, Z2/500000).
Referring to fig. 5, step S4 specifically includes:
Step S41, acquiring the images shot by the cameras, and constructing a first image coordinate system from each acquired image. Specifically, since each camera captures a different picture, multiple first image coordinate systems are established, i.e. one per camera.
Step S42, extracting a plurality of reference points G (X3, Y3) in the first image coordinate system, and acquiring the comparison points H (X4, Y4, Z4) corresponding to the reference points in the overall three-dimensional coordinate system. Specifically, taking reference point G1 and comparison point H1 as an example, a traffic light, a billboard, or a roadside marker (e.g. a trash can or bus stop) in the picture captured by a camera may be selected as the reference point G1, and the corresponding comparison point H1 found in the three-dimensional model according to the geographic location of that camera.
Step S43, obtaining a polynomial matching equation between the image shot by the camera and the three-dimensional model by least-squares fitting, based on the coordinate values of the multiple groups of reference points and comparison points.
Referring to fig. 6, step S5 specifically includes:
Step S51, constructing, based on the overall projection image result graph, a second image coordinate system whose unit is a single pixel, finding in it the pixel point corresponding to each observation point selected on the display model, and recording the pixel coordinate values of the found pixel points.
Step S52, fitting a first transformation equation between the overall projection image result graph and the display model, based on the correlation between the coordinate values of the multiple groups of observation points and pixel points.
Referring to fig. 7, step S7 specifically includes:
Step S71, acquiring and storing the vehicle characteristic points corresponding to the vehicles according to the pictures shot by the cameras.
Step S72, matching a corresponding vehicle model according to the acquired vehicle characteristic points, and acquiring a plurality of image points of the vehicle model in the first image coordinate system.
Specifically, after the vehicle feature points are obtained from the image captured by the camera, they are analyzed and a piece of vehicle contour information is output; the corresponding vehicle model (i.e. vehicle image) is then matched to this contour information, and the feature points obtained in step S71 are corrected, according to the matched model, into a plurality of image points that reflect the vehicle's form (this is the process of obtaining the image points of the vehicle model in the first image coordinate system).
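The patent does not specify how the contour information selects a vehicle model; one simple realization is nearest-neighbour matching against a small library of contour descriptors. The library contents and descriptor format below are invented purely for illustration:

```python
import math

# Hypothetical library of vehicle models, each keyed to a fixed-length
# contour descriptor (e.g. normalized length/width/height features).
VEHICLE_MODELS = {
    "sedan": (1.0, 0.35, 0.30),
    "bus":   (1.0, 0.40, 0.90),
    "truck": (1.0, 0.45, 0.70),
}

def match_vehicle_model(contour_descriptor):
    """Return the model whose descriptor is closest (Euclidean) to the input."""
    return min(VEHICLE_MODELS,
               key=lambda name: math.dist(VEHICLE_MODELS[name], contour_descriptor))
```

A production system would likely use a learned classifier instead, but the nearest-descriptor rule captures the matching step's role in the pipeline.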
Step S73, obtaining the mapping points of the image points in the second image coordinate system based on the second transformation equation.
Step S74, performing the pixel transformation calculation on the overall projection image (i.e. transforming the colors of the pixels within the area enclosed by the corresponding mapping points) according to the obtained mapping points and the ratio between the overall projection image and the overall projection image result graph, so that the projected image, once displayed on the display screen, contains projections of moving and/or stationary vehicles.
The embodiments described above are preferred embodiments of the present invention, and the protection scope of the invention is not limited by them: all equivalent changes made according to the structure, shape and principle of the invention fall within its protection scope.

Claims (2)

1. An urban panorama dynamic display system based on large-screen fusion projection, comprising a display screen, a display hall in which the display screen is installed, and a plurality of cameras for monitoring urban traffic flow, characterized by further comprising:
a construction module (1) for constructing a three-dimensional model of the city, reduced by a corresponding proportion, by a three-dimensional modeling method;
a splicing module (2) for splicing the projection images of a plurality of projectors into an overall projection image by an image splicing method, and storing the overall projection image, reduced by a set proportion, as an overall projection image result graph;
a conversion module (3) for simulating a display model in which the overall projection image result graph is projected on a display screen, and generating a conversion equation between the display model and the three-dimensional model;
the conversion module (3) comprises:
a simulation unit (31) for simulating a display model in which the overall projection image result graph is projected on a display screen, for constructing a local three-dimensional coordinate system from the display model, and for constructing an overall three-dimensional coordinate system from the three-dimensional model;
a first extraction unit (32) for extracting a plurality of observation points in the local three-dimensional coordinate system and acquiring the equivalent points of the observation points in the overall three-dimensional coordinate system;
a conversion unit (33) for establishing a conversion equation between the local three-dimensional coordinate system and the overall three-dimensional coordinate system based on the coordinate values of the multiple groups of observation points and equivalent points;
a matching module (4) for acquiring the images shot by the cameras and generating a matching equation between the images shot by the cameras and the three-dimensional model;
the matching module (4) comprises:
a first acquisition unit (41) for acquiring the image shot by the camera and constructing a first image coordinate system from the acquired image;
a second extraction unit (42) for extracting a plurality of reference points in the first image coordinate system and acquiring the corresponding comparison points in the overall three-dimensional coordinate system;
a fitting unit (43) for obtaining a matching equation between the image shot by the camera and the three-dimensional model by least-squares fitting, based on the coordinate values of the multiple groups of reference points and comparison points;
a first transformation module (5) for generating a first transformation equation between the overall projection image result graph and the display model;
the first transformation module (5) comprises:
a pixel unit (51) for constructing a second image coordinate system based on the overall projection image result graph and obtaining the pixel points corresponding to the observation points in the second image coordinate system;
a construction unit (52) for establishing a first transformation equation between the overall projection image result graph and the display model based on the coordinate values of the multiple groups of observation points and pixel points;
a second transformation module (6) for generating a second transformation equation between the overall projection image result graph and the pictures taken by the cameras, based on the matching equation, the conversion equation and the first transformation equation;
a projection module (7) for acquiring and storing, from the pictures shot by the cameras, the vehicle characteristic points corresponding to the vehicles in each picture, and for performing a pixel transformation calculation on the overall projection image based on the second transformation equation and the acquired vehicle characteristic points, so that the projected image, once displayed on the display screen, contains projections of moving and/or stationary vehicles;
the projection module (7) comprises:
a second acquisition unit (71) for acquiring and storing, from the picture shot by the camera, the vehicle feature points corresponding to the vehicles in the picture;
a matching unit (72) for matching a corresponding vehicle model according to the vehicle feature points acquired by the second acquisition unit (71), and acquiring a plurality of image points of the vehicle model in the first image coordinate system;
a mapping unit (73) for obtaining the mapping points of the image points in the second image coordinate system based on the second transformation equation;
a scaling unit (74) for performing the pixel transformation calculation on the overall projection image according to the mapping points and the ratio between the overall projection image and the overall projection image result graph, so that the projected image, once displayed on the display screen, contains projections of moving and/or stationary vehicles.
2. An urban panorama dynamic display method based on large-screen fusion projection, applied to a display screen, a display hall in which the display screen is installed, and a plurality of cameras for monitoring urban traffic flow, characterized by comprising the following steps:
step S1, constructing a three-dimensional model of the city, reduced by a corresponding proportion, by a three-dimensional modeling method;
step S2, splicing the projection images of a plurality of projectors into an overall projection image by an image splicing method, and storing the overall projection image, reduced by a set proportion, as an overall projection image result graph;
step S3, simulating a display model in which the overall projection image result graph is projected on the display screen, and generating a conversion equation between the display model and the three-dimensional model;
the step S3 specifically includes:
step S31, simulating a display model in which the overall projection image result graph is projected on the display screen, constructing a local three-dimensional coordinate system from the display model, and constructing an overall three-dimensional coordinate system from the three-dimensional model;
step S32, extracting a plurality of observation points in the local three-dimensional coordinate system, and acquiring the equivalent points of the observation points in the overall three-dimensional coordinate system;
step S33, establishing a conversion equation between the local three-dimensional coordinate system and the overall three-dimensional coordinate system based on the coordinate values of the multiple groups of observation points and equivalent points;
step S4, acquiring the image shot by the camera, and generating a matching equation between the image shot by the camera and the three-dimensional model;
the step S4 specifically includes:
step S41, acquiring the image shot by the camera, and constructing a first image coordinate system from the acquired image;
step S42, extracting a plurality of reference points in the first image coordinate system, and acquiring the corresponding comparison points in the overall three-dimensional coordinate system;
step S43, obtaining a matching equation between the image shot by the camera and the three-dimensional model by least-squares fitting, based on the coordinate values of the multiple groups of reference points and comparison points;
step S5, generating a first transformation equation between the overall projection image result graph and the display model;
the step S5 specifically includes:
step S51, constructing a second image coordinate system based on the overall projection image result graph, and obtaining the pixel points corresponding to the observation points in the second image coordinate system;
step S52, establishing a first transformation equation between the overall projection image result graph and the display model based on the coordinate values of the multiple groups of observation points and pixel points;
step S6, generating a second transformation equation between the overall projection image result graph and the pictures shot by the cameras, based on the matching equation, the conversion equation and the first transformation equation;
step S7, acquiring and storing the vehicle characteristic points corresponding to the vehicles according to the pictures shot by the cameras, and performing a pixel transformation calculation on the overall projection image based on the second transformation equation and the acquired vehicle characteristic points, so that the projected image, once displayed on the display screen, contains projections of moving and/or stationary vehicles;
the step S7 specifically includes:
step S71, acquiring and storing the vehicle characteristic points corresponding to the vehicles according to the pictures shot by the cameras;
step S72, matching a corresponding vehicle model according to the acquired vehicle characteristic points, and acquiring a plurality of image points of the vehicle model in the first image coordinate system;
step S73, obtaining the mapping points corresponding to the image points in the second image coordinate system based on the second transformation equation;
step S74, performing the pixel transformation calculation on the overall projection image according to the obtained mapping points and the ratio between the overall projection image and the overall projection image result graph, so that the projected image, once displayed on the display screen, contains projections of moving and/or stationary vehicles.
CN201910237297.6A 2019-03-27 2019-03-27 Urban panoramic dynamic display system and method based on large-screen fusion projection Active CN110035275B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910237297.6A CN110035275B (en) 2019-03-27 2019-03-27 Urban panoramic dynamic display system and method based on large-screen fusion projection


Publications (2)

Publication Number Publication Date
CN110035275A CN110035275A (en) 2019-07-19
CN110035275B true CN110035275B (en) 2021-01-15

Family

ID=67236762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910237297.6A Active CN110035275B (en) 2019-03-27 2019-03-27 Urban panoramic dynamic display system and method based on large-screen fusion projection

Country Status (1)

Country Link
CN (1) CN110035275B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110766317B (en) * 2019-10-22 2023-05-12 北京软通绿城科技有限公司 City index data display method, system, electronic equipment and storage medium
CN110853487A (en) * 2019-11-25 2020-02-28 西安工业大学 Digital sand table system for urban design

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1404018A (en) * 2002-09-29 2003-03-19 西安交通大学 Intelligent scene drawing system and drawing & processing method in computer network environment
CN101149843A (en) * 2007-10-10 2008-03-26 深圳先进技术研究院 Succession type automatic generation and real time updating method for digital city
CN103997609A (en) * 2014-06-12 2014-08-20 四川川大智胜软件股份有限公司 Multi-video real-time panoramic fusion splicing method based on CUDA
CN106067160A (en) * 2016-06-21 2016-11-02 江苏亿莱顿智能科技有限公司 Giant-screen merges projecting method
CN106327573A (en) * 2016-08-25 2017-01-11 成都慧途科技有限公司 Real scene three-dimensional modeling method for urban building
CN106899782A (en) * 2015-12-17 2017-06-27 上海酷景信息技术有限公司 A kind of method for realizing interactive panoramic video stream map
CN109272478A (en) * 2018-09-20 2019-01-25 华强方特(深圳)智能技术有限公司 A kind of screen projecting method and device and relevant device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120300020A1 (en) * 2011-05-27 2012-11-29 Qualcomm Incorporated Real-time self-localization from panoramic images



Similar Documents

Publication Publication Date Title
CN103226830B (en) The Auto-matching bearing calibration of video texture projection in three-dimensional virtual reality fusion environment
US8963943B2 (en) Three-dimensional urban modeling apparatus and method
CN108495102B (en) Multi-projector seamless splicing and fusion method based on Unity splicing and fusion system
CN107067447B (en) Integrated video monitoring method for large spatial region
CN202003534U (en) Interactive electronic sand table
JP2006189708A (en) Display device
CN112437276A (en) WebGL-based three-dimensional video fusion method and system
CN101999139A (en) Method for creating and/or updating textures of background object models, video monitoring system for carrying out the method, and computer program
CN110035275B (en) Urban panoramic dynamic display system and method based on large-screen fusion projection
CN104427230A (en) Reality enhancement method and reality enhancement system
CN106899782A (en) A kind of method for realizing interactive panoramic video stream map
CN113347373B (en) Image processing method for making special-effect video in real time through AR space positioning
CN104767975A (en) Method for achieving interactive panoramic video stream map
CN111815786A (en) Information display method, device, equipment and storage medium
CN107241610A (en) A kind of virtual content insertion system and method based on augmented reality
JP3162664B2 (en) Method and apparatus for creating three-dimensional cityscape information
CN108509173A (en) Image shows system and method, storage medium, processor
EP3057316B1 (en) Generation of three-dimensional imagery to supplement existing content
Nakamae et al. Rendering of landscapes for environmental assessment
CN208506731U (en) Image display systems
US11043019B2 (en) Method of displaying a wide-format augmented reality object
CN108564654B (en) Picture entering mode of three-dimensional large scene
Alshawabkeh et al. Automatic multi-image photo texturing of complex 3D scenes
JP7224894B2 (en) Information processing device, information processing method and program
CN110853487A (en) Digital sand table system for urban design

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant