CN117412019A - Aerial imaging and virtual reality dynamic backlight determination method and device and electronic equipment - Google Patents


Info

Publication number
CN117412019A
CN117412019A (application CN202311715130.9A)
Authority
CN
China
Prior art keywords
virtual
exhibit
imaging
projection
audience
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311715130.9A
Other languages
Chinese (zh)
Inventor
秦林波 (Qin Linbo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ouguan Microelectronics Technology Co., Ltd.
Original Assignee
Shenzhen Ouguan Microelectronics Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ouguan Microelectronics Technology Co., Ltd.
Priority claimed from CN202311715130.9A
Publication of CN117412019A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/363 Image reproducers using image projection screens
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/327 Calibration thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3179 Video signal processing therefor
    • H04N 9/3185 Geometric adjustment, e.g. keystone or convergence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/048 Indexing scheme relating to G06F3/048
    • G06F 2203/04806 Zoom, i.e. interaction techniques or interactors for controlling the zooming operation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an aerial imaging and virtual reality dynamic backlight determination method, apparatus and electronic device. Geometric correction of an exhibit's virtual imaging, in particular alignment of its stereoscopic features such as planes and curved surfaces, presents the exhibit more realistically and enhances the fidelity of the virtual exhibit. The method also receives interactive instructions from exhibition audiences and adjusts the projection of the virtual exhibit accordingly, giving audiences a real-time interactive experience while effectively safeguarding the exhibit, since no physical object needs to be displayed.

Description

Aerial imaging and virtual reality dynamic backlight determination method and device and electronic equipment
Technical Field
The present invention relates to the field of data processing technologies, and in particular to an aerial imaging and virtual reality dynamic backlight determination method and apparatus, and an electronic device.
Background
With the development of society and the advance of multimedia and network technologies, multimedia exhibition and display devices are increasingly used for physical exhibitions.
At present, museums and jewellery exhibition halls place very high demands on how exhibits are viewed, but exhibiting the physical items on site is equally demanding: staff must guard the exhibits carefully so that no accident befalls them. If confusion arises on site or among the staff during an exhibition, the exhibits can easily be stolen by criminals, and their safety cannot be guaranteed.
Disclosure of Invention
The invention aims to solve the problem of achieving a full exhibition effect without displaying the physical exhibit, and provides an aerial imaging and virtual reality dynamic backlight determination method and apparatus, and an electronic device.
The invention adopts the following technical means for solving the technical problems:
identifying a virtual photo frame to be input in a preset setting based on pre-acquired virtual imaging of an exhibit;
judging whether the exhibit's virtual imaging can be matched with the virtual photo frame to be input;
if so, calculating the projection difference between the exhibit's virtual imaging and the virtual photo frame according to a preset projection track, geometrically correcting the virtual imaging according to the projection difference, aligning its stereoscopic features, and mapping it into the virtual photo frame to generate a virtual exhibit, wherein the stereoscopic features specifically comprise the exhibit's planes and curved surfaces;
judging whether the virtual exhibit receives an interactive instruction output by an audience member;
if so, reading the interactive content carried in the instruction, adjusting the projection of the virtual exhibit based on that content, capturing the adjusted virtual exhibit image, feeding the image back to the audience member, and presenting preset exhibit information beside it, wherein the projection adjustment specifically comprises changing the position, shape and colour of the virtual exhibit, and the exhibit information specifically comprises an artist introduction, a product description, and historical and cultural background.
Further, the step of performing geometric correction on the virtual imaging of the exhibit according to the projection difference and aligning the stereoscopic features of the virtual imaging of the exhibit includes:
generating a three-dimensional model of the virtual imaging of the exhibit based on preset measurement data;
judging whether the three-dimensional model displays a plurality of point cloud data from a preset angle;
if yes, a preset deformation correction algorithm is applied to acquire the relative position information of the audience and the three-dimensional model, and the projection coordinates of the three-dimensional model are adjusted in a self-adaptive mode according to the relative position information.
Further, the step of performing projection adjustment on the virtual exhibit based on the interactive content and intercepting the virtual exhibit image after projection adjustment includes:
identifying an interaction link of the audience to the virtual exhibit image, wherein the content of the interaction link specifically comprises zooming in and out, adding a label and uploading evaluation;
judging whether the interaction link is recorded;
and if so, acquiring pre-interaction data and post-interaction data of the virtual exhibit images, and transmitting at least two virtual exhibit images to equipment reserved by the audience based on the pre-interaction data and the post-interaction data.
Further, the step of determining whether the virtual imaging of the exhibit can match the virtual photo frame to be input includes:
acquiring a preset number of the virtual imaging of the exhibit;
judging whether the preset number accords with the number array of the virtual photo frame to be input;
if yes, viewing the visual angle content of the exhibit virtual imaging in the virtual photo frame to be input, wherein the visual angle content specifically comprises a transparency degree and a matching size.
Further, the step of determining whether the virtual exhibit receives the interactive instruction output by the audience further includes:
detecting a temporary interaction number acquired by the audience at an exhibition;
judging whether the temporary interaction number is given authority to interact with the virtual exhibit;
if not, performing temporary identity verification on the audience member, comparing the result against the identity information pre-recorded at the exhibition, and restoring the interaction authority attached to the temporary interaction number.
Further, the step of identifying the virtual photo frame to be input in the preset setting includes:
scanning a projection template existing in the scenery;
judging whether the projection template is provided with projection size limitation or not;
If yes, a number array corresponding to the projection template is obtained, and a preset display queue is adjusted for the virtual photo frame to be input based on the number array, wherein the display queue is specifically a queuing sequence for displaying on an exhibition.
Further, before the exhibit's virtual imaging is pre-acquired, the method further comprises:
identifying at least two characterization shapes of the pre-recorded exhibit by using preset scanning equipment, wherein the characterization shapes specifically comprise geometric shapes, curved-surface features and texture information;
judging whether the various characterization shapes match one another;
if yes, generating the exhibit imaging corresponding to the exhibit based on the characterization shapes.
The invention also provides an aerial imaging and virtual reality dynamic backlight determining device, which comprises:
the identification module is used for identifying a virtual photo frame to be input in a preset setting based on the pre-acquired virtual imaging of the exhibit;
the judging module is used for judging whether the virtual imaging of the exhibit can be matched with the virtual photo frame to be input;
the execution module is used for, if so, calculating the projection difference between the exhibit's virtual imaging and the virtual photo frame to be input according to a preset projection track, geometrically correcting the virtual imaging according to the projection difference, aligning its stereoscopic features, and mapping it into the virtual photo frame to generate a virtual exhibit, wherein the stereoscopic features specifically comprise the exhibit's planes and curved surfaces;
the second judging module is used for judging whether the virtual exhibit receives an interactive instruction output by an audience member;
and the second execution module is used for, if so, reading the interactive content carried in the instruction, adjusting the projection of the virtual exhibit based on that content, capturing the adjusted virtual exhibit image, feeding it back to the audience member, and presenting preset exhibit information beside the image, wherein the projection adjustment specifically comprises changing the position, shape and colour of the virtual exhibit, and the exhibit information specifically comprises an artist introduction, a product description, and historical and cultural background.
Further, the execution module further includes:
the generating unit is used for generating a three-dimensional model of the virtual imaging of the exhibit based on preset measurement data;
the judging unit is used for judging whether the three-dimensional model displays a plurality of point cloud data from a preset angle;
and the execution unit is used for, if so, applying a preset deformation correction algorithm to acquire the relative position information of the audience and the three-dimensional model, and adaptively adjusting the model's projection coordinates according to that information.
The invention also provides an electronic device for aerial imaging and virtual reality dynamic backlight determination, comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, implements the steps of the method described above.
The aerial imaging and virtual reality dynamic backlight determination method, apparatus and electronic device provided by the invention have the following beneficial effects:
according to the invention, through geometric correction of virtual imaging of the exhibited article, particularly aligning three-dimensional characteristics of a plane, a curved surface and the like of the exhibited article, the sense of reality of the exhibited article can be better presented, the fidelity of the virtual exhibited article is enhanced, meanwhile, the interactive instruction of a spectator of the exhibited article is received, projection adjustment is carried out on the virtual exhibited article according to the instruction, the experience of real-time interaction between the spectator and the virtual exhibited article is met, the 3D effect and the detail of the exhibited article can be seen without real objects in the whole exhibition process, and the safety of the exhibited article is effectively ensured.
Drawings
To illustrate the technical solutions of the embodiments of the invention or of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
Wherein:
FIG. 1 is a schematic flow chart of an embodiment of a method for aerial imaging and virtual reality dynamic backlight determination according to the present invention;
FIG. 2 is a block diagram illustrating one embodiment of an aerial imaging and virtual reality dynamic backlight determination apparatus of the present invention;
fig. 3 is a schematic diagram illustrating an internal structure of an embodiment of an air imaging and virtual reality dynamic backlight determination electronic device according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, rather than all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Referring to fig. 1, an aerial imaging and virtual reality dynamic backlight determination method according to an embodiment of the invention includes:
S1: identifying a virtual photo frame to be input in a preset setting based on pre-acquired virtual imaging of an exhibit;
S2: judging whether the exhibit's virtual imaging can be matched with the virtual photo frame to be input;
S3: if so, calculating the projection difference between the exhibit's virtual imaging and the virtual photo frame according to a preset projection track, geometrically correcting the virtual imaging according to the projection difference, aligning its stereoscopic features, and mapping it into the virtual photo frame to generate a virtual exhibit, wherein the stereoscopic features specifically comprise the exhibit's planes and curved surfaces;
S4: judging whether the virtual exhibit receives an interactive instruction output by an audience member;
S5: if so, reading the interactive content carried in the instruction, adjusting the projection of the virtual exhibit based on that content, capturing the adjusted virtual exhibit image, feeding the image back to the audience member, and presenting preset exhibit information beside it, wherein the projection adjustment specifically comprises changing the position, shape and colour of the virtual exhibit, and the exhibit information specifically comprises an artist introduction, a product description, and historical and cultural background.
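The S1–S3 flow above can be sketched in plain Python. Everything here is an illustrative assumption rather than the patent's method: the class names, the aspect-ratio rule standing in for the matching test, and the single scale factor standing in for the "projection difference".

```python
from dataclasses import dataclass

@dataclass
class VirtualFrame:          # the "virtual photo frame to be input" (S1)
    width: float
    height: float

@dataclass
class ExhibitImaging:        # the pre-acquired virtual imaging of the exhibit
    width: float
    height: float

def matches(imaging: ExhibitImaging, frame: VirtualFrame, tol: float = 0.1) -> bool:
    """S2: a crude compatibility test -- aspect ratios must agree within tol."""
    return abs(imaging.width / imaging.height - frame.width / frame.height) <= tol

def projection_difference(imaging: ExhibitImaging, frame: VirtualFrame) -> float:
    """S3: the 'projection difference' reduced to one uniform scale factor
    that fits the imaging inside the frame (an assumed simplification)."""
    return min(frame.width / imaging.width, frame.height / imaging.height)

def map_into_frame(imaging: ExhibitImaging, frame: VirtualFrame):
    """S3 continued: geometrically correct (scale) the imaging and map it in,
    returning the generated virtual exhibit's on-frame size, or None when the
    match fails and the imaging must be re-acquired."""
    if not matches(imaging, frame):
        return None
    s = projection_difference(imaging, frame)
    return (imaging.width * s, imaging.height * s)
```

For example, `map_into_frame(ExhibitImaging(400, 300), VirtualFrame(800, 600))` scales the imaging by 2 to fill the frame, while a frame with a very different aspect ratio returns `None`, modelling the "re-acquire" branch.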
In this embodiment, the system identifies, based on pre-acquired virtual imaging data of the exhibit, a virtual photo frame to be input in the preset setting of the current exhibition, then judges whether the virtual imaging can be matched with that frame and executes the corresponding steps. If the system determines that the virtual imaging cannot be matched with the frame, it concludes that the match failed because the frame in the preset setting differs markedly from the actual exhibit's imaging in shape, size or perspective; the virtual imaging data must then be re-acquired to ensure the data are complete and accurate and can fit the size of the frame. If the system determines that the imaging can be matched, it calculates the projection difference between the imaging and the frame according to the preset projection track, geometrically corrects the projected content according to that difference, and, by aligning the stereoscopic features of the imaging during projection, finally maps the imaging into the frame, generating the virtual exhibit inside it. The system then judges whether the virtual exhibit has received an interactive instruction output by an audience member at the exhibition, and executes the corresponding steps. If no instruction is received, either the interaction between the audience and the virtual exhibit is faulty or no audience member attempted to interact. A fault may mean the system failed, or that network delay or other causes prevented the virtual exhibit from responding to the instruction in time; the system state should be checked promptly to keep the system running normally. Alternatively, the audience member may simply not be interested in the virtual exhibit, abandoning the temporarily acquired interaction authority so that nobody interacts when the queue reaches them. If the system determines that the virtual exhibit has received an interactive instruction, it reads the interactive content in the instruction, adjusts the projection of the virtual exhibit accordingly and, once the audience member confirms approval (that is, the adjustment is complete), sends the virtual exhibit image to the interacting audience member. Corresponding exhibit information is presented beside the received image to help the audience member understand the exhibit: a detailed description of the exhibit; an introduction to the artist or designer, including their style, inspiration and other works; and, for exhibits of historical or cultural significance, the relevant historical and cultural background.
It should be noted that a specific example of the geometric correction is as follows:
Suppose that in a virtual art exhibition the image of a virtual sculpture is projected onto a screen, but because the audience's viewing angle changes, the projection deviates somewhat from its intended position on the screen. A depth sensor or camera captures the sculpture's actual projection on the screen, while the computer generates the virtual sculpture image to be projected. During the exhibition, the actual projection and the virtual sculpture's position are monitored; by comparing the images, the deviation between the actual projection and the virtual image is computed, which may involve parameters such as translation, rotation and scaling. The computed geometric transformation parameters are then applied to adjust the virtual sculpture image so that it matches the actual projection as closely as possible. Throughout the show, the sculpture's position is monitored in real time and its image adjusted to the audience's changing angle to keep the projection accurate. This example shows that geometric correction based on projection difference is a dynamic process that can be adjusted in real time during the presentation to keep the virtual exhibit consistent with the actual projection.
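The translation/rotation/scaling estimation in the example above can be sketched as a least-squares 2D affine fit between corresponding points (for instance, markers detected in the actual on-screen projection versus their intended positions). This is one standard way to compute such parameters, assumed here since the patent names no specific algorithm:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of corresponding 2D points, N >= 3.
    The fitted matrix jointly captures translation, rotation, scale and shear.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    # Solve A @ M.T ~= dst in the least-squares sense
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                                      # shape (2, 3)

def apply_affine(M, pts):
    """Apply the 2x3 affine matrix M to (N, 2) points."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

Correcting the virtual image then amounts to warping it with the inverse of the estimated transform, re-fitted whenever the monitored deviation changes.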
In this embodiment, the step S3 of performing geometric correction on the virtual image of the exhibit according to the projection difference and aligning the stereoscopic features of the virtual image of the exhibit includes:
S31: generating a three-dimensional model of the exhibit's virtual imaging based on preset measurement data;
S32: judging whether the three-dimensional model displays a plurality of point cloud data points from a preset angle;
S33: if so, applying a preset deformation correction algorithm to acquire the relative position information of the audience and the three-dimensional model, and adaptively adjusting the model's projection coordinates according to that information.
In this embodiment, the system generates a three-dimensional model of the exhibit's virtual imaging based on preset measurement data, then judges whether the model displays a plurality of point cloud data points when viewed from a preset angle, and executes the corresponding steps. If the model displays no such point cloud data, the exhibit's three-dimensional model data are insufficient to provide detailed point cloud information, or the data may be missing or damaged; the virtual exhibit could then lack detail, be imprecise, or look unrealistic during display. In that case the model must be optimized to reduce file size and improve display efficiency while still providing sufficient point cloud data, and damaged files must be repaired or replaced to ensure correct display. If the model does display a plurality of point cloud data points, the system applies the preset deformation correction algorithm to acquire the relative positions of the exhibition audience and the model, and adaptively adjusts the model's projection coordinates for each relative position, so that every seated audience member at the exhibition can view the three-dimensional model clearly.
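The S32-style sufficiency check might look like the following minimal sketch; the (N, 3) point layout and the 500-point threshold are arbitrary illustrative assumptions, since the patent does not quantify "a plurality of point cloud data":

```python
import numpy as np

def has_sufficient_point_cloud(points, min_count=500):
    """Return True when the three-dimensional model exposes enough point-cloud
    samples to support deformation correction. Points are assumed to be an
    (N, 3) array of xyz coordinates; the threshold is illustrative only."""
    pts = np.asarray(points)
    return pts.ndim == 2 and pts.shape[1] == 3 and len(pts) >= min_count
```

A model failing this check would be routed to the optimization/repair branch described above before any projection adjustment is attempted.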
It should be noted that a specific example scenario of applying the deformation correction algorithm to obtain the relative positions of the exhibition audience and the three-dimensional model is as follows:
Suppose a three-dimensional model of a building is displayed on a virtual exhibition stand. Sensor technologies such as cameras, depth sensors or other position-tracking devices track each audience member's position in the exhibition in real time. The three-dimensional model to be displayed is created or loaded and virtually imaged on the exhibition equipment. A deformation correction algorithm comprising geometric transformations such as translation, rotation and scaling is applied to correct the relative position between each audience member's real-time position and the virtual model. During the exhibition, changes in audience position are monitored in real time and the virtual model is adjusted accordingly, so that audiences at different positions and angles see a consistent exhibit; the building model's deformation parameters are dynamically adjusted to suit each viewing angle. In this way, audiences can interact with the virtual three-dimensional model fairly accurately wherever they move within the exhibition.
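The core of such viewer-dependent adjustment can be reduced to a ray/plane intersection: given the tracked eye position, compute where on the screen plane a virtual 3D point must be drawn so the viewer perceives it at its intended depth. This simple pinhole model is an assumed stand-in for the patent's unspecified deformation correction algorithm:

```python
import numpy as np

def project_for_viewer(point, eye, screen_z=0.0):
    """Return the 2D screen coordinate (on the plane z = screen_z) at which a
    virtual 3D `point` should be drawn for a viewer whose eye is at `eye`.

    Both arguments are xyz triples; the point lies behind the screen and the
    eye in front of it. The drawn position shifts as the viewer moves, which
    is exactly the real-time adjustment described above.
    """
    p = np.asarray(point, float)
    e = np.asarray(eye, float)
    t = (screen_z - e[2]) / (p[2] - e[2])   # where the eye->point ray crosses the screen
    return e[:2] + t * (p[:2] - e[:2])      # 2D coordinate on the screen plane
```

Re-running this per tracked viewer position each frame yields the dynamic, position-aware projection coordinates of step S33.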
In this embodiment, the step S5 of performing projection adjustment on the virtual exhibit based on the interactive content and intercepting the virtual exhibit image after projection adjustment includes:
S51: identifying an interaction link of the audience to the virtual exhibit image, wherein the content of the interaction link specifically comprises zooming in and out, adding a label and uploading evaluation;
S52: judging whether the interaction link is recorded;
S53: if so, acquiring pre-interaction data and post-interaction data of the virtual exhibit images, and transmitting at least two virtual exhibit images to the device designated by the audience member based on those data.
In this embodiment, the system identifies the audience member's interaction link with the virtual exhibit image, then judges whether that link has been recorded in the exhibition data and executes the corresponding steps. If the interaction link was not fully recorded, the audience member's interaction process was not completely captured, which may lose important user behaviour or data and weaken the overall understanding of audience behaviour. The system must then check its logic and decision mechanism to ensure interactions are judged accurately; if a misjudgment occurred, the unrecorded audience member must be re-recorded promptly after the current interaction link ends, so that no audience member misses the optimal exhibition experience. If the interaction link has been recorded in the exhibition data, the system collects the conventional data of the virtual exhibit image before the interaction and the authentication data after it, and transmits both, as the original virtual exhibit image plus the interacted virtual exhibit image with its accompanying data, to the receiving device designated by the audience member. From that device the audience member obtains one image that was never interacted with and another that was, giving them both a souvenir of the visit and the interaction traces left during the process.
It should be noted that when each audience member completes an interaction and obtains a virtual exhibit image, the interaction-link content produced during that interaction is retained accordingly.
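The recording and delivery described in S51–S53 might be organized as below. The event tuple layout, image placeholders, and the `package_for_viewer` name are illustrative assumptions, not structures named in the patent:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class InteractionLog:
    """One audience member's interaction links (e.g. zooming, labelling,
    uploading a review), each stored with before/after image snapshots."""
    events: List[Tuple[str, str, str]] = field(default_factory=list)

    def record(self, action: str, before_img: str, after_img: str) -> None:
        """S51/S52: capture an interaction link so it is recorded."""
        self.events.append((action, before_img, after_img))

    def package_for_viewer(self) -> Optional[List[str]]:
        """S53: return at least two images for the viewer's device -- the
        original (pre-interaction) image and the final interacted one.
        None signals an unrecorded link that needs re-recording."""
        if not self.events:
            return None
        return [self.events[0][1], self.events[-1][2]]
```

The `None` branch corresponds to the "interaction link not recorded" case discussed above, where the system must re-check its log before anything is sent.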
In this embodiment, the step S2 of determining whether the virtual image of the exhibit can match the virtual photo frame to be input includes:
S21: acquiring the preset number of the exhibit's virtual imaging;
S22: judging whether the preset number conforms to the number array of the virtual photo frame to be input;
S23: if so, inspecting the viewing-angle content of the exhibit's virtual imaging in the virtual photo frame to be input, wherein the viewing-angle content specifically comprises the degree of transparency and the matching size.
In this embodiment, the system acquires the preset projection number of the exhibit virtual imaging and then determines whether that projection number conforms to the number sequence of the virtual photo frame to be input, so as to execute the corresponding step. For example, when the system determines that the projection number does not conform to the number sequence of the virtual photo frame to be input, that is, the projection number does not match the expected virtual photo frame number or is abnormal, the virtual display system cannot correctly identify or display the corresponding virtual content; in this case the system carefully checks the numbering scheme to ensure that each virtual photo frame and its projection number are correct, and also checks the system configuration and parameter settings to ensure that the matching rule between virtual photo frames and projection numbers is correct. Conversely, when the system determines that the projection number conforms to the number sequence of the virtual photo frame to be input, the system inspects the visual angle content of the exhibit virtual imaging within the virtual photo frame, ensuring that the position and size of the virtual exhibit are consistent with the actual exhibit or the expected effect, that the virtual exhibit is coordinated with the background of the virtual photo frame so as to create a harmonious visual effect, and finally that the position and accessibility of the interaction elements allow the audience to interact with the virtual exhibit easily.
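The number check and the subsequent inspection of the visual angle content can be sketched as below. The function names, the membership test for "conforming to the number sequence", and the size tolerance are all illustrative assumptions; the patent does not specify the matching rule.

```python
def match_projection_number(projection_number, frame_number_sequence):
    """Assumed matching rule: the projection number conforms when it
    appears in the number sequence of the virtual photo frame to be input."""
    return projection_number in frame_number_sequence

def inspect_view_content(transparency, exhibit_size, frame_size, tol=0.05):
    """Inspect the visual angle content: transparency must be a valid
    fraction and the exhibit size must match the frame size within an
    assumed relative tolerance `tol`."""
    size_ok = abs(exhibit_size - frame_size) / frame_size <= tol
    return 0.0 <= transparency <= 1.0 and size_ok
```

A mismatched number would route the system into the numbering-scheme check described above instead of the inspection step.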
In this embodiment, in step S4 of determining whether the virtual exhibit receives the interaction instruction output by the audience, the method further includes:
s41: detecting a temporary interaction number acquired by the audience at an exhibition;
s42: judging whether the temporary interaction number is given authority to interact with the virtual exhibit;
s43: if not, carrying out temporary identity verification on the audience, comparing the prerecorded identity content of the audience on the exhibition based on the temporary identity verification, and recovering the interaction authority owned by the temporary interaction number.
In this embodiment, the system detects the temporary interaction number acquired by the audience at the exhibition and then determines whether that number has been granted authority to interact with the virtual exhibit during the exhibition, so as to execute the corresponding step. For example, when the system determines that the temporary interaction number has been granted interaction authority, the audience holds temporary interaction rights; the system verifies the number, ensures that it is associated with the corresponding identity or authority in the system, and checks whether the specific authority has been granted so that the audience can perform the expected interaction operations, after which the audience may interact with the virtual exhibit. Conversely, when the system determines that the temporary interaction number has not been granted interaction authority, the system performs temporary identity verification on the exhibition audience and compares the result with the identity content pre-recorded before the exhibition began: if the verification confirms the same identity, the system re-grants the corresponding interaction authority to the audience; if not, the system recovers the interaction authority associated with that temporary interaction number, preventing other audiences who hold no interaction authority from obtaining an interaction opportunity.
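The three outcomes of this authority check can be captured in a small decision function. This is a sketch under stated assumptions: the set/dictionary representations of granted numbers and pre-recorded identities, and the string return values, are all hypothetical.

```python
def check_interaction_authority(number, granted_numbers, identity_db, presented_identity):
    """Decide the handling of a temporary interaction number:
    - 'interact'  : the number already holds interaction authority;
    - 're-grant'  : no authority, but identity verification matches the
                    pre-recorded identity, so authority is granted again;
    - 'recover'   : no authority and no identity match, so the number's
                    interaction authority is recovered (revoked)."""
    if number in granted_numbers:
        return "interact"
    if identity_db.get(number) == presented_identity:
        return "re-grant"
    return "recover"
```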
In this embodiment, in step S1 of identifying a virtual photo frame to be input in a preset setting, the method includes:
s11: scanning a projection template existing in the scenery;
s12: judging whether the projection template is provided with projection size limitation or not;
s13: if yes, a number array corresponding to the projection template is obtained, and a preset display queue is adjusted for the virtual photo frame to be input based on the number array, wherein the display queue is specifically a queuing sequence for displaying on an exhibition.
In this embodiment, the system scans the projection template provided in the scenery and then determines whether the projection template carries a corresponding projection size limit, so as to execute the corresponding step. For example, when the system determines that the projection template carries no projection size limit, it concludes that the template has no corresponding exhibit at the exhibition and that no size has therefore been configured for it to fit a matched exhibit for display. Conversely, when the system determines that the projection template does carry a projection size limit, it acquires the number sequence corresponding to the template, adjusts the preset display queue of the virtual photo frame to be input based on that sequence, and likewise adjusts the preset display order of all exhibits at the exhibition, avoiding the situation in which exhibits are shown out of their preset order and spoil the audience's viewing experience.
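A minimal sketch of adjusting the display queue from the template's number sequence follows. The frame representation and the rule that unnumbered frames keep their relative order at the end are assumptions, since the patent only states that the queue is reordered by the number sequence.

```python
def adjust_display_queue(frames, number_sequence):
    """Reorder the virtual photo frames to be input so that they follow
    the number sequence read from the projection template. Frames whose
    number is absent from the sequence keep their relative order at the
    end of the queue (stable sort)."""
    order = {n: i for i, n in enumerate(number_sequence)}
    return sorted(frames, key=lambda f: order.get(f["number"], len(order)))
```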
In this embodiment, before step S1, for the pre-collected exhibit imaging, the method further includes:
s101: identifying at least two representation shapes of the pre-recorded exhibits by using preset scanning equipment, wherein the representation shapes specifically comprise geometric shapes, curved surface features and texture information;
s102: judging whether various characterization shapes are matched with each other;
s103: if yes, generating an exhibit imaging corresponding to the exhibit based on the representation shape.
In this embodiment, the system identifies the characterization shapes corresponding to the pre-recorded exhibit using a preset scanning device and then determines whether those characterization shapes match one another, so as to execute the corresponding step. For example, when the system determines that the characterization shapes cannot be matched with each other, there is an inconsistency or an unexpected condition in the characterization of the exhibit, which would harm the realism and user experience of the virtual exhibition; in this case the system ensures that all shape descriptions use the same coordinate system (converting them where necessary so that they are aligned in the same space), checks the texture-mapping coordinates and images to ensure that they correspond correctly, and repairs any mapping problems so that the characterization shapes remain consistent. Conversely, when the system determines that the characterization shapes match, it generates the exhibit imaging corresponding to the exhibit based on those shapes and enters the imaging data into the exhibition system, so that the 3D appearance and details of the exhibit can be seen throughout the exhibition without the physical object being present, effectively safeguarding the exhibit.
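One possible shape-consistency check is sketched below. The patent does not define a matching criterion, so the criterion here — same coordinate-frame tag and bounding boxes agreeing within a tolerance — is purely an assumption, as are the dictionary keys.

```python
def shapes_match(shapes, tol=1e-6):
    """Assumed criterion: at least two characterization shapes are
    required, all must declare the same coordinate frame, and their
    bounding boxes must agree per-axis within `tol`."""
    if len(shapes) < 2:
        return False
    ref = shapes[0]
    for s in shapes[1:]:
        if s["frame"] != ref["frame"]:
            return False  # descriptions not aligned in the same space
        if any(abs(a - b) > tol for a, b in zip(s["bbox"], ref["bbox"])):
            return False  # geometric extents disagree
    return True
```

A `False` result corresponds to the repair branch above (re-align coordinate systems, fix texture mapping); `True` allows the exhibit imaging to be generated.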
Referring to fig. 2, an aerial imaging and virtual reality dynamic backlight determination apparatus according to an embodiment of the present invention includes:
the identification module 10 is used for identifying a virtual photo frame to be input in a preset setting based on the pre-acquired virtual imaging of the exhibit;
the judging module 20 is configured to judge whether the virtual imaging of the exhibit can match the virtual photo frame to be input;
the execution module 30 is configured to calculate, if the virtual imaging of the exhibit and the virtual photo frame to be input are able to do a projection difference according to a preset projection track, perform a geometric correction on the virtual imaging of the exhibit according to the projection difference, align a stereoscopic feature of the virtual imaging of the exhibit, map the virtual imaging of the exhibit to the virtual photo frame to be input, and generate a virtual exhibit, where the stereoscopic feature specifically includes an exhibit plane and an exhibit curved surface;
a second judging module 40, configured to judge whether the virtual exhibit receives an interaction instruction output by the audience;
and the second execution module 50 is configured to, if yes, read the interactive content carried in the interaction instruction, perform projection adjustment on the virtual exhibit based on that content, intercept the projection-adjusted virtual exhibit image, feed it back to the audience, and present preset exhibit information beside the virtual exhibit image, where the projection adjustment specifically includes changing the position, shape and color of the virtual exhibit, and the exhibit information specifically includes an artist introduction, a product description and historical-cultural background.
In this embodiment, the identification module 10 identifies the virtual photo frame to be input in the preset setting of the current exhibition based on the pre-acquired exhibit virtual imaging data, and the judgment module 20 then determines whether the exhibit virtual imaging can match the virtual photo frame to be input, so as to execute the corresponding step. For example, when the system determines that the exhibit virtual imaging cannot match the virtual photo frame to be input, it concludes that the match failed because the preset virtual photo frame and the actual exhibit virtual imaging differ significantly in shape, size or perspective; the exhibit virtual imaging data must then be re-acquired to ensure its integrity and accuracy, so that it can match the size of the virtual photo frame to be input. Conversely, when the system determines that the exhibit virtual imaging can match the virtual photo frame, the execution module 30 calculates the projection difference between the exhibit virtual imaging and the virtual photo frame according to the preset projection track, performs geometric correction on the projected content of the exhibit virtual imaging according to that difference, and finally maps the exhibit virtual imaging into the virtual photo frame by aligning its stereoscopic features during projection, generating the virtual exhibit within the virtual photo frame. The second judgment module 40 then determines whether the virtual exhibit has received an interaction instruction output by the audience at the exhibition, so as to execute the corresponding step. For example, when the system determines that the virtual exhibit has not received an interaction instruction, either the interaction between the audience and the virtual exhibit is faulty or the audience has not attempted to interact: a system failure or a network delay may have prevented the virtual exhibit from responding to the audience's instruction in time, in which case the system state is checked promptly to ensure normal operation; alternatively, the audience may simply not be interested in the virtual exhibit and abandon the temporarily acquired interaction authority, so that no one interacts with the virtual exhibit when the queue reaches that audience. Conversely, when the system determines that the virtual exhibit has received an interaction instruction, the second execution module 50 reads the interactive content carried in the instruction, adjusts the projected content of the virtual exhibit accordingly, intercepts the adjusted virtual exhibit image once the audience confirms that the adjustment is complete, sends it to the interacting audience, and presents the corresponding exhibit information beside the received image. This information helps the audience understand the exhibit in depth, including a detailed description of the exhibit, an introduction to the artist or designer covering their style, inspiration and other works, and, for exhibits of historical or cultural significance, the relevant historical and cultural background that situates the exhibit in context.
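The interaction-handling step — reading the interactive content and adjusting the projected exhibit — can be sketched as follows. The instruction schema (`kind`, `factor`, `text`) and the exhibit dictionary are illustrative assumptions; the patent only names zooming, labeling and uploading evaluations as interaction content.

```python
def apply_interaction(exhibit, instruction):
    """Apply one audience interaction instruction to the projected
    virtual exhibit and return the adjusted state, leaving the original
    exhibit untouched (a shallow copy is modified)."""
    adjusted = dict(exhibit)
    kind = instruction["kind"]
    if kind == "zoom":
        adjusted["scale"] = exhibit["scale"] * instruction["factor"]
    elif kind == "label":
        adjusted["labels"] = exhibit.get("labels", []) + [instruction["text"]]
    elif kind == "review":
        adjusted["reviews"] = exhibit.get("reviews", []) + [instruction["text"]]
    return adjusted
```

The adjusted state would then be rendered, intercepted as an image, and sent to the audience alongside the exhibit information described above.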
In this embodiment, the execution module further includes:
the generating unit is used for generating a three-dimensional model of the virtual imaging of the exhibit based on preset measurement data;
the judging unit is used for judging whether the three-dimensional model displays a plurality of point cloud data from a preset angle;
and the execution unit is used for, if yes, applying a preset deformation correction algorithm to acquire the relative position information of the audience and the three-dimensional model, and adaptively adjusting the projection coordinates of the three-dimensional model according to the relative position information.
In this embodiment, the system generates a three-dimensional model of the exhibit virtual imaging based on preset measurement data and then determines whether the three-dimensional model displays a plurality of point cloud data when viewed from a preset angle, so as to execute the corresponding step. For example, when the system determines that the three-dimensional model is displayed without a plurality of point cloud data, the model data of the exhibit is insufficient to provide detailed point cloud information, or the data may be missing or damaged, which can leave the virtual exhibit lacking detail, lacking precision, or looking unreal during display; in this case the three-dimensional model must be optimized to reduce file size and improve display efficiency, sufficient point cloud data must be guaranteed during display, and damaged files must be repaired or replaced to ensure correct display. Conversely, when the system determines that the displayed three-dimensional model does contain a plurality of point cloud data, it applies the preset deformation correction algorithm to acquire the relative position information between each exhibition audience member and the three-dimensional model, and adaptively adjusts the projection coordinates of the model according to each member's relative position, so that the audience in every seat at the exhibition can view the three-dimensional model clearly.
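The idea of adapting projection coordinates to a viewer's relative position can be illustrated with a toy pinhole projection. This is only a sketch: the patent's deformation correction algorithm is unspecified, so the pinhole model, the function name, and the `screen_distance` parameter are all assumptions.

```python
def adjust_projection(point, viewer_pos, screen_distance=1.0):
    """Project a model point onto a screen plane relative to the viewer:
    the point is shifted by the viewer's position and scaled by depth,
    so the model appears geometrically consistent from that seat."""
    x, y, z = point
    vx, vy, vz = viewer_pos
    depth = z - vz
    if depth <= 0:
        raise ValueError("viewer must be in front of the model")
    scale = screen_distance / depth  # pinhole perspective scaling
    return ((x - vx) * scale, (y - vy) * scale)
```

Evaluating this per audience seat yields seat-specific projection coordinates, which is the adaptive adjustment the embodiment describes.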
In this embodiment, the second execution module further includes:
the identification unit is used for identifying the interaction link of the audience to the virtual exhibit image, wherein the content of the interaction link specifically comprises enlarging and reducing, adding a label and uploading evaluation;
the second judging unit is used for judging whether the interaction link is recorded;
and the second execution unit is used for, if yes, acquiring pre-interaction data and post-interaction data of the virtual exhibit image, and transmitting at least two virtual exhibit images to the device reserved by the audience based on the pre-interaction data and the post-interaction data.
In this embodiment, the system identifies the interaction link of the audience with the virtual exhibit image and then determines whether that interaction link has been recorded in the exhibition data, so as to execute the corresponding step. For example, when the system determines that the interaction link of the audience has not been completely recorded by the exhibition system, this indicates that the interaction process was not fully captured or recorded, which may cause important user behaviors or data to be missed and affect the overall understanding of audience behavior; in this case the system checks its logic and decision mechanism to ensure the accuracy of its judgment on the interaction, and if a misjudgment is found, it promptly re-records the unrecorded audience after the current interaction link ends, so that the audience is not deprived of the optimal exhibition experience. Conversely, when the system determines that the interaction link has been recorded in the exhibition data, it collects the conventional data of the virtual exhibit image from before the interaction and the authentication data from after the interaction, and transmits them, as the incidental data of the original virtual exhibit image and of the interacted virtual exhibit image respectively, to the receiving device designated by the audience; from that device the audience obtains one virtual exhibit image that has not been interacted with and another that has, so that after viewing the exhibition the audience retains both a summary of the visit and the interaction traces left during the interaction process.
In this embodiment, the judging module further includes:
the acquisition unit is used for acquiring a preset number of the virtual imaging of the exhibit;
the third judging unit is used for judging whether the preset number accords with the number series of the virtual photo frame to be input;
and the third execution unit is used for, if yes, inspecting the visual angle content of the exhibit virtual imaging in the virtual photo frame to be input, wherein the visual angle content specifically includes a transparency degree and a matching size.
In this embodiment, the system acquires the preset projection number of the exhibit virtual imaging and then determines whether that projection number conforms to the number sequence of the virtual photo frame to be input, so as to execute the corresponding step. For example, when the system determines that the projection number does not conform to the number sequence of the virtual photo frame to be input, that is, the projection number does not match the expected virtual photo frame number or is abnormal, the virtual display system cannot correctly identify or display the corresponding virtual content; in this case the system carefully checks the numbering scheme to ensure that each virtual photo frame and its projection number are correct, and also checks the system configuration and parameter settings to ensure that the matching rule between virtual photo frames and projection numbers is correct. Conversely, when the system determines that the projection number conforms to the number sequence of the virtual photo frame to be input, the system inspects the visual angle content of the exhibit virtual imaging within the virtual photo frame, ensuring that the position and size of the virtual exhibit are consistent with the actual exhibit or the expected effect, that the virtual exhibit is coordinated with the background of the virtual photo frame so as to create a harmonious visual effect, and finally that the position and accessibility of the interaction elements allow the audience to interact with the virtual exhibit easily.
In this embodiment, the second judging module further includes:
the detection unit is used for detecting temporary interaction numbers acquired by the audience at the exhibition;
a fourth judging unit for judging whether the temporary interaction number is given authority to interact with the virtual exhibit;
and the fourth execution unit is used for, if not, performing temporary identity verification on the audience, comparing the pre-recorded identity content of the audience at the exhibition based on the temporary identity verification, and recovering the interaction authority owned by the temporary interaction number.
In this embodiment, the system detects the temporary interaction number acquired by the audience at the exhibition and then determines whether that number has been granted authority to interact with the virtual exhibit during the exhibition, so as to execute the corresponding step. For example, when the system determines that the temporary interaction number has been granted interaction authority, the audience holds temporary interaction rights; the system verifies the number, ensures that it is associated with the corresponding identity or authority in the system, and checks whether the specific authority has been granted so that the audience can perform the expected interaction operations, after which the audience may interact with the virtual exhibit. Conversely, when the system determines that the temporary interaction number has not been granted interaction authority, the system performs temporary identity verification on the exhibition audience and compares the result with the identity content pre-recorded before the exhibition began: if the verification confirms the same identity, the system re-grants the corresponding interaction authority to the audience; if not, the system recovers the interaction authority associated with that temporary interaction number, preventing other audiences who hold no interaction authority from obtaining an interaction opportunity.
In this embodiment, the identification module further includes:
the scanning unit is used for scanning the projection templates existing in the scenery;
a fifth judging unit for judging whether the projection template is provided with projection size limitation;
and the fifth execution unit is used for, if yes, acquiring the number sequence corresponding to the projection template, and adjusting a preset display queue for the virtual photo frame to be input based on the number sequence, wherein the display queue is specifically the queuing order for display at the exhibition.
In this embodiment, the system scans the projection template provided in the scenery and then determines whether the projection template carries a corresponding projection size limit, so as to execute the corresponding step. For example, when the system determines that the projection template carries no projection size limit, it concludes that the template has no corresponding exhibit at the exhibition and that no size has therefore been configured for it to fit a matched exhibit for display. Conversely, when the system determines that the projection template does carry a projection size limit, it acquires the number sequence corresponding to the template, adjusts the preset display queue of the virtual photo frame to be input based on that sequence, and likewise adjusts the preset display order of all exhibits at the exhibition, avoiding the situation in which exhibits are shown out of their preset order and spoil the audience's viewing experience.
In this embodiment, further comprising:
the second identification module is used for identifying at least two representation shapes of the pre-recorded exhibits by applying preset scanning equipment, wherein the representation shapes specifically comprise geometric shapes, curved surface features and texture information;
the third judging module is used for judging whether various characterization shapes are matched with each other;
and the third execution module is used for, if yes, generating an exhibit imaging corresponding to the exhibit based on the characterization shapes.
In this embodiment, the system identifies the characterization shapes corresponding to the pre-recorded exhibit using a preset scanning device and then determines whether those characterization shapes match one another, so as to execute the corresponding step. For example, when the system determines that the characterization shapes cannot be matched with each other, there is an inconsistency or an unexpected condition in the characterization of the exhibit, which would harm the realism and user experience of the virtual exhibition; in this case the system ensures that all shape descriptions use the same coordinate system (converting them where necessary so that they are aligned in the same space), checks the texture-mapping coordinates and images to ensure that they correspond correctly, and repairs any mapping problems so that the characterization shapes remain consistent. Conversely, when the system determines that the characterization shapes match, it generates the exhibit imaging corresponding to the exhibit based on those shapes and enters the imaging data into the exhibition system, so that the 3D appearance and details of the exhibit can be seen throughout the exhibition without the physical object being present, effectively safeguarding the exhibit.
Fig. 3 shows an internal structural diagram of an electronic device in one embodiment. The electronic device may specifically be a terminal or a server. As shown in fig. 3, the electronic device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium of the electronic device stores an operating system and may also store a computer program which, when executed by the processor, causes the processor to implement the above-described aerial imaging and virtual reality dynamic backlight determination method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the above-described aerial imaging and virtual reality dynamic backlight determination method. It will be appreciated by those skilled in the art that the structure shown in fig. 3 is merely a block diagram of a portion of the structure associated with the present application and does not constitute a limitation of the apparatus to which the present application is applied; a particular apparatus may include more or fewer components than those shown in the drawings, may combine certain components, or may have a different arrangement of components.
In one embodiment, an electronic device is provided that includes a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, causes the processor to perform the steps of the above-described aerial imaging and virtual reality dynamic backlight determination method.
It can be appreciated that the above method and device for aerial imaging and virtual reality dynamic backlight determination and electronic device belong to a general inventive concept, and the embodiments are mutually applicable.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (10)

1. The method for determining the dynamic backlight of the aerial imaging and the virtual reality is characterized by comprising the following steps of:
identifying a virtual photo frame to be input in a preset setting based on pre-acquired exhibit virtual imaging;
judging whether the virtual imaging of the exhibit can be matched with the virtual photo frame to be input;
if yes, calculating a projection difference between the exhibit virtual imaging and the virtual photo frame to be input according to a preset projection track, carrying out geometric correction on the exhibit virtual imaging according to the projection difference, aligning the three-dimensional characteristics of the exhibit virtual imaging, mapping the exhibit virtual imaging into the virtual photo frame to be input, and generating a virtual exhibit, wherein the three-dimensional characteristics specifically comprise an exhibit plane and an exhibit curved surface;
judging whether the virtual exhibit receives an interaction instruction output by the audience or not;
if yes, the interactive content existing in the interactive instruction is read, projection adjustment is carried out on the virtual exhibit based on the interactive content, the virtual exhibit image after the projection adjustment is intercepted, the virtual exhibit image is fed back to the audience, and preset exhibit information is presented beside the virtual exhibit image, wherein the projection adjustment specifically comprises changing the position, the shape and the color of the virtual exhibit, and the exhibit information specifically comprises an artistic introduction, product description and historical culture.
2. The method of claim 1, wherein the step of performing geometric correction on the exhibit virtual imaging according to the projection difference to align the stereoscopic features of the exhibit virtual imaging comprises:
generating a three-dimensional model of the exhibit virtual imaging based on preset measurement data;
judging whether the three-dimensional model displays a plurality of point cloud data from a preset angle;
if so, applying a preset deformation correction algorithm to acquire the relative position information of the audience and the three-dimensional model, and adaptively adjusting the projection coordinates of the three-dimensional model according to the relative position information.
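The viewer-dependent adjustment in claim 2 can be illustrated, in simplified form, as re-projecting each model point onto the screen plane along the line of sight from the viewer's position. This is only a sketch of one possible "deformation correction": the function name and the assumption of a flat screen at `z = 0` are illustrative, not part of the claim.

```python
def adjust_projection(point3d, viewer_pos, screen_z=0.0):
    """Project a 3-D model point onto the screen plane along the viewer's
    line of sight (simple perspective re-projection; hypothetical setup)."""
    vx, vy, vz = viewer_pos
    px, py, pz = point3d
    # Parameter t where the viewer->point ray crosses the plane z = screen_z.
    t = (screen_z - vz) / (pz - vz)
    return (vx + t * (px - vx), vy + t * (py - vy))
```

A viewer at (0, 0, -2) looking at a model point at (1, 0, 2) sees it projected halfway across the screen plane, so the projection coordinates shift as the viewer moves.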
3. The method of claim 1, wherein the step of performing projection adjustment on the virtual exhibit based on the interactive content and capturing the projection-adjusted virtual exhibit image comprises:
identifying the audience's interaction steps on the virtual exhibit image, wherein the content of the interaction steps specifically comprises zooming in and out, adding labels and uploading evaluations;
judging whether the interaction steps have been recorded;
if so, acquiring pre-interaction data and post-interaction data of the virtual exhibit image, and transmitting at least two virtual exhibit images to a device registered by the audience based on the pre-interaction data and the post-interaction data.
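The pre-/post-interaction recording in claim 3 amounts to snapshotting the exhibit state around each interaction. The sketch below is a minimal stand-in under that assumption; the class name, the state dictionary, and the two supported actions ("zoom", "label") are illustrative only.

```python
import copy

class InteractionRecorder:
    """Records before/after snapshots of a virtual exhibit's state around an
    interaction step (zoom, label, ...); names are illustrative only."""
    def __init__(self, exhibit_state):
        self.state = exhibit_state
        self.log = []

    def apply(self, action, **params):
        before = copy.deepcopy(self.state)
        if action == "zoom":
            self.state["scale"] *= params["factor"]
        elif action == "label":
            self.state.setdefault("labels", []).append(params["text"])
        after = copy.deepcopy(self.state)
        # Both snapshots could then be rendered and sent to the audience's device.
        self.log.append({"action": action, "before": before, "after": after})
        return before, after
```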
4. The method of claim 1, wherein the step of judging whether the exhibit virtual imaging can be matched with the virtual photo frame to be input comprises:
acquiring a preset number of the exhibit virtual imaging;
judging whether the preset number conforms to the number array of the virtual photo frame to be input;
if so, viewing the viewing-angle content of the exhibit virtual imaging in the virtual photo frame to be input, wherein the viewing-angle content specifically comprises a transparency degree and a matching size.
5. The method of claim 1, wherein the step of judging whether the virtual exhibit receives the interaction instruction output by the audience further comprises:
detecting a temporary interaction number acquired by the audience at an exhibition;
judging whether the temporary interaction number has been granted the authority to interact with the virtual exhibit;
if not, performing temporary identity verification on the audience, comparing it against the identity content pre-recorded by the audience at the exhibition, and restoring the interaction authority owned by the temporary interaction number.
6. The method for determining an aerial imaging and virtual reality dynamic backlight according to claim 1, wherein the step of identifying a virtual photo frame to be input in a preset scene comprises:
scanning a projection template existing in the scene;
judging whether the projection template carries a projection size limitation;
if so, acquiring a number array corresponding to the projection template, and adjusting a preset display queue for the virtual photo frame to be input based on the number array, wherein the display queue is specifically a queuing order for display at an exhibition.
7. The method for determining an aerial imaging and virtual reality dynamic backlight according to claim 1, further comprising, before the step of identifying a virtual photo frame to be input in a preset scene based on the pre-acquired exhibit virtual imaging:
identifying at least two characterization shapes of a pre-recorded exhibit by using a preset scanning device, wherein the characterization shapes specifically comprise geometric shapes, curved surface features and texture information;
judging whether the various characterization shapes match each other;
if so, generating an exhibit imaging corresponding to the exhibit based on the characterization shapes.
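As a non-limiting illustration of the shape-matching check in claim 7, the sketch below compares bounding-box dimensions of several scanned characterizations within a relative tolerance. This is a deliberate simplification: the claim's matching of geometric shapes, curved surface features and texture information would require richer descriptors, and the function name and tolerance value are hypothetical.

```python
def shapes_match(shapes, tol=0.05):
    """Check whether several scanned characterizations of the same exhibit
    agree, comparing (width, height) bounds within a relative tolerance.
    A simplified stand-in for geometric/curvature/texture matching."""
    ref_w, ref_h = shapes[0]
    for w, h in shapes[1:]:
        if abs(w - ref_w) / ref_w > tol or abs(h - ref_h) / ref_h > tol:
            return False
    return True
```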
8. An aerial imaging and virtual reality dynamic backlight determination device, characterized by comprising:
an identification module for identifying a virtual photo frame to be input in a preset scene based on a pre-acquired exhibit virtual imaging;
a judging module for judging whether the exhibit virtual imaging can be matched with the virtual photo frame to be input;
an execution module for, if so, calculating a projection difference between the exhibit virtual imaging and the virtual photo frame to be input according to a preset projection track, performing geometric correction on the exhibit virtual imaging according to the projection difference to align the stereoscopic features of the exhibit virtual imaging, mapping the exhibit virtual imaging into the virtual photo frame to be input, and generating a virtual exhibit, wherein the stereoscopic features specifically comprise an exhibit plane and an exhibit curved surface;
a second judging module for judging whether the virtual exhibit receives an interaction instruction output by an audience;
a second execution module for reading the interactive content contained in the interaction instruction, performing projection adjustment on the virtual exhibit based on the interactive content, capturing the projection-adjusted virtual exhibit image, feeding the virtual exhibit image back to the audience, and simultaneously presenting preset exhibit information beside the virtual exhibit image, wherein the projection adjustment specifically comprises changing the position, shape and color of the virtual exhibit, and the exhibit information specifically comprises an art introduction, a product description and a historical-cultural background.
9. The aerial imaging and virtual reality dynamic backlight determination device of claim 8, wherein the execution module further comprises:
a generating unit for generating a three-dimensional model of the exhibit virtual imaging based on preset measurement data;
a judging unit for judging whether the three-dimensional model displays a plurality of point cloud data from a preset angle;
an execution unit for, if so, applying a preset deformation correction algorithm to acquire the relative position information of the audience and the three-dimensional model, and adaptively adjusting the projection coordinates of the three-dimensional model according to the relative position information.
10. An aerial imaging and virtual reality dynamic backlight determination electronic device, comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the aerial imaging and virtual reality dynamic backlight determination method of any one of claims 1 to 7.
CN202311715130.9A 2023-12-14 2023-12-14 Aerial imaging and virtual reality dynamic backlight determination method and device and electronic equipment Pending CN117412019A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311715130.9A CN117412019A (en) 2023-12-14 2023-12-14 Aerial imaging and virtual reality dynamic backlight determination method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN117412019A 2024-01-16

Family

ID=89489397

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311715130.9A Pending CN117412019A (en) 2023-12-14 2023-12-14 Aerial imaging and virtual reality dynamic backlight determination method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN117412019A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008113176A (en) * 2006-10-30 2008-05-15 Hitachi Ltd Adjustment system of video display system
US20150213584A1 (en) * 2014-01-24 2015-07-30 Ricoh Company, Ltd. Projection system, image processing apparatus, and correction method
CN104869377A (en) * 2012-03-14 2015-08-26 海信集团有限公司 Method for correcting colors of projected images and projector
CN113934297A (en) * 2021-10-13 2022-01-14 西交利物浦大学 Interaction method and device based on augmented reality, electronic equipment and medium
CN115222929A (en) * 2022-06-25 2022-10-21 深圳市博铭维***工程有限公司 VR virtual exhibition room construction method and device
WO2023020622A1 (en) * 2021-08-20 2023-02-23 上海商汤智能科技有限公司 Display method and apparatus, electronic device, computer-readable storage medium, computer program, and computer program product
CN115981516A (en) * 2023-03-17 2023-04-18 北京点意空间展览展示有限公司 Interaction method based on network virtual exhibition hall
WO2023087947A1 (en) * 2021-11-16 2023-05-25 海信视像科技股份有限公司 Projection device and correction method


Similar Documents

Publication Publication Date Title
CN104346834B (en) Message processing device and position designation method
JP6264834B2 (en) Guide method, information processing apparatus, and guide program
US8988343B2 (en) Method of automatically forming one three-dimensional space with multiple screens
US9519968B2 (en) Calibrating visual sensors using homography operators
US20160125638A1 (en) Automated Texturing Mapping and Animation from Images
US20110157155A1 (en) Layer management system for choreographing stereoscopic depth
US20130258062A1 (en) Method and apparatus for generating 3d stereoscopic image
CN112907751B (en) Virtual decoration method, system, equipment and medium based on mixed reality
US20180357819A1 (en) Method for generating a set of annotated images
CN108369749B (en) Method for controlling an apparatus for creating an augmented reality environment
US8571303B2 (en) Stereo matching processing system, stereo matching processing method and recording medium
US10169891B2 (en) Producing three-dimensional representation based on images of a person
US9589385B1 (en) Method of annotation across different locations
US20140375685A1 (en) Information processing apparatus, and determination method
US20230041573A1 (en) Image processing method and apparatus, computer device and storage medium
CN104808956A (en) System and method for controlling a display
CN109584377B (en) Method and device for presenting augmented reality content
US20090189888A1 (en) Procedure and Device for the Texturizing of an Object of a Virtual Three-Dimensional Geometric Model
CN113112612B (en) Positioning method and system for dynamic superposition of real person and mixed reality
CN113689578A (en) Human body data set generation method and device
CN104574355B (en) Calibration system of stereo camera and calibration method of stereo camera
WO2010061860A1 (en) Stereo matching process system, stereo matching process method, and recording medium
CN110796709A (en) Method and device for acquiring size of frame number, computer equipment and storage medium
CN112581632A (en) House source data processing method and device
CN113807451A (en) Panoramic image feature point matching model training method and device and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination