CN118037999B - Interactive scene construction method and system based on VR thinking teaching - Google Patents

Interactive scene construction method and system based on VR thinking teaching Download PDF

Info

Publication number
CN118037999B
CN118037999B CN202410430108.8A CN202410430108A CN118037999B CN 118037999 B CN118037999 B CN 118037999B CN 202410430108 A CN202410430108 A CN 202410430108A CN 118037999 B CN118037999 B CN 118037999B
Authority
CN
China
Prior art keywords
scene
images
sequence
virtual
difference value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410430108.8A
Other languages
Chinese (zh)
Other versions
CN118037999A (en
Inventor
田龙
吴雷
武亚苹
虞勇勇
崔璐
昌磊
茆昌盛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Time New Media Publishing Co ltd
Original Assignee
Time New Media Publishing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Time New Media Publishing Co ltd filed Critical Time New Media Publishing Co ltd
Priority to CN202410430108.8A priority Critical patent/CN118037999B/en
Publication of CN118037999A publication Critical patent/CN118037999A/en
Application granted granted Critical
Publication of CN118037999B publication Critical patent/CN118037999B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical field of virtual reality scene construction, in particular to an interactive scene construction method based on VR thinking teaching, which comprises the steps of acquiring a plurality of sequence images at a target machine position by using a rotary camera to form a sequence image set; matching sequence images with scene overlapping areas in the sequence image set based on a similarity matching algorithm, and performing seamless splicing processing on the matched sequence images based on the positions of the scene overlapping areas to generate a panoramic image of a target machine position; further, a panoramic view of all machine positions is obtained, and a panoramic view set for constructing a virtual thinking teaching scene is obtained, so that the scene display quality of the constructed virtual thinking teaching scene is better; the invention also provides a system for realizing the interactive scene construction method based on VR thinking teaching. According to the interactive scene construction method and system, the constructed virtual scene of the thinking teaching is better in scene display quality, and the overall construction period of the virtual teaching scene is shortened.

Description

Interactive scene construction method and system based on VR thinking teaching
Technical Field
The invention belongs to the technical field of virtual reality scene construction, and particularly relates to an interactive scene construction method and system based on VR thinking teaching.
Background
Traditional thinking and political affairs class lacks informatization, the teaching form of variety. Due to the arrival of the information age, a great deal of impact of various thinking tides has great influence on ideological ideas and life modes of people, various media and cultural works impact on the traditional value ideas of society, so that ideological and political education modes with pertinence and insufficient originality are relatively traditional, and the method is difficult to be suitable for modern distinctive ideological and political education.
With the social development and the technical progress, the survival mode of the digital technology is changed greatly, the revolutionary change of the presentation and the transmission of the knowledge is generated, the huge impact is brought to the traditional teaching mode, and the conditions and the opportunities for the innovation of the teaching mode method are provided; virtual reality technology is the leading edge technology of the digital age, and ideological and political theory class virtual simulation experience teaching is a new form of integrating thinking class teaching activities by utilizing information technology.
The traditional virtual reality scene construction generally adopts a geometric modeling method, for example, modeling software such as AutoCAD (computer aided design), 3D Studio and the like is used for carrying out geometric modeling on a scene, rendering output is further carried out based on data such as materials, illumination, textures and the like of the geometric body, and interaction and roaming of the virtual scene are realized by adding event response when user interaction is carried out, however, the greatest disadvantage of the scene modeling method is that the modeling of the geometric body is very complicated, a scene with good completion degree needs a plurality of entities, the modeling is very troublesome, the hardware requirement is higher, and the cost and speed of the scene modeling are high.
Disclosure of Invention
The embodiment of the invention aims to provide an interactive scene construction method and system based on VR thinking teaching, which aims to solve the technical problems in the background technology.
In order to achieve the above purpose, the present invention provides the following technical solutions.
In a first aspect, in one embodiment of the present invention, there is provided an interactive scene construction method based on VR ideological education, the interactive scene construction method including the steps of:
Step S101: acquiring a plurality of sequence images at a target machine position by using a rotary camera to form a sequence image set;
Step S102: selecting adjacent first images and second images from the sequence image set, selecting a rectangular area in the overlapping part area of the second images as a matching area, selecting a rectangular area with the same size in the overlapping part area of the first images as a target area, comparing pixel values in the two areas, calculating a difference value reflecting the scene difference degree in the matching area and the target area by using a similarity matching algorithm, moving the position of the target area in the first images, calculating a new difference value, comparing the new difference value with the difference value of the previous target area, finding the smaller value in the two difference values, repeating the steps, searching the whole images to obtain the minimum difference value, and determining the scene overlapping area existing in the two images based on the minimum difference value; seamless splicing processing is carried out on the matched sequence images based on the positions of the scene overlapping areas, and a panoramic view of the target machine position is generated;
step S103: and obtaining a panoramic view of all the machine positions, obtaining a panoramic view set, and constructing a virtual thinking teaching scene by using the panoramic view set.
In some embodiments of the present invention, in step S101, the method further includes a step of preprocessing the sequence image, where the preprocessing includes:
Performing depth mapping and smoothing on the sequence image by using DIBR technology to realize noise reduction and image cavity reduction on the sequence image; wherein the depth mapping operation is completed by using linear and inverse proportion mapping, and the linear mapping Expressed as:
(1);
In the formula (1), the components are as follows, 、/>Representing near and far cuts, respectively,/>, at projective transformation,/>For restoring the actual depth value;
Using linear mapping Performing depth mapping of the sequence image according to the inverse proportion mapping relation of the sequence image:
(2);
By means of binary Gaussian functions Performing depth smoothing on the sequence image, and enabling the sequence image to maintain symmetry after rotation, and performing binary Gaussian function/>Expressed as:
(3);
in the formula (3), the amino acid sequence of the compound, Pixel points representing sequential images,/>Is a Gaussian distribution parameter; the image edge gaussian threshold is denoted by e;
Setting and pixel point Corresponding depth value use/>Expressed by a binary Gaussian function/>For pixel dot/>After filtering and smoothing, the corresponding depth value/>, is obtained
(4);
In the formula (4), the amino acid sequence of the compound,For representing the number of smooth windows; data points within window Length use/>Description; /(I)、/>Representing a constant;
The depth mapping and smoothing operation of the image are realized by using the formula (2) and the formula (4), so that the noise and the number of holes in the image can be effectively reduced.
In some embodiments of the present invention, the step of determining the scene overlap region present in the two images based on the minimum disparity value comprises:
If the minimum difference value is smaller than the threshold value, the target area and the matching area of the minimum difference value of the two images are considered to have the same scene, namely, the two images have a scene overlapping area; otherwise, the two images do not have a scene overlap region.
In some embodiments of the present invention, the step of calculating a difference value reflecting a degree of scene difference in the matching region and the target region using a similarity matching algorithm includes:
calculating the degree of similarity of the matching region T and the target region S Wherein:
(5);
normalizing the formula (5) to be a similarity matching function, and calculating a difference value
(6);
(7);
Wherein,When the matching region T and the target region S are more similar,/>The larger the value of/>The smaller the value of (2), the difference value/>When the difference value is smaller than the threshold value, the target area and the matching area of the minimum difference value of the two images are considered to have the same scene, namely, the two images have a scene overlapping area; otherwise, the two images do not have a scene overlap region.
In some embodiments of the present invention, the step of performing seamless stitching on the matched sequence images based on the location of the scene overlap region includes:
Extracting pixel blocks of a matched first image in an overlapping region of a scene And pixel block/>, of the second image
Scalar functions corresponding to the two pixel blocks are respectivelyAnd/>
(8);
Wherein,Representing a gradient operator; /(I)Representing pixel block boundaries;
Calculate a first pixel block Second pixel block/>Spliced pixel block/>
(9);
Wherein,The representation of the laplace operator is provided,
And a differential solution definition formula is adopted to remove splicing fusion traces, so that seamless splicing treatment is realized:
(10);
Wherein, Representing the gradient field; n represents the pixel block/>Is the number of (3); /(I)Representing a guiding field which enables the pixel blocks to be the same as the scalar function changing field; z denotes the ordinal number of the pixel block to be stitched.
In some embodiments of the present invention, the method for constructing an interaction scenario based on VR ideological education further includes:
step S201: importing the panoramic atlas into Untiy D to create a virtual roaming scene for thinking and administrative teaching;
Step S202: acquiring the travelling speed data of a user in real time by using a speed measuring device, inputting the travelling speed data into a virtual roaming scene, and fitting the virtual roaming scene by using a curve calculation method to obtain a roaming path;
Step S203: and driving the viewpoint of the virtual roaming scene to move along the roaming path by using the travelling speed data, thereby realizing virtual roaming when the user travels.
In some embodiments of the present invention, the step of obtaining the roaming path by fitting the virtual roaming scene through a curve calculation method includes: and (3) calculating key interpolation points in real time by using an original roaming road control point in the virtual roaming scene through a Catmull-Rom curve interpolation algorithm, and fitting after connecting the key interpolation points to obtain a roaming path in the virtual roaming scene.
In some embodiments of the present invention, the Catmull-Rom curve algorithm performs curve interpolation processing on all the obtained path points as follows:
(11);
In the formula (11), the amino acid sequence of the compound, Representing a Path Point/>And Path Point/>Interpolation points of/>、/>、/>、/>Are all sequentially adjacent path points,/>Is the sequence number of the path point,/>Representing a curve tension parameter; /(I)A ratio parameter representing curve interpolation;
And (3) obtaining a plurality of interpolation points based on the formula (11), sequentially connecting the interpolation points in a broken line mode, converting the motion of the curve path into a linear motion path of the interpolation points, and obtaining a final roaming path.
In a second aspect, in another embodiment of the present invention, there is also provided an interactive scenario construction system based on VR ideological education, the system including:
The rotary camera is used for acquiring a plurality of sequence images at the target machine position by utilizing the rotary camera to form a sequence image set;
The image processor is used for matching the sequence images with the scene overlapping areas based on a similarity matching algorithm, performing seamless splicing processing on the matched sequence images based on the positions of the scene overlapping areas, and generating a panoramic image of the target machine position;
And the VR display equipment is used for acquiring the panoramic images of all the machine positions, obtaining a panoramic image set, and constructing a thinking teaching virtual scene by using the panoramic image set.
In a third aspect, in yet another embodiment of the present invention, there is also provided a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the interactive scenario construction method based on VR ideological education as provided in the first aspect when executing the computer program.
In summary, the interactive scene construction method and system based on VR thinking teaching have the beneficial effects that:
Firstly, the interactive scene construction method of the invention utilizes a rotary camera to acquire a plurality of sequence images at a target machine position to form a sequence image set; matching sequence images with scene overlapping areas in the sequence image set based on a similarity matching algorithm, and performing seamless splicing processing on the matched sequence images based on the positions of the scene overlapping areas to generate a panoramic image of a target machine position; further, a panoramic view of all machine positions is obtained, and a panoramic view set for constructing a virtual thinking teaching scene is obtained, so that the scene display quality of the constructed virtual thinking teaching scene is better;
Secondly, the sequence images with scene overlapping areas in the sequence image set are matched by utilizing a similarity matching algorithm, so that the required images can be accurately and rapidly extracted from the shot image materials, the acquisition speed of the panoramic image is greatly improved, and the overall construction period of the teaching virtual scene is shortened;
thirdly, in the process of splicing the panoramic pictures, the scene overlapping area can be quickly found, seamless splicing and fusion of the panoramic pictures are completed according to the found optimal splicing position, the quality of the panoramic pictures is ensured, and virtual scenes are not distorted and are not split in the process of realizing display;
Fourth, the invention can carry out immersive browsing in the virtual scene along the viewpoint of the roaming path according to the travelling speed of the user, and the students can enter the online virtual roaming scene through 5G or high-speed Internet no matter in classrooms or remote places, so that the classrooms can pass through any space and time at any time, and immersive interactive teaching is completed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
FIG. 1 is a flow chart of an implementation of an interactive scene construction method based on VR ideological teaching;
FIG. 2 is a sub-flowchart of an interactive scenario construction method based on VR ideological education in accordance with the present invention;
FIG. 3 is another sub-flowchart of the VR ideological teaching-based interactive scenario construction method of the present invention;
FIG. 4 is a flow chart of the present invention for implementing interactive scene roaming path planning;
FIG. 5 is a block diagram of an interactive scenario construction system based on VR ideological teaching of the present invention;
fig. 6 is a block diagram of a computer device according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The flow diagrams depicted in the figures are merely illustrative and not necessarily all of the elements and operations/steps are included or performed in the order described. For example, some operations/steps may be further divided, combined, or partially combined, so that the order of actual execution may be changed according to actual situations.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
At present, a traditional virtual reality scene construction generally adopts a geometric modeling method, for example, modeling software such as AutoCAD (computer aided design), 3D Studio and the like is used for carrying out geometric modeling on a scene, rendering output is further carried out based on data such as materials, illumination, textures and the like of geometric bodies, and when user interaction is carried out, event response is added, so that operations such as movement, rotation, visual angle transformation and the like are realized in a virtual reality scene model, and interaction and roaming of a virtual scene are realized.
However, the biggest disadvantage of the above-mentioned scene modeling method is that the modeling of the geometry is very complicated, a scene with a better degree of completion needs very many entities, the modeling is very complicated, and the hardware requirement is higher, the cost of the scene modeling is high, and the speed is slow.
In one embodiment of the invention, an interactive scene construction method based on VR thinking teaching is provided, a virtual learning scene with good quality can be constructed, the period for constructing the virtual learning scene is greatly reduced, and students can enter an online virtual roaming scene through 5G or high-speed Internet no matter in classrooms or remote places, so that the classrooms can pass through any space and time at any time, and immersive interactive teaching is completed.
Specifically, as shown in fig. 1, the interactive scene construction method based on VR ideological teaching provided by the invention comprises the following steps:
Step S101: acquiring a plurality of sequence images at a target machine position by using a rotary camera to form a sequence image set;
in performing a rotary camera operation, it is first necessary to determine the target camera position, i.e. the fixed position of the camera. The choice of this position is crucial because it will directly affect the range and quality of the acquired image, and once the target pose is determined, the camera's rotation parameters, including rotation angle, speed and rotation axis, can be set up.
By precisely controlling the rotation of the camera, the target area can be photographed in all directions and at multiple angles according to a preset path. In this way, each frame of image captures a different portion of the scene, thereby forming a continuous, comprehensive sequence of images. This sequence image set is used for subsequent interactive scene modeling.
Further, as shown in fig. 2, in step S101, the method further includes a step of preprocessing the sequence image, where the preprocessing includes:
step S1011: performing depth mapping and smoothing on the sequence image by using DIBR technology to realize noise reduction and image cavity reduction on the sequence image; wherein the depth mapping operation is completed by using linear and inverse proportion mapping, and the linear mapping Expressed as:
(1);
In the formula (1), the components are as follows, 、/>Representing near and far cuts, respectively,/>, at projective transformation∈[0,255],/>For restoring the actual depth value;
step S1012: using linear mapping Performing depth mapping of the sequence image according to the inverse proportion mapping relation of the sequence image:
(2);
step S1013: by means of binary Gaussian functions Performing depth smoothing on the sequence image, and enabling the sequence image to maintain symmetry after rotation, and performing binary Gaussian function/>Expressed as:
(3);
in the formula (3), the amino acid sequence of the compound, Pixel points representing sequential images,/>Is a Gaussian distribution parameter; the image edge gaussian threshold is denoted by e;
Step S1014: setting and pixel point Corresponding depth value use/>Expressed by a binary Gaussian function/>For pixel dot/>After filtering and smoothing, the corresponding depth value/>, is obtained
(4);
In the formula (4), the amino acid sequence of the compound,For representing the number of smooth windows; data points within window Length use/>Description; /(I)、/>Representing a constant;
Therefore, the invention can realize the depth mapping and smoothing operation of the image by using the formula (2) and the formula (4), and can effectively reduce the noise and the number of holes in the image.
Further, referring to fig. 1, the method for constructing an interaction scenario based on VR ideological education of the present invention further includes step S102: selecting adjacent first images and second images from the sequence image set, selecting a rectangular area in the overlapping part area of the second images as a matching area, selecting a rectangular area with the same size in the overlapping part area of the first images as a target area, comparing pixel values in the two areas, calculating a difference value reflecting the scene difference degree in the matching area and the target area by using a similarity matching algorithm, moving the position of the target area in the first images, calculating a new difference value, comparing the new difference value with the difference value of the previous target area, finding the smaller value in the two difference values, repeating the steps, searching the whole images to obtain the minimum difference value, and determining the scene overlapping area existing in the two images based on the minimum difference value; seamless splicing processing is carried out on the matched sequence images based on the positions of the scene overlapping areas, and a panoramic view of the target machine position is generated;
Further comprising step S103: and obtaining a panoramic view of all the machine positions, obtaining a panoramic view set, and constructing a virtual thinking teaching scene by using the panoramic view set.
Therefore, the interactive scene construction method of the invention utilizes the rotary camera to acquire a plurality of sequence images at the target machine position to form a sequence image set; matching sequence images with scene overlapping areas in the sequence image set based on a similarity matching algorithm, and performing seamless splicing processing on the matched sequence images based on the positions of the scene overlapping areas to generate a panoramic image of a target machine position; and then, obtaining the panoramic view of all the machine positions to obtain a panoramic view set for constructing the virtual scene of the thinking teaching, so that the constructed virtual scene of the thinking teaching has better scene display quality.
Specifically, in step S102, the whole image is searched to obtain a minimum difference value, and if the minimum difference value is smaller than the threshold value, the target area and the matching area of the minimum difference value of the two images are considered to have the same scene, i.e. the two images have a scene overlapping area; otherwise, the two images do not have a scene overlap region.
Further, in an embodiment of the present invention, the step of calculating, using a similarity matching algorithm, a difference value reflecting a degree of scene difference between the matching region and the target region includes:
calculating the degree of similarity of the matching region T and the target region S Wherein:
(5);
normalizing the formula (5) to be a similarity matching function, and calculating a difference value
(6);
(7);
Wherein,When the matching region T and the target region S are more similar,/>The larger the value of/>The smaller the value of (2), the difference value/>When the difference value is smaller than the threshold value, the target area and the matching area of the minimum difference value of the two images are considered to have the same scene, namely, the two images have a scene overlapping area; otherwise, the two images do not have a scene overlap region.
It can be understood that the invention utilizes the similarity matching algorithm to match the sequence images with scene overlapping areas in the sequence image set, can accurately and rapidly extract the required images from the shot image materials, greatly improves the acquisition speed of the panoramic image, and shortens the integral construction period of the teaching virtual scene.
Further, as shown in fig. 3, in the embodiment of the present invention, the step of performing seamless stitching processing on the matched sequence images based on the position of the scene overlapping area includes:
step S1021: extracting pixel blocks of a matched first image in an overlapping region of a scene And pixel block/>, of the second image
Scalar functions corresponding to the two pixel blocks are respectivelyAnd/>
(8);
Wherein,Representing a gradient operator; /(I)Representing pixel block boundaries;
step S1022: calculate a first pixel block Second pixel block/>Spliced pixel block/>
(9);
Wherein,The representation of the laplace operator is provided,
Step S1023: and a differential solution definition formula is adopted to remove splicing fusion traces, so that seamless splicing treatment is realized:
(10);
Wherein, Representing the gradient field; n represents the pixel block/>Is the number of (3); /(I)Representing a guiding field which enables the pixel blocks to be the same as the scalar function changing field; z denotes the ordinal number of the pixel block to be stitched.
According to the invention, in the process of splicing the panoramic pictures, the scene overlapping area can be quickly found, and seamless splicing and fusion of the panoramic pictures are completed according to the found optimal splicing position, so that the quality of the panoramic pictures is ensured, and the virtual scene is not distorted or split in the process of realizing display.
With continued reference to fig. 4, in an embodiment of the present invention, the method for constructing an interaction scenario based on VR ideological education further includes:
step S201: importing the panoramic atlas into Untiy D to create a virtual roaming scene for thinking and administrative teaching;
Step S202: acquiring the travelling speed data of a user in real time by using a speed measuring device, inputting the travelling speed data into a virtual roaming scene, and fitting the virtual roaming scene by using a curve calculation method to obtain a roaming path;
Step S203: and driving the viewpoint of the virtual roaming scene to move along the roaming path by using the travelling speed data, thereby realizing virtual roaming when the user travels.
In the embodiment of the present invention, the step of obtaining the roaming path by fitting the virtual roaming scene through a curve calculation method includes: and (3) calculating key interpolation points in real time by using an original roaming road control point in the virtual roaming scene through a Catmull-Rom curve interpolation algorithm, and fitting after connecting the key interpolation points to obtain a roaming path in the virtual roaming scene.
The invention adopts a Catmull-Rom curve algorithm of the following formula (11) to perform curve interpolation processing on all path points obtained in the steps, so that a plurality of interpolation points can be obtained, each interpolation point is further connected in turn in a fold line mode, and the movement of a curve path is converted into a linear movement path of the interpolation point, so that a final roaming path is obtained;
(11);
Wherein, in the formula (11), Representing a Path Point/>And Path Point/>Interpolation points of/>、/>、/>、/>Are all sequentially adjacent path points,/>Is the sequence number of the path point,/>Representing a curve tension parameter; /(I)A ratio parameter representing the interpolation of the curve.
In summary, the invention can perform immersive browsing in the virtual scene along the viewpoint of the roaming path according to the travelling speed of the user, and the students can enter the online virtual roaming scene through 5G or high-speed Internet no matter in classrooms or remote places, so that the classrooms can pass through any space and time at any time, and immersive interactive teaching is completed.
Optionally, in the learning of the immersive thinking education, the invention can also perform roaming through riding based on the roaming path, which is an immersive experience combining the virtual reality technology and the sensor technology; specifically, a bicycle is used as an interaction tool: in such systems, exercise bicycles are used as the primary tool for human-machine interaction, with the learner controlling movement in a virtual environment by riding the bicycle;
Wherein, the tool bicycle is provided with various sensors, such as photoelectric encoders, angular displacement sensors and force sensors, which capture the operation of a user on the bicycle in real time, including data of speed, direction, rotation of a cage head, weight of a rider and the like; the data collected by the sensor is transmitted to the upper computer through a Digital Signal Processor (DSP) control circuit, and the DSP is responsible for processing the data and converting the data into corresponding actions in the virtual scene.
The invention can enable students to obtain more realistic visual experience in the virtual interaction scene, and 3D roaming through riding can be used as a learning mode and a body-building tool, and the students can exercise the body while enjoying the learning process of virtual roaming.
As shown in fig. 5, in another embodiment of the present invention, there is further provided an interaction scenario construction system based on VR ideological education, the interaction scenario construction system including:
A rotary camera 301, configured to acquire a plurality of sequence images at a target camera position by using the rotary camera, so as to form a sequence image set;
The image processor 302 is configured to match the sequence images with the scene overlapping regions based on a similarity matching algorithm, and perform seamless stitching processing on the matched sequence images based on the positions of the scene overlapping regions, so as to generate a panoramic image of the target machine position;
And the VR display device 303 is configured to obtain a panoramic view of all the machine positions, obtain a panoramic view set, and construct a virtual thinking teaching scene by using the panoramic view set.
As shown in fig. 6, in another embodiment of the present invention, there is further provided a computer device, where the computer device 400 includes a memory 401, a processor 402, and computer readable instructions stored in the memory 401 and executable on the processor 402, and where the processor 402 implements the interactive scenario construction method based on VR ideological education as provided in the above embodiment when executing the computer readable instructions;
Step S101: acquiring a plurality of sequence images at a target machine position by using a rotary camera to form a sequence image set;
Step S102: selecting adjacent first images and second images from the sequence image set, selecting a rectangular area in the overlapping part area of the second images as a matching area, selecting a rectangular area with the same size in the overlapping part area of the first images as a target area, comparing pixel values in the two areas, calculating a difference value reflecting the scene difference degree in the matching area and the target area by using a similarity matching algorithm, moving the position of the target area in the first images, calculating a new difference value, comparing the new difference value with the difference value of the previous target area, finding the smaller value in the two difference values, repeating the steps, searching the whole images to obtain the minimum difference value, and determining the scene overlapping area existing in the two images based on the minimum difference value; seamless splicing processing is carried out on the matched sequence images based on the positions of the scene overlapping areas, and a panoramic view of the target machine position is generated;
step S103: and obtaining a panoramic view of all the machine positions, obtaining a panoramic view set, and constructing a virtual thinking teaching scene by using the panoramic view set.
In addition, in the embodiment of the present invention, the computer device 400 provided in the present invention may further have a communication interface 403, for receiving a control instruction.
A processor and a memory are included in the computer device, and may further include: an input system and an output system. The processor, memory, input system, and output system may be connected by a bus or other means, and the input system may receive input numerical or character information. The output system may include a display device such as a display screen.
The memory is used as a non-volatile computer readable storage medium and can be used for storing non-volatile software programs, non-volatile computer executable programs and modules, such as program instructions/modules corresponding to the interactive scene construction method based on VR ideological teaching in the embodiment of the application. The memory may include a memory program area and a memory data area, wherein the memory program area may store an operating system, at least one application program required for a function; the storage data area may store data created using an interactive scenario construction method based on VR ideological education, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the local module through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip in some embodiments. The processor is typically used to control the overall operation of the computer device. In this embodiment, the processor is configured to execute the program code stored in the memory or process the data. The processors of the multiple computer devices of the computer device of the present embodiment execute various functional applications and data processing of the server by running nonvolatile software programs, instructions and modules stored in the memory, that is, the steps of the interactive scene construction method based on VR ideological education of the method embodiment are implemented.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Finally, it should be noted that the computer-readable storage media (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of example, and not limitation, nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of example, and not limitation, RAM may be available in a variety of forms such as synchronous RAM (DRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The storage devices of the disclosed aspects are intended to comprise, without being limited to, these and other suitable types of memory.
The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with the following components designed to perform such functions: a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP and/or any other such configuration.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items. The foregoing embodiment of the present invention has been disclosed with reference to the number of embodiments for the purpose of description only, and does not represent the advantages or disadvantages of the embodiments.
Those of ordinary skill in the art will appreciate that: the above discussion of any embodiment is merely exemplary and is not intended to imply that the scope of the disclosure of embodiments of the invention, including the claims, is limited to such examples; combinations of features of the above embodiments or in different embodiments are also possible within the idea of an embodiment of the invention, and many other variations of the different aspects of the embodiments of the invention as described above exist, which are not provided in detail for the sake of brevity. Therefore, any omission, modification, equivalent replacement, improvement, etc. of the embodiments should be included in the protection scope of the embodiments of the present invention.

Claims (7)

1. An interactive scene construction method based on VR thinking teaching is characterized by comprising the following steps:
the interactive scene construction method comprises the following steps:
Step S101: acquiring a plurality of sequence images at a target machine position by using a rotary camera to form a sequence image set;
Step S102: selecting adjacent first images and second images from the sequence image set, selecting a rectangular area in the overlapping part area of the second images as a matching area, selecting a rectangular area with the same size in the overlapping part area of the first images as a target area, comparing pixel values in the two areas, calculating a difference value reflecting the scene difference degree in the matching area and the target area by using a similarity matching algorithm, moving the position of the target area in the first images, calculating a new difference value, comparing the new difference value with the difference value of the previous target area, finding the smaller value in the two difference values, repeating the steps, searching the whole images to obtain the minimum difference value, and determining the scene overlapping area existing in the two images based on the minimum difference value; seamless splicing processing is carried out on the matched sequence images based on the positions of the scene overlapping areas, and a panoramic view of the target machine position is generated;
Step S103: acquiring panoramic views of all the machine positions to obtain a panoramic view set, and constructing a thinking teaching virtual scene by using the panoramic view set;
in step S101, the method further includes a step of preprocessing the sequence image, where the preprocessing step includes:
Performing depth mapping and smoothing on the sequence image by using DIBR technology to realize noise reduction and image cavity reduction on the sequence image; wherein the depth mapping operation is completed by using linear and inverse proportion mapping, and the linear mapping Expressed as:(1);
In the formula (1), the components are as follows, 、/>Representing near and far cuts, respectively,/>, at projective transformation∈[0,255],/>For restoring the actual depth value;
Using linear mapping Performing depth mapping of the sequence image according to the inverse proportion mapping relation of the sequence image:(2);
By means of binary Gaussian functions Performing depth smoothing on the sequence image, and enabling the sequence image to maintain symmetry after rotation, and performing binary Gaussian function/>Expressed as:
(3);
in the formula (3), the amino acid sequence of the compound, Pixel points representing sequential images,/>Is a Gaussian distribution parameter; the image edge gaussian threshold is denoted by e;
Setting and pixel point Corresponding depth value use/>Expressed by a binary Gaussian functionFor pixel dot/>After filtering and smoothing, the corresponding depth value/>, is obtained
(4);
In the formula (4), the amino acid sequence of the compound,For representing the number of smooth windows; data points within window Length use/>Description; /(I)、/>Representing a constant;
The step of determining a scene overlap region present in the two images based on the minimum disparity value comprises:
if the minimum difference value is smaller than the threshold value, the target area and the matching area of the minimum difference value of the two images are considered to have the same scene, namely, the two images have a scene overlapping area; otherwise, the two images do not have a scene overlapping area;
The step of calculating a difference value reflecting the degree of scene difference in the matching region and the target region using a similarity matching algorithm includes:
calculating the degree of similarity of the matching region T and the target region S Wherein:
(5);
normalizing the formula (5) to be a similarity matching function, and calculating a difference value
(6);
(7);
Wherein,When the matching region T and the target region S are more similar,/>The larger the value of/>The smaller the value of (2), the difference value/>When the difference value is smaller than the threshold value, the target area and the matching area of the minimum difference value of the two images are considered to have the same scene, namely, the two images have a scene overlapping area; otherwise, the two images do not have a scene overlap region.
2. The interactive scene construction method based on VR ideological education of claim 1, wherein the step of performing seamless stitching processing on the matched sequence images based on the positions of the scene overlapping areas comprises:
Extracting pixel blocks of a matched first image in an overlapping region of a scene And pixel block/>, of the second image; Scalar functions corresponding to the two pixel blocks are/>, respectivelyAnd/>
(8);
Wherein,Representing a gradient operator; /(I)Representing pixel block boundaries;
Calculate a first pixel block Second pixel block/>Spliced pixel block/>
(9);
Wherein,Representing a laplace operator;
And a differential solution definition formula is adopted to remove splicing fusion traces, so that seamless splicing treatment is realized:
(10);
Wherein, Representing the gradient field; n represents the pixel block/>Is the number of (3); /(I)Representing a guiding field which enables the pixel blocks to be the same as the scalar function changing field; z denotes the ordinal number of the pixel block to be stitched.
3. The interactive scenario construction method based on VR thinking teaching according to claim 2, further comprising:
step S201: importing the panoramic atlas into Untiy D to create a virtual roaming scene for thinking and administrative teaching;
Step S202: acquiring the travelling speed data of a user in real time by using a speed measuring device, inputting the travelling speed data into a virtual roaming scene, and fitting the virtual roaming scene by using a curve calculation method to obtain a roaming path;
Step S203: and driving the viewpoint of the virtual roaming scene to move along the roaming path by using the travelling speed data, thereby realizing virtual roaming when the user travels.
4. The interactive scene construction method based on VR ideological education of claim 3, wherein the step of fitting the virtual roaming scene to obtain the roaming path through a curve calculation method comprises the steps of: and (3) calculating key interpolation points in real time by using an original roaming road control point in the virtual roaming scene through a Catmull-Rom curve interpolation algorithm, and fitting after connecting the key interpolation points to obtain a roaming path in the virtual roaming scene.
5. The interactive scene construction method based on VR ideological education of claim 4 wherein the curve interpolation processing of all the obtained path points by the Catmull-Rom curve algorithm is expressed as:
(11);
In the formula (11), the amino acid sequence of the compound, Representing a Path Point/>And Path Point/>Interpolation points of/>、/>、/>、/>Are all sequentially adjacent path points,/>Is the sequence number of the path point,/>Representing a curve tension parameter; /(I)A ratio parameter representing curve interpolation;
And (3) obtaining a plurality of interpolation points based on the formula (11), sequentially connecting the interpolation points in a broken line mode, converting the motion of the curve path into a linear motion path of the interpolation points, and obtaining a final roaming path.
6. A system for implementing the VR ideological teaching-based interactive scenario construction method of any one of claims 1-5, the system comprising:
The rotary camera is used for acquiring a plurality of sequence images at the target machine position by utilizing the rotary camera to form a sequence image set;
The image processor is used for matching the sequence images with the scene overlapping areas based on a similarity matching algorithm, performing seamless splicing processing on the matched sequence images based on the positions of the scene overlapping areas, and generating a panoramic image of the target machine position;
And the VR display equipment is used for acquiring the panoramic images of all the machine positions, obtaining a panoramic image set, and constructing a thinking teaching virtual scene by using the panoramic image set.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the VR-based ideological teaching interactive scenario construction method of any one of claims 1-5 when the computer program is executed.
CN202410430108.8A 2024-04-10 2024-04-10 Interactive scene construction method and system based on VR thinking teaching Active CN118037999B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410430108.8A CN118037999B (en) 2024-04-10 2024-04-10 Interactive scene construction method and system based on VR thinking teaching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410430108.8A CN118037999B (en) 2024-04-10 2024-04-10 Interactive scene construction method and system based on VR thinking teaching

Publications (2)

Publication Number Publication Date
CN118037999A CN118037999A (en) 2024-05-14
CN118037999B true CN118037999B (en) 2024-06-18

Family

ID=90991699

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410430108.8A Active CN118037999B (en) 2024-04-10 2024-04-10 Interactive scene construction method and system based on VR thinking teaching

Country Status (1)

Country Link
CN (1) CN118037999B (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2443286A1 (en) * 2003-10-01 2005-04-01 Aryan Saed Digital composition of a mosaic motion picture
US9811946B1 (en) * 2016-05-30 2017-11-07 Hong Kong Applied Science and Technology Research Institute Company, Limited High resolution (HR) panorama generation without ghosting artifacts using multiple HR images mapped to a low resolution 360-degree image
CN107666606B (en) * 2016-07-29 2019-07-12 东南大学 Binocular panoramic picture acquisition methods and device
CN106373088B (en) * 2016-08-25 2019-08-06 中国电子科技集团公司第十研究所 The quick joining method of low Duplication aerial image is tilted greatly
CN107644411A (en) * 2017-09-19 2018-01-30 武汉中旗生物医疗电子有限公司 Ultrasonic wide-scene imaging method and device
CN109407547A (en) * 2018-09-28 2019-03-01 合肥学院 Multi-camera in-loop simulation test method and system for panoramic visual perception
CN110175011B (en) * 2019-05-06 2022-06-03 长春理工大学 Panoramic image seamless splicing method
CN114581611B (en) * 2022-04-28 2022-09-20 阿里巴巴(中国)有限公司 Virtual scene construction method and device
CN116152068A (en) * 2023-02-21 2023-05-23 杭州图谱光电科技有限公司 Splicing method for solar panel images
CN117635421A (en) * 2023-11-28 2024-03-01 四川启睿克科技有限公司 Image stitching and fusion method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
全景图像拼接关键技术研究;张少坤;《信息科技辑》;20190131;第I138-4043页 *
全景图拼接算法的设计与实现;杨刚;;重庆工学院学报(自然科学版);20070915(第09期);第1-6页 *

Also Published As

Publication number Publication date
CN118037999A (en) 2024-05-14

Similar Documents

Publication Publication Date Title
Tretschk et al. Demea: Deep mesh autoencoders for non-rigidly deforming objects
CN112771539A (en) Using three-dimensional data predicted from two-dimensional images using neural networks for 3D modeling applications
Yang et al. A multi-task Faster R-CNN method for 3D vehicle detection based on a single image
CN113706699B (en) Data processing method and device, electronic equipment and computer readable storage medium
WO2021063271A1 (en) Human body model reconstruction method and reconstruction system, and storage medium
CN109035415B (en) Virtual model processing method, device, equipment and computer readable storage medium
CN112639878A (en) Unsupervised depth prediction neural network
CN112651881A (en) Image synthesis method, apparatus, device, storage medium, and program product
EP4191538A1 (en) Large scene neural view synthesis
KR20200136723A (en) Method and apparatus for generating learning data for object recognition using virtual city model
He et al. Learning scene dynamics from point cloud sequences
CN112801064A (en) Model training method, electronic device and storage medium
Wang et al. A synthetic dataset for Visual SLAM evaluation
CN115272565A (en) Head three-dimensional model reconstruction method and electronic equipment
CN117218246A (en) Training method and device for image generation model, electronic equipment and storage medium
CN118037999B (en) Interactive scene construction method and system based on VR thinking teaching
CN117711066A (en) Three-dimensional human body posture estimation method, device, equipment and medium
CN116665274A (en) Face driving method and device
Little et al. Tools for richer crowd source image annotations
Domínguez-Morales et al. Stereo matching: From the basis to neuromorphic engineering
Su et al. Omnidirectional depth estimation with hierarchical deep network for multi-fisheye navigation systems
Cai et al. Automatic generation of Labanotation based on human pose estimation in folk dance videos
Feng Deep Learning for Depth, Ego-Motion, Optical Flow Estimation, and Semantic Segmentation
WO2024077791A1 (en) Video generation method and apparatus, device, and computer readable storage medium
Shang et al. Semantic Image Translation for Repairing the Texture Defects of Building Models

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant