CN112235556B - VR scene construction method, system and device

VR scene construction method, system and device

Info

Publication number
CN112235556B
Authority
CN
China
Prior art keywords: data, point cloud, path, scene, video data
Prior art date
Legal status: Active
Application number
CN202011030578.3A
Other languages
Chinese (zh)
Other versions
CN112235556A (en)
Inventor
乐倍宁
宋阳
Current Assignee
Beijing Lingjing World Technology Co ltd
Original Assignee
Beijing Lingjing World Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Lingjing World Technology Co ltd
Priority to CN202011030578.3A
Publication of CN112235556A
Application granted
Publication of CN112235556B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/122: Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/261: Image signal generators with monoscopic-to-stereoscopic image conversion

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method, a system and a device for constructing a VR scene, comprising the following steps: data receiving, in which live-action data used for establishing the VR scene is received; data screening, in which the live-action data further comprises an acquisition path, and the invalid paths on the acquisition path are identified and removed to obtain an effective path; data processing, in which the live-action data further comprises video data and point cloud data, the video data and point cloud data collected on the effective path are accessed, and the frame counts of the video data and the point cloud data are made to correspond; and scene construction, in which a simulated scene is established from the point cloud data and the video data is embedded in the simulated scene to construct the VR scene. On the one hand, the method avoids large-scale three-dimensional rendering and reduces the amount of calculation, solving the problem of poor user experience caused by stuttering while preserving user immersion; on the other hand, combining the point cloud data enhances the physical effect and the realism of the VR scene.

Description

VR scene construction method, system and device
Technical Field
The invention relates to the technical field of virtual reality, in particular to a VR scene construction method, system and device.
Background
Virtual reality (VR) technology, also known as immersive environment technology, draws on computer, electronic information and simulation technology; its basic implementation is a computer simulating a virtual environment to give people a sense of environmental immersion. With the continuous development of social productivity and science and technology, demand for VR technology is growing in various industries. VR technology has made great progress and has gradually become a new field of science and technology, with applications in areas such as power line inspection, VR house viewing and gaming. However, the VR effect often cannot be guaranteed to be sufficiently realistic; to guarantee the authenticity of the user experience, the prior construction of the VR scene is very important.
Traditional VR scenes are constructed in two ways: the first uses three-dimensional modeling; the second uses a panoramic picture instead of a three-dimensional model. The VR scene built in the first way is lifelike and brings a fine sense of immersion, but the development cycle is long, the development cost is high, the files are large and the rendering workload is heavy, which leads to stuttering and increases system latency. The second way has low system latency and low development cost, reduces file size, and can make up for the shortcomings of the first way, but a scene constructed from one panoramic picture suffers serious distortion caused by errors of the shooting equipment and the synthesis algorithm. Such a scene is planar: a single picture is stretched into three-dimensional space, so a vivid three-dimensional effect cannot be achieved. A VR scene made purely from images can only support user interaction through video implantation; it lacks the operability and high interactivity of a model and cannot bring a sense of immersion to the user.
Therefore, there is a need in the art for a VR scene construction method, system and apparatus.
Accordingly, the present invention is directed to such a system.
Disclosure of Invention
The invention aims to provide a method, a system and a device for constructing a VR scene, so as to solve at least one of the technical problems described above.
The invention provides a VR scene construction method, which comprises the following steps:
receiving data, namely receiving live-action data, wherein the live-action data is acquired in a scene to be acquired and is used for establishing the VR scene;
screening data, wherein the live-action data further comprises an acquisition path, outputting an invalid path on the acquisition path, and deleting the invalid path from the acquisition path to obtain an effective path;
data processing, wherein the live-action data further comprises video data and point cloud data, the video data and the point cloud data collected on the effective path are accessed, and the video data and the point cloud data are respectively processed so that their frame counts correspond;
and scene construction, namely establishing a simulated scene according to the point cloud data, embedding the video data in the simulated scene, and constructing a VR scene.
By adopting the scheme, on the one hand, on the premise of ensuring user immersion, large-scale three-dimensional rendering is avoided, the amount of calculation is reduced, the operation speed is improved, and the problem of poor user experience caused by stuttering when actually using the VR scene is solved; on the other hand, the live-action data comprises video data and point cloud data, a VR scene constructed from video data is more immersive than a single photo, and combining the point cloud data enhances the physical effect, the interactivity and the realism of the VR scene.
Further, the live-action data is data acquired in a scene to be acquired by using an acquisition device.
Further, the acquisition device may be an acquisition radar, a video camera, a panoramic camera, or the like.
By adopting the scheme, the live-action data is acquired from the actual acquisition scene, which avoids the huge calculation needed to establish a virtual scene directly; acquiring live-action data also improves the authenticity of the data and hence of the virtual scene established later.
Further, the acquisition path includes acquisition sub-paths, and the data screening further includes invalid path screening; outputting the invalid paths on the acquisition path further includes the steps of:
judging whether the number of the acquisition sub-paths in the acquisition path is more than 1;
if not, outputting the invalid path according to a first scheme;
if yes, judging whether intersection points exist among the acquisition sub-paths;
if not, outputting the invalid path according to a first scheme;
and if so, outputting the invalid path according to a second scheme.
By adopting the scheme: the acquisition device is usually mounted on a carrying device, such as a collection vehicle or an unmanned aerial vehicle. When the carrying device starts or stops, it needs a certain time and distance to enter or leave a steady state, and data acquired while entering or leaving the steady state usually has large errors because the carrying device is not stable enough, so such data has no reference value; deleting this part of the data further guarantees the fidelity of the final VR scene.
Further, the step of calculating the first scheme comprises:
receiving the acquisition sub-path and outputting the position of the end point of the acquisition sub-path;
receiving a first invalid threshold, and drawing a circle with the end point of the acquisition sub-path as the center and the first invalid threshold as the radius;
outputting the intersection point of the acquisition sub-path and a circle with the first invalid threshold as the radius as a first intersection point;
and outputting a path between the first intersection point and the center of the circle on the acquisition sub-path as an invalid path.
By adopting the scheme, a first invalid threshold is set, and the data collected by the carrying device within the first invalid threshold distance is deemed unstable and without reference value; identifying such data through the first invalid threshold improves the identification precision.
Further, the step of calculating the second scheme includes:
receiving the intersection point and the position of the end point of the acquisition sub-path;
receiving a second invalid threshold, and drawing a circle with the intersection point as the center and the second invalid threshold as the radius;
and judging whether the end point of the acquisition sub-path is in a circle with a second invalid threshold value as a radius, if so, outputting the path between the intersection point and the end point on the acquisition sub-path as an invalid path.
By adopting the scheme, when an intersection point exists between the acquisition sub-paths, the carrying device is usually turning, and acquisition on the turning portion that is not on the main path has no reference value; deleting the invalid paths outside the main path by setting the second invalid threshold reduces the amount of calculation in later data classification.
Further, the data processing further comprises the steps of:
the data access is used for receiving the effective path and accessing the video data and the point cloud data collected on the effective path;
analyzing the data, namely analyzing the video data and the point cloud data to respectively obtain the number of frames collected by the video data and the point cloud data per second;
and data matching, namely receiving the acquisition time sequence of the video data and the point cloud data, and matching the video data and the point cloud data frame by frame according to the acquisition time sequence.
By adopting the scheme, the number of frames collected per second by the video data and the point cloud data is analyzed, and the video data and the point cloud data are matched frame by frame according to their acquisition time sequence, which improves the precision of data matching and avoids poor fidelity of the final VR scene caused by rough data processing.
Further, the data matching further comprises the steps of:
judging whether the number of frames per second of the video data is greater than the number of frames per second of the point cloud data;
and if so, equally dividing the point cloud data in equal proportion to ensure that the frame number per second of the video data is equal to the frame number per second of the point cloud data.
By adopting the scheme, since the frame rate of the video data is the number of frames acquired per second during collection, the point cloud data is equally divided in equal proportion and the video frame count does not need to be reduced, which further ensures the smoothness of the VR scene in use and improves the user experience.
Further, the data matching further comprises the steps of:
if the number of frames per second of the video data is equal to the number of frames per second of the point cloud data, matching the video data and the point cloud data frame by frame according to the acquisition time sequence;
and if the frame number per second of the video data is less than that of the point cloud data, merging the point cloud data in an equal proportion to ensure that the frame number per second of the video data is equal to that of the point cloud data.
By adopting the scheme, the frames of the video data are not damaged during data matching, the highest fidelity is ensured, and the realism of the VR scene simulation is improved.
Further, the scene construction step further includes:
establishing a point cloud simulation scene, receiving point cloud data, representing the point cloud data in a coordinate mode, and establishing the point cloud simulation scene;
and establishing a VR scene, receiving the video data, embedding each frame of the video data into the position of the point cloud data corresponding to the video data, and establishing the VR scene.
By adopting the scheme, only the parts with higher point density are classified. Each point can be expressed as a point in a spatial coordinate system and classified by its vector; the classification method may be DBSCAN, k-means, kNN, SVM or another method, and the classification result is mapped back to the collected point cloud to obtain the final classification. In the actual sampling process, because of the particularity of corner positions in the space to be collected, a point is often collected multiple times; if the points are not classified, the object where a point is located, such as a wall, cannot be distinguished, and data confusion easily occurs, which this scheme solves. Each frame of the video data is embedded at the position of its corresponding point cloud data and the scene is built frame by frame, which improves the realism of the VR scene and the user's sense of immersion when using it.
Further, the VR scene construction method further comprises an external implant step, wherein the external implant step comprises the steps of:
receiving an implant video of the implant, wherein the implant video is video data acquired by the implant in the scene to be acquired;
receiving an implant position of an implant in a VR scene, and embedding the implant video in the implant position.
By adopting the scheme, the video of the implant is embedded into the implantation position, so that the authenticity of the implant is improved.
Further, the step of embedding the implant video into the implantation location comprises:
dividing the implant video into frames, matching the implant video with video data of an implant position frame by frame, and fusing the implant video into the video data.
Preferably, the external implant comprises the steps of:
receiving point cloud information of the implant;
embedding the point cloud information into the implantation location;
and fusing the implant video with the implant position and the point cloud information of the implant.
By adopting the scheme, the implant video and the point cloud information of the implant are fused, the point cloud data has physical characteristics, and the scene interactivity is improved.
Further, the manner of incorporating the implant video into the video data may be to perform matting processing on the video data against the implant video.
By adopting the scheme, matting processing is simple and quick, and is more efficient when the implant is small or has little influence on the overall video information.
Preferably, the method for integrating the implant video into the video data comprises the following steps:
screening out frames of the implant in the implant video;
extracting frames in the video data corresponding to frames in the implant video where the implant exists;
replacing the corresponding frame in the video data with the corresponding frame in which the implant is present.
By adopting the scheme, the corresponding frame in the video data is extracted and directly replaced, so that the processing speed is improved, and the image distortion is avoided.
A second aspect of the present invention provides a VR scene construction system, including:
the data receiving module is used for receiving live-action data, the live-action data is data acquired in a scene to be acquired, and the live-action data is used for establishing the VR scene;
the data screening module is used for outputting an invalid path on the acquisition path and deleting the invalid path from the acquisition path to obtain an effective path;
the data processing module is used for accessing the video data and the point cloud data collected on the effective path, and for processing the video data and the point cloud data respectively so that their frame counts correspond;
and the scene construction module is used for establishing a simulation scene according to the point cloud data, embedding the video data in the simulation scene and constructing a VR scene.
By adopting the scheme, on the one hand, on the premise of ensuring user immersion, large-scale three-dimensional rendering is avoided, the amount of calculation is reduced, the operation speed is improved, and the problem of poor user experience caused by stuttering when actually using the VR scene is solved; on the other hand, the live-action data comprises video data and point cloud data, a VR scene constructed from video data is more immersive than a single photo, and combining the point cloud data enhances the physical effect and the realism of the VR scene.
Further, the acquisition path includes acquisition sub-paths, and the data screening module further includes an invalid path screening module, configured to output the invalid paths on the acquisition path, which further includes:
judging whether the number of the acquisition sub-paths in the acquisition path is greater than 1;
if not, outputting the invalid path according to a first scheme;
if yes, judging whether intersection points exist among the acquisition sub-paths;
if not, outputting the invalid path according to a first scheme;
and if so, outputting the invalid path according to a second scheme.
By adopting the scheme: the acquisition device is usually mounted on a carrying device, such as a collection vehicle or an unmanned aerial vehicle. When the carrying device starts or stops, it needs a certain time and distance to enter or leave a steady state, and data acquired while entering or leaving the steady state usually has large errors because the carrying device is not stable enough, so such data has no reference value; deleting this part of the data further guarantees the fidelity of the final VR scene.
Further, the invalid path screening module further includes a first scheme calculating module, and the step of calculating the first scheme includes:
receiving the acquisition sub-path and outputting the position of the end point of the acquisition sub-path;
receiving a first invalid threshold, and drawing a circle with the end point of the acquisition sub-path as the center and the first invalid threshold as the radius;
outputting the intersection point of the acquisition sub-path and a circle with the first invalid threshold as the radius as a first intersection point;
and outputting a path between the first intersection point and the center of the circle on the acquisition sub-path as an invalid path.
By adopting the scheme, a first invalid threshold is set, and the data collected by the carrying device within the first invalid threshold distance is deemed unstable and without reference value; identifying such data through the first invalid threshold improves the identification precision.
Further, the invalid path screening module further includes a second scheme calculating module, and the step of calculating the second scheme includes:
receiving the intersection point and the position of the end point of the acquisition sub-path;
receiving a second invalid threshold, and drawing a circle with the intersection point as the center and the second invalid threshold as the radius;
and judging whether the end point of the acquisition sub-path is in a circle with a second invalid threshold value as a radius, if so, outputting the path between the intersection point and the end point on the acquisition sub-path as an invalid path.
By adopting the scheme, when an intersection point exists between the acquisition sub-paths, the carrying device is usually turning, and acquisition on the turning portion that is not on the main path has no reference value; deleting the invalid paths outside the main path by setting the second invalid threshold reduces the amount of calculation in later data classification.
Further, the data processing module further comprises:
the data access module is used for receiving the effective path and accessing the video data and the point cloud data collected on the effective path;
the data analysis module is used for analyzing the video data and the point cloud data to respectively obtain the collected frame number of the video data and the point cloud data per second;
and the data matching module is used for receiving the acquisition time sequence of the video data and the point cloud data and matching the video data and the point cloud data frame by frame according to the acquisition time sequence.
By adopting the scheme, the number of frames collected per second by the video data and the point cloud data is analyzed, and the video data and the point cloud data are matched frame by frame according to their acquisition time sequence, which improves the precision of data matching and avoids poor fidelity of the final VR scene caused by rough data processing.
Further, the data matching module further comprises:
judging whether the number of frames per second of the video data is greater than the number of frames per second of the point cloud data;
and if so, equally dividing the point cloud data in equal proportion to ensure that the frame number per second of the video data is equal to the frame number per second of the point cloud data.
By adopting the scheme, since the frame rate of the video data is the number of frames acquired per second during collection, the point cloud data is equally divided in equal proportion and the video frame count does not need to be reduced, which further ensures the smoothness of the VR scene in use and improves the user experience.
Further, the data matching module further comprises:
if the number of frames per second of the video data is equal to the number of frames per second of the point cloud data, matching the video data with the point cloud data frame by frame according to the acquisition time sequence;
and if the frame number per second of the video data is less than that of the point cloud data, merging the point cloud data in equal proportion to ensure that the frame number per second of the video data is equal to that of the point cloud data.
By adopting the scheme, the frames of the video data are not damaged during data matching, the highest fidelity is ensured, and the realism of the VR scene simulation is improved.
Further, the scene construction module further includes:
the point cloud simulation scene establishing module is used for receiving point cloud data, representing the point cloud data in a coordinate mode and establishing a point cloud simulation scene;
and the VR scene establishing module is used for receiving the video data, embedding each frame of the video data into the position of the point cloud data corresponding to the video data, and establishing the VR scene.
By adopting the scheme, in the actual sampling process, because of the particularity of corner positions in the space to be acquired, a point is often acquired multiple times; if the points are not classified, the object where a point is located, such as a wall, cannot be distinguished, and data confusion easily occurs, which this scheme solves. In addition, each frame of the video data is embedded at the position of its corresponding point cloud data and the scene is built frame by frame, which improves the realism of the VR scene and the user's sense of immersion.
Further, the VR scene construction system further includes an external implant module, where the external implant module includes:
receiving an implant video of the implant, wherein the implant video is video data acquired by the implant in the scene to be acquired;
receiving an implant position of an implant in a VR scene, and embedding the implant video in the implant position.
By adopting the scheme, embedding the implant video at the implantation position improves the authenticity of the implant.
Further, the step of embedding the implant video in the implantation location comprises:
dividing the implant video into frames, matching the implant video with video data of an implant position frame by frame, and fusing the implant video into the video data.
Preferably, the external implant module further comprises:
receiving point cloud information of the implant;
embedding the point cloud information into the implantation location;
and fusing the implant video with the implant position and the point cloud information of the implant.
By adopting the scheme, the implant video and the point cloud information of the implant are fused, the point cloud data has physical characteristics, and the scene interactivity is improved.
Further, the manner of blending the implant video into the video data may be to perform matting processing on the video data against the implant video.
By adopting the scheme, matting processing is simple and quick, and is more efficient when the implant is small or has little influence on the overall video information.
Preferably, the method for integrating the implant video into the video data comprises the following steps:
screening out frames of the implant in the implant video;
extracting frames in the video data corresponding to frames in the implant video where the implant exists;
replacing the corresponding frame in the video data with the corresponding frame in which the implant is present.
By adopting the scheme, the corresponding frame in the video data is extracted and directly replaced, so that the processing speed is improved, and the image distortion is avoided.
A third aspect of the invention provides a VR scene construction apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above method when executing the program.
A fourth aspect of the invention provides a storage medium comprising one or more programs which are executable by a processor to perform the method described above.
In conclusion, the invention has the following beneficial effects:
1. according to the VR scene construction method, on the one hand, on the premise of ensuring user immersion, large-scale three-dimensional rendering is avoided, the amount of calculation is reduced, the operation speed is improved, and the problem of poor user experience caused by stuttering when actually using the VR scene is solved; on the other hand, the live-action data comprises video data and point cloud data, a VR scene constructed from video data is more immersive than a single photo, and combining the point cloud data enhances the physical effect and the realism of the VR scene;
2. according to the VR scene construction method, the acquisition device is usually mounted on a carrying device, such as a collection vehicle or an unmanned aerial vehicle; when the carrying device starts or stops, it needs a certain time and distance to enter or leave a steady state, and data acquired while entering or leaving the steady state usually has large errors because the carrying device is not stable enough and has no reference value, so deleting this data further guarantees the fidelity of the final VR scene;
3. according to the VR scene construction method, the number of frames collected per second by the video data and the point cloud data is analyzed, and the video data and the point cloud data are matched frame by frame according to their acquisition time sequence, which improves the precision of data matching and avoids poor fidelity of the final VR scene caused by rough data processing;
4. according to the VR scene construction method, since the frame rate of the video data is the number of frames acquired per second during collection, the point cloud data is equally divided in equal proportion and the video frame count does not need to be reduced, which further ensures the smoothness of the VR scene in use and improves the user experience;
5. according to the VR scene construction method, the implant video is embedded at the implantation position, improving the authenticity of the implant.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an embodiment of a VR scene construction method of the invention;
FIG. 2 is a flow diagram of one embodiment of the present invention including invalid path screening;
FIG. 3 is a flowchart detailing the invalid path screening step of FIG. 2;
FIG. 4 is a flow chart of the first scheme of FIG. 2;
FIG. 5 is a flow chart of the second scheme of FIG. 2;
FIG. 6 is a flow chart detailing the data processing step of FIG. 1;
FIG. 7 is a flow chart detailing the data matching step of FIG. 6;
FIG. 8 is a flowchart of another embodiment of a VR scene construction method of the present invention;
FIG. 9 is a diagram of one embodiment of a VR scene construction system in accordance with the present invention;
FIG. 10 is a schematic diagram of a detail of the module of FIG. 9;
FIG. 11 is a diagram illustrating a VR scene construction system in accordance with another embodiment of the present invention;
FIG. 12 is a schematic diagram of a refinement of the module of FIG. 11;
FIG. 13 is a diagram illustrating one embodiment of the invalid path filter;
fig. 14 is a schematic diagram of another embodiment of the invalid path screening.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
As shown in fig. 1, a first aspect of the present invention provides a VR scene construction method, including the following steps:
s100, receiving data, namely receiving live-action data, wherein the live-action data are acquired in a scene to be acquired and are used for establishing the VR scene;
In the specific implementation process, the data needs to be collected using an acquisition device; the acquisition device comprises a data collection device and a positioning device, the positioning device is used to determine the position of the data collection device, and the acquisition device needs to be mounted on a carrying device.
S200, screening data, wherein the live-action data further comprises an acquisition path, outputting an invalid path on the acquisition path, and deleting the invalid path from the acquisition path to obtain an effective path;
In the specific implementation process, the data collection device includes an acquisition camera and an acquisition radar, and the positioning device may be a positioning radar. A coordinate system is established on the overhead plane of the scene to be acquired; the position of the positioning device is obtained directly through the positioning radar, the relative position of the data collection device and the positioning device is measured in advance, and the position of the data collection device in the coordinate system is calculated from the position of the positioning device. The acquisition path is the trajectory along which the data collection device, mounted on the carrying device, moves; carrying devices include but are not limited to collection vehicles, unmanned aerial vehicles, etc.
In a specific implementation, the way of calculating the position of the data collection device in the coordinate system from the position of the positioning device may be:
receiving the coordinates of the positioning device, converting the coordinates into a quaternion, and converting the quaternion into a rotation matrix M0;
receiving the transformation matrix between the coordinates of the positioning device and the data collection device, denoted T01;
setting the rotation matrix corresponding to the coordinates of the data collection device as M1, where M1 = M0 · T01;
converting M1 back into coordinates, which are the coordinates of the data collection device.
By adopting the scheme, although the position of the data collection device cannot be located directly, its coordinates are calculated from the measured relative position of the data collection device and the positioning device, which improves the convenience of calculation.
In the implementation, a rotation matrix is a matrix whose effect, when multiplied with a vector, is to change the direction of the vector but not its magnitude, while maintaining chirality. A rotation matrix does not include point inversion, which would change the handedness, i.e. change a right-handed coordinate system into a left-handed one or vice versa.
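The following Python sketch illustrates the M1 = M0 · T01 composition; the quaternion convention (w, x, y, z), the example values and the function name are assumptions for illustration and are not specified by the patent.

```python
import numpy as np

def quat_to_rotation_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# M0: orientation of the positioning device, from its quaternion reading
# (hypothetical value; roughly a 30-degree rotation about the y axis).
M0 = quat_to_rotation_matrix(np.array([0.9659, 0.0, 0.2588, 0.0]))

# T01: pre-measured transformation between the positioning device and the
# data collection device (identity used here as a calibration placeholder).
T01 = np.eye(3)

# M1 = M0 . T01: the rotation corresponding to the data collection device;
# converting M1 back to coordinates yields the device's pose.
M1 = M0 @ T01
```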
In a preferred embodiment of the present invention, the carrying device is a collection vehicle, and a stepping motor for ensuring motion stability is installed in the collection vehicle.
S300, data processing, wherein the live-action data further comprises video data and point cloud data, the video data and the point cloud data collected on the effective path are accessed, and the video data and the point cloud data are respectively processed so that their frame counts correspond;
in a preferred embodiment of the present invention, the video data is data collected by a collecting camera, and the collecting camera may be a panoramic camera, a single lens reflex camera, a full-frame camera, or the like; the point cloud data is data collected by a collection radar, and the collection radar can be a pulse laser radar or a continuous wave laser radar and the like.
S400, scene construction, namely establishing a simulation scene according to the point cloud data, embedding the video data in the simulation scene, and constructing a VR scene.
In a specific implementation process, a simulation scene established by the point cloud data is a simulation scene formed by point accumulation.
By adopting the scheme, on the one hand, on the premise of ensuring user immersion, large-scale three-dimensional rendering is avoided, the amount of calculation is reduced, the operation speed is improved, and the problem of poor user experience caused by stuttering when actually using the VR scene is solved; on the other hand, the live-action data comprises video data and point cloud data, a VR scene constructed from video data is more immersive than a single photo, and combining the point cloud data enhances the physical effect and the realism of the VR scene.
As shown in fig. 2 and 3, in a specific implementation process, the acquisition path includes acquisition sub-paths, and step S200, data screening, further includes S210, invalid path screening, which outputs the invalid paths on the acquisition path and further includes the steps of:
judging whether the number of the acquisition sub-paths in the acquisition path is greater than 1;
if not, outputting the invalid path according to a first scheme;
if yes, judging whether intersection points exist among the acquisition sub-paths;
if not, outputting the invalid path according to a first scheme;
and if so, outputting the invalid path according to a second scheme.
By adopting the scheme: the acquisition device is usually mounted on a carrying device, such as a collection vehicle or an unmanned aerial vehicle. When the carrying device starts or stops, it needs a certain time and distance to enter or leave a steady state, and data acquired while entering or leaving the steady state usually has large errors because the carrying device is not stable enough, so such data has no reference value; deleting this part of the data further guarantees the fidelity of the final VR scene.
As shown in fig. 3 and 4, in a specific implementation process, the step of calculating the first scheme includes:
receiving the acquisition sub-path and outputting the position of the end point of the acquisition sub-path;
receiving a first invalid threshold, and drawing a circle with the end point of the acquisition sub-path as the center and the first invalid threshold as the radius;
outputting the intersection point of the acquisition sub-path and a circle with the first invalid threshold as the radius as a first intersection point;
and outputting a path between the first intersection point and the center of the circle on the acquisition sub-path as an invalid path.
By adopting the scheme, a first invalid threshold is set, and the data collected by the carrying device within the first invalid threshold distance is deemed unstable and without reference value; identifying such data through the first invalid threshold improves the identification precision.
As shown in fig. 13, in the specific implementation process, let the end points of the acquisition sub-path be A and B and the first invalid threshold be X1; circles of radius X1 are drawn around A and B, and their intersections with the acquisition sub-path are C and D respectively, so the segments AC and BD are invalid paths.
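A minimal sketch of the first scheme, assuming the sub-path is given as sampled 2-D positions (the function name and the straight test path are illustrative; for a sampled polyline, points within X1 of an end point approximate the segments AC and BD cut off by the circles):

```python
import numpy as np

def trim_unstable_ends(subpath, x1):
    """First scheme: the stretches of the sub-path within distance x1 of
    its end points (carrier accelerating or braking) are invalid; keep
    only the steady middle part. subpath is an (N, 2) array of positions."""
    d_start = np.linalg.norm(subpath - subpath[0], axis=1)
    d_end = np.linalg.norm(subpath - subpath[-1], axis=1)
    return subpath[(d_start > x1) & (d_end > x1)]

# Straight sub-path from A = (0, 0) to B = (10, 0) sampled every 0.1 m,
# with X1 = 1.0: the segments AC and BD of Fig. 13 are removed.
subpath = np.stack([np.linspace(0.0, 10.0, 101), np.zeros(101)], axis=1)
valid = trim_unstable_ends(subpath, x1=1.0)
```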
As shown in fig. 3 and 5, in a specific implementation process, the step of calculating the second scheme includes:
receiving the intersection point and the position of the end point of the acquisition sub-path;
receiving a second invalid threshold, and drawing a circle with the intersection point as the center and the second invalid threshold as the radius;
and judging whether the end point of the acquisition sub-path is in a circle with a second invalid threshold value as a radius, if so, outputting the path between the intersection point and the end point on the acquisition sub-path as an invalid path.
By adopting the scheme, when an intersection point exists between the acquisition sub-paths, the carrying device is usually turning, and acquisition on the turning portion that is not on the main path has no reference value; deleting the invalid paths outside the main path by setting the second invalid threshold reduces the amount of calculation in later data classification.
As shown in fig. 14, in the specific implementation process, let the intersection points be O1 and O2, the end points of the acquisition sub-paths be A1, A2, A3 and A4, and the second invalid threshold be X2; circles of radius X2 are drawn with O1 and O2 as centers. The end points A1, A2 and A3 fall inside these circles, so the segments O1A1, O1A2 and O2A3 are invalid paths, while the segment O2A4 is not an invalid path.
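A corresponding sketch of the second scheme under the same assumptions (sampled positions; all names are illustrative). An end point lying inside the circle of radius X2 around an intersection marks the whole intersection-to-end stretch as invalid, as with O1A1, O1A2 and O2A3 above:

```python
import numpy as np

def second_scheme_mask(subpath, intersection, x2):
    """Return a boolean mask keeping the valid points of one sub-path.
    If an end point of the sub-path lies inside the circle of radius x2
    centred on the intersection, the stretch between the intersection
    and that end point is marked invalid."""
    keep = np.ones(len(subpath), dtype=bool)
    # Index of the sampled point closest to the intersection.
    i_o = int(np.argmin(np.linalg.norm(subpath - intersection, axis=1)))
    for end in (0, len(subpath) - 1):
        if np.linalg.norm(subpath[end] - intersection) < x2:
            lo, hi = sorted((i_o, end))
            keep[lo:hi + 1] = False  # drop the branch, e.g. O1A1
    return keep
```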
As shown in fig. 6, in a specific implementation process, S300, data processing, further includes the steps of:
s310, data access, namely receiving the effective path and accessing the video data and point cloud data acquired on the effective path;
in a specific implementation process, the video data and the point cloud data are synchronously acquired, and the video data and the point cloud data acquired at the same time point can be extracted according to acquisition time.
S320, analyzing the data, analyzing the video data and the point cloud data, and respectively obtaining the number of frames collected by the video data and the point cloud data per second;
In a specific implementation process, the number of frames acquired per second of the video data is the number of frames the acquisition camera captures per second, and the frame count of the point cloud data is the number of scans per second of the acquisition radar.
And S330, data matching, namely receiving the acquisition time sequence of the video data and the point cloud data, and matching the video data and the point cloud data frame by frame according to the acquisition time sequence.
In a specific implementation process, if the total duration of the video data is 10 s, the video data and the point cloud data acquired in the same second, e.g. the first second, can be extracted together for matching.
By adopting the scheme, the number of frames collected per second by the video data and the point cloud data is analyzed, and the video data and the point cloud data are matched frame by frame according to their acquisition time sequence, which improves the precision of data matching and avoids poor fidelity of the final VR scene caused by rough data processing.
As shown in fig. 7, in a preferred embodiment of the present invention, S330, data matching, further includes the steps of:
judging whether the number of frames per second of the video data is greater than the number of frames per second of the point cloud data;
and if so, equally dividing the point cloud data in equal proportion to ensure that the frame number per second of the video data is equal to the frame number per second of the point cloud data.
By adopting the scheme, since the frame rate of the video data is the number of frames acquired per second during collection, the point cloud data is equally divided in equal proportion and the video frame count does not need to be reduced, which further ensures the smoothness of the VR scene in use and improves the user experience.
In a specific implementation process, the video data may be 30 frames per second and the point cloud data 1 frame per second; equal-proportion dividing then means duplicating the point cloud data acquired in that second into 30 identical copies, so that each copy corresponds to one frame of the video data.
In a specific implementation process, S330, data matching, further includes the steps of:
if the number of frames per second of the video data is equal to the number of frames per second of the point cloud data, matching the video data with the point cloud data frame by frame according to the acquisition time sequence;
and if the frame number per second of the video data is less than that of the point cloud data, merging the point cloud data in equal proportion to ensure that the frame number per second of the video data is equal to that of the point cloud data.
By adopting the scheme, the frames of the video data are not damaged during data matching, the highest fidelity is ensured, and the realism of the VR scene simulation is improved.
In a specific implementation process, if the frame rate of the video data equals the frame rate of the point cloud data, the frames of the video data and of the point cloud data can be matched directly. If the frame rate of the video data is less than that of the point cloud data, equal-proportion merging is performed: for example, if the video data is 1 frame per second and the point cloud data is 2 frames per second, i.e. 1 frame per 0.5 second, the two point cloud frames collected within each second are merged into one, and the merged point cloud is matched frame by frame with the video data.
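The three cases can be sketched together as follows (Python; the even-divisibility assumption and all names are illustrative, and "merging" is shown as simple concatenation of the point sets):

```python
import numpy as np

def match_rates(pc_frames, video_fps, pc_fps):
    """Equal-proportion dividing/merging of one second of point cloud
    frames so their count equals the video frame count for that second.
    pc_frames: list of (N, 3) point arrays; assumes the rates divide."""
    if video_fps > pc_fps:
        # e.g. 30 fps video vs 1 fps cloud: copy each cloud 30 times.
        k = video_fps // pc_fps
        return [frame for frame in pc_frames for _ in range(k)]
    if video_fps < pc_fps:
        # e.g. 1 fps video vs 2 fps cloud: merge every 2 clouds into 1.
        k = pc_fps // video_fps
        return [np.concatenate(pc_frames[i:i + k])
                for i in range(0, len(pc_frames), k)]
    return list(pc_frames)  # equal rates: match frame by frame as-is
```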
In a specific implementation process, the step S400 of constructing a scene further includes:
s410, establishing a point cloud simulation scene, receiving point cloud data, representing the point cloud data in a coordinate mode, and establishing the point cloud simulation scene;
In a preferred embodiment of the present invention, the step of representing the point cloud data in a coordinate manner further includes classifying the point cloud data; the classification includes screening the point cloud data using the PCL method and generating plane normal vectors, and the classification method can group points whose normal vectors are almost the same according to those normal vectors.
In a specific implementation process, the classification method can be one of several methods such as DBSCAN, k-means, kNN or SVM.
In the specific implementation process, the Point Cloud Library (PCL) implements a large number of general point-cloud algorithms and efficient data structures, covering point cloud acquisition, filtering, segmentation, registration, retrieval, feature extraction, recognition, tracking, surface reconstruction, visualization and so on. It supports multiple operating system platforms and can run on Windows, Linux, Android, macOS and some embedded real-time systems.
In the implementation process, the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm is a typical density-based clustering method. It defines a cluster as the maximum set of density-connected points, can divide regions of sufficient density into clusters, and can find clusters of arbitrary shape in noisy spatial data sets. The k-means clustering algorithm is an iterative clustering analysis algorithm: the data is divided into k groups, k objects are randomly selected as initial cluster centers, the distance between each object and each cluster center is calculated, and each object is assigned to the nearest cluster center; a cluster center together with the objects assigned to it represents a cluster. For each assigned sample, the cluster center is recalculated from the objects currently in the cluster, and this process repeats until some termination condition is met, such as no (or a minimum number of) objects being reassigned to different clusters, no (or a minimum number of) cluster centers changing, or the sum of squared errors reaching a local minimum. The k-nearest neighbor (kNN) classification algorithm is one of the simplest methods in data mining classification: each sample can be represented by its k nearest neighbors, and each record in the data set is classified accordingly. A Support Vector Machine (SVM) is a generalized linear classifier that performs binary classification of data in a supervised learning manner; its decision boundary is the maximum-margin hyperplane solved from the learning samples.
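A compact sketch of the normal-vector classification described above, substituting NumPy PCA for PCL's normal estimation and scikit-learn's DBSCAN for the clustering step (all parameter values are illustrative; note that estimated normals are sign-ambiguous, which a production pipeline would normalize first):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

def estimate_normals(points, k=10):
    """Per-point unit normal: the smallest principal axis of each point's
    k-nearest-neighbour patch (a stand-in for PCL normal estimation)."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    normals = np.empty_like(points)
    for i, neigh in enumerate(idx):
        patch = points[neigh] - points[neigh].mean(axis=0)
        _, _, vt = np.linalg.svd(patch, full_matrices=False)
        normals[i] = vt[-1]  # direction of least variance
    return normals

points = np.random.rand(500, 3)           # placeholder point cloud
normals = estimate_normals(points)
# Group points whose normals are almost the same; label -1 marks noise.
labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(normals)
```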
S420, establishing a VR scene, receiving the video data, embedding each frame of the video data into the position of the point cloud data corresponding to the video data, and establishing the VR scene.
In the specific implementation process, each frame of the video data is inserted at the position of the point cloud data acquired at the same time, according to the acquisition time of the frame.
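This per-frame embedding can be sketched as timestamp matching (the tuple layout and names are assumptions for illustration, not specified by the patent):

```python
from bisect import bisect_left

def embed_frames(video_frames, cloud_frames):
    """Pair each video frame with the point cloud frame whose acquisition
    timestamp is closest. Both arguments are lists of
    (timestamp_seconds, payload) tuples sorted by timestamp."""
    cloud_times = [t for t, _ in cloud_frames]
    scene = []
    for t, frame in video_frames:
        i = bisect_left(cloud_times, t)
        nearby = [j for j in (i - 1, i) if 0 <= j < len(cloud_frames)]
        j = min(nearby, key=lambda j: abs(cloud_times[j] - t))
        scene.append((cloud_frames[j][1], frame))  # (cloud, video frame)
    return scene
```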
By adopting the scheme, in the actual sampling process, because of the particularity of corner positions in the space to be collected, a point is often collected multiple times; if the points are not classified, the object where a point is located, such as a wall, cannot be distinguished, and data confusion easily occurs, which this scheme solves. In addition, each frame of the video data is embedded at the position of its corresponding point cloud data and the scene is built frame by frame, which improves the realism of the VR scene and the user's sense of immersion.
As shown in fig. 8, in a specific implementation process, the VR scene construction method further includes S500, external implant, where S500, external implant, includes the steps of:
receiving an implant video of the implant, wherein the implant video is video data acquired by the implant in the scene to be acquired;
receiving an implant position of an implant in a VR scene, and embedding the implant video in the implant position.
By adopting the scheme, embedding the implant video at the implantation position improves the authenticity of the implant.
In a specific implementation, the step of embedding the implant video into the implantation location comprises:
dividing the implant video into frames, matching the implant video with video data of an implant position frame by frame, and fusing the implant video into the video data.
In a specific implementation process, S500, external implant, includes the steps of:
receiving point cloud information of the implant;
embedding the point cloud information into the implantation location;
and fusing the implant video with the implant position and the point cloud information of the implant.
By adopting the scheme, the implant video and the point cloud information of the implant are fused, the point cloud data has physical characteristics, and the scene interactivity is improved.
In a specific implementation process, the manner of integrating the implant video into the video data may be to perform matting processing on the video data in comparison with the implant video.
By adopting the scheme, matting processing is simple and quick, and is more efficient when the implant is small or has little influence on the overall video information.
In a preferred embodiment of the present invention, the means for integrating the implant video into the video data comprises the steps of:
screening out frames of the implant in the implant video;
extracting frames in the video data corresponding to frames in the implant video where the implant exists;
and replacing the corresponding frame in the video data with the corresponding frame in which the implant exists.
By adopting the scheme, the corresponding frame in the video data is extracted and directly replaced, so that the processing speed is increased and the image distortion is avoided.
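A sketch of this direct-replacement variant (the implant-detection predicate is caller-supplied and hypothetical; the patent does not specify how frames containing the implant are screened out):

```python
def replace_implant_frames(video_frames, implant_frames, contains_implant):
    """For each implant-video frame in which the implant appears, replace
    the scene-video frame with the same index outright, instead of
    matting. Assumes the two videos are frame-aligned."""
    fused = list(video_frames)
    for i, implant_frame in enumerate(implant_frames):
        if contains_implant(implant_frame):   # hypothetical detector
            fused[i] = implant_frame
    return fused
```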
In a specific implementation, the implant is an object added to the VR scene.
As shown in fig. 9, a second aspect of the present invention provides a VR scene construction system, including:
a data receiving module 100, configured to receive live-action data, where the live-action data is data acquired in a scene to be acquired, and the live-action data is used to establish the VR scene;
the data screening module 200, the live-action data further includes an acquisition path, configured to output an invalid path on the acquisition path, and delete the invalid path from the acquisition path to obtain an effective path;
the data processing module 300 is configured to access the video data and the point cloud data collected on the effective path, and to process the video data and the point cloud data respectively so that their frame counts correspond;
and a scene construction module 400, configured to create a simulated scene according to the point cloud data, embed the video data in the simulated scene, and construct a VR scene.
By adopting the scheme, on the one hand, on the premise of ensuring user immersion, large-scale three-dimensional rendering is avoided, the amount of calculation is reduced, the operation speed is improved, and the problem of poor user experience caused by stuttering when actually using the VR scene is solved; on the other hand, the live-action data comprises video data and point cloud data, a VR scene constructed from video data is more immersive than a single photo, and combining the point cloud data enhances the physical effect and the realism of the VR scene.
As shown in fig. 10, in a specific implementation process, the acquisition path includes acquisition sub-paths, and the data screening module 200 further includes an invalid path screening module 210 configured to output the invalid paths on the acquisition path, the screening comprising:
judging whether the number of the acquisition sub-paths in the acquisition path is more than 1;
if not, outputting the invalid path according to a first scheme;
if yes, judging whether intersection points exist among the acquisition sub-paths;
if not, outputting the invalid path according to a first scheme;
and if so, outputting the invalid path according to a second scheme.
By adopting the above scheme: the acquisition equipment is usually mounted on a carrier, such as a vehicle or an unmanned aerial vehicle. When the carrier starts or stops, it needs a certain time and distance to enter or leave the steady state, and data acquired while entering or leaving the steady state usually carries large errors because the carrier is not yet stable, so such data has no reference value; deleting this part of the data further guarantees the fidelity of the final VR scene.
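The branching logic above can be sketched as follows; find_intersections is an assumed helper, and concrete sketches of first_scheme and second_scheme follow below:

```python
# Hypothetical sketch of the invalid path screening decision in module 210.
def screen_invalid_paths(sub_paths, r1, r2):
    """Return, per sub-path, a mask of samples to delete as invalid."""
    if len(sub_paths) > 1:
        crossings = find_intersections(sub_paths)    # assumed helper
        if crossings:                                # turning stubs near crossings
            return [second_scheme(p, crossings, r2) for p in sub_paths]
    return [first_scheme(p, r1) for p in sub_paths]  # trim unstable start/stop data
```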
In a specific implementation process, the invalid path screening module 210 further includes a first scheme calculating module 211, where the step of calculating the first scheme includes:
receiving the acquisition sub-path and outputting the position of the end point of the acquisition sub-path;
receiving a first invalid threshold, and drawing a circle centered on the endpoint of the acquisition sub-path with the first invalid threshold as the radius;
outputting the intersection point of the acquisition sub-path and a circle with the first invalid threshold as the radius as a first intersection point;
and outputting the path between the first intersection point and the circle center on the acquisition sub-path as an invalid path.
By adopting the above scheme, a first invalid threshold is set so that data collected while the carrier is within the first invalid threshold distance is treated as unstable data without reference value; identifying this type of data through the first invalid threshold improves the identification precision.
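A minimal sketch of the first scheme, assuming each sub-path is an (N, 2) array of sampled positions and that the threshold applies at both endpoints, where the carrier starts and stops:

```python
import numpy as np

# Hypothetical sketch: samples between a sub-path endpoint and the first
# crossing of the circle of radius r1 (the first invalid threshold) are
# marked invalid.
def first_scheme(path, r1):
    pts = np.asarray(path, dtype=float)
    invalid = np.zeros(len(pts), dtype=bool)
    # Walk forward from the start until the path first leaves the circle.
    d_start = np.linalg.norm(pts - pts[0], axis=1)
    exit_fwd = int(np.argmax(d_start >= r1)) if (d_start >= r1).any() else len(pts)
    invalid[:exit_fwd] = True
    # Walk backward from the end in the same way.
    d_end = np.linalg.norm(pts - pts[-1], axis=1)[::-1]
    exit_bwd = int(np.argmax(d_end >= r1)) if (d_end >= r1).any() else len(pts)
    invalid[len(pts) - exit_bwd:] = True
    return invalid
```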
In a specific implementation process, the invalid path screening module 210 further includes a second scheme calculating module 212, where the step of calculating the second scheme includes:
receiving the intersection point and the position of the end point of the acquisition sub-path;
receiving a second invalid threshold, and drawing a circle centered on the intersection point with the second invalid threshold as the radius;
and judging whether the end point of the acquisition sub-path is in a circle with a second invalid threshold value as a radius, if so, outputting the path between the intersection point and the end point on the acquisition sub-path as an invalid path.
By adopting this scheme, an intersection between acquisition sub-paths usually means the carrier is turning, and data acquired on a turning stub off the main path has no reference value; setting the second invalid threshold deletes such invalid paths outside the main path, reducing the amount of calculation in later data classification.
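A matching sketch of the second scheme, under the same assumed path representation:

```python
import numpy as np

# Hypothetical sketch: if a sub-path endpoint lies inside the circle of radius
# r2 (the second invalid threshold) around an intersection point, the stub
# between that intersection and the endpoint is a turn off the main path and
# is marked invalid.
def second_scheme(path, intersections, r2):
    pts = np.asarray(path, dtype=float)
    invalid = np.zeros(len(pts), dtype=bool)
    for center in (np.asarray(c, dtype=float) for c in intersections):
        i_cross = int(np.argmin(np.linalg.norm(pts - center, axis=1)))
        if np.linalg.norm(pts[-1] - center) < r2:   # trailing stub is a turn
            invalid[i_cross:] = True
        if np.linalg.norm(pts[0] - center) < r2:    # leading stub is a turn
            invalid[:i_cross + 1] = True
    return invalid
```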
In a specific implementation process, the data processing module 300 further includes:
a data access module 310, configured to receive the effective path, and access the video data and the point cloud data collected on the effective path;
the data analysis module 320 is configured to analyze the video data and the point cloud data to obtain the number of frames collected per second of the video data and the point cloud data respectively;
and the data matching module 330 is configured to receive an acquisition time sequence of the video data and the point cloud data, and match the video data and the point cloud data frame by frame according to the acquisition time sequence.
By adopting this scheme, the number of frames collected per second is analyzed for both the video data and the point cloud data, and the two are matched frame by frame according to their acquisition time sequence; this improves the fineness of data matching and avoids the poor fidelity of the final VR scene that rough data processing would cause.
In a specific implementation process, the data matching module 330 further includes:
judging whether the number of frames per second of the video data is greater than the number of frames per second of the point cloud data;
and if so, equally dividing the point cloud data in equal proportion to ensure that the frame number per second of the video data is equal to the frame number per second of the point cloud data.
By adopting this scheme, since the frames per second of the video data is fixed at acquisition time, it is the point cloud data that is divided in equal proportion, so the video frame rate never needs to be reduced; this further ensures the smoothness of the VR scene in use and improves the user experience.
In a specific implementation process, the data matching module 330 further includes:
if the number of frames per second of the video data is equal to the number of frames per second of the point cloud data, matching the video data and the point cloud data frame by frame according to the acquisition time sequence;
and if the frame number per second of the video data is less than that of the point cloud data, merging the point cloud data in an equal proportion to ensure that the frame number per second of the video data is equal to that of the point cloud data.
By adopting this scheme, the frames of the video data are never altered during data matching, ensuring the highest fidelity and improving the realism of the VR scene simulation.
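The frame-rate alignment described in these two passages can be sketched as one resampling step; the integer fps ratio and the list-of-arrays frame representation are assumptions:

```python
import numpy as np

# Hypothetical sketch: point cloud frames are duplicated (equal-proportion
# division) when the video rate is higher, or concatenated (equal-proportion
# merging) when it is lower; the video frames are never altered.
def align_cloud_to_video(video_fps, cloud_fps, cloud_frames):
    if video_fps == cloud_fps:
        return cloud_frames
    if video_fps > cloud_fps:
        factor = video_fps // cloud_fps       # assumed integer ratio
        return [f for f in cloud_frames for _ in range(factor)]
    factor = cloud_fps // video_fps           # assumed integer ratio
    return [np.concatenate(cloud_frames[i:i + factor])
            for i in range(0, len(cloud_frames), factor)]
```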
In a specific implementation process, the scene construction module 400 further includes:
a point cloud simulation scene establishing module 410, configured to receive point cloud data, represent the point cloud data in a coordinate manner, and establish the point cloud simulation scene;
and the VR scene establishing module 420 is configured to receive the video data, embed each frame of the video data into a position of the point cloud data corresponding to the frame of the video data, and establish the VR scene.
By adopting this scheme: in actual sampling, corner positions of the space to be collected are often collected multiple times due to their particularity; without classifying the points, objects such as the walls they belong to cannot be distinguished and data confusion easily occurs, a problem this scheme solves. Each frame of the video data is embedded at the position of its corresponding point cloud data, and the scene is built frame by frame, improving the realism of the VR scene and the user's sense of immersion.
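As a closing sketch (class and method names are hypothetical), modules 410 and 420 can be read as building the scene frame by frame:

```python
# Hypothetical sketch of modules 410/420: the point cloud frames carry the
# coordinates of the simulated scene; each video frame is then embedded at the
# position of its corresponding point cloud frame.
class SimulatedScene:
    def __init__(self):
        self.frames = []                      # (cloud_frame, video_frame) pairs

    def embed(self, cloud_frame, video_frame):
        self.frames.append((cloud_frame, video_frame))

def construct_vr_scene(cloud_frames, video_frames):
    scene = SimulatedScene()                  # module 410: coordinate scene
    for cloud, video in zip(cloud_frames, video_frames):
        scene.embed(cloud, video)             # module 420: frame-by-frame embed
    return scene
```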
As shown in fig. 11 and 12, in a specific implementation process, the VR scene construction system further includes an external implant module 500, where the external implant module 500 includes:
receiving an implant video of the implant, wherein the implant video is video data acquired of the implant in the scene to be acquired;
receiving an implant position of an implant in a VR scene, and embedding the implant video in the implant position.
By adopting the above scheme, embedding the implant video at the implant position improves the realism of the implant.
In a specific implementation, the step of embedding the implant video into the implantation location comprises:
dividing the implant video into frames, matching the implant video with the video data of the implant position frame by frame, and fusing the implant video into the video data.
In a preferred embodiment of the present invention, the external implant module 500 further comprises:
receiving point cloud information of the implant;
embedding the point cloud information into the implantation location;
and fusing the implant video with the implant position and the point cloud information of the implant.
By adopting this scheme, the implant video is fused with the implant's point cloud information; because point cloud data has physical characteristics, scene interactivity is improved.
In a specific implementation process, the manner of integrating the implant video into the video data may be to perform matting processing on the video data in comparison with the implant video.
By adopting this scheme, the matting processing is simple and quick; when the external object is small, or its influence on the overall video information is minor, efficiency is high.
In a preferred embodiment of the present invention, the means for integrating the implant video into the video data comprises the steps of:
screening out frames of the implant in the implant video;
extracting frames in the video data corresponding to frames in the implant video where the implant exists;
replacing the corresponding frame in the video data with the corresponding frame in which the implant is present.
By adopting the scheme, the corresponding frame in the video data is extracted and directly replaced, so that the processing speed is improved, and the image distortion is avoided.
A third aspect of the invention provides a VR scene construction apparatus comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the above method when executing the program.
A fourth aspect of the invention provides a storage medium comprising one or more programs which are executable by a processor to perform the method described above.
It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present invention without departing from the principle of the present invention, and those improvements and modifications also fall within the protection scope of the claims of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
It should be understood that, in the embodiments of the present application, the technical problems described above can be solved by combining the features of the various embodiments with one another.
The functions may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (7)

1. A VR scene construction method is characterized by comprising the following steps:
S100, receiving data, namely receiving live-action data, wherein the live-action data comprises data acquired in a scene to be acquired, and the live-action data is used for establishing the VR scene;
the acquired data are collected through acquisition equipment, wherein the acquisition equipment comprises data collection equipment and positioning equipment, the positioning equipment is used for determining the position of the data collection equipment, and the data collection equipment is arranged on a carrying device;
the way to calculate the position of the data collection device in the coordinate system from the position of the positioning device is:
receiving the coordinates of the positioning equipment, converting the coordinates of the positioning equipment into quaternions, and converting the quaternions into rotation matrixes M 0
Receiving a conversion matrix of the coordinates of the positioning equipment and the coordinates of the data collection equipment, and setting the conversion matrix as T 01
Setting a rotation matrix corresponding to coordinates of the data collection equipment as M 1 ,M 1 =M 0 ·T 01
Will M 1 Converting the coordinates into coordinates which are coordinates of the data collection equipment;
S200, screening data, wherein the live-action data further comprises an acquisition path, outputting an invalid path on the acquisition path, and deleting the invalid path from the acquisition path to obtain an effective path;
the collecting path comprises a collecting sub-path, and the outputting of the invalid path on the collecting path comprises the following steps:
judging whether the number of the acquisition sub-paths in the acquisition path is more than 1;
if not, outputting the invalid path according to a first scheme;
if yes, judging whether intersection points exist among the acquisition sub-paths;
if not, outputting the invalid path according to a first scheme;
if yes, outputting the invalid path according to a second scheme;
S300, data processing, wherein the data collected in the scene to be collected comprise video data and point cloud data; the video data and the point cloud data collected on the effective path are accessed and respectively processed so that the frame numbers of the video data and the point cloud data correspond;
S400, scene construction, namely establishing a simulated scene according to the point cloud data, embedding the video data in the simulated scene, and constructing the VR scene;
in S200, the step of calculating the first scheme includes:
receiving the acquisition sub-path and outputting the position of the end point of the acquisition sub-path;
receiving a first invalid threshold, and drawing a circle centered on the endpoint of the acquisition sub-path with the first invalid threshold as the radius;
outputting the intersection point of the acquisition sub-path and a circle with the first invalid threshold as the radius as a first intersection point;
outputting the path between the first intersection point and the circle center on the acquisition sub-path as an invalid path;
the step of calculating the second solution comprises:
receiving the intersection point and the position of the end point of the acquisition sub-path;
receiving a second invalid threshold, and drawing a circle centered on the intersection point with the second invalid threshold as the radius;
and judging whether the end point of the acquisition sub-path is in a circle with a second invalid threshold value as a radius, if so, outputting the path between the intersection point and the end point on the acquisition sub-path as an invalid path.
2. The VR scene construction method of claim 1, wherein: the data processing further comprises the steps of:
the data access is used for receiving the effective path and accessing the video data and point cloud data collected on the effective path;
analyzing the video data and the point cloud data to respectively obtain the per second acquisition frame number of the video data and the point cloud data;
and data matching, namely receiving the acquisition time sequence of the video data and the point cloud data, and matching the video data and the point cloud data frame by frame according to the acquisition time sequence.
3. The VR scene construction method of claim 2, wherein: the data matching further comprises the steps of:
judging whether the number of collected frames per second of the video data is greater than that of collected frames per second of the point cloud data;
and if so, equally dividing the point cloud data in equal proportion to ensure that the number of collected frames per second of the video data is equal to that of the collected frames per second of the point cloud data.
4. The VR scene construction method of claim 3, wherein: the data matching further comprises the steps of:
if the number of frames of video data collected per second is equal to the number of frames of point cloud data collected per second, matching the video data and the point cloud data frame by frame according to the collection time sequence;
and if the number of the video data per second acquisition frames is less than that of the point cloud data per second acquisition frames, merging the point cloud data in an equal proportion to ensure that the number of the video data per second acquisition frames is equal to that of the point cloud data per second acquisition frames.
5. The VR scene construction method of claim 3 or 4, wherein: the scene construction step further comprises:
establishing a point cloud simulation scene, receiving point cloud data, representing the point cloud data in a coordinate mode, and establishing the point cloud simulation scene;
and establishing a VR scene, receiving the video data, embedding each frame of the video data into the position of the point cloud data corresponding to the video data, and establishing the VR scene.
6. The VR scene construction method of claim 5, wherein: the VR scene construction method further comprises an external implant, and the external implant comprises the following steps:
receiving an implant video of the implant, wherein the implant video is video data acquired of the implant in the scene to be acquired;
receiving an implant position of an implant in a VR scene, and embedding the implant video in the implant position.
7. A VR scene construction apparatus, characterized in that it comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the VR scene construction method of any one of claims 1-6 when executing the program.
CN202011030578.3A 2020-09-27 2020-09-27 VR scene construction method, system and device Active CN112235556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011030578.3A CN112235556B (en) 2020-09-27 2020-09-27 VR scene construction method, system and device


Publications (2)

Publication Number Publication Date
CN112235556A CN112235556A (en) 2021-01-15
CN112235556B true CN112235556B (en) 2022-10-14

Family

ID=74107204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011030578.3A Active CN112235556B (en) 2020-09-27 2020-09-27 VR scene construction method, system and device


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117894215A (en) * 2024-02-28 2024-04-16 国网浙江省电力有限公司杭州市富阳区供电公司 Electric power training warning education system based on VR simulation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008569A (en) * 2014-02-24 2014-08-27 惠州学院 3D scene generation method based on depth video
CN105825544A (en) * 2015-11-25 2016-08-03 维沃移动通信有限公司 Image processing method and mobile terminal
CN110322542A (en) * 2018-03-28 2019-10-11 苹果公司 Rebuild the view of real world 3D scene

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130215239A1 (en) * 2012-02-21 2013-08-22 Sen Wang 3d scene model from video
US20130215221A1 (en) * 2012-02-21 2013-08-22 Sen Wang Key video frame selection method
US9953400B2 (en) * 2013-07-23 2018-04-24 Microsoft Technology Licensing, Llc Adaptive path smoothing for video stabilization
US10002640B2 (en) * 2014-02-28 2018-06-19 Microsoft Technology Licensing, Llc Hyper-lapse video through time-lapse and stabilization
CA3028653C (en) * 2018-11-13 2021-02-16 Beijing Didi Infinity Technology And Development Co., Ltd. Methods and systems for color point cloud generation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant