CN115661417A - Virtual world scene generation method and system in meta-space - Google Patents

Virtual world scene generation method and system in meta-space

Info

Publication number
CN115661417A
CN115661417A (application CN202211592134.8A)
Authority
CN
China
Prior art keywords
scene
model
transition
pixel
edge
Prior art date
Legal status
Granted
Application number
CN202211592134.8A
Other languages
Chinese (zh)
Other versions
CN115661417B (en)
Inventor
李方悦 (Li Fangyue)
颜佳 (Yan Jia)
Current Assignee
Shenzhen Aoya Design Inc
Original Assignee
Shenzhen Aoya Design Inc
Priority date
Filing date
Publication date
Application filed by Shenzhen Aoya Design Inc filed Critical Shenzhen Aoya Design Inc
Priority to CN202211592134.8A priority Critical patent/CN115661417B/en
Publication of CN115661417A publication Critical patent/CN115661417A/en
Application granted granted Critical
Publication of CN115661417B publication Critical patent/CN115661417B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention belongs to the technical field of virtual reality and provides a method and a system for generating a virtual world scene in a metaverse space. Scene models are arranged in order according to their splicing sequence to obtain a scene sequence; the edge adaptation degree between each scene model and its adjacent scene models in the scene sequence is calculated in turn; the scene models that need edge transition are marked as transition models through the edge adaptation degree; edge transition is performed on each transition model in the scene sequence; and all three-dimensional models in the scene sequence are spliced in sequence to obtain the virtual world scene. The method exposes the subtle differences at scene seams in a virtual reality environment, improves the immersive experience of the user, and, through edge transition, stably eliminates the sudden exposure jumps and abrupt color changes that occur when the user approaches a seam in the metaverse virtual scene.

Description

Virtual world scene generation method and system in meta-space
Technical Field
The invention belongs to the technical field of virtual reality, and particularly relates to a method and a system for generating a virtual world scene in a meta-space.
Background
At present, the metaverse is a collective virtual shared space. Although the virtual world scenes in the metaverse space include images of the real world generated by digital twin technology as well as purely virtual imagery, the scenes of a large metaverse space come from different space servers and are spliced together. If the difference at the seam between two metaverse scenes is too large, the switch between virtual world scenes appears abrupt and the user loses the sense of real immersion. Many of the scenes on the space servers are developed independently, and differences in development environment and scene source inevitably lead to large differences between the scenes of different space servers: for example, the scene of one space server may be obtained by three-dimensional reconstruction while the scene of another is obtained by three-dimensional scanning or three-dimensional modeling. When the user passes through the joint of two such scenes, the overall brightness or color of a scene model may change abruptly, the scenes on either side of the seam look visibly uncoordinated at close range in virtual reality, and the immersive experience of the user is lost because the matching relationship between the scenes is weak.
Disclosure of Invention
The invention aims to provide a method and a system for generating a virtual world scene in a meta-space, to solve one or more technical problems in the prior art and at least to provide a beneficial alternative or improvement.
To achieve the above object, according to an aspect of the present invention, there is provided a method for generating a virtual world scene in a metaverse space, the method comprising the following steps:
s100, acquiring three-dimensional models of different scenes as scene models, and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
s200, sequentially calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence;
s300, marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
s400, performing edge transition on each transition model in the scene sequence;
and S500, sequentially splicing all the three-dimensional models in the scene sequence to obtain the virtual world scene.
Preferably, the virtual world scene is output to a virtual reality headset for display.
Further, in S100, the three-dimensional models of the different scenes are: taking a three-dimensional model of a scene obtained by three-dimensional modeling or three-dimensional scanning as a scene model, or taking a three-dimensional model obtained by photographing the scene and performing three-dimensional reconstruction as the scene model; the scene is a building, tree, vehicle and/or geographical environment of a preset area.
Wherein the three-dimensional models of different scenes originate from different metaverse servers.
Preferably, the hardware of the metaverse server is a heterogeneous server. The software of the metaverse server comprises Simulation SDKs for structure, perception and control simulation, and SDKs for rendering, real-time ray tracing and AI noise reduction; different SDKs are combined in a modular way through the Kit function of the SDKs to rapidly develop customized apps or micro-services; Create is used for modeling and rendering and View for visualization. The software of the metaverse server also comprises the database and collaboration engine NUCLEUS, and the modeling tool interconnection plug-in CONNECT for supporting interconnection with software such as 3ds Max, UE and MAYA installed on each client connected to the metaverse server.
Further, in S100, the splicing sequence of the scene models is the order in which the scene models are spliced to one another to form the virtual scene, starting from the first acquired scene model.
Preferably, in S100, the splicing sequence of the scene models is the chronological order in which the scene models were acquired.
Further, in S200, the method for calculating the edge adaptation degree between the scene model and the adjacent scene model includes the following steps:
recording the number of scene models in the scene sequence as N; taking i as the serial number of a scene model in the scene sequence, with i ∈ [2, N-1]; taking L(i) as the splicing edge line between the i-th scene model and the (i-1)-th scene model, and R(i) as the splicing edge line between the i-th scene model and the (i+1)-th scene model; the splicing edge line is the common edge line where the two scene models are to be merged (i.e. the edge line at the position where the two scene models are to be joined);
in the value range of i, sequentially calculating the edge adaptation degree Suit (i) between the ith scene model and the adjacent scene model as follows:
[Formula for Suit(i): disclosed in the original only as an image; per the definitions below it combines the natural logarithm, the pixel brightness difference Mart(i, j) and the MeanG values of the splicing edge lines over the index j.]
where j is a variable and the MeanG function is the mean of the gray values of all pixel points on a splicing edge line; ln is the natural logarithm; Mart(i, j) is the pixel brightness difference between the i-th scene model and the j-th scene model in the scene sequence, calculated as: Mart(i, j) = |MaxG(L(i)) - MaxG(L(j))| - |MaxG(L(i)) - MaxG(R(j))|;
the MaxG function is the maximum of the gray values of all pixel points on a splicing edge line; L(j) is the splicing edge line between the j-th scene model and the (j-1)-th scene model; R(j) is the splicing edge line between the j-th scene model and the (j+1)-th scene model.
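As an illustration, the sketch below (Python, not part of the patent) implements the disclosed helper quantities MeanG, MaxG and Mart(i, j). Since the exact Suit(i) formula is published only as an image, the aggregation in `suit` is an assumption that merely shows how the natural logarithm, Mart(i, j) and the MeanG normalisation could be combined.

```python
import numpy as np

def mean_g(edge_line):
    """MeanG: mean gray value of all pixel points on a splicing edge line."""
    return float(np.mean(edge_line))

def max_g(edge_line):
    """MaxG: maximum gray value of all pixel points on a splicing edge line."""
    return float(np.max(edge_line))

def mart(L, R, i, j):
    """Pixel brightness difference Mart(i, j) between scene models i and j.

    L[i] / R[i] hold the gray values sampled on the splicing edge lines
    between model i and its left / right neighbour, as defined in the text.
    """
    return abs(max_g(L[i]) - max_g(L[j])) - abs(max_g(L[i]) - max_g(R[j]))

def suit(L, R, i, N):
    """Edge adaptation degree Suit(i).

    The patent discloses the exact formula only as an image; this
    log-normalised sum of Mart over the other interior models is an
    ASSUMPTION used purely to illustrate how the disclosed quantities
    could be combined.
    """
    norm = mean_g(L[i]) + mean_g(R[i])
    total = 0.0
    for j in range(2, N):          # interior models j in [2, N-1], j != i
        if j == i:
            continue
        total += np.log(1.0 + abs(mart(L, R, i, j)) / max(norm, 1e-9))
    return total
```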
Beneficial effects: the edge adaptation degree calculated from the pixel brightness difference highlights how well the brightness of the edge between a scene model and its adjacent scene models matches, exposes the subtle differences at scene seams in a virtual reality environment, and distinguishes the difference characteristics of the current scene model from the surrounding scene models according to the gray-level trend along their common edge lines.
Further, in S300, the method for marking the scene model requiring edge transition in each scene model as the transition model through the edge adaptation degree includes: taking the mean value of the edge adaptation degrees between all the scene models and the adjacent scene models as Suitmean; and in the value range of i, when the edge adaptation degree Suit (i) < Suitmean between the ith scene model and the adjacent scene model, judging that the scene model needs edge transition and marking the scene model as a transition model.
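For illustration, a minimal sketch of this marking rule (Python; the function name is an assumption), assuming Suit(i) has already been computed for the interior scene models:

```python
def mark_transition_models(suit_values):
    """Mark scene models whose edge adaptation degree falls below the mean.

    suit_values: dict mapping interior model index i -> Suit(i).
    Returns the indices of models judged to need edge transition.
    """
    suit_mean = sum(suit_values.values()) / len(suit_values)
    return [i for i, s in suit_values.items() if s < suit_mean]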
Further, in S400, the method for performing edge transition on each transition model in the scene sequence includes:
recording the number of each transition model in the scene sequence as N2, taking k as the serial number of the transition model in the scene sequence, taking k belonging to [2, N2-1], and obtaining the serial number of the corresponding scene model of the kth transition model in the scene sequence as k (i);
the method for screening the region to be transited of the transition model comprises the following specific steps:
taking L(k(i)) as the splicing edge line between the k-th transition model and the (k(i)-1)-th scene model, and R(k(i)) as the splicing edge line between the k-th transition model and the (k(i)+1)-th scene model in the scene sequence; performing corner detection on L(k(i)) and R(k(i)) respectively to obtain corners, taking the corner with the maximum gray value on L(k(i)) as MAX_Lk and the corner with the maximum gray value on R(k(i)) as MAX_Rk; taking the projection point of the midpoint between MAX_Lk and MAX_Rk onto the k-th transition model as the depth point PJ(k), or taking the pixel point on the k-th transition model with the shortest distance to that midpoint as the depth point PJ(k); taking the circumscribed circle of the triangle formed by connecting the three points MAX_Lk, MAX_Rk and PJ(k) as CYC1; taking the region of the k-th transition model within the range of CYC1 and recording it as the region to be transitioned Trend(k) of the k-th transition model;
performing edge transition on the regions to be transitioned of each transition model, specifically:
marking CYC1(p) as the p-th pixel point in the region to be transitioned, with p as the serial number of the pixel point in the region to be transitioned; traversing all CYC1(p) within the range of p and performing edge transition as follows: calculate the absolute value A(p) of the difference between the gray value of pixel CYC1(p) and the gray value of MAX_Lk, and the absolute value B(p) of the difference between the gray value of pixel CYC1(p) and the gray value of MAX_Rk; if A(p) > B(p), reduce the pixel value of CYC1(p) by |MaxP(L(k(i))) - MaxP(R(k(i)))|, otherwise increase the pixel value of CYC1(p) by |MinP(L(k(i))) - MinP(R(k(i)))|;
the MaxP function is the maximum value of pixel values of all pixel points on the splicing edge line; the MinP function is the minimum value of the pixel values of all the pixel points on the splicing edge line.
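The following sketch illustrates the screening of Trend(k) and this first pixel-adjustment rule. It treats the seam neighbourhood of the transition model as a 2-D gray image and assumes the corner points MAX_Lk, MAX_Rk and the depth point PJ(k) have already been obtained by corner detection; the array layout and helper names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def circumcircle(a, b, c):
    """Circumscribed circle (centre, radius) of triangle a-b-c (2-D points)."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), float(np.hypot(ax - ux, ay - uy))

def edge_transition(gray, max_lk, max_rk, pj, edge_L, edge_R):
    """First edge-transition rule over the region to be transitioned Trend(k).

    gray            : 2-D array of pixel/gray values around the seam
                      (treating the model surface as an image is an assumption).
    max_lk, max_rk, pj : (x, y) coordinates of MAX_Lk, MAX_Rk and PJ(k).
    edge_L, edge_R  : gray values sampled along L(k(i)) and R(k(i)).
    """
    (cx, cy), r = circumcircle(max_lk, max_rk, pj)      # CYC1
    delta_max = abs(edge_L.max() - edge_R.max())        # |MaxP(L) - MaxP(R)|
    delta_min = abs(edge_L.min() - edge_R.min())        # |MinP(L) - MinP(R)|
    g_l = gray[max_lk[1], max_lk[0]]                    # gray value of MAX_Lk
    g_r = gray[max_rk[1], max_rk[0]]                    # gray value of MAX_Rk
    out = gray.astype(np.float64).copy()
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2  # Trend(k): pixels inside CYC1
    a = np.abs(out - g_l)                               # A(p)
    b = np.abs(out - g_r)                               # B(p)
    out[inside & (a > b)] -= delta_max                  # A(p) > B(p): decrease
    out[inside & ~(a > b)] += delta_min                 # otherwise: increase
    return np.clip(out, 0, 255)
```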
By performing edge transition on each transition model in the scene sequence, the problem of excessive differences at the seams of the metaverse scenes is greatly reduced and the immersive experience of the user is improved; through this pixel-level fine adjustment, the edge transition stably eliminates the sudden exposure jumps and abrupt color changes that occur when the user approaches a seam in the metaverse virtual scene. However, if the range of exposure and color mutation at the seam inside the region to be transitioned Trend(k) is too large, smaller exposure and color jumps may still appear when the user approaches the seam; to eliminate this problem, the invention proposes the following preferred scheme for performing edge transition on the region to be transitioned of each transition model:
Preferably, as an alternative, the method for performing edge transition on the region to be transitioned of each transition model is as follows:
sequentially screening the region to be transitioned Trend(k-1) of the (k-1)-th transition model and the region to be transitioned Trend(k+1) of the (k+1)-th transition model;
marking CYC1(p) as the p-th pixel point in the region to be transitioned, with p as the serial number of the pixel point in the region to be transitioned; traversing all CYC1(p) within the range of p and performing edge transition as follows: if the distance between the pixel point CYC1(p) and MAX_Lk is smaller than the distance between CYC1(p) and MAX_Rk, mark MAX_Lk as the reference repair line LR, otherwise mark MAX_Rk as the reference repair line LR; calculate the absolute value C(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k-1), and the absolute value D(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k+1); if C(p) > D(p), reduce the pixel value of CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k+1))|, otherwise increase the pixel value of CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k-1))|;
the MeanP function is the average of the pixel values of all pixel points on a splicing edge line; the MeanL function is the average of the pixel values of all pixel points in a region to be transitioned. The pixel values above may be replaced with gray values.
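A sketch of this alternative rule, under the same image-based assumptions as above; in particular, interpreting the reference repair line LR as the splicing edge line on the side of the nearer corner point is an assumption made for illustration (the text labels the corner point itself as LR).

```python
import numpy as np

def edge_transition_alt(gray, inside, max_lk, max_rk,
                        edge_L, edge_R, trend_prev, trend_next):
    """Alternative edge-transition rule for the region to be transitioned.

    inside                  : boolean mask of Trend(k) pixels in `gray`.
    trend_prev / trend_next : gray values of Trend(k-1) / Trend(k+1).
    """
    out = gray.astype(np.float64).copy()
    ys, xs = np.nonzero(inside)                          # pixels CYC1(p)
    dec = abs(edge_L.mean() - trend_next.mean())         # |MeanP(L(k(i))) - MeanL(Trend(k+1))|
    inc = abs(edge_L.mean() - trend_prev.mean())         # |MeanP(L(k(i))) - MeanL(Trend(k-1))|
    for y, x in zip(ys, xs):
        # pick the reference repair line from the nearer corner point (assumption)
        d_l = np.hypot(x - max_lk[0], y - max_lk[1])
        d_r = np.hypot(x - max_rk[0], y - max_rk[1])
        lr = edge_L if d_l < d_r else edge_R
        c = abs(lr.max() - trend_prev.max())             # C(p)
        d = abs(lr.max() - trend_next.max())             # D(p)
        out[y, x] += -dec if c > d else inc
    return np.clip(out, 0, 255)
```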
Beneficial effects: the regions to be transitioned intelligently single out the salient areas where sudden exposure jumps and abrupt color changes occur when the user approaches a seam in the virtual scene of the current scene sequence, so that the regions to be transitioned are strongly correlated with the abnormal seam areas of the virtual scene; the residual small exposure and color jumps that remain when the exposure and color mutation at the seam inside the region to be transitioned covers too large a range are eliminated; at the same time, the impact on the user's immersion of afterimage artifacts caused by missing parts of the three-dimensional scene is greatly reduced, improving the visual immersion of seamless multi-scene switching.
Further, in S500, the method for sequentially stitching all three-dimensional models in the scene sequence to obtain the virtual world scene includes: and sequentially splicing all the three-dimensional models in the scene sequence by any one of an APAP method, an SPHP method or a PT method to obtain the virtual world scene.
Preferably, in S500, the method for sequentially stitching all three-dimensional models in the scene sequence to obtain the virtual world scene is: all three-dimensional models in the scene sequence are sequentially spliced to obtain the virtual world scene by any one of: the adaptive marker-free three-dimensional point cloud automatic splicing method of patent publication CN104392426B, the automatic registration algorithm for geometric data and texture data of three-dimensional models of CN103049896B, the three-dimensional image splicing method, apparatus, device and readable storage medium of CN109598677A, or the high-precision three-dimensional point cloud map automatic splicing and optimizing method and system of CN114283250A.
The invention also provides a system for generating a virtual world scene in a meta-space, comprising a processor, a memory and a computer program stored in the memory and runnable on the processor; the processor executes the computer program to realize the steps of the above method for generating a virtual world scene in the metaverse space. The system can run on computing devices such as desktop computers, notebook computers, palm computers and cloud data centers; the runnable system may include, but is not limited to, a processor, a memory and a server cluster, and the processor executes the computer program to run in the units of the following system:
the scene model acquisition unit is used for acquiring three-dimensional models of different scenes as scene models and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
the edge adaptation calculation unit is used for sequentially calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence;
the transition model marking unit is used for marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
the edge transition processing unit is used for carrying out edge transition on each transition model in the scene sequence;
and the virtual scene splicing unit is used for sequentially splicing all the three-dimensional models in the scene sequence to obtain a virtual world scene.
And the virtual scene display unit is used for outputting the virtual world scene to the virtual reality headset for display.
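As a rough sketch of how these units could be wired into one pipeline (Python; the class name, callables and signatures are illustrative assumptions, not the patented interface):

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class MetaverseSceneGenerator:
    """Rough sketch wiring the described units into one pipeline."""
    compute_suit: Callable[[Sequence], List[float]]   # edge adaptation calculation unit (S200)
    apply_transition: Callable[[object], None]        # edge transition processing unit (S400)
    stitch: Callable[[Sequence], object]              # virtual scene splicing unit (S500)

    def generate(self, scene_models: Sequence) -> object:
        suits = self.compute_suit(scene_models)        # one Suit value per model (simplification)
        mean = sum(suits) / len(suits)
        for model, s in zip(scene_models, suits):      # transition model marking unit (S300)
            if s < mean:
                self.apply_transition(model)           # edge transition (S400)
        return self.stitch(scene_models)               # stitched virtual world scene (S500)
```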
The beneficial effects of the invention are as follows: the invention provides a method and a system for generating a virtual world scene in a metaverse space; the edge adaptation degree calculated from the pixel brightness difference highlights how well the brightness of the edge between a scene model and its adjacent scene models matches and exposes the subtle differences at scene seams in a virtual reality environment; the problem of excessive differences at the seams of the metaverse scenes is greatly reduced, the immersive experience of the user is improved, and the edge transition stably eliminates the sudden exposure jumps and abrupt color changes that occur when the user approaches a seam in the metaverse virtual scene.
Drawings
The above and other features of the present invention will become more apparent by describing in detail embodiments thereof with reference to the attached drawings in which like reference numerals designate the same or similar elements, it being apparent that the drawings in the following description are merely exemplary of the present invention and other drawings can be obtained by those skilled in the art without inventive effort, wherein:
FIG. 1 is a flow chart of a method for generating a virtual world scene in a metaspace space;
FIG. 2 is a structural diagram of the system for generating a virtual world scene in a meta-space.
Detailed Description
The conception, the specific structure and the technical effects of the present invention will be clearly and completely described in conjunction with the embodiments and the accompanying drawings to fully understand the objects, the schemes and the effects of the present invention. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Fig. 1 is a flow chart of a method for generating a virtual world scene in a metaspace space, and the following describes a method for generating a virtual world scene in a metaspace space according to an embodiment of the present invention with reference to fig. 1, the method including the following steps:
s100, acquiring three-dimensional models of different scenes as scene models, and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
s200, sequentially calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence;
s300, marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
s400, performing edge transition on each transition model in the scene sequence;
and S500, sequentially splicing all the three-dimensional models in the scene sequence to obtain the virtual world scene.
Preferably, the virtual world scene is output to a virtual reality headset for display.
Further, in S100, the three-dimensional models of the different scenes are: taking a three-dimensional model of a scene obtained by three-dimensional modeling or three-dimensional scanning as a scene model, or taking a three-dimensional model obtained by photographing the scene and performing three-dimensional reconstruction as the scene model; the scene is a building, a tree, a vehicle and/or a geographical environment of a preset area.
Wherein the three-dimensional models of different scenes originate from different metaverse servers.
Preferably, the hardware of the metaverse server is a heterogeneous server. The software of the metaverse server comprises Simulation SDKs for structure, perception and control simulation, and SDKs for rendering, real-time ray tracing and AI noise reduction; different SDKs are combined in a modular way through the Kit function of the SDKs to rapidly develop customized apps or micro-services; Create is used for modeling and rendering and View for visualization. The software of the metaverse server also comprises the database and collaboration engine NUCLEUS, and the modeling tool interconnection plug-in CONNECT for supporting interconnection with software such as 3ds Max, UE and MAYA installed on each client connected to the metaverse server.
Further, in S100, the splicing sequence of the scene models is the order in which the scene models are spliced to one another to form the virtual scene, starting from the first acquired scene model.
Preferably, in S100, the splicing sequence of the scene models is the chronological order in which the scene models were acquired.
Further, in S200, the method for calculating the edge adaptation degree between the scene model and the adjacent scene model includes the following steps:
recording the number of scene models in the scene sequence as N; taking i as the serial number of a scene model in the scene sequence, with i ∈ [2, N-1]; taking L(i) as the splicing edge line between the i-th scene model and the (i-1)-th scene model, and R(i) as the splicing edge line between the i-th scene model and the (i+1)-th scene model; the splicing edge line is the common edge line where the two scene models are to be merged (i.e. the edge line at the position where the two scene models are to be joined);
in the value range of i, sequentially calculating the edge adaptation degree Suit (i) between the ith scene model and the adjacent scene model as follows:
[Formula for Suit(i): disclosed in the original only as an image; per the definitions below it combines the natural logarithm, the pixel brightness difference Mart(i, j) and the MeanG values of the splicing edge lines over the index j.]
where j is a variable and the MeanG function is the mean of the gray values of all pixel points on a splicing edge line; ln is the natural logarithm; Mart(i, j) is the pixel brightness difference between the i-th scene model and the j-th scene model in the scene sequence, calculated as: Mart(i, j) = |MaxG(L(i)) - MaxG(L(j))| - |MaxG(L(i)) - MaxG(R(j))|;
the MaxG function is the maximum of the gray values of all pixel points on a splicing edge line; L(j) is the splicing edge line between the j-th scene model and the (j-1)-th scene model; R(j) is the splicing edge line between the j-th scene model and the (j+1)-th scene model.
Beneficial effects: the edge adaptation degree calculated from the pixel brightness difference highlights how well the brightness of the edge between a scene model and its adjacent scene models matches, exposes the subtle differences at scene seams in a virtual reality environment, and distinguishes the difference characteristics of the current scene model from the surrounding scene models according to the gray-level trend along their common edge lines.
Further, in S300, the method for marking the scene model requiring edge transition in each scene model as the transition model through the edge adaptation degree includes: taking the mean value of the edge adaptation degrees between all the scene models and the adjacent scene models as Suitmean; within the value range of i, when the edge adaptation degree Suit (i) < Suitmean between the ith scene model and the adjacent scene model, judging that the scene model needs edge transition and marking the scene model as a transition model.
Further, in S400, the method for performing edge transition on each transition model in the scene sequence includes:
recording the number of each transition model in the scene sequence as N2, taking k as the serial number of the transition model in the scene sequence, taking k belonging to [2, N2-1], and obtaining the serial number of the corresponding scene model of the kth transition model in the scene sequence as k (i);
the method for screening the region to be transited of the transition model comprises the following specific steps:
taking L(k(i)) as the splicing edge line between the k-th transition model and the (k(i)-1)-th scene model, and R(k(i)) as the splicing edge line between the k-th transition model and the (k(i)+1)-th scene model in the scene sequence; performing corner detection on L(k(i)) and R(k(i)) respectively to obtain corners, taking the corner with the maximum gray value on L(k(i)) as MAX_Lk and the corner with the maximum gray value on R(k(i)) as MAX_Rk; taking the projection point of the midpoint between MAX_Lk and MAX_Rk onto the k-th transition model as the depth point PJ(k), or taking the pixel point on the k-th transition model with the shortest distance to that midpoint as the depth point PJ(k); taking the circumscribed circle of the triangle formed by connecting the three points MAX_Lk, MAX_Rk and PJ(k) as CYC1; taking the region of the k-th transition model within the range of CYC1 and recording it as the region to be transitioned Trend(k) of the k-th transition model;
performing edge transition on the regions to be transitioned of each transition model, specifically:
marking CYC1(p) as the p-th pixel point in the region to be transitioned, with p as the serial number of the pixel point in the region to be transitioned; traversing all CYC1(p) within the range of p and performing edge transition as follows: calculate the absolute value A(p) of the difference between the gray value of pixel CYC1(p) and the gray value of MAX_Lk, and the absolute value B(p) of the difference between the gray value of pixel CYC1(p) and the gray value of MAX_Rk; if A(p) > B(p), reduce the pixel value of CYC1(p) by |MaxP(L(k(i))) - MaxP(R(k(i)))|, otherwise increase the pixel value of CYC1(p) by |MinP(L(k(i))) - MinP(R(k(i)))|;
the MaxP function is the maximum value of pixel values of all pixel points on the splicing edge line; the MinP function is the minimum value of the pixel values of all the pixel points on the splicing edge line.
By performing edge transition on each transition model in the scene sequence, the problem of excessive differences at the seams of the metaverse scenes is greatly reduced and the immersive experience of the user is improved; through this pixel-level fine adjustment, the edge transition stably eliminates the sudden exposure jumps and abrupt color changes that occur when the user approaches a seam in the metaverse virtual scene. However, if the range of exposure and color mutation at the seam inside the region to be transitioned Trend(k) is too large, smaller exposure and color jumps may still appear when the user approaches the seam; to eliminate this problem, the invention proposes the following preferred scheme for performing edge transition on the region to be transitioned of each transition model:
Preferably, as an alternative, the method for performing edge transition on the region to be transitioned of each transition model is as follows:
sequentially screening the region to be transitioned Trend(k-1) of the (k-1)-th transition model and the region to be transitioned Trend(k+1) of the (k+1)-th transition model;
marking CYC1(p) as the p-th pixel point in the region to be transitioned, with p as the serial number of the pixel point in the region to be transitioned; traversing all CYC1(p) within the range of p and performing edge transition as follows: if the distance between the pixel point CYC1(p) and MAX_Lk is smaller than the distance between CYC1(p) and MAX_Rk, mark MAX_Lk as the reference repair line LR, otherwise mark MAX_Rk as the reference repair line LR; calculate the absolute value C(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k-1), and the absolute value D(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k+1); if C(p) > D(p), reduce the pixel value of CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k+1))|, otherwise increase the pixel value of CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k-1))|;
the MeanP function is the average of the pixel values of all pixel points on a splicing edge line; the MeanL function is the average of the pixel values of all pixel points in a region to be transitioned. The pixel values above may be replaced with gray values.
Beneficial effects: the regions to be transitioned intelligently single out the salient areas where sudden exposure jumps and abrupt color changes occur when the user approaches a seam in the virtual scene of the current scene sequence, so that the regions to be transitioned are strongly correlated with the abnormal seam areas of the virtual scene; the residual small exposure and color jumps that remain when the exposure and color mutation at the seam inside the region to be transitioned covers too large a range are eliminated; at the same time, the impact on the user's immersion of afterimage artifacts caused by missing parts of the three-dimensional scene is greatly reduced, improving the visual immersion of seamless multi-scene switching.
Further, in S500, the method for sequentially splicing all three-dimensional models in the scene sequence to obtain the virtual world scene includes: and sequentially splicing all the three-dimensional models in the scene sequence by any one of an APAP method, an SPHP method or a PT method to obtain the virtual world scene.
Preferably, in S500, the method for sequentially stitching all three-dimensional models in the scene sequence to obtain the virtual world scene is: all three-dimensional models in the scene sequence are sequentially spliced to obtain the virtual world scene by any one of: the adaptive marker-free three-dimensional point cloud automatic splicing method of patent publication CN104392426B, the automatic registration algorithm for geometric data and texture data of three-dimensional models of CN103049896B, the three-dimensional image splicing method, apparatus, device and readable storage medium of CN109598677A, or the high-precision three-dimensional point cloud map automatic splicing and optimizing method and system of CN114283250A.
An embodiment of the present invention provides a system for generating a virtual world scene in a metaverse space; FIG. 2 shows the structure diagram of this system. The system comprises a processor, a memory and a computer program stored in the memory and runnable on the processor; the processor executes the computer program to realize the steps of the above embodiment of the method for generating a virtual world scene in the metaverse space.
The system comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to run in the units of the following system:
the scene model acquisition unit is used for acquiring three-dimensional models of different scenes as scene models and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
the edge adaptation calculation unit is used for calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence in sequence;
the transition model marking unit is used for marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
the edge transition processing unit is used for carrying out edge transition on each transition model in the scene sequence;
and the virtual scene splicing unit is used for sequentially splicing all the three-dimensional models in the scene sequence to obtain a virtual world scene.
And the virtual scene display unit is used for outputting the virtual world scene to the virtual reality headset for display.
The method and system for generating a virtual world scene in a meta-space can run on computing devices such as desktop computers, notebook computers, palm computers and cloud servers. The runnable system may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that this example is only an example of the system for generating a virtual world scene in a meta-space and does not constitute a limitation of it; the system may include more or fewer components than shown, combine certain components, or use different components; for example, it may further include input-output devices, network access devices, buses, and the like.
The Processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. The general-purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the system running the method for generating a virtual world scene in the metaverse space, and uses various interfaces and lines to connect all parts of the whole system.
The memory can be used for storing the computer program and/or the modules, and the processor realizes the various functions of the system by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area may store data created according to the use of the device (such as audio data, a phone book, etc.). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Although the present invention has been described in considerable detail with reference to certain illustrated embodiments, it is not intended to be limited to any such details or embodiments or to any particular embodiment, and the description should be construed to effectively encompass the intended scope of the invention. Furthermore, the foregoing describes the invention in terms of embodiments foreseen by the inventor; insubstantial modifications of the invention not presently foreseen may nonetheless represent equivalents thereof.

Claims (7)

1. A method for generating a virtual world scene in a meta-space, the method comprising the steps of:
s100, acquiring three-dimensional models of different scenes as scene models, and sequentially arranging the scene models according to the splicing sequence of the scene models to obtain a scene sequence;
s200, sequentially calculating the edge adaptation degree between each scene model and the adjacent scene model in the scene sequence;
s300, marking scene models needing edge transition in each scene model as transition models through the edge adaptation degree;
s400, performing edge transition on each transition model in the scene sequence;
and S500, sequentially splicing all three-dimensional models in the scene sequence to obtain a virtual world scene.
2. The method for generating a virtual world scene in a meta-space according to claim 1, wherein in S100, the splicing sequence of the scene models is the order in which the scene models are spliced to one another to form the virtual scene, starting from the first acquired scene model.
3. The method for generating a virtual world scene in a meta-space according to claim 1, wherein in S200, the method for calculating the edge adaptation degree between the scene model and the adjacent scene model comprises the following steps:
recording the number of scene models in the scene sequence as N; taking i as the serial number of a scene model in the scene sequence, with i ∈ [2, N-1]; taking L(i) as the splicing edge line between the i-th scene model and the (i-1)-th scene model, and R(i) as the splicing edge line between the i-th scene model and the (i+1)-th scene model; the splicing edge line is the common edge line where the two scene models are to be merged;
in the value range of i, sequentially calculating the edge adaptation degree Suit (i) between the ith scene model and the adjacent scene model as follows:
[Formula for Suit(i): disclosed in the original only as an image; per the definitions below it combines the natural logarithm, the pixel brightness difference Mart(i, j) and the MeanG values of the splicing edge lines over the index j.]
where j is a variable and the MeanG function is the mean of the gray values of all pixel points on a splicing edge line; ln is the natural logarithm; Mart(i, j) is the pixel brightness difference between the i-th scene model and the j-th scene model in the scene sequence, calculated as: Mart(i, j) = |MaxG(L(i)) - MaxG(L(j))| - |MaxG(L(i)) - MaxG(R(j))|;
the MaxG function is the maximum of the gray values of all pixel points on a splicing edge line; L(j) is the splicing edge line between the j-th scene model and the (j-1)-th scene model; R(j) is the splicing edge line between the j-th scene model and the (j+1)-th scene model.
4. The method for generating virtual world scenes in the meta-space according to claim 3, wherein in S300, the method for marking the scene model needing edge transition in each scene model as the transition model through the edge adaptation degree comprises the following steps: taking the mean value of the edge adaptation degrees between all the scene models and the adjacent scene models as Suitmean; and in the value range of i, when the edge adaptation degree Suit (i) < Suitmean between the ith scene model and the adjacent scene model, judging that the scene model needs edge transition and marking the scene model as a transition model.
5. The method for generating a virtual world scene in a meta-space according to claim 3, wherein in S400, the method for performing edge transition on each transition model in the scene sequence is:
recording the number of each transition model in the scene sequence as N2, taking k as the serial number of the transition model in the scene sequence, making k belong to [2, N2-1], and obtaining the serial number of the corresponding scene model of the kth transition model in the scene sequence as k (i);
the method for screening the region to be transited of the transition model comprises the following specific steps:
taking L(k(i)) as the splicing edge line between the k-th transition model and the (k(i)-1)-th scene model, and R(k(i)) as the splicing edge line between the k-th transition model and the (k(i)+1)-th scene model in the scene sequence; performing corner detection on L(k(i)) and R(k(i)) respectively to obtain corners, taking the corner with the maximum gray value on L(k(i)) as MAX_Lk and the corner with the maximum gray value on R(k(i)) as MAX_Rk; taking the projection point of the midpoint between MAX_Lk and MAX_Rk onto the k-th transition model as the depth point PJ(k), or taking the pixel point on the k-th transition model with the shortest distance to that midpoint as the depth point PJ(k); taking the circumscribed circle of the triangle formed by connecting the three points MAX_Lk, MAX_Rk and PJ(k) as CYC1; taking the region of the k-th transition model within the range of CYC1 and recording it as the region to be transitioned Trend(k) of the k-th transition model;
performing edge transition on the regions to be transitioned of each transition model, specifically:
marking CYC1(p) as the p-th pixel point in the region to be transitioned, with p as the serial number of the pixel point in the region to be transitioned; traversing all CYC1(p) within the range of p and performing edge transition as follows: calculate the absolute value A(p) of the difference between the gray value of pixel CYC1(p) and the gray value of MAX_Lk, and the absolute value B(p) of the difference between the gray value of pixel CYC1(p) and the gray value of MAX_Rk; if A(p) > B(p), reduce the pixel value of CYC1(p) by |MaxP(L(k(i))) - MaxP(R(k(i)))|, otherwise increase the pixel value of CYC1(p) by |MinP(L(k(i))) - MinP(R(k(i)))|;
the MaxP function is the maximum value of pixel values of all pixel points on the splicing edge line; the MinP function is the minimum value of the pixel values of all the pixel points on the splicing edge line.
6. The method for generating a virtual world scene in a meta-space according to claim 5, wherein the method for performing edge transition on the region to be transitioned of each transition model specifically comprises:
sequentially screening the region to be transitioned Trend(k-1) of the (k-1)-th transition model and the region to be transitioned Trend(k+1) of the (k+1)-th transition model;
marking CYC1(p) as the p-th pixel point in the region to be transitioned, with p as the serial number of the pixel point in the region to be transitioned; traversing all CYC1(p) within the range of p and performing edge transition as follows: if the distance between the pixel point CYC1(p) and MAX_Lk is smaller than the distance between CYC1(p) and MAX_Rk, mark MAX_Lk as the reference repair line LR, otherwise mark MAX_Rk as the reference repair line LR; calculate the absolute value C(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k-1), and the absolute value D(p) of the difference between the maximum gray value of the pixel points on the reference repair line LR and the maximum gray value of the pixel points in Trend(k+1); if C(p) > D(p), reduce the pixel value of CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k+1))|, otherwise increase the pixel value of CYC1(p) by |MeanP(L(k(i))) - MeanL(Trend(k-1))|;
the MeanP function is the average of the pixel values of all pixel points on a splicing edge line; the MeanL function is the average of the pixel values of all pixel points in a region to be transitioned.
7. A system for generating a virtual world scene in a meta-space, characterized by comprising: a processor, a memory and a computer program stored in the memory and running on the processor, wherein the processor, when executing the computer program, implements the steps of the method for generating a virtual world scene in a meta-space according to any one of claims 1 to 6.
CN202211592134.8A 2022-12-13 2022-12-13 Virtual world scene generation method and system in meta-space Active CN115661417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211592134.8A CN115661417B (en) 2022-12-13 2022-12-13 Virtual world scene generation method and system in meta-space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211592134.8A CN115661417B (en) 2022-12-13 2022-12-13 Virtual world scene generation method and system in meta-space

Publications (2)

Publication Number Publication Date
CN115661417A true CN115661417A (en) 2023-01-31
CN115661417B CN115661417B (en) 2023-03-31

Family

ID=85019543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211592134.8A Active CN115661417B (en) 2022-12-13 2022-12-13 Virtual world scene generation method and system in meta-space

Country Status (1)

Country Link
CN (1) CN115661417B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000004503A1 (en) * 1998-07-16 2000-01-27 France Telecom (S.A.) Method for modelling three-dimensional objects or scenes
CN101770656A (en) * 2010-02-11 2010-07-07 中铁第一勘察设计院集团有限公司 Stereo orthophoto pair-based large-scene stereo model generating method and measuring method thereof
CN111612897A (en) * 2020-06-05 2020-09-01 腾讯科技(深圳)有限公司 Three-dimensional model fusion method, device and equipment and readable storage medium
CN112231020A (en) * 2020-12-16 2021-01-15 成都完美时空网络技术有限公司 Model switching method and device, electronic equipment and storage medium
CN114004939A (en) * 2021-12-31 2022-02-01 深圳奥雅设计股份有限公司 Three-dimensional model optimization method and system based on modeling software script

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000004503A1 (en) * 1998-07-16 2000-01-27 France Telecom (S.A.) Method for modelling three-dimensional objects or scenes
CN101770656A (en) * 2010-02-11 2010-07-07 中铁第一勘察设计院集团有限公司 Stereo orthophoto pair-based large-scene stereo model generating method and measuring method thereof
CN111612897A (en) * 2020-06-05 2020-09-01 腾讯科技(深圳)有限公司 Three-dimensional model fusion method, device and equipment and readable storage medium
CN112231020A (en) * 2020-12-16 2021-01-15 成都完美时空网络技术有限公司 Model switching method and device, electronic equipment and storage medium
WO2022127275A1 (en) * 2020-12-16 2022-06-23 成都完美时空网络技术有限公司 Method and device for model switching, electronic device, and storage medium
CN114004939A (en) * 2021-12-31 2022-02-01 深圳奥雅设计股份有限公司 Three-dimensional model optimization method and system based on modeling software script

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王淑青; 李叶伟: "Multi-exposure image fusion based on brightness consistency" (基于亮度一致性的多曝光图像融合) *

Also Published As

Publication number Publication date
CN115661417B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
CN109919869B (en) Image enhancement method and device and storage medium
CN108830816B (en) Image enhancement method and device
CN109712165B (en) Similar foreground image set segmentation method based on convolutional neural network
CN111768477B (en) Three-dimensional facial expression base establishment method and device, storage medium and electronic equipment
CN114820905B (en) Virtual image generation method and device, electronic equipment and readable storage medium
CN113298226A (en) Controlling neural networks through intermediate latent spaces
CN112686824A (en) Image correction method, image correction device, electronic equipment and computer readable medium
CN112150347B (en) Image modification patterns learned from a limited set of modified images
WO2023024441A1 (en) Model reconstruction method and related apparatus, and electronic device and storage medium
CN111340077A (en) Disparity map acquisition method and device based on attention mechanism
US20210366190A1 (en) Systems and methods for optimizing a model file
CN114782864A (en) Information processing method and device, computer equipment and storage medium
CN113140034A (en) Room layout-based panoramic new view generation method, device, equipment and medium
CN116797768A (en) Method and device for reducing reality of panoramic image
CN113706431B (en) Model optimization method and related device, electronic equipment and storage medium
CN113516697B (en) Image registration method, device, electronic equipment and computer readable storage medium
US20240135747A1 (en) Information processing method, computer device, and storage medium
CN115661417B (en) Virtual world scene generation method and system in meta-space
CN116330667B (en) Toy 3D printing model design method and system
US20220301348A1 (en) Face reconstruction using a mesh convolution network
CN111833374A (en) Path planning method, system, storage medium and terminal based on video fusion
CN113298931B (en) Reconstruction method and device of object model, terminal equipment and storage medium
KR20230062462A (en) Method and apparatus for creating a natural three-dimensional digital twin through verification of a reference image
WO2020224118A1 (en) Lesion determination method and apparatus based on picture conversion, and computer device
CN113763558A (en) Information processing method and device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant