CN112190944A - Virtual building model construction method and device and electronic device - Google Patents

Virtual building model construction method and device and electronic device

Info

Publication number
CN112190944A
CN112190944A (application CN202011133417.7A)
Authority
CN
China
Prior art keywords
virtual
orientation
model
prefabricated part
adsorption
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011133417.7A
Other languages
Chinese (zh)
Inventor
李宇冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202011133417.7A priority Critical patent/CN112190944A/en
Publication of CN112190944A publication Critical patent/CN112190944A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/52 Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F13/525 Changing parameters of virtual cameras
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/57 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
    • A63F13/577 Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6661 Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method, an apparatus, and an electronic device for constructing a virtual building model. The method includes: adjusting the orientation of a first virtual prefabricated part model from a first orientation to a second orientation in response to an orientation adjustment operation on the virtual character; adjusting a first position of the first virtual prefabricated part model according to the displacement and/or the second orientation of the virtual character to obtain a second position; when the virtual character faces a second virtual prefabricated part model in the second orientation, determining whether the second virtual prefabricated part model lies within a preset range; when it does, adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model to obtain a target position and a target orientation; and placing the first virtual prefabricated part model according to the target position and the target orientation to construct a virtual building model. The invention solves the technical problem in the related art that the orientation of a virtual prefabricated part model in a game scene is constrained by the orientation of a grid.

Description

Virtual building model construction method and device and electronic device
Technical Field
The invention relates to the field of computers, in particular to a method and a device for constructing a virtual building model and an electronic device.
Background
Building gameplay is a basic mechanic that can be applied to many electronic games. Because it grants the player a degree of operational freedom and lets the player exercise imagination, it is very popular among players. Supporting it requires not only sound design and planning from game designers but also a matching programming scheme.
Currently, in most game projects offered by the related art, the customization granularity of building gameplay can typically be refined only to the building level. That is, the granularity offered to the player in the game scene reaches only pre-designed basic building models (for example, the virtual base, factory, camp, and airport models successively created in strategy games). The basic building styles therefore cannot be customized by the player, so the degree of customization of building results is low and personalized building needs go unmet. For example, when a player wishes to build a virtual building model that does not exist in the game art library, this cannot be achieved in such a game scene. If the building resources in the game scene are to be enriched, brand-new virtual building models can only be introduced through successive game version updates, which significantly increases the production cost for art staff and, in practice, still struggles to satisfy the diverse demands of different players.
In addition, a small number of game projects provide the player with pre-designed virtual prefabricated part models for custom building in the game scene. In such projects, the three-dimensional space of the game scene is usually divided into contiguous grids with differing orientations, and the placement position and orientation of a virtual prefabricated part model are constrained by the position and orientation of those grids. Although the model's orientation can change when the virtual character rotates by a large amount, it still changes only according to the grid orientations before and after the rotation. In particular, when an angle exists between the character's orientation and the grid's orientation, the prefabricated part model faces the character according to the grid orientation both before and after the rotation, leaving a residual angle between the model's orientation and the character's orientation; the model does not strictly follow the character. The virtual building model therefore lacks realism and flexibility during construction, which in turn reduces the player's immersion.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
At least some embodiments of the present invention provide a method, an apparatus, and an electronic device for constructing a virtual building model, so as to solve the following technical problem: in the construction schemes provided by the related art, the three-dimensional space of the game scene must be divided into contiguous grids with differing orientations, so the orientation of a virtual prefabricated part model is constrained by the grid orientation; the virtual building model therefore lacks realism and flexibility during construction, which reduces the player's immersion.
According to an embodiment of the present invention, there is provided a method for constructing a virtual building model, in which a graphical user interface is obtained by executing a software application on a processor of a terminal and rendering it on a touch display of the terminal, and the content displayed on the graphical user interface at least partially includes a game scene and a virtual character. The method includes:
adjusting the orientation of a first virtual prefabricated part model from a first orientation to a second orientation in response to an orientation adjustment operation on the virtual character, wherein the orientation adjustment operation controls the orientation of the virtual character, the first orientation is the initial orientation of the first virtual prefabricated part model, and the second orientation is the orientation corresponding to the orientation adjustment operation; adjusting a first position of the first virtual prefabricated part model according to the displacement and/or the second orientation of the virtual character to obtain a second position, wherein the first position is the initial position of the first virtual prefabricated part model and the second position is the position corresponding to the displacement and/or the second orientation; when the virtual character faces a second virtual prefabricated part model in the second orientation, determining whether the second virtual prefabricated part model lies within a preset range; when it does, adjusting the second position and the second orientation according to a third position and a third orientation of the second virtual prefabricated part model to obtain a target position and a target orientation; and placing the first virtual prefabricated part model according to the target position and the target orientation to construct a virtual building model.
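The claimed flow can be sketched in a few lines of Python. All names, the yaw-only orientation model, the dictionary representation of a placed prefab, and the snap range are illustrative assumptions, not details taken from the patent:

```python
import math

def update_preview(char_pos, char_yaw, hold_distance, second_prefab=None, snap_range=3.0):
    # Second orientation: the held prefab follows the character's yaw.
    preview_yaw = char_yaw
    # Second position: project hold_distance units in front of the character.
    preview_pos = (char_pos[0] + hold_distance * math.cos(char_yaw),
                   char_pos[1] + hold_distance * math.sin(char_yaw))
    # If a placed prefab lies within the preset range, snap the preview to
    # its pose (the claimed third position and third orientation).
    if second_prefab is not None and math.dist(preview_pos, second_prefab["pos"]) <= snap_range:
        preview_pos, preview_yaw = second_prefab["pos"], second_prefab["yaw"]
    return preview_pos, preview_yaw
```

The returned pose is the target position and target orientation at which the held prefab would be placed; with no nearby prefab, it is simply the grid-free pose derived from the character.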
Optionally, the orientation adjustment operation is also used to control a lens direction of a virtual camera for presenting a game screen corresponding to the lens direction.
Optionally, the method further includes: determining, based on line-of-sight detection, a visual ray corresponding to the second orientation; and when the visual ray hits the second virtual prefabricated part model, determining that the virtual character faces the second virtual prefabricated part model in the second orientation.
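One common way to realize such a visual-ray test is a slab-based ray/AABB intersection against the placed model's bounding box; a hedged Python sketch, assuming the prefab is represented by an axis-aligned bounding box (not a detail stated in the patent):

```python
def faces_prefab(origin, direction, box_min, box_max, max_dist=100.0):
    # Slab test per axis; the ray hits the box iff the entry distance
    # never exceeds the exit distance.
    t_near, t_far = 0.0, max_dist
    for axis in range(3):
        d = direction[axis]
        if abs(d) < 1e-9:
            # Ray parallel to this slab: it must already lie inside it.
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return False
            continue
        t0 = (box_min[axis] - origin[axis]) / d
        t1 = (box_max[axis] - origin[axis]) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False
    return True
```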
Optionally, adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model includes: determining the third position and the third orientation from the candidate position and the candidate orientation of the second virtual prefabricated part model based on adsorption-forbidden constraint detection, collision detection, and line-of-sight detection, wherein the adsorption-forbidden constraint detection determines whether adsorption is forbidden between the first and second virtual prefabricated part models, the collision detection determines whether a collision occurs at the candidate position, and the line-of-sight detection determines whether the candidate position is within the line-of-sight range of the virtual character; and adjusting the second position and the second orientation according to the third position and the third orientation.
Optionally, the method further includes: acquiring adsorption attribute information of the second virtual prefabricated part model, wherein the adsorption attribute information is used to determine the feasibility of adsorption between the first and second virtual prefabricated part models; and determining a candidate position and a candidate orientation of the second virtual prefabricated part model from the adsorption attribute information.
Optionally, the adsorption attribute information includes: the type of at least one adsorption interface, and the adsorption position, adsorption orientation, adsorption capability, support capability, and adsorption-forbidden constraint of the at least one adsorption interface.
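The attribute list above maps naturally onto a small data structure; a hedged sketch in Python, with field names that are illustrative rather than taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class SnapSocket:
    kind: str                  # interface type, e.g. "wall_edge"
    position: tuple            # adsorption position in the prefab's local frame
    orientation: tuple         # outward adsorption direction
    can_snap: bool = True      # adsorption capability
    can_support: bool = True   # support capability
    forbidden_kinds: frozenset = frozenset()  # adsorption-forbidden constraint

    def accepts(self, other_kind: str) -> bool:
        # A counterpart interface may snap here unless snapping is disabled
        # or its type is explicitly forbidden.
        return self.can_snap and other_kind not in self.forbidden_kinds
```

A prefab would carry one such record per adsorption interface; candidate positions and orientations are then enumerated from the sockets whose `accepts` check passes.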
Optionally, the method further includes: selecting at least one coordinate point combination under the local coordinate system of the second virtual prefabricated part model, wherein each coordinate point combination includes a first three-dimensional coordinate point and a second three-dimensional coordinate point, which are different points in the three-dimensional space of the second virtual prefabricated part model; and performing collision detection based on the line segment between the first and second three-dimensional coordinate points to determine whether a virtual prefabricated part model already exists at the candidate position.
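A hedged sketch of this segment-based occupancy check in Python, assuming placed prefabs are registered in a voxel set (the sampling approach, cell size, and sample count are illustrative choices, not from the patent):

```python
import math

def segment_blocked(p0, p1, occupied_voxels, cell=1.0, samples=16):
    # Walk the segment between the two local coordinate points and test
    # whether any sample falls in a voxel already occupied by a prefab.
    for i in range(samples + 1):
        t = i / samples
        point = tuple(a + t * (b - a) for a, b in zip(p0, p1))
        voxel = tuple(int(math.floor(c / cell)) for c in point)
        if voxel in occupied_voxels:
            return True
    return False
```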
Optionally, the method further includes: acquiring the included angle between a first vector and a second vector, wherein the first vector points from the position of the virtual character in the game scene to the candidate position, and the second vector is the lens direction of the virtual camera; and performing line-of-sight detection according to a comparison of the included angle with a preset threshold to determine whether the candidate position is within the line-of-sight range of the virtual character.
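The angle comparison above is a standard dot-product test; a minimal Python sketch, where the 60-degree threshold is an assumed value (the patent only says "preset threshold"):

```python
import math

def within_view_angle(candidate_pos, char_pos, camera_dir, max_angle_deg=60.0):
    # First vector: from the character's position to the candidate position.
    v1 = tuple(c - p for c, p in zip(candidate_pos, char_pos))
    norm1 = math.sqrt(sum(c * c for c in v1))
    norm2 = math.sqrt(sum(c * c for c in camera_dir))
    if norm1 == 0 or norm2 == 0:
        return True  # Degenerate case: treat as visible.
    # Second vector: the camera's lens direction; compare the included angle
    # against the preset threshold.
    cos_angle = sum(a * b for a, b in zip(v1, camera_dir)) / (norm1 * norm2)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= max_angle_deg
```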
Optionally, the method further includes: performing collision detection based on the candidate position and the position of the virtual camera in the game scene to determine whether the candidate position is within the line-of-sight range of the virtual character.
Optionally, the method further includes: calculating and storing voxelized coordinates of the first virtual prefabricated part model, wherein the voxelized coordinates are used to determine, when a new virtual prefabricated part model is subsequently placed, whether the first virtual prefabricated part model already exists at the target position.
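A minimal sketch of such voxelization in Python, assuming each placed prefab occupies an axis-aligned box centred at its placement position (the box model and cell size are illustrative assumptions):

```python
import math

def voxelize(center, size, cell=1.0):
    # Record every grid cell covered by an axis-aligned box of the given
    # size centred at the placement position.
    lo = [int(math.floor((c - s / 2) / cell)) for c, s in zip(center, size)]
    hi = [int(math.ceil((c + s / 2) / cell)) for c, s in zip(center, size)]
    return {(x, y, z)
            for x in range(lo[0], hi[0])
            for y in range(lo[1], hi[1])
            for z in range(lo[2], hi[2])}
```

Storing the union of these sets lets a later placement test its target cells for an existing prefab with a constant-time set lookup.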
Optionally, the method further includes: constructing an adjacency relation between the first virtual prefabricated part model and the second virtual prefabricated part model, wherein the adjacency relation is used for searching a virtual foundation model closest to the first virtual prefabricated part model; and carrying out support detection on the first virtual prefabricated part model based on the adjacency relation.
Optionally, performing support detection on the first virtual prefabricated part model based on the adjacency relation includes: taking the first virtual prefabricated part model as a starting point, and acquiring the current step number between it and the nearest virtual foundation model by using the adjacency relation and the attribute value of the support capability of the first virtual prefabricated part model; and when the current step number is less than or equal to a preset step number, determining that a support relation exists between the first virtual prefabricated part model and the nearest virtual foundation model.
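This step-counting over the adjacency relation can be sketched as a breadth-first search; a hedged Python version, where the graph encoding and node labels are illustrative and `max_steps` stands in for the step budget derived from the support-capability attribute:

```python
from collections import deque

def has_support(start, foundations, neighbors, max_steps):
    # Breadth-first walk: because BFS visits nodes in order of step count,
    # the first foundation reached is the nearest one.
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        node, steps = queue.popleft()
        if node in foundations:
            return steps <= max_steps
        if steps == max_steps:
            continue  # Step budget exhausted along this path.
        for nxt in neighbors.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return False
```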
Optionally, the preset range includes a preset range of the second position or a preset range of the current position of the virtual character.
According to an embodiment of the present invention, there is also provided an apparatus for constructing a virtual building model, in which a graphical user interface is obtained by executing a software application on a processor of a terminal and rendering it on a touch display of the terminal, and the content displayed on the graphical user interface at least partially includes a game scene and a virtual character. The apparatus includes:
a first adjusting module, configured to adjust the orientation of a first virtual prefabricated part model from a first orientation to a second orientation in response to an orientation adjustment operation on the virtual character, wherein the orientation adjustment operation controls the orientation of the virtual character, the first orientation is the initial orientation of the first virtual prefabricated part model, and the second orientation is the orientation corresponding to the orientation adjustment operation; a second adjusting module, configured to adjust a first position of the first virtual prefabricated part model according to the displacement and/or the second orientation of the virtual character to obtain a second position, wherein the first position is the initial position of the first virtual prefabricated part model and the second position is the position corresponding to the displacement and/or the second orientation; a judging module, configured to determine, when the virtual character faces a second virtual prefabricated part model in the second orientation, whether the second virtual prefabricated part model lies within a preset range; a third adjusting module, configured to adjust the second position and the second orientation according to a third position and a third orientation of the second virtual prefabricated part model when the second virtual prefabricated part model lies within the preset range, so as to obtain a target position and a target orientation; and a building module, configured to place the first virtual prefabricated part model according to the target position and the target orientation to construct a virtual building model.
Optionally, the orientation adjustment operation is also used to control a lens direction of a virtual camera for presenting a game screen corresponding to the lens direction.
Optionally, the apparatus further includes: a first determining module, configured to determine, based on line-of-sight detection, a visual ray corresponding to the second orientation, and to determine, when the visual ray hits the second virtual prefabricated part model, that the virtual character faces the second virtual prefabricated part model in the second orientation.
Optionally, the third adjusting module is configured to determine the third position and the third orientation from the candidate position and the candidate orientation of the second virtual prefabricated part model based on adsorption-forbidden constraint detection, collision detection, and line-of-sight detection, wherein the adsorption-forbidden constraint detection determines whether adsorption is forbidden between the first and second virtual prefabricated part models, the collision detection determines whether a collision occurs at the candidate position, and the line-of-sight detection determines whether the candidate position is within the line-of-sight range of the virtual character; and to adjust the second position and the second orientation according to the third position and the third orientation.
Optionally, the apparatus further includes: a first processing module, configured to acquire adsorption attribute information of the second virtual prefabricated part model, wherein the adsorption attribute information is used to determine the feasibility of adsorption between the first and second virtual prefabricated part models, and to determine a candidate position and a candidate orientation of the second virtual prefabricated part model from the adsorption attribute information.
Optionally, the adsorption attribute information includes: the type of the at least one adsorption interface, the adsorption position of the at least one adsorption interface, the adsorption orientation of the at least one adsorption interface, the adsorption capacity of the at least one adsorption interface, the support capacity of the at least one adsorption interface, and the adsorption inhibition constraint of the at least one adsorption interface.
Optionally, the apparatus further includes: a second processing module, configured to select at least one coordinate point combination under the local coordinate system of the second virtual prefabricated part model, wherein each coordinate point combination includes a first three-dimensional coordinate point and a second three-dimensional coordinate point, which are different points in the three-dimensional space of the second virtual prefabricated part model, and to perform collision detection based on the line segment between the first and second three-dimensional coordinate points to determine whether a virtual prefabricated part model already exists at the candidate position.
Optionally, the apparatus further includes: a third processing module, configured to acquire the included angle between a first vector and a second vector, wherein the first vector points from the position of the virtual character in the game scene to the candidate position, and the second vector is the lens direction of the virtual camera, and to perform line-of-sight detection according to a comparison of the included angle with a preset threshold to determine whether the candidate position is within the line-of-sight range of the virtual character.
Optionally, the apparatus further includes: a second determining module, configured to perform collision detection based on the candidate position and the position of the virtual camera in the game scene to determine whether the candidate position is within the line-of-sight range of the virtual character.
Optionally, the apparatus further includes: a fourth processing module, configured to calculate and store voxelized coordinates of the first virtual prefabricated part model, wherein the voxelized coordinates are used to determine, when a new virtual prefabricated part model is subsequently placed, whether the first virtual prefabricated part model already exists at the target position.
Optionally, the apparatus further comprises: the fifth processing module is used for constructing an adjacency relation between the first virtual prefabricated part model and the second virtual prefabricated part model, wherein the adjacency relation is used for searching a virtual foundation model closest to the first virtual prefabricated part model; and carrying out support detection on the first virtual prefabricated part model based on the adjacency relation.
Optionally, the fifth processing module is configured to obtain, using the first virtual prefabricated part model as a starting point, a current step number between the first virtual prefabricated part model and the nearest virtual foundation model by using the adjacency relation and the attribute value of the supporting capability of the first virtual prefabricated part model; and when the current step number is less than or equal to the preset step number, determining that a supporting relation exists between the first virtual prefabricated part model and the nearest virtual foundation model.
Optionally, the preset range includes a preset range of the second position or a preset range of the current position of the virtual character.
According to an embodiment of the present invention, there is further provided a non-volatile storage medium, in which a computer program is stored, wherein the computer program is configured to execute the method for constructing a virtual building model in any one of the above methods when the computer program runs.
There is further provided, according to an embodiment of the present invention, a processor for executing a program, where the program is configured to execute the method for constructing a virtual building model in any one of the above embodiments when running.
There is further provided, according to an embodiment of the present invention, an electronic apparatus including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for constructing a virtual building model according to any one of the above embodiments.
In at least some embodiments of the invention, in response to an orientation adjustment operation on the virtual character, the orientation of the first virtual prefabricated part model is adjusted from its initial orientation to the orientation corresponding to the operation, and its initial position is adjusted according to the displacement of the virtual character and/or that orientation to obtain a corresponding position. When the virtual character faces the second virtual prefabricated part model in that orientation and the second virtual prefabricated part model lies within the preset range, the position and orientation obtained above are adjusted according to the position and orientation of the second virtual prefabricated part model to obtain a target position and a target orientation, and the first virtual prefabricated part model is placed accordingly to construct the virtual building model. This not only achieves custom construction of virtual building models in a game scene using virtual prefabricated part models (such as virtual foundation, wall, ceiling, and ladder models) as basic building blocks, but also frees the orientation of the virtual prefabricated part model from the orientation of any grid. The technical effects are greater freedom in constructing virtual building models, reduced art-resource overhead, lower operational complexity during construction, and an enhanced game experience. This solves the problem in the related art that the three-dimensional space of the game scene must be divided into contiguous grids with differing orientations, so that the orientation of the virtual prefabricated part model is constrained by the grid orientation, leaving the virtual building model lacking realism and flexibility during construction and reducing the player's immersion.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a flow diagram of a method of constructing a virtual building model according to one embodiment of the invention;
FIG. 2 is a schematic illustration of the adsorption interfaces and local coordinate origin of a virtual prefabricated part model according to an alternative embodiment of the present invention;
FIG. 3 is a schematic diagram of an adsorption process between a virtual wall model and a virtual ground model according to an alternative embodiment of the present invention;
FIG. 4 is a schematic view of a line-of-sight detection process according to an alternative embodiment of the present invention;
FIG. 5 is a schematic illustration of an adjacency relation according to an alternative embodiment of the present invention;
FIG. 6 is a schematic view of support detection on a virtual prefabricated part model according to an alternative embodiment of the present invention;
FIG. 7 is a schematic illustration of a support relation determination that is not satisfied within a game scene according to an alternative embodiment of the present invention;
FIG. 8 is a schematic illustration of a support relation determination that is satisfied within a game scene according to an alternative embodiment of the present invention;
FIG. 9 is a block diagram of an apparatus for constructing a virtual building model according to an embodiment of the present invention;
FIG. 10 is a block diagram of an apparatus for constructing a virtual building model according to an alternative embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In accordance with one embodiment of the present invention, there is provided an embodiment of a method for constructing a virtual building model, wherein the steps illustrated in the flowchart of the figure may be performed in a computer system, such as a set of computer-executable instructions, and wherein, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than that illustrated.
The method embodiment can be executed on a mobile terminal, a computer terminal or a similar computing terminal. Taking a mobile terminal as an example, the mobile terminal may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, and the like. The mobile terminal may include one or more processors (which may include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Digital Signal Processing (DSP) chip, a Microcontroller Unit (MCU), a programmable logic device (FPGA), a Neural Processing Unit (NPU), a Tensor Processing Unit (TPU), an Artificial Intelligence (AI) processor, etc.) and a memory for storing data. Optionally, the mobile terminal may further include a transmission device for a communication function, an input/output device, and a display device. It will be understood by those skilled in the art that the foregoing structural description is only illustrative and not restrictive of the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than described above, or have a different configuration than described above.
The memory may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the method for constructing a virtual building model in the embodiment of the present invention, and the processor executes various functional applications and data processing by running the computer program stored in the memory, so as to implement the method for constructing a virtual building model described above. The memory may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory may further include memory located remotely from the processor, and these remote memories may be connected to the mobile terminal through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
The display device may be, for example, a touch-screen liquid crystal display (LCD), also referred to as a "touch screen" or "touch display screen", which enables a user to interact with the user interface of the mobile terminal. In some embodiments, the mobile terminal has a graphical user interface (GUI) with which the user can interact through finger contacts and/or gestures on a touch-sensitive surface. The human-machine interaction functions optionally include the following interactions: creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, e-mailing, call interfacing, playing digital video, playing digital music and/or web browsing. Executable instructions for performing these human-machine interaction functions are configured/stored in one or more processor-executable computer program products or readable storage media.
In the present embodiment, a method for constructing a virtual building model running on the above terminal is provided. A graphical user interface is obtained by executing a software application on a processor of the terminal and rendering the software application on a touch display of the terminal, and the content displayed by the graphical user interface at least partially includes a game scene and a virtual character. Fig. 1 is a flowchart of a method for constructing a virtual building model according to an embodiment of the present invention. It should be noted that, although a logical sequence is shown in the flowchart, in some cases the steps shown or described may be executed in a sequence different from the one here. As shown in fig. 1, the method comprises the following steps:
step S100, responding to the orientation adjustment operation of the virtual character, and adjusting the orientation of the first virtual prefabricated part model from a first orientation to a second orientation, wherein the orientation adjustment operation is used for controlling the orientation of the virtual character, the first orientation is the initial orientation of the first virtual prefabricated part model, and the second orientation is the orientation corresponding to the orientation adjustment operation;
the virtual prefabricated part model refers to a foundation building model which is manufactured in advance and is used for building a virtual building model in a game scene in a self-defined mode. For example: virtual ground models, virtual wall models, virtual ceiling models, virtual staircase models, etc. A game player may be provided with a selectable list of virtual pre-form models within a graphical user interface. The list of virtual pre-cast models is used to enumerate the numbers and names of the virtual pre-cast models that the game player currently uses within the game scene. For example: a virtual ground model (1), a virtual wall model (2), a virtual ceiling model (3), a virtual staircase model (4), etc. The game player can determine the first virtual prefabricated part model by clicking a touch operation in the virtual prefabricated part model list, and can also determine the first virtual prefabricated part model by sliding (i.e. dragging) the touch operation in the virtual prefabricated part model list. Before the orientation adjustment operation has not been performed, the initial position and initial orientation, i.e., the default rendering position and default orientation, of the first virtual pre-form model within the game scene need to be obtained.
After the first virtual prefabricated part model is added to the game scene, its default rendering position is usually the three-dimensional space vector obtained by taking the current position of the virtual camera in the game scene as a starting point and multiplying the vector of the virtual camera's current orientation by the detection distance (i.e., a preset fixed distance). The default orientation of the first virtual prefabricated part model may be determined by the Euler angles of the virtual character. When the virtual character moves or rotates in the game scene, the orientation of the virtual character changes, and the virtual camera moves or rotates in the game scene along with the virtual character. Therefore, when the virtual character moves or rotates in the game scene, the current position and the current orientation of the virtual camera in the game scene also change, so that the current position and the current orientation of the virtual camera in the game scene are updated. In addition, since collision bodies including the virtual terrain model exist in the game scene and the virtual terrain model also has a terrain height, the acquired initial position and initial orientation of the first virtual prefabricated part model do not necessarily allow the first virtual prefabricated part model to actually be placed. For this purpose, collision detection is carried out with the current position of the virtual camera in the game scene as a starting point, the detection distance as the collision detection distance and the current orientation as the direction, wherein the collision detection types include virtual terrain model collision detection and virtual building model collision detection, so as to obtain a collision detection result.
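The default rendering position described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation; all function and parameter names are assumptions, since the patent does not name an API.

```python
import math

def default_placement(camera_pos, camera_forward, detect_distance):
    """Default rendering position of a newly added prefab: the camera's
    current position plus its current orientation vector scaled by the
    preset detection distance (a fixed length)."""
    norm = math.sqrt(sum(c * c for c in camera_forward))
    forward = [c / norm for c in camera_forward]  # unit orientation vector
    return [p + f * detect_distance for p, f in zip(camera_pos, forward)]
```

For example, a camera at the origin looking down +Z with a detection distance of 5 would place the prefab at (0, 0, 5).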
If a collision is determined, the initial position of the first virtual prefabricated part model within the game scene becomes the position of the collision point. In addition, it is also necessary to confirm, according to the terrain height of the virtual terrain model, whether the obtained initial position and initial orientation actually allow the first virtual prefabricated part model to be placed.
The above-mentioned orientation adjustment operation may adopt different control modes on different types of terminals. For example: for a PC client game, the orientation adjustment operation can be executed by moving a mouse; for a mobile game, the orientation adjustment operation can be executed by a sliding touch operation or by gravity-sensing control. In addition, a binding relationship needs to be established in advance between the orientation of the virtual character and the orientation of the first virtual prefabricated part model. The binding relationship is used for keeping the orientation of the first virtual prefabricated part model consistent with the orientation of the virtual character during the orientation adjustment of the virtual character, i.e., the first virtual prefabricated part model directly faces the virtual character.
In an alternative embodiment, the above-described orientation adjustment operation can be used not only to control the orientation of the virtual character but also to control the lens direction of a virtual camera for presenting a game screen corresponding to the lens direction.
Step S101, adjusting a first position of a first virtual prefabricated part model according to the displacement and/or a second orientation of the virtual character to obtain a second position, wherein the first position is an initial position of the first virtual prefabricated part model, and the second position is a position corresponding to the displacement and/or the second orientation;
After the initial position of the first virtual prefabricated part model is determined, it may be adjusted according to the displacement of the virtual character to obtain a position corresponding to the displacement, adjusted according to the second orientation to obtain a position corresponding to the second orientation, or adjusted according to both the displacement and the second orientation to obtain a position corresponding to both.
Step S102, when the virtual character faces the second virtual prefabricated part model in the second direction, judging whether the second virtual prefabricated part model is located in a preset range;
the second virtual prefabricated part model is the virtual prefabricated part model to be adsorbed. Since the second virtual preform model is a virtual preform model already laid out in the game scene, the second virtual preform model can provide a plurality of alternative positions and alternative orientations to be adsorbed. For example: when the virtual foundation model is hexahedron in shape, the lower surface of the virtual foundation model is adjacent to the ground in the game scene, and four edges of the upper surface of the virtual foundation model can be used as four alternative positions and alternative orientations to be adsorbed.
In an alternative embodiment, the second orientation may be determined based on line-of-sight detection. When the sight ray points at the second virtual prefabricated part model, it is determined that the virtual character faces the second virtual prefabricated part model in the second orientation.
The preset range may include a preset range of the second position, that is, a preset range centered on the position corresponding to the displacement and/or the second orientation, for example: a circular region having a radius of m (m is a positive integer) centered at a position corresponding to the displacement and/or the second orientation; the preset range of the current position of the virtual character may also be included, that is, the preset range centered on the current position of the virtual character, for example: and a circular area with the current position of the virtual character as the center and the radius of n (n is a positive integer).
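The preset-range test can be sketched as a simple distance check. A hedged Python illustration follows; the patent describes only a circular region with a given radius, so measuring the full 3D Euclidean distance is an assumption here, as are the function and parameter names.

```python
import math

def within_preset_range(model_pos, center_pos, radius):
    """True when the second virtual prefab lies inside the circular region
    of the given radius, centred either on the second position or on the
    virtual character's current position."""
    return math.dist(model_pos, center_pos) <= radius
```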
Step S103, when the second virtual prefabricated part model is located in the preset range, adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model to obtain a target position and a target orientation;
Based on the real-time position of the first virtual prefabricated part model, the second virtual prefabricated part models in its surroundings are continuously calculated with the first virtual prefabricated part model as the coordinate center, and it is determined whether the first virtual prefabricated part model and a second virtual prefabricated part model can complete adsorption. If so, the positions on the second virtual prefabricated part model that can be adsorbed to are further calculated. When the distance between the real-time position of the first virtual prefabricated part model and a calculated adsorption position is smaller than a specific threshold, the second position and the second orientation are directly adjusted according to the third position and the third orientation of the second virtual prefabricated part model to obtain the target position and the target orientation, thereby completing the adsorption.
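The adsorption decision above can be sketched as follows. This is a minimal Python illustration; the threshold value and the shape of the candidate list (position, orientation pairs) are assumptions not fixed by the text.

```python
import math

SNAP_THRESHOLD = 0.5  # the "specific threshold" from the text; value is illustrative

def try_snap(realtime_pos, adsorption_candidates):
    """Return the (position, orientation) of the nearest adsorption point on
    a second prefab whose distance to the first prefab's real-time position
    falls below the threshold, or None when no adsorption occurs."""
    best, best_d = None, SNAP_THRESHOLD
    for pos, orient in adsorption_candidates:
        d = math.dist(realtime_pos, pos)
        if d < best_d:
            best, best_d = (pos, orient), d
    return best
```

When `try_snap` returns a pair, the second position and second orientation are replaced by that position and orientation, completing the adsorption.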
And S104, placing the first virtual prefabricated part model according to the target position and the target orientation so as to construct a virtual building model.
Through the above steps, the orientation adjustment operation of the virtual character adjusts the orientation of the first virtual prefabricated part model from its initial orientation to the orientation corresponding to the operation; the initial position of the first virtual prefabricated part model is adjusted according to the displacement of the virtual character and/or that orientation to obtain the corresponding position; when the virtual character faces the second virtual prefabricated part model in that orientation and the second virtual prefabricated part model is within the preset range, the second position and the second orientation are adjusted according to the position and orientation of the second virtual prefabricated part model to obtain the target position and the target orientation; and the first virtual prefabricated part model is placed at the target position and in the target orientation to construct the virtual building model. This not only achieves the purpose of user-defined construction of a virtual building model in the game scene with virtual prefabricated part models (such as a virtual foundation model, a virtual wall model, a virtual ceiling model, a virtual staircase model, etc.) as basic building models, but also keeps the orientation of the virtual prefabricated part model strictly consistent with the orientation of the virtual character, avoiding the limitation of constraining the orientation of the virtual prefabricated part model to grid orientations.
The technical effects of improving the freedom of virtual building model construction, reducing art resource overhead, lowering the operational complexity of the construction process and enhancing the game experience are thereby realized. This further solves the problem that, in the manner of constructing a virtual building model in a game scene provided by the related art, the three-dimensional space of the game scene needs to be divided into continuous grids with different orientations, so that the orientation of the virtual prefabricated part model is restricted; as a result, the building process lacks realism and flexibility, and the game immersion of players is reduced.
Optionally, in step S103, adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model may comprise performing the following steps:
step S1031, determining the third position and the third orientation from the alternative positions and alternative orientations of the second virtual prefabricated part model based on adsorption-prohibition constraint detection, collision detection and line-of-sight detection, wherein the adsorption-prohibition constraint detection is used for determining whether adsorption is prohibited between the first virtual prefabricated part model and the second virtual prefabricated part model, the collision detection is used for determining whether a collision occurs at an alternative position, and the line-of-sight detection is used for determining whether an alternative position is located within the line-of-sight range of the virtual character;
step S1032, adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model.
In order to determine the third position and the third orientation from the alternative positions and alternative orientations of the second virtual prefabricated part model, the adsorption-prohibition constraint detection, the collision detection and the line-of-sight detection need to be performed respectively. The adsorption-prohibition constraint detection determines whether adsorption is prohibited between the first virtual prefabricated part model and the second virtual prefabricated part model. The collision detection determines whether a collision occurs at an alternative position. The line-of-sight detection determines whether an alternative position is located within the line-of-sight range of the virtual character. On this basis, the second position and the second orientation are adjusted according to the third position and the third orientation of the second virtual prefabricated part model.
Optionally, the method may further include the following steps:
step S105, acquiring adsorption attribute information of the second virtual prefabricated part model, wherein the adsorption attribute information is used for determining the adsorption feasibility between the first virtual prefabricated part model and the second virtual prefabricated part model;
and S106, determining the alternative position and the alternative orientation of the second virtual prefabricated part model according to the adsorption attribute information.
Before each virtual prefabricated part model is placed in a game scene, adsorption attribute information can be configured for each virtual prefabricated part model, and the mutual connection relation of different virtual prefabricated part models in a geometric space can be determined through the adsorption attribute information. Therefore, the candidate position and the candidate orientation can be specified by the adsorption attribute information. For example: when the virtual foundation model is a hexahedron, the four edges of the upper surface of the virtual foundation model can be determined according to the adsorption attribute information, and the four edges can be used as four alternative positions and alternative orientations to be adsorbed.
Optionally, the adsorption attribute information includes: the type of the at least one adsorption interface, the adsorption position of the at least one adsorption interface, the adsorption orientation of the at least one adsorption interface, the adsorption capacity of the at least one adsorption interface, the support capacity of the at least one adsorption interface, and the adsorption inhibition constraint of the at least one adsorption interface.
The adsorption interfaces involved in the adsorption attribute information are used for realizing the interconnection of different virtual prefabricated part models in a geometric space. The adsorption attribute information may include, but is not limited to:
(1) the type of the at least one adsorption interface, for example: a point adsorption interface, a horizontal line adsorption interface, a vertical line adsorption interface, a door adsorption interface, a surface adsorption interface and the like;
(2) the adsorption position and the adsorption orientation of at least one adsorption interface are mainly determined by the following parameters:
1) horizontal distance between the adsorption interface and the local coordinate origin of the virtual prefabricated part model;
2) the height between the adsorption interface and the local coordinate origin of the virtual prefabricated part model;
3) the rotation angle of the adsorption interface around the Y axis relative to the local coordinate origin of the virtual prefabricated part model;
4) the rotation angle of the adsorption interface around the X axis relative to the local coordinate origin of the virtual prefabricated part model.
Fig. 2 is a schematic diagram of the adsorption interface and the local coordinate origin of the virtual prefabricated part model according to an alternative embodiment of the present invention. As shown in fig. 2, when the virtual prefabricated part model is a virtual ceiling model, a local coordinate origin O exists in the local coordinate system in which the virtual ceiling model is located. The AB edge is a horizontal linear adsorption interface (in essence a space line segment), and the point M is the midpoint of the AB edge. The length of the projection in the horizontal direction of the three-dimensional space connecting line between the point M and the local coordinate origin O is the horizontal distance between the adsorption interface and the local coordinate origin of the virtual prefabricated part model. The length of the projection in the vertical direction of that connecting line is the height between the adsorption interface and the local coordinate origin of the virtual prefabricated part model. The rotation angle of the connecting line around the Y axis is the rotation angle of the adsorption interface around the Y axis relative to the local coordinate origin of the virtual prefabricated part model. The rotation angle of the connecting line around the X axis is the rotation angle of the adsorption interface around the X axis relative to the local coordinate origin of the virtual prefabricated part model.
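The first three parameters can be turned into a point in the prefab's local coordinate system as sketched below. This Python snippet is illustrative only: the convention of a Y-up axis with the yaw angle measured from the +Z axis is an assumption, as the patent does not fix the reference axis.

```python
import math

def interface_local_point(horizontal_dist, height, yaw_deg):
    """Recover the local-space position of an adsorption interface (e.g.
    the midpoint M of edge AB) from its horizontal distance, height, and
    rotation about the Y axis relative to the local origin O."""
    yaw = math.radians(yaw_deg)
    x = horizontal_dist * math.sin(yaw)
    z = horizontal_dist * math.cos(yaw)
    return (x, height, z)
```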
(3) The adsorption capacity of the at least one adsorption interface;
(4) a support capability of the at least one adsorption interface;
(5) The adsorption-prohibition constraint of at least one adsorption interface. The adsorption-prohibition constraint is closely related to the degree of freedom of the construction gameplay and is preset. For example: the virtual ceiling model is prohibited from being adsorbed directly to the virtual ground model.
Optionally, the method may further include the following steps:
step S107, selecting at least one coordinate point combination under the local coordinate system of the second virtual prefabricated part model, wherein each coordinate point combination in the at least one coordinate point combination comprises a first three-dimensional coordinate point and a second three-dimensional coordinate point, which are different coordinate points in the three-dimensional space of the second virtual prefabricated part model;
step S108, performing collision detection based on a connecting line between the first three-dimensional coordinate point and the second three-dimensional coordinate point, and determining whether a virtual prefabricated part model exists at the alternative position.
The data structure used in the collision detection process may be a list containing at least one data item, each data item containing one coordinate point combination. Each coordinate point combination comprises a first three-dimensional coordinate point and a second three-dimensional coordinate point, which are different coordinate points in the three-dimensional space of the second virtual prefabricated part model.
The first three-dimensional coordinate point may be represented as: (len1, height1, angle 1);
Here, len1 represents the horizontal distance between the first three-dimensional coordinate point and the local coordinate origin of the virtual prefabricated part model, height1 represents the height between the first three-dimensional coordinate point and the local coordinate origin, and angle1 represents the rotation angle around the Y axis of the vector formed by the first three-dimensional coordinate point and the local coordinate origin. A unique coordinate can thus be found in the local coordinate system from (len1, height1, angle1).
The second three-dimensional coordinate point may be expressed as: (len2, height2, angle2);
where len2, height2 and angle2 are defined analogously for the second three-dimensional coordinate point, so that a unique coordinate can likewise be obtained in the local coordinate system from (len2, height2, angle2).
Then, collision detection is performed with (len1, height1, angle1) as the starting point and (len2, height2, angle2) as the end point, and the alternative positions and alternative orientations at which collisions occur are filtered out.
In an alternative embodiment, the first three-dimensional coordinate point and the second three-dimensional coordinate point may be two different vertices of the virtual prefabricated part model.
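The screening step can be sketched in Python as below. Two things are assumptions: the coordinate convention (Y-up, angle measured about the Y axis from +Z), and the `raycast` callback, which stands in for the engine's collision query, which the patent does not name.

```python
import math

def to_local_point(len_, height, angle_deg):
    """Map a (len, height, angle) triple to a unique local-space point."""
    a = math.radians(angle_deg)
    return (len_ * math.sin(a), height, len_ * math.cos(a))

def filter_colliding_candidates(coordinate_pairs, raycast):
    """Perform collision detection along the segment from the first to the
    second three-dimensional coordinate point of each combination, keeping
    only the combinations whose segment hits nothing.
    raycast(start, end) -> bool is a stand-in for the engine query."""
    return [
        (p1, p2)
        for p1, p2 in coordinate_pairs
        if not raycast(to_local_point(*p1), to_local_point(*p2))
    ]
```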
Optionally, the method may further include the following steps:
step S109, acquiring an included angle between a first vector and a second vector, wherein the first vector is a vector between the alternative position and the position of the virtual character in the game scene, and the second vector is a vector in the lens direction of the virtual camera;
and step S110, performing sight line detection according to the comparison result of the included angle and a preset threshold value, and determining whether the alternative position is located in the sight line range of the virtual character.
In the process of performing line-of-sight detection on an alternative position and alternative orientation, the included angle between the first vector and the second vector may be acquired. The first vector is the vector between the alternative position and the current position of the virtual character in the game scene. The second vector is the vector in the lens direction of the virtual camera. Then, line-of-sight detection is performed according to the comparison result of the included angle with a preset threshold (for example, 75 degrees), and it is determined whether the alternative position is located within the line-of-sight range of the virtual character. For example: when the included angle is larger than 75 degrees, the alternative position is determined to be outside the line-of-sight range of the virtual character; when the included angle is smaller than or equal to 75 degrees, the alternative position is determined to be within the line-of-sight range of the virtual character.
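The angle comparison can be sketched as follows. A minimal Python illustration; the 75-degree threshold is taken from the example in the text, and the function names are assumptions.

```python
import math

ANGLE_THRESHOLD_DEG = 75.0  # preset threshold from the example above

def in_line_of_sight(alt_pos, character_pos, lens_dir):
    """True when the angle between the vector from the virtual character to
    the alternative position (first vector) and the camera lens direction
    (second vector) does not exceed the preset threshold."""
    v1 = [a - c for a, c in zip(alt_pos, character_pos)]
    dot = sum(a * b for a, b in zip(v1, lens_dir))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in lens_dir))
    cos_a = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for acos
    return math.degrees(math.acos(cos_a)) <= ANGLE_THRESHOLD_DEG
```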
Optionally, the method may further include the following steps:
step S111, performing collision detection based on the alternative position and the position of the virtual camera in the game scene, and determining whether the alternative position is located within the line-of-sight range of the virtual character.
In the process of performing line-of-sight detection on the alternative position, in addition to acquiring the included angle between the first vector and the second vector, collision detection can also be performed based on the alternative position and the current position of the virtual camera to determine whether the alternative position is located within the line-of-sight range of the virtual character. Collision detection is performed with the current position of the virtual camera as the starting point and the alternative position as the end point, determining whether a virtual prefabricated part model is already placed at the alternative position.
Optionally, the method may further include the following steps:
and step S112, calculating and storing voxelized coordinates of the first virtual prefabricated part model, wherein the voxelized coordinates are used for determining that the first virtual prefabricated part model exists at the target position when a new virtual prefabricated part model is placed subsequently.
For each virtual prefabricated part model which is successfully placed in a game scene by a game player, voxel coordinate calculation needs to be carried out on each virtual prefabricated part model, and then a mapping relation is established between the adsorption attribute information and the voxel coordinate of each virtual prefabricated part model and is stored and managed. When a virtual prefabricated part model is newly added in a game scene, the existing virtual prefabricated part model near the newly added virtual prefabricated part model can be quickly determined according to the voxelized coordinates, so that the alternative position and the alternative orientation are screened out for the newly added virtual prefabricated part model.
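The voxelized-coordinate lookup can be sketched as a spatial hash. This minimal Python sketch is an assumption about the storage scheme: the patent only states that voxelized coordinates are computed, mapped to adsorption attribute information, and stored; the grid resolution and class names here are illustrative.

```python
import math

VOXEL_SIZE = 1.0  # illustrative grid resolution

def voxel_key(position, size=VOXEL_SIZE):
    """Quantize a world-space position to integer voxel coordinates."""
    return tuple(math.floor(c / size) for c in position)

class VoxelIndex:
    """Store each placed prefab under its voxelized coordinates so that
    existing prefabs near a newly added one can be looked up quickly,
    without scanning the whole scene."""
    def __init__(self):
        self._cells = {}

    def add(self, model_id, position):
        self._cells.setdefault(voxel_key(position), []).append(model_id)

    def nearby(self, position, reach=1):
        """Collect model ids in the voxel containing `position` and its
        neighbouring voxels within `reach` cells along each axis."""
        kx, ky, kz = voxel_key(position)
        found = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dz in range(-reach, reach + 1):
                    found.extend(self._cells.get((kx + dx, ky + dy, kz + dz), []))
        return found
```

When a prefab is newly added, `nearby` returns the existing prefabs around it, from which the alternative positions and orientations can then be screened.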
Fig. 3 is a schematic view of an adsorption process between a virtual wall model and a virtual foundation model according to an alternative embodiment of the present invention, where the virtual wall model is the first virtual prefabricated part model and the virtual foundation model is the second virtual prefabricated part model, as shown in fig. 3. After the virtual wall model is added to the game scene, its default rendering position is usually the three-dimensional space vector obtained by taking the current position of the virtual camera in the game scene as a starting point and multiplying the vector of the virtual camera's current orientation by the detection distance (i.e., the preset fixed distance). The default orientation of the virtual wall model is strictly consistent with the orientation of the virtual character, and can directly face the virtual character or be determined by the preset Euler angles of the virtual character. In fig. 3, when the lens of the virtual camera is oriented toward neither the virtual wall model nor the virtual ground model, the virtual wall model is oriented in the same direction as the lens. In this case, the orientation of the virtual wall model is the first orientation, and the position of the virtual wall model is the first position.
In addition, because colliders (including the virtual terrain model) exist in the game scene and the virtual terrain model has a terrain height, the acquired initial position and initial orientation do not always allow the virtual wall model to actually be placed. For this purpose, collision detection is performed with the current position of the virtual camera in the game scene as the starting point, the detection distance as the collision detection distance, and the current orientation as the direction; the detection types include virtual terrain model collision detection and virtual building model collision detection, and a collision detection result is obtained. If a collision does occur, the initial position of the virtual wall model within the game scene becomes the collision point. It is also necessary to confirm, according to the terrain height of the virtual terrain model, whether the acquired initial position and initial orientation allow the virtual wall model to actually be placed.
When the virtual character moves or rotates in the game scene, the orientation of the virtual character changes, and the virtual camera moves or rotates along with it, so the current position and current orientation of the virtual camera in the game scene are updated. In Fig. 3, when the lens of the virtual camera faces the virtual ground model, the orientation of the virtual wall model is first adjusted from the first orientation to the second orientation, and its position is adjusted from the first position to the second position. When the virtual wall model faces the virtual ground model in the second orientation and the virtual ground model is within the preset range, the second position and second orientation need to be adjusted according to the third position and third orientation of the virtual ground model to obtain the target position and target orientation. At this time, because the position and orientation of the virtual wall model are updated synchronously, its voxelized coordinates need to be recalculated. As described above, for each successfully placed virtual prefabricated part model, a mapping relation between its adsorption attribute information and its voxelized coordinates is established, stored, and managed; thus, when a virtual ground model exists in the vicinity of the virtual wall model, it can be detected.
Since the bottom of the virtual wall model has one horizontal-line adsorption interface, the periphery of the upper surface of the virtual foundation model has four horizontal-line adsorption interfaces, and no adsorption-inhibition constraint exists between the virtual wall model and the virtual foundation model, the four horizontal-line adsorption interfaces become the candidate positions and candidate orientations described above.
Then, collision detection and sight-line detection are carried out on the four horizontal-line adsorption interfaces, and it is further determined that each of them can serve as the target position and target orientation of the virtual wall model. At this time, the target position and target orientation closest to the virtual character are selected by default to complete the adsorption.
After the first virtual wall model has been placed, the game player may continue to place subsequent virtual wall models. It should be noted that, because the adsorption interface closest to the virtual character among the four horizontal-line adsorption interfaces is already occupied, that interface is filtered out through collision detection, leaving only three target positions and target orientations; again, the one closest to the virtual character is selected by default to complete the adsorption. This continues in the same way until all the virtual wall models are placed.
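The placement sequence in the last two paragraphs — filter out occupied or invisible interfaces, then snap to the candidate nearest the character — can be sketched as follows. The helper predicates `is_occupied` and `in_sight` stand in for the patent's collision detection and sight-line detection and are assumptions:

```python
def choose_snap_target(candidates, character_pos, is_occupied, in_sight):
    """From candidate (position, orientation) pairs, drop interfaces that are
    occupied or outside the sight range, then pick the one closest to the
    virtual character."""
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, character_pos))

    usable = [c for c in candidates
              if not is_occupied(c[0]) and in_sight(c[0])]
    if not usable:
        return None  # no candidate left; the model cannot be placed here
    return min(usable, key=lambda c: dist2(c[0]))
```

Each successful placement occupies one interface, so repeated calls naturally walk through the remaining candidates until none are left.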
Fig. 4 is a schematic view of a sight-line detection process according to an alternative embodiment of the present invention. As shown in Fig. 4, when two virtual wall models have already been placed within the sight range of the virtual character, their two adsorption interfaces are filtered out through collision detection. Sight-line detection then shows that the other two adsorption interfaces lie outside the visual range of the virtual character (i.e., they are blocked by the two virtual wall models already placed). At this time, the virtual ground model no longer offers a candidate position and candidate orientation for a newly added virtual wall model to adsorb to, and the new virtual wall model cannot be placed unless the virtual character is moved so that one or both of the remaining adsorption interfaces fall within its sight range.
Optionally, the method may further include the following steps:
step S113, constructing an adjacency relation between the first virtual prefabricated part model and the second virtual prefabricated part model, wherein the adjacency relation is used for searching a virtual foundation model closest to the first virtual prefabricated part model;
and step S114, carrying out support detection on the first virtual prefabricated part model based on the adjacency relation.
For the virtual prefabricated part models whose adsorption has been completed, an undirected graph of their adjacency relations needs to be constructed and recorded, so that support detection can be carried out on a virtual prefabricated part model to be placed based on the adjacency relation.
Fig. 5 is a schematic diagram of an adjacency relation according to an alternative embodiment of the present invention. As shown in Fig. 5, there currently exist a virtual ground model 1, a virtual ground model 2, a virtual wall model 3, a virtual wall model 4, a virtual wall model 5, a virtual door frame model 6, and a virtual ground model 7. The adjacency relation of virtual ground model 1 is: virtual ground model 2, virtual ground model 7, virtual wall model 4, and virtual wall model 5. The adjacency relation of virtual wall model 4 is: virtual ground model 1, virtual wall model 3, and virtual wall model 5. The adjacency relation of virtual wall model 5 is: virtual ground model 1, virtual wall model 4, and virtual door frame model 6. And so on.
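The undirected adjacency graph of Fig. 5 can be recorded with a simple neighbor map; the class, method names, and abbreviated model labels below are illustrative assumptions:

```python
from collections import defaultdict

class AdjacencyGraph:
    """Undirected graph recording which placed models are adsorbed together."""
    def __init__(self):
        self._neighbors = defaultdict(set)

    def connect(self, model_a, model_b):
        # adsorption is mutual, so record the edge in both directions
        self._neighbors[model_a].add(model_b)
        self._neighbors[model_b].add(model_a)

    def neighbors(self, model):
        return self._neighbors[model]

# Recording part of the Fig. 5 layout:
graph = AdjacencyGraph()
graph.connect("ground 1", "ground 2")
graph.connect("ground 1", "wall 4")
graph.connect("ground 1", "wall 5")
graph.connect("wall 4", "wall 5")
```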
Optionally, in step S114, performing support detection on the first virtual prefabricated part model based on the adjacency relation may include the following steps:
step S1141, taking the first virtual prefabricated part model as a starting point, and acquiring the current step number between the first virtual prefabricated part model and the nearest virtual foundation model by using the adjacency relation and the attribute value of the supporting capacity of the first virtual prefabricated part model;
step S1142, when the current step number is less than or equal to the preset step number, determining that a supporting relation exists between the first virtual prefabricated part model and the nearest virtual foundation model.
The adsorption attribute information configured for each virtual prefabricated part model includes the supporting capability of at least one adsorption interface, from which the supporting capability (i.e., the support attribute) of each model can be determined. For example, the support attribute value of the virtual ground model and the virtual wall model is 1, while that of the virtual ceiling model is 0. If, starting from the first virtual prefabricated part model, a virtual foundation model can be found within the preset number of steps according to the adjacency graph, it can be determined that a supporting relation exists between the first virtual prefabricated part model and the nearest virtual foundation model.
It should be noted that, when the support attribute of a traversed virtual prefabricated part model is 0, the current step count between the starting model and the nearest virtual foundation model is increased by 1.
Searching for a virtual foundation model within the preset number of steps according to the adjacency graph is essentially a breadth-first search. Taking the first virtual prefabricated part model as the starting point, the current step count between it and the nearest virtual foundation model is obtained using the adjacency relation and the support-capability attribute value of the first virtual prefabricated part model; the initial step count of the first virtual prefabricated part model is 0. During the search, any virtual prefabricated part model may be traversed only once. Assuming the preset number of steps is 4: if the current step count exceeds 4 and no virtual foundation model has been found, traversal of adjacent models stops, indicating that no supporting relation exists between the first virtual prefabricated part model and the nearest virtual foundation model; if the current step count is less than or equal to 4 and a virtual foundation model can be found, a supporting relation is determined to exist between them.
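The breadth-first support search described above can be sketched as follows. This is a simplified sketch under several assumptions: foundation models are identified by a name prefix, the adjacency relation is passed as a plain neighbor dictionary, and only traversed models with support attribute 0 add to the step count, per the rule stated earlier.

```python
from collections import deque

def has_support(start, neighbors, support_attr, max_steps=4):
    """Breadth-first search from `start` for the nearest virtual foundation
    model via the adjacency relation. Traversing a model whose support
    attribute is 0 adds one step; each model is visited at most once."""
    visited = {start}
    queue = deque([(start, 0)])  # (model, current step count); start is step 0
    while queue:
        model, steps = queue.popleft()
        if model.startswith("foundation"):  # assumed naming convention
            return steps <= max_steps
        if steps > max_steps:
            continue  # step budget exhausted along this branch
        for nxt in neighbors.get(model, ()):
            if nxt not in visited:
                visited.add(nxt)
                # ceilings and other models with support attribute 0 add a step
                cost = 1 if support_attr.get(nxt, 1) == 0 else 0
                queue.append((nxt, steps + cost))
    return False
```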
Fig. 6 is a schematic view of support detection on virtual prefabricated part models according to an alternative embodiment of the invention. As shown in Fig. 6, there currently exist virtual ceiling models 1 through 6. Assuming that the maximum number of steps for the support-relation determination is 4, virtual ceiling models 5 and 6 do not satisfy the support relation and therefore cannot be adsorbed to other virtual prefabricated part models within the game scene.
Fig. 7 is a schematic diagram of a failed support-relation determination within a game scene according to an alternative embodiment of the present invention. As shown in Fig. 7, a set of mutually adsorbed virtual ceiling models already exists in the scene. When the game player tries to place an additional virtual ceiling model, support detection is performed with the preset maximum of 4 steps. The virtual prefabricated part models already placed include one virtual ground model, one virtual wall model, and four virtual ceiling models. Because the support attribute value of a virtual ceiling model is 0, when the nearest virtual foundation model is searched for through the adjacency undirected graph, the current step count exceeds 4 before a virtual foundation model can be found, so it is determined that the new virtual ceiling model cannot be placed (indicated by a dotted line in the figure; in an actual game scene, a red material can be used to give the player a warning prompt).
Fig. 8 is a schematic diagram of a satisfied support-relation determination within a game scene according to an alternative embodiment of the present invention. As shown in Fig. 8, a set of mutually adsorbed virtual ceiling models already exists in the scene. When the game player tries to place an additional virtual ceiling model, support detection is performed with the preset maximum of 4 steps. The virtual prefabricated part models already placed include one virtual ground model, one virtual wall model, and four virtual ceiling models. Because the support attribute value of a virtual ceiling model is 0, when the nearest virtual foundation model is searched for through the adjacency undirected graph, the current step count equals 4 and a virtual foundation model can be found, so the new virtual ceiling model can be placed (indicated by a solid line in the figure; in an actual game scene, a blue material can be used to give the player a success prompt).
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a device for constructing a virtual building model is further provided. The device is used to implement the foregoing embodiments and preferred implementations, and descriptions already given are not repeated. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, an implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 9 is a block diagram of an apparatus for constructing a virtual building model according to an embodiment of the present invention, in which a graphical user interface is obtained by executing a software application on a processor of a terminal and rendering the software application on a touch display of the terminal, and contents displayed by the graphical user interface at least partially include a game scene and a virtual character, as shown in fig. 9, the apparatus includes:
a first adjusting module 100, configured to adjust an orientation of the first virtual preform model from a first orientation to a second orientation in response to an orientation adjusting operation of the virtual character, wherein the orientation adjusting operation is used to control the orientation of the virtual character, the first orientation is an initial orientation of the first virtual preform model, and the second orientation is an orientation corresponding to the orientation adjusting operation; a second adjusting module 102, configured to adjust a first position of the first virtual preform model according to a displacement and/or the second orientation of the virtual character to obtain a second position, where the first position is an initial position of the first virtual preform model, and the second position is a position corresponding to the displacement and/or the second orientation; a determining module 104, configured to determine whether the second virtual prefabricated part model is within a preset range when the virtual character faces the second virtual prefabricated part model in the second orientation; a third adjusting module 106, configured to adjust the second position and the second orientation according to a third position and a third orientation of the second virtual preform model when the second virtual preform model is within the preset range, so as to obtain a target position and a target orientation; a building module 108, configured to place the first virtual prefabricated part model according to the target position and the target orientation to build a virtual building model.
Optionally, the orientation adjustment operation is also used to control a lens direction of a virtual camera for presenting a game screen corresponding to the lens direction.
Optionally, fig. 10 is a block diagram of a device for constructing a virtual building model according to an alternative embodiment of the present invention. As shown in fig. 10, in addition to all the modules shown in fig. 9, the device includes: a first determining module 110, configured to determine a visual ray corresponding to the second orientation based on sight-line detection, and, when the visual ray points at the second virtual prefabricated part model, determine that the virtual character faces the second virtual prefabricated part model in the second orientation.
Optionally, the third adjusting module 106 is configured to determine the third position and the third orientation from the candidate positions and candidate orientations of the second virtual prefabricated part model based on adsorption-inhibition constraint detection, collision detection, and sight-line detection, where the adsorption-inhibition constraint detection is used to determine whether adsorption is prohibited between the first virtual prefabricated part model and the second virtual prefabricated part model, the collision detection is used to determine whether a collision occurs at a candidate position, and the sight-line detection is used to determine whether a candidate position is within the sight range of the virtual character; and to adjust the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model.
Optionally, as shown in fig. 10, the apparatus includes, in addition to all the modules shown in fig. 9: a first processing module 112, configured to obtain adsorption attribute information of the second virtual preform model, where the adsorption attribute information is used to determine adsorption feasibility between the first virtual preform model and the second virtual preform model; an alternative position and an alternative orientation of the second virtual preform model are determined from the adsorption property information.
Optionally, the adsorption attribute information includes: the type of the at least one adsorption interface, the adsorption position of the at least one adsorption interface, the adsorption orientation of the at least one adsorption interface, the adsorption capacity of the at least one adsorption interface, the support capacity of the at least one adsorption interface, and the adsorption inhibition constraint of the at least one adsorption interface.
Optionally, as shown in fig. 10, the apparatus includes, in addition to all the modules shown in fig. 9: a second processing module 114 for selecting at least one coordinate point combination under the local coordinate system of the second virtual preform model, wherein each coordinate point combination of the at least one coordinate point combination comprises: the first three-dimensional coordinate point and the second three-dimensional coordinate point are different coordinate points in a three-dimensional space of the second virtual prefabricated part model; and performing collision detection based on a connecting line between the first three-dimensional coordinate point and the second three-dimensional coordinate point, and determining whether a virtual prefabricated part model exists in the alternative position.
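The line-segment collision check performed by the second processing module can be approximated by sampling points along the segment between the two coordinate points and testing voxel occupancy. The sampling approach, parameter values, and occupancy representation are assumptions, since the patent does not specify the underlying collision query:

```python
import math

def interface_occupied(p1, p2, occupied_voxels, voxel_size=1.0, samples=8):
    """Sample points along the segment p1 -> p2 and report True if any sample
    falls in an occupied voxel, i.e. a virtual prefabricated part model
    already exists at the candidate position."""
    for i in range(samples + 1):
        t = i / samples
        point = tuple(a + (b - a) * t for a, b in zip(p1, p2))
        key = tuple(math.floor(c / voxel_size) for c in point)
        if key in occupied_voxels:
            return True
    return False
```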
Optionally, as shown in fig. 10, the apparatus includes, in addition to all the modules shown in fig. 9: the third processing module 116 is configured to obtain an included angle between a first vector and a second vector, where the first vector is a vector between the candidate position and the position of the virtual character in the game scene, and the second vector is a vector in the lens direction of the virtual camera; and performing sight line detection according to the comparison result of the included angle and a preset threshold value, and determining whether the alternative position is located in the sight line range of the virtual character.
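The angle comparison performed by the third processing module — between the vector from the character to the candidate position and the camera's lens direction — can be sketched with a dot product; the 60-degree threshold is an assumed example value:

```python
import math

def within_sight(candidate_pos, character_pos, lens_dir, max_angle_deg=60.0):
    """Compare the angle between (candidate - character) and the camera lens
    direction against a threshold to decide whether the candidate position
    lies within the virtual character's sight range."""
    v1 = tuple(c - p for c, p in zip(candidate_pos, character_pos))
    norm1 = math.sqrt(sum(x * x for x in v1))
    norm2 = math.sqrt(sum(x * x for x in lens_dir))
    if norm1 == 0 or norm2 == 0:
        return True  # degenerate case: candidate coincides with the character
    cos_angle = sum(a * b for a, b in zip(v1, lens_dir)) / (norm1 * norm2)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= max_angle_deg
```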
Optionally, as shown in fig. 10, the apparatus includes, in addition to all the modules shown in fig. 9: a second determination module 118, configured to perform collision detection based on the candidate location and the location of the virtual camera within the game scene, and determine whether the candidate location is within the line of sight of the virtual character.
Optionally, as shown in fig. 10, the apparatus includes, in addition to all the modules shown in fig. 9: a fourth processing module 120, configured to calculate and store voxelized coordinates of the first virtual preform model, where the voxelized coordinates are used to determine that the first virtual preform model exists at the target position when a new virtual preform model is subsequently placed.
Optionally, as shown in fig. 10, the apparatus includes, in addition to all the modules shown in fig. 9: a fifth processing module 122, configured to construct an adjacency between the first virtual prefabricated part model and the second virtual prefabricated part model, where the adjacency is used to find a virtual foundation model closest to the first virtual prefabricated part model; and carrying out support detection on the first virtual prefabricated part model based on the adjacency relation.
Optionally, the fifth processing module 122 is configured to obtain, starting from the first virtual preform model, a current step number between the first virtual preform model and the nearest virtual foundation model by using the adjacency relation and the attribute value of the supporting capability of the first virtual preform model; and when the current step number is less than or equal to the preset step number, determining that a supporting relation exists between the first virtual prefabricated part model and the nearest virtual foundation model.
Optionally, the preset range includes a preset range of the second position or a preset range of the current position of the virtual character.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the above-mentioned nonvolatile storage medium may be configured to store a computer program for executing the steps of:
s1, responding to the orientation adjustment operation of the virtual character, and adjusting the orientation of the first virtual prefabricated model from a first orientation to a second orientation, wherein the orientation adjustment operation is used for controlling the orientation of the virtual character, the first orientation is the initial orientation of the first virtual prefabricated model, and the second orientation is the orientation corresponding to the orientation adjustment operation;
s2, adjusting a first position of the first virtual prefabricated part model according to the displacement and/or the second orientation of the virtual character to obtain a second position, wherein the first position is an initial position of the first virtual prefabricated part model, and the second position is a position corresponding to the displacement and/or the second orientation;
s3, when the virtual character faces the second virtual prefabricated part model in the second orientation, judging whether the second virtual prefabricated part model is located in a preset range;
s4, when the second virtual prefabricated part model is located in the preset range, adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model to obtain a target position and a target orientation;
and S5, placing the first virtual prefabricated part model according to the target position and the target orientation to construct a virtual building model.
Optionally, in this embodiment, the nonvolatile storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, responding to the orientation adjustment operation of the virtual character, and adjusting the orientation of the first virtual prefabricated model from a first orientation to a second orientation, wherein the orientation adjustment operation is used for controlling the orientation of the virtual character, the first orientation is the initial orientation of the first virtual prefabricated model, and the second orientation is the orientation corresponding to the orientation adjustment operation;
s2, adjusting a first position of the first virtual prefabricated part model according to the displacement and/or the second orientation of the virtual character to obtain a second position, wherein the first position is an initial position of the first virtual prefabricated part model, and the second position is a position corresponding to the displacement and/or the second orientation;
s3, when the virtual character faces the second virtual prefabricated part model in the second orientation, judging whether the second virtual prefabricated part model is located in a preset range;
s4, when the second virtual prefabricated part model is located in the preset range, adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model to obtain a target position and a target orientation;
and S5, placing the first virtual prefabricated part model according to the target position and the target orientation to construct a virtual building model.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
In the above embodiments of the present invention, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (16)

1. A method for constructing a virtual building model, wherein a graphical user interface is obtained by executing a software application on a processor of a terminal and rendering the software application on a touch display of the terminal, and content displayed by the graphical user interface at least partially includes a game scene and a virtual character, the method comprising:
adjusting the orientation of the first virtual prefabricated part model from a first orientation to a second orientation in response to the orientation adjustment operation of the virtual character, wherein the orientation adjustment operation is used for controlling the orientation of the virtual character, the first orientation is the initial orientation of the first virtual prefabricated part model, and the second orientation is the orientation corresponding to the orientation adjustment operation;
adjusting a first position of the first virtual prefabricated part model according to the displacement and/or the second orientation of the virtual character to obtain a second position, wherein the first position is an initial position of the first virtual prefabricated part model, and the second position is a position corresponding to the displacement and/or the second orientation;
when the virtual character faces a second virtual prefabricated part model in the second orientation, judging whether the second virtual prefabricated part model is located in a preset range;
when the second virtual prefabricated part model is located in the preset range, adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model to obtain a target position and a target orientation;
and placing the first virtual prefabricated part model according to the target position and the target orientation so as to construct a virtual building model.
2. The method of claim 1, wherein the orientation adjustment operation is further used to control a lens direction of a virtual camera used to present a game view corresponding to the lens direction.
3. The method of claim 1, further comprising:
determining, based on line-of-sight detection, a visual ray corresponding to the second orientation;
determining that the virtual character faces the second virtual prefabricated part model in the second orientation when the visual ray points at the second virtual prefabricated part model.
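As one possible (not disclosed) realization of the claim-3 visual ray test, a standard slab test against a prefab model's axis-aligned bounding box can decide whether the ray points at the model; the function name and tolerance are hypothetical:

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does a visual ray cast from `origin` along `direction`
    hit the axis-aligned bounding box [box_min, box_max] of a prefab?"""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:
            # Ray is parallel to this slab; it misses if outside the slab.
            if o < lo or o > hi:
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
        if t_near > t_far:
            return False
    return True
```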
4. The method of claim 1, wherein adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model comprises:
determining the third position and the third orientation from a candidate position and a candidate orientation of the second virtual prefabricated part model based on adsorption-prohibition detection, collision detection, and line-of-sight detection, wherein the adsorption-prohibition detection is used to determine whether adsorption between the first virtual prefabricated part model and the second virtual prefabricated part model is prohibited, the collision detection is used to determine whether a collision occurs at the candidate position, and the line-of-sight detection is used to determine whether the candidate position is within a line of sight of the virtual character;
adjusting the second position and the second orientation according to the third position and the third orientation of the second virtual prefabricated part model.
5. The method of claim 4, further comprising:
acquiring adsorption attribute information of the second virtual prefabricated part model, wherein the adsorption attribute information is used for determining adsorption feasibility between the first virtual prefabricated part model and the second virtual prefabricated part model;
determining the candidate position and the candidate orientation of the second virtual prefabricated part model according to the adsorption attribute information.
6. The method of claim 5, wherein the adsorption attribute information comprises:
the type of the at least one adsorption interface, the adsorption position of the at least one adsorption interface, the adsorption orientation of the at least one adsorption interface, the adsorption capacity of the at least one adsorption interface, the support capacity of the at least one adsorption interface, and the adsorption prohibition constraint of the at least one adsorption interface.
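The adsorption attributes enumerated in claim 6 can be pictured as a per-interface record; the following dataclass is a hypothetical illustration only (field names and the feasibility rule are not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class SnapInterface:
    """Hypothetical record of the claim-6 adsorption attributes
    for one adsorption interface of a prefab model."""
    kind: str                 # type of the adsorption interface
    position: tuple           # adsorption position (local coordinates)
    orientation: tuple        # adsorption orientation (unit vector)
    can_adsorb: bool = True   # adsorption capacity
    can_support: bool = True  # support capacity
    # adsorption prohibition constraint: interface types this one rejects
    forbidden_kinds: frozenset = field(default_factory=frozenset)

def adsorption_allowed(a: SnapInterface, b: SnapInterface) -> bool:
    """Adsorption is feasible when both interfaces can adsorb and
    neither prohibits the other's interface type."""
    return (a.can_adsorb and b.can_adsorb
            and b.kind not in a.forbidden_kinds
            and a.kind not in b.forbidden_kinds)
```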
7. The method of claim 4, further comprising:
selecting at least one coordinate point combination in a local coordinate system of the second virtual prefabricated part model, wherein each coordinate point combination in the at least one coordinate point combination comprises: a first three-dimensional coordinate point and a second three-dimensional coordinate point, the first three-dimensional coordinate point and the second three-dimensional coordinate point being different coordinate points in a three-dimensional space of the second virtual prefabricated part model;
performing the collision detection based on a connecting line between the first three-dimensional coordinate point and the second three-dimensional coordinate point, and determining whether a virtual prefabricated part model already exists at the candidate position.
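One hypothetical way (not disclosed by the patent) to realize the claim-7 connecting-line test is to sample points along the segment between the two coordinate points and check each against the bounding boxes of already-placed parts; the sampling count is an arbitrary choice:

```python
def segment_blocked(p1, p2, occupied_boxes):
    """Sample the connecting line between two coordinate points and
    report whether it passes through any occupied bounding box, i.e.
    whether a prefab model already exists at the candidate position."""
    steps = 16  # arbitrary sampling resolution
    for i in range(steps + 1):
        t = i / steps
        point = tuple(a + (b - a) * t for a, b in zip(p1, p2))
        for lo, hi in occupied_boxes:
            if all(l <= c <= h for c, l, h in zip(point, lo, hi)):
                return True
    return False
```

An exact segment-vs-box intersection test would serve equally well; sampling merely keeps the sketch short.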
8. The method of claim 4, further comprising:
acquiring an included angle between a first vector and a second vector, wherein the first vector is a vector between the candidate position and the position of the virtual character in the game scene, and the second vector is a vector in the lens direction of the virtual camera;
performing the line-of-sight detection according to a comparison result between the included angle and a preset threshold, and determining whether the candidate position is within the line of sight of the virtual character.
9. The method of claim 4, further comprising:
and performing collision detection based on the candidate position and the position of the virtual camera in the game scene, and determining whether the candidate position is within the line of sight of the virtual character.
10. The method of claim 1, further comprising:
calculating and storing voxelized coordinates of the first virtual prefabricated part model, wherein the voxelized coordinates are used to determine that the first virtual prefabricated part model already exists at the target position when a new virtual prefabricated part model is subsequently placed.
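Voxelized coordinates as in claim 10 can be illustrated as the set of integer grid cells a placed part occupies; the unit-voxel grid and axis-aligned assumption below are hypothetical simplifications:

```python
def voxelize(position, size, voxel=1.0):
    """Record the integer voxel cells an axis-aligned prefab occupies,
    so a later placement can test for overlap by set intersection."""
    (x, y, z), (sx, sy, sz) = position, size
    cells = set()
    for i in range(int(sx / voxel)):
        for j in range(int(sy / voxel)):
            for k in range(int(sz / voxel)):
                cells.add((int(x / voxel) + i,
                           int(y / voxel) + j,
                           int(z / voxel) + k))
    return cells
```

A subsequent placement whose voxel set intersects a stored set is thereby known to collide with an existing part.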
11. The method of claim 1, further comprising:
constructing an adjacency relation between the first virtual prefabricated part model and the second virtual prefabricated part model, wherein the adjacency relation is used for searching a virtual foundation model closest to the first virtual prefabricated part model;
and performing support detection on the first virtual prefabricated part model based on the adjacency relation.
12. The method of claim 11, wherein performing support detection on the first virtual prefabricated part model based on the adjacency relation comprises:
taking the first virtual prefabricated part model as a starting point, and acquiring a current step number between the first virtual prefabricated part model and the nearest virtual foundation model by using the adjacency relation and an attribute value of the support capacity of the first virtual prefabricated part model;
and when the current step number is less than or equal to a preset step number, determining that a supporting relation exists between the first virtual prefabricated part model and the nearest virtual foundation model.
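The step-number search of claims 11-12 amounts to a bounded breadth-first search over the adjacency relation; the following sketch is a hypothetical illustration (graph encoding and names are not from the patent):

```python
from collections import deque

def supported(start, adjacency, foundations, max_steps):
    """Breadth-first search from the placed prefab over the adjacency
    relation; the part is supported when some virtual foundation model
    is reachable within `max_steps` hops (the preset step number)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, steps = queue.popleft()
        if node in foundations:
            return True  # current step number <= preset step number
        if steps == max_steps:
            continue  # do not expand beyond the step budget
        for nxt in adjacency.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return False
```

The preset step number would, per claim 12, be derived from the part's support-capacity attribute value.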
13. The method of claim 1, wherein the predetermined range comprises a predetermined range of the second location or a predetermined range of the current location of the virtual character.
14. An apparatus for constructing a virtual building model, wherein a graphical user interface is provided by executing a software application on a processor of a terminal and rendering it on a touch display of the terminal, content displayed on the graphical user interface at least partially comprising a game scene and a virtual character, the apparatus comprising:
a first adjusting module, configured to adjust the orientation of a first virtual prefabricated part model from a first orientation to a second orientation in response to an orientation adjustment operation on the virtual character, wherein the orientation adjustment operation is used to control the orientation of the virtual character, the first orientation is an initial orientation of the first virtual prefabricated part model, and the second orientation is an orientation corresponding to the orientation adjustment operation;
a second adjusting module, configured to adjust a first position of the first virtual prefabricated part model according to a displacement and/or the second orientation of the virtual character to obtain a second position, wherein the first position is an initial position of the first virtual prefabricated part model, and the second position is a position corresponding to the displacement and/or the second orientation;
a judging module, configured to determine, when the virtual character faces a second virtual prefabricated part model in the second orientation, whether the second virtual prefabricated part model is located within a preset range;
a third adjusting module, configured to adjust the second position and the second orientation according to a third position and a third orientation of the second virtual prefabricated part model when the second virtual prefabricated part model is within the preset range, so as to obtain a target position and a target orientation;
and a building module, configured to place the first virtual prefabricated part model according to the target position and the target orientation, so as to construct the virtual building model.
15. A non-volatile storage medium in which a computer program is stored, wherein the computer program is arranged to perform the method for constructing a virtual building model according to any one of claims 1 to 13 when run.
16. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method of constructing a virtual building model according to any one of claims 1 to 13.
CN202011133417.7A 2020-10-21 2020-10-21 Virtual building model construction method and device and electronic device Pending CN112190944A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011133417.7A CN112190944A (en) 2020-10-21 2020-10-21 Virtual building model construction method and device and electronic device


Publications (1)

Publication Number Publication Date
CN112190944A (en) 2021-01-08

Family

ID=74010551



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113289334A (en) * 2021-05-14 2021-08-24 网易(杭州)网络有限公司 Game scene display method and device
CN117371075A (en) * 2023-10-30 2024-01-09 北京元跃科技有限公司 Model assembling method and device based on AR technology, storage medium and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108355353A (en) * 2017-12-29 2018-08-03 网易(杭州)网络有限公司 Information processing method and device, storage medium, electronic equipment
CN109731329A (en) * 2019-01-31 2019-05-10 网易(杭州)网络有限公司 A kind of determination method and apparatus for the placement location of virtual component in game
CN110152297A (en) * 2019-05-17 2019-08-23 网易(杭州)网络有限公司 Edit methods and device, storage medium, the electronic equipment of virtual resource
CN110197534A (en) * 2019-06-10 2019-09-03 网易(杭州)网络有限公司 Hooking method, device, processor and the terminal of Virtual Building accessory model
CN110262730A (en) * 2019-05-23 2019-09-20 网易(杭州)网络有限公司 Edit methods, device, equipment and the storage medium of game virtual resource
CN111467794A (en) * 2020-04-20 2020-07-31 网易(杭州)网络有限公司 Game interaction method and device, electronic equipment and storage medium
CN111617475A (en) * 2020-05-28 2020-09-04 腾讯科技(深圳)有限公司 Interactive object construction method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination