WO2020114176A1 - Method, device and storage medium for observing a virtual environment - Google Patents

Method, device and storage medium for observing a virtual environment

Info

Publication number
WO2020114176A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
virtual object
virtual
observation
detection
Prior art date
Application number
PCT/CN2019/115623
Other languages
English (en)
French (fr)
Inventor
刘柏君
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority to JP2021514085A (granted as JP7191210B2)
Priority to SG11202103706SA
Priority to KR1020217006432A
Publication of WO2020114176A1
Priority to US17/180,018 (granted as US11783549B2)
Priority to US18/351,780 (published as US20230360343A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/002Specific input/output arrangements not covered by G06F3/01 - G06F3/16
    • G06F3/005Input arrangements through a video camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • Embodiments of the present application relate to the field of virtual environments, and in particular, to a method, device, and storage medium for observing a virtual environment.
  • On the terminal, an application developed with a virtual engine is usually installed.
  • In the application, display elements such as virtual objects, virtual items, and the ground are rendered as models.
  • The virtual items include virtual houses, virtual water towers, virtual hillsides, virtual grasslands, and virtual furniture. Users can control virtual objects to perform operations in the virtual environment.
  • The virtual environment is observed through a camera model with the virtual object as the observation center; the camera model is a three-dimensional model that is located at a certain distance from the virtual object in the virtual environment and whose shooting direction faces the virtual object.
  • The virtual environment usually includes different observation scenes, such as dim scenes, bright scenes, indoor scenes, or outdoor scenes.
  • Observing the virtual environment in the above observation mode may make that observation mode incompatible with some of these observation scenes.
  • For example: in an indoor scene, the view has a high probability of being blocked by indoor furniture; in a dim scene, the observation mode cannot clearly present the virtual items in the virtual environment.
  • These incompatibilities affect combat: during combat, the user needs to adjust the viewing angle of the virtual object repeatedly, or adjust the screen display brightness of the terminal itself.
  • Embodiments of the present application provide a method, device, and storage medium for observing a virtual environment.
  • a method for observing a virtual environment is executed by a terminal.
  • the method includes:
  • receiving a movement operation, where the movement operation is used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and each observation scene corresponding to at least one observation mode for observing the virtual environment;
  • a device for observing a virtual environment includes:
  • a display module configured to display a first environment screen of an application, where the first environment screen includes a virtual object in a first scene, and the first environment screen is a screen for observing the virtual environment in a first observation mode;
  • a receiving module configured to receive a movement operation, where the movement operation is used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and each observation scene corresponding to at least one observation mode for observing the virtual environment;
  • an adjustment module configured to adjust the first observation mode to a second observation mode according to the movement operation, where the first observation mode corresponds to the first scene and the second observation mode corresponds to the second scene;
  • the display module being further configured to display a second environment screen of the application, where the second environment screen includes the virtual object in the second scene, and the second environment screen is a screen for observing the virtual environment in the second observation mode.
  • a terminal includes a processor and a memory, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
  • receiving a movement operation, where the movement operation is used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and each observation scene corresponding to at least one observation mode for observing the virtual environment;
  • a non-volatile computer-readable storage medium stores computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • receiving a movement operation, where the movement operation is used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and each observation scene corresponding to at least one observation mode for observing the virtual environment;
  • FIG. 1 is a structural block diagram of an electronic device provided by an exemplary embodiment of the present application.
  • FIG. 2 is a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
  • FIG. 3 is a schematic diagram of observing a virtual environment through a camera model, provided by an exemplary embodiment of the present application;
  • FIG. 4 is a flowchart of a method for observing a virtual environment provided by an exemplary embodiment of the present application
  • FIG. 5 is a schematic diagram of observing a virtual environment in indoor and outdoor scenes in the related art based on the embodiment shown in FIG. 4;
  • FIG. 6 is a schematic diagram of observing a virtual environment in indoor and outdoor scenes in the present application based on the embodiment shown in FIG. 4;
  • FIG. 7 is a schematic diagram of observing a virtual environment in another indoor and outdoor scene in this application based on the embodiment shown in FIG. 4;
  • FIG. 8 is a schematic diagram of observing a virtual environment in an indoor and outdoor scene in another related art based on the embodiment shown in FIG. 4;
  • FIG. 9 is a schematic diagram of observing a virtual environment in another indoor and outdoor scene in this application based on the embodiment shown in FIG. 4;
  • FIG. 10 is a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application.
  • FIG. 11 is a schematic diagram of vertical ray detection based on the embodiment shown in FIG. 10;
  • FIG. 12 is a schematic diagram of another vertical ray detection based on the embodiment shown in FIG. 10;
  • FIG. 13 is a schematic diagram of another vertical ray detection based on the embodiment shown in FIG. 10;
  • FIG. 14 is a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application.
  • FIG. 15 is a schematic diagram of horizontal ray detection based on the embodiment shown in FIG. 14;
  • FIG. 16 is a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application;
  • FIG. 17 is a structural block diagram of an apparatus for observing a virtual environment provided by an exemplary embodiment of the present application.
  • FIG. 18 is a structural block diagram of an apparatus for observing a virtual environment provided by another exemplary embodiment of the present application.
  • FIG. 19 is a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • Virtual environment: the virtual environment that is displayed (or provided) when an application runs on a terminal.
  • The virtual environment may be a simulated environment of the real world, a semi-simulated semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment.
  • the virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment.
  • The following embodiments illustrate the virtual environment as a three-dimensional virtual environment, but this is not limiting.
  • the virtual environment is also used in a virtual environment battle between at least two virtual characters.
  • the virtual environment is also used for a battle between at least two virtual characters using virtual firearms.
  • the virtual environment is also used to compete with at least two virtual characters using virtual firearms within the range of the target area, and the range of the target area will continue to decrease with the passage of time in the virtual environment.
  • Virtual object: a movable object in the virtual environment.
  • the movable object may be at least one of virtual characters, virtual animals, and cartoon characters.
  • When the virtual environment is a three-dimensional virtual environment, the virtual object is a three-dimensional model created based on skeletal animation technology.
  • Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies a part of the space in the three-dimensional virtual environment.
  • Observation scene: a scene corresponding to at least one observation mode for observing the virtual environment.
  • Optionally, when the observation scene corresponds to at least one observation mode in which the virtual environment is observed from a target perspective of the virtual object, the perspective of each such observation mode is the same, while at least one parameter among the observation angle, the observation distance, and the observation configuration (such as whether the night vision device is turned on) is different; when the observation scene corresponds to at least one observation mode in which the virtual environment is observed at a target observation angle of the virtual object, the observation angle of each such observation mode is the same, while at least one parameter among the perspective, the observation distance, and the observation configuration is different; when the observation scene corresponds to at least one observation mode in which the virtual environment is observed at a target observation distance from the virtual object, the observation distance of each such observation mode is the same, while at least one parameter among the perspective, the observation angle, and the observation configuration is different; and when the observation scene corresponds to at least one observation mode in which the virtual environment is observed with a target observation configuration of the virtual object, the observation configuration of each such observation mode is the same, while at least one parameter among the perspective, the observation angle, and the observation distance is different.
  • the observation scene is a scene corresponding to a specific observation mode for observing the virtual environment.
  • the observation scene corresponds to scene characteristics
  • the observation mode corresponding to the observation scene is a mode set for the scene characteristics
  • the scene characteristics include at least one of light condition characteristics, scene height characteristics, and degree of concentration of virtual objects in the scene.
  • the observation scenes in the virtual environment can be divided into various types, and multiple observation scenes can be superimposed to realize a new observation scene.
  • In some embodiments, the observation scene includes at least one of an indoor scene, an outdoor scene, a dim scene, a bright scene, a house area scene, a mountain scene, an air-raid shelter scene, and an object-stacking scene, where the indoor scene can be superimposed with the dim scene to realize a new dim indoor scene, such as an indoor room without lights,
  • and the house area scene can be superimposed with the mountain scene to realize a new scene of houses on a mountain.
  • Camera model: a three-dimensional model located around the virtual object in the three-dimensional virtual environment.
  • When the first-person perspective is adopted, the camera model is located near the head of the virtual object or at the head of the virtual object.
  • When the third-person perspective is adopted, the camera model can be located behind the virtual object and bound to the virtual object, or located at any position at a preset distance from the virtual object.
  • Through the camera model, the virtual object located in the three-dimensional virtual environment can be observed from different angles. Optionally, when the third-person perspective is an over-the-shoulder perspective, the camera model is located behind the virtual object (for example, behind the head and shoulders of the virtual character).
  • the camera model is not actually displayed in the three-dimensional virtual environment, that is, the camera model cannot be recognized in the three-dimensional virtual environment displayed on the user interface.
  • The terminal in this application may be a desktop computer, a laptop portable computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on.
  • An application program supporting the virtual environment such as an application program supporting the three-dimensional virtual environment, is installed and running on the terminal.
  • the application program may be any of virtual reality application programs, three-dimensional map programs, military simulation programs, TPS games, FPS games, and MOBA games.
  • the application program may be a stand-alone version of the application program, such as a stand-alone version of the 3D game program, or an online version of the application program.
  • FIG. 1 shows a structural block diagram of an electronic device provided by an exemplary embodiment of the present application.
  • the electronic device may specifically be the terminal 100, and the terminal 100 includes an operating system 120 and an application program 122.
  • the operating system 120 is the basic software that provides the application 122 with secure access to computer hardware.
  • the application 122 is an application that supports a virtual environment.
  • the application 122 is an application that supports a three-dimensional virtual environment.
  • The application 122 may be any one of a virtual reality application, a three-dimensional map program, a military simulation program, a third-person shooting game (TPS), a first-person shooting game (FPS), a MOBA game, and a multiplayer gunfight survival game.
  • the application 122 may be a stand-alone version of the application, such as a stand-alone version of the 3D game program.
  • FIG. 2 shows a structural block diagram of a computer system provided by an exemplary embodiment of the present application.
  • the computer system 200 includes: a first device 220, a server 240, and a second device 260.
  • the first device 220 has an application program that supports the virtual environment installed and running.
  • the application can be any of virtual reality applications, three-dimensional map programs, military simulation programs, TPS games, FPS games, MOBA games, and multiplayer shootout survival games.
  • the first device 220 is a device used by the first user.
  • the first user uses the first device 220 to control the first virtual object located in the virtual environment to perform activities.
  • The activities include, but are not limited to, at least one of adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
  • The first virtual object is a first virtual character, such as a simulated human character or an anime character.
  • the first device 220 is connected to the server 240 through a wireless network or a wired network.
  • the server 240 includes at least one of a server, multiple servers, a cloud computing platform, and a virtualization center.
  • the server 240 is used to provide background services for applications that support a three-dimensional virtual environment.
  • Optionally, the server 240 undertakes the primary computing work and the first device 220 and the second device 260 undertake the secondary computing work; or the server 240 undertakes the secondary computing work and the first device 220 and the second device 260 undertake the primary computing work; or the server 240, the first device 220, and the second device 260 perform collaborative computing using a distributed computing architecture.
  • the second device 260 has an application program supporting the virtual environment installed and running.
  • the application may be any of virtual reality applications, three-dimensional map programs, military simulation programs, FPS games, MOBA games, and multiplayer shootout survival games.
  • the second device 260 is a device used by the second user.
  • The second user uses the second device 260 to control the second virtual object located in the virtual environment to perform activities, including but not limited to at least one of adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing.
  • The second virtual object is a second virtual character, such as a simulated human character or an anime character.
  • The first virtual character and the second virtual character are in the same virtual environment.
  • The first virtual character and the second virtual character may belong to the same team or the same organization, have a friend relationship, or have temporary communication rights.
  • The first virtual character and the second virtual character may also belong to different teams, different organizations, or two mutually hostile groups.
  • the application programs installed on the first device 220 and the second device 260 are the same, or the application programs installed on the two devices are the same type of application programs on different control system platforms.
  • the first device 220 may refer to one of multiple devices, and the second device 260 may refer to one of multiple devices. In this embodiment, only the first device 220 and the second device 260 are used as examples.
  • the device types of the first device 220 and the second device 260 are the same or different.
  • The device types include at least one of a game console, a desktop computer, a smart phone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The following embodiments are illustrated with the device being a desktop computer.
  • the number of the above-mentioned devices may be more or less.
  • There may be only one such device, or there may be dozens, hundreds, or more.
  • the embodiments of the present application do not limit the number and types of devices.
  • the camera model is located at a predetermined distance from the virtual object.
  • a virtual object corresponds to a camera model, and the camera model can be rotated using the virtual object as the center of rotation, for example, rotating the camera model using any point of the virtual object as the center of rotation.
  • the camera model not only rotates in angle, but also shifts in displacement.
  • During rotation, the distance between the camera model and the rotation center remains unchanged; that is, the camera model rotates on the surface of a sphere centered on the rotation center.
  • any point of the virtual object may be the head, torso of the virtual object, or any point around the virtual object, which is not limited in the embodiment of the present application.
  • The viewing direction of the camera model is the direction of the normal to the tangent plane of the sphere at the point where the camera model is located, pointing toward the virtual object.
  • the camera model can also observe the virtual environment at a preset angle in different directions of the virtual object.
  • As shown in FIG. 3, a point in the virtual object 31 is determined as the rotation center 32, and the camera model rotates around the rotation center 32.
  • The camera model is configured with an initial position, which is a position above and behind the virtual object (such as a position behind the head).
  • the initial position is position 33.
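  • The spherical-rotation geometry above can be summarized in a short sketch. The following Python snippet is a minimal illustration, assuming a z-up coordinate system as in FIG. 11; the function name and parameter values are not from the patent.

```python
import math

def camera_position(center, radius, yaw_deg, pitch_deg):
    # Place the camera model on a sphere of the given radius around the
    # rotation center; the distance to the center never changes while rotating.
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    cx, cy, cz = center
    position = (cx + radius * math.cos(pitch) * math.cos(yaw),
                cy + radius * math.cos(pitch) * math.sin(yaw),
                cz + radius * math.sin(pitch))  # z is the vertical axis
    # Viewing direction: the normal of the tangent plane at the camera's
    # position on the sphere, pointing toward the virtual object.
    view_dir = tuple((c - p) / radius for p, c in zip(position, center))
    return position, view_dir

# Initial position above and behind the virtual object (position 33 in FIG. 3):
pos, direction = camera_position(center=(0.0, 0.0, 1.6), radius=3.0,
                                 yaw_deg=180.0, pitch_deg=30.0)
```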
  • FIG. 4 shows a flowchart of a method for observing a virtual environment provided by an exemplary embodiment of the present application. The method is illustrated by taking its application to the terminal 100 shown in FIG. 1 as an example. As shown in FIG. 4, the method includes:
  • step S401 a first environment screen of an application program is displayed.
  • the first environment screen is a screen for observing the virtual environment in the first observation mode in the virtual environment.
  • the first environment picture includes virtual objects in the first scene.
  • the virtual object belongs to at least one scene in the virtual environment.
  • The scenes in the virtual environment include at least one of an indoor scene, an outdoor scene, a dim scene, and a bright scene. Since the indoor scene and the outdoor scene are two independent and complementary observation scenes, the virtual object is either in an indoor scene or in an outdoor scene.
  • the first observation mode includes an observation mode corresponding to the first scene.
  • each observation scene corresponds to an observation mode, and the corresponding relationship is preset.
  • When the position of the virtual object corresponds to more than one observation scene, the observation mode for observing the virtual environment may be a mode corresponding to the combination of those observation scenes, or may be one of the observation modes corresponding to those observation scenes.
  • Optionally, priorities can be set for the different observation modes, and the observation mode with the higher priority can be selected according to priority to observe the virtual environment at that position; alternatively, one observation mode can be selected at random among the multiple observation modes to observe the virtual environment. A sketch of the priority rule is given after the following example.
  • For example, the observation mode corresponding to the indoor scene is to observe the virtual environment at a first distance from the virtual object,
  • and the observation mode corresponding to the dim scene is to observe the virtual environment through a night vision device; then, when the virtual object is both indoors and in a dim scene, the virtual environment can be observed through the night vision device at the first distance from the virtual object, or through the night vision device only, at a second distance from the virtual object (the second distance being the default distance).
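  • The snippet below sketches the priority rule described above, picking one observation mode when several observation scenes overlap. The scene names, priorities, and mode parameters are illustrative assumptions, not values from the patent.

```python
# Hypothetical table: each observation scene maps to (priority, mode parameters).
OBSERVATION_MODES = {
    "indoor":  (2, {"distance": 1.5, "night_vision": False}),
    "dim":     (1, {"distance": 3.0, "night_vision": True}),
    "outdoor": (0, {"distance": 3.0, "night_vision": False}),
}

def select_observation_mode(active_scenes):
    # When the virtual object is in several superimposed observation scenes
    # (e.g. indoor + dim), pick the mode whose scene has the highest priority.
    if not active_scenes:
        raise ValueError("the virtual object must be in at least one scene")
    best = max(active_scenes, key=lambda scene: OBSERVATION_MODES[scene][0])
    return OBSERVATION_MODES[best][1]

print(select_observation_mode({"indoor", "dim"}))  # indoor wins: shorter distance
```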
  • Step S402 Receive a movement operation, which is used to transfer the virtual object from the first scene to the second scene.
  • the first scene and the second scene are two different observation scenes.
  • the first scene and the second scene are two mutually complementary observation scenes, that is, if the virtual object is not in the first scene, it is in the second scene.
  • For example, the first scene is an outdoor scene and the second scene is an indoor scene; or the first scene is a bright scene and the second scene is a dim scene; or the first scene is an object-stacking scene and the second scene is a field scene.
  • Transferring the virtual object from the first scene to the second scene according to the movement operation may be implemented as transferring the virtual object from outdoors to indoors according to the movement operation; in that case, the first scene is an outdoor scene and the second scene is an indoor scene.
  • The indoor scene may at the same time be a dim scene, in which case the first scene is a bright scene and the second scene is a dim scene.
  • The movement operation may be generated after the user slides on the touch display screen, or after the user presses a physical key of the mobile terminal.
  • The movement operation may also be an operation corresponding to a signal received by the terminal from an external input device; for example, the user sends a movement signal to the terminal as the movement operation by operating a mouse, or by operating a keyboard.
  • Step S403 Adjust the first observation mode to the second observation mode according to the mobile operation, where the first observation mode corresponds to the first scene and the second observation mode corresponds to the second scene.
  • The parameters of an observation mode include at least one of the observation angle, the observation distance, whether a night vision device is turned on, and the observation perspective (first person or third person).
  • Optionally, the terminal detects the observation scene where the virtual object is located in the virtual environment at preset time intervals. In some embodiments, the detection process includes at least one of the following situations:
  • the first scene is an outdoor scene
  • the second scene is an indoor scene
  • a collision detection method is used to detect the observation scene where the virtual object is located in the virtual environment, and when it is detected that the virtual object moves from the outdoor scene to the indoor scene according to the movement operation, the first observation mode is adjusted to the second observation mode;
  • the first observation mode is a way for the camera model to observe the virtual environment at a first distance from the virtual object
  • the second observation mode is for the camera model to observe the virtual environment at a second distance from the virtual object
  • The camera model is a three-dimensional model that observes the virtual object in the virtual environment, and the first distance is greater than the second distance; that is, the distance between the camera model and the virtual object is adjusted from the first distance to the second distance.
  • In the related art, the observation distance for observing the virtual object is the same in different scenes. Referring to FIG. 5, in the indoor scene, when the virtual object 51 is observed,
  • the distance between the camera model 50 and the virtual object 51 is a;
  • in the outdoor scene, the distance between the camera model 50 and the virtual object 51 is also a when the virtual object 51 is observed.
  • The distance between the camera model 50 and the virtual object 51 may be regarded as the distance between the physical center point of the camera model 50 and that of the virtual object 51, or as any distance between the camera model 50 and the virtual object 51.
  • In the present application, referring to FIG. 6, in the outdoor scene, the distance between the camera model 60 and the virtual object 61 is a when the virtual object 61 is observed,
  • while in the indoor scene the distance between the camera model 60 and the virtual object 61 is b, where b < a.
  • the first observation mode may also be a mode in which the camera model observes the virtual environment from a first perspective
  • the second observation mode is a mode in which the camera model observes the virtual environment from a second perspective.
  • The angle between the direction of the first viewing angle and the horizontal direction in the virtual environment is smaller than the angle between the direction of the second viewing angle and the horizontal direction; that is, the angle at which the camera model observes the virtual object is rotated from the first viewing angle to the second viewing angle according to the movement operation.
  • Referring to FIG. 7, in the outdoor scene, the angle between the camera model 70 and the horizontal direction 73 when observing the virtual object 71 is α,
  • while in the indoor scene the corresponding angle between the camera model 70 and the horizontal direction when observing the virtual object 71 is β, where α < β.
  • the first observation mode may also be a third person observation mode
  • and the second observation mode is the first-person observation mode; that is, the observation perspective is converted from the third-person perspective to the first-person perspective according to the movement operation.
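  • The observation-mode parameters listed above (distance, angle, perspective, night vision) can be grouped into one record. The sketch below shows such a record and the indoor/outdoor switch; all concrete values are assumptions, since the patent only requires that the first distance exceed the second distance and that the first pitch angle be smaller than the second.

```python
from dataclasses import dataclass

@dataclass
class ObservationMode:
    distance: float      # observation distance between camera model and object
    pitch_deg: float     # angle between viewing direction and the horizontal
    first_person: bool   # observation perspective (first vs. third person)
    night_vision: bool   # observation configuration

# Illustrative first/second observation modes (values are assumptions):
FIRST_MODE = ObservationMode(distance=3.0, pitch_deg=30.0,
                             first_person=False, night_vision=False)
SECOND_MODE = ObservationMode(distance=1.5, pitch_deg=55.0,
                              first_person=False, night_vision=False)

def on_scene_change(scene: str) -> ObservationMode:
    # Adjust the observation mode when the movement operation transfers the
    # virtual object between the first (outdoor) and second (indoor) scene.
    return SECOND_MODE if scene == "indoor" else FIRST_MODE
```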
  • the first scene is a bright scene
  • the second scene is a dim scene
  • A color detection method is used to detect the observation scene where the virtual object is located in the virtual environment, and when it is detected that the virtual object moves from the bright scene
  • to the dim scene according to the movement operation, the first observation mode is adjusted to the second observation mode.
  • The first observation mode is an observation mode in which the night vision device is turned off, that is, the virtual object and the virtual environment are not observed through the night vision device,
  • and the second observation mode is an observation mode in which the night vision device is turned on, that is, the virtual environment is observed through the night vision device.
  • Optionally, the color detection method detects the pixels in the display interface;
  • when the detected pixel values satisfy a dim-scene condition, the virtual object is considered to have moved from the first scene to the second scene.
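  • A minimal sketch of such pixel-based color detection follows. The luminance formula, thresholds, and function name are illustrative assumptions, since the patent does not fix a concrete dim-scene condition.

```python
def is_dim_scene(pixels, luma_threshold=0.25, dark_ratio=0.6):
    # Classify the current frame as a dim scene from its pixels.
    # pixels: iterable of (r, g, b) values in [0, 1].
    total = dark = 0
    for r, g, b in pixels:
        luma = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 luminance
        dark += luma < luma_threshold
        total += 1
    return total > 0 and dark / total >= dark_ratio

frame = [(0.05, 0.04, 0.08)] * 90 + [(0.9, 0.9, 0.8)] * 10
night_vision_on = is_dim_scene(frame)  # True: switch to the night-vision mode
```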
  • the first scene is a field scene
  • the second scene is an object-stacking scene
  • The observation scene where the virtual object is located in the virtual environment is detected through a scene-identifier verification method, and when it is detected that the virtual object moves from the field scene to the object-stacking scene according to the movement operation,
  • the first observation mode is adjusted to the second observation mode.
  • Optionally, the coordinates of the position where the virtual object is located correspond to a scene identifier, and the scene where the virtual object is located is verified according to the scene identifier.
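  • The following is a sketch of scene-identifier verification under the assumption that identifiers are stored per grid cell of the map; the grid, cell size, and identifiers below are hypothetical.

```python
# Hypothetical grid of scene identifiers keyed by map cell.
SCENE_GRID = {
    (0, 0): "field",
    (0, 1): "object_stacking",
    (1, 1): "object_stacking",
}

def scene_of(position, cell_size=10.0, default="field"):
    # Verify the observation scene from the coordinates of the virtual
    # object's position: coordinates map to a cell, the cell to an identifier.
    cell = (int(position[0] // cell_size), int(position[1] // cell_size))
    return SCENE_GRID.get(cell, default)

assert scene_of((3.0, 17.0)) == "object_stacking"
```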
  • the first observation mode includes a camera model observing the virtual environment at a first distance from the virtual object
  • the second observation mode includes the camera model observing the virtual environment at a second distance from the virtual object
  • The camera model includes a three-dimensional model that observes the virtual object in the virtual environment, and the first distance is greater than the second distance; that is, the distance between the camera model and the virtual object is adjusted from the first distance to the second distance.
  • the first observation mode may also be a mode in which the camera model observes the virtual environment from a first perspective
  • the second observation mode is a mode in which the camera model observes the virtual environment from a second perspective.
  • The angle between the direction of the first viewing angle and the horizontal direction in the virtual environment is smaller than the angle between the direction of the second viewing angle and the horizontal direction; that is, the angle at which the camera model observes the virtual object is rotated from the first viewing angle to the second viewing angle according to the movement operation.
  • the first observation mode may also be a third person observation mode
  • and the second observation mode is the first-person observation mode; that is, the observation perspective is converted from the third-person perspective to the first-person perspective according to the movement operation.
  • step S404 a second environment screen of the application program is displayed.
  • the second environment screen is a screen for observing the virtual environment in the second observation mode in the virtual environment.
  • the second environment picture includes virtual objects in the second scene.
  • The first environment picture and the second environment picture are described below, taking as an example the case where the first scene is an outdoor scene, the second scene is an indoor scene, and the distance between the camera model and the virtual object is adjusted.
  • First, the environment pictures of the related art in the indoor scene and the outdoor scene are described. Referring to FIG. 8, in the indoor scene, the first picture 81 includes the virtual object 82; from the virtual door 83 and the virtual cabinet 84 it can be seen that the virtual object 82 is in an indoor scene. In the outdoor scene, the second picture 85 includes the virtual object 82,
  • and the virtual object 82 is in an outdoor scene.
  • The second picture 85 also includes a virtual item 87,
  • and the virtual item 87 forms an obstruction below the virtual object 82.
  • Next, the first environment picture and the second environment picture corresponding to the solution of the present application are described.
  • Referring to FIG. 9, in the indoor scene, the first environment picture 91 includes the virtual object 92; from the virtual door 93 and the virtual cabinet 94 it can be seen that the virtual object 92 is in an indoor scene. In the outdoor scene, the second environment picture 95 includes the virtual object 92,
  • and from the virtual cloud 96 it can be seen that the virtual object 92 is in an outdoor scene. While the second picture 85 includes the virtual item 87 that forms an obstruction,
  • the virtual item 87 is not displayed in the second environment picture 95; that is, the virtual item 87 does not block the line of sight of the virtual object or of the camera model.
  • In summary, the method for observing the virtual environment provided in this embodiment changes the way the virtual object is observed in the virtual environment according to the observation scene in which the virtual object is located, so that the observation mode is adapted to the current observation scene. This avoids the problem that, when a single observation mode is used to observe virtual objects in different observation scenes, an unsuitable observation angle, observation distance, or observation configuration affects combat.
  • In an optional embodiment, the first scene is an outdoor scene and the second scene is an indoor scene.
  • The terminal detects the observation scene where the virtual object is located in the virtual environment through a collision detection method, and the collision detection method is vertical ray detection.
  • FIG. 10 shows a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application. The method is illustrated by taking its application to the terminal 100 shown in FIG. 1 as an example. As shown in FIG. 10, the method includes:
  • step S1001 the first environment screen of the application program is displayed.
  • the first environment picture includes a virtual object in the first scene, and the first environment picture is a picture of the virtual environment observed in the virtual environment in the first observation mode.
  • the virtual object belongs to at least one scene in the virtual environment.
  • The observation scenes in the virtual environment include the indoor scene and the outdoor scene. Since the indoor scene and the outdoor scene are two independent and complementary observation scenes, if the virtual object is not in the indoor scene, it is in the outdoor scene.
  • step S1002 a mobile operation is received.
  • The movement operation is used to transfer the virtual object from the first scene to the second scene, where the first scene is an outdoor scene and the second scene is an indoor scene; that is, the movement operation is used to move the virtual object from the outdoor scene to the indoor scene.
  • In step S1003, using a target point in the virtual object as a starting point, vertical ray detection is performed along the vertically upward direction in the virtual environment.
  • The target point may be any one of the physical center point of the virtual object, a point corresponding to the head, a point corresponding to the arm, or a point corresponding to the leg; it may also be any other point in the virtual object, or any point corresponding to the virtual object outside the virtual object.
  • Optionally, the vertical ray detection may also be performed with a ray cast vertically downward in the virtual environment.
  • Referring to FIG. 11, the coordinate system 111 is the three-dimensional coordinate system applied in the virtual environment, where the direction of the z-axis is the vertically upward direction in the virtual environment. The terminal performs detection with a vertical ray 114 cast along the vertically upward direction from the target point 113 of the virtual object 112.
  • The vertical ray 114 is shown in FIG. 11 for illustration only; in an actual application scenario, the vertical ray 114 may not be displayed in the environment picture.
  • Step S1004 Receive the first detection result returned after performing vertical ray detection.
  • The first detection result is used to represent the virtual object collided with in the vertically upward direction of the virtual object.
  • the first detection result includes the object identification of the first virtual object collided by the vertical ray detection, and/or the length of the ray when the vertical ray detection collides with the first virtual object.
  • When the vertical ray does not collide with any virtual object, the first detection result is empty.
  • Step S1005 Determine the observation scene where the virtual object is located according to the first detection result.
  • the way that the terminal determines the observation scene where the virtual object is located according to the first detection result includes any one of the following ways:
  • the first detection result includes the object identifier of the first virtual object collided by the vertical ray detection, then when the object identifier in the first detection result is the virtual house identifier, the terminal determines that the observation scene where the virtual object is located is indoor Scenes.
  • the terminal may determine the observation scene in which the virtual object is located It is an outdoor scene, that is, when the object identifier in the first detection result is other than the virtual house identifier, the terminal determines that the observation scene where the virtual object is located is an outdoor scene.
  • Referring to FIG. 12, the virtual object 120 is in an indoor scene: a vertical ray is cast vertically upward from the target point 121 of the virtual object 120, and the vertical ray 122 returns after colliding with a virtual house,
  • so the terminal determines that the virtual object 120 is inside a virtual house, that is, in an indoor scene. By contrast, the virtual object 130 in FIG. 13 is in an outdoor scene: vertical ray detection is performed vertically upward from the target point 131 of the virtual object 130, and since the vertical ray 132 does not collide with any virtual object, it returns a null value, and it is determined that the virtual object 130 is in an outdoor scene.
  • In another way, the first detection result includes the length of the ray when the vertical ray detection collides with the first virtual object; when the ray length in the first detection result is less than or equal to a preset length, the terminal determines that the observation scene where the virtual object is located is an indoor scene, and when the ray length in the first detection result exceeds the preset length, the terminal determines that the observation scene where the virtual object is located is an outdoor scene.
  • For example, when the preset length is 2 m, if the ray length in the first detection result is less than or equal to 2 m, the terminal determines that the virtual object is in an indoor scene; when the ray length in the first detection result exceeds 2 m, the terminal determines that the virtual object is in an outdoor scene.
  • The above steps S1003 to S1005 are performed throughout the display of the environment picture; that is, the observation scene where the virtual object is located is detected for every frame of the environment picture.
  • For example, when each second includes 30 frames of the environment picture, the terminal performs 30 detections per second of the observation scene where the virtual object is located.
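  • The per-frame classification of steps S1003 to S1005 can be sketched as follows. The engine raycast itself is not modeled, only its returned detection result; the "house" identifier prefix and the 2 m preset length come from the examples above, and the two alternative criteria the embodiment lists (house identifier, ray length) are combined here for brevity.

```python
from typing import NamedTuple, Optional

class HitResult(NamedTuple):
    object_id: str     # identifier of the first virtual object hit
    ray_length: float  # distance travelled by the ray before the collision

def classify_by_vertical_ray(hit: Optional[HitResult],
                             preset_length: float = 2.0) -> str:
    # Steps S1003-S1005: a ray is cast vertically upward from the target
    # point; the scene is classified from the first detection result.
    if hit is None:                        # null result: nothing overhead
        return "outdoor"
    if hit.object_id.startswith("house"):  # virtual house identifier
        return "indoor"
    if hit.ray_length <= preset_length:    # ceiling within the preset length
        return "indoor"
    return "outdoor"

assert classify_by_vertical_ray(HitResult("house_12", 1.4)) == "indoor"
assert classify_by_vertical_ray(None) == "outdoor"
```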
  • Step S1006 when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation, the first observation mode is adjusted to the second observation mode.
  • the first observation mode corresponds to the first scene
  • the second observation mode corresponds to the second scene
  • Step S1007 Display the second environment screen of the application.
  • the second environment picture includes virtual objects in the second scene, and the second environment picture is a picture of the virtual environment observed in the virtual environment in the second observation mode.
  • In summary, the method for observing the virtual environment provided in this embodiment changes the way the virtual object is observed in the virtual environment according to the observation scene in which the virtual object is located, so that the observation mode is adapted to the current observation scene. This avoids the problem that, when a single observation mode is used to observe virtual objects in different observation scenes, an unsuitable observation angle, observation distance, or observation configuration affects combat.
  • In addition, the method provided in this embodiment determines the observation scene where the virtual object is located by vertical ray detection, detecting the observation scene in a convenient and accurate manner,
  • and thereby avoids the problem that an unsuitable observation angle, observation distance, or observation configuration under a single observation mode affects combat.
  • In another optional embodiment, the first scene is an outdoor scene and the second scene is an indoor scene.
  • The terminal detects the observation scene where the virtual object is located in the virtual environment through a collision detection method, and the collision detection method is horizontal ray detection.
  • FIG. 14 shows a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application. The method is illustrated by taking its application to the terminal 100 shown in FIG. 1 as an example. As shown in FIG. 14, the method includes:
  • Step S1401 Display the first environment screen of the application.
  • the first environment picture includes a virtual object in the first scene, and the first environment picture is a picture of the virtual environment observed in the virtual environment in the first observation mode.
  • the virtual object belongs to at least one observation scene in the virtual environment.
  • The observation scenes in the virtual environment include the indoor scene and the outdoor scene. Since the indoor scene and the outdoor scene are two independent and complementary observation scenes, if the virtual object is not in the indoor scene, it is in the outdoor scene.
  • Step S1402 Receive a mobile operation.
  • The movement operation is used to transfer the virtual object from the first scene to the second scene, where the first scene is an outdoor scene and the second scene is an indoor scene; that is, the movement operation is used to move the virtual object from the outdoor scene to the indoor scene.
  • In step S1403, using a target point in the virtual object as a starting point, at least three detection rays with mutually different directions are cast along the horizontal direction in the virtual environment.
  • The target point may be any one of the physical center point of the virtual object, a point corresponding to the head, a point corresponding to the arm, or a point corresponding to the leg; it may also be any other point in the virtual object, or any point corresponding to the virtual object outside the virtual object.
  • Optionally, the included angle between every two of the at least three detection rays is greater than a preset included angle.
  • For example, when the minimum included angle between every two detection rays is 90°, there are at most four detection rays; with three detection rays, the included angle between every two rays may be 120°, or two of the included angles may be 90° and the third 180°, or any other combination in which every included angle is greater than or equal to 90°.
  • Schematically, FIG. 15 shows a top view of the virtual object 1501: with the target point 1502 of the virtual object 1501 as the starting point, a detection ray 1503, a detection ray 1504, and a detection ray 1505 are cast in the horizontal direction, where the angle between the detection ray 1503 and the detection ray 1504 is 90°, the angle between the detection ray 1504 and the detection ray 1505 is 110°, and the angle between the detection ray 1503 and the detection ray 1505 is 160°.
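  • A sketch of building such a horizontal ray fan follows; it simply spaces the rays evenly, which satisfies the pairwise-angle constraint above, and the function name and parameters are illustrative assumptions.

```python
import math

def horizontal_ray_directions(count=3, min_angle_deg=90.0):
    # Build `count` horizontal detection rays around the target point whose
    # pairwise included angles are all >= min_angle_deg. Even spacing at
    # 360/count degrees satisfies the constraint whenever
    # 360/count >= min_angle_deg (hence at most four rays for a 90° minimum).
    if 360.0 / count < min_angle_deg:
        raise ValueError("too many rays for the requested minimum angle")
    directions = []
    for i in range(count):
        a = math.radians(i * 360.0 / count)
        directions.append((math.cos(a), math.sin(a)))  # unit vector, x-y plane
    return directions

rays = horizontal_ray_directions()  # three horizontal rays, 120° apart
```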
  • Step S1404: Receive a second detection result returned after horizontal ray detection is performed through the at least three detection rays.
  • The second detection result is used to represent the virtual objects with which the detection rays collide in the horizontal direction.
  • Step S1405 Determine the observation scene where the virtual object is located according to the second detection result.
  • the manner of determining the observation scene where the virtual object is located according to the second detection result includes any one of the following ways:
  • In one way, the second detection result includes the ray length at which each of the at least three detection rays collides with a first virtual object. If, among the at least three detection rays, not less than half of the rays collide with a first virtual object within a preset length, the terminal can determine that the virtual object is in an indoor scene; if, among the at least three detection rays, more than half of the rays collide at lengths exceeding the preset length, the terminal can determine that the observation scene where the virtual object
  • is located is an outdoor scene;
  • In another way, the second detection result includes the object identifiers of the first virtual objects collided with by the at least three detection rays. If, among the at least three detection rays, the object identifier of the first virtual object collided with by not less than half of the rays is a house identifier,
  • the terminal can determine that the virtual object is in an indoor scene; if the object identifier of the first virtual object collided with by more than half of the detection rays is not a house identifier, the terminal can determine that the virtual object is in an outdoor scene.
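  • The half-of-the-rays rule can be sketched as a majority vote over the second detection result. The snippet below combines the two criteria listed above (house identifier and preset ray length) into one vote per ray, which is an assumption made for brevity.

```python
def classify_by_horizontal_rays(hits, preset_length=2.0):
    # Majority vote over the second detection result: each entry in `hits`
    # is None (no collision) or an (object_id, ray_length) tuple. A ray
    # votes "indoor" when it hits a house identifier within the preset length.
    indoor_votes = 0
    for hit in hits:
        if hit is None:
            continue
        object_id, ray_length = hit
        if object_id.startswith("house") and ray_length <= preset_length:
            indoor_votes += 1
    # Not less than half of the (at least three) rays must agree.
    return "indoor" if indoor_votes * 2 >= len(hits) else "outdoor"

print(classify_by_horizontal_rays([("house_3", 1.1), ("house_3", 1.8), None]))
```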
  • Step S1406 when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation, the first observation mode is adjusted to the second observation mode.
  • the first observation mode corresponds to the first scene
  • the second observation mode corresponds to the second scene
  • Step S1407 displaying the second environment screen of the application.
  • the second environment picture includes virtual objects in the second scene, and the second environment picture is a picture of the virtual environment observed in the virtual environment in the second observation mode.
  • In summary, the method for observing the virtual environment provided in this embodiment changes the way the virtual object is observed in the virtual environment according to the observation scene in which the virtual object is located, so that the observation mode is adapted to the current observation scene. This avoids the problem that, when a single observation mode is used to observe virtual objects in different observation scenes, an unsuitable observation angle, observation distance, or observation configuration affects combat.
  • In addition, the method provided in this embodiment determines the observation scene where the virtual object is located by horizontal ray detection, detecting the observation scene in a convenient and accurate manner,
  • and thereby avoids the problem that an unsuitable observation angle, observation distance, or observation configuration under a single observation mode affects combat.
  • FIG. 16 shows a flowchart of a method for observing a virtual environment provided by another exemplary embodiment of the present application. As shown in FIG. 16, the method includes:
  • Step S1601 The client detects the observation scene where the virtual object is located for each frame of image.
  • For example, when each second includes 30 frames of the environment picture, the terminal performs 30 detections per second of the observation scene where the virtual object is located.
  • Step S1602 the user controls the virtual object in the client to enter the room.
  • the terminal receives a movement operation, which is used to control the movement of the virtual object in the virtual environment.
  • In step S1603, the client detects through ray detection that the virtual object is in an indoor scene.
  • Step S1604 Adjust the distance between the camera model and the virtual object from the first distance to the second distance.
  • the first distance is greater than the second distance, that is, when the virtual object moves from the outdoor scene to the indoor scene, the distance between the camera model and the virtual object is reduced.
  • Step S1605 the user controls the virtual object in the client to move to the outdoor.
  • Step S1606 the client detects that the virtual object is in the outdoor scene through the ray.
  • Step S1607 Adjust the distance between the camera model and the virtual object from the second distance to the first distance.
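  • The whole loop of FIG. 16 can be sketched as follows; the concrete distances and the per-frame scene source are assumptions, standing in for the ray detection of steps S1601, S1603, and S1606.

```python
import itertools

FIRST_DISTANCE, SECOND_DISTANCE = 3.0, 1.5  # first distance > second distance

def camera_distance_per_frame(scenes):
    # For every frame the client re-detects the observation scene and moves
    # the camera model in to SECOND_DISTANCE indoors (S1604) and back out to
    # FIRST_DISTANCE outdoors (S1607).
    for scene in scenes:
        yield SECOND_DISTANCE if scene == "indoor" else FIRST_DISTANCE

# One second outdoors, one indoors, one outdoors again, at 30 frames per second:
frames = itertools.chain(["outdoor"] * 30, ["indoor"] * 30, ["outdoor"] * 30)
distances = list(camera_distance_per_frame(frames))
assert distances[0] == 3.0 and distances[45] == 1.5 and distances[75] == 3.0
```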
  • In summary, the method for observing the virtual environment provided in this embodiment changes the way the virtual object is observed in the virtual environment according to the observation scene in which the virtual object is located, so that the observation mode is adapted to the current observation scene. This avoids the problem that, when a single observation mode is used to observe virtual objects in different observation scenes, an unsuitable observation angle, observation distance, or observation configuration affects combat.
  • In addition, the method provided in this embodiment shortens the distance between the camera model and the virtual object when the virtual object is in an indoor scene, reducing the situations in which virtual items block the line of sight.
  • FIG. 17 is a structural block diagram of an apparatus for observing a virtual environment provided by an exemplary embodiment of the present application.
  • the apparatus may be implemented in the terminal 100 shown in FIG. 1.
  • the apparatus includes:
  • The display module 1710 is configured to display a first environment screen of an application program, where the first environment screen includes a virtual object in the first scene, and the first environment screen is a screen for observing the virtual environment in a first observation mode.
  • The receiving module 1720 is configured to receive a movement operation, and the movement operation is used to transfer the virtual object from the first scene to the second scene.
  • The first scene and the second scene are two different observation scenes, and each observation scene corresponds to at least one observation mode for observing the virtual environment.
  • the adjustment module 1730 is configured to adjust the first observation mode to the second observation mode according to the movement operation, where the first observation mode corresponds to the first scene and the second observation mode corresponds to the second scene.
  • The display module 1710 is further configured to display a second environment screen of the application, where the second environment screen includes the virtual object in the second scene, and the second environment screen is a screen for observing the virtual environment in the second observation mode.
  • the first scene includes an outdoor scene
  • the second scene includes an indoor scene
  • the adjustment module 1730 includes:
  • the detection unit 1731 is configured to detect the observation scene where the virtual object is located in the virtual environment through the collision detection method.
  • the adjusting unit 1732 is configured to adjust the first observation mode to the second observation mode when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation.
  • the first observation mode includes a camera model observing the virtual environment at a first distance from the virtual object
  • the second observation mode includes the camera model observing the virtual environment at a second distance from the virtual object
  • the camera model includes a three-dimensional model for observation around the virtual object in the virtual environment, and the first distance is greater than the second distance.
  • the adjusting unit 1732 is also used to adjust the distance between the camera model and the virtual object from the first distance to the second distance.
  • the first observation mode includes a camera model observing the virtual environment from a first perspective
  • the second observation mode includes a camera model observing the virtual environment from a second perspective
  • the camera model includes a three-dimensional model for observation around the virtual object, and the angle between the direction of the first perspective and the horizontal direction in the virtual environment is smaller than the angle between the direction of the second perspective and the horizontal direction; the adjusting unit 1732 is further configured to rotate the angle at which the camera model observes the virtual object from the first perspective to the second perspective according to the movement operation, as sketched below.
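  • As a rough illustration of this perspective adjustment, the following sketch places a camera at a given distance and pitch from a target; the concrete angles and distances are invented for the example and are not values from this application.

```python
import math

def camera_position(target, distance, pitch_deg):
    """Place the camera so that its viewing direction toward `target`
    makes an angle of `pitch_deg` with the horizontal direction."""
    pitch = math.radians(pitch_deg)
    dx = distance * math.cos(pitch)  # horizontal offset behind the target
    dy = distance * math.sin(pitch)  # height offset above the target
    x, y, z = target
    return (x - dx, y + dy, z)       # camera looks along +x toward the target

# First perspective: smaller angle with the horizontal; second perspective: larger.
print(camera_position((0.0, 0.0, 0.0), 8.0, 15.0))  # e.g. a first-perspective pose
print(camera_position((0.0, 0.0, 0.0), 4.0, 45.0))  # e.g. a second-perspective pose
```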
  • the collision detection method includes vertical ray detection; the detection unit 1731 is further configured to perform vertical ray detection along the vertically upward direction in the virtual environment, taking a target point in the virtual object as the starting point, and to receive the first detection result returned after the vertical ray detection.
  • the first detection result indicates the virtual item hit in the vertically upward direction of the virtual object; the observation scene in which the virtual object is located is determined according to the first detection result.
  • the first detection result includes the object identifier of the first virtual item hit by the vertical ray detection; the detection unit 1731 is further configured to determine that the observation scene in which the virtual object is located is an indoor scene when the object identifier in the first detection result is a virtual house identifier, and to determine that the observation scene is an outdoor scene when the object identifier in the first detection result is an identifier other than the virtual house identifier.
  • the first detection result includes the length of the ray when the vertical ray detection hits the first virtual item; the detection unit 1731 is further configured to determine that the observation scene in which the virtual object is located is an indoor scene when the ray length in the first detection result is less than or equal to a preset length, and to determine that it is an outdoor scene when the ray length exceeds the preset length. A sketch of this check follows.
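  • A compact sketch of the vertical-ray classification: the `raycast_up` placeholder and the 2 m preset length are assumptions of this sketch (the preset length is motivated later in the text by the maximum story height of a house); both the identifier rule and the length rule described above are shown.

```python
PRESET_LENGTH = 2.0  # hypothetical preset length, e.g. a maximum story height in meters

def raycast_up(world, origin):
    """Placeholder for the engine raycast: returns (object_id, ray_length)
    for the first virtual item hit straight above `origin`, or None."""
    return world.get(origin)

def classify_by_vertical_ray(world, target_point):
    hit = raycast_up(world, target_point)
    if hit is None:
        return "outdoor"              # nothing overhead: outdoor scene
    object_id, length = hit
    if object_id == "virtual_house":  # identifier rule
        return "indoor"
    return "indoor" if length <= PRESET_LENGTH else "outdoor"  # length rule

world = {(0, 0): ("virtual_house", 1.8), (5, 5): ("virtual_cloud", 120.0)}
print(classify_by_vertical_ray(world, (0, 0)))  # indoor (house identifier)
print(classify_by_vertical_ray(world, (5, 5)))  # outdoor (cloud, far overhead)
print(classify_by_vertical_ray(world, (9, 9)))  # outdoor (empty result)
```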
  • the collision detection method includes horizontal ray detection; the detection unit 1731 is further configured to cast, from a target point in the virtual object as the starting point, at least three detection rays in mutually different directions along the horizontal direction in the virtual environment, the included angle between every two detection rays being greater than a preset included angle; to receive the second detection result returned by the horizontal ray detection through the at least three detection rays, the second detection result indicating the virtual items hit by the detection rays in the horizontal direction; and to determine the observation scene in which the virtual object is located according to the second detection result.
  • the second detection result includes the ray lengths at which the at least three detection rays hit their first virtual items; the detection unit 1731 is further configured to determine that the virtual object is in the indoor scene when, among the at least three detection rays, no fewer than half hit their first virtual item within the preset length, and to determine that the virtual object is in the outdoor scene when more than half hit their first virtual item beyond the preset length. A majority-vote sketch follows.
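  • The majority vote over the horizontal detection rays can be sketched as follows; the `horizontal_ray_hit` placeholder, the threshold, and the ray directions are illustrative assumptions, not part of this application.

```python
import math

PRESET_LENGTH = 2.0  # hypothetical threshold for a "near" horizontal hit

def horizontal_ray_hit(walls, direction):
    """Placeholder raycast: ray length to the first virtual item hit when
    casting in `direction` (degrees), or math.inf when nothing is hit."""
    return walls.get(direction, math.inf)

def classify_by_horizontal_rays(walls, directions):
    lengths = [horizontal_ray_hit(walls, d) for d in directions]
    near_hits = sum(1 for length in lengths if length <= PRESET_LENGTH)
    # No fewer than half of the rays hitting within the preset length -> indoor.
    return "indoor" if near_hits * 2 >= len(lengths) else "outdoor"

directions = (0.0, 120.0, 240.0)  # three rays with pairwise angles of 120 degrees
walls = {0.0: 1.2, 120.0: 1.5}    # two rays hit nearby items, the third escapes
print(classify_by_horizontal_rays(walls, directions))  # indoor
```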
  • It should be noted that the receiving module 1720 and the adjustment module 1730 in the above embodiments may be implemented by a processor, or by a processor cooperating with a memory; the display module 1710 in the above embodiments may be implemented by a display screen, or by a processor cooperating with a display screen.
  • FIG. 19 shows a structural block diagram of a terminal 1900 provided by an exemplary embodiment of the present invention.
  • the terminal 1900 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer.
  • the terminal 1900 may also be called other names such as user equipment, portable terminal, laptop terminal, and desktop terminal.
  • the terminal 1900 includes a processor 1901 and a memory 1902.
  • the processor 1901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on.
  • the processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array).
  • the processor 1901 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in a wake-up state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state.
  • the processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen.
  • the processor 1901 may further include an AI (Artificial Intelligence) processor, which is used to process computing operations related to machine learning.
  • the memory 1902 may include one or more computer-readable storage media, which may be non-transitory.
  • the memory 1902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1902 is used to store at least one instruction, which is executed by the processor 1901 to implement the method for observing a virtual environment provided by the method embodiments in the present application.
  • the terminal 1900 may optionally include a peripheral device interface 1903 and at least one peripheral device.
  • the processor 1901, the memory 1902, and the peripheral device interface 1903 may be connected by a bus or a signal line.
  • Each peripheral device may be connected to the peripheral device interface 1903 through a bus, a signal line, or a circuit board.
  • the peripheral device includes at least one of a radio frequency circuit 1904, a touch display screen 1905, a camera 1906, an audio circuit 1907, a positioning component 1908, and a power supply 1909.
  • the peripheral device interface 1903 may be used to connect at least one peripheral device related to I/O (Input/Output) to the processor 1901 and the memory 1902.
  • In some embodiments, the processor 1901, the memory 1902, and the peripheral device interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral device interface 1903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • the radio frequency circuit 1904 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals.
  • the radio frequency circuit 1904 communicates with a communication network and other communication devices through electromagnetic signals.
  • the radio frequency circuit 1904 converts the electrical signal into an electromagnetic signal for transmission, or converts the received electromagnetic signal into an electrical signal.
  • the radio frequency circuit 1904 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on.
  • the radio frequency circuit 1904 can communicate with other terminals through at least one wireless communication protocol.
  • the wireless communication protocol includes but is not limited to: World Wide Web, Metropolitan Area Network, Intranet, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity, wireless fidelity) networks.
  • the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
  • the display screen 1905 is used to display a UI (User Interface).
  • the UI may include graphics, text, icons, video, and any combination thereof.
  • the display screen 1905 also has the ability to collect touch signals on or above the surface of the display screen 1905.
  • the touch signal can be input to the processor 1901 as a control signal for processing.
  • the display screen 1905 can also be used to provide virtual buttons and/or virtual keyboards, also called soft buttons and/or soft keyboards.
  • In some embodiments, there may be one display screen 1905, provided on the front panel of the terminal 1900; in other embodiments, there may be at least two display screens 1905, respectively provided on different surfaces of the terminal 1900 or in a folded design; in still other embodiments, the display screen 1905 may be a flexible display screen provided on a curved or folding surface of the terminal 1900. The display screen 1905 may even be set as a non-rectangular irregular figure, that is, a special-shaped screen.
  • the display screen 1905 may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
  • the camera assembly 1906 is used to collect images or videos.
  • the camera assembly 1906 includes a front camera and a rear camera.
  • the front camera is set on the front panel of the terminal, and the rear camera is set on the back of the terminal.
  • In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blur function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions.
  • the camera assembly 1906 may also include a flash.
  • the flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and may be used for light compensation at different color temperatures.
  • the audio circuit 1907 may include a microphone and a speaker.
  • the microphone is used to collect sound waves of the user and the environment, and convert the sound waves into electrical signals and input them to the processor 1901 for processing, or input them to the radio frequency circuit 1904 to implement voice communication.
  • the microphone can also be an array microphone or an omnidirectional acquisition microphone.
  • the speaker is used to convert the electrical signal from the processor 1901 or the radio frequency circuit 1904 into sound waves.
  • the speaker can be a traditional thin-film speaker or a piezoelectric ceramic speaker.
  • When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as distance measurement.
  • the audio circuit 1907 may also include a headphone jack.
  • the positioning component 1908 is used to locate the current geographic location of the terminal 1900 to implement navigation or LBS (Location Based Service, location-based service).
  • the positioning component 1908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of Europe.
  • the power supply 1909 is used to supply power to various components in the terminal 1900.
  • the power source 1909 may be alternating current, direct current, disposable batteries, or rechargeable batteries.
  • the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery.
  • the wired rechargeable battery is a battery charged through a wired line
  • the wireless rechargeable battery is a battery charged through a wireless coil.
  • the rechargeable battery can also be used to support fast charging technology.
  • the terminal 1900 further includes one or more sensors 1910.
  • the one or more sensors 1910 include, but are not limited to: an acceleration sensor 1911, a gyro sensor 1912, a pressure sensor 1913, a fingerprint sensor 1914, an optical sensor 1915, and a proximity sensor 1916.
  • the acceleration sensor 1911 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1900.
  • the acceleration sensor 1911 can be used to detect the components of gravity acceleration on three coordinate axes.
  • the processor 1901 may control the touch display 1905 to display the user interface in a landscape view or a portrait view according to the gravity acceleration signal collected by the acceleration sensor 1911.
  • the acceleration sensor 1911 can also be used for game or user movement data collection.
  • the gyro sensor 1912 can detect the body direction and rotation angle of the terminal 1900, and the gyro sensor 1912 can cooperate with the acceleration sensor 1911 to collect a 3D action of the user on the terminal 1900.
  • the processor 1901 can realize the following functions based on the data collected by the gyro sensor 1912: motion sensing (such as changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
  • the pressure sensor 1913 may be disposed on the side frame of the terminal 1900 and/or the lower layer of the touch display 1905.
  • the processor 1901 can perform left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 1913.
  • the processor 1901 controls the operability control on the UI interface according to the user's pressure operation on the touch display 1905.
  • the operability control includes at least one of a button control, a scroll bar control, an icon control, and a menu control.
  • the fingerprint sensor 1914 is used to collect the user's fingerprint, and the processor 1901 recognizes the user's identity according to the fingerprint collected by the fingerprint sensor 1914, or the fingerprint sensor 1914 recognizes the user's identity based on the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 1901 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings.
  • the fingerprint sensor 1914 may be provided on the front, back, or side of the terminal 1900. When a physical button or manufacturer logo is provided on the terminal 1900, the fingerprint sensor 1914 may be integrated with the physical button or manufacturer logo.
  • the optical sensor 1915 is used to collect the ambient light intensity.
  • the processor 1901 may control the display brightness of the touch display 1905 according to the ambient light intensity collected by the optical sensor 1915. Specifically, when the ambient light intensity is high, the display brightness of the touch display 1905 is turned up; when the ambient light intensity is low, the display brightness of the touch display 1905 is turned down.
  • the processor 1901 can also dynamically adjust the shooting parameters of the camera assembly 1906 according to the ambient light intensity collected by the optical sensor 1915.
  • the proximity sensor 1916, also called a distance sensor, is usually provided on the front panel of the terminal 1900.
  • the proximity sensor 1916 is used to collect the distance between the user and the front of the terminal 1900.
  • When the proximity sensor 1916 detects that the distance between the user and the front of the terminal 1900 gradually decreases, the processor 1901 controls the touch display 1905 to switch from the screen-on state to the screen-off state; when the proximity sensor 1916 detects that the distance between the user and the front of the terminal 1900 gradually increases, the processor 1901 controls the touch display 1905 to switch from the screen-off state to the screen-on state.
  • A person skilled in the art may understand that the structure shown in FIG. 19 does not constitute a limitation on the terminal 1900, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
  • An embodiment of the present application also provides a terminal for observing a virtual environment.
  • the terminal includes a processor and a memory, and the memory stores computer-readable instructions.
  • When the computer-readable instructions are executed by the processor, the processor performs the steps of the above method for observing a virtual environment.
  • the steps of the method for observing the virtual environment may be the steps in the method for observing the virtual environment of the foregoing embodiments.
  • An embodiment of the present application further provides a computer-readable storage medium that stores computer-readable instructions.
  • the processor When the computer-readable instructions are executed by a processor, the processor is caused to perform the steps of the above method for observing a virtual environment.
  • the steps of the method for observing the virtual environment may be the steps in the method for observing the virtual environment of the foregoing embodiments.
  • A person of ordinary skill in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing related hardware, and the program may be stored in a computer-readable storage medium.
  • the computer-readable storage medium may be the computer-readable storage medium included in the memory in the foregoing embodiments, or a computer-readable storage medium that exists separately and is not installed in the terminal.
  • At least one instruction, at least one program, a code set, or an instruction set is stored in the computer-readable storage medium, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for observing a virtual environment described in any one of FIG. 4, FIG. 10, FIG. 14, and FIG. 16.
  • the computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a solid state drive (SSD), an optical disc, or the like.
  • the random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM).
  • A person of ordinary skill in the art may understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A method for observing a virtual environment, including: displaying a first environment screen of an application program, the first environment screen including a virtual object in a first scene; receiving a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene; adjusting a first observation manner to a second observation manner according to the movement operation; and displaying a second environment screen of the application program, the second environment screen including the virtual object in the second scene.

Description

Method, Device, and Storage Medium for Observing a Virtual Environment
This application claims priority to Chinese Patent Application No. 201811478458.2, entitled "Method, Device, and Storage Medium for Observing a Virtual Environment", filed with the Chinese Patent Office on December 5, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of this application relate to the field of virtual environments, and in particular, to a method, device, and storage medium for observing a virtual environment.
Background
Smart terminals are usually installed with application programs developed with a virtual engine. In an application program supporting a virtual environment, display elements such as virtual objects, virtual items, and the ground are implemented as models. The virtual items include virtual houses, virtual water towers, virtual hillsides, virtual grass, virtual furniture, and the like, and a user can control a virtual object to engage in virtual combat in the virtual environment.
Usually, the virtual environment is observed through a camera model with the virtual object as the observation center, the camera model being a three-dimensional model that is located at a certain distance from the virtual object in the virtual environment and whose shooting direction faces the virtual object.
However, since a virtual environment usually includes different observation scenes, such as a dark scene, a bright scene, an indoor scene, or an outdoor scene, observing the virtual environment in the above manner leads to incompatibility of the observation manner across multiple observation scenes. For example, in an indoor scene there is a high probability that the line of sight is blocked by indoor furniture, and in a dark scene the virtual items in the virtual environment cannot be presented clearly. Such incompatibilities affect the combat process, and the user has to repeatedly adjust the observation angle of the virtual object or the screen display brightness of the terminal itself.
Summary
The embodiments of this application provide a method, device, and storage medium for observing a virtual environment.
A method for observing a virtual environment, performed by a terminal, the method including:
displaying a first environment screen of an application program, the first environment screen including a virtual object in a first scene, and the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment;
receiving a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and an observation scene corresponding to at least one observation manner of observing the virtual environment;
adjusting the first observation manner to a second observation manner according to the movement operation, wherein the first observation manner corresponds to the first scene, and the second observation manner corresponds to the second scene; and
displaying a second environment screen of the application program, the second environment screen including the virtual object in the second scene, and the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
A device for observing a virtual environment, the device including:
a display module, configured to display a first environment screen of an application program, the first environment screen including a virtual object in a first scene, and the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment;
a receiving module, configured to receive a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and an observation scene corresponding to at least one observation manner of observing the virtual environment;
an adjustment module, configured to adjust the first observation manner to a second observation manner according to the movement operation, wherein the first observation manner corresponds to the first scene, and the second observation manner corresponds to the second scene; and
the display module being further configured to display a second environment screen of the application program, the second environment screen including the virtual object in the second scene, and the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
A terminal, including a processor and a memory, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
displaying a first environment screen of an application program, the first environment screen including a virtual object in a first scene, and the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment;
receiving a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and an observation scene corresponding to at least one observation manner of observing the virtual environment;
adjusting the first observation manner to a second observation manner according to the movement operation, wherein the first observation manner corresponds to the first scene, and the second observation manner corresponds to the second scene; and
displaying a second environment screen of the application program, the second environment screen including the virtual object in the second scene, and the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
A non-volatile computer-readable storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
displaying a first environment screen of an application program, the first environment screen including a virtual object in a first scene, and the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment;
receiving a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and an observation scene corresponding to at least one observation manner of observing the virtual environment;
adjusting the first observation manner to a second observation manner according to the movement operation, wherein the first observation manner corresponds to the first scene, and the second observation manner corresponds to the second scene; and
displaying a second environment screen of the application program, the second environment screen including the virtual object in the second scene, and the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings described below are merely some embodiments of this application, and a person of ordinary skill in the art may further derive other drawings from them without creative effort.
FIG. 1 is a structural block diagram of an electronic device according to an exemplary embodiment of this application;
FIG. 2 is a structural block diagram of a computer system according to an exemplary embodiment of this application;
FIG. 3 is a schematic diagram of a camera model observing a virtual environment according to an exemplary embodiment of this application;
FIG. 4 is a flowchart of a method for observing a virtual environment according to an exemplary embodiment of this application;
FIG. 5 is a schematic diagram of observing a virtual environment in indoor and outdoor scenes in the related art, based on the embodiment shown in FIG. 4;
FIG. 6 is a schematic diagram of observing a virtual environment in indoor and outdoor scenes in this application, based on the embodiment shown in FIG. 4;
FIG. 7 is another schematic diagram of observing a virtual environment in indoor and outdoor scenes in this application, based on the embodiment shown in FIG. 4;
FIG. 8 is another schematic diagram of observing a virtual environment in indoor and outdoor scenes in the related art, based on the embodiment shown in FIG. 4;
FIG. 9 is another schematic diagram of observing a virtual environment in indoor and outdoor scenes in this application, based on the embodiment shown in FIG. 4;
FIG. 10 is a flowchart of a method for observing a virtual environment according to another exemplary embodiment of this application;
FIG. 11 is a schematic diagram of vertical ray detection based on the embodiment shown in FIG. 10;
FIG. 12 is another schematic diagram of vertical ray detection based on the embodiment shown in FIG. 10;
FIG. 13 is another schematic diagram of vertical ray detection based on the embodiment shown in FIG. 10;
FIG. 14 is a flowchart of a method for observing a virtual environment according to another exemplary embodiment of this application;
FIG. 15 is a schematic diagram of horizontal ray detection based on the embodiment shown in FIG. 14;
FIG. 16 is a flowchart of a method for observing a virtual environment according to another exemplary embodiment of this application;
FIG. 17 is a structural block diagram of a device for observing a virtual environment according to an exemplary embodiment of this application;
FIG. 18 is a structural block diagram of a device for observing a virtual environment according to another exemplary embodiment of this application;
FIG. 19 is a structural block diagram of a terminal according to an exemplary embodiment of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, the following further describes the implementations of this application in detail with reference to the accompanying drawings.
First, several terms involved in the embodiments of this application are explained:
Virtual environment: the virtual environment displayed (or provided) when an application program runs on a terminal. The virtual environment may be a simulation of the real world, a semi-simulated and semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual environment may be any one of a two-dimensional virtual environment, a 2.5-dimensional virtual environment, and a three-dimensional virtual environment; the following embodiments take a three-dimensional virtual environment as an example, without limitation thereto. Optionally, the virtual environment is further used for a virtual-environment battle between at least two virtual characters. Optionally, the virtual environment is further used for a battle between at least two virtual characters using virtual firearms. Optionally, the virtual environment is further used for a battle between at least two virtual characters using virtual firearms within a target region, the target region shrinking continuously as time passes in the virtual environment.
Virtual object: a movable object in the virtual environment. The movable object may be at least one of a virtual character, a virtual animal, and a cartoon character. In some embodiments, when the virtual environment is a three-dimensional virtual environment, the virtual object is a three-dimensional model created with skeletal animation technology. Each virtual object has its own shape and volume in the three-dimensional virtual environment and occupies part of the space in the three-dimensional virtual environment.
Observation scene: a scene corresponding to at least one observation manner of observing the virtual environment. In some embodiments, when the observation scene corresponds to at least one observation manner that observes the virtual environment from a target perspective of the virtual object, those observation manners share the same perspective type and differ in at least one of observation angle, observation distance, and observation configuration (for example, whether a night vision device is enabled); when the observation scene corresponds to at least one observation manner that observes the virtual environment at a target observation angle of the virtual object, those observation manners share the same observation angle and differ in at least one of observation perspective, observation distance, and observation configuration; when the observation scene corresponds to at least one observation manner that observes the virtual environment at a target observation distance from the virtual object, those observation manners share the same observation distance and differ in at least one of observation perspective, observation angle, and observation configuration; and when the observation scene corresponds to at least one observation manner that observes the virtual environment with a target observation configuration of the virtual object, those observation manners share the same observation configuration and differ in at least one of observation perspective, observation angle, and observation distance. Optionally, the observation scene is a scene for which a specific observation manner of observing the virtual environment is defined. Optionally, the observation scene has scene features, and the observation manner corresponding to the observation scene is set for those scene features; optionally, the scene features include at least one of a lighting-condition feature, a scene-height feature, and a feature describing how densely virtual items are gathered in the scene. Optionally, the observation scenes in the virtual environment may be divided into multiple kinds, and multiple observation scenes may be superimposed into a new observation scene. Illustratively, the observation scenes include at least one of an indoor scene, an outdoor scene, a dark scene, a bright scene, a house-area scene, a mountain scene, an air-raid-shelter scene, and an item-pile scene; an indoor scene may be superimposed with a dark scene into a new dark indoor scene, such as a room with the lights off, and a house-area scene may be superimposed with a mountain scene into a new house-area-on-a-mountain scene.
Camera model: a three-dimensional model located around the virtual object in the three-dimensional virtual environment. When a first-person perspective is used, the camera model is located near or at the head of the virtual object; when a third-person perspective is used, the camera model may be located behind the virtual object and bound to it, or at any position at a preset distance from the virtual object, and through the camera model the virtual environment can be observed from different angles. Optionally, when the third-person perspective is a first-person over-the-shoulder perspective, the camera model is located behind the virtual object (for example, behind the head and shoulders of a virtual character). Optionally, the camera model is not actually displayed in the three-dimensional virtual environment, that is, it cannot be recognized in the three-dimensional virtual environment displayed in the user interface.
The terminal in this application may be a desktop computer, a laptop portable computer, a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, and so on. An application program supporting a virtual environment, such as an application supporting a three-dimensional virtual environment, is installed and run in the terminal. The application program may be any one of a virtual reality application, a three-dimensional map program, a military simulation program, a TPS game, an FPS game, and a MOBA game. In some embodiments, the application program may be a standalone application, such as a standalone 3D game program, or a network online application.
FIG. 1 shows a structural block diagram of an electronic device according to an exemplary embodiment of this application. The electronic device may specifically be the terminal 100, which includes an operating system 120 and an application program 122.
The operating system 120 is basic software that provides the application program 122 with secure access to computer hardware.
The application program 122 is an application supporting a virtual environment. In some embodiments, the application program 122 supports a three-dimensional virtual environment. The application program 122 may be any one of a virtual reality application, a three-dimensional map program, a military simulation program, a third-person shooting game (TPS), a first-person shooting game (FPS), a MOBA game, and a multiplayer gunfight survival game. The application program 122 may be a standalone application, such as a standalone 3D game program.
FIG. 2 shows a structural block diagram of a computer system according to an exemplary embodiment of this application. The computer system 200 includes a first device 220, a server 240, and a second device 260.
An application supporting a virtual environment is installed and run on the first device 220. The application may be any one of a virtual reality application, a three-dimensional map program, a military simulation program, a TPS game, an FPS game, a MOBA game, and a multiplayer gunfight survival game. The first device 220 is used by a first user, who uses the first device 220 to control a first virtual object in the virtual environment to perform activities including but not limited to at least one of adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the first virtual object is a first virtual character, such as a simulated character or a cartoon character.
The first device 220 is connected to the server 240 through a wireless or wired network.
The server 240 includes at least one of one server, multiple servers, a cloud computing platform, and a virtualization center. The server 240 provides background services for applications supporting a three-dimensional virtual environment. In some embodiments, the server 240 takes on the primary computing work and the first device 220 and the second device 260 take on the secondary computing work; or the server 240 takes on the secondary computing work and the first device 220 and the second device 260 take on the primary computing work; or the server 240, the first device 220, and the second device 260 perform collaborative computing with a distributed computing architecture.
An application supporting a virtual environment is installed and run on the second device 260. The application may be any one of a virtual reality application, a three-dimensional map program, a military simulation program, an FPS game, a MOBA game, and a multiplayer gunfight survival game. The second device 260 is used by a second user, who uses the second device 260 to control a second virtual object in the virtual environment to perform activities including but not limited to at least one of adjusting body posture, crawling, walking, running, riding, jumping, driving, picking up, shooting, attacking, and throwing. Illustratively, the second virtual object is a second virtual character, such as a simulated character or a cartoon character.
In some embodiments, the first virtual character and the second virtual character are in the same virtual environment. In some embodiments, the first virtual character and the second virtual character may belong to the same team or the same organization, have a friend relationship, or have temporary communication permissions. In some embodiments, the first virtual character and the second virtual character may also belong to different teams, different organizations, or two hostile groups.
In some embodiments, the applications installed on the first device 220 and the second device 260 are the same, or the applications installed on the two devices are the same type of application on different control system platforms. The first device 220 may refer generally to one of multiple devices, and the second device 260 may refer generally to another; this embodiment only takes the first device 220 and the second device 260 as examples. The device types of the first device 220 and the second device 260 are the same or different, and include at least one of a game console, a desktop computer, a smartphone, a tablet computer, an e-book reader, an MP3 player, an MP4 player, and a laptop portable computer. The following embodiments take a desktop computer as an example.
A person skilled in the art may know that the number of the above devices may be more or fewer; for example, there may be only one device, or dozens, hundreds, or more. The embodiments of this application do not limit the number and types of the devices.
In some embodiments, the camera model is located at any position at a preset distance from the virtual object. In some embodiments, one virtual object corresponds to one camera model, and the camera model can rotate with the virtual object as the rotation center, for example, with any point of the virtual object as the rotation center. During rotation, the camera model not only turns in angle but also shifts in displacement, and the distance between the camera model and the rotation center remains unchanged; that is, the camera model rotates on the surface of a sphere with the rotation center as the sphere center. The point of the virtual object may be the head or torso of the virtual object, or any point around the virtual object, which is not limited in the embodiments of this application. In some embodiments, when the camera model observes the virtual environment, the viewing direction of the camera model is the direction in which the perpendicular to the tangent plane of the spherical surface where the camera model is located points to the virtual object.
In some embodiments, the camera model may also observe the virtual environment at preset angles in different directions of the virtual object.
Illustratively, referring to FIG. 3, a point in the virtual object 31 is determined as the rotation center 32, and the camera model rotates around the rotation center 32. In some embodiments, the camera model is configured with an initial position, which is a position above and behind the virtual object (for example, behind the head). Illustratively, as shown in FIG. 3, the initial position is position 33; when the camera model rotates to position 34 or position 35, the viewing direction of the camera model changes with the rotation, as the orbit sketch below illustrates.
FIG. 4 shows a method for observing a virtual environment according to an exemplary embodiment of this application, described by taking the application of the method in the terminal 100 shown in FIG. 1 as an example. As shown in FIG. 4, the method includes:
Step S401: display a first environment screen of an application program, the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment.
In some embodiments, the first environment screen includes a virtual object in a first scene.
In some embodiments, the virtual object belongs to at least one scene in the virtual environment. Illustratively, the scenes in the virtual environment include at least one of an indoor scene, an outdoor scene, a dark scene, and a bright scene. The indoor scene and the outdoor scene are two mutually exclusive and complementary observation scenes; that is, if the virtual object is not in the indoor scene, it is in the outdoor scene.
In some embodiments, the first observation manner includes the observation manner corresponding to the first scene. In some embodiments, each observation scene corresponds to one observation manner, and the correspondence is preset. In some embodiments, when a position in the virtual environment corresponds to more than one observation scene, the observation manner of observing the virtual environment when the virtual object is at that position may be a superposition of the observation manners corresponding to those observation scenes, or one of them. When one observation manner is selected from multiple observation manners, priorities may be set for the different observation manners and the observation manner with the higher priority selected to observe the virtual environment at that position, or one observation manner may be selected at random.
Illustratively, if the observation manner corresponding to the indoor scene is observing the virtual environment at a position a first distance away from the virtual object, and the observation manner corresponding to the dark scene is observing the virtual environment through a night vision device, then when the virtual object is in an indoor and dark scene, the virtual environment may be observed through the night vision device at the position a first distance away from the virtual object, or only through the night vision device at a position a second distance away from the virtual object (the second distance being the default distance). A priority-selection sketch follows.
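A minimal sketch of selecting one observation manner by priority, assuming hypothetical manner names and priority values:

```python
# Illustrative priorities only; the text states that priorities may be set for
# the different observation manners and the higher-priority one selected when
# a position belongs to more than one observation scene.
PRIORITY = {"night_vision": 2, "short_camera_distance": 1, "default": 0}

def pick_observation_manner(manners):
    if not manners:
        return "default"
    return max(manners, key=lambda manner: PRIORITY.get(manner, 0))

# A position that is both indoor (shorter camera distance) and dark (night vision):
print(pick_observation_manner(["short_camera_distance", "night_vision"]))
```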
Step S402: receive a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene.
In some embodiments, the first scene and the second scene are two different observation scenes.
In some embodiments, the first scene and the second scene are mutually exclusive and complementary observation scenes; that is, if the virtual object is not in the first scene, it is in the second scene. Illustratively, the first scene is an outdoor scene and the second scene an indoor scene; or the first scene is a bright scene and the second scene a dark scene; or the first scene is an item-pile scene and the second scene a wild scene.
Illustratively, transferring the virtual object from the first scene to the second scene according to the movement operation may be implemented as transferring the virtual object from outdoors to indoors according to the movement operation; the first scene is then the outdoor scene and the second scene the indoor scene. In addition, since the outdoor scene may at the same time be a bright scene and the indoor scene a dark scene, the first scene may also be a bright scene and the second scene a dark scene.
In some embodiments, when the terminal is a mobile terminal with a touch display screen, the movement operation may be generated by the user sliding on the touch display screen, or by the user pressing a physical button of the mobile terminal; when the terminal is a desktop computer or a portable laptop computer, the movement operation may be the operation corresponding to a signal input by an external input device, for example, the user operating a mouse or a keyboard to send a movement signal to the terminal as the movement operation.
Step S403: adjust the first observation manner to a second observation manner according to the movement operation, wherein the first observation manner corresponds to the first scene, and the second observation manner corresponds to the second scene.
In some embodiments, the parameters of an observation manner include at least one of an observation angle, an observation distance, whether a night vision device is enabled, and the person of the observation perspective.
In some embodiments, the terminal detects the observation scene in which the virtual object is located in the virtual environment at preset intervals. In some embodiments, the detection process includes at least one of the following cases:
First, the first scene is an outdoor scene and the second scene an indoor scene; the observation scene in which the virtual object is located in the virtual environment is detected through collision detection, and when it is detected that the virtual object moves from the outdoor scene to the indoor scene according to the movement operation, the first observation manner is adjusted to the second observation manner.
In some embodiments, the first observation manner is a manner in which a camera model observes the virtual environment at a first distance from the virtual object, and the second observation manner is a manner in which the camera model observes the virtual environment at a second distance from the virtual object, the camera model being a three-dimensional model that performs observation around the virtual object in the virtual environment, and the first distance being greater than the second distance; that is, the distance between the camera model and the virtual object is adjusted from the first distance to the second distance.
Illustratively, in the related art, the observation distance for observing the virtual object is the same whether the virtual object is in an outdoor scene or an indoor scene. Referring to FIG. 5, in the indoor scene, the distance between the camera model 50 and the virtual object 51 when the virtual object 51 is observed is a; in the outdoor scene, the distance between the camera model 50 and the virtual object 51 when the virtual object 51 is observed is also a. The distance between the camera model 50 and the virtual object 51 may be regarded as the distance between the camera model 50 and the physical center point of the virtual object 51, or as the distance between the camera model 50 and any point of the virtual object 51. When the virtual environment is observed in this way, virtual items between the camera model and the virtual object easily block the line of sight with which the camera model observes the virtual environment, affecting the combat process.
In the manner involved in this application, referring to FIG. 6, in the indoor scene, the distance between the camera model 60 and the virtual object 61 when the virtual object 61 is observed is a, and in the outdoor scene, the distance between the camera model 60 and the virtual object 61 when the virtual object 61 is observed is b, where a < b.
In some embodiments, the first observation manner may also be a manner in which the camera model observes the virtual environment from a first perspective, and the second observation manner a manner in which the camera model observes the virtual environment from a second perspective, where the included angle between the direction of the first perspective and the horizontal direction in the virtual environment is smaller than the included angle between the direction of the second perspective and the horizontal direction; that is, the angle at which the camera model observes the virtual object is rotated from the first perspective to the second perspective according to the movement operation.
Illustratively, referring to FIG. 7, in the indoor scene, the angle between the camera model 70 and the horizontal direction 73 when the virtual object 71 is observed is α, and in the outdoor scene, the angle between the camera model 70 and the horizontal direction when the virtual object 71 is observed is β, where α > β.
In some embodiments, the first observation manner may also be a third-person observation manner and the second observation manner a first-person observation manner; that is, the observation perspective is switched from the third-person perspective to the first-person perspective according to the movement operation.
Second, the first scene is a bright scene and the second scene a dark scene; the observation scene in which the virtual object is located in the virtual environment is detected through color detection, and when it is detected that the virtual object moves from the bright scene to the dark scene according to the movement operation, the first observation manner is adjusted to the second observation manner.
In some embodiments, the first observation manner is an observation manner with the night vision device off, that is, the virtual object and the virtual environment are observed without the night vision device, and the second observation manner is an observation manner with the night vision device on, that is, the virtual environment is observed through the night vision device.
In some embodiments, the color detection detects the pixels of the display interface: when the average gray value of the pixels is greater than a preset threshold, the virtual object is considered to have moved from the first scene to the second scene, as the sketch below illustrates.
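A minimal sketch of the gray-value check, assuming the common luma weights for the gray conversion and an arbitrary threshold; the comparison direction mirrors the rule as stated above:

```python
GRAY_THRESHOLD = 128.0  # hypothetical preset threshold

def average_gray(pixels):
    """Average gray value of the frame, with gray derived from RGB using the
    common luma weights (an assumption of this sketch)."""
    grays = [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in pixels]
    return sum(grays) / len(grays)

def scene_changed(pixels):
    # Mirrors the rule as stated: the move from the first scene to the second
    # is assumed once the average gray value is greater than the threshold.
    return average_gray(pixels) > GRAY_THRESHOLD

frame = [(20, 24, 30)] * 8 + [(200, 210, 220)] * 2
print(round(average_gray(frame), 1), scene_changed(frame))
```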
Third, the first scene is a wild scene and the second scene an item-pile scene; the observation scene in which the virtual object is located in the virtual environment is detected through scene identifier verification, and when it is detected that the virtual object moves from the wild scene to the item-pile scene according to the movement operation, the first observation manner is adjusted to the second observation manner.
In some embodiments, the coordinates corresponding to the position of the virtual object correspond to a scene identifier, and the scene in which the virtual object is located is verified according to the scene identifier, as in the lookup sketch below.
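The scene-identifier verification might look like the following lookup; the regions, identifiers, and default are entirely hypothetical, since the text only states that the coordinates of the position correspond to a scene identifier:

```python
# Hypothetical mapping from coordinate regions to scene identifiers; the text
# only states that the coordinates of the virtual object's position correspond
# to a scene identifier used for verification.
SCENE_IDS = {
    ((0, 0), (50, 50)): "item_pile_scene",
    ((50, 0), (200, 200)): "wild_scene",
}

def scene_id_at(position):
    x, y = position
    for ((x0, y0), (x1, y1)), scene in SCENE_IDS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return scene
    return "wild_scene"  # assumed default for unmapped coordinates

print(scene_id_at((10, 20)))   # item_pile_scene
print(scene_id_at((120, 80)))  # wild_scene
```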
In some embodiments, the first observation manner includes a manner in which the camera model observes the virtual environment at a first distance from the virtual object, and the second observation manner includes a manner in which the camera model observes the virtual environment at a second distance from the virtual object, the camera model including a three-dimensional model that performs observation around the virtual object in the virtual environment, and the first distance being greater than the second distance; that is, the distance between the camera model and the virtual object is adjusted from the first distance to the second distance.
In some embodiments, the first observation manner may also be a manner in which the camera model observes the virtual environment from a first perspective, and the second observation manner a manner in which the camera model observes the virtual environment from a second perspective, where the included angle between the direction of the first perspective and the horizontal direction in the virtual environment is smaller than the included angle between the direction of the second perspective and the horizontal direction; that is, the angle at which the camera model observes the virtual object is rotated from the first perspective to the second perspective according to the movement operation.
In some embodiments, the first observation manner may also be a third-person observation manner and the second observation manner a first-person observation manner; that is, the observation perspective is switched from the third-person perspective to the first-person perspective according to the movement operation.
Step S404: display a second environment screen of the application program, the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
In some embodiments, the second environment screen includes the virtual object in the second scene.
Illustratively, taking the above case where the first scene is an outdoor scene, the second scene an indoor scene, and the distance between the camera model and the virtual object is adjusted as an example, the first environment screen and the second environment screen are described. First, the environment screens of the virtual object in the outdoor and indoor scenes in the related art are described. Referring to FIG. 8, in the indoor scene, the first screen 81 includes a virtual object 82, and the virtual door 83 and the virtual cabinet 84 show that the virtual object 82 is in the indoor scene; in the outdoor scene, the second screen 85 includes the virtual object 82, the virtual cloud 86 shows that the virtual object 82 is in the outdoor scene, and the second screen 85 further includes a virtual item 87, which blocks the lower part of the virtual object 82. Next, the first environment screen and the second environment screen corresponding to the solution of this application are described. As shown in FIG. 9, in the indoor scene, the first environment screen 91 includes a virtual object 92, and the virtual door 93 and the virtual cabinet 94 show that the virtual object 92 is in the indoor scene; in the outdoor scene, the second environment screen 95 includes the virtual object 92, and the virtual cloud 96 shows that the virtual object 92 is in the outdoor scene. As for the virtual item 87 that blocked the virtual object in the second screen 85, because the second environment screen 95 is generated when the camera model observes the virtual object 92 from a closer distance, the virtual item 87 is not displayed in the second environment screen 95; that is, the virtual item 87 does not block the line of sight of the virtual object or the camera model.
In summary, in the manner of observing the virtual environment provided by this embodiment, the way the virtual object is observed in the virtual environment is changed according to the observation scene in which the virtual object is located, so that the virtual object is observed in an observation manner adapted to that observation scene, avoiding the problem that, because of a single observation manner, the same manner is used in different observation scenes and combat is affected by an inappropriate observation angle, observation distance, or observation configuration.
In an optional embodiment, the first scene is an outdoor scene, the second scene is an indoor scene, the terminal detects the observation scene in which the virtual object is located in the virtual environment through collision detection, and the collision detection is vertical ray detection. FIG. 10 shows a method for observing a virtual environment according to another exemplary embodiment of this application, described by taking the application of the method in the terminal 100 shown in FIG. 1 as an example. As shown in FIG. 10, the method includes:
Step S1001: display a first environment screen of an application program.
In some embodiments, the first environment screen includes a virtual object in a first scene and is a screen in which the virtual environment is observed in a first observation manner in the virtual environment.
In some embodiments, the virtual object belongs to at least one scene in the virtual environment. Illustratively, the observation scenes in the virtual environment include either of an indoor scene and an outdoor scene; the indoor scene and the outdoor scene are two mutually exclusive and complementary observation scenes, that is, if the virtual object is not in the indoor scene, it is in the outdoor scene.
Step S1002: receive a movement operation.
In some embodiments, the movement operation is used to transfer the virtual object from the first scene to the second scene, the first scene being the outdoor scene and the second scene the indoor scene; that is, the movement operation is used to transfer the virtual object from the outdoor scene to the indoor scene.
Step S1003: perform vertical ray detection along the vertically upward direction in the virtual environment, taking a target point in the virtual object as the starting point.
In some embodiments, the target point may be any one of the physical center point of the virtual object and the points corresponding to its head, arms, or legs, or any point in the virtual object, or any point outside the virtual object that corresponds to the virtual object.
In some embodiments, the vertical ray detection may also be performed with a ray cast along the vertically downward direction in the virtual environment.
Illustratively, referring to FIG. 11, the coordinate system 111 is the three-dimensional coordinate system applied in the virtual environment, in which the direction of the z axis is the vertically upward direction; the terminal may cast a vertical ray 114 upward from the target point 113 of the virtual object 112 for detection. Note that FIG. 11 takes the vertical ray 114 as an example for description; in an actual application scene, the vertical ray 114 may not be displayed in the environment screen.
Step S1004: receive the first detection result returned after the vertical ray detection.
In some embodiments, the first detection result indicates the virtual item hit in the vertically upward direction of the virtual object.
In some embodiments, the first detection result includes the object identifier of the first virtual item hit by the vertical ray detection, and/or the length of the ray when the vertical ray detection hits the first virtual item.
In some embodiments, when the vertical ray detection hits no virtual item, the first detection result is empty.
Step S1005: determine the observation scene in which the virtual object is located according to the first detection result.
In some embodiments, the terminal determines the observation scene in which the virtual object is located according to the first detection result in either of the following ways:
First, the first detection result includes the object identifier of the first virtual item hit by the vertical ray detection; when the object identifier in the first detection result is a virtual house identifier, the terminal determines that the observation scene in which the virtual object is located is the indoor scene.
In some embodiments, when the first detection result is empty, or the object identifier in the first detection result is another identifier such as a virtual cloud identifier or a virtual tree identifier, the terminal may determine that the observation scene in which the virtual object is located is the outdoor scene; that is, when the object identifier in the first detection result is an identifier other than the virtual house identifier, the terminal determines that the observation scene in which the virtual object is located is the outdoor scene.
Illustratively, referring to FIG. 12 and FIG. 13, the virtual object 120 in FIG. 12 is in the indoor scene: vertical ray detection is performed vertically upward from the target point 121 of the virtual object 120, and after the vertical ray 122 hits the virtual house and the house identifier is returned, the terminal determines that the virtual object 120 is in the virtual house, that is, in the indoor scene. The virtual object 130 in FIG. 13 is in the outdoor scene: vertical ray detection is performed vertically upward from the target point 131 of the virtual object 130, the vertical ray 132 hits no virtual item, and after a null value is returned, it is determined that the virtual object 130 is in the outdoor scene.
Note that the vertical ray 122 in FIG. 12 and the vertical ray 132 in FIG. 13 are both drawn for illustration and do not exist in an actual application.
Second, the first detection result includes the length of the ray when the first virtual item is hit during the vertical ray detection; when the length of the ray in the first detection result is less than or equal to a preset length, the terminal determines that the observation scene in which the virtual object is located is the indoor scene, and when the length of the ray in the first detection result exceeds the preset length, the terminal determines that the observation scene in which the virtual object is located is the outdoor scene.
Illustratively, if the maximum story height of a house is 2 m, the preset length is 2 m: when the length of the ray in the first detection result is within 2 m, the terminal may determine that the virtual object is in the indoor scene, and when the length of the ray in the first detection result exceeds 2 m, the terminal may determine that the virtual object is in the outdoor scene.
Note that the execution of steps S1003 to S1005 runs throughout the display of the environment screens; that is, the observation scene in which the virtual object is located is detected for every frame of the environment screen. Illustratively, if each second includes 30 frames of the environment screen, the terminal needs to perform the detection of the observation scene in which the virtual object is located 30 times per second.
Step S1006: when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation, adjust the first observation manner to the second observation manner.
The first observation manner corresponds to the first scene, and the second observation manner corresponds to the second scene.
Step S1007: display a second environment screen of the application program.
In some embodiments, the second environment screen includes the virtual object in the second scene and is a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
In summary, in the manner of observing the virtual environment provided by this embodiment, the way the virtual object is observed in the virtual environment is changed according to the observation scene in which the virtual object is located, so that the virtual object is observed in an observation manner adapted to that observation scene, avoiding the problem that, because of a single observation manner, the same manner is used in different observation scenes and combat is affected by an inappropriate observation angle, observation distance, or observation configuration.
In the method provided by this embodiment, the observation scene in which the virtual object is located is determined through vertical ray detection, so that the observation scene is detected in a convenient and accurate way, avoiding the problem that, because of a single observation manner, the same manner is used in different observation scenes and combat is affected by an inappropriate observation angle, observation distance, or observation configuration.
In an optional embodiment, the first scene is an outdoor scene, the second scene is an indoor scene, the terminal detects the observation scene in which the virtual object is located in the virtual environment through collision detection, and the collision detection is horizontal ray detection. FIG. 14 shows a method for observing a virtual environment according to another exemplary embodiment of this application, described by taking the application of the method in the terminal 100 shown in FIG. 1 as an example. As shown in FIG. 14, the method includes:
Step S1401: display a first environment screen of an application program.
In some embodiments, the first environment screen includes a virtual object in a first scene and is a screen in which the virtual environment is observed in a first observation manner in the virtual environment.
In some embodiments, the virtual object belongs to at least one observation scene in the virtual environment. Illustratively, the observation scenes in the virtual environment include either of an indoor scene and an outdoor scene; the indoor scene and the outdoor scene are two mutually exclusive and complementary observation scenes, that is, if the virtual object is not in the indoor scene, it is in the outdoor scene.
Step S1402: receive a movement operation.
In some embodiments, the movement operation is used to transfer the virtual object from the first scene to the second scene, the first scene being the outdoor scene and the second scene the indoor scene; that is, the movement operation is used to transfer the virtual object from the outdoor scene to the indoor scene.
Step S1403: cast at least three detection rays in mutually different directions along the horizontal direction in the virtual environment, taking a target point in the virtual object as the starting point.
In some embodiments, the target point may be any one of the physical center point of the virtual object and the points corresponding to its head, arms, or legs, or any point in the virtual object, or any point outside the virtual object that corresponds to the virtual object.
In some embodiments, the included angle between every two of the at least three detection rays is greater than a preset included angle. Illustratively, if the minimum included angle between every two detection rays is 90°, there are at most four detection rays; with three detection rays, the included angle between every two rays may be 120°, or two of the included angles may be 90° and the third 180°, or any combination in which every included angle is greater than or equal to 90°.
Illustratively, referring to FIG. 15, which shows a top view of the virtual object 1501, detection ray 1503, detection ray 1504, and detection ray 1505 are cast along the horizontal direction from the target point 1502 of the virtual object 1501 as the starting point; the included angle between detection ray 1503 and detection ray 1504 is 90°, the included angle between detection ray 1504 and detection ray 1505 is 110°, and the included angle between detection ray 1503 and detection ray 1505 is 160°. A sketch of the pairwise-angle check follows.
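The pairwise-angle constraint can be checked as in this sketch; treating the FIG. 15 case with a 90° preset angle as "greater than or equal to" is an assumption made so that the example passes:

```python
import itertools

PRESET_ANGLE = 90.0  # hypothetical preset included angle

def included_angle(a, b):
    """Included angle, in degrees, between two horizontal ray directions."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def rays_valid(directions):
    # Every pair of detection rays must subtend at least the preset angle
    # (the FIG. 15 example uses included angles of 90, 110, and 160 degrees).
    return all(included_angle(a, b) >= PRESET_ANGLE
               for a, b in itertools.combinations(directions, 2))

print(rays_valid((0.0, 90.0, 200.0)))  # True: included angles are 90, 160, 110
print(rays_valid((0.0, 30.0, 200.0)))  # False: the first pair subtends only 30
```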
Step S1404: receive the second detection result returned by performing horizontal ray detection through the at least three detection rays.
In some embodiments, the second detection result indicates the virtual items hit by the detection rays in the horizontal direction.
Step S1405: determine the observation scene in which the virtual object is located according to the second detection result.
In some embodiments, the observation scene in which the virtual object is located is determined according to the second detection result in either of the following ways:
First, the second detection result includes the ray lengths at which the at least three detection rays hit their first virtual items. If, among the at least three detection rays, no fewer than half of the detection rays hit their first virtual item at ray lengths within a preset length, the terminal may determine that the virtual object is in the indoor scene; if, among the at least three detection rays, more than half of the detection rays hit their first virtual item at ray lengths exceeding the preset length, the terminal may determine that the observation scene in which the virtual object is located is the outdoor scene.
Second, the second detection result includes the object identifiers of the first virtual items hit by the at least three detection rays. If, among the at least three detection rays, the object identifier of the first virtual item hit by no fewer than half of the detection rays is a house identifier, the terminal may determine that the virtual object is in the indoor scene; if, among the at least three detection rays, the object identifier of the first virtual item hit by more than half of the detection rays is not a house identifier, the terminal may determine that the virtual object is in the outdoor scene.
Note that the execution of steps S1403 to S1405 runs throughout the display of the environment screens; that is, the observation scene in which the virtual object is located is detected for every frame of the environment screen. Illustratively, if each second includes 30 frames of the environment screen, the terminal needs to perform the detection of the observation scene in which the virtual object is located 30 times per second.
Step S1406: when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation, adjust the first observation manner to the second observation manner.
The first observation manner corresponds to the first scene, and the second observation manner corresponds to the second scene.
Step S1407: display a second environment screen of the application program.
In some embodiments, the second environment screen includes the virtual object in the second scene and is a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
In summary, in the manner of observing the virtual environment provided by this embodiment, the way the virtual object is observed in the virtual environment is changed according to the observation scene in which the virtual object is located, so that the virtual object is observed in an observation manner adapted to that observation scene, avoiding the problem that, because of a single observation manner, the same manner is used in different observation scenes and combat is affected by an inappropriate observation angle, observation distance, or observation configuration.
In the method provided by this embodiment, the observation scene in which the virtual object is located is determined through horizontal ray detection, so that the observation scene is detected in a convenient and accurate way, avoiding the problem that, because of a single observation manner, the same manner is used in different observation scenes and combat is affected by an inappropriate observation angle, observation distance, or observation configuration.
FIG. 16 shows a method for observing a virtual environment according to another exemplary embodiment of this application. As shown in FIG. 16, the method includes:
Step S1601: the client detects, for every frame of image, the observation scene in which the virtual object is located.
Illustratively, if each second includes 30 frames of the environment screen, the terminal needs to perform the detection of the observation scene in which the virtual object is located 30 times per second.
Step S1602: the user controls the virtual object in the client to enter a room.
In some embodiments, the terminal receives a movement operation, which is used to control the virtual object to move in the virtual environment.
Step S1603: the client detects, through ray detection, that the virtual object is in the indoor scene.
Step S1604: adjust the distance between the camera model and the virtual object from the first distance to the second distance.
In some embodiments, the first distance is greater than the second distance; that is, when the virtual object moves from the outdoor scene to the indoor scene, the distance between the camera model and the virtual object is reduced.
Step S1605: the user controls the virtual object in the client to move outdoors.
Step S1606: the client detects, through ray detection, that the virtual object is in the outdoor scene.
Step S1607: adjust the distance between the camera model and the virtual object from the second distance back to the first distance.
In summary, in the manner of observing the virtual environment provided by this embodiment, the way the virtual object is observed in the virtual environment is changed according to the observation scene in which the virtual object is located, so that the virtual object in that scene is observed in an observation manner adapted to the observation scene, avoiding the problem that, because of a single observation manner, the same manner is used in different observation scenes and combat is affected by an inappropriate observation angle, observation distance, or observation configuration.
In the method provided by this embodiment, when the virtual object is in the indoor scene, the distance between the camera model and the virtual object is shortened, to reduce the chance that virtual items block the line of sight.
FIG. 17 is a structural block diagram of a device for observing a virtual environment according to an exemplary embodiment of this application. The device may be implemented in the terminal 100 shown in FIG. 1, and the device includes:
a display module 1710, configured to display a first environment screen of an application program, the first environment screen including a virtual object in a first scene, and the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment;
a receiving module 1720, configured to receive a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and an observation scene corresponding to at least one observation manner of observing the virtual environment;
an adjustment module 1730, configured to adjust the first observation manner to a second observation manner according to the movement operation, the first observation manner corresponding to the first scene and the second observation manner corresponding to the second scene; and
the display module 1710 being further configured to display a second environment screen of the application program, the second environment screen including the virtual object in the second scene, and the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
In an optional embodiment, as shown in FIG. 18, the first scene includes an outdoor scene, the second scene includes an indoor scene, and the adjustment module 1730 includes:
a detection unit 1731, configured to detect, through collision detection, the observation scene in which the virtual object is located in the virtual environment; and
an adjusting unit 1732, configured to adjust the first observation manner to the second observation manner when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation.
In an optional embodiment, the first observation manner includes a manner in which a camera model observes the virtual environment at a first distance from the virtual object, the second observation manner includes a manner in which the camera model observes the virtual environment at a second distance from the virtual object, the camera model includes a three-dimensional model that performs observation around the virtual object in the virtual environment, and the first distance is greater than the second distance. The adjusting unit 1732 is further configured to adjust the distance between the camera model and the virtual object from the first distance to the second distance.
In an optional embodiment, the first observation manner includes a manner in which the camera model observes the virtual environment from a first perspective, the second observation manner includes a manner in which the camera model observes the virtual environment from a second perspective, the camera model includes a three-dimensional model that performs observation around the virtual object, and the included angle between the direction of the first perspective and the horizontal direction in the virtual environment is smaller than the included angle between the direction of the second perspective and the horizontal direction; the adjusting unit 1732 is further configured to rotate, according to the movement operation, the angle at which the camera model observes the virtual object from the first perspective to the second perspective.
In an optional embodiment, the collision detection is vertical ray detection; the detection unit 1731 is further configured to perform the vertical ray detection along the vertically upward direction in the virtual environment, taking a target point in the virtual object as the starting point; receive the first detection result returned after the vertical ray detection, the first detection result indicating the virtual item hit in the vertically upward direction of the virtual object; and determine the observation scene in which the virtual object is located according to the first detection result.
In an optional embodiment, the first detection result includes the object identifier of the first virtual item hit by the vertical ray detection; the detection unit 1731 is further configured to determine that the observation scene in which the virtual object is located is the indoor scene when the object identifier in the first detection result is a virtual house identifier, and to determine that the observation scene in which the virtual object is located is the outdoor scene when the object identifier in the first detection result is an identifier other than the virtual house identifier.
In an optional embodiment, the first detection result includes the length of the ray when the vertical ray detection hits the first virtual item; the detection unit 1731 is further configured to determine that the observation scene in which the virtual object is located is the indoor scene when the length of the ray in the first detection result is less than or equal to a preset length, and to determine that the observation scene in which the virtual object is located is the outdoor scene when the length of the ray in the first detection result exceeds the preset length.
In an optional embodiment, the collision detection includes horizontal ray detection; the detection unit 1731 is further configured to cast, taking a target point in the virtual object as the starting point, at least three detection rays in mutually different directions along the horizontal direction in the virtual environment, the included angle between every two detection rays being greater than a preset included angle; receive the second detection result returned by performing horizontal ray detection through the at least three detection rays, the second detection result indicating the virtual items hit by the detection rays in the horizontal direction; and determine the observation scene in which the virtual object is located according to the second detection result.
In an optional embodiment, the second detection result includes the ray lengths at which the at least three detection rays hit their first virtual items; the detection unit 1731 is further configured to determine that the virtual object is in the indoor scene if, among the at least three detection rays, no fewer than half of the detection rays hit their first virtual item at ray lengths within a preset length, and to determine that the virtual object is in the outdoor scene if, among the at least three detection rays, more than half of the detection rays hit their first virtual item at ray lengths exceeding the preset length.
It should be noted that the receiving module 1720 and the adjustment module 1730 in the above embodiments may be implemented by a processor, or by a processor cooperating with a memory; the display module 1710 in the above embodiments may be implemented by a display screen, or by a processor cooperating with a display screen.
FIG. 19 shows a structural block diagram of a terminal 1900 according to an exemplary embodiment of the present invention. The terminal 1900 may be a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer. The terminal 1900 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or another name.
Generally, the terminal 1900 includes a processor 1901 and a memory 1902.
The processor 1901 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1901 may also include a main processor and a coprocessor: the main processor is a processor for processing data in a wake-up state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
The memory 1902 may include one or more computer-readable storage media, which may be non-transitory. The memory 1902 may further include a high-speed random access memory and a non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1902 is used to store at least one instruction, which is executed by the processor 1901 to implement the method for observing a virtual environment provided by the method embodiments of this application.
In some embodiments, the terminal 1900 may optionally include a peripheral device interface 1903 and at least one peripheral device. The processor 1901, the memory 1902, and the peripheral device interface 1903 may be connected by a bus or a signal line. Each peripheral device may be connected to the peripheral device interface 1903 through a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 1904, a touch display screen 1905, a camera 1906, an audio circuit 1907, a positioning component 1908, and a power supply 1909.
The peripheral device interface 1903 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 1901 and the memory 1902. In some embodiments, the processor 1901, the memory 1902, and the peripheral device interface 1903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1901, the memory 1902, and the peripheral device interface 1903 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 1904 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1904 communicates with a communication network and other communication devices through electromagnetic signals, converting electrical signals into electromagnetic signals for transmission or converting received electromagnetic signals into electrical signals. In some embodiments, the radio frequency circuit 1904 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 1904 may communicate with other terminals through at least one wireless communication protocol, including but not limited to the World Wide Web, metropolitan area networks, intranets, the generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1904 may further include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1905 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 1905 is a touch display screen, the display screen 1905 also has the ability to collect touch signals on or above its surface; such a touch signal may be input to the processor 1901 as a control signal for processing, and the display screen 1905 may then also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1905, provided on the front panel of the terminal 1900; in other embodiments, there may be at least two display screens 1905, respectively provided on different surfaces of the terminal 1900 or in a folded design; in still other embodiments, the display screen 1905 may be a flexible display screen provided on a curved or folding surface of the terminal 1900, and may even be set as a non-rectangular irregular figure, that is, a special-shaped screen. The display screen 1905 may be made of materials such as LCD (Liquid Crystal Display) and OLED (Organic Light-Emitting Diode).
The camera assembly 1906 is used to collect images or videos. In some embodiments, the camera assembly 1906 includes a front camera and a rear camera; generally, the front camera is provided on the front panel of the terminal and the rear camera on the back of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blur function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1906 may further include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and may be used for light compensation at different color temperatures.
The audio circuit 1907 may include a microphone and a speaker. The microphone is used to collect sound waves of the user and the environment and convert the sound waves into electrical signals input to the processor 1901 for processing, or input to the radio frequency circuit 1904 for voice communication. For stereo collection or noise reduction, there may be multiple microphones, respectively provided at different parts of the terminal 1900. The microphone may also be an array microphone or an omnidirectional collection microphone. The speaker is used to convert electrical signals from the processor 1901 or the radio frequency circuit 1904 into sound waves; the speaker may be a traditional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1907 may further include a headphone jack.
The positioning component 1908 is used to locate the current geographic position of the terminal 1900 to implement navigation or LBS (Location Based Service). The positioning component 1908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of Europe.
The power supply 1909 is used to supply power to the components in the terminal 1900. The power supply 1909 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1909 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery charged through a wired line or a wireless rechargeable battery charged through a wireless coil. The rechargeable battery may also be used to support fast charging technology.
In some embodiments, the terminal 1900 further includes one or more sensors 1910, including but not limited to an acceleration sensor 1911, a gyroscope sensor 1912, a pressure sensor 1913, a fingerprint sensor 1914, an optical sensor 1915, and a proximity sensor 1916.
The acceleration sensor 1911 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 1900, for example, the components of the gravitational acceleration on the three coordinate axes. The processor 1901 may control the touch display screen 1905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1911. The acceleration sensor 1911 may also be used to collect game or user movement data.
The gyroscope sensor 1912 can detect the body direction and rotation angle of the terminal 1900, and the gyroscope sensor 1912 may cooperate with the acceleration sensor 1911 to collect the user's 3D actions on the terminal 1900. Based on the data collected by the gyroscope sensor 1912, the processor 1901 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1913 may be provided on the side frame of the terminal 1900 and/or the lower layer of the touch display screen 1905. When the pressure sensor 1913 is provided on the side frame of the terminal 1900, it can detect the user's grip signal on the terminal 1900, and the processor 1901 performs left/right-hand recognition or quick operations according to the grip signal collected by the pressure sensor 1913. When the pressure sensor 1913 is provided on the lower layer of the touch display screen 1905, the processor 1901 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 1905. The operable controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1914 is used to collect the user's fingerprint; the processor 1901 recognizes the user's identity according to the fingerprint collected by the fingerprint sensor 1914, or the fingerprint sensor 1914 recognizes the user's identity according to the collected fingerprint. When the user's identity is recognized as a trusted identity, the processor 1901 authorizes the user to perform related sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and so on. The fingerprint sensor 1914 may be provided on the front, back, or side of the terminal 1900. When a physical button or a manufacturer logo is provided on the terminal 1900, the fingerprint sensor 1914 may be integrated with the physical button or the manufacturer logo.
The optical sensor 1915 is used to collect the ambient light intensity. In one embodiment, the processor 1901 may control the display brightness of the touch display screen 1905 according to the ambient light intensity collected by the optical sensor 1915: when the ambient light intensity is high, the display brightness of the touch display screen 1905 is turned up, and when the ambient light intensity is low, the display brightness of the touch display screen 1905 is turned down. In another embodiment, the processor 1901 may also dynamically adjust the shooting parameters of the camera assembly 1906 according to the ambient light intensity collected by the optical sensor 1915.
The proximity sensor 1916, also called a distance sensor, is usually provided on the front panel of the terminal 1900. The proximity sensor 1916 is used to collect the distance between the user and the front of the terminal 1900. In one embodiment, when the proximity sensor 1916 detects that the distance between the user and the front of the terminal 1900 gradually decreases, the processor 1901 controls the touch display screen 1905 to switch from the screen-on state to the screen-off state; when the proximity sensor 1916 detects that the distance between the user and the front of the terminal 1900 gradually increases, the processor 1901 controls the touch display screen 1905 to switch from the screen-off state to the screen-on state.
A person skilled in the art may understand that the structure shown in FIG. 19 does not constitute a limitation on the terminal 1900, which may include more or fewer components than shown, combine some components, or adopt a different component arrangement.
An embodiment of this application further provides a terminal for observing a virtual environment. The terminal includes a processor and a memory, the memory storing computer-readable instructions; when the computer-readable instructions are executed by the processor, the processor performs the steps of the above method for observing a virtual environment. Here, the steps of the method for observing a virtual environment may be the steps of the methods for observing a virtual environment of the foregoing embodiments.
An embodiment of this application further provides a computer-readable storage medium storing computer-readable instructions; when the computer-readable instructions are executed by a processor, the processor performs the steps of the above method for observing a virtual environment. Here, the steps of the method for observing a virtual environment may be the steps of the methods for observing a virtual environment of the foregoing embodiments.
A person of ordinary skill in the art may understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable storage medium, which may be the computer-readable storage medium included in the memory of the foregoing embodiments, or a computer-readable storage medium that exists separately and is not assembled into the terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the method for observing a virtual environment described in any one of FIG. 4, FIG. 10, FIG. 14, and FIG. 16.
In some embodiments, the computer-readable storage medium may include a read-only memory (ROM), a random access memory (RAM), a solid state drive (SSD), an optical disc, or the like; the random access memory may include a resistive random access memory (ReRAM) and a dynamic random access memory (DRAM). The sequence numbers of the above embodiments of this application are merely for description and do not represent the superiority of the embodiments.
A person of ordinary skill in the art may understand that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing related hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are merely preferred embodiments of this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (20)

  1. A method for observing a virtual environment, performed by a terminal, the method comprising:
    displaying a first environment screen of an application program, the first environment screen comprising a virtual object in a first scene, and the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment;
    receiving a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and an observation scene corresponding to at least one observation manner of observing the virtual environment;
    adjusting the first observation manner to a second observation manner according to the movement operation, wherein the first observation manner corresponds to the first scene, and the second observation manner corresponds to the second scene; and
    displaying a second environment screen of the application program, the second environment screen comprising the virtual object in the second scene, and the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
  2. The method according to claim 1, wherein the first scene comprises an outdoor scene, the second scene comprises an indoor scene, and the adjusting the first observation manner to a second observation manner according to the movement operation comprises:
    detecting, through collision detection, the observation scene in which the virtual object is located in the virtual environment; and
    adjusting the first observation manner to the second observation manner when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation.
  3. The method according to claim 2, wherein the first observation manner comprises a manner in which a camera model observes the virtual environment at a first distance from the virtual object, the second observation manner comprises a manner in which the camera model observes the virtual environment at a second distance from the virtual object, the camera model comprises a three-dimensional model that performs observation around the virtual object in the virtual environment, and the first distance is greater than the second distance;
    the adjusting the first observation manner to the second observation manner comprises:
    adjusting the distance between the camera model and the virtual object from the first distance to the second distance.
  4. The method according to claim 2, wherein the first observation manner comprises a manner in which a camera model observes the virtual environment from a first perspective, the second observation manner comprises a manner in which the camera model observes the virtual environment from a second perspective, the camera model comprises a three-dimensional model that performs observation around the virtual object, and the included angle between the direction of the first perspective and the horizontal direction in the virtual environment is smaller than the included angle between the direction of the second perspective and the horizontal direction;
    the adjusting the first observation manner to a second observation manner according to the movement operation comprises:
    rotating, according to the movement operation, the angle at which the camera model observes the virtual object from the first perspective to the second perspective.
  5. The method according to any one of claims 2 to 4, wherein the collision detection comprises vertical ray detection;
    the detecting, through collision detection, the observation scene in which the virtual object is located in the virtual environment comprises:
    performing the vertical ray detection along the vertically upward direction in the virtual environment, taking a target point in the virtual object as the starting point;
    receiving a first detection result returned after the vertical ray detection, the first detection result indicating the virtual item hit in the vertically upward direction of the virtual object; and
    determining, according to the first detection result, the observation scene in which the virtual object is located.
  6. The method according to claim 5, wherein the first detection result comprises the object identifier of the first virtual item hit during the vertical ray detection;
    the determining, according to the first detection result, the observation scene in which the virtual object is located comprises:
    determining that the observation scene in which the virtual object is located is the indoor scene when the object identifier in the first detection result is a virtual house identifier; and
    determining that the observation scene in which the virtual object is located is the outdoor scene when the object identifier in the first detection result is an identifier other than the virtual house identifier.
  7. The method according to claim 5, wherein the first detection result comprises the length of the ray when the first virtual item is hit during the vertical ray detection;
    the determining, according to the first detection result, the observation scene in which the virtual object is located comprises:
    determining that the observation scene in which the virtual object is located is the indoor scene when the length of the ray in the first detection result is less than or equal to a preset length; and
    determining that the observation scene in which the virtual object is located is the outdoor scene when the length of the ray in the first detection result exceeds the preset length.
  8. The method according to any one of claims 2 to 4, wherein the collision detection comprises horizontal ray detection;
    the detecting, through collision detection, the observation scene in which the virtual object is located in the virtual environment comprises:
    casting, taking a target point in the virtual object as the starting point, at least three detection rays in mutually different directions along the horizontal direction in the virtual environment, the included angle between every two of the detection rays being greater than a preset included angle;
    receiving a second detection result returned by performing horizontal ray detection through the at least three detection rays, the second detection result indicating the virtual items hit by the detection rays in the horizontal direction; and
    determining, according to the second detection result, the observation scene in which the virtual object is located.
  9. The method according to claim 8, wherein the second detection result comprises the ray lengths at which the at least three detection rays hit their first virtual items;
    the determining, according to the second detection result, the observation scene in which the virtual object is located comprises:
    determining that the virtual object is in the indoor scene if, among the at least three detection rays, no fewer than half of the detection rays hit their first virtual item at ray lengths within a preset length; and
    determining that the virtual object is in the outdoor scene if, among the at least three detection rays, more than half of the detection rays hit their first virtual item at ray lengths exceeding the preset length.
  10. A device for observing a virtual environment, the device comprising:
    a display module, configured to display a first environment screen of an application program, the first environment screen comprising a virtual object in a first scene, and the first environment screen being a screen in which the virtual environment is observed in a first observation manner in the virtual environment;
    a receiving module, configured to receive a movement operation, the movement operation being used to transfer the virtual object from the first scene to a second scene, the first scene and the second scene being two different observation scenes, and an observation scene corresponding to at least one observation manner of observing the virtual environment;
    an adjustment module, configured to adjust the first observation manner to a second observation manner according to the movement operation, wherein the first observation manner corresponds to the first scene, and the second observation manner corresponds to the second scene; and
    the display module being further configured to display a second environment screen of the application program, the second environment screen comprising the virtual object in the second scene, and the second environment screen being a screen in which the virtual environment is observed in the second observation manner in the virtual environment.
  11. The device according to claim 10, wherein the first scene comprises an outdoor scene, the second scene comprises an indoor scene, and the adjustment module comprises:
    a detection unit, configured to detect, through collision detection, the observation scene in which the virtual object is located in the virtual environment; and
    an adjusting unit, configured to adjust the first observation manner to the second observation manner when it is detected that the virtual object is transferred from the outdoor scene to the indoor scene according to the movement operation.
  12. The device according to claim 11, wherein the first observation manner comprises a manner in which a camera model observes the virtual environment at a first distance from the virtual object, the second observation manner comprises a manner in which the camera model observes the virtual environment at a second distance from the virtual object, the camera model comprises a three-dimensional model that performs observation around the virtual object in the virtual environment, and the first distance is greater than the second distance;
    the adjusting unit being further configured to adjust the distance between the camera model and the virtual object from the first distance to the second distance.
  13. The device according to claim 11, wherein the first observation manner comprises a manner in which a camera model observes the virtual environment from a first perspective, the second observation manner comprises a manner in which the camera model observes the virtual environment from a second perspective, the camera model comprises a three-dimensional model that performs observation around the virtual object, and the included angle between the direction of the first perspective and the horizontal direction in the virtual environment is smaller than the included angle between the direction of the second perspective and the horizontal direction;
    the adjusting unit being further configured to rotate, according to the movement operation, the angle at which the camera model observes the virtual object from the first perspective to the second perspective.
  14. The device according to any one of claims 11 to 13, wherein the collision detection comprises vertical ray detection;
    the detection unit being further configured to perform the vertical ray detection along the vertically upward direction in the virtual environment, taking a target point in the virtual object as the starting point; receive a first detection result returned after the vertical ray detection, the first detection result indicating the virtual item hit in the vertically upward direction of the virtual object; and determine, according to the first detection result, the observation scene in which the virtual object is located.
  15. The device according to claim 14, wherein the first detection result comprises the object identifier of the first virtual item hit during the vertical ray detection;
    the detection unit being further configured to determine that the observation scene in which the virtual object is located is the indoor scene when the object identifier in the first detection result is a virtual house identifier; and
    the detection unit being further configured to determine that the observation scene in which the virtual object is located is the outdoor scene when the object identifier in the first detection result is an identifier other than the virtual house identifier.
  16. The device according to claim 14, wherein the first detection result comprises the length of the ray when the first virtual item is hit during the vertical ray detection;
    the detection unit being further configured to determine that the observation scene in which the virtual object is located is the indoor scene when the length of the ray in the first detection result is less than or equal to a preset length; and
    the detection unit being further configured to determine that the observation scene in which the virtual object is located is the outdoor scene when the length of the ray in the first detection result exceeds the preset length.
  17. The device according to any one of claims 11 to 13, wherein the collision detection comprises horizontal ray detection;
    the detection unit being further configured to cast, taking a target point in the virtual object as the starting point, at least three detection rays in mutually different directions along the horizontal direction in the virtual environment, the included angle between every two of the detection rays being greater than a preset included angle; receive a second detection result returned by performing horizontal ray detection through the at least three detection rays, the second detection result indicating the virtual items hit by the detection rays in the horizontal direction; and determine, according to the second detection result, the observation scene in which the virtual object is located.
  18. The device according to claim 17, wherein the second detection result comprises the ray lengths at which the at least three detection rays hit their first virtual items;
    the detection unit being further configured to determine that the virtual object is in the indoor scene if, among the at least three detection rays, no fewer than half of the detection rays hit their first virtual item at ray lengths within a preset length; and
    the detection unit being further configured to determine that the virtual object is in the outdoor scene if, among the at least three detection rays, more than half of the detection rays hit their first virtual item at ray lengths exceeding the preset length.
  19. A terminal, comprising a processor and a memory, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the method according to any one of claims 1 to 9.
  20. A non-volatile computer-readable storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the method according to any one of claims 1 to 9.
PCT/CN2019/115623 2018-12-05 2019-11-05 Method, device and storage medium for observing a virtual environment WO2020114176A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2021514085A JP7191210B2 (ja) 2018-12-05 2019-11-05 仮想環境の観察方法、デバイス及び記憶媒体
SG11202103706SA SG11202103706SA (en) 2018-12-05 2019-11-05 Virtual environment viewing method, device and storage medium
KR1020217006432A KR20210036392A (ko) 2018-12-05 2019-11-05 가상 환경 관찰 방법, 기기 및 저장 매체
US17/180,018 US11783549B2 (en) 2018-12-05 2021-02-19 Method for observing virtual environment, device, and storage medium
US18/351,780 US20230360343A1 (en) 2018-12-05 2023-07-13 Method for observing virtual environment, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811478458.2A 2018-12-05 2019-11-05 Method, device and storage medium for observing a virtual environment
CN201811478458.2 2018-12-05

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/180,018 Continuation US11783549B2 (en) 2018-12-05 2021-02-19 Method for observing virtual environment, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020114176A1 true WO2020114176A1 (zh) 2020-06-11

Family

ID=66071135

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/115623 WO2020114176A1 (zh) 2018-12-05 2019-11-05 Method, device and storage medium for observing a virtual environment

Country Status (6)

Country Link
US (2) US11783549B2 (zh)
JP (1) JP7191210B2 (zh)
KR (1) KR20210036392A (zh)
CN (1) CN109634413B (zh)
SG (1) SG11202103706SA (zh)
WO (1) WO2020114176A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112843716A (zh) * 2021-03-17 2021-05-28 NetEase (Hangzhou) Network Co., Ltd. Virtual object prompting and viewing method and apparatus, computer device, and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109634413B (zh) 2018-12-05 2021-06-11 Tencent Technology (Shenzhen) Company Limited Method, device and storage medium for observing a virtual environment
CN110585707B (zh) * 2019-09-20 2020-12-11 Tencent Technology (Shenzhen) Company Limited Field-of-view screen display method, apparatus, device, and storage medium
CN111784844B (zh) * 2020-06-09 2024-01-05 Beijing 51World Digital Twin Technology Co., Ltd. Method and apparatus for observing a virtual object, storage medium, and electronic device
CN117339205A (zh) * 2022-06-29 2024-01-05 Tencent Technology (Chengdu) Company Limited Screen display method, apparatus, device, storage medium, and program product
US11801448B1 (en) 2022-07-01 2023-10-31 Datadna, Inc. Transposing virtual content between computing environments

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1595584A1 (en) * 2004-05-11 2005-11-16 Sega Corporation Image processing program, game information processing program and game information processing apparatus
CN105278676A (zh) * 2014-06-09 2016-01-27 Immersion Corporation Programmable haptic devices and methods for modifying haptic strength based on perspective and/or proximity
CN107977141A (zh) * 2017-11-24 2018-05-01 NetEase (Hangzhou) Network Co., Ltd. Interaction control method and apparatus, electronic device, and storage medium
US20180167553A1 (en) * 2016-12-13 2018-06-14 Canon Kabushiki Kaisha Method, system and apparatus for configuring a virtual camera
CN109634413A (zh) * 2018-12-05 2019-04-16 Tencent Technology (Shenzhen) Company Limited Method, device and storage medium for observing a virtual environment

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5850352A (en) * 1995-03-31 1998-12-15 The Regents Of The University Of California Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images
US6283857B1 (en) * 1996-09-24 2001-09-04 Nintendo Co., Ltd. Three-dimensional image processing apparatus with enhanced automatic and user point of view control
US10151599B1 (en) * 2003-03-13 2018-12-11 Pamala Meador Interactive virtual reality tour
GB2462095A (en) * 2008-07-23 2010-01-27 Snell & Wilcox Ltd Processing of images to represent a transition in viewpoint
JP6085411B2 (ja) * 2011-06-02 2017-02-22 Nintendo Co., Ltd. Image processing device, image processing method, and control program for image processing device
US20130303247A1 (en) * 2012-05-08 2013-11-14 Mediatek Inc. Interaction display system and method thereof
US8979652B1 (en) * 2014-03-27 2015-03-17 TECHLAND Sp. z o. o Natural movement in a virtual environment
JP2014184300A (ja) * 2014-04-28 2014-10-02 Copcom Co Ltd Game program and game device
US9332285B1 (en) * 2014-05-28 2016-05-03 Lucasfilm Entertainment Company Ltd. Switching modes of a media content item
JP2016171989A (ja) * 2015-03-16 2016-09-29 Square Enix Co., Ltd. Program, recording medium, information processing device, and control method
KR20160128119A (ko) * 2015-04-28 2016-11-07 LG Electronics Inc. Mobile terminal and control method therefor
CA3023488C (en) * 2016-04-14 2022-06-28 The Research Foundation For The State University Of New York System and method for generating a progressive representation associated with surjectively mapped virtual and physical reality image data
WO2018020735A1 (ja) * 2016-07-28 2018-02-01 Colopl, Inc. Information processing method and program for causing a computer to execute the information processing method
US10372970B2 (en) * 2016-09-15 2019-08-06 Qualcomm Incorporated Automatic scene calibration method for video analytics
CN106237616A (zh) * 2016-10-12 2016-12-21 Dalian Vincent Software Technology Co., Ltd. VR equipment fighting game production and experience system based on online visual programming
CN106600709A (zh) * 2016-12-15 2017-04-26 Suzhou Kuwai Culture Media Co., Ltd. VR virtual decoration method based on a decoration information model
JP6789830B2 (ja) * 2017-01-06 2020-11-25 Nintendo Co., Ltd. Information processing system, information processing program, information processing device, and information processing method
EP3542252B1 (en) * 2017-08-10 2023-08-02 Google LLC Context-sensitive hand interaction
US10726626B2 (en) * 2017-11-22 2020-07-28 Google Llc Interaction between a viewer and an object in an augmented reality environment
CN108665553B (zh) * 2018-04-28 2023-03-17 Tencent Technology (Shenzhen) Company Limited Method and device for realizing virtual scene conversion
CN108717733B (zh) * 2018-06-07 2019-07-02 Tencent Technology (Shenzhen) Company Limited Perspective switching method, device and storage medium for a virtual environment
US20200285784A1 (en) * 2019-02-27 2020-09-10 Simulation Engineering, LLC Systems and methods for generating a simulated environment


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112843716A (zh) * 2021-03-17 2021-05-28 NetEase (Hangzhou) Network Co., Ltd. Virtual object prompting and viewing method and apparatus, computer device, and storage medium
CN112843716B (zh) * 2021-03-17 2024-06-11 NetEase (Hangzhou) Network Co., Ltd. Virtual object prompting and viewing method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
JP2021535806A (ja) 2021-12-23
SG11202103706SA (en) 2021-05-28
CN109634413B (zh) 2021-06-11
US11783549B2 (en) 2023-10-10
US20230360343A1 (en) 2023-11-09
CN109634413A (zh) 2019-04-16
KR20210036392A (ko) 2021-04-02
US20210201591A1 (en) 2021-07-01
JP7191210B2 (ja) 2022-12-16

Similar Documents

Publication Publication Date Title
US11151773B2 (en) Method and apparatus for adjusting viewing angle in virtual environment, and readable storage medium
US11703993B2 (en) Method, apparatus and device for view switching of virtual environment, and storage medium
US11224810B2 (en) Method and terminal for displaying distance information in virtual scene
US11471768B2 (en) User interface display method and apparatus, device and computer-readable storage medium
CN109529319B (zh) 界面控件的显示方法、设备及存储介质
US11628371B2 (en) Method, apparatus, and storage medium for transferring virtual items
WO2020114176A1 (zh) 对虚拟环境进行观察的方法、设备及存储介质
US11766613B2 (en) Method and apparatus for observing virtual item in virtual environment and readable storage medium
CN112494955B (zh) 虚拟对象的技能释放方法、装置、终端及存储介质
CN111921197B (zh) 对局回放画面的显示方法、装置、终端及存储介质
WO2020151594A1 (zh) 视角转动的方法、装置、设备及存储介质
CN110496392B (zh) 虚拟对象的控制方法、装置、终端及存储介质
US11790607B2 (en) Method and apparatus for displaying heat map, computer device, and readable storage medium
CN111589141A (zh) 虚拟环境画面的显示方法、装置、设备及介质
US20220291791A1 (en) Method and apparatus for determining selected target, device, and storage medium
JP2024509064A (ja) 位置マークの表示方法及び装置、機器並びにコンピュータプログラム
CN109806583B (zh) 用户界面显示方法、装置、设备及***
WO2022237076A1 (zh) 虚拟对象的控制方法、装置、设备及计算机可读存储介质
CN112057861B (zh) 虚拟对象控制方法、装置、计算机设备及存储介质
CN111754631A (zh) 三维模型的生成方法、装置、设备及可读存储介质
CN113318443A (zh) 基于虚拟环境的侦察方法、装置、设备及介质
CN112316419A (zh) 应用程序的运行方法、装置、设备及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19894253

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20217006432

Country of ref document: KR

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2021514085

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19894253

Country of ref document: EP

Kind code of ref document: A1