CN111359203B - Personalized railway VR scene interaction method

Info

Publication number
CN111359203B
Authority
CN
China
Prior art keywords
scene
user
time
browsing
old user
Prior art date
Legal status
Active
Application number
CN202010156431.2A
Other languages
Chinese (zh)
Other versions
CN111359203A (en)
Inventor
朱军
朱庆
李维炼
张天奕
任诗曼
党沛
Current Assignee
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202010156431.2A
Publication of CN111359203A
Application granted
Publication of CN111359203B
Legal status: Active

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25: Output arrangements for video game devices
    • A63F13/70: Game security or game management aspects
    • A63F13/79: Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a personalized railway VR scene interaction method comprising the following steps: S1, judge whether the user is a new user; if so, go to step S2, otherwise go to step S3. S2, recommend popular browsing scenes to the user and go to step S5. S3, obtain the user's personal interests and preferences from the user's historical browsing records. S4, recommend browsing scenes according to those interests and preferences and go to step S5. S5, display the optimal route corresponding to the browsing scene selected by the user. S6, the user browses the scene along the optimal route or a custom route. The invention abstracts the user's interest characteristics from a large amount of interaction information, mines the user's preferences in depth, and fully analyzes the user's multi-level perception and exploration needs for the railway, thereby providing a personalized, user-friendly exploration scheme and comprehensively improving the user's experience and interaction efficiency in railway VR scenes.

Description

Personalized railway VR scene interaction method
Technical Field
The invention relates to the field of virtual reality, in particular to a personalized railway VR scene interaction method.
Background
Intelligentization is an important direction for the future development of railways worldwide. With the rapid development of railway technology in China, the scope and depth of intelligent technologies applied in the railway field keep expanding; in particular, the digital twin is an important hallmark of railway informatization and a new way of building intelligent railways.
A core problem in VR scene exploration and analysis is interaction, which is essential for helping users capture target information more quickly, naturally, and efficiently in a virtual environment. Existing game-oriented VR scene interaction emphasizes immersion and experience, and its interaction mode is single. However, railway lines are long, scene objects are numerous, and spatial relationships are complex; railway construction involves users from different fields and professional backgrounds who care about different scene information. A single interaction mode therefore exposes different users to extraneous, interfering factors during interaction and prevents them from accurately acquiring the information they are interested in, so interaction efficiency is low.
Disclosure of Invention
To address the above deficiencies in the prior art, the personalized railway VR scene interaction method provided by the invention solves the problem of low interaction efficiency in existing railway VR scene interaction.
To achieve this purpose, the invention adopts the following technical scheme:
a personalized railway VR scene interaction method is provided, comprising the following steps:
S1, read the user's login information and judge whether the user is a new user; if so, go to step S2, otherwise go to step S3;
S2, recommend popular browsing scenes to the user on the VR visual interface, and go to step S5;
S3, obtain the user's personal interests and preferences from the user's historical browsing records;
S4, recommend browsing scenes on the VR visual interface according to the user's personal interests and preferences, and go to step S5;
S5, display the corresponding optimal route on the VR visual display according to the browsing scene selected by the user;
S6, the user browses the scene along the optimal route or a custom route, completing the personalized railway VR scene interaction.
Further, the specific method for judging whether the user is a new user in step S1 is as follows:
judge, from the user's login information, whether the user has a historical browsing record; if so, the user is an old user; if not, the user is a new user.
Further, the specific method for recommending popular browsing scenes to a new user on the VR visual interface in step S2 comprises the following sub-steps:
S2-1, obtain, from the historical browsing records of the old users, the user ID, browsing-scene spatial positions, browsing-scene categories, focus objects of attention, and interaction times, where the interaction times comprise entry time, departure time, and dwell time;
S2-2, for the i-th old user $u_i$, combine the user ID and the n browsing-scene spatial positions in chronological order into a vector $(u_i, l_1^i, l_2^i, \ldots, l_n^i)$ and take this vector as the user scene information, where $u_i$ is the user ID of the i-th old user and $l_n^i$ is the n-th browsing-scene spatial position of the i-th old user;
S2-3, for the M-th scene, combine the user ID of the i-th old user and the spatial positions browsed within that scene in chronological order into a vector $(u_i, l_1^M, l_2^M, \ldots, l_m^M)$; take this vector as the track scene vector of the i-th old user in the M-th scene, and arrange the track scene vectors of the i-th old user over all scenes in sequence to obtain the track scene information of the i-th old user, where $l_m^M$ is the m-th browsed spatial position of the i-th old user in the M-th scene;
S2-4, for the i-th old user $u_i$, construct from the coordinates of the two positions at times k and k+1, the browsing-scene categories, and the focus objects of attention the position vector $\big((x_k, y_k, z_k), c_k, o_k, (x_{k+1}, y_{k+1}, z_{k+1}), c_{k+1}, o_{k+1}\big)$, and arrange all position vectors of the i-th old user $u_i$ in sequence to obtain the location scene information of the i-th old user $u_i$, where $(x_k, y_k, z_k)$ are the coordinates of the location at time k, $c_k$ is the browsing-scene category at time k, $o_k$ is the focus object of attention at time k, $(x_{k+1}, y_{k+1}, z_{k+1})$ are the coordinates of the location at time k+1, $c_{k+1}$ is the browsing-scene category at time k+1, and $o_{k+1}$ is the focus object of attention at time k+1;
S2-5, for the i-th old user $u_i$, arrange the user ID, the position coordinates, the entry time, the departure time, and the dwell time in sequence to obtain the scene information vector $\big(u_i, (x_f, y_f, z_f), t_f^{in}, t_f^{out}, t_f^{stay}\big)$ of the i-th old user $u_i$ at position f; arrange the scene information vectors of the i-th old user $u_i$ at all positions in sequence to obtain the initial time scene information of the i-th old user, and remove the entries whose dwell time is less than 5 seconds to obtain the time scene information of the i-th old user, where $(x_f, y_f, z_f)$ are the coordinates of the i-th old user at position f, and $t_f^{in}$, $t_f^{out}$, and $t_f^{stay}$ are the entry time, departure time, and dwell time of the i-th old user at position f;
S2-6, project the user scene information, track scene information, location scene information, and time scene information of the i-th old user into a low-dimensional vector space to obtain the user scene embedding vector, track scene embedding vector, location scene embedding vector, and time scene embedding vector respectively, and according to the formula

$$V_i = \frac{1}{4}\left(V_u^i + V_r^i + V_l^i + V_t^i\right)$$

obtain the average scene embedding vector $V_i$ of the i-th old user, where $V_u^i$ is the user scene embedding vector, $V_r^i$ the track scene embedding vector, $V_l^i$ the location scene embedding vector, and $V_t^i$ the time scene embedding vector of the i-th old user;
S2-7, take the average scene embedding vector $V_i$ of the i-th old user as the input of the hierarchical-sampling softmax function to obtain the recommendation probability of the i-th old user for each browsing-scene spatial position, and thus the recommendation probability of every old user for each browsing-scene spatial position;
S2-8, according to the formula

$$P_M = \frac{1}{N_R}\sum_{u_i \in U}\sum_{r \in R}\sum_{m=1}^{N}\log \Pr(m \mid i)$$

obtain the probability $P_M$ that all old users recommend the M-th scene, and thus the probability that all old users recommend each scene, where U is the set of all old users, R is the set of trajectories generated by all old users, $N_R$ is the length of the trajectories generated by all old users, N is the total number of browsed spatial positions in the M-th scene, $\log(\cdot)$ is the logarithmic function, and $\Pr(m \mid i)$ is the recommendation probability of the i-th old user for the m-th browsing-scene spatial position;
S2-9, rank the scenes in descending order of the recommendation probabilities of all old users, and push the ranked result to the new user's VR visual interface as the popular browsing scene list.
Further, the specific method of step S3 comprises the following sub-steps:
S3-1, obtain, from the user's historical browsing record, the user ID, browsing-scene spatial positions, browsing-scene categories, focus objects of attention, and interaction times, where the interaction times comprise entry time, departure time, and dwell time;
S3-2, combine the user ID and the n browsing-scene spatial positions in chronological order into a vector $(u_i, l_1^i, l_2^i, \ldots, l_n^i)$ and take this vector as the user scene information, where $u_i$ is the user ID of the old user and $l_n^i$ is the n-th browsing-scene spatial position of the old user;
S3-3, for the M-th scene, combine the user ID of the old user and the spatial positions browsed within that scene in chronological order into a vector $(u_i, l_1^M, l_2^M, \ldots, l_m^M)$ and take this vector as the track scene information of the old user in the M-th scene, where $l_m^M$ is the m-th browsed spatial position of the old user in the M-th scene;
S3-4, construct from the coordinates of the old user at the two positions at times k and k+1, the browsing-scene categories, and the focus objects of attention the position vector $\big((x_k, y_k, z_k), c_k, o_k, (x_{k+1}, y_{k+1}, z_{k+1}), c_{k+1}, o_{k+1}\big)$, and arrange all position vectors of the old user $u_i$ in sequence to obtain the location scene information of the old user $u_i$, where $(x_k, y_k, z_k)$ are the coordinates of the location at time k, $c_k$ is the browsing-scene category at time k, $o_k$ is the focus object of attention at time k, $(x_{k+1}, y_{k+1}, z_{k+1})$ are the coordinates of the location at time k+1, $c_{k+1}$ is the browsing-scene category at time k+1, and $o_{k+1}$ is the focus object of attention at time k+1;
S3-5, arrange the old user's user ID, position coordinates, entry time, departure time, and dwell time in sequence to obtain the scene information vector $\big(u_i, (x_f, y_f, z_f), t_f^{in}, t_f^{out}, t_f^{stay}\big)$ of the old user $u_i$ at position f; arrange the scene information vectors of the old user $u_i$ at all positions in sequence to obtain the old user's initial time scene information, and remove the entries whose dwell time is less than 5 seconds to obtain the old user's time scene information, where $(x_f, y_f, z_f)$ are the coordinates of the old user at position f, and $t_f^{in}$, $t_f^{out}$, and $t_f^{stay}$ are the entry time, departure time, and dwell time of the old user at position f;
S3-6, project the user scene information, track scene information, location scene information, and time scene information of the old user into a low-dimensional vector space to obtain the user scene embedding vector, track scene embedding vector, location scene embedding vector, and time scene embedding vector respectively, and according to the formula

$$V_i = \frac{1}{4}\left(V_u^i + V_r^i + V_l^i + V_t^i\right)$$

obtain the average scene embedding vector $V_i$ of the old user, where $V_u^i$ is the user scene embedding vector, $V_r^i$ the track scene embedding vector, $V_l^i$ the location scene embedding vector, and $V_t^i$ the time scene embedding vector of the old user;
S3-7, take the average scene embedding vector $V_i$ of the old user as the input of the hierarchical-sampling softmax function to obtain the recommendation probability of the old user for each browsing-scene spatial position;
S3-8, according to the formula

$$P_M = \frac{1}{N_R}\sum_{r \in R_i}\sum_{m=1}^{N}\log \Pr(m \mid i)$$

obtain the probability $P_M$ that the old user recommends the M-th scene, and thus the probability that the old user recommends each scene, where $R_i$ is the set of trajectories generated by the old user, $N_R$ is the length of the trajectories generated by the old user, N is the total number of browsed spatial positions in the M-th scene, $\log(\cdot)$ is the logarithmic function, and $\Pr(m \mid i)$ is the recommendation probability of the old user for the m-th browsing-scene spatial position;
S3-9, rank the scenes in descending order of the old user's recommendation probabilities, push the ranked result to the user's VR visual interface as the user's personal interest and preference list, and go to step S5.
Further, in step S2-7, the specific method of taking the average scene embedding vector $V_i$ of the i-th old user as the input of the hierarchical-sampling softmax function and obtaining the recommendation probability of the i-th old user for each browsing-scene spatial position is as follows:
according to the hierarchical-sampling softmax formula

$$\Pr(m \mid i) = \prod_{p=2}^{\lceil \log_2 N_R \rceil} \sigma\!\left((1-2b)\, V_i^{\top} \theta_{p-1}\right), \qquad \sigma(x) = \frac{1}{1+\exp(-x)},$$

obtain the recommendation probability $\Pr(m \mid i)$ of the i-th old user for the m-th browsing-scene spatial position, and thus the recommendation probability of the i-th old user for each browsing-scene spatial position, where $N_R$ is the length of the trajectory generated by the i-th old user, $\exp(\cdot)$ is the exponential function with the natural constant e as its base, $V_i^{\top}$ is the transpose of the average scene embedding vector $V_i$ of the i-th old user, $\theta_{p-1}$ is the parameter of the (p-1)-th node of the hierarchical-sampling softmax, and b is a branch indicator: b = 0 when the m-th browsing-scene spatial position takes the left branch at the p-th node, and b = 1 when it takes the right branch.
Further, in step S3-7, the specific method of taking the average scene embedding vector $V_i$ of the old user as the input of the hierarchical-sampling softmax function and obtaining the recommendation probability of the old user for each browsing-scene spatial position is as follows:
according to the hierarchical-sampling softmax formula

$$\Pr(m \mid i) = \prod_{p=2}^{\lceil \log_2 N_R \rceil} \sigma\!\left((1-2b)\, V_i^{\top} \theta_{p-1}\right), \qquad \sigma(x) = \frac{1}{1+\exp(-x)},$$

obtain the recommendation probability $\Pr(m \mid i)$ of the old user for the m-th browsing-scene spatial position, and thus the recommendation probability of the old user for each browsing-scene spatial position, where $N_R$ is the length of the trajectory generated by the old user, $\exp(\cdot)$ is the exponential function with the natural constant e as its base, $V_i^{\top}$ is the transpose of the average scene embedding vector $V_i$ of the old user, $\theta_{p-1}$ is the parameter of the (p-1)-th node of the hierarchical-sampling softmax, and b is a branch indicator: b = 0 when the m-th browsing-scene spatial position takes the left branch at the p-th node, and b = 1 when it takes the right branch.
The invention has the following beneficial effects: it abstracts the user's interest characteristics from a large amount of interaction information, mines the user's preferences in depth, and fully analyzes the user's multi-level perception and exploration needs for the railway, thereby providing a personalized, user-friendly exploration scheme, comprehensively improving the user's experience in railway VR scenes, and further improving railway VR interaction efficiency.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The following description of the embodiments of the invention is provided to help those skilled in the art understand the invention, but it should be clear that the invention is not limited to the scope of these embodiments. For those of ordinary skill in the art, all variations that remain within the spirit and scope of the invention as defined by the appended claims, and all inventions and creations that make use of the inventive concept, are protected.
As shown in FIG. 1, the personalized railway VR scene interaction method comprises the following steps:
S1, read the user's login information and judge whether the user is a new user; if so, go to step S2, otherwise go to step S3;
S2, recommend popular browsing scenes to the user on the VR visual interface, and go to step S5;
S3, obtain the user's personal interests and preferences from the user's historical browsing records;
S4, recommend browsing scenes on the VR visual interface according to the user's personal interests and preferences, and go to step S5;
S5, display the corresponding optimal route on the VR visual display according to the browsing scene selected by the user;
S6, the user browses the scene along the optimal route or a custom route, completing the personalized railway VR scene interaction.
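Read as pseudocode, steps S1 to S6 reduce to a single branch on whether the user has a browsing history, followed by route display. Below is a minimal control-flow sketch; the scene names and the frequency-count stand-in for the preference mining of S3/S4 are illustrative assumptions, not the patent's algorithm.

```python
def recommend_scenes(user_id: str, history: dict[str, list[str]]) -> list[str]:
    """S1-S4: return the scene list pushed to the user's VR visual interface."""
    popular = ["station", "bridge", "tunnel"]             # placeholder popular list (S2)
    records = history.get(user_id, [])
    if not records:                                        # S1: no record -> new user
        return popular                                     # S2: recommend popular scenes
    counts = {s: records.count(s) for s in set(records)}  # S3: mine preferences
    return sorted(counts, key=counts.get, reverse=True)    # S4: personalized ranking

# S5/S6 (route display and browsing) belong to the VR front end; this only
# exercises the branching logic of S1-S4.
history = {"UID001": ["station", "bridge", "station"]}
print(recommend_scenes("UID002", history))  # new user -> popular list
print(recommend_scenes("UID001", history))  # old user -> ['station', 'bridge']
```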
The specific method for judging whether the user is a new user in step S1 is as follows: judge, from the user's login information, whether the user has a historical browsing record; if so, the user is an old user; if not, the user is a new user.
The specific method for recommending popular browsing scenes to a new user on the VR visual interface in step S2 comprises the following sub-steps:
S2-1, obtain, from the historical browsing records of the old users, the user ID, browsing-scene spatial positions, browsing-scene categories, focus objects of attention, and interaction times, where the interaction times comprise entry time, departure time, and dwell time;
S2-2, for the i-th old user $u_i$, combine the user ID and the n browsing-scene spatial positions in chronological order into a vector $(u_i, l_1^i, l_2^i, \ldots, l_n^i)$ and take this vector as the user scene information, where $u_i$ is the user ID of the i-th old user and $l_n^i$ is the n-th browsing-scene spatial position of the i-th old user;
S2-3, for the M-th scene, combine the user ID of the i-th old user and the spatial positions browsed within that scene in chronological order into a vector $(u_i, l_1^M, l_2^M, \ldots, l_m^M)$; take this vector as the track scene vector of the i-th old user in the M-th scene, and arrange the track scene vectors of the i-th old user over all scenes in sequence to obtain the track scene information of the i-th old user, where $l_m^M$ is the m-th browsed spatial position of the i-th old user in the M-th scene;
S2-4, for the i-th old user $u_i$, construct from the coordinates of the two positions at times k and k+1, the browsing-scene categories, and the focus objects of attention the position vector $\big((x_k, y_k, z_k), c_k, o_k, (x_{k+1}, y_{k+1}, z_{k+1}), c_{k+1}, o_{k+1}\big)$, and arrange all position vectors of the i-th old user $u_i$ in sequence to obtain the location scene information of the i-th old user $u_i$, where $(x_k, y_k, z_k)$ are the coordinates of the location at time k, $c_k$ is the browsing-scene category at time k, $o_k$ is the focus object of attention at time k, $(x_{k+1}, y_{k+1}, z_{k+1})$ are the coordinates of the location at time k+1, $c_{k+1}$ is the browsing-scene category at time k+1, and $o_{k+1}$ is the focus object of attention at time k+1;
S2-5, for the i-th old user $u_i$, arrange the user ID, the position coordinates, the entry time, the departure time, and the dwell time in sequence to obtain the scene information vector $\big(u_i, (x_f, y_f, z_f), t_f^{in}, t_f^{out}, t_f^{stay}\big)$ of the i-th old user $u_i$ at position f; arrange the scene information vectors of the i-th old user $u_i$ at all positions in sequence to obtain the initial time scene information of the i-th old user, and remove the entries whose dwell time is less than 5 seconds to obtain the time scene information of the i-th old user, where $(x_f, y_f, z_f)$ are the coordinates of the i-th old user at position f, and $t_f^{in}$, $t_f^{out}$, and $t_f^{stay}$ are the entry time, departure time, and dwell time of the i-th old user at position f;
S2-6, project the user scene information, track scene information, location scene information, and time scene information of the i-th old user into a low-dimensional vector space to obtain the user scene embedding vector, track scene embedding vector, location scene embedding vector, and time scene embedding vector respectively, and according to the formula

$$V_i = \frac{1}{4}\left(V_u^i + V_r^i + V_l^i + V_t^i\right)$$

obtain the average scene embedding vector $V_i$ of the i-th old user, where $V_u^i$ is the user scene embedding vector, $V_r^i$ the track scene embedding vector, $V_l^i$ the location scene embedding vector, and $V_t^i$ the time scene embedding vector of the i-th old user (a numerical sketch of sub-steps S2-6, S2-8, and S2-9 is given after this list of sub-steps);
S2-7, take the average scene embedding vector $V_i$ of the i-th old user as the input of the hierarchical-sampling softmax function to obtain the recommendation probability of the i-th old user for each browsing-scene spatial position, and thus the recommendation probability of every old user for each browsing-scene spatial position;
S2-8, according to the formula

$$P_M = \frac{1}{N_R}\sum_{u_i \in U}\sum_{r \in R}\sum_{m=1}^{N}\log \Pr(m \mid i)$$

obtain the probability $P_M$ that all old users recommend the M-th scene, and thus the probability that all old users recommend each scene, where U is the set of all old users, R is the set of trajectories generated by all old users, $N_R$ is the length of the trajectories generated by all old users, N is the total number of browsed spatial positions in the M-th scene, $\log(\cdot)$ is the logarithmic function, and $\Pr(m \mid i)$ is the recommendation probability of the i-th old user for the m-th browsing-scene spatial position;
S2-9, rank the scenes in descending order of the recommendation probabilities of all old users, and push the ranked result to the new user's VR visual interface as the popular browsing scene list.
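To make S2-6, S2-8, and S2-9 concrete, the sketch below computes the average scene embedding, accumulates per-scene log-probability scores, and ranks the scenes. The dimensions, numbers, and the collapse of $\Pr(m \mid i)$ to one placeholder value per scene are illustrative assumptions, not the patent's specification.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
d = 64  # embedding dimension (an assumption; the patent fixes no value)

# S2-6: stand-ins for the four projected context embeddings of old user i,
# averaged into the scene embedding V_i.
V_u, V_r, V_l, V_t = (rng.normal(size=d) for _ in range(4))
V_i = (V_u + V_r + V_l + V_t) / 4.0

# S2-8: accumulate log Pr(m|i) per scene over all old users, normalized by
# the trajectory length N_R. Probabilities here are placeholders; in the
# method they come from the hierarchical-sampling softmax of S2-7.
pr = {
    "u1": {"station": 0.6, "bridge": 0.3, "tunnel": 0.1},
    "u2": {"station": 0.5, "bridge": 0.4, "tunnel": 0.1},
}
n_r = 4  # trajectory length N_R (placeholder)
scores: dict[str, float] = {}
for per_scene in pr.values():
    for scene, p in per_scene.items():
        scores[scene] = scores.get(scene, 0.0) + math.log(p) / n_r

# S2-9: rank scenes by score, descending, to form the popular-scene list.
hot_list = sorted(scores, key=scores.get, reverse=True)
print(V_i.shape, hot_list)  # (64,) ['station', 'bridge', 'tunnel']
```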
The specific method of step S3 comprises the following sub-steps:
S3-1, obtain, from the user's historical browsing record, the user ID, browsing-scene spatial positions, browsing-scene categories, focus objects of attention, and interaction times, where the interaction times comprise entry time, departure time, and dwell time;
S3-2, combine the user ID and the n browsing-scene spatial positions in chronological order into a vector $(u_i, l_1^i, l_2^i, \ldots, l_n^i)$ and take this vector as the user scene information, where $u_i$ is the user ID of the old user and $l_n^i$ is the n-th browsing-scene spatial position of the old user;
S3-3, for the M-th scene, combine the user ID of the old user and the spatial positions browsed within that scene in chronological order into a vector $(u_i, l_1^M, l_2^M, \ldots, l_m^M)$ and take this vector as the track scene information of the old user in the M-th scene, where $l_m^M$ is the m-th browsed spatial position of the old user in the M-th scene;
S3-4, construct from the coordinates of the old user at the two positions at times k and k+1, the browsing-scene categories, and the focus objects of attention the position vector $\big((x_k, y_k, z_k), c_k, o_k, (x_{k+1}, y_{k+1}, z_{k+1}), c_{k+1}, o_{k+1}\big)$, and arrange all position vectors of the old user $u_i$ in sequence to obtain the location scene information of the old user $u_i$, where $(x_k, y_k, z_k)$ are the coordinates of the location at time k, $c_k$ is the browsing-scene category at time k, $o_k$ is the focus object of attention at time k, $(x_{k+1}, y_{k+1}, z_{k+1})$ are the coordinates of the location at time k+1, $c_{k+1}$ is the browsing-scene category at time k+1, and $o_{k+1}$ is the focus object of attention at time k+1;
S3-5, arrange the old user's user ID, position coordinates, entry time, departure time, and dwell time in sequence to obtain the scene information vector $\big(u_i, (x_f, y_f, z_f), t_f^{in}, t_f^{out}, t_f^{stay}\big)$ of the old user $u_i$ at position f; arrange the scene information vectors of the old user $u_i$ at all positions in sequence to obtain the old user's initial time scene information, and remove the entries whose dwell time is less than 5 seconds to obtain the old user's time scene information, where $(x_f, y_f, z_f)$ are the coordinates of the old user at position f, and $t_f^{in}$, $t_f^{out}$, and $t_f^{stay}$ are the entry time, departure time, and dwell time of the old user at position f;
S3-6, project the user scene information, track scene information, location scene information, and time scene information of the old user into a low-dimensional vector space to obtain the user scene embedding vector, track scene embedding vector, location scene embedding vector, and time scene embedding vector respectively, and according to the formula

$$V_i = \frac{1}{4}\left(V_u^i + V_r^i + V_l^i + V_t^i\right)$$

obtain the average scene embedding vector $V_i$ of the old user, where $V_u^i$ is the user scene embedding vector, $V_r^i$ the track scene embedding vector, $V_l^i$ the location scene embedding vector, and $V_t^i$ the time scene embedding vector of the old user;
S3-7, take the average scene embedding vector $V_i$ of the old user as the input of the hierarchical-sampling softmax function to obtain the recommendation probability of the old user for each browsing-scene spatial position;
S3-8, according to the formula

$$P_M = \frac{1}{N_R}\sum_{r \in R_i}\sum_{m=1}^{N}\log \Pr(m \mid i)$$

obtain the probability $P_M$ that the old user recommends the M-th scene, and thus the probability that the old user recommends each scene, where $R_i$ is the set of trajectories generated by the old user, $N_R$ is the length of the trajectories generated by the old user, N is the total number of browsed spatial positions in the M-th scene, $\log(\cdot)$ is the logarithmic function, and $\Pr(m \mid i)$ is the recommendation probability of the old user for the m-th browsing-scene spatial position;
S3-9, rank the scenes in descending order of the old user's recommendation probabilities, push the ranked result to the user's VR visual interface as the user's personal interest and preference list, and go to step S5.
In one embodiment of the invention, the specific method of taking the average scene embedding vector $V_i$ of the i-th old user as the input of the hierarchical-sampling softmax function and obtaining the recommendation probability of the i-th old user for each browsing-scene spatial position is as follows: according to the hierarchical-sampling softmax formula

$$\Pr(m \mid i) = \prod_{p=2}^{\lceil \log_2 N_R \rceil} \sigma\!\left((1-2b)\, V_i^{\top} \theta_{p-1}\right), \qquad \sigma(x) = \frac{1}{1+\exp(-x)},$$

obtain the recommendation probability $\Pr(m \mid i)$ of the i-th old user for the m-th browsing-scene spatial position, and thus the recommendation probability of the i-th old user for each browsing-scene spatial position, where $N_R$ is the length of the trajectory generated by the i-th old user, $\exp(\cdot)$ is the exponential function with the natural constant e as its base, $V_i^{\top}$ is the transpose of the average scene embedding vector $V_i$ of the i-th old user, $\theta_{p-1}$ is the parameter of the (p-1)-th node of the hierarchical-sampling softmax, and b is a branch indicator: b = 0 when the m-th browsing-scene spatial position takes the left branch at the p-th node, and b = 1 when it takes the right branch.
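Concretely, the leaf probability is the product of binary branch probabilities along the path from the root to the leaf, each computed from $V_i$ and a node parameter $\theta$. A NumPy sketch under the sigmoid formulation used above; the path, dimensions, and values are illustrative assumptions.

```python
import numpy as np

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def leaf_probability(v_i: np.ndarray, path: list[tuple[np.ndarray, int]]) -> float:
    """Pr(m|i): product of branch probabilities along the path to leaf m.

    `path` holds one (theta, b) pair per internal node, with b = 0 for the
    left branch and b = 1 for the right branch, as in the text.
    """
    p = 1.0
    for theta, b in path:
        left = sigmoid(v_i @ theta)          # probability of taking the left branch
        p *= left if b == 0 else 1.0 - left  # right branch gets the complement
    return p

rng = np.random.default_rng(1)
d = 8
v_i = rng.normal(size=d)                                   # average scene embedding
path = [(rng.normal(size=d), 0), (rng.normal(size=d), 1)]  # a depth-2 path
print(leaf_probability(v_i, path))
```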
Each node in the hierarchically sampled binary tree structure is associated with an embedding vector from which the probability of the corresponding branch is computed. In this structure, every leaf can be reached from the root through an appropriate path. All parameters of the hierarchical-sampling softmax function can be trained by stochastic gradient descent (SGD): during training, the function iterates over all trajectory positions, computes the parameter gradients by error back-propagation, and uses them to update the parameters until convergence.
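One SGD step under the same sigmoid formulation looks as follows; the gradient factor $(1-b) - \sigma(V_i^{\top}\theta_{p-1})$ is the derivative of the log branch probability, and the learning rate and data are placeholders of my own.

```python
import numpy as np

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(v_i: np.ndarray, path: list[tuple[np.ndarray, int]],
             lr: float = 0.025) -> np.ndarray:
    """One stochastic-gradient update for an observed trajectory position.

    Ascends log Pr(m|i): each node parameter theta is updated in place,
    and the updated average scene embedding v_i is returned.
    """
    grad_v = np.zeros_like(v_i)
    for theta, b in path:
        g = (1 - b) - sigmoid(v_i @ theta)  # d log p / d (v_i . theta)
        grad_v += g * theta                  # error propagated to the embedding
        theta += lr * g * v_i                # node-parameter update
    return v_i + lr * grad_v

rng = np.random.default_rng(2)
d = 8
v = rng.normal(size=d)
path = [(rng.normal(size=d), 0), (rng.normal(size=d), 1)]
v = sgd_step(v, path)
print(np.round(v[:3], 3))  # embedding after one update
```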
In a specific implementation, the spatial position, spatial attitude, scaling, and other parameters of devices such as VR handles and VR headsets in the virtual scene can be recorded using the spatial positioning and gravity-sensing capabilities of the VR hardware; user focus-of-attention data can be obtained using techniques such as ray focus; finally, background data in the VR scene can be collected by information-acquisition techniques, and the collected interaction information can be processed and optimized by information-processing techniques.
Acquisition of handle operating parameters: the user operates specific keys on the VR handle according to his or her needs and can move within the scene by casting a ray from the handle to a designated point. The system captures the user's handle operations by monitoring the handle state, records the handle's position, spatial attitude, scaling, usage time, and other information in the virtual scene, and stores this information through data-persistence techniques.
Acquisition of the user's spatial position in the virtual scene: the spatial position of the head-mounted display of the VR hardware in the constructed scene is taken as the current user's position, and the user's position in the virtual scene is recorded at fixed time intervals, including the X, Y, Z coordinates in the scene and a timestamp.
Acquisition of the user's focus of attention: the center coordinate of the head-mounted display's field of view is taken as the user's focus of attention. Specifically, a ray is cast from the center of the current user's head-mounted display; the ray collides with objects in the scene, and the first collision point is taken as the current focus of attention. The focus information comprises the X, Y, Z coordinates, the recording time, and the object on which the focus lies.
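The focus capture just described is plain ray casting: a ray from the display center along the view direction, with the first object hit taken as the focus. The patent does not name a VR engine API, so the sketch below uses pure ray-versus-axis-aligned-box geometry with made-up object bounds.

```python
import numpy as np

def ray_aabb_t(origin, direction, box_min, box_max):
    """Distance t along the ray to an axis-aligned box, or None if missed."""
    with np.errstate(divide="ignore"):   # IEEE inf handles axis-parallel rays
        inv = 1.0 / direction
    t1, t2 = (box_min - origin) * inv, (box_max - origin) * inv
    t_near = np.minimum(t1, t2).max()
    t_far = np.maximum(t1, t2).min()
    return max(t_near, 0.0) if t_near <= t_far and t_far >= 0 else None

def focus_object(origin, direction, objects):
    """First object hit by the gaze ray = current focus of attention."""
    hits = [(t, name) for name, (lo, hi) in objects.items()
            if (t := ray_aabb_t(origin, direction, lo, hi)) is not None]
    return min(hits)[1] if hits else None

objects = {"security equipment": (np.array([4.0, -1.0, -1.0]),
                                  np.array([6.0, 1.0, 1.0]))}
print(focus_object(np.zeros(3), np.array([1.0, 0.0, 0.0]), objects))
```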
Information acquisition and data processing: background data in the VR scene are acquired as described above, and the collected interaction information is preprocessed. The processed data format comprises the user ID, browsing-scene spatial position, browsing-scene category, focus object, entry time, departure time, dwell time, and other elements, on which basis an interaction-information dataset is constructed. For example, after a user enters the VR scene, each visit to a scene with the VR handle generates one interaction record {u, l, c, o, t1, t2, t3}, where l can be represented by coordinates (X, Y, Z); that is, {user ID, browsing-scene spatial position, browsing-scene category, focus object, entry time, departure time, dwell time}: user u enters at time t1 to view focus object o in a scene of category c at spatial position l, leaves at time t2, and stays for t3 seconds. For example, {UID001, (X, Y, Z), station, security equipment, 2019-01-20/3:30pm, 2019-01-20/4:00pm, 1800s} indicates that the user with ID 001 entered the station at spatial position (X, Y, Z) at 3:30 pm on January 20, 2019, watched the security equipment, left at 4:00 pm on January 20, 2019, and stayed for 1800 seconds.
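The processed record has a fixed shape, so it maps directly onto a small typed structure. A sketch of the {u, l, c, o, t1, t2, t3} record using the example above; the field names are my own, not the patent's.

```python
from dataclasses import dataclass

@dataclass
class InteractionRecord:
    """One browsing event: {u, l, c, o, t1, t2, t3} as described above."""
    user_id: str                           # u
    position: tuple[float, float, float]   # l = (X, Y, Z) in the scene
    scene_category: str                    # c, e.g. "station"
    focus_object: str                      # o, e.g. "security equipment"
    enter_time: str                        # t1
    leave_time: str                        # t2
    dwell_seconds: int                     # t3

rec = InteractionRecord("UID001", (10.0, 2.5, -3.0), "station",
                        "security equipment",
                        "2019-01-20 15:30", "2019-01-20 16:00", 1800)
print(rec)
```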
In conclusion, the invention abstracts the user's interest characteristics from a large amount of interaction information, mines the user's preferences in depth, and fully analyzes the user's multi-level perception and exploration needs for the railway, thereby providing a personalized, user-friendly exploration scheme, comprehensively improving the user's experience in railway VR scenes, and further improving railway VR interaction efficiency.

Claims (5)

1. A personalized railway VR scene interaction method, characterized by comprising the following steps:
S1, read the user's login information and judge whether the user is a new user; if so, go to step S2, otherwise go to step S3;
S2, recommend popular browsing scenes to the user on the VR visual interface, and go to step S5;
S3, obtain the user's personal interests and preferences from the user's historical browsing records;
S4, recommend browsing scenes on the VR visual interface according to the user's personal interests and preferences, and go to step S5;
S5, display the corresponding optimal route on the VR visual display according to the browsing scene selected by the user;
S6, the user browses the scene along the optimal route or a custom route, completing the personalized railway VR scene interaction;
wherein the specific method for recommending popular browsing scenes to a new user on the VR visual interface in step S2 comprises the following sub-steps:
S2-1, obtain, from the historical browsing records of the old users, the user ID, browsing-scene spatial positions, browsing-scene categories, focus objects of attention, and interaction times, where the interaction times comprise entry time, departure time, and dwell time;
S2-2, for the i-th old user $u_i$, combine the user ID and the n browsing-scene spatial positions in chronological order into a vector $(u_i, l_1^i, l_2^i, \ldots, l_n^i)$ and take this vector as the user scene information, where $u_i$ is the user ID of the i-th old user and $l_n^i$ is the n-th browsing-scene spatial position of the i-th old user;
S2-3, for the M-th scene, combine the user ID of the i-th old user and the spatial positions browsed within that scene in chronological order into a vector $(u_i, l_1^M, l_2^M, \ldots, l_m^M)$; take this vector as the track scene vector of the i-th old user in the M-th scene, and arrange the track scene vectors of the i-th old user over all scenes in sequence to obtain the track scene information of the i-th old user, where $l_m^M$ is the m-th browsed spatial position of the i-th old user in the M-th scene;
S2-4, for the i-th old user $u_i$, construct from the coordinates of the two positions at times k and k+1, the browsing-scene categories, and the focus objects of attention the position vector $\big((x_k, y_k, z_k), c_k, o_k, (x_{k+1}, y_{k+1}, z_{k+1}), c_{k+1}, o_{k+1}\big)$, and arrange all position vectors of the i-th old user $u_i$ in sequence to obtain the location scene information of the i-th old user $u_i$, where $(x_k, y_k, z_k)$ are the coordinates of the location at time k, $c_k$ is the browsing-scene category at time k, $o_k$ is the focus object of attention at time k, $(x_{k+1}, y_{k+1}, z_{k+1})$ are the coordinates of the location at time k+1, $c_{k+1}$ is the browsing-scene category at time k+1, and $o_{k+1}$ is the focus object of attention at time k+1;
S2-5, for the i-th old user $u_i$, arrange the user ID, the position coordinates, the entry time, the departure time, and the dwell time in sequence to obtain the scene information vector $\big(u_i, (x_f, y_f, z_f), t_f^{in}, t_f^{out}, t_f^{stay}\big)$ of the i-th old user $u_i$ at position f; arrange the scene information vectors of the i-th old user $u_i$ at all positions in sequence to obtain the initial time scene information of the i-th old user, and remove the entries whose dwell time is less than 5 seconds to obtain the time scene information of the i-th old user, where $(x_f, y_f, z_f)$ are the coordinates of the i-th old user at position f, and $t_f^{in}$, $t_f^{out}$, and $t_f^{stay}$ are the entry time, departure time, and dwell time of the i-th old user at position f;
S2-6, project the user scene information, track scene information, location scene information, and time scene information of the i-th old user into a low-dimensional vector space to obtain the user scene embedding vector, track scene embedding vector, location scene embedding vector, and time scene embedding vector respectively, and according to the formula

$$V_i = \frac{1}{4}\left(V_u^i + V_r^i + V_l^i + V_t^i\right)$$

obtain the average scene embedding vector $V_i$ of the i-th old user, where $V_u^i$ is the user scene embedding vector, $V_r^i$ the track scene embedding vector, $V_l^i$ the location scene embedding vector, and $V_t^i$ the time scene embedding vector of the i-th old user;
S2-7, take the average scene embedding vector $V_i$ of the i-th old user as the input of the hierarchical-sampling softmax function to obtain the recommendation probability of the i-th old user for each browsing-scene spatial position, and thus the recommendation probability of every old user for each browsing-scene spatial position;
S2-8, according to the formula

$$P_M = \frac{1}{N_R}\sum_{u_i \in U}\sum_{r \in R}\sum_{m=1}^{N}\log \Pr(m \mid i)$$

obtain the willingness value $P_M$ with which all old users recommend the M-th scene, and thus the willingness value with which all old users recommend each scene, where U is the set of all old users, R is the set of trajectories generated by all old users, $N_R$ is the length of the trajectories generated by all old users, N is the total number of browsed spatial positions in the M-th scene, $\log(\cdot)$ is the logarithmic function, and $\Pr(m \mid i)$ is the recommendation probability of the i-th old user for the m-th browsing-scene spatial position;
S2-9, rank the scenes in descending order of the recommendation willingness values of all old users, and push the ranked result to the new user's VR visual interface as the popular browsing scene list.
2. The personalized railway VR scene interaction method of claim 1, wherein the specific method for judging whether the user is a new user in step S1 is as follows:
judge, from the user's login information, whether the user has a historical browsing record; if so, the user is an old user; if not, the user is a new user.
3. The personalized railway VR scene interaction method of claim 1, wherein the specific method of step S3 comprises the following sub-steps:
S3-1, obtain, from the user's historical browsing record, the user ID, browsing-scene spatial positions, browsing-scene categories, focus objects of attention, and interaction times, where the interaction times comprise entry time, departure time, and dwell time;
S3-2, combine the user ID and the n browsing-scene spatial positions in chronological order into a vector $(u_i, l_1^i, l_2^i, \ldots, l_n^i)$ and take this vector as the user scene information, where $u_i$ is the user ID of the old user and $l_n^i$ is the n-th browsing-scene spatial position of the old user;
S3-3, for the M-th scene, combine the user ID of the old user and the spatial positions browsed within that scene in chronological order into a vector $(u_i, l_1^M, l_2^M, \ldots, l_m^M)$ and take this vector as the track scene information of the old user in the M-th scene, where $l_m^M$ is the m-th browsed spatial position of the old user in the M-th scene;
S3-4, construct from the coordinates of the old user at the two positions at times k and k+1, the browsing-scene categories, and the focus objects of attention the position vector $\big((x_k, y_k, z_k), c_k, o_k, (x_{k+1}, y_{k+1}, z_{k+1}), c_{k+1}, o_{k+1}\big)$, and arrange all position vectors of the old user $u_i$ in sequence to obtain the location scene information of the old user $u_i$, where $(x_k, y_k, z_k)$ are the coordinates of the location at time k, $c_k$ is the browsing-scene category at time k, $o_k$ is the focus object of attention at time k, $(x_{k+1}, y_{k+1}, z_{k+1})$ are the coordinates of the location at time k+1, $c_{k+1}$ is the browsing-scene category at time k+1, and $o_{k+1}$ is the focus object of attention at time k+1;
S3-5, arrange the old user's user ID, position coordinates, entry time, departure time, and dwell time in sequence to obtain the scene information vector $\big(u_i, (x_f, y_f, z_f), t_f^{in}, t_f^{out}, t_f^{stay}\big)$ of the old user $u_i$ at position f; arrange the scene information vectors of the old user $u_i$ at all positions in sequence to obtain the old user's initial time scene information, and remove the entries whose dwell time is less than 5 seconds to obtain the old user's time scene information, where $(x_f, y_f, z_f)$ are the coordinates of the old user at position f, and $t_f^{in}$, $t_f^{out}$, and $t_f^{stay}$ are the entry time, departure time, and dwell time of the old user at position f;
S3-6, project the user scene information, track scene information, location scene information, and time scene information of the old user into a low-dimensional vector space to obtain the user scene embedding vector, track scene embedding vector, location scene embedding vector, and time scene embedding vector respectively, and according to the formula

$$V_i = \frac{1}{4}\left(V_u^i + V_r^i + V_l^i + V_t^i\right)$$

obtain the average scene embedding vector $V_i$ of the old user, where $V_u^i$ is the user scene embedding vector, $V_r^i$ the track scene embedding vector, $V_l^i$ the location scene embedding vector, and $V_t^i$ the time scene embedding vector of the old user;
S3-7, take the average scene embedding vector $V_i$ of the old user as the input of the hierarchical-sampling softmax function to obtain the recommendation probability of the old user for each browsing-scene spatial position;
S3-8, according to the formula

$$P_M = \frac{1}{N_R}\sum_{r \in R_i}\sum_{m=1}^{N}\log \Pr(m \mid i)$$

obtain the willingness value $P_M$ with which the old user recommends the M-th scene, and thus the willingness value with which the old user recommends each scene, where $R_i$ is the set of trajectories generated by the old user, $N_R$ is the length of the trajectories generated by the old user, N is the total number of browsed spatial positions in the M-th scene, $\log(\cdot)$ is the logarithmic function, and $\Pr(m \mid i)$ is the recommendation probability of the old user for the m-th browsing-scene spatial position;
S3-9, rank the scenes in descending order of the old user's recommendation willingness values, push the ranked result to the user's VR visual interface as the user's personal interest and preference list, and go to step S5.
4. The personalized railway VR scene interaction method of claim 1, wherein in step S2-7 the specific method of taking the average scene embedding vector $V_i$ of the i-th old user as the input of the hierarchical-sampling softmax function and obtaining the recommendation probability of the i-th old user for each browsing-scene spatial position is as follows:
according to the hierarchical-sampling softmax formula

$$\Pr(m \mid i) = \prod_{p=2}^{\lceil \log_2 N_R \rceil} \sigma\!\left((1-2b)\, V_i^{\top} \theta_{p-1}\right), \qquad \sigma(x) = \frac{1}{1+\exp(-x)},$$

obtain the recommendation probability $\Pr(m \mid i)$ of the i-th old user for the m-th browsing-scene spatial position, and thus the recommendation probability of the i-th old user for each browsing-scene spatial position, where $N_R$ is the length of the trajectory generated by the i-th old user, $\exp(\cdot)$ is the exponential function with the natural constant e as its base, $V_i^{\top}$ is the transpose of the average scene embedding vector $V_i$ of the i-th old user, $\theta_{p-1}$ is the parameter of the (p-1)-th node of the hierarchical-sampling softmax, and b is a branch indicator: b = 0 when the m-th browsing-scene spatial position takes the left branch at the p-th node, and b = 1 when it takes the right branch.
5. The personalized railway VR scene interaction method of claim 3, wherein in step S3-7 the specific method of taking the average scene embedding vector $V_i$ of the old user as the input of the hierarchical-sampling softmax function and obtaining the recommendation probability of the old user for each browsing-scene spatial position is as follows:
according to the hierarchical-sampling softmax formula

$$\Pr(m \mid i) = \prod_{p=2}^{\lceil \log_2 N_R \rceil} \sigma\!\left((1-2b)\, V_i^{\top} \theta_{p-1}\right), \qquad \sigma(x) = \frac{1}{1+\exp(-x)},$$

obtain the recommendation probability $\Pr(m \mid i)$ of the old user for the m-th browsing-scene spatial position, and thus the recommendation probability of the old user for each browsing-scene spatial position, where $N_R$ is the length of the trajectory generated by the old user, $\exp(\cdot)$ is the exponential function with the natural constant e as its base, $V_i^{\top}$ is the transpose of the average scene embedding vector $V_i$ of the old user, $\theta_{p-1}$ is the parameter of the (p-1)-th node of the hierarchical-sampling softmax, and b is a branch indicator: b = 0 when the m-th browsing-scene spatial position takes the left branch at the p-th node, and b = 1 when it takes the right branch.
CN202010156431.2A 2020-03-09 2020-03-09 Personalized railway VR scene interaction method Active CN111359203B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010156431.2A (CN111359203B) | 2020-03-09 | 2020-03-09 | Personalized railway VR scene interaction method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010156431.2A (CN111359203B) | 2020-03-09 | 2020-03-09 | Personalized railway VR scene interaction method

Publications (2)

Publication Number | Publication Date
CN111359203A (en) | 2020-07-03
CN111359203B (en) | 2021-09-28

Family

ID=71198381

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010156431.2A | Personalized railway VR scene interaction method | 2020-03-09 | 2020-03-09

Country Status (1)

Country Link
CN (1) CN111359203B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112395323A (en) * 2020-09-18 2021-02-23 江苏园上园智能科技有限公司 Interaction method based on user experience data
CN113076436B (en) * 2021-04-09 2023-07-25 成都天翼空间科技有限公司 VR equipment theme background recommendation method and system
CN113704605A (en) * 2021-08-24 2021-11-26 山东库睿科技有限公司 Service information recommendation method and device, electronic equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8065254B1 (en) * 2007-02-19 2011-11-22 Google Inc. Presenting a diversity of recommendations
CN104794207A (en) * 2015-04-23 2015-07-22 山东大学 Recommendation system based on cooperation and working method of recommendation system
CN108303108A (en) * 2017-12-05 2018-07-20 华南理工大学 A kind of personalized route recommendation method based on vehicle historical track
CN108733653A (en) * 2018-05-18 2018-11-02 华中科技大学 A kind of sentiment analysis method of the Skip-gram models based on fusion part of speech and semantic information
CN110738370A (en) * 2019-10-15 2020-01-31 南京航空航天大学 novel moving object destination prediction algorithm
CN110807150A (en) * 2019-10-14 2020-02-18 腾讯科技(深圳)有限公司 Information processing method and device, electronic equipment and computer readable storage medium


Also Published As

Publication number Publication date
CN111359203A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111359203B (en) Personalized railway VR scene interaction method
CN110309427B (en) Object recommendation method and device and storage medium
CN109086439A (en) Information recommendation method and device
CN111078939A (en) Method, system and recording medium for extracting and providing highlight image in video content
CN104782138A (en) Identifying a thumbnail image to represent a video
CN105493057A (en) Content selection with precision controls
CN111949877B (en) Personalized interest point recommendation method and system
CN112734068B (en) Conference room reservation method, conference room reservation device, computer equipment and storage medium
CN111708876A (en) Method and device for generating information
CN112215129A (en) Crowd counting method and system based on sequencing loss and double-branch network
CN110555428A (en) Pedestrian re-identification method, device, server and storage medium
CN111783895B (en) Travel plan recommendation method, device, computer equipment and storage medium based on neural network
CN108197203A (en) A kind of shop front head figure selection method, device, server and storage medium
CN110465089A (en) Map heuristic approach, device, medium and electronic equipment based on image recognition
CN111859142A (en) Cross-equipment migration recommendation system based on interconnection and intercommunication home platform and working method thereof
CN113158038A (en) Interest point recommendation method and system based on STA-TCN neural network framework
JPWO2010084839A1 (en) Likelihood estimation apparatus, content distribution system, likelihood estimation method, and likelihood estimation program
CN113742590A (en) Recommendation method and device, storage medium and electronic equipment
CN112784177A (en) Spatial distance adaptive next interest point recommendation method
CN116257704A (en) Point-of-interest recommendation method based on user space-time behaviors and social information
CN113158735A (en) Dense event description method based on graph neural network
CN111414538A (en) Text recommendation method and device based on artificial intelligence and electronic equipment
CN116628310B (en) Content recommendation method, device, equipment, medium and computer program product
CN117763492B (en) Network security tool intelligent recommendation method and device based on time sequence spatial characteristics and preference fluctuation
CN116932893B (en) Sequence recommendation method, system, equipment and medium based on graph convolution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant