WO2021093703A1 - Interaction method and system based on optical communication device - Google Patents

Interaction method and system based on optical communication device

Info

Publication number
WO2021093703A1
WO2021093703A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
optical communication
location information
posture
virtual object
Prior art date
Application number
PCT/CN2020/127476
Other languages
English (en)
French (fr)
Inventor
方俊
李江亮
Original Assignee
北京外号信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京外号信息技术有限公司
Publication of WO2021093703A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 11/00 Selecting arrangements for multiplex systems
    • H04Q 11/0001 Selecting arrangements for multiplex systems using optical switching
    • H04Q 11/0062 Network aspects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/22 Adaptations for optical transmission
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 11/00 Selecting arrangements for multiplex systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 11/00 Selecting arrangements for multiplex systems
    • H04Q 11/0001 Selecting arrangements for multiplex systems using optical switching
    • H04Q 11/0062 Network aspects
    • H04Q 2011/0077 Labelling aspects, e.g. multiprotocol label switching [MPLS], G-MPLS, MPAS

Definitions

  • The present invention relates to the field of information interaction, and in particular to an interaction method and system based on an optical communication device.
  • Location-based services are, for example, navigation, finding nearby businesses, finding nearby people, and so on.
  • Existing location-based services usually obtain the location information (geographic coordinates) of the user equipment through a telecom operator's radio communication network (such as a GSM or CDMA network) or a satellite positioning system (such as GPS), and provide users with corresponding services based on that location information.
  • However, existing location-based services cannot obtain precise location information of the user equipment, nor can they obtain its posture information, which limits device-based communication and interaction between users.
  • One aspect of the present invention relates to an interaction method based on an optical communication device, including: obtaining position information and posture information of a first device, wherein the first device has a camera, and wherein the position information and posture information of the first device are determined by analyzing an image including the optical communication device collected by the camera of the first device; obtaining position information of a second device, wherein the second device has a camera, and wherein the position information of the second device is determined by analyzing an image including the optical communication device collected by the camera of the second device; determining the positional relationship of the second device relative to the first device according to the position information and posture information of the first device and the position information of the second device; and performing an operation based on the positional relationship of the second device relative to the first device and a predetermined rule.
  • the positional relationship of the second device with respect to the first device includes the position of the second device in the field of view of the camera of the first device.
  • the obtaining the location information and the posture information of the first device includes: the server determines the location information and the posture information of the first device by analyzing the image including the optical communication device collected by the first device;
  • the obtaining the location information of the second device includes: the server determines the location information of the second device by analyzing the image including the optical communication device collected by the second device.
  • The optical communication device associated with the position information and posture information of the first device and the optical communication device associated with the position information of the second device are the same optical communication device or different optical communication devices, where the different optical communication devices have a known relative positional relationship.
  • The position information and posture information of the first device are the position information and posture information of the first device relative to the optical communication device, the position information and posture information in the scene coordinate system, or the position information and posture information in the world coordinate system; the position information of the second device is the position information of the second device relative to the optical communication device, the position information in the scene coordinate system, or the position information in the world coordinate system.
  • The position information and posture information of the first device in the scene coordinate system are obtained based on the position information and posture information of the first device relative to the optical communication device and the position information and posture information of the optical communication device itself in the scene coordinate system; the position information and posture information of the first device in the world coordinate system are obtained based on the position information and posture information of the first device relative to the optical communication device and the position information and posture information of the optical communication device itself in the world coordinate system. Similarly, the position information of the second device in the scene coordinate system is obtained based on the position information of the second device relative to the optical communication device and the position information of the optical communication device itself in the scene coordinate system, and the position information of the second device in the world coordinate system is obtained based on the position information of the second device relative to the optical communication device and the position information of the optical communication device itself in the world coordinate system.
  • the predetermined rule includes: performing an operation when the second device is located in a predetermined area of the camera field of view of the first device.
  • Performing the operation based on the positional relationship of the second device relative to the first device and a predetermined rule includes: performing the operation based on the positional relationship of the second device relative to the first device, an input of the first device or the second device, and the predetermined rule.
  • Performing the operation based on the positional relationship of the second device relative to the first device and a predetermined rule includes: performing the operation based on the positional relationship of the second device relative to the first device, attribute information of the first device or the second device, and the predetermined rule.
  • the operation includes obtaining, sending, displaying, modifying, adding or deleting attribute information associated with the first device or the second device.
  • the method further includes: obtaining updated location information and posture information of the first device; and/or obtaining updated location information of the second device.
  • the method further includes: obtaining posture information of the second device, wherein the posture information of the second device is determined by analyzing the image including the optical communication device; according to the location information of the first device and the second device The location information and posture information of the device determine the location relationship of the first device relative to the second device; perform operations based on the location relationship of the first device relative to the second device and a predetermined rule.
  • The method further includes: setting a virtual object with spatial location information associated with the second device, the spatial location information of the virtual object being determined based on the location information of the second device; and sending information related to the virtual object to the first device so that it can be used by the first device to present the virtual object on its display medium based on the position information and posture information the first device determines through the optical communication device; and wherein performing the operation includes performing an operation on the virtual object.
  • the virtual object also has posture information.
  • The method further includes: setting another virtual object with spatial location information associated with the first device, the spatial location information of the other virtual object being determined based on the location information of the first device; and sending information related to the other virtual object to the second device so that it can be used by the second device to present the other virtual object on its display medium based on the position information and posture information the second device determines through the optical communication device.
  • Another aspect of the present invention relates to an interaction system based on device position information and posture information, including: one or more optical communication devices; at least two devices, each having a camera capable of collecting images including the optical communication devices; and a server capable of communicating with the devices, which is configured to implement any of the above-mentioned methods.
  • Another aspect of the present invention relates to a storage medium in which a computer program is stored, and when the computer program is executed by a processor, it can be used to implement the above-mentioned method.
  • Another aspect of the present invention relates to an electronic device, including a processor and a memory, where a computer program is stored in the memory; when the computer program is executed by the processor, it can be used to implement the above-mentioned method.
  • Figure 1 shows an exemplary optical label;
  • Figure 2 shows an exemplary optical label network;
  • Figure 3 shows an interaction method according to an embodiment;
  • Figure 4 shows an interaction method according to an embodiment.
  • Optical communication devices are also called optical tags, and these two terms can be used interchangeably in this article.
  • Optical tags can transmit information through different light-emitting methods, which have the advantages of long recognition distance and relaxed requirements for visible light conditions, and the information transmitted by optical tags can change over time, which can provide large information capacity and flexible configuration capabilities.
  • the optical tag usually includes a controller and at least one light source, and the controller can drive the light source through different driving modes to transmit different information to the outside.
  • Fig. 1 shows an exemplary optical label 100, which includes three light sources (respectively a first light source 101, a second light source 102, and a third light source 103).
  • The optical label 100 also includes a controller (not shown in Figure 1), which is used to select a corresponding driving mode for each light source according to the information to be transmitted. For example, in different driving modes, the controller can use different driving signals to control the light-emitting mode of a light source, so that when the optical label 100 is photographed by a device with an imaging function, the imaging of that light source can present different appearances (for example, different colors, patterns, or brightness).
  • It can be understood that Figure 1 is only an example; the optical label may have a shape different from the example shown in Figure 1, and may have a different number and/or differently shaped light sources.
  • In order to provide corresponding services to users based on optical labels, each optical label may be assigned identification information (ID).
  • The light source can be driven by the controller in the optical label to transmit this identification information outward; an image acquisition device can capture one or more images containing the optical label, and identify the identification information transmitted by the optical label by analyzing the imaging of the optical label (or of each light source in the optical label) in those images. Other information associated with the identification information, for example the position information of the optical label corresponding to the identification information, can then be obtained. An illustrative decoding sketch follows.
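  • As a concrete illustration of the decoding step above, the following sketch recovers an identifier from the imaged states of an optical label's light sources across consecutive frames. The modulation scheme assumed here (one bit per light source per frame, decided by brightness thresholding) is purely illustrative; the patent does not specify a concrete encoding.

```python
def decode_label_id(frames, threshold=128):
    """frames: per-frame mean brightness of each light source, e.g.
    [[200, 40, 190], [35, 210, 45], ...]. Returns the decoded integer ID."""
    bits = []
    for frame in frames:
        # Bright light source -> bit 1, dark light source -> bit 0.
        bits.extend(1 if level > threshold else 0 for level in frame)
    # Pack the bit stream into an integer identifier.
    return int("".join(map(str, bits)), 2)

# Example: two frames of a three-light-source label give a 6-bit ID.
frames = [[200, 40, 190], [35, 210, 45]]
print(decode_label_id(frames))  # 0b101010 -> 42
```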
  • the information related to each optical tag can be stored in the server.
  • a large number of optical labels can also be constructed into an optical label network.
  • Fig. 2 shows an exemplary optical label network, which includes a plurality of optical labels and at least one server.
  • The identification information (ID) or other information of each optical label can be saved on the server, such as service information related to the optical label, and description or attribute information related to the optical label, such as its location information, model information, physical size information, physical shape information, and posture or orientation information.
  • the optical label may also have uniform or default physical size information and physical shape information.
  • the device can use the identified identification information of the optical tag to query the server to obtain other information related to the optical tag.
  • the location information of the optical tag may refer to the actual location of the optical tag in the physical world, which may be indicated by geographic coordinate information.
  • the server may be a software program running on a computing device, a computing device, or a cluster composed of multiple computing devices.
  • the optical tag may be offline, that is, the optical tag does not need to communicate with the server.
  • online optical tags that can communicate with the server are also feasible.
  • The devices mentioned in this article can be, for example, devices that users carry or control (for example, mobile phones with cameras, tablets, smart glasses, AR glasses, smart helmets, smart watches, etc.), or machines that can move autonomously (for example, drones with cameras, driverless cars, robots, etc.).
  • the device can collect the image of the optical label through the camera on it to obtain the image containing the optical label.
  • the device may have a display medium or be associated with a display medium.
  • Taking a large conference as an example, participants carrying devices (for example, mobile phones with cameras) can scan and identify the optical labels arranged around them, and access corresponding services through the identified optical label identification information.
  • When a user scans an optical label with his device, an image of the optical label can be taken, the location information and posture information of the user device relative to the optical label can be determined through relative positioning based on that image, and the location information and posture information can be sent to the server.
  • After the server obtains the location information and posture information (pose information for short) of the participants' devices through the above method, it can determine the field of view of a device's camera according to the device's location information and posture information.
  • If a predetermined rule stipulates that when the second device is within a predetermined range of the camera field of view of the first device, the server sends the relevant information of the second device to the first device, then when the location of participant B (i.e., device B) is within the predetermined range of the camera field of view of participant A (i.e., device A), the server can, according to the predetermined rule, send relevant information of participant B (for example, name, occupation, work unit, etc.) to participant A.
  • Taking a virtual shooting game as another example, a game player carrying a device (for example, a simulated shooting apparatus with a camera) can take an image of the optical label, and analyze the image to determine the location information and posture information of the game player's device relative to the optical label.
  • The location information and posture information can be sent to the server.
  • The server may determine the field of view of the camera of the game player's device based on that location information and posture information.
  • If a predetermined rule stipulates that when the second device is within a predetermined range of the camera field of view of the first device (for example, the center of the field of view), the first device is currently aiming at the second device, then when game player B (i.e., device B) is within the predetermined range of the camera field of view of game player A (i.e., device A), the server can determine according to the predetermined rule that game player A is currently aiming at game player B. At this time, if game player A performs a shooting operation, the server can record that game player A hit game player B, and the attribute information related to game player B (for example, a vitality value) can be changed accordingly.
  • Figure 3 shows an interaction method based on an optical communication device according to an embodiment. The method includes the following steps:
  • Step 310: The server obtains the location information and posture information of the first device, where the first device has a camera, and where the location information and posture information of the first device are determined by analyzing an image including the optical communication device collected by the camera of the first device.
  • The first device can identify the information transmitted by the optical label by scanning it, access the server based on that information, and transmit information to the server.
  • the server can obtain the pose information of the first device in various ways.
  • the server may extract the pose information of the device from the information from the first device.
  • the information from the first device may include the pose information of the first device.
  • The device can determine its pose information relative to the optical label by acquiring an image including the optical label and analyzing the image. For example, the device can determine the relative distance between the optical label and the device from the imaging size of the optical label in the image and optionally other information (for example, the actual physical size of the optical label and the focal length of the device's camera): the larger the imaging, the closer the distance; the smaller the imaging, the farther the distance. A rough sketch of this estimate follows.
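  • A minimal sketch of this pinhole-model distance estimate; the label height, focal length (in pixels), and imaged height below are assumed example values, not values from the patent.

```python
def distance_to_label(real_height_m, focal_length_px, imaged_height_px):
    """Pinhole-model range estimate: larger imaging -> closer label."""
    return real_height_m * focal_length_px / imaged_height_px

# A 0.10 m tall optical label imaged 60 px tall by a camera whose focal
# length is 1200 px lies roughly 2 m away.
print(distance_to_label(0.10, 1200.0, 60.0))  # 2.0
```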
  • the device may use the identification information of the optical tag to obtain the actual physical size information of the optical tag from the server, or the optical tag may have a uniform physical size and store the physical size on the device.
  • the device may use the identification information of the optical tag to obtain the physical shape information of the optical tag from the server, or the optical tag may have a unified physical shape and store the physical shape on the device.
  • the device can also directly obtain the relative distance between the optical label and the device through a depth camera or binocular camera installed on it.
  • the device can also use any other existing positioning method to determine its position information relative to the optical tag.
  • the device can also determine its posture information, which can be used to determine the range or boundary of the real scene shot by the device.
  • the posture information of the device is actually the posture information of the image acquisition device (such as a camera) of the device.
  • The device can scan the optical label, and can determine its posture information relative to the optical label based on the imaging of the optical label.
  • When the imaging position or imaging area of the optical label is at the center of the device's imaging field of view, it can be considered that the device is currently facing the optical label.
  • The imaging direction of the optical label can be further considered when determining the device's posture. As the posture of the device changes, the imaging position and/or imaging direction of the optical label on the device will change accordingly; therefore, the posture information of the device relative to the optical label can be obtained from the imaging of the optical label on the device.
  • the device may also establish a coordinate system based on the optical label, and the coordinate system may be referred to as the optical label coordinate system.
  • Some points on the optical label may be determined as some spatial points in the optical label coordinate system, and the coordinates of these spatial points in the optical label coordinate system may be determined according to the physical size information and/or physical shape information of the optical label.
  • Some points on the optical label may be, for example, the corners of the housing of the optical label, the end of the light source in the optical label, some identification points in the optical label, and so on.
  • According to the object structure features or geometric structure features of the optical label, the image points corresponding to these spatial points can be found in the image taken by the device camera, and the position of each image point in the image can be determined.
  • From the coordinates of the spatial points in the optical label coordinate system and the positions of the corresponding image points in the image, combined with the intrinsic parameter information of the device camera, the pose information (R, t) of the device camera in the optical label coordinate system when the image was taken can be calculated, where R is a rotation matrix that can represent the posture information of the device camera in the optical label coordinate system, and t is a displacement vector that can represent the position information of the device camera in the optical label coordinate system.
  • The method of calculating R and t is known in the prior art; for example, the 3D-2D PnP (Perspective-n-Point) method can be used to calculate R and t. To avoid obscuring the present invention, a detailed introduction is omitted here; a minimal sketch is given below.
  • The rotation matrix R and the displacement vector t in effect describe how to transform the coordinates of a point between the optical label coordinate system and the device camera coordinate system. For example, through the rotation matrix R and the displacement vector t, the coordinates of a point in the optical label coordinate system can be converted into coordinates in the device camera coordinate system, and can be further converted into the position of an image point in the image.
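  • The following is a minimal sketch of this pose recovery using OpenCV's PnP solver. The 3D coordinates of the label's identification points, the matched pixel positions, and the camera intrinsics are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

# Known 3D positions of identification points on the optical label (e.g.,
# housing corners of a 0.2 m x 0.1 m label), expressed in the optical
# label coordinate system, in meters.
object_points = np.array([[-0.10, -0.05, 0.0],
                          [ 0.10, -0.05, 0.0],
                          [ 0.10,  0.05, 0.0],
                          [-0.10,  0.05, 0.0]])

# Pixel positions of the corresponding image points found in the captured
# image (hypothetical measurements).
image_points = np.array([[612.0, 385.0],
                         [828.0, 390.0],
                         [824.0, 498.0],
                         [609.0, 494.0]])

# Camera intrinsic matrix (assumed pre-calibrated); negligible distortion.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # R maps label-frame coordinates to camera frame

# Position of the device camera expressed in the optical label frame.
camera_position = (-R.T @ tvec).ravel()
print("camera position in label frame:", camera_position)
```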
  • the server may also obtain the pose information of the device by analyzing the information from the first device.
  • the information from the first device may include the image information of the optical tag.
  • The server determines the pose information of the first device relative to the optical label by analyzing the image.
  • The specific method is similar to the way the device itself obtains its pose information relative to the optical label by analyzing an image of it, and will not be repeated here.
  • Step 320 The server obtains the location information of the second device, where the second device has a camera, and wherein the location information of the second device is determined by analyzing the image including the optical communication device collected by the camera of the second device.
  • the server may use various methods to obtain the position information of the second device relative to the optical tag, and the specific method is similar to the various methods described in step 310 above, and details are not described herein again.
  • the server can also obtain the posture information of the second device in a similar manner to the above.
  • The pose information the server receives from a device, or the device's pose information the server obtains through analysis, can be the device's pose information relative to the optical label, or the device's pose information in the scene coordinate system or in the world coordinate system.
  • the device or server can realize the conversion of the target pose between different coordinate systems according to the transformation matrix between different coordinate systems.
  • The device or server can determine the device's pose information in the scene coordinate system according to the device's pose information relative to the optical label and the pose information of the optical label itself in the scene coordinate system.
  • the pose information of the device in the world coordinate system can also be determined according to the pose information of the device relative to the optical tag and the pose information of the optical tag itself in the world coordinate system.
  • The device can send its pose information relative to the optical label to the server; the server can then use the device's pose information relative to the optical label and the pose information of the optical label itself in the scene coordinate system or the world coordinate system to determine the device's pose information in the scene coordinate system or the world coordinate system.
  • The pose information of the optical label itself in the scene coordinate system or the world coordinate system can be stored on the server, and can be obtained from the server by the device using the identification information of the optical label. A sketch of chaining these transforms follows.
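  • A short sketch of this conversion: chain the device pose in the label frame (e.g., from PnP) with the pose of the label itself in the scene frame (stored on the server). The numeric poses below are illustrative assumptions.

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t, dtype=float).ravel()
    return T

# Device pose expressed in the optical label coordinate system.
T_label_device = se3(np.eye(3), [0.0, 0.0, 2.5])
# Pose of the optical label itself in the scene coordinate system.
T_scene_label = se3(np.eye(3), [10.0, 3.0, 1.8])

# Composing the two rigid transforms yields the device pose in the scene.
T_scene_device = T_scene_label @ T_label_device
print(T_scene_device[:3, 3])  # device position in the scene frame
```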
  • The pose information of the device can be its pose when scanning the optical label, or new pose information that the device measures or tracks after scanning the optical label using a built-in acceleration sensor, gyroscope, camera, etc., through methods known in the art (for example, inertial navigation, visual odometry, SLAM, VSLAM, SFM, etc.).
  • the server can continuously obtain new pose information of the device and update the pose information of the device.
  • In many scenarios there may be more than one optical label but rather an optical label network as shown in Figure 2, in which the server can know the pose information of each optical label or the relative pose relationships between them.
  • In these scenarios, the optical labels scanned by the first device and the second device may not be the same optical label; the first device may also scan multiple different optical labels at different times to provide or update its location information (when providing or updating the location information, it can send the identification information of the related optical labels), and the second device may likewise scan multiple different optical labels at different times to determine its location information and posture information.
  • Step 330 The server determines the position relationship of the second device relative to the first device according to the position information and posture information of the first device and the position information of the second device.
  • The server may determine a coordinate system with the first device as the coordinate origin based on the position information and posture information of the first device, and convert the position information of the second device into position information in that coordinate system; in this way, the positional relationship of the second device relative to the first device (that is, the coordinate origin) can be determined.
  • The server may determine the field of view of the camera of the first device based on the pose information of the first device, determine whether the second device is inside or outside that field of view based on the field of view and the position information of the second device, and determine the specific position of the second device within the field of view of the first device's camera, as sketched below.
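  • One way to sketch this test: project the second device's position into the first device's camera image and check whether it lands inside the image bounds. The poses, intrinsics, and image size are assumed example values, and the camera is taken to look along +z of its own frame.

```python
import numpy as np

def project_into_camera(T_scene_cam, p_scene, K):
    """Return (u, v) pixel coordinates of a scene point, or None if it
    lies behind the camera."""
    p_cam = np.linalg.inv(T_scene_cam) @ np.append(p_scene, 1.0)
    if p_cam[2] <= 0.0:
        return None
    uvw = K @ p_cam[:3]
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
width, height = 1280, 720

T_scene_first_device = np.eye(4)                    # first device's camera pose
second_device_position = np.array([0.5, 0.0, 4.0])  # from step 320

uv = project_into_camera(T_scene_first_device, second_device_position, K)
if uv is not None and 0 <= uv[0] < width and 0 <= uv[1] < height:
    print("second device is inside the first device's view at", uv)
```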
  • Step 340 The server performs an operation based on the positional relationship of the second device relative to the first device and a predetermined rule.
  • the operations performed by the server may include selecting a device, obtaining information about the device, sending relevant information to the device, adding or modifying information related to the device, or deleting some information about the device.
  • the information related to the device can be pre-stored in the server, or sent by the device to the server in real time.
  • the information related to the device may include attribute information of the device.
  • the attribute information of the device can be the information of the user who uses the device, user-defined information or system setting information, or any other information.
  • For example, in a conference scene, the attribute information of device A can be the personal information of participant A (for example, name, occupation, work unit, etc.), customized information of participant A (for example, contact information that A provides voluntarily, such as a mobile phone number or email address), or attribute information set by the system according to participant A's identity (for example, "speaker").
  • the attribute information of the device may also include specific data values of the attributes of the device.
  • the attribute information of the equipment A may include the identity, level, skill, vitality value, etc. of the game character of the game player A.
  • the server determines whether the corresponding operation should be performed based on the relative position relationship between the devices and predetermined rules to realize the interaction between users.
  • the predetermined rule can be any rule, which can be set by the user or can be preset by the server.
  • A predetermined rule may stipulate that when the second device is within the field of view of the camera of the first device, a corresponding operation is performed. At this time, for example, in a conference scene, when participant B (i.e., device B) appears within the field of view of participant A's device camera, the server sends information about B's identity, occupation, and work unit to A.
  • The predetermined rule may also stipulate that the server performs the corresponding operation only when the second device is located in the central area of the camera field of view of the first device. In that case, only when participant B (i.e., device B) is located in the central area of the field of view of participant A's device camera does the server send B's identity, occupation, work unit, and other relevant information to A.
  • the server may perform corresponding operations based on the relative positional relationship between the devices, the input of the devices, and predetermined rules.
  • For example, a predetermined rule may stipulate that if the second device is located in a non-central area of the camera field of view of the first device, the server performs the corresponding operation when the first device selects the imaging of the second device on its display medium. In the conference scene, when participant B (i.e., device B) appears within the field of view of participant A's device camera but in a non-central area, and participant A clicks on B's imaging on his device, the server sends B's identity, occupation, work unit, and other relevant information to A.
  • A corresponding effect may be presented on the display medium of the first device to indicate that the user of the first device has selected or hit the second device.
  • For example, an icon indicating selection or a hit may be presented on the imaging of the second device on the display medium.
  • To help the user select or aim, corresponding auxiliary icons, such as an indicator box or a sight icon, may also be presented on the display medium of the first device.
  • the server may perform corresponding operations according to the relative positional relationship between the devices, the attribute information of the devices, and predetermined rules.
  • A predetermined rule may stipulate that the server performs the corresponding operation only when the second device is located within the field of view of the camera of the first device and the first user and/or the second user has a specific identity; an illustrative evaluation of such combined rules is sketched below.
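  • An illustrative sketch of evaluating such combined rules on the server: the decision uses the projected position (inside the view, central or non-central area), an optional input (a click), and attribute information. The rule shape and field names are assumptions, not the patent's actual data model.

```python
def should_perform_operation(uv, image_size, clicked, attrs):
    """uv: projected pixel position of the second device, or None."""
    w, h = image_size
    if uv is None or not (0 <= uv[0] < w and 0 <= uv[1] < h):
        return False                      # outside the camera field of view
    central = abs(uv[0] - w / 2) < w / 4 and abs(uv[1] - h / 2) < h / 4
    if not central and not clicked:
        return False                      # non-central area requires a click
    return attrs.get("identity") == "speaker"  # attribute-based condition

print(should_perform_operation((650, 350), (1280, 720), False,
                               {"identity": "speaker"}))  # True
```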
  • the server can continuously obtain the continuously updated pose information of the device, and determine whether to perform a corresponding operation according to the updated pose information of the device.
  • When the first device scans the optical label, the location of the second device may not be in the field of view of the first device (for example, the field of view of the camera of the first device).
  • In this case, the first device can be translated and/or rotated, and its pose changes can be tracked through built-in sensors (such as an acceleration sensor, a gyroscope, a visual odometer, etc.) to determine the new field of view of the first device.
  • When the location of the second device enters the field of view of the first device, the corresponding operation can be performed.
  • the technology of using the built-in sensor of the device to track the device's pose change is a well-known technology in the field of positioning and navigation technology, and will not be repeated here.
  • The location information and posture information of the second device can be obtained based on the image of the optical label collected by the second device; the server can then determine the positional relationship of the first device relative to the second device based on the location information and posture information of the second device and the location information of the first device, and perform the corresponding operation based on the positional relationship of the first device relative to the second device and a predetermined rule.
  • The specific manner is similar to the various manners described in steps 310-340 above, and will not be repeated here.
  • The server may also set a virtual object associated with a device, and may use an optical label as an anchor point to superimpose the virtual object onto the real scene, for example, using the virtual object to accurately mark the location of a user or device in the real scene.
  • the virtual object can be, for example, an icon, a picture, a text, a number, an emoticon, a virtual three-dimensional object, a three-dimensional scene model, an animation, a video, and so on.
  • Users or devices can operate on virtual objects of other users to achieve interactive communication.
  • Taking the above multiplayer shooting game as an example again, game player A carrying a device (for example, a simulated shooting apparatus with a camera, AR glasses, etc.) can use the device to scan and identify an optical label arranged nearby to determine the pose information of the device.
  • After obtaining the pose information of A's device, the server can set a virtual object with spatial location information for A according to the position information of A's device.
  • The virtual object may be, for example, an icon or numerical value representing related information of the character selected by A in the game, for example, a police icon with experience-value data or vitality data, or a gangster icon with experience-value data or vitality data.
  • According to the spatial location information of the virtual object and the pose information of other players' devices, the virtual object can be accurately presented on the display media of the other players' devices.
  • After the game starts, the server can determine according to the predetermined rules that game player A is currently aiming at game player B. At this time, if game player A performs a shooting operation, the server can record that game player A hit game player B, and can accordingly change the attribute information related to game player B (for example, a vitality value) and the attribute information related to game player A (for example, an experience value).
  • The change in the attribute information of game player A or B can be expressed through the virtual object, for example, by increasing the experience-value data presented by the virtual object or reducing the vitality data presented by it. In this way, each player and the virtual object representing his character can be accurately matched, thereby enhancing the players' immersive sensory experience.
  • game players can use AR glasses instead of mobile phones during the game to experience a more realistic game environment.
  • Figure 4 shows an interaction method based on an optical communication device according to an embodiment, where steps 410 and 420 are similar to the above-mentioned steps 310 and 320; in addition, the method further includes the following steps:
  • Step 430 The server sets a virtual object with spatial location information for the second device, and the spatial location information of the virtual object is determined based on the location information of the second device.
  • the server may set a virtual object associated with the second device.
  • the virtual object may be, for example, the name, gender, and virtual icon of the character image corresponding to the character selected by the second device in the game.
  • The server may also set the virtual object according to information related to the second device, for example, according to the attribute information of the device, the information of the user who uses the device, information about a certain operation performed by the user with the device (for example, the time of joining the game), user-defined information, or information set by the system.
  • the spatial position information of the virtual object may be determined according to the position information of the second device.
  • The spatial position information may be position information relative to the optical label, or position information in a scene coordinate system or a world coordinate system.
  • the server may simply determine the spatial location of the virtual object as the location of the second device, or may determine the spatial location of the virtual object as other locations, for example, other locations near the location of the second device.
  • the server may also set the posture information of the virtual object according to the posture information of the second device.
  • the posture information of the virtual object may be the posture information of the second device relative to the optical tag, or its posture information in the scene coordinate system or the world coordinate system.
  • The pose information of the virtual object associated with a game player can be set according to the pose information of the game player in the game scene. For example, when player A has his back to player B, A's virtual object (such as the character image of a game character) also has its back to player B.
  • Where the virtual object has posture information, the posture information of the virtual object associated with the second device can be determined according to the posture information of the first device; that is, the posture of the virtual object associated with the second device can be adjusted to follow the posture of the first device.
  • the posture of the virtual object may be determined according to the posture information of the first device relative to the second device, so that a certain orientation of the virtual object (for example, the front direction of the virtual object) always faces the first device. Taking the aforementioned shooting game as an example, the front of B's virtual object (such as the character name) always faces A's device.
  • A direction from the virtual object to the device may be determined in space based on the positions of the device and the virtual object, and the posture of the virtual object may be determined based on that direction, as in the sketch below. In this way, the same virtual object can in effect have a separate posture for each device at a different position.
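  • A small sketch of this "always face the device" behavior: derive a yaw angle for the virtual object from the direction toward the observing device. The convention that the object's default front points along +z is an assumption for illustration.

```python
import numpy as np

def facing_yaw(object_pos, device_pos):
    """Yaw (radians, about the vertical axis) that turns the object's
    front toward the device."""
    d = np.asarray(device_pos, dtype=float) - np.asarray(object_pos, dtype=float)
    return np.arctan2(d[0], d[2])

# Each observing device gets its own yaw, so the same virtual object can
# present a different posture to devices at different positions.
print(facing_yaw([0.0, 0.0, 0.0], [3.0, 0.0, 3.0]))  # ~0.785 rad (45 deg)
```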
  • Step 440 The server sends information related to the virtual object to the first device so that it can be used by the first device to present the virtual object on its display medium based on the position information and posture information determined by the optical communication device.
  • the server may send the related information of the virtual object to the first device in various ways.
  • the server may directly send the information related to the virtual object to the first device via a wireless link, for example.
  • The optical label identification device can identify the information (such as identification information) conveyed by an optical label by scanning the optical labels arranged in the scene, and use the information to access the server (for example, through wireless signals) to obtain the information related to the virtual object from the server.
  • the server may use the optical tag to transmit the related information of the virtual object to the optical tag identification device in an optical communication manner.
  • the related information of the virtual object can be used by the optical tag recognition device to present the virtual object on its display medium based on its position information and/or posture information.
  • Step 450 The server determines the position relationship of the second device relative to the first device according to the pose information of the first device and the position information of the second device.
  • Step 460 The server performs an operation on the virtual object according to the position relationship of the second device relative to the first device and a predetermined rule.
  • the server may perform operations based on the positional relationship of the second device relative to the first device and predetermined rules, and the operations performed by the server may be presented on the display medium through virtual objects.
  • For example, operations performed by the server, such as selecting a device, obtaining relevant information of the device, sending relevant information to the device, adding or modifying information related to the device, or deleting some information of the device, can be presented on the display medium by selecting the virtual object associated with the device, adding or modifying the related information of the virtual object, or deleting some information of the virtual object.
  • another virtual object associated with the first device may also be set, and the spatial location information of the another virtual object may be determined based on the location information of the first device.
  • Information related to the other virtual object may be sent to the second device, and the virtual object may be presented on the display medium of the second device based on the position information and posture information of the second device relative to the optical label. In this way, users can interact based on each other's virtual objects.
  • the server may continuously update the virtual object associated with the device based on the new information from the device.
  • For example, after the server sets the virtual object associated with the device, the location information and/or posture information of the device may change.
  • The new location information of the device can be sent to the server by scanning the optical label again or in other ways.
  • The device can determine its latest pose information relative to the optical label through the various methods mentioned above (for example, by collecting an image including the optical label and analyzing the image), or it can track its pose changes through built-in sensors (such as an acceleration sensor, gyroscope, camera, etc.).
  • The new pose information of the device can be sent to the server periodically, or sending can be triggered when the difference between the device's new pose and the pose last sent to the server exceeds a preset threshold, as sketched below.
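  • A sketch of this update policy; the thresholds and the server call are illustrative placeholders.

```python
import numpy as np

def maybe_send_pose(last_sent, current, send_fn,
                    pos_threshold_m=0.2, yaw_threshold_rad=0.1):
    """Send the pose when position or orientation drift exceeds a threshold."""
    moved = np.linalg.norm(current["position"] - last_sent["position"])
    turned = abs(current["yaw"] - last_sent["yaw"])
    if moved > pos_threshold_m or turned > yaw_threshold_rad:
        send_fn(current)   # e.g., POST the new pose to the server
        return current     # becomes the new "last sent" pose
    return last_sent

last = {"position": np.zeros(3), "yaw": 0.0}
now = {"position": np.array([0.3, 0.0, 0.0]), "yaw": 0.05}
last = maybe_send_pose(last, now, lambda p: print("sending", p))
```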
  • In this way, the server can learn the device's new pose information in time and can update the spatial pose information of the virtual object accordingly. For example, in a shooting game, when the distance between player A and player B gradually increases, the virtual object of A presented on the display medium of device B and the virtual object of B presented on the display medium of device A correspondingly become smaller.
  • The virtual object of B presented on the display medium of A's device and the virtual object of A presented on the display medium of B's device are also adjusted correspondingly as the posture information of the respective devices changes.
  • The server can also continuously update the virtual object associated with a device according to the device's new attribute information. For example, if player A is hit multiple times in a shooting game, the vitality value presented by player A's virtual object may show a gradual decrease.
  • the device or its user can change the related information of the virtual object.
  • the device or its user can set a new virtual object, move the position of the virtual object, change the posture of the virtual object, change the size or color of the virtual object, add a label on the virtual object, or delete its virtual object, and so on.
  • the server can update the virtual object based on the modified content and send it to related devices.
  • users can communicate with each other by editing virtual objects associated with other users.
  • The user can upload the relevant information of the edited virtual object to the server, and the server can send it to the device associated with the virtual object, or display it on the virtual object or on other virtual objects associated with the editing user so that it is visible to other users.
  • the device or its user can perform a delete operation on the superimposed virtual object and notify the server.
  • the user can set privacy settings to limit the visible range of their editing operations.
  • the present invention can be implemented in the form of a computer program.
  • the computer program can be stored in various storage media (for example, a hard disk, an optical disk, a flash memory, etc.), and when the computer program is executed by a processor, it can be used to implement the method of the present invention.
  • the present invention may be implemented in the form of an electronic device.
  • the electronic device includes a processor and a memory, and a computer program is stored in the memory.
  • When the computer program is executed by the processor, it can be used to implement the method of the present invention.
  • References herein to "various embodiments", "some embodiments", "one embodiment", or "an embodiment" mean that a specific feature, structure, or property described in connection with that embodiment is included in at least one embodiment. Therefore, appearances of the phrases "in various embodiments", "in some embodiments", "in one embodiment", or "in an embodiment" in various places throughout this document do not necessarily refer to the same embodiment.
  • Specific features, structures, or properties can be combined in any suitable manner in one or more embodiments. Therefore, a specific feature, structure, or property shown or described in connection with one embodiment can be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, as long as the combination is not illogical or non-functional.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An interaction method based on an optical communication device, comprising: obtaining position information and posture information of a first device, wherein the first device has a camera, and wherein the position information and posture information of the first device are determined by analyzing an image including the optical communication device collected by the camera of the first device; obtaining position information of a second device, wherein the second device has a camera, and wherein the position information of the second device is determined by analyzing an image including the optical communication device collected by the camera of the second device; determining the positional relationship of the second device relative to the first device according to the position information and posture information of the first device and the position information of the second device; and performing an operation based on the positional relationship of the second device relative to the first device and a predetermined rule.

Description

基于光通信装置的交互方法和*** 技术领域
本发明涉及信息交互领域,尤其涉及一种基于光通信装置的交互方法和***。
背景技术
本部分的陈述仅仅是为了提供与本发明相关的背景信息,以帮助理解本发明,这些背景信息并不一定构成现有技术。
随着科技的发展,基于位置的服务(Location Based Service)获得了越来越广泛的应用。基于位置的服务例如是导航、寻找附近商家、寻找附近的人、等等。现有的基于位置的服务通常是通过电信运营商的无线电通讯网络(如GSM网、CDMA网)或卫星定位***(如GPS)来获取用户设备的位置信息(地理坐标),并基于该位置信息为用户提供相应服务。然而,现有的基于位置的服务并不能获得用户设备的精确位置信息,也无法获得用户设备的姿态信息,这对用户间基于设备的交流互动造成限制。
发明内容
本发明的一个方面涉及一种基于光通信装置的交互方法,包括:获得第一设备的位置信息和姿态信息,其中,所述第一设备上具有摄像头,以及其中,通过分析所述第一设备的摄像头采集的包括光通信装置的图像来确定所述第一设备的位置信息和姿态信息;获得第二设备的位置信息,其中,所述第二设备上具有摄像头,以及其中,通过分析所述第二设备的摄像头采集的包括光通信装置的图像来确定所述第二设备的位置信息;根据所述第一设备的位置信息和姿态信息以及所述第二设备的位置信息,确定所述第二设备相对于所述第一设备的位置关系;以及基于所述第二设备相对于所述第一设备的位置关系以及预定的规则执行操作。
可选的,其中,所述第二设备相对于所述第一设备的位置关系包括所述第二设备在所述第一设备的摄像头的视野内的位置。
可选的,其中,所述获得第一设备的位置信息和姿态信息包括:从所述第一设备接收所述位置信息和姿态信息,其中,所述第一设备通过采集并分析包括光通信装置的图像来确定所述位置信息和姿态信息;所述获得第二设备的位置信息包括:从所述第二设备接收所述位置信息,其中,所述第二设备通过采集并分析包括光通信装置的图像来确定所述位置信息。
可选的,其中,所述获得第一设备的位置信息和姿态信息包括:服务器通过分析所述第一设备采集的包括光通信装置的图像以确定所述第一设备的位置信息和姿态信息;所述获得第二设备的位置信息包括:服务器通过分析所述第二设备采集的包括光通信装置的图像以确定所述第二设备的位置信息。
可选的,其中,与所述第一设备的位置信息和姿态信息相关联的光通信装置和与所述第二设备的位置信息相关联的光通信装置是相同的光通信装置,或者不同的光通信装置,所述不同的光通信装置具有确定的相对位置关系。
可选的,其中,所述第一设备的位置信息和姿态信息是所述第一设备相对于光通信装置的位置信息和姿态信息、在场景坐标系中的位置信息和姿态信息、或者在世界坐标系中的位置信息和姿态信息;所述第二设备的位置信息是所述第二设备相对于光通信装置的位置信息、在场景坐标系中的位置信息、或者在世界坐标系中的位置信息。
可选的,其中,所述第一设备在场景坐标系中的位置信息和姿态信息是基于所述第一设备相对于所述光通信装置的位置信息和姿态信息以及所述光通信装置本身在场景坐标系中的位置信息和姿态信息所获得的,所述第一设备在世界坐标系中的位置信息和姿态信息是基于所述第一设备相对于所述光通信装置的位置信息和姿态信息以及所述光通信装置本身在世界坐标系中的位置信息和姿态信息所获得的;所述第二设备在场景坐标系中的位置信息是基于所述第二设备相对于所述光通信装置的位置信息以及所述光通信装置本身在场景坐标系中的位置信息所获得的,所述第二设备在世界坐标系中的位置信息是基于所述第二设备相对于所述光通信装置的位置信息以及所述光通信装置本身在世界坐标系中的位置信息所获得的。
可选的,其中,所述预定的规则包括:当所述第二设备位于所述第一设备的摄像头视野的预定区域时执行操作。
可选的,其中,所述基于所述第二设备相对于所述第一设备的位置关系以及预定的规则执行操作包括:基于所述第二设备相对于所述第一设备的位置关系、所述第一设备或第二设备的输入、以及预定的规则执行操作。
可选的,其中,所述基于所述第二设备相对于所述第一设备的位置关系以及预定的规则执行操作包括:基于所述第二设备相对于所述第一设备的位置关系、所述第一设备或第二设备的属性信息、以及预定的规则执行操作。
可选的,其中,所述操作包括获取、发送、显示、修改、增加或删除与所述第一设备或者所述第二设备相关联的属性信息。
可选的,还包括:获得所述第一设备的更新的位置信息和姿态信息;和/或获得所述第二设备的更新的位置信息。
可选的,还包括:获得第二设备的姿态信息,其中,通过分析包括光通信装置的图像来确定所述第二设备的姿态信息;根据所述第一设备的位置信息以及所述第二设备的位置信息和姿态信息,确定所述第一设备相对于所述第二设备的位置关系;基于所述第一设备相对于所述第二设备的位置关系以及预定的规则执行操作。
可选的,还包括:设置与所述第二设备相关联的具有空间位置信息的虚拟对象,所述虚拟对象的空间位置信息基于所述第二设备的位置信息确定;将与所述虚拟对象有关的信息发送给所述第一设备,使其能够被所述第一设备使用以基于其通过光通信装置确定的位置信息和姿态信息在其显示媒介上呈现所述虚拟对象;以及其中,所述执行操作包括对所述虚拟对象执行操作。
可选的,其中,所述虚拟对象还具有姿态信息。
可选的,还包括:设置与所述第一设备相关联的具有空间位置信息的另一虚拟对象,所述另一虚拟对象的空间位置信息基于所述第一设备的位置信息确定;将与所述另一虚拟对象有关的信息发送给所述第二设备,使其能够被所述第二设备使用以基于其通过光通信装置确定的位置信息和 姿态信息在其显示媒介上呈现所述另一虚拟对象。
本发明的另一个方面涉及一种基于设备位置信息和姿态信息的交互***,包括:一个或多个光通信装置;至少两个设备,所述设备上具有摄像头,所述摄像头能够采集包括所述光通信装置的图像;以及能够与所述设备通信的服务器,其配置用于实现任一上述的方法。
本发明的再一个方面涉及一种存储介质,其中存储有计算机程序,在所述计算机程序被处理器执行时,能够用于实现上述的方法。
本发明的再一个方面涉及一种电子设备,包括处理器和存储器,所述存储器中存储有计算机程序,在所述计算机程序被处理器执行时,能够用于实现上述的方法。
附图说明
以下参照附图对本发明实施例作进一步说明,其中:
图1示出了一种示例性的光标签;
图2示出了一种示例性的光标签网络;
图3示出了根据一个实施例的交互方法;
图4示出了根据一个实施例的交互方法。
具体实施方式
为了使本发明的目的、技术方案及优点更加清楚明白,以下结合附图通过具体实施例对本发明进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本发明,并不用于限定本发明。
光通信装置也称为光标签,这两个术语在本文中可以互换使用。光标签能够通过不同的发光方式来传递信息,其具有识别距离远、可见光条件要求宽松的优势,并且光标签所传递的信息可以随时间变化,从而可以提供大的信息容量和灵活的配置能力。
光标签中通常可以包括控制器和至少一个光源,该控制器可以通过不同的驱动模式来驱动光源,以向外传递不同的信息。图1示出了一种示例性的光标签100,其包括三个光源(分别是第一光源101、第二光源102、第三光源103)。光标签100还包括控制器(在图1中未示出),其用于根据要传递的信息为每个光源选择相应的驱动模式。例如,在不同的驱动模 式下,控制器可以使用不同的驱动信号来控制光源的发光方式,从而使得当使用具有成像功能的设备拍摄光标签100时,其中的光源的图像可以呈现出不同的外观(例如,不同的颜色、图案、亮度、等等)。通过分析光标签100中的光源的成像,可以解析出各个光源此刻的驱动模式,从而解析出光标签100此刻传递的信息。可以理解,图1仅仅用作示例,光标签可以具有与图1所示的示例不同的形状,并且可以具有与图1所示的示例不同数量和/或不同形状的光源。
为了基于光标签向用户提供相应的服务,每个光标签可以被分配一个标识信息(ID)。通常,可由光标签中的控制器驱动光源以向外传递该标识信息,图像采集设备可以对光标签进行图像采集来获得包含光标签的一幅或多幅图像,并通过分析图像中的光标签(或光标签中的各个光源)的成像以识别出光标签传递的标识信息,之后,可以获取与标识信息相关联的其他信息,例如,与该标识信息对应的光标签的位置信息。
可以将与每个光标签相关的信息存储于服务器中。在现实中,还可以将大量的光标签构建成一个光标签网络。图2示出了一种示例性的光标签网络,该光标签网络包括多个光标签和至少一个服务器。可以在服务器上保存每个光标签的标识信息(ID)或其他信息,例如与该光标签相关的服务信息、与该光标签相关的描述信息或属性信息,如光标签的位置信息、型号信息、物理尺寸信息、物理形状信息、姿态或朝向信息等。光标签也可以具有统一的或默认的物理尺寸信息和物理形状信息等。设备可以使用识别出的光标签的标识信息来从服务器查询获得与该光标签有关的其他信息。光标签的位置信息可以是指该光标签在物理世界中的实际位置,其可以通过地理坐标信息来指示。服务器可以是在计算装置上运行的软件程序、一台计算装置或者由多台计算装置构成的集群。光标签可以是离线的,也即,光标签不需要与服务器进行通信。当然,可以理解,能够与服务器进行通信的在线光标签也是可行的。
本文提到的设备例如可以是用户随身携带或控制的设备(例如,带有摄像头的手机、平板电脑、智能眼镜、AR眼镜、智能头盔、智能手表等等),也可以是能够自主移动的机器(例如,带有摄像头的无人机、无人驾驶汽车、机器人等等)。设备可以通过其上的摄像头对光标签进行图像采集来 获得包含光标签的图像。设备可以具有显示媒介或与显示媒介相关联。
以大型会议为例进行说明,携带有设备(例如带有摄像头的手机)的参会人员可以扫描并识别布置在其周围的光标签,并通过所识别的光标签标识信息来访问相应的服务。在用户使用其设备扫描光标签时,可以拍摄光标签的图像并基于该图像通过相对定位来确定用户设备相对于光标签的位置信息和姿态信息,并将该位置信息和姿态信息发送到服务器。服务器通过上述方法获得参会人员设备的位置信息和姿态信息(简称位姿信息)后,可以根据设备的位置信息和姿态信息确定设备的摄像头的视野范围。若预定的规则规定:当第二设备位于第一设备的摄像头视野的预定范围内时,服务器将第二设备的有关信息发送给第一设备,则当参会人员乙(即设备乙)的位置处于参会人员甲(即设备甲)的摄像头视野的预定范围内时,服务器可以根据预定的规则,将参会人员乙的有关信息(例如可以是姓名、职业、工作单位等)发送给参会人员甲。
再以虚拟射击游戏为例进行说明,携带有设备(例如可以是带有摄像头的仿真射击器械)的游戏玩家可以拍摄光标签的图像,并通过分析该图像来确定该游戏玩家设备相对于光标签的位置信息和姿态信息。该位置信息和姿态信息可以被发送给服务器。服务器可以基于该游戏玩家设备的位置信息和姿态信息确定其摄像头的视野范围。若预定的规则规定:当第二设备位于第一设备的摄像头视野的预定范围(例如,摄像头视野的中心)内时,表明第一设备当前瞄准了第二设备,则当游戏玩家乙(即设备乙)的位置处于游戏玩家甲(即设备甲)的摄像头视野的预定范围内时,服务器根据预定的规则可以判断游戏玩家甲当前瞄准了游戏玩家乙,此时如果游戏玩家甲执行射击操作,服务器可以记录游戏玩家甲命中游戏玩家乙,并可以相应地改变与游戏玩家乙有关的属性信息(例如可以是生命力值等)。
图3示出了根据一个实施例的基于光通信装置的交互方法,该方法包括以下步骤:
步骤310:服务器获得第一设备的位置信息和姿态信息,其中,第一设备上具有摄像头,以及其中,通过分析第一设备的摄像头采集的包括光通信装置的图像来确定第一设备的位置信息和姿态信息。
第一设备可以通过扫描光标签来识别光标签传递的信息,并基于该信息访问服务器,向服务器传送信息。
服务器可以采用各种方式来获得第一设备的位姿信息。在一个实施例中,服务器可以从来自第一设备的信息中提取该设备的位姿信息,此时,来自第一设备的信息中可以包含该第一设备的位姿信息。在一个实施例中,设备可以通过采集包括光标签的图像并分析该图像来确定其相对于光标签的位姿信息。例如,设备可以通过图像中的光标签成像大小以及可选的其他信息(例如,光标签的实际物理尺寸信息、设备的摄像头的焦距)来确定光标签与设备的相对距离(成像越大,距离越近;成像越小,距离越远)。设备可以使用光标签的标识信息从服务器获得光标签的实际物理尺寸信息,或者光标签可以具有统一的物理尺寸并在设备上存储该物理尺寸。设备可以使用光标签的标识信息从服务器获得光标签的物理形状信息,或者光标签可以具有统一的物理形状并在设备上存储该物理形状。在一个实施例中,设备也可以通过其上安装的深度摄像头或双目摄像头等来直接获得光标签与设备的相对距离。设备也可以采用现有的任何其他定位方法来确定其相对于光标签的位置信息。设备也可以确定其姿态信息,该姿态信息可以用于确定设备拍摄的现实场景的范围或边界。通常情况下,设备的姿态信息实际上是设备的图像采集器件(例如摄像头)的姿态信息。在一个实施例中,设备可以扫描光标签,并且可以根据光标签的成像来确定其相对于光标签的姿态信息,当光标签的成像位置或成像区域位于设备成像视野的中心时,可以认为设备当前正对着光标签。在确定设备的姿态时可以进一步考虑光标签的成像的方向。随着设备的姿态发生改变,光标签在设备上的成像位置和/或成像方向会发生相应的改变,因此,可以根据光标签在设备上的成像来获得设备相对于光标签的姿态信息。在一个实施例中,设备也可以根据光标签建立一个坐标系,该坐标系可以被称为光标签坐标系。可以将光标签上的一些点确定为在光标签坐标系中的一些空间点,并且可以根据光标签的物理尺寸信息和/或物理形状信息来确定这些空间点在光标签坐标系中的坐标。光标签上的一些点例如可以是光标签的外壳的角、光标签中的光源的端部、光标签中的一些标识点、等等。根据光标签的物体结构特征或几何结构特征,可以在设备相机拍摄的图像中找到与这些空间点分别对应的像点,并确定各个像点在图像中的位置。根据各个空间点在光标签坐标系中的坐标以及对应的各个像点在图像中的位置,结 合设备相机的内参信息,可以计算得到拍摄该图像时设备相机在光标签坐标系中的位姿信息(R,t),其中R为旋转矩阵,其可以用于表示设备相机在光标签坐标系中的姿态信息,t为位移向量,其可以用于表示设备相机在光标签坐标系中的位置信息。计算R、t的方法在现有技术中是已知的,例如,可以利用3D-2D的PnP(Perspective-n-Point)方法来计算R、t,为了不模糊本发明,在此不再详细介绍。旋转矩阵R和位移向量t实际上可以描述如何将某个点的坐标在光标签坐标系和设备相机坐标系之间转换。例如,通过旋转矩阵R和位移向量t,可以将某个点在光标签坐标系中的坐标转换为在设备相机坐标系中的坐标,并可以进一步转换为图像中的像点的位置。
在一个实施例中,服务器也可以通过分析来自第一设备的信息来获得该设备的位姿信息。来自第一设备的信息中可以包含有光标签的图像信息。服务器通过分析该图像以确定第一设备相对于光标签的位姿信息。具体方法同上述设备通过分析光标签图像获得其相对于光标签的位姿信息类似,在此不再赘述。
步骤320:服务器获得第二设备的位置信息,其中,第二设备上具有摄像头,以及其中,通过分析第二设备的摄像头采集的包括光通信装置的图像来确定第二设备的位置信息。
服务器可以采用各种方式来获得第二设备相对于光标签的位置信息,具体方式与上文在步骤310中描述的各种方式类似,在此不再赘述。
在一个实施例中,服务器还可以获得第二设备的姿态信息,方法同上文类似。
服务器从设备接收的位姿信息或者服务器通过分析获得的设备的位姿信息可以是设备相对于光标签的位姿信息,也可以是设备在场景坐标系下的位姿信息或在世界坐标系下的位姿信息。设备或者服务器可以根据不同坐标系之间的变换矩阵来实现目标位姿在不同坐标系之间的转换。在一个实施例中,设备或者服务器可以根据设备相对于光标签的位姿信息以及光标签本身在场景坐标系中的位姿信息,来确定设备在场景坐标系中的位姿信息,设备或者服务器也可以根据设备相对于光标签的位姿信息以及光标签本身在世界坐标系中的位姿信息,来确定设备在世界坐标系中的位姿信息。在一个实施例中,设备可以向服务器发送其相对于光标签的位姿信 息,之后,服务器可以根据设备相对于光标签的位姿信息以及光标签本身在场景坐标系或世界坐标系中的位姿信息,来确定设备在场景坐标系或世界坐标系中的位姿信息。光标签本身在场景坐标系或世界坐标系中的位姿信息可以存储于服务器,并且可以由设备使用光标签的标识信息从服务器获得。
设备的位姿信息可以是设备在扫描光标签时的位姿信息,也可以是设备在扫描光标签之后使用内置的加速度传感器、陀螺仪、摄像头等通过本领域已知的方法(例如,惯性导航、视觉里程计、SLAM、VSLAM、SFM等)测量或跟踪获得的新的位姿信息。服务器可以持续获得设备的新的位姿信息并更新设备的位姿信息。
在许多场景下,可能存在不止一个光标签,而是存在如图2所示的光标签网络,其中,服务器可以知悉各个光标签的位姿信息或者它们之间的相对位姿关系。在这些场景下,第一设备和第二设备扫描的光标签可能不是同一个光标签,第一设备也可能在不同的时间扫描多个不同的光标签来提供或更新其位置信息(在提供或更新位置信息可以发送相关的光标签的标识信息),第二设备也可能在不同的时间扫描多个不同的光标签来确定其位置信息和姿态信息。
步骤330:服务器根据第一设备的位置信息和姿态信息以及第二设备的位置信息,确定第二设备相对于第一设备的位置关系。
在一个实施例中,服务器可以基于第一设备的位置信息和姿态信息确定一个以第一设备为坐标原点的坐标系,并将第二设备的位置信息转换为该坐标系下的位置信息,如此,可以确定第二设备相对于第一设备(也即坐标原点)的位置关系。在一个实施例中,服务器可以根据第一设备的位姿信息确定第一设备的摄像头的视野范围,并根据第一设备的摄像头的视野范围以及第二设备的位置信息确定第二设备位于第一设备摄像头视野范围内还是视野范围外,并且可以确定第二设备在第一设备摄像头视野范围内的具***置。
Step 340: the server performs an operation based on the positional relationship of the second device relative to the first device and a predetermined rule.
The operations performed by the server may include selecting a device, obtaining information about a device, sending relevant information to a device, adding or modifying information related to a device, deleting certain information of a device, and so on. Information related to a device may be pre-stored on the server or sent to the server by the device in real time.
In one embodiment, the information related to a device may include attribute information of the device. The attribute information of a device may be information about the user using the device, user-defined information, system-set information, or any other information. For example, in a conference scenario, the attribute information of device A may be personal information of participant A (for example, including participant A's name, occupation, employer, etc.), information customized by participant A (for example, contact details voluntarily provided by A, such as a mobile phone number or email address), or attribute information set by the system according to participant A's role (for example, "speaker"). In one embodiment, the attribute information of a device may also include specific data values of device attributes. For example, in a shooting game scenario, the attribute information of device A may include the identity, level, skills, vitality value, etc., of game player A's game character.
The server judges, based on the relative positional relationship between devices and a predetermined rule, whether a corresponding operation should be performed so as to realize interaction between users. The predetermined rule may be any rule; it may be set by the users themselves or preset by the server. For example, the predetermined rule may specify that the corresponding operation is performed whenever the second device is within the field of view of the first device's camera. In that case, for example in a conference scenario, as soon as participant B (i.e., device B) appears within the field of view of participant A's device camera, the server sends B's identity, occupation, employer, and other relevant information to A. The predetermined rule may instead specify that the server performs the corresponding operation only when the second device is located in the central region of the field of view of the first device's camera. In that case, only when participant B (i.e., device B) is located in the central region of the field of view of participant A's device camera does the server send B's identity, occupation, employer, and other relevant information to A.
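Continuing the sketch after step 330 above (the definition of the "central region" and its fraction parameter are assumed for illustration only), such a predetermined rule could be expressed as a simple predicate on the projected pixel position:

    def in_central_region(u, v, width, height, fraction=0.25):
        """True if pixel (u, v) falls inside a centered window covering the
        given fraction of the image in each dimension (an assumed definition
        of the 'central region' of the camera field of view)."""
        half_w = width * fraction / 2.0
        half_h = height * fraction / 2.0
        return abs(u - width / 2.0) <= half_w and abs(v - height / 2.0) <= half_h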
In one embodiment, the server can perform the corresponding operation based on the relative positional relationship between devices, an input from a device, and a predetermined rule. For example, the predetermined rule may specify that, if the second device is located in a non-central region of the field of view of the first device's camera, the server performs the corresponding operation when the first device selects the imaging of the second device on its display medium. Then, in a conference scenario, when participant B (i.e., device B) appears within the field of view of participant A's device camera but in a non-central region, the server sends B's identity, occupation, employer, and other relevant information to A once participant A taps B's imaging on A's device. In one embodiment, a corresponding effect can be presented on the display medium of the first device to indicate to its user that the second device has been selected or hit; for example, a "selected" or "hit" icon can be presented on the imaging of the second device on the display medium. In one embodiment, to help the user select or aim, a corresponding auxiliary icon, such as an indicator frame or a crosshair icon, can also be presented on the display medium of the first device.
In one embodiment, the server can perform the corresponding operation according to the relative positional relationship between devices, attribute information of the devices, and a predetermined rule. For example, the predetermined rule may specify that the server performs the corresponding operation only when the second device is within the field of view of the first device's camera and the first user and/or the second user has a specific identity.
In some scenarios, the server can continuously obtain the constantly updated pose information of a device and determine, according to the updated pose information, whether to perform the corresponding operation. In one embodiment, when the first device scans the optical label, the position of the second device may not lie within the field of view of the first device (e.g., the field of view of the first device's camera); in this case, the first device can be translated and/or rotated, and its pose changes can be tracked through its built-in sensors (e.g., acceleration sensors, gyroscopes, visual odometry, etc.) to determine the new field of view of the first device. When the position of the second device enters the field of view of the first device, the corresponding operation can be performed. Techniques for tracking device pose changes using built-in sensors are well known in the field of positioning and navigation and are not described here.
In one embodiment, the position information and posture information of the second device can be obtained based on an image of the optical label captured by the second device; the server can determine the positional relationship of the first device relative to the second device from the position information and posture information of the second device and the position information of the first device, and perform a corresponding operation based on that positional relationship and a predetermined rule. The specific manners are similar to those described above in steps 310-340 and are not repeated here.
In one embodiment, the server can also set a virtual object associated with a device and use the optical label as an anchor to superimpose the virtual object onto the real scene, for example using the virtual object to accurately mark the position of a user or device in the real scene. The virtual object may be, for example, an icon, a picture, text, a number, an emoticon, a virtual three-dimensional object, a three-dimensional scene model, an animation, a video, and so on. Users or devices can interact and communicate by operating on the virtual objects of other users.
Taking the above multiplayer shooting game as an example again, game player A, carrying a device (which may be, for example, a simulated shooting apparatus with a camera, AR glasses, etc.), can use the device to scan and identify an optical label arranged nearby to determine the pose information of the device. After obtaining the pose information of A's device, the server can, according to the position information of A's device, set for A a virtual object having spatial position information. The virtual object may be, for example, an icon or numerical value representing relevant information of the character A has chosen in the game, such as a police icon with experience data or vitality data, or a bandit icon with experience data or vitality data. According to the spatial position information of the virtual object and the pose information of the other players' devices, the virtual object can be accurately presented on the display media of the other players' devices. After the game starts, the server can judge according to the predetermined rule that game player A is currently aiming at game player B; if game player A then performs a shooting operation, the server can record that game player A has hit game player B, and can correspondingly change the attribute information related to game player B (for example, a vitality value) as well as the attribute information related to game player A (for example, an experience value). Changes in the attribute information of game player A or B can be expressed through the virtual objects, for example by increasing the experience data presented by a virtual object or decreasing the vitality data presented by a virtual object. In this way, each player can be accurately matched with the virtual object representing his or her character, thereby enhancing the players' immersive sensory experience. Preferably, game players can use AR glasses rather than mobile phones during the game to experience a more realistic game environment.
Fig. 4 shows an interaction method based on an optical communication device according to an embodiment, wherein steps 410 and 420 are similar to steps 310 and 320 described above; in addition, the method further comprises the following steps:
Step 430: the server sets, for the second device, a virtual object having spatial position information, the spatial position information of the virtual object being determined based on the position information of the second device.
In some scenarios, for example after receiving information from the second device (e.g., a request to join a game), the server can set a virtual object associated with the second device. The virtual object may be, for example, a virtual icon of the name, gender, or character image corresponding to the character chosen by the second device in the game, and so on. In one embodiment, the server can also set the virtual object according to information related to the second device, for example according to attribute information of the device, information about the user using the device, information related to an operation performed by the user with the device (e.g., the time of joining the game), user-defined information, or system-set information.
The spatial position information of the virtual object can be determined from the position information of the second device; it may be position information relative to the optical label, or position information in a scene coordinate system or in the world coordinate system. The server may simply take the position of the second device as the spatial position of the virtual object, or it may set the spatial position of the virtual object to another position, for example a position near the position of the second device.
In one embodiment, the server can also set posture information of the virtual object according to the posture information of the second device. The posture information of the virtual object may be the posture information of the second device relative to the optical label, or its posture information in a scene coordinate system or the world coordinate system. Taking the above shooting game as an example, the pose information of the virtual object associated with a game player can be set according to that player's pose information in the game scene. For example, when player A has his or her back to player B, A's virtual object (e.g., the character image of the game character) also has its back to player B.
Where the virtual object has posture information, the posture information of the virtual object associated with the second device may be determined according to the posture information of the first device; that is, the posture of the virtual object associated with the second device can be adjusted as the posture of the first device changes. In one embodiment, the posture of the virtual object can be determined according to the posture information of the first device relative to the second device, so that a certain aspect of the virtual object (e.g., the front direction of the virtual object) always faces the first device. Taking the above shooting game as an example again, the front of B's virtual object (e.g., the character name) always faces A's device. In one embodiment, a direction from the virtual object to the device can be determined in space based on the positions of the device and the virtual object, and the posture of the virtual object can be determined based on that direction. Through the above method, the same virtual object can in fact have a separate posture for each device located at a different position.
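A minimal sketch of the "always face the first device" behavior described above, assuming a Y-up coordinate system and a virtual object whose front faces +Z at zero yaw (both assumptions for illustration):

    import numpy as np

    def facing_yaw(object_pos, device_pos):
        """Yaw angle (radians, about the vertical Y axis) that turns the
        virtual object's front (+Z at zero yaw) toward the device."""
        d = device_pos - object_pos
        return np.arctan2(d[0], d[2])

    # Each observing device gets its own yaw, so the same virtual object can
    # present a different posture to devices at different positions.
    yaw_for_device_a = facing_yaw(np.array([1.0, 0.0, 2.0]),
                                  np.array([4.0, 0.0, 6.0]))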
Step 440: the server sends information related to the virtual object to the first device, so that it can be used by the first device to present the virtual object on its display medium based on the position information and posture information the first device has determined through the optical communication device.
The server can send the information related to the virtual object to the first device in various ways. In one embodiment, the server can send the information related to the virtual object directly to the first device, for example over a wireless link. In one embodiment, an optical label identification device can identify the information conveyed by an optical label arranged in the scene (e.g., identification information) by scanning it, and use that information to access the server (e.g., over a wireless signal) to obtain the information related to the virtual object from the server. In one embodiment, the server can use the optical label to send the information related to the virtual object to the optical label identification device by means of optical communication. The information related to the virtual object can be used by the optical label identification device to present the virtual object on its display medium based on its position information and/or posture information.
Step 450: the server determines the positional relationship of the second device relative to the first device according to the pose information of the first device and the position information of the second device.
Step 460: the server performs an operation on the virtual object according to the positional relationship of the second device relative to the first device and a predetermined rule.
As described above, the server can perform an operation based on the positional relationship of the second device relative to the first device and a predetermined rule, and the operation performed by the server can be presented on the display medium through the virtual object. For example, operations performed by the server, such as selecting a device, obtaining information about a device, sending relevant information to a device, adding or modifying information related to a device, or deleting certain information of a device, can be presented on the display medium by selecting the virtual object associated with the device, adding or modifying information related to the virtual object, deleting certain information of the virtual object, and so on.
In some scenarios, another virtual object associated with the first device can also be set, and the spatial position information of this other virtual object can be determined based on the position information of the first device. This other virtual object can be sent to the second device and presented on the second device's display medium based on the position information and posture information of the second device relative to the optical label. In this way, users can interact with each other based on one another's virtual objects.
In some cases, after the virtual object has been superimposed, the information related to the device changes, and the server can continuously update the virtual object associated with the device based on new information from the device. In one embodiment, after the server has set the virtual object associated with a device, the position information and/or posture information of the device changes. To enable the server to know the latest position and posture of the device in time, the new position information of the device can be sent to the server by scanning an optical label again or in other ways. The device can determine its latest pose information relative to the optical label in the various ways mentioned above (e.g., by capturing an image including the optical label and analyzing it), or track its position changes through its built-in sensors (e.g., acceleration sensors, gyroscopes, cameras, etc.). The new pose information of the device can be sent to the server periodically, or the sending of new pose information can be triggered when the difference between the device's new pose and the pose last sent to the server exceeds a preset threshold. In this way, the server can know the new pose information of the device in time and update the spatial pose information of the virtual object accordingly. For example, in the shooting game, when the distance between player A and player B gradually increases, A's virtual object presented on the display medium of B's device and B's virtual object presented on the display medium of A's device become correspondingly smaller; when players A and B turn from facing each other to standing back to back, B's virtual object presented on the display medium of A's device and A's virtual object presented on the display medium of B's device are adjusted accordingly as the posture information of the respective devices changes. In one embodiment, after the server has set a virtual object for a device, attribute information related to the device changes; in this case, the server can continuously update the virtual object associated with the device according to the device's new attribute information. For example, if player A in the shooting game is hit several times, the vitality value of player A's virtual object can be shown gradually decreasing.
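The threshold-triggered pose reporting described above can be sketched as follows; the position and yaw thresholds are assumed example values:

    import numpy as np

    def should_report(new_pos, new_yaw, last_pos, last_yaw,
                      pos_threshold=0.5, yaw_threshold=np.radians(10.0)):
        """Decide whether the pose tracked on the device has drifted far
        enough from the pose last sent to the server to warrant an update."""
        moved = np.linalg.norm(new_pos - last_pos) > pos_threshold
        dyaw = (new_yaw - last_yaw + np.pi) % (2.0 * np.pi) - np.pi  # wrap angle
        turned = abs(dyaw) > yaw_threshold
        return moved or turned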
In one embodiment, a device or its user can change the information related to a virtual object. For example, a device or its user can set a new virtual object, move the position of a virtual object, change the posture of a virtual object, change the size or color of a virtual object, add an annotation to a virtual object, delete its virtual object, and so on. The server can update the virtual object based on the modified content and send it to the relevant devices.
In one embodiment, users can communicate with one another by editing the virtual objects associated with other users. For example, a user can upload information about an edited virtual object to the server, which sends it to the device associated with that virtual object, or the information can be displayed on the virtual object associated with the user, or on other virtual objects, and made visible to other users. In one embodiment, a device or its user can perform a delete operation on a superimposed virtual object and notify the server. In one embodiment, a user can configure privacy settings to limit the visible scope of his or her editing operations.
In one embodiment of the present invention, the present invention can be implemented in the form of a computer program. The computer program can be stored in various storage media (e.g., a hard disk, an optical disc, a flash memory, etc.), and when executed by a processor, can be used to implement the method of the present invention.
In another embodiment of the present invention, the present invention can be implemented in the form of an electronic device. The electronic device includes a processor and a memory in which a computer program is stored; when the computer program is executed by the processor, it can be used to implement the method of the present invention.
References herein to "various embodiments", "some embodiments", "one embodiment", "an embodiment", etc., mean that a particular feature, structure, or property described in connection with the embodiment is included in at least one embodiment. Thus, occurrences of the phrases "in various embodiments", "in some embodiments", "in one embodiment", "in an embodiment", etc., throughout this text do not necessarily refer to the same embodiment. Furthermore, particular features, structures, or properties may be combined in any suitable manner in one or more embodiments. Thus, a particular feature, structure, or property shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or properties of one or more other embodiments without limitation, as long as the combination is not illogical or inoperative. Expressions such as "according to A", "based on A", "through A", or "using A" appearing herein are meant to be non-exclusive; that is, "according to A" may cover "according to A only" as well as "according to A and B", unless it is specifically stated, or clear from the context, that the meaning is "according to A only". In this application, some illustrative operation steps are described in a certain order for clarity of explanation, but those skilled in the art will understand that each of these operation steps is not indispensable, and some of them may be omitted or replaced by other steps. Nor must these operation steps be executed sequentially in the manner shown; rather, some of them may be executed in a different order, or in parallel, according to actual needs, as long as the new manner of execution is not illogical or inoperative.
Having thus described several aspects of at least one embodiment of the present invention, it will be appreciated that various changes, modifications, and improvements will readily occur to those skilled in the art. Such changes, modifications, and improvements are intended to be within the spirit and scope of the present invention. Although the present invention has been described through preferred embodiments, the present invention is not limited to the embodiments described here, and also covers various changes and variations made without departing from the scope of the present invention.

Claims (19)

  1. An interaction method based on an optical communication device, comprising:
    obtaining position information and posture information of a first device, wherein the first device has a camera thereon, and wherein the position information and posture information of the first device are determined by analyzing an image, captured by the camera of the first device, that includes an optical communication device;
    obtaining position information of a second device, wherein the second device has a camera thereon, and wherein the position information of the second device is determined by analyzing an image, captured by the camera of the second device, that includes an optical communication device;
    determining a positional relationship of the second device relative to the first device according to the position information and posture information of the first device and the position information of the second device; and
    performing an operation based on the positional relationship of the second device relative to the first device and a predetermined rule.
  2. The method according to claim 1, wherein the positional relationship of the second device relative to the first device comprises a position of the second device within the field of view of the camera of the first device.
  3. The method according to claim 1, wherein,
    the obtaining position information and posture information of a first device comprises: receiving the position information and posture information from the first device, wherein the first device determines the position information and posture information by capturing and analyzing an image including an optical communication device;
    the obtaining position information of a second device comprises: receiving the position information from the second device, wherein the second device determines the position information by capturing and analyzing an image including an optical communication device.
  4. The method according to claim 1, wherein,
    the obtaining position information and posture information of a first device comprises: a server analyzing an image, captured by the first device, that includes an optical communication device, to determine the position information and posture information of the first device;
    the obtaining position information of a second device comprises: a server analyzing an image, captured by the second device, that includes an optical communication device, to determine the position information of the second device.
  5. The method according to claim 1, wherein the optical communication device associated with the position information and posture information of the first device and the optical communication device associated with the position information of the second device are the same optical communication device, or different optical communication devices, the different optical communication devices having a determined relative positional relationship.
  6. The method according to claim 1, wherein,
    the position information and posture information of the first device are position information and posture information of the first device relative to an optical communication device, in a scene coordinate system, or in the world coordinate system;
    the position information of the second device is position information of the second device relative to an optical communication device, in a scene coordinate system, or in the world coordinate system.
  7. The method according to claim 6, wherein,
    the position information and posture information of the first device in the scene coordinate system are obtained based on the position information and posture information of the first device relative to the optical communication device and the position information and posture information of the optical communication device itself in the scene coordinate system, and the position information and posture information of the first device in the world coordinate system are obtained based on the position information and posture information of the first device relative to the optical communication device and the position information and posture information of the optical communication device itself in the world coordinate system;
    the position information of the second device in the scene coordinate system is obtained based on the position information of the second device relative to the optical communication device and the position information of the optical communication device itself in the scene coordinate system, and the position information of the second device in the world coordinate system is obtained based on the position information of the second device relative to the optical communication device and the position information of the optical communication device itself in the world coordinate system.
  8. The method according to claim 1, wherein the predetermined rule comprises:
    performing an operation when the second device is located in a predetermined region of the field of view of the camera of the first device.
  9. The method according to claim 1, wherein the performing an operation based on the positional relationship of the second device relative to the first device and a predetermined rule comprises:
    performing an operation based on the positional relationship of the second device relative to the first device, an input of the first device or the second device, and a predetermined rule.
  10. The method according to claim 1, wherein the performing an operation based on the positional relationship of the second device relative to the first device and a predetermined rule comprises:
    performing an operation based on the positional relationship of the second device relative to the first device, attribute information of the first device or the second device, and a predetermined rule.
  11. The method according to claim 1, wherein the operation comprises obtaining, sending, displaying, modifying, adding, or deleting attribute information associated with the first device or the second device.
  12. The method according to claim 1, further comprising:
    obtaining updated position information and posture information of the first device; and/or
    obtaining updated position information of the second device.
  13. The method according to claim 1, further comprising:
    obtaining posture information of the second device, wherein the posture information of the second device is determined by analyzing an image including an optical communication device;
    determining a positional relationship of the first device relative to the second device according to the position information of the first device and the position information and posture information of the second device;
    performing an operation based on the positional relationship of the first device relative to the second device and a predetermined rule.
  14. The method according to any one of claims 1-13, further comprising:
    setting a virtual object having spatial position information associated with the second device, the spatial position information of the virtual object being determined based on the position information of the second device;
    sending information related to the virtual object to the first device, so that it can be used by the first device to present the virtual object on its display medium based on the position information and posture information the first device has determined through an optical communication device;
    and wherein the performing an operation comprises performing an operation on the virtual object.
  15. The method according to claim 14, wherein the virtual object further has posture information.
  16. The method according to claim 14, further comprising:
    setting another virtual object having spatial position information associated with the first device, the spatial position information of the other virtual object being determined based on the position information of the first device;
    sending information related to the other virtual object to the second device, so that it can be used by the second device to present the other virtual object on its display medium based on the position information and posture information the second device has determined through an optical communication device.
  17. An interaction system based on device position information and posture information, comprising:
    one or more optical communication devices;
    at least two devices, each having a camera thereon, the camera being capable of capturing an image including the optical communication device; and
    a server capable of communicating with the devices, configured to implement the method according to any one of claims 1-16.
  18. A storage medium having a computer program stored therein, the computer program, when executed by a processor, being usable to implement the method according to any one of claims 1-16.
  19. An electronic device comprising a processor and a memory, the memory having a computer program stored therein, the computer program, when executed by the processor, being usable to implement the method according to any one of claims 1-16.
PCT/CN2020/127476 2019-11-11 2020-11-09 Interaction method and system based on optical communication device WO2021093703A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911094717.6 2019-11-11
CN201911094717.6A CN112788443B (zh) 2019-11-11 2019-11-11 Interaction method and system based on optical communication device

Publications (1)

Publication Number Publication Date
WO2021093703A1 true WO2021093703A1 (zh) 2021-05-20

Family

ID=75749694

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127476 WO2021093703A1 (zh) Interaction method and system based on optical communication device 2019-11-11 2020-11-09

Country Status (3)

Country Link
CN (1) CN112788443B (zh)
TW (1) TWI764366B (zh)
WO (1) WO2021093703A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115704877A (zh) * 2021-08-11 2023-02-17 上海光视融合智能科技有限公司 Method and system for positioning a device using a light beam


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5901891B2 (ja) * 2011-05-23 2016-04-13 任天堂株式会社 Game system, game processing method, game device, and game program
BR112014011803A2 (pt) 2011-11-16 2017-05-16 Qualcomm Inc System and method for wirelessly sharing data between user devices
JP6952713B2 (ja) * 2016-01-19 2021-10-20 Magic Leap, Inc. Augmented reality systems and methods that utilize reflections
CN105718840B (zh) * 2016-01-27 2018-07-24 西安小光子网络科技有限公司 Optical label-based information interaction system and method
JP7214195B2 (ja) * 2017-02-17 2023-01-30 北陽電機株式会社 Object capturing device, capture target, and object capturing system
CN108154533A (zh) * 2017-12-08 2018-06-12 北京奇艺世纪科技有限公司 Position and posture determination method, apparatus, and electronic device
CN108709559B (zh) * 2018-06-11 2020-05-22 浙江国自机器人技术有限公司 Mobile robot positioning system and positioning method
US20180345129A1 (en) * 2018-07-27 2018-12-06 Yogesh Rathod Display virtual objects within predefined geofence or receiving of unique code from closest beacon

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060273984A1 (en) * 2005-04-20 2006-12-07 Canon Kabushiki Kaisha Image processing method and image processing apparatus
CN105844714A * 2016-04-12 2016-08-10 广州凡拓数字创意科技股份有限公司 Augmented reality-based scene display method and system
CN107479699A * 2017-07-28 2017-12-15 深圳市瑞立视多媒体科技有限公司 Virtual reality interaction method, apparatus, and system
CN107734449A * 2017-11-09 2018-02-23 陕西外号信息技术有限公司 Optical label-based outdoor auxiliary positioning method, system, and device
CN109671118A * 2018-11-02 2019-04-23 北京盈迪曼德科技有限公司 Virtual reality multi-user interaction method, apparatus, and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174879A * 2022-07-18 2022-10-11 峰米(重庆)创新科技有限公司 Projection picture correction method and apparatus, computer device, and storage medium
CN115174879B * 2022-07-18 2024-03-15 峰米(重庆)创新科技有限公司 Projection picture correction method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN112788443B (zh) 2023-05-05
TWI764366B (zh) 2022-05-11
CN112788443A (zh) 2021-05-11
TW202119228A (zh) 2021-05-16

Similar Documents

Publication Publication Date Title
JP7013420B2 (ja) Locating a mobile device
US9536350B2 (en) Touch and social cues as inputs into a computer
US9699375B2 (en) Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
TWI615776B (zh) Method for creating and searching virtual messages of moving objects, and application system therefor
US8275834B2 (en) Multi-modal, geo-tempo communications systems
WO2013028813A1 (en) Implicit sharing and privacy control through physical behaviors using sensor-rich devices
JP2015176578A (ja) Information processing system, information processing apparatus, information processing program, and information processing method
WO2021093703A1 (zh) Interaction method and system based on optical communication device
KR20220140391A (ko) Content providing system and method using a virtual camera machine
WO2023205032A1 (en) Location-based shared augmented reality experience system
JP2024515995A (ja) Repeatability prediction of points of interest
TWI750822B (zh) Method and system for setting a presentable virtual object for a target
TW201823929A (zh) Method and application system for remote management of virtual messages of moving objects
US11748962B2 (en) Resilient interdependent spatial alignment to improve and maintain spatial alignment between two coordinate systems for augmented reality and other applications
CN111242107B (zh) Method and electronic device for setting a virtual object in space
CN112581630B (zh) User interaction method and system
CN112051919B (zh) Location-based interaction method and interaction system
WO2020244578A1 (zh) Interaction method and electronic device based on optical communication device
WO2020244576A1 (zh) Method for superimposing virtual objects based on optical communication device, and corresponding electronic device
TWI747333B (zh) Interaction method based on optical communication device, electronic device, and computer-readable recording medium
TWI759764B (zh) Method for superimposing virtual objects based on optical communication device, electronic device, and computer-readable recording medium
TWI788217B (zh) Positioning method and positioning system in three-dimensional space
US20230236874A1 (en) Information processing system and information processing terminal
TW202223749A (zh) Method and system for obtaining identification information of a device or its user in a scene
CN113885983A (zh) Business card display method, smart terminal, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20887848

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20887848

Country of ref document: EP

Kind code of ref document: A1