CN113660347B - Data processing method, device, electronic equipment and readable storage medium - Google Patents

Data processing method, device, electronic equipment and readable storage medium

Info

Publication number
CN113660347B
CN113660347B (application CN202111015711.2A)
Authority
CN
China
Prior art keywords
data
augmented reality
audio
video data
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111015711.2A
Other languages
Chinese (zh)
Other versions
CN113660347A
Inventor
秦禄东 (Qin Ludong)
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202111015711.2A
Publication of CN113660347A
Application granted
Publication of CN113660347B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/06: Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H04L12/00: Data switching networks
    • H04L12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803: Home automation networks
    • H04L12/2805: Home Audio Video Interoperability [HAVI] networks
    • H04L2012/2847: Home automation networks characterised by the type of home appliance used
    • H04L2012/2849: Audio/video appliances

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a data processing method, a data processing apparatus, an electronic device and a readable storage medium, belonging to the field of communications technology. The method comprises the following steps: obtaining augmented reality data, wherein the augmented reality data comprises video data and audio data that correspond to the same video file; and sending the video data to an augmented reality device, which plays the video data, and the audio data to a smart home device, which plays the audio data. By sending the video data and the audio data of the same video file to the augmented reality device and the smart home device respectively, the user experience can be improved to a certain extent.

Description

Data processing method, device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a data processing method, apparatus, electronic device, and readable storage medium.
Background
Augmented reality (AR), simply put, applies virtual information to the real world through computer technology: the real environment and virtual objects are superimposed in the same picture or space in real time, so that virtual and real objects exist simultaneously. Augmented reality presents information that generally differs from what a human can directly perceive. It displays not only real-world information but also virtual information at the same time, with the two kinds of information complementing and overlaying each other. At present, when a user actually uses an augmented reality device, weak immersion and poor user experience are common problems.
Disclosure of Invention
The application provides a data processing method, a data processing apparatus, an electronic device and a readable storage medium, so as to remedy the above drawbacks.
In a first aspect, an embodiment of the present application provides a data processing method, where the method includes: obtaining augmented reality data, wherein the augmented reality data comprises video data and audio data, and the video data and the audio data correspond to the same video file; and sending the video data to an augmented reality device, and sending the audio data to an intelligent home device, wherein the augmented reality device is used for playing the video data, and the intelligent home device is used for playing the audio data.
In a second aspect, an embodiment of the present application further provides a data processing apparatus, including: the device comprises an acquisition module and a sending module. The device comprises an acquisition module, a storage module and a storage module, wherein the acquisition module is used for acquiring augmented reality data, the augmented reality data comprises video data and audio data, and the video data and the audio data correspond to the same video file; the sending module is used for sending the video data to the augmented reality equipment and sending the audio data to the intelligent home equipment, the augmented reality equipment is used for playing the video data, and the intelligent home equipment is used for playing the audio data.
In a third aspect, embodiments of the present application further provide an electronic device, including one or more processors; a memory; one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the above-described method.
In a fourth aspect, embodiments of the present application also provide a computer readable storage medium having stored therein program code that is callable by a processor to perform the above method.
The data processing method, apparatus, electronic device and readable storage medium provided by the embodiments of the application can, when augmented reality data is acquired, send the audio data and the video data in the augmented reality data to different devices respectively, where the video data and the audio data correspond to the same video file. By sending the video data and the audio data of the same video file to the augmented reality device and the smart home device respectively, the application can reduce the data processing load of the augmented reality device and improve the user's sense of immersion when using it.
Additional features and advantages of embodiments of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of embodiments of the application. The objectives and other advantages of embodiments of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application environment of a data processing method according to an embodiment of the present application;
Fig. 2 is a method flowchart of a data processing method provided by one embodiment of the present application;
Fig. 3 is an example diagram of a person wearing an augmented reality device in a data processing method according to an embodiment of the present application;
Fig. 4 is an example diagram of the distribution of smart home devices in a data processing method according to an embodiment of the present application;
Fig. 5 is a method flowchart of a data processing method according to another embodiment of the present application;
Fig. 6 is a flowchart of step S230 in a data processing method according to another embodiment of the present application;
Fig. 7 is an example diagram in which a plurality of audio playing devices exist within a preset range of an augmented reality device in a data processing method according to another embodiment of the present application;
Fig. 8 is an example diagram in which no sound box device exists within a preset range of an augmented reality device in a data processing method according to another embodiment of the present application;
Fig. 9 is a method flowchart of a data processing method according to still another embodiment of the present application;
Fig. 10 is an example diagram of a user wearing a wearable device in a data processing method according to still another embodiment of the present application;
Fig. 11 is a block diagram of a data processing apparatus according to an embodiment of the present application;
Fig. 12 is a block diagram of the structure of a transmitting module 420 in a data processing apparatus according to an embodiment of the present application;
Fig. 13 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 14 shows a storage unit for storing or carrying program code for implementing a data processing method according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
When an existing augmented reality device performs a video playing operation, it cannot realize the effect of multi-channel playing well, so the multi-channel sound effect of augmented reality video cannot be fully realized. In particular, when receiving Dolby multi-channel audio signals, an existing augmented reality device can only transmit the audio signals for playing to the several speakers configured on the augmented reality device itself. Therefore, in the prior art, when the augmented reality device plays video, both the audio and the video playing operations are executed on the augmented reality device; because the augmented reality device is not combined with other intelligent devices, its data processing load is large and the user's sense of immersion when using it is weak.
In view of the above problems, the inventor proposes the data processing method, electronic device and storage medium of the embodiments of the present application. When augmented reality data is acquired, the audio data and the video data in the augmented reality data may be sent to different devices, where the video data and the audio data correspond to the same video file. Specifically, the present application may send the video data to the augmented reality device, which plays the video data of the video file, and send the audio data to the smart home device, which plays the audio data of the video file. By sending the video data and the audio data of the same video file to the augmented reality device and the smart home device respectively, the data processing load of the augmented reality device can be reduced and the user's sense of immersion when using it can be improved. The specific data processing method is described in detail in the following embodiments.
Referring to fig. 1, fig. 1 shows a schematic diagram of an application environment of a data processing method according to an embodiment of the present application, where the application scenario includes: the electronic device 101, the first controlled device 102 and the second controlled device 103, wherein the electronic device 101 can be respectively connected with the first controlled device 102 and the second controlled device 103 to realize data interaction between the electronic device 101 and the first controlled device 102 and the second controlled device 103. In addition, the first controlled device 102 may be connected to the second controlled device 103, so as to implement data interaction between the first controlled device 102 and the second controlled device 103.
In the embodiment of the present application, the electronic device 101 may be, but is not limited to, a cellular phone, a smart speaker, a smart watch, a portable computer, a handheld communication device, a handheld computing device, a satellite radio, a global positioning system device, a personal computer, a personal digital assistant (PDA), etc. The electronic device 101, the first controlled device 102 and the second controlled device 103 may be directly or indirectly connected through a wired or wireless network, which is not limited herein.
Alternatively, the wireless or wired network described above uses standard communication techniques and/or protocols. The network is typically the Internet, but may be any network, including but not limited to a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), a mobile, wired or wireless network, a private network, or any combination of virtual private networks. In some embodiments, data exchanged over the network is represented using techniques and/or formats such as HyperText Markup Language (HTML) and Extensible Markup Language (XML). All or some of the links may also be encrypted using conventional encryption techniques such as Secure Sockets Layer (SSL), Transport Layer Security (TLS), Virtual Private Network (VPN) and Internet Protocol Security (IPsec). In other embodiments, custom and/or dedicated data communication techniques may also be used in place of, or in addition to, the data communication techniques described above.
In the embodiment of the present application, the first controlled device 102 may be AR glasses, a head-mounted display, or a mobile device such as a mobile phone or a tablet; the second controlled device 103 may be a sound box, or a smart home device such as a television, an air conditioner or a lamp. In addition, the second controlled device 103 may also be a mobile device such as a mobile phone or a tablet, or a wearable device.
Referring to fig. 2, fig. 2 is a flowchart illustrating a data processing method according to an embodiment of the application. In a specific embodiment, the data processing method is applied to the data processing apparatus 400 shown in fig. 11 and to the electronic device 510 shown in fig. 13. The data processing method may specifically include steps S110 to S120, which are described in detail below with reference to the flowchart shown in fig. 2.
Step S110: and obtaining augmented reality data, wherein the augmented reality data comprises video data and audio data, and the video data and the audio data correspond to the same video file.
The data processing method in the embodiment of the application can be applied to electronic equipment, and the electronic equipment can be a smart phone, a smart sound box, a smart watch, a portable computer and the like. As one approach, the augmented reality data may include video data and audio data, where the video data and the audio data belong to the same video file.
In some embodiments, the augmented reality data may be data acquired in advance and stored in the electronic device; it may also be data downloaded by the electronic device from a server through a wired or wireless network; or it may be data collected in real time according to the environment where the electronic device is currently located, where the data collected in real time may include not only information such as the gesture and position of the user corresponding to the electronic device, but also information about other people or objects in the same environment. The augmented reality data may be AR-related data or VR (Virtual Reality)-related data, selected according to the actual situation; this is not explicitly limited here.
In an embodiment of the present application, the augmented reality data may include video data and audio data, where the video data may be picture data related to augmented reality/virtual reality. The picture data may be related to a game scene, in which case it may be data related to game objects such as game characters, game scenery and game weapons. The picture data may also be related to a video-watching scene, in which case it may be data related to video objects such as people, animals, scenery, buildings and traffic. The picture data may also be related to a remote conference/teaching scene, in which case it may be related to conference/teaching tasks such as data display, dynamic data analysis, display of analysis conclusions and display of interactive characters, and the data may be displayed in a three-dimensional rendering manner. The picture data may also be related to a medical scene, in which case it may be data related to a pathology, such as machine analysis data, doctor analysis data and three-dimensional model data corresponding to a patient; a final pathology analysis result may be obtained by comprehensively analyzing these data and may be displayed in a three-dimensional rendering manner. The augmented reality data may also include data of other scenes, which are not enumerated here and may be selected according to actual use.
In some embodiments, the augmented reality data may further include audio data in addition to the video data, where the audio data corresponds to the same video file as the video data. The audio data may be sound data related to augmented reality/virtual reality. The sound data may be related to a game scene, in which case it may be data related to game scenery, game objects and the like, and different game scenes or game objects correspond to different sound data: for example, the sound data corresponding to game character A is a, the sound data corresponding to game character B is b, and sound data a differs from sound data b. The sound data may also be related to a movie scene, in which case it may be related to movie objects or movie scenery, and different objects or scenery correspond to different sound data: if the movie scene contains sea waves, the corresponding sound data may include the sound of waves; if the movie scene is a city, the corresponding sound data may include the sound of vehicles. The sound data may also be related to a remote conference/teaching scene, in which case it may be data related to conference/teaching tasks, and different tasks correspond to different sound data; for example, the sound data of teacher A's lecture differs from that of teacher B's lecture. The sound data may also be related to a medical scene, in which case it may be data related to a pathology, and different pathologies or disease states correspond to different sound data.
In the embodiment of the application, the video data and the audio data may correspond to each other: a picture in the video data may correspond to a sound in the audio data. In addition, a single picture in the video data may correspond to the sounds of a plurality of different targets. For example, if only one target person is speaking in picture 1, the target sound corresponding to picture 1 may be sound 1; picture 2 may contain several target persons standing beside a noisy road, so the target sounds corresponding to picture 2 may include not only the voices of those persons but also the sounds of vehicles or other objects on the road. Conversely, a sound in the audio data may have no corresponding picture in the video data. For example, the audio data may include wind noise, but since wind is invisible, no corresponding picture exists in the video data. Likewise, some video files contain audio data whose corresponding object never appears in the video picture, in which case the sound has no corresponding picture either.
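The picture-to-sound correspondence described above could be modeled, purely as an illustrative assumption (the field names and values are hypothetical, not from the application), like this:

```python
# One video file's augmented reality data: each picture lists the targets
# whose sounds it corresponds to; some sounds (e.g. wind) have no picture.
ar_data = {
    "file_id": "video_001",
    "video": {
        "picture_1": ["person_a"],                        # one speaker
        "picture_2": ["person_a", "person_b", "traffic"],  # several targets
    },
    "audio": {
        "person_a": b"...",
        "person_b": b"...",
        "traffic": b"...",
        "wind": b"...",  # audible but invisible: no corresponding picture
    },
}
```

Every target referenced by a picture has an audio entry, while the reverse need not hold, matching the wind-noise example above.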
Step S120: and sending the video data to an augmented reality device and sending the audio data to an intelligent home device.
In the embodiment of the application, after the electronic device acquires the augmented reality data, the electronic device can send the video data in the augmented reality data to the augmented reality device and send the audio data in the augmented reality data to the intelligent home device, wherein the augmented reality device can be used for playing the video data of the video file, and the intelligent home device can be used for playing the audio data of the video file.
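Step S120 can be sketched as follows. This is a minimal illustration of the routing idea only; the `Device` class and its `send()` method are hypothetical stand-ins for the real transport, not part of the application:

```python
class Device:
    """Hypothetical stand-in for an AR headset or a smart home speaker."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def send(self, payload):
        # A real implementation would transmit over a wired/wireless link.
        self.received.append(payload)


def dispatch_augmented_reality_data(ar_data, ar_device, smart_home_device):
    """Route the two streams of one video file to different devices."""
    ar_device.send(ar_data["video"])          # AR device plays the picture
    smart_home_device.send(ar_data["audio"])  # smart home device plays the sound


glasses = Device("AR glasses")
speaker = Device("smart speaker")
dispatch_augmented_reality_data(
    {"video": "frames", "audio": "track"}, glasses, speaker
)
```

The point of the split is that the AR device only ever receives picture data, so its decoding and rendering workload shrinks while the audio plays on a device built for sound.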
In some embodiments, the augmented reality device may be a head-mounted device, AR glasses or the like, that is, any device capable of augmented reality display, and it may be worn on a human body. As shown in fig. 3, a person 121 may wear an augmented reality device 122, through which the person 121 may play games, watch videos, or take part in remote teaching, teleconferencing, telemedicine and the like. The smart home device may be at least one of a smart speaker, a television, an air conditioner, a refrigerator and the like; the speaker 123, the television 124 and the air conditioner 125 shown in fig. 4 may all serve as the smart home device of the embodiments of the present application.
In addition, the smart home devices in the embodiment of the application may be configured with loudspeakers; when a smart home device receives the audio data transmitted by the electronic device, it can output the audio data through its loudspeaker. The electronic device and the augmented reality device can exchange data through a wired or wireless network, the electronic device and the smart home device can likewise exchange data through a wired or wireless network, and the augmented reality device and the smart home device can also exchange data in a wired or wireless manner.
In one way, when the video data and the audio data are respectively sent to the augmented reality device and the intelligent home device, the electronic device can instruct the augmented reality device and the intelligent home device to output the video data and the audio data at the same time. In other words, the video data may include a first output instruction, and the audio data may include a second output instruction, and the electronic device may instruct the augmented reality device and the smart home device to synchronously play the video data and the audio data using the first output instruction and the second output instruction.
In another way, when video data and audio data are respectively sent to the augmented reality device and the smart home device, the electronic device can send a third output instruction to the augmented reality device and send a fourth output instruction to the smart home device at the same time, so that the augmented reality device and the smart home device can be instructed to play the video data and the audio data at the same time through the third output instruction and the fourth output instruction.
In other embodiments, after the electronic device acquires the augmented reality data, in order to ensure data synchronism, the electronic device may determine a first duration corresponding to the video data and a second duration corresponding to the audio data. The first duration may be the total time consumed, after the electronic device acquires the video data, to process the video data and send the processed video data to the augmented reality device for display; similarly, the second duration may be the total time consumed, after the electronic device acquires the audio data, to process the audio data and send the processed audio data to the smart home device for playing.
As a way, after the first duration and the second duration are acquired, the embodiment of the present application may synchronize the video data and the audio data based on the first duration and the second duration. Specifically, the electronic device may take the shorter of the two durations as the target duration and delay the output of the data corresponding to the target duration. As an example, processing and transmitting the video data may consume more time than the audio data, so the transmission of the processed audio data to the smart home device may be delayed until the video data has been processed, after which the video data and the audio data are simultaneously transmitted to the augmented reality device and the smart home device, respectively.
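The delay computation described in this paragraph can be sketched as follows (durations in milliseconds; the function name and example values are illustrative assumptions, and in practice the durations would be measured):

```python
def compute_send_delays(video_ms, audio_ms):
    """Hold back the faster stream so both streams leave at the same time.

    video_ms: first duration (process + send video), audio_ms: second duration.
    Returns (video_delay_ms, audio_delay_ms).
    """
    if video_ms >= audio_ms:
        # Audio finishes first: delay the audio until the video is ready.
        return 0, video_ms - audio_ms
    # Video finishes first: delay the video until the audio is ready.
    return audio_ms - video_ms, 0


# Video takes 300 ms, audio takes 120 ms: hold the audio for 180 ms.
video_delay, audio_delay = compute_send_delays(300, 120)
```

The stream corresponding to the shorter (target) duration is the one that gets delayed, exactly as described above.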
As another way, synchronizing the video data and the audio data based on the first duration and the second duration may further include determining whether the first duration is greater than the second duration, and if so, increasing the processing priority of the video data. Conversely, if the first duration is smaller than the second duration, the embodiment of the application may increase the processing priority of the audio data. For example, if the audio processing priority is currently higher than the video processing priority, and the first duration corresponding to the video data is determined to be longer than the second duration, the video processing priority may be adjusted to be higher than the audio processing priority.
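A sketch of this priority adjustment follows. The numeric priority scale (higher value means processed first) and the function are assumptions for illustration only:

```python
def adjust_priorities(video_ms, audio_ms, priorities):
    """Raise the priority of the slower stream above the faster one.

    priorities: mutable mapping {"video": int, "audio": int}, higher = first.
    """
    if video_ms > audio_ms and priorities["video"] <= priorities["audio"]:
        # Video is the bottleneck but is not prioritized: promote it.
        priorities["video"] = priorities["audio"] + 1
    elif audio_ms > video_ms and priorities["audio"] <= priorities["video"]:
        # Audio is the bottleneck but is not prioritized: promote it.
        priorities["audio"] = priorities["video"] + 1
    return priorities
```

This mirrors the example in the paragraph above: audio starts with higher priority, video turns out slower, so video ends up with the higher priority.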
In other embodiments, before synchronizing the video data and the audio data, the embodiment of the present application may further determine whether a target time required for processing the augmented reality data is less than a preset time, and if the target time is less than the preset time, synchronize the video data and the audio data based on the first duration and the second duration. In addition, if the time required to process the augmented reality data is greater than or equal to a preset time, the synchronization operation may not be performed, but the video data and the audio data may be directly transmitted to the augmented reality device.
It should be noted that the augmented reality data in the embodiment of the present application may be data corresponding to one video frame. After the video data and the audio data of that frame are sent out, the embodiment of the present application may acquire the augmented reality data corresponding to the next video frame and again determine whether the time required for processing it exceeds the preset time. If the required time is less than the preset time, the video data in the augmented reality data corresponding to that video frame is sent to the augmented reality device, and the audio data in the augmented reality data is sent to the smart home device. If the required time is still greater than or equal to the preset time, the video data and the audio data in the augmented reality data corresponding to that video frame are both sent to the augmented reality device, as before.
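The per-frame check in the two paragraphs above can be sketched as a single routing decision (the 50 ms budget and the function are illustrative assumptions; the application does not fix a concrete preset time):

```python
PRESET_TIME_MS = 50  # hypothetical per-frame processing budget


def route_frame(processing_time_ms):
    """Decide where one video frame's streams go, per the preset-time check."""
    if processing_time_ms < PRESET_TIME_MS:
        # Fast enough to split and synchronize across two devices.
        return {"video": "ar_device", "audio": "smart_home_device"}
    # Too slow to synchronize reliably: play everything on the AR device.
    return {"video": "ar_device", "audio": "ar_device"}
```

The decision is re-evaluated for every frame, so a temporarily overloaded electronic device falls back to single-device playback and later resumes the split automatically.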
According to the data processing method provided by the embodiment of the application, when augmented reality data is acquired, the audio data and the video data in the augmented reality data can be sent to different devices respectively, where the video data and the audio data correspond to the same video file. By sending the video data and the audio data of the same video file to the augmented reality device and the smart home device respectively, the application can reduce the data processing load of the augmented reality device and improve the user's sense of immersion when using it.
In another embodiment of the present application, referring to fig. 5, a data processing method is provided, and the data processing method may include steps S210 to S240.
Step S210: and obtaining augmented reality data, wherein the augmented reality data comprises video data and audio data, and the video data and the audio data correspond to the same video file.
Step S220: and determining whether an audio playing device exists in the preset range of the augmented reality device.
In the embodiment of the present application, the electronic device that obtains the augmented reality data may be a first electronic device. After obtaining the augmented reality data, the first electronic device may determine whether an audio playing device exists within a preset range of the augmented reality device, that is, whether an audio playing device exists in the environment where the augmented reality device is located. If an audio playing device exists within the preset range of the augmented reality device, the audio playing device may be used as the smart home device, that is, step S230 is entered. The audio playing device does not include the first electronic device. If no audio playing device exists within the preset range of the augmented reality device, the video data and the audio data may be sent to the augmented reality device simultaneously, that is, step S240 is entered.
In other embodiments, if the audio playing device does not exist in the preset range of the augmented reality device, it may be determined whether a second electronic device exists in the preset range of the augmented reality device, where the second electronic device may be a mobile phone, a computer, or other portable electronic devices, that is, the second electronic device may be other electronic devices except for the smart home device. If it is determined that the second electronic device exists within the preset range of the augmented reality device, the second electronic device can be used as the intelligent home device, video data is sent to the augmented reality device, and audio data is sent to the second electronic device.
In other embodiments, if no second electronic device exists within the preset range of the augmented reality device, the first electronic device may be used as the smart home device: the video data is sent to the augmented reality device, the augmented reality device is instructed to play the video data, and the audio data is played by the first electronic device itself.
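The fallback chain described above (prefer an audio playing device near the AR device, then another portable electronic device such as a phone or computer, and finally the first electronic device itself) might be sketched as follows. The device dictionaries and the `kind` labels are illustrative assumptions, not part of the embodiment.

```python
def choose_audio_sink(devices_in_range, first_device):
    """Return the device that should play the audio data."""
    # 1) prefer an audio playing device within the preset range
    for device in devices_in_range:
        if device["kind"] == "audio_player" and device is not first_device:
            return device
    # 2) otherwise, any other portable electronic device (phone, computer)
    for device in devices_in_range:
        if device["kind"] == "portable" and device is not first_device:
            return device
    # 3) finally, play the audio on the first electronic device itself
    return first_device
```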
In one mode, when determining whether an audio playing device exists within the preset range of the augmented reality device, the embodiment of the application may directly detect whether a smart home device exists within the preset range of the first electronic device; if so, it may be determined that an audio playing device exists within the preset range of the augmented reality device. It should be noted that substituting the check around the first electronic device for the check around the augmented reality device presumes that the first electronic device and the augmented reality device are in the same environment; under that premise, the efficiency of data processing can be improved.
In another way, when determining whether the audio playing device exists in the preset range of the augmented reality device, the first electronic device may also send a position determining instruction to the augmented reality device, so as to instruct the augmented reality device to determine whether the audio playing device exists in the preset range of the augmented reality device through the position determining instruction, if the augmented reality device determines that the audio playing device exists in the preset range, the detection result may be transmitted back to the first electronic device, and at this time, the first electronic device may determine whether the audio playing device exists in the preset range of the augmented reality device.
In another way, when determining whether an audio playing device exists within the preset range of the augmented reality device, the first electronic device may also send a device determining instruction to each audio playing device, instructing the audio playing device to determine whether the augmented reality device exists within its preset range. If an audio playing device determines that the augmented reality device exists within its preset range, it may transmit the detection result back to the first electronic device, and the first electronic device can then conclude that an audio playing device exists within the preset range of the augmented reality device. In addition, the embodiment of the application may include multiple audio playing devices, so it can be concluded that an audio playing device exists within the preset range of the augmented reality device as long as any one audio playing device determines that the augmented reality device exists within its preset range.
As a way, the embodiment of the application can determine whether an audio playing device exists within the preset range of the augmented reality device through technologies such as WiFi, ZigBee, and UWB (Ultra Wide Band). In addition, the first electronic device can also comprehensively determine whether an audio playing device exists within the preset range of the augmented reality device through a position sensor, a distance sensor, an inertial measurement unit (Inertial Measurement Unit, IMU), visual-inertial odometry (Visual Inertial Odometry, VIO), and the like.
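Assuming each candidate device can report an estimated distance to the augmented reality device (for example via UWB ranging, as mentioned above), the range check itself could look like this minimal sketch; the distance values and the `PRESET_RANGE_M` threshold are made-up parameters, not taken from the embodiment.

```python
PRESET_RANGE_M = 5.0  # assumed preset range around the AR device

def audio_device_in_range(distances_m, preset_range=PRESET_RANGE_M):
    """True if at least one audio playing device reports a distance to
    the AR device within the preset range (mirroring the rule above that
    a single positive detection by any audio playing device suffices)."""
    return any(d <= preset_range for d in distances_m)
```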
Step S230: and taking the audio playing device as the intelligent home device, sending the audio data to the intelligent home device, and sending the video data to the augmented reality device.
It can be known from the above description that when it is determined that an audio playing device exists in a preset range of the augmented reality device, the audio playing device can be used as an intelligent home device, and then audio data is sent to the audio playing device and video data is sent to the augmented reality device. Referring to fig. 6, the step of using the audio playing device as the smart home device may include steps S231 to S232.
Step S231: if so, determining whether the audio playing device is a plurality of audio playing devices.
In one manner, when it is determined that an audio playing device exists within the preset range of the augmented reality device, the embodiment of the present application may further determine whether there are multiple audio playing devices within that range; if so, at least one of them may be selected as the smart home device, that is, step S232 is performed.
Step S232: and if the number of the audio playing devices is multiple, selecting at least one from the multiple audio playing devices as the intelligent home equipment.
In some embodiments, if there are multiple audio playing devices, selecting at least one of them as the smart home device may include: determining whether a sound box device exists among the multiple audio playing devices; and if a sound box device exists, using the sound box device as the smart home device. Because a sound box device is dedicated to playing audio data, when multiple audio playing devices are determined to exist, the embodiment of the application can determine whether a sound box device exists among them, and if so, the sound box device can be used as the smart home device.
In order to understand the acquisition process of the smart home device more clearly, an example diagram is shown in fig. 7. As can be seen from fig. 7, within the preset range of the augmented reality device 201 there exist not only the sound box device 202 but also the television 203, the air conditioner 204, and the like; when the electronic device detects these devices within the preset range of the augmented reality device 201, it can use the sound box device 202 as the smart home device.
In addition, when detecting that multiple sound box devices 202 exist within the preset range of the augmented reality device 201, the embodiment of the application can use all of them as smart home devices. As shown in fig. 7, the sound box device 202 below the television 203 and the sound box device 202 beside the sofa can both be used as smart home devices. Audio data output by multiple smart home devices can better create immersion, and at the same time the user can better experience stereo surround sound.
In other embodiments, if no sound box device exists among the multiple audio playing devices, an audio playing device whose speaker is not in a working state may be used as the smart home device. In that case, the embodiment of the application can also determine the priority of the audio playing devices and use the device with the highest priority as the smart home device. In the example shown in fig. 8, since no sound box device is present, one of the television 203 and the air conditioner 204 may be selected as the smart home device. Because audio data is played relatively frequently through the television 203, the television 203 can be used as the smart home device; if the speaker of the television 203 is determined to be in a working state, the air conditioner 204 may be used instead. It should be noted that the air conditioner 204 in the embodiment of the present application is equipped with a speaker, that is, it may be a smart air conditioner.
In other embodiments, when determining that the speaker of an audio playing device is in a working state, the electronic device may also detect whether a user is using that audio playing device. If a user is using it, an audio playing device whose speaker is not in a working state may be used as the smart home device instead. If no user is using the audio playing device and a user is detected to be using the augmented reality device, the audio playing device whose speaker is in a working state may be used as the smart home device; that is, the audio originally played by the audio playing device is stopped, and the audio playing device plays the audio data in the augmented reality data instead. As an example, when the electronic device detects that the speaker of the television 203 is in a working state, that no user is watching the television 203, and that a user is using the augmented reality device 201, the television 203 may be used as the smart home device.
In other embodiments, if there are multiple audio playing devices, the audio playing device closest to the electronic device may be used as the smart home device. Before doing so, the embodiment of the application may also determine whether an obstruction exists between the electronic device and that audio playing device; if an obstruction exists, that audio playing device is not used as the smart home device. In other embodiments, if there are multiple audio playing devices, the communication quality between each audio playing device and the electronic device may be obtained, and the audio playing device with the best communication quality may be used as the smart home device.
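The selection rules of steps S231 to S232 (prefer a sound box device; otherwise prefer a device whose speaker is idle; break remaining ties by distance) could be combined into one function as in this hedged sketch. The field names are illustrative assumptions, and only a single device is picked here even though the embodiment also allows using several sound box devices at once.

```python
def pick_smart_home_device(audio_devices):
    """Pick one audio playing device to act as the smart home device."""
    # 1) sound box devices are dedicated audio players, so prefer them
    sound_boxes = [d for d in audio_devices if d["is_sound_box"]]
    if sound_boxes:
        return min(sound_boxes, key=lambda d: d["distance_m"])
    # 2) otherwise prefer devices whose speaker is not currently working
    idle = [d for d in audio_devices if not d["speaker_busy"]]
    candidates = idle or audio_devices
    # 3) break remaining ties by proximity to the electronic device
    return min(candidates, key=lambda d: d["distance_m"])
```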
Step S240: and transmitting the video data and the audio data to the augmented reality device at the same time.
In the embodiment of the application, when the electronic device determines that no audio playing device exists within the preset range of the augmented reality device, it may send the video data and the audio data to the augmented reality device simultaneously. In this process, the electronic device may determine whether the signal strength between the electronic device and the augmented reality device is greater than a preset strength. If the signal strength is greater than the preset strength, the video data and the audio data may be sent to the augmented reality device simultaneously; if the signal strength is less than or equal to the preset strength, only the video data is sent to the augmented reality device, the audio data is played by the electronic device, and the video data is played by the augmented reality device.
In other embodiments, when the electronic device determines that no audio playing device exists within the preset range of the augmented reality device, the embodiment of the application may acquire the size of the video file and determine whether it is smaller than a specified file size. If the video file is smaller than the specified file size, the video data and the audio data are sent to the augmented reality device simultaneously; if it is greater than or equal to the specified file size, only the video data is sent to the augmented reality device, the audio data is played by the electronic device, and the video data is played by the augmented reality device.
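The two fallback checks just described, for the case where no audio playing device is in range, can be sketched as a single routing decision. The dBm threshold, the file-size threshold, and the return labels are all assumptions for illustration, not values from the embodiment.

```python
def route_when_no_audio_device(signal_strength_dbm, file_size_mb,
                               min_strength_dbm=-70, max_file_mb=500):
    """Decide where the audio plays when no audio playing device is in
    range. Returns 'ar_both' to send video and audio to the AR device,
    or 'ar_video_local_audio' to keep the audio on the electronic device
    (weak signal or large file)."""
    if signal_strength_dbm > min_strength_dbm and file_size_mb < max_file_mb:
        return "ar_both"
    return "ar_video_local_audio"
```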
According to the data processing method provided by the embodiment of the application, when the augmented reality data is acquired, the audio data and the video data in the augmented reality data can be sent to different devices, where the video data and the audio data correspond to the same video file. By sending the video data and the audio data of the same video file to the augmented reality device and the smart home device respectively, the application can reduce the data processing load of the augmented reality device and improve the user's sense of immersion when using the augmented reality device. In addition, by selecting at least one of the multiple audio playing devices as the smart home device, the embodiment of the application can, to a certain extent, play the audio and video data more reasonably and effectively, further improving the user experience. Moreover, when the signal quality between the electronic device and the augmented reality device is poor or the video file is large, the application can avoid sending the audio data, thereby reducing the delay caused by data transmission.
Still another embodiment of the present application provides a data processing method. Referring to fig. 9, the data processing method may include steps S310 to S340.
Step S310: and obtaining augmented reality data, wherein the augmented reality data comprises video data and audio data, and the video data and the audio data correspond to the same video file.
Step S320: and sending the video data to an augmented reality device and sending the audio data to an intelligent home device.
Steps S310 to S320 have been described in detail in the foregoing embodiments and are not repeated here.
Step S330: and acquiring somatosensory data, wherein the somatosensory data corresponds to an action picture in the video data.
In some embodiments, in order to improve the user's immersion, the embodiment of the present application may also obtain somatosensory data when playing the audio data of the video file through the smart home device, where the somatosensory data may correspond to the action picture in the video data. In other words, when video data is acquired, action recognition can be performed on the video data, and corresponding somatosensory data can then be acquired based on the recognition result. For example, when a user watches a movie and a fight scene is identified in the video data, somatosensory data corresponding to the fight can be acquired accordingly.
Step S340: and sending the somatosensory data to wearable equipment, wherein the wearable equipment is used for outputting the somatosensory data.
As one way, after acquiring the somatosensory data, the electronic device may send the somatosensory data to a wearable device, where the wearable device is configured to output the somatosensory data. The wearable device may include an electromyography sensor: when a striking action exists in the video data, somatosensory data corresponding to the striking action can be obtained and output by the wearable device, and the output effect is similar to the striking action, so that the user can be better immersed in the video data. The wearable device may include a smart watch, a smart bracelet, a smart ring, and the like. The person 301 shown in fig. 10 wears not only the augmented reality device 302 but also the wearable device 303; when a striking motion is detected in the picture action of the video data, the electronic device can present different striking sensations through the wearable device 303. The larger the striking action in the picture, the stronger the striking sensation the wearable device 303 can present.
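The relationship above ("the larger the striking action, the stronger the striking sensation") could be modeled by a simple intensity mapping such as the following sketch; the action label, the magnitude scale, and the linear mapping are assumptions for illustration, not specified by the embodiment.

```python
def haptic_intensity(action, magnitude, max_level=10):
    """Map a recognized action and its magnitude (0.0 to 1.0) to a
    discrete haptic level for the wearable device; actions other than
    striking produce no haptic output."""
    if action != "strike":
        return 0
    # larger striking actions map to stronger sensations, minimum level 1
    return max(1, round(magnitude * max_level))
```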
In another mode, when the intelligent household equipment is used for playing the audio data of the video file and the augmented reality equipment is used for playing the video data of the video file, the electronic equipment can also identify lamplight in the video data and acquire corresponding lamplight data by using an identification result, then the electronic equipment can send the lamplight data to the lamplight equipment, and the lamplight equipment can output the lamplight data while the augmented reality equipment outputs the video data. The lighting device can be an intelligent illumination device in the same environment as the augmented reality device.
As another way, when playing the audio data of the video file through the smart home device and the video data of the video file through the augmented reality device, the embodiment of the application can also identify sounds in the audio data and determine, according to the identification result, whether target sound data exists in the audio data, where target sound data has no corresponding picture (for example, wind sound has no corresponding video picture). If target sound data exists, device sound data matching the target sound data can be obtained; the electronic device can then send the device sound data to an air conditioning device or a fan device, which outputs the device sound data while the augmented reality device outputs the video data. By using the air conditioning device or the fan device, the embodiment of the application enables the user to experience the world in the virtual picture more realistically.
In other embodiments, after the electronic device is connected with the augmented reality device and the smart home device, the electronic device may detect whether a first document sending instruction is received; if so, it may acquire the target document corresponding to the first document sending instruction, where the first document sending instruction may be an instruction input by a user. After acquiring the target document, the electronic device may send the target document to one or more target electronic devices.
In a specific embodiment, when a teacher and a student perform a teaching task through the augmented reality device, the electronic device detects that the teacher inputs a file sharing instruction through the augmented reality device, at this time, the electronic device may directly obtain a target document corresponding to the file sharing instruction and send the target document to other electronic devices, and a user corresponding to the other electronic devices may be the student. When sharing the file, the file sharing instruction is received by the augmented reality device, but the file sharing task is executed by the electronic device, so that the data processing capacity of the augmented reality device can be reduced to a certain extent, and the use experience of a user can be improved.
In other embodiments, after the electronic device is connected to the augmented reality device and the smart home device, the electronic device may also detect whether pathology data sent by another electronic device is received, where the other electronic device may be a medical analysis instrument. When the electronic device obtains the pathology data, it may send the pathology data and the video data to the augmented reality device simultaneously, send the audio data to the smart home device, instruct the augmented reality device to display the pathology data and the video data, and instruct the smart home device to play the audio data.
In a specific embodiment, a doctor can perform remote diagnosis with a patient through the augmented reality device. During the diagnosis, the doctor can obtain real-time pathology data by inquiring the patient, observing the patient's complexion, and the like, and can input a diagnosis record through the augmented reality device. When acquiring the diagnosis record, the augmented reality device can search for stored machine pathology data matching the patient's identity; if such data exists, the real-time diagnosis pathology data is sent to the medical analysis instrument. The medical analysis instrument, which stores the pathology data corresponding to the patient's identity, can comprehensively analyze the real-time diagnosis pathology data together with the patient's stored pathology data to obtain a pathology analysis result. When the electronic device obtains the pathology analysis result, it can perform focus rendering on the result to obtain a three-dimensional rendering, and then transmit the rendering to the augmented reality device for display, so that the doctor can observe the patient's condition more intuitively and vividly.
According to the data processing method provided by the embodiment of the application, when the augmented reality data is acquired, the audio data and the video data in the augmented reality data can be sent to different devices, where the video data and the audio data correspond to the same video file. By sending the video data and the audio data of the same video file to the augmented reality device and the smart home device respectively, the application can reduce the data processing load of the augmented reality device and improve the user's sense of immersion when using the augmented reality device. In addition, by having other electronic devices such as the smart home device carry out data transmission and processing, the embodiment of the application can, to a certain extent, reduce the data processing load of the augmented reality device and thereby accelerate the display of the augmented reality data. Moreover, through the smart home device and other electronic devices, the embodiment of the application can present different sensory experiences to the user more realistically and effectively.
Referring to fig. 11, a data processing apparatus 400 is provided in an embodiment of the present application. In a specific embodiment, the data processing apparatus 400 includes: an acquisition module 410 and a transmission module 420.
The acquisition module 410 is configured to obtain augmented reality data, where the augmented reality data includes video data and audio data, and the video data and the audio data correspond to a same video file.
The sending module 420 is configured to send the video data to an augmented reality device, and send the audio data to an intelligent home device, where the augmented reality device is configured to play the video data, and the intelligent home device is configured to play the audio data.
Referring to fig. 12, the transmission module 420 may include a device determination unit 421 and a data transmission unit 422.
The device determining unit 421 is configured to determine whether an audio playing device exists in a preset range of the augmented reality device. The data sending unit 422 is configured to, if any, use the audio playing device as the smart home device, send the audio data to the smart home device, and send the video data to the augmented reality device.
Further, the data sending unit 422 is configured to determine whether the audio playing device is a plurality of audio playing devices if the audio playing device is present; and if the number of the audio playing devices is multiple, selecting at least one from the multiple audio playing devices as the intelligent home equipment.
Further, the data sending unit 422 is further configured to determine whether a sound box device exists in the plurality of audio playing devices; and if the sound box equipment exists, taking the sound box equipment as the intelligent household equipment.
Further, the data sending unit 422 is further configured to, if no sound box device exists among the plurality of audio playing devices, use an audio playing device whose speaker is not in a working state as the smart home device.
Further, the data processing method may be applied to an electronic device, and the data sending unit 422 is further configured to use, if the audio playing device is plural, the audio playing device closest to the electronic device as the smart home device.
Further, the data processing apparatus 400 is further configured to send the video data and the audio data to the augmented reality device at the same time if it is determined that the audio playing device does not exist within the preset range of the augmented reality device.
Further, after the augmented reality data is acquired, the data processing apparatus 400 is further configured to acquire somatosensory data, where the somatosensory data corresponds to an action picture in the video data; and sending the somatosensory data to wearable equipment, wherein the wearable equipment is used for outputting the somatosensory data.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and units described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
According to the data processing apparatus provided by the embodiment of the application, when the augmented reality data is acquired, the audio data and the video data in the augmented reality data can be sent to different devices, where the video data and the audio data correspond to the same video file. By sending the video data and the audio data of the same video file to the augmented reality device and the smart home device respectively, the application can reduce the data processing load of the augmented reality device and improve the user's sense of immersion when using the augmented reality device.
Referring to fig. 13, a block diagram of an electronic device 500 according to an embodiment of the application is shown. The electronic device 500 may be a smart phone, a tablet computer, an electronic book reader, or another electronic device capable of running application programs. The electronic device 500 of the present application may include one or more of the following components: a processor 510, a memory 520, and one or more application programs, where the one or more application programs may be stored in the memory 520 and configured to be executed by the one or more processors 510, the one or more programs being configured to perform the methods described in the foregoing method embodiments.
Processor 510 may include one or more processing cores. Using various interfaces and lines to connect the various parts of the electronic device 500, the processor 510 performs the various functions of the electronic device 500 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 520 and by invoking data stored in the memory 520. Optionally, the processor 510 may be implemented in hardware in at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 510 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 510 and may instead be implemented by a separate communication chip.
Memory 520 may include Random Access Memory (RAM) or Read-Only Memory (ROM). Memory 520 may be used to store instructions, programs, code sets, or instruction sets. The memory 520 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may also store data created by the electronic device 500 in use (e.g., phonebook, audio and video data, chat log data), and the like.
Referring to FIG. 14, a block diagram of a computer readable storage medium 600 according to an embodiment of the application is shown. The computer readable storage medium 600 has stored therein program code that can be invoked by a processor to perform the methods described in the foregoing method embodiments.
The computer readable storage medium 600 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read-only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 600 comprises a non-transitory computer-readable storage medium. The computer readable storage medium 600 has storage space for program code 610 that performs any of the method steps in the method embodiments described above. The program code can be read from or written into one or more computer program products. The program code 610 may, for example, be compressed in a suitable form. Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present application, not to limit it; although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
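The data processing method described in the foregoing embodiments can be sketched in code. The following is a minimal, hypothetical illustration only, not taken from the patent: the class name, field names, threshold values, and return format are all assumptions made for the sketch.

```python
# Hypothetical sketch of the data processing method described above.
# All names, fields, and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AudioPlayingDevice:
    name: str
    is_speaker: bool        # a dedicated speaker ("sound box") device
    loudspeaker_busy: bool  # whether its loudspeaker is already in a working state
    distance_m: float       # distance from the electronic device


def select_smart_home_device(
    devices: list[AudioPlayingDevice],
) -> Optional[AudioPlayingDevice]:
    """Pick the audio playing device that should receive the audio data:
    prefer a dedicated speaker device, then one whose loudspeaker is idle,
    then the nearest device; return None if no device is within range."""
    if not devices:
        return None
    if len(devices) == 1:
        return devices[0]
    speakers = [d for d in devices if d.is_speaker]
    if speakers:
        return speakers[0]
    idle = [d for d in devices if not d.loudspeaker_busy]
    if idle:
        return idle[0]
    return min(devices, key=lambda d: d.distance_m)


def route_augmented_reality_data(
    target_time_ms: float,
    preset_time_ms: float,
    devices: list[AudioPlayingDevice],
) -> dict:
    """If processing is fast enough, synchronize the streams and split them
    across devices; otherwise send both streams, unsynchronized, to the
    augmented reality device."""
    if target_time_ms < preset_time_ms:
        sink = select_smart_home_device(devices)
        if sink is None:
            # No audio playing device in range: send both streams to the
            # augmented reality device at the same time.
            return {"video": "ar_device", "audio": "ar_device", "synchronized": True}
        return {"video": "ar_device", "audio": sink.name, "synchronized": True}
    return {"video": "ar_device", "audio": "ar_device", "synchronized": False}
```

For example, with a busy television and an idle soundbar in range, a fast processing path would route the video to the augmented reality device and the audio to the soundbar, while a slow path would keep both streams on the augmented reality device without synchronization.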

Claims (11)

1. A method of data processing, the method comprising:
obtaining augmented reality data, wherein the augmented reality data comprises video data and audio data, and the video data and the audio data correspond to the same video file;
acquiring a target time required for processing the augmented reality data;
if the target time is less than a preset time, synchronizing the video data and the audio data, sending the synchronized video data to an augmented reality device, and sending the audio data to a smart home device, wherein the augmented reality device is used for playing the video data, and the smart home device is used for playing the audio data; and
if the target time is greater than or equal to the preset time, not synchronizing the video data and the audio data, and sending the video data and the audio data corresponding to video frames in the augmented reality data to the augmented reality device.
2. The method of claim 1, wherein the sending the synchronized video data to an augmented reality device and the sending the audio data to a smart home device comprise:
determining whether an audio playing device exists within a preset range of the augmented reality device; and
if the audio playing device exists, taking the audio playing device as the smart home device, sending the audio data to the smart home device, and sending the video data to the augmented reality device.
3. The method of claim 2, wherein the taking the audio playing device as the smart home device if the audio playing device exists comprises:
if the audio playing device exists, determining whether there are a plurality of audio playing devices; and
if there are a plurality of audio playing devices, selecting at least one of the plurality of audio playing devices as the smart home device.
4. The method of claim 3, wherein the selecting at least one of the plurality of audio playing devices as the smart home device if there are a plurality of audio playing devices comprises:
determining whether a speaker device exists among the plurality of audio playing devices; and
if the speaker device exists, taking the speaker device as the smart home device.
5. The method according to claim 4, further comprising:
if no speaker device exists among the plurality of audio playing devices, taking an audio playing device whose loudspeaker is not in a working state as the smart home device.
6. The method of claim 3, wherein the method is applied to an electronic device, and the selecting at least one of the plurality of audio playing devices as the smart home device if there are a plurality of audio playing devices further comprises:
if there are a plurality of audio playing devices, taking the audio playing device closest to the electronic device as the smart home device.
7. The method according to claim 2, further comprising:
if no audio playing device exists, sending the video data and the audio data to the augmented reality device at the same time.
8. The method according to any one of claims 1 to 7, further comprising, after the obtaining the augmented reality data:
acquiring somatosensory data, wherein the somatosensory data corresponds to an action picture in the video data; and
sending the somatosensory data to a wearable device, wherein the wearable device is used for outputting the somatosensory data.
9. A data processing apparatus, the apparatus comprising:
an acquisition module, configured to acquire augmented reality data, wherein the augmented reality data comprises video data and audio data, and the video data and the audio data correspond to the same video file; and
a sending module, configured to acquire a target time required for processing the augmented reality data; if the target time is less than a preset time, synchronize the video data and the audio data, send the synchronized video data to an augmented reality device, and send the audio data to a smart home device, wherein the augmented reality device is used for playing the video data, and the smart home device is used for playing the audio data; and if the target time is greater than or equal to the preset time, not synchronize the video data and the audio data, and send the video data and the audio data corresponding to video frames in the augmented reality data to the augmented reality device.
10. An electronic device, comprising:
one or more processors;
a memory; and
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs being configured to perform the method of any one of claims 1-8.
11. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein program code that can be invoked by a processor to execute the method according to any one of claims 1-8.
CN202111015711.2A 2021-08-31 2021-08-31 Data processing method, device, electronic equipment and readable storage medium Active CN113660347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111015711.2A CN113660347B (en) 2021-08-31 2021-08-31 Data processing method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN113660347A CN113660347A (en) 2021-11-16
CN113660347B true CN113660347B (en) 2024-05-07

Family

ID=78493363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111015711.2A Active CN113660347B (en) 2021-08-31 2021-08-31 Data processing method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113660347B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107003733A (en) * 2014-12-27 2017-08-01 英特尔公司 Technology for sharing augmented reality presentation
CN109564504A (en) * 2016-08-10 2019-04-02 高通股份有限公司 For the multimedia device based on mobile processing space audio
CN111466122A (en) * 2017-10-12 2020-07-28 弗劳恩霍夫应用研究促进协会 Audio delivery optimization for virtual reality applications

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10089071B2 (en) * 2016-06-02 2018-10-02 Microsoft Technology Licensing, Llc Automatic audio attenuation on immersive display devices


Similar Documents

Publication Publication Date Title
US10699482B2 (en) Real-time immersive mediated reality experiences
US20170195650A1 (en) Method and system for multi point same screen broadcast of video
CN108594997B (en) Gesture skeleton construction method, device, equipment and storage medium
CN110413108B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
JP6760271B2 (en) Information processing equipment, information processing methods and programs
US9762791B2 (en) Production of face images having preferred perspective angles
EP3180911A1 (en) Immersive video
WO2021098338A1 (en) Model training method, media information synthesizing method, and related apparatus
CN110401810B (en) Virtual picture processing method, device and system, electronic equipment and storage medium
EP4243398A1 (en) Video processing method and apparatus, electronic device, and storage medium
US10998870B2 (en) Information processing apparatus, information processing method, and program
US11412341B2 (en) Electronic apparatus and controlling method thereof
US20230306654A1 (en) Augmented reality interactive display method and device
CN113467603A (en) Audio processing method and device, readable medium and electronic equipment
CN109413152B (en) Image processing method, image processing device, storage medium and electronic equipment
KR20220061467A (en) Electronic device and Method for processing the audio signal thereof
EP3385915A1 (en) Method and device for processing multimedia information
CN114422935A (en) Audio processing method, terminal and computer readable storage medium
CN110956571A (en) SLAM-based virtual-real fusion method and electronic equipment
CN113963108A (en) Medical image cooperation method and device based on mixed reality and electronic equipment
CN113660347B (en) Data processing method, device, electronic equipment and readable storage medium
CN113676720A (en) Multimedia resource playing method and device, computer equipment and storage medium
WO2018135057A1 (en) Information processing device, information processing method, and program
KR102268750B1 (en) Method and apparatus for connecting user terminals as a group and providing a service including content associated with the group
CN112912822A (en) System for controlling audio-enabled connected devices in mixed reality environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant