CN107589932A - Data processing method, virtual reality terminal and mobile terminal - Google Patents

Data processing method, virtual reality terminal and mobile terminal

Info

Publication number
CN107589932A
CN107589932A (application CN201710771331.9A)
Authority
CN
China
Prior art keywords
audio data
bone conduction
position information
virtual reality
association relationship
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710771331.9A
Other languages
Chinese (zh)
Inventor
陈增
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201710771331.9A
Publication of CN107589932A
Pending legal-status Current

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention provide a data processing method, a virtual reality terminal, and a mobile terminal. The virtual reality terminal includes a plurality of bone conduction components, and the method includes: invoking a specific application in the virtual reality terminal, where the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component; when execution reaches a specific interface element, extracting the audio data corresponding to that specific interface element; and transmitting the audio data through the bone conduction component that has an association relationship with the audio data. In the embodiments of the present invention, by associating the audio data with the bone conduction components, a stereo sound effect can be formed in a three-dimensional environment, improving the user experience.

Description

Data processing method, virtual reality terminal and mobile terminal
Technical field
The present invention relates to the technical field of virtual reality, and in particular to a data processing method, a virtual reality terminal, and a mobile terminal.
Background technology
With the progress of science and technology, virtual reality (VR) technology has gradually become popular. Virtual reality technology is a computer simulation system that can create and let users experience a virtual world. It uses computer technology to generate a simulated environment and is a system-level simulation of multi-source information fusion, interactive three-dimensional dynamic scenes, and entity behavior, which can immerse the user in a virtual environment and give the sensation of being personally on the scene. The virtual reality application best known to the public is in game scenarios, and the most common virtual reality terminals are virtual reality helmets and head-mounted displays. A virtual reality helmet uses a head-mounted display to shut the user's vision and hearing off from the outside world and guide the user toward a feeling of immersion in the virtual environment. The head-mounted display is also the earliest virtual reality display; its display principle is that the left-eye and right-eye screens show the images for the left and right eyes respectively, and after the human eyes obtain this differing information, a stereoscopic perception is produced in the brain.
However, the audio processing of virtual reality terminals has long been criticized. In a real environment, sound comes from all directions, so the user can make direct and accurate judgments about the surrounding environment and what is happening in it. In a virtual environment, the user likewise needs to hear sounds coming from different positions; only then can a genuine sense of immersion be produced in the virtual environment.
Summary of the invention
In view of the above problems, embodiments of the present invention provide a data processing method and a corresponding virtual reality terminal and mobile terminal, so as to solve the above problem of poor audio processing in virtual reality terminals.
To solve the above problems, an embodiment of the present invention discloses a data processing method applied to a virtual reality terminal, where the virtual reality terminal includes a plurality of bone conduction components, and the method includes:
invoking a specific application in the virtual reality terminal, where the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component;
when execution reaches a specific interface element, extracting the audio data corresponding to the specific interface element;
transmitting the audio data through the bone conduction component that has an association relationship with the audio data.
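For illustration only, the following minimal sketch (in Kotlin, with assumed names such as BoneConductionComponent, AudioClip, and transmit; none of these appear in the application) shows one way the three steps above could be modeled in software; it is a reading of the method, not the claimed implementation.

```kotlin
// Positions at which bone conduction components sit around the user's ears.
enum class ComponentPosition { UPPER, LOWER, LEFT, RIGHT }

// Audio data belonging to a specific interface element.
data class AudioClip(val name: String, val elementId: String)

// A bone conduction component; transmit() stands in for the real hardware driver call.
class BoneConductionComponent(val position: ComponentPosition) {
    fun transmit(clip: AudioClip) = println("Playing '${clip.name}' via $position component")
}

class VirtualRealityTerminal(private val components: List<BoneConductionComponent>) {
    // Association relationship: interface element id -> components that should play its audio.
    private val associations = mutableMapOf<String, List<BoneConductionComponent>>()

    fun associate(elementId: String, positions: Set<ComponentPosition>) {
        associations[elementId] = components.filter { it.position in positions }
    }

    // Steps 102/103: when execution reaches a specific interface element,
    // extract its audio data and transmit it through the associated component(s).
    fun onInterfaceElementReached(elementId: String, clip: AudioClip) {
        associations[elementId]?.forEach { it.transmit(clip) }
    }
}

fun main() {
    val terminal = VirtualRealityTerminal(ComponentPosition.values().map { BoneConductionComponent(it) })
    // A sound that comes from above is associated with the upper component.
    terminal.associate("fighter-plane", setOf(ComponentPosition.UPPER))
    terminal.onInterfaceElementReached("fighter-plane", AudioClip("jet-roar", "fighter-plane"))
}
```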
An embodiment of the present invention also discloses a virtual reality terminal, where the virtual reality terminal includes a plurality of bone conduction components, and the virtual reality terminal includes:
a specific application invoking module, configured to invoke a specific application in the virtual reality terminal, where the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component;
an audio data extraction module, configured to, when execution reaches a specific interface element, extract the audio data corresponding to the specific interface element;
an audio data transfer module, configured to transmit the audio data through the bone conduction component that has an association relationship with the audio data.
An embodiment of the present invention also discloses a mobile terminal, including a processor, a memory, and a data processing program stored on the memory and executable on the processor, where the data processing program, when executed by the processor, implements the steps of the above data processing method.
An embodiment of the present invention also discloses a computer-readable storage medium on which a data processing program is stored, where the data processing program, when executed by a processor, implements the steps of the above data processing method.
Embodiments of the present invention have the following advantages:
In the embodiments of the present invention, a specific application is invoked in the virtual reality terminal; the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component; when execution reaches a specific interface element, the audio data corresponding to the specific interface element is extracted and transmitted through the bone conduction component that has an association relationship with the audio data. By associating the audio data with the bone conduction components, a stereo sound effect can be formed in a three-dimensional environment, improving the user experience.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of the steps of Embodiment One of a data processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of the steps of Embodiment Two of a data processing method according to an embodiment of the present invention;
Fig. 3 is a structural block diagram of a virtual reality terminal of Device Embodiment Three according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a virtual reality terminal of Device Embodiment Four according to an embodiment of the present invention.
Detailed description of the embodiments
To make the technical problems solved by the embodiments of the present invention, the technical solutions, and the beneficial effects clearer, the embodiments of the present invention are further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.
Method Embodiment One
Referring to Fig. 1, a flowchart of the steps of Embodiment One of a data processing method according to an embodiment of the present invention is shown. The method is applied to a virtual reality terminal, where the virtual reality terminal includes a plurality of bone conduction components, and the method may specifically include the following steps:
Step 101: invoke a specific application in the virtual reality terminal, where the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component;
In an embodiment of the present invention, the virtual reality terminal includes a mobile terminal, an optical module, and a structural module; of course, it may also include external modules such as a camera, a sensor, a locator, and a controller. The connection between an external module and the modules of the virtual reality terminal may be a data cable connection or a wireless connection. The data cable connection interface may include a USB (Universal Serial Bus) interface, an HDMI (High Definition Multimedia Interface) interface, and the like; the wireless connection may be Wi-Fi (Wireless Fidelity), Bluetooth, ZigBee, NFC (Near Field Communication, a short-range wireless communication technology), and the like.
In an embodiment of the present invention, the mobile terminal may be a terminal such as a smart phone or a tablet computer, or another terminal on which applications can be installed, such as a smart watch; the present invention does not limit the specific type of the mobile terminal. The operating system of the mobile terminal may include Android, iOS, Windows Phone, Windows, and the like.
In an embodiment of the present invention, the optical module consists of two groups of convex lenses and an optical adjustment mechanism. Each convex lens group may be a single lens or multiple lenses; the optical adjustment mechanism may be used to adjust the distance between the convex lenses and the user's eyes, to adjust the distance between the two groups of convex lenses, or to adjust the diopter of the convex lenses.
In an embodiment of the present invention, the structural module includes a circuit board inside the housing of the virtual reality terminal. Physical buttons are provided on the outside of the housing of the virtual reality terminal, and related functions, such as power on/off, play, exit, and return, are realized through the physical buttons and the circuit board.
It should be noted that, in an embodiment of the present invention, the virtual reality terminal also includes a plurality of bone conduction components. When the user uses the virtual reality terminal, the bone conduction components may be respectively fixed above, below, to the right of, and to the left of the user's ears, or at other positions around the ears, close against the user's skull; the embodiment of the present invention is not limited in this respect. The working principle of a bone conduction component is to convert sound into mechanical vibrations of different frequencies and transmit the sound waves through the vibration of the skull, the bony labyrinth, the inner ear lymph, the organ of Corti, the auditory nerve, and the auditory center. Compared with the traditional sound conduction mode in which a diaphragm produces sound waves, a bone conduction component eliminates many sound wave transmission steps, can achieve clear sound reproduction in a noisy environment, and its sound waves do not affect other people by spreading through the air. In an embodiment of the present invention, the bone conduction components may be separate from the mobile terminal and worn on the user's ears, close against the user's skull, for transmitting sound.
In an embodiment of the present invention, the virtual reality terminal also runs multiple specific applications, where a specific application has specific interface elements and corresponding audio data. The specific application may include a game application; when the virtual reality terminal runs the game application, a corresponding virtual environment is generated, and the virtual environment may be an environment formed by rendering the specific interface elements and the audio data.
In an embodiment of the present invention, a specific application is invoked in the virtual reality terminal, where the specific application may include specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component.
Further, the audio data may establish association relationships with the bone conduction components in advance. The association relationships may be established according to the position information of the interface elements corresponding to the audio data, or according to the user's own definitions; the embodiment of the present invention does not specifically limit this.
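As one illustrative reading of the preceding paragraph, the sketch below (Kotlin, with assumed names) shows the two ways of establishing the association that the paragraph mentions: from the position information of the interface element corresponding to the audio data, or from the user's own definitions.

```kotlin
// Position information carried by a specific interface element (assumed representation).
enum class Position { UPPER, LOWER, RIGHT, LEFT }
data class InterfaceElement(val id: String, val position: Position)

// Establish the associations in advance from each element's position information...
fun associationsFromPositions(elements: List<InterfaceElement>): Map<String, Position> =
    elements.associate { it.id to it.position }

// ...or take them from the user's own definitions, letting the user's choices win.
fun associationsWithUserOverrides(
    defaults: Map<String, Position>,
    userDefined: Map<String, Position>
): Map<String, Position> = defaults + userDefined

fun main() {
    val defaults = associationsFromPositions(listOf(InterfaceElement("fighter-plane", Position.UPPER)))
    val merged = associationsWithUserOverrides(defaults, mapOf("fighter-plane" to Position.LEFT))
    println(merged) // {fighter-plane=LEFT}: the user-defined choice replaces the position-based one
}
```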
Step 102: when execution reaches a specific interface element, extract the audio data corresponding to the specific interface element;
In an embodiment of the present invention, after the virtual reality terminal invokes the specific application, when execution reaches a specific interface element in the specific application, the audio data corresponding to the specific interface element is extracted. For example, a user is running a game application on the virtual reality terminal; when execution reaches the "second level" of the game application, a specific interface element "a fighter plane roaring past overhead" appears in the virtual environment, and the audio data corresponding to the specific interface element "a fighter plane roaring past overhead" is extracted.
Step 103: transmit the audio data through the bone conduction component that has an association relationship with the audio data.
In an embodiment of the present invention, after the audio data corresponding to the specific interface element is extracted, the bone conduction component that has an association relationship with the audio data can be used to transmit the audio data. It should be noted that the audio data may establish an association relationship with a bone conduction component in advance, or the association relationship may be established while the specific interface element is running; the embodiment of the present invention is not limited in this respect. Specifically, since the position information of the specific interface element "a fighter plane roaring past overhead" is upper position information, the corresponding audio data can be associated with the upper bone conduction component among the plurality of bone conduction components and transmitted through the upper bone conduction component, forming an immersive audio effect.
In the embodiment of the present invention, a specific application is invoked in the virtual reality terminal; the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component; when execution reaches a specific interface element, the audio data corresponding to the specific interface element is extracted and transmitted through the bone conduction component that has an association relationship with the audio data. By associating the audio data with the bone conduction components, the stereo sound effect of a three-dimensional environment can be formed, improving the user experience.
Method Embodiment Two
Referring to Fig. 2, a flowchart of the steps of Embodiment Two of a data processing method according to an embodiment of the present invention is shown. The method is applied to a virtual reality terminal, where the virtual reality terminal includes a plurality of bone conduction components, and the method may specifically include the following steps:
Step 201: detect the audio data corresponding to a specific interface element;
In an embodiment of the present invention, the audio data corresponding to a specific interface element in the specific application is detected. The specific interface elements include character elements, object elements, and control elements. For example, a character element may be a character in a game application, such as a "spirit"; an object element may be an object in a game application, such as a prop like a "gun" or a "tank"; a control element may be a control in a game application, such as a "river" or a "tower". The specific interface elements in the specific application are detected, and the audio data corresponding to each specific interface element is obtained.
In an embodiment of the present invention, the step of detecting the audio data corresponding to a specific interface element includes: detecting the audio data corresponding to a character element; and/or detecting the audio data corresponding to an object element; and/or detecting the audio data corresponding to a control element.
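Purely as a sketch of how the three element categories and their audio data detection might be represented (assumed names and file formats; the embodiment does not prescribe a detection mechanism):

```kotlin
// The three categories of specific interface element named in this embodiment.
sealed class InterfaceElement(val id: String, val audioFile: String?)
class CharacterElement(id: String, audioFile: String?) : InterfaceElement(id, audioFile) // e.g. a game character
class ObjectElement(id: String, audioFile: String?) : InterfaceElement(id, audioFile)    // e.g. a prop such as a gun or tank
class ControlElement(id: String, audioFile: String?) : InterfaceElement(id, audioFile)   // e.g. a river or a tower

// Detect the audio data corresponding to character, object and/or control elements.
fun detectAudioData(elements: List<InterfaceElement>): Map<String, String> =
    elements.filter { it.audioFile != null }
        .associate { it.id to it.audioFile!! }

fun main() {
    val found = detectAudioData(
        listOf(
            CharacterElement("spirit", "spirit_voice.pcm"),
            ObjectElement("tank", "tank_engine.pcm"),
            ControlElement("river", null) // no audio data attached to this element
        )
    )
    println(found) // {spirit=spirit_voice.pcm, tank=tank_engine.pcm}
}
```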
Step 202: establish an association relationship between the audio data and at least one bone conduction component;
Further, the audio data can each be associated with at least one bone conduction component. In a preferred embodiment of the present invention, the step of establishing an association relationship between the audio data and at least one bone conduction component includes: identifying the position information of the specific interface element corresponding to the audio data; and establishing, according to the position information, an association relationship between the audio data and at least one bone conduction component.
Specifically, the position information includes upper position information, lower position information, right position information, and left position information. The bone conduction components include an upper bone conduction component, a lower bone conduction component, a right bone conduction component, and a left bone conduction component, each close against the user's skull during use. The step of establishing, according to the position information, an association relationship between the audio data and at least one bone conduction component includes: judging the position information to which the audio data belongs; when the position information is upper position information, associating the audio data with the upper bone conduction component; or, when the position information is lower position information, associating the audio data with the lower bone conduction component; or, when the position information is right position information, associating the audio data with the right bone conduction component; or, when the position information is left position information, associating the audio data with the left bone conduction component.
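The four branches described above amount to a simple position-to-component mapping; the following sketch (assumed names, not the claimed implementation) expresses them as an exhaustive Kotlin when expression:

```kotlin
// Position information of a specific interface element and the four bone conduction components.
enum class Position { UPPER, LOWER, RIGHT, LEFT }
enum class BoneConductor { UPPER, LOWER, RIGHT, LEFT }

// Judge the position information the audio data belongs to and pick the matching component.
fun selectComponent(position: Position): BoneConductor = when (position) {
    Position.UPPER -> BoneConductor.UPPER // upper position information -> upper bone conduction component
    Position.LOWER -> BoneConductor.LOWER // lower position information -> lower bone conduction component
    Position.RIGHT -> BoneConductor.RIGHT // right position information -> right bone conduction component
    Position.LEFT -> BoneConductor.LEFT   // left position information  -> left bone conduction component
}

fun main() {
    println(selectComponent(Position.UPPER)) // UPPER
}
```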
It should be noted that the above way of establishing the association relationship is only an example of the embodiment of the present invention; there may be other ways of establishing the association relationship, such as establishing the association between the audio data and the bone conduction components based on a user-defined selection. In addition, the relationship between audio data and bone conduction components may be one-to-one or one-to-many, that is, one piece of audio data may establish association relationships with multiple bone conduction components; the embodiment of the present invention does not specifically limit this.
Step 203: invoke a specific application in the virtual reality terminal, where the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component;
In practice, after the relationships between the audio data and the bone conduction components are established, when the virtual reality terminal invokes a specific application, that is, when the user runs the specific application with the virtual reality terminal, the specific application has specific interface elements and corresponding audio data.
Step 204: when execution reaches a specific interface element, extract the audio data corresponding to the specific interface element;
Further, when the specific application runs to a specific interface element, the audio data corresponding to the specific interface element is extracted. For example, the user wears the virtual reality terminal and runs the specific application, where the specific application may include a game application; when execution reaches a specific interface element, for example, the character element "spirit", the audio data corresponding to the character element "spirit" is obtained.
Step 205: transmit the audio data through the bone conduction component that has an association relationship with the audio data.
Further, the bone conduction component that has an association relationship with the audio data corresponding to the above character element "spirit" can be activated, and the audio data is transmitted through that bone conduction component.
In a preferred embodiment of the present invention, the step of transmitting the audio data through the bone conduction component that has an association relationship with the audio data includes: transmitting the audio data through the upper bone conduction component, the lower bone conduction component, the right bone conduction component, or the left bone conduction component.
In this embodiment of the present invention, the audio data corresponding to the specific interface elements is detected, association relationships are established between the audio data and at least one bone conduction component, and a specific application is invoked in the virtual reality terminal, where the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component. When execution reaches a specific interface element, the audio data corresponding to the specific interface element is extracted and transmitted through the bone conduction component that has an association relationship with the audio data. By establishing the association relationships between the audio data and the bone conduction components, a more three-dimensional and more lifelike audio effect can be formed in the virtual environment, greatly improving the user experience.
It should be noted that, for brevity, the method embodiments are all described as a series of combined actions, but those skilled in the art should understand that the embodiments of the present invention are not limited by the described order of actions, because according to the embodiments of the present invention, some steps may be performed in other orders or simultaneously. In addition, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Device Embodiment Three
Fig. 3 is a structural block diagram of a virtual reality terminal according to an embodiment of the present invention. The virtual reality terminal 300 includes a plurality of bone conduction components, and the virtual reality terminal 300 shown in Fig. 3 includes a specific application invoking module 301, an audio data extraction module 302, and an audio data transfer module 303, where:
the specific application invoking module 301 is configured to invoke a specific application in the virtual reality terminal, where the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component;
the audio data extraction module 302 is configured to, when execution reaches a specific interface element, extract the audio data corresponding to the specific interface element;
the audio data transfer module 303 is configured to transmit the audio data through the bone conduction component that has an association relationship with the audio data.
Preferably, the virtual reality terminal also includes:
an audio data detection module, configured to detect the audio data corresponding to the specific interface element;
an association relationship establishing module, configured to establish an association relationship between the audio data and at least one bone conduction component.
Preferably, the specific interface elements include character elements, object elements, and control elements.
Preferably, the audio data detection module includes:
a first audio data detection submodule, configured to detect the audio data corresponding to the character element;
and/or a second audio data detection submodule, configured to detect the audio data corresponding to the object element;
and/or a third audio data detection submodule, configured to detect the audio data corresponding to the control element.
Preferably, the association relationship establishing module includes:
an identification submodule, configured to identify the position information of the specific interface element corresponding to the audio data;
an association relationship establishing submodule, configured to establish, according to the position information, an association relationship between the audio data and at least one bone conduction component.
Preferably, the position information includes upper position information, lower position information, right position information, and left position information, and the bone conduction components include an upper bone conduction component, a lower bone conduction component, a right bone conduction component, and a left bone conduction component; the association relationship establishing submodule includes:
a position information judging unit, configured to judge the position information to which the audio data belongs;
a first associating unit, configured to, when the position information is upper position information, associate the audio data with the upper bone conduction component;
or, a second associating unit, configured to, when the position information is lower position information, associate the audio data with the lower bone conduction component;
or, a third associating unit, configured to, when the position information is right position information, associate the audio data with the right bone conduction component;
or, a fourth associating unit, configured to, when the position information is left position information, associate the audio data with the left bone conduction component.
Preferably, the audio data transfer module 303 includes:
an audio data transfer submodule, configured to transmit the audio data through the upper bone conduction component, the lower bone conduction component, the right bone conduction component, or the left bone conduction component.
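Purely as an illustration of how the module decomposition of this device embodiment could be organized in code (assumed interface names; a sketch, not the patented design):

```kotlin
// Assumed supporting types for the sketch.
enum class Position { UPPER, LOWER, RIGHT, LEFT }
class AudioData(val elementId: String, val samples: ByteArray)

// Module 301: invokes the specific application in the virtual reality terminal.
interface SpecificApplicationInvokingModule { fun invokeApplication(appId: String) }

// Module 302: extracts the audio data corresponding to a specific interface element.
interface AudioDataExtractionModule { fun extract(elementId: String): AudioData? }

// Module 303: transmits the audio data through the associated bone conduction component(s).
interface AudioDataTransferModule { fun transmit(data: AudioData, via: Set<Position>) }

// Optional modules: detection of audio data and establishment of the associations.
interface AudioDataDetectionModule { fun detect(elementId: String): AudioData? }
interface AssociationEstablishingModule { fun establish(elementId: String, positions: Set<Position>) }

// The terminal composes the modules; onElementReached mirrors steps 204 and 205.
class VirtualRealityTerminal(
    private val extractor: AudioDataExtractionModule,
    private val transfer: AudioDataTransferModule,
    private val associations: Map<String, Set<Position>>
) {
    fun onElementReached(elementId: String) {
        val audio = extractor.extract(elementId) ?: return
        transfer.transmit(audio, associations[elementId] ?: emptySet())
    }
}
```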
In the embodiment of the present invention, a specific application is invoked in the virtual reality terminal; the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component; when execution reaches a specific interface element, the audio data corresponding to the specific interface element is extracted and transmitted through the bone conduction component that has an association relationship with the audio data. By associating the audio data with the bone conduction components, the stereo sound effect of a three-dimensional environment can be formed, improving the user experience.
An embodiment of the present invention also discloses a mobile terminal, including a processor, a memory, and a data processing program stored on the memory and executable on the processor, where the data processing program, when executed by the processor, implements the steps of the above data processing method.
An embodiment of the present invention also discloses a computer-readable storage medium on which a data processing program is stored, where the data processing program, when executed by a processor, implements the steps of the above data processing method.
Device Embodiment Four
Fig. 4 is a structural block diagram of a virtual reality terminal according to another embodiment of the present invention. The virtual reality terminal 400 includes a mobile terminal 407, an optical module 408, and a structural module 409, and may also include an external module 410, where the external module 410 may include a plurality of bone conduction components.
The mobile terminal 407 in the virtual reality terminal shown in Fig. 4 includes: at least one processor 401, a memory 402, at least one network interface 404, another user interface 403, and a camera component 406. The components in the mobile terminal 400 are coupled together through a bus system 405. It can be understood that the bus system 405 is used to realize the connection and communication between these components. In addition to a data bus, the bus system 405 also includes a power bus, a control bus, and a status signal bus. However, for the sake of clarity, the various buses are all labeled as the bus system 405 in Fig. 4. The camera component 406 includes a camera.
The user interface 403 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch pad, or a touch screen).
It can be understood that the memory 402 in the embodiment of the present invention may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DRRAM). The memory 402 of the systems and methods described in the embodiments of the present invention is intended to include, but is not limited to, these and any other suitable types of memory.
In some implementations, the memory 402 stores the following elements, executable modules or data structures, or a subset or superset thereof: an operating system 4021 and application programs 4022.
The operating system 4021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 4022 contain various application programs, such as a media player and a browser, for realizing various application services. A program implementing the method of the embodiment of the present invention may be included in the application programs 4022.
In the embodiment of the present invention, by calling a program or instructions stored in the memory 402, specifically, a program or instructions stored in the application programs 4022, the processor 401 is configured to invoke a specific application in the virtual reality terminal, where the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component; when execution reaches a specific interface element, extract the audio data corresponding to the specific interface element; and transmit the audio data through the bone conduction component that has an association relationship with the audio data.
The methods disclosed in the above embodiments of the present invention may be applied to or implemented by the processor 401. The processor 401 may be an integrated circuit chip with signal processing capabilities. In an implementation process, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 401 or by instructions in the form of software. The above processor 401 may be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 402, and the processor 401 reads the information in the memory 402 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described in the embodiments of the present invention may be implemented by hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented in one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof.
For a software implementation, the techniques described in the embodiments of the present invention may be implemented by modules (such as procedures and functions) that perform the functions described in the embodiments of the present invention. The software code may be stored in a memory and executed by a processor. The memory may be implemented inside or outside the processor.
Optionally, the following steps can also be implemented when the data processing program is executed by the processor 401: detecting the audio data corresponding to the specific interface element; and establishing an association relationship between the audio data and at least one bone conduction component.
Optionally, the specific interface elements include character elements, object elements, and control elements.
Optionally, the following steps can also be implemented when the data processing program is executed by the processor 401: detecting the audio data corresponding to the character element; and/or detecting the audio data corresponding to the object element; and/or detecting the audio data corresponding to the control element.
Optionally, the following steps can also be implemented when the data processing program is executed by the processor 401: identifying the position information of the specific interface element corresponding to the audio data; and establishing, according to the position information, an association relationship between the audio data and at least one bone conduction component.
Optionally, the position information includes upper position information, lower position information, right position information, and left position information, and the bone conduction components include an upper bone conduction component, a lower bone conduction component, a right bone conduction component, and a left bone conduction component.
Optionally, the following steps can also be implemented when the data processing program is executed by the processor 401: judging the position information to which the audio data belongs; when the position information is upper position information, associating the audio data with the upper bone conduction component; or, when the position information is lower position information, associating the audio data with the lower bone conduction component; or, when the position information is right position information, associating the audio data with the right bone conduction component; or, when the position information is left position information, associating the audio data with the left bone conduction component.
Optionally, the following step can also be implemented when the data processing program is executed by the processor 401: transmitting the audio data through the upper bone conduction component, the lower bone conduction component, the right bone conduction component, or the left bone conduction component.
The mobile terminal 400 can implement each process implemented by the mobile terminal in the foregoing embodiments; to avoid repetition, the details are not repeated here.
It can be seen that, in the embodiment of the present invention, the audio data corresponding to the specific interface elements is detected, association relationships are established between the audio data and at least one bone conduction component, and a specific application is invoked in the virtual reality terminal, where the specific application has specific interface elements and corresponding audio data, and each piece of audio data has an association relationship with at least one bone conduction component. When execution reaches a specific interface element, the audio data corresponding to the specific interface element is extracted and transmitted through the bone conduction component that has an association relationship with the audio data. By establishing the association relationships between the audio data and the bone conduction components, a more three-dimensional and more lifelike audio effect can be formed in the virtual environment, greatly improving the user experience.
Those of ordinary skill in the art can appreciate that the units and algorithm steps of the examples described in combination with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered as going beyond the scope of the present invention.
It can be clearly understood by those skilled in the art that, for the convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the device embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
If the function is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can easily think of changes or substitutions within the technical scope disclosed by the present invention, and they should all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention should be subject to the scope of the claims.

Claims (16)

1. A data processing method, characterized in that it is applied to a virtual reality terminal, the virtual reality terminal includes a plurality of bone conduction components, and the method includes:
invoking a specific application in the virtual reality terminal, wherein the specific application has specific interface elements and corresponding audio data, and the audio data each has an association relationship with at least one bone conduction component;
when execution reaches a specific interface element, extracting the audio data corresponding to the specific interface element;
transmitting the audio data through the bone conduction component that has an association relationship with the audio data.
2. The method according to claim 1, characterized in that the association relationship between the audio data and at least one bone conduction component is established in the following manner:
detecting the audio data corresponding to the specific interface element;
establishing an association relationship between the audio data and at least one bone conduction component.
3. The method according to claim 1, characterized in that the specific interface elements include character elements, object elements, and control elements.
4. The method according to claim 1 or 3, characterized in that the step of detecting the audio data corresponding to the specific interface element includes:
detecting the audio data corresponding to the character element;
and/or detecting the audio data corresponding to the object element;
and/or detecting the audio data corresponding to the control element.
5. The method according to claim 2, characterized in that the step of establishing an association relationship between the audio data and at least one bone conduction component includes:
identifying the position information of the specific interface element corresponding to the audio data;
establishing, according to the position information, an association relationship between the audio data and at least one bone conduction component.
6. The method according to claim 5, characterized in that the position information includes upper position information, lower position information, right position information, and left position information, the bone conduction components include an upper bone conduction component, a lower bone conduction component, a right bone conduction component, and a left bone conduction component, and the step of establishing, according to the position information, an association relationship between the audio data and at least one bone conduction component includes:
judging the position information to which the audio data belongs;
when the position information is upper position information, associating the audio data with the upper bone conduction component;
or, when the position information is lower position information, associating the audio data with the lower bone conduction component;
or, when the position information is right position information, associating the audio data with the right bone conduction component;
or, when the position information is left position information, associating the audio data with the left bone conduction component.
7. The method according to claim 1 or 6, characterized in that the step of transmitting the audio data through the bone conduction component that has an association relationship with the audio data includes:
transmitting the audio data through the upper bone conduction component, the lower bone conduction component, the right bone conduction component, or the left bone conduction component.
8. A virtual reality terminal, characterized in that the virtual reality terminal includes a plurality of bone conduction components, and the virtual reality terminal includes:
a specific application invoking module, configured to invoke a specific application in the virtual reality terminal, wherein the specific application has specific interface elements and corresponding audio data, and the audio data each has an association relationship with at least one bone conduction component;
an audio data extraction module, configured to, when execution reaches a specific interface element, extract the audio data corresponding to the specific interface element;
an audio data transfer module, configured to transmit the audio data through the bone conduction component that has an association relationship with the audio data.
9. The virtual reality terminal according to claim 8, characterized in that the virtual reality terminal also includes: an audio data detection module, configured to detect the audio data corresponding to the specific interface element;
an association relationship establishing module, configured to establish an association relationship between the audio data and at least one bone conduction component.
10. The virtual reality terminal according to claim 8, characterized in that the specific interface elements include character elements, object elements, and control elements.
11. The virtual reality terminal according to claim 8 or 10, characterized in that the audio data detection module includes:
a first audio data detection module, configured to detect the audio data corresponding to the character element;
and/or a second audio data detection module, configured to detect the audio data corresponding to the object element;
and/or a third audio data detection module, configured to detect the audio data corresponding to the control element.
12. The virtual reality terminal according to claim 9, characterized in that the association relationship establishing module includes:
an identification submodule, configured to identify the position information of the specific interface element corresponding to the audio data;
an association relationship establishing submodule, configured to establish, according to the position information, an association relationship between the audio data and at least one bone conduction component.
13. The virtual reality terminal according to claim 12, characterized in that the position information includes upper position information, lower position information, right position information, and left position information, the bone conduction components include an upper bone conduction component, a lower bone conduction component, a right bone conduction component, and a left bone conduction component, and the association relationship establishing submodule includes:
a position information judging unit, configured to judge the position information to which the audio data belongs;
a first associating unit, configured to, when the position information is upper position information, associate the audio data with the upper bone conduction component;
or, a second associating unit, configured to, when the position information is lower position information, associate the audio data with the lower bone conduction component;
or, a third associating unit, configured to, when the position information is right position information, associate the audio data with the right bone conduction component;
or, a fourth associating unit, configured to, when the position information is left position information, associate the audio data with the left bone conduction component.
14. The virtual reality terminal according to claim 8 or 13, characterized in that the audio data transfer module includes:
an audio data transfer submodule, configured to transmit the audio data through the upper bone conduction component, the lower bone conduction component, the right bone conduction component, or the left bone conduction component.
15. A mobile terminal, characterized in that it includes a processor, a memory, and a data processing program stored on the memory and executable on the processor, wherein the data processing program, when executed by the processor, implements the steps of the data processing method according to any one of claims 1 to 7.
16. A computer-readable storage medium, characterized in that a data processing program is stored on the computer-readable storage medium, and the data processing program, when executed by a processor, implements the steps of the data processing method according to any one of claims 1 to 7.
CN201710771331.9A 2017-08-31 2017-08-31 Data processing method, virtual reality terminal and mobile terminal Pending CN107589932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710771331.9A CN107589932A (en) 2017-08-31 2017-08-31 Data processing method, virtual reality terminal and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710771331.9A CN107589932A (en) 2017-08-31 2017-08-31 Data processing method, virtual reality terminal and mobile terminal

Publications (1)

Publication Number Publication Date
CN107589932A true CN107589932A (en) 2018-01-16

Family

ID=61051626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710771331.9A Pending CN107589932A (en) Data processing method, virtual reality terminal and mobile terminal

Country Status (1)

Country Link
CN (1) CN107589932A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100110368A1 (en) * 2008-11-02 2010-05-06 David Chaum System and apparatus for eyeglass appliance platform
CN106226903A (en) * 2016-08-02 2016-12-14 彭顺德 A kind of virtual reality helmet
CN106982407A (en) * 2016-05-26 2017-07-25 上海拆名晃信息科技有限公司 A kind of virtual reality device of the three-dimensional sound field of combination osteoacusis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180116

RJ01 Rejection of invention patent application after publication