CN108827338A - Voice navigation method and related product - Google Patents
- Publication number: CN108827338A
- Application number: CN201810574609.8A
- Authority
- CN
- China
- Prior art keywords
- target
- internet
- wearable device
- things
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3629—Guidance using speech or audio output, e.g. text-to-speech
Abstract
This application discloses a voice navigation method and a related product, applied to a wearable device. The wearable device includes a processing circuit, and a communication circuit, a sensor, and an audio component connected to the processing circuit. The method includes: obtaining a target position through the Internet of Things; obtaining a current position; generating a navigation route between the current position and the target position; and playing the navigation route by voice. With the embodiments of the present application, voice navigation can be realized through the wearable device, which enriches the functions of the wearable device and improves user experience.
Description
Technical field
This application relates to the field of electronic technology, and in particular to a voice navigation method and a related product.
Background art
With the maturation of wireless technology, scenarios in which wireless headsets connect to devices such as mobile phones through wireless technology are becoming more and more common. People can realize various functions such as listening to music and making phone calls through a wireless headset. However, the functions of current wireless headsets are relatively limited, which reduces user experience.
Summary of the invention
The embodiments of the present application provide a voice navigation method and a related product, which can realize voice navigation through a wearable device, enrich the functions of the wearable device, and improve user experience.
In a first aspect, an embodiment of the present application provides a wearable device. The wearable device includes a processing circuit, and a communication circuit, a sensor, and an audio component connected to the processing circuit, wherein:
the communication circuit is configured to obtain a target position through the Internet of Things;
the sensor is configured to obtain a current position;
the processing circuit is configured to generate a navigation route between the current position and the target position; and
the audio component is configured to play the navigation route by voice.
In a second aspect, an embodiment of the present application provides a voice navigation method, applied to a wearable device, including:
obtaining a target position through the Internet of Things;
obtaining a current position;
generating a navigation route between the current position and the target position; and
playing the navigation route by voice.
In a third aspect, an embodiment of the present application provides a voice navigation apparatus, applied to a wearable device. The voice navigation apparatus includes an acquiring unit, a generating unit, and a playing unit, wherein:
the acquiring unit is configured to obtain a target position through the Internet of Things, and to obtain a current position;
the generating unit is configured to generate a navigation route between the current position and the target position; and
the playing unit is configured to play the navigation route by voice.
In a fourth aspect, an embodiment of the present application provides a wearable device, including a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for executing the steps of any method in the second aspect of the embodiments of the present application.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program for electronic data interchange, wherein the computer program causes a computer to execute some or all of the steps described in any method of the second aspect of the embodiments of the present application.
In a sixth aspect, an embodiment of the present application provides a computer program product, including a non-transitory computer-readable storage medium storing a computer program. The computer program is operable to cause a computer to execute some or all of the steps described in any method of the second aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the voice navigation method and related product described in the embodiments of the present application, applied to a wearable device worn on a user's head, a target position is obtained through the Internet of Things, a current position is obtained, a navigation route between the current position and the target position is generated, and the navigation route is played by voice. In this way, voice navigation can be realized through the wearable device, which enriches the functions of the wearable device and improves user experience.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Figure 1A is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application;
Figure 1B is a schematic flowchart of a voice navigation method disclosed in an embodiment of the present application;
Fig. 1C is a schematic diagram of a positioning demonstration disclosed in an embodiment of the present application;
Fig. 2 is a schematic flowchart of another voice navigation method disclosed in an embodiment of the present application;
Fig. 3 is a schematic flowchart of another voice navigation method disclosed in an embodiment of the present application;
Fig. 4 is a schematic structural diagram of another wearable device disclosed in an embodiment of the present application;
Fig. 5 is a schematic structural diagram of a voice navigation apparatus disclosed in an embodiment of the present application.
Detailed description of the embodiments
In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application are clearly and completely described below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
Each aspect is described in detail below.
The terms "first", "second", "third", "fourth", and the like in the description, claims, and drawings of the present application are used to distinguish different objects, not to describe a particular order. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to independent or alternative embodiments that are mutually exclusive of other embodiments. Those skilled in the art will understand, both explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The wearable device may include at least one of the following: a wireless headset, a brain wave collector, an augmented reality (AR)/virtual reality (VR) device, smart glasses, and the like, where the wireless headset may communicate through the following technologies: wireless fidelity (Wi-Fi), Bluetooth, visible light communication, invisible light communication (infrared communication, ultraviolet communication), and the like. In the embodiments of the present application, a wireless headset including a left earbud and a right earbud is taken as an example; the left earbud and the right earbud may each serve as an individual component.
The electronic device involved in the embodiments of the present application may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to wireless modems, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. For convenience of description, the devices mentioned above are collectively referred to as electronic devices.
Optionally, the wireless headset may be an ear-hook earphone, an in-ear earphone, or a headband earphone, which is not limited in the embodiments of the present application.
The wireless headset may be accommodated in an earphone box. The earphone box may include: two receiving cavities (a first receiving cavity and a second receiving cavity) whose size and shape are designed to receive a pair of wireless earbuds (a left earbud and a right earbud); and one or more magnetic parts arranged in the box, which are used to magnetically attract the pair of wireless earbuds and magnetically fix them in the two receiving cavities, respectively. The earphone box may also include an ear cap. The size and shape of the first receiving cavity are designed to receive the first wireless earbud, and the size and shape of the second receiving cavity are designed to receive the second wireless earbud.
The wireless headset may include an earphone housing, a rechargeable battery (for example, a lithium battery) arranged in the earphone housing, a plurality of metal contacts for connecting the battery to a charging device, and a speaker assembly including a driver unit and a directed sound port, wherein the driver unit includes a magnet, a voice coil, and a diaphragm and is used to emit sound from the directed sound port, and the plurality of metal contacts are arranged on the outer surface of the earphone housing.
In a possible implementation, the wireless headset may also include a touch area, which may be located on the outer surface of the earphone housing. At least one touch sensor is provided in the touch area for detecting touch operations; the touch sensor may include a capacitive sensor. When a user touches the touch area, the at least one capacitive sensor can detect a change in self-capacitance to recognize the touch operation.
In a possible implementation, the wireless headset may also include an acceleration sensor and a three-axis gyroscope, which may be arranged in the earphone housing and are used to recognize picking-up and taking-off actions of the wireless headset.
In a possible implementation, the wireless headset may also include at least one barometric sensor, which may be arranged on the surface of the earphone housing and is used to detect the in-ear air pressure after the wireless headset is worn. The wearing tightness of the wireless headset can be detected through the barometric sensor. When it is detected that the wireless headset is worn loosely, the wireless headset may send prompt information to the electronic device connected to the wireless headset, so as to prompt the user that the wireless headset is at risk of falling out.
The embodiments of the present application are described in detail below.
Referring to Figure 1A, Figure 1A is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application. The wearable device 100 includes a storage and processing circuit 110, and a sensor 170 and an audio component 140 connected to the storage and processing circuit 110, wherein:
The wearable device 100 may include a control circuit, which may include the storage and processing circuit 110. The storage and processing circuit 110 may include memory, such as hard-drive memory, non-volatile memory (such as flash memory or other electrically programmable read-only memory used to form a solid-state drive), volatile memory (such as static or dynamic random-access memory), and the like, which is not limited in the embodiments of the present application. The processing circuit in the storage and processing circuit 110 may be used to control the operation of the wearable device 100. The processing circuit may be implemented based on one or more microprocessors, microcontrollers, digital signal processors, baseband processors, power management units, audio codec chips, application-specific integrated circuits, display driver integrated circuits, and the like.
The storage and processing circuit 110 may be used to run software in the wearable device 100, such as Internet browser applications, voice over Internet protocol (VoIP) call applications, email applications, media playback applications, operating system functions, and the like. This software may be used to perform control operations such as, for example, image acquisition based on a camera, ambient light measurement based on an ambient light sensor, proximity measurement based on a proximity sensor, information display functions realized by status indicators such as light-emitting-diode status indicator lamps, touch event detection based on a touch sensor, functions associated with displaying information on multiple (for example, layered) displays, operations associated with performing wireless communication functions, operations associated with collecting and generating audio signals, control operations associated with collecting and processing button-press event data, and other functions in the wearable device 100, which is not limited in the embodiments of the present application.
The wearable device 100 may also include an input-output circuit 150. The input-output circuit 150 may be used to enable the wearable device 100 to input and output data, that is, to allow the wearable device 100 to receive data from an external device and also to allow the wearable device 100 to output data to an external device. The input-output circuit 150 may further include the sensor 170. The sensor 170 may include an ambient light sensor, a proximity sensor based on light and capacitance, a touch sensor (for example, a light-based touch sensor and/or a capacitive touch sensor, where the touch sensor may be a part of a touch display screen or may be used independently as a touch sensor arrangement), an acceleration sensor, an ultrasonic sensor, and other sensors. The ultrasonic sensor may specifically include at least one receiver and a microphone: the microphone emits an ultrasonic wave and the receiver receives it, so that the receiver and the microphone together form the ultrasonic sensor.
The input-output circuit 150 may also include one or more displays, such as display 130. The display 130 may include one or a combination of several of a liquid crystal display, an organic light-emitting diode display, an electronic ink display, a plasma display, and displays using other display technologies. The display 130 may include a touch sensor array (that is, the display 130 may be a touch display screen). The touch sensor may be a capacitive touch sensor formed by an array of transparent touch sensor electrodes (such as indium tin oxide (ITO) electrodes), or may be a touch sensor formed using other touch technologies, such as acoustic wave touch, pressure-sensitive touch, resistive touch, optical touch, and the like, which is not limited in the embodiments of the present application.
The audio component 140 may be used to provide audio input and output functions for the wearable device 100. The audio component 140 in the wearable device 100 may include a speaker, a microphone, a buzzer, a tone generator, and other components for generating and detecting sound.
The communication circuit 120 may be used to provide the wearable device 100 with the ability to communicate with external devices. The communication circuit 120 may include analog and digital input-output interface circuits, and wireless communication circuits based on radio-frequency signals and/or optical signals. The wireless communication circuit in the communication circuit 120 may include a radio-frequency transceiver circuit, a power amplifier circuit, a low-noise amplifier, switches, filters, and antennas. For example, the wireless communication circuit in the communication circuit 120 may include a circuit for supporting near-field communication (NFC) by transmitting and receiving near-field-coupled electromagnetic signals; for example, the communication circuit 120 may include a near-field communication antenna and a near-field communication transceiver. The communication circuit 120 may also include a cellular telephone transceiver and antenna, a wireless local area network transceiver circuit and antenna, and the like.
The wearable device 100 may further include a battery, a power management circuit, and other input-output units 160. The input-output units 160 may include buttons, joysticks, click wheels, scroll wheels, touch pads, keypads, keyboards, cameras, light-emitting diodes, and other status indicators.
A user can input commands through the input-output circuit 150 to control the operation of the wearable device 100, and can use the output data of the input-output circuit 150 to receive status information and other output from the wearable device 100.
Based on the wearable device described in Figure 1A above, the following functions can be implemented:
the communication circuit 120 is configured to obtain a target position through the Internet of Things;
the sensor 170 is configured to obtain a current position;
the processing circuit is configured to generate a navigation route between the current position and the target position; and
the audio component 140 is configured to play the navigation route by voice.
It can be seen that, with the wearable device described in the above embodiments of the present application, worn on a user's head, a target position is obtained through the Internet of Things, a current position is obtained, a navigation route between the current position and the target position is generated, and the navigation route is played by voice. In this way, voice navigation can be realized through the wearable device, which enriches the functions of the wearable device and improves user experience.
In a possible example, the sensor 170 is further specifically configured to collect a target environmental parameter;
in terms of playing the navigation route by voice, the audio component 140 is specifically configured to:
determine a target play parameter corresponding to the target environmental parameter; and
play the navigation route by voice according to the target play parameter.
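The environment-to-playback mapping in this example can be sketched as follows, assuming the environmental parameter is an ambient noise level in dB and the play parameter is a volume percentage; both the choice of parameters and the thresholds are illustrative, not specified by the application:

```python
# Hypothetical preset mapping from environmental parameter (noise, dB)
# to play parameter (volume, %): louder surroundings -> louder playback.
NOISE_TO_VOLUME = [
    (40.0, 30),   # quiet environment  -> low volume
    (60.0, 55),   # moderate noise     -> medium volume
    (80.0, 80),   # loud environment   -> high volume
]

def target_play_parameter(noise_db):
    """Return the play parameter mapped from the collected
    target environmental parameter."""
    for threshold, volume in NOISE_TO_VOLUME:
        if noise_db <= threshold:
            return volume
    return 100  # very loud environment -> maximum volume
```

Other environmental parameters (e.g. motion state or time of day) could drive other play parameters such as speech rate in the same table-lookup fashion.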
In a possible example, the wearable device includes a first speech feature and a second speech feature;
when the wearable device is playing target audio, in terms of playing the navigation route by voice, the audio component 140 is specifically configured to:
play the target audio using the first speech feature, and play the navigation route by voice using the second speech feature.
In a possible example, in terms of generating the navigation route between the current position and the target position, the processing circuit is specifically configured to:
determine an average path length of at least one guidance path between the current position and the target position;
determine a target travel mode corresponding to the average path length according to a preset mapping relationship between distance and travel mode; and
generate the navigation route between the current position and the target position according to the target travel mode.
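The distance-to-travel-mode mapping in this example can be sketched as follows; the thresholds and mode names are illustrative assumptions, since the application only states that a preset mapping exists:

```python
def target_travel_mode(average_path_m):
    """Select the travel mode corresponding to the average path length
    (in meters) of the candidate guidance paths: short distances map
    to walking, longer ones to faster modes."""
    if average_path_m <= 1000:
        return "walking"
    if average_path_m <= 5000:
        return "cycling"
    return "driving"
```

The route generator can then restrict itself to segments traversable by the selected mode (e.g. footpaths only when walking).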
In a possible example, in terms of obtaining the target position through the Internet of Things, the communication circuit 120 is specifically configured to:
receive a search result obtained by a target user performing an Internet-of-Things node search on the Internet of Things, the search result including a plurality of Internet-of-Things nodes;
select three target Internet-of-Things nodes from the plurality of Internet-of-Things nodes, the three target Internet-of-Things nodes not being located on the same straight line, and the position of each target Internet-of-Things node being a known quantity;
obtain a signal strength value of each of the three target Internet-of-Things nodes to obtain three signal strength values; and
determine the target position according to the three signal strength values and the positions of the three target Internet-of-Things nodes.
Based on the wearable device described in Figure 1A above, the following voice navigation method can be implemented:
the communication circuit 120 obtains a target position through the Internet of Things;
the sensor 170 obtains a current position;
the processing circuit generates a navigation route between the current position and the target position; and
the audio component 140 plays the navigation route by voice.
Referring to Figure 1B, Figure 1B is a schematic flowchart of a voice navigation method disclosed in an embodiment of the present application. Applied to the wearable device shown in Figure 1A, which is worn on a user's head, the voice navigation method includes the following steps.
101. Obtain a target position through the Internet of Things.
The embodiments of the present application can be applied to an indoor navigation environment. In an indoor navigation environment, the wearable device can establish a network connection with the Internet of Things. An Internet-of-Things node may be at least one of the following: a router, a server, a monitoring platform, a gateway, an electronic device, and the like. The indoor navigation environment may be at least one of the following: a railway station, an airport, a shopping mall, a supermarket, a science museum, a hospital, a school, a bus station, and the like, which is not limited here. Specifically, for example, a network connection is established between the wearable device and an electronic device, and the target position sent by the electronic device is received; alternatively, the target position can be input by the user by voice.
Optionally, the above step 101 of obtaining a target position through the Internet of Things may include the following steps:
111. Receive a search result obtained by a target user performing an Internet-of-Things node search on the Internet of Things, the search result including a plurality of Internet-of-Things nodes;
112. Select three target Internet-of-Things nodes from the plurality of Internet-of-Things nodes, the three target Internet-of-Things nodes not being located on the same straight line, and the position of each target Internet-of-Things node being a known quantity;
113. Obtain a signal strength value of each of the three target Internet-of-Things nodes to obtain three signal strength values;
114. Determine the target position according to the three signal strength values and the positions of the three target Internet-of-Things nodes.
Here, the target user is located at the navigation destination, and the target user may be an electronic device. In an indoor navigation environment, an Internet-of-Things node search can be performed on the Internet of Things to obtain a search result, which may include a plurality of Internet-of-Things nodes and the signal strength value corresponding to each node. Since the positions of some of these nodes change, three target Internet-of-Things nodes can be selected from the plurality of nodes, where the three target nodes are not located on the same straight line and the position of each target node is a known quantity. The signal strength value of each of the three target nodes is obtained, giving three signal strength values. A mapping relationship between signal strength value and distance can be preset, so that a plurality of distance values can be obtained. The three target nodes can then be mapped onto an indoor map; with each target node as a center and its corresponding distance value as a radius, a circle is drawn, giving three circles, and the map location of the area shared by the three circles is taken as the target position. For example, in Fig. 1C, a1, a2, and a3 are three target Internet-of-Things nodes; r1 is the distance value corresponding to a1 (the distance between the target user and a1), r2 is the distance value corresponding to a2, and r3 is the distance value corresponding to a3. Three circles are thus obtained, and the position of the intersection area of the three circles is taken as the target position.
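The circle-intersection step described above can be sketched as follows. The log-distance path-loss constants in `rssi_to_distance` are assumed values standing in for the preset mapping between signal strength value and distance, which the application does not specify concretely:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Map a signal strength value (dBm) to a distance (m) via a
    log-distance path-loss model; the constants are assumptions."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Intersect three circles centered at the known node positions.
    Subtracting the circle equations pairwise yields two linear
    equations in (x, y), which are solvable precisely because the
    three nodes are not located on the same straight line."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # nonzero for non-collinear nodes
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

With noisy measurements the three circles only approximately intersect; the linear solution above then gives the point closest to the shared area in a least-squares sense.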
Optionally, the above step 101 of obtaining a target position through the Internet of Things may include the following steps:
121. Obtain, through the Internet of Things, a target voice signal sent by a target user;
122. Parse the target voice signal to obtain a plurality of target pronunciation features;
123. Determine, according to a preset mapping relationship between pronunciation feature and language type, the language type corresponding to each of the plurality of target pronunciation features, to obtain a plurality of language types;
124. Select the language type occurring most frequently among the plurality of language types as a target language type;
125. Obtain a target parsing model corresponding to the target language type;
126. Parse the target voice signal according to the target parsing model to obtain target content, and extract the target position from the target content.
Wherein, language form can be some country or the language in place, for example, may include following at least one:
Mandarin, English, Spanish, Arabic, Russian language, Chongqing words, Sichuan words etc., it is not limited here.It is wearable
Equipment can obtain targeted voice signal by microphone, for example, user can input one section of voice by input mode, into
And targeted voice signal can be parsed, multiple target speaker features are obtained, pronunciation character can be used for unique identification
A country or locale language, for example, Sichuan words and Chongqing words, there is apparent pronunciation character, in another example, for example, Chinese
And English, it there is apparent pronunciation character, identify different local languages by pronunciation character, it can be in wearable device
The mapping relations between pronunciation character and language form are stored in advance, determine that each target speaker is special in multiple target speaker features
Corresponding language form is levied, multiple language forms are obtained, the language form that frequency of occurrence is most in multiple language forms is chosen and makees
For target language type, the mapping relations between language form and analytic modell analytical model, foundation can be stored in advance in wearable device
The mapping relations can obtain the corresponding target analytic modell analytical model of target language type, and there are different parsing moulds for different language form
Type, for example, language A corresponds to analytic modell analytical model A, the corresponding parsing Model B of language B in turn can be according to target analytic modell analytical model to target
Voice signal is parsed, and object content is obtained, and target position is extracted from object content, in this way, for different country or
Person's local language can go out target position with accurate Analysis.
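Steps 123 to 125 can be sketched as a majority vote over per-feature language labels followed by a table lookup. The feature names, language labels, and stand-in parser functions below are hypothetical placeholders; a real implementation would use trained acoustic models rather than string tables.

```python
from collections import Counter

# Hypothetical mapping tables; stand-ins for the preset mapping
# relations stored in the wearable device.
FEATURE_TO_LANGUAGE = {
    "retroflex_finals": "Mandarin",
    "erhua": "Mandarin",
    "th_fricative": "English",
    "rolled_r": "Spanish",
}
LANGUAGE_TO_PARSER = {
    "Mandarin": lambda signal: "parsed-zh:" + signal,
    "English": lambda signal: "parsed-en:" + signal,
    "Spanish": lambda signal: "parsed-es:" + signal,
}

def select_parser(pronunciation_features):
    # Step 123: map each pronunciation feature to a language type.
    languages = [FEATURE_TO_LANGUAGE[f] for f in pronunciation_features
                 if f in FEATURE_TO_LANGUAGE]
    # Step 124: the most frequent language type is the target type.
    target_language = Counter(languages).most_common(1)[0][0]
    # Step 125: look up the parsing model for the target type.
    return target_language, LANGUAGE_TO_PARSER[target_language]
```

The selected parser would then be applied to the target voice signal (step 126) to obtain the target content.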
102. Obtain the current location.
Here, the wearable device can obtain the current location through the global positioning system (GPS) or through Wireless Fidelity (Wi-Fi) positioning technology.
103. Generate the navigation route between the current location and the target position.
Here, after the current location and the target position are determined, a path generation algorithm can be used to generate the navigation route between the current location and the target position.
Optionally, the above step 103 of generating the navigation route between the current location and the target position may include the following steps:
31. Determining the mean distance of at least one navigation path between the current location and the target position;
32. Determining, according to a preset mapping relation between distances and trip modes, the target trip mode corresponding to the mean distance;
33. Generating the navigation route between the current location and the target position according to the target trip mode.
Here, after the current location and the target position have been determined, at least one navigation route between them can be generated, and each navigation route corresponds to a distance. Averaging the distances of the multiple navigation routes gives the mean distance. The wearable device can store in advance a mapping relation between preset distances and trip modes, and according to this mapping relation the target trip mode corresponding to the mean distance can be determined. The trip mode may be at least one of the following: taxi, public transport, bicycle, walking, taxi + public transport, and so on, without limitation. Finally, the navigation route between the current location and the target position is generated according to the target trip mode.
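Steps 31 and 32 above can be sketched as an averaging step followed by a threshold lookup. The distance thresholds below are illustrative assumptions; the patent only requires that some preset distance-to-trip-mode mapping exist.

```python
# Hypothetical distance thresholds (in kilometres); each entry is
# (upper bound, trip mode), checked in ascending order.
TRIP_MODE_BY_DISTANCE = [
    (1.0, "walking"),
    (5.0, "bicycle"),
    (15.0, "public transport"),
    (float("inf"), "taxi"),
]

def target_trip_mode(route_distances_km):
    # Step 31: average the distances of the candidate routes.
    mean_distance = sum(route_distances_km) / len(route_distances_km)
    # Step 32: look up the trip mode for that mean distance.
    for upper_bound, mode in TRIP_MODE_BY_DISTANCE:
        if mean_distance <= upper_bound:
            return mode
```

Step 33 would then feed the selected mode back into route generation, since a walking route and a taxi route between the same two points generally differ.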
104. Play the navigation route by voice.
Here, the wearable device can play the navigation route by voice at a preset time interval, which can be set by the user or defaulted by the system. The wearable device can also position itself in real time and play the navigation route by voice every preset movement distance, which likewise can be set by the user or defaulted by the system.
Optionally, the above step 104 of playing the navigation route by voice may include the following steps:
41. Obtaining target environment parameters;
42. Determining target playback parameters corresponding to the target environment parameters;
43. Playing the navigation route by voice according to the target playback parameters.
Here, the sensor of the wearable device can be an environmental sensor, which may be at least one of the following: a positioning sensor, a humidity sensor, a temperature sensor, an external sound detection sensor, and so on. The target environment parameters can be obtained through the environmental sensor and may include at least one of the following: position, humidity, temperature, external noise, and so on. The playback parameters may include at least one of the following: volume, audio, speech rate, and so on. The wearable device can store in advance a mapping relation between environment parameters and playback parameters; after the target environment parameters are obtained, the corresponding target playback parameters can be determined according to this mapping relation, and the navigation route can be played by voice according to the target playback parameters.
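Steps 41 to 43 can be sketched as a function from environment parameters to playback parameters. The noise thresholds, volume offsets, and speech-rate values below are illustrative assumptions standing in for the preset mapping relation.

```python
def playback_parameters(environment):
    """Derive target playback parameters (volume, speech rate) from
    target environment parameters; thresholds are illustrative."""
    volume = 5                                 # assumed baseline volume
    if environment.get("noise_db", 0) > 70:    # e.g. a noisy street
        volume += 3
    if environment.get("noise_db", 0) < 40:    # e.g. a quiet room
        volume -= 2
    # Speak faster when the user is moving, so prompts finish sooner.
    speech_rate = 1.2 if environment.get("moving", False) else 1.0
    return {"volume": volume, "speech_rate": speech_rate}
```

Step 43 would then pass these parameters to the text-to-speech playback of the navigation route.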
Optionally, in the above step 104, the wearable device includes a first audio unit and a second audio unit; when the wearable device is playing a target audio, playing the navigation route by voice includes: playing the target audio through the first audio unit, and playing the navigation route by voice through the second audio unit.
Here, the target audio may be at least one of the following: music, radio, call voice, and so on. The wearable device may include a first audio unit and a second audio unit; for example, a wireless headset includes a left earbud and a right earbud, where the left earbud can be regarded as the first audio unit and the right earbud as the second audio unit. During navigation, when the wearable device is playing the target audio, the target audio is played through the first audio unit while the navigation route is played by voice through the second audio unit.
Optionally, the above step 104 of playing the navigation route by voice may include the following steps:
A1. Determining the target tightness between the wearable device and the ear;
A2. Determining, according to a preset mapping relation between tightness and the volume of the wearable device, the first volume corresponding to the target tightness;
A3. Controlling the wearable device to play the navigation route by voice at the first volume.
Here, in this embodiment of the application, tightness is used to describe how closely the wearable device fits against the ear, and can be expressed as a specific numerical value. A sensor can be provided in the wearable device to detect the tightness between the wearable device and the ear; the sensor may include at least one of the following: a pressure sensor, an air pressure sensor, an ultrasonic sensor, a range sensor, and so on. In a specific implementation, the wearable device can store in advance a mapping relation between tightness and the volume of the wearable device; according to this mapping relation, the first volume corresponding to the target tightness is determined, and at the target tightness the wearable device can be controlled to play the navigation route by voice at the first volume.
In practice, taking a wireless headset at a given volume setting as an example: if the headset fits the ear tightly, the perceived sound is louder; if the fit is loose, the perceived sound is quieter.
Optionally, the wearable device includes a pressure sensor, and the above step A1 of determining the target tightness between the wearable device and the ear may include the following steps:
A11. Detecting the target pressure value between the wearable device and the ear;
A12. Determining, according to a preset mapping relation between pressure values and tightness, the target tightness corresponding to the target pressure value.
Here, at least one pressure sensor can be arranged at the positions where the wearable device contacts the ear, and the at least one pressure sensor can detect the target pressure value between the wearable device and the ear. The target pressure value can be the pressure value of any one of the at least one pressure sensor, or the average pressure value of all of the pressure sensors, or the maximum pressure value detected, or the minimum pressure value detected, and so on. The wearable device can store in advance a mapping relation between pressure values and tightness, such as the following table, and according to this mapping relation the target tightness corresponding to the target pressure value can be determined.
Pressure value | Tightness |
a~b | K1 |
b~c | K2 |
c~d | K3 |
Here a<b<c<d, and K1, K2, K3 are numbers greater than 0.
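The pressure-to-tightness table and the tightness-to-volume mapping of step A2 can be sketched as two lookups. The interval bounds, tightness values K1 to K3, and volumes below are illustrative assumptions; the only constraint taken from the text is that a tighter fit sounds louder and therefore maps to a lower playback volume.

```python
# Illustrative constants for the table above: pressure interval
# bounds (a, b, c, d) and tightness values K1 < K2 < K3.
PRESSURE_BOUNDS = (0.0, 1.0, 2.0, 3.0)           # a, b, c, d
TIGHTNESS_LEVELS = (0.3, 0.6, 0.9)               # K1, K2, K3
VOLUME_BY_TIGHTNESS = {0.3: 9, 0.6: 7, 0.9: 5}   # tighter fit -> lower volume

def tightness_from_pressure(pressure):
    # Step A12: interval lookup in the pressure -> tightness table.
    a, b, c, d = PRESSURE_BOUNDS
    if a <= pressure < b:
        return TIGHTNESS_LEVELS[0]
    if b <= pressure < c:
        return TIGHTNESS_LEVELS[1]
    return TIGHTNESS_LEVELS[2]

def first_volume(pressure):
    # Steps A11-A12, then A2: pressure -> tightness -> first volume.
    return VOLUME_BY_TIGHTNESS[tightness_from_pressure(pressure)]
```

Step A3 would then play the navigation route at the first volume so obtained.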
Optionally, the wearable device includes an air pressure sensor, and the above step A1 of determining the target tightness between the wearable device and the ear may include the following steps:
A21. Detecting the target air pressure value between the wearable device and the ear;
A22. Determining, according to a preset mapping relation between air pressure values and tightness, the target tightness corresponding to the target air pressure value.
Here, the wearable device includes an air pressure sensor, and the target air pressure value between the wearable device and the ear is detected through the air pressure sensor. The wearable device can store in advance a mapping relation between air pressure values and tightness, and according to this mapping relation the target tightness corresponding to the target air pressure value can be determined.
Optionally, the wearable device includes a first component and a second component, and the above step A1 of determining the target tightness between the wearable device and the ear may include the following steps:
A31. Determining the target distance between the first component and the second component;
A32. Determining, according to a preset mapping relation between distances and tightness, the target tightness corresponding to the target distance.
Here, the wearable device may include a first component and a second component; for example, a wireless headset may include two earbuds, each of which can be provided with an ultrasonic sensor. For instance, a transmitter is arranged in the left earbud and a receiver in the right earbud, and the target distance between the first component and the second component is measured through the two earbuds. The wearable device can store in advance a mapping relation between distances and tightness, and according to this mapping relation the target tightness corresponding to the target distance can be determined.
Optionally, the wearable device stores in advance a set of mapping relations, the set including multiple mapping relations, each of which is a mapping relation between preset tightness and the volume of the wearable device. Between the above steps A1 and A2, the method may further include the following steps:
B1. Obtaining current environment parameters;
B2. Determining, according to a preset correspondence between environment parameters and mapping relations, the target mapping relation corresponding to the current environment parameters.
The above step A2 of determining the first volume corresponding to the target tightness according to the preset mapping relation between tightness and the volume of the wearable device can then be implemented as follows:
determining the first volume corresponding to the target tightness according to the target mapping relation.
Here, the wearable device can store in advance a set of mapping relations, which may include multiple mapping relations, each being a mapping relation between preset tightness and the volume of the wearable device. The sensor of the wearable device can be an environmental sensor, which can be at least one of the following: a positioning sensor, a humidity sensor, a temperature sensor, an external sound detection sensor, and so on. The current environment parameters can be obtained through the environmental sensor. The wearable device can store in advance a correspondence between environment parameters and mapping relations, according to which the target mapping relation corresponding to the current environment parameters can be determined; the first volume corresponding to the target tightness can then be determined according to the target mapping relation. A mapping table between environment parameters and mapping relations is given below:
Environmental parameter | Mapping relations |
Environmental parameter 1 | Mapping relations 1 |
Environmental parameter 2 | Mapping relations 2 |
… | … |
Environmental parameter n | Mapping relations n |
In this way, different mapping relations can be adopted under different environment parameters; for example, the mapping relation used in a noisy external environment differs from the one used in a quiet environment. This embodiment of the application can thus provide the appropriate mapping relation in each environment, yielding a volume suited to the environment.
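Steps B1, B2, and the environment-dependent form of step A2 can be sketched as a two-level lookup: first select the mapping relation for the current environment, then look up the volume for the target tightness within it. The environment classes, tightness keys, and volumes below are illustrative assumptions.

```python
# Two hypothetical tightness -> volume mappings, one per environment
# class; the noisy mapping assigns a higher volume at every tightness.
MAPPING_SET = {
    "quiet": {0.3: 7, 0.6: 5, 0.9: 3},
    "noisy": {0.3: 10, 0.6: 8, 0.9: 6},
}

def volume_for(environment_class, tightness):
    # Steps B1-B2: select the target mapping relation for the
    # current environment parameters.
    target_mapping = MAPPING_SET[environment_class]
    # Step A2 (modified): look up the first volume for the target
    # tightness within that mapping relation.
    return target_mapping[tightness]
```

Keeping one mapping per environment class, rather than a single global mapping, is what lets the same tightness reading yield a louder prompt on a noisy street than in a quiet room.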
Optionally, after the above step A3, the method may further include the following steps:
A4. Monitoring the target change amount of the target tightness;
A5. When the absolute value of the target change amount is greater than a preset threshold, determining, according to a preset mapping relation between change amounts and volume adjustment parameters, the target volume adjustment parameter corresponding to the target change amount;
A6. Determining a second volume according to the first volume and the target volume adjustment parameter;
A7. Controlling the wearable device to play the navigation route by voice at the second volume.
Here, the wearable device can monitor the target change amount of the target tightness through the sensor, the target change amount being the amount by which the tightness changes. In practice, taking a wireless headset as an example, wearing the headset for a long time or moving around tends to lower the tightness, whereas pressing the earbud back in increases it. The target change amount can be obtained through the sensor; for example, if the sensor includes a pressure sensor, the change amount can be determined from the change in the pressure value. The preset threshold can be set by the user or defaulted by the system. The volume adjustment parameter can be a "+" volume (increase the volume) or a "-" volume (decrease the volume). A mapping relation between change amounts and volume adjustment parameters can be preset in the wearable device; when the absolute value of the target change amount is greater than the preset threshold, the target volume adjustment parameter corresponding to the target change amount is determined according to this mapping relation. Once the target volume adjustment parameter is determined, the second volume can be determined from the first volume and the target volume adjustment parameter, for example, second volume = first volume + target volume adjustment parameter. If the target tightness increases, the second volume is less than the first volume; if the target tightness decreases, the second volume is greater than the first volume.
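Steps A4 to A7 can be sketched as a thresholded adjustment. The threshold and gain constants are illustrative assumptions; the sign convention follows the text: a tighter fit sounds louder, so an increase in tightness yields a negative volume adjustment, and the second volume is the first volume plus the adjustment parameter.

```python
def adjusted_volume(first_volume, old_tightness, new_tightness,
                    threshold=0.1, gain=10.0):
    """Steps A4-A7: if the tightness change exceeds the preset
    threshold, derive a volume adjustment parameter opposed to the
    change and apply it to the first volume."""
    change = new_tightness - old_tightness       # target change amount
    if abs(change) <= threshold:
        return first_volume                      # no adjustment needed
    adjustment = -gain * change                  # "+" or "-" volume adjustment
    return first_volume + adjustment             # second volume
```

With these constants, an earbud pressed back in (tightness up by 0.3) drops the volume by 3 steps, while a loosening fit raises it by the same amount.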
As can be seen that phonetic navigation method described in above-mentioned the embodiment of the present application, is applied to wearable device, it is wearable
Equipment is worn on user's head, obtains target position by Internet of Things, obtains current location, generates current location and target position
Between navigation routine, voice play navigation routine, in this way, can by wearable device realize Voice Navigation, enriching can
The function of wearable device, the user experience is improved.
Referring to Fig. 2, Fig. 2 is a flow diagram of a voice navigation method disclosed in an embodiment of the application, applied to the wearable device shown in Fig. 1A, the wearable device including a first audio unit and a second audio unit and being worn on the user's head. The voice navigation method includes the following steps.
201. Obtain the target position through the Internet of Things.
202. Obtain the current location.
203. Generate the navigation route between the current location and the target position.
204. When the wearable device is playing a target audio, play the target audio through the first audio unit, and play the navigation route by voice through the second audio unit.
As can be seen that phonetic navigation method described in above-mentioned the embodiment of the present application, is applied to wearable device, it is wearable
Equipment is worn on user's head, obtains target position by Internet of Things, obtains current location, generates current location and target position
Between navigation routine, wearable device play target audio when, using the first speech features play target audio;And it uses
Second speech features voice plays navigation routine, in this way, can realize Voice Navigation by wearable device, enriches wearable
The function of equipment can be listened to music with side, and side Voice Navigation, the user experience is improved.
Referring to Fig. 3, Fig. 3 is a flow diagram of a voice navigation method disclosed in an embodiment of the application, applied to the wearable device shown in Fig. 1A, the wearable device being worn on the user's head. The voice navigation method includes the following steps.
301. Obtain the target position through the Internet of Things.
302. Obtain the current location.
303. Generate the navigation route between the current location and the target position.
304. Determine the target tightness between the wearable device and the ear.
305. Determine, according to the preset mapping relation between tightness and the volume of the wearable device, the first volume corresponding to the target tightness.
306. Play the navigation route by voice at the first volume.
As can be seen that phonetic navigation method described in above-mentioned the embodiment of the present application, is applied to wearable device, it is wearable
Equipment is worn on user's head, obtains target position by Internet of Things, obtains current location, generates current location and target position
Between navigation routine, determine the target compactness between wearable device and ear, according to preset compactness with it is wearable
Mapping relations between the volume of equipment determine corresponding first volume of target compactness, lead according to the broadcasting of the first volume voice
Air route line enriches the function of wearable device in this way, can realize Voice Navigation by wearable device, can also foundation
Compactness between user's earphone and wearable device adjusts volume, and the user experience is improved.
Referring to Fig. 4, Fig. 4 is a structural diagram of another electronic device disclosed in an embodiment of the application. As shown, the wearable device includes a processor, a memory, a communication interface, and one or more programs, and is worn on the user's head. The one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the following steps:
obtaining the target position through the Internet of Things;
obtaining the current location;
generating the navigation route between the current location and the target position;
playing the navigation route by voice.
As can be seen that wearable device described in above-mentioned the embodiment of the present application, wearable device are worn on user's head,
Target position is obtained by Internet of Things, obtains current location, generates the navigation routine between current location and target position, voice
Navigation routine is played, in this way, can realize Voice Navigation by wearable device, the function of wearable device is enriched, is promoted
User experience.
In a possible example, in terms of playing the navigation route by voice, the above programs include instructions for performing the following steps:
obtaining target environment parameters;
determining target playback parameters corresponding to the target environment parameters;
playing the navigation route by voice according to the target playback parameters.
In a possible example, the wearable device includes a first audio unit and a second audio unit; when the wearable device is playing a target audio, playing the navigation route by voice includes: playing the target audio through the first audio unit, and playing the navigation route by voice through the second audio unit.
In a possible example, in terms of generating the navigation route between the current location and the target position, the above programs include instructions for performing the following steps:
determining the mean distance of at least one navigation path between the current location and the target position;
determining, according to the preset mapping relation between distances and trip modes, the target trip mode corresponding to the mean distance;
generating the navigation route between the current location and the target position according to the target trip mode.
In a possible example, in terms of obtaining the target position through the Internet of Things, the above programs include instructions for performing the following steps:
receiving the search result of an Internet of Things node search performed by the target user on the Internet of Things, the search result including multiple Internet of Things nodes and the signal strength value corresponding to each node;
selecting three target Internet of Things nodes from the multiple Internet of Things nodes, the three target nodes not lying on the same straight line and the position of each target node being a known quantity;
obtaining the signal strength value of each of the three target Internet of Things nodes, obtaining three signal strength values;
determining the target position according to the three signal strength values and the positions of the three target Internet of Things nodes.
The above mainly describes the solutions of the embodiments of the application from the perspective of the method execution process. It can be understood that, in order to realize the above functions, the wearable device includes the corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that, in combination with the exemplary units and algorithm steps described in the embodiments presented herein, the application can be realized in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the application.
The embodiments of the application can divide the wearable device into functional units according to the above method examples; for example, each function can be assigned its own functional unit, or two or more functions can be integrated into one processing unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the application is schematic and is only a division by logical function; other division manners are possible in actual implementation.
Referring to Fig. 5, Fig. 5 is a structural diagram of a voice navigation apparatus disclosed in an embodiment of the application, applied to a wearable device worn on the user's head. The voice navigation apparatus includes an obtaining unit 501, a generating unit 502, and a playing unit 503, wherein:
the obtaining unit 501 is configured to obtain the target position through the Internet of Things, and to obtain the current location;
the generating unit 502 is configured to generate the navigation route between the current location and the target position;
the playing unit 503 is configured to play the navigation route by voice.
As can be seen that voice guiding device described in above-mentioned the embodiment of the present application, is applied to wearable device, it is wearable
Equipment is worn on user's head, obtains target position by Internet of Things, obtains current location, generates current location and target position
Between navigation routine, voice play navigation routine, in this way, can by wearable device realize Voice Navigation, enriching can
The function of wearable device, the user experience is improved.
In a possible example, in terms of playing the navigation route by voice, the playing unit 503 is specifically configured to:
obtain target environment parameters;
determine target playback parameters corresponding to the target environment parameters;
play the navigation route by voice according to the target playback parameters.
In a possible example, the wearable device includes a first audio unit and a second audio unit; when the wearable device is playing a target audio, in terms of playing the navigation route by voice, the playing unit 503 is specifically configured to: play the target audio through the first audio unit, and play the navigation route by voice through the second audio unit.
In a possible example, in terms of generating the navigation route between the current location and the target position, the generating unit 502 is specifically configured to:
determine the mean distance of at least one navigation path between the current location and the target position;
determine, according to the preset mapping relation between distances and trip modes, the target trip mode corresponding to the mean distance;
generate the navigation route between the current location and the target position according to the target trip mode.
In a possible example, in terms of obtaining the target position through the Internet of Things, the obtaining unit 501 is specifically configured to:
receive the search result of an Internet of Things node search performed by the target user on the Internet of Things, the search result including multiple Internet of Things nodes and the signal strength value corresponding to each node;
select three target Internet of Things nodes from the multiple Internet of Things nodes, the three target nodes not lying on the same straight line and the position of each target node being a known quantity;
obtain the signal strength value of each of the three target Internet of Things nodes, obtaining three signal strength values;
determine the target position according to the three signal strength values and the positions of the three target Internet of Things nodes.
An embodiment of the application also provides a computer storage medium that stores a computer program for electronic data interchange, the computer program causing a computer to perform some or all of the steps of any method recorded in the above method embodiments; the computer includes a wearable device.
An embodiment of the application also provides a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any method recorded in the above method embodiments. The computer program product can be a software installation package; the computer includes a wearable device.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as series of action combinations; however, those skilled in the art should understand that the application is not limited by the described order of actions, since according to the application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the application.
In the above embodiments, each embodiment is described with its own emphasis; for the parts not detailed in one embodiment, reference can be made to the related descriptions of the other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus can be realized in other ways. For example, the apparatus embodiments described above are merely exemplary; the division of the units is only a division by logical function, and other division manners are possible in actual implementation. For example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not performed. Further, the mutual couplings, direct couplings, or communication connections shown or discussed can be indirect couplings or communication connections through some interfaces, apparatuses, or units, and can be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they can be located in one place or distributed over multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the application can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit.
If the integrated unit is realized in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable memory. Based on this understanding, the technical solution of the application, in essence, or the part contributing to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the application. The aforementioned memory includes various media that can store program code, such as a USB flash disk, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Those of ordinary skill in the art will appreciate that all or some of the steps in the various methods of the above embodiments can be completed by a program instructing related hardware; the program can be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help understand the method of the present application and its core ideas. Meanwhile, those skilled in the art may make changes to the specific implementation and application scope according to the ideas of the present application. In summary, the contents of this specification should not be construed as limiting the present application.
Claims (13)
1. A wearable device, characterized in that the wearable device comprises a processing circuit, and a communication circuit, a sensor, and an audio component connected to the processing circuit, wherein:
the communication circuit is configured to obtain a target position via the Internet of Things;
the sensor is configured to obtain a current position;
the processing circuit is configured to generate a navigation route between the current position and the target position; and
the audio component is configured to play the navigation route by voice.
2. The wearable device according to claim 1, characterized in that the sensor is further configured to obtain a target environmental parameter; and
in the aspect of playing the navigation route by voice, the audio component is specifically configured to:
determine a target play parameter corresponding to the target environmental parameter; and
play the navigation route by voice according to the target play parameter.
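The mapping from a measured environmental parameter to a play parameter in claim 2 can be sketched as a simple threshold lookup. This is a hypothetical illustration only: the choice of ambient noise (dB) as the environmental parameter, volume as the play parameter, and the specific thresholds are assumptions, not details fixed by the patent.

```python
# Hypothetical sketch of claim 2: map a measured ambient-noise level (dB)
# to a playback volume (0-100). Thresholds and parameter choice are
# illustrative assumptions, not taken from the patent.
def target_play_parameter(ambient_noise_db: float) -> int:
    """Return a playback volume chosen from a preset noise->volume mapping."""
    # (noise upper bound in dB, volume) pairs, checked in order
    mapping = [(40.0, 30), (60.0, 50), (80.0, 75)]
    for upper_bound, volume in mapping:
        if ambient_noise_db < upper_bound:
            return volume
    return 100  # very noisy environment: play at full volume

print(target_play_parameter(35.0))  # quiet room -> 30
print(target_play_parameter(72.0))  # busy street -> 75
```

In a real device the same table-lookup shape would apply to other play parameters (speed, timbre) the description mentions; only the mapping contents change.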
3. The wearable device according to claim 1, characterized in that the wearable device comprises a first voice feature and a second voice feature; and
when the wearable device is playing a target audio, in the aspect of playing the navigation route by voice, the audio component is specifically configured to:
play the target audio using the first voice feature, and play the navigation route by voice using the second voice feature.
4. The wearable device according to claim 1 or 3, characterized in that, in the aspect of generating the navigation route between the current position and the target position, the processing circuit is specifically configured to:
determine an average path of at least one navigation path between the current position and the target position;
determine a target travel mode corresponding to the average path according to a preset mapping relationship between distances and travel modes; and
generate the navigation route between the current position and the target position according to the target travel mode.
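The distance-to-travel-mode mapping of claim 4 can be sketched as follows. The distance cutoffs and mode names are hypothetical; the patent only specifies that a preset mapping between distances and travel modes exists.

```python
# Hypothetical sketch of claim 4: pick a travel mode from the average length
# of the candidate navigation paths. The cutoffs (in metres) are assumptions.
def target_travel_mode(path_lengths_m: list[float]) -> str:
    """Average the candidate path lengths, then look up a travel mode."""
    average = sum(path_lengths_m) / len(path_lengths_m)
    # preset mapping between distance ranges and travel modes
    if average < 1_000:
        return "walk"
    elif average < 5_000:
        return "cycle"
    elif average < 50_000:
        return "drive"
    return "transit"

print(target_travel_mode([600.0, 800.0]))       # short trip -> walk
print(target_travel_mode([12_000.0, 9_000.0]))  # longer trip -> drive
```

The route planner would then restrict itself to paths usable by the selected mode when generating the final navigation route.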
5. The wearable device according to any one of claims 1-4, characterized in that, in the aspect of obtaining the target position via the Internet of Things, the communication circuit is specifically configured to:
receive a search result of an Internet of Things node search performed for the Internet of Things by a target user, the search result comprising multiple Internet of Things nodes and a signal strength value corresponding to each Internet of Things node;
select three target Internet of Things nodes from the multiple Internet of Things nodes, the three target Internet of Things nodes not lying on the same straight line, and the position of each target Internet of Things node being a known quantity;
obtain the signal strength value of each of the three target Internet of Things nodes to obtain three signal strength values; and
determine the target position according to the three signal strength values and the positions of the three target Internet of Things nodes.
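Determining a position from three known node positions and three signal strength values, as in claims 5 and 10, is classic trilateration. The sketch below is one possible realisation under stated assumptions: signal strength is converted to distance with a log-distance path-loss model (the `tx_power` and path-loss exponent `n` values are illustrative), and the position is solved by linearising the three circle equations; the patent does not fix either model.

```python
import math

# Hypothetical sketch of claims 5/10: estimate the target position from three
# non-collinear IoT nodes with known 2-D positions and measured RSSI values.

def rssi_to_distance(rssi: float, tx_power: float = -40.0, n: float = 2.0) -> float:
    """Log-distance path-loss model: distance in metres from an RSSI in dBm.
    tx_power is the assumed RSSI at 1 m; n is the path-loss exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(nodes, distances):
    """Solve for (x, y) from three circle equations by linearisation."""
    (x1, y1), (x2, y2), (x3, y3) = nodes
    d1, d2, d3 = distances
    # Subtracting circle 1 from circles 2 and 3 cancels the quadratic terms,
    # leaving two linear equations in x and y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a1 * b2 - a2 * b1  # non-zero because the nodes are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

nodes = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, node) for node in nodes]
print(trilaterate(nodes, dists))  # recovers approximately (3.0, 4.0)
```

The non-collinearity requirement in the claim is exactly what keeps `det` non-zero, i.e. what makes the linear system solvable.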
6. A voice navigation method, characterized in that it is applied to a wearable device and comprises:
obtaining a target position via the Internet of Things;
obtaining a current position;
generating a navigation route between the current position and the target position; and
playing the navigation route by voice.
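The four steps of claim 6 can be tied together as a small pipeline. The helper callables below are hypothetical stand-ins for the wearable device's real subsystems (IoT lookup, positioning sensor, route planner, speech output), used only to show the control flow.

```python
# Hypothetical end-to-end sketch of claim 6's four method steps.
def voice_navigate(get_target, get_current, plan_route, speak):
    target = get_target()                # step 1: target position via the IoT
    current = get_current()              # step 2: current position from a sensor
    route = plan_route(current, target)  # step 3: generate the navigation route
    speak(route)                         # step 4: play the route by voice

spoken = []  # stand-in for the audio component: record what would be spoken
voice_navigate(
    get_target=lambda: (3.0, 4.0),
    get_current=lambda: (0.0, 0.0),
    plan_route=lambda a, b: f"head from {a} to {b}",
    speak=spoken.append,
)
print(spoken[0])
```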
7. The method according to claim 6, characterized in that playing the navigation route by voice comprises:
obtaining a target environmental parameter;
determining a target play parameter corresponding to the target environmental parameter; and
playing the navigation route by voice according to the target play parameter.
8. The method according to claim 6, characterized in that the wearable device comprises a first voice feature and a second voice feature; and
when the wearable device is playing a target audio, playing the navigation route by voice comprises:
playing the target audio using the first voice feature, and playing the navigation route by voice using the second voice feature.
9. The method according to claim 6 or 8, characterized in that generating the navigation route between the current position and the target position comprises:
determining an average path of at least one navigation path between the current position and the target position;
determining a target travel mode corresponding to the average path according to a preset mapping relationship between distances and travel modes; and
generating the navigation route between the current position and the target position according to the target travel mode.
10. The method according to claim 6, characterized in that obtaining the target position via the Internet of Things comprises:
receiving a search result of an Internet of Things node search performed for the Internet of Things by a target user, the search result comprising multiple Internet of Things nodes;
selecting three target Internet of Things nodes from the multiple Internet of Things nodes, the three target Internet of Things nodes not lying on the same straight line, and the position of each target Internet of Things node being a known quantity;
obtaining the signal strength value of each of the three target Internet of Things nodes to obtain three signal strength values; and
determining the target position according to the three signal strength values and the positions of the three target Internet of Things nodes.
11. A voice navigation apparatus, characterized in that it is applied to a wearable device, the voice navigation apparatus comprising an obtaining unit, a generating unit, and a playing unit, wherein:
the obtaining unit is configured to obtain a target position via the Internet of Things, and to obtain a current position;
the generating unit is configured to generate a navigation route between the current position and the target position; and
the playing unit is configured to play the navigation route by voice.
12. A wearable device, characterized in that it comprises a processor, a memory, a communication interface, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, and the programs comprise instructions for executing the steps in the method according to any one of claims 6-10.
13. A computer-readable storage medium, characterized in that it stores a computer program for electronic data interchange, wherein the computer program causes a computer to execute the method according to any one of claims 6-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810574609.8A CN108827338B (en) | 2018-06-06 | 2018-06-06 | Voice navigation method and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108827338A true CN108827338A (en) | 2018-11-16 |
CN108827338B CN108827338B (en) | 2021-06-25 |
Family
ID=64144039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810574609.8A Expired - Fee Related CN108827338B (en) | 2018-06-06 | 2018-06-06 | Voice navigation method and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108827338B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111148167A (en) * | 2019-03-18 | 2020-05-12 | 广东小天才科技有限公司 | Operator network switching method of wearable device and wearable device |
CN111198962A (en) * | 2018-11-19 | 2020-05-26 | 纳博特斯克有限公司 | Information processing apparatus, system, method, similarity judging method, and medium |
CN113834478A (en) * | 2020-06-23 | 2021-12-24 | 阿里巴巴集团控股有限公司 | Travel method, target object guiding method and wearable device |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101917656A (en) * | 2010-08-30 | 2010-12-15 | 鸿富锦精密工业(深圳)有限公司 | Automatic volume adjustment device and method |
CN101988835A (en) * | 2009-07-30 | 2011-03-23 | 黄金富 | Pedestrian navigation guidance system using an electronic compass, and corresponding method |
US8401200B2 (en) * | 2009-11-19 | 2013-03-19 | Apple Inc. | Electronic device and headset with speaker seal evaluation capabilities |
US20130083933A1 (en) * | 2011-09-30 | 2013-04-04 | Apple Inc. | Pressure sensing earbuds and systems and methods for the use thereof |
CN104280038A (en) * | 2013-07-12 | 2015-01-14 | 中国电信股份有限公司 | Navigation method and navigation device |
CN104507003A (en) * | 2014-11-28 | 2015-04-08 | 广东好帮手电子科技股份有限公司 | Method and system for intelligently adjusting volume according to in-vehicle noise |
CN104702763A (en) * | 2015-03-04 | 2015-06-10 | 乐视致新电子科技(天津)有限公司 | Method, device and system for adjusting volume |
CN104902359A (en) * | 2014-03-06 | 2015-09-09 | 昆山研达电脑科技有限公司 | Navigation earphones |
CN104969581A (en) * | 2013-03-13 | 2015-10-07 | 英特尔公司 | Dead zone location detection apparatus and method |
CN105246000A (en) * | 2015-10-28 | 2016-01-13 | 维沃移动通信有限公司 | Method for improving sound quality of headset and mobile terminal |
CN105744410A (en) * | 2014-12-10 | 2016-07-06 | 曾辉赛 | Headset with memory-prompting function for elderly users |
US20160265917A1 (en) * | 2015-03-10 | 2016-09-15 | Toyota Motor Engineering & Manufacturing North America, Inc. | System and method for providing navigation instructions at optimal times |
CN106896528A (en) * | 2017-03-15 | 2017-06-27 | 苏州创必成电子科技有限公司 | Bluetooth glasses with earphone function |
CN107403232A (en) * | 2016-05-20 | 2017-11-28 | 北京搜狗科技发展有限公司 | Navigation control method and apparatus, and electronic device |
CN107843250A (en) * | 2017-10-17 | 2018-03-27 | 三星电子(中国)研发中心 | Vibration navigation method and apparatus for a wearable device, and wearable device |
Non-Patent Citations (3)
Title |
---|
Zhang Mingxing (ed.): "Android Smart Wearable Device Development in Practice", 31 January 2016, China Railway Publishing House * |
Li Tianxiang (ed.): "Android IoT Development: Detailed Introduction and Best Practices", 30 June 2016, China Railway Publishing House * |
Wang Bingxi et al.: "Fundamentals of Practical Speech Recognition", 31 January 2005, National Defense Industry Press * |
Also Published As
Publication number | Publication date |
---|---|
CN108827338B (en) | 2021-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110544488B (en) | Method and apparatus for separating multi-speaker speech | |
CN108810693A (en) | Device control method and related products | |
CN105722009A (en) | Portable Apparatus And Method Of Controlling Location Information Of Portable Apparatus | |
CN108735209A (en) | Wake-up word binding method, smart device, and storage medium | |
CN109558512A (en) | Audio-based personalized recommendation method, apparatus, and mobile terminal | |
CN109005480A (en) | Information processing method and related products | |
CN108958696A (en) | Master-slave earphone switching control method and related products | |
CN105955700A (en) | Sound effect adjusting method and user terminal | |
CN107356261B (en) | Navigation method and related products | |
CN108827338A (en) | Voice navigation method and related products | |
CN107863110A (en) | Smart-earphone-based safety prompt method, smart earphone, and storage medium | |
JP2018078398A (en) | Autonomous assistant system using multifunctional earphone | |
CN108769850A (en) | Device control method and related products | |
CN108777827A (en) | Wireless earphone, volume adjustment method, and related products | |
CN109348467A (en) | Emergency call realization method, electronic device, and computer-readable storage medium | |
CN107786714B (en) | Sound control method, apparatus, and system based on vehicle-mounted multimedia equipment | |
CN106126160A (en) | Sound effect adjustment method and user terminal | |
CN109067965A (en) | Translation method, translation device, wearable device, and storage medium | |
CN109246580A (en) | 3D sound effect processing method and related products | |
CN110430475A (en) | Interaction method and related apparatus | |
CN109039355B (en) | Voice prompt method and related products | |
CN108683790A (en) | Speech processing method and related products | |
WO2022057365A1 (en) | Noise reduction method, terminal device, and computer readable storage medium | |
CN108924705A (en) | 3D sound effect processing method and related products | |
CN110517677A (en) | Speech processing system, method, and device, speech recognition system, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20210625 |