CN108896962A - Iteration localization method based on sound position fingerprint - Google Patents
- Publication number
- CN108896962A CN108896962A CN201810611952.5A CN201810611952A CN108896962A CN 108896962 A CN108896962 A CN 108896962A CN 201810611952 A CN201810611952 A CN 201810611952A CN 108896962 A CN108896962 A CN 108896962A
- Authority
- CN
- China
- Prior art keywords
- sound
- cluster
- reference point
- sound position
- fingerprint
- Prior art date
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/18—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
Abstract
The present invention, an iteration localization method based on sound position fingerprints, relates to technology for determining the position of a signal source using sound waves. It is a scene-analysis method that uses the sound arrival time difference as the location fingerprint and comprises two main steps. A. Offline acquisition, i.e. the offline acquisition phase of sound position fingerprints: obtain the 2-D spatial coordinates and build the sound position fingerprint database, then cluster the database with a clustering method based on sound position to form cluster 1, cluster 2, ..., cluster K, K cluster centers in total. B. Online positioning, i.e. the online positioning phase of sound position fingerprints: the computer extracts the sound position feature vector of the sound source to be positioned, selects a cluster, and performs iterative positioning, i.e. calculates the position of the sound source to be positioned, then outputs the final positioning result. The present invention overcomes the defects of existing sound localization methods: high model dependency and low positioning accuracy in unstructured spaces.
Description
Technical field
The technical solution of the present invention relates to technology for determining the position of a signal source using sound waves, specifically an iteration localization method based on sound position fingerprints.
Background technique
With the progress of Internet technology and the portability of mobile devices, the demand for services based on location information keeps growing. Outdoor positioning technologies such as GPS are relatively mature and widely applied in fields such as navigation and satellite monitoring; they can provide users with accurate and fast positioning services, which is of great significance. However, GPS signals have weak penetration and are easily affected by reflection from buildings, making accurate positioning difficult in complicated indoor environments. It is therefore imperative to find a fast, accurate and timely indoor positioning method.
Currently used indoor positioning technologies can be divided into two major classes: localization methods based on signal propagation models and localization methods based on location fingerprints. The positioning accuracy of model-based methods depends on the model; the signal is highly susceptible to the external environment during propagation and parameter estimation carries large errors, so the constructed signal propagation model has large errors and the positioning accuracy is low. Localization methods based on location fingerprints have the advantages of high positioning accuracy and low model dependency, and have attracted extensive attention from researchers in recent years. They are broadly divided into two operational phases: an offline sampling phase and an online positioning phase. To achieve high positioning accuracy, prior-art fingerprint-based localization methods usually need to collect a large number of samples in the offline phase, which not only consumes huge human and material resources but also lengthens the time needed to search the location fingerprint database in the online positioning phase, increasing the complexity of the localization method.
In localization methods based on sound position fingerprints, sound source localization refers to the process in which, after a mobile robot at an unknown point in the positioning system emits a sound signal, a central processing platform extracts features of that sound signal, including signal strength, signal-to-noise ratio and the sound arrival time difference, and compares them with the position features of known points to determine the spatial position of the sounding robot. With the development of artificial intelligence and speech processing technology, localization methods based on sound position fingerprints have important applications in fields such as industry, audio/video conferencing and human-machine speech interaction.
CN104865555B discloses an indoor sound localization method based on sound position fingerprints that determines the grid size according to the indoor area and the positioning accuracy. This easily causes redundant arrangement of reference points, increases the cost of building the database, brings huge overhead and hinders the large-scale application of sound localization technology; the generalized cross-correlation function it uses is only suitable for environments with a single acoustic signal and is prone to spurious peaks in indoor environments with reflection and diffraction. Wang Shu's thesis "Research on distributed microphone array localization methods" (China Master's Theses Full-text Database, Information Science and Technology, monthly, issue 09, 2013) describes a method that builds the database using the signal energy received by the microphones as the sound position fingerprint; using the received signal energy ratio of a microphone array as the fingerprint degrades positioning accuracy, and because indoor environments are complicated, the reflection and diffraction of the signal caused by various factors are difficult to estimate. Wu Xiuqian's thesis "Research on sound localization methods for mobile robots based on time-delay estimation" (China Master's Theses Full-text Database, Information Technology, monthly, issue 06, 2014) describes a method based on geometric-model positioning; such localization methods are not suitable for indoor environments, because the indoor environment of an unstructured space makes it difficult to correctly estimate the propagation model of the sound signal.
Summary of the invention
The technical problem to be solved by the present invention is to provide an iteration localization method based on sound position fingerprints. It is a scene-analysis method that uses a large-scale distributed microphone array and the sound arrival time difference as the location fingerprint, overcoming the defects of existing sound source localization methods: high model dependency and low positioning accuracy in unstructured spaces.
The technical solution adopted by the present invention to solve this technical problem is an iteration localization method based on sound position fingerprints, whose specific steps are as follows:
A. Offline acquisition phase of sound position fingerprints: construct the sound position fingerprint database and cluster it:
The first step, arrangement of the positioning scene:
(1.1) Arrange on the positioning map a distributed microphone array composed of the four array elements M0, M1, M2 and M3, where microphone M0 is the reference microphone;
(1.2) Arrange 5 reference points on the four vertices and the midpoint of the positioning map of step (1.1) of the first step;
The arrangement of the positioning scene is thus completed;
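The reference-point layout of step (1.2) can be sketched as follows (illustrative Python; the rectangular map dimensions are hypothetical values, not taken from the patent):

```python
def reference_points(width, height):
    """Step (1.2): place 5 reference points -- the four vertices of the
    rectangular positioning map plus its midpoint."""
    return [(0.0, 0.0), (width, 0.0), (width, height), (0.0, height),
            (width / 2.0, height / 2.0)]

pts = reference_points(6.0, 4.0)
# -> [(0.0, 0.0), (6.0, 0.0), (6.0, 4.0), (0.0, 4.0), (3.0, 2.0)]
```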
The second step, acquisition of sound position fingerprints:
(2.1) Drive the mobile robot to release a position sound at each reference point selected in step (1.2) of the first step. Use the two-parameter dual-threshold voice endpoint detection method to calculate the initial time at which each microphone of the distributed microphone array constituted in step (1.1) of the first step starts to receive the position sound; the time differences between the moment reference microphone M0 starts to receive the sound signal and the moments the other microphones do, i.e. the sound arrival time differences, are extracted as the sound position feature vector of the reference point. The sound position feature vector collected at the i-th reference point at moment t is denoted Ri(t) = [ri1(t), ri2(t), ..., rim(t), ..., riM(t)], where rim(t) denotes the m-th sound position feature obtained at the i-th reference point at moment t, and M denotes the number of sound position features contained in each fingerprint;
(2.2) The mobile robot performs T signal acquisitions at each reference point and stores the average of the T collected sound position feature vectors as the sound position feature vector of the reference point; the sound position feature vector of the i-th reference point is then expressed as Ri = [ri1, ri2, ..., rim, ..., riM], where rim denotes the m-th sound position feature of the i-th reference point;
(2.3) Let Li = [xi, yi] be the 2-D spatial coordinates of the i-th reference point; the combination of the sound position feature vector Ri of the i-th reference point obtained in step (2.2) of the second step and the corresponding 2-D spatial coordinates constitutes one sound position fingerprint, denoted Fi = [Ri, Li] = [ri1, ri2, ..., rim, ..., riM, xi, yi], where xi denotes the abscissa and yi the ordinate of the i-th reference point;
The acquisition of sound position fingerprints is thus completed;
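The averaging of step (2.2) can be sketched as follows (illustrative Python; the TDOA sample values are hypothetical):

```python
import numpy as np

def average_fingerprint(tdoa_samples):
    """Average T repeated TDOA measurements at one reference point.

    tdoa_samples: array of shape (T, M) -- T acquisitions of the M time
    differences between reference mic M0 and the other microphones.
    Returns the averaged feature vector Ri of length M."""
    samples = np.asarray(tdoa_samples, dtype=float)
    return samples.mean(axis=0)

# Hypothetical measurements: T=3 acquisitions, M=3 TDOA features (seconds)
samples = [[0.010, 0.021, 0.032],
           [0.012, 0.019, 0.030],
           [0.011, 0.020, 0.031]]
R_i = average_fingerprint(samples)  # -> approx. [0.011, 0.020, 0.031]
```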
The third step, construct the sound position fingerprint database and cluster it:
(3.1) The combination of the sound position fingerprints of all reference points obtained in step (2.3) of the second step constitutes the sound position fingerprint database in its original state, denoted F = [F1 F2 ... Fi ... FI]^T, where Fi denotes the sound position fingerprint of the i-th reference point;
(3.2) Cluster the original-state sound position fingerprint database of step (3.1) of the third step with the clustering method based on sound position and define cluster centers. The concrete operation is: divide the positioning map of step (1.1) of the first step into non-overlapping triangular localization regions formed by neighboring reference points and number these regions clockwise as region Z1, ..., region ZK. Reference points in the same localization region belong to the same cluster, so the sound position fingerprints of the reference points in cluster k form a sound position fingerprint set F_Zk = [F_Zk1, F_Zk2, ..., F_ZkN], where F_Zkn denotes the sound position fingerprint of the n-th reference point in cluster k, K denotes the number of clusters and N denotes the number of reference points in cluster k. Once all reference points are assigned to their clusters, define a cluster center for each cluster, forming cluster 1, cluster 2, ..., cluster k, ..., cluster K, K cluster centers in total; the feature vector Ck of each cluster center is the average of the feature vectors of all reference points in that cluster. This yields the final sound position fingerprint database;
The construction and clustering of the sound position fingerprint database is thus completed;
This completes A, the offline acquisition phase of sound position fingerprints, constructing and clustering the sound position fingerprint database;
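The cluster-center construction of step (3.2) can be sketched as follows (illustrative Python; the fingerprints and the region membership are hypothetical, and each cluster center is the mean of its member feature vectors):

```python
import numpy as np

def build_cluster_centers(fingerprints, region_members):
    """fingerprints: dict reference-point id -> feature vector Ri.
    region_members: list of lists; entry k holds the ids of the reference
    points whose triangular region forms cluster k.
    Returns one center vector Ck per cluster (mean of member vectors)."""
    centers = []
    for members in region_members:
        vecs = np.array([fingerprints[i] for i in members], dtype=float)
        centers.append(vecs.mean(axis=0))
    return np.array(centers)

# Hypothetical 2-feature fingerprints at four reference points
F = {0: [0.010, 0.020], 1: [0.012, 0.022],
     2: [0.030, 0.040], 3: [0.032, 0.042]}
centers = build_cluster_centers(F, [[0, 1], [2, 3]])
# centers[0] -> [0.011, 0.021], centers[1] -> [0.031, 0.041]
```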
B. Online positioning phase of sound position fingerprints: obtain the final positioning result of the sound source to be positioned:
The fourth step, cluster selection, i.e. determine the cluster containing the sound source to be positioned:
(4.1) The mobile robot releases a sound signal at the sound source to be positioned; the distributed microphone array constituted in step (1.1) of the first step captures this sound signal and uploads it to the computer, which extracts the sound position feature vector of the sound source to be positioned as Rx = [r1, r2, ..., rm, ..., rM], where rm denotes the m-th sound position feature of the sound source to be positioned;
(4.2) Use the Euclidean distance to calculate the similarity between the sound position feature vector Rx of the sound source to be positioned and the feature vector Ck of each cluster center described in step (3.2) of the third step, with the calculation formula dk = sqrt( Σ_{m=1}^{M} (rm − ckm)^2 ), where dk is the Euclidean distance between Rx and Ck and ckm is the m-th feature of Ck;
(4.3) Then use the formula arg min_k dk to determine the cluster containing the sound source to be positioned;
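The cluster selection of steps (4.2)-(4.3) can be sketched as follows (illustrative Python with hypothetical vectors; dk is the Euclidean distance to each cluster center, and the nearest center determines the cluster):

```python
import numpy as np

def select_cluster(R_x, centers):
    """Pick the cluster whose center is nearest (in Euclidean distance)
    to the measured feature vector Rx of the source to be positioned."""
    d = np.linalg.norm(np.asarray(centers, dtype=float)
                       - np.asarray(R_x, dtype=float), axis=1)
    return int(np.argmin(d)), d  # arg min over dk, plus all distances

# Hypothetical feature vector and two cluster centers
k, dists = select_cluster([0.011, 0.021],
                          [[0.011, 0.021], [0.031, 0.041]])
# k -> 0 (the first center matches exactly)
```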
The fifth step, iterative positioning, i.e. calculation of the position of the sound source to be positioned:
(5.1) Use the sound position fingerprint set of the reference points in the cluster determined in step (3.2) of the third step, F_Zk = [F_Zk1, F_Zk2, ..., F_ZkN], as the initial input to start the iterative process, where F_Zkn denotes the sound position fingerprint of the n-th neighboring reference point in cluster k;
(5.2) The neighboring virtual reference points are updated continually as the iterative process proceeds; after the l-th iteration, the sound position fingerprint of the n-th neighboring virtual reference point is denoted F'_kn(l);
(5.3) After the (l+1)-th iteration, the sound position fingerprint F'_kn(l+1) of the n-th neighboring virtual reference point is calculated according to formula (1);
In formula (1), w_kn(l) denotes the weight coefficient of the n-th neighboring virtual reference point after the l-th iteration, whose calculation formula is:
w_kn(l) = [1/(d_kn(l)+ε)] / Σ_{n=1}^{N} [1/(d_kn(l)+ε)]    (2)
In formula (2), d_kn(l) denotes the Euclidean distance between the n-th neighboring virtual reference point and the sound source to be positioned after the l-th iteration; the role of the random number ε is to prevent the denominator from being 0;
(5.4) Calculate the coordinates of the sound source to be positioned with the weighted K-nearest-neighbor algorithm:
(x̂, ŷ) = ( Σ_{n=1}^{N} w_kn·x_kn, Σ_{n=1}^{N} w_kn·y_kn )    (3)
In formula (3), (x_kn, y_kn) denotes the position coordinates of the n-th neighboring virtual reference point generated by the last iteration in cluster k, and w_kn denotes its weight coefficient, calculated as:
w_kn = (1/d_kn) / Σ_{n=1}^{N} (1/d_kn)    (4)
In formula (4), d_kn denotes the Euclidean distance between the n-th neighboring virtual reference point generated by the last iteration in cluster k and the sound source to be positioned, calculated as:
d_kn = sqrt( Σ_{m=1}^{M} (rm − r_knm)^2 )    (5)
In formula (5), rm denotes the m-th sound position feature of the sound source to be positioned and r_knm denotes the m-th sound position feature of the n-th neighboring virtual reference point after the last iteration;
Steps (5.1)-(5.4) of the fifth step complete the iterative positioning, i.e. the calculation of the position of the sound source to be positioned;
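The weighted K-nearest-neighbor estimate of step (5.4), with the inverse-distance weights of formulas (4)-(5) and a small constant ε guarding against a zero denominator, can be sketched as follows (illustrative Python; the neighbor fingerprints and coordinates are hypothetical):

```python
import numpy as np

def wknn_position(R_x, neighbor_fps, neighbor_xy, eps=1e-9):
    """Weighted K-nearest-neighbor estimate per formulas (3)-(5):
    d_kn = Euclidean distance from Rx to each neighbor fingerprint,
    w_kn = inverse distances normalized to sum to 1,
    (x, y) = weighted sum of the neighbor coordinates."""
    fps = np.asarray(neighbor_fps, dtype=float)
    xy = np.asarray(neighbor_xy, dtype=float)
    d = np.linalg.norm(fps - np.asarray(R_x, dtype=float), axis=1)
    w = 1.0 / (d + eps)  # eps keeps the denominator non-zero
    w /= w.sum()
    return w @ xy

# Two hypothetical neighbors equidistant from Rx -> estimate is the midpoint
pos = wknn_position([0.5], [[0.0], [1.0]], [[0.0, 0.0], [2.0, 2.0]])
# -> approx. [1.0, 1.0]
```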
The sixth step, calculation of the positioning error of the sound source to be positioned:
The positioning error of the sound source to be positioned is calculated as:
e = sqrt( (x − x̂)^2 + (y − ŷ)^2 )    (6)
In formula (6), x denotes the abscissa and y the ordinate of the actual physical position of the sound source to be positioned, while x̂ denotes the abscissa and ŷ the ordinate of its actually tested position;
The calculation of the positioning error of the sound source to be positioned is completed by formula (6);
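The positioning error of formula (6) is the Euclidean distance between the actual and tested positions; a minimal Python sketch (coordinates are hypothetical):

```python
import math

def positioning_error(true_xy, est_xy):
    """Formula (6): Euclidean distance between the actual physical
    position (x, y) and the tested position (x_hat, y_hat)."""
    (x, y), (xh, yh) = true_xy, est_xy
    return math.hypot(x - xh, y - yh)

err = positioning_error((3.0, 4.0), (0.0, 0.0))  # -> 5.0
```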
The seventh step, output of the final positioning result of the sound source to be positioned:
(7.1) Repeat the fifth and sixth steps; the positioning result and positioning error calculated after each iteration are both saved;
(7.2) Terminate the iterative process of the above calculation when the positioning error stabilizes, and output the final positioning result of the sound source to be positioned;
The online positioning phase of sound position fingerprints is thus completed, and the final positioning result of the sound source to be positioned is obtained.
In the above iteration localization method based on sound position fingerprints, the positioning map, the two-parameter dual-threshold voice endpoint detection method, the mobile robot used, the formula arg min dk and the weighted K-nearest-neighbor algorithm are all well-known technologies in the art.
The beneficial effects of the invention are as follows. Compared with the prior art, the present invention has the following outstanding substantive features:
(1) The method of the present invention is a scene-analysis method that uses a large-scale distributed microphone array and the sound arrival time difference as the location fingerprint. The offline sampling phase of the method reduces the density of reference points arranged on the positioning map, thereby reducing the data-processing workload of offline acquisition; the subsequent construction stage of the sound position fingerprint database clusters the reference points with the clustering method based on sound position and defines a cluster center for each cluster, reducing the amount of computation needed to search the sound position fingerprint database during online positioning. The online phase of the method compares the sound position feature vector measured at the sound source to be positioned with the feature vectors of the cluster centers in the database obtained in the offline phase, selects the cluster whose center has the shortest Euclidean distance to the sound source to be positioned, and within that cluster combines the iterative method with the weighted K-nearest-neighbor algorithm to achieve accurate positioning, further reducing the complexity of positioning. The method thus overcomes the defects of existing sound localization methods: high model dependency and low positioning accuracy in unstructured spaces.
(2) Compared with the technology of the indoor sound localization method based on sound position fingerprints disclosed in CN104865555B, the present invention has the following essential distinguishing features and marked improvements:
The way the present invention calculates the sound arrival time difference is entirely different from CN104865555B: the present invention calculates it with the two-parameter dual-threshold voice endpoint detection technique, whereas in CN104865555B it is calculated with the generalized cross-correlation function. The database clustering algorithm and the iterative positioning algorithm of the present method are also technologies that CN104865555B does not have. CN104865555B is a previous technological achievement of the present inventors' team, so the applicant can clearly point out that, although the present method was obtained by further research and development on those achievements, its technical solution and research emphasis have the following substantive distinguishing features and marked improvements over CN104865555B:
1) The emphasis of the research in CN104865555B is how to position using acoustic information; determining the grid size according to the indoor area and the positioning accuracy easily causes redundant arrangement of reference points, increases the cost of building the database, brings huge overhead and hinders the large-scale application of sound localization technology. The present method builds the database with a small number of reference points, and the emphasis of its research is how to reduce the complexity of the sound localization method and achieve the goal of high-accuracy sound source localization at low fingerprint density.
2) The generalized cross-correlation function used in CN104865555B is only suitable for environments with a single acoustic signal; since indoor environments are usually extremely complex, spurious peaks easily appear where reflection and diffraction exist, so the technology disclosed in CN104865555B cannot meet the accuracy requirements of sound source localization. The present invention uses the two-parameter dual-threshold voice endpoint detection technique, which, as verified by repeated experiments, can obtain satisfactory voice endpoint detection accuracy under low signal-to-noise-ratio conditions and can therefore be used in indoor environments.
3) CN104865555B only highlights the process of calculating the sound arrival time difference with the generalized cross-correlation function. The present method not only describes how to calculate the sound arrival time difference, but further filters the sound signal and builds the database from the mean of repeatedly measured feature vectors, thereby improving the quality of the database and the accuracy of the system.
(3) Compared with the technology disclosed in Wang Shu's thesis "Research on distributed microphone array localization methods" (hereinafter paper F), the present invention has the following essential distinguishing features and marked improvements. Paper F builds the database using the signal energy received by the microphones as the sound position fingerprint; since the propagation of sound signals is highly unstable, using the received signal energy ratio of the microphone array as the fingerprint, as in paper F, degrades positioning accuracy, and because indoor environments are complicated, the reflection and diffraction of the signal caused by various factors are difficult to estimate. Although paper F optimizes the sound signal by estimating the background noise energy, complete environmental compensation is still hard to achieve. Compared with paper F, the present invention builds the database with the sound arrival time difference as the position feature of the reference points. According to the inventors' extensive earlier research, the time differences at which sound reaches the individual microphones are little affected by the environment; combined with the earlier filtering of the sound signal and the averaging of repeatedly measured fingerprints when building the database, this effectively compensates for the deficiencies of the sound fingerprint in paper F. Repeated comparative experiments and comprehensive data analysis show that the positioning accuracy of the present invention, which uses the sound arrival time difference as the location fingerprint, is much higher than that of paper F, which uses the received signal energy ratio as the location fingerprint.
(4) Compared with the technology disclosed in Wu Xiuqian's thesis "Research on sound localization methods for mobile robots based on time-delay estimation" (hereinafter paper J), the present invention has the following essential distinguishing features and marked improvements. The method of the present invention is based on sound position fingerprint localization, while the method in paper J is based on geometric-model positioning. The geometric-model positioning method described in paper J needs a suitable signal propagation model, because its positioning accuracy depends strongly on the constructed signal model: the more accurate the model, the more accurate the positioning, and the indoor environment of an unstructured space makes it difficult to correctly estimate the propagation model of the sound signal, so the localization method based on geometric models is not suitable for indoor environments. The present method is based on sound position fingerprint localization, a common scene-analysis method, which compensates for the high model dependency of sound localization methods and their low positioning accuracy in unstructured spaces. In addition, the measuring devices of the present method and paper J differ substantively: paper J uses a miniature microphone array, which is only used to estimate the orientation of the sound source, whereas the present method uses a large-scale distributed microphone array, which is applicable to more diverse localization environments and can also accurately estimate the position of the sound source.
Compared with the prior art, the present invention has the following marked improvements:
(1) The number of reference points is reduced to a great extent, which reduces the workload of the offline acquisition phase and improves the practical operability of localization methods based on sound position fingerprints.
(2) During the offline construction of the sound position fingerprint database, a location-based clustering method is used to cluster the reference points, which reduces the time needed to search the database in the online positioning phase and effectively improves the efficiency of sound source localization.
(3) An iterative algorithm is used to generate virtual neighboring reference points that gradually approach the position of the sound source to be positioned, and the weighted K-nearest-neighbor algorithm is finally combined to complete the positioning; compared with existing localization methods based on location fingerprints, this reduces the complexity of the method and improves the positioning accuracy.
Description of the drawings
The present invention will be further explained below with reference to the drawings and embodiments.
Fig. 1 is a schematic block diagram of the overall design of the iteration localization method based on sound position fingerprints of the present invention.
Fig. 2 is a schematic block diagram of the flow of the offline acquisition phase of the iteration localization method based on sound position fingerprints of the present invention.
Fig. 3 is a schematic block diagram of the flow of the online positioning phase of the iteration localization method based on sound position fingerprints of the present invention.
Specific embodiment
The embodiment shown in Fig. 1 shows that the overall design of the iteration localization method based on sound position fingerprints of the present invention consists of two main steps. A. Offline acquisition, i.e. the offline acquisition phase of sound position fingerprints: obtain the 2-D spatial coordinates [x1,y1], [x2,y2], ..., [xi,yi] and the corresponding sound position fingerprints [r11,r12,...,r1M], [r21,r22,...,r2M], ..., [ri1,ri2,...,riM] → database → cluster 1, cluster 2, ...; that is, construct the sound position fingerprint database and cluster it with the clustering method based on sound position, forming cluster 1, cluster 2, ..., cluster K, K cluster centers in total. B. Online positioning, i.e. the online positioning phase of sound position fingerprints: the computer extracts the sound position feature vector of the sound source to be positioned as [r1,r2,...,rM] → cluster selection (cluster 1, cluster 2, ...), i.e. determine the cluster containing the sound source to be positioned → iterative positioning, i.e. calculate the position of the sound source to be positioned → positioning result, i.e. output the final positioning result of the sound source to be positioned.
The embodiment shown in Fig. 2 shows the flow of the offline acquisition phase of the iteration localization method based on sound position fingerprints of the present invention, i.e. the offline acquisition phase of sound position fingerprints: arrangement of the positioning scene → acquisition of sound position fingerprints → clustering of the sound position fingerprint database with the clustering method based on sound position, i.e. construction of the sound position fingerprint database.
The embodiment shown in Fig. 3 shows the flow of the online positioning phase of the iteration localization method based on sound position fingerprints of the present invention; the online positioning phase of sound position fingerprints obtains the final positioning result of the sound source to be positioned: cluster selection, i.e. determine the cluster containing the sound source to be positioned → iterative positioning, i.e. calculate the position of the sound source to be positioned → calculate the positioning error of the sound source to be positioned → output the final positioning result of the sound source to be positioned. The fifth and sixth steps are repeated constantly, the positioning result and positioning error calculated after each iteration are both saved, and when the positioning error stabilizes the iterative process of the above calculation is terminated and the final positioning result of the sound source to be positioned is output.
Embodiment
The specific steps of the iterative localization method based on sound position fingerprints of the present embodiment are as follows:
A. The offline acquisition stage of sound position fingerprints: construct the sound position fingerprint database and cluster it:
The first step, arrangement of the positioning scene:
(1.1) A distributed microphone array consisting of four elements M0, M1, M2 and M3 is arranged on the positioning map, where microphone M0 is the reference microphone;
(1.2) Taking the north-south direction as the horizontal axis and the east-west direction as the vertical axis, a horizontal coordinate reference frame is established, and 5 reference points are arranged at the four vertices and the midpoint of the positioning map of step (1.1) of the above first step;
Thus the arrangement of the positioning scene is completed;
The second step, acquisition of sound position fingerprints:
(2.1) The mobile robot is driven to release a position sound at each reference point selected in step (1.2) of the above first step. Using the voice endpoint detection method with dual parameters and double thresholds, the initial time at which each microphone of the distributed microphone array formed in step (1.1) of the above first step starts to receive the position sound is calculated, and the time difference between reference microphone M0 and each other microphone starting to receive the sound signal, i.e., the time difference of arrival of the sound, is extracted as the sound position feature vector of the reference point. The sound position feature vector collected at the i-th reference point at time t is denoted as R_i^t = [r_i1^t, r_i2^t, ..., r_iM^t], where r_im^t indicates the m-th sound position feature obtained at the i-th reference point at time t, and M represents the number of sound position features contained in each fingerprint;
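For illustration only (code is not part of the patent), the TDOA feature extraction of step (2.1) can be sketched as follows, assuming the dual-parameter double-threshold endpoint detector has already produced one onset time per microphone; the helper name `tdoa_features` is our own:

```python
def tdoa_features(onset_times, ref_index=0):
    """Sound position features of one reference point: the time differences
    between reference microphone M0 and every other microphone starting to
    receive the position sound (TDOA). With 4 microphones this yields
    M = 3 features per fingerprint."""
    t_ref = onset_times[ref_index]  # onset time at reference microphone M0
    return [t - t_ref for i, t in enumerate(onset_times) if i != ref_index]
```

For example, onset times [0.0, 0.5, 0.25, 1.0] seconds yield the feature vector [0.5, 0.25, 1.0].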
(2.2) To reduce the influence of noise and obstacles on the measurement of the sound signal, the mobile robot performs T signal acquisitions at each reference point, and the average of the T acquired sound position feature vectors is stored as the sound position feature vector of the reference point; the sound position feature vector of the i-th reference point is then expressed as Ri=[ri1, ri2, ..., rim, ..., riM], where rim (1 ≤ m ≤ M) indicates the m-th sound position feature of the i-th reference point;
(2.3) Let Li=[xi, yi] be the two-dimensional spatial coordinate of the i-th reference point; the sound position feature vector Ri of the i-th reference point obtained in step (2.2) of the above second step, combined with the corresponding two-dimensional spatial coordinate, constitutes one group of sound position fingerprint, denoted as Fi=[Ri, Li]=[ri1, ri2, ..., rim, ..., riM, xi, yi], where xi indicates the abscissa and yi the ordinate of the i-th reference point;
Thus the acquisition of sound position fingerprints is completed;
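As an illustration of steps (2.2)-(2.3) (our sketch, not the patent's own code), the T feature vectors acquired at one reference point are averaged and concatenated with the point's coordinate to form the fingerprint Fi:

```python
def build_fingerprint(acquisitions, coord):
    """Form Fi = [ri1, ..., riM, xi, yi]: average the T acquired sound
    position feature vectors, then append the 2-D coordinate Li = [xi, yi]."""
    T = len(acquisitions)        # number of signal acquisitions at the point
    M = len(acquisitions[0])     # number of features per fingerprint
    Ri = [sum(a[m] for a in acquisitions) / T for m in range(M)]
    return Ri + list(coord)
```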
The third step: construct the sound position fingerprint database and cluster it:
(3.1) The sound position fingerprints of all reference points obtained in step (2.3) of the above second step are combined to constitute the sound position fingerprint database in its initial state, denoted as F=[F1 F2 ... Fi ... FI]^T, where Fi indicates the sound position fingerprint of the i-th reference point;
(3.2) The set of the location fingerprints of all reference points constitutes the location fingerprint database. To reduce the complexity of the localization algorithm, the sound position fingerprint database in its initial state formed in step (3.1) of the above third step is clustered with the sound-position-based clustering method, and cluster centres are defined. The concrete operations are: the positioning map of step (1.1) of the above first step is partitioned into non-overlapping triangular localization regions formed by neighboring reference points, and these localization regions are numbered clockwise as: region Z1, ..., region ZK. Reference points in the same localization region belong to the same cluster; the sound position fingerprints of the reference points in the same cluster then form a sound position fingerprint set as follows:
F_Zk = [F_Zk1 F_Zk2 ... F_Zkn ... F_ZkN]^T (1 ≤ k ≤ K, 1 ≤ n ≤ N),
where F_Zkn indicates the sound position fingerprint of the n-th reference point in cluster k, K indicates the number of clusters, and N indicates the number of reference points in cluster k. After all reference points have been assigned to their clusters, a cluster centre is defined for each cluster, forming cluster 1, cluster 2, ..., cluster k, ..., cluster K, with K cluster centres in total. The feature vector F̄k of each cluster centre is the average of the feature vectors of all reference points in the cluster, giving the final sound position fingerprint database;
Thus the construction of the sound position fingerprint database is completed and the sound position fingerprint database is clustered;
Thus stage A, the offline acquisition of sound position fingerprints, constructing the sound position fingerprint database and clustering it, is completed;
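The cluster-centre definition of step (3.2) can be sketched as follows (our illustration, under the assumed data layout of a fingerprint as its feature list followed by the trailing coordinate pair [xi, yi]):

```python
def cluster_centres(clusters):
    """For each cluster k, the centre feature vector is the average of the
    feature vectors of all reference points in the cluster (coordinates
    excluded). clusters maps k -> list of fingerprints Fi."""
    centres = {}
    for k, members in clusters.items():
        feats = [f[:-2] for f in members]      # strip the trailing [xi, yi]
        M = len(feats[0])
        centres[k] = [sum(v[m] for v in feats) / len(feats) for m in range(M)]
    return centres
```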
B. The online localization stage of sound position fingerprints: obtain the final localization result of the sound source to be localized:
The fourth step, cluster selection, i.e., determine the cluster containing the sound source to be localized:
(4.1) The mobile robot releases a sound signal at the sound source to be localized; the distributed microphone array formed in step (1.1) of the above first step captures this sound signal and uploads it to the computer, which extracts the sound position feature vector of the sound source to be localized as Rx=[r1, r2, ..., rm, ..., rM], where rm indicates the m-th sound position feature of the sound source to be localized;
(4.2) The similarity between the sound position feature vector Rx of the sound source to be localized and the feature vector F̄k of each cluster centre described in step (3.2) of the above third step is calculated with the Euclidean distance; the calculation formula is d_k = sqrt(Σ_{m=1}^{M} (r_m − f̄_km)²), where dk is the Euclidean distance between Rx and F̄k and f̄_km is the m-th feature of the centre of cluster k. The Euclidean distance characterizes the similarity between the position features of the point to be localized and each cluster centre: the shorter the Euclidean distance, the more similar the position features of the point to be localized and the cluster centre, and the more likely the point belongs to that cluster; conversely, the larger the distance, the more the position features differ and the less likely the point belongs to that cluster;
(4.3) The formula arg min_k dk is then used to determine the cluster containing the sound source to be localized;
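Steps (4.2)-(4.3) amount to a nearest-centre search; a minimal sketch (our naming), assuming one centre feature vector per cluster:

```python
import math

def select_cluster(Rx, centres):
    """Return the cluster k minimizing the Euclidean distance d_k between
    the source feature vector Rx and the cluster-centre feature vector
    (the arg min_k d_k of step 4.3). centres maps k -> feature vector."""
    def d(c):
        return math.sqrt(sum((r - f) ** 2 for r, f in zip(Rx, c)))
    return min(centres, key=lambda k: d(centres[k]))
```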
The fifth step, iterative localization, i.e., calculation of the position of the sound source to be localized:
(5.1) The sound position fingerprint set of the reference points in the cluster, obtained in step (3.2) of the above third step, F_Zk^(0) = [F_Zk1^(0) F_Zk2^(0) ... F_Zkn^(0) ... F_ZkN^(0)]^T, is used as the initial input to start the iterative process, where F_Zkn indicates the sound position fingerprint of the n-th neighbouring reference point in cluster k;
(5.2) The neighbouring virtual reference points are updated continuously as the iteration proceeds; after the l-th iteration, the sound position fingerprint of the neighbouring virtual reference points is expressed as F_Zk^(l) = [F_Zk1^(l) F_Zk2^(l) ... F_Zkn^(l) ... F_ZkN^(l)]^T, where F_Zkn^(l) indicates the sound position fingerprint of the n-th neighbouring virtual reference point after the l-th iteration;
(5.3) After the (l+1)-th iteration, the sound position fingerprint F_Zkn^(l+1) of the n-th neighbouring virtual reference point is calculated by formula (1), in which w_kn^(l) indicates the weight coefficient of the n-th neighbouring virtual reference point after the l-th iteration; its calculation formula is:
w_kn^(l) = (1 / (d_kn^(l) + ε)) / Σ_{n=1}^{N} (1 / (d_kn^(l) + ε)) (2),
where in formula (2) d_kn^(l) indicates the Euclidean distance between the n-th neighbouring virtual reference point and the sound source to be localized after the l-th iteration, and the role of the random number ε is to avoid a zero denominator;
(5.4) The coordinates of the sound source to be localized are calculated with the weighted K-nearest-neighbour algorithm; the calculation formula is:
(x̂, ŷ) = Σ_{n=1}^{N} w_kn · (x_kn, y_kn) (3),
where in formula (3) (x_kn, y_kn) indicates the position coordinates of the n-th neighbouring virtual reference point generated by the last iteration in cluster k, and w_kn indicates the weight coefficient of that point; w_kn is calculated as:
w_kn = (1 / d_kn) / Σ_{n=1}^{N} (1 / d_kn) (4),
where in formula (4) d_kn indicates the Euclidean distance between the n-th neighbouring virtual reference point generated by the last iteration in cluster k and the sound source to be localized, calculated as:
d_kn = sqrt(Σ_{m=1}^{M} (r_m − r_knm)²) (5),
where in formula (5) rm indicates the m-th sound position feature of the sound source to be localized and r_knm indicates the m-th sound position feature of the n-th neighbouring virtual reference point after the last iteration;
The iterative localization, i.e., the calculation of the position of the sound source to be localized, is completed by steps (5.1)-(5.4) of the fifth step;
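A sketch of the weighted K-nearest-neighbour estimate of step (5.4), formulas (3)-(5), follows; this is our implementation, and the `eps` guard stands in for the patent's random number ε:

```python
import math

def wknn_estimate(Rx, virtual_points, eps=1e-9):
    """Position estimate (x, y): the coordinates of the neighbouring virtual
    reference points, weighted by normalized inverse Euclidean feature
    distance to the source feature vector Rx.
    virtual_points: list of (feature_vector, (x, y)) pairs."""
    dists = [math.sqrt(sum((r - f) ** 2 for r, f in zip(Rx, feats)))
             for feats, _ in virtual_points]
    inv = [1.0 / (d + eps) for d in dists]     # eps avoids a zero denominator
    total = sum(inv)
    weights = [v / total for v in inv]
    x = sum(w * p[1][0] for w, p in zip(weights, virtual_points))
    y = sum(w * p[1][1] for w, p in zip(weights, virtual_points))
    return x, y
```

When Rx coincides with one virtual point's features, nearly all weight falls on that point and the estimate collapses to its coordinates.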
The sixth step: calculate the localization error of the sound source to be localized:
The localization error of the sound source to be localized is calculated as:
error = sqrt((x − x̂)² + (y − ŷ)²) (6),
where in formula (6) x and y indicate the abscissa and ordinate of the actual physical position of the sound source to be localized, and x̂ and ŷ indicate the abscissa and ordinate of its estimated (test) position;
The calculation of the localization error of the sound source to be localized is completed by formula (6);
The seventh step: output the final localization result of the sound source to be localized:
(7.1) The fifth and sixth steps are repeated continuously, and the localization result and localization error obtained after each iteration are saved;
(7.2) When the localization error stabilizes, the iterative calculation is terminated and the final localization result of the sound source to be localized is output;
Thus the online localization stage of sound position fingerprints is completed, and the final localization result of the sound source to be localized is obtained.
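The stopping rule of the seventh step (repeat steps 5 and 6, save every result, stop once the error stabilizes) can be sketched as a control loop; `step`, `tol` and `max_iter` are our assumptions, since the patent does not fix a numeric stability threshold:

```python
def iterate_until_stable(step, tol=1e-4, max_iter=100):
    """Repeatedly call step() -> ((x, y), error), saving every result and
    error; terminate once the change between consecutive errors falls below
    tol (the error has 'stabilized') and return the final result."""
    history = []
    for _ in range(max_iter):
        result, error = step()
        history.append((result, error))
        if len(history) > 1 and abs(history[-2][1] - error) < tol:
            break
    return history[-1][0], history
```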
In the above iterative localization method based on sound position fingerprints, the positioning map, the voice endpoint detection method with dual parameters and double thresholds, the mobile robot used, the formula arg min dk and the weighted K-nearest-neighbour algorithm are all well-known techniques in the art.
Claims (1)
1. An iterative localization method based on sound position fingerprints, characterized in that the specific steps are as follows:
A. The offline acquisition stage of sound position fingerprints: construct the sound position fingerprint database and cluster it:
The first step, arrangement of the positioning scene:
(1.1) A distributed microphone array consisting of four elements M0, M1, M2 and M3 is arranged on the positioning map, where microphone M0 is the reference microphone;
(1.2) 5 reference points are arranged at the four vertices and the midpoint of the positioning map of step (1.1) of the above first step;
Thus the arrangement of the positioning scene is completed;
The second step, acquisition of sound position fingerprints:
(2.1) The mobile robot is driven to release a position sound at each reference point selected in step (1.2) of the above first step. Using the voice endpoint detection method with dual parameters and double thresholds, the initial time at which each microphone of the distributed microphone array formed in step (1.1) of the above first step starts to receive the position sound is calculated, and the time difference between reference microphone M0 and each other microphone starting to receive the sound signal, i.e., the time difference of arrival of the sound, is extracted as the sound position feature vector of the reference point. The sound position feature vector collected at the i-th reference point at time t is denoted as R_i^t = [r_i1^t, r_i2^t, ..., r_iM^t], where r_im^t indicates the m-th sound position feature obtained at the i-th reference point at time t, and M represents the number of sound position features contained in each fingerprint;
(2.2) The mobile robot performs T signal acquisitions at each reference point, and the average of the T acquired sound position feature vectors is stored as the sound position feature vector of the reference point; the sound position feature vector of the i-th reference point is then expressed as Ri=[ri1, ri2, ..., rim, ..., riM], where rim (1 ≤ m ≤ M) indicates the m-th sound position feature of the i-th reference point;
(2.3) Let Li=[xi, yi] be the two-dimensional spatial coordinate of the i-th reference point; the sound position feature vector Ri of the i-th reference point obtained in step (2.2) of the above second step, combined with the corresponding two-dimensional spatial coordinate, constitutes one group of sound position fingerprint, denoted as Fi=[Ri, Li]=[ri1, ri2, ..., rim, ..., riM, xi, yi], where xi indicates the abscissa and yi the ordinate of the i-th reference point;
Thus the acquisition of sound position fingerprints is completed;
The third step: construct the sound position fingerprint database and cluster it:
(3.1) The sound position fingerprints of all reference points obtained in step (2.3) of the above second step are combined to constitute the sound position fingerprint database in its initial state, denoted as F=[F1 F2 ... Fi ... FI]^T, where Fi indicates the sound position fingerprint of the i-th reference point;
(3.2) The sound position fingerprint database in its initial state formed in step (3.1) of the above third step is clustered with the sound-position-based clustering method, and cluster centres are defined; the concrete operations are: the positioning map of step (1.1) of the above first step is partitioned into non-overlapping triangular localization regions formed by neighboring reference points, and these localization regions are numbered clockwise as: region Z1, ..., region ZK; reference points in the same localization region belong to the same cluster, and the sound position fingerprints of the reference points in the same cluster form a sound position fingerprint set as follows:
F_Zk = [F_Zk1 F_Zk2 ... F_Zkn ... F_ZkN]^T (1 ≤ k ≤ K, 1 ≤ n ≤ N),
where F_Zkn indicates the sound position fingerprint of the n-th reference point in cluster k, K indicates the number of clusters, and N indicates the number of reference points in cluster k; after all reference points have been assigned to their clusters, a cluster centre is defined for each cluster, forming cluster 1, cluster 2, ..., cluster k, ..., cluster K, with K cluster centres in total; the feature vector F̄k of each cluster centre is the average of the feature vectors of all reference points in the cluster, giving the final sound position fingerprint database;
Thus the construction of the sound position fingerprint database is completed and the sound position fingerprint database is clustered;
Thus stage A, the offline acquisition of sound position fingerprints, constructing the sound position fingerprint database and clustering it, is completed;
B. The online localization stage of sound position fingerprints: obtain the final localization result of the sound source to be localized:
The fourth step, cluster selection, i.e., determine the cluster containing the sound source to be localized:
(4.1) The mobile robot releases a sound signal at the sound source to be localized; the distributed microphone array formed in step (1.1) of the above first step captures this sound signal and uploads it to the computer, which extracts the sound position feature vector of the sound source to be localized as Rx=[r1, r2, ..., rm, ..., rM], where rm indicates the m-th sound position feature of the sound source to be localized;
(4.2) The similarity between the sound position feature vector Rx of the sound source to be localized and the feature vector F̄k of each cluster centre described in step (3.2) of the above third step is calculated with the Euclidean distance; the calculation formula is d_k = sqrt(Σ_{m=1}^{M} (r_m − f̄_km)²), where dk is the Euclidean distance between Rx and F̄k and f̄_km is the m-th feature of the centre of cluster k;
(4.3) The formula arg min_k dk is then used to determine the cluster containing the sound source to be localized;
The fifth step, iterative localization, i.e., calculation of the position of the sound source to be localized:
(5.1) The sound position fingerprint set of the reference points in the cluster, obtained in step (3.2) of the above third step, F_Zk^(0) = [F_Zk1^(0) F_Zk2^(0) ... F_Zkn^(0) ... F_ZkN^(0)]^T, is used as the initial input to start the iterative process, where F_Zkn indicates the sound position fingerprint of the n-th neighbouring reference point in cluster k;
(5.2) The neighbouring virtual reference points are updated continuously as the iteration proceeds; after the l-th iteration, the sound position fingerprint of the neighbouring virtual reference points is expressed as F_Zk^(l) = [F_Zk1^(l) F_Zk2^(l) ... F_Zkn^(l) ... F_ZkN^(l)]^T, where F_Zkn^(l) indicates the sound position fingerprint of the n-th neighbouring virtual reference point after the l-th iteration;
(5.3) After the (l+1)-th iteration, the sound position fingerprint F_Zkn^(l+1) of the n-th neighbouring virtual reference point is calculated by formula (1), in which w_kn^(l) indicates the weight coefficient of the n-th neighbouring virtual reference point after the l-th iteration; its calculation formula is:
w_kn^(l) = (1 / (d_kn^(l) + ε)) / Σ_{n=1}^{N} (1 / (d_kn^(l) + ε)) (2),
where in formula (2) d_kn^(l) indicates the Euclidean distance between the n-th neighbouring virtual reference point and the sound source to be localized after the l-th iteration, and the role of the random number ε is to avoid a zero denominator;
(5.4) The coordinates of the sound source to be localized are calculated with the weighted K-nearest-neighbour algorithm; the calculation formula is:
(x̂, ŷ) = Σ_{n=1}^{N} w_kn · (x_kn, y_kn) (3),
where in formula (3) (x_kn, y_kn) indicates the position coordinates of the n-th neighbouring virtual reference point generated by the last iteration in cluster k and w_kn indicates its weight coefficient, calculated as:
w_kn = (1 / d_kn) / Σ_{n=1}^{N} (1 / d_kn) (4),
where in formula (4) d_kn indicates the Euclidean distance between the n-th neighbouring virtual reference point generated by the last iteration in cluster k and the sound source to be localized, calculated as:
d_kn = sqrt(Σ_{m=1}^{M} (r_m − r_knm)²) (5),
where in formula (5) rm indicates the m-th sound position feature of the sound source to be localized and r_knm indicates the m-th sound position feature of the n-th neighbouring virtual reference point after the last iteration;
The iterative localization, i.e., the calculation of the position of the sound source to be localized, is completed by steps (5.1)-(5.4) of the fifth step;
The sixth step: calculate the localization error of the sound source to be localized:
The localization error of the sound source to be localized is calculated as:
error = sqrt((x − x̂)² + (y − ŷ)²) (6),
where in formula (6) x and y indicate the abscissa and ordinate of the actual physical position of the sound source to be localized, and x̂ and ŷ indicate the abscissa and ordinate of its estimated (test) position;
The calculation of the localization error of the sound source to be localized is completed by formula (6);
The seventh step: output the final localization result of the sound source to be localized:
(7.1) The fifth and sixth steps are repeated continuously, and the localization result and localization error obtained after each iteration are saved;
(7.2) When the localization error stabilizes, the iterative calculation is terminated and the final localization result of the sound source to be localized is output;
Thus the online localization stage of sound position fingerprints is completed, and the final localization result of the sound source to be localized is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810611952.5A CN108896962B (en) | 2018-06-14 | 2018-06-14 | Iterative positioning method based on sound position fingerprint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108896962A true CN108896962A (en) | 2018-11-27 |
CN108896962B CN108896962B (en) | 2022-02-08 |
Family
ID=64345170
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810611952.5A Expired - Fee Related CN108896962B (en) | 2018-06-14 | 2018-06-14 | Iterative positioning method based on sound position fingerprint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108896962B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011087757A1 (en) * | 2010-01-13 | 2011-07-21 | Rovi Technologies Corporation | Rolling audio recognition |
CN104865555A (en) * | 2015-05-19 | 2015-08-26 | 河北工业大学 | Indoor sound source localization method based on sound position fingerprints |
WO2015127858A1 (en) * | 2014-02-27 | 2015-09-03 | 华为技术有限公司 | Indoor positioning method and apparatus |
CN107222851A (en) * | 2017-04-07 | 2017-09-29 | 南京邮电大学 | A kind of method of utilization difference secret protection Wifi Fingerprint indoor locating system privacies |
Non-Patent Citations (3)
Title |
---|
MA, RUI et al.: "An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion", SENSORS * |
LIU, Chunyan et al.: "Constrained KNN indoor positioning model based on a geometric-clustering fingerprint database", Geomatics and Information Science of Wuhan University * |
YANG, Peng et al.: "Improved fingerprint localization method in a robot auditory system", Chinese Journal of Sensors and Actuators * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3991442A4 (en) * | 2019-06-27 | 2023-07-05 | Gracenote, Inc. | Methods and apparatus to improve detection of audio signatures |
US12044792B2 (en) | 2019-06-27 | 2024-07-23 | Gracenote, Inc. | Methods and apparatus to improve detection of audio signatures |
CN110673125A (en) * | 2019-09-04 | 2020-01-10 | 珠海格力电器股份有限公司 | Sound source positioning method, device, equipment and storage medium based on millimeter wave radar |
CN110673125B (en) * | 2019-09-04 | 2020-12-25 | 珠海格力电器股份有限公司 | Sound source positioning method, device, equipment and storage medium based on millimeter wave radar |
CN112566056A (en) * | 2020-12-07 | 2021-03-26 | 浙江德清知路导航研究院有限公司 | Electronic equipment indoor positioning system and method based on audio fingerprint information |
CN112566056B (en) * | 2020-12-07 | 2022-06-24 | 浙江德清知路导航研究院有限公司 | Electronic equipment indoor positioning system and method based on audio fingerprint information |
CN112684412A (en) * | 2021-01-12 | 2021-04-20 | 中北大学 | Sound source positioning method and system based on pattern clustering |
CN112684412B (en) * | 2021-01-12 | 2022-09-13 | 中北大学 | Sound source positioning method and system based on pattern clustering |
US20220317272A1 (en) * | 2021-03-31 | 2022-10-06 | At&T Intellectual Property I, L.P. | Using Scent Fingerprints and Sound Fingerprints for Location and Proximity Determinations |
CN114046968A (en) * | 2021-10-04 | 2022-02-15 | 北京化工大学 | Two-step fault positioning method for process equipment based on acoustic signals |
Also Published As
Publication number | Publication date |
---|---|
CN108896962B (en) | 2022-02-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108896962A (en) | Iteration localization method based on sound position fingerprint | |
CN101957442B (en) | Sound source positioning device | |
CN104865555B (en) | A kind of indoor sound localization method based on sound position fingerprint | |
CN107102296A (en) | A kind of sonic location system based on distributed microphone array | |
CN109068267B (en) | Indoor positioning method based on LoRa SX1280 | |
CN106872944A (en) | A kind of sound localization method and device based on microphone array | |
CN106093849B (en) | A kind of Underwater Navigation method based on ranging and neural network algorithm | |
CN108959794A (en) | A kind of structural frequency response modification methodology of dynamics model based on deep learning | |
CN110536257B (en) | Indoor positioning method based on depth adaptive network | |
CN103428850A (en) | Compressed sensing based distributed multi-zone positioning method | |
CN107703480A (en) | Mixed kernel function indoor orientation method based on machine learning | |
CN108627798B (en) | WLAN indoor positioning algorithm based on linear discriminant analysis and gradient lifting tree | |
CN109143161B (en) | High-precision indoor positioning method based on mixed fingerprint quality evaluation model | |
CN103217211A (en) | Substation noise source distribution measuring method based on synthetic aperture principle | |
CN103235286A (en) | High-precision locating method for electric noise sources | |
CN110401977A (en) | A kind of more floor indoor orientation methods returning more Classification and Identification devices based on Softmax | |
CN106358233B (en) | A kind of RSS data smoothing method based on Multidimensional Scaling algorithm | |
CN107884743A (en) | Suitable for the direction of arrival intelligence estimation method of arbitrary structures sound array | |
CN107480377B (en) | Three coordinate measuring machine gauge head pretravel error prediction method based on hybrid modeling | |
Qin et al. | A wireless sensor network location algorithm based on insufficient fingerprint information | |
CN109738852A (en) | The distributed source two-dimensional space Power estimation method rebuild based on low-rank matrix | |
CN105657653A (en) | Indoor positioning method based on fingerprint data compression | |
CN114679683A (en) | Indoor intelligent positioning method based on derivative fingerprint migration | |
CN112954633B (en) | Parameter constraint-based dual-network architecture indoor positioning method | |
CN109858517A (en) | A kind of with the direction of motion is leading track method for measuring similarity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20220208 |