CN112285648B - Augmented reality system and method based on sound source positioning - Google Patents


Info

Publication number
CN112285648B
CN112285648B (application CN202011089429.4A)
Authority
CN
China
Prior art keywords
target
module
information
detection
operator
Prior art date
Legal status
Active
Application number
CN202011089429.4A
Other languages
Chinese (zh)
Other versions
CN112285648A (en)
Inventor
宁方立
盛浩
姚克强
Current Assignee
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202011089429.4A priority Critical patent/CN112285648B/en
Publication of CN112285648A publication Critical patent/CN112285648A/en
Application granted
Publication of CN112285648B publication Critical patent/CN112285648B/en


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/20Position of source determined by a plurality of spaced direction-finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M3/00Investigating fluid-tightness of structures
    • G01M3/02Investigating fluid-tightness of structures by using fluid or vacuum
    • G01M3/04Investigating fluid-tightness of structures by using fluid or vacuum by detecting the presence of fluid at the leakage point
    • G01M3/24Investigating fluid-tightness of structures by using fluid or vacuum by detecting the presence of fluid at the leakage point using infrasonic, sonic, or ultrasonic vibrations
    • G01M3/243Investigating fluid-tightness of structures by using fluid or vacuum by detecting the presence of fluid at the leakage point using infrasonic, sonic, or ultrasonic vibrations for pipes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection


Abstract

The invention provides an augmented reality system and method based on sound source positioning. The system identifies the targets present in the detection range, automatically determines each target's characteristic frequency range, automatically selects a suitable microphone-array configuration and localization algorithm, and displays the noise-source distribution together with information about the detected target in augmented reality, improving the convenience of noise-source localization.

Description

Augmented reality system and method based on sound source positioning
Technical Field
The invention relates to the technical field of augmented reality, in particular to an augmented reality system and method based on sound source positioning.
Background
Augmented Reality (AR) integrates the real world with virtual information. As an enhancement of the real world, it lets people understand reality more deeply, has become a novel human-computer interface, and is applied across many industries.
In industries such as electric power and natural gas, ensuring the normal operation of equipment is extremely important. Some equipment emits noise when faults occur, and this abnormal-operation noise carries rich information that can be used for condition monitoring and fault diagnosis. Noise-source localization is widely applied in industry: sound signals collected by a microphone array of a given configuration are processed by a localization algorithm to compute the position of a sound source at a specified frequency, identify parameters such as its position and strength, and visualize the sound-source information. Besides equipment fault detection, sound-source localization is also widely used to locate noise sources of all kinds, providing accurate position and strength parameters for noise reduction.
Most existing sound-source localization systems process the signals collected by a microphone array and then transmit the results to a computer terminal that displays the noise-source position. The terminal operator is therefore usually far from the microphone-array detection device, the array is inconvenient to move when the detection object changes, and operation becomes more difficult. Research institutions at home and abroad are accordingly accelerating work on portable sound-source localization systems. For example, Chinese patent CN110412509A discloses a sound-source localization system based on a MEMS microphone array: an FPGA chip collects the multi-channel signals of a uniform circular MEMS microphone array in parallel, performs down-sampling and buffering, and transmits the speech signals to a DSP processor through a gigabit network port for processing, realizing speech enhancement and sound-source localization. However, its sampling rate is only 10 kHz, so the detectable frequency range is too small.
Different equipment faults generate noise sources in different frequency ranges. For example, a tiny leak at a valve on a gas pipeline generates an ultrasonic signal between 20 kHz and 60 kHz, while corona discharge on a high-voltage line generates a noise source between 11 kHz and 14 kHz. On the hardware side, a microphone array of fixed configuration has different sensitivities when localizing sound sources at different frequencies, so noise sources at different frequencies must be localized with differently configured arrays; on the software side, different localization algorithms likewise suit different frequencies. Different microphone-array configurations and localization algorithms must therefore be selected according to the faults generated by different equipment. Chinese patent CN109752721A discloses a portable acoustic imaging tool with scanning and analysis capabilities that requires the operator to manually select the frequency range to be localized and then select the correspondingly configured microphone array and localization algorithm; manual selection is cumbersome, demands considerable professional knowledge of the operator, and an inappropriate choice can make detection fail or yield inaccurate results. In addition, existing localization systems localize over the whole field of view, which is computationally inefficient and susceptible to interference, and after localization they show the operator only the noise-source distribution, providing no information about the detected target for equipment condition monitoring and fault diagnosis.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing an augmented reality system and method based on sound source positioning, solving the problems of existing systems: excessive size, poor convenience, high operating requirements and difficulty, low computational efficiency and interference resistance, and inability to provide detected-target information for condition monitoring and fault diagnosis. By identifying the targets present in the detection range, automatically determining each target's frequency range, automatically selecting a suitable microphone-array configuration and localization algorithm, and displaying the noise-source distribution and the detected target's information in augmented reality, the convenience of noise-source localization is improved.
The technical scheme of the invention is as follows:
the augmented reality system based on sound source positioning comprises a scene selection module, a distance measurement module, an image acquisition module, a target detection module, a target display module, a target selection module, a configuration selection module, a sensor array, an acoustic signal acquisition module, a signal cache module, a signal transmission module, a positioning module, an augmented reality module and a display control module;
the scene selection module lets an operator select the scene to be detected and provides the selected scene information to the target detection module; each detection scene corresponds to its own recognition library, which stores the information of the detection targets present in that scene;
the image acquisition module is used for receiving optical information from a target scene, generating a corresponding optical image and transmitting the optical image information to the target detection module;
the distance measuring module is used for measuring the distance between the selected detection target in the target scene and the microphone array in the acoustic signal acquisition module and transmitting distance information to the target detection module;
according to the target-scene optical image output by the image acquisition module, the target detection module identifies the targets in the image with an image recognition algorithm, using the recognition library corresponding to the selected detection scene, and determines each target's position and size from the distance between the target and the microphone array;
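To make the geometry concrete, here is a minimal sketch of how a target's physical size and lateral position could be derived from its image bounding box and the measured distance. It assumes a pinhole camera model with an illustrative focal length and image centre; neither value comes from the patent.

```python
def target_size_and_offset(bbox_px, distance_m, focal_px=1000.0, cx_px=960.0):
    """Pinhole-camera estimate of a detected target's physical size and its
    lateral offset from the optical axis.

    bbox_px    : (x_min, y_min, x_max, y_max) bounding box in pixels
    distance_m : range-finder distance to the target (metres)
    focal_px   : assumed focal length in pixels (calibration-dependent)
    cx_px      : assumed horizontal image centre in pixels
    """
    x0, y0, x1, y1 = bbox_px
    scale = distance_m / focal_px          # metres per pixel at that distance
    width_m = (x1 - x0) * scale
    height_m = (y1 - y0) * scale
    lateral_offset_m = ((x0 + x1) / 2.0 - cx_px) * scale
    return width_m, height_m, lateral_offset_m
```

A 200 x 200 px box centred on the image axis at 5 m, for example, maps to a 1 m x 1 m target with zero lateral offset under these assumed calibration values.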
the target display module augments the target-scene optical image with the identified-target information and the target distances obtained by the distance measurement module, and displays the augmented image to the operator; the augmented information comprises the names of the identified targets within the detection range and their distances to the microphone array;
the target selection module receives the target to be detected selected by the operator and judges whether that target requires working-state data to be input; if so, it prompts for and receives the state data from the operator; it then transmits the selected target and its working-state data to the configuration selection module, and transmits the target's position, size, and distance to the microphone array to the positioning module;
the configuration selection module matches the operator-selected target and its working-state data against the database to obtain the noise-source frequency range generated when that target fails, selects the corresponding microphone-array configuration information and localization algorithm for that frequency range, and transmits the selected array configuration information to the acoustic signal acquisition module;
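The matching step can be illustrated with a hypothetical lookup table. The two entries below are distilled from the frequency examples in the background section (valve micro-leak: 20-60 kHz ultrasound; corona discharge: 11-14 kHz); all names and the table shape are placeholders, not the patent's actual database schema.

```python
# Illustrative fault table; entries and field names are assumptions.
FAULT_TABLE = [
    {"target": "pipeline", "band_hz": (20_000, 60_000),
     "array": "closely spaced", "algorithm": "CLEAN-SC"},
    {"target": "high-voltage line", "band_hz": (11_000, 14_000),
     "array": "circular", "algorithm": "conventional beamforming"},
]

def select_configuration(target_name):
    """Match the operator-selected target to a fault frequency band, a
    microphone-array configuration, and a localization algorithm."""
    for entry in FAULT_TABLE:
        if entry["target"] == target_name:
            return entry
    raise KeyError(f"no configuration stored for target {target_name!r}")
```

Working-state data (fluid pressure, line voltage, etc.) would narrow `band_hz` further; that refinement is omitted here.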
the microphone array consists of a plurality of microphones and works according to configuration information;
the acoustic signal acquisition module comprises a clock unit and a signal decoding unit; the clock unit and the signal decoding unit are used for controlling the corresponding microphone to work according to the microphone array configuration information and collecting an acoustic signal generated by a target to be detected;
the signal caching module is used for caching the acoustic signals of the target to be detected, which are acquired by the acoustic signal acquisition module, and transmitting the cached acoustic signals of the target to be detected to the positioning module through the signal transmission module;
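As an illustration of the buffering idea, the sketch below decouples acquisition from transmission with a fixed-capacity buffer. In the described hardware the same role is played by DDR2 memory managed by the FPGA; this Python class is only a conceptual stand-in.

```python
from collections import deque

class SignalBuffer:
    """Fixed-capacity frame buffer: acquisition pushes at the sampling rate,
    transmission drains in bulk, so the sampling rate is not limited by the
    transmission rate (a sketch, not the patent's DDR2 implementation)."""

    def __init__(self, capacity_frames):
        self._buf = deque(maxlen=capacity_frames)

    def push(self, frame):
        # When full, the oldest frame is silently dropped by deque(maxlen=...)
        self._buf.append(frame)

    def drain(self):
        """Hand all buffered frames to the transmission side and reset."""
        frames = list(self._buf)
        self._buf.clear()
        return frames
```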
the positioning module receives the acoustic signal of the target to be detected; determines the grid range to be divided in the localization algorithm from the target's position, size, and distance to the microphone array; processes the buffered acoustic signal with the localization algorithm selected by the configuration selection module; computes the noise-source distribution; and renders it as a cloud map in which different colors or shades represent different acoustic signal intensities;
the augmented reality module correspondingly superimposes the distribution cloud picture of the noise source, the information of the detection target and the optical image information to generate image information representing the detection result;
and the display control module displays the image information enhanced by the augmented reality module to an operator for the operator to check the distribution information of the noise source and the information of the detection target.
Furthermore, the configuration selection module, target detection module, augmented reality module, positioning module, and signal transmission module are implemented in the same central processing unit, which may be ARM-based or of another architecture.
Furthermore, the scene selection module, the target display module, the target selection module and the display control module are realized by adopting the same human-computer interaction tool, and the human-computer interaction tool adopts a touch display screen or a combination of the touch display screen and a key.
Further, the image recognition algorithm adopted by the target detection module includes the R-CNN, Fast R-CNN, YOLO, YOLOv2 and/or SSD algorithms.
Further, the target selection module receives the operator's choice of target to be detected by sensing the operator's tap on the touch display screen.
Further, for a pipeline-type target, the working-state data include the type, pressure, and/or flow rate of the fluid in the pipeline; for a high-voltage wire target in an electric power scene, the working-state data include the diameter of the wire, its voltage, its current, and/or the ambient temperature.
Further, the configuration information of the microphone array covers circular, rectangular, or spiral array configurations, or combinations of such arrays; each microphone is an electret microphone or a MEMS microphone.
Further, the positioning algorithm used in the positioning module comprises a conventional beam forming algorithm, a functional beam forming algorithm, a CLEAN-SC algorithm, a DAMAS algorithm or a compressed sensing algorithm.
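Of the listed algorithms, conventional beamforming is the simplest to sketch. The following illustrative narrowband delay-and-sum example uses a uniform linear array and a simulated 40 kHz plane wave (a leak-band frequency); the array geometry, sampling rate, and signal model are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

c, f, fs = 343.0, 40_000.0, 160_000.0  # sound speed, tone, sample rate (assumed)
n = 2048                               # samples; f falls exactly on FFT bin 512
M = 8                                  # microphones in a uniform linear array
d = c / (2 * f)                        # half-wavelength spacing (no aliasing)
mic_x = np.arange(M) * d               # microphone positions along the array

def simulate(angle_deg):
    """Plane wave arriving from angle_deg (0 = broadside) at each microphone."""
    t = np.arange(n) / fs
    tau = mic_x * np.sin(np.radians(angle_deg)) / c   # per-mic propagation delay
    return np.array([np.cos(2 * np.pi * f * (t - ti)) for ti in tau])

def das_power(signals, look_deg):
    """Conventional (delay-and-sum) beamformer output power for one look angle:
    phase-compensate each channel at frequency f, then sum coherently."""
    tau = mic_x * np.sin(np.radians(look_deg)) / c
    k = int(round(f * n / fs))                        # FFT bin of the tone
    X = np.fft.rfft(signals, axis=1)[:, k]
    return abs(np.sum(X * np.exp(2j * np.pi * f * tau))) ** 2

signals = simulate(20.0)                              # true direction: 20 degrees
angles = np.arange(-60, 61)
est = int(angles[int(np.argmax([das_power(signals, a) for a in angles]))])
```

Scanning the look angle and taking the power maximum recovers the simulated source direction; the more advanced algorithms named above (functional beamforming, CLEAN-SC, DAMAS, compressed sensing) refine this same scan with deconvolution or sparsity.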
The augmented reality method based on sound source positioning realized by utilizing the augmented reality system comprises the following steps:
Step 1: the operator determines the scene to be detected through the scene selection module; the image acquisition module acquires image information within the detection range, and the distance measurement module measures the distance between each detection target and the microphone array; the target detection module, based on the optical image information acquired by the image acquisition module, identifies the targets present in the detection range using the recognition library corresponding to the selected scene, and determines each target's position and size from its distance to the microphone array;
Step 2: the target display module augments the optical image acquired by the image acquisition module with the identified-target information and displays the augmented image to the operator; the operator selects the detection target through the target selection module and, as required, inputs any known working-state information of that target.
Step 3: the configuration selection module matches the selected detection target and working-state information against the database to obtain the noise-source frequency range generated when the target fails, and selects the correspondingly configured microphone array and localization algorithm for that range; the acoustic signal acquisition module controls the selected microphone array to acquire acoustic signals;
and 4, step 4: the signal caching module caches the acoustic signals acquired by the acoustic signal acquisition module, and after the acoustic signals are acquired for a certain time, the cached data are transmitted to the positioning module through the signal transmission module; and the distance measurement module transmits the distance information between the microphone array and the detection target to the positioning module.
Step 5: the positioning module receives the acoustic signal of the detection target, determines the grid-division range of the localization algorithm from the target's position, size, and distance to the microphone array, processes the buffered signal with the algorithm selected by the configuration selection module, computes the noise-source distribution, and generates a noise-source distribution cloud map;
and 6: the augmented reality module correspondingly superimposes the information of the distribution cloud picture of the noise source and the detection target with the optical image information to generate image information representing a detection result;
and 7: and the display control module displays the image information after augmented reality enhancement to an operator for the operator to check the distribution information of the noise source and the information of the detection target.
Advantageous effects
The invention has the beneficial effects that:
(1) Through target detection, when multiple detectable targets exist in the detection range they can all be identified; the operator selects the target to be detected through the target selection module and may input its known working-state information; the configuration selection module automatically matches the target's frequency and selects the correspondingly configured microphone array and localization algorithm, so the frequency range to detect need not be determined manually, improving convenience of operation.
(2) By combining the acoustic signal acquisition module with the signal buffer module, multi-channel parallel synchronous acquisition is achieved; the acquired signals pass through the buffer before being transmitted to the positioning module for subsequent processing, so the sampling rate is not limited by the signal transmission rate and can reach a high value.
(3) The scene selection module, placed before detection starts, lets the operator select the target scene. Different scenes contain different detection targets, which are stored in separate recognition libraries; after the scene is selected, targets are recognized only in the corresponding library rather than in all libraries, saving detection time.
(4) After the target to be detected is selected through the target selection module, the operator can input its known working-state information, which helps the configuration selection module match the frequency of the fault-generated noise source more accurately and so improves detection accuracy.
(5) The positioning module uses the distance between the detection target and the microphone array, together with the target's position in the optical image, to compute the target's extent and thereby the grid-division range of the localization algorithm. Sound-source localization is performed only over the detection target rather than the whole field of view, which improves computational efficiency, excludes sound sources other than the target, and improves interference resistance.
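The grid restriction can be sketched as follows: build scan points only over the target's extent on the source plane at the measured distance. The step size and bounding-box representation are illustrative choices, not values from the patent.

```python
import numpy as np

def scan_grid(bbox_m, distance_m, step_m=0.05):
    """Candidate source points covering only the detected target's extent
    (x0, y0, x1, y1 in metres on the source plane) at the range-finder
    distance, instead of gridding the whole field of view."""
    x0, y0, x1, y1 = bbox_m
    nx = int(round((x1 - x0) / step_m)) + 1
    ny = int(round((y1 - y0) / step_m)) + 1
    xs = np.linspace(x0, x1, nx)
    ys = np.linspace(y0, y1, ny)
    gx, gy = np.meshgrid(xs, ys)              # shape (ny, nx)
    gz = np.full_like(gx, distance_m)
    return np.stack([gx, gy, gz], axis=-1)    # (ny, nx, 3) scan points
```

A 1 m x 0.5 m target at 5 m yields a 21 x 11 grid at this step size, far fewer points than a grid over the full field of view.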
(6) The augmented reality module superimposes the noise-source distribution cloud map and the detection-target information onto the optical image to generate image information representing the detection result, showing the operator not only the distribution of the noise source generated by the target but also the target's own information, thereby providing detected-target information for equipment condition monitoring and fault diagnosis.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a signal processing flow diagram implementing the present invention.
Fig. 2 is a schematic diagram of a hardware connection relationship according to the first embodiment.
Fig. 3 is a schematic view of scene selection.
FIG. 4 is a diagram of different recognition libraries.
Fig. 5 and 6 are schematic diagrams illustrating detection target selection according to the first embodiment.
Fig. 7 is a schematic view of a microphone array assembly with different configurations.
Fig. 8 is a schematic view of positioning grid division.
Fig. 9 is a schematic diagram illustrating a positioning result according to the first embodiment.
Fig. 10 is a diagram illustrating a hardware connection relationship according to the second embodiment.
Fig. 11 and 12 are schematic diagrams illustrating selection of a detection target according to the second embodiment.
Fig. 13 is a diagram illustrating the positioning result of the second embodiment.
Fig. 14 is a schematic diagram of a hardware connection relationship according to the third embodiment.
Fig. 15 and 16 are schematic diagrams illustrating selection of a detection target according to the third embodiment.
Fig. 17 is a diagram illustrating the positioning result of the third embodiment.
Detailed Description
The following detailed description of embodiments of the invention is intended to be illustrative, and not to be construed as limiting the invention.
Example 1
A pipe gallery contains a large number of pipes conveying various fluids. When a pipe leaks slightly, an ultrasonic signal is generated at the leakage point. This embodiment detects the leakage position from the acoustic information generated by the leak when pipe leakage is to be detected in a pipe gallery.
As shown in fig. 1, which is a signal processing flow chart of the present invention, an augmented reality system for sound source localization is provided, which includes a scene selection module, an image acquisition module, a distance measurement module, a target detection module, a target display module, a target selection module, a configuration selection module, a configurable sensor array, an acoustic signal acquisition module, a signal buffer module, a signal transmission module, a localization module, an augmented reality module, and a display control module.
As shown in fig. 2, a specific embodiment of hardware for implementing functions of each module in the present invention is shown, and connections and signal flow relationships among the hardware are shown. The hardware includes: the device comprises a central processing unit, an FPGA, a DDR2, a camera, a touch display screen, a microphone array and a laser range finder.
The central processing unit comprises the configuration selection module, target detection module, augmented reality module, positioning module, and signal transmission module. It may be ARM-based or of another form; in this embodiment a Raspberry Pi is selected as the central processing unit. The Raspberry Pi is small and fast, can control IO pins to interact directly with the underlying hardware, and can run an operating system to handle more complex task management and scheduling, supporting the development of upper-layer applications and offering developers a wide application space.
The microphone may be a condenser microphone or a MEMS microphone. In this embodiment a MEMS microphone with PDM output is used: it is highly integrated, outputs a PDM digital signal so that no AD conversion module is needed, and requires only a simple interface circuit, greatly reducing the hardware requirements.
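The PDM output mentioned here can be illustrated with a toy first-order sigma-delta modulator and a moving-average decimator. Real MEMS microphones and decimation filters are considerably more sophisticated; this sketch only shows how a 1-bit density stream encodes, and is converted back to, a PCM level.

```python
def pdm_modulate(samples):
    """First-order sigma-delta modulator: floats in [-1, 1] -> 1-bit PDM."""
    bits, integ, fb = [], 0.0, 0.0
    for s in samples:
        integ += s - fb                 # accumulate the quantization error
        bit = 1 if integ >= 0 else 0
        fb = 1.0 if bit else -1.0       # feedback of the quantized value
        bits.append(bit)
    return bits

def pdm_decimate(bits, ratio=64):
    """Recover PCM by averaging the +/-1 stream over each decimation window
    (a crude low-pass filter; real decoders use proper CIC/FIR filters)."""
    return [sum(2 * b - 1 for b in bits[i:i + ratio]) / ratio
            for i in range(0, len(bits) - ratio + 1, ratio)]
```

For a constant input level of 0.5, the modulator emits ones with density 3/4, and each decimation window averages back to roughly 0.5.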
As shown in fig. 3, the scene selection module is used for an operator to select a scene to be detected, and provides selected scene information to the target detection module for detection. Since different detection targets exist in different scenes, a plurality of recognition libraries can be established for storing the detection targets existing in the different scenes, and the schematic diagram of the different recognition libraries is shown in fig. 4.
The image acquisition module is connected with the central processing unit and transmits the collected optical image information to the target detection module in the central processing unit. It may be realized by an infrared imager, an ultraviolet imager, a camera, etc.; in this embodiment a camera is used to collect the image information.
The distance measuring module measures the distance between each detection target and the microphone array and transmits the distance information to the target detection module. It may be a laser range finder, an ultrasonic range finder, or a similar measuring tool; in this embodiment a laser range finder is selected.
In this embodiment the operator selects the scene to be detected through the touch display screen; the application scene selected here is a pipe gallery containing a fan, a distribution box, and a pipeline. After the scene is selected, the target detection module identifies the detectable targets in the corresponding recognition library based on the image information acquired by the image acquisition module, and determines each target's position and size from its distance to the microphone array. The target detection module may use algorithms such as R-CNN, Fast R-CNN, Faster R-CNN, YOLO, YOLOv2, or SSD to identify detectable targets; in this embodiment the YOLO algorithm is selected for target detection.
As shown in fig. 5, after target detection the target display module augments the optical image collected by the image acquisition module with the information of the identified detectable targets and displays the augmented image to the operator on the touch display screen. The presented information may include the names of the targets in the detection range and their distances to the microphone array.
After the target display module, the operator selects the target to be detected through the touch display screen in the target selection module. In this embodiment a pipeline is selected as the detection target, and the leakage position is detected from the acoustic information generated when the pipeline leaks, as shown in fig. 6. After the target is selected, any known working-state information of it can be input; this is transmitted to the configuration selection module, while the target's position, size, and distance to the microphone array are transmitted to the positioning module. Here the selected target is the pipe, and the known working states may be the type, pressure, and flow rate of the fluid in the pipe.
Further, the configuration selection module in the central processing unit matches the detection target selected by the target selection module, together with the entered working-state information, against a database to obtain the frequency range of the noise source generated when the detection target fails. In this embodiment, the noise generated by a small pipeline leak is an ultrasonic signal in the range of 20 kHz to 60 kHz; if no corresponding working state is entered in the target selection module, the matched frequency range is wide, which leads to a larger error in the detection result. Once the frequency range of the detection target is obtained, a correspondingly configured microphone array and a positioning algorithm are selected according to that range. Since the signal generated by a small pipeline leak is ultrasonic, and detecting a higher-frequency noise source requires more closely spaced microphones, the more closely spaced microphone array is matched in this embodiment; a schematic of microphone array combinations with different configurations is shown in fig. 7. The CLEAN-SC algorithm has higher resolution when localizing a high-frequency sound source, so in this embodiment the configuration selection module automatically selects the CLEAN-SC algorithm according to the matched frequency range. After the microphone array configuration is selected, its configuration information is transmitted to the acoustic signal acquisition module.
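The matching step amounts to a table lookup followed by two rules: closer microphone spacing and a higher-resolution algorithm for higher bands. A sketch under stated assumptions: the database entries echo the bands quoted in these embodiments, while the half-wavelength spacing rule and the 10 kHz algorithm cutoff are illustrative choices, not values from the patent:

```python
# Hypothetical fault database: (target, working state) -> noise band in Hz.
FAULT_DB = {
    ("pipeline", "small_leak"): (20_000, 60_000),
    ("transmission_line", "corona"): (11_000, 14_000),
    ("reactor", "loose_structure"): (1_000, 10_000),
}

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def select_configuration(target, state):
    """Map a detection target and its known working state to a frequency
    band, then derive a microphone spacing and a localization algorithm."""
    f_lo, f_hi = FAULT_DB[(target, state)]
    # Avoid spatial aliasing: spacing at most half the shortest wavelength.
    spacing = SPEED_OF_SOUND / (2.0 * f_hi)
    # Illustrative rule: deconvolution for high bands, compressed sensing low.
    algorithm = "CLEAN-SC" if f_lo >= 10_000 else "compressed sensing"
    return {"band_hz": (f_lo, f_hi), "max_spacing_m": spacing, "algorithm": algorithm}
```

Entering the working state narrows the band, which in turn narrows the spacing and algorithm choice; without it the lookup would have to cover the union of all fault bands.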
The microphone array is composed of a plurality of microphones in different configurations; the array may be circular, rectangular, spiral, another array type, or a combination of these. The microphones may be electret microphones or MEMS microphones.
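The circular, rectangular and spiral layouts can each be generated from a few parameters; a sketch with illustrative dimensions, since the patent does not disclose the actual array sizes:

```python
import numpy as np

def circular_array(n, radius):
    """n microphones evenly spaced on a circle of the given radius (m)."""
    theta = 2.0 * np.pi * np.arange(n) / n
    return np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

def rectangular_array(rows, cols, pitch):
    """rows x cols grid with uniform microphone pitch (m)."""
    y, x = np.mgrid[0:rows, 0:cols]
    return np.column_stack([x.ravel() * pitch, y.ravel() * pitch])

def spiral_array(n, a=0.01, b=0.2):
    """Logarithmic spiral r = a*exp(b*theta); a and b are illustrative."""
    theta = np.linspace(0.0, 4.0 * np.pi, n)
    r = a * np.exp(b * theta)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])
```

Each function returns an (n, 2) array of microphone coordinates; combinations are simply concatenations of such arrays.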
As shown in fig. 2, the acoustic signal acquisition module consists of a clock unit and a data decoding unit. After receiving the configuration information of the microphone array selected by the configuration selection module, it waits for an instruction from the central processing unit and then controls the correspondingly configured microphones to acquire multi-channel acoustic signals in parallel.
The acoustic signal acquisition module can be implemented with a Field Programmable Gate Array (FPGA), a single-chip microcontroller, a DSP, or the like. In this embodiment an FPGA chip is selected as the acoustic signal acquisition module; its abundant pins and internal resources allow parallel, synchronous sampling of multiple microphone signals.
The clock unit may be implemented by a PLL inside the acoustic signal acquisition module, or the clock signal may be generated by counter-based frequency division.
The data decoding unit converts the PDM signal generated by the microphone array into a PCM signal and may be implemented with a CIC, FIR or other low-pass filter.
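PDM-to-PCM conversion is low-pass filtering plus decimation of a 1-bit stream. A sketch under stated assumptions: the encoder below is a software stand-in for the MEMS microphone's sigma-delta modulator, the boxcar filter is a single-stage CIC, and the decimation factor of 64 is illustrative:

```python
import numpy as np

def pdm_encode(signal):
    """First-order sigma-delta modulator: turn samples in [-1, 1] into a
    1-bit PDM stream of +1/-1 values (stand-in for a PDM microphone)."""
    out, err = np.empty(len(signal)), 0.0
    for i, s in enumerate(signal):
        err += s
        out[i] = 1.0 if err >= 0.0 else -1.0
        err -= out[i]
    return out

def pdm_to_pcm(pdm, decimation=64):
    """Low-pass filter (boxcar moving average, a single-stage CIC) and
    decimate the 1-bit stream to recover multi-bit PCM samples."""
    kernel = np.ones(decimation) / decimation
    filtered = np.convolve(pdm, kernel, mode="same")
    return filtered[::decimation]
```

A constant input level should be recovered as the average density of +1 bits, which is what the test below checks.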
Further, as shown in fig. 2, after the acoustic signal acquisition module acquires the multi-channel acoustic signals in parallel, the acquired data must be stored in the signal buffer module. The signal buffer module may be a storage device such as SDRAM or DDR2; in this embodiment it is implemented with DDR2 memory.
After acoustic signals have been collected for a set time, the central processing unit controls the acoustic signal acquisition module to stop acquisition, and the data buffered in the DDR2 are transmitted through the signal transmission module to the positioning module in the central processing unit; the acquisition time is set by the operator before acquisition starts.
The signal transmission module can be implemented with protocols such as SPI, IIC or UART. The SPI protocol supports full-duplex operation, is simple to operate and offers a high data transfer rate; in this embodiment, the SPI protocol is adopted for signal transmission.
The positioning module, implemented by the central processing unit in this embodiment, receives the acoustic information from the signal buffer module and determines the range over which the grid must be divided in the positioning algorithm, that is, the size of the region to be detected, from the position and size of the detection target obtained by the target detection module and its distance from the microphone array. As shown in fig. 8, the grid is divided only over the region to be detected; the positioning algorithm selected by the configuration selection module then processes the acoustic information obtained from the signal buffer module and calculates the distribution of the noise source. In this embodiment, the configuration selection module automatically selects the CLEAN-SC algorithm; the noise-source distribution is represented as a cloud map, with acoustic image data of different signal intensities shown in different colors or shades, as shown in fig. 9.
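Restricting the scan grid to the detected target region and steering to each grid point is the core of all the localization algorithms named here. A minimal single-frequency delay-and-sum sketch; the array geometry, source frequency and grid points are illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def cbf_map(mic_xy, pressures, freq, grid_pts, z):
    """Delay-and-sum (conventional) beamforming power at one frequency.
    mic_xy: (M, 2) microphone positions in the array plane; pressures: (M,)
    complex amplitudes; grid_pts: (G, 2) scan points in a plane at depth z,
    restricted to the region of the detected target."""
    k = 2.0 * np.pi * freq / SPEED_OF_SOUND
    power = np.empty(len(grid_pts))
    for g, (gx, gy) in enumerate(grid_pts):
        d = np.sqrt((mic_xy[:, 0] - gx) ** 2 + (mic_xy[:, 1] - gy) ** 2 + z ** 2)
        steer = np.exp(-1j * k * d) / d      # monopole propagation model
        steer /= np.linalg.norm(steer)
        power[g] = np.abs(np.vdot(steer, pressures)) ** 2
    return power
```

The resulting power values over the grid are what the cloud map visualizes; the peak marks the estimated noise-source position.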
The augmented reality module superimposes the noise-source distribution cloud map, the detection-target information and the optical image to generate image information representing the detection result.
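The superposition can be pictured as alpha blending of a colorized, normalized power map onto the camera frame. A sketch with an assumed red colormap and blending weight; the patent does not specify the rendering details:

```python
import numpy as np

def overlay_cloud(image, power_map, alpha=0.5):
    """Blend a noise-source power map (H, W) onto an RGB frame (H, W, 3).
    Stronger sources are tinted more strongly red; alpha caps the tint."""
    span = power_map.max() - power_map.min()
    p = (power_map - power_map.min()) / (span + 1e-12)  # normalize to [0, 1]
    red = np.zeros(image.shape, dtype=float)
    red[..., 0] = 255.0
    w = alpha * p[..., None]                 # per-pixel blending weight
    blended = (1.0 - w) * image.astype(float) + w * red
    return np.rint(blended).astype(np.uint8)
```

Pixels with no acoustic energy keep the original camera colors, so the operator still sees the scene behind the cloud map.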
As shown in fig. 9, the display control module displays the augmented reality image information to an operator for the operator to check the distribution information of the noise source and the related information of the detection target.
Example 2
The corona phenomenon is the ionization of the air around a conductor under a strong electric field; where the conductor has a tip with a small radius of curvature, corona discharge occurs readily. Corona discharge drives electrochemical reactions in the surrounding air and produces corrosive gases, which corrode the line. The continuous streamers and electron avalanches during corona discharge form high-frequency electric-field pulses, creating electromagnetic pollution that interferes with radio and television broadcasting, and the corona also produces audible noise. Early faults of high-impedance equipment in a substation, such as abnormal sound, structural looseness and friction, likewise generate characteristic noise. This embodiment detects corona and abnormal vibration in outdoor power scenes and provides effective information for maintaining power equipment.
As shown in fig. 1, which is a signal processing flow chart of the present invention, an augmented reality system for sound source localization is provided, including a scene selection module, an image acquisition module, a target detection module, a target display module, a target selection module, a configuration selection module, a configurable microphone array, an acoustic signal acquisition module, a signal buffer module, a signal transmission module, a localization module, a distance measurement module, an augmented reality module, and a display control module.
Fig. 10 shows the specific hardware that implements each module function in this embodiment, together with the connections and signal flow among the hardware. The hardware includes: a central processing unit, an FPGA, an SDRAM, a camera, a touch display screen, a microphone array and an ultrasonic range finder.
The central processing unit comprises the configuration selection module, the target detection module, the augmented reality module, the positioning module and the signal transmission module. It may be based on the ARM architecture or take another form; in this embodiment, an STM32 single-chip microcontroller is selected as the central processing unit for its high cost-effectiveness, rich and flexible configuration, and low power consumption.
The microphones may be condenser microphones or MEMS microphones; in this embodiment, MEMS microphones with PDM output are used. These microphones are highly integrated and output a PDM digital signal directly, so no AD conversion module is needed, the interface circuit is simple, and the hardware requirements are greatly reduced.
As shown in fig. 3, the scene selection module lets the operator select the scene to be detected. Because different detection targets exist in different scenes, multiple recognition libraries can be established to store the detection targets of each scene; a schematic of the different recognition libraries is shown in fig. 4.
The image acquisition module is connected to the central processing unit and transmits the collected optical image information to the target detection module in the central processing unit. The image acquisition module can be realized by an infrared imager, an ultraviolet imager, a camera or the like; in this embodiment, a camera is chosen to collect the image information.
The distance measurement module measures the distance between each detection target and the microphone array and transmits the distance information to the target detection module; it may be a measuring tool such as a laser range finder or an ultrasonic range finder. In this embodiment, an ultrasonic range finder is selected for the distance measurement.
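An ultrasonic range finder derives distance from the round-trip time of an echo. A minimal sketch; the temperature-dependent speed-of-sound formula is a standard approximation, not a value taken from the patent:

```python
def tof_distance(round_trip_s, temp_c=20.0):
    """Distance (m) from an ultrasonic ranger's round-trip echo time (s).
    The speed of sound in air grows with temperature: c = 331.3 + 0.606*T."""
    c = 331.3 + 0.606 * temp_c
    return c * round_trip_s / 2.0
```

The division by two accounts for the pulse travelling to the target and back.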
In this embodiment, the operator selects the scene to be detected through the touch display screen; the application scene selected here is an electric power scene, in which a high-voltage power rack, a transmission line and a reactor exist. After the scene is selected, the target detection module identifies the detectable targets in the corresponding recognition library based on the image information acquired by the image acquisition module, and determines the position and size of each target from the distance between that target and the microphone array. The target detection module may be implemented with algorithms such as RCNN, Fast RCNN, Faster RCNN, YOLO, YOLOv2 or SSD when identifying detectable targets; in this embodiment, the RCNN algorithm is selected for target detection.
As shown in fig. 11, after target detection the target display module overlays the information of the identified detectable targets onto the optical image collected by the image acquisition module and displays the augmented image to the operator through the touch display screen. The presented information may include the name of each target within the detection range and its distance from the microphone array.
After the target display module, the operator selects the target to be detected through the touch display screen. In this embodiment, the transmission line can be selected as the detection target, so that the position of corona is detected from the acoustic information generated when the corona phenomenon occurs on the line; the reactor can also be selected as the detection target, with localization based on the abnormal sound generated inside it by structural looseness and the like. Once the target is selected, any known working-state information of the detection target can be entered, as shown in fig. 12: when the selected detection target is the transmission line, the known working state may be the magnitude of the transmission voltage; when it is the reactor, the known working state may be the rated reactance rate. After the working-state information is entered, it is transmitted to the configuration selection module, while the position and size of the detection target and its distance from the microphone array are transmitted to the positioning module.
Further, the configuration selection module in the central processing unit matches the detection target selected by the target selection module, together with the entered working-state information, against a database to obtain the frequency range of the noise source generated when the detection target fails. In this embodiment, the noise source generated by corona discharge of the transmission line lies in the range of 11 kHz to 14 kHz, while the noise generated by a loose reactor structure lies mainly in the range of 1 kHz to 10 kHz; if no corresponding working state is entered in the target selection module, the matched frequency range is wide, which leads to a larger error in the detection result. Once the frequency range of the detection target is obtained, a correspondingly configured microphone array and a positioning algorithm are selected according to that range. The noise generated by corona discharge of the transmission line is higher in frequency than that generated by a loose reactor structure, and detecting a higher-frequency noise source requires more closely spaced microphones; a schematic of microphone array combinations with different configurations is shown in fig. 7, and in this embodiment the microphone spacing of the array matched when the transmission line is the detection target is smaller than that matched when the reactor is the detection target. The DAMAS algorithm has high resolution when localizing a high-frequency sound source but low resolution in the low-frequency band, while the compressed-sensing algorithm offers super-resolution; when the noise frequency of the selected detection target is low, the configuration selection module automatically selects the compressed-sensing algorithm.
The configuration information of the selected microphone array is then transmitted to the acoustic signal acquisition module.
As shown in fig. 10, the acoustic signal collection module is composed of a clock unit and a data decoding unit, and after receiving the microphone array configuration information selected by the configuration selection module, waits for an instruction from the central processing unit, and controls the correspondingly configured microphone arrays to collect the multichannel acoustic signals in parallel.
The acoustic signal acquisition module can be realized by a Field Programmable Gate Array (FPGA), a singlechip, a DSP and the like. In the embodiment, the FPGA chip is selected as the sound signal acquisition module, the pins and the internal resources of the FPGA chip are rich, and the parallel synchronous sampling of the multi-path microphone signals can be realized.
The clock unit may be implemented by a PLL inside the acoustic signal acquisition module, and may also generate a clock signal in a frequency division form of a counter.
The data decoding unit converts the PDM signal generated by the microphone array into a PCM signal and may be implemented with a CIC, FIR or other low-pass filter.
Further, as shown in fig. 10, after the acoustic signal acquisition module acquires the multi-channel acoustic signals in parallel, the acquired data needs to be stored in the signal buffer module, and the signal buffer module may be a storage device such as SDRAM or DDR 2.
After sound signals are collected for a certain time, the central processing unit controls the sound signal collecting module to stop collecting signals, data cached in the SDRAM are transmitted to the positioning module in the central processing unit through the signal transmission module, and the signal collecting time is set by an operator before collection is started.
The signal transmission module can be implemented with protocols such as SPI, IIC or UART. The IIC protocol requires only two wires and keeps the hardware design simple; in this embodiment, the IIC protocol is adopted for signal transmission.
The positioning module, implemented by the central processing unit in this embodiment, receives the acoustic information from the signal buffer module and determines the range over which the grid must be divided in the positioning algorithm, that is, the size of the region to be detected, from the position and size of the detection target obtained by the target detection module and its distance from the microphone array. As shown in fig. 8, the grid is divided only over the region to be detected; the positioning algorithm selected by the configuration selection module then processes the acoustic information obtained from the signal buffer module and calculates the distribution of the noise source. In this embodiment, the configuration selection module automatically selects the DAMAS algorithm or a compressed-sensing positioning algorithm; the noise-source distribution is represented as a cloud map, with acoustic image data of different signal intensities shown in different colors or shades, as shown in fig. 9.
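DAMAS deconvolves the conventional beamforming map by solving a non-negative linear system with Gauss-Seidel sweeps (Brooks and Humphreys). A toy sketch; the point-spread-function matrix and source vector in the test are illustrative, not data from the patent:

```python
import numpy as np

def damas(A, b, n_iter=200):
    """Solve A x = b for a non-negative source distribution x.
    A: (G, G) point-spread-function matrix of the array on the scan grid;
    b: (G,) conventional beamforming map. Gauss-Seidel with clamping."""
    x = np.zeros(len(b))
    for _ in range(n_iter):
        for i in range(len(b)):
            # Residual for row i, excluding the diagonal contribution of x[i].
            residual = b[i] - A[i] @ x + A[i, i] * x[i]
            x[i] = max(0.0, residual / A[i, i])
    return x
```

The clamping to zero is what sharpens the smeared beamforming map back into a sparse source distribution.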
The augmented reality module superimposes the noise-source distribution cloud map, the detection-target information and the optical image to generate image information representing the detection result.
As shown in fig. 13, the display control module displays the augmented reality image information to an operator for the operator to check the distribution information of the noise source and the related information of the detection target.
Example 3
The pantograph is the electrical device with which an electric traction locomotive draws electric energy from the contact network. The pantograph-raising and pantograph-lowering processes are accomplished by controlling the compression cylinder through an electrically controlled valve; if the compression cylinder leaks, the pantograph cannot be raised normally and the locomotive draws insufficient electric energy.
As shown in fig. 1, which is a signal processing flow chart of the present invention, an augmented reality system and method for sound source localization are provided, including a scene selection module, an image acquisition module, a target detection module, a target display module, a target selection module, a configuration selection module, a configurable microphone array, an acoustic signal acquisition module, a signal buffer module, a signal transmission module, a positioning module, a distance measurement module, an augmented reality module, and a display control module.
As shown in fig. 14, in order to implement the specific hardware of each module function in this embodiment, the connection and signal flow relationship between the hardware are shown. The hardware includes: the device comprises a central processing unit, a DSP, an SDRAM, a camera, a touch display screen, a microphone array and a laser range finder.
The central processing unit comprises the configuration selection module, the target detection module, the augmented reality module, the positioning module and the signal transmission module. It may be based on the ARM architecture or take another form; in this embodiment, a Raspberry Pi is selected as the central processing unit. The Raspberry Pi is small and fast, can control IO pins to interact directly with the underlying hardware, and can also run an operating system, which allows more complex task management and scheduling, supports the development of upper-layer applications, and gives developers a wider application space.
The microphones may be condenser microphones or MEMS microphones; in this embodiment, MEMS microphones with PDM output are used. These microphones are highly integrated and output a PDM digital signal directly, so no AD conversion module is needed, the interface circuit is simple, and the hardware requirements are greatly reduced.
As shown in fig. 3, the scene selection module is used for an operator to select a scene to be detected, and because different detection targets exist in different scenes, a plurality of recognition libraries can be established for storing the detection targets existing in the different scenes, and a schematic diagram of the different recognition libraries is shown in fig. 4.
The image acquisition module is connected to the central processing unit and transmits the collected optical image information to the target detection module in the central processing unit. The image acquisition module can be realized by an infrared imager, an ultraviolet imager, a camera or the like; in this embodiment, a camera is chosen to collect the image information.
The distance measurement module measures the distance between each detection target and the microphone array and transmits the distance information to the target detection module; it may be a measuring tool such as a laser range finder or an ultrasonic range finder. In this embodiment, a laser range finder is selected for the distance measurement.
In this embodiment, the operator selects the scene to be detected through the touch display screen; this embodiment is applied to fault detection of the pantograph compression cylinder, the pantograph being composed of a support, the compression cylinder, a push rod and other parts. After the application scene is selected, the target detection module identifies the detectable targets in the corresponding recognition library based on the image information acquired by the image acquisition module, and determines the position and size of each target from the distance between that target and the microphone array. The target detection module may be implemented with algorithms such as RCNN, Fast RCNN, Faster RCNN, YOLO, YOLOv2 or SSD when identifying detectable targets; in this embodiment, the SSD algorithm is selected for target detection.
As shown in fig. 15, after target detection the target display module overlays the information of the identified detectable targets onto the optical image collected by the image acquisition module and displays the augmented image to the operator through the touch display screen. The presented information may include the name of each target within the detection range and its distance from the microphone array.
After the target display module, the operator selects the target to be detected through the touch display screen. In this embodiment, the compression cylinder can be selected as the detection target, and the leak position is detected from the acoustic information generated when the cylinder leaks; once the target is selected, any known working-state information of the detection target can be entered, as shown in fig. 16. After the working-state information is entered, it is transmitted to the configuration selection module, while the position and size of the detection target and its distance from the microphone array are transmitted to the positioning module.
Further, the configuration selection module in the central processing unit matches the detection target selected by the target selection module, together with the entered working-state information, against a database to obtain the frequency range of the noise source generated when the detection target fails. In this embodiment, the noise source generated by a small leak of the compression cylinder lies in the range of 20 kHz to 60 kHz; if no corresponding working state is entered in the target selection module, the matched frequency range is wide, which leads to a larger error in the detection result. Once the frequency range of the detection target is obtained, a correspondingly configured microphone array and a positioning algorithm are selected according to that range. The noise generated by a small leak of the compression cylinder is high in frequency, and detecting a higher-frequency noise source requires more closely spaced microphones; a schematic of microphone array combinations with different configurations is shown in fig. 7, and in this embodiment the more closely spaced microphone array is automatically selected. The conventional beamforming algorithm has high resolution when localizing a high-frequency sound source, so the configuration selection module automatically selects the conventional beamforming algorithm; the configuration information of the selected microphone array is then transmitted to the acoustic signal acquisition module.
As shown in fig. 14, the acoustic signal acquisition module is composed of a clock unit and a data decoding unit, and after receiving the configuration information of the microphone array selected by the configuration selection module, and waiting for the central processing unit to send an instruction, controls the correspondingly configured microphone arrays to acquire multi-channel acoustic signals in parallel.
The acoustic signal acquisition module can be implemented with a Field Programmable Gate Array (FPGA), a single-chip microcontroller, a DSP, or the like. In this embodiment a DSP is selected as the acoustic signal acquisition module; it offers good stability, high precision and convenient interfacing and integration, and can execute multiple operations in parallel.
The clock unit may be implemented by a PLL inside the acoustic signal acquisition module, and may also generate a clock signal in a frequency division form of a counter.
The data decoding unit converts the PDM signal generated by the microphone array into a PCM signal and may be implemented with a CIC, FIR or other low-pass filter.
Further, as shown in fig. 14, after the acoustic signal acquisition module acquires the multi-channel acoustic signals in parallel, the acquired data needs to be stored in the signal buffer module, and the signal buffer module may be a storage device such as SDRAM or DDR 2.
After sound signals are collected for a certain time, the central processing unit controls the sound signal collecting module to stop collecting signals, data cached in the SDRAM are transmitted to the positioning module in the central processing unit through the signal transmission module, and the signal collecting time is set by an operator before collection is started.
The signal transmission module can be implemented with protocols such as SPI, IIC or UART. The SPI protocol supports full-duplex operation and a high data transfer rate; in this embodiment, the SPI protocol is adopted for signal transmission.
The positioning module, implemented by the central processing unit in this embodiment, receives the acoustic information from the signal buffer module and determines the range over which the grid must be divided in the positioning algorithm, that is, the size of the region to be detected, from the position and size of the detection target obtained by the target detection module and its distance from the microphone array. As shown in fig. 8, the grid is divided only over the region to be detected; the positioning algorithm selected by the configuration selection module then processes the acoustic information obtained from the signal buffer module and calculates the distribution of the noise source. In this embodiment, the configuration selection module automatically selects the CBF algorithm; the noise-source distribution is represented as a cloud map, with acoustic image data of different signal intensities shown in different colors or shades, as shown in fig. 17.
The augmented reality module superimposes the noise-source distribution cloud map, the detection-target information and the optical image to generate image information representing the detection result.
As shown in fig. 17, the display control module displays the augmented reality image information to an operator for the operator to check the distribution information of the noise source and the related information of the detection target.
The above description is only for the purpose of illustration, and the implementation of the present invention is not limited by the above embodiments, and all the equivalent structures or equivalent processes performed by using the contents of the present specification and the attached drawings, or directly or indirectly applied to other related technical fields, belong to the protection scope of the present invention.
Although embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are exemplary and not to be construed as limiting the present invention, and that those skilled in the art may make variations, modifications, substitutions and alterations within the scope of the present invention without departing from the spirit and scope of the present invention.

Claims (9)

1. An augmented reality system based on sound source localization, characterized in that: the system comprises a scene selection module, a distance measurement module, an image acquisition module, a target detection module, a target display module, a target selection module, a configuration selection module, a sensor array, an acoustic signal acquisition module, a signal cache module, a signal transmission module, a positioning module, an augmented reality module and a display control module;
the scene selection module is used for an operator to select a scene to be detected and provide selected scene information to the target detection module; different detection scenes correspond to corresponding recognition libraries, and the recognition libraries store detection target information existing in the corresponding detection scenes;
the image acquisition module is used for receiving optical information from a target scene, generating a corresponding optical image and transmitting the optical image information to the target detection module;
the distance measurement module is used for measuring the distance between the selected detection target in the target scene and the microphone array in the acoustic signal acquisition module and transmitting distance information to the target detection module;
the target detection module identifies targets in the target scene optical image by adopting an image identification algorithm based on the selected detection scene corresponding identification library according to the target scene optical image information output by the image acquisition module; determining the position and size of each target according to the distance information between the target and the microphone array;
the target display module enhances the information of the identified target and the target distance obtained by the distance measurement module into a target scene optical image, and displays the enhanced target scene optical image to an operator through the target display module; the enhanced information in the enhanced optical image of the target scene comprises the name of the identified target in the detection range and the distance between the target and the microphone array;
the target selection module receives the target to be detected selected by the operator, judges whether that target has working-state data to be entered by the operator and, if so, prompts for and receives the entered state data; it transmits the selected target and its working-state data to the configuration selection module, and transmits the position and size of the selected target and its distance from the microphone array to the positioning module;
the configuration selection module matches the selected target to be detected and its working-state data against the database to obtain the frequency range of the noise source produced when that target fails, selects the corresponding microphone array configuration and positioning algorithm according to this frequency range, and transmits the selected microphone array configuration information to the acoustic signal acquisition module;
the microphone array consists of a plurality of microphones and works according to configuration information;
the acoustic signal acquisition module comprises a clock unit and a signal decoding unit; the clock unit and the signal decoding unit are used for controlling the corresponding microphone to work according to the microphone array configuration information and collecting an acoustic signal generated by a target to be detected;
the signal caching module is used for caching the acoustic signals of the target to be detected, which are acquired by the acoustic signal acquisition module, and transmitting the cached acoustic signals of the target to be detected to the positioning module through the signal transmission module;
the positioning module receives the acoustic signal of the target to be detected and determines the range over which the positioning algorithm divides its grid from the position and size of the target and its distance from the microphone array; it processes the acoustic signal obtained from the signal caching module with the positioning algorithm selected by the configuration selection module, computes the noise source distribution, and renders it as a cloud map in which different colours or shades represent different acoustic signal intensities;
the augmented reality module superimposes the noise source distribution cloud map and the detection target information onto the optical image information to generate image information representing the detection result;
the display control module displays the image information enhanced by the augmented reality module to the operator, so that the operator can view the noise source distribution and the detection target information.
2. The augmented reality system based on sound source localization as claimed in claim 1, wherein: the configuration selection module, the target detection module, the augmented reality module, the positioning module and the signal transmission module are implemented in the same central processing unit, and the central processing unit is based on the ARM architecture.
3. The augmented reality system based on sound source localization as claimed in claim 1, wherein: the scene selection module, the target display module, the target selection module and the display control module are realized by adopting the same human-computer interaction tool, and the human-computer interaction tool adopts a touch display screen or a combination of the touch display screen and a key.
4. The augmented reality system based on sound source localization as claimed in claim 1, wherein: the image recognition algorithm adopted by the target detection module is R-CNN (Region-based Convolutional Neural Network), Fast R-CNN, YOLO, YOLOv2, or SSD.
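The per-scene recognition libraries of claim 1 can be combined with any of these detectors by filtering the raw detections against the classes registered for the selected scene. A minimal sketch — the scene names, class sets, and (class, score, bbox) tuple layout below are hypothetical, not taken from the patent:

```python
# Hypothetical per-scene recognition libraries: each scene maps to
# the set of target classes expected to appear in that scene.
SCENE_LIBRARIES = {
    "power":    {"transformer", "high-voltage line", "insulator"},
    "pipeline": {"valve", "flange", "pipe joint"},
}

def filter_detections(detections, scene, min_score=0.5):
    """Keep only detections whose class belongs to the selected scene's
    recognition library and whose score clears the threshold.
    `detections` is a list of (class_name, score, bbox) tuples, i.e. the
    generic output shape of an R-CNN / YOLO / SSD style detector."""
    allowed = SCENE_LIBRARIES.get(scene, set())
    return [d for d in detections if d[0] in allowed and d[1] >= min_score]
```

This keeps the detector itself interchangeable: only the post-filtering step depends on the selected scene.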
5. The sound source localization-based augmented reality system according to claim 1, wherein: the target selection module receives a target to be detected selected by an operator by sensing the clicking operation of the operator on the touch display screen.
6. The augmented reality system based on sound source localization as claimed in claim 1, wherein: for a pipeline type target, the operating state data includes a type, pressure, or flow rate of fluid in the pipeline; for a high-voltage wire target in an electric power scene, the working state data comprises the diameter of a high-voltage wire, the voltage or the current ambient temperature.
7. The sound source localization-based augmented reality system according to claim 1, wherein: the microphone array configuration information comprises circular, rectangular and spiral array configurations, or combinations thereof; the microphones are electret microphones or MEMS microphones.
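The circular and spiral configurations named here can be generated programmatically. The sketch below produces microphone (x, y) coordinates for a uniform circular array and an Archimedean spiral; the number of spiral turns is an assumed parameter, not specified in the patent:

```python
import math

def circular_array(n_mics, radius):
    """Microphone (x, y) positions for a uniform circular array:
    n_mics points equally spaced on a circle of the given radius."""
    return [(radius * math.cos(2 * math.pi * k / n_mics),
             radius * math.sin(2 * math.pi * k / n_mics))
            for k in range(n_mics)]

def spiral_array(n_mics, r_max, turns=3):
    """Microphone positions along an Archimedean spiral r = r_max * t,
    t in (0, 1], sweeping `turns` full revolutions."""
    pts = []
    for k in range(n_mics):
        t = (k + 1) / n_mics
        theta = 2 * math.pi * turns * t
        pts.append((r_max * t * math.cos(theta), r_max * t * math.sin(theta)))
    return pts
```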
8. The sound source localization-based augmented reality system according to claim 1, wherein: the positioning algorithm used in the positioning module is a conventional beamforming algorithm, a functional beamforming algorithm, the CLEAN-SC algorithm, the DAMAS algorithm, or a compressed sensing algorithm.
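Of the listed algorithms, conventional (delay-and-sum) beamforming is the simplest. A frequency-domain sketch, assuming free-field monopole steering vectors and a precomputed cross-spectral matrix of the microphone signals (these modelling choices are assumptions, not details from the patent):

```python
import numpy as np

def conventional_beamforming(csm, mic_pos, grid_pts, freq, c=343.0):
    """Conventional beamforming power at each candidate source point.
    csm:      (M, M) cross-spectral matrix of the mic signals at `freq`
    mic_pos:  (M, 3) microphone positions in metres
    grid_pts: (N, 3) candidate source (grid) positions in metres
    Returns an (N,) array of real beamformer output powers."""
    k = 2 * np.pi * freq / c
    # distances from every grid point to every microphone: (N, M)
    d = np.linalg.norm(grid_pts[:, None, :] - mic_pos[None, :, :], axis=-1)
    steer = np.exp(-1j * k * d) / d                    # monopole steering vectors
    steer /= np.linalg.norm(steer, axis=1, keepdims=True)
    # beamformer output power w^H C w for each grid point
    return np.real(np.einsum('nm,mk,nk->n', steer.conj(), csm, steer))
```

The power map returned here is exactly what the positioning module would render as the cloud map; CLEAN-SC and DAMAS are deconvolution refinements of this output.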
9. An augmented reality method based on sound source localization, implemented with the system of claim 1, characterized in that the method comprises the following steps:
Step 1: the operator selects the scene to be detected through the scene selection module; the image acquisition module acquires image information within the detection range, and the distance measurement module measures the distance between each detection target and the microphone array; based on the optical image information and the recognition library of the selected detection scene, the target detection module identifies the targets present in the detection range and determines the position and size of each target from its distance to the microphone array;
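The claims do not state how a target's physical size follows from the measured distance; one common choice, assumed here rather than taken from the patent, is the pinhole camera model: physical size ≈ pixel size × distance / focal length (with the focal length expressed in pixels). A minimal sketch:

```python
def bbox_physical_size(bbox_px, distance_m, focal_px):
    """Estimate a target's physical width and height (metres) from its
    bounding box (pixels), its measured distance (metres), and the
    camera focal length in pixels, using the pinhole camera model."""
    w_px, h_px = bbox_px
    return (w_px * distance_m / focal_px, h_px * distance_m / focal_px)

# A 200 x 100 px box seen at 5 m with a 1000 px focal length
# corresponds to a target of about 1.0 m x 0.5 m.
```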
Step 2: the target display module overlays the information of the identified targets onto the optical image acquired by the image acquisition module and displays the enhanced image to the operator; the operator selects the detection target with the target selection module and, where required, enters any known working-state information of that target;
Step 3: the configuration selection module matches the selected detection target and its working-state information against the database to obtain the frequency range of the noise source produced when the target fails, and selects the correspondingly configured microphone array and positioning algorithm for that frequency range; the acoustic signal acquisition module then drives the configured microphone array to collect acoustic signals;
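The spacing of the selected array must suit the matched frequency range: to avoid spatial aliasing, the inter-microphone spacing d should satisfy d ≤ λ/2 = c / (2·f_max). A sketch of this selection rule — the configuration names and spacings are hypothetical, and the anti-aliasing bound is a standard array-design criterion assumed here rather than quoted from the patent:

```python
SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def max_mic_spacing(f_max_hz):
    """Largest inter-microphone spacing (m) that avoids spatial
    aliasing at f_max: d <= lambda/2 = c / (2 * f_max)."""
    return SPEED_OF_SOUND / (2.0 * f_max_hz)

def pick_configuration(f_max_hz, configs):
    """Return the name of the first configuration whose spacing
    satisfies the anti-aliasing bound; `configs` maps configuration
    names to their inter-microphone spacings in metres."""
    d = max_mic_spacing(f_max_hz)
    for name, spacing in configs.items():
        if spacing <= d:
            return name
    return None
```

Under this bound, for example, an upper fault frequency of 3.43 kHz allows at most 5 cm of spacing.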
Step 4: the signal caching module caches the acoustic signals collected by the acoustic signal acquisition module and, after a set acquisition time, transmits the cached data to the positioning module through the signal transmission module; the distance measurement module transmits the distance between the microphone array and the detection target to the positioning module;
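A fixed-duration ring buffer is one way to realise the "cache for a set time, then transmit" behaviour of this step; the class below is an illustrative sketch, not the patent's implementation:

```python
import collections

class SignalBuffer:
    """Fixed-duration ring buffer for multichannel samples: caches the
    most recent `seconds` of audio before hand-off to localization."""
    def __init__(self, fs, seconds, channels):
        self.frames = collections.deque(maxlen=int(fs * seconds))
        self.channels = channels

    def push(self, frame):
        """Append one multichannel sample frame; oldest frames are
        silently discarded once the buffer duration is reached."""
        assert len(frame) == self.channels
        self.frames.append(frame)

    def full(self):
        return len(self.frames) == self.frames.maxlen

    def dump(self):
        """Hand the cached frames to the transmission stage and reset."""
        data = list(self.frames)
        self.frames.clear()
        return data
```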
Step 5: the positioning module receives the acoustic signal of the detection target, determines the grid range of the positioning algorithm from the position and size of the detection target and its distance from the microphone array, processes the acoustic signal obtained from the signal caching module with the positioning algorithm selected by the configuration selection module, computes the noise source distribution, and generates the noise source distribution cloud map;
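Restricting the scan grid to the detected target's extent (plus a margin) at the measured distance is what keeps the localization computation small. A sketch, with the grid resolution and margin factor as assumed parameters:

```python
import numpy as np

def make_scan_grid(center_xy, size_wh, distance, n=21, margin=1.2):
    """Build the focus-plane scan grid for the positioning algorithm:
    an n x n grid covering the detected target (scaled by `margin`)
    on the plane z = distance in front of the array.
    Returns an (n, n, 3) array of candidate source positions."""
    cx, cy = center_xy
    w, h = size_wh
    xs = np.linspace(cx - margin * w / 2, cx + margin * w / 2, n)
    ys = np.linspace(cy - margin * h / 2, cy + margin * h / 2, n)
    X, Y = np.meshgrid(xs, ys)
    Z = np.full_like(X, distance)
    return np.stack([X, Y, Z], axis=-1)
```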
Step 6: the augmented reality module superimposes the noise source distribution cloud map and the detection target information onto the optical image information to generate image information representing the detection result;
Step 7: the display control module displays the augmented image information to the operator, so that the operator can view the noise source distribution and the detection target information.
CN202011089429.4A 2020-10-13 2020-10-13 Augmented reality system and method based on sound source positioning Active CN112285648B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011089429.4A CN112285648B (en) 2020-10-13 2020-10-13 Augmented reality system and method based on sound source positioning

Publications (2)

Publication Number Publication Date
CN112285648A CN112285648A (en) 2021-01-29
CN112285648B true CN112285648B (en) 2022-11-01

Family

ID=74496718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011089429.4A Active CN112285648B (en) 2020-10-13 2020-10-13 Augmented reality system and method based on sound source positioning

Country Status (1)

Country Link
CN (1) CN112285648B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052255B (en) * 2021-04-07 2022-04-22 浙江天铂云科光电股份有限公司 Intelligent detection and positioning method for reactor

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9753119B1 (en) * 2014-01-29 2017-09-05 Amazon Technologies, Inc. Audio and depth based sound source localization
GB201812134D0 (en) * 2018-07-25 2018-09-05 Nokia Technologies Oy An apparatus, method and computer program for representing a sound space

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9736580B2 (en) * 2015-03-19 2017-08-15 Intel Corporation Acoustic camera based audio visual scene analysis
CN106887236A (en) * 2015-12-16 2017-06-23 宁波桑德纳电子科技有限公司 A remote speech acquisition device with combined sound-image localization
CN105891780B (en) * 2016-04-01 2018-04-10 清华大学 An indoor scene localization method and device based on ultrasonic array information
US20190129027A1 (en) * 2017-11-02 2019-05-02 Fluke Corporation Multi-modal acoustic imaging tool
US11209306B2 (en) * 2017-11-02 2021-12-28 Fluke Corporation Portable acoustic imaging tool with scanning and analysis capability
CN208886405U (en) * 2018-07-27 2019-05-21 东莞市三航军民融合创新研究院 Sensor array system for detecting and locating minute leakage sources in underground gas pipelines
CN208886406U (en) * 2018-07-27 2019-05-21 东莞市三航军民融合创新研究院 Sensor array system for detecting and locating minute leakage sources in overhead gas pipelines
US10206036B1 (en) * 2018-08-06 2019-02-12 Alibaba Group Holding Limited Method and apparatus for sound source location detection

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9753119B1 (en) * 2014-01-29 2017-09-05 Amazon Technologies, Inc. Audio and depth based sound source localization
GB201812134D0 (en) * 2018-07-25 2018-09-05 Nokia Technologies Oy An apparatus, method and computer program for representing a sound space

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Remote leak hole localization for underwater natural gas pipelines; Yigit Mahmutoglu; 2017 40th International Conference on Telecommunications and Signal Processing (TSP); 2017-10-23; full text *
Research on an augmented reality *** based on stereoscopic imaging and three-dimensional virtual sound; Yi Jun; China Master's Theses, Information Science and Technology; 2019-05-15; full text *
Aerodynamic noise source localization of aircraft landing gear; Ning Fangli; Noise and Vibration Control; 2018-04; full text *

Also Published As

Publication number Publication date
CN112285648A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
CN101446571B (en) Nondestructive detecting device and detecting system
CN111141460A (en) Equipment gas leakage monitoring system and method based on artificial intelligence sense organ
CN106523928B (en) Pipeline leakage detection method based on the screening of sound wave real time data two level
CN105403348B A programmable, highly integrated multi-channel pressure testing device
CN207316488U (en) A kind of long distance wireless routine for pipe-line transportation system leakage or gas leakage
CN210486944U (en) Portable converter station valve cooling system running state on-line monitoring and analyzing device
CN112285648B (en) Augmented reality system and method based on sound source positioning
CN105242180A (en) GIL/GIS discharge source detection and positioning apparatus and method
CN116577037B (en) Air duct leakage signal detection method based on non-uniform frequency spectrogram
CN110726518A (en) Positioning and monitoring system for leakage of annular sealing surface of nuclear reactor pressure vessel
CN1116596C (en) shock wave pressure testing device
CN110221261B (en) Radar waveform generation module test analysis method and device
CN205091430U (en) Transformer internal discharge failure diagnosis device
CN201740656U (en) Bulldozer dynamic parameter testing device based on LabVIEW
CN204989451U (en) On --spot check -up communication tester of electric energy meter
CN105203937A (en) Internal discharge mode recognition method and fault diagnosis system for transformer
CN101738487A (en) Virtual instrument technology-based motor experimental system scheme
CN204944771U (en) Leakage detector
CN210270114U (en) Fault detector for heavy-duty mechanized bridge erection system
CN114902029A (en) Fluid consumption meter and leak detection system
CN106322123A (en) Pipeline leakage detecting method and system as well as sampler and host thereof
CN210109301U (en) Digital electric energy meter and power source magnitude traceability remote calibration system
CN105115885A (en) Portable monitoring system for corrosion state of grounding grid and monitoring method
CN105093083B (en) Cable local discharge signal framing device and localization method
CN211232432U (en) Water supply pipe network leak detection device based on industrial internet

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant