WO2019189424A1 - Acoustic analysis device and acoustic analysis method - Google Patents

Acoustic analysis device and acoustic analysis method Download PDF

Info

Publication number
WO2019189424A1
WO2019189424A1 (PCT/JP2019/013296)
Authority
WO
WIPO (PCT)
Prior art keywords
analysis
sound source
point
unit
model data
Prior art date
Application number
PCT/JP2019/013296
Other languages
French (fr)
Japanese (ja)
Inventor
直穂子 豊嶋
Original Assignee
Nidec Corporation (日本電産株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nidec Corporation (日本電産株式会社)
Priority to CN201980022442.7A (published as CN111971536A)
Publication of WO2019189424A1

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01H — MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H3/00 — Measuring characteristics of vibrations by using a detector in a fluid
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04R — LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 — Details of transducers, loudspeakers or microphones
    • H04R1/20 — Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 — Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 — Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers

Definitions

  • the present invention relates to an acoustic analysis apparatus and an acoustic analysis method.
  • Patent Literature 1 discloses a sound field visualization device in which the position of a microphone in space is detected by a position sensor attached to the microphone, and an image corresponding to the sound pressure of the sound signal output from the microphone is displayed at a display position corresponding to the position of the microphone.
  • an object of the present invention is to provide an acoustic analysis apparatus and an acoustic analysis method that can analyze in detail the situation in which the surface of the object to be measured vibrates.
  • an acoustic analysis device according to one aspect includes: a first acquisition unit that acquires three-dimensional model data of the sound source surface of an object to be measured; a second acquisition unit that acquires three-dimensional position information of points on the sound source surface; a third acquisition unit that acquires three-dimensional position information of points on the measurement surface of a microphone array arranged in the vicinity of the sound source surface; a calculation unit that calculates, from the sound signal acquired by the microphone array, a three-dimensional distribution of particle velocity, which is a physical quantity representing a characteristic of the sound, and calculates the particle velocity at analysis points on a plane parallel to the measurement surface; a first alignment unit that aligns the sound source surface with the analysis points based on the three-dimensional position information of the points on the sound source surface and on the measurement surface; a second alignment unit that aligns the three-dimensional model data with the sound source surface based on three or more feature points fixed to the object to be measured; and a display unit that, in accordance with the alignment results, deforms and displays the three-dimensional model data according to the particle velocity at the analysis points.
  • the acoustic analysis method includes the steps of: acquiring three-dimensional model data of the sound source surface of an object to be measured; acquiring three-dimensional position information of points on the sound source surface; acquiring three-dimensional position information of points on the measurement surface of a microphone array arranged in the vicinity of the sound source surface; calculating, from the sound signal acquired by the microphone array, a three-dimensional distribution of particle velocity, which is a physical quantity representing a characteristic of the sound, and calculating the particle velocity at analysis points on a plane parallel to the measurement surface; aligning the sound source surface with the analysis points based on the three-dimensional position information of the points on the sound source surface and on the measurement surface; aligning the three-dimensional model data with the sound source surface based on three or more feature points fixed to the object to be measured; and, in accordance with the alignment results, deforming and displaying the three-dimensional model data according to the particle velocity at the analysis points.
  • the measured particle velocity data can be appropriately superimposed on the 3D model data of the object to be measured, and the 3D model data can be deformed and displayed. Therefore, it is possible to analyze in detail the situation where the surface of the object to be measured vibrates.
  • FIG. 1 is a diagram illustrating an example of an acoustic analysis system.
  • FIG. 2 is a diagram for explaining an outline of the analysis processing in the analysis processing unit.
  • FIG. 3 is a diagram illustrating a method for imaging the object to be measured.
  • FIG. 4 is a diagram illustrating a microphone array imaging method.
  • FIG. 5 is a diagram for explaining a coordinate conversion method.
  • FIG. 6 is a diagram for explaining a method of displaying the three-dimensional model data.
  • FIG. 7 is an example of a three-dimensional mesh model.
  • FIG. 8 is a display example of 3D model data.
  • FIG. 1 is a configuration example of an acoustic analysis system 1000 including a microphone array 1 according to this embodiment.
  • the acoustic analysis system 1000 according to the present embodiment is a system that analyzes a sound to be measured from the object to be measured (sound source) 2 using a near-field acoustic holography method and displays an analysis result.
  • in the near-field acoustic holography method, it is necessary to measure the sound pressure distribution on a measurement surface that is close and parallel to the sound source surface 2a, so a microphone array 1 in which a plurality of microphones mc are arranged in a lattice is used.
  • the microphone array 1 includes M × N microphones mc arranged in a lattice pattern.
  • the microphone mc may be a MEMS (Micro-Electrical-Mechanical Systems) microphone, for example.
  • the acoustic analysis system 1000 analyzes a signal (sound signal) input from each of the M × N microphones mc, and detects a physical quantity representing a sound characteristic.
  • the acoustic analysis system 1000 includes an imaging device 3 that is independent from the microphone array 1 and the DUT 2.
  • the imaging device 3 is a stereo camera
  • the stereo camera 3 is fixed at a position spaced a predetermined distance from the microphone array 1 and the object 2 to be measured.
  • the stereo camera 3 can acquire the three-dimensional position information of the DUT 2 and the three-dimensional position information of the microphone array 1.
  • the acoustic analysis system 1000 includes an acoustic analysis device 100 and a display device 200.
  • the acoustic analysis device 100 includes a signal processing unit 101, an analysis processing unit 102, and a storage unit 103.
  • the acoustic analysis apparatus 100 includes a first acquisition unit, a second acquisition unit, a third acquisition unit, a calculation unit, a first alignment unit, a second alignment unit, and a display unit.
  • the first acquisition unit acquires an image of the sound source surface 2a of the DUT 2.
  • the second acquisition unit acquires three-dimensional position information of points on the sound source surface 2a.
  • the third acquisition unit acquires three-dimensional position information of points on the measurement surface 1b of the microphone array 1 arranged in the vicinity of the sound source surface 2a.
  • the first alignment unit includes a derivation unit and a conversion unit.
  • the signal processing unit 101 performs predetermined signal processing on the signal from each microphone mc of the microphone array 1 to obtain a sound signal used for acoustic analysis.
  • the signal processing may include processing for synchronizing signals of the M × N microphones mc included in the microphone array 1.
  • the analysis processing unit 102 analyzes the sound signal that has been signal-processed by the signal processing unit 101, and detects a three-dimensional distribution of physical quantities representing the characteristics of the sound.
  • the three-dimensional distribution of physical quantities representing the characteristics of sound is a particle velocity distribution.
  • the analysis processing unit 102 performs display control for causing the display device 200 to display a particle velocity, which is a physical quantity representing the characteristics of sound, as vibration of the sound source surface 2a.
  • the analysis processing unit 102 performs display control for deforming and displaying the three-dimensional model data (3D model data) representing the structure of the sound source surface 2a (the surface of the DUT 2) according to the particle velocity.
  • the analysis processing in the analysis processing unit 102 will be described later.
  • the storage unit 103 stores the analysis result by the analysis processing unit 102 and the like.
  • the storage unit 103 stores the 3D model data.
  • the 3D model data can be, for example, 3D-CAD data.
  • the display device 200 includes a monitor such as a liquid crystal display and displays the analysis result of the acoustic analysis device 100.
  • the microphone array 1 has a shape smaller than that of the DUT 2.
  • the microphone array 1 measures the sound signal in multiple passes while being moved in the vicinity of the sound source surface 2a of the object 2 to be measured; the acoustic analysis device 100 analyzes the sound signal measured in each pass, merges the plurality of analysis results, and displays them on the display device 200.
  • in this way, the acoustic analysis device 100 analyzes the sound field 1a over the entire surface of the device under test 2 using the microphone array 1, which is smaller than the device under test 2, and displays the analysis result on the display device 200.
  • the analysis processing unit 102 of the acoustic analysis apparatus 100 acquires the three-dimensional position information of points on the sound source surface 2a measured by the stereo camera 3 fixed by the fixing unit 3a. The analysis processing unit 102 also images, with the stereo camera 3, the microphone array 1 arranged in the vicinity of the sound source surface 2a of the DUT 2 during sound pickup, and acquires the three-dimensional position information of points on the measurement surface of the microphone array 1. Further, the analysis processing unit 102 analyzes the sound signal acquired by the microphone array 1 and calculates the particle velocity distribution on the sound source surface 2a based on the analysis result of the sound field 1a, as shown in FIG. 2.
  • the analysis processing unit 102 aligns the 3D model data of the sound source surface 2a of the object to be measured 2 and the particle velocity distribution of the sound source surface 2a, and deforms the 3D model data according to the particle velocity according to the alignment result. And display.
  • the analysis processing unit 102 acquires an image of the DUT 2 captured by the stereo camera 3 fixed by the fixing unit 3a before the sound field is measured by the microphone array 1. That is, as shown in FIG. 4, the stereo camera 3 images the sound source surface 2a of the DUT 2 in a state where the microphone array 1 is not within the imaging range of the stereo camera 3, that is, in a state where the microphone array 1 is not yet arranged in the vicinity of the object 2 to be measured. In this way, the analysis processing unit 102 can acquire an image of the object 2 to be measured in which the microphone array 1 does not appear.
  • here, n is an integer with 0 ≤ n ≤ No (No ≥ 2), indexing the measured points Po(n) on the sound source surface. In this way, the analysis processing unit 102 calculates the position and shape of the DUT 2 in the camera coordinate system Σc.
  • the analysis processing unit 102 sets an object coordinate system Σo having an arbitrary point on the object 2 to be measured, for example Po(0), as its origin and the xz plane as the sound source surface 2a. Further, as shown in FIG. 6, the analysis processing unit 102 sets a microphone array coordinate system Σm having an arbitrary point on the microphone array 1, for example Pm(0), as its origin and the xz plane as the measurement surface 1b. The analysis processing unit 102 then calculates a transformation matrix R from the microphone array coordinate system Σm to the measured object coordinate system Σo.
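The patent names the transformation matrix R between Σm and Σo but does not spell out how it is derived. One common way to obtain it, sketched below under the assumption that corresponding 3-D points of both frames are observed in the common camera frame, is a least-squares rigid fit (the Kabsch algorithm). The function name and the split into a rotation R and a translation t are illustrative, not from the patent.

```python
import numpy as np

def derive_transform(points_m: np.ndarray, points_o: np.ndarray):
    """Estimate the rigid transform (R, t) mapping microphone-array
    coordinates to measured-object coordinates, given corresponding
    3-D points of both frames observed in a common (camera) frame."""
    cm = points_m.mean(axis=0)
    co = points_o.mean(axis=0)
    H = (points_m - cm).T @ (points_o - co)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```

Three or more non-collinear correspondences suffice; with noisy stereo measurements, more points give a better least-squares estimate.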
  • the analysis processing unit 102 acquires the sound signal from the microphone array 1 that has been processed by the signal processing unit 101, analyzes it, and calculates a three-dimensional sound distribution (particle velocity distribution). From the calculated distribution, the analysis processing unit 102 then calculates the particle velocity distribution Vm(P(m)) on an arbitrary surface (analysis surface) parallel to the measurement surface 1b, based on the principle of acoustic holography. That is, the analysis processing unit 102 calculates the particle velocities at a plurality of analysis points on a plane parallel to the measurement surface 1b. The resulting particle velocity distribution Vm(P(m)) is expressed in the microphone array coordinate system Σm.
  • the principle of acoustic holography is to obtain the sound pressure on an analysis surface by convolving the sound pressure on the measurement surface with a transfer function from the measurement surface to an arbitrary surface (analysis surface) parallel to it.
  • if the analysis surface is taken to be the sound source surface, the sound pressure on the sound source surface can be obtained.
  • in practice, it is generally easier to perform this convolution in the spatial-frequency domain: the sound recorded (spatially sampled) by the grid-shaped microphone array is spatially Fourier transformed, the result is multiplied by the transfer function to the analysis surface (for example, the sound source surface), and an inverse spatial Fourier transform is applied to obtain the sound pressure on that surface.
  • from this, the particle velocity distribution on the sound source surface can also be obtained.
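As a rough illustration of the spatial-Fourier-transform procedure described above, the following sketch back-propagates a single-frequency complex pressure field from the measurement plane to a parallel analysis plane. It is a minimal planar near-field acoustic holography example, not the patent's implementation; the hard clipping of evanescent components stands in for the proper regularisation a real system would need, and all parameter names are assumptions.

```python
import numpy as np

def backpropagate_pressure(p_meas, dx, freq, dist, c=343.0, g_max=50.0):
    """Propagate the measured pressure field back from the measurement
    plane toward the source by `dist` metres.
    p_meas: complex pressure on the M x N microphone grid at one frequency.
    dx:     grid pitch in metres."""
    k = 2 * np.pi * freq / c                      # acoustic wavenumber
    M, N = p_meas.shape
    kx = 2 * np.pi * np.fft.fftfreq(M, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    # kz is real for propagating waves, imaginary for evanescent ones
    kz = np.sqrt((k ** 2 - KX ** 2 - KY ** 2).astype(complex))
    G = np.exp(-1j * kz * dist)                   # inverse propagator
    G = np.where(np.abs(G) > g_max, 0.0, G)       # crude regularisation
    return np.fft.ifft2(np.fft.fft2(p_meas) * G)
```

The particle velocity would follow from the back-propagated pressure spectrum via Euler's equation; that step is omitted here for brevity.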
  • the analysis processing unit 102 uses the transformation matrix R to convert the particle velocity distribution Vm(P(m)), calculated in the microphone array coordinate system Σm, into the particle velocity distribution Vo(P(m)) in the measured object coordinate system Σo. This aligns the sound source surface 2a with the analysis points (the particle velocity distribution of the analysis result).
  • the analysis processing unit 102 then performs alignment between the sound source surface 2a and the 3D model data of the DUT 2. Based on three or more geometric feature points fixed to the DUT 2, the second alignment unit matches the coordinates on the sound source surface 2a to the coordinates of the 3D model data, performing the alignment by enlarging, reducing, and rotating the data.
  • as a geometric feature point fixed to the DUT 2, a mounting screw or a notch of the DUT 2 can be used.
  • the feature point may be a predetermined point defined by the analysis processing unit 102 or may be arbitrarily selected by an operator. Thereby, the 3D model data of the device under test 2 can be aligned with the device under test coordinate system ⁇ o.
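The text states only that the 3D model data is aligned to the sound source surface by enlarging, reducing, and rotating it so that three or more fixed feature points coincide. A closed-form way to compute such a similarity transform (scale, rotation, translation) from the matched feature points is Umeyama's method, sketched below as an assumed implementation:

```python
import numpy as np

def similarity_align(model_pts, surface_pts):
    """Estimate scale s, rotation R, and translation t mapping feature
    points on the 3-D model onto the same features measured on the
    sound source surface (Umeyama's closed-form method)."""
    n = len(model_pts)
    mu_m = model_pts.mean(axis=0)
    mu_s = surface_pts.mean(axis=0)
    X = model_pts - mu_m
    Y = surface_pts - mu_s
    U, D, Vt = np.linalg.svd(Y.T @ X / n)     # cross-covariance
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # guard against reflection
    R = U @ S @ Vt
    var_x = (X ** 2).sum() / n                # variance of model points
    s = np.trace(np.diag(D) @ S) / var_x      # scale factor
    t = mu_s - s * R @ mu_m
    return s, R, t
```

A model point p then maps to the surface frame as s * R @ p + t; applying the inverse instead would bring the measured data into the model frame.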
  • the analysis processing unit 102 deforms the 3D model data according to the particle velocity distribution Vo (P (m)) according to the above alignment result, and causes the display device 200 to display the 3D model data.
  • based on the particle velocity distribution Vo(P(m)), the analysis processing unit 102 calculates the particle velocity v(m) at an arbitrary point P(m) in the object coordinate system Σo, as shown in FIG. 7.
  • the arbitrary point P (m) corresponds to a node of 3D model data (3D mesh model) M.
  • the analysis result of the particle velocity is obtained as a three-dimensional vector at an arbitrary point P (m) in the measured object coordinate system ⁇ o.
  • the analysis processing unit 102 displays the region (mesh) corresponding to the point P(m) in the 3D model data by changing it to a color corresponding to the magnitude of the vector indicating the particle velocity.
  • FIG. 8 shows an example in which the color of the 3D model data is modified according to the particle velocity v (m).
  • the deformation method of the 3D model data is not limited to the above.
  • the node corresponding to the point P (m) in the 3D model data may be moved and displayed according to the direction of the vector indicating the particle velocity. At this time, the deformation amount (movement amount) of the node can correspond to the magnitude of the vector.
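Both display variants described above — coloring each mesh region by the magnitude of the particle velocity vector, and displacing each node along the vector's direction by an amount corresponding to its magnitude — can be sketched as follows. The `gain` display scale factor and the blue-to-red color ramp are hypothetical choices, not specified in the patent.

```python
import numpy as np

def deform_mesh(nodes, velocities, gain=1e-3):
    """For each mesh node P(m) with particle-velocity vector v(m):
    return the node displaced along v(m) (scaled by `gain`) and an
    RGB color encoding |v(m)| from blue (slow) to red (fast)."""
    mags = np.linalg.norm(velocities, axis=1)
    span = mags.max() - mags.min()
    norm = (mags - mags.min()) / (span if span > 0 else 1.0)  # 0..1 per node
    # blue -> red ramp per node, as RGB triples in [0, 1]
    colors = np.stack([norm, np.zeros_like(norm), 1.0 - norm], axis=1)
    # displace each node along its particle-velocity vector
    displaced = nodes + gain * velocities
    return displaced, colors
```

The returned node positions and colors would then be handed to whatever mesh renderer drives the display device.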
  • the analysis processing unit 102 repeats processes 2 through 7 each time the microphone array 1 is moved, so that the analysis results over the entire surface of the device under test 2 can be displayed in association with the 3D model data of the device under test 2.
  • the calculation unit in the present embodiment calculates, from the sound signal acquired by the microphone array 1, a three-dimensional distribution of particle velocity, which is a physical quantity representing a characteristic of the sound, and calculates the particle velocity at analysis points on a plane parallel to the measurement surface 1b of the microphone array 1.
  • the first alignment unit performs a first alignment between the sound source surface 2a and the analysis points, based on the three-dimensional position information of the points on the sound source surface 2a of the object 2 to be measured and the three-dimensional position information of the points on the measurement surface 1b of the microphone array 1.
  • the second alignment unit performs second alignment between the three-dimensional model data of the sound source surface 2a and the sound source surface 2a.
  • the display unit deforms the three-dimensional model data according to the particle velocity at the analysis point in accordance with the alignment result, and causes the display device 200 to display it.
  • since the present invention deforms and displays the surface of the device under test 2, the structure of the surface of the device under test 2 serving as the sound source and the vibration of that surface can be displayed in an easily understandable manner.
  • the acoustic analysis system may include an acoustic analysis unit that performs numerical analysis. With this configuration, the result of the numerical analysis on the three-dimensional model data and the result of superimposing the measured analysis result on the three-dimensional model data can be displayed side by side. Since the user can then compare the numerical analysis result and the actual measurement result on the same model data, a detailed analysis is possible.
  • the acoustic analysis unit performs frequency response analysis based on the three-dimensional model data and outputs an analysis result.
  • in the frequency response analysis, for example, the sound pressure, sound power, and particle velocity in the space of the object to be analyzed are analyzed at a specific frequency.
  • the display unit displays, side by side, the aligned measurement result produced by the first alignment unit and the second alignment unit and the analysis result on the three-dimensional model data produced by the acoustic analysis unit.
  • the display unit can display the region corresponding to an analysis point in the three-dimensional model data by changing it to a color corresponding to the particle velocity at that analysis point. Thereby, the magnitude of the particle velocity can be grasped intuitively.
  • the derivation unit derives a transformation matrix R from the microphone array coordinate system Σm, whose origin is an arbitrary point Pm(0) on the measurement surface 1b, to the measured object coordinate system Σo, whose origin is an arbitrary point Po(0) on the sound source surface 2a.
  • the conversion unit converts the analysis point in the microphone array coordinate system ⁇ m to a point in the measurement object coordinate system ⁇ o using the conversion matrix R.
  • the acoustic analysis device 100 can appropriately align the sound source surface 2a and the analysis point.
  • thus, the measured particle velocity data can be appropriately superimposed on the 3D model data of the sound source surface 2a of the object 2 to be measured, and the 3D model data can be deformed and displayed. Therefore, it is possible to analyze in detail the situation in which the surface of the DUT 2 vibrates.
  • the vibration of the surface of the device under test 2 generated when the motor is incorporated into the final product can be displayed in association with the structure of the device under test 2. As a result, for example, it is possible to easily identify the cause of noise and reduce the man-hours required for noise countermeasures.
  • in the present embodiment, the case has been described in which the three-dimensional position information of points on the sound source surface 2a and the three-dimensional position information of points on the measurement surface 1b of the microphone array 1 arranged in the vicinity of the sound source surface 2a are acquired using the common stereo camera 3, which is fixed independently of the sound source surface 2a of the workpiece 2 and of the measurement surface 1b of the microphone array 1. That is, the first acquisition unit, the second acquisition unit, and the third acquisition unit use a common stereo camera that is independently fixed at a position separated from the sound source surface and the measurement surface.
  • if the positional relationship between the cameras (the correspondence between their camera coordinate systems) is known, the above three-dimensional position information may instead be acquired using different stereo cameras.
  • however, when the common stereo camera 3 is used as in the above-described embodiment, the transformation matrix R from the microphone array coordinate system Σm to the measured object coordinate system Σo can easily be derived via the common camera coordinate system Σc, which is preferable.
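The convenience of the common camera frame can be made concrete: if both the microphone array frame Σm and the object frame Σo are known relative to the camera frame Σc, R follows by simple composition. The use of homogeneous 4×4 matrices below is an assumption; the patent only names the matrix R.

```python
import numpy as np

def compose_transform(T_c_m: np.ndarray, T_c_o: np.ndarray) -> np.ndarray:
    """Given homogeneous transforms taking microphone-array coordinates
    (T_c_m) and object coordinates (T_c_o) into the common camera frame,
    return the transform from the microphone-array frame to the object
    frame (the matrix R of the text, with translation included)."""
    return np.linalg.inv(T_c_o) @ T_c_m
```

With a shared camera, both T_c_m and T_c_o come from the same calibration, so no extrinsic calibration between separate cameras is needed.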
  • the means for acquiring the three-dimensional position information is not limited to the stereo camera 3.
  • the means for acquiring three-dimensional position information may be a depth camera, a laser scanner, or an ultrasonic sensor that can detect a three-dimensional position.
  • a marker or the like may be installed on the device under test 2 or the microphone array 1.
  • 3D model data is not limited to 3D-CAD data.
  • 3D model data may also be created based on the points Po(n) = (xo(n), yo(n), zo(n)) on the measurement object 2 measured by the stereo camera 3 used for alignment between the measurement object 2 and the microphone array 1.
  • the object to be measured 2 may be imaged with a 3D scanner to create 3D model data.
  • SYMBOLS: 1 ... microphone array, 2 ... measured object (sound source), 2a ... sound source surface, 3 ... stereo camera, 100 ... acoustic analysis apparatus, 101 ... signal processing unit, 102 ... analysis processing unit, 103 ... storage unit


Abstract

An acoustic analysis device 100 comprises: a first acquisition unit for acquiring three-dimensional model data of a sound source surface of an object to be measured; a second acquisition unit for acquiring three-dimensional position information of a point on the sound source surface; a third acquisition unit for acquiring three-dimensional position information of a point on a measurement surface of a microphone array arranged in the vicinity of the sound source surface; a calculation unit for calculating, from a sound signal acquired by the microphone array, a three-dimensional distribution of particle velocity, which is a physical quantity indicating a sound characteristic, and for calculating the particle velocity at an analysis point on a plane parallel to the measurement surface; a first alignment unit for performing alignment of the sound source surface and the analysis point; a second alignment unit for performing alignment of the three-dimensional model data and the sound source surface; and a display unit for displaying, in accordance with the result of each alignment, the three-dimensional model data so as to be deformed in accordance with the particle velocity at the analysis point.

Description

Acoustic analysis apparatus and acoustic analysis method
The present invention relates to an acoustic analysis apparatus and an acoustic analysis method.
In recent years, due to the increasing demand for noise reduction of products, it is required to measure and analyze the spatial distribution of the sound field. In particular, in acoustic analysis, it is desired to be able to intuitively grasp sound transmission by visualizing a sound field.
In Patent Literature 1, a position in a microphone space is detected by a position sensor attached to the microphone, and an image corresponding to the sound pressure of a sound signal output from the microphone is displayed at a display position corresponding to the position of the microphone. A sound field visualization device to be displayed is disclosed.
Japanese Patent No. 5353316
By the way, in order to express the acoustic state of the object to be measured more easily, it is considered to display the acoustic analysis result superimposed on the image of the object to be measured. As a method of displaying the acoustic analysis result superimposed on the image of the object to be measured, there is a method of fixing the camera to the microphone array, imaging the object to be measured with the camera, and displaying it superimposed on the acoustic analysis result.
However, it is not possible to analyze in detail the situation in which the surface of the measurement object serving as the sound source vibrates only by displaying the acoustic analysis result superimposed on the measurement object image.
Therefore, an object of the present invention is to provide an acoustic analysis apparatus and an acoustic analysis method that can analyze in detail the situation in which the surface of the object to be measured vibrates.
In order to solve the above problem, an acoustic analysis device according to one aspect of the present invention includes: a first acquisition unit that acquires three-dimensional model data of the sound source surface of an object to be measured; a second acquisition unit that acquires three-dimensional position information of points on the sound source surface; a third acquisition unit that acquires three-dimensional position information of points on the measurement surface of a microphone array arranged in the vicinity of the sound source surface; a calculation unit that calculates, from the sound signal acquired by the microphone array, a three-dimensional distribution of particle velocity, which is a physical quantity representing a characteristic of the sound, and calculates the particle velocity at analysis points on a plane parallel to the measurement surface; a first alignment unit that aligns the sound source surface with the analysis points based on the three-dimensional position information of the points on the sound source surface and the three-dimensional position information of the points on the measurement surface; a second alignment unit that aligns the three-dimensional model data with the sound source surface based on three or more feature points fixed to the object to be measured; and a display unit that, in accordance with the alignment results of the first alignment unit and the second alignment unit, deforms and displays the three-dimensional model data according to the particle velocity at the analysis points.
An acoustic analysis method according to one aspect of the present invention includes the steps of: acquiring three-dimensional model data of the sound source surface of an object to be measured; acquiring three-dimensional position information of points on the sound source surface; acquiring three-dimensional position information of points on the measurement surface of a microphone array arranged in the vicinity of the sound source surface; calculating, from the sound signal acquired by the microphone array, a three-dimensional distribution of particle velocity, which is a physical quantity representing a characteristic of the sound, and calculating the particle velocity at analysis points on a plane parallel to the measurement surface; aligning the sound source surface with the analysis points based on the three-dimensional position information of the points on the sound source surface and the three-dimensional position information of the points on the measurement surface; aligning the three-dimensional model data with the sound source surface based on three or more feature points fixed to the object to be measured; and, in accordance with the alignment results, deforming and displaying the three-dimensional model data according to the particle velocity at the analysis points.
According to one aspect of the present invention, the measured particle velocity data can be appropriately superimposed on the 3D model data of the object to be measured, and the 3D model data can be deformed and displayed. Therefore, it is possible to analyze in detail the situation in which the surface of the object to be measured vibrates.
FIG. 1 is a diagram illustrating an example of an acoustic analysis system. FIG. 2 is a diagram for explaining an outline of the analysis processing performed by the analysis processing unit. FIG. 3 is a diagram illustrating a method of imaging the object to be measured. FIG. 4 is a diagram illustrating a method of imaging the microphone array. FIG. 5 is a diagram for explaining a coordinate transformation method. FIG. 6 is a diagram for explaining a method of displaying the three-dimensional model data. FIG. 7 is an example of a three-dimensional mesh model. FIG. 8 is a display example of the three-dimensional model data.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
The scope of the present invention is not limited to the following embodiments and may be modified arbitrarily within the scope of the technical idea of the present invention.
FIG. 1 shows a configuration example of an acoustic analysis system 1000 including a microphone array 1 according to the present embodiment.
The acoustic analysis system 1000 according to the present embodiment analyzes sound emitted from an object to be measured (sound source) 2 using near-field acoustic holography and displays the analysis results. Near-field acoustic holography requires measuring the sound pressure distribution on a measurement surface that is close to and parallel to the sound source surface 2a, and therefore uses a microphone array 1 in which a plurality of microphones mc are arranged in a grid.
The microphone array 1 according to the present embodiment includes M × N microphones mc arranged in a grid. Each microphone mc may be, for example, a MEMS (Micro-Electro-Mechanical Systems) microphone. The acoustic analysis system 1000 analyzes the signal (sound signal) input from each of the M × N microphones mc and detects physical quantities representing characteristics of the sound.
The acoustic analysis system 1000 also includes an imaging device 3 that is independent of both the microphone array 1 and the object to be measured 2. In the present embodiment, the imaging device 3 is described as a stereo camera.
The stereo camera 3 is fixed at a position separated by a predetermined distance from the microphone array 1 and the object to be measured 2. The stereo camera 3 can acquire three-dimensional position information of the object to be measured 2 and three-dimensional position information of the microphone array 1.
 The acoustic analysis system 1000 further includes an acoustic analysis device 100 and a display device 200. The acoustic analysis device 100 includes a signal processing unit 101, an analysis processing unit 102, and a storage unit 103. The acoustic analysis device 100 also includes a first acquisition unit, a second acquisition unit, a third acquisition unit, a calculation unit, a first alignment unit, a second alignment unit, and a display unit. The first acquisition unit acquires an image of the sound source surface 2a of the object to be measured 2. The second acquisition unit acquires three-dimensional position information of points on the sound source surface 2a. The third acquisition unit acquires three-dimensional position information of points on the measurement surface 1b of the microphone array 1 arranged in the vicinity of the sound source surface 2a. The first alignment unit includes a derivation unit and a conversion unit. The signal processing unit 101 performs predetermined signal processing on the signal from each microphone mc of the microphone array 1 to obtain sound signals used for acoustic analysis. This signal processing may include processing such as synchronizing the signals of the M × N microphones mc of the microphone array 1.
The analysis processing unit 102 analyzes the sound signals processed by the signal processing unit 101 and detects a three-dimensional distribution of a physical quantity representing a characteristic of the sound. In the present embodiment, this three-dimensional distribution is a particle velocity distribution.
The analysis processing unit 102 performs display control that causes the display device 200 to display the particle velocity, a physical quantity representing a characteristic of the sound, as vibration of the sound source surface 2a. In the present embodiment, the analysis processing unit 102 performs display control that deforms and displays three-dimensional model data (3D model data) representing the structure of the sound source surface 2a (the surface of the object to be measured 2) in accordance with the particle velocity. The analysis processing performed by the analysis processing unit 102 will be described later.
The storage unit 103 stores the analysis results of the analysis processing unit 102 and the like. The storage unit 103 also stores the 3D model data. The 3D model data may be, for example, 3D CAD data.
The display device 200 includes a monitor such as a liquid crystal display and displays the analysis results of the acoustic analysis device 100.
 In the present embodiment, as shown in FIG. 2, the microphone array 1 is smaller than the object to be measured 2. The microphone array 1 measures sound signals in multiple passes while moving in the vicinity of the sound source surface 2a of the object to be measured 2; the acoustic analysis device 100 analyzes each of the sound signals measured in the multiple passes, merges the analysis results, and displays them on the display device 200. In this way, the acoustic analysis device 100 of the present embodiment analyzes the sound field 1a over the entire surface of the object to be measured 2 using a microphone array 1 smaller than the object to be measured 2 and displays the analysis results on the display device 200.
Specifically, the analysis processing unit 102 of the acoustic analysis device 100 acquires three-dimensional position information of points on the sound source surface 2a measured by the stereo camera 3, which is fixed by a fixing means 3a. Furthermore, the analysis processing unit 102 causes the stereo camera 3 to image the microphone array 1 while it is collecting sound in the vicinity of the sound source surface 2a of the object to be measured 2, and acquires three-dimensional position information of points on the measurement surface of the microphone array 1.
The analysis processing unit 102 also analyzes the sound signals acquired by the microphone array 1 and, as shown in FIG. 3, calculates the particle velocity distribution on the sound source surface 2a based on the analysis result of the sound field 1a. The analysis processing unit 102 then aligns the 3D model data of the sound source surface 2a of the object to be measured 2 with the particle velocity distribution on the sound source surface 2a and, in accordance with the alignment result, deforms and displays the 3D model data according to the particle velocity.
The analysis processing performed by the analysis processing unit 102 is described in detail below.
(Process 1)
First, before the sound field is measured with the microphone array 1, the analysis processing unit 102 acquires an image of the object to be measured 2 captured by the stereo camera 3 fixed by the fixing means 3a. That is, as shown in FIG. 4, the stereo camera 3 images the sound source surface 2a of the object to be measured 2 while the microphone array 1 is outside the imaging range of the stereo camera 3, that is, while the microphone array 1 is not arranged in the vicinity of the object to be measured 2. In this way, the analysis processing unit 102 can acquire an image of the object to be measured 2 in which the microphone array 1 does not appear.
The analysis processing unit 102 also acquires three-dimensional position information of points on the sound source surface 2a measured by the stereo camera 3 fixed by the fixing means 3a. Specifically, the analysis processing unit 102 acquires from the stereo camera 3 the three-dimensional position information Po(n) = (xo(n), yo(n), zo(n)) of three or more points on the sound source 2b and calculates the shape So (a_o x + b_o y + c_o z + d_o = 0) of the sound source surface 2a. Here, n is an integer satisfying 0 ≤ n ≤ No (No ≥ 2). In this way, the analysis processing unit 102 calculates the position and shape of the object to be measured 2 in the camera coordinate system Σc.
(Process 2)
Next, as shown in FIG. 5, while the microphone array 1 is collecting sound in the vicinity of the sound source surface 2a of the object to be measured 2, the analysis processing unit 102 acquires from the stereo camera 3 fixed by the fixing means 3a the three-dimensional position information Pm(n) = (xm(n), ym(n), zm(n)) of any three or more points on the microphone array 1 and calculates the orientation Om (a_m x + b_m y + c_m z + d_m = 0) of the measurement surface 1b of the microphone array 1. Here, n is an integer satisfying 0 ≤ n ≤ Nm (Nm ≥ 2). In this way, the analysis processing unit 102 calculates the position and orientation of the microphone array 1 in the camera coordinate system Σc.
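Processes 1 and 2 both reduce to fitting a plane a·x + b·y + c·z + d = 0 to three or more measured 3D points. The patent does not specify a fitting method; the sketch below uses a least-squares fit via singular value decomposition, one common choice (the function and variable names are illustrative, not taken from the patent):

```python
import numpy as np

def fit_plane(points):
    """Fit a plane a*x + b*y + c*z + d = 0 to N >= 3 points (N x 3 array).

    Returns a unit normal (a, b, c) and offset d in a least-squares sense:
    the normal is the singular vector of the centered points with the
    smallest singular value, i.e. the direction of least variance.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]               # unit vector by construction
    d = -normal.dot(centroid)     # the plane passes through the centroid
    return normal, d

# Example: four points lying exactly on the plane z = 2.
pts = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0],
                [0.0, 1.0, 2.0], [1.0, 1.0, 2.0]])
normal, d = fit_plane(pts)
```

With exact three-point input the fit is exact; with more, noisy points (as from a stereo camera) the same call gives the best-fit plane.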
(Process 3)
Next, as shown in FIG. 6, the analysis processing unit 102 defines an object coordinate system Σo whose origin is an arbitrary point on the object to be measured 2, for example Po(0), and whose xz plane is the sound source surface 2a. As also shown in FIG. 6, the analysis processing unit 102 defines a microphone array coordinate system Σm whose origin is an arbitrary point on the microphone array 1, for example Pm(0), and whose xz plane is the measurement surface 1b. The analysis processing unit 102 then calculates the transformation matrix R from the microphone array coordinate system Σm to the object coordinate system Σo.
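One way to realize Process 3 is to express each coordinate system as a homogeneous 4 × 4 transform relative to the shared camera coordinate system Σc (built from the measured origin and the fitted plane normal) and compose the two. The patent does not prescribe a construction; the following is a minimal sketch under that assumption, with the y axis taken as the plane normal so that the xz plane is the surface, as in the patent (all names are illustrative):

```python
import numpy as np

def make_frame(origin, normal, x_hint):
    """Build a 4x4 homogeneous transform (frame -> camera coordinates) for a
    coordinate system whose origin is `origin`, whose y axis is the fitted
    plane normal, and whose xz plane is therefore the fitted surface."""
    y = np.asarray(normal, float)
    y /= np.linalg.norm(y)
    # Project a hint direction into the plane to obtain the x axis.
    x = np.asarray(x_hint, float) - y * y.dot(x_hint)
    x /= np.linalg.norm(x)
    z = np.cross(x, y)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z
    T[:3, 3] = origin
    return T

def transform_m_to_o(T_cam_from_m, T_cam_from_o):
    """R = inv(T_cam_from_o) @ T_cam_from_m maps microphone-array
    coordinates (Sigma_m) to object coordinates (Sigma_o) via the
    shared camera coordinate system Sigma_c."""
    return np.linalg.inv(T_cam_from_o) @ T_cam_from_m

# Example: both surfaces face the camera; the array sits 0.1 in front
# of the object along the camera z axis (i.e. along both plane normals).
T_cam_from_o = make_frame([0, 0, 0.0], [0, 0, 1], [1, 0, 0])
T_cam_from_m = make_frame([0, 0, 0.1], [0, 0, 1], [1, 0, 0])
R = transform_m_to_o(T_cam_from_m, T_cam_from_o)
p_o = R @ np.array([0.0, 0.0, 0.0, 1.0])  # Sigma_m origin in Sigma_o
```

Because both frames are measured in the same camera coordinate system, R follows directly from the two transforms; this is the convenience of a common stereo camera noted in the modification section below.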
(Process 4)
Next, the analysis processing unit 102 acquires the sound signals from the microphone array 1 processed by the signal processing unit 101, analyzes them, and calculates the three-dimensional sound distribution (particle velocity distribution). Based on the principle of acoustic holography, the analysis processing unit 102 then calculates, from the calculated particle velocity distribution, the particle velocity distribution Vm(P(m)) on an arbitrary plane (analysis plane) parallel to the measurement surface 1b. That is, the analysis processing unit 102 calculates the particle velocities at a plurality of analysis points on a plane parallel to the measurement surface 1b. The resulting particle velocity distribution Vm(P(m)) is expressed in the microphone array coordinate system Σm.
The principle of acoustic holography is to obtain the sound pressure on an analysis plane by convolving the sound pressure on the measurement surface with the transfer function from the measurement surface to an arbitrary plane (analysis plane) parallel to it. If the analysis plane is taken to be the sound source surface, the sound pressure on the sound source surface can be obtained. However, since it is difficult to convolve the transfer function directly with the measured sound pressure, it is common for convenience to apply a spatial Fourier transform, which makes the processing easier.
That is, sound is recorded (spatially sampled) with a grid-shaped microphone array, a spatial Fourier transform is applied, the result is multiplied by the transfer function to the analysis plane (for example, the sound source surface), and an inverse spatial Fourier transform is then applied to obtain the sound pressure on the analysis plane (for example, the sound source surface). Using this principle, the particle velocity distribution on the sound source surface can be obtained.
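The transform-multiply-invert procedure described above can be sketched with the angular spectrum of plane waves: transform the measured pressure grid, multiply by the plane-to-plane propagator exp(j·kz·d), and transform back. This is a deliberately simplified illustration (single frequency, uniform grid, no windowing or regularization, all of which a practical near-field acoustic holography implementation needs, particularly for back-propagation toward the source); the names are ours, not the patent's:

```python
import numpy as np

def propagate_pressure(p_meas, dx, k, d):
    """Propagate a complex pressure field p_meas (M x N grid, spacing dx)
    from the measurement plane to a parallel plane a distance d away,
    using the angular spectrum method.

    k is the acoustic wavenumber 2*pi*f/c. kz is real for propagating
    wave components and imaginary for evanescent ones; backward
    propagation (d < 0) amplifies the evanescent part and requires
    regularization in practice.
    """
    M, N = p_meas.shape
    kx = 2 * np.pi * np.fft.fftfreq(M, dx)
    ky = 2 * np.pi * np.fft.fftfreq(N, dx)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    spectrum = np.fft.fft2(p_meas)           # spatial Fourier transform
    propagator = np.exp(1j * kz * d)         # plane-to-plane transfer function
    return np.fft.ifft2(spectrum * propagator)

# Example: a uniform (normally incident) unit-amplitude pressure field,
# which should simply pick up the phase of d metres of travel.
p_meas = np.ones((8, 8), dtype=complex)
p_analysis = propagate_pressure(p_meas, dx=0.01, k=100.0, d=0.05)
```

The particle velocity follows from the same angular spectrum via Euler's equation; the sketch stops at the pressure to keep the idea visible.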
(Process 5)
Next, the analysis processing unit 102 converts the particle velocity distribution Vm(P(m)) calculated in the microphone array coordinate system Σm into the particle velocity distribution Vo(P(m)) in the object coordinate system Σo using the transformation matrix R.
This aligns the sound source surface 2a with the analysis points (the particle velocity distribution Vm(P(m)) of the analysis result).
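When applying R in Process 5, the analysis-point positions and the velocity vectors transform differently: positions use the full homogeneous transform, while velocity vectors, being directions, use only its rotation part. The patent does not spell this step out; the following sketch, with illustrative names, shows the distinction:

```python
import numpy as np

def convert_field(R, points_m, velocities_m):
    """Convert analysis points and their particle-velocity vectors from
    the microphone-array coordinate system to the object coordinate
    system, given a 4x4 homogeneous transform R.

    points_m, velocities_m: (N, 3) arrays in Sigma_m coordinates.
    """
    rot = R[:3, :3]
    # Positions: rotate and translate.
    points_o = points_m @ rot.T + R[:3, 3]
    # Velocity vectors: rotate only; translation does not apply to directions.
    velocities_o = velocities_m @ rot.T
    return points_o, velocities_o

# Example: a 90-degree rotation about z combined with a translation.
R = np.array([[0.0, -1.0, 0.0, 1.0],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  0.0, 0.0, 1.0]])
pts_o, vel_o = convert_field(R,
                             np.array([[1.0, 0.0, 0.0]]),
                             np.array([[1.0, 0.0, 0.0]]))
```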
(Process 6)
Next, the analysis processing unit 102 aligns the sound source surface 2a with the 3D model data of the object to be measured 2.
Based on three or more geometrically distinctive feature points fixed to the object to be measured 2, the second alignment unit performs the alignment by scaling (enlarging or reducing) and rotating the data of the sound source surface 2a so that the coordinates on the sound source surface 2a coincide with the coordinates on the 3D model data. Mounting screws, notches, or the like of the object to be measured 2 are used as the geometric feature points fixed to the object to be measured 2. The feature points may be predetermined points defined by the analysis processing unit 102 or may be selected arbitrarily by an operator.
This allows the 3D model data of the object to be measured 2 to be aligned with the object coordinate system Σo.
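A scale-plus-rotation alignment from three or more corresponding feature points (screws, notches) can be computed in closed form, for example with the Umeyama least-squares similarity transform. The patent names no algorithm, so this is one possible realization; it assumes the point correspondences between the measured surface and the 3D model are already known, and all names are illustrative:

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform with dst ~= s * Q @ src + t,
    estimated from N >= 3 corresponding, non-collinear points
    (Umeyama's method). src, dst: (N, 3) arrays.

    Returns scale s, rotation matrix Q, and translation t.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, S, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    D = np.diag([1.0, 1.0, sign])
    Q = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (xs**2).sum() * len(src)
    t = mu_d - s * Q @ mu_s
    return s, Q, t

# Example: recover a known scale 2, 90-degree z rotation, and translation.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = 2.0 * src @ Rz.T + np.array([1.0, 2.0, 3.0])
s, Q, t = similarity_transform(src, dst)
```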
(Process 7)
Next, in accordance with the above alignment results, the analysis processing unit 102 deforms the 3D model data according to the particle velocity distribution Vo(P(m)) and displays it on the display device 200.
First, based on the particle velocity distribution Vo(P(m)), the analysis processing unit 102 calculates the particle velocity v(m) at an arbitrary point P(m) in the object coordinate system Σo, as shown in FIG. 7. Here, the point P(m) corresponds to a node of the 3D model data (3D mesh model) M.
The particle velocity analysis result is obtained as a three-dimensional vector at each point P(m) in the object coordinate system Σo. The analysis processing unit 102 therefore displays the region (mesh) of the 3D model data corresponding to the point P(m) in a color corresponding to the magnitude of the particle velocity vector. FIG. 8 shows an example in which the colors of the 3D model data are modified according to the magnitude of the particle velocity v(m).
The method of deforming the 3D model data is not limited to the above. For example, the node of the 3D model data corresponding to the point P(m) may be displaced and displayed according to the direction of the particle velocity vector. In this case, the displacement (movement amount) of the node can also be made to correspond to the magnitude of the vector.
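The color-mapping variant of Process 7 amounts to normalizing each node's velocity magnitude and looking it up in a color scale. A minimal sketch follows; the simple blue-to-red ramp and all names are our own illustration, not something specified by the patent:

```python
import numpy as np

def velocity_colors(velocities):
    """Map per-node particle-velocity vectors (N x 3) to RGB colors.

    Each node is colored on a linear blue (slow) -> red (fast) ramp
    according to the magnitude of its velocity vector, normalized over
    all nodes of the mesh.
    """
    v = np.asarray(velocities, float)
    mag = np.linalg.norm(v, axis=1)
    span = mag.max() - mag.min()
    t = (mag - mag.min()) / span if span > 0 else np.zeros_like(mag)
    # RGB: red rises with speed, blue falls, green stays off.
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)

# Example: three nodes at rest, medium speed, and maximum speed.
cols = velocity_colors([[0.0, 0.0, 0.0],
                        [0.5, 0.0, 0.0],
                        [1.0, 0.0, 0.0]])
```

The displacement variant mentioned above would instead offset each mesh node along its velocity vector, optionally scaled by the magnitude.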
 By repeating Processes 2 through 7 each time the microphone array 1 is moved, the analysis processing unit 102 can display the analysis results for the entire surface of the object to be measured 2 in association with the 3D model data of the object to be measured 2.
 As described above, the calculation unit in the present embodiment calculates, from the sound signals acquired by the microphone array 1, a three-dimensional distribution of particle velocity, which is a physical quantity representing a characteristic of the sound, and calculates the particle velocities at analysis points on a plane parallel to the measurement surface 1b of the microphone array 1. The first alignment unit performs a first alignment between the sound source surface 2a and the analysis points based on the three-dimensional position information of the points on the sound source surface 2a of the object to be measured 2 and the three-dimensional position information of the points on the measurement surface 1b of the microphone array 1. The second alignment unit performs a second alignment between the three-dimensional model data of the sound source surface 2a and the sound source surface 2a. The display unit then deforms the three-dimensional model data according to the particle velocity at the analysis points in accordance with these alignment results and displays it on the display device 200.
 Because the present invention displays the surface of the object to be measured 2 in a deformed manner, the structure of the surface of the object to be measured 2 serving as the sound source and the vibration of that surface can be displayed in appropriate association with each other. The acoustic analysis system may also include an acoustic analysis unit that performs numerical analysis. With this configuration, the result of numerical analysis on the three-dimensional model data and the result of superimposing the measured analysis result on the three-dimensional model data can be displayed side by side. In this case, the user can compare the numerical analysis result with the measured result based on the same model data, which enables detailed analysis. The acoustic analysis unit performs, for example, a frequency response analysis based on the three-dimensional model data and outputs the analysis result. The frequency response analysis analyzes, for example, sound pressure data at a specific frequency, acoustic power, and the particle velocity in the space around the object under analysis. The display unit displays the alignment results of the first alignment unit and the second alignment unit side by side with the analysis result of the acoustic analysis unit on the three-dimensional model data.
 The display unit can also display the region of the three-dimensional model data corresponding to each analysis point in a color corresponding to the magnitude of the particle velocity at that analysis point. This makes it easy to confirm the magnitude of the vibration of the surface of the object to be measured 2. Furthermore, the display unit can displace and display the node of the three-dimensional model data corresponding to each analysis point according to the magnitude and direction of the particle velocity at that analysis point. In this case, the magnitude and direction of the vibration of the surface of the object to be measured 2 can easily be confirmed.
 When the acoustic analysis device 100 aligns the sound source surface 2a with the analysis points, the derivation unit derives the transformation matrix R from the microphone array coordinate system Σm, whose origin is an arbitrary point Pm(0) on the measurement surface 1b, to the object coordinate system Σo, whose origin is an arbitrary point Po(0) on the sound source surface 2a. The conversion unit then converts the analysis points in the microphone array coordinate system Σm into points in the object coordinate system Σo using the transformation matrix R. This allows the acoustic analysis device 100 to appropriately align the sound source surface 2a with the analysis points.
As described above, in the present embodiment, the sound field analysis results can be appropriately superimposed on the 3D model data of the sound source surface 2a of the object to be measured 2, and the 3D model data can be deformed and displayed accordingly. It is therefore possible to analyze in detail how the surface of the object to be measured 2 vibrates.
For example, the acoustic analysis device 100 according to the present embodiment can display the vibration of the surface of the object to be measured 2 that occurs when a motor is incorporated into a final product, in association with the structure of the object to be measured 2. As a result, the cause of noise can be identified more easily, for example, and the man-hours required for noise countermeasures can be reduced.
(Modification)
The above embodiment describes the case in which a common stereo camera 3, fixed independently at a position separated from the sound source surface 2a of the object to be measured 2 and the measurement surface 1b of the microphone array 1, is used to acquire the three-dimensional position information of points on the sound source surface 2a and of points on the measurement surface 1b of the microphone array 1 arranged in the vicinity of the sound source surface 2a. That is, the first acquisition unit, the second acquisition unit, and the third acquisition unit use a common stereo camera fixed independently at a position separated from the sound source surface and the measurement surface. However, if the positional relationship between cameras (the correspondence between their camera coordinate systems) is known, the above three-dimensional position information may be acquired using different stereo cameras.
Note, however, that using a common stereo camera 3 as in the embodiment described above is preferable because the transformation matrix R from the microphone array coordinate system Σm to the object coordinate system Σo can then easily be derived via the common camera coordinate system Σc.
Furthermore, the above embodiment describes the case in which the stereo camera 3 is used to acquire the three-dimensional position information of the object to be measured 2 and the microphone array 1, but the means of acquiring the three-dimensional position information is not limited to the stereo camera 3. For example, the means of acquiring the three-dimensional position information may be a depth camera, a laser scanner, or an ultrasonic sensor capable of detecting three-dimensional positions.
Markers or the like may also be placed on the object to be measured 2 or the microphone array 1 in order to improve the accuracy with which their three-dimensional position information is acquired.
 The above embodiment describes the case in which 3D CAD data is used as the 3D model data, but the 3D model data is not limited to 3D CAD data. For example, the 3D model data may be created from the points Po(n) = (xo(n), yo(n), zo(n)) on the object to be measured 2 measured by the stereo camera 3 used for the alignment between the object to be measured 2 and the microphone array 1. Alternatively, the object to be measured 2 may be imaged with a 3D scanner to create the 3D model data.
 Reference Signs List: 1: microphone array; 2: object to be measured (sound source); 2a: sound source surface; 3: stereo camera; 100: acoustic analysis device; 101: signal processing unit; 102: analysis processing unit; 103: storage unit; 200: display device; 1000: acoustic analysis system; mc: microphone

Claims (7)

  1.  An acoustic analysis device comprising:
     a first acquisition unit that acquires three-dimensional model data of a sound source surface of an object to be measured;
     a second acquisition unit that acquires three-dimensional position information of points on the sound source surface;
     a third acquisition unit that acquires three-dimensional position information of points on a measurement surface of a microphone array arranged in the vicinity of the sound source surface;
     a calculation unit that calculates, from sound signals acquired by the microphone array, a three-dimensional distribution of particle velocity, which is a physical quantity representing a characteristic of sound, and calculates the particle velocity at analysis points on a plane parallel to the measurement surface;
     a first alignment unit that aligns the sound source surface with the analysis points based on the three-dimensional position information of the points on the sound source surface and the three-dimensional position information of the points on the measurement surface;
     a second alignment unit that aligns the three-dimensional model data with the sound source surface based on three or more feature points fixed to the object to be measured; and
     a display unit that deforms the three-dimensional model data in accordance with the particle velocity at the analysis points and displays the deformed data, in accordance with the alignment results of the first alignment unit and the second alignment unit.
  2.  The acoustic analysis device according to claim 1, wherein the display unit displays a region of the three-dimensional model data corresponding to each analysis point in a color corresponding to the magnitude of the particle velocity at that analysis point.
  3.  The acoustic analysis device according to claim 1 or 2, wherein the display unit displays the node of the three-dimensional model data corresponding to an analysis point displaced in accordance with the direction of the particle velocity at that analysis point.
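A minimal sketch of the display behaviour in claims 2 and 3, under assumed conventions: the particle-velocity magnitude at an analysis point picks the colour of the corresponding model region, and the corresponding mesh node is shifted along the velocity direction. The blue-to-red ramp and the displacement scale are illustrative choices, not part of the patent.

```python
import numpy as np

def velocity_to_color(v, v_max):
    """Map particle-velocity magnitude |v| onto a blue-to-red RGB ramp."""
    m = min(np.linalg.norm(v) / v_max, 1.0)   # normalised, clipped magnitude
    return (m, 0.0, 1.0 - m)                  # red grows, blue fades

def displace_node(node, v, scale=0.01):
    """Shift a mesh node along the particle-velocity direction."""
    n = np.linalg.norm(v)
    if n == 0.0:
        return node
    return node + scale * (v / n)

v = np.array([0.0, 0.0, 2.0])                 # velocity at one analysis point
color = velocity_to_color(v, v_max=4.0)       # -> (0.5, 0.0, 0.5)
node = displace_node(np.array([1.0, 1.0, 0.0]), v)
```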
  4.  The acoustic analysis device according to any one of claims 1 to 3, wherein the second acquisition unit and the third acquisition unit acquire their respective three-dimensional position information using a common stereo camera independently fixed at a position spaced apart from the sound source surface and the measurement surface.
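Claim 4 obtains the 3D position information with a single fixed stereo camera. How such a camera recovers a 3D point is standard stereo triangulation; the rectified-pair sketch below, with an assumed focal length, baseline, and principal point, shows the basic geometry (the patent does not prescribe these values or this exact formulation).

```python
import numpy as np

def triangulate(f, B, cx, cy, xl, yl, xr):
    """Recover (X, Y, Z) from a rectified stereo correspondence.

    f: focal length in pixels, B: baseline in metres, (cx, cy): principal
    point, (xl, yl): pixel in the left image, xr: matching column in the
    right image.
    """
    d = xl - xr              # disparity in pixels
    Z = f * B / d            # depth from similar triangles
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    return np.array([X, Y, Z])

# Assumed camera: f = 1000 px, 10 cm baseline, principal point (600, 400).
point = triangulate(f=1000.0, B=0.1, cx=600.0, cy=400.0,
                    xl=620.0, yl=420.0, xr=580.0)
```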
  5.  The acoustic analysis device according to any one of claims 1 to 4, wherein the first alignment unit comprises:
      a derivation unit that derives a transformation matrix from a microphone array coordinate system, whose origin is a point on the measurement surface, to an object coordinate system, whose origin is a point on the sound source surface; and
      a conversion unit that uses the transformation matrix to convert the analysis points in the microphone array coordinate system into points in the object coordinate system.
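The derivation and conversion units of claim 5 amount to applying a single rigid transformation between the two coordinate systems. A minimal sketch, assuming an example rotation and offset (the patent does not specify the matrix entries here):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous matrix from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_points(T, points):
    """Apply T to an (N, 3) array of points, returning (N, 3)."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homog.T).T[:, :3]

# Assumed example: mic-array frame rotated 90 deg about z and offset
# 0.1 m along x relative to the object (sound source) frame.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.1, 0.0, 0.0])
T = make_transform(R, t)

analysis_points = np.array([[0.02, 0.0, 0.05]])       # in mic-array coordinates
object_points = transform_points(T, analysis_points)  # in object coordinates
```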
  6.  The acoustic analysis device according to any one of claims 1 to 5, further comprising an acoustic analysis unit that performs numerical analysis based on the three-dimensional model data, wherein the display unit displays the alignment result and the analysis result of the acoustic analysis unit side by side.
  7.  An acoustic analysis method comprising the steps of:
      acquiring three-dimensional model data of a sound source surface of an object to be measured;
      acquiring three-dimensional position information of a point on the sound source surface;
      acquiring three-dimensional position information of a point on a measurement surface of a microphone array disposed in the vicinity of the sound source surface;
      calculating, from a sound signal acquired by the microphone array, a three-dimensional distribution of particle velocity, a physical quantity representing a characteristic of the sound, and calculating the particle velocity at analysis points on a plane parallel to the measurement surface;
      aligning the sound source surface and the analysis points based on the three-dimensional position information of the point on the sound source surface and the three-dimensional position information of the point on the measurement surface;
      aligning the three-dimensional model data and the sound source surface based on three or more feature points fixed to the object to be measured; and
      deforming and displaying the three-dimensional model data according to the particle velocity at the analysis points, in accordance with the alignment results.
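Both alignment steps in the method rest on estimating a rigid transform from three or more corresponding points. The patent does not name an algorithm; the sketch below uses the SVD-based Kabsch least-squares fit, one standard choice, with synthetic feature points as an assumed example.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src (N,3) onto dst (N,3), N >= 3."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic feature points (assumed): rotate 30 deg about z and translate.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.3])
src = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
dst = src @ R_true.T + t_true

R_est, t_est = fit_rigid_transform(src, dst)  # recovers R_true, t_true
```

With noiseless correspondences the fit is exact; with measured feature points it returns the least-squares best rigid alignment, which is why the claims require at least three non-collinear points.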
PCT/JP2019/013296 2018-03-28 2019-03-27 Acoustic analysis device and acoustic analysis method WO2019189424A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980022442.7A CN111971536A (en) 2018-03-28 2019-03-27 Acoustic analysis device and acoustic analysis method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018062685 2018-03-28
JP2018-062685 2018-03-28

Publications (1)

Publication Number Publication Date
WO2019189424A1 (en) 2019-10-03

Family

ID=68059182

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/013296 WO2019189424A1 (en) 2018-03-28 2019-03-27 Acoustic analysis device and acoustic analysis method

Country Status (2)

Country Link
CN (1) CN111971536A (en)
WO (1) WO2019189424A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000075014A (en) * 1998-09-01 2000-03-14 Isuzu Motors Ltd Method for searching sound source
US20110120222A1 (en) * 2008-04-25 2011-05-26 Rick Scholte Acoustic holography
JP2013528795A * 2010-05-04 2013-07-11 Creaform Inc. Target inspection using reference capacitance analysis sensor
JP2016090289A * 2014-10-30 2016-05-23 Ono Sokki Co., Ltd. Distribution figure display device and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102879080B (en) * 2012-09-11 2014-10-15 上海交通大学 Sound field analysis method based on image recognition positioning and acoustic sensor array measurement
DE102014217598A1 (en) * 2014-09-03 2016-03-03 Gesellschaft zur Förderung angewandter Informatik e.V. Method and arrangement for acquiring acoustic and optical information and a corresponding computer program and a corresponding computer-readable storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GOSEKI, MASAFUMI ET AL.: "Visualization of sound pressure distribution by combining microphone array processing and camera image processing", THE PROCEEDINGS OF JSME ANNUAL CONFERENCE ON ROBOTICS AND MECHATRONICS (ROBOMEC) 2011, vol. 11, 26 May 2011 (2011-05-26), pages 2P2-L02 (1) - 2P2-L02 (4), XP032165956, DOI: 10.1299/jsmermd.2011._2P2-L02_1 *
NAGATOMO, HIROSHI: "Noise source identification system", THE JOURNAL OF THE ACOUSTICAL SOCIETY OF JAPAN, vol. 72, no. 7, 2016, pages 416 - 417, XP055640367, DOI: 10.20697/jasj.72.7_416 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021148626A * 2020-03-19 2021-09-27 Mitsubishi Heavy Industries, Ltd. Sound pressure estimation system, method for estimating sound pressure, and sound pressure estimation program
JP7314086B2 2020-03-19 2023-07-25 Mitsubishi Heavy Industries, Ltd. Sound pressure estimation system, its sound pressure estimation method, and sound pressure estimation program
CN111709178A (en) * 2020-05-20 2020-09-25 上海升悦声学工程科技有限公司 Three-dimensional space-based acoustic particle drop point simulation analysis method
CN111709178B (en) * 2020-05-20 2023-03-28 上海升悦声学工程科技有限公司 Three-dimensional space-based acoustic particle drop point simulation analysis method

Also Published As

Publication number Publication date
CN111971536A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
WO2019189417A1 (en) Acoustic analysis device and acoustic analysis method
Poozesh et al. Feasibility of extracting operating shapes using phase-based motion magnification technique and stereo-photogrammetry
US11480461B2 (en) Compact system and method for vibration and noise mapping
JP5378374B2 (en) Method and system for grasping camera position and direction relative to real object
US11307285B2 (en) Apparatus, system and method for spatially locating sound sources
JPH04332544A Acoustic hologram system
US11478219B2 (en) Handheld three-dimensional ultrasound imaging system and method
EP3397935B1 (en) Vibration and noise mapping system and method
EP2138813A1 (en) Sound source separating device and sound source separating method
JP6416456B2 (en) Car body stiffness test apparatus and car body stiffness test method
WO2019189424A1 (en) Acoustic analysis device and acoustic analysis method
JP2012150059A (en) Method and device for estimating sound source
Shao et al. Target-free 3D tiny structural vibration measurement based on deep learning and motion magnification
US9557400B2 (en) 3D soundscaping
Gardonio et al. Reconstruction of the sound radiation field from flexural vibration measurements with multiple cameras
JP2020527429A (en) Motion information acquisition method and equipment
KR100730297B1 (en) Sound source localization method using Head Related Transfer Function database
EP3203760A1 (en) Method and apparatus for determining the position of a number of loudspeakers in a setup of a surround sound system
KR102058776B1 (en) Method and Apparatus for Diagnosing Sound Source via Measurements of Micro-vibrations of Objects Using Multi-beams of Invisible Infra-red Ray Laser and Infra-Red Camera
JP2011122854A (en) System and program for determining incoming direction of sound
JP2015161659A (en) Sound source direction estimation device and display device of image for sound source estimation
US11317200B2 (en) Sound source separation system, sound source position estimation system, sound source separation method, and sound source separation program
Shen et al. Obtaining four-dimensional vibration information for vibrating surfaces with a Kinect sensor
KR20210132518A (en) Non contact vibration detection system and Method for controlling the same
JP2011194084A (en) Ultrasonic diagnostic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19776007

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19776007

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP