CN111971536A - Acoustic analysis device and acoustic analysis method - Google Patents


Info

Publication number: CN111971536A
Application number: CN201980022442.7A
Authority: CN (China)
Prior art keywords: analysis, sound source, point, plane, model data
Legal status: Pending (assumed status, not a legal conclusion)
Inventor: 丰岛直穗子
Assignee (current and original): Nidec Corp
Original language: Chinese (zh)

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01H: MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H3/00: Measuring characteristics of vibrations by using a detector in a fluid
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00: Details of transducers, loudspeakers or microphones
    • H04R1/20: Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32: Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40: Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers

Abstract

An acoustic analysis device (100) is provided with: a first obtaining unit that obtains three-dimensional model data of a sound source plane of an object to be measured; a second obtaining unit that obtains three-dimensional position information of a point on the sound source plane; a third obtaining unit that obtains three-dimensional position information of a point on a measurement surface of a microphone array disposed in the vicinity of the sound source surface; a calculation unit that calculates a three-dimensional distribution of particle velocities as physical quantities representing characteristics of sound from sound signals obtained by the microphone array, and calculates particle velocities at analysis points on a plane parallel to the measurement plane; a first alignment unit that aligns the sound source plane and the analysis point; a second alignment unit that aligns the three-dimensional model data with the sound source plane; and a display unit that displays the three-dimensional model data deformed according to the particle velocity at the analysis point based on the alignment result.

Description

Acoustic analysis device and acoustic analysis method
Technical Field
The present invention relates to an acoustic analysis device and an acoustic analysis method.
Background
In recent years, due to the increasing noise reduction demand of products, it has been required to measure and analyze the spatial distribution of a sound field. In particular, in acoustic analysis, it is desirable to be able to intuitively grasp sound transmission by visualizing a sound field.
Patent document 1 discloses a sound field visualizing apparatus that detects a position in a space of a microphone by a position sensor attached to the microphone, and displays an image corresponding to a sound pressure of a sound signal output from the microphone at a display position corresponding to the position of the microphone.
Documents of the prior art
Patent document
Patent document 1: Japanese Patent No. 5353316
Disclosure of Invention
Problems to be solved by the invention
In order to express the acoustic state of the object to be measured more intuitively, it is conceivable to display an image in which the acoustic analysis result is superimposed on the object to be measured. One such method is to fix a camera to the microphone array, photograph the object to be measured with the camera, and display the object and the acoustic analysis result superimposed on each other.
However, merely displaying an image in which the acoustic analysis result is superimposed on the object does not allow the state of the surface vibration of the object serving as a sound source to be analyzed in detail.
Therefore, an object of the present invention is to provide an acoustic analysis device and an acoustic analysis method that can analyze the state of surface vibration of a measurement target in detail.
Means for solving the problems
In order to solve the above problem, an acoustic analysis device according to an aspect of the present invention includes: a first obtaining unit that obtains three-dimensional model data of a sound source plane of an object to be measured; a second obtaining unit that obtains three-dimensional position information of a point on the sound source plane; a third obtaining unit that obtains three-dimensional position information of a point on a measurement surface of a microphone array disposed in the vicinity of the sound source surface; a calculation unit that calculates a three-dimensional distribution of particle velocities as physical quantities representing characteristics of sound from sound signals obtained by the microphone array, and calculates the particle velocities at analysis points on a plane parallel to the measurement plane; a first alignment unit that performs alignment between the sound source plane and the analysis point based on the three-dimensional position information of the point on the sound source plane and the three-dimensional position information of the point on the measurement plane; a second alignment unit that performs alignment between the three-dimensional model data and the sound source plane based on feature points fixed to three or more points of the object; and a display unit that displays the three-dimensional model data deformed in accordance with the particle velocity at the analysis point based on each alignment result of the first alignment portion and the second alignment portion.
In addition, an acoustic analysis method according to an aspect of the present invention includes: obtaining three-dimensional model data of a sound source surface of a measured object; obtaining three-dimensional position information of a point on the sound source plane; obtaining three-dimensional position information of a point on a measurement surface of a microphone array disposed in the vicinity of the sound source surface; calculating a three-dimensional distribution of particle velocities as physical quantities representing characteristics of sound from sound signals obtained by the microphone array, and calculating the particle velocities at analysis points on a plane parallel to the measurement plane; performing alignment between the sound source plane and the analysis point based on the three-dimensional position information of the point on the sound source plane and the three-dimensional position information of the point on the measurement plane; performing alignment between the three-dimensional model data and the sound source plane based on feature points fixed to three or more points of the object; and deforming and displaying the three-dimensional model data according to the particle velocity at the analysis point based on the alignment result.
Effects of the invention
According to one aspect of the present invention, the measured particle velocity data can be appropriately superimposed on the three-dimensional model data of the object to be measured, and the three-dimensional model data can be displayed in a deformed manner. Therefore, the state of the surface vibration of the object to be measured can be analyzed in detail.
Drawings
Fig. 1 is a diagram showing an example of an acoustic analysis system.
Fig. 2 is a diagram for explaining an outline of analysis processing by the analysis processing unit.
Fig. 3 is a diagram illustrating an imaging method of an object to be measured.
Fig. 4 is a diagram showing a photographing method of a microphone array.
Fig. 5 is a diagram illustrating a coordinate conversion method.
Fig. 6 is a diagram for explaining a display method of three-dimensional model data.
Fig. 7 shows an example of a three-dimensional mesh model.
Fig. 8 is a display example of three-dimensional model data.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
The scope of the present invention is not limited to the following embodiments, and can be arbitrarily changed within the scope of the technical idea of the present invention.
Fig. 1 shows an example of the configuration of an acoustic analysis system 1000 including a microphone array 1 according to the present embodiment.
The acoustic analysis system 1000 according to the present embodiment analyzes sound from a measurement target (sound source) 2 by near-field acoustic holography and displays the analysis result. Near-field acoustic holography requires measuring the sound pressure distribution on a measurement plane close to and parallel to the sound source plane 2a, so the microphone array 1, in which a plurality of microphones mc are arranged in a lattice, is used.
The microphone array 1 of the present embodiment includes M × N microphones mc arranged in a lattice shape. The microphone mc can be, for example, a MEMS (Micro-Electro-Mechanical Systems) microphone. The acoustic analysis system 1000 analyzes the signal (sound signal) input from each of the M × N microphones mc, and detects a physical quantity representing a feature of the sound.
The acoustic analysis system 1000 includes an imaging device 3 independent from the microphone array 1 and the object 2. In the present embodiment, a case where the imaging device 3 is a stereo camera will be described.
The stereo camera 3 is fixed at a position separated from the microphone array 1 and the object 2 by a predetermined distance. The stereo camera 3 can obtain three-dimensional position information of the object 2 and three-dimensional position information of the microphone array 1.
The acoustic analysis system 1000 further includes an acoustic analysis device 100 and a display device 200. The acoustic analysis device 100 includes a signal processing unit 101, an analysis processing unit 102, and a storage unit 103. The acoustic analysis device 100 also includes a first obtaining unit, a second obtaining unit, a third obtaining unit, a calculation unit, a first alignment unit, a second alignment unit, and a display unit. The first obtaining unit obtains an image of the sound source plane 2a of the object 2. The second obtaining unit obtains three-dimensional position information of a point on the sound source plane 2a. The third obtaining unit obtains three-dimensional position information of a point on the measurement surface 1b of the microphone array 1 disposed in the vicinity of the sound source plane 2a. The first alignment unit includes a deriving unit and a conversion unit. The signal processing unit 101 performs predetermined signal processing on the signal from each microphone mc of the microphone array 1 to obtain sound signals for acoustic analysis. The signal processing may include processing to synchronize the signals of the M × N microphones mc provided in the microphone array 1.
The analysis processing unit 102 analyzes the audio signal subjected to the signal processing by the signal processing unit 101, and detects a three-dimensional distribution of a physical quantity representing characteristics of the audio. In the present embodiment, the three-dimensional distribution of the physical quantity representing the characteristics of the sound is a particle velocity distribution.
The analysis processing unit 102 performs display control as follows: the particle velocity, which is a physical quantity representing the characteristics of sound, is displayed on the display device 200 as the vibration of the sound source plane 2 a. In the present embodiment, the analysis processing unit 102 performs display control as follows: three-dimensional model data (3D model data) representing the structure of the sound source plane 2a (the surface of the object 2) is deformed and displayed in accordance with the particle velocity. The analysis process performed by the analysis processing unit 102 will be described later.
The storage unit 103 stores analysis results and the like of the analysis processing unit 102. The storage unit 103 stores the 3D model data. Here, the 3D model data can be, for example, 3D-CAD data.
The display device 200 includes a monitor such as a liquid crystal display, and displays the analysis result of the acoustic analysis device 100.
In the present embodiment, as shown in fig. 2, the microphone array 1 is smaller than the object 2. The microphone array 1 therefore measures sound signals a plurality of times while being moved near the sound source plane 2a of the object 2; the acoustic analysis device 100 analyzes the sound signals from each measurement, combines the analysis results, and displays them on the display device 200. In this way, the acoustic analysis device 100 analyzes the sound field 1a over the entire surface of the object 2 using the microphone array 1, which is smaller than the object 2, and displays the analysis result on the display device 200.
Specifically, the analysis processing unit 102 of the acoustic analysis device 100 obtains three-dimensional position information of a point on the sound source plane 2a measured by the stereo camera 3 fixed by the fixing device 3 a. Further, the analysis processing unit 102 captures an image of the microphone array 1 in the sound reception placed near the sound source plane 2a of the measurement object 2 by the stereo camera 3, and obtains three-dimensional position information of a point on the measurement plane of the microphone array 1.
In addition, the analysis processing section 102 analyzes the sound signals obtained by the microphone array 1, and calculates the particle velocity distribution of the sound source plane 2a based on the analysis result of the sound field 1a as shown in fig. 3. Then, the analysis processing unit 102 performs alignment between the 3D model data of the sound source plane 2a of the object 2 and the particle velocity distribution of the sound source plane 2a, and based on the alignment result, displays the 3D model data deformed in accordance with the particle velocity.
The analysis process performed by the analysis processing unit 102 will be specifically described below.
(Process 1)
First, before measuring the sound field with the microphone array 1, the analysis processing unit 102 obtains an image of the object 2 captured by the stereo camera 3 fixed by the fixing device 3a. That is, as shown in fig. 4, the stereo camera 3 images the sound source plane 2a of the object 2 in a state where the microphone array 1 is not within the imaging range of the stereo camera 3, that is, in a state where the microphone array 1 is not disposed near the object 2. In this way, the analysis processing unit 102 can obtain an image of the object 2 that does not include the microphone array 1.
The analysis processing unit 102 obtains three-dimensional position information of a point on the sound source plane 2a measured by the stereo camera 3 fixed by the fixing device 3a. Specifically, the analysis processing unit 102 obtains three-dimensional position information Po(n) = (xo(n), yo(n), zo(n)) of three or more points on the sound source plane 2a from the stereo camera 3, and calculates the shape So of the sound source plane 2a (ao·x + bo·y + co·z + do = 0). Here, n is an integer satisfying 0 ≤ n ≤ No (No ≥ 2). In this way, the analysis processing unit 102 calculates the position and shape of the object 2 in the camera coordinate system Σc.
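As a rough sketch of this step, the plane So could be fitted to the measured points by least squares. The helper below is illustrative only; the function name `fit_plane` and the N × 3 input format are assumptions, not the patent's implementation:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane a*x + b*y + c*z + d = 0 to N >= 3 points (N x 3 array).

    The unit normal (a, b, c) is the right-singular vector of the
    centered points with the smallest singular value.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # SVD of the centered points; the last row of Vt is the plane normal
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                 # (a, b, c), unit length
    d = -normal @ centroid          # chosen so that normal . p + d = 0 on the plane
    return normal, d
```

With exact measurements the fit is exact; with noisy stereo-camera points it returns the total-least-squares plane.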
(Process 2)
Next, as shown in fig. 5, when the microphone array 1 picks up sound near the sound source plane 2a of the object 2, the analysis processing unit 102 obtains three-dimensional position information Pm(n) = (xm(n), ym(n), zm(n)) of any three or more points of the microphone array 1 from the stereo camera 3 fixed by the fixing device 3a, and calculates the attitude Om of the measurement plane 1b of the microphone array 1 (am·x + bm·y + cm·z + dm = 0). Here, n is an integer satisfying 0 ≤ n ≤ Nm (Nm ≥ 2). In this way, the analysis processing unit 102 calculates the position and orientation of the microphone array 1 in the camera coordinate system Σc.
(Process 3)
Next, as shown in fig. 6, the analysis processing unit 102 sets a measurement object coordinate system Σo in which an arbitrary point on the measurement object 2, for example Po(0), is set as the origin and the xz plane is set as the sound source plane 2a. As shown in fig. 6, the analysis processing unit 102 also sets a microphone array coordinate system Σm in which an arbitrary point on the microphone array 1, for example Pm(0), is set as the origin and the xz plane is set as the measurement plane 1b. Then, the analysis processing unit 102 calculates a conversion matrix R from the microphone array coordinate system Σm to the object coordinate system Σo.
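The conversion matrix R can be sketched as a chain of homogeneous transforms. The code below assumes each coordinate system is specified by its origin and its x and z axes measured in the common camera frame; the helper names (`frame_matrix`, `mic_to_object`) are hypothetical, not from the patent:

```python
import numpy as np

def frame_matrix(origin, x_axis, z_axis):
    """4x4 homogeneous matrix mapping frame coordinates to camera
    coordinates, given the frame's origin and its x/z axes in camera space."""
    x = np.asarray(x_axis, float); x /= np.linalg.norm(x)
    z = np.asarray(z_axis, float); z /= np.linalg.norm(z)
    y = np.cross(z, x)              # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x, y, z
    T[:3, 3] = origin
    return T

def mic_to_object(T_cam_obj, T_cam_mic):
    """Conversion matrix from mic-array coordinates to object coordinates:
    mic -> camera, then camera -> object."""
    return np.linalg.inv(T_cam_obj) @ T_cam_mic
```

A point Pm in the microphone array coordinate system Σm is then mapped into Σo as `mic_to_object(To, Tm) @ [x, y, z, 1]`.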
(Process 4)
Next, the analysis processing unit 102 obtains the sound signal from the microphone array 1 subjected to the signal processing by the signal processing unit 101, analyzes the sound signal, and calculates a three-dimensional distribution (particle velocity distribution) of the sound. Then, based on the principle of acoustic holography, the analysis processing unit 102 calculates a particle velocity distribution Vm(P(m)) on an arbitrary plane (analysis plane) parallel to the measurement plane 1b from the calculated distribution. That is, the analysis processing unit 102 calculates the particle velocities at a plurality of analysis points on a plane parallel to the measurement plane 1b. The particle velocity distribution Vm(P(m)) of the analysis result is calculated in the microphone array coordinate system Σm.
The principle of acoustic holography is as follows: the sound pressure on the analysis plane is obtained by convolving the sound pressure on the measurement plane with the transfer function from the measurement plane to an arbitrary plane (analysis plane) parallel to it. In particular, if the analysis plane is set at the sound source plane, the sound pressure at the sound source plane can be obtained. Since directly convolving the transfer function with the measured sound pressure is computationally difficult, the processing is generally simplified by working in the spatial Fourier domain.
That is, sound is recorded by the lattice microphone array (spatial sampling), a spatial Fourier transform is applied, the result is multiplied by the transfer function to the analysis plane (e.g., the sound source plane), and an inverse spatial Fourier transform is applied, thereby obtaining the sound pressure on the analysis plane. Using this principle, the distribution of the particle velocity at the sound source plane can also be obtained.
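The Fourier-domain procedure above can be sketched for a sound-pressure grid as follows. This is a minimal angular-spectrum illustration assuming an ideal, unregularized propagator; the patent does not specify windowing or regularization details, and the function name is hypothetical:

```python
import numpy as np

def nah_project(p_meas, dx, freq, dist, c=343.0):
    """Propagate a measured sound-pressure grid p_meas (complex, sampled at
    spacing dx) from the measurement plane to a parallel plane a signed
    distance `dist` away. Simplified planar-NAH sketch."""
    k = 2 * np.pi * freq / c                   # acoustic wavenumber
    ny, nx = p_meas.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)  # spatial frequencies (rad/m)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    # kz is real for propagating components, imaginary for evanescent ones
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    P = np.fft.fft2(p_meas)                    # spatial Fourier transform
    P_plane = P * np.exp(1j * kz * dist)       # transfer function to the plane
    return np.fft.ifft2(P_plane)               # back to the spatial domain
```

Evanescent components grow exponentially when back-propagating toward the source, which is why practical NAH implementations add filtering or regularization at this step.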
(Process 5)
Next, the analysis processing unit 102 converts the particle velocity distribution Vm(P(m)) of the analysis result, calculated in the microphone array coordinate system Σm, into the particle velocity distribution Vo(P(m)) in the object coordinate system Σo using the conversion matrix R.
This enables alignment between the sound source plane 2a and the analysis points (the particle velocity distribution Vm(P(m)) obtained as the analysis result).
(Process 6)
Next, the analysis processing unit 102 performs alignment of the sound source plane 2a and the 3D model data of the object 2.
The second alignment unit performs alignment by enlarging, reducing, and rotating the data of the sound source plane 2a so that the coordinates on the sound source plane 2a match the coordinates on the 3D model data, based on three or more feature points having geometric features fixed to the object 2. Here, the geometric feature points fixed to the object 2 are, for example, attachment screws, notches, and the like of the object 2. The feature points may be predetermined points defined by the analysis processing unit 102, or may be arbitrarily selected by an operator.
This allows the 3D model data of the object 2 to be aligned with the object coordinate system Σ o.
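Estimating the scale, rotation, and translation from three or more feature-point correspondences is a standard problem; the sketch below uses Umeyama's similarity-transform method as one assumed approach, since the patent does not name the algorithm used:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with
    s * R @ src[i] + t ~= dst[i] from >= 3 corresponding points
    (Umeyama's method)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    H = dst_c.T @ src_c / len(src)              # cross-covariance matrix
    U, S, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(U @ Vt))       # guard against a reflection
    D = np.diag([1.0, 1.0, sign])
    R = U @ D @ Vt
    var_s = (src_c ** 2).sum() / len(src)       # source variance
    s = (S * np.diag(D)).sum() / var_s          # isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying the estimated (s, R, t) to the sound source plane data realizes the enlargement/reduction/rotation alignment described above.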
(Process 7)
Then, the analysis processing unit 102 deforms the 3D model data according to the particle velocity distribution Vo(P(m)) based on the alignment results, and displays the deformed data on the display device 200.
First, based on the particle velocity distribution Vo(P(m)), the analysis processing unit 102 calculates the particle velocity V(m) at an arbitrary point P(m) in the object coordinate system Σo, as shown in fig. 7. Here, the arbitrary point P(m) is a point corresponding to a node of the 3D model data (3D mesh model) M.
The analysis result of the particle velocity is obtained as a three-dimensional vector at each point P(m) in the object coordinate system Σo. The analysis processing unit 102 therefore colors the region (mesh) corresponding to the point P(m) in the 3D model data according to the magnitude of the vector indicating the particle velocity, and displays it. Fig. 8 shows an example in which the color of the 3D model data is changed according to the magnitude of the particle velocity V(m).
The method of deforming the 3D model data is not limited to the above. For example, a node corresponding to the point P(m) in the 3D model data may be moved and displayed in accordance with the direction of the vector indicating the particle velocity. In this case, the amount of deformation (amount of movement) of the node can be made to correspond to the magnitude of the vector.
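Both display variants (coloring a mesh region by velocity magnitude, moving a node along the velocity vector) can be sketched as follows; the node-array layout and the simple blue-to-red color ramp are illustrative assumptions, not the patent's rendering method:

```python
import numpy as np

def deform_and_color(nodes, velocities, scale=1.0):
    """Displace each mesh node along its particle-velocity vector and map
    the velocity magnitude to a blue (small) -> red (large) RGB color."""
    nodes = np.asarray(nodes, float)
    v = np.asarray(velocities, float)
    displaced = nodes + scale * v               # deformation along v, size ~ |v|
    mag = np.linalg.norm(v, axis=1)
    ratio = mag / mag.max() if mag.max() > 0 else np.zeros_like(mag)
    colors = np.column_stack([ratio, np.zeros_like(ratio), 1.0 - ratio])
    return displaced, colors
```

The `displaced` nodes and per-node `colors` would then be handed to whatever mesh renderer drives the display device.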
The analysis processing unit 102 repeats Processes 2 to 7 each time the microphone array 1 is moved, so that the analysis result of the entire surface of the object 2 can be displayed in association with the 3D model data of the object 2.
In this way, the calculation unit of the present embodiment calculates a three-dimensional distribution of particle velocities as physical quantities representing characteristics of sound from the sound signals obtained by the microphone array 1, and calculates the particle velocities at analysis points on a plane parallel to the measurement plane 1b of the microphone array 1. The first alignment unit performs the first alignment between the sound source plane 2a and the analysis points based on the three-dimensional position information of the point on the sound source plane 2a of the object 2 and the three-dimensional position information of the point on the measurement plane 1b of the microphone array 1. Further, the second alignment unit performs the second alignment between the three-dimensional model data of the sound source plane 2a and the sound source plane 2a. Then, the display unit deforms the three-dimensional model data according to the particle velocity at each analysis point based on the alignment results, and displays the deformed data on the display device 200.
In this way, the surface of the object 2 is displayed in a deformed state, so the structure of the surface of the object 2 serving as a sound source and the vibration of that surface can be appropriately associated with each other in the display. The acoustic analysis system may further include an acoustic analysis unit that performs numerical analysis. With this configuration, the result of numerical analysis on the three-dimensional model data and the result of superimposing the measured analysis result on the three-dimensional model data can be displayed side by side. The user can then compare the numerical analysis result and the actual measurement result on the same model data, enabling detailed analysis. The acoustic analysis unit performs, for example, frequency response analysis based on the three-dimensional model data and outputs the analysis result. In the frequency response analysis, for example, the sound pressure, sound power, and spatial particle velocity of the object to be measured are analyzed at a specific frequency. The display unit displays the alignment results of the first and second alignment units and the analysis result of the acoustic analysis unit on the three-dimensional model data side by side.
The display unit may display a region corresponding to an analysis point in the three-dimensional model data in a color corresponding to the magnitude of the particle velocity at that analysis point. This makes it possible to easily check the magnitude of the vibration of the surface of the object 2. Further, the display unit may move and display a node corresponding to an analysis point in the three-dimensional model data according to the magnitude and direction of the particle velocity at that analysis point. In this case, the magnitude and direction of the vibration of the surface of the object 2 can be easily confirmed.
When the acoustic analysis device 100 performs the alignment between the sound source plane 2a and the analysis points, the deriving unit derives the conversion matrix R from the microphone array coordinate system Σm, having the arbitrary point Pm(0) on the measurement plane 1b as its origin, to the measurement object coordinate system Σo, having the arbitrary point Po(0) on the sound source plane 2a as its origin. The conversion unit then converts the analysis points in the microphone array coordinate system Σm into points in the object coordinate system Σo using the conversion matrix R. Thus, the acoustic analysis device 100 can appropriately align the sound source plane 2a with the analysis points.
As described above, in the present embodiment, the analysis result of the sound field can be appropriately superimposed on the 3D model data of the sound source plane 2a of the object 2, and the 3D model data can be displayed in a deformed manner. Therefore, the state of the surface vibration of the object 2 can be analyzed in detail.
For example, according to the acoustic analysis device 100 of the present embodiment, the vibration of the surface of the object 2 generated when the motor is incorporated into the final product can be displayed in association with the structure of the object 2. As a result, for example, the cause of noise can be easily specified, and the man-hours required for noise countermeasures can be reduced.
(modification example)
In the above-described embodiment, the three-dimensional position information of the point on the sound source plane 2a and the three-dimensional position information of the point on the measurement plane 1b of the microphone array 1 disposed near the sound source plane 2a are obtained using a common stereo camera 3 fixed independently at a position separated from the sound source plane 2a of the object 2 and from the measurement plane 1b of the microphone array 1. That is, the first obtaining unit, the second obtaining unit, and the third obtaining unit use a common stereo camera that is fixed independently at a position separated from the sound source plane and the measurement plane. However, if the positional relationship between cameras (the correspondence between their camera coordinate systems) is known, the three-dimensional position information may instead be obtained using separate stereo cameras.
When the common stereo camera 3 is used as in the above-described embodiment, the common camera coordinate system Σc can be used, which is preferable because the conversion matrix R from the microphone array coordinate system Σm to the object coordinate system Σo can then be derived easily.
Further, in the above-described embodiment, the case where the three-dimensional position information of the object 2 and the microphone array 1 is obtained using the stereo camera 3 has been described, but the means for obtaining the three-dimensional position information is not limited to the stereo camera 3. For example, the three-dimensional position information obtaining means may be a depth camera, a laser scanner, or an ultrasonic sensor capable of detecting a three-dimensional position.
In order to improve the accuracy of obtaining three-dimensional positional information of the object 2 and the microphone array 1, marks or the like may be provided on the object 2 and the microphone array 1.
In the above-described embodiment, the case where 3D-CAD data is used as the 3D model data has been described, but the 3D model data is not limited to 3D-CAD data. For example, the 3D model data may be created from the points Po = (xo(n), yo(n), zo(n)) on the object 2 that are measured by the stereo camera 3 for aligning the object 2 with the microphone array 1. Alternatively, the object 2 may be scanned with a 3D scanner to create the 3D model data.
Description of the symbols
1-microphone array, 2-object (sound source), 2 a-sound source plane, 3-stereo camera, 100-acoustic analysis device, 101-signal processing unit, 102-analysis processing unit, 103-storage unit, 200-display device, 1000-acoustic analysis system, mc-microphone.

Claims (7)

1. An acoustic analysis device is characterized by comprising:
a first obtaining unit that obtains three-dimensional model data of a sound source plane of an object to be measured;
a second obtaining unit that obtains three-dimensional position information of a point on the sound source plane;
a third obtaining unit that obtains three-dimensional position information of a point on a measurement surface of a microphone array disposed in the vicinity of the sound source surface;
a calculation unit that calculates a three-dimensional distribution of particle velocities as physical quantities representing characteristics of sound from sound signals obtained by the microphone array, and calculates the particle velocities at analysis points on a plane parallel to the measurement plane;
a first alignment unit that performs alignment between the sound source plane and the analysis point based on the three-dimensional position information of the point on the sound source plane and the three-dimensional position information of the point on the measurement plane;
a second alignment unit that performs alignment between the three-dimensional model data and the sound source plane based on feature points fixed to three or more points of the object; and
a display unit that displays the three-dimensional model data, deformed according to the particle velocity at the analysis points, based on the alignment results of the first alignment unit and the second alignment unit.
2. The acoustic analysis device according to claim 1, wherein
the display unit displays a region of the three-dimensional model data corresponding to the analysis point in a color corresponding to the magnitude of the particle velocity at the analysis point.
3. The acoustic analysis device according to claim 1 or 2, wherein
the display unit displays a node of the three-dimensional model data corresponding to the analysis point by moving the node according to the direction of the particle velocity at the analysis point.
4. The acoustic analysis device according to any one of claims 1 to 3, wherein
the second obtaining unit and the third obtaining unit obtain the three-dimensional position information using a common stereo camera that is fixed independently at a position away from the sound source plane and the measurement plane.
5. The acoustic analysis device according to any one of claims 1 to 4, wherein
the first alignment unit includes:
a deriving unit that derives a conversion matrix from a microphone array coordinate system having a point on the measurement plane as its origin to an object coordinate system having a point on the sound source plane as its origin; and
a conversion unit that converts the analysis points in the microphone array coordinate system to points in the object coordinate system using the conversion matrix.
6. The acoustic analysis device according to any one of claims 1 to 5, further comprising
an acoustic analysis unit that performs numerical analysis based on the three-dimensional model data, wherein
the display unit displays the alignment result and the analysis result of the acoustic analysis unit side by side.
7. An acoustic analysis method, comprising:
obtaining three-dimensional model data of a sound source plane of an object to be measured;
obtaining three-dimensional position information of a point on the sound source plane;
obtaining three-dimensional position information of a point on a measurement plane of a microphone array disposed in the vicinity of the sound source plane;
calculating, from sound signals obtained by the microphone array, a three-dimensional distribution of particle velocity as a physical quantity representing a characteristic of the sound, and calculating the particle velocity at analysis points on a plane parallel to the measurement plane;
performing alignment between the sound source plane and the analysis points based on the three-dimensional position information of the point on the sound source plane and the three-dimensional position information of the point on the measurement plane;
performing alignment between the three-dimensional model data and the sound source plane based on three or more feature points fixed to the object; and
displaying the three-dimensional model data deformed according to the particle velocity at the analysis points, based on the alignment results.
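The conversion matrix of claim 5 amounts to a rigid transform between the microphone-array coordinate system and the object coordinate system, which can be applied to analysis points in homogeneous coordinates. The following is a minimal sketch only; in practice the rotation and translation would be derived from the measured points on the two planes, and all function names here are illustrative:

```python
import numpy as np

def conversion_matrix(rotation, translation):
    """Assemble the 4x4 homogeneous transform from the microphone-array
    coordinate system into the object coordinate system."""
    t = np.eye(4)
    t[:3, :3] = rotation    # 3x3 rotation between the two frames
    t[:3, 3] = translation  # origin offset expressed in object coordinates
    return t

def convert_points(points_mic, transform):
    """Map analysis points (N, 3) from the mic-array frame to the object frame."""
    pts = np.asarray(points_mic, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])  # append w = 1 per point
    return (homo @ transform.T)[:, :3]               # drop w after the transform
```

Once converted, each analysis point lies in the same frame as the sound source plane, so its particle velocity can be mapped onto the corresponding region of the 3D model for display.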
CN201980022442.7A 2018-03-28 2019-03-27 Acoustic analysis device and acoustic analysis method Pending CN111971536A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018-062685 2018-03-28
JP2018062685 2018-03-28
PCT/JP2019/013296 WO2019189424A1 (en) 2018-03-28 2019-03-27 Acoustic analysis device and acoustic analysis method

Publications (1)

Publication Number Publication Date
CN111971536A true CN111971536A (en) 2020-11-20

Family

ID=68059182

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980022442.7A Pending CN111971536A (en) 2018-03-28 2019-03-27 Acoustic analysis device and acoustic analysis method

Country Status (2)

Country Link
CN (1) CN111971536A (en)
WO (1) WO2019189424A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7314086B2 (en) * 2020-03-19 2023-07-25 三菱重工業株式会社 Sound pressure estimation system, its sound pressure estimation method, and sound pressure estimation program
CN111709178B (en) * 2020-05-20 2023-03-28 上海升悦声学工程科技有限公司 Three-dimensional space-based acoustic particle drop point simulation analysis method

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2000075014A (en) * 1998-09-01 2000-03-14 Isuzu Motors Ltd Method for searching sound source
CN102879080A (en) * 2012-09-11 2013-01-16 上海交通大学 Sound field analysis method based on image recognition positioning and acoustic sensor array measurement
US20160066086A1 (en) * 2014-09-03 2016-03-03 Gesellschaft zur Förderung angewandter Informatik e.V. Method and arrangement for detecting acoustic and optical information as well as a corresponding computer program and a corresponding computer-readable storage medium
JP2016090289A (en) * 2014-10-30 2016-05-23 株式会社小野測器 Distribution figure display device and method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8701491B2 (en) * 2008-04-25 2014-04-22 Stichting Voor De Technische Wetenschapen Acoustic holography
US20130028478A1 (en) * 2010-05-04 2013-01-31 St-Pierre Eric Object inspection with referenced volumetric analysis sensor

Also Published As

Publication number Publication date
WO2019189424A1 (en) 2019-10-03

Similar Documents

Publication Publication Date Title
CN111936829B (en) Acoustic analysis device and acoustic analysis method
CN106679651B (en) Sound localization method, device and electronic equipment
Shao et al. Computer vision based target-free 3D vibration displacement measurement of structures
US8756033B2 (en) Ultrasonic diagnostic imaging system and control method thereof
JP5378374B2 (en) Method and system for grasping camera position and direction relative to real object
EP1886281B1 (en) Image processing method and image processing apparatus
US11744549B2 (en) Handheld three-dimensional ultrasound imaging method
US11270465B2 (en) Multiple camera calibration
US11480461B2 (en) Compact system and method for vibration and noise mapping
US10347029B2 (en) Apparatus for measuring three dimensional shape, method for measuring three dimensional shape and three dimensional shape measurement program
KR20050100646A (en) Method and device for imaged representation of acoustic objects, a corresponding information program product and a recording support readable by a corresponding computer
JPH04332544A (en) Acoustical hold gram system
TW201335888A (en) Augmented reality image processing device and method
JP6416456B2 (en) Car body stiffness test apparatus and car body stiffness test method
CN111971536A (en) Acoustic analysis device and acoustic analysis method
US20190394447A1 (en) Imaging apparatus
JP7489670B2 (en) Correction parameter calculation method, displacement amount calculation method, correction parameter calculation device, and displacement amount calculation device
JP2005181088A (en) Motion-capturing system and motion-capturing method
Rothbucher et al. Measuring anthropometric data for HRTF personalization
JP2012133591A (en) Augmented reality display system, augmented reality display method used in the system and augmented reality display program
EP3203760A1 (en) Method and apparatus for determining the position of a number of loudspeakers in a setup of a surround sound system
Carneiro et al. Three-dimensional sound source diagnostic using a spherical microphone array from multiple capture positions
US11317200B2 (en) Sound source separation system, sound source position estimation system, sound source separation method, and sound source separation program
EP2422704A2 (en) Providing ultrasound spatial compound images in an ultrasound system
KR102039902B1 (en) Remote device precision inspection system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201120