<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:media="http://search.yahoo.com/mrss/">

<channel>
	<title>Vincent Grimaldi</title>
	<link>https://vincentgrimaldi.net</link>
	<description>Vincent Grimaldi</description>
	<pubDate>Mon, 08 Jan 2024 21:35:13 +0000</pubDate>
	<generator>https://vincentgrimaldi.net</generator>
	<language>en</language>
	
		
	<item>
		<title>Texture Synth</title>
				
		<link>https://vincentgrimaldi.net/Texture-Synth</link>

		<pubDate>Thu, 09 Nov 2023 17:19:29 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/Texture-Synth</guid>

		<description>
	Parametric Synthesis of Crowd Noises in Virtual Acoustic Environments
This paper presents the design and evaluation of a parametric sound texture synthesis method for generating crowd noise in virtual acoustic environments. It allows real-time control of the crowd's size, level of excitement, and spatial distribution. A corpus-based concatenative approach is used to generate single streams of indistinct speech, which are superimposed to create an unintelligible "babbling" texture. The speech material was recorded in semi-supervised group discussions in an anechoic chamber. The database is used in a real-time implementation with subsequent rendering using dynamic binaural synthesis. Listening tests were conducted to evaluate the effect of different parameter settings, as well as the perceived “naturalness” of the simulation.
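To make the superposition step concrete, here is a minimal Python sketch; the stream count, the random circular offsets, and the equal-power normalization are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def babble(streams, crowd_size, rng=None):
    """Mix `crowd_size` randomly chosen speech streams into a single
    unintelligible "babbling" texture."""
    rng = rng if rng is not None else np.random.default_rng()
    n = min(len(s) for s in streams)
    picks = rng.choice(len(streams), size=crowd_size, replace=True)
    mix = np.zeros(n)
    for i in picks:
        # a random circular shift decorrelates the onsets of reused streams
        mix += np.roll(np.asarray(streams[i][:n], dtype=float), rng.integers(n))
    # equal-power normalization keeps the overall level roughly constant
    return mix / np.sqrt(crowd_size)
```

Raising `crowd_size` yields a denser, more uniform texture, which is one way a real-time crowd-size parameter can be exposed.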

Publication ➔


</description>
		
	</item>
		
		
	<item>
		<title>Distance HI</title>
				
		<link>https://vincentgrimaldi.net/Distance-HI</link>

		<pubDate>Thu, 09 Nov 2023 17:17:31 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/Distance-HI</guid>

		<description>
	Perception of Auditory Distance in Normal-Hearing and Moderate-to-Profound Hearing-Impaired Listeners
The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite for rendering an auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. However, their perception of auditory distance was more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, the novel feature was shown to be successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.
Publication ➔



</description>
		
	</item>
		
		
	<item>
		<title>Lowcost Ext</title>
				
		<link>https://vincentgrimaldi.net/Lowcost-Ext</link>

		<pubDate>Thu, 09 Nov 2023 17:09:43 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/Lowcost-Ext</guid>

		<description>
	Externalization of virtual sounds using low computational cost algorithms for hearables
In binaural sound reproduction, externalization has been shown to improve listening comfort. Using individualized binaural room impulse responses, it is possible to simulate sound sources in a given room for a listener wearing headphones. However, in some real-time binaural applications, such as miniaturized hearables, such optimal filtering is not always possible, which can result in virtual sources being perceived inside the head rather than externalized. Awareness of one's surroundings in space, referred to as spatial awareness, is another crucial feature in this type of application. This study assessed three sound spatialization algorithms that aim to optimize externalization while preserving spatial awareness. The algorithms were designed to be implementable on wearable devices, using low computational power and little memory, and were evaluated in terms of both externalization and spatial awareness. The results show that convincing externalization can be achieved with these low computational cost algorithms while preserving spatial awareness.
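As one illustration of what "low computational cost" can mean in this setting, the sketch below implements a generic interaural time/level difference (ITD/ILD) panner. It is not one of the three algorithms evaluated in the study; the head radius and the 6 dB ILD depth are assumed values.

```python
import numpy as np

def itd_ild_pan(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
    """Crude binaural panning from an interaural time difference (ITD)
    and a broadband interaural level difference (ILD). Far cheaper than
    full binaural-room-impulse-response convolution."""
    az = np.deg2rad(azimuth_deg)
    # Woodworth-style ITD approximation for a spherical head
    itd = head_radius / c * (abs(az) + abs(np.sin(az)))
    delay = int(round(itd * fs))
    # simple broadband ILD: attenuate the far ear by up to ~6 dB
    gain_far = 10 ** (-6.0 * abs(np.sin(az)) / 20)
    near = np.asarray(mono, dtype=float)
    far = np.concatenate([np.zeros(delay), near])[: len(near)] * gain_far
    # positive azimuth = source to the right, so the left ear is the far ear
    left, right = (far, near) if azimuth_deg >= 0 else (near, far)
    return left, right
```

A few samples of delay and one gain per ear is all this costs per block, which is the kind of budget a hearable can afford.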

Publication ➔


</description>
		
	</item>
		
		
	<item>
		<title>HT</title>
				
		<link>https://vincentgrimaldi.net/HT</link>

		<pubDate>Thu, 09 Nov 2023 17:08:01 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/HT</guid>

		<description>
	Human Head Yaw Estimation Based on Two 3-Axis Accelerometers
The orientation of an object, and of a human head in particular, can be described by the Euler angles: yaw, pitch, and roll. A robust and drift-free estimation of those angles is usually achieved with data from several sensors, such as accelerometers, gyroscopes, and magnetometers, processed with sensor fusion algorithms. However, wearable devices such as hearing instruments are rarely equipped with all of those sensors and may have only a single accelerometer embedded per device. While it is possible to retrieve a correct estimation of the roll and pitch using only a single accelerometer, estimating the yaw is more challenging, as accelerations around the gravity vector cannot be detected by an accelerometer. In the context of binaural communication devices and spatial hearing, the yaw of the head is key information for achieving dynamic binaural synthesis. This work proposes an algorithm that estimates the yaw of a human head using only two 3-axis accelerometers, one placed on each side of the head. The algorithm is evaluated with acceleration measurements of human subjects performing a realistic task. The results suggest that the strategy is promising, as good performance is achieved for a majority of the measurements, provided an individualized set of parameters can be selected. The results also suggest that the performance of the current implementation is sensitive to sensor displacement after calibration and to an incorrect estimation during initialization.
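To illustrate why roll and pitch are recoverable from a single static accelerometer while yaw is not, here is the textbook tilt-from-gravity computation; it is background, not the paper's two-sensor algorithm.

```python
import math

def roll_pitch(ax, ay, az):
    """Tilt angles (radians) from one static 3-axis accelerometer sample.
    A rotation about the gravity vector (yaw) leaves (ax, ay, az)
    unchanged, so yaw cannot be observed from this sensor alone."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch
```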

Publication ➔


	
</description>
		
	</item>
		
		
	<item>
		<title>HT artefacts</title>
				
		<link>https://vincentgrimaldi.net/HT-artefacts</link>

		<pubDate>Thu, 09 Nov 2023 16:59:32 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/HT-artefacts</guid>

		<description>



	Effects of head-tracking artefacts on auditory externalization and localization in azimuth
Head tracking combined with head movements has been shown to improve auditory externalization of a virtual sound source and to contribute to localization performance. With certain technically constrained head-tracking algorithms, as found in wearable devices, artefacts can be encountered; typical artefacts include an estimation mismatch or a tracking latency. The experiments reported in this article aim to evaluate the effect of such artefacts on the spatial perception of a non-individualized binaural synthesis algorithm. The first experiment focused on auditory externalization of a frontal source while the listener performed a large head movement. The results showed that degraded head tracking combined with head movement yields a higher degree of externalization than head movements with no head tracking, suggesting that listeners could still take advantage of the spatial cues provided by the head movement. The second experiment consisted of a localization task in azimuth with the same simulated head-tracking artefacts. The results showed that a large latency (400 ms) did not affect the listeners' ability to locate virtual sound sources compared to a reference head tracking. However, the estimation mismatch artefact reduced localization performance in azimuth.
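The two artefact types can be simulated on a sampled yaw trajectory as sketched below. Only the 400 ms latency figure comes from the article; the function itself, including the constant-gain model of the estimation mismatch, is an illustrative assumption.

```python
import numpy as np

def degrade_tracking(yaw, fs, latency_s=0.4, gain_mismatch=1.0):
    """Apply the two simulated artefact types to a head-yaw trajectory
    (radians, sampled at fs Hz): a pure tracking latency and a constant
    estimation (gain) mismatch."""
    delay = int(round(latency_s * fs))
    # hold the first sample during the latency window, then play back late
    delayed = np.concatenate([np.full(delay, yaw[0]), yaw])[: len(yaw)]
    return gain_mismatch * delayed
```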

Publication ➔


	



</description>
		
	</item>
		
		
	<item>
		<title>Annoyance</title>
				
		<link>https://vincentgrimaldi.net/Annoyance</link>

		<pubDate>Thu, 09 Nov 2023 16:56:18 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/Annoyance</guid>

		<description>
	Sound annoyance evaluation of single pass-by vehicle noise
Traffic noise monitoring usually relies on acoustic metrics such as the A-weighted equivalent continuous sound level. With the current development of noise radars, the noise emitted by a single vehicle can be tracked from a fixed position close to the road. However, the equivalent sound level may not fully explain the sound annoyance generated by each single vehicle, whose sound can be considered short and time-varying. In this study, fifty naive participants were asked to rate the perceived annoyance when listening to 3 s pass-by vehicle sound excerpts. The sounds were played from a loudspeaker at four fixed percentile loudness levels. The audio excerpts were chosen to span the typical range of values in terms of roughness and fluctuation strength. The correlation of the subjective annoyance ratings with several psychoacoustic metrics from the literature was investigated. The results suggest that the A-weighted equivalent continuous sound level, as well as Zwicker’s model of sound annoyance, may not be the best indicators of the actual annoyance perceived by listeners for this type of sound. The averaged instantaneous loudness, computed with either Zwicker’s model or Moore-Glasberg’s model, provided the best correlation with the subjective annoyance ratings.
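For reference, the equivalent continuous sound level discussed above is, in essence, the logarithm of the time-averaged squared pressure. A minimal sketch follows; the A-weighting filter is deliberately omitted, so this is the unweighted Leq, not the Leq,A used in monitoring practice.

```python
import numpy as np

def leq(p, p_ref=20e-6):
    """Equivalent continuous sound level (dB) of a pressure signal in Pa.
    A true Leq,A would apply the IEC 61672 A-weighting filter to the
    signal before time-averaging."""
    return 10 * np.log10(np.mean(np.square(p)) / p_ref**2)
```

A 1 Pa RMS signal evaluates to roughly 94 dB, the usual calibrator reference level.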


	
</description>
		
	</item>
		
		
	<item>
		<title>Home</title>
				
		<link>https://vincentgrimaldi.net/Home</link>

		<pubDate>Mon, 08 Jan 2024 21:35:13 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/Home</guid>

		<description>
	

	




</description>
		
	</item>
		
		
	<item>
		<title>Sound</title>
				
		<link>https://vincentgrimaldi.net/Sound</link>

		<pubDate>Wed, 27 Sep 2023 13:57:36 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/Sound</guid>

		<description>
	&#60;img width="5167" height="2607" width_o="5167" height_o="2607" data-src="https://freight.cargo.site/t/original/i/59a6674616347f847782967ccc6da36b6b00a62e2fada30d44248ddcf4a62a86/_MG_5881-copie.JPG" data-mid="196508352" border="0" data-scale="98" src="https://freight.cargo.site/w/1000/i/59a6674616347f847782967ccc6da36b6b00a62e2fada30d44248ddcf4a62a86/_MG_5881-copie.JPG" /&#62;➔ Bandcamp
	


[fr]
Vincent développe un rapport au sonore à partir de pratiques comme la prise de son, les manipulations sur magnétophones à bande, la synthèse analogique et différents dispositifs électroacoustiques : microphones, haut-parleurs, tables de mixage. Dans ses travaux, il s’intéresse aux feedbacks, à la phonographie, aux artefacts d'enregistrement / reproduction des media analogiques, aux transformations par le support et aux phénomènes (psycho)acoustiques.





[eng]
Vincent is developing a relationship with sound based on practices such as field recording, reel-to-reel tape recorder manipulations, analog modular synthesis and various electroacoustic setups: mixing boards, microphones and loudspeakers.

In particular, he is interested in feedbacks, phonography, recording and reproduction artefacts of analog media, transformation through the medium and (psycho)acoustic phenomena.













	
	débris d’éclats
Part I, 06’25”
Part II, 06’07”
Part III, 05’55”
Part IV, 05’27”
Part V, 02’40”
Part VI, 06’25”

Field recording / Revox / Serge modular
Released October 11, 2024
Recorded in 2022-2024 in Lausanne
CD-R published by Kirigirisu Recordings - kgr051

Listen






	
	Vacoa
Vacoa 1, 07’09”
Vacoa 2, 06’48”
Vacoa 3, 05’55”
Vacoa 4, 04’34”
Vacoa 5, 04’30”

Serge Modular / Piezo mics
These pieces are made mostly of feedbacks recorded either with a Serge Modular system or electro-acoustic setups.
Released March 16, 2022
Recorded in 2020-2021 in Lausanne
Tape published by Scum Yr Earth - SCUM TAPES 104

Listen









	
	Geophana
10’58”
Field recording / Serge Modular
Phonographies from the Luberon and the Bernese Alps in 2020.
Recorded and composed in Lausanne in 2021.
Originally presented in the Phonurgia Nova Awards 2021 - Prix "Paysage Sonore"

Listen







	
	La Para
10’46”
Serge Modular / Field Recording
Originally presented in the GRM Découvertes Prize 2020
Radio diffusion - France Musique, L’expérimentale

	
	








	~ awards
Composition Prize (2nd), Phonurgia Awards - Field Recording (Paysage Sonore), 2021
Composition Prize (3rd), INA-GRM Découvertes, 2020
Special Award, Luc Ferrari "Presque Rien" Composition Prize, 2019










	~ some concerts / diffusions
Quadriphonic performance + duo w/ Vincent Jehanno, espace d’art Humus, Lausanne, 03.11.2025
Quadriphonic performance + duo w/ Vincent Jehanno, Néomartine, Lausanne, 27.09.2024
Quadriphonic performance, espace d’art Humus, Lausanne, 25.05.2024
Multichannel concert, The Centerpoint, Cinéma Bellevaux, Lausanne, 05.05.2023
Diffusion, Chambre d’écoute Département d’Art Sonore, Musée Reattu, Arles, 2022
Diffusion, Fête des sens, Muséum national d’Histoire naturelle - Jardin des Plantes, Paris, 2022
Diffusion, Presque Rien, Galerie Univer (Muse en circuit), Paris, 2020
Quadriphonic concert, Réduction Verticale, Grrrnd Zero, Lyon, 2019

</description>
		
	</item>
		
		
	<item>
		<title>Research</title>
				
		<link>https://vincentgrimaldi.net/Research</link>

		<pubDate>Mon, 06 Nov 2023 17:33:01 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/Research</guid>

		<description>
	
~ research interests
perception de l’espace sonore / psychoacoustique / perception auditive / synthèse sonore
spatial hearing / psychoacoustics / auditory perception / sound synthesis








	~ selected projects
(click for details)

2024
Design of a custom spatialization tool for a multi-channel sound system ➔

2022-2023
Sound annoyance evaluation of single pass-by vehicle noise ➔

2021-2022
Effects of head-tracking artefacts on externalization and localization in azimuth during binaural reproduction ➔

2020-2021
Head-tracking for dynamic binaural rendering based on two 3-axis accelerometers ➔

2020
Auditory externalization during binaural reproduction using low computational cost algorithms ➔

2018-2019
Perception of auditory distance in normal-hearing and moderate-to-profound hearing-impaired listeners ➔

2017
Parametric synthesis of texture sounds in a binaural environment ➔






	~ publications
Most publications are available as .pdf here and here; feel free to contact me if you need access to a specific document.

JOURNAL ARTICLES

Effects of head-tracking artefacts on auditory externalization and localization in azimuth with wearable binaural devices
V. Grimaldi, L. Simon, G. Courtois, H. Lissek, Journal of the Audio Engineering Society, 71(10): 650-663, 2023


Head yaw estimation based on two 3-axis accelerometers

V. Grimaldi, L. Simon, M. Sans, G. Courtois, H. Lissek, IEEE Sensors Journal, 22(17): 1-12, 2022


Perception of auditory distance in normal-hearing and moderate-to-profound hearing-impaired listeners


G. Courtois, V. Grimaldi, H. Lissek, P. Estoppey, E. Georganti, Trends in Hearing, Vol. 23: 1-18, 2019

PHD THESIS

Auditory externalization of a remote microphone signal
V. Grimaldi, PhD Thesis, EPFL Lausanne, 2022


CONFERENCE PAPERS + PRESENTATIONS

Sound annoyance evaluation of single pass-by vehicle noise


V. Grimaldi, T. Pham Vu, H. Lissek, Proceedings of Forum Acusticum, Torino, Italy, September 2023


Effect of head-tracking artefacts on externalization and localization in non-individualized binaural synthesis

V. Grimaldi, G. Courtois, L. Simon, H. Lissek, Congrès Français d’Acoustique, Marseille, France, April 2022


Externalization of virtual sounds using low computational cost algorithms for hearables

V. Grimaldi, G. Courtois, L. Simon, H. Lissek, Proceedings of Forum Acusticum, Lyon, France, December 2020


Auditory externalization in hearing-impaired listeners with remote microphone systems for hearing aids

V. Grimaldi, G. Courtois, P. Estoppey, E. Georganti, H. Lissek, Proceedings of ICSV, Montréal, Canada, July 2019


Objective evaluation of static beamforming on the quality of speech in noise

V. Grimaldi, G. Courtois, E. Georganti, H. Lissek, Proceedings of Euronoise, Heraklion, Crete, Greece, May 2018


Experimental evaluation of speech enhancement methods in remote microphone systems for hearing aids

G. Courtois, V. Grimaldi, E. Georganti, H. Lissek, Proceedings of Euronoise, Heraklion, Crete, Greece, 2018


Parametric synthesis of crowd noises in virtual acoustic environments

V. Grimaldi, C. Böhm, S. Weinzierl and H. von Coler, Proceedings of the 142nd AES Convention, Berlin, Germany, 2017




	~ academic bio

2022-2023
Postdoctoral researcher ~ Acoustics Group &#124; École Polytechnique Fédérale de Lausanne

2017-2022
Doctoral researcher ~ Acoustics Group &#124; École Polytechnique Fédérale de Lausanne

2016
Research intern ~ Audio Communication Group &#124; Technische Universität Berlin

2015-2016
Master ATIAM ~ IRCAM + Sorbonne Université, Paris






</description>
		
	</item>
		
		
	<item>
		<title>Contact</title>
				
		<link>https://vincentgrimaldi.net/Contact</link>

		<pubDate>Wed, 27 Sep 2023 14:30:08 +0000</pubDate>

		<dc:creator>Vincent Grimaldi</dc:creator>

		<guid isPermaLink="true">https://vincentgrimaldi.net/Contact</guid>

		<description>
	
~ contact
vincent DOT grimaldi AT protonmail DOT com

Bandcamp
ResearchGate






	

	
	



</description>
		
	</item>
		
	</channel>
</rss>