Practices of working with sound and audio – universal design

In your first few years, sound and tangible interaction were the primary ways you engaged with the world. Words were carried by sound and speech. After a few years, you probably learned to read and write text – and you did this with your eyes and hands. Is it sound to say that words carried by sound and audio are more primordial for us than visual words (text)?

On the seventh of March 1876, the telephone patent was granted. Today (16 August 2023), 53,852 days have passed – or 147 years, 5 months, and 9 days. The phonograph was invented during approximately the same period as the telephone. These are examples of sound equipment that has been in use for a long time: tools with microphones as input devices for sound, and loudspeakers as output devices. Tangible switches and buttons were used as controllers for the telephone, for example to open and close the microphone. The telephone has been, and still is, a technology used by many people for talking together over distance.
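The elapsed-days figure above can be checked with a few lines of Python, using the two dates stated in the text:

```python
from datetime import date

# Date the telephone patent was granted, and the date this text was written.
patent_date = date(1876, 3, 7)
today = date(2023, 8, 16)

# Subtracting two dates yields a timedelta; .days is the elapsed day count.
elapsed = (today - patent_date).days
print(elapsed)  # 53852
```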

Lately, sound pattern recognition technologies have been applied in everyday life in various situations. For example, recognizing voices and then transcribing the audio into text is currently used by many automobile drivers. Technologies for interaction with sound and audio are steadily evolving. Examples include the Nomono system (“Nomono | Podcasting, Simplified. | Nomono,” n.d.) for the everyday making of podcasts, systems used while on the move in automobiles (“Android Auto,” n.d.), teaching and learning settings (“Forelesningsopptak – enkelt opptak og publisering av forelesninger - Universitetet i Oslo,” n.d.), and the field of music and musical instruments (Jensenius, 2022).

Human-Computer Interaction and sound

Human-Computer Interaction literature about sound and audio goes back to the early days of HCI (Frauenberger et al., 2007). Audio and sound have been part of HCI research on, for example, non-speech sounds for navigation (Brewster, 1998), audio cubes (Schiettecatte and Vanderdonckt, 2008), and social media use (Karlsen et al., 2016).

Recording, modifying, storing, using, communicating, retrieving, finding, and playing sound are done in many different ways on different devices and tools. Microphones, such as lavalier microphones or those built into smartphones, are sensors and input devices – and standalone loudspeakers, bone-conduction speakers, or speakers built into pieces of furniture are output devices. However, how to “use”, “interact with”, and “make sense” of the interaction with these devices and their corresponding functions is still too complicated for many users in various use situations.
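As a small illustration of what "storing sound" means at the lowest level, the sketch below synthesizes one second of a 440 Hz tone and writes it to a WAV file using only Python's standard library. The file name and tone parameters are illustrative choices, not something taken from the text above.

```python
import math
import struct
import wave

# Illustrative parameters for a minimal "stored sound" example.
SAMPLE_RATE = 44100   # samples per second (CD quality)
FREQ = 440.0          # A4, in hertz
DURATION = 1.0        # seconds

# Synthesize one second of a sine tone as 16-bit signed samples.
samples = [
    int(32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION))
]

# Store the samples in a WAV container.
with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)   # mono
    wav.setsampwidth(2)   # 2 bytes = 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(struct.pack("<" + "h" * len(samples), *samples))
```

A media player – or the same `wave` module – can then play back or inspect the stored samples.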

Studying situations where sound technologies are used

Below is a list of situations when and where sound technologies are used – perhaps some of these would be interesting to learn more about.

When biking. What does a bicycle setup look like for interacting with sound while on the move? Helmets, body-worn equipment, and bike-mounted equipment are used with audio technologies – but how do they actually work for bike commuters and leisure cyclists?

When making and using videos and podcasts?  What practices exist in your neighborhood?

When driving an automobile?

During lectures – and in general in learning and teaching situations? What audio tools do students use when "reading" (or listening to) PDFs, web pages, screencasts, podcasts, and textbooks?

During and after interviews?

Everyday conversations?

Meetings?

Yoga sessions - online and in learning situations?

At sea, drifting or kayaking?

At home – young and old people – for interacting with various systems - starting and stopping by way of sound?

Universal Design and sound - possible topics and questions

How are the translations and transformations between sound, text, and images done in various contexts?

With situational abilities in mind (Saplacan, 2020), in what ways is it possible to make such translations usable in specific situations, for specific users?

How is the "control" of microphones and loudspeakers as input and output equipment done? What challenges are imposed by the context of use? What is the visual, auditory, or tangible feedback from the microphone and the loudspeaker when "on" or "off", or in various modes such as recording and sharing? How could all this be different?

Reading and writing by way of visuals, sound, and tangibles – which ways work for specific users?

If you are interested in investigating interaction and the use of sound in everyday life – please do not hesitate to contact me.

References:

Android Auto [WWW Document], n.d. Android. URL https://www.android.com/auto/ (accessed 8.16.23).

Brewster, S.A., 1998. Using nonspeech sounds to provide navigation cues. ACM Trans. Comput.-Hum. Interact. TOCHI 5, 224–259.

Forelesningsopptak – enkelt opptak og publisering av forelesninger - Universitetet i Oslo [WWW Document], n.d. URL https://www.uio.no/tjenester/it/lyd-video/forelesningsopptak/index.html (accessed 8.16.23).

Frauenberger, C., Stockman, T., Bourguet, M.-L., 2007. A Survey on Common Practice in Designing Audio in the User Interface. Presented at the Proceedings of HCI 2007 The 21st British HCI Group Annual Conference University of Lancaster, UK, BCS Learning & Development. https://doi.org/10.14236/ewic/HCI2007.19

Jensenius, A.R., 2022. Sound actions: conceptualizing musical instruments. MIT Press.

Karlsen, J., Stigberg, S.K., Herstad, J., 2016. Probing Privacy in Practice: Privacy Regulation and Instant Sharing of Video in Social Media when Running, in: International Conferences on Advances in Computer-Human Interactions ACHI. pp. 29–36.

Nomono | Podcasting, Simplified. | Nomono [WWW Document], n.d. URL https://nomono.co/ (accessed 8.16.23).

Saplacan, D., 2020. Situated Ability: A Case from Higher Education on Digital Learning Environments, in: Antona, M., Stephanidis, C. (Eds.), Universal Access in Human-Computer Interaction. Applications and Practice, Lecture Notes in Computer Science. Springer International Publishing, Cham, pp. 256–274. https://doi.org/10.1007/978-3-030-49108-6_19

Schiettecatte, B., Vanderdonckt, J., 2008. AudioCubes: a distributed cube tangible interface based on interaction range for sound design, in: Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, TEI ’08. ACM, New York, NY, USA, pp. 3–10. https://doi.org/10.1145/1347390.1347394


Published 16 Aug. 2023 16:52 – Last modified 10 Oct. 2023 09:12

Supervisor(s)

Scope (credits)

60