Tuesday, April 4, 2017

'Innovation' in Games Music

"Trying to bring innovation to the genre of music games costs you a lot of time, even more nerves and mostly feels like tilting at windmills" (Geiger, web). Beat buddy (Threaks, 2013) is a non-linear music-action-adventure game. The music for this game is interactive, the game characters' actions are reacting to the music [5:11]. "Each level uses a different song and set of hand drawn graphics with the mechanics and level design based on the structure of that song" (Colleen, 2015, web).



Music files take up a large amount of disk space. The audio designers for Beatbuddy (Threaks, 2013) worked with stems to visualise each instrument as a separate game mechanic. Each song has eight to twelve stems, and the game has a total of six songs. This results in a heavy data footprint, which was a disadvantage for mobile devices; therefore, the sound designers had to develop their own audio system.
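A minimal sketch of how such a stem-based system might work: each stem is tied to a game mechanic, and only the active stems are summed into the output. The stem names, gains and silent placeholder buffers below are hypothetical, not Threaks' actual implementation.

```python
import numpy as np

SAMPLE_RATE = 44100

def mix_stems(stems, active, gains):
    """Sum only the currently active stems into one output buffer.

    stems:  dict of stem name -> mono float32 sample array
    active: set of stem names whose game mechanics are on screen
    gains:  dict of stem name -> linear gain (0.0 - 1.0)
    """
    length = max(len(s) for s in stems.values())
    out = np.zeros(length, dtype=np.float32)
    for name, samples in stems.items():
        if name in active:
            out[:len(samples)] += samples * gains.get(name, 1.0)
    return np.clip(out, -1.0, 1.0)  # guard against clipping after summing

# Hypothetical stems: one second of silence standing in for real audio.
stems = {name: np.zeros(SAMPLE_RATE, dtype=np.float32)
         for name in ("drums", "bass", "lead", "pads")}

# Only the mechanics currently visible contribute their instrument layer.
frame = mix_stems(stems, active={"drums", "bass"},
                  gains={"drums": 0.8, "bass": 1.0})
```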

Rich Vreeland, more commonly known as Disasterpeace, described his perspective on game music in interviews with Pyramid Studios and the Game Audio Network Guild:

"it was quite an interesting challenge because [...] instead of thinking about order, like when things happen in music, it was more about proximity, like which notes do I want to happen near other notes so that they sound pleasing. Which is kind of a weird thing to think about in music" (Enns, 2015, p. 81).

In this quote, he refers to a particular instance in the production of the soundtrack for the video game FEZ (Polytron Corporation, 2012). FEZ is known for its exploration of space as a malleable concept: it introduces players to its own spatial reasoning and presents space as easily manipulable. FEZ combines 'chiptune' sounds with effects like reverberation and delay, and incorporates this manipulation into its production and composition. For instance, in one level the main character Gomez climbs to higher altitudes with the help of blocks that appear and disappear. The appearance and disappearance of these blocks is governed by the rhythm of the music, and each block has its own sonic signature in the form of a bright, low-resolution synth sound (shown in the video below).
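A minimal sketch of how block visibility might be quantised to the music's beat grid; the tempo and cycle length are assumptions, not FEZ's actual values.

```python
BPM = 120.0                       # assumed tempo, not FEZ's actual value
SECONDS_PER_BEAT = 60.0 / BPM

def block_visible(time_seconds, cycle_beats=4):
    """A block is visible for the first half of every cycle of beats,
    so its appearance and disappearance stay locked to the music."""
    position_in_cycle = (time_seconds / SECONDS_PER_BEAT) % cycle_beats
    return position_in_cycle < cycle_beats / 2

# The renderer polls this every frame; a synth stinger can be fired
# whenever the returned value toggles.
for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(t, block_visible(t))
```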


FEZ (Polytron Corporation, 2012) has its own music system tool, Fezzer, designed by Renaud Bedard. Enns researched Fezzer's interactive music system and identified three tools and techniques within it: (i) the sequence context menu; (ii) the script browser; and (iii) the main composition sequencer (Enns, 2015, p. 84). Fezzer was developed to allow the user to explore FEZ as an omniscient observer.


Fig 1.0 Fezzer Main Composition Sequencer Window (Enns, 2015, p. 88)

The figure above shows Fezzer's main composition sequencer window. The sequencer can make changes that affect both the gameplay level and each individual musical element. "Video game music composition is now fully integrated into game programming, which makes the traditional distinction drawn between scoring and game programming all but irrelevant" (Enns, 2015, p. 90).

"Surround systems in home game audio systems commonly use 5.1 channel systems, with channels arranged as follows: left, centre, left surround, right surround, and LFE, LFE - that is, low frequency effects is usually played through the subwoofer speaker (Sweet, 2015, p. 305). Virtual Reality (VR) is   a new technology introduced to game consumers. However 3D audio has yet to be fully developed in the games industry yet. The music system will need to be carefully planned out as "music in the surround speakers can be distracting to the player if it covers up the footsteps of an enemy who is sneaking up on the player (Sweet, 2015, p. 306). Audio middleware engines have yet to develop a 3D audio system, Fmod and Wise have only recently begun to add real-time tools for mixing audio while the game is running. Surround audio files also take up a huge disk capacity which sound designers and game designers need to consider. Further research, inventions and collaborations will be needed to develop a 3D audio system for games.

Bibliography 

C.C. (Intel) (2015) Beatbuddy, an Indie Game, Expands with New Technologies [Internet]. Available from: [Accessed 4 April 2017]. 

T. (2014) Beatbuddy HD (by Threaks GmbH) - Universal - HD Gameplay Trailer [Internet]. Available from: [Accessed 4 April 2017]. 

Enns, M. (2015) Game Scoring: FEZ, Video Game Music and Interactive Composition. dissertation. London, Ontario, Future Technology Press. Available from: [Accessed 4 April 2017]. 

M. (2013) FEZ Sync Level [PC/STEAM] [Internet]. Available from: [Accessed 4 April 2017]. 

Geiger, P. (2014) 5 Reasons Why Music Games Suck [Internet]. Available from: [Accessed 4 April 2017]. 

Lalwani, M. (2015) Surrounded by sound: how 3D audio hacks your brain [Internet]. Available from: [Accessed 4 April 2017]. 

Polytron Corporation (2012) FEZ, video game, PlayStation 3, Montreal.

Sweet, M. (2015) Writing interactive music for video games: a composer's guide. Upper Saddle River, NJ, Addison-Wesley.

Threaks (2013) Beatbuddy, computer game, Microsoft Windows, Hamburg.

Monday, March 20, 2017

Interactive Music

In the games industry, interactive music is occasionally referred to as adaptive music or dynamic music. "An interactive score is defined as music that can change dynamically based on some type of control input" (Sweet, 2015, pg. 36). Interactive music applies when the player has direct control of the music; adaptive music applies when the player has indirect control over it, as the sketch below illustrates. There are five types of interactive music used in video games: improvisational; real-time composition and arranging; performance-based; experimental composition techniques; and art installation performance.
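The direct/indirect distinction can be made concrete in a few lines: interactive music responds immediately to a player input, while adaptive music follows game state. The engine methods below are hypothetical stand-ins, not any real middleware's API.

```python
class MusicEngine:
    """Hypothetical engine with one interactive and one adaptive hook."""
    def play_note(self, pitch):
        print(f"note {pitch}")            # fires the moment it is called
    def set_intensity(self, level):
        print(f"intensity {level:.2f}")   # drifts as game state changes

engine = MusicEngine()

# Interactive: the player's button press directly produces the sound.
def on_button_press(pitch):
    engine.play_note(pitch)

# Adaptive: the player only influences the music indirectly, via state.
def on_game_tick(player_health, enemies_nearby):
    engine.set_intensity(min(1.0, enemies_nearby * (1.0 - player_health)))

on_button_press(60)                                   # direct control
on_game_tick(player_health=0.4, enemies_nearby=2)     # indirect control
```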


Guitar Hero (RedOctane, 2005) is an example of interactive music: the direct input (the guitar controller) influences the musical outcome of the gameplay.


Jazz is the most common form of improvised music. "Improvisation within music is defined as the ability to create or perform spontaneously, or without preparation" (Sweet, 2015, pg. 37). A jazz composer decides the harmonic structure and initial ideas, and the performers in the group execute the melody, enhancing and changing the notes as their own artistic interpretation. In interactive improvisational game music, the composer creates the framework and compositional structure for the music, but the player's control input determines its final outcome. Variation, which lengthens a piece of music through incremental changes to the melody and harmony, is an important aspect of improvisational game music.
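A minimal sketch of this idea: the composer fixes a scale and an initial melody, and each playthrough produces a slightly different variation within that framework. The scale, melody and probability here are illustrative choices.

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # the composer's framework
MELODY  = [60, 64, 67, 72, 67, 64]           # the initial idea (MIDI notes)

def vary(melody, amount=0.3, seed=None):
    """Return a variation: each note may step to a neighbouring scale
    tone with probability `amount`, staying inside the harmony."""
    rng = random.Random(seed)
    out = []
    for note in melody:
        if note in C_MAJOR and rng.random() < amount:
            i = C_MAJOR.index(note) + rng.choice((-1, 1))
            i = max(0, min(len(C_MAJOR) - 1, i))
            out.append(C_MAJOR[i])
        else:
            out.append(note)
    return out

# Each playthrough (or player input) yields a slightly different melody.
print(vary(MELODY, seed=1))
print(vary(MELODY, seed=2))
```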

"Real-time composition is when the musician or ensemble makes musical choices while watching the events unfold the story (Sweet, 2015, pg. 39). The difference between real-time composition and improvisational technique is the manipulator/s of the control input. Real-time composition technique has control input that consists of the story, scene, or narrative being shown; whereas improvisation's control input is normally determined by the musicians themselves or the audience. A composer writing in the style of real-time composition generates the computer processors, which sends a message of what the player is doing to the music engine. Hence, it would have an approximate transition to the next musical cue or emotion.

Performance-based game music reacts to the events or stories that unfold. An example would be opera, where the speed at which the action on stage unravels dictates how the conductor speeds up or slows down the musical tempo. "Composers for video games can think about using control mechanisms in the game to change the dynamics of the music or increase the tempo in real time to escalate the tension in a scene" (Sweet, 2015, pg. 40).
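The tempo side of this can be sketched by mapping a 0-1 tension parameter from the game onto a BPM range (the values are illustrative):

```python
def tempo_for_tension(tension, base_bpm=90.0, max_bpm=150.0):
    """Map a 0..1 tension parameter from the game onto a tempo range,
    the way a conductor speeds up as the on-stage action intensifies."""
    tension = max(0.0, min(1.0, tension))
    return base_bpm + tension * (max_bpm - base_bpm)

print(tempo_for_tension(0.0))    # calm exploration: 90 BPM
print(tempo_for_tension(0.75))   # chase scene: 135 BPM
```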

 

O2Jam (O2 Media, 2005) is an example of a performance-based music game: the player presses keys on the computer keyboard to play the musical notes in the gameplay.

Experimental composition techniques usually include chance music, aleatoric music and indeterminate music. The roots of interactive music lie in experimental music dating back many centuries.



Spore (Electronic Arts, 2008), with music composed by Brian Eno, is an example of experimental interactive game music.

Many art installations build virtual or acoustic instruments that allow users to interact with and play them. These instruments usually incorporate interactivity into their design to construct a dynamic musical experience. "Video games that use music creation as a core mechanic can be inspired by real-world instrument installations and their design and interaction" (Sweet, 2015, pg. 42).



The art installation above shows users interacting with sound art to create a piece of music.



The video above shows a Max/MSP and Flash art installation that uses the volume of an input signal to play the game Pong.



Hachibun Onpu (八分音符) (Freedomcrow, 2017) has a concept similar to the volume-controlled Pong above.

The output of interactive music is usually determined by the control inputs, of which there are two types: direct and indirect. Direct control inputs include game controllers, visual and motion sensors (Kinect and EyeToy), touch and pen controllers, computer keyboards and mice, steering wheels, guns and microphones. Specialised inputs are also available for some games, such as the bongos and shakers in Samba de Amigo (1999). Indirect control inputs comprise the character's location, enemy proximity or location, environmental effects (e.g. weather or time), the level of suspense in the scene, the player's emotion and health in the gameplay, the interactions between non-player characters (NPCs), the player's current action and the number of puzzles the player has solved.

In the process of composing interactive music, the composer decides which game parameters will dynamically affect the music. Sweet introduced seven ways the music can respond to the control inputs: (1) cue switching and musical form adaption, (2) dynamic mixing, (3) tempo and rhythmic manipulation, (4) DSP and effect application, (5) stinger and musical flourish additions, (6) instrument and arrangement alteration, and (7) harmonic approach, melodic adaptation, and note manipulation (Sweet, 2015, pg. 51).
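As a minimal sketch of two of these responses, dynamic mixing (2) and stinger additions (5), driven by a single indirect control input (enemy proximity). The classes and thresholds here are hypothetical stand-ins, not any middleware's API.

```python
class Mixer:
    def set_layer_gain(self, layer, gain):     # hypothetical mixer stub
        print(f"{layer} gain -> {gain:.2f}")

class Stingers:
    def trigger(self, name):                   # hypothetical one-shot player
        print(f"stinger: {name}")

def update_music(enemy_distance, mixer, stingers):
    """Called once per game tick with an indirect control input."""
    # (2) Dynamic mixing: raise the percussion layer as the enemy closes in.
    closeness = max(0.0, min(1.0, 1.0 - enemy_distance / 50.0))
    mixer.set_layer_gain("percussion", closeness)
    # (5) Stinger addition: fire a musical flourish at close range.
    if enemy_distance < 5.0:
        stingers.trigger("danger_hit")

update_music(enemy_distance=12.0, mixer=Mixer(), stingers=Stingers())
```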

Bibliography 


Buffoni, L.X. Making Interactive Music For Video Games [Internet]. Available from: [Accessed 19 March 2017]. 

A. (2009) Fly Magpie (02jam music) [Internet]. Available from: [Accessed 19 March 2017]. 

D.B. (2016) GRIDI: An Interactive Music Installation [Internet]. Available from: [Accessed 19 March 2017]. 

Electronic Arts (2008) Spore, computer game, Microsoft Windows, Electronic Arts United States.

Freedomcrow (2017) Hachibun Onpu, mobile game, Android and iOS, Freedomcrow Japan.

O2 Media (2005) O2Jam, computer game, Microsoft Windows, O2 Media South Korea.

RedOctane (2005) Guitar Hero, PlayStation 2, Activision California, United States.

S.S. (2010) Queen - Bohemian Rhapsody 100% Expert FC Guitar Hero: Warriors of Rock [Internet]. Available from: [Accessed 19 March 2017]. 

S. (2008) Spore - Creature Stage Part 1 [Internet]. Available from: [Accessed 19 March 2017]. 

Sweet, M. (2015) Writing interactive music for video games: a composer's guide. Upper Saddle River, NJ, Addison-Wesley. 

T.T.S. (2017) 什麼∑(Д)竟然用A Cappella挑戰「八分音符」!!合作頻道【VOX 玩聲樂團】 [Internet]. Available from: [Accessed 19 March 2017]. 

Friday, February 24, 2017

Acoustics in Games

As defined by the American National Standards Institute, acoustics is the science of sound; sound can be modelled as mechanical waves in an elastic medium using the acoustic wave equation. Fundamental to acoustics is frequency. "Frequency is defined as the number of times in a given period a repetitive phenomenon repeats; in audio, the unit frequency is hertz, which measures repetitions per second" (Somberg, 2017, pg. 5). Sound surrounds us, and games use sounds that mimic reality and create immersion for the players. Human hearing has a dynamic range of about 120 decibels (dB). Also, "humans have the ability to tell the direction that a sound is coming from, primarily due to geometry of the head and ears" (Somberg, 2017, pg. 6). Spatial hearing helps to create immersion in the gameplay. In the past, the soundscape mainly provided acoustic feedback to the player, and music was not the main audio element. However, consumer expectations changed: sound effects, music and dialogue became one cohesive audio vision.
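As a minimal illustration of the hertz and decibel definitions above, the sketch below generates a sine tone at a given frequency and measures its level in dB; the 440 Hz frequency and 0.5 amplitude are arbitrary example values.

```python
import numpy as np

SAMPLE_RATE = 44100

def tone(freq_hz, seconds=1.0, amplitude=0.5):
    """A sine wave repeating freq_hz times per second: the hertz
    definition of frequency quoted above."""
    t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def level_db(signal, reference=1.0):
    """RMS level in decibels relative to full scale."""
    rms = np.sqrt(np.mean(signal ** 2))
    return 20 * np.log10(rms / reference)

print(level_db(tone(440.0)))   # about -9 dBFS for a 0.5-amplitude sine
```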

To create a sense of space in games, audio designers have to model 3D audio. This is done by activating spatialisation in audio middleware. Game designers can experiment with different acoustic models such as diffraction, occlusion, reflection, attenuation and auralisation. An obstruction between the listener and the emitter results in a quieter sound with less high-frequency content; this is handled by diffraction modelling. Diffraction is hearing sound that is not in the line of sight. "Diffraction can cause sound to bend past edges, thus allowing one to hear sound through portals such as doors or past objects" (Bengtsson, 2009, pg. 6).

Occlusion modelling is used when the sound emitter is placed in another space, most often one separated by a door or window. When a sound is confined to another space, it loses some of its high-frequency content and is attenuated.
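A crude sketch of this behaviour, assuming a single occlusion parameter between 0 (unobstructed) and 1 (fully blocked); the gain and filter mappings are illustrative choices, not values from any engine.

```python
import numpy as np

def occlude(signal, occlusion=0.7):
    """Crude occlusion model: attenuate the signal and strip
    high-frequency content with a one-pole low-pass filter.
    occlusion = 0 leaves the signal untouched; 1 is fully blocked.
    The gain and coefficient mappings are illustrative choices."""
    gain = 1.0 - 0.8 * occlusion        # overall attenuation
    alpha = 1.0 - 0.95 * occlusion      # low-pass smoothing coefficient
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y = y + alpha * (x - y)         # one-pole low-pass
        out[i] = gain * y
    return out

muffled_bomb = occlude(np.random.uniform(-1, 1, 48000), occlusion=0.9)
```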



An example of occlusion and attenuation is shown in the Counter-Strike (Valve Corporation, 2000) gameplay video above. The sound of the distant bomb is occluded and low-pass filtered [3:11]; furthermore, it is attenuated because of its distance from the player.

Reflection occurs when sound bounces off a surface or strikes an object; the reflected sounds may reach the listener at different strengths, times and pitches. An example is the echo one experiences near a cliff or a large building. Apart from echo, reflection also causes early and late reverberation. The audio designer will need to build a sound propagation system to cover more than one reverb zone per game object. Sound propagation comprises the direct sound (emitted straight from the source), the early reflections (the first echoes of a sound that reach the player after the direct sound arrives) and the late reverberation (the last component heard by the player).

The video above shows reverb added to the space: the gunshot has a reverb tail that indicates the spatial quality [2:20].
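A minimal sketch of the three propagation components described above: the direct sound, a few tapped-delay early reflections, and a decaying late tail. All delays, gains and decay times are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 48000

# Illustrative early-reflection taps: (delay in seconds, gain).
EARLY_TAPS = [(0.011, 0.6), (0.019, 0.45), (0.027, 0.3)]

def propagate(dry):
    """Sum the three components: direct sound, a few discrete early
    reflections, and a crude noise-based late reverberation tail."""
    out = np.copy(dry)                                # direct sound
    for delay, gain in EARLY_TAPS:                    # early reflections
        d = int(delay * SAMPLE_RATE)
        out[d:] += gain * dry[:-d]
    rng = np.random.default_rng(0)                    # late reverberation:
    ir_len = int(0.3 * SAMPLE_RATE)                   # a decaying noise tail
    tail = rng.normal(0.0, 1.0, ir_len) * np.exp(
        -np.arange(ir_len) / (0.1 * SAMPLE_RATE)) * 0.02
    out += np.convolve(dry, tail)[:len(dry)]
    return out

gunshot = np.zeros(SAMPLE_RATE); gunshot[0] = 1.0     # an impulse stand-in
wet = propagate(gunshot)
```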

Attenuation modelling emphasises the placement of the sound source. "The strength of the sound source decreases with distance, from air absorption caused by the reflecting surfaces" (Bengtsson, 2009, pg. 6). The closer the listener is to the sound source, the louder it is heard.
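A sketch of distance attenuation using an inverse-distance curve, similar in spirit to (though not necessarily identical to) the models found in common game audio APIs; the reference distance and rolloff values are assumptions.

```python
def distance_gain(distance_m, reference_m=1.0, rolloff=1.0):
    """Inverse-distance attenuation: with rolloff = 1, every doubling
    of distance roughly halves the amplitude (about -6 dB)."""
    distance_m = max(reference_m, distance_m)   # clamp inside the reference
    return reference_m / (reference_m + rolloff * (distance_m - reference_m))

for d in (1.0, 2.0, 4.0, 8.0):
    print(f"{d:4.1f} m -> gain {distance_gain(d):.3f}")
```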

"Auralisation models the listener as two point 'microphones' in the virtual world yields a rather unconvincing result" (Bengtsson, 2009, pg. 6). The sound designer could either mix the audio in surround systems (5.1 or 7.1 systems) or manipulate
a Head-Related Transfer Function (HRTF). "Head-related transfer function is a function used in acoustics that characterises how a particular ear receive sound from a point in space" (Potisk, 2015, pg.1).
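A real HRTF is a measured, direction-dependent filter; the sketch below only approximates two of the cues an HRTF encodes, the interaural time difference (ITD) and the interaural level difference (ILD). The head radius, level scaling and simplified ITD formula are assumptions for illustration.

```python
import numpy as np

SAMPLE_RATE = 48000
HEAD_RADIUS = 0.0875       # metres, an average human head
SPEED_OF_SOUND = 343.0     # metres per second

def spatialise(mono, azimuth_rad):
    """Approximate two HRTF cues: the interaural time difference
    (a short delay to the far ear) and the interaural level difference
    (the far ear hears the sound quieter). azimuth_rad = 0 is straight
    ahead; positive is to the listener's right."""
    itd = HEAD_RADIUS / SPEED_OF_SOUND * np.sin(azimuth_rad)  # seconds
    shift = int(abs(itd) * SAMPLE_RATE)                       # in samples
    near = np.copy(mono)
    far = np.zeros_like(mono)
    far[shift:] = 0.7 * mono[:len(mono) - shift]              # delayed, quieter
    left, right = (far, near) if itd > 0 else (near, far)
    return np.stack([left, right], axis=1)                    # stereo pair

stereo = spatialise(np.random.uniform(-1, 1, SAMPLE_RATE), np.pi / 4)
```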

Bibliography


Bengtsson, J. (2009) Real-time Acoustics Modeling in Games. thesis. Available from: [Accessed 23 February 2017]. 

T. (2012) Counter-Strike: Global Offensive Gameplay PC HD [Internet]. Available from: [Accessed 23 February 2017]. 

Anon (2011) GSound: Interactive Sound Propagation for Games. In: AES 41st Conference: Audio for Games. Available from: [Accessed 23 February 2017]. 

Guay, J.-F. (2012) Real-time Sound Propagation in Video Games. In: Game Developers Conference 2012. California, Ubisoft Montreal. Available from: [Accessed 23 February 2017]. 

Howard, D.M. & Angus, J. (2009) Acoustics and psychoacoustics. 4th ed. Oxford, UK, Focal Press. 

Potisk, T. (2015) Head-Related Transfer Function. thesis. Available from: [Accessed 23 February 2017]. 

Somberg, G. (2017) Game audio programming: principles and practices. Boca Raton, FL, CRC Press Taylor & Francis Group. 

Valve Corporation (2000) Counter-Strike, computer game, Microsoft Windows, Valve Corporation America.




Saturday, February 11, 2017

Procedural Sound Design

In games audio, "procedural sound design" or "procedural audio" are used to create a wider sound palette. It is necessary here to clarify exactly what is meant by "Procedural". "Procedural audio is non-linear, often synthetic sound, created in real time according to a set of programmatic rules and live input" (Farnell, 2007, pg.1) Computer music or sounds sometimes are used or viewed as datas (eg. Sonification). Sonification is "a technique that uses data as input, and generates sound signals" (Hermann, 2008, pg.1) Unlike Sonification, procedural sound design is an audio process generated by computer, similar to "generative sounds". However, generative sounds are not procedural sounds. Generative is an abstract term, algorithmic, procedural and AI (artificial intelligence) sound are all generative. However the definition of "generative" is commonly defined as a piece that "requires no input, or the input is given only as initial conditions prior to execution" (Farnell, 2007, pg.3). An example of a generative piece would be Brian Eno's Music for Airport (Eno, 1978). In other words, generative sounds are not interactive. Game audio focused on interactive sounds. "Procedural sound design is about sound design as a system, an algorithm, or a procedure that re-arranges, combines, or manipulates sounds asset" (Stevens et al, 2016, pg. 59) This approach allows the user to interact with the visual media and audio.

"Procedural sound" is associated with "synthetic" and "algorithm". To further breakdown the definition of "procedural sounds", the elaboration of "synthetic" and "algorithm" provide another depth of understanding to this term. Synthetic sounds are produced by analog or digital. Analog signal generation are produced by oscillators, analog synthesisers and radiosonic modulations. Digital sound synthesis are produced by techniques based on micro sounds that were difficult or impossible with analog techniques. (Roads, 2015, loc.2100) Algorithm is a set of rules for solving a problem in a finite number of steps. Turning now to distinguish "synthesis" and "algorithm", Farnell believed that "synthesis is usually about sounds produced at the sample and waveform level under careful control, which algorithmic sound tends to refer to the data, like that from a sequencer, which is used to actually control these waveforms" (Farnell, 2007, pg.5)

Fournel introduced five examples of procedural content in games: Sentinel, Elite, DEFCON, Spore and Love (Fournel, n.d., pg. 9). This post focuses on the procedural sound design of Spore. Spore (Electronic Arts, 2008) is a life simulation, real-time strategy, role-playing and action game developed by Maxis and designed by Will Wright. The game was released in 2008, and its composers include Brian Eno, Cliff Martinez and Saul Stokes. The player controls the development of a species through five stages of evolution: Cell, Creature, Tribe, Civilisation and Space, and can use the Creator tools to make creatures, vehicles, buildings and spaceships. Kent Jolly, the audio director at Maxis/Electronic Arts, stated that Spore includes compositions from Eno's 1983 album Apollo: Atmospheres and Soundtracks. The music in this game "will develop and mutate along with their style of play" (Steffen, 2008, web).

"The system takes variables from user input and uses mathematical algorithms to create control data which subtly changes certain aspects of game-play" (Donnellan, 2010, Web) This helps to enhance the game experience. An example is shown below:

The player is able to customise the mouth, eyes, hands, legs, horns and tail. Different mouths produce different kinds of sound.
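In the spirit of the quoted description, a hypothetical sketch of how a mouth part (user input) could become control data for the creature's voice; none of these names or values come from Spore's actual implementation.

```python
# Hypothetical mapping from the creature's mouth part (user input) to
# voice-synthesis control data. None of these names or values come from
# Spore's actual implementation; they only illustrate the quoted idea.

MOUTH_VOICES = {
    "beak":     {"base_hz": 900.0, "noise": 0.1},   # short, bright chirps
    "mandible": {"base_hz": 300.0, "noise": 0.6},   # buzzy, insect-like
    "snout":    {"base_hz": 150.0, "noise": 0.3},   # low grunts
}

def voice_params(mouth, size):
    """Larger creatures get proportionally lower voices."""
    params = dict(MOUTH_VOICES[mouth])
    params["base_hz"] /= max(0.5, size)
    return params

print(voice_params("beak", size=2.0))   # a big bird: ~450 Hz chirps
```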

The video below is another example of procedural sound design. The player is rewarded for his actions, and different rewards have different sounds [5:02 and 6:25]. The music is also manipulated subtly, based on the new data triggered by the player [3:21].


The following clip shows the audio changing according to the situation of the creature. After the player attacks [0:04], the rhythm starts to pick up its pace and the creatures begin to sound more aggressive. When the player proceeds to attack the base [0:50], a notification sound is used and the music starts to change.



In conclusion, procedural sound design is needed to combine, re-arrange and manipulate the different sound assets in the gameplay for a more engaging and interactive experience.


Bibliography

Bommel, A. (2008) Spore Tribal Stage Part 1 [Internet]. Available from: [Accessed 11 February 2017]. 

Electronic Arts (2008) Spore, computer game, Microsoft Windows, Electronic Arts United States.

Farnell, A. (2007) An introduction to procedural audio and its application in computer games [Internet]. Available from: [Accessed 11 February 2017]. 

Farnell, A. (2010) Designing sound. Cambridge, MA, MIT Press. 

Fournel, N. (n.d.) Procedural Audio for Video Games: Are We There Yet? In: Game Developers Conference. Sony Computer Entertainment Europe. Available from: [Accessed 11 February 2017]. 

Hermann, T. (2008) Taxonomy and Definitions for Sonification and Auditory Display. In: 14th International Conference on Auditory Display. Paris, Bielefeld University. Available from: [Accessed 11 February 2017]. 

Roads, C. (2015) Composing electronic music: a new aesthetic. Oxford, Oxford University Press. 

shadowzack (2008) Spore - Creature Stage Part 1 [Internet]. Available from: [Accessed 11 February 2017]. 

D.K. (2008) Spore: The Mouths and the Sounds They make [Internet]. Available from: [Accessed 11 February 2017]. 

Stevens, R. & Raybould, D. (2016) Game audio implementation: a practical guide using the Unreal Engine. Boca Raton, CRC Press, Taylor & Francis Group. 

Sunday, February 5, 2017

Ludic Audio Function: Feedback

"Unlike the consumption of many other forms of media in which the audience is a more passive "receiver" of a sound signal, game players play an active role in the triggering of sound events in the game." (Collins, 2008, loc.86) Game sound designers need to be aware of the communication between the player and the game. The ludic audio function, also known as "The I.N.F.O.R.M Model", consists of instruction, notification, feedback, orientation, rhythm-action and mechanic. These audio functions provide greater understanding of what needs to be heard and why sounds are mixed in the gameplay. "Ludic comes from the latin ludus which means game or play." (Kamp, 2010, pg. 6) This term derived from Van Elferen who referenced Juul's dissimilarity between a game's fiction (narrative) and its rules (or it's ludic aspects).

This post elaborates on one of the six ludic audio functions, feedback, and provides a relevant example. "Feedback" means "audio in response to the action instigated by the player that indicates confirmation or rejection or provides reward or punishment" (Stevens, 2016, pg. 310). Puzzle games like Zuma (PopCap Games, 2003) provide instant feedback to help the gamer understand what is happening in the game. Such games contain objects that are not based on reality; therefore, the audio designer has to make the sounds appear as if they were coming from the objects themselves. Zuma makes practical and effective use of arcade-style sounds to engage the player while providing feedback and information about the ongoing events in the game. When matched or bonus items are displayed, the sounds provide significant hints that keep the game engaging (Marks, 2010, web).

According to Collins' game audio terminology, sounds are categorised into interactive, adaptive and dynamic audio. Zuma's (PopCap Games, 2003) ludic feedback audio mainly reacts to the player's direct input, making it an interactive audio game. Zuma is also an adaptive audio game, with sounds that respond to the proximity of the balls to the yellow skull structure. With both interactive and adaptive audio characteristics, Zuma is a dynamic audio game (Collins, 2008, loc. 98).


An analysis was done of one of Zuma's (PopCap Games, 2003) boss level gameplay videos [0:10 - 5:07]. The objective of this boss level is to extinguish the torches, and the player loses a life when the balls reach the yellow skull. This level had five feedback sound events. The first sound event indicates an explosion when three or more balls of the same colour come into contact [0:31]. It does not sound like a real explosion; it is simply an indication that fits the context. This contact explosion can trigger further explosions as part of a chain reaction, and a second sound event with an increasing pitch is used in this instance [0:46 - 0:48]. The third feedback sound occurs when the player extinguishes a torch [0:56]. The fourth appears when the balls are in close proximity to the yellow skull, acting as a warning for the player's life [0:35]: it imitates the sound of a heartbeat to evoke anxiousness, and as the balls get closer to the yellow skull, the rhythm of the heartbeat increases [2:32]. In this unfortunate circumstance, the player lost the level; the fifth and last sound event happened when the balls dropped into the yellow skull [2:33].
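The rising-pitch chain-reaction feedback can be sketched with a simple rule: each successive explosion in a combo plays a step higher than the last. A minimal, hypothetical sketch; the base frequency and step size are assumptions, not values from Zuma's assets.

```python
def chain_pitch(combo_index, base_hz=440.0, step_semitones=2):
    """Each explosion in a chain reaction plays step_semitones higher
    than the last, so the player hears the combo escalating."""
    return base_hz * 2 ** (step_semitones * combo_index / 12)

for i in range(4):                       # a four-explosion chain
    print(f"explosion {i + 1}: {chain_pitch(i):.0f} Hz")
```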

In conclusion, feedback sound effects are necessary to engage the player and provide an understanding of the gameplay. The gamer may miss visual cues in the gameplay, and audio feedback assists the player in completing the level.


Bibliography


Collins, K. (2008) Game sound: an introduction to the history, theory, and practice of video game music and sound design. Cambridge, MA, MIT Press. 

Gaming, C.J. (2012) Zuma's Revenge - The final boss(es) battle [Internet]. Available from: [Accessed 5 February 2017]. 

Isaza, M. (2010) Aaron Marks Special: Function of Game Sound Effects [Internet]. Available from: [Accessed 5 February 2017]. 

Kamp, M. (2010) Ludic music in video games. thesis. Research Gate. Available from: [Accessed 5 February 2017]. 

PopCap Games (2003) Zuma, video game, Xbox 360, Electronic Arts America.


Stevens, R. & Raybould, D. (2016) Game audio implementation: a practical guide using the Unreal Engine. Boca Raton, CRC Press, Taylor & Francis Group. 

Thursday, February 2, 2017

2 Feb 2017 Notes


Game Sound: An Introduction to the History, Theory, and Practice of Video Game Music and Sound Design Quotes

"Unlike the consumption of many other forms of media in which the audience is a more passive "receiver" of a sound signal, game players play an active role in the triggering of sound events in the game." (Collins, 2008, loc.86)

- Dialogue
- Ambient sounds
- Sound effects
- Musical events

The interactivity between the medium and the user sets the difference between video games and other forms of media.

Collins' terminology on game audio

- Interactive Audio
- Adaptive Audio
- Dynamic Audio

Meaning of interactive audio

"Refers to those sound events that react to the player's direct input." (Collins, 2008, loc. 94)

- Gunshots
- Footsteps

Meaning of adaptive audio

"Sounds that react to the game states, responding to various in-game parameters such as time-ins, time-outs, player health, enemy health, and so on." (Collins, 2008, loc. 98)

Meaning of dynamic audio

"Reacts both to changes in the gameplay environment, and/or to actions taken by the player." (Collins, 2008, loc. 98)

Similarity between games and films:

"Games often contain what are called cinematics, full motion video (FMV), or non interactive sequences, which are linear animated clips inside the game in which the player has no control or participation." (Collins, 2008, loc. 109)

Other notes:

"Sound was a key factor in generating the feeling of success." (Collins, 2008, loc. 151)

It creates the illusion that the player has won the game.

"Sounds were not an aesthetic decision, but were a direct result of the limited capabilities of the technology of the time." (Collins, 2008, loc. 167)

In other words, different era produced different sound effects, texture and quality.

1-2-2017 SMIG

Ludic audio functions (also known as the I.N.F.O.R.M Model):

- Instructions
- Notification
- Feedback
- Orientation
- Rhythm Action
- Mechanics

This is to gain a greater understanding of what needs to be heard and why the audio is mixed in the game.

Thursday, January 26, 2017

Game Audio I Admire (Msc Sound & Music for Interactive Games)

Among all the games I have played, I admire the game audio of MapleStory and Kingdom Hearts. MapleStory is a Massively Multiplayer Online (MMO) game. The synthetic sounds of the music blend in with the environment, and the music avoids the short loops that would irritate the player and pull them out of the game. In the gameplay, players can choose different characters, and these characters have different abilities that come with various attack sounds. Sounds are also used for the assault and death of the monsters. Not all of the player's actions are mapped to a sound effect; for instance, footsteps have no recorded sound. The sound designers balanced the usage of sound effects, and the sounds were properly equalised. Unfortunately, the updated MapleStory has voiceovers that were poorly done. The voiceovers and the story were redundant; the designers should consider taking out the whole scene.

Kingdom Hearts has probably some of the best game audio. The sounds were equalised like a film or animation production, and time-based and dynamic effects were used to enhance them. The main theme was composed and sung by Utada Hikaru, which complemented the graphics and storyline. Kingdom Hearts has several instalments: the first game's theme song, Simple and Clean, was remixed in later titles as a reminiscence of the first. The voice-over was also properly done, with no noise or unwanted sounds and frequencies in the recording. Published by Square Enix and Disney, this game delivers not only good quality audio but also good quality graphics and animation.

Tuesday, January 24, 2017

Introduction (Msc Sound & Music for Interactive Games)

I am an audio-visual artist with a keen interest in sound design and electronic music composition. Apart from stereo speakers, I have experimented with multi-speaker systems, and I am still exploring their different possibilities. What distinguishes games from other visual projects is the ability to bring a digital image to life and have it interact with the user; this interactivity has drawn me into games projects. As a gamer, I prefer puzzle, adventure, role-playing and social simulation games. I also have a great interest in experimental music and abstract moving images. I believe that synesthesia plays a big part in audio-visual works, and I have researched it to gain a deeper understanding of my capabilities.

I pursued my Bachelor's degree at Lasalle College of the Arts in Singapore, where I was exposed to studio recording, live sound, music composition, early electronic music, noise improvisation performance, abstract moving images and sound design.

Subsequently, I took my Master's at the University of West London, specialising in Electronic Music Composition. During my stay in London, I focused on surround systems and on bringing my audio-visual works to the next level. I have incorporated graphic scores, paintings and Max/MSP colour tracking into my experimental music.

My next focus is to work on interactive games, which is why I applied for the MSc Sound and Music for Interactive Games at Leeds Beckett. My ambition is to open a stereo and surround recording studio for interactive games. I would love to work on projects ranging from mobile, PlayStation and 3DS to Virtual Reality. I am aware that this will take an extended period, but I am willing to spend the next ten years on such exposure and development. Beyond ten years, technology is always improving; I believe I will have something else in mind by then.

MY VIDEO!