When trying to solve a problem that at first seems overwhelming, it's usually best to break it down into smaller, more manageable pieces. My somewhat overwhelming problem is to create a musically inspired synesthetic game. There appear to be two big pieces to this goal:
- a game that's both very accessible and an engaging experience for users
- an algorithm that takes game state as input and outputs music that enhances the game's experience, and possibly even feeds back into the contents of the game (a rough sketch of this mapping follows this list).
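To make that second piece a little less abstract, here is a minimal sketch of what the state-to-music mapping might look like. Everything here is a placeholder I'm inventing for illustration (`GameState`, `MusicCue`, `map_state_to_music`, and the particular fields and formulas), not a design decision:

```python
# Hypothetical sketch: map a snapshot of game state to a musical cue.
# All names and mappings here are illustrative placeholders.
from dataclasses import dataclass, field


@dataclass
class GameState:
    """Minimal example of the state the mapper would read."""
    tension: float = 0.0        # e.g. proximity of enemies, 0.0-1.0
    momentum: float = 0.0       # e.g. player speed or combo streak
    recent_events: list = field(default_factory=list)  # "shot", "powerup", ...


@dataclass
class MusicCue:
    """Minimal example of the musical output handed to a synth or sequencer."""
    tempo_bpm: int
    scale: str
    intensity: float


def map_state_to_music(state: GameState) -> MusicCue:
    """Turn game state into a musical cue; the game could later read the cue back."""
    tempo = int(90 + 60 * state.momentum)                 # faster play -> faster music
    scale = "minor" if state.tension > 0.5 else "major"   # danger -> darker harmony
    intensity = min(1.0, state.tension + 0.1 * len(state.recent_events))
    return MusicCue(tempo_bpm=tempo, scale=scale, intensity=intensity)


if __name__ == "__main__":
    cue = map_state_to_music(GameState(tension=0.7, momentum=0.4, recent_events=["shot"]))
    print(cue)  # MusicCue(tempo_bpm=114, scale='minor', intensity=0.8)
```

The "feeds back into the contents of the game" part would be the game reading the resulting cue and adjusting enemies, powerups, or visuals from it, which is exactly the circular dependency discussed at the end of this post.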
Below is a diagram of what is admittedly a ridiculously vague model.
Let's first try to focus on the game aspect, and brainstorm a list of games that could fit this genre:
- A side-scrolling, third-person, or first-person shooter where enemies, powerups, and graphics vary based on the sounds generated by shooting enemies and gaining powerups.
- A tactical strategy game where the board, enemies, and rules vary based on the sounds generated by the moves you make and the enemies you attack.
- A platform puzzle game where the level is explored and solved based on what sounds are generated from exploring the level.
- A card game where your deck, the rules, and the play state change based on the sounds that are generated by the cards you play.
- An adventure game where the characters you meet and the actions you can take change based on the sounds that are generated by actions you take and characters you meet.
It's becoming clear to me that the only way I can successfully create this game is to break the hard circular dependency between the sound generation and the game itself. I think I'll first focus on creating an app that generates sounds, and later create a game that can evolve and change based on multiple input vectors.
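To make the decoupling concrete, here is a rough sketch of what that standalone sound app's interface could look like: the generator only ever sees a generic input vector, so a game, a test harness, or any other producer could drive it. The names (`InputVector`, `SoundGenerator`, `SineDroneGenerator`) and the mapping inside are made up for illustration:

```python
# Hypothetical sketch of the decoupled sound generator: it consumes a generic
# input vector and knows nothing about the game that produced it.
from typing import Mapping, Protocol

InputVector = Mapping[str, float]   # e.g. {"tension": 0.7, "momentum": 0.4}


class SoundGenerator(Protocol):
    def update(self, inputs: InputVector) -> None: ...


class SineDroneGenerator:
    """Toy generator: it just records the parameters a real synth would act on."""

    def __init__(self) -> None:
        self.frequency_hz = 220.0
        self.amplitude = 0.0

    def update(self, inputs: InputVector) -> None:
        # Arbitrary mapping, purely for the sake of the example.
        self.frequency_hz = 220.0 + 440.0 * inputs.get("tension", 0.0)
        self.amplitude = min(1.0, inputs.get("momentum", 0.0))


def producer_tick(generator: SoundGenerator, inputs: InputVector) -> None:
    """The game (or any other producer) only ever calls update() with a vector."""
    generator.update(inputs)


gen = SineDroneGenerator()
producer_tick(gen, {"tension": 0.5, "momentum": 0.8})
print(gen.frequency_hz, gen.amplitude)  # 440.0 0.8
```

The point of the Protocol-style interface is that the sound app can be built and tested on its own first, and the eventual game just becomes one more source of input vectors.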