So, we are talking about talking heads and virtual characters here. I'm making a small TODO list of the tech I'll need to build some demos with to get up to speed (and so that I don't forget anything).
Audio output to go with the visualisation output. The Web Audio API seems to be the popular new choice; Chrome Angry Birds has switched to it, so I'll have to look at that. I'll need to look at both 3D and 2D
audio APIs. Can we stream audio from files? Is it better to pre-load a bank of files and play from buffers? Can we mix channels? What are the basics? What are the current limits in practice?
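Whatever the API surface turns out to be, channel mixing at bottom is just a weighted sum of samples with clamping. A rough sketch of what a mixer node does internally (the function name, gains, and sample data are my own for illustration, not Web Audio API calls):

```javascript
// Mix two mono sample buffers by summing with per-channel gain.
// Real Web Audio code would wire GainNodes into a graph instead;
// this just shows the underlying arithmetic.
function mixChannels(a, b, gainA = 1.0, gainB = 1.0) {
  const n = Math.max(a.length, b.length);
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    // Sum the two channels, then clamp to [-1, 1] to avoid clipping.
    const s = (a[i] || 0) * gainA + (b[i] || 0) * gainB;
    out[i] = Math.max(-1, Math.min(1, s));
  }
  return out;
}
```

The clamp is the crude option; a real mixer would leave headroom or apply a limiter rather than hard-clip.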
Hardware (GPU) skinning for character animation.
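For reference, the maths that hardware skinning moves into the vertex shader is linear blend skinning: each vertex is transformed by every bone that influences it, and the results are blended by the skin weights. A minimal CPU sketch, using 2D rotation+translation bones for brevity (real code uses 4x4 matrices per bone; all names here are illustrative):

```javascript
// Linear blend skinning for one vertex.
// bones:   array of { cos, sin, dx, dy } (2D rotation + translation)
// weights: one weight per bone, summing to 1 for a rigid result
function skinVertex(vertex, bones, weights) {
  let x = 0, y = 0;
  for (let i = 0; i < bones.length; i++) {
    const b = bones[i], w = weights[i];
    // Apply bone i's transform to the vertex...
    const tx = b.cos * vertex.x - b.sin * vertex.y + b.dx;
    const ty = b.sin * vertex.x + b.cos * vertex.y + b.dy;
    // ...and accumulate, weighted by the skin weight.
    x += w * tx;
    y += w * ty;
  }
  return { x, y };
}
```

On the GPU this runs per vertex in the shader, with the bone matrices uploaded as uniforms and the weights/indices as vertex attributes.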
Models to go with the talking heads. Do we want mixed-factor morph targets at all? What model should we use for mapping dialogue systems to animation factors? Is timing an issue? Does this impact web use?
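The mixed-factor morph target question comes down to blend shapes: the final mesh is the base mesh plus a weighted sum of per-target vertex deltas, with the dialogue system driving the weights. A minimal sketch under those assumptions (names and data are invented):

```javascript
// Blend morph targets into a base mesh.
// base:    flat Float32Array of base vertex positions
// targets: array of delta arrays, same length as base
// weights: one blend weight per target (e.g. driven by visemes)
function blendMorphTargets(base, targets, weights) {
  const out = Float32Array.from(base);
  targets.forEach((deltas, t) => {
    const w = weights[t];
    if (w === 0) return;            // skip inactive targets cheaply
    for (let i = 0; i < out.length; i++) {
      out[i] += w * deltas[i];      // base + sum of weighted deltas
    }
  });
  return out;
}
```

The timing question then becomes how fast the weights can be re-evaluated and the mesh re-uploaded each frame, which is exactly where the web constraints would bite.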