3 votes

Skynet meets The Swarm: How the Berkeley Overmind won the 2010 StarCraft AI competition

1 comment

  1. Crespyl

    This came up in a separate discussion about recent developments in deep-learning models for playing Dota 2 and Quake III Arena. It's an interesting write-up of how a Berkeley team created a winning StarCraft bot, with some nice exploration of the architecture and techniques used.

    In an interesting contrast to these more recent deep-learning approaches, the Berkeley Overmind uses an architecture inspired by operating-system process schedulers to allocate resources and prioritize tasks (a rough sketch of that idea is at the end of this comment), combined with an offline training stage that searches for optimal parameters governing the emergent swarming behavior of its Mutalisk hordes.

    I'm not aware of any attempts to apply deep-learning methods to StarCraft, but it would be interesting to see how such a model would fare, and what its resource usage (memory, compute time during gameplay, etc.) would look like.
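    To make the scheduler analogy concrete, here's a minimal sketch of the idea: each frame, tasks bid for a shared mineral/gas budget and are granted resources in priority order, loosely the way an OS scheduler hands out CPU time. All names here (Task, schedule, the example task names) are hypothetical illustrations of the concept, not the Overmind's actual code.

        # Minimal sketch of priority-based resource scheduling for a bot.
        # Hypothetical names; not taken from the Berkeley Overmind source.
        from dataclasses import dataclass, field
        import heapq

        @dataclass(order=True)
        class Task:
            priority: int                      # lower value = scheduled first
            name: str = field(compare=False)
            minerals: int = field(compare=False)
            gas: int = field(compare=False)

        def schedule(tasks, minerals, gas):
            """Grant resources to the highest-priority tasks that still fit the budget."""
            granted = []
            heap = list(tasks)
            heapq.heapify(heap)
            while heap:
                task = heapq.heappop(heap)
                if task.minerals <= minerals and task.gas <= gas:
                    minerals -= task.minerals
                    gas -= task.gas
                    granted.append(task.name)
            return granted, minerals, gas

        # Example frame: expanding outranks training extra Mutalisks, which outranks scouting.
        frame_tasks = [
            Task(0, "take_expansion", minerals=300, gas=0),
            Task(1, "train_mutalisks", minerals=200, gas=200),
            Task(2, "send_scout", minerals=50, gas=0),
        ]
        print(schedule(frame_tasks, minerals=450, gas=150))

    With a 450/150 bank, the expansion and the scout get funded and the Mutalisk order waits for the next frame; swap the priorities and the spending changes accordingly, which is the kind of behavior a training stage could then tune.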