Does anyone know if they have a more rigorous explanation of what they did somewhere? I'd especially like to look at the network they're using and their input layers in particular.
There's some more info linked in the blog post, I was able to find some more text including a bit on infrastructure, a writeup of the reward function, and a network architecture diagram (PDF warning). They do say they're not ready to talk in depth about the agent internals yet, but hopefully they'll share more in the future.
To me, this is way more exciting than last year's 1v1, even with the restricted mirror-matchup mode, but I'm still skeptical/curious/excited to see whether they can actually keep up this pace and beat the full version of the game by next year.
I really want to see them try to tackle the kind of long-term, knowledge-heavy planning that goes into the pick/ban draft stage, choosing items appropriate for the draft, handling counter-picks dynamically, vision control, etc.
Hopefully they make the bots available to try out again; I'd love to try playing against them.
Thanks a lot! My current focus is trying to apply RL to Game of Drones, which is a much simpler game. But of course I don't have the skills or the resources of a group like OpenAI, so I find it extremely hard to reach a reasonable level of play. Hopefully I can get some ideas from their contributions.
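For a much simpler game, you may not need OpenAI's deep RL stack at all to get started — plain tabular Q-learning is often enough to sanity-check the reward and environment loop first. A minimal sketch on a toy 1-D grid (the environment here is a stand-in I made up, not Game of Drones):

```python
import random

# Toy environment: agent starts at cell 0 on a 1-D grid and must reach
# the goal cell. Actions: 0 = left, 1 = right. Reward 1.0 only at the goal.
N_STATES, GOAL = 6, 5

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: move Q[s][a] toward r + gamma * max_a' Q[s'][a']."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy, with ties broken randomly so the untrained
            # agent explores instead of getting stuck on one action.
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda x: q[s][x])
            nxt, r, done = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
# After training, the greedy policy should head right toward the goal.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES)]
print(policy)
```

Swapping in the real game then just means replacing `step` with your own environment transition; the update rule stays the same.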