r/DataHoarder project to archive reddit before the API changes (link to request a copy of your personal data in comments)
https://www.reddit.com/r/DataHoarder/comments/142l1i0/archiveteam_has_saved_over_108_billion_reddit/
If you have time, the ArchiveTeam Warrior runs easily in VirtualBox.
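If you'd rather not run a full VirtualBox VM, the Warrior also ships as a Docker image. A minimal sketch, based on my reading of the ArchiveTeam wiki's Docker instructions (image name, flags, and port are assumptions, so verify them against the wiki before running):

```shell
# Hedged sketch: image registry/name and web-UI port taken from the
# ArchiveTeam wiki as I remember them; double-check before use.
docker run --detach \
  --name archiveteam-warrior \
  --restart unless-stopped \
  --publish 8001:8001 \
  atdr.meo.re/archiveteam/warrior-dockerfile
# Then open http://localhost:8001 in a browser to pick a project.
```

Either way you end up at the same web interface where you choose which project (e.g. the Reddit grab) to contribute bandwidth to.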
Also, you can request a copy of your personal data from reddit (useful if you wish to delete your account) at:
https://www.reddit.com/settings/data-request
I requested a data export a couple of days ago as well and have yet to get it. It's highly likely that they're overwhelmed with requests at the moment. Even during standard operations, data requests can take a meaningful amount of server time to process and prepare for a user. One potential upside to the delay, depending on how you look at it, is that it probably means many users are requesting their data, which also means it will cost Reddit a pretty penny to process, which is just extra salt on their current wounds.
That is extremely true. And if it's not server processing time, Reddit may be [pure speculation from this point] purposefully delaying these exports until after the API cutoff date (or at least delaying it to some extent).
Luckily, the ArchiveTeam Warrior is unaffected (it scrapes). A lot of the common deletion tools/scripts also shouldn't be affected: most of the ones I know of "log in" to Reddit just as you normally would in a browser, which shouldn't be subject to the API rate limit (there is a rate limit, just not the API one).
Something is going on. I haven't received my link either.
I wonder how the mass exit of technically literate users and mods will change the platform.
I'm in the same boat - I requested an export so I could delete every comment I've ever posted to Reddit, instead of just my most recent 1000 visible through the API. I wouldn't be surprised to learn that many others have done the same.
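For the most recent ~1000 comments that the listing API does surface, a deletion script is straightforward. A sketch using the third-party PRAW library (credentials are placeholders, and `delete_listed_comments` is a helper name I made up for this example):

```python
def delete_listed_comments(comments):
    """Call .delete() on every comment in an iterable; return the count.

    Works on any iterable of objects with a .delete() method, so it can
    be driven by a PRAW listing or tested with stand-ins.
    """
    count = 0
    for comment in comments:
        comment.delete()
        count += 1
    return count


if __name__ == "__main__":
    # Assumes `pip install praw` and a registered script-type app;
    # all credential values below are placeholders.
    import praw

    reddit = praw.Reddit(
        client_id="...",
        client_secret="...",
        username="...",
        password="...",
        user_agent="comment-cleanup sketch",
    )
    # Reddit listings only go back roughly 1000 items, which is exactly
    # why anything older requires the full data export instead.
    n = delete_listed_comments(reddit.user.me().comments.new(limit=None))
    print(f"deleted {n} comments")
```

The export CSV, by contrast, contains the IDs of *all* your comments, so a tool can walk that list to reach the ones the listing no longer shows.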