Apparently sick of being criticized by everyone from the US Congress to Seth Rogen for not doing enough to purge its platform of bots, trolls, and malicious foreign actors, Twitter has started liberally dropping the ban hammer on suspicious accounts, suspending more than 70 million users over the course of May and June.
Something Tildes might do well to consider.
My friend Woozle formulated what I call Woozle's Paradox of Epistemic Systems:

Our present epistemic systems are undergoing kind of the same shock that the online community underwent when transitioning from BBSs and Usenet to the commercial web to social media.

We were used to a very high content-to-BS ratio because it took a certain amount of intelligence and intense domain-interest for people to be there in the first place -- and we've now transitioned to a situation where many people are there more or less accidentally and (the worst part): because of a high percentage of the population being present, there is now substantial power to be had by influencing the discussions that take place.
Watching the slow switch of the internet from a space mostly populated and curated by the tech-savvy few to one used by the general public has been a very interesting experience. Some days I actually catch myself thinking "they should never have made it so my grandma could easily use the internet"
This ... has happened before. Elizabeth Eisenstein, The Printing Press as an Agent of Change (1979).
Also McLuhan, Mackay, Plato, ...
[Flaubert] didn’t just hate the railway as such; he hated the way it flattered people with the illusion of progress. What was the point of scientific advance without moral advance? The railway would merely permit more people to move about, meet and be stupid together.

- Julian Barnes, Flaubert's Parrot
I love this, thanks!
https://en.wikipedia.org/wiki/Eternal_September
I get the sentiment but I don't agree with that at all. The internet is facing some serious problems but on balance, the benefits it provides still absolutely crush the costs into dust.
One answer is to hang out on forums that still take some effort to enter. Funnily enough, one of the nicest forums I've ever encountered was on a private tracker. The invite process itself and the requirements of the community were more than enough to ensure a much higher level of community and good behavior.
I don't agree with the idea of weighing the Internet's costs vs. its benefits, because it doesn't take into account that "the internet" as a concept is exceptionally broad and actually pretty ill-defined, so factually establishing its costs and benefits becomes difficult. Are the energy requirements of cryptominers a cost of the internet? Are sweeping social changes a cost? Is increased communication a benefit proper to the internet, or to improvements in communications technologies world-wide?
Use as liberal a definition of costs as you like--I can match your costs with benefits which are equally spurious if I must.
Honestly, this is brave of them. Twitter has in the past been tolerant of bots, as they inflate the appearance of success. To lose 70 million fake users while their number of real users is dropping is not the kind of action many boards can bring themselves to take.

The Association for the Advancement of Artificial Intelligence estimates bots make up between nine and 15 per cent of Twitter’s userbase. David Caplan, the co-founder of TwitterAudit, told Gizmodo earlier this year that just 40 to 60 per cent of Twitter accounts are real people.
That is a lot of accounts to lose.
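For scale, a quick back-of-the-envelope calculation (the ~336 million monthly-active-user figure is Twitter's reported Q1 2018 number, an assumption of mine, not something from the article):

```python
# AAAI estimate: 9-15% of the userbase are bots.
mau = 336_000_000              # assumed Twitter MAU, Q1 2018 (not from the article)
low, high = 0.09 * mau, 0.15 * mau

print(low, high)               # roughly 30 million to 50 million estimated bots
print(70_000_000 > high)       # True: 70M suspensions exceed even the high estimate,
                               # though suspensions span all accounts, not just MAU
```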
Will getting rid of bots make the user experience better, or advertiser return more meaningful? At the very least, it will make the company appear a little more responsible. It will also cause frustration for real users caught by false positives, who may permanently leave the site depending on the customer service response. One person wrote that they had been flagged as a bot because they retweeted something from a bot.
This raises the question of performance metrics.
Users, sign-ups, average monthly users, post quality, engagement, and topic depth or quality are among the possible metrics used by investors, advertisers, and subscribers to judge or rate sites.
I've taken stabs at this myself with estimates of Google+ activity and "Tracking the Conversation", looking at post quality across a number of sites, domains, and TLDs, via various proxies.

The reluctance of sites to purge obvious bot or other manipulative/trolling accounts is curious. And Twitter's change of heart is notable.
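The simplest of these metrics are straightforward to compute from raw activity logs; a minimal sketch, with an entirely made-up log format:

```python
from datetime import date

# Hypothetical activity log: (user_id, day of the event).
events = [
    ("alice", date(2018, 7, 1)),
    ("alice", date(2018, 7, 15)),
    ("bob",   date(2018, 7, 2)),
    ("carol", date(2018, 6, 30)),
]

def monthly_active_users(events, year, month):
    """Count distinct users with at least one event in the given month."""
    return len({user for user, day in events
                if day.year == year and day.month == month})

print(monthly_active_users(events, 2018, 7))  # 2  (alice and bob)
```

The hard part, of course, is everything this sketch ignores: deciding which events count as "activity", and whether the account generating them is a person at all.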
I'm not going to say that they're being brave until they disable Trump's account.
I'm too tired to look it up now, but he has used the account to break Twitter's rules more than once.
Call me cynical, but they'll be playing for the Stanley Cup in Hell long before Twitter bans Trump. His account is Twitter's primary claim to relevance at this point.
Oh, I don't disagree with you at all on this issue.
About damn time
Soon we're going to see the battle of AIs: NLP bots trying to imitate normal user behavior versus NLP systems trying to detect and ban them.

This is very similar to the algo-trading wars we're already seeing, where fast trading algorithms battle each other to exploit arbitrage or speculative opportunities in the markets.
From my own experience in machine learning, adversarial attacks on neural networks tend to be far easier than creating the initial neural network in the first place. For instance, the VGG and ResNet image recognition networks can be fooled by a single-pixel attack, and yet hardening these architectures against adversarial attacks comes with a significant penalty to the accuracy of the network. This also seems to apply to discriminative networks; in image GANs, it's always the discriminator that has to be weakened in order for the generator to succeed. Unless this changes, I'd say that the networks aiming to ban NLP bots have a distinct advantage in the battle between generative bots and discriminative bots.
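To make the "attacks are cheap" point concrete with a toy example (a frozen linear scorer standing in for a real VGG/ResNet; the weights, inputs, and step size are all invented):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Toy linear "bot detector": output > 0.5 means "real user".
w = [1.0, -2.0, 3.0]           # frozen model weights
x = [0.5, 0.5, 0.5]            # an input classified as a real user (score = 1.0)

# FGSM-style attack: the gradient of the score w.r.t. x is just w,
# so nudge every feature a small step against the sign of its weight.
eps = 0.3
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(sigmoid(score(w, x)))      # ≈ 0.73 -> "real user"
print(sigmoid(score(w, x_adv)))  # ≈ 0.31 -> flipped to "bot"
```

One gradient step, bounded by eps per feature, flips the decision; making the model robust to every such step is the far harder problem.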
This is my experience as well. I generally think that if I as a human can distinguish between two sets, it should be possible (if often complex) to write code to automate this. But if humans are fooled by such bots (and they easily are), it will be very hard to create automated tools to categorize and filter them.
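For the easy cases, "code that distinguishes two sets" can be as crude as a single behavioral feature; a toy sketch (the function name, threshold, and timestamps are all invented, and real detectors are far more involved):

```python
import statistics

def looks_like_bot(post_times, max_jitter=5.0):
    """Flag accounts whose posting intervals are suspiciously regular.

    post_times: posting timestamps in seconds, sorted ascending.
    max_jitter: hypothetical threshold on the standard deviation of the
    gaps between posts; a real system would learn this from labeled data.
    """
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    return statistics.stdev(gaps) < max_jitter

# A scheduled bot posts every ~600 s almost exactly; a human is irregular.
bot = [0, 600, 1200, 1801, 2400]
human = [0, 90, 2400, 2460, 9000]

print(looks_like_bot(bot))    # True
print(looks_like_bot(human))  # False
```

The trouble is exactly the point above: once the bot adds human-like jitter and human-like language, every cheap feature like this stops separating the sets.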
I wonder if this is why my Twitter account suddenly got a new wave of people trying to log into it... got about 10 2FA prompts in two days after months of silence.