Dead Internet Theory As A Service being trumpeted as a first-class feature from one of the biggies is not something I expected to see this quickly.
YouTube is already rolling out "AI enhanced" comments to help channels reply to viewers' posts more easily.
It's a minimum value product, an easy thing to knock out that provides a "benefit," so they put it into production. It answers the question "why are they using my data?" with an egocentric "so it can create comments for me."
There's definitely a gap between "offering humans with accounts AI to write responses," which has existed on a number of platforms in some form or another for a while, and actually making entire accounts that have no real human behind them, though. I suppose bots have also existed for quite a while, so it's always been technically achievable, but the platform itself implementing them is a big step. I too am quite surprised Meta has embraced this so quickly.
Every time I see articles like this with people touting these planned use cases for AI, I'm always reminded of the quote from Dr. Ian Malcolm in Jurassic Park:
"Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."
I mean, I strongly believe that someone has considered the "why they should." The unfortunate part is that it's executives and folks in finance. If I had to guess, this will just be another way to stealthily serve ads to folks. Make 90% of the posts/replies/content of an account just believable enough to pass as an actual person so that it can get friends and friend people back, and then when someone who shares 8 acquaintances shows up on your timeline and drops a link to some service, some clothing shop, or whatever, you might be more willing to click on it.
As an aside, it's very likely going to be used in a polarizing way to capture folks' attention as well and keep them on the platform, so that you're spending more time scrolling past (and hopefully clicking) ads.
Yeah, my comment was more tongue-in-cheek. I know that the "why they should" has been extensively thought about by those pushing this. The bottom line is they think it will make them money at some point. It just feels like such a nefarious and shitty thing to do on a fundamental level, which is more what I was going for. The "should" in the context I was referring to (and the movie was referring to) is more of a philosophical question.
This is why we should all get DeleteMe. DeleteMe actively searches your information in advertiser databases and automatically opts you out and requests deletion of your private information. Thankfully I can get you a discount if you use my referral code.
(This was intended to be a joke but I’m wondering if it shouldn’t be.)
When Facebook was new, it was about real people. I was an early adopter and it was actually a fun way to keep sort of in touch with old acquaintances. Now no one I know uses it for much more than the occasional event planning, a few interest groups, and local community groups.
I know regular users aren't the customers for Facebook, but who is asking for something like this? Will advertisers still spend money targeting humans with money to spend if Facebook gradually removes the human element and it becomes a bot town? Or maybe there are just enough people who will happily engage with Facebook even though they are just interacting with bots.
The most engaged Facebook users are older people who fall for rage-bait headlines. AI-powered users will further target the most active users and make them even more engaged.
I'd put a lot of money on this just pushing more and more crap onto the platform and locking in the current users. I don't think Facebook is pushing this to bring back users who have left, or trying to sell it as a way to get new users to join (are there even any "new users" at this point to get via new features?).
This will just keep existing users' eyes on the screen even longer. That means right-wing rage bait interspersed with the occasional eye-candy post.
It's also just a trial for AI influencers on Instagram and Threads, even if it's not explicitly marketed as such.
It was so good for event planning. There's nothing like it as far as I am aware. It was so good for sharing photos. It was so good for telling everyone you know that you are going to be in whatever city at whatever time and then letting anyone who wants to hit you up for plans, without having to reach out to everyone. Nothing's like that anymore.
This past year has really solidified for me how much one cannot trust anything on the internet. Not that I wasn’t wary of it before, but with all of this new AI tech that’s flooded every sphere of online activity, combined with news like this, the awareness of how manipulative and canny those operating on the other side of even the most innocuous of interactions might be has just shot through the roof.
Now, every comment on the screen becomes a gauntlet of questions: Is this another stupid bot? Is this one of those Russian subversion campaigns leading me slowly but surely down an unnecessarily contentious thought-path? Is this just some random asshole tween from Idaho with too much time on their hands? Is this another stealth advertisement? And so on, and so forth…
And doubly so for social media, which has honestly just become one bloated, multi-limbed fiction-machine with rapidly diminishing handfuls of truth thrown in. But truth is also, to a point, subjective — so, unless one has a knowledge base of where someone else’s point of view is coming from, a lot of the time, even that falls short.
ALSO, since I’m already here, lol, I might as well bring up one of the most irksome developments to have transpired within the past few years: The insistence by the tech community behind the push for AI to term the blatant bullshit that ChatGPT and its like belch out as “hallucinations”.
A machine does not hallucinate, but it can produce something closer to a mirage. A new type of illusion witnessed and experienced by us — the viewers, the interpreters, the beings with sentience and consciousness; the humans creating and developing and manipulating the tech. We are the recipients of this phenomenon; we are seeing and experiencing what is simply not there. We give the nonsense we receive meaning and substance — not the damned machine. It cannot, it does not HALLUCINATE — and to call it such is to project our imaginings of the tech onto the tech itself, and to then declare that projection as reality. We are anthropomorphizing our own creation, and this cannot be without consequence.
And those utter fucking twats so hellbent on selling us this dream-nightmare of some Blade Runner-esque future know this. It is marketing, and it is very good at turning our language associations against us, and, from the way so many laypersons seem so eager to stake their futures and fortunes on it, it appears to be working.
(Also… this is how the groundwork is laid to eventually bestow the legal concept of personhood — and, therefore, less liability for the more powerful corporations and individuals utilizing it — onto these products. We are using language to very subtly psychologically move the locus of control and responsibility away from the developers and large scale operators of the machines onto the machines themselves. Not a surprising notion coming from such overtly ‘laissez-faire for me but not for thee’-minded folks.)
Unsolicited internet rant concluded.
So we've gone from banning spambots to... making spambots.
I'm so happy I deleted Facebook.
Do it, cowards!
(I hate how big corporations handle their users, pardon me, their goods of trade, so I want them to do this kind of thing and maybe drive off some of the user base by doing it)
I wonder if it will be easy to identify these "AI commenters," or if they will be hidden amongst the human user comments. I also wonder if they will respond to trolling or rude comments with the same in return, hah.
This is going to bring a whole new level to disinformation campaigns as people game the system's drones' engagement metrics to trick them into mass-repeating ever more complex dog whistles and propaganda.
Seems like the vision is to turn it into a personality platform and an AI-friend platform. I've been hearing a lot about character.ai, but that seems like small potatoes compared to Facebook doing something like this if they push the integration hard. Presumably the audience demands might be a bit different, but I could see some people finding appeal in being friends with celebrities, and I wouldn't be surprised if there's some kind of AI romance element in it as well. It's pretty dystopian, but what isn't these days?
I see this sort of like Facebook games, which were big for a while, but apparently have been discontinued.
A lot depends on whether they're properly marked and can be blocked. Facebook has a poor record of letting you block things, though.