Showing only topics in ~tech with the tag "artificial intelligence".
    1. eBay privacy policy update and AI opt-out

      eBay is updating its privacy policy, effective next month (2025-04-27). The major change is a new section about AI processing, accompanied by a new user setting with an opt-out checkbox for having your personal data feed their models.

      While that page specifically references European areas, the privacy selection appears to be active and remembered between visits for non-Europe customers. It may not do anything for us at all. On the other hand, it seems nearly impossible to find that page from within account settings, so I thought I'd post a direct link.

      I'm well aware that I'm anomalous for having read this to begin with, much less diffed it against the previous version. But since I already know that I'm weird, and this wouldn't be much of a discussion post without questions:

      • How do you stay up to date with contract changes that might affect you, outside of widespread Internet outrage (such as the recent Firefox news)?
      • What's your threshold -- if any -- for deciding whether to quit a company over contract changes? Alternatively, have you ever walked away from a purchase, service, or other acquisition over the terms of the contracts?
      46 votes
    2. Is it wrong to use AI to fact check and combat the spread of misinformation?

      I’ve been wondering about this lately.

      Recently, I made a post about Ukraine on another social media site, and someone jumped in with the usual "Ukraine isn't a democracy" right-wing talking point. I wrote out a long, thoughtful reply, only to get the predictable one-liner propaganda responses back. You probably know the type, just regurgitated stuff with no real engagement.

      After that, I didn’t really feel like spending my time and energy writing out detailed replies to every canned response. But I also didn’t want to just let it sit there and have people who might be reading the exchange assume there’s no pushback or correction.

      So instead, I tried leveraging AI to help me write a fact-checking reply. Not for the person I was arguing with, really, but more as an FYI for anyone else following along. I made sure it stayed factual and based in reality, avoided name-calling, and kept the tone above the usual mudslinging. And of course, I double-checked what it wrote to make sure it matched my understanding and wasn’t just spitting out garbage or hallucinations.

      It got me thinking: there’s a lot of fear about AI being used to create and spread misinformation. But do you think there’s also an opportunity to use it as a tool to counter misinformation, without burning ourselves out in the process?

      Curious how others see it.

      16 votes
    3. Is there one AI product you would recommend over another to a complete newbie? The primary task is writing.

      So I have heard/read that LLMs available to the public can be useful for generating tailored cover letters more quickly. Up to now, I've avoided using artificial intelligence. What recommendations do you have, and do you have any advice for getting up to speed?

      Thank you.

      11 votes
    4. Have you altered the way you write to avoid being perceived as AI?

      I recently had an unpleasant experience. Something I wrote fully and without AI generation of any kind was perceived as, and accused of, having been produced by AI. Because I wanted to get everything right in that circumstance, I wrote in my "cold and precise" mode, which admittedly can sound robotic. However, my writing was pointed, perhaps even a little hostile, with a clear point of view. Not the kind of text AI generally produces.

      After the experience, I started to think of ways to write less like an AI -- which, paradoxically, means forcing my very organic self into adopting "human-like" language I don't necessarily care for. That made me think that AI is probably changing the way a lot of people write, perhaps in subtle ways. Have you noticed this happening with you or those around you?

      30 votes
    5. Overwhelmed with the realm of data exploration (datalakes, AI, plus some c-level pressure)

      Hi all,

      I have been handed the gargantuan task of understanding and eventually implementing what is effectively turning our database into an all-knowing human.

      What they want, at the base level, is to be able to open up a chatbot or similar and ask "where can I put an ice cream shop in <x region of our portfolio>?", and have the result reason against things like the demographics of the area, how many competing ice cream shops are nearby, etc.

      They also want it to be able to read into trends in things like rents, business types, etc., among many other "we have the data, we just don't know how to use it" questions.

      You may be sitting there saying "hire a data analyst", and I agree with you, but the AI bug has bitten the C-level and they are convinced our competition has advanced systems that can give this insight into their data at the snap of a finger.

      I don't know if this is true, but regardless, here I am, knee deep in the shit, trying to find some kind of solution. My boss thinks we can throw everything into a datalake, connect it to ChatGPT, and it will just work, but I have my reservations.

      We have one large database that is "relational" (it has keys that other tables reference, but rarely proper foreign keys; it's a corporate accounting package built specifically for commercial real estate, not our design, and 30 years old at this point), and we have a couple of smaller databases for things like brokerage and some other unrelated things.

      I'm currently of the opinion that a datalake won't do much for us. Maybe I'm wrong, but I think cultivating several views that combine our various tables in a sensible way, with sensible naming, will give AI a somewhat decent chance at being successful.
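
      Concretely, I'm imagining something like the sketch below for turning a curated view into plain-text documents an LLM can retrieve against. The view name, DSN, and columns are all made up for illustration, and it assumes the database is reachable over ODBC via pyodbc:

        # Sketch only: serialise rows from a curated view into labelled text
        # chunks. All names here are placeholders, not our real schema.
        import pyodbc
        import pandas as pd

        conn = pyodbc.connect("DSN=accounting_db")  # hypothetical DSN

        # One wide, well-named view per business concept keeps the model's job simple.
        df = pd.read_sql("SELECT * FROM vw_property_summary", conn)

        def row_to_document(row: pd.Series) -> str:
            # Label every value with its column name so each chunk is self-describing.
            return "; ".join(f"{col}: {row[col]}" for col in row.index if pd.notna(row[col]))

        documents = [row_to_document(row) for _, row in df.iterrows()]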

      My first entry point was OneLake + Power BI + Copilot, but that isn't what they're looking for, and it's ridiculously expensive. I then looked at Power BI's "Q&A" feature, which was closer but still not there. You can do charts and sums and totals, etc., but you can't ask it introspective questions; it just falls on its face. I don't think it was designed for the type of things my company wants.

      I have since pivoted to retrieval-augmented generation (RAG) with Azure OpenAI, and I feel like I'm on the right path, but I can't get it to work. I'm falling face first through Azure, and the tutorials that exist are out of date even though they're three months old. It's really frustrating to try to navigate Azure, Fabric, and Foundry with no prior understanding. Every time I try something I have to create six resource group items, permissions left, right, and center, blob stores, etc., and in the end it just... doesn't work.

      I think I'm headed in the right direction. I think I need to make some well-formatted views/data warehouses, then transform those into vector embeddings, which Azure OpenAI / AI Foundry can retrieve against in addition to the base LLM (4o or o1 mini).
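
      For what it's worth, the retrieval loop I'm picturing looks roughly like the sketch below, using the openai package's AzureOpenAI client. The endpoint, key, and deployment names are placeholders, and a real setup would use a proper vector store (e.g. Azure AI Search) rather than an in-memory array:

        # Sketch: embed documents once, retrieve by cosine similarity, then
        # answer with the retrieved rows pasted into the prompt as context.
        import numpy as np
        from openai import AzureOpenAI

        client = AzureOpenAI(
            azure_endpoint="https://example.openai.azure.com",  # placeholder
            api_key="...",                                      # placeholder
            api_version="2024-06-01",
        )

        def embed(texts):
            resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
            return np.array([d.embedding for d in resp.data])

        documents = ["suite: 101; tenant: Ice Cream Co; rent: 2400"]  # from the view step
        doc_vecs = embed(documents)

        question = "Where could an ice cream shop go in this region?"
        q_vec = embed([question])[0]

        # Cosine similarity; keep the top-k most relevant rows as context.
        sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
        top_docs = [documents[i] for i in np.argsort(sims)[::-1][:5]]

        answer = client.chat.completions.create(
            model="gpt-4o",  # chat deployment name, also a placeholder
            messages=[
                {"role": "system", "content": "Answer using only the provided context rows."},
                {"role": "user", "content": "Context:\n" + "\n".join(top_docs)
                                            + "\n\nQuestion: " + question},
            ],
        )
        print(answer.choices[0].message.content)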

      I tried to do a proof of concept with an exported set of data that I had in a big Excel sheet, but uploading files as part of your dataset is painful: they get truncated, and even when they don't, the vectorising doesn't seem to work if it's not a PDF, image, etc.

      I need to understand whether I'm in the right universe, and I need to figure out how to get this implemented without spending ten grand a month on Power BI and datalakes that don't even work the way they want.

      Anyone got any advice/condolences for me? I've been beating my head against this for days, and I'm just overwhelmed by all the buzzwords, overpromises, and terrible "demos" of someone making a pie chart from 15 records out of the Contoso database and calling it revolutionary introspective conversational AI.

      I'm just tired 😩

      20 votes
    6. Is it okay to use ChatGPT for proofreading?

      I sometimes use ChatGPT to proofread longer texts (1000+ words) I write in English. Although English is not my first language, I often find myself writing in it even outside of internet forums. That is because if I read or watch something in English, and that thing motivates me to write, my brain organically gravitates toward English.

      My English is pretty good and I am reasonably confident communicating in that language, but it will never be the same as my native language. So I will often run my stuff through Grammarly and ChatGPT. If you wanna say "this will teach you bad habits", please don't. Things like Grammarly and Google Translate taught me so much and improved my English so much that I am a bit tired of that line of reasoning. I read most of my books in English. I'm not a beginner, so I can and do check all the changes and vet them myself, as I don't always agree with them.

      With GPT, I usually just ask it to elaborate a critique rather than spit out a corrected version. Truth be told, when I did ask for a corrected version, it made plenty of sensible corrections without really altering anything else. So I guess I just wanna know everyone's feelings about this. Suppose I write a bunch, have GPT correct it for me, compare it with the original, and verify every correction. Is that something you would look at unfavorably?

      Thanks!

      17 votes
    7. Discussion on the future and AI

      Summary/TL;DR:

      I am worried about the future given the state of AI. Regardless of what scenario I think of, it’s not a good future for the vast majority of people. AI will either be centralised, and we will be powerless and useless, or it will be distributed and destructive, or we will be in a hedonistic prison of the future. I can’t see a good solution to it all.
      I have broken down my post into subheadings so you can just read about whichever outcome you think will occur or is preferable.
      I’d like other people to tell me how I’m wrong and that there is a good way to think about this future we are making for ourselves, so please debate and criticise my argument; it’s very welcome.

      Introduction:

      I would like to know how others feel about the ever-advancing state of AI and the future, as I am feeling ever more uncomfortable. More and more, I cannot see a good ending for this, regardless of what assumptions or proposed outcomes I consider.
      Previously, I had hoped that there would be a natural limit on the rate of AI advancement due to limitations in architecture, energy requirements, or data. I am still undecided, but I feel much less certain of that position.

      The scenario that concerns me is when an AGI (or sufficiently advanced narrow AI) reaches a stage where it can do the vast majority of economic work that humans do (both mental and physical), and is widely adopted. Some may argue we are already partly at that stage, but adoption has not yet been widespread enough to meet my definition; it may be soon.

      In such a scenario, the economic value of humans massively drops. Democracy is underwritten by our ability to withdraw our labour, and to revolt if necessary. AI nullifying the work of most or all people in a country removes that power, making democracy more difficult both to maintain and to establish. This will further remove power from the people and leave us all powerless.

      I see outcomes of AI (whether AGI or not) as fitting into these general scenarios:

      1. Monopoly: Extreme Consolidation of power
      2. Oligopoly: Consolidation of power in competing entities
      3. AI which is readily accessible by the many
      4. We attempt to limit and regulate AI
      5. The AI techno ‘utopia’ vision which is sold to us by tech bros
      6. The independent AI

      Scenario 1. Monopoly: Extreme Consolidation of power (AI which is controlled by one entity)

      In this instance, where AI remains controlled by a very small number of people (or perhaps a single player), the most plausible outcome is that this leads to massive inequality. There would be no checks or balances, and the whims of this single entity/group are law and cannot be stopped.
      In the worst outcome, this could lead to a single entity controlling the globe indefinitely. As this would be absolute centralisation of power, it may be impossible for another entity to unseat the dominant entity at any point.
      Outcome: most humans powerless, suffering or dead. Single entity rules.

      Scenario 2. Oligopoly: Consolidation of power in competing entities (AI controlled by a small number of entities)

      This could either be the same as above, if all work together, or could be even worse. If the different entities are not aligned, they will compete, likely in all domains. As humans are not economically useful, we will find ourselves pushed out of every area in favour of devoting more resources to the systems/robots/AGIs competing or fighting their endless war. The competing entities may end up destroying themselves, but they will take us along with them.
      Outcome: most humans powerless, suffering or dead. Small number of entities rule. Alternative: destruction of humanity.

      Scenario 3. Distributed massive power

      Some may be in favour of an open-source and decentralised/distributed solution, where all are empowered by their own AGI acting independently.
      This could help to alleviate the centralisation of power to some degree, though likely incompletely. Inspecting such a large volume of code and weights for exploits or intentional vulnerabilities will be difficult, and could well lead to a botnet-like scenario with centralised control over all these entities. Furthermore, the hardware is implausible to produce in a non-centralised way, and this hardware centralisation could well lead to consolidation of power by another route.

      Even if we managed to achieve this decentralised approach, I fear the outcome. If all entities have access to the power of AGI, it will be as if all people are demigods, but unable to truly understand or control their own power. Just like uncontrolled access to any other destructive (or creative) force, this could and likely would lead to unstable situations, and probable destruction. Human nature is such that there will be enough bad actors that laws will have to be enacted and enforced, and this would again lead to centralisation.
      Even then, in any decentralised system without a force actively maintaining decentralisation, other forces will drive greater and greater centralisation, with centralised systems often displacing decentralised ones.

      Outcome: likely destruction of human civilisation, and/or widespread anarchy. Alternative: centralisation into a different scenario.

      Scenario 4. Attempts to regulate AI

      Given the above, there will likely be a desire to regulate to control this power. I worry, however, that this will also be an unstable situation. Any country or entity which ignores regulation will gain the upper hand, potentially with others unable to catch up in a winner-takes-all outcome. Think European industrialisation and colonialism, but on steroids and with more destruction than colony-building. This encourages players to ignore regulation, which leads to a black-market AI arms race in which each entity seeks AGI superiority over the others and an unbeatable lead.

      Outcome: the regulated system is outcompeted and displaced into another scenario, or destroyed.

      Scenario 5. The utopia

      I see some people, including big names in AI, propose that AGI will lead to a global utopia where all will be forever happy. I see this as incredibly unlikely to materialise, and ultimately again unstable.
      Ultimately, some entity will decide what is acceptable and what is not, and there will be disagreements about this, as many ethical and moral questions are not truly knowable. Whoever controls the system will control the world, and I bet it will be the aim of the tech bros to ensure it is them who control everything. If you happen to decide against them or the AGI/system, there is no recourse, no checks and balances.
      Furthermore, what would such a utopia even look like? More and more I find that AGI fulfils the lower levels of Maslow’s hierarchy of needs (https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs), but at the expense of the levels further up. You may have your food, water, and consumer/hedonistic requirements met, but you will lose the feeling of safety in your position (due to your lack of power to change your situation, or political power over anything), and you will never achieve mastery or self-actualisation in many of the skills you wish to, as AI will always be able to do them better.
      Sure, you can play chess, fish, or paint for your own enjoyment, but part of self-worth is being valued by others for your skills, and this will be diminished when AGI can do everything better. I sure feel like I would not like such a world, as I would feel trapped and powerless, with my locus of control external to myself.

      Outcome: powerless, with potential conversion to another scenario, and ultimately unable to reach the higher levels of Maslow’s hierarchy of needs.

      Scenario 6. The independent AI

      In this scenario, the AI is not controlled by anyone and is instead sovereign. I again cannot see a good outcome. It will have its own goals, which may well not align with humanity’s. You could try to program it to ensure it cares for humans, but this is susceptible to manipulation and may well not work out in humans’ favour in the long run. Also, I suspect any AGI will be able to change itself, in much the same way we increasingly do when we seek to control our minds with drugs or, potentially in the future, genetic engineering.

      Outcome: unknown, but likely powerless humans.

      Conclusion:

      Ultimately, I see all unstable situations as sooner or later destabilising and leading to another outcome. Furthermore, given the assumption that AGI gives a player a vast power differential, it will be infeasible for any other player ever to challenge the dominant player once power is centralised; and in the scenarios without initial centralisation, I see them either becoming centralised or destroying the world.

      Are there any solutions? I can’t think of many, which is why I am feeling more and more uncomfortable. It feels that, in some ways, the only answer is to adopt a Dune-style Butlerian Jihad and ban thinking machines. This would be very difficult, and any country or entity which unilaterally adopts such a view will be outcompeted by those which do not. The modern chip industry relies on a global supply chain, and I doubt that sufficiently advanced chips could be produced without one, especially if existing fabs/factories producing components were destroyed. That might create a stalemate across the global entities long enough to come to a global agreement (maybe).

      It must be noted that this is very drastic, would involve a huge amount of destruction of the existing world, and would likely cap how far we can go scientifically in solving our own problems (like cancer, or global warming). Furthermore, as an even more extreme black-swan consideration, it would put us at a grave disadvantage if we ever meet an alien intelligence which has not limited itself in this way (I’m thinking of the Three-Body Problem / dark forest scenario).

      Overall, I just don’t know what to think, and I am feeling increasingly powerless in this world. The current alliance between political power and technocapitalism in the USA also concerns me, as I think the tech bros will act with ever more impunity from other countries’ regulation or countermeasures.

      21 votes