26 votes

Microsoft admits that maybe surveilling everything you do on your computer isn’t a brilliant idea

21 comments

  1. [13]
    JCAPER

    Some immediate thoughts; sorry if this isn’t structured right or feels unfocused.

    I think it’s only a matter of time until AI at the OS level becomes a common thing. Apple and Microsoft are heading that way, and Google will undoubtedly follow.

    The AI they’re going to implement will be one that feeds on context. It will be incredibly useful: we’ll be able to ask it specific questions without needing to provide context, and it will know what we mean, how we usually write, what we usually prefer, etc. In other words, it will know us.

    Apple gave some good examples of how I imagined this would work. Ask it to write an email, and it will know our writing style and replicate it. Ask it to schedule something, and it will know what else we have scheduled and plan accordingly. It will know which notifications we care about, which emails are more important, and so on.

    In short, a good implementation will be useful. Not a toy, not a gimmick, but actually useful. A lot of people will jump at the chance of having these features.

    And this is why it will be one of the major swings at privacy: an uppercut that will disorient privacy rights and put them on the ropes.

    I personally “trust” Apple, but I wouldn’t die on that hill if someone challenged me on it. But I absolutely do not trust Microsoft or Google. And I feel I’m not the only one, because in essence, what Microsoft proposed with their AI isn’t much different from what Apple proposed. On paper both should be equally scrutinized, but Apple has a better track record on privacy than Microsoft, and for most people it’s easier to recognize that taking screenshots is invasive; it’s a concept that’s easier to grasp, in contrast with the “weird digital magic” that spies on you in the background, which feels more alien and is therefore easier to hand-wave away.

    Privacy-oriented alternatives will appear; I have no doubt that open source Linux distros and Android custom ROMs with AI will emerge. But until then, most companies will want to give AI capabilities to everyone, even those without the necessary hardware, and most will use cloud-based solutions. People will adopt them, and willingly give away even more info about themselves.

    10 votes
    1. [6]
      Onomanatee

      A very likely future, and I'm not looking forward to it. Sure, it gives some small, immediate usability advantages. But counting against that are:

      • the privacy concerns you're talking about, of course
      • AI is, if I understand it correctly, quite computationally heavy. And it will run all the time on these OSes, for every menial task. In other words, we're looking at a massive increase in energy usage, graphics cards and dedicated hardware, obsolescence of older devices... Big environmental impact here for, at least in my eyes, trivial gain.
      • the issues of inequality in AI still have not been addressed. Models are trained on data skewed toward a well-off, white, young, male demographic, and will further cement its habits and patterns as defaults. Other voices will get drowned out more and more as we, in effect, lose the advantage of an open and diverse internet (in so far as we have it), especially with AI not actually understanding context and just regurgitating popular answers. I see a further deepening of binary stances on issues, further alienation, lazy answers to complex questions, and tons of people feeling left out of the conversation without recourse. I'm deeply worried about this.
      14 votes
      1. R3qn65

        > AI is, if I understand it correctly, quite computationally heavy. And it will run all the time on these OSes, for every menial task. In other words, we're looking at a massive increase in energy usage, graphics cards and dedicated hardware,

        It's not nearly as bad once the model is trained, especially on dedicated silicon. Google's latest phones can run some fairly advanced genAI processes on-device, and of course a computer has much more power than a phone.
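
        For what it's worth, here's a minimal sketch of what "inference on already-trained weights" looks like in practice, using the llama-cpp-python bindings (the library choice and the model file name are just illustrative assumptions):

        ```python
        # Loading the quantized model is the expensive, one-time part.
        # "some-small-model-q4.gguf" is a placeholder file name.
        from llama_cpp import Llama

        llm = Llama(model_path="some-small-model-q4.gguf", n_ctx=2048)

        # Each query after that is a single forward pass on local hardware;
        # no training happens on the device, which is why it stays affordable.
        out = llm("Summarize: the meeting moved to 3pm on Friday.", max_tokens=64)
        print(out["choices"][0]["text"])
        ```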

        4 votes
      2. [4]
        JCAPER

        For me at least, it will be very useful. The two barriers I have at the moment are convenience (I don't want to swap windows/apps every time) and privacy (some topics I don't feel comfortable sharing with any cloud AI). But I can see this being a "your mileage may vary" kind of thing.

        About the second point, it depends on how things go from here. For example, Apple will use a partitioned system of several models, each for a specific task; in other words, a system that doesn't call the most powerful models for every little thing, but calls smaller, task-specific models that can run on device. This isn't a surprising development; several LLMs built for specific tasks are already available online.

        On the other hand, I can also see several OEMs just wrapping the ChatGPT API into the OS and calling it a day. Our hope there is that we keep seeing the performance improvements we've seen in the last few years. There are some powerful general LLMs that, although not as good as GPT-4 or Claude Opus, can be run locally on a decent GPU. One or two years ago people didn't think this was possible.
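
        As a toy illustration of that partitioned approach (not Apple's actual design; every name below is made up), the routing logic might look something like this:

        ```python
        # Map each narrow task to a small on-device model; anything else
        # falls through to a big hosted model. All names are hypothetical.
        LOCAL_MODELS = {
            "summarize": "small-summarizer",
            "rewrite": "small-style-model",
            "schedule": "small-planner",
        }

        def classify_task(request: str) -> str:
            # Stand-in for a real on-device intent classifier.
            for task in LOCAL_MODELS:
                if task in request.lower():
                    return task
            return "general"

        def handle(request: str) -> str:
            task = classify_task(request)
            if task in LOCAL_MODELS:
                return f"[on-device: {LOCAL_MODELS[task]}] {request}"
            # The "wrap a cloud API and call it a day" path.
            return f"[cloud fallback] {request}"

        print(handle("Please summarize this thread for me"))
        print(handle("Explain quantum field theory"))
        ```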

        1 vote
        1. [3]
          Asinine

          Convenience is the one thing I can easily give up in most areas of my life - especially to keep my privacy. It would be a bit of a rabbit hole to explain, but I feel that our conveniences (despite huge achievements in technology, medicine, etc.) have guided humanity into a self-centered and disjointed downward spiral. We'll give up anything if it's convenient for us, even at the cost of others or a greater good. Corporations are a wonderful example, and NIMBYs are another.

          But, as I said, rabbit hole.

          14 votes
          1. [2]
            ingannilo

            I 100% agree with your statement that convenience has driven a major culture shift, and one that I personally dislike. I'd like to add to your characterization of this shift as self-centered that it has also isolated us from one another and pushed most people's locus of control outward.

            Folks today are simultaneously growing more demanding, less capable, and are becoming more inclined to point to an outside entity as the source of their failures or dissatisfaction. This feels like a mental health epidemic in the making.

            I teach mathematics to (mostly) college-age kids in the US. The whole idea of skill building is becoming intractable to my students. If they aren't immediately good at something, they want someone or something else to do it for them. There's an AI tool for solving equations, so why should I learn? Desmos will sketch this curve or surface for me, so why should I bother? Photomath can handle most integrals and derivatives, Wolfram will do the ones Photomath won't, and if you shell out the cash they'll even give you steps to copy down to make the teacher think you're learning. Then the exam comes along and they bomb. They make excuses, stay in for another three weeks until they bomb the next exam, still using their phones to solve all the homework, quiz, and lab problems. Then they make more excuses to the petitions committee and beg to have the class removed from their record. Rinse and repeat until the grant money runs out or their parents refuse to continue paying for Club Ed.

            I'm getting distracted, but what you said really struck a chord with me, and I wanted to share my anecdotal evidence in support of your read on the cultural implications. This is just one small niche of academia, but I think these trends will continue everywhere.

            I'm not afraid of or against the use of technology. I use it all the time in my own research. But we need to evolve our own maturity along with the tech, and from where I'm standing at least, it feels like we've given up on that part.

            8 votes
            1. Asinine

              > I'd like to add to your characterization of this shift as self-centered that it has also isolated us from one another and pushed most people's locus of control outward.

              That's almost exactly what I tried to say. :)

              As you mention, I believe (and I hate to use this quote, but it's so true): with the great power of handheld devices that grant us whatever knowledge we might desire in the blink of an eye (as long as you have reception) comes great responsibility. And we tend to prefer convenience over that responsibility.

              Unrelated, but also playing in the back of my mind: https://tildes.net/~talk/1gzq/where_do_you_find_community

              1 vote
    2. [6]
      vord

      > Ask it to write an email, and it will know our writing style and replicate it

      If you only ever use AI to write emails, you won't have your own style, you'll have the AI's.

      So far, the best-case scenarios for AI use in general computing still boil down to 'autocomplete with extra steps' or 'discovering your job is useless.' Because the AI doesn't serve you on a work machine: it serves your boss. It'll report metrics. It'll silently track every idle moment. And it will be abused.

      It's not that hard to order a list of priorities. Sacrificing massive amounts of privacy, introducing huge security risks, and spending exponentially more power for maybe a few minutes saved a day is dumb.

      8 votes
      1. [5]
        JCAPER

        > If you only ever use AI to write emails, you won't have your own style, you'll have the AI's.

        If you feed it your own texts, it can adopt a similar writing style. There are two ways of doing this: training your own model, or having a competent model replicate the style, as long as it has that context.

        Something that will potentially happen with the OS-level AIs is that they're going to grab those texts and do it for you.

        The thing here is that most people who use ChatGPT or its alternatives don't bother to give it examples of their own writing style; they just ask it for an output and copy-paste.
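
        To make the "feed it your own texts" idea concrete, here's a minimal few-shot sketch using the OpenAI Python client; no fine-tuning involved, just examples in the prompt. The sample emails and model name are illustrative, and any competent model would do:

        ```python
        from openai import OpenAI

        # A couple of the user's real emails would go here.
        my_emails = [
            "Hi Sam -- quick one: can we push Friday's sync to 3pm? Cheers, J.",
            "Morning! Draft attached. It's rough, be brutal. J.",
        ]

        prompt = (
            "Here are examples of how I write:\n\n"
            + "\n---\n".join(my_emails)
            + "\n\nWrite a short email in the same style declining a meeting."
        )

        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any capable model works
            messages=[{"role": "user", "content": prompt}],
        )
        print(resp.choices[0].message.content)
        ```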

        3 votes
        1. [4]
          vord

          That works great for you, who has a backlog of years of writings.

          What of someone who doesn't?

          4 votes
          1. [3]
            JCAPER

            I'm not sure I follow. Even if someone is born now and grows up with a new device powered by even more advanced AI models, I find it hard to believe that person will never, ever write anything.

            2 votes
            1. [2]
              vord

              Sure, they'll write stuff. But I'm doubtful they'll ever write more than a handful of words of organic, unassisted thoughts.

              Sure, elementary kids might avoid it for a bit; even then, maybe not... they were being handed Chromebooks in first grade.

              Dunno about you, but my personal style has taken decades to evolve, and continuously ebbs and flows based on a huge array of factors.

              An AI that was fully trained on 20something me would have locked me into that horrific box of who I was at the time.

              7 votes
              1. JCAPER

                Consider that neither the AI nor you would be stuck in a time or place. As you get older you adopt new ways of thinking, of being, of writing; over the years you would start to dislike that style of writing, and either prompt the AI to do it differently or write it yourself. Your writing style in an alternative timeline with AIs would still evolve anyway, just differently.

                The only horror I see here is how, if this AI is a cloud one, a company will get to know you over the years (not that it's unprecedented; Google has voice clips of me from back in 2010).

                2 votes
  2. Protected

    In response to everyone and no one in particular, here are my thoughts; sorry if they're bleak.

    Microsoft has spent the last several years reinventing itself as a cloud-centric company. It clearly wants to charge subscriptions for all of its services if it can. In that context, and accounting for how their darling OpenAI is loath to share its best models with the public, I don't expect a feature like this to stay local in any significant capacity in the long term. At most, it might use local storage and leverage (expensive) local hardware resources for the feature's own benefit, but like all cloud companies, I think they're chomping at the bit to take full control of our devices and our experience.

    Google's services have long "known" us, since so many people have a Google account, engage with those services, and never turn off history-based recommendations, so I have experience with services that hold a lot of information about me. In that experience, just because a service knows you, it doesn't mean it will necessarily be designed to "help" you. The priorities of the service's designers are likely to be completely different from yours. They have ads to sling, sponsors to push, and a desire to actually manipulate your behavior: to make you pay for this and that, get you on a new service, get you off an old one. Netflix is another good example. I gave it lots of information about what I do and don't want to watch, but it has its own priorities, so the results are mediocre.

    I'm also one of those who believe the next generation will have no writing style. People born now might never write anything in their lives outside of school (which they will hate); in fact, it will be machines talking to each other. It doesn't matter if one person is an eccentric old fogey who wants to write their own letters; the recipient will never read them. They will ask an AI to digest each one and tell them the gist of it.

    Scheduling is a hard problem orthogonal to language processing. As long as it isn't hallucinating, an LLM should be able to convey our inputs to an external library that does that sort of thing and deliver the outputs back to us.
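
    A sketch of that division of labor, assuming a hypothetical ask_llm call: the model only translates free text into structured fields, and ordinary, non-hallucinating code does the actual calendar math:

    ```python
    import json
    from datetime import datetime, timedelta

    def ask_llm(text: str) -> str:
        # Placeholder for whatever model the OS exposes; the canned
        # reply just shows the shape of the hand-off.
        return '{"title": "Dentist", "start": "2025-06-13T15:00", "minutes": 45}'

    event = json.loads(ask_llm("book the dentist friday at 3, should take 45 min"))
    start = datetime.fromisoformat(event["start"])
    end = start + timedelta(minutes=event["minutes"])
    print(f"{event['title']}: {start:%a %H:%M} to {end:%H:%M}")
    ```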

    9 votes
  3. [6]
    Deely

    Honestly, if Microsoft makes this feature local-only, open source, and with good security protection, I will love it.
    My main issue is very simple: I do not trust Microsoft.

    3 votes
    1. [5]
      vord

      Did you not see the part where a small hack, put together quickly, allowed attackers to grab every screenshot taken?

      This is an incredibly dangerous thing to deploy at scale, even if written by altruistic geniuses. Even with best-in-class protections, state actors will be highly motivated to attack it from every possible angle.
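
      For context on how small that hack was: reporting on the early Recall builds described an unencrypted SQLite database readable by any process running as the logged-in user. The database name, location, and schema below are illustrative stand-ins, not the real ones:

      ```python
      import sqlite3

      # Hypothetical file; the point is that no privilege escalation is
      # needed -- malware running as the user can simply read it.
      con = sqlite3.connect("recall.db")
      for ts, text in con.execute("SELECT timestamp, text FROM captures"):
          print(ts, text[:80])  # an infostealer would exfiltrate this instead
      ```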

      10 votes
      1. [4]
        Deely

        First, I mentioned "good security protection".

        Second, did you actually read my comment? I'm speaking about a hypothetical open source solution that isn't implemented yet.

        Third: please change your tone; on this site it's preferred to assume good faith in user interactions.

        Fourth: what's your opinion on password managers? Don't they also have the property that "state actors will be highly motivated to attack it from every possible angle"?

        8 votes
        1. [3]
          ACEmat

          I'm very confident that Microsoft thought their current setup already had good security protection.

          3 votes
          1. [2]
            JCAPER

            I wonder if the devs themselves thought that? Considering how Microsoft has been rushing everything AI-related to market, I wouldn't be surprised if the same was said about Recall and the features shipped alongside it.

            2 votes
            1. 0xSim

              > I wonder if the devs themselves thought that?

              The devs? Yes, definitely. At least some of them.

              They're not the ones in charge, though.

              2 votes
  4. pyeri

    But but it stays locally...!

    1 vote