31 votes

Apple Intelligence

23 comments

  1. [10]
    adorac
    Link
    • Exemplary

    Apple's WWDC keynote just ended today and they've announced a lot of really cool stuff. Most notably, however, they announced their new suite of AI features they're calling Apple Intelligence ("AI for the rest of us"). They go into a lot of detail, but here's a quick summary:

    • Most of this runs entirely on-device, and it will ask for your permission before it sends anything to a server!
    • Writing tools available across all their OSes that can proofread, write emails and notes, rewrite in different tones, etc.
    • Something really cool they showed off: if you have it respond to an email, it'll actually ask you questions to gather information it needs to respond.
    • "Priority" notifications and emails, along with summaries of each.
    • Apple Intelligence is able to access pretty much everything on your iPhone/iPad/Mac as needed. They're calling this "personal context".
    • You can use on-device image generation AI to create pictures of your friends (which look... horrifying) or Genmoji, which are generated emoji based on a prompt.
    • New thing called Image Playground that lets you mess around with generating images, or even turn a sketch into an image.
    • Siri is much better. I can't verify this, as the revamped Siri isn't in the iOS 18 beta yet, but the demos looked very promising.
    • Siri maintains conversational context and can perform complex actions across the system to answer questions or do things. So basically the Rabbit R1 and Humane Pin and the like are dead.
    • If need be, Siri can use GPT-4o, too.
    16 votes
    1. [5]
      adorac
      Link Parent

      Something they kind of brushed over in the keynote: this is apparently only going to be supported by the iPhone 15 Pro (not the base 15) and Macs with Apple Silicon. Presumably this is a RAM issue, but it does put a bit of a damper on "AI for the rest of us". They really should've started putting more RAM in their phones earlier.

      18 votes
      1. [2]
        ackables
        (edited)
        Link Parent

        I believe it's because the M-series chips and the A17 Pro chip in the iPhone 15 Pro and Pro Max have neural processing cores and older chips do not.

        Edit: Looking into it more, it's most likely that you need 8GB of RAM to run the on-device LLMs. Only the 15 Pro and Pro Max have gotten 8GB of RAM so far in the iPhone lineup.
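
        For a rough sense of why 8GB would be the cutoff, here's some back-of-envelope math. The parameter count and quantization width below are assumptions for illustration, not Apple's published specs:

        ```swift
        import Foundation

        // Back-of-envelope weight memory for an on-device LLM.
        // Both numbers are assumptions for illustration only.
        let parameters = 3.0e9     // hypothetical ~3B-parameter model
        let bitsPerWeight = 4.0    // assuming 4-bit quantization
        let weightGiB = parameters * bitsPerWeight / 8 / 1_073_741_824
        print(String(format: "weights alone: ~%.1f GiB", weightGiB)) // ~1.4 GiB
        // Add the KV cache, activations, iOS itself, and whatever apps are
        // open, and a 6GB phone has little headroom; 8GB can keep a model
        // this size resident without evicting everything else.
        ```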

        8 votes
        1. adorac
          (edited)
          Link Parent

          Nah, they've been putting neural processors on their chips for a while. My old 12 was able to run LLaMA, but it was slow and took up all of its memory.

          Edit to add: the M1 has an equivalent neural engine to the A14 (which doesn't support Apple Intelligence), but all M1 devices have a base memory of 8GB.

          10 votes
      2. [2]
        Zorind
        Link Parent

        I would guess it’s more of a processor bottleneck than RAM, but that’s just a guess.

        Either way, my 14 Pro and I are a bit sad.

        7 votes
        1. adorac
          Link Parent

          The 15 Pro is the only iPhone with 8GB of RAM, and those models can be pretty big. Still sucks that they weren't able to make use of their research in running AI from flash.
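
          That research is presumably Apple's "LLM in a Flash" paper. The core idea can be sketched by memory-mapping the weight file so pages are pulled in from storage on demand instead of keeping the whole model resident; the path and row layout below are hypothetical:

          ```swift
          import Foundation

          // Sketch: map the weight file so pages load from flash on demand.
          // The path and fixed row size are made up for illustration.
          let url = URL(fileURLWithPath: "/path/to/model.weights")
          let weights = try! Data(contentsOf: url, options: .mappedIfSafe) // no error handling in this sketch

          // Reading a slice faults in only the pages that back it; cold rows
          // stay on flash. The actual research adds row/column bundling and
          // caching on top of this to hide flash latency.
          func loadRow(_ index: Int, rowBytes: Int) -> Data {
              let start = index * rowBytes
              return weights.subdata(in: start ..< (start + rowBytes))
          }
          ```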

          9 votes
    2. [2]
      Zorind
      Link Parent

      I wish there was a way to know which things required going to the “personal private cloud” because as neat as that sounds, I’m not sure how much I’d trust it. I do appreciate the on-device processing for most things though (assuming it is for most things).

      Pretty much everything but the image toolkit and Genmoji seems cool to me. I’m not a huge fan of the writing tools, but the grammar one and the more advanced auto-complete with questions seem alright.

      The Siri demos do seem very promising, though I do wonder if/how much these upgrades will integrate with HomePods.

      12 votes
      1. TurtleCracker
        Link Parent

        I'm curious about this as well. The future of "AI" that I'd prefer is entirely private, local processing. I wouldn't mind having a server installed in my home for my more mobile devices to hand processing off to.

        If the "private cloud" they promise is independently audited, anonymized (as much as it can be), and has a strict no-logging policy, then it is far better than what the competitors seem to be offering.

        3 votes
    3. [2]
      babypuncher
      Link Parent

      Siri maintains conversational context and can perform complex actions across the system to answer questions or do things. So basically the Rabbit R1 and Humane Pin and the like are dead.

      It was only a matter of time before someone said "what if this, but on the device that's already in your pocket and has access to all the relevant user data needed to make it useful", thus killing off an entire class of pointless devices while they were still in the cradle.

      8 votes
      1. papasquat
        Link Parent

        They were never alive to begin with. Smartphones could already do everything that those devices did via an app well before they came out.

        16 votes
  2. JCAPER
    Link

    I'm actually pretty excited; I was hoping for on-device AI and they delivered. I may get the next iPhone just because of this.

    Side note: I know it was obvious to many people, but I did say that devices like the Rabbit and the Humane Pin were one software update away from becoming obsolete. And well, here it is.

    8 votes
  3. [7]
    itdepends
    Link

    Isn't the capability to access anything on your phone, computer, etc. exactly what people were freaking out about when MS announced something similar? Granted, this isn't taking screenshots, but it still seems like you're risking a lot.

    8 votes
    1. [2]
      ButteredToast
      Link Parent

      In practice it’s not that different from the indexing Apple stuff has done for decades to enable Spotlight and various “smart” features like surfacing flight information from emails.

      The reason people were freaking out about Recall is because Microsoft has zero reservations when it comes to shipping off gathered data to their servers to use however they please. Their track record on security and privacy has never been great, but it has gotten worse under Nadella as Windows, Office, etc. have shifted from software that’s sold to software as a service, and the pattern continued when Recall gave not even a nod to either.

      Of course there’s reason to be skeptical of any company doing these things, but Apple appears to have made substantial investments in keeping their AI stuff private and secure. First, a lot more is done on-device. For what can’t be, they went as far as building bespoke server hardware and a matching OS that strips out everything not immediately relevant to processing the data, goes to great lengths to ensure that neither it nor the connection to the user has been tampered with, and guarantees that processed data does not persist in any form and cannot leak or be viewed by employees. They’ll also be releasing cryptographically signed copies of the OS (so it’s possible to verify Apple’s servers are running the same build) for the community to verify these claims.
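
      For a sketch of the simplest form that community verification could take, hash the published image and compare it against the attested measurement. The file name and expected digest below are placeholders, not real Apple artifacts:

      ```swift
      import CryptoKit
      import Foundation

      // Sketch: verify that a published OS image matches the measurement
      // the servers attest to. File name and digest are placeholders.
      let image = try! Data(contentsOf: URL(fileURLWithPath: "private-cloud-os.img"))
      let hex = SHA256.hash(data: image)
          .map { String(format: "%02x", $0) }
          .joined()

      let publishedMeasurement = "digest-published-by-apple" // placeholder
      print(hex == publishedMeasurement ? "build matches" : "MISMATCH")
      ```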

      MS hasn’t spoken of anything like that, which suggests their server setup is bog standard stuff with all of the usual weak points.

      15 votes
      1. sparksbet
        Link Parent

        The reason people were freaking out about Recall is because Microsoft has zero reservations when it comes to shipping off gathered data to their servers to use however they please.

        Maybe it's because I'm getting my news on Recall from more security-oriented places, but by far the bigger worry with Recall was that it saved everything locally in an easily-recoverable, unencrypted format and was on by default, meaning anyone who gains access to your PC can casually browse through a bunch of screenshots containing private info like passwords and banking details.

        9 votes
    2. TurtleCracker
      Link Parent

      The emphasis on local processing, privacy, and no logging when processing can't happen locally is the difference.

      The trust I have in Microsoft or Google to maintain privacy is substantially lower compared to Apple. A significant part of Google's business model relies on a lack of privacy. Microsoft stuffs absurd amounts of telemetry into Windows; it's a pain to disable, and updates can turn it back on.

      6 votes
    3. [3]
      snappyl
      Link Parent

      First I have to say that I don't actually know how these two systems work -- I only know "how they work" based on statements I've seen from the two companies, so this could be wrong. That said, from what I understand, I think there's actually a pretty big difference between Microsoft Recall and Apple Intelligence.

      The difference that I see between the two is that, in the case of Apple there's no more data kept by the system than what you already have there today, but in the case of Microsoft it keeps everything that's on your screen and an analysis of everything that has ever been on your screen unless you disable Recall. So, if I don't explicitly save a file on my Mac or create a note in Notes or add a new synthesizer in MiRack, Apple Intelligence wouldn't have any idea I did anything at all. By contrast, a thing needs to merely appear on my screen for Recall to be aware of it. So I could open a Notepad window and start typing and that's now saved for eternity (or until I run out of Recall space) without needing to even save the note.

      I think both systems sound neat and I'm excited to see where they go, but I am much more apprehensive about having an actual history of everything I've ever done on my PC than I am about simply having a supercharged search mechanism into what's already there. That's not to say that Apple's system is without issue -- I still haven't processed how this might affect victims in abusive relationships, for example -- but in my use case and for the people in my family, I see Apple Intelligence being safer than Recall specifically because

      • there isn't a centralized database that can be uploaded when we get a virus
      • there isn't a database that contains all of our passwords and credit card numbers scraped from every entry field everywhere
      • there isn't a stupid screenshot OCR tool running in the background, wasting power all the time

      And, as an aside on that last point: from what I understand from Apple's Platforms State of the Union presentation, when you activate Siri/Apple Intelligence it'll analyze the screen using the actual UI components. So it knows there are text fields, images, windows, etc. It doesn't take a screenshot and OCR everything; rather, it uses the actual object tree, so there's no lossy conversion nonsense where it tries to understand low-contrast text somewhere -- it just gets the text from the presentation layer directly. When I heard Microsoft is probably just running everything through OCR, it hurt me a little. Apple's implementation isn't some super fantastic feat or anything -- it's just correct. Microsoft's is just bad. I can only hope there was some huge hurdle preventing them from just grabbing the text that was already there, because running screenshots through OCR every few seconds just seems wrong.
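
      To make the object-tree idea concrete, here's a minimal macOS sketch using the public Accessibility API. Apple Intelligence presumably uses private interfaces; this only illustrates what reading text from the UI tree (instead of OCR-ing pixels) looks like, and it needs the Accessibility permission granted:

      ```swift
      import AppKit
      import ApplicationServices

      // Recursively pull text out of the UI object tree rather than
      // OCR-ing a screenshot. Error handling is elided for brevity.
      func collectText(from element: AXUIElement, into out: inout [String]) {
          var value: CFTypeRef?
          if AXUIElementCopyAttributeValue(element, kAXValueAttribute as CFString, &value) == .success,
             let text = value as? String, !text.isEmpty {
              out.append(text)
          }
          var children: CFTypeRef?
          if AXUIElementCopyAttributeValue(element, kAXChildrenAttribute as CFString, &children) == .success,
             let kids = children as? [AXUIElement] {
              for kid in kids { collectText(from: kid, into: &out) }
          }
      }

      // Walk the frontmost app's tree and show what the system can "see".
      if let pid = NSWorkspace.shared.frontmostApplication?.processIdentifier {
          var texts: [String] = []
          collectText(from: AXUIElementCreateApplication(pid), into: &texts)
          print(texts.prefix(10))
      }
      ```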

      So for me personally and for my threat model, I'm fine with Apple Intelligence but I'll probably only keep Recall on systems that I use for like beta testing operating systems or whatnot. It won't be on my daily driver. Of course that all changes if we find out Apple feeds our data into ad targeting systems or Microsoft makes strides in blocking out sensitive data and truly securing their database. So we'll see how they develop over time.

      6 votes
      1. [2]
        ButteredToast
        Link Parent

        It doesn't take a screenshot and OCR everything; rather, it uses the actual object tree [...] it just gets the text from the presentation layer directly.

        My hunch is that Microsoft didn’t want to bother with trying to integrate with the myriad of UI toolkits used to build Windows software. They could probably integrate with their own reasonably well, but win32, MFC, WPF, UWP, and WinUI have dwindled in popularity with third party devs in recent years which means they’d only be integrating with a fraction of popular Windows programs.

        Apple has an advantage here in that a lot of iOS/mac apps (and the majority of the higher quality ones) are built with native UI toolkits.

        That said, I’m sure MS could’ve figured out something smarter than screen OCR, like taking advantage of accessibility features like screen reader compatibility, which doesn’t have perfect coverage but should include a good majority of mainstream programs.

        4 votes
        1. snappyl
          Link Parent

          They could probably integrate with their own reasonably well, but win32, MFC, WPF, UWP, and WinUI

          Fair, and a good call-out. I was assuming that at some point the font rendering was handed off to Windows and they could leverage that, but I also know about 0.0% of how that actually works.

          1 vote
  4. [2]
    password1
    Link

    It looks neat, but the English language requirement is really disappointing here, and a complete blocker for me.

    4 votes
    1. NoPants
      Link Parent

      That will change with time.

      2 votes
  5. [3]
    Jordan117
    Link

    Aw, I was hoping they'd roll out a new Siri with OpenAI's next-gen real-time conversational abilities, or at least announce it for next year. The stuff they did show seems useful though, albeit more than a little limited (only three style options for image gen, having to manually approve all ChatGPT calls instead of integrating directly with Siri, etc.). If they can make it work I'd love for Siri to be able to fulfill complex requests and answer questions about iOS features.

    2 votes
    1. [2]
      Zorind
      Link Parent

      They demoed Siri being able to help with/answer questions about iOS features.

      5 votes
      1. Jordan117
        Link Parent

        Right, but there's often a gap between the demo and the reality, especially if they're trying to limit most processing to the device. I could see it being finicky enough or draining enough battery that it's not very useful in practice. I know the original Siri demo seemed amazing, but in retrospect I only really use it to schedule events and set timers.

        4 votes