AI: Where in the loop should humans go?

  1. feanne
    I just came across this blog post dated 2025-03-07, but it still feels very relevant and useful. It's essentially a set of questions we can ask about using AI as a tool. I've listed some of the questions below along with some excerpts.


    Are you better even after the tool is taken away?

    While people can feel like they’re getting better and more productive with tool assistance, it doesn’t necessarily follow that they are learning or improving. Over time, there’s a serious risk that your overall system’s performance will be limited to what the automation can do—because without proper design, the people keeping the automation in check will gradually lose the skills they had previously developed.

    Are you augmenting the person or the computer?

    Neither is fundamentally better or worse than the other—but you should figure out which kind of automation you’re getting, because they fail differently. Augmenting the user means they can tackle a broader variety of challenges effectively. Augmenting the computer tends to mean that when the automation reaches its limits, the remaining challenges are harder for the operator.

    Is it turning you into a monitor rather than helping build an understanding?

    If your job is to watch the tool work and then judge whether it did a good or bad job (and perhaps take over when it does a bad one), you’re going to have problems. It has long been known that people adapt to their tools, and automation can create complacency.

    Is it a built-in distraction?

    What perspectives does it bake in?

    After an outage or incident, who does the learning and who does the fixing?
