The other famous note is Note G. Lovelace begins Note G by arguing that, despite its impressive powers, the Analytical Engine cannot really be said to “think.”
I think Edsger Dijkstra's take is perhaps the more interesting, and certainly the more illuminating:
The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.
Ohh, good one.
Likewise, with understanding. Can my ML model understand what it's saying? Yes. Does it? Well, if it quacks like a duck...
If you need an argument from me, ask yourself what standards you apply to other humans. And then convince yourself that the same standard would (not) apply to a machine.
I don't think that's quite the right interpretation.
Dijkstra's point is not that thought is a primitive, axiomatic sort of activity which it is possible for a machine to engage in. It's that thought is a messy concept, and that our conception of it is inherently bound up with our own humanity. Can a machine think? It's a purely semantic question. It depends on what we interpret the word ‘think’ to mean, and differing interpretations are equally valid (in differing contexts).
Absolutely. I completely agree. The terms are messy and the question ultimately either meaningless, trivial, or unanswerable (depending on your definition of the terms). The same goes, though, not only for thinking but also for understanding. What does it mean to understand? If you ask me, it's a black-box test of whether your reactions to external stimuli are consistent with my mental model of some subject matter. If you behave (or speak) in a way that makes me believe you lack the same mental model as me, I will conclude that you do not understand. I can't see inside your head. Just because I can with a machine, and it's one hell of a mess in there, doesn't mean its understanding is inherently less-than. So, if I am content with quantifying understanding in others in a black-box manner, I really should be content with that in an ML model too.
(Though I will add that machines are harder to test than humans in a way: thanks to shared experience, we can and do expect a baseline of common sense from people that need not be there for machines. But that's a problem solvable with more testing.)
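To make that concrete, here's a minimal sketch of what such a black-box standard might look like. All the names and test stimuli are mine, purely illustrative; the only thing the judge ever sees is the subject's responses:

    # Toy sketch of the black-box standard above: we can only observe
    # responses to stimuli, never internals, and we score consistency
    # with the judge's own expectations.

    def judge_expectation(stimulus: str) -> str:
        """The judge's own mental model: what they would answer."""
        return {
            "2 + 2": "4",
            "capital of France": "Paris",
            "is water wet?": "yes",
        }[stimulus]

    def understanding_score(subject, stimuli) -> float:
        """Fraction of stimuli where the subject's observable behaviour
        matches the judge's expectation. The subject may be a human or
        a machine; the test cannot tell, which is the point."""
        return sum(subject(s) == judge_expectation(s) for s in stimuli) / len(stimuli)

    # A machine subject that happens to behave as the judge expects:
    machine = {"2 + 2": "4", "capital of France": "Paris", "is water wet?": "yes"}.get

    print(understanding_score(machine, ["2 + 2", "capital of France", "is water wet?"]))
    # 1.0 -- by this purely behavioural standard, the machine "understands"

The common-sense caveat above then just means the stimulus list has to be much longer for a machine than for a fellow human.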
It depends on what we interpret the word ‘think’ to mean
My introduction to cognitive sciences started with the prof asking: "What would be more complicated: artificial thought or artificial life?"
More than two decades later, the question is still valid, I guess.
This was a great read, thanks for posting. I loved the bit about how Note G actually included a legitimate claim for the title of Oldest Computer Bug. Interestingly, that predates both Grace Hopper's literal bug (a moth) in 1947 and Edison's coining of the (computer-unrelated) term "bug" in 1873.
I've been an admirer of mechanical computers and other clockwork devices for many years, from the Classical-age Antikythera mechanism up through chess-playing automata, Babbage's inventions, and of course the Wintergatan Marble Machine X. There's something almost magical about being able to observe the logic, up close and at any speed you like, as it physically flows through the device. As a modern software engineer, I'm used to having all the low-level computation abstracted away. But I'd encourage anyone in the field to explore the fundamental principles that make it all possible. A gravity-powered marble board like the Digi-Comp II (free replica details here) or Turing Tumble is a great place to start.
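If you want a feel for the principle before picking one up, here's a toy model of the marble-counter mechanism those boards are built on (my own sketch, not an official simulator of either product):

    # Each falling marble toggles a flip-flop. If the flip-flop goes
    # 0 -> 1, the marble exits; if it falls back 1 -> 0, the marble is
    # routed down to the next flip-flop. That cascade is exactly binary
    # ripple-carry counting.

    def drop_marble(bits):
        """One marble falling through a column of flip-flops.
        bits[0] is the topmost flip-flop (least-significant bit)."""
        for i in range(len(bits)):
            bits[i] ^= 1       # the marble toggles this flip-flop
            if bits[i] == 1:   # 0 -> 1: marble exits to the side
                return
            # 1 -> 0: marble carries on down to the next flip-flop

    counter = [0, 0, 0, 0]
    for _ in range(6):
        drop_marble(counter)
    print(counter)  # [0, 1, 1, 0] (LSB first) -> six marbles counted

Watching a physical board do this, one marble at a time, is exactly the "observe the logic at any speed you like" experience I mean.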