This seems to me an important point to emphasize. It's worth noting that reverse-engineering not only books, but people, was a thing for enthusiasts of the idea of AI long before there was any AI worth talking about.
I do think, though, that the way we are using the term 'artificial intelligence' needs some care. The idea of AIs that definitely *are* minds just like ours was always there, and it has not been realized (although there is the old Turing Test question of how you would know if it had). That it *can never* be realized is a faith-based claim, very different from the claim that a large language model like ChatGPT is not, and cannot be, a mind.
You can argue that an LLM could be a component of a mind, certainly; or that it is an important step towards the emergence of 'general' AI - but then you could argue the same thing about all kinds of previous steps that went into the development of LLMs (as well as about other contemporary examples of AI that are not LLMs, like AlphaFold or Stockfish).
So you could argue that statistical models were always a form of AI. Alternatively, you could see current AI as a form of 'automated statistics,' part of a longer trend in research of statistical models becoming more automated and more opaque. Saying 'automated statistics' rather than 'artificial intelligence' would not be any more accurate, I think, but it would reveal the deep roots of the epistemological change you describe.
I think the roots are in the concept of probability, and in the reluctance to engage with that concept on a deep level, which leads to a tendency to use it to arrive back at a simulacrum of a binary true and false.
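To make that move concrete, here's a minimal sketch in Python. Everything in it is made up for illustration (the data, the model, the 0.5 cutoff): the point is only that the statistical machinery natively produces a probability, and the final step throws that uncertainty away to recover a hard true/false.

```python
# Illustrative only: a generic statistical classifier on made-up data,
# showing how a probability gets collapsed into a binary verdict.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                          # invented features
y = (X[:, 0] + rng.normal(size=100) > 0).astype(int)   # invented labels

model = LogisticRegression().fit(X, y)

p = model.predict_proba(X[:1])[0, 1]  # the model's actual output: a probability
verdict = p > 0.5                     # the simulacrum: a binary true/false

print(f"probability = {p:.2f}, reported as {verdict}")
```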
Perhaps it comes down to the fact that in alighting on ‘artificial intelligence’ we’ve chosen a term that inherently misleads.
Well I think all terms inherently mislead. I'm not sure that one does so particularly much. What we are calling AI genuinely does solve problems that we use our own intelligence to solve and which, until recently, could only be solved with our own intelligence. The thing is, that is also true of a lot of other things - like statistical models - which we didn't refer to as 'artificial intelligence.'
I don't see that there's an obvious, unarguable point at which you start using the term, and if we refuse to ever use it - even for a hypothetical future thing - then I think we are hiding behind some rather shaky walls.
As with animal intelligence, there's always the question over whether definitions of intelligence are there to help us understand or to help us feel special.
There was a vogue in the 50s and 60s, iirc, for calling computers ‘electronic brains’ or similar, and it’s probably a good thing that didn’t stick.
Coincidentally, I just listened to an episode of the Philosophize This podcast https://philosophizethis.substack.com/ talking about a similar argument in Byung-Chul Han's book The Crisis of Narration, responding to a 2008 article in Wired called The End of Theory - which is here: https://www.wired.com/2008/06/pb-theory/ (but paywalled).
So... that's interesting. There was also something on the podcast about 'intelligence' and 'Geist'.
Anyway I wonder what I would call my computer if I had to call it an 'electronic X'.
Thank you. As a classicist, principally of Akkadian, I find this a really interesting step forward in how we think about AI in manuscript ‘restoration.’
A Grammar of Old Pictish is presently in the works. A couple of years ago I also proposed to the Scottish Place-Name Society algorithms that could restore lost placenames, even down to the level of microtoponyms.
Well, Charles Bertram managed it...