Discussion about this post

Nicholas.Wilkinson:

This seems to me an important point to emphasize. It's worth noting that reverse-engineering not only books, but people, was a thing for enthusiasts of the idea of AI long before there was any AI worth talking about.

I do think, though, that the way we are using the term 'artificial intelligence' needs some care. The idea of AIs that definitely *are* minds just like ours was always there, and it has not been realized (although there is the old Turing Test question of how you would know if it had). That it *can never* be realized is a faith-based claim, very different from the claim that a large language model like ChatGPT is not, and cannot be, a mind.

You can certainly argue that an LLM could be a component of a mind, or that it is an important step towards the emergence of 'general' AI. But then you could argue the same thing about all kinds of previous steps that went into the development of LLMs (as well as about other contemporary examples of AI that are not LLMs, like AlphaFold or Stockfish).

It follows that you could argue statistical models were always a form of AI. Alternatively, you could see current AI as a form of 'automated statistics,' part of a longer trend in research toward statistical models becoming more automated and more opaque. Saying 'automated statistics' rather than 'artificial intelligence' would not, I think, be any more accurate, but it would reveal the deep roots of the epistemological change you describe.

I think those roots lie in the concept of probability, and in a reluctance to engage with that concept at a deep level, a reluctance that leads to a tendency to use probability to arrive back at a simulacrum of binary true and false.

Elizabeth Briggs:

Thank you. As a classicist, principally of Akkadian, I find this a really interesting step forward in how we think about AI in manuscript 'restoration.'

