A Reflection
By: Michael Boehmcke
Where to Start?
When I first decided to take this Philosophy and AI course, I chose it over the other digital intensive offerings because I am a former computer scientist and a current science-fiction writer. For years I have been fascinated by the role machine intelligence (or any artificial intellect) will play in the future of humanity. As a science-fiction staple, AI is portrayed either as the utter, exponential end of humankind or as the savior that enables a true post-scarcity utopia. There are also more nuanced takes on artificial intelligence, such as the Star Trek: The Next Generation character Data, who raises the more specific question of artificial consciousness and personhood (if you haven’t seen it, S2 E9, “The Measure of a Man,” is considered one of the classics of television for a reason). My personal fascination with AI, however, has always been its tangential association with the field of transhumanism.
Transhumanism is to some an ideology, and to others merely a passing hobby. I suppose I fall somewhere in the middle: I don’t think the melding of man and machine will instantly solve all of humanity’s problems, but I do think integrated cybernetics of some kind are an integral part of any future that still has humanity of any kind within it. At the far end of all transhumanist speculation, however, lies the inevitable question of a kind of transubstantiation: mind upload. Whatever mechanism is used to achieve it, whether direct neural mapping and simulation or some kind of nanite cerebral weave, the end goal of nearly all forms of transhumanism is the total replacement of the biological body with a perfect, immortal machine. As such, the transhumanist must confront the ship of Theseus head-on and grapple with the hard questions of artificial life. Is our sapience an emergent property of any sufficiently complex substrate, or something specific to humans? Can a machine ever truly be the same as a person? Where do emotions and complex yearnings like curiosity arise from?
As you might have gathered, my predilection for a synthetic ascension had predisposed me not only to want to learn about AI, but to arrive already knowing quite a bit about it, and about the philosophical debate surrounding it, before starting this course. It is no surprise, then, that given the current state of artificial “intelligence,” the course grounded these questions in an examination of the large language models that now dominate nearly all discourse involving AI. Fortunately, my background in computer science, with its focus on neural networks during my course of study, had also left me well informed about the state of the technology. Coupled with a few strongly held political and ethical convictions, my stance on LLMs was long set: they are a dead end, a useless distraction from more important research into AI that might actually crack the problem of substrate-independent consciousness.
Where Are We Now?
One of the main things I learned in this course was, frankly, that I was right. I went in with an open mind, intrigued by the possibility that I might be mistaken, or had overlooked some major breakthrough in the current state of AI research. Unfortunately, there was nothing to be had on that front. As the results of my final project showed, LLMs remain nothing more than highly polished predictive text generators that regurgitate the plagiarized work of the authors whose writing was originally scraped into their training data, without meaningfully transforming their output in any way. LLMs parse information through tokenization, encoding it in a form fundamentally separate from the input, and they have no stream of conscious memory, only a wholesale re-ingestion of all prior parts of a conversation on every turn. Taken together, these facts make it clear to me that a breakthrough leading to an artificial general intelligence is highly unlikely any time soon. Certainly in my mind, even if LLMs improve to the point of functional general intelligence, they will not answer the questions of consciousness that are important to me.
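The point about memory can be made concrete with a small sketch. This is not any real API; every name here is a hypothetical stand-in. It only illustrates the architecture described above: the model is a pure function of its input tokens, so "memory" is nothing more than the entire transcript being re-tokenized and re-fed to it on every single turn.

```python
# Minimal sketch (all names hypothetical) of a stateless chat loop:
# the model retains nothing between calls; each turn re-processes
# the whole conversation from scratch.

def tokenize(text):
    # Stand-in for a real subword tokenizer (e.g. BPE): the model never
    # sees words, only integer token IDs derived from them.
    vocab = {}
    return [vocab.setdefault(word, len(vocab)) for word in text.split()]

def model_predict(token_ids):
    # Stand-in for the network: a pure function of its input tokens,
    # holding no state whatsoever between calls.
    return f"(reply conditioned on {len(token_ids)} tokens)"

def chat_turn(history, user_message):
    history = history + [("user", user_message)]
    # The full transcript is flattened and re-tokenized every turn.
    transcript = " ".join(text for _, text in history)
    reply = model_predict(tokenize(transcript))
    return history + [("assistant", reply)]

history = []
history = chat_turn(history, "Hello there")
history = chat_turn(history, "Do you remember me?")
# Nothing persisted inside the model between the two calls; only the
# externally maintained `history` list creates the illusion of memory.
```

The design mirrors how real chat deployments work in broad strokes: the "conversation" lives outside the model, and the second turn is answered by a fresh forward pass over everything said so far.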
Something I did learn, and gained a new appreciation for over the course of this class, was how shockingly convincing LLM-based AI can be in short, dedicated chat sessions that are not expected to last very long. I will also acknowledge that stagnant conversations, or other setups that do not meaningfully change the LLM’s conversational status quo, would likely last far longer than my simulation did before running into substantial problems, and that a human monitoring the system could make the LLM more resilient to the kinds of errors my simulation exhibited, through corrective re-prompting or even simple system resets. This reliability of LLMs at semi-consistent scales concerns me, however, because it means more companies and even government systems will attempt to integrate LLMs into their workflows.
There are, of course, considerations to be made for the economic ramifications that automation-driven unemployment would have across the world, though it should be noted that these systems constantly warn users that LLMs are not perfect and can make mistakes. But as more and more institutions give in to the temptation of LLM automation, the chance grows that a critical system will be placed under the control of an un- or under-monitored AI, and with it the chance of a disaster directly caused by an LLM’s error. We have systems in place for people who cause harm to others, but our legal system currently has no mechanism for dealing with the same situations when they are caused by an AI. Even passing responsibility to the AI’s creator is unlikely, as the companies behind these models have shielded themselves behind walls of subscriber agreements, EULAs, and service warnings.
What of Tomorrow?
Going forward, I think I will remain a transhumanist and keep a skeptical eye on innovations in the field of artificial intelligence. A breakthrough really could come at any time that utterly recontextualizes our idea of what computational consciousness is capable of. Even so, I will be moving forward with much greater anxiety about the future of LLM-based AI. As these models continue to improve at fooling people into treating their output as significant or thoughtful, so too will grow the number of people who think that ChatGPT is whispering the secrets of the multiverse, or inciting a new religion, from their phone. It is vital that we recognize the socio-political harm that may come from the choice to embrace artificial intelligence just as eagerly as we examine the benefits that might come from such an arrangement.

