It’s been just over a year since OpenAI made a huge splash in the generative text space. Though the promise of true human-level AI is still just that, a promise, the current ecosystem of genAI and ML points to a few exciting trends in information and infrastructure technology overall. But to talk about these trends, as usual, we’re going to have to go back to the beginning. And in the Beginning... Was the Command Line.
For folks who may not remember it offhand, this was an essay penned by author Neal Stephenson of Cryptonomicon fame. In it, Stephenson dealt primarily with the user-interface realities of his time: the late 1990s and early 2000s. It’s a quarter century on, and I am no Neal Stephenson, if only by word count. But in that quarter century, the computing world has shifted once again, and I’m seeing a trend that’s taking us back to some very interesting (and fundamental) places.
The Command Line
In the before times, the interface to any given computer was of a command-line nature. The difference between typing into a shell prompt and meticulously creating a punch card is a matter of translation, but not of the interface itself. The human-machine interface remains text (or numbers) in the form of some language.
The graphical user interface changed the user-computer relationship. What’s also curious is that the shift in user interface brought with it the idea of a personal computer. Computers became individual tools for creativity.
That’s not to say that there weren’t time-sharing systems, or that personal computers didn’t have a command line. In fact, Stephenson’s essay specifically addresses the usability of the command line in the context of personal computing.
There are many things a GUI is good at, but it’s not fantastic at everything. So the humble command line remains in the background of pretty much every widely used operating system today.
A brief detour, back a decade and a half or so to Apple demoing Siri. Virtual assistants, really, are nothing more than command lines with a speech-to-text decoding step in front. It’s all just text. For the purposes of this article, we will treat them as such.
History and AI
History is cyclical; computer history, doubly so. Individual mainframes gave way to time-sharing systems, which gave way to big iron and personal computers, which gave way to cloud computing and web apps. With perhaps 98% of everything we do on a personal computer now happening in a web-attached application, and maybe 80% of that via a web browser, the modern computing landscape has transformed right back into the time-sharing systems of yore. And our own interaction with these systems can be likened to punch cards. Sure, these are virtual punch cards, created via some combination of graphical and command-line user interfaces. But the connections between these systems remain manual, and require a bit of thinking on our part. In essence, we are exactly where we started.
So where does AI fit into all of this? On the surface level, it’s been a fantastic tool for certain use cases (see https://betterthanservices.com/blog/when-to-use-aws-bedrock). But on a much grander scale, where does the ability to procedurally generate responses, and perhaps fold new data into a continuously self-modifying algorithm, fit in?
When you’re working with command line scripting as an engineer, it can often become an exercise of feeling like the proverbial monkey at a typewriter. The programming (fine, scripting) language that the command line runs on is yet another linguistic oddity that we have to keep in our heads as engineers.
However, is it really that much different from figuring out the exact set of keywords that prompts the AI model to spit out something usable for your use case? I would argue: not at all. In fact, all that prompt engineering has done is create a subset of modern natural languages that in reality acts much more like a computer scripting language than like a tool for human-to-human communication.
Much as shell languages offer just enough bare-bones glue to tie disparate command-line tools together, so does this prompt-engineered dialect of natural language.
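To make that shell-glue analogy concrete, here is a minimal sketch: four small, single-purpose tools composed into a word-frequency counter. The input sentence is just an illustrative stand-in; the point is that the pipe, like a well-worded prompt, is the thin linguistic layer that makes unrelated tools cooperate.

```shell
# Classic shell glue: tr splits text into one word per line, sort groups
# identical words, uniq -c counts each group, sort -rn ranks by frequency,
# and head trims the list. None of these tools knows about the others.
echo "the cat and the dog and the bird" |
  tr -cs 'A-Za-z' '\n' |
  sort | uniq -c | sort -rn | head -n 3
```

The first two lines of output rank `the` (three occurrences) and `and` (two) ahead of the rest.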
The Metacomputing Future
The big promise of AI as it stands today is the ability to enhance the capabilities of any single worker (see https://betterthanservices.com/blog/disruption-of-a-market). However, there has already been some movement toward tying AI systems together; in essence, AI is becoming a sort of global API that any tool or product can hook into to deliver some sort of output. The result is a number of disparate systems, each very good at what it does, operating within a generalized system that was never expressly created for the outputs and inputs we are feeding it. In essence, most infrastructure environments are well on their way to becoming a self-contained operating system. The interface to this operating system? Let me paint a picture:
And in the end, there was the command line
As the interconnectedness of various systems grows via the AI interconnector, the metacomputing operating system will come into clearer and clearer view. We can argue about whether that’s a good thing; I, for one, will forever be annoyed by the layers of abstraction.
HashiCorp recently announced a new meta-provider for Terraform that can take any given API and generate a TF provider from it, which will be huge for building new infrastructure faster. To me, this is one of the first steps in the creation of this operating system. And from the pure cloud-consultant perspective, there are already systems out there orchestrating infrastructure via generative AI.
There will be other AI systems that sit between applications, providing interconnection and interoperability where no such interaction existed before. But from a user perspective, the primary mode of interaction with this system will be a command-line prompt. Blinking cursor and orange phosphor are optional.