Three Crazy Things That Could Happen as AI Takes Off

There are a lot of assumptions about software. What if some of them are not true any more?

We're in an incredibly rich period of innovation when it comes to computer technology and software and my feeds are full of comments and ideas that hint at disruptions that may lie ahead. I'm starting to connect the dots on some of this. What I take away is that there are some assumptions we make about software and code and user relationships which have survived from the early days of packaged software. Here are some of those assumptions which may not survive much longer:

AI will start to code in software languages humans can't read or speak

Since Ada Lovelace first composed algorithms for a machine programmed with Jacquard-style punch cards, humans have been formulating and expressing ideas to machines.

As we have gone from punch cards to plugboards to machine language to compilers to objects to frameworks, the methods of communication used have gotten richer in their ability to express those ideas within an expanding context, allowing for more and more complex ideas to be communicated.

The AI prompt is a sort of abstraction engine for all of that: you express an idea in plain language, and the AI, which is a sort of compiler with a very large context, gives you back working capability expressed in a software language.

The software language that capability is expressed in today will likely be a human-readable/human-writable software language, because those are the languages the AI has been trained on. That humans CAN read and understand code written in those languages is a legacy attribute, not a key advantage.

There are debates about whether development teams should review AI-written code as they do human-written code, or test only the resulting functionality. This debate flourishes only because the AI is still doing something humans think they can also do, in a language humans think they can understand.

Somewhere in a lab a student has an army of agents developing a software language designed to be used exclusively by armies of agents. This language will provide capabilities only agents can really figure out how to access and utilize. It's not necessarily that software written in this language will be better or more efficient when viewed through current software quality metrics, although it might be. More simply, agent swarms using this language will be faster and more accurate at delivering desired capabilities matching the intent expressed by the user.

Once that happens, the debate will be over.

Packaged software goes away as applications are written on the fly

Since the first personal computer was sold, software has been distributed, and later sold, as a packaged good. For a while the distribution involved physical artifacts that looked a lot like cereal boxes. Later, electronic distribution enabled a download model. More recently, cloud-based SaaS applications have moved the packaging online. But you can argue even SaaS is a form of packaged good.

The core of my argument is that the offered functionality is typically essentially the same for everybody. Features have been ideated, developed, packaged, and made available in a pipeline that assumes every feature is going to be made available to every customer, if only they pay for it and configure the system to use it.

This leads to a lot of bloat. Users typically use only a small fraction of the features of any software they purchase. Fortune 100 software companies may have entire floors of people whose job it is to get customers to use additional features of software they have already purchased. Consumers have grown very used to navigating menus full of options built and distributed for personas they do not even know exist.

If you search for 'AI based OS with on the fly software capability' you will find several AI operating system projects which do not believe it has to be this way. Need to edit a document? Create an editor on the fly. Need to edit a photo, or a database, or a financial document? No problem. Here is a UX tuned to what you need to do, with just the features you need. Missing the color-coding feature? Sorry, here it is...

In this world UX is personalized, and the target persona is a population of one. For this to work, agent-generated software creation must get much, much faster. It will.

Many companies have discovered that when their product or service is compressed into an AI prompt, their only moats are the data they hold and the schema they use to enable value creation from it. In a software-on-the-fly world this is especially true: the UX of your software package is no longer material, because there is no software package. There are file formats, and data stores, and schemas for records, and APIs for accessing them. Every data interaction happens through a custom view. Every user experience will be unique, and the capabilities of that experience will evolve in real time to meet the specific needs of the moment.

Forbidden Algorithms

I worked in software related to television and video for a long time. It was a highly litigious arena, especially when it came to what the user could see. High-stakes litigation over the visual 'grid' view, which was central to channel-based systems, chilled innovation for decades.

Patent litigation over software methods of all kinds continues to this day, and it is only a matter of time until a patent holder decides to litigate infringement by an AI that whipped up a custom view for an end user. Maybe it was the view, maybe it was the capability, maybe it was the configuration or artifact or document that resulted. It doesn't really matter. If it has not already happened, and I'm not even sure of that, it most certainly will.

The implications of this could be far reaching. Patent holders might ask for a tax on every AI-provided implementation, past or future. AI providers might steer users and their custom views away from patented or litigated methods, creating forbidden classes of algorithms. In that case the user would likely never even know their experience was being impacted.

To date, big AI has gotten a significant pass on the ingest of copyrighted materials. It is not at all clear, at least to me, what will happen when it starts to provide users access to patented technology on a mass-customized basis. Non-practicing entities are in a different league from authors and musicians. The likelihood of a free pass is much smaller, if not minuscule.

The packaged software world provides mechanisms for managing this kind of legal risk: significant development lead times, regular code reviews, patent bots, supply chain rules, long lead time licensing negotiations. The mass personalization software world we are entering currently provides very little equivalent to all that. How it evolves will matter, a lot.

Put all of these eventualities together and the challenges of managing businesses, people, agents, and relationships in this new world make it clear the period we are entering could see significant disruption to basic assumptions. None of this may come to pass, or only some of it. If even a small part of it does, the challenges to the current way of doing things will be immense.