HAL, Open the Job Interview Bay Door Please...

The first mass-market experience most people had of interacting with a modern AI was the dialogue between astronaut David Bowman and the HAL 9000 computer in Stanley Kubrick's film '2001: A Space Odyssey'. From that early beginning it has been crystal clear that the algorithm doesn't really care what you think or how you feel about it.

There is no real algorithm for respect: it relies instead on intention, given and received. Examples are easy to find. For years the most successful SaaS companies in the world have used systems referred to as 'Customer Relationship Management', or CRM, systems. These systems provide mature algorithmic support for ensuring that customers actually want to pay that hefty monthly bill because they see good value in what they receive. There is respect in the notion that payment is earned for value provided, and that the foundation of value is the perception that it is present, something only the customer can judge.

It is sad, then, to realize that those same companies, founded on respect for customers and their judgement, feed the resumes of prospective employees into an 'Applicant Tracking System', or ATS, to be algorithmically filtered and categorized. There is no relationship management here: to apply for a job in the age of machine learning is to be truly only an input, with no opportunity to provide feedback on the resulting algorithmic output, and often no real communication on what that output even is. Just as in the movie, the machine is not going to open the interview bay doors, and it's not going to tell you why.

The difference between a 'relationship management' system and a 'tracking' system is the difference between treating someone as a participant and treating someone as an input. Only one of these roles is afforded respect in the design of the interactions and the systems that support them.

As a society, we set the bar by what we collectively tolerate. Over time, implementations of machine learning have come to treat people more pervasively as inputs and less commonly as participants in the processes those systems support. It is a feedback loop: every time we approach that bar it becomes more acceptable, more normal, more standard, until the implementors and their business sponsors have difficulty imagining it could be any other way.

Alongside this progression, the quality of life people experience, at work and at home, has diminished. Inputs are a commodity, and commodities are typically bought and sold in bulk, their value never judged on the characteristics of individual members of the population. As people are devalued this way, individual variation dies, and with it the proven benefits of diversity and inclusion.

People are aware of this progression. At SXSW, the audience booed the notion that 'AI makes us more human'. Today's implementations of generative AI do not respect people as the creators whose work is their fuel. In the current generative AI paradigm, creative works are commodity inputs, the resulting derivative outputs are somehow assigned significant value, and there is a big assumption that you should think this is a good thing. The SXSW audience gave all of this a big thumbs down: they wanted to be treated as participants, not solely as providers of commodities in a process they see as dehumanizing. That they were given an opportunity to express such feedback was the most surprising element of the entire episode.

To be clear, I'm not against machine learning as a technology. I personally implement it in ways that are useful and helpful in solving challenges faced by those around me. It is a good tool when properly used. But as with many tools, how you use them directly determines the ensuing trauma, and the trend right now is to place increasingly heavy equipment in the hands of people who lack the attention span to read the safety manuals, and whose objectives include no measure of the harm caused to human dignity. Thinking deeply about the long-term results of such an approach is not in vogue, which makes it more necessary than ever, especially as the results of the current experiments become clearer.

I'm a full-lifecycle innovation leader with experience in SaaS, ML, Cloud, and more, in both B2B and B2B2C contexts. As you implement your value proposition, I can take a data-driven approach to helping you get it right the first time. If that seems helpful to you, please reach out. I'm #OpenToWork.