Tags
20th century, 21st century, Apple, European Union, Franklin Delano Roosevelt, Google, Lina Khan, Nick Bostrom, technology, Ted Gioia
The technological innovations of the last fifteen years, from advertising enshittification to AI cheating, have largely been a disaster. We are sadly at the point where, as Ted Gioia says, “most so-called innovations are now anti-progress by any honest definition.” I dare say that if we could revert all digital technology to where it was in 2009 – before the invention of the retweet – we’d all be better off.
I am not a hard techno-pessimist; I don’t think I could be. I love technology too much. I remember eras when technology was making our lives better; that was most of my life – the ’80s, the ’90s, and especially the ’00s. There’s no iron law that says technology has to make things worse, that things have to enshittify. It’s just that they currently are, and have been for over a decade. The question is how we change things back – not reverting to old technology, but reverting to a state where new technology serves rather than opposes human interests, where it is progress and not regress.
How do we get there? Let’s start by recalling that it wasn’t always like this! Most of us remember the 2000s, the time when we allowed Apple and Google and Amazon and Facebook to get huge because they were delivering great new products that made our lives better. (Thus my own glowing obituary for Steve Jobs back in the day.) Back then, Google still had a motto of “don’t be evil” – and when you have a motto like that and you then remove it, you are sending an obvious message of what you have in fact become. We trusted the corporations to use their technology to make the world better – and they fucked it up.
So what can be done to return there, to a world where digital tech is making our lives better? It’s a tough question when the problems are so many and multifaceted, and I certainly don’t claim to have all the answers. But there are a few clear moves that would go a long way toward fixing a lot. Most of them have to do with taking power away from monopolistic corporations and putting it in the hands of governments.
Consider one of the earlier examples of a ridiculous and banal enshittification. This was when Apple – a company founded on a reputation for making things easy and simple – got rid of the standard headphone jack on the iPhone and replaced it with a proprietary connection that wasn’t even the same one it used on its computers, requiring every Mac user to purchase a mess of multiple cables. (I cannot imagine Jobs ever approving of this.) Apple steadfastly refused to take up the obvious simplifying solution of having everything run on the same standard USB-C port – until it was forced to by, of all things, EU bureaucrats! Apple’s excuse for the years it didn’t do this was that USB-C would supposedly interfere with water resistance – and yet it quickly found a way to make USB-C iPhones live up to the exact same standard of waterproofing as the ones with the proprietary cable, once the government told it it had to. All that Reaganite garbage we’ve been fed for decades about government regulations stifling technological improvement – here, at least, the opposite happened. The technological improvement didn’t happen until the government put in its regulation.
Making more useful products is the sort of thing that corporations are supposed to sort out for themselves under capitalism. When government needs to step in just to make that happen, it becomes clear that the corporations can’t be trusted to manage their own affairs: national and transnational governments, without which corporations could not operate, need to step in and regulate.
For the biggest and most obvious cause of enshittification is monopoly. Tech companies made our lives better in the ’00s when they actually faced competition. Google wouldn’t be making ads appear like real search results if it had any fear that people would switch in large numbers to DuckDuckGo. Better products through competition is supposed to be the whole point of having a capitalist economy; monopoly capitalism gives you all the bad parts of capitalism without the good. Yet governments have so far allowed corporations to kill their competition, and that needs to stop.
Here too the EU has led the way. When I was getting my computer-science degree I was shocked to learn that, if you develop an app yourself and want to install it on your own phone, Apple will charge you a fee to do this, through its requirement that apps only be installed through the App Store; the EU has now told Apple they’re not allowed to do this. (In response, Apple appears ready to allow competing app marketplaces, but only in the EU – which makes the EU a considerably better place than the US to practise the very American act of entrepreneurship!)
Apple claims that the antitrust suit against it “would also set a dangerous precedent, empowering government to take a heavy hand in designing people’s technology”. That warning would seem more worrying if Apple hadn’t delayed improvements everyone wanted until the government’s heavy hand forced it to make them. When a monopoly no longer cares about what its customers want, the government must do so instead.
After lessons learned the hard way, Franklin Delano Roosevelt declared “business and financial monopoly… had begun to consider the Government of the United States as a mere appendage to their own affairs…. They are unanimous in their hate for me—and I welcome their hatred.” Tech monopolies, including once-beloved ones like Apple, have now earned our hatred; a good government today is one that will earn theirs.
As we look to the future, there are dangers worse than enshittification. Artificial intelligence does hold great promise – it has already unearthed details of Plato’s last days and burial place – but also potential dangers, most memorably exemplified in Nick Bostrom’s paperclip maximizer scenario. AI does what it’s instructed to do, and if we’re not careful, that could lead to scenarios up to and including human extinction. For this reason, researchers increasingly stress how important it is to make sure AI is in “alignment” with broader human values. The tricky part of this is that we already have powerful entities that are not in alignment in that sense: they’re called corporations! Just like the AI that acts only to maximize the production of paperclips irrespective of human values, corporations are explicitly designed only to maximize their own profitability irrespective of human values. They cannot and should not be trusted to act in humane or beneficial ways; they need to be constrained. There is only one organization capable of constraining them, and that is government. It can constrain them, and it must.
While the US hasn’t yet been as good about constraining corporations as the EU, there are promising signs: the Federal Trade Commission under Lina Khan has taken a much more active anti-monopoly role, most prominently by blocking the proposed merger of the US’s best airline with its worst. It is generally difficult for the US government to do much of anything these days, with the country divided into two warring camps that hate each other. Yet something unprecedented in this age is that many Republicans have significant hostility toward Big Tech – not without justification – for censoring opinions on their side of the political spectrum. Some of them are even fans of Khan. There is an opportunity for the Elizabeth Warrens and the Josh Hawleys – corporate-bashers on the left and the right – to work together on a bill that would restrain corporations in multiple ways.
Humanity – whose effective leadership is in the governments of the USA, the European Union, and a few other influential states like Japan – faces a choice. We can let the paperclip – er, profit – maximizers continue to innovate us into the dystopian direction they’ve been taking us for fifteen years, or democratically elected governments can force them into an alignment that serves human values. We need active government intervention to make sure that technology serves humanity and not the other way round. We used to have such a régime, back in the very techno-optimistic pre-Reaganite era of the 1950s, in which human beings (as a government program) came to walk on the moon. If we could go back to an era like that, where elected governments and not corporations are in control – maybe then I could go back to being the techno-optimist I’d always expected to be.
Nathan said:
I would add to what is said above that I think there’s an important layer underneath (or in addition to) government intervention: the public/civic ethos in which people need to be educated. As Samuel Bowles said in The Moral Economy (though that book is not especially relevant to this topic): “Political philosophers from Aristotle to Thomas Aquinas, Jean-Jacques Rousseau, and Edmund Burke recognized the cultivation of civic virtue not only as an indicator of good government but also as its essential foundation.” I would be surprised if Amod, who emphasizes virtue ethics so often, disagrees with this.
In my comment on the last post, I pointed to public libraries as a sector where a public/civic ethos is solidly established among both professionals (e.g., the American Library Association’s “Code of Ethics” and “Core Values of Librarianship”) and the general public. That ethos is maintained by library schools, librarians, national library associations, local “Friends of the Library” groups, and others—a whole network of multi-generational institutions across the economic sectors. We need something like this values-based network of people and institutions for digital technology, not just government intervention.
The closest equivalent to advocacy of library values in the realm of digital technology may be in the free and open-source software (FOSS) movements. Most of the digital technology issues that Amod is concerned about here are software issues and, more generally, interoperability issues. A 2018 edited collection titled Applying Library Values to Emerging Technology, which is mostly about digital technology in libraries, discusses the intersection of libraries and open-source software in several chapters. FOSS is, of course, much newer than libraries and so doesn’t seem to have the institutional maturity of the latter.
The recent attention to maintenance in philosophy of technology seems relevant too. There’s an edited collection published this year that I haven’t read yet titled Maintenance and Philosophy of Technology: Keeping Things Going. What if we identified “progress” in digital information technology as much with maintenance as with innovation? Librarians seem to already know the importance of maintenance: they know that most of their mission is maintaining collections and services, not constantly producing innovations. What if we thought of progress in digital technology as making systems maintainable over longer and longer time periods instead of shorter and shorter cycles of planned obsolescence? Government intervention can help us get there, but such a change of priorities entails a change of mindset that requires public education.
Amod Lele said:
All fair points, yeah. I focused on government intervention because I think it’s the most pressing thing, but the problems are big enough that their solutions are not one-size-fits-all. I’d certainly be for more education in civic virtue in general, and within professions as well.
I love the idea of focusing on maintenance. We ignore that at our peril. I’ve seen peer-reviewed online journals whose earlier articles vanished, with nobody doing anything about it.
Paul D. Van Pelt said:
Techno-optimism. I was not familiar with that term until today. I suppose my absence from such interests has led to a sheltered life. That admitted, it seems to me our reliance on technology is overblown. Claims around artificial intelligence are beyond current knowledge, and I think many people are sick of it…unless they are personally involved in research and development. I have written about the proliferation of the smart phone and believe (strongly) that device has contributed to alienation and isolation in modern society. There will be more detailed comments here, I imagine. I like to keep mine brief.
Amod Lele said:
Twenty or even ten years ago I wouldn’t have agreed with you. Circa 2010 I would have looked around at the difference between the technologies current then and those current in 1980, and laughed at anyone who thought technology was making our lives worse: there was so much it had made possible. (At the moment of writing this I am using technology that didn’t exist in 1990, and at the moment of reading this you are too.)
But what a difference a decade makes. You are correct about alienation and isolation; the data bear this out. Young people today aren’t going out to socialize with their friends, and that’s harmful. It is where we currently find ourselves, but it is not a necessary consequence of digital technology. When I was their age, I spent every afternoon after school playing video games – with my friend coming home from school and sitting beside me and taking his turn. And while caution should be applied to any statement that begins with “when I was your age”, I think what’s unique about the current young generation is that they seem to agree their habits are bad; they just don’t really know what to do about it. They often express support for removing phones from classrooms.
Lloyd said:
How Can An Unaware Human Technocrat Create An Aware Intelligence?!
For me, the advent of progressive technology beyond 2009 is an exercise in the futile dilemma of attempting to create an image of oneself, as a human, fully equipped to create, think, do critical analysis, manipulate, control and dictate as a sovereign entity.
The part these self-unaware tech geeks are missing is that they themselves are the ‘creator supreme’, attempting to assign the same basic human elements into code!
As knowledgeable legal counsel has noted regarding Artificial Intelligence, if 1% of the damage and destruction caused by said AI machine done IN ERROR can be blamed on the code-writing team, then said harm will be BLAMED on the company/peers in charge.
And as such, in my view, all AI code will have to be reined in to operate in an ASSISTIVE role rather than a *SOVEREIGN role.
*A sovereign role in code is described as “I the AI, I, am God, therefore, whatever I decide to do reigns SUPREME.”
It is well known in AI coding circles that the embedded code is merely utilizing information that ALREADY exists.
Thusly, the code writer remains the creator of whatever content the AI has access to. AI will never be an original content creator, even if it seems to do so with seemingly novel arrangements thereof.
If said code writer does not become aware of this dilemma, then we are truly screwed.