I’ve loved digital technology as long as I’ve been alive. Growing up in the analog world of the 1980s, I was excited by every bright light and new world opened up by a digital display. I was so excited by what computers could do that, before my family owned a computer, I wrote out the code for a text-based computer game on an electric typewriter. Circa 2000 I would physically go to the Apple Store to watch the live-streamed Steve Jobs keynotes introducing new Apple products, even when I wasn’t planning on buying one soon. At a family Christmas event in 2011, it became clear to me that educational technology was the right non-faculty career choice, when I realized everyone else had left the room while my wife’s uncle and I had a heated discussion about operating systems. After all that, I doubled down and got a master’s in computer science.

That’s why it pains me deeply to say: I’ve become a techno-pessimist.

My previous career in educational technology played a major role in this realization. I’m proud of the work that I, and later my direct reports, did at Boston University; from Blackboard Ally’s disability access to Turnitin’s macros for speeding up essay marking, I think we did our part to make professors’ and students’ lives better and easier. I just wish that I could say the same for technology in education as a whole.

Over my twelve years in the field, there were three major innovations in educational technology that made headlines and did something to transform education. And I think that, overall, the impact of all three was bad.

The first of these was massive open online courses (MOOCs) like edX. Thomas L. Friedman’s infamous column breathlessly celebrated that we would put pretty much every university out of a job because students would just take “the best online courses from the best professors from around the world — some computing from Stanford, some entrepreneurship from Wharton, some ethics from Brandeis, some literature from Edinburgh — paying only the nominal fee for the certificates of completion” – as if it would be a good thing for students and professors never to see each other.

Fortunately MOOCs died an ignominious death some time ago, once people realized the quality of the “education” they offered was garbage: it turns out that it makes a difference to receive personal attention from people who know something about the subject. edX revealed itself to be a sinister bait-and-switch: its glib founder Anant Agarwal had once raised piles of money on a promise of educating the world for free, telling the world about a girl in a Pakistani village who used edX to learn to code, and then eventually turned around and sold the company to a for-profit corporation. To all those nonprofit universities that sank six- or seven-figure sums into an organization that supposedly existed to educate the world: congratulations, all you accomplished was to line the pockets of a snake-oil salesman. Universities got played.

Happily, nobody takes edX itself seriously anymore. But what it left in its wake is something that could have worse consequences down the line: the continued dumbing down of higher education. It inspired places like the mostly online Southern New Hampshire University, which spends only 18% of its revenue on instruction – not a single professor has tenure – while lavishing it on advertising. “Disruptive innovation” had already been very bad for education, and online technologies made it significantly worse.

Then, perhaps more controversially, there was the efficient video conferencing of Zoom. I think this made an overall positive difference in higher education, where I worked, allowing students already capable of self-directed work to continue learning remotely through COVID. But in the much larger world of elementary and secondary (“K-12”) education, video conferencing was a disaster. One study noted that even in the “best-case” scenario of the Netherlands (“short lockdown, equitable school funding, and world-leading rates of broadband access”), “students made little or no progress while learning from home.” In general, most studies found significant evidence of learning loss – the longer the closure, the greater the loss. If K-12 schools had simply closed for the worst of the pandemic, they would surely have had far more incentive to reopen sooner – but because Zoom allowed them to pretend that the students were still learning, they wasted everyone’s time and the students learned less. No wonder that when a 2022 educational technology conference asked high-school students for their perspective, their nearly unanimous response was “For technology, less is more.” (One obvious explanation for why I was enthusiastic about tech in the 2000s and not now is that then I was in my twenties and now I am in my forties, and therefore this is just an old man’s crankiness – but these teens belie that explanation.)

Finally, there was artificial intelligence (or large language models, to be more specific). In 2023, right before I left the field, educational technology’s big challenge was figuring out ways to stop students from using the new tools of generative AI, which provided them with such easy ways to cheat. Turnitin claimed to offer a tool to detect AI cheating, but it was a disgraceful farce: unlike their normal plagiarism tool, which allowed professors to follow up and see where plagiarized passages were lifted from, the AI tool gave nothing more than a percentage of how likely it “thought” a work had been made by AI, with no further explanation. Imagine trying to discipline a student for AI cheating with that as your “evidence”!

Image created by Bing. At least AI can give us a picture of the direction it’s currently leading us in.

Beyond education, there is something particularly dystopian about the phenomenon of AI “creativity”. The dream of techno-visionaries for well over a century has been a Hägglundian realm of freedom, a world where the technology does the drudgery and frees us up for our creative pursuits. But generative AI now spits up essays and music in seconds, doing the writing itself and leaving us to edit – the technology does the creativity and leaves us with the drudge work! Instead of technology moving human beings from the realm of necessity to the realm of freedom, the insentient technology occupies the realm of freedom for itself and pushes us out further into the realm of necessity. (Symbolizing the depressing world it is helping to build, Apple recently released an ad where musical instruments and other symbols of creativity are literally crushed to be replaced with an iPad. Weird fun fact for everyone too young to remember: once upon a time, Apple used to be for creative people.) Technology was supposed to serve us human beings, but so far the direction of generative AI has been toward a world in which we instead serve technology – the road to Chiron Beta Prime.

Then there is the social-media apocalypse. By nearly every indicator – diagnosis, self-report, rates of self-harm and suicide – the mental health of teenagers in anglophone countries has taken a disastrous nosedive since 2012, when Instagram and smartphone selfies became a mainstream part of childhood, and there is mounting evidence to show this isn’t merely correlational. Meanwhile our public sphere has become dominated by extremes on the left and right, whose achievements in either direction are dwarfed by the volume of conflict they have created – all spurred on by the algorithms of YouTube and Twitter and Facebook that reward extremism.

And none of this is even to mention the phenomenon that Cory Doctorow has aptly called enshittification, in which corporations that got a large user base by making a quality product (largely in the 2000s) steadily decrease that product’s quality – often by steadily increasing the amount of advertising – in order to make more money once the users are locked in. Amazon started competing with physical bookstores through the “look inside” feature, which approximated the experience of browsing a book in a store… and now that the physical bookstores are gone, Amazon took the feature away. The first screens of results you saw searching Amazon used to be the ones that matched your search; now they’re from those who pay to advertise. More and more of our Facebook and Instagram feeds are taken up by advertising. Ads now appear on Google as if they were your actual search results. And so on. Notice in particular: none of this is happening because these companies are losing money and need to make up for lean times. They’ve been hugely profitable for a long time! They could still be giving us a product as good as the one they gave us in the 2000s, an experience we could rely on that limited the advertising. They just choose not to – because they can.

I think the verdict’s clear: the changes in digital technology since about 2009 have made our lives worse. So what can we do about that? Tough question, but I’ll give it a crack next time.