
Why I’m not too worried about AI taking over the world

[stuff stuff]

I’m currently reading Nick Bostrom’s classic book Superintelligence, and my thoughts could change based on how he describes the future of AGI and our relationship with it. It’s definitely in the science-fiction/transhumanist camp, which I think appeals to a very niche effective-altruist subset of people, but there’s a high overlap between that group and Silicon Valley.

Some concerns about AI that I think are valid

Use by malicious state actors

Propagation of misinformation

AI makes it significantly easier to run coordinated misinformation campaigns. [more stuff]

For example, in the Philippines, misinformation is already rampant and is used as a tool by elites to craft narratives that suit their interests, in ways more explicit and subversive than Western democracies could dream of. Rappler has done extensive work, for example, on how the Philippines was used as a testing ground for Cambridge Analytica before the 2016 US elections. They also track many of the falsehoods spread by the current Marcos administration as well as the previous Duterte administration, so much so that they were branded a “fake news outlet” by Duterte himself, using classic Trump-style tactics. Just recently, a deepfake of the current president Bongbong Marcos was discovered that urged Filipinos to take action against China, at a time when Philippine–China relations, in light of conflicts in the West Philippine/South China Sea as well as concerns about “Chinese sleeper agents”, are at an all-time low.

My reasons for not worrying about AI in general

If AI exceeded human intelligence, how would we know? And would we believe it?

Can AI touch grass?

AI adoption is left as an exercise for the reader

The tech industry is great at building things, but far worse at getting them adopted in practice.

So much enterprise tech still revolves around combinations of Excel, VBA, and people coding things up by hand. That’s why so many startups can come in and try to fix “enterprise problems” even though AI exists.

Most companies arguably don’t even have the data infrastructure required to do AI in the first place.

A lot of value is currently locked away in proprietary databases.

And applying AI in areas of high reach, like corporations and governments, faces plenty of non-technical hurdles: regulation, paperwork, the whims of the C-suite, and so on.

For example, a large chunk of Wall Street has been grappling with migrating legacy codebases away from COBOL written in the 1960s to something more modern and compliant. Companies such as IBM have been hard at work using generative AI to translate COBOL to more modern languages like Java, but that is not a straightforward task by any means. As anyone who has done a nontrivial migration of either a codebase or a database knows, a migration can be one of the most stressful yet ill-rewarded things you can do as an engineer. Your job is to move one system onto another, all while keeping the current system operational in real time, all while verifying that each change behaves correctly, and all while keeping backups in place in case the switch from the old system to the new one breaks. This blog and this blog both describe in more detail what these “horror story” experiences can be like; they really can be watershed moments in the maturation of a software engineer.
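To make the “keep both systems running” point concrete, here’s a minimal sketch of a parallel-run harness in Python. The function names and the quoting example are hypothetical stand-ins; the pattern is simply to keep serving the legacy system’s answer while shadow-calling the new one and logging divergences for offline review.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("parallel_run")

# Hypothetical stand-ins: in a real migration these would call into the
# legacy COBOL system and the rewritten Java service, respectively.
def legacy_quote(account_id: str) -> float:
    return 100.0

def new_quote(account_id: str) -> float:
    return 100.0

def quote(account_id: str) -> float:
    """Serve the legacy answer while shadow-running the new system."""
    old = legacy_quote(account_id)
    try:
        new = new_quote(account_id)
        if abs(old - new) > 1e-9:
            # Mismatches are logged for later analysis, never shown to users.
            logger.warning("divergence for %s: legacy=%s new=%s",
                           account_id, old, new)
    except Exception:
        # The shadow path must never take down live traffic.
        logger.exception("shadow call failed for %s", account_id)
    return old  # the legacy system stays authoritative until cutover
```

Only once the divergence log stays quiet for long enough do you dare flip the switch, and even then you keep the old system around as the backup the paragraph above warns about.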

Advancing the frontiers != advancing all of humanity

[stuff]

There’s an implicit assumption that working on better AI models, self-driving cars, cryptocurrencies, or whatever frontier projects Silicon Valley is currently overhyping will somehow lead to “the betterment of all humanity”. This techno-optimism is deeply ingrained in Silicon Valley tech culture and is reinforced by the mountains of capital and human talent pushed toward these projects.

Self-driving cars

Let’s take self-driving cars for example.

Financial applications

Silicon Valley loves its fintech success stories: Stripe and the long tail of newer payments startups that followed.

But large parts of the world remain unbanked.

And large parts of the world still operate on cash.

There are plenty of problems left to be fixed that don’t require “Generative AI”

Simple, impactful uses of “classical” (i.e., “boring”) AI

There is this platform called DrivenData that hosts data-science competitions built around social-good problems, the kind of work where “boring” models already go a long way.
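To give a flavor of what “boring” AI means here, below is a minimal sketch using scikit-learn on synthetic data. A plain logistic regression like this is often a perfectly competitive first baseline for the tabular prediction problems such competitions pose (DrivenData’s water-pump-repair challenge is a classic example).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular social-good dataset, e.g. predicting
# which water pumps are likely to need repair.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No GPUs, no prompts: a plain logistic regression as a first baseline.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```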

What I am more worried about instead

There are many problems in the world that I find more worrying than any thoughts about AI apocalypse or “AI eating the world”.

Conclusion

I’m very pro-progress. I love tech, I love working in AI as an engineer, and I think that working on AI for frontier applications does, on net, lead to general advancement. But I also think that the general ethos of “advancing all humanity” by hyperfixating on Silicon Valley’s latest pet projects is disingenuous, short-sighted, and really only serves to exacerbate already-existing divides between the West and the Global South.