Are all hard technical problems AI problems now?
It may be all over... or just beginning, depending on your point of view.
It was a Sunday afternoon. Bored, I decided to head out for a walk. I’d been thinking about hard technical problems in general.
In the past few years, I’ve observed the crazy amount of progress in the field of AI, specifically deep learning, mostly brought about by the invention of the Transformer architecture. Since then, we’ve seen the creation of tools that can communicate with us on a near-human level (and bullshit even better); tools that allow us to predict the weather up to 12 hours ahead; and tools that have allowed us to, in essence, solve protein folding.
Many of these problems would have been considered intractable just a decade ago.
Think about just how far we’ve come in a relatively short amount of time. The first personal computer was made in the 1970s.
If another winter isn’t brewing, and progress keeps growing at its current rate, who is to say that all technical problems won’t eventually become AI problems?
Make a list of difficult problems and you’ll realize that, at their core, they are optimization/minimization problems (a toy sketch follows the list):
Climate change? The reduction of CO2 and other greenhouse gases in the atmosphere.
Water distribution? Mostly a routing, capital, and infrastructure problem.
Efficient logistics? Same as above. Heck, water distribution is itself a logistics problem.
Crop yield optimization? A problem of optimizing the mix of resources that lets a given crop grow best.
Diplomacy? Ok, this one is social, but AI already seems to be fairly good in this arena too.
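To make that framing concrete, here’s a minimal sketch of the crop yield case as a minimization problem. Everything in it is invented for illustration: the yield function, its coefficients, and the choice of water and fertilizer as the only inputs. The point is only the shape of the problem, a cost surface plus a search procedure.

```python
import numpy as np

# Hypothetical yield model: the function, its coefficients, and the two
# inputs (water, fertilizer) are made up purely for illustration.
def neg_yield(x):
    water, fert = x
    # Diminishing returns on each input, minus a cost for over-application.
    # We negate so that minimizing this function maximizes yield.
    return -(10 * np.log1p(water) + 6 * np.log1p(fert)
             - 0.5 * water - 0.8 * fert)

def num_grad(f, x, eps=1e-6):
    # Central-difference numerical gradient.
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

x = np.array([1.0, 1.0])                 # initial guess for (water, fertilizer)
for _ in range(2000):
    x -= 0.5 * num_grad(neg_yield, x)    # plain gradient descent

# The analytic optimum of this made-up surface is water = 19, fertilizer = 6.5.
print(f"water ≈ {x[0]:.2f}, fertilizer ≈ {x[1]:.2f}")
```

Real agronomic models are vastly messier, but this loop, a cost function plus an optimizer walking downhill, is the same basic machinery deep learning is built around.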
Neural networks, the models at the heart of deep learning, can approximate any continuous function over a bounded range of inputs; this is the universal approximation theorem. They can, in essence, simulate anything, with some error accounted for. Use them in novel ways and we can see world-changing results.
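As a small demonstration of that approximation property, here’s a one-hidden-layer tanh network fit to sin(x) on a bounded interval, trained with hand-rolled gradient descent. The width, learning rate, and step count are arbitrary demo choices, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit sin(x) on [-pi, pi] with a one-hidden-layer tanh network.
X = np.linspace(-np.pi, np.pi, 256).reshape(-1, 1)
y = np.sin(X)

H = 32  # hidden units
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 1.0, (H, 1)) / np.sqrt(H); b2 = np.zeros(1)

lr, n = 0.02, len(X)
for step in range(20000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2                    # network output
    err = pred - y

    dpred = 2.0 * err / n                 # d(MSE)/d(pred)
    dW2 = h.T @ dpred; db2 = dpred.sum(0)
    dh = (dpred @ W2.T) * (1.0 - h**2)    # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if step % 5000 == 0:
        print(step, float(np.mean(err**2)))  # MSE should fall steadily

print("final MSE:", float(np.mean(err**2)))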
When I got back, I was hit with the realization that we may truly be at the final frontier of technology; that we are at what can only be described as the weirdest point in history from a technological standpoint; perhaps approaching the plateau of Kurzweil’s technological S-curve, the most significant one to date.
The significance of all this for the meaning of work hasn’t been lost on me either.
This past year alone has seen an ongoing dispute between artists and users of image generation models, waged primarily by artists who fear being displaced. While it is framed as a copyright issue, it’s still very much a wage issue.
On the other hand, programmers and software engineers mainly welcome the increased capacity of Large Language Models to handle the drudgery.
For reference, Andrej Karpathy isn’t just anyone: he’s the former director of AI at Tesla, currently works at OpenAI, and is probably one of the best AI educators around.
This difference in perspective isn’t about how boring the respective jobs are so much as about their scope and the tasks therein.
Most programmers aren’t “just” programming. They are thinking about how their code integrates into a wider project; the product isn’t the code itself so much as how it fits a wider context. Sometimes that context is solving a physics problem, sometimes it’s building a video game. There’s always more to programming than just “programming”. Many of the artists concerned about the rise of AI in their profession weren’t animators, for example, but rather the sort who work online as freelancers and survive on commissions. For them, the painting or drawing was the product.
That doesn’t make their pleas any less valid, but the rising tide is quite probably unstoppable.
Currently, AI models have progressed far enough to handle tasks that are in low demand but require a very specific set of skills, and that are confined to digital environments. These models will continue to make progress in those environments and in the world of bits at large. Headway in physical environments will take much longer, owing to the inflexibility of hardware; Moravec’s paradox at play again.
It also became apparent to me that this race for strong AI may be, for all intents and purposes, zero-sum: a zero-sum pursuit of work and meaning; a zero-sum pursuit of resources and capital; perhaps a zero-sum pursuit of the power to dictate the future of human civilization as we know it. There is very little incentive for those who aspire to create a coherent general-purpose model to show the world how they did it. And even if they did, quite honestly, few would take an interest unless such a powerful tool were wrapped up in a pretty UI.
That last part is still up for debate, and maybe I was being a bit too hyperbolic.
That said, all technology is a force for centralization and there isn’t a more centralizing force than a piece of technology whose main task is to do it all.
I know it all sounds grim, and it could be… but I can’t help but be excited.
Growing up, I’d always dreamt of building, or at least having, a horde of tiny, flexible machines that would let me build whatever I wanted. In a sense, AI models, as they are now and as they will undoubtedly continue to become, will allow people to actually realize the ideas they’ve carried all along, without having to deal with the thousand tiny things that drain motivation before the main thing is ever achieved.
It’s automation on steroids.
For all the ways this could go wrong, there are some ways this could go right, and I can’t help but look optimistically at those possibilities. Whatever the case may be, this next decade is going to be really interesting. We’ve hit escape velocity and there’s no going back.