The people who’ve stood out in my collaborations are not just amazing engineers, but amazing people. Whether it came naturally to them or not, they deeply understood and cared about the people they worked with, especially when their collaborators were less experienced and still learning. I worry a lot that future AI systems will not be this kind of collaborator. Increasingly, the field measures progress in terms of autonomy: how long a model can work alone, or with other instances of itself, to complete a task. But there’s another axis: how well does it work with us?
There are clear downsides to models that don’t understand us. Models optimized for your immediate happiness will tell you whatever you want to hear. You’ll learn less from making things and you’ll understand less about the systems you build. Fundamentally, you’ll grow less and have less say in what you create.
In a world of fully autonomous agents, people would (reasonably!) feel like they are being replaced by AI, but this is not the only possible outcome. Instead, we can build AI systems and interfaces that understand our priorities and interests. These systems would not only answer questions and accomplish tasks, but help us discover things we didn't even know we’d be excited about. They would collaborate with us actively and help us understand each other, and we’d feel like we are meaningfully participating and contributing.
These are hard problems that will require new interfaces and algorithms, but they are solvable: we can have models that act towards us the way people act towards others they care deeply about. Having more robust, reliable, smarter models is still important; we still want models to help us solve the world's biggest challenges. Many of the underlying frameworks that let models solve hard problems and collaborate with each other can transfer to understanding us and collaborating with us. But if we invest everything in autonomy and nothing in collaboration, all IQ and no EQ, the future will be a colder place.