Training the Next Generation of AI-Native SWEs
The days of carefully crafting a CRUD application character by character may soon feel distant. As software engineering rapidly transforms into an AI-native discipline, our role as engineers will be completely rewritten. Instead of ensuring every last semicolon falls into place or memorizing arcane regex or bash syntax, we must now navigate a rapidly evolving landscape, one that only seems to be accelerating, and stay abreast of each new model's quirks and personality, if you will. The question becomes:
“How do we train the next generation of software engineers to thrive in this new AI-native landscape?”
My Experience So Far
As an engineer myself, I've found that AI allows me to rapidly eliminate the mundane parts of engineering. Tasks like assembling a successful API call from dense, arcane documentation, meticulously crafting regular expressions character by character, or wading through lengthy explanations to implement complex SQL windowing techniques have traditionally consumed huge chunks of our time. They're tedious, unpredictable, and yet often essential hurdles we have to clear.
Using AI, I’ve experienced a genuine sense of euphoria. Suddenly, I’m able to tackle projects I wouldn’t have had time to explore previously. A prompt like, “Write a particle system for a 2D physics platformer,” fed into ChatGPT, gives me a solid starting point that would otherwise have been prohibitively costly in terms of time and effort.
What once took days or even weeks of careful study and experimentation is now condensed into a brief dialogue with AI. Just a few queries can significantly improve my grasp on a problem, rapidly enabling deeper reasoning and quicker iteration.
Yet I've been surprised by how much results vary from engineer to engineer. For instance, I've heard from engineers I respect that "AI doesn't work for me," or "I hadn't thought about using AI for that."
AI works great for me and I try using it for everything. Based on what’s been successful, I’ve developed key beliefs about how I’d train the next generation of software engineers:
Always Be Tinkering
Each model has its own personality. Understanding each one's quirks, what it does well and why, and how to prompt it (and how not to prompt it) requires many reps for us to build the intuition needed to steer it effectively.
Our workflows should look less and less like they did prior to the release of ChatGPT in late 2022. I've given myself permission to make exploring new workflows a part of my routine.
For instance, my writing workflow now involves various revisions and conversations with AI to refine my thoughts and language. My first draft is always voice-powered, and subsequent drafts are hand-written to retain tone.
As another example, I bias against reading documentation now. I try asking questions to a language model first, and only if it produces an obviously incorrect answer or if I need to corroborate something do I fall back to documentation.
I’ll never return to the old way of doing things at this point.
Nothing Is Precious
As engineers, we pride ourselves on writing high-quality code. We often get attached to the code we carefully toil over. In an AI world, we’re afforded the luxury of remixing and rewriting code liberally. Perfectionism will be a much bigger liability.
Of course, we don’t want to be sloppy or over-trust the AI. I believe that the value of invariants in coding will rise. Using AI to help us enforce invariants—or perhaps more generally, to define and draw interface boundaries—and then using AI to rapidly iterate on the code within the safety of those boundaries is the direction toward which I’m betting our workflows will evolve.
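To make the invariants idea concrete, here's a minimal sketch (my own toy example, not from the article): pin down the interface boundary with a handful of invariant checks, then let AI rewrite the function body as liberally as you like, so long as the checks keep passing.

```python
def normalize_scores(scores: list[float]) -> list[float]:
    """Scale scores so they sum to 1.0.

    The body of this function is fair game for AI-driven rewrites;
    the invariants below define the boundary that must hold.
    """
    total = sum(scores)
    if total == 0:
        # Degenerate case: spread probability mass evenly.
        return [1.0 / len(scores)] * len(scores)
    return [s / total for s in scores]


def check_invariants(scores: list[float]) -> None:
    out = normalize_scores(scores)
    assert len(out) == len(scores)           # same shape as the input
    assert abs(sum(out) - 1.0) < 1e-9        # always sums to one
    assert all(o >= 0 for o in out)          # non-negative in stays non-negative out


check_invariants([3.0, 1.0, 0.0])
check_invariants([0.0, 0.0])
```

The function is trivial on purpose: the point is that the `check_invariants` boundary, not the implementation, is the part we treat as precious.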
Staying on Top of Changes Is Now Part of Your Job
The current rate of change in AI is dizzying, and it feels impossible to keep up. It's more important than ever to make time to read updates about what's going on. Ideally, you have a colleague who plays this role exceptionally well and keeps everybody abreast of the latest. Regardless, building enough understanding to know why Sonnet 3.7 is different from Gemini 2.5 Pro is different from o3 will become paramount, and keeping up with the onslaught of releases as an ongoing practice will be important.
As a shortcut, I’ve found Hacker News to be a great resource, as well as Twitter users that synthesize AI news into easy-to-digest threads.
Learn A Little ML
As with any abstraction, understanding what's going on under the hood will invariably make you more effective. The concepts necessary to understand LLMs and deep learning seem pretty distinct from the "traditional" ML taught in college courses, such as linear and logistic regression, clustering algorithms, etc. (I may be dating myself here). I'd recommend digging into the fast.ai deep learning course or Build a Large Language Model (From Scratch) and getting up to speed on concepts like embeddings, attention mechanisms, and tokenization.
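To give a flavor of one of those concepts, here's a toy scaled dot-product attention in plain Python (my illustrative sketch, not from either resource above): each query scores itself against every key, the scores become softmax weights, and the output is the weighted mix of the value vectors.

```python
import math


def softmax(xs: list[float]) -> list[float]:
    m = max(xs)
    es = [math.exp(x - m) for x in xs]  # subtract the max for numerical stability
    s = sum(es)
    return [e / s for e in es]


def attention(queries, keys, values):
    """Scaled dot-product attention over lists of plain-Python vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        out.append([
            sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))
        ])
    return out


# One query that matches the first key far more strongly than the second,
# so the output lands very close to the first value vector.
result = attention([[1.0, 0.0]], [[10.0, 0.0], [0.0, 10.0]], [[1.0], [2.0]])
```

Real implementations batch this into matrix multiplies, but the mechanics are exactly these few lines, which is why working through a from-scratch version pays off.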
Conclusion
Ultimately, embracing AI-native practices isn't just about productivity, but about building habits that continuously help us adapt in a world where the only constant is change. Those habits help us build comfort, stay relevant, stay competitive, and take risks when our industry feels chaotic.
Do you have lessons you've learned? Add me on X and share.
