Will AI coding tools make languages like Rust more accessible and popular?

Until recently most AI coding tools required significant oversight and input from developers. The release of models such as Claude Opus 4.6 and GPT 5.3 Codex is changing that. Boris Cherny, the developer behind Claude Code, reports that by the end of 2025, AI wrote 100% of his code. Of course, you have to be a little wary about taking some of the statements from AI model developers at face value – there are plenty who will take issue with Cherny’s recent assertion that coding is a domain that is ‘largely solved.’ But whether or not you believe some of the more extravagant claims of the AI-pilled, it is hard to deny that there has been a significant shift in the capabilities of AI coding assistants in the last few months. People who viewed coding assistants as slightly more sophisticated autocomplete are now embracing them. Productivity is increasing. Developer workflows are changing.


But what if the implications of the new AI coding models go beyond merely increasing productivity and changing workflows? What if they change the technologies we use and the languages we develop in? If developers don’t have to write code any more, does it matter what language they write it in? Code review obviously remains an important part of the developer’s workflow, but AI now helps with that too, and people like Cherny are clear that they expect AI to get better and better at code review and the other tasks it currently handles less well.


Language choice has always been a negotiation between abstraction and control. Higher-level languages such as Python, Ruby, and JavaScript gained prominence because they allowed developers to move quickly, express intent clearly, and rely on runtime systems to manage memory and concurrency. They flourished in domains where productivity mattered more than predictability, from web applications to data analysis and automation. Lower-level languages such as C and C++ persisted where hardware proximity, deterministic performance, and fine-grained resource management were essential, including operating systems, databases, and networking stacks. Historically, teams accepted the risks of manual memory management in exchange for performance, while tolerating inefficiency at higher levels in exchange for developer productivity. There have been attempts to bridge this gap with new languages that offer both performance and productivity – Rust being the most notable example.


We wrote Wingfoil, our graph-based data streaming framework, in Rust, precisely because it offers that bridge between performance and productivity. On widely cited benchmarks, Rust routinely delivers performance within a few percentage points of optimized C and C++, while avoiding the runtime overhead of garbage collection. At the same time, surveys such as Stack Overflow’s annual developer survey have, for years, ranked Rust as the most loved language, and studies of defect rates suggest that Rust’s compile-time guarantees dramatically reduce classes of bugs that would otherwise require extensive testing and debugging. The result is a language that promises something close to the productivity profile of Java, with performance characteristics approaching those of C++. On paper, this combination makes Rust look like the logical destination for a wide range of systems and backend development. And Rust has had an increasing impact in recent years: it has been accepted into the Linux kernel, not as an experimental curiosity but as a supported language for new subsystems. Major cloud providers use Rust in production for performance- and security-critical components, from networking layers to edge infrastructure. Mozilla’s Servo engine, Android, parts of Amazon’s AWS stack, Cloudflare’s edge services, and growing portions of the WebAssembly ecosystem all rely on Rust where correctness and efficiency intersect. These are not greenfield experiments but foundational systems, and their success provides a proof point that Rust can operate at industrial scale – evidence of a steady, structural shift towards Rust that is now difficult to reverse.


And yet, despite this compelling value proposition, and some significant adoption, Rust hasn’t yet swept all before it. Despite being widely admired, it has not displaced incumbents like C++ and Java in the way its advocates once anticipated. There are two main reasons for this – a steep learning curve and organisational inertia. 


Let’s address the learning curve first. Rust is relatively difficult for developers to learn, in part because of its strictness – it doesn’t allow developers to be vague about memory ownership, mutability, or lifetimes. Those concepts must be understood early, and the compiler enforces that understanding with error messages that, while precise, can feel unforgiving. For developers coming from garbage-collected or dynamically typed languages, the initial learning curve is steep. Productivity gains arrive later, after the mental model has settled. But by that time many have given up. This delayed payoff has certainly limited Rust’s adoption by constricting the pool of Rust developers, which has in turn hampered enterprise adoption. Ask a CTO at a big corporate why they’re not adopting Rust, and they’ll most likely tell you that it’s because they can’t get enough developers.
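To make that learning curve concrete, here is a minimal sketch of the ownership and borrowing rules the compiler enforces. The helper name `take_and_measure` is purely illustrative; the commented-out lines are exactly the kind of code Rust refuses to compile.

```rust
// Illustrative helper: takes ownership of the String, which is dropped on return.
fn take_and_measure(s: String) -> usize {
    s.len()
}

fn main() {
    let s = String::from("wingfoil");
    let borrowed_len = s.len(); // immutable borrow: `s` remains usable afterwards
    let moved_len = take_and_measure(s); // ownership moves into the function
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`
    assert_eq!(borrowed_len, moved_len);

    let mut v = vec![1, 2, 3];
    {
        let first = &v[0]; // shared (immutable) borrow is active here
        // v.push(4); // error[E0502]: cannot borrow `v` as mutable
        assert_eq!(*first, 1);
    } // shared borrow ends at this scope boundary
    v.push(4); // an exclusive (mutable) borrow is now allowed
    println!("v = {:?}", v);
}
```

The point is that none of these rules are optional: the vagueness that a C++ or Python developer can get away with simply doesn’t compile.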


But if Claude or Codex is writing all the code, that becomes much less of an issue. Modern code generation tools can produce idiomatic Rust that respects ownership, lifetimes, and borrowing rules without the developer having to recall every detail from memory. Code review remains a concern, but it requires fewer developers, and AI can also help developers learn a language like Rust far more quickly. If AI can generate reliable code in multiple languages with comparable ease, then language choice naturally shifts toward structural qualities such as safety, performance, and long-term maintainability. What’s more, AI coding agents are often ‘good’ at Rust precisely because it forces correctness: instead of treating the compiler as an adversary during the early learning phase, developers can use AI as a guide, inspecting generated examples and iterating toward understanding. It is also arguable that Rust’s strictness makes it safer for AI. Since the compiler catches whole classes of AI hallucination (use-after-free errors, data races) before the code ever runs, Rust is arguably a better ‘target’ for AI than Python or C++. All of which means that friction shifts from ‘how do I make this compile’ to ‘does this design make sense’ – a far more familiar and comfortable problem. The learning curve becomes less steep when developers can rely on high-quality examples on demand and use AI tools to architect and test code. Inertia becomes less powerful when teams are no longer constrained by the availability of deeply specialised expertise. If a competent backend engineer can be productive in Rust within weeks rather than months, the economic argument against adoption weakens substantially.
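A small illustration of why Rust is a safe ‘target’ for generated code. The function name `parallel_count` is our own for this sketch; what matters is that the compiler forces the thread-safe `Arc`/`Mutex` pattern below, and would reject an unsynchronised alternative (say, `Rc<RefCell<usize>>`) at compile time because it isn’t `Send` – a hallucinated data race never makes it past the build.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Increments a shared counter from several threads.
// Sharing the counter compiles only through thread-safe types:
// Arc provides an atomic reference count, Mutex provides exclusive access.
fn parallel_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                // The lock guarantees no two threads mutate the count at once.
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("total = {}", parallel_count(4, 1000));
}
```

The equivalent mistake in C++ – sharing a plain `int` across threads – compiles silently and fails intermittently at runtime, which is precisely the kind of generated bug a human reviewer is likely to miss.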


Organisational inertia is in part a function of this perceived lack of resource, but it goes deeper than that. Most organisations already have deep investments in existing languages, frameworks and human capital. Rewriting core infrastructure is a big investment. Even when Rust is technically superior, it competes against ecosystems of libraries, toolchains, operational knowledge and hiring pipelines. From a managerial perspective, choosing Rust can look like an unnecessary bet, especially when Java, C++ or Python are ‘good enough’ and familiar: even if that means accepting a hit on performance or productivity, who wants to be the one to take the risk? But again, this starts to look like much less of a problem if coding is mainly done by machines with much broader capabilities – machines that are proficient in all languages, and can set up environments and make intelligent design choices. Machines that aren’t interested in flame wars about which language is best, but look to employ the right tool for the right job.


Of course, not everyone agrees with Boris Cherny’s assertion that ‘coding is largely solved’, and we’re not arguing that AI coding tools mean Rust will suddenly become the de facto language of choice, or that other established languages are about to disappear. Far from it – Python, JavaScript, and Java are deeply embedded in domains where their ecosystems matter more than raw performance or memory safety. AI accelerates work in those languages just as effectively, reinforcing their dominance in data science, web development, and enterprise software. However, we’d argue that lower-level languages with weak safety properties face a less certain future. In new projects where performance matters and AI can write the code and manage syntactic complexity, the rationale for choosing C or C++ over Rust becomes increasingly thin. Similar dynamics may benefit other expressive but demanding languages – from functional languages to specialised domain-specific tools – as AI lowers the cost of entry.


And there is evidence that shifts in the language landscape are starting to take place. On Monday Anthropic announced that Claude can now help streamline the tasks that once made COBOL optimisation cost-prohibitive. IBM, which makes lots of money consulting on and managing these code bases, saw its stock drop 13% on the news. And while this isn’t a switch in language, it clearly shows the impact that AI tools are starting to have on decisions about which languages to use and when.


We’re now entering a world where large amounts of code will be generated and checked by AI. In that world, we’d argue, Rust’s guarantees start to look less like an academic luxury and more like a rational default – and the question of which language to use when will be driven much less by organisational inertia, and much more by the desire to choose the right tool for the right job.