
Tales of a Staff Engineer (and a Secret Agent Fox) from DevNexus 2026


by the Alchemain team

Since 2004, DevNexus has been a high point on the calendar for developers, architects and technology leaders, and this year was no different! 1,500 attendees (and one cartoon fox) came together for three days of expert-led panels and workshops, engaging sessions, and plenty of networking in Atlanta, Georgia. Here are some of my main takeaways from the event, where the topics on everyone’s lips were AI, AI, and AI. 

Coding Ain’t What it Used to Be

Word on the conference floor is that the shift from human to AI coding has changed everything. Organizations are now using coding assistants, from Claude Code to OpenAI's Codex, to build entire apps from scratch and to take over some of their most time-consuming tasks. Instead of being the coders, engineers are describing themselves as AI managers.

While the productivity gains are real, there's definitely a lot of doom and gloom around the shift. If AI can do the work for you, what's the point of a team of engineers? Well, as 00felix says, get with the program or get left behind! We believe that organizations that use AI tools effectively will empower developers to shift their focus, whether that's to DevOps, to infrastructure, or to some other business area that generates revenue and impact for the org.

To make this happen, companies need to ask themselves: what does it mean to use AI tools effectively? One major consideration people were discussing at DevNexus is cost. A lot of AI tools are cheap right now because the big players want us to start consuming them and make them part of our workflow. Down the road, are we in for a bubble-bursting scenario where costs skyrocket? We saw this with the cloud, where promises of reduced costs drove massive adoption and early users were heavily subsidized. Fast forward a few years, and companies like Dropbox, Adobe, Basecamp and more are rolling back their cloud footprint and moving back on-prem in an effort to save money.

Companies should definitely weigh this dynamic before onboarding new AI tools. At Alchemain, for example, we charge by line of code and do the hard work ourselves, so you don't need to worry about token use. It's a transparent model where you can see exactly what you're spending, and we're proud of that approach.

What Does Agentic Really Mean for Software Dependency Management? 

I had a lot of conversations at DevNexus about agentic AI and the mandate from on high to have AI in the loop at all points. I loved Rod Johnson's keynote: the creator of the popular and widely used Spring Framework is now building a new agent framework for Java called Embabel. He believes it's up to us Java developers to shape this space, and that the JVM is a great platform for it. If you don't want to switch to a new framework, there's also Spring AI from the Spring team at Broadcom, or LangChain4j in conjunction with Red Hat's Quarkus. All worth checking out.

Taking a step back, a lot of the chatter centered on modernizing in a smart way. Many companies are mandating AI or agentic adoption, but without a clear roadmap. That leaves people asking: if we're going to adopt new AI tools and technologies, what does success actually look like? When so many teams are still stuck in old code, running Spring Boot 2 or Java 8, what should they be looking for?

What caught my attention were the few solutions that can retry or fail gracefully when they hit a bump in the road. A lot of AI tools can surface issues or suggest fixes, but not many can fix a problem independently, and even fewer can retry when those fixes hit a wall. I attended a session by Sam Dengler from JPMorgan about Restate, an execution framework for building durable orchestration workflows. In practice, that means if one part of the orchestration fails, it can retry or fail gracefully and then pick up again at the right point.
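Restate's real SDK journals each completed step so work is never redone after a crash; the API below is not Restate's, just a minimal sketch of the retry-or-fail-gracefully idea, with hypothetical names throughout:

```java
import java.util.function.Supplier;

// Illustrative sketch of a durable, retryable workflow step.
// All names are hypothetical; a real durable-execution engine
// like Restate also journals results so completed steps are
// never re-executed after a process restart.
public class DurableStep {
    public static <T> T runWithRetry(Supplier<T> step, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return step.get(); // success: a real engine would journal this result
            } catch (RuntimeException e) {
                last = e;          // transient failure: try again
            }
        }
        throw last; // retries exhausted: fail so the orchestrator can react
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // A step that fails twice before succeeding, as a stand-in
        // for a flaky network call or build step.
        String result = runWithRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The key design point is that the caller never sees the intermediate failures: the orchestration either proceeds with a good result or surfaces one final, handleable error.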

It reminded me a lot of Alchemain! A crucial part of our product is that we don't just stop at opening PRs that may or may not produce a green build. Our compile-fix-test-retry loop validates that everything works before we bring in the human to merge the PR. Durability in AI tools is a topic to watch.
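The shape of a compile-fix-test-retry loop can be sketched roughly as follows; this is an illustration with hypothetical stand-ins (`compileAndTest` for a build runner, the candidate list for AI-proposed fixes), not Alchemain's actual implementation:

```java
import java.util.List;
import java.util.function.Predicate;

// Rough sketch of a compile-fix-test-retry validation loop:
// keep trying candidate fixes until one yields a green build.
public class ValidationLoop {
    /** Returns the first candidate patch that passes the build, or null if none do. */
    public static String validate(List<String> candidatePatches,
                                  Predicate<String> compileAndTest) {
        for (String patch : candidatePatches) {
            if (compileAndTest.test(patch)) {
                return patch; // green build: ready for a human to review and merge
            }
            // red build: retry with the next proposed fix
        }
        return null; // all candidates exhausted; escalate to a human
    }

    public static void main(String[] args) {
        // Fake build runner: only the second fix passes.
        Predicate<String> fakeBuild = p -> p.contains("fix-b");
        String winner = validate(List.of("fix-a", "fix-b", "fix-c"), fakeBuild);
        System.out.println(winner);
    }
}
```

The point of the loop is that a PR is only ever opened from the green branch, so the human review step starts from a working build rather than a guess.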

Bringing Security into the Conversation

Finally, especially among those leaning into AI coding assistants, 00felix and I heard a lot of fear about whether we can trust the code AI is pulling in. Too many people are letting their assistants do the work without checking the output, or are totally unaware of what's happening under the hood.

AI can work fast, but it can also introduce libraries, packages, or versions that developers didn't explicitly choose or evaluate. That means vulnerabilities, outdated dependencies, or risky transitive packages can slip into production without anyone realizing it.

00felix helps by automatically analyzing the dependencies introduced into a project, mapping the full dependency tree, and identifying vulnerabilities or risky components, including those buried deep in transitive dependencies. More importantly, it doesn’t just flag the problem. It can automatically remediate dependency issues by updating packages and resolving version conflicts while ensuring the build still works. In a world where AI is writing more and more of the code, having automated verification and remediation like this becomes critical to keeping software supply chains secure.
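At its core, catching a risky component "buried deep" means walking the full dependency tree, not just the direct dependencies. Here is a minimal sketch of that idea, assuming a toy tree model and a known-vulnerable list; it is not 00felix's API or a real SCA tool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustrative sketch: flag vulnerable artifacts anywhere in a
// dependency tree, including transitive dependencies. The tree
// model and vulnerability set are hypothetical.
public class DepScan {
    record Dep(String coords, List<Dep> children) {}

    /** Depth-first walk collecting every coordinate on a known-vulnerable list. */
    public static List<String> findVulnerable(Dep root, Set<String> vulnerable) {
        List<String> hits = new ArrayList<>();
        if (vulnerable.contains(root.coords())) {
            hits.add(root.coords());
        }
        for (Dep child : root.children()) {
            hits.addAll(findVulnerable(child, vulnerable)); // recurse into transitive deps
        }
        return hits;
    }

    public static void main(String[] args) {
        // The vulnerable artifact sits two levels down, where a
        // direct-dependency-only scan would miss it.
        Dep tree = new Dep("app:1.0", List.of(
            new Dep("lib-a:2.1", List.of(new Dep("log4j-core:2.14.1", List.of()))),
            new Dep("lib-b:3.0", List.of())));
        System.out.println(findVulnerable(tree, Set.of("log4j-core:2.14.1")));
    }
}
```

Real tooling layers a lot more on top of this walk (version ranges, advisory databases, remediation), but the transitive traversal is the part that keeps deep dependencies from hiding.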

That’s a wrap on DevNexus 2026! I’m always happy to talk about the latest in dependency management with anyone who loves this stuff as much as I do. You can reach me on LinkedIn, and learn more about Alchemain here.