5 things to watch out for when using AI Code Generators for .NET modernisation

Thinking about using AI for .NET modernisation? Discover 5 critical pitfalls to avoid, from hidden architectural complexity and context limits to UI automation gaps and data security. Learn how to accelerate migration without sacrificing control.

Sanjeev Narayan

3/9/2026 · 4 min read

Over the past few months, I have spent time helping clients think through and execute .NET modernisation work, including situations where AI is introduced to accelerate analysis, migration, refactoring, and code uplift.

If AI tools are used well, they can materially reduce effort. Used carelessly, they can create a false sense of progress.

This is especially true in mid-sized to large enterprise .NET environments, where the codebase is only one part of the picture. The runtime, dependency graph, security posture, infrastructure assumptions, and deeply embedded business rules are often where the real complexity lives.

Here are five things I believe businesses should watch out for when using AI in .NET modernisation projects.

1. Modernising .NET is never just one upgrade

One of the most common mistakes I see is treating a .NET upgrade like a single technical event.

In reality, there are usually four separate layers to think through:

  • The .NET Framework or the target .NET runtime itself

  • The dependency and package ecosystem around the application

  • The security requirements that the upgraded solution must now satisfy

  • The UI behaviour. The most consistent culprit I have found so far is deeply buried UI logic in code and, in some cases, in databases.

You can move code forward syntactically, yet still end up with incompatible libraries, unsupported UI behaviour, or an application that no longer meets modern security expectations.

Microsoft’s own upgrade guidance shows this. A successful upgrade starts with assessment, compatibility review, and validation, not just automatic conversion.

2. AI context limits are better than before, but still a real constraint

A lot of people assume that larger context windows mean that large enterprise upgrades can now be handled end-to-end by AI.

That is not how it plays out in practice.

Yes, modern code models now support very large contexts. OpenAI’s Codex stack supports models with large windows, and Anthropic’s latest Claude models also advertise 1M-token context options in some environments.

But in complex multi-project .NET solutions, context pressure still appears quickly.

Why?

Because the model is not only reading source files. It also has to juggle architectural intent, business rules, UI flow, package constraints, security assumptions, migration strategy, and the output of earlier steps in the chain.

That is where hallucination risk increases. Not always because the model is weak, but because the task is poorly decomposed.

In my experience, the real differentiator is not who has the biggest context window. It is who has the better orchestration strategy, who slices the work properly, and who validates each stage.
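To make the decomposition idea concrete, here is a minimal Python sketch of one possible slicing strategy: estimate the token cost of each file, group files into per-project slices that stay under a fixed budget, and run a validation checkpoint after each stage. The budget, the characters-per-token heuristic, and all names are illustrative assumptions, not a prescribed pipeline:

```python
from dataclasses import dataclass, field

# Rough heuristic: ~4 characters per token for source code (assumption;
# a real pipeline would use the target model's own tokenizer).
CHARS_PER_TOKEN = 4
TOKEN_BUDGET = 60_000  # leave headroom well below the model's context limit

@dataclass
class MigrationTask:
    project: str
    files: list = field(default_factory=list)
    estimated_tokens: int = 0

def estimate_tokens(source: str) -> int:
    """Crude token estimate for a source file."""
    return len(source) // CHARS_PER_TOKEN

def slice_solution(projects: dict) -> list:
    """Split a multi-project solution into tasks that fit the token budget.

    `projects` maps project name -> {file path: file contents}. Projects are
    sliced one at a time, so each AI call carries a bounded, coherent chunk
    rather than the whole solution at once.
    """
    tasks = []
    for project, files in projects.items():
        task = MigrationTask(project=project)
        for path, source in files.items():
            tokens = estimate_tokens(source)
            if task.files and task.estimated_tokens + tokens > TOKEN_BUDGET:
                tasks.append(task)          # close the current slice
                task = MigrationTask(project=project)
            task.files.append(path)
            task.estimated_tokens += tokens
        if task.files:
            tasks.append(task)
    return tasks

def validate_stage(task: MigrationTask) -> bool:
    """Checkpoint hook: in practice, compile, run tests, and diff behaviour
    here before the next slice is allowed to proceed."""
    return task.estimated_tokens <= TOKEN_BUDGET
```

The point is not the specific numbers but the shape: work arrives at the model in validated, bounded stages instead of as one oversized prompt.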

Adding to this complexity, AI code generators follow coding patterns that may differ from what a team is used to. For a human reviewer, especially one who is not the original author, reading that generated code can lead to fatigue during quality assessment.

3. Complex UI is still one of the first areas where full automation breaks down

User interface migration remains one of the hardest parts of AI-assisted .NET modernisation.

This is particularly true when:

  • The UI is old and inconsistently structured.

  • The rendering logic is mixed with backend generation.

  • Business rules are hidden in event-driven behaviour.

  • Parts of the screen are conditionally constructed at runtime.

  • Nobody has documented what the UI is actually meant to do anymore.

In these environments, AI can appear productive very early. It can quickly generate views, components, and controller patterns.

But when the UI is deeply buried in legacy assumptions, the model often loses the thread. It can reproduce structure without preserving intent.

That is why mature modernisation work still needs architectural control, review checkpoints, and human checking against real business behaviour, not just generated output.

4. Knowledge leakage is a real enterprise concern

This is one of the biggest issues for serious organisations.

When businesses use public cloud-based code models, their code is transmitted outside their own environment. Providers may have contractual protections, enterprise controls, and strong security measures in place, but for certain clients, that still does not eliminate the underlying concern.

And in enterprise work, perception matters almost as much as policy.

For highly sensitive codebases, I strongly believe prevention is better than cure.

At Artrilogic, we approach this differently. We have built a local model environment designed for code-generation-style tasks, running on isolated multi-GPU infrastructure via Ollama and disconnected from the internet. That allows us to support clients who want the acceleration benefits of AI without exposing their source to public cloud model workflows.

This becomes notably valuable in regulated, security-sensitive, or commercially sensitive modernisation programs.
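For readers unfamiliar with how a disconnected setup works in practice: Ollama exposes a REST API on the local machine (port 11434 by default), so prompts and source code travel no further than localhost. A minimal sketch, assuming a locally pulled model (the model name here is purely illustrative, not a description of our actual environment):

```python
import json
from urllib import request

# Ollama's default local endpoint; nothing leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Assemble a non-streaming generate request for the local Ollama API."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate_locally(model: str, prompt: str) -> str:
    """Send the prompt to the local model and return its response text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = request.Request(OLLAMA_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Hypothetical model name for illustration only.
    print(generate_locally("codellama",
                           "Suggest a .NET 8 replacement for WebClient."))
```

Because the endpoint is local, the same orchestration and validation discipline described earlier applies unchanged; only the transport boundary moves inside the organisation's own infrastructure.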

5. Without a clear architecture definition, AI just helps you move confusion faster

This, in my view, is the biggest one.

Many businesses start modernisation with the question, “How fast can we upgrade the code?”

The better question is, “What is the product and architecture expectation after the upgrade?”

If that is not clear, AI may accelerate code movement, but it does not solve business ambiguity. In fact, it can amplify it.

Before any serious migration starts, there needs to be alignment on:

  • What the system is supposed to do

  • What should remain unchanged

  • What can be improved

  • What can be retired

  • What non-functional expectations now apply, including security, performance, scalability, and supportability

Once that architecture definition is in place, the migration becomes much more effective. AI can then be used as an accelerator inside a governed process, rather than as a substitute for thinking.

That is where experienced teams create the most value.

Final thought

AI can absolutely help reduce the timeline of a .NET modernisation exercise.

In the right hands, I believe it can cut a six-month effort down to a handful of weeks for selected parts of the journey. But that only happens when the work is properly framed, the architecture is defined upfront, the codebase is intelligently decomposed, and the execution environment is secure.

The biggest risk is not that AI writes bad code.

The biggest risk is that businesses mistake fast-looking output for a safe and complete modernisation strategy.

At Artrilogic, we help organisations approach .NET modernisation with the right mix of architecture thinking, delivery experience, and secure AI-assisted execution. That means faster outcomes, stronger control, and a path that is grounded in how enterprise systems really behave.