The Philosophy of Self-Driving Enterprise Software – Rethinking How We Build Software

At its core, the philosophy is simple and something I have always believed in and promoted: enterprise systems should support people, not vice versa. Technology should extend human capability, not consume it. For too long, enterprise software has demanded that people conform to its predefined structures, rooted in legacy workflows and an ever-growing amount of functionality. Over time, this has turned enterprise software from a valuable productivity tool into a burden best described as digital bureaucracy.

I believe in building context-sensitive, adaptive systems aligned with operational intent: systems that reduce cognitive load by providing relevant insight when it is needed to improve decision-making. This is not just about productivity. It’s about respecting people by giving them back their time, focus, and ability to do work that matters.

The philosophy of AI-supported enterprise systems is rooted in utility, ethics, and design that handles complexity while minimizing user burden by removing digital bureaucracy. Software should no longer be the center of attention. People should.

The De-Flowchartification of Enterprise Software

For decades, enterprise software has been defined by structure and control. We created systems to record, to enforce, to comply. They reflected the world as we thought it should be: organized, linear, deterministic.

We took paper-based processes, turned them into flowcharts, and then imprisoned users within the rigidity of those flowcharts. But the problem didn’t stop there.

Screens were built to reflect not just the workflows, but the underlying database tables themselves. In many enterprise systems today, you can look at a screen and nearly reverse-engineer the data model behind it. A tightly coupled triangle emerged: flowcharts dictating processes, databases shaping screens, and UIs enforcing structure and workflow.

This triad has constrained enterprise software into a narrow path, where the user’s experience is predefined by systems designed for data accuracy and compliance, not for real-world flexibility. The result is an inflexible, stepwise interaction model that reflects system architecture more than human need. The system we constructed stripped users of their agency, reducing them to transactional processing entities.

Over time, we slowly lost track of how users actually experience these systems. Enterprise software began to serve its own abstractions (tables, forms, and flows) rather than the people using it.

In the AI-supported era, deterministic structures may still exist in the background to safeguard data fidelity and transactional integrity, but the visible rigidity dissolves. What emerges instead is a more adaptive, event-driven interaction where action is based on context, not sequence. Users are guided, not constrained.

At the heart of this shift lies a subtle but profound transformation: the databasification of enterprise systems. What used to be an application-centric model is turning inside out, so that the structured data, not the software UI, becomes the primary foundation. We’re moving toward systems where data is truly decoupled from interfaces and behavior, allowing algorithms to interpret state, process, and intent more adaptively through data and meta-information.

This transformation requires more than just capturing data; it requires contextualizing it. The traditional record-based view is being replaced by semantic context, made possible through ontologies, state models, and defined relationships, the meta-information layer. It’s how we turn enterprise data from a passive ledger into a dynamic structure for inference, enabling automation systems to respond based on context, not just rules.

We’re entering a new era where algorithmic systems, powered by real-time data, probabilistic models, and embedded inference, are reshaping the very foundations of how we think about enterprise software.

This isn’t just a technological shift. It’s a foundational one.

From Control to Context

Traditional enterprise systems were built around control: predictable flows, predefined fields, and rigid, rules-based outcomes. Users had to conform to the system’s logic, navigating experiences shaped more by internal structures than real-world needs. In the process, they lost agency and were reduced to functionaries within systems designed more for control than contribution. What was meant to support work ended up dictating it. Rather than being helpful or enabling, these systems were constraining by design.

AI-supported systems can invert the old paradigm. Where traditional systems dictated steps, these systems adapt. Instead of requiring users to follow rigid flows, they respond to context, surfacing what’s needed when it’s needed. Rather than demanding conformity, they provide assistance. This puts the user at the center rather than the system: the system works for the user, not the other way around.

This marks a fundamental shift. Enterprise software moves from enforcing structure to enabling flow, from predefined workflows to dynamic events, from system-driven usage to purpose-driven relevance.

From UI to Inference

One of the core ideas I believe in, and always have, is that the best software becomes pervasive, not by disappearing but by reducing the need for users to engage with it actively. In traditional systems, every action required a corresponding screen. Each step was tightly coupled to a screen, a transaction, or a workflow. As a result, the user experience was designed around the software’s internal structure, rather than focusing on delivering a relevant outcome.

The ability to run machine learning algorithms at scale gives us a unique opportunity to change this, finally allowing us to move from interface-driven interactions to inference-driven assistance. Through predictive algorithms, the system “learns” to anticipate, recommend, and act, surfacing relevant information or completing routine steps without requiring the user to navigate menus or remember where to go next. The burden of knowing how to use the system shifts from the user to the system itself.
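
To make the idea concrete, here is a minimal sketch of inference-driven assistance. The entity names and the heuristic scores are hypothetical; in a real system the scores would come from trained models. The point is that candidate actions are ranked from the current work context instead of the user navigating to the right screen.

```python
from dataclasses import dataclass

@dataclass
class WorkContext:
    """A snapshot of what the system knows about the user's situation."""
    role: str
    open_exceptions: int
    overdue_approvals: int

@dataclass
class Action:
    name: str
    relevance: float

def rank_actions(ctx: WorkContext) -> list[Action]:
    """Score candidate actions from context instead of a fixed menu path.
    Simple heuristics stand in for the inference step here."""
    candidates = [
        Action("review_overdue_approvals", relevance=2.0 * ctx.overdue_approvals),
        Action("resolve_open_exceptions", relevance=1.5 * ctx.open_exceptions),
        Action("nothing_urgent", relevance=0.1),
    ]
    return sorted(candidates, key=lambda a: a.relevance, reverse=True)

if __name__ == "__main__":
    ctx = WorkContext(role="planner", open_exceptions=3, overdue_approvals=1)
    top = rank_actions(ctx)[0]
    print(f"Surface to the user: {top.name} (relevance {top.relevance:.1f})")
```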

This is the shift from invasive to ambient. Traditional enterprise software forced users into transactional UIs. Context-aware enterprise software operates in the background, surfacing what matters when it matters.

It’s not just about reducing steps. It’s about changing the mental model entirely, from software as something we operate to software that either acts on our behalf or provides relevant information and context when needed. It automates what can be automated and augments when human judgment is needed.

Data as Living Context

To make this possible, we need to think differently about data: not just as structured records but as living context. Traditional enterprise data has been fragmented across various software areas, often stripped of any meaning beyond the immediate functional use the system required of it. In an AI-first model, data must be elevated: organized not just by schema but, more importantly, by purpose, relationships, and state.

This is where the meta-information layer becomes critical. By introducing ontologies, state models, and defined relationships, we can construct algorithms that interpret data in context, not just what something is, but why it matters and how it connects to the broader flow of work. This is what enables predictive logic, relevance-based assistance, and automation that goes beyond coded scripting.
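
As an illustration only (the entity type, states, and relationships below are hypothetical, not a prescribed schema), here is a minimal sketch of how a plain record can be wrapped in a meta-information layer so that an algorithm can reason over state and relationships rather than raw fields:

```python
# The raw record stays as-is; ontology type, state model, and relationships give it context.
order_record = {"id": "SO-1042", "amount": 18_500, "currency": "EUR"}

meta = {
    "ontology_type": "SalesOrder",                  # what it is
    "state": "awaiting_credit_check",               # where it is in its lifecycle
    "allowed_transitions": {
        "awaiting_credit_check": ["approved", "rejected"],
        "approved": ["shipped"],
    },
    "relationships": {                              # how it connects to other entities
        "customer": "CUST-77",
        "blocking": ["SHIP-5531"],                  # a shipment waiting on this order
    },
}

def next_actions(record, meta):
    """Derive relevant actions from state and relationships, not from a screen flow."""
    state = meta["state"]
    actions = [f"transition:{t}" for t in meta["allowed_transitions"].get(state, [])]
    if meta["relationships"].get("blocking"):
        actions.insert(0, "prioritize: downstream work is blocked")
    return actions

print(next_actions(order_record, meta))
```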

In AI-supported systems, the data model evolves from a passive record-keeping structure to a dynamic, semantic framework that reflects real-world complexity. The goal is to create a foundation that allows the system to draw connections, track transitions, and operate with context-driven logic.

Data is no longer just for reporting. It becomes the foundation for context-driven computation and algorithmic execution.

Enterprise Systems as Operational Infrastructure

The databasification of enterprise systems marks a fundamental shift. What used to be centered around coded business logic leading to predefined screens and database tables will increasingly be modeled through a combination of ontologies, state models, and process descriptions.

This shift allows algorithms to operate across data, state, and events without the user guiding every step. Instead of burying business logic in screens and interfaces, we make it accessible and available as computational tools that algorithms can use to adapt and optimize. Execution logic is derived in context, based on detected patterns and statistical correlations, not hardcoded instructions.
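
A minimal sketch of what this could look like, with hypothetical operation names: business operations are registered as callable tools, and a simple orchestrator selects them based on an incoming event rather than a hardcoded screen flow.

```python
from typing import Callable

# Registry of business operations exposed as tools, discoverable by an orchestrator.
TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a business operation under a stable name."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("reschedule_delivery")
def reschedule_delivery(order_id: str, days: int) -> str:
    return f"Delivery for {order_id} moved by {days} day(s)"

@tool("notify_customer")
def notify_customer(order_id: str) -> str:
    return f"Customer notified about {order_id}"

def handle_event(event: dict) -> list[str]:
    """Greatly simplified orchestration: tools are chosen from the event, not a fixed flow."""
    results = []
    if event.get("type") == "supplier_delay":
        results.append(TOOLS["reschedule_delivery"](event["order_id"], event["delay_days"]))
        results.append(TOOLS["notify_customer"](event["order_id"]))
    return results

print(handle_event({"type": "supplier_delay", "order_id": "SO-1042", "delay_days": 2}))
```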

Enterprise software has always shaped how organizations operate, but now, it’s beginning to influence how they adapt to real-time conditions. From constraints and exceptions to shifting priorities, systems are starting to guide action instead of just enforcing structure.

AI-supported systems affect how decisions are framed, priorities are surfaced, and operational knowledge is applied. In this sense, they become operational infrastructure, scaffolding for operations and improvement.

This is what I mean by databasification. Databases transitioned from terminal-based interaction to a supporting infrastructure. Similarly, enterprise software is evolving, no longer just the “system where you go to work,” but now the pervasive engine that records changes, interprets data, provides insights, and triggers actions, based on analysis, across the workplace.

This is where the real shift lies. We’re no longer just building tools. We’re enabling systemic responsiveness.

A Belief System, Not a Roadmap

What I’m trying to describe isn’t just a strategy. It’s a belief system, a philosophy for constructing systems that, through thoughtful application of technology, reduce complexity and increase clarity to help people do more of the work that matters.

I fundamentally believe:

  • That software should empower, not encumber.
  • That systems should adapt to circumstances, not require people to adjust to rigid flows.
  • That automation should decrease complexity and increase clarity.
  • That enterprise software can evolve from a back-office burden into a forward-looking engine of innovation.

We have the tools. We have the data. What we need now is the discipline to rethink the foundations.

Because the future isn’t just about building better features. It’s about building better systems of interaction.

The Enterprise Software Promise

An ethical commitment to building AI-supported systems that serve human purpose.

As creators of enterprise systems in the age of adaptive automation, let’s hold ourselves to the following principles:

1. Build systems that support people, not systems that people serve.
The highest obligation is to assist human agency and purpose, giving people time back to work on what matters.

2. Amplify human judgment, don’t override it.
Enterprise systems should support decision-making, not dictate it. Software should be assistive, not authoritative: tools that extend human capability without displacing responsibility or control.

3. Design for relevance, not just execution.
Surface relevant information in context when needed.

4. Automate only what should be automated.
Recognize the difference between repetitive operations and human judgment.

5. Respect the dignity of work.
Enhance creativity, collaboration, and insight, not replace them.

6. Treat data not as exhaust, but as actionable input.
Give data meaning through meta-information to create systems that can respond.

7. Respect time and cognitive load.
Prioritize relevance, reduce friction, and eliminate wasteful complexity. Build systems that work for people, not the other way around.

Thoughts on Rewrites (Estimation)

Estimating rewrites is hard, and developers frequently have a tendency to underestimate such a task: in their minds, they know exactly how the system should work, because they already built it once. This holds only if the rewrite is a 1-to-1 rewrite, meaning you are constructing a similar solution from an architectural, deployment, and functional perspective. But if that is the case, why even bother rewriting?

Moving from a single on-premise solution to a super-scalable, cloud-based solution requires a completely new architecture and, equally important, new deployment scenarios. The latter must factor in continuous deployment, so that functional updates can be shipped at a quicker pace than on-premise solutions allow, while ensuring that updates don’t break customer-specific configuration or functionality.

These elements, along with others, spill into the architecture, which needs to be more loosely coupled: a decoupled messaging strategy that allows functional instance clusters to be deployed independently, using a service-oriented approach such as a microservice architecture (MSA) and patterns like CQRS. For all applications there will always be some requirement for customer-specific functionality or configuration, which in a super-scalable solution shouldn’t be accomplished by code changes, but by providing facilities in the underlying platform for extending the solution by means of metadata, configuration, rules engines, or built-in scripting capabilities. For the latter, it is imperative that such a construct can be isolated within its own cluster, so that scripts cannot impact the base system.
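
As a rough sketch of the CQRS part of that picture (in-memory stand-ins below; a real deployment would use a message broker and separately deployed service clusters), commands mutate state by emitting events, while queries read from a separately maintained read model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CreateOrder:                 # a command: an intent to change state
    order_id: str
    amount: float

event_log: list = []               # stand-in for the message bus / event store
read_model: dict = {}              # denormalized view optimized for queries

def handle_create_order(cmd: CreateOrder) -> None:
    """Command handler: validate, then emit an event instead of updating the view directly."""
    if cmd.amount <= 0:
        raise ValueError("amount must be positive")
    event_log.append({"type": "OrderCreated", "order_id": cmd.order_id, "amount": cmd.amount})

def project_events() -> None:
    """Projection: rebuild the read model from events (normally triggered by the broker)."""
    for ev in event_log:
        if ev["type"] == "OrderCreated":
            read_model[ev["order_id"]] = {"amount": ev["amount"], "status": "created"}

def query_order(order_id: str) -> Optional[dict]:
    """Query side: reads never touch the write path."""
    return read_model.get(order_id)

handle_create_order(CreateOrder("SO-1042", 18_500.0))
project_events()
print(query_order("SO-1042"))
```

The point of the separation is that the write side and the read side can be scaled, deployed, and extended independently, which is exactly what the loosely coupled cluster model above requires.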

Another element of rewriting is the functional capabilities of the solution going forward. When faced with the opportunity (or challenge) of a rewrite, it is equally important to reconsider and rethink the functionality, both at the micro level (features) and at the macro level (solution). It is an opportunity to address previous shortcomings as well as to focus on new capabilities that may not even exist in the current offering.

The above leads to the rule of rewrites: DON’T. If required, rethink and build new, with a sharp focus on the value a new solution would create for the customer.

Focusing initially on a flexible architecture and internal processes that allow for continuous deployment will pave the way for providing functionality to customers earlier in the development phase than is possible in an on-premise model, since you control the update cycles and are not at the mercy of the customer’s ability or willingness to deploy updates. Moreover, it is crucial to align platform and functional development activities so that visible progress can be shown to stakeholders, building confidence that the project is on track. Having tangible deliverables throughout the development phase increases the chances of success significantly, not only because stakeholders tend to get nervous without visible progress, but also because you will be able to receive customer feedback on a continuous basis, allowing you to adjust or enhance features quickly and redeploy for feasibility testing.

The net is that a transition from a traditional on-premise solution to a cloud-based solution will, in most cases, take as much time as it took to develop the solution you are trying to cloud-enable. The question then becomes how to calculate the original effort. It is not straightforward to calculate the total effort in man-months invested in building a solution over a multi-year time frame, but it is a good mental exercise to try, as it will give you a better idea of the total scope. In most cases, people are surprised by how long it actually took to build the solution.

The next element of the top-level estimation process is somewhat controversial, as it builds on research into programmer productivity conducted by Boehm in 1995. Boehm found that a programmer working on a medium-complexity system effectively produces around 900 lines of code per month (LOC/pm). Note that this number is an average over the lifetime of a system: initially the number is higher, but as the complexity of the system increases, the LOC/pm decreases. The LOC/pm figure can be used as a sanity check against the total estimated effort from above.
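
As a back-of-the-envelope sketch, the sanity check is a single division. Only the roughly 900 LOC/pm average comes from the text above; the 450,000-line codebase is a made-up figure used purely for illustration.

```python
# Divide the size of the existing codebase by the ~900 LOC per man-month average
# and compare the result with the other estimates.
LOC_PER_MAN_MONTH = 900

def loc_sanity_check(total_loc: int, loc_per_month: int = LOC_PER_MAN_MONTH) -> float:
    """Return the implied effort in man-months for a codebase of total_loc lines."""
    return total_loc / loc_per_month

existing_codebase_loc = 450_000  # hypothetical size of the solution being rewritten
print(f"Implied effort: {loc_sanity_check(existing_codebase_loc):.0f} man-months")  # 500
```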

The third element of the top-level estimation is a team estimate, where the development team itself gives a high-level estimate.

Having these numbers will give you a rough idea of the total effort required, and some figures on which to base your reasoning when presenting the effort and investment required to stakeholders.

Example

To illustrate, I will use examples from two previous projects.

Case 1

The team had estimated a total effort of 324 man-months. The estimated historical effort came to 900 man-months. The LOC/pm method gave 920 man-months. In this particular case, the actual effort ended up being 870 man-months. Here the historical and LOC/pm estimates matched very well, but the team estimate was only 37% of the actual.

Case 2

In another project, the team estimate was 180 man-months. The estimated historical effort came to 800 man-months. The LOC/pm method gave 1300 man-months. Here the actual effort was 1080 man-months. In this situation, the team estimate was 16% of the actual.

In both of the above cases, the hardest part was not convincing stakeholders about the effort required, but getting the development teams to accept the initial estimate based on a combination of historical effort and LOC/pm. Personally, I find that intriguing, as it illustrates that developers are inherently optimistic when it comes to estimation, not only at the micro level (implementing specific features) but also at the macro level (top-level solution estimation).

Elephant Butt, Software Development, and Estimation

Okay, here’s an interesting question: what do an elephant butt, software development, and estimation have in common? At first thought, not much, but they actually have a lot in common, depending on how you look at it.

Imagine you are blindfolded and placed right behind an elephant, ignorant of the existence of such an animal. You remove the blindfold, blink a couple of times, and the only thing you can see is a huge grey butt! Wow, what the heck is this, you ask yourself. You slowly start moving around the grey butt in an effort to understand the nature of the grey mass you’re staring at. Getting to the side, it’s still somewhat unclear what you’re looking at: something very big and all grey. Unable to determine the nature of the grey mass, you step back. Suddenly you have enough distance to take in the whole creature within your field of vision. And you go: aha, that’s what the animal looks like.

So how does this tie into software development and estimates? More than one would initially believe. Any totally new development project is like the butt of the elephant. You look at the task you’ve been handed and wonder what the heck it is all about. You then start working on it, doing some research (moving around the elephant) and cranking out some code (stepping back from the elephant), and suddenly you go: aha, that’s what I’m going to build. Only at that point do you actually understand the task, realize its implications, and start getting an idea of the time involved in solving that specific development task.

So that’s all nice and good. You get a task, research it to understand it, and do some limited coding and prototyping until you go eureka! The problem is that you’ll typically be asked to estimate, or at least ballpark an estimate, upfront. Using the above analogy, that is the moment someone removes the blindfold and the only thing you can see is one gigantic grey butt. If this is a completely new development task, where you’ll need to research new areas, you have absolutely no clue about the implications of the task and can’t even remotely come up with a reliable guesstimate of how long it will take.

What to do? Well, the only approach I’ve found that works is to spell out up front that you’ll need 30-90 calendar days of pre-project research, fact-finding, and prototyping before offering up any estimates. Depending on the company you work in, you’ll find that some are completely fine with this and others aren’t. In the latter case, try using the elephant analogy. If that doesn’t work, well, take whatever estimate makes sense and add 60 days.

Upon going “aha!”, throw away everything you did. Don’t move any research or prototype code forward into the product; it is usually messy, since you were messing around trying to figure out what the heck you were supposed to develop.

The Future of Software Development

It’s becoming more and more clear that monolithic applications are going the way of the Dodo. With the general adoption of smartphones, tablet computers, and social network portals, users start expecting information to be available anytime, anywhere. Users simply don’t want to deal with booting up a desktop or laptop, logging into an application, and navigating to the right place to get the information. It’s time-consuming and inflexible. Users want seamless integration of essential data and information into their preferred social media sites and mobile devices.

What does this mean for the ISV that produces traditional applications? Well, if these ISVs don’t start reconsidering their development strategy, they risk facing the same fate as the Dodo. Already, companies without a clear social media and mobile strategy are considered dated by the younger generation of users. New tech-savvy users coming out of universities and colleges evaluate companies on what strategy they have, and on whether they get to work on cool stuff, or at least see the potential to do so. Hence, it becomes a huge challenge for ISVs to recruit new young talent; the better students in particular will prefer companies with a strategy that embraces mobile computing at the core.

And this is only the development side of it. Think about it: in the near future, the next generation of users will also become part of the decision-making process at the ISVs’ customers, making it a huge challenge for software vendors without sufficient presence in the mobile application market to sell their solutions. There is a huge risk, and in some situations it is already the reality, that a missing mobile strategy or inadequate integration with social media sites disqualifies a vendor in the initial phases of the buying process for new software systems. Personally, I believe this will become an even bigger problem in the near future.

What’s interesting is that most ISVs have the opportunity to provide genuinely interesting applications to their customer base, as they have years of data and experience in collecting it. They have a solid foundation for extending their offerings to include mobile and other lightweight applications that can access the data and present it in different portals: portals of the users’ choice.

ISVs should focus their development efforts more on expanding usage to casual users instead of power users. Power users will continue to use traditional clients on their desktop or laptop, as they need high-speed processing of huge amounts of data. The casual user of the future, however, doesn’t want to deal with these types of clients. They want immediate access to data on their preferred device.

Note that it’s not just about providing data and information, but also about having lightweight applications for handling processes. Users will increasingly look for applications that essentially do the work for them, so that they only need to validate that the proposed action is actually the right action. It’s like flying a plane: the pilots really don’t do much anymore; they monitor that the software does it right and intervene only if a unique situation arises that requires manual intervention. That’s how all software involving processes should work in the future.

For us as developers, this means that we also need to embrace these technologies and start extending our skill sets to include mobile computing as well as portal computing (like web parts). Just like the ISVs, we can’t keep relying on our existing knowledge of building n-tier or more traditional client/server solutions. We need to start thinking in terms of fully distributed computing and a multitude of diverse data sources. We need to change as well; otherwise, we risk the same fate as the ISVs: our skills will become inadequate for doing software development in the future.

#programming #light #apps