What You’re Getting Wrong About DevOps (and How To Make it Right)

DevOps is now the top software deployment strategy by a mile — 77% of organizations say they use the approach to roll out new software. This is good news because a DevOps approach is an important marker of business maturity, and its benefits (more on that later) are so tangible that most organizations don’t need to be convinced that DevOps is the future. 

The bad news is most organizations are getting DevOps wrong, which is probably why only 10% report that their organization is “very successful” at achieving rapid software development and deployment. My company started its DevOps journey in 2019 as we began to build out the next generation of our ERP software. We were committed to the strategy but made mistakes along the way. 

External blog post: click to continue reading.

Why You Should Think Beyond The Features Of Your Next Tech Purchase

What’s migrating core business systems to the cloud got to do with in-car Bluetooth technology? Enterprise leaders should consider an important analogy if they want to achieve their digital transformation goals sooner.

Rush To The Cloud

According to Gartner research, nearly two-thirds (65.9%) of spending on application software will be on cloud technologies in 2025, up from 57.7% this year (2022). Many enterprises are keen to switch to subscription models, scale more easily and sharpen business agility.

External blog post: click to continue reading.

Why DevOps Is The Next Step In Your Business Evolution (And How To Get It Right)

DevOps means that the running and maintenance of software is considered when it’s being built—an approach that’s slowly grown in popularity due to its many advantages.

Since Salesforce became the first major company to deploy enterprise software on a web browser in 1999—using cloud computing to deliver programs on demand to anyone with an internet connection—more organizations have followed suit. But moving to the cloud shone a spotlight on a whole host of issues developers had previously never had to worry about.

External blog post: click to continue reading.

Is It Time to Replace Your Legacy ERP System?

Enterprise resource planning (ERP) systems have been around since the 1990s, helping organizations manage and integrate business processes. However, as companies implement digital transformation strategies, it will be essential for the CIO/CTO to take a look at the status of the current ERP system, which at many companies is still an on-premise legacy system used to manage HR, finance, procurement, and other critical tasks.

External blog post: click to continue reading.

The low-no-code series – Unit4: 3 reasons low-code is over-hyped

I have worked in the IT industry for many years and sometimes it’s hard not to become a little weary of the hype around the latest trends. Low-code/no-code is one of those trends where hype can distract from a proper understanding of the true value of this approach to application development.

External blog post: click to continue reading.

Rethinking Traditional ERP Software: How To Make A Smooth Transition To The Cloud

As technology advances, more and more businesses are rethinking their legacy ERP systems and moving to the cloud. It’s a journey we’ve been on ourselves at Unit4, moving from a monolithic, self-contained architecture to a next-gen, cloud-native system that allows us to truly serve our customers and their people. This journey was years in the making, and we’ve learned a lot along the way—which I want to share now.

External blog post: click to continue reading.

10 Tactics For Successful Innovation

The Most Innovative Companies 2021 report from BCG reveals that successful innovators make innovation a priority, commit investment and talent to it, and have an innovation system to transform ideas into results.

The BCG report is a scientific study, but I’m pleased to see it accords reasonably well with my own experience of leading innovation in a technology business. Here are what I have found to be the essential ingredients of successful innovation.

External blog post: click to continue reading.

Thoughts on Rewrites (Estimation)

Estimating rewrites is hard, and developers tend to underestimate such a task because, in their minds, they know exactly how the system should work, having already built it once. This holds only if the rewrite is a 1-to-1 rewrite—meaning you are constructing a similar solution from an architectural, deployment, and functional perspective. But if that is the case, why bother rewriting at all?

Moving from an on-premise, single-instance solution to a highly scalable cloud-based solution requires a completely new architecture and, equally important, new deployment scenarios. The latter need to support continuous deployment, so that functional updates can ship at a quicker pace than is possible for on-premise solutions, while ensuring that updates don’t break customer-specific configuration or functionality.

These elements, along with others, spill into the architecture, which needs to be more loosely coupled: a decoupled messaging strategy that allows you to deploy functional instance clusters, a service-oriented approach such as microservice architecture (MSA), and patterns like CQRS. Every application has some requirement for customer-specific functionality or configuration, which in a highly scalable solution shouldn’t be accomplished through code changes but by providing facilities in the underlying platform for extending the solution by means of metadata, configuration, rules engines, or built-in scripting capabilities. For the latter, it is imperative that such a construct can be isolated within its own cluster, so that scripts cannot impact the base system.
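To make the decoupled messaging and CQRS idea concrete, here is a minimal sketch in Python. All names (`CreateInvoice`, `Bus`, `WriteModel`, `ReadModel`) are hypothetical illustrations, not part of any real product; an in-process bus stands in for a real message broker.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Write side: a command mutates state and emits an event.
@dataclass
class CreateInvoice:
    invoice_id: str
    amount: float

# Decoupled messaging: an in-process stand-in for a real message broker.
class Bus:
    def __init__(self) -> None:
        self.subscribers: List[Callable[[dict], None]] = []

    def publish(self, event: dict) -> None:
        for handler in self.subscribers:
            handler(event)

class WriteModel:
    def __init__(self, bus: Bus) -> None:
        self.invoices: Dict[str, float] = {}
        self.bus = bus

    def handle(self, cmd: CreateInvoice) -> None:
        self.invoices[cmd.invoice_id] = cmd.amount
        self.bus.publish({"type": "InvoiceCreated",
                          "id": cmd.invoice_id, "amount": cmd.amount})

# Read side: a denormalized view updated from events, queried independently.
class ReadModel:
    def __init__(self, bus: Bus) -> None:
        self.total_billed = 0.0
        bus.subscribers.append(self.apply)

    def apply(self, event: dict) -> None:
        if event["type"] == "InvoiceCreated":
            self.total_billed += event["amount"]

bus = Bus()
write = WriteModel(bus)
read = ReadModel(bus)
write.handle(CreateInvoice("INV-1", 100.0))
write.handle(CreateInvoice("INV-2", 250.0))
print(read.total_billed)  # 350.0 — queries never touch the write store
```

Because the read side only sees events, it can live in its own cluster and scale independently of the write side—the same isolation argument made above for customer scripting.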

Another element of rewriting is the functional capability of the solution going forward. When faced with the opportunity (or challenge) of a rewrite, it is equally important to reconsider and rethink the functionality both at the micro-level (features) and at the macro-level (solution). It is an opportunity to address previous shortcomings as well as to focus on new capabilities that may not even exist in the current offering.

The above leads to the rule of rewrites: DON’T. If a rewrite is required, rethink and build new, with a sharp focus on the value a new solution would create for the customer.

Focusing initially on a flexible architecture and internal processes that allow for continuous deployment will pave the way for delivering functionality to customers earlier in the development phase than an on-premise model allows, because you control the update cycles and are not at the mercy of the customer’s ability or willingness to deploy updates.

Moreover, it is crucial to align platform and functional development activities so that visible progress can be shown to stakeholders, instilling confidence that the project is on track. Tangible deliverables throughout the development phase increase the project’s chance of success significantly: not only do stakeholders tend to get nervous without visible progress, but, equally important, you will be able to receive customer feedback continuously, allowing you to adjust or enhance features quickly and redeploy for feasibility testing.

The net result is that a transition from a traditional on-premise solution to a cloud-based solution will, in most cases, take as long as it took to develop the solution you are trying to cloud-enable. The question then becomes how to calculate the original effort. It is not straightforward to work out the total effort in man-months invested in building a solution over a multi-year time frame, but it is a good mental exercise to try, as it will give you a better idea of the total scope. In most cases, people are surprised by how long it actually took to build the solution.
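One simple way to approximate that historical effort is to sum average team size per year times twelve. A quick sketch, with made-up team sizes purely for illustration:

```python
# Rough historical-effort estimate: average team size per year times 12 months.
# The team sizes below are illustrative values, not figures from this article.
team_size_by_year = {
    2015: 6,   # small founding team
    2016: 10,
    2017: 14,  # peak build-out
    2018: 14,
    2019: 12,  # maintenance-heavy year
}

historical_man_months = sum(size * 12 for size in team_size_by_year.values())
print(historical_man_months)  # 672 man-months over five years
```

Even this crude tally usually lands far above the team's gut-feel estimate, which is exactly the point of the exercise.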

The next element of the top-level estimation process is somewhat controversial, as it builds on research into programmer productivity conducted by Boehm in 1995. Boehm found that in a system of medium complexity, a programmer effectively produces 900 lines of code per month (LOC/pm). Note that this number is an average over the lifetime of a system: initially it is higher, but as the complexity of the system increases, the LOC/pm decreases. The LOC/pm figure can be used as a sanity check against the estimated historical effort from above.

The third element of the top-level estimation is a team estimate, where the development team itself gives a high-level estimate.

Having these three numbers will give you a rough idea of the total effort required, and a basis for your reasoning when presenting the effort and investment required to stakeholders.

Example

To illustrate, I will use examples from two previous projects.

Case 1

The team had estimated a total effort of 324 man-months. The estimated historical effort came to 900 man-months, and the LOC/pm method gave 920 man-months. In this particular case the actual effort ended up being 870 man-months. The historical and LOC/pm estimates matched very well, but the team estimate was only 37% of the actual.

Case 2

In another project the team estimate was 180 man-months. The estimated historical effort came to 800 man-months, and the LOC/pm method gave 1,300. Here the actual effort was 1,080 man-months, so the team estimate was only about 17% of the actual.
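The two cases can be tabulated and checked in a few lines (figures taken from the cases above; all units are man-months):

```python
# Compare the three top-level estimates against actual effort for both cases.
cases = {
    "Case 1": {"team": 324, "historical": 900, "loc_pm": 920, "actual": 870},
    "Case 2": {"team": 180, "historical": 800, "loc_pm": 1300, "actual": 1080},
}

for name, c in cases.items():
    ratio = c["team"] / c["actual"]
    print(f"{name}: team estimate was {ratio:.0%} of actual effort")
# Case 1: team estimate was 37% of actual effort
# Case 2: team estimate was 17% of actual effort
```

In both cases the team estimate undershoots by roughly a factor of three to six, while the historical and LOC/pm figures bracket the actual—which is why the combination of the two coarse methods is worth the effort.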

In both of the above cases, the hardest part was not convincing stakeholders of the effort required, but getting the development teams to accept the initial estimate based on a combination of historical effort and LOC/pm. Personally I find that intriguing, as it illustrates that developers are inherently optimistic when it comes to estimation—not only at the micro-level (implementing specific features), but also at the macro-level (top-level solution estimation).