Thoughts on Rewrites (Estimation)

Estimating rewrites is hard, and developers frequently have a tendency to underestimate such a task – in their mind, they know exactly how the system should work, since they already built it once. This holds only if the rewrite is a 1-to-1 rewrite, meaning you are constructing a similar solution from an architectural, deployment and functional perspective. But if that is the case, why even bother rewriting?

Moving from a single on-premise solution to a super-scalable cloud-based solution requires a completely new architecture and, equally important, new deployment scenarios. The latter need to factor in continuous deployment, so that functional updates can be shipped at a quicker pace than for on-premise solutions, while ensuring that updates don't break customer-specific configuration or functionality.

These elements, along with others, spill into the architecture, which needs to be more loosely coupled, using a decoupled messaging strategy that allows deploying functional instance clusters – that is, a service-oriented approach such as a microservice architecture (MSA), combined with patterns like CQRS. Every application will have some requirement for customer-specific functionality or configuration, which in a super-scalable solution shouldn't be accomplished through code changes, but by providing facilities in the underlying platform for extending the solution by means of metadata, configuration, rules engines or built-in scripting capabilities. For the latter, it is imperative that such scripts can be isolated within their own cluster, so they cannot impact the base system.
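
To make the command/query split concrete, here is a minimal, illustrative CQRS sketch in Python. It is only a sketch of the pattern, not a prescription: the in-memory bus stands in for a real message broker, and all the names (CreateOrder, OrderCreated and so on) are hypothetical.

```python
# Minimal CQRS sketch: commands mutate the write side and publish events;
# a separate read model is rebuilt purely from those events.
from dataclasses import dataclass

@dataclass
class CreateOrder:      # command: expresses an intent to change state
    order_id: str
    customer: str

@dataclass
class OrderCreated:     # event: records that the change happened
    order_id: str
    customer: str

class EventBus:
    """In-memory stand-in for a decoupled message broker."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, handler):
        self.subscribers.append(handler)
    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

class OrderCommandHandler:
    """Write side: validates commands and emits events."""
    def __init__(self, bus):
        self.bus = bus
        self.known_orders = set()
    def handle(self, cmd: CreateOrder):
        if cmd.order_id in self.known_orders:
            raise ValueError("duplicate order")
        self.known_orders.add(cmd.order_id)
        self.bus.publish(OrderCreated(cmd.order_id, cmd.customer))

class OrderReadModel:
    """Read side: a denormalized view, updated only via events."""
    def __init__(self, bus):
        self.orders_by_customer = {}
        bus.subscribe(self.apply)
    def apply(self, event):
        if isinstance(event, OrderCreated):
            self.orders_by_customer.setdefault(event.customer, []).append(event.order_id)

bus = EventBus()
commands = OrderCommandHandler(bus)
view = OrderReadModel(bus)
commands.handle(CreateOrder("o-1", "acme"))
print(view.orders_by_customer)  # {'acme': ['o-1']}
```

The point of the shape is that the write side and the read side share nothing but events, so each side can be deployed and scaled as its own cluster.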

Another element of rewriting is the functional capabilities of the solution going forward. When faced with the opportunity – or challenge – of a rewrite, it is equally important to reconsider and rethink the functionality, both at the micro-level (features) and at the macro-level (solution). It is an opportunity to address previous shortcomings as well as to focus on new capabilities that may not even exist in the current offering.

The above leads to the rule of rewrites: DON'T. If a rewrite seems required, rethink and build new instead, with a sharp focus on the value a new solution would create for the customer.

Focusing initially on a flexible architecture and on internal processes that allow for continuous deployment will pave the way for providing functionality to customers earlier in the development phase than is possible in an on-premise model, because you control the update cycles and are not at the mercy of the customer's ability or willingness to deploy updates. Moreover, it is crucial to align platform and functional development activities so that visible progress can be shown to stakeholders, instilling confidence that the project is on track. Tangible deliverables throughout the development phase increase the odds of success significantly: not only do stakeholders tend to get nervous without visible progress, but you will also be able to receive customer feedback continuously, allowing you to adjust or enhance features quickly and redeploy them for feasibility testing.

The net is that a transition from a traditional on-premise solution to a cloud-based solution will, in most cases, take as long as it took to develop the solution you are trying to cloud-enable. The question then becomes how to calculate the original effort. It is not straightforward to calculate the total effort in man-months invested in building a solution over a multi-year time frame, but it is a good mental exercise to try, as it will give you a better idea of the total scope. In most cases people are surprised by how long it actually took to build a solution.
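
One way to run that mental exercise is to reconstruct rough team sizes per phase and multiply them out. A minimal sketch; the phases and head counts below are made-up illustrative numbers, not from any real project:

```python
# Back-of-the-envelope reconstruction of historical effort in man-months.
# Each tuple is (average developers, duration in months) for one phase.
phases = [
    (4, 18),   # initial development
    (8, 30),   # growth phase
    (6, 24),   # maintenance plus new features
]
man_months = sum(devs * months for devs, months in phases)
print(man_months)  # 456
```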

The next element of the top-level estimation process is somewhat controversial, as it builds on research into programmer productivity conducted by Boehm in 1995. Boehm found that a programmer on a medium-complexity system effectively produces 900 lines of code per month (LOC/pm). Note that this number is an average over the lifetime of a system: initially the number is higher, but as the complexity of the system increases, the LOC/pm decreases. The LOC/pm figure can be used as a sanity check against the total estimated time from above.
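
The sanity check itself is a single division. A sketch, assuming a hypothetical code base of 800,000 lines:

```python
# Sanity check using the ~900 LOC per man-month figure quoted above.
total_loc = 800_000          # hypothetical code-base size, not a measurement
loc_per_month = 900          # Boehm's average for a medium-complexity system
print(round(total_loc / loc_per_month))  # ~889 man-months
```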

The third element of the top-level estimation is a team estimate, where the development team itself gives a high-level estimate.

Having these three numbers will give you a rough idea of the total effort required, and something concrete on which to base your reasoning when presenting the effort and investment required to stakeholders.
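
Putting the three numbers side by side is trivial but worth doing explicitly; a sketch using the numbers from Case 1 below:

```python
# Triangulating the three top-level estimates (Case 1 numbers, man-months).
estimates = {"team": 324, "historical": 900, "loc_pm": 920}
low, high = min(estimates.values()), max(estimates.values())
print(f"estimate range: {low}-{high} man-months")
# When one number is a far outlier (here the team estimate), treat it
# with suspicion rather than averaging it in.
```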

Examples

To illustrate, I will use examples from two previous projects.

Case 1

The team had estimated a total effort of 324 man-months. The estimated historical effort came to 900 man-months. The LOC/pm method gave 920 man-months. In this particular case the actual time ended up being 870 man-months. Here the historical and LOC/pm estimates matched very well, but the team estimate was only 37% of the actual.

Case 2

In another project the team estimate was 180 man-months. The estimated historical effort came to 800 man-months. The LOC/pm method gave 1,300 man-months. Here the actual time was 1,080 man-months. In this situation the team estimate was 16% of the actual.

In both of the above cases, the hardest part was not convincing stakeholders about the effort required, but getting the development teams to accept the initial estimate based on a combination of historical effort and LOC/pm. Personally, I find that intriguing, as it illustrates that developers are inherently optimistic when it comes to estimation – not only at the micro-level (implementing specific features) but also at the macro-level (top-level solution estimation).

Elephant Butt, Software Development and Estimation

Okay, here's an interesting question: what do an elephant butt, software development and estimation have in common? At first thought not much, but they actually have a lot in common – depending on how you look at it.

Imagine you are blindfolded and placed right behind an elephant, ignorant of the existence of such an animal. You remove the blindfold, blink a couple of times – and the only thing you can see is a huge grey butt! What the heck is this, you ask yourself. You slowly start moving around the grey butt, trying to understand the nature of the grey mass you're staring at. Getting to the side, it's still unclear what you're looking at: something very big and all grey. Unable to determine the nature of the grey mass, you step back. Suddenly you have enough distance to fit the whole creature within your field of vision, and you go: aha, that's what the animal looks like.

So how does this tie into software development and estimates? More than one would initially believe. Any totally new development project is like the butt of the elephant. You look at the task you've been handed and wonder what the heck it is all about. You then start working on the task, doing some research (moving around the elephant) and cranking out some code (stepping back from the elephant) – and suddenly you go: aha, that's what I'm going to build. Only at that point do you actually understand the task, realize its implications, and start getting an idea of the time involved in solving that specific development task.

So that's all nice and good. You get a task, research it to understand it, and do some limited coding and prototyping until you go eureka! The problem is that you'll typically be asked to estimate, or at least ballpark, the work upfront. In the above analogy, that's the moment someone removes the blindfold and the only thing you can see is one gigantic grey butt. If this is a completely new development task, where you'll need to research new areas, you have absolutely no clue about the implications of the task and can't even remotely come up with a reliable guesstimate of how long it'll take.

What to do? Well, the only approach I've found that works is to spell out upfront that you'll need 30-90 calendar days of pre-project research, fact-finding and prototyping before offering up any estimates. Depending on where you work, you'll find that some companies are completely fine with this; others aren't. In case of the latter, try using the elephant analogy. If that doesn't work, well – take whatever estimate makes sense and add 60 days.

Upon going "aha!", throw away everything you did. Don't move any research or prototype code forward into the product, as it is usually messy – after all, you were messing around trying to figure out what the heck you were supposed to develop.

The Future of Software Development

It's becoming more and more clear that monolithic applications are going the way of the Dodo. With the general adoption of smartphones, tablet computers and social network portals, users have started to expect that information is available anytime, anywhere. Users simply don't want to deal with booting up a desktop or laptop, logging into an application, and navigating to the right place to get the information. It's time-consuming and inflexible. Users want seamless integration of essential data and information into their preferred social media sites and mobile devices.

What does this mean for the ISVs that produce traditional applications? Well, if these ISVs don't start reconsidering their development strategy, they risk facing the same fate as the Dodo. Already today, companies without a clear social media and mobile strategy are considered dated by the younger generation of users. New tech-savvy users coming out of universities and colleges evaluate companies on their strategy, and on whether a company lets them work on cool stuff, or at least offers the potential to. Hence it becomes a huge challenge for ISVs to recruit young talent; the better students in particular will prefer companies with a strategy that embraces mobile computing at its core.

And this is only the development side of it. Think about it: in the near future, the next generation of users will also become part of the decision-making process at the ISVs' customers, making it a huge challenge for software vendors without sufficient presence in the mobile application market to sell their solutions. There is a huge risk – and in some situations it is already the reality – that a missing mobile strategy or inadequate integration with social media sites disqualifies a vendor in the initial phases of the buying process for new software systems. Personally, I believe this will become an even bigger problem in the near future.

What's interesting is that most ISVs have the opportunity to provide genuinely interesting applications to their customer base, as they have years of data and experience in collecting it. They have a solid foundation for extending their offerings to include mobile and other lightweight applications that access that data and present it in different portals – portals of the users' choice.

ISVs should focus their development efforts more on expanding usage to casual users rather than power users. Power users will continue to use traditional clients on their desktop or laptop, as they need high-speed processing of huge amounts of data. The casual user of the future, however, doesn't want to deal with these types of clients; they want immediate access to data on their preferred device.

Note that it's not just about providing data and information, but also about having lightweight applications for handling processes. Users will increasingly look for applications that essentially do the work for them, so that the user only needs to validate that the proposed action is actually the right one. It's like flying a plane: the pilots really don't do much anymore; they monitor that the software is doing it right and only intervene if a unique situation arises that requires manual intervention. That's how all process-oriented software should work in the future.
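
In code, that "software proposes, user confirms" pattern is a simple loop. A minimal sketch; propose_action and the invoice shape are hypothetical stand-ins for a real system's automation:

```python
# "Software proposes, user confirms": automation drafts the action,
# a human (or a policy) only validates or escalates it.
def propose_action(invoice):
    # Hypothetical automation: a real system would apply business
    # rules, history or a model to draft the action.
    return {"action": "approve_payment", "amount": invoice["amount"]}

def run_process(invoice, confirm):
    proposal = propose_action(invoice)
    if confirm(proposal):   # the user merely validates the proposal
        return f"executed {proposal['action']}"
    return "escalated for manual handling"

# Auto-confirm small amounts; anything large goes to a human.
print(run_process({"amount": 120.0}, confirm=lambda p: p["amount"] < 500))
```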

For us as developers, this means we also need to embrace these technologies and start extending our skill sets to include mobile computing as well as portal computing (like web parts). Just like the ISVs, we can't keep relying on our existing knowledge of building n-tier or more traditional client/server solutions. We need to start thinking in terms of fully distributed computing with multiple, diverse data sources. We need to change as well; otherwise we risk the same fate as the ISVs – that our skills become inadequate for doing software development in the future.

#programming #light #apps