Four megatrends that will reshape what software can do for us

Four major enterprise management trends are maturing simultaneously, and their convergence has only begun to transform the way we conduct business. They are big data, mobile access, user experience, and computer intelligence.

Individually, each of these breakthroughs is a game changer. Together they will combine into a new, vast and complex enterprise software landscape. Mastering it will require a new type of software capable of autonomously emulating how people perform tasks and make decisions. By operating in the background on a massive scale, it will free employees from mundane tasks and empower them to spend their time on high-level decision-making and oversight. It is called self-driving software.

(The full post continues in an external blog post.)

Self-driving enterprise software

By using advanced technologies such as pattern recognition, machine learning and computer-aided decision support, as well as new IoT devices, software can make automated decisions on behalf of the user, or provide guidance that allows users to make intelligent decisions based on recognized patterns. This is achieved through self-learning software that infers actions from previous or similar scenarios and uses them to pre-populate and self-drive.

Interaction with the software is minimized so that people are only involved when human intelligence and experience are critical to resolving a business issue or to driving a business process in the right direction. If user interaction is required, a UI is provided in its simplest form to accommodate it, on the user's preferred device, intelligently adapted to the situation.

Incredible value can be derived from limiting the amount of manual input required of users so they can focus on serving customers. This is the future of business software.

What is self-driving enterprise software really about?

In a nutshell, self-driving enterprise software is about creating software with which the user does not need to interact, or with which interaction is at least minimized. The reality is that the majority of the interactions users have with their enterprise software are administrative. They are not about doing their job well or running their business, but simply about maintaining the mechanics of sending invoices, doing bookkeeping or filing expense reports.

All these mechanical interactions can be tracked, analyzed and fed into algorithms so the software can learn typical usage patterns within a specific context, whether that's procurement, invoicing, bookkeeping or expenses. Through pattern recognition and statistical analysis the software can “learn” typical user actions in specific situations and automate the expected action in the future.
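To make the learning step concrete, here is a minimal sketch of the frequency-based approach described above, assuming interactions are already captured as (context, field, value) tuples; the log entries, field names and threshold are all hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical interaction log: one (context, field, value) tuple per
# form field the user filled in.
interaction_log = [
    ("purchase_order", "vendor", "ACME Office Supplies"),
    ("purchase_order", "item", "A4 binders"),
    ("purchase_order", "vendor", "ACME Office Supplies"),
    ("purchase_order", "item", "A4 binders"),
    ("purchase_order", "vendor", "Initech"),
]

def learn_defaults(log, threshold=0.6):
    """Suggest a default per (context, field) when one value dominates."""
    counts = defaultdict(Counter)
    for context, field, value in log:
        counts[(context, field)][value] += 1
    defaults = {}
    for key, counter in counts.items():
        value, n = counter.most_common(1)[0]
        if n / sum(counter.values()) >= threshold:  # dominant pattern found
            defaults[key] = value
    return defaults

print(learn_defaults(interaction_log))
# {('purchase_order', 'vendor'): 'ACME Office Supplies',
#  ('purchase_order', 'item'): 'A4 binders'}
```

A real system would add recency weighting and per-user models, but the principle is the same: count what users do and automate what is predictable.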

Analyzing past interactions will allow us to construct more semi-intelligent solutions – solutions that dynamically alter behavior based on previous interactions, decisions and user actions. There are a number of different aspects to self-driving software.

Adaptive UI

The fundamental idea of adaptive UI is, as the name suggests, UI that intelligently adapts to the user based on previous interactions. In today's enterprise software, forms or screens typically contain many fields that need to be filled out by the user. Some fields are linked with other fields, and there are even hard dependencies, meaning that if a certain field has a specific value, others need to be populated exactly right for the system to accept the data as a whole. For a casual user simply entering a purchase order, it can be a horrifying experience – regardless of how nice the UI looks. The adaptive UI records everything the user is doing, when he or she is doing it and in what order. If a user always populates a specific set of fields with the same values, it can do that automatically. Even better, if some fields are always left untouched, they can be removed dynamically from the UI to make it less ‘noisy’ or confusing. By removing unused fields and populating everything automatically based on previous interactions, the system can complete the task on the user's behalf. For example, if someone orders A4 binders on every second Monday from the same vendor, the system might as well just do it.
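As a minimal sketch of the field classification an adaptive UI could perform, assuming per-field usage statistics have already been recorded; the field names and the session cutoff are illustrative:

```python
def adapt_form(form_fields, usage_stats, sessions):
    """Classify form fields from recorded usage across past sessions.

    usage_stats maps field -> set of distinct values the user entered;
    an empty set means the field was never touched.
    """
    layout = {"prefill": {}, "show": [], "hide": []}
    for field in form_fields:
        values = usage_stats.get(field, set())
        if not values:
            layout["hide"].append(field)      # never used: drop from the UI
        elif len(values) == 1 and sessions > 3:
            layout["prefill"][field] = next(iter(values))  # always the same
        else:
            layout["show"].append(field)      # genuinely variable input
    return layout

stats = {"vendor": {"ACME"}, "quantity": {10, 25}, "internal_memo": set()}
print(adapt_form(["vendor", "quantity", "internal_memo"], stats, sessions=8))
# {'prefill': {'vendor': 'ACME'}, 'show': ['quantity'], 'hide': ['internal_memo']}
```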

Workflow

Business processes normally require user interaction for approvals. A manager needs to approve a request for a new laptop from one of their reports, for example. If the manager in question always approves requests within certain criteria, most commonly the price, the system can give that approval and notify the manager it's been done.
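A sketch of how such an auto-approval rule might look, assuming the approval history and price limit have already been mined from the manager's past decisions (all values here are made up):

```python
from dataclasses import dataclass

@dataclass
class Request:
    requester: str
    item: str
    price: float

def route(request, approval_history, price_limit=1500.0):
    """Auto-approve when history shows the manager always approves
    requests under the learned price limit; otherwise escalate."""
    always_approved = bool(approval_history) and all(
        approved for price, approved in approval_history if price <= price_limit
    )
    if request.price <= price_limit and always_approved:
        return "approved", f"Auto-approved {request.item} for {request.requester}"
    return "pending", f"Awaiting manual approval for {request.item}"

history = [(900.0, True), (1200.0, True), (1400.0, True)]  # (price, approved)
print(route(Request("j.doe", "laptop", 1100.0), history))
# ('approved', 'Auto-approved laptop for j.doe')
```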

Historical and Current Data

Everything I've discussed above is based on historical data; however, by aggregating historical and current data – like your present location – software has the ability to automatically populate travel expenses and field service work reports, and even suggest additional maintenance work to be done as part of a field service visit.

Some examples of where self-driving ERP will add significant value:

Expense Reports

Expense reporting is a great example of an administrative task that lends itself to self-driving software. In the future, systems will automatically create line items in an expense report based on the scan of a receipt. By extracting the address the system can gauge what the item is, and based on the time it can know whether it's breakfast, lunch or dinner – essentially populating the complete line. Rather than having to store the receipt, go into the system and enter expense type, date and amount, all the user has to do is photograph the receipt.
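A sketch of that last step, assuming the OCR extraction from the photograph has already happened; the field names and meal-time cutoffs are illustrative guesses:

```python
from datetime import datetime

def classify_meal(timestamp):
    """Guess the expense type from the time printed on the receipt."""
    if timestamp.hour < 11:
        return "breakfast"
    if timestamp.hour < 16:
        return "lunch"
    return "dinner"

def expense_line(receipt):
    """Turn already-extracted receipt fields into a complete expense line."""
    ts = datetime.fromisoformat(receipt["timestamp"])
    return {
        "date": ts.date().isoformat(),
        "type": classify_meal(ts),
        "merchant": receipt["merchant"],
        "amount": receipt["amount"],
    }

print(expense_line({"timestamp": "2016-03-14T12:42:00",
                    "merchant": "Cafe Centraal", "amount": 18.50}))
# {'date': '2016-03-14', 'type': 'lunch', 'merchant': 'Cafe Centraal', 'amount': 18.5}
```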

Field Service

Within field service, systems can schedule maintenance visits for equipment based on the previous maintenance of similar equipment at other customers. Also, if a visit is already planned, it may prove more economical to proactively service other equipment while already on-site.

The future is here

Previously, analyzing massive amounts of data was expensive due to the data storage and computational power required. However, with ever-decreasing prices for storage and CPU, running advanced algorithms becomes more accessible, enabling the inclusion of semi-intelligent agents in the vast majority of the functionality within enterprise software. Customers will benefit from being able to focus on running their businesses and serving their clients rather than the tedious mechanics of maintaining the ERP.

This is why we focus on developing enterprise software that lets customers concentrate on their business instead of on the software, by making it completely self-driving. We're already starting to deliver this in our software.

Patterns

Patterns are not a new buzzword. The use and definition of patterns took off in 1994 when the book “Design Patterns” by the Gang of Four was released. Since then, countless books have been published on the subject. Some of the most notable and most frequently mentioned are “Enterprise Integration Patterns”, “Patterns of Enterprise Application Architecture” and the lesser-known “Patterns in Java” and the similar “Patterns in Objective-C”. All good books and definitely worth spending time browsing through.

I say browsing through because, personally, I never found it especially worthwhile to read any book on patterns from start to end, mostly because I simply cannot remember all the patterns described after reading the book. When browsing through and reading the introductions to different patterns, you pick up the essence, and if a pattern relates to something you have worked on, you gain a standard terminology for describing to others what a specific area of the code accomplishes.

Over the years I have witnessed a number of projects where the engineering teams adopted what I prefer to call “Pattern Driven Development”. All major pieces of the code end up being structured around specific patterns, to the extent that you can actually recognize the code in the system from the books in which a particular pattern was found. Moreover, different engineers read different patterns books, which means you can find the same pattern implemented slightly differently in different sections of the system.

The latter, as anyone can imagine, leads to a fragmented and confusing code base, with non-cohesive implementations solving similar – sometimes even the same – problems. That contradicts the fundamental idea of patterns: providing a uniform understanding and terminology to use when discussing and addressing specific problems.

To me the real value of patterns is not the example code, but the problem a given pattern solves and the terminology, which constitutes a protocol between developers when discussing a specific problem to be solved. Simply copying the provided examples into your own code is, in my view, not a good plan.

Does multi-tenancy really matter anymore?

Software multi-tenancy refers to a software architecture in which a single instance of software runs on a server and serves multiple tenants. A tenant is a group of users who share common access with specific privileges to the software instance. With a multi-tenant architecture, a software application is designed to provide every tenant a dedicated share of the instance including its data, configuration, user management, tenant individual functionality, and non-functional properties.

(The full post continues in an external blog post.)

Thoughts on Rewrites (Estimation)

Estimating rewrites is hard, and developers frequently have a tendency to underestimate such a task, since, in their minds, they know exactly how the system should work, having already built it once. This holds only if the rewrite is a 1-to-1 rewrite – meaning you are constructing a similar solution from an architectural, deployment and functional perspective. But if that is the case, why even bother rewriting?

Moving from an on-premise, single-instance solution to a super-scalable cloud-based solution requires a completely new architecture and, equally important, new deployment scenarios. The latter need to factor in continuous deployment, making it possible to ship functional updates at a quicker pace than for on-premise solutions, while taking into consideration that updates shouldn't break customer-specific configuration or functionality.

These elements, along with others, spill into the architecture, which needs to be more loosely coupled, utilizing a decoupled messaging strategy that allows deploying functional instance clusters – for example by using a service-oriented approach such as a microservice architecture (MSA) and patterns like, say, CQRS. For any application there will always be some requirement for customer-specific functionality or configuration, which in a super-scalable solution shouldn't be accomplished by code changes, but by providing facilities in the underlying platform for extending the solution by means of metadata, configuration, rules engines or built-in scripting capabilities. For the latter, it is imperative that such a construct can be isolated within its own cluster to prevent scripts from impacting the base system.
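As a minimal sketch of the CQRS idea mentioned above: commands flow through a message channel and are projected into a read model that queries are served from, so the write and read sides can be deployed and scaled independently. The in-process queue and event names below are stand-ins for a real message broker:

```python
import queue

command_bus = queue.Queue()   # stand-in for a decoupled message broker
read_model = {}               # denormalized view served to queries

def handle_create_order(order_id, vendor):
    """Command side: validate and emit an event describing the change."""
    command_bus.put({"event": "OrderCreated", "id": order_id, "vendor": vendor})

def project_events():
    """Projection: consume events and update the read model asynchronously."""
    while not command_bus.empty():
        event = command_bus.get()
        if event["event"] == "OrderCreated":
            read_model[event["id"]] = {"vendor": event["vendor"], "status": "open"}

def get_order(order_id):
    """Query side: reads never touch the write path."""
    return read_model.get(order_id)

handle_create_order("PO-1001", "ACME")
project_events()
print(get_order("PO-1001"))  # {'vendor': 'ACME', 'status': 'open'}
```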

Another element of rewriting is the functional capabilities of the solution going forward. When faced with the opportunity (or challenge) of a rewrite, it is equally important to reconsider and rethink the functionality, both at the micro-level (features) and at the macro-level (solution). It is an opportunity to fix previous shortcomings as well as to focus on new capabilities that may not even exist in the current offering.

The above leads to the rule of rewrites: DON’T. If required, rethink and build new, with a sharp focus on the value creation a new solution would bring to the customer.

Focusing initially on constructing a flexible architecture and internal processes that allow for continuous deployment will pave the way for providing functionality to customers earlier in the development phase than can be accomplished in an on-premise model, as you control the update cycles and are not at the mercy of the customer's ability or willingness to deploy updates. Moreover, it is crucial to align platform and functional development activities so that visible progress can be shown to stakeholders, instilling confidence that the project is on track. Having tangible deliverables throughout the development phase will increase the success of the project significantly, not only because stakeholders tend to get nervous without visible progress, but equally because you will be able to receive customer feedback on a continuous basis, allowing you to adjust or enhance features quickly and redeploy for feasibility testing.

The net is that a transition from a traditional on-premise solution to a cloud-based solution will, in most cases, take as long as it took to develop the solution you are trying to cloud-enable. The question then becomes how to calculate the original effort. It is not straightforward to calculate the total effort in man-months invested in building a solution over a multi-year time frame, but it is a good mental exercise to try, as it will give you a better idea of the total scope. In most cases people are surprised by how long it actually took to build a solution.

The next element of the top-level estimation process is somewhat controversial, as it builds on research into programmer productivity conducted by Boehm in 1995. Boehm found that a programmer in a medium-complexity system effectively produces 900 lines of code per month (LOC/pm). Note that this number is an average over the lifetime of a system: initially the number is higher, but as the complexity of the system increases, the LOC/pm decreases. The LOC/pm figure can be used as a sanity check against the total estimated time from above.

The third element of the top-level estimation is a team estimate, where the development team itself gives a high-level estimate.

Having these numbers will give you a rough idea of the total effort required, and some figures on which you can base your reasoning when presenting the effort and investment required to stakeholders.
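As a small worked example of the triangulation, using the numbers from Case 1 below and Boehm's 900 LOC/pm average (the code-base size is back-calculated here purely for illustration):

```python
def triangulate(team_mm, historical_mm, total_loc, loc_per_month=900):
    """Combine the three top-level estimates discussed above.

    loc_per_month=900 is Boehm's average for a medium-complexity system.
    """
    estimates = {
        "team": team_mm,
        "historical": historical_mm,
        "loc_model": total_loc / loc_per_month,
    }
    spread = max(estimates.values()) / min(estimates.values())
    return estimates, spread

estimates, spread = triangulate(team_mm=324, historical_mm=900, total_loc=828_000)
print(estimates)  # {'team': 324, 'historical': 900, 'loc_model': 920.0}
print(f"largest/smallest: {spread:.1f}x")  # a large spread warrants scrutiny
```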

Example

To illustrate, I will use examples from two previous projects.

Case 1

The team had estimated a total effort of 324 man-months. The estimated historical effort came to 900 man-months. The LOC/pm calculation gave 920 man-months. In this particular case the actual effort ended up being 870 man-months. Here the historical and LOC/pm estimates matched very well, but the team estimate was only 37% of the actual.

Case 2

In another project the team estimate was 180 man-months. The estimated historical effort came to 800 man-months. The LOC/pm calculation gave 1,300 man-months. Here the actual effort was 1,080 man-months. In this situation the team estimate was roughly 17% of the actual.

In both of the above cases, the hardest part was not convincing stakeholders of the effort required, but getting the development teams to accept the initial estimate based on a combination of historical effort and LOC/pm. Personally I find that intriguing, as it illustrates that developers are inherently optimistic when it comes to estimation – not only at the micro-level, implementing specific features, but also at the macro-level, top-level solution estimation.

How to Predict the Next Disruptive Technology

On a very lazy Saturday I was pondering what could eventually be the next great disruptor in technology, one that I could latch onto and be among the first movers. But how would anyone be able to predict that? One of nature's hard questions, I guess. I quickly dismissed the idea of dreaming something up myself in the hope that whatever irrelevant thought I came up with would end up taking over the world, and commenced a small research project looking at recent disruptive innovations in the technology space to see if I could spot a trend.

As a starting point I listed the technologies or shifts that have had the most profound impact on our lives and also caused disruption to existing businesses. The first game changer was the rise of the personal computer – around 1970; exactly when depends on the definition of the PC – paving the road for affordable computers. The second was the internet, on which web browsers (1990) were based; personally, the biggest game changer here was the web browser and the ability to move from client/server to hosted solutions. Third was the iPhone (2007), which revolutionized mobile phones. The fourth is cloud computing; it is hard to determine exactly when cloud computing started getting real traction, but a suggestion would be 2006, when Amazon launched Amazon Web Services.

Listing the above four disruptive technologies wasn't hard. But how would anyone have determined whether or not any of these would in fact take off and become truly disruptive technologies and major game changers? Well, after doing some research on the internet using my web browser, I found a number of quotes relating to each of the above.

Personal Computer

  • “But what…is it good for?” — Engineer at the Advanced Computing Systems Division of IBM, 1968, commenting on the microchip. I included this one because the microchip and the personal computer go hand in hand.
  • In 1977, Ken Olsen, CEO of DEC, said, “There is no reason for any individual to have a computer in his home”.

Internet/Web Browsers

  • In 1995, Robert Metcalfe, founder of 3Com and inventor of Ethernet, said, “I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse.”
  • In 1993, Bill Gates, CEO of Microsoft, said, “The Internet? We’re not interested in it.”
  • Newsweek wrote in 1995, “The Internet is just a fad.”

iPhone/Smartphones

  • In 2007, Steve Ballmer, CEO of Microsoft, said, “There’s no chance that the iPhone is going to get any significant market share. No chance.”
  • In 2007, David Platt, Author, said, “The iPhone is going to be a bigger marketing flop than Ishtar and Waterworld.”
  • In 2007, Seth Porges, TechCrunch columnist, said, “We predict the iPhone will bomb.”
  • In 2007, Al Ries, Marketing Consultant, wrote an article, “Why the iPhone will fail.”
  • In 2007, Brett Arends, Financial Writer, said, “The iPhone isn’t the future.”
  • In 2007, Todd Sullivan, Investment Advisor, said, “The iPhone: Apple’s first flop.”
  • In 2007, Mitchell Ashley, IT executive, said, “The iPhone is certain to fade into history.”
  • In 2007, John Dvorak, MarketWatch columnist, said, “Apple should pull the plug on the iPhone.”

Cloud Computing

There are tons of similar quotes, all dismissing, to a greater or lesser degree, the above-mentioned truly disruptive technologies. The question of how to predict the next great thing still persists, though – and how can the above help? Well, it seems that the likelihood of a new trend, technology or shift taking hold and developing into a disruptive innovation is proportional to the amount of bashing it receives from technology pundits and from companies whose business models risk being disrupted by emerging innovations.

Relating that to my current area, ERP, there have been a couple of disruptive shifts toward cloud-based solutions and lighter clients – responsive web design. The newcomers primarily focus on providing cloud-based solutions over on-premise installations. However, taking into consideration the recent developments in BYOD and the increasing number of youngsters using only mobile devices – in some cases not even owning a PC – it's very probable that the next disruption of the ERP space, one that can threaten the incumbents, is a combination of a cloud-based solution and mobile clients.

#erp #cloudcomputing #mobile #disruptive #predictions

Random Thought

As a software professional I keep being astonished by what users accept and put up with when it comes to faulty, buggy and unfathomably useless apps or systems. You download an app from an app store – or, if you're older than 20, you may even install it on your laptop or desktop. Either way, you start using it, and it works fine, or at least it seems to work fine. But then, for some incomprehensible reason, your device blows up. And guess what? We simply shrug and reboot the whole thing – and what's most astounding is that we don't even complain.

If you think about it, it's comparable to getting the milk out of the fridge for your cereal every day, except that at random the milk explodes. Wouldn't that upset you?

Running apps without connectivity

The other day on my daily commute, instead of reading the latest news on technology and other areas that interest me, as I usually do, I ended up spending most of the ride reflecting on mobile apps supporting a disconnected state. Honestly, I was rather annoyed with the situation, as I really enjoy being able to read my news and emails while riding the train.

So what was different this particular morning, since I usually read news and emails while on the train? Well, the difference was that normally I'm on an InterCity train that offers free WiFi, but on this day I got on an earlier train – the Sprinter – which doesn't. That meant I had to rely on “standard” mobile connectivity, which can be sketchy even in a country like the Netherlands, which has exceptionally good network coverage.

Being without reliable connectivity, I started testing a number of different news reader apps, like Zite, Flipboard, Dr. Dobb's and others, and found that they all require connectivity to function. None of the newsreaders I tried offered any solution for reading news without connectivity, which is somewhat peculiar and, I would argue, a major flaw. Most of these apps build personalized news feeds based on user preferences; it's not as if they're searching the net on demand. They could easily, on launch, download the news the user is interested in, store it on the device, and sync with the server whenever there's a connection.
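A minimal sketch of that sync-on-launch idea: fetch and cache the feed when online, fall back to the cache when not. The cache path and fetch function are hypothetical stand-ins for the app's real backend call:

```python
import json
import os

CACHE = os.path.expanduser("~/.newsreader_cache.json")  # hypothetical location

def fetch_articles(preferences):
    """Stand-in for the server call that builds the personalized feed."""
    raise ConnectionError("no connectivity")  # simulate the Sprinter train

def get_feed(preferences):
    """Serve fresh articles when online, cached ones when disconnected."""
    try:
        articles = fetch_articles(preferences)
        with open(CACHE, "w") as f:
            json.dump(articles, f)        # refresh the local cache
        return articles
    except (ConnectionError, OSError):
        if os.path.exists(CACHE):
            with open(CACHE) as f:
                return json.load(f)       # read offline from the last sync
        return []                         # first run and no connection

print(len(get_feed({"topics": ["technology"]})))  # 0 on a fresh device
```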

If you think about it, most of these apps – and any other app that relies on connectivity – are simply revamped web apps, and they don't really offer any noticeably better user experience than a mobile web app would. So why even bother building them as native apps, apart from being able to distribute them through the app stores? Honestly, these apps don't provide any additional advantages over traditional web apps.

When building native apps for devices like the iPad, it's crucial that there are additional benefits for the user over a mobile web app. One of those benefits is the ability to design and create apps that are capable of running without connectivity – disconnected. Remember that mobile devices have nearly all the capabilities of a modern laptop – storage, a file system and a database – so there's plenty of API power available to facilitate a great user experience, even when the device is disconnected.

When building mobile apps, you should rethink the functionality in terms of being disconnected. Look at the use case you're trying to solve, then rethink that use case in the light of being disconnected. Ask yourself what your users want to be able to do when there's no connection. In the example of reading news, I would assume that my users would like to be able to read news regardless of whether there's connectivity. Compare it with reading a newspaper, and provide similar functionality – since that's what the newsreader is trying to substitute: the good old newspaper. The current mainstream apps in the newsreader space clearly haven't succeeded in providing that richer user experience.

And to everyone out there arguing that there is such a thing as perpetual connectivity: I suggest you go on a road trip using different means of transportation, as that will make you understand that perpetual connectivity doesn't exist.

Will Windows 8 bring MS into the tablet/phone game?

Recently I had a discussion about the contents of the following article: http://gizmodo.com/5839665/windows-8-slate-hands-on-its-fantastic-but-dont-sell-your-ipad, where I posted the comment: “Eventually the tablet OS market will be like the desktop OS market. MS against Apple also on tablets.” That caused an interesting debate, so I thought I would write up why I believe this will be so.

I would say that in the tablet market Android isn't posing any real threat to anyone, except maybe themselves. Regardless of ice cream, soft ice or whatever sweet project name Google came up with this time – I guess it requires a lot of sweets to swallow Android – it won't bring them into the tablet market. If you've followed the stories on non-iPad tablets, they're being returned at an alarming rate. No vendor of Android tablets, none that I've been able to find anyway, is keen on announcing the number of units sold – normally something you would be very keen on doing if you were selling a lot of units. That should give you a hint about the traction of Android-based tablet devices.

One thing history has taught us is that big corporations become very innovative when their core business and primary revenue generator come under pressure and they feel their backs against the wall. That's the point at which they either bet the business (as Apple did) or reinvent themselves (as IBM did). I'm not sure MSFT is really against the wall yet, but they definitely feel the pressure and are probably concerned about their primary revenue generators, Office and Windows. Will they succeed? Only time can tell – but it's definitely naive to count MSFT out of the game on tablets and phones.

On the subject of Android on phones: there's not just one Android, there are countless Android implementations out there. Every phone and tablet vendor using Android has its own version. This fragmentation causes problems for apps developed for Android, and you can have apps that work on a device from Samsung but don't work on a device from HTC. People keep talking about the success of the Android platform, and there are plenty of charts showing that there are more Android phones than iPhones, but those cover all flavors of Android across all device vendors. If you compare phone manufacturers, Apple is the single biggest vendor of phones, followed by Samsung and Nokia – and note that the Samsung and Nokia figures include non-smartphones and non-Android phones. So how successful is Android really, when it comes down to it? And how do you measure whether something or someone is successful?

Google buying Motorola could backfire on Android, as the current device manufacturers will be concerned that Motorola will get an unfair market advantage when it comes to receiving new versions of Android. There has already been speculation about Samsung looking at WP7, buying webOS or promoting Bada. I bet you the Softies in Redmond and the employees at Apple were clapping their hands when Google announced the acquisition. As much as Apple and MSFT (probably) hate each other, they do agree on disliking Google, and together they make a somewhat scary opponent. MSFT is probably already working hard to convince Samsung and HTC that WP7 is a better bet than Android, and by acquiring Motorola, Google just made that argument easier.

There's been a lot of criticism of MSFT for not building a completely new OS for tablets, or at least using the WP7 OS from the phone. To some extent I share that concern, as Windows is a huge OS. The interesting thing, though, is that Apple is trying to merge Mac OS X and iOS into one OS – that work started with Lion. So you could argue that Apple and MSFT are doing similar things; however, Apple uses a convergence strategy – having two distinct OSes and converging them – whereas MSFT uses a duplication strategy – duplicating its OS across devices and slimming it down. At the end of the day, both Apple and MSFT are trying to do the same thing: having one OS to ease the development of apps. For Apple, that would allow users to run iOS-based apps on the Macintosh, allowing Apple to sell more computers. For MSFT, it will allow users to use the apps they already know on tablets. Same same, but different starting points and approaches.

How will this end? Well, only time can tell. But it's imperative, when we try to project or anticipate the future, that we stay unbiased and objective; otherwise we risk making critical strategic decisions based on personal preferences, which don't always turn out successfully.

Elephant butt, Software Development and Estimation

Okay, here's an interesting question: what do an elephant butt, software development and estimation have in common? At first thought, not much – but then again, they actually have a lot in common, depending on how you look at it.

Imagine you are blindfolded and placed right behind an elephant, ignorant of the existence of such an animal. You remove the blindfold, blink a couple of times – and the only thing you can see is a huge grey butt! Wow, what the heck is this, you ask yourself. You slowly start moving around the grey butt in an effort to understand the nature of the grey mass you're staring at. Getting to the side, it's still somewhat unclear what you're looking at: something very big and all grey. Unable to determine the nature of the grey mass, you step back. Suddenly you have enough distance to take in the whole creature within your field of vision. And you go: aha, that's what the animal looks like.

So how does this tie into software development and estimates? More than one would initially believe. Any totally new development project is like the butt of the elephant. You look at the task you've been handed and wonder what the heck it is all about. You then start working on it, doing some research (moving around the elephant), cranking out some code (stepping back from the elephant) – and suddenly you go: aha, that's what I'm going to build. That is the point at which you actually understand the task, realize its implications and start getting an idea of the time involved in solving that specific development task.

So that's all nice and good. You get a task, research it to understand it, do some limited coding – prototyping – until you go eureka! The problem is that you'll typically be asked to estimate, or at least ballpark an estimate, upfront. In the above analogy, that's the moment someone removes the blindfold and the only thing you can see is one gigantic grey butt. If this is a completely new development task, where you'll need to research new areas, you have absolutely no clue about the implications of the task and can't even remotely come up with a reliable guesstimate of how long it'll take.

What to do? Well, the only approach I've found that works is to spell out upfront that you'll need 30-90 calendar days of pre-project research, fact-finding and prototyping prior to offering up any estimates. Depending on the company you work in, you'll find that some companies are completely fine with this and others aren't. In the case of the latter, try using the elephant analogy. If that doesn't work, well – take whatever estimate makes sense and add 60 days.

Upon going “aha!”, throw away everything you did; don't move any research or prototype code forward into the product, as it is usually kind of messy – after all, you were messing around trying to figure out what the heck you were supposed to develop.