Patterns

Patterns are not a new buzzword. The use and definition of patterns go back to 1994, when the book “Design Patterns” was released by the Gang of Four. Since then, countless books have been published on the subject. Some of the more notable and most frequently mentioned are “Enterprise Integration Patterns”, “Patterns of Enterprise Application Architecture”, and the lesser-known “Patterns in Java” and the similar “Patterns in Objective-C”. All good books and definitely worth spending time browsing through.

When I say browsing through, I mean that personally I never found it especially worthwhile to read any book on patterns from start to end, mostly because I simply cannot remember all the patterns described after finishing the book. When browsing through and reading the introduction to the different patterns, you can pick up the essence, and if a pattern relates to something you have worked on, you gain a standard terminology useful for describing to others what a specific area of the code is accomplishing.

Over the years I have witnessed a number of projects where the engineering teams adopt what I prefer to call “Pattern Driven Development”. All major pieces of the code end up being structured around specific patterns, to the extent that you can actually recognize the code in the system from the book in which a particular pattern was found. Moreover, different engineers read different pattern books, which means you can find the same pattern implemented slightly differently in different sections of the system.

The latter, as anyone can envision, leads to a fragmented and confusing code base, with a non-cohesive implementation of code solving similar problems, sometimes even the same problem. This contradicts the fundamental idea of patterns: providing a uniform understanding and terminology to use when discussing and addressing specific problems.

To me the real value of patterns is not the example code, but the problem a given pattern solves and the terminology, which can be seen as constituting a protocol between developers when interacting about a specific problem to be solved. Simply copying the provided examples into your own code is, in my view, not a good plan.

Thoughts on Rewrites (Estimation)

Estimating rewrites is hard, and developers frequently have a tendency to underestimate such a task, since, in their minds, they know exactly how the system should work, as they already built it once. This holds only if the rewrite is a 1-to-1 rewrite, meaning you are constructing a similar solution from an architectural, deployment and functional perspective. But if that is the case, why even bother rewriting?

Moving from an on-premise solution to a super-scalable cloud-based solution requires a completely new architecture and, equally important, new deployment scenarios. The latter needs to factor in continuous deployment, so that functional updates can be shipped at a quicker pace than for on-premise solutions, while also ensuring that updates don’t break customer-specific configuration or functionality.

These elements, along with others, spill into the architecture, which needs to be more loosely coupled, utilizing a decoupled messaging strategy that allows deploying functional instance clusters, for example by using a service-oriented approach such as MSA (Micro Service Architecture) and patterns like CQRS. For all applications there will always be some requirement for customer-specific functionality or configuration, which in a super-scalable solution shouldn’t be accomplished by code changes, but by providing facilities in the underlying platform for extending the solution by means of metadata, configuration, rules engines or built-in scripting capabilities. For the latter, it is imperative that such a construct can be isolated within its own cluster to prevent scripts from impacting the base system.
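
To make the CQRS part a bit more concrete, here is a minimal sketch in TypeScript of keeping writes (commands) and reads (queries) on separate paths. All the names (PlaceOrderCommand, OrderSummaryQuery, the in-memory “bus”) are mine, purely for illustration, not taken from any specific product or framework.

```typescript
// Minimal CQRS sketch: writes go through a command handler that emits events,
// reads go through a separately maintained, denormalized view.

interface PlaceOrderCommand {
  orderId: string;
  customerId: string;
  lines: { sku: string; quantity: number }[];
}

// Write side: validates and publishes events; it never returns read models.
class OrderCommandHandler {
  constructor(private publish: (event: { type: string; orderId: string; lines: unknown[] }) => void) {}

  handle(cmd: PlaceOrderCommand): void {
    if (cmd.lines.length === 0) {
      throw new Error("An order must contain at least one line");
    }
    // In a real system this would go over a message broker, possibly in its own cluster.
    this.publish({ type: "OrderPlaced", orderId: cmd.orderId, lines: cmd.lines });
  }
}

// Read side: keeps a view optimized for queries, updated from events.
class OrderSummaryQuery {
  private summaries = new Map<string, { orderId: string; lineCount: number }>();

  apply(event: { type: string; orderId: string; lines: unknown[] }): void {
    if (event.type === "OrderPlaced") {
      this.summaries.set(event.orderId, { orderId: event.orderId, lineCount: event.lines.length });
    }
  }

  byId(orderId: string) {
    return this.summaries.get(orderId);
  }
}

// Wiring the two sides together through a trivial in-memory "bus".
const readSide = new OrderSummaryQuery();
const commands = new OrderCommandHandler((e) => readSide.apply(e));
commands.handle({ orderId: "o-1", customerId: "c-7", lines: [{ sku: "A", quantity: 2 }] });
console.log(readSide.byId("o-1")); // { orderId: 'o-1', lineCount: 1 }
```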

Another element of rewriting is the functional capabilities of the solution going forward. When faced with the opportunity, or challenge, of a rewrite, it is equally important to reconsider and rethink the functionality both at the micro level (features) and at the macro level (solution). It is an opportunity to address previous shortcomings as well as to focus on new capabilities that may not even exist in the current offering.

The above leads to the rule of rewrites: DON’T. If a rewrite really is required, rethink and build new, with a sharp focus on the value a new solution would create for the customer.

Focusing initially on a flexible architecture and on internal processes that allow for continuous deployment will pave the way for providing functionality to customers earlier in the development phase than can be accomplished in an on-premise model, since you control the update cycles and are not at the mercy of the customer’s ability or willingness to deploy updates. Moreover, it is crucial to align platform and functional development activities so that visible progress can be shown to stakeholders, instilling confidence that the project is on track. Having tangible deliverables throughout the development phase increases the chance of success significantly, not only because stakeholders tend to get nervous without visible progress, but equally because you will be able to receive customer feedback on a continuous basis, allowing you to adjust or enhance features quickly and redeploy for feasibility testing.

The net is that a transition from a traditional on-premise solution to a cloud-based solution will, in most cases, take as long as it took to develop the solution you are trying to cloud-enable. The question then becomes how to calculate the original effort. It is not straightforward to calculate the total effort in man-months invested in building a solution over a multi-year time frame, but it is a good mental exercise to try, as it will give you a better idea of the total scope. In most cases people are surprised by how long it actually took to build the solution.

The next element of the top-level estimation process is somewhat controversial, as it builds on research into programmer productivity conducted by Boehm in 1995. Boehm found that a programmer on a medium-complexity system effectively produces around 900 lines of code per month (LOC/pm). Note that this number is an average over the lifetime of a system: initially the number is higher, but as the complexity of the system increases, the LOC/pm decreases. The LOC/pm figure can be used as a sanity check against the total estimated time from above.

The third element of the top-level estimation is a team estimate, where the development team itself gives a high-level estimate.

Having these numbers will give you a rough idea of the total effort required, and figures on which you can base your reasoning when presenting the required effort and investment to stakeholders.
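
As a back-of-the-envelope illustration, here is a small TypeScript sketch of that three-way sanity check. The 900 LOC/pm is the Boehm average mentioned above; the input numbers loosely mirror Case 1 below, and the total LOC figure is an assumption of mine just to make the arithmetic work.

```typescript
// Rough top-level estimate: compare the team estimate, the reconstructed
// historical effort and a LOC-based figure. Inputs are illustrative.

const TEAM_ESTIMATE_PM = 324;      // man-months estimated by the team
const HISTORICAL_EFFORT_PM = 900;  // reconstructed effort for the original system
const TOTAL_LOC = 830_000;         // assumed size of the existing code base
const LOC_PER_PM = 900;            // Boehm's average for a medium-complexity system

const locBasedEstimate = TOTAL_LOC / LOC_PER_PM; // ~922 man-months

console.log(`Team estimate:      ${TEAM_ESTIMATE_PM} pm`);
console.log(`Historical effort:  ${HISTORICAL_EFFORT_PM} pm`);
console.log(`LOC-based estimate: ${locBasedEstimate.toFixed(0)} pm`);
console.log(`Team vs LOC ratio:  ${((TEAM_ESTIMATE_PM / locBasedEstimate) * 100).toFixed(0)}%`);
```

If the team estimate lands far below the other two, as in both cases below, that is usually a sign the team is being optimistic rather than the history being wrong.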

Example

To illustrate, I will use examples from two previous projects.

Case 1

The team had estimated a total effort of 324 man-months. The estimated historical effort came to 900 man-months. The LOC/pm calculation gave 920 man-months. In this particular case the actual effort ended up being 870 man-months. Here the historical and LOC/pm estimates matched very well, but the team estimate was only about 37% of the actual.

Case 2

In another project the team estimate was 180 man-months. The estimated historical effort came to 800 man-months. The LOC/pm calculation gave 1,300 man-months. Here the actual effort was 1,080 man-months. In this situation the team estimate was only about 17% of the actual.

In both of the above cases, the hardest part was not convincing stakeholders about the effort required, but getting the development teams to accept the initial estimate based on a combination of historical effort and LOC/pm. Personally I find that intriguing, as it illustrates that developers are inherently optimistic when it comes to estimation. Not only at the micro level, implementing specific features, but also at the macro level, top-level solution estimation.

How to Predict the Next Disruptive Technology

On a very lazy Saturday I was pondering what could eventually be the next great disruptor in technology, something I could latch onto to be among the first movers. But how would anyone be able to predict that? One of nature’s hard questions, I guess. I quickly dismissed the idea of dreaming something up myself in the hope that whatever irrelevant thought I came up with would end up taking over the world, and instead commenced a small research project, looking at recent disruptive innovations in the technology space to see if I could spot a trend.

As a starting point I listed the technologies or shifts that have had the most profound impact on our lives and also caused disruption in existing businesses. The first game changer was the rise of the personal computer, around 1970 (exactly when depends on the definition of a PC), paving the road for affordable computers. The second was the internet, on which web browsers (1990) were built; to me the biggest game changer here was the web browser and the ability to move from client/server to hosted solutions. Third was the iPhone (2007), which revolutionized mobile phones. The fourth is cloud computing; it is hard to determine exactly when cloud computing started getting real traction, but a suggestion would be 2006, when Amazon launched Amazon Web Services.

Listing the above four disruptive technologies wasn’t hard. But how would anyone have determined, at the time, whether any of these would in fact take off and become truly disruptive technologies and major game changers? Well, after doing some research on the internet using my web browser, I found a number of quotes relating to each of the above.

Personal Computer

  • “But what…is it good for?” — Engineer at the Advanced Computing Systems Division of IBM, 1968, commenting on the microchip. I included this one because the microchip and the personal computer go hand in hand.
  • In 1977, Ken Olsen, CEO of DEC, said, “There is no reason for any individual to have a computer in his home”.

Internet/Web Browsers

  • In 1995, Robert Metcalfe, founder of 3Com and inventor of Ethernet, said, “I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse.”
  • In 1993, Bill Gates, CEO of Microsoft, said, “The Internet? We’re not interested in it.”
  • Newsweek wrote in 1995, “The Internet is just a fad.”

iPhone/Smartphones

  • In 2007, Steve Ballmer, CEO of Microsoft, said, “There’s no chance that the iPhone is going to get any significant market share. No chance.”
  • In 2007, David Platt, Author, said, “The iPhone is going to be a bigger marketing flop than Ishtar and Waterworld.”
  • In 2007, Seth Porges, TechCrunch columnist, said, “We predict the iPhone will bomb.”
  • In 2007, Al Ries, Marketing Consultant, wrote an article, “Why the iPhone will fail.”
  • In 2007, Brett Arends, Financial Writer, said, “The iPhone isn’t the future.”
  • In 2007, Todd Sullivan, Investment Advisor, said, “The iPhone: Apple’s first flop.”
  • In 2007, Mitchell Ashley, IT executive, said, “The iPhone is certain to fade into history.”
  • In 2007, John Dvorak, MarketWatch columnist, said, “Apple should pull the plug on the iPhone.”

Cloud Computing

There are tons of related quotes, all to some degree dismissing the above-mentioned, truly disruptive technologies. The question of how to predict the next great thing still persists, though, and how can the above help? Well, it seems that the likelihood of a new trend, technology or shift taking hold and developing into a disruptive innovation is proportional to the amount of bashing it receives from technology pundits and from companies whose business models risk being disrupted by the emerging innovation.

Relating that to my current area, ERP, there have been a couple of disruptive shifts towards cloud-based solutions and lighter clients built with responsive web design. The newcomers primarily focus on providing cloud-based solutions rather than on-premise installations. However, taking into consideration the recent developments in BYOD and the increasing number of youngsters only using mobile devices, in some cases not even owning a PC, it is very probable that the next disruption of the ERP space, the one that can threaten the incumbents, will be a combination of a cloud-based solution and mobile clients.

#erp #cloudcomputing #mobile #disruptive #predictions

Random Thought

As a software professional I keep being astonished by what users accept and put up with when it comes to faulty, buggy and unfathomably useless apps or systems. You download an app from an app store, or if you’re older than 20, you may even install it on your laptop or desktop. Either way, you start using it, and it works fine, or at least it seems to work fine. But then, for some incomprehensible reason, your device “blows up”. And guess what? We simply shrug and reboot the whole thing, and what’s most astounding is that we don’t even complain.

If you think about it, it’s comparable to getting the milk out of the fridge for your cereal every day, except that every now and then the milk would explode. Wouldn’t that upset you?

Running apps without connectivity

The other day on my daily commute, where I usually read the latest news on technology and other areas that interest me, I ended up spending most of the ride reflecting on the subject of mobile apps supporting a disconnected state instead of reading. Honestly, I was rather annoyed with the situation, as I really enjoy being able to read my news and emails while riding the train.

So what was different this particular morning, since I usually read news and emails on the train? Well, the difference was that normally I’m on an InterCity train that offers free WiFi; this day, however, I got on an earlier train, the Sprinter, which doesn’t. That meant I had to rely on “standard” mobile connectivity, which can be sketchy even in a country like the Netherlands, which has exceptionally good network coverage.

Being without reliable connectivity, I started testing a number of different news reader apps such as Zite, Flipboard, Dr. Dobb’s and others, and found that they all require connectivity to function. None of the news readers I tried offered any way of reading news without a connection. That is somewhat peculiar and, I would argue, a major flaw. Most of these apps build personalized news based on the user’s preferences; it’s not as if they’re searching the net for news on the fly. They could easily, on launch, download the news the user is interested in, store it on the device, and sync with the server whenever there is a connection.
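
A minimal sketch of that download-and-cache approach, written in browser-style TypeScript for brevity (a native app would use its own storage APIs instead). The endpoint URL and cache key are assumptions of mine, not how any of the mentioned apps actually work.

```typescript
// Sketch: fetch personalized news when online, fall back to the local cache
// when offline. Endpoint and cache key are illustrative assumptions.

interface Article { id: string; title: string; body: string }

const CACHE_KEY = "cachedArticles";

async function fetchHeadlines(): Promise<Article[]> {
  const res = await fetch("https://example.com/api/personalized-news"); // hypothetical endpoint
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

async function loadArticles(): Promise<Article[]> {
  if (navigator.onLine) {
    try {
      const fresh = await fetchHeadlines();
      // Persist locally so the next (possibly offline) launch has content.
      localStorage.setItem(CACHE_KEY, JSON.stringify(fresh));
      return fresh;
    } catch {
      // Network claimed to be up but the request failed: fall through to cache.
    }
  }
  const cached = localStorage.getItem(CACHE_KEY);
  return cached ? (JSON.parse(cached) as Article[]) : [];
}

loadArticles().then((articles) =>
  console.log(`Showing ${articles.length} articles (offline-capable)`),
);
```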

If you think about it, most of these apps, and any other app that relies on connectivity, are simply revamped web apps, and they don’t really offer a noticeably better user experience than a mobile web app would. So why even bother building them as native apps, apart from being able to distribute them through the app stores? Honestly, these apps don’t provide any additional advantages over traditional web apps.

When building native apps for devices like the iPad, it’s crucial that there are additional benefits for the user over a mobile web app. One of these benefits is the ability to design and create apps that are capable of running without connectivity, that is, disconnected. Remember that mobile devices have nearly all the capabilities of a modern laptop: storage, a file system and a database. There is plenty of power and API available to facilitate a great user experience, even when the device is disconnected.

When building mobile apps, you should rethink the functionality in terms of being disconnected. Look at the use case you’re trying to solve, and then rethink it in the light of being disconnected. Ask yourself what your users want to be able to do when there is no connection. In the example of reading news, I would assume that my users would like to be able to read news regardless of whether there is connectivity. Compare it with reading a newspaper, and provide similar functionality, since that’s what the news reader is trying to substitute: the good old newspaper. The current mainstream apps in the news reader space clearly haven’t succeeded in providing that richer user experience.

And to everyone out there arguing that there is such a thing as perpetual connectivity, I suggest you go on a road trip using different means of transportation; that will make you understand that perpetual connectivity doesn’t exist.

Will Windows 8 bring MS into the tablet/phone game?

Recently I had a discussion about the contents of the following article: http://gizmodo.com/5839665/windows-8-slate-hands-on-its-fantastic-but-dont-sell-your-ipad, where I posted: “Eventually the tablet OS market will be like the desktop OS market. MS against Apple also on tablets.”, which caused an interesting debate. So I thought I would write down why I believe this will be so.

I would say that on the tablet market, Android isn’t posing any real threat to anything, except maybe itself. Regardless of Ice Cream Sandwich, soft ice or whatever sweet project name Google comes up with this time (I guess it takes a lot of sweets to swallow Android), it won’t bring them into the tablet market. If you’ve followed the stories on non-iPad tablets, they are being returned at an alarming rate. I haven’t been able to find a single vendor of Android tablets that is keen on publishing the number of units sold, which is normally something you would be very eager to do if you were selling a lot of units. That should give you a hint about the traction of Android-based tablet devices.

One thing history has taught us is that big corporations become very innovative when their core business and primary revenue generators come under pressure and they feel their backs against the wall. That’s the point at which they either bet the business (as Apple did) or reinvent themselves (as IBM did). I’m not sure MSFT is really against the wall yet, but they definitely feel the pressure and are probably concerned about their primary revenue generators, Office and Windows. Will they succeed? Only time can tell, but it’s definitely naive to count MSFT out of the game on tablets and phones.

On the subject of Android on phones, there isn’t just one Android; there are countless Android implementations out there. Every phone and tablet vendor using Android has its own version of Android. This fragmentation causes problems for apps developed for Android: you can have apps that work on a device from Samsung but don’t work on a device from HTC. People keep talking about the success of the Android platform, and there are plenty of charts showing that there are more Android phones than iPhones, but that number covers all flavors of Android across all device vendors. If you compare manufacturers, Apple is the single biggest vendor of smartphones, followed by Samsung and Nokia; note that the Samsung figure includes non-smartphones and non-Android phones, and the same goes for Nokia. So how successful is Android really, when it comes down to it? And how do you measure whether something or someone is successful?

Google buying Motorola could backfire on Android, as the current device manufacturers will be concerned that Motorola will get an unfair market advantage when it comes to receiving new versions of Android. There has already been speculation about Samsung looking at WP7 and/or buying webOS and/or promoting Bada. I bet the Softies in Redmond and the employees at Apple were clapping their hands when Google announced the acquisition. As much as Apple and MSFT (probably) hate each other, they do agree on disliking Google, and together they make a somewhat scary opponent. MSFT is probably already working hard to convince Samsung and HTC that WP7 is a better bet than Android, and by acquiring Motorola, Google just made that argument easier.

There has been a lot of criticism of MSFT for not building a completely new OS for tablets, or at least reusing the WP7 OS from the phone. To some extent I share that concern, as Windows is a huge OS. The interesting thing, though, is that Apple is trying to merge Mac OS X and iOS into one OS; that work started with Lion. So you could argue that Apple and MSFT are doing similar things. Apple is using a convergence strategy, having two distinct OSes and converging them, whereas MSFT is using a duplication strategy: duplicate your OS to multiple devices and slim it down. At the end of the day, both Apple and MSFT are trying to do the same thing: have one OS to ease the development of apps. For Apple that would allow users to run iOS-based apps on the Macintosh, allowing Apple to sell more computers. For MSFT it will allow users to use the apps they already know on tablets. Same same, but different starting points and approaches.

How will this end? Well, only time can tell. But it’s imperative, when we try to project or anticipate the future, that we stay unbiased and objective; otherwise we risk making critical strategic decisions based on personal preferences, which doesn’t always turn out successfully.

Elephant butt, Software Development and Estimation

Okay, here’s an interesting question: what do an elephant’s butt, software development and estimation have in common? At first thought, not much; but then again, they actually have a lot in common, depending on how you look at it.

Imagine you are blindfolded and placed right behind an elephant, ignorant of the existence of such an animal. You remove the blindfold, blink a couple of times, and the only thing you can see is a huge grey butt! Wow, what the heck is this, you ask yourself. You slowly start moving around the grey butt in an effort to understand the nature of the grey mass you’re staring at. You get to the side, and it’s still somewhat unclear what you’re looking at: something very big and all grey. Unable to determine the nature of the grey mass, you step back. Suddenly you have enough distance to get the whole creature within your field of vision. And you go: aha, so that’s what the animal looks like.

So how does this tie into software development and estimates? More than one would initially believe. Any totally new development project is like the butt of the elephant. You look at the task you’ve been handed and wonder what the heck it is all about. You then start working on it, doing some research (moving around the elephant), cranking out some code (stepping back from the elephant), and suddenly you go: aha, that’s what I’m going to build. Only at that point do you actually understand the task, realize its implications and start getting an idea of the time involved in solving that specific development task.

So far, so good. You get a task, you research it to understand it, you do some limited coding and prototyping until you go eureka! The problem is that typically you’ll be asked to estimate, or at least ballpark, the task upfront. Using the above analogy, that’s the moment someone removes the blindfold and the only thing you can see is one gigantic grey butt. If this is a completely new development task, where you’ll need to research new areas, you have absolutely no clue about the implications of the task and can’t even remotely come up with a reliable guesstimate of how long it will take.

What to do? Well, the only approach I’ve found that works is to spell out upfront that you’ll need 30-90 calendar days of pre-project research, fact finding and prototyping before offering any estimates. Depending on the company you work in, you’ll find that some companies are completely fine with this and others aren’t. In the case of the latter, try using the elephant analogy. If that doesn’t work, well, take whatever estimate makes sense and add 60 days.

Upon going “aha!”, throw away everything you did. Don’t carry any research or prototype code forward into the product, as it is usually rather messy, since you were messing around trying to figure out what the heck you were supposed to develop.

Smart Clients Versus Web Clients – What’s the Answer?

For a long period of time every ISV jumped on the browser bandwagon, creating web versions of their existing fat clients. This transition was driven by Google’s everything-web mantra together with the emerging cloud technologies. It was moreover an answer to requirements from CIOs all over the world for easier deployment when upgrading software internally: with a browser-based app you only need to update the server, and all of your users are automatically upgraded. This approach also allows companies to purchase inexpensive computer equipment, as the hardware literally only needs to be capable of running a browser, which is the whole idea behind Google’s ChromeOS.

Initially, moving away from fat clients to browser-based apps was painful, mostly due to the limited capabilities of the web browsers, which restricted how advanced a browser-based app could be. Over time the browser and the standards behind the web (HTML, CSS and JS) became more and more sophisticated, allowing developers to create richer web applications: Rich Internet Applications. With RIA, developers could offer near desktop-like experiences to their users, still within the framework of the browser. The browser became a platform for application execution instead of a renderer of web information, which was its original purpose.

Alongside the continuous enhancement of web technologies to provide richer user experiences, or experiences close to what could be offered with natively developed and deployed applications, technologies like MS No-Touch Deployment and Java Web Start emerged. These technologies solved the deployment headaches organizations had with traditional fat clients by automatically downloading the app to your computer and executing it. If there was an update, you simply updated the server version, and the clients would automatically download the updated version. No need for IT personnel to be physically present to install software on individual computers. This was essentially the birth of the smart client: a desktop client that gave you all the benefits of native software, like tight integration with the underlying OS and other components on the computer, offering features like drag and drop.

An advantage of web clients is that you can access your data from any device, as the data or documents don’t reside locally. On the flip side, that means you can’t access your data unless you’re connected to the internet. Smart clients allow you to store data locally, but then you can’t access your data unless you’re on the same device. With HTML5 and recent specifications this is somewhat resolved. HTML5 introduces the notion of web storage, the ability to store data locally in the browser, either as session data or as persistent local data. The latter allows you to store data locally and have it available even when you’re disconnected. The drawback is that if users clear their browser data, all local data is gone. There is another specification (http://dev.w3.org/html5/webdatabase/) defining an API for accessing a local SQL database.
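
A quick illustration of the two flavors of web storage mentioned above, using the standard sessionStorage and localStorage APIs; the keys and values are of course made up.

```typescript
// Session storage: survives reloads, but is gone when the tab/session ends.
sessionStorage.setItem("draftInvoiceId", "INV-2011-042");

// Local storage: persists across sessions, so the data is still there on the
// next launch, even offline - unless the user clears the browser data.
localStorage.setItem(
  "customerCache",
  JSON.stringify([{ id: 1, name: "Acme Corp" }]),
);

// Reading the persisted data back, e.g. while disconnected.
const cachedCustomers = localStorage.getItem("customerCache");
const customers = cachedCustomers ? JSON.parse(cachedCustomers) : [];
console.log(`Loaded ${customers.length} customers from local storage`);
```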

Overall, the HTML5 specification itself and the adjunct specifications all serve the same purpose: narrowing the gap between web apps and clients executed on the device, both in terms of user experience and in terms of capabilities such as storing data locally, one of the key features of a traditional as well as a smart client.

At WWDC 2011 Apple showed the way for allowing smart clients to seamlessly store data in the cloud, so that you can get to your data regardless of which device you’re using. They took No-Touch and Web Start further. With Apple’s new technologies you can develop applications that are downloadable from the Mac or iOS App Stores, and they get updated automatically when new versions are made available, same as with No-Touch and Web Start. These applications can use iCloud for storing data in the cloud, so that the same data is available on other devices, allowing you to seamlessly move between a Mac, iPad, iPhone or iPod. Users don’t need to think about where their data is; “it just works”, as Steve Jobs put it.

Will others adopt this approach? Personally, I wouldn’t be surprised if Microsoft does. That would allow them to keep their products as-is and at the same time get users onto Azure. They could of course go all web, as they partly have with Office as a web offering, but honestly, Microsoft isn’t a web company, just like Apple isn’t a web company. The only web company in this game is Google.

So, looking at the recent developments in web technologies and the recent announcements from Apple, we’re moving to a point where the gap between web clients and smart clients is narrowing. That said, the smart client, with its close integration with the underlying OS, still offers a superior user experience compared to a browser-hosted application.

So what will the application of the future be: a browser-based application, or a smart client that uses all the services available on the internet and at the same time offers native capabilities? Personally I believe we’ll see more native applications in the Apple world, as there are a lot of developers who previously worked on iOS using Apple’s tools and can now start thinking about building integrated applications across multiple devices, without the maintenance nightmare of having to update computers by manually installing updates. If Microsoft follows suit and adopts the Apple approach in Windows 8 and later on tablets, my take is that the road is paved towards more smart clients and fewer browser-based clients.

Is HTML5/JS really the Silver Bullet for Mobile Applications?

HTML5 and JavaScript seem to be getting a lot of attention and are being pushed as the next great set of technologies for write-once-deploy-many on mobile devices. If that crystallizes it would be great, wouldn’t it, like a nirvana for software developers. Imagine only having to develop once and, by sheer miracle, supporting n platforms. It brings back memories of similar pitches and discussions when Java was introduced.

I’m not concerned about HTML5/JS as technologies, but rather about the perception that those specific technologies (it could be any technology) are THE silver bullet that will let you build once and deploy many times without additional work. A similar pitch and hype surrounded Java initially, and that didn’t really crystallize. Even if HTML5 becomes the predominant approach for rendering UI on a variety of devices, the devices won’t share equal characteristics, which means there will still be work, sometimes significant work, to do for each specific device you want to target. For example, there’s a huge difference between the UX/UI on an iPad and an iPhone that won’t be automatically resolved by using HTML5, nor will it be automatically resolved using native tooling. A very good example of this: initially everybody believed that users would accept viewing web pages designed for desktop browsers on their mobile devices. It quickly turned out that companies not offering a specially tailored mobile version of their site lost visitors, and web techniques emerged for detecting whether you were on a mobile browser and redirecting you to a dedicated mobile page.
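
As a small illustration of that kind of device detection, here is a sketch using the standard matchMedia and userAgent browser APIs; the breakpoint and the redirect target (m.example.com) are assumptions for the example, not a recommendation of specific values.

```typescript
// Sketch: decide whether to serve the tailored mobile experience.
// The breakpoint and the mobile URL are illustrative; real sites tune these.

function isLikelyMobile(): boolean {
  const smallViewport = window.matchMedia("(max-width: 768px)").matches;
  const mobileUserAgent = /iPhone|iPad|Android/i.test(navigator.userAgent);
  return smallViewport || mobileUserAgent;
}

if (isLikelyMobile()) {
  // Redirect to a dedicated mobile page instead of squeezing the desktop
  // layout onto a small screen.
  window.location.href = "https://m.example.com/";
}
```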

Take the above into consideration when vendors argue that you can support a number of different devices with the same code base. It’s a partly true statement, as long as the screen doesn’t change too much. As soon as the screen changes, your HTML-based application will either sit nailed in the top-left corner or scale all elements to fit the new physical screen dimensions. Either way, there’s a considerable risk that your app ends up looking skewed, and to avoid that you’ll have to implement logic to accommodate the different screens. If you aren’t too concerned with the look, this is a non-issue, but you may lose users, or people will rate your app as unattractive.

There’s also the challenge of using the native capabilities of the devices. Vendors of cross-platform tooling offer different levels of API that map to the underlying OS, and recently they have started pushing the notion of plugins, as in PhoneGap. These plugins allow you to develop native code that you can call from JavaScript, which is nice, but it does raise the question of why this is needed. My take is that it’s needed simply because the APIs offered aren’t always extensive enough to support all possible uses of the underlying OS. But seriously, if you have to do that, what’s the point of using HTML5/JS in the first place, as you end up having plugin code for each device that you need to maintain?
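
For context, this is roughly what calling such a plugin looks like from the JavaScript side. The sketch uses the Cordova/PhoneGap exec bridge as I recall it, and the service and action names (“DeviceScanner”, “scan”) are hypothetical; the native counterpart would still have to be written and maintained per platform.

```typescript
// Calling a native plugin from JavaScript via the Cordova/PhoneGap bridge.
// 'DeviceScanner' and 'scan' are made-up names for illustration only.

declare const cordova: {
  exec(
    success: (result: unknown) => void,
    error: (err: unknown) => void,
    service: string,
    action: string,
    args: unknown[],
  ): void;
};

cordova.exec(
  (result) => console.log("Native scan result:", result),
  (err) => console.error("Plugin call failed:", err),
  "DeviceScanner", // maps to a native class you maintain per platform
  "scan",
  [{ timeoutMs: 5000 }],
);
```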

Almost every vendor of cross-platform tooling based on HTML5/JS states that the key driver for using these tools is the ability to repurpose the skill sets your web developers already have, HTML and JS, instead of having to learn new technologies. Simply because that’s cheaper than having to hire iOS developers, who are hard to come by and notoriously expensive. The funny thing is that on Android and RIM the language is Java, something most HTML/JS developers know as well, and on WP7 it’s C#, which all .NET developers know (with Visual Basic to be added).

On the native side, an idea could be to encapsulate anything that concerns business logic in a device-agnostic language like JavaScript or Lua, embed that in the native application, and only focus on UI/UX when writing native code. That gives you total access to the underlying OS and all its capabilities, and the ability to build whatever you want without the additional abstraction of HTML5/JS and wrappers. The abstracted code can be maintained centrally and extended without impacting the native code; for example, if validation rules change you can update the library without having to change a single line of code in the native applications.
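
A sketch of what such an embedded, device-agnostic business-logic module could look like; the validation rules and function names are made up for illustration, and the native shells (iOS, Android) would simply embed and evaluate this script and call into it with plain data.

```typescript
// Device-agnostic business logic, kept free of any UI or OS dependencies so
// a native shell can embed and evaluate it. Rules and names are illustrative.

export interface OrderLine { sku: string; quantity: number; unitPrice: number }

export function validateOrderLine(line: OrderLine): string[] {
  const errors: string[] = [];
  if (!line.sku) errors.push("SKU is required");
  if (line.quantity <= 0) errors.push("Quantity must be positive");
  if (line.unitPrice < 0) errors.push("Unit price cannot be negative");
  return errors;
}

export function orderTotal(lines: OrderLine[]): number {
  return lines.reduce((sum, l) => sum + l.quantity * l.unitPrice, 0);
}

// A native client only marshals plain data in and out; updating these rules
// never requires touching a single line of Objective-C or Java code.
```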

Long story short, my reluctance isn’t about the technologies, but about the perception that we can suddenly support every possible permutation of devices and OSes just by using HTML5/JS. That won’t be the case.

Native Tooling versus Non-native Tooling

Recently I’ve been following some interesting discussions on whether to develop native apps for mobile or to use cross-platform technologies. Let me elaborate on what I mean by native apps: a native app is an app built using the tooling provided by the platform vendor. For iOS devices that is Objective-C and Cocoa Touch; for Android and RIM it is Java.

Mostly the discussions seem to concentrate on two underlying subjects: 1) how to utilize existing skills and framework knowledge, and 2) how to support multiple mobile platforms with one code base. You can argue that these two subjects are intertwined and that cross-platform solutions try to solve both simultaneously, which is somewhat correct; however, #2 actually has another level as well, as there’s a whole range of MEAP vendors trying to push “code free” solutions requiring no or limited coding.

Let’s look at #1 and what seems to be the key driver behind wanting to use existing or similar tooling and technologies. It should be obvious: it mostly stems from a wish either to save money or to reuse skills. The former is primarily driven by organizations, whereas the latter comes from developers wanting to build apps without going through the hassle of learning a new set of tools and technologies (see my blog post on choosing the right tools). There’s also the latest-SDK problem. If you go native you’ll always have the latest and greatest SDKs at your disposal, whereas if you go with non-native tooling you’ll have to wait until the vendor gets its version ready. In the time the vendor spends doing that, your competitors using the native SDKs exploit the new features and leave you behind.

#2 is a repeat of previous attempts to find a silver bullet for supporting a number of platforms: build once, deploy many. So far nobody has succeeded in this field, at least not that I’m aware of. Back in the early nineties a substantial number of companies tried to build CASE tooling in an attempt to abstract development away from code and thereby design once, deploy many. Nobody uses CASE tools today. There were also a number of attempts at building cross-platform tooling for Wintel and Macintosh. None of these are around. Personally I wouldn’t hold my breath or bet my business on the MEAP vendors; they’ll be gone as soon as the mobile OS market starts converging towards two, maybe three, platforms.

Another thing about MEAP is that, in trying to target many platforms, the vendors end up going with the least common denominator. Such frameworks typically give you 80% of what you need; the last 20% you can’t do and have to drop into native code for anyway. And guess what: it’s the last 20% of finalizing the project that takes 80% of the time. So no real gain, only a lot of pain.

Another important and often forgotten subject is the user base of a given technology. The more users a technology has, the easier it is to get help, find solutions and find sample code. With less adoption you’ll be more on your own and can only rely on the vendor, who will most likely charge you for anything and everything possible.

If you haven’t already guessed, my stand is native everything: you get better apps, they look cooler, they feel like native apps, you don’t risk having to wait for a vendor to implement the latest and greatest features, you don’t risk a vendor suddenly going out of business, you have huge user communities that can help, and you won’t end up being charged unreasonable amounts of money for simple questions.