Does Low-Code/No-Code Spell The End For IT Professionals?

Low-code/no-code software is on the rise. Gartner, Inc. estimates that the worldwide low-code development technologies market will grow 23%, to $13.8 billion, in 2021. So, what is it? And why is everyone so interested?

“Low-code” and “no-code” refer to software that doesn’t require a qualified programmer to set up and change. Implementing this type of software still involves configuration, but far less actual coding.

External blog post: click through to continue reading.

How Big Businesses Can Avoid Being Fired By Their Customers

Almost all sectors are talking about the importance of moving at the speed of the customer and of customer-centricity, but hardly anyone achieves either. Often this is because they can only move as fast as their legacy systems allow, and because their view of the customer is rendered incoherent by fragmented data.

Historically, services and products have been provided to customers based upon the terms and operational capability of the provider. For everyone’s benefit, this rigidity is being replaced by a more flexible model with far greater emphasis on the customer.

External blog post: click through to continue reading.

Unit4 2019 Predictions: The Year Ahead for the ERP Market

As we transition from one year to another, it is customary to look back on the past year and begin to consider what might unfold in the year ahead. We are in a period of massive technology-driven disruption in the ERP market and this past year will undoubtedly be remembered as the year of artificial intelligence and chatbots. However, when it comes to developing technologies that truly change the way organizations operate, the hype of the simplistic call-and-response chatbots we saw this year will likely be just a small blip on a much longer journey towards true AI. Here are some thoughts from Unit4 on what the ERP market can expect in 2019.

External blog post: click through to continue reading.

ERP Has Reinvented Itself Again, And This Time It’s Going Designer

As early as the '90s, businesses and analysts alike were foretelling the death of the ERP system. More than 20 years later, however, ERP is still alive and well. Globalization, digitalization, the internet and a whole host of other technologies have made it virtually impossible for businesses to move away from ERP. There is simply too much critical data housed in the underlying databases for elimination to ever be a viable option.

External blog post: click through to continue reading.

Thoughts on Rewrites (Estimation)

Estimating rewrites is hard, and developers frequently have a tendency to underestimate such a task: in their minds they know exactly how the system should work, because they have already built it once. But this only holds if the rewrite is a 1-to-1 rewrite, meaning you are constructing a similar solution from an architectural, deployment and functional perspective. And if that is the case, why bother rewriting at all?

Moving from a single on-premise solution to a highly scalable cloud-based solution requires a completely new architecture and, equally important, new deployment scenarios. The latter need to factor in continuous deployment, so that functional updates can be shipped at a quicker pace than is possible for on-premise solutions, while also ensuring that updates don't break customer-specific configuration or functionality.

These elements, along with others, spill into the architecture, which needs to be more loosely coupled: a decoupled messaging strategy that allows functional instance clusters to be deployed independently, a service-oriented approach such as a microservice architecture (MSA), and patterns like CQRS. Every application will have some requirement for customer-specific functionality or configuration, and in a highly scalable solution this shouldn't be accomplished through code changes, but by providing facilities in the underlying platform for extending the solution by means of metadata, configuration, rules engines or built-in scripting capabilities. For the latter it is imperative that such scripting can be isolated in its own cluster, so that customer scripts cannot impact the base system.
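
As a minimal illustration of the command/query split mentioned above, here is a small Python sketch. It uses an in-process message bus as a stand-in for a real message broker, and all names (CreateOrder, MessageBus, the dict-backed stores) are invented for the example rather than taken from any particular product.

```python
# Minimal CQRS-style sketch: commands mutate the write model, queries read
# from a separately projected read model, and a bus decouples the two.
from dataclasses import dataclass
from typing import Callable, Dict, List, Type


@dataclass
class CreateOrder:            # command: expresses intent to change state
    order_id: str
    customer: str


@dataclass
class GetOrdersForCustomer:   # query: reads state, never mutates it
    customer: str


class MessageBus:
    """Routes commands and queries to decoupled handlers."""

    def __init__(self) -> None:
        self._handlers: Dict[Type, Callable] = {}

    def register(self, message_type: Type, handler: Callable) -> None:
        self._handlers[message_type] = handler

    def dispatch(self, message):
        return self._handlers[type(message)](message)


# Write side and read side keep separate models; here both are simple dicts.
write_store: Dict[str, dict] = {}
read_store: Dict[str, List[str]] = {}


def handle_create_order(cmd: CreateOrder) -> None:
    write_store[cmd.order_id] = {"customer": cmd.customer}
    read_store.setdefault(cmd.customer, []).append(cmd.order_id)  # project to read model


def handle_get_orders(query: GetOrdersForCustomer) -> List[str]:
    return read_store.get(query.customer, [])


bus = MessageBus()
bus.register(CreateOrder, handle_create_order)
bus.register(GetOrdersForCustomer, handle_get_orders)

bus.dispatch(CreateOrder("o-1", "acme"))
print(bus.dispatch(GetOrdersForCustomer("acme")))  # ['o-1']
```

In a real system the bus would be a message broker and the read model its own service, which is what allows the functional clusters to be deployed and scaled independently.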

Another element of rewriting is the functional capability of the solution going forward. When faced with the opportunity, or challenge, of a rewrite, it is equally important to rethink the functionality both at the micro level (features) and at the macro level (the solution as a whole). It is an opportunity to fix previous shortcomings as well as to focus on new capabilities that may not even exist in the current offering.

The above leads to the rule of rewrites: DON'T. If a rewrite is required, rethink and build new, with a sharp focus on the value a new solution will create for the customer.

Focusing from the start on a flexible architecture and on internal processes that allow for continuous deployment will let you provide functionality to customers earlier in the development phase than an on-premise model can, because you control the update cycles and are not at the mercy of the customer's ability or willingness to deploy updates. Moreover, it is crucial to align platform and functional development activities so that visible progress can be shown to stakeholders, building confidence that the project is on track. Tangible deliverables throughout the development phase increase the project's chances of success significantly, not only because stakeholders tend to get nervous without visible progress, but also because you can gather customer feedback continuously, adjust or enhance features quickly, and redeploy for feasibility testing.

The net result is that transitioning from a traditional on-premise solution to a cloud-based solution will, in most cases, take about as long as it took to develop the solution you are trying to cloud-enable. The question then becomes how to calculate the original effort. It is not straightforward to tally the total effort, in man-months, invested in building a solution over a multi-year time frame, but it is a good mental exercise, as it will give you a better idea of the total scope. In most cases people are surprised by how long it actually took to build the solution.

The next element of the top-level estimation process is somewhat controversial, as it builds on research into programmer productivity conducted by Boehm in 1995. Boehm found that a programmer working on a system of medium complexity effectively produces 900 lines of code per month (LOC/pm). Note that this number is an average over the lifetime of a system: initially it is higher, but as the complexity of the system grows, the LOC/pm decreases. The LOC/pm figure can be used as a sanity check against the historical estimate above.

The third element of the top-level estimation is a team estimate, where the development team itself gives a high-level estimate.

Having these numbers gives you a rough idea of the total effort required, and a basis for your reasoning when presenting the effort and investment to stakeholders.
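
As a rough sketch of how these three inputs can be lined up against each other, here is a small Python snippet. It assumes the 900 LOC/pm figure cited above, and the line count is invented purely so the numbers mirror Case 1 below.

```python
# Combine the three top-level estimation inputs: team estimate, historical
# effort, and a LOC/pm-based sanity check.

def loc_based_estimate(total_loc: int, loc_per_month: int = 900) -> float:
    """Man-months implied by code size at an average productivity of loc_per_month."""
    return total_loc / loc_per_month


def summarize(team_estimate: float, historical_estimate: float, total_loc: int) -> None:
    loc_estimate = loc_based_estimate(total_loc)
    print(f"Team estimate:       {team_estimate:6.0f} man-months")
    print(f"Historical estimate: {historical_estimate:6.0f} man-months")
    print(f"LOC/pm estimate:     {loc_estimate:6.0f} man-months")
    print(f"Team vs. LOC/pm:     {team_estimate / loc_estimate:.0%}")


# Roughly the Case 1 numbers below: a 324 man-month team estimate against a
# 900 man-month historical estimate; the ~828k LOC figure is made up to
# reproduce the 920 man-month LOC/pm estimate.
summarize(team_estimate=324, historical_estimate=900, total_loc=828_000)
```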

Example

To illustrate, I will use examples from two previous projects.

Case 1

The team had estimated a total effort of 324 man-months. The estimated historical effort came to 900 man-months, and the LOC/pm method gave 920 man-months. In this particular case the actual effort ended up being 870 man-months. The historical and LOC/pm estimates matched the outcome very well, but the team estimate was only 37% of the actual.

Case 2

In another project the team estimate was 180 man-months. The estimated historical effort came to 800 man-months, and LOC/pm gave 1,300. The actual effort was 1,080 man-months, so the team estimate was roughly 17% of the actual.

In both of the above cases, the hardest part was not convincing stakeholders of the effort required, but getting the development teams to accept the initial estimate based on a combination of historical effort and LOC/pm. Personally, I find that intriguing, as it illustrates that developers are inherently optimistic when it comes to estimation, not only at the micro level (implementing specific features) but also at the macro level (top-level solution estimation).

Random Thought

As a software professional I keep being astonished by what users accept and put up with when it comes to faulty, buggy and unfathomably useless apps or systems. You download an app from an app store or, if you're older than 20, you may even install it on your laptop or desktop. Either way, you start using it and it works fine, or at least it seems to. But then, for some incomprehensible reason, your device "blows up". And guess what? We simply shrug and reboot the whole thing, and what's most astounding is that we don't even complain.

If you think about it, it's comparable to getting the milk out of the fridge for your cereal every day, only for the milk to randomly explode. Wouldn't that upset you?

Running apps without connectivity

The other day on my commute, where I usually read the latest news on technology and other areas that interest me, I ended up spending most of the ride reflecting on mobile apps supporting a disconnected state instead of reading. Honestly, I was rather annoyed with the situation, as I really enjoy being able to read my news and emails while riding the train.

So what was different this particular morning, since I usually read news and emails while on the train? Normally I'm on an InterCity train that offers free Wi-Fi, but on this day I caught an earlier train, the Sprinter, which doesn't. That meant I had to rely on "standard" mobile connectivity, which can be sketchy even in a country like the Netherlands, which has exceptionally good network coverage.

Being without reliable connectivity, I started testing a number of different newsreader apps such as Zite, Flipboard, Dr. Dobb's and others, and found that they all require connectivity to function. None of the newsreaders I tried offered any way to read news without a connection, which is peculiar and, I would argue, a major flaw. Most of these apps build a personalized news feed based on user preferences; it's not as if they're searching the net for news on the fly. They could easily download the news the user is interested in at launch, store it on the device, and sync with the server whenever there's a connection.
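
A minimal sketch of that download-and-cache idea, written in Python for brevity rather than a native mobile stack; the feed URL and the fetch_feed helper are placeholders, not any real newsreader API.

```python
# Cache-then-sync pattern: refresh the local cache when online, fall back
# to the cached articles when there is no connectivity.
import json
import os
import urllib.request

CACHE_PATH = "news_cache.json"
FEED_URL = "https://example.com/api/feed"   # placeholder URL


def fetch_feed() -> list:
    """Pull the personalized feed from the server (requires connectivity)."""
    with urllib.request.urlopen(FEED_URL, timeout=5) as response:
        return json.load(response)


def load_articles() -> list:
    """Return fresh articles when online, cached articles when offline."""
    try:
        articles = fetch_feed()
        with open(CACHE_PATH, "w") as f:          # refresh the local cache
            json.dump(articles, f)
        return articles
    except OSError:                               # no connectivity: fall back to cache
        if os.path.exists(CACHE_PATH):
            with open(CACHE_PATH) as f:
                return json.load(f)
        return []                                 # first run and offline: nothing to show


for article in load_articles():
    print(article.get("title", "untitled"))
```

The same pattern maps directly onto a native app backed by local storage or an on-device database.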

If you think about it, most of these apps, and any other app that relies on connectivity, are simply revamped web apps, and they don't offer a noticeably better user experience than a mobile web app would. So why even bother building them as native apps, apart from being able to distribute them through the app stores? Honestly, these apps don't provide any additional advantages over traditional web apps.

When building native apps for devices like the iPad, it's crucial that they offer additional benefits for the user over mobile web apps. One of these benefits is the ability to design and create apps that are capable of running without connectivity: disconnected. Remember that mobile devices have nearly all the capabilities of a modern laptop, including storage, a file system and a database, so there is plenty of API power available to deliver a great user experience even when the device is disconnected.

When building mobile apps, you should rethink the functionality in terms of being disconnected. Look at the use case you're trying to solve, and then rethink it in the light of being disconnected. Ask yourself what your users want to do if there's no connection. In the news-reading example, I would assume my users want to be able to read news regardless of whether there is connectivity. Compare it with reading a newspaper and provide similar functionality, since the good old newspaper is exactly what the newsreader is trying to replace. The current mainstream apps in the newsreader space clearly haven't succeeded in providing that richer user experience.

And to everyone out there arguing that there is such a thing as perpetual connectivity, I suggest you go on a road trip using different means of transportation; that will make you understand that perpetual connectivity doesn't exist.

Native Tooling versus Non-native Tooling

Recently I've been following some interesting discussions on whether to develop native mobile apps or to use cross-platform technologies. Let me elaborate on what I mean by native apps. A native app is an app built using the tooling provided by the platform vendor. For iOS devices that would be Objective-C and Cocoa Touch; for Android and RIM that would be Java.

Mostly the discussions seem to concentrate on two underlying subjects: 1) how to utilize existing skills and framework knowledge, and 2) how to support multiple mobile platforms with one code base. You can argue that these two subjects are intertwined and that cross-platform solutions try to solve both simultaneously. That is somewhat correct; however, #2 has another level as well, as there is a whole range of MEAP vendors trying to push "code-free" solutions requiring little or no coding.

Let's look at #1 and what seems to be the key driver behind wanting to use existing or similar tooling and technologies. It should be obvious: mostly it stems from a wish either to save money or to reuse skills. The former is primarily driven by organizations, whereas the latter comes from developers who want to build apps but don't want the hassle of learning a new set of tools and technologies (see my post on choosing the right tools). There is also the latest-SDK problem. If you go native you will always have the latest and greatest SDKs at your disposal, whereas if you go non-native you have to wait until the vendor gets their version ready. During the time the vendor spends doing that, competitors using native SDKs exploit the new features and leave you behind.

#2 is a repeat of previous attempts to find a silver bullet for supporting a number of platforms: build once, deploy many. So far nobody has succeeded in this field, at least not that I'm aware of. Back in the early nineties a substantial number of companies tried to build CASE tooling in an attempt to abstract development away from code and thus design once, deploy many. Nobody uses CASE tools today. There were also a number of attempts to build cross-platform tooling for Wintel and Macintosh. None of these are around. Personally, I wouldn't hold my breath or bet my business on the MEAP vendors; they'll be gone as soon as the mobile OS market starts consolidating towards two, maybe three, platforms.

Another thing about MEAPs is that, in trying to target many platforms, they end up going with the lowest common denominator. Such frameworks typically give you 80% of what you need; the last 20% you can't do and have to drop into native code for anyway. And guess what: it's that last 20% of finalizing the project that takes 80% of the time. So no real gain (only a lot of pain).

Another important and often forgotten subject is the user base of a given technology. The more users a technology has, the easier it is to get help, find solutions and locate sample code. With lower adoption you are left more on your own and can only rely on the vendor, who will most likely charge you for anything and everything possible.

If you haven't already guessed, my stance is native everything: you get better apps, they look cooler, they feel like native apps, you don't risk having to wait for a vendor to implement the latest and greatest features, you don't risk a vendor suddenly going out of business, you have huge user communities that can help, and you won't end up being charged unreasonable amounts of money for simple questions.

Choosing the right tools

Over the course of the last couple of years, I've been working on a lot of different projects where we initially had to decide which development ecosystem to use. By development ecosystem I mean the sum of programming language, IDE and APIs (libraries) needed. In most cases the discussion turned into a debate based on individuals' personal preferences and what they were used to working with, rather than an unbiased discussion of the problem we were trying to solve.

Personally, I don't think the best way to select an implementation technology is to base the choice on personal preferences, although I concede that sometimes that will get you there faster. There are many factors to take into consideration before selecting a technology. One of the more important ones is the preferences of your customer base. Trying to shove a Java-based solution down the throat of a .NET-fixated customer base, or vice versa, isn't the most sensible thing to do, or at least not the smartest. It may be easier to choose technology based on the preferences of your customer base; from a selling perspective, doing so removes one obstacle from the sales or adoption cycle, whatever commercial model you're using.

The next problem with the personal-bias decision process (PBDP) is that the decision ends up reflecting the individual decision-makers' personal bias and technology comfort zone. Rather than evaluating tools and technology in the context of the task, I have predominantly seen people do it the other way around: they evaluate the task in the context of their technology and tooling comfort zone.

Personally, I find this approach somewhat disturbing, as there is a risk you end up using the wrong tools for the task, like a carpenter trying to use a chainsaw to create a replica of a Renaissance chair. What really throws me off is that we as software engineers are obligated to propose the best tooling for the task, to facilitate speed to market, customer adoption and fast feature turnaround.

We as engineers should always be looking for new and better ways to develop software, researching new technologies, tools and languages to keep track of what is going on in our field and to be able to recommend the appropriate tools and technology for a given task. We should see the opportunity to use new tools as a way to broaden our experience rather than be hampered by our comfort zone.

#programming #tools

The Future of Software Development

It's becoming more and more clear that monolithic applications are going the way of the dodo. With the general adoption of smartphones, tablet computers and social network portals, users have come to expect information to be available anytime, anywhere. Users simply don't want to deal with booting up a desktop or laptop, logging into an application and navigating to the right place to get the information; it's time-consuming and inflexible. Users want seamless integration of essential data and information into their preferred social media sites and mobile devices.

What does this mean for ISVs that produce traditional applications? If these ISVs don't start reconsidering their development strategy, they risk the same fate as the dodo. Already, companies without a clear social media and mobile strategy are considered dated by the younger generation of users. New tech-savvy users coming out of universities and colleges evaluate companies on their strategy and on whether the company lets them work on cool stuff, or at least offers the potential to. It therefore becomes a huge challenge for ISVs to recruit young talent; the better students in particular will prefer companies whose strategy embraces mobile computing at its core.

And this is only the development side of it. In the near future, the next generation of users will also become part of the decision-making process at the ISVs' customers, making it a huge challenge for software vendors without a sufficient presence in the mobile application market to sell their solutions. There is a huge risk, and in some situations it is already the reality, that a missing mobile strategy or inadequate integration with social media sites disqualifies a vendor in the initial phases of the buying process for new software systems. Personally, I believe this will become an even bigger problem in the near future.

What's interesting is that most ISVs have the opportunity to provide compelling applications to their customer base, as they have years of data and experience in collecting it. They have a solid foundation for extending their offerings to include mobile and other lightweight applications that can access that data and present it in different portals, portals of the users' choice.
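
As a sketch of how small that first step can be, here is a minimal read-only JSON endpoint over existing data, using only the Python standard library; the KPI figures and the /api/kpis route are invented for illustration.

```python
# Thin, read-only JSON endpoint over data an ISV already has, so mobile
# apps and portals can pull it without going through the traditional client.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the ISV's existing data store.
CUSTOMER_KPIS = {"open_orders": 42, "overdue_invoices": 3, "cash_position": 1_250_000}


class KpiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/kpis":
            body = json.dumps(CUSTOMER_KPIS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), KpiHandler).serve_forever()
```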

ISVs should focus their development efforts more on expanding usage to casual users rather than power users. Power users will continue to use traditional clients on their desktops or laptops, as they need high-speed processing of huge amounts of data. The casual user of the future, however, doesn't want to deal with these types of clients; they want immediate access to data on their preferred device.

Note that it's not just about providing data and information but also about having lightweight applications for handling processes. Users will increasingly look for applications that essentially do the work for them, where the user only needs to validate that the proposed action is the right one. It's like flying a plane: the pilots really don't do much anymore; they monitor that the software does it right and only intervene when a unique situation arises that requires manual intervention. That's how all process-oriented software should work in the future.
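
A tiny sketch of that propose-and-confirm idea, using an invented invoice-matching scenario; the thresholds and confidence values are illustrative only.

```python
# The system does the work and proposes an action; the user only validates it.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    confidence: float


def propose_payment_match(invoice_amount: float, payment_amount: float) -> ProposedAction:
    """Propose what to do with an incoming payment, with a confidence score."""
    if abs(invoice_amount - payment_amount) < 0.01:
        return ProposedAction("Match payment to invoice and close it", confidence=0.99)
    return ProposedAction("Partial match: apply payment and leave invoice open", confidence=0.7)


def run(invoice_amount: float, payment_amount: float) -> None:
    action = propose_payment_match(invoice_amount, payment_amount)
    # The user confirms; manual intervention is the exception, not the rule.
    answer = input(f"{action.description} (confidence {action.confidence:.0%}), approve? [y/n] ")
    print("Executed." if answer.lower() == "y" else "Escalated for manual handling.")


run(100.0, 100.0)
```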

For us as developers this means we also need to embrace these technologies and start extending our skill sets to include mobile computing as well as portal computing (such as web parts). Just like the ISVs, we can't keep relying on our existing knowledge of building n-tier or traditional client/server solutions. We need to start thinking in terms of fully distributed computing and numerous, diverse data sources. We need to change as well; otherwise we risk the same fate as the ISVs: our skills being inadequate for software development in the future.

#programming #light #apps