5 ways chatbots will change college

The first week on any college campus is hectic, but what if there was a digital assistant available 24/7 to help new students and their families navigate the college process?

With chatbots rapidly infiltrating every aspect of our personal lives, it’s no surprise that student technology is an area ripe for bot intervention. As both students and faculty embrace bots in their personal lives for things like checking the weather, ordering pizza, or finding a cab, it’s about time that campus tech caught up.

Creating the Foundation for Developing Bots in a SaaS World

Legacy software vendors that traditionally provide on-premise solutions still face a couple of architectural challenges. Overcoming them is what allows these vendors to deliver cloud-native applications, and it also lays the natural bedrock on which to construct bots, ultimately enabling a conversational user experience.

Cloud Architecture

Many legacy software vendors struggle to make a smooth transition from a monolithic architecture to a cloud-enabled, service-oriented architecture, because such a transition involves rewriting, which comes at a significant price. At the same time, ever-cheaper IaaS and PaaS services have eroded the old economic argument for re-architecting purely to make more efficient use of compute and storage resources.

Machine learning: The new way to combat expenses fraud?

Consider the following expenses claims: registration fees for a cancelled seminar, two separate claims for mileage when the employees travelled together, and a sandwich-and-coffee dinner claimed as the full per diem.

While it’s easy to believe that a few dishonest claims won’t hurt, expenses fraud can be costly for the individual business. Research conducted by the National Fraud Authority suggests that exaggerated expenses claims cost the British economy around £100 million annually; the private sector alone lost £80 million in 2013. Imagine if 20 per cent of your staff added 10 per cent to each mileage claim; the cumulative loss for the company would quickly become significant.
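The arithmetic behind that scenario is easy to sketch. The staff count, claim value and claim frequency below are illustrative assumptions, not figures from the research:

```python
def padded_mileage_loss(staff_count, dishonest_share, claim_value, padding, claims_per_year):
    """Annual loss when a share of staff pads each mileage claim by a given fraction."""
    dishonest_staff = staff_count * dishonest_share
    loss_per_claim = claim_value * padding
    return dishonest_staff * loss_per_claim * claims_per_year

# Illustrative: 500 employees, 20% of them padding each £50 claim by 10%,
# with 12 mileage claims per person per year.
annual_loss = padded_mileage_loss(500, 0.20, 50.0, 0.10, 12)
print(f"£{annual_loss:,.0f} per year")  # £6,000 per year
```

Even with modest assumptions, the padding compounds into a loss that is well worth detecting automatically.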

ERP and the A.I. Factor

Artificial Intelligence (AI) is here. Once a topic of conversation, news, and science fiction, AI has finally entered our technology landscape. We are only beginning to see the impact it will have on our businesses, our jobs, and our enterprise software, but we already know that impact is profound and growing.

Until very recently, advanced AI functionality was extremely expensive because the available tooling was limited and immature. Constructing the algorithms required highly skilled, very specialized employees who were out of reach for most software vendors. In addition, these new and complex algorithms demanded immense amounts of additional data storage and CPU-intensive computing power for pattern recognition.

All this has changed. Current ERP software will incorporate AI capabilities more and more, and here is why.

Today, all major cloud vendors offer both PaaS and IaaS services that specifically address these computing and storage constraints. Fierce competition between the major cloud players has driven down prices for both storage and computing power, effectively democratizing machine learning and AI capabilities and allowing software vendors to incorporate more complex algorithms that crunch ever-bigger datasets.

The result is ERP with AI for all, and it is leading to ever-more-advanced solutions. It has given birth to a whole new range of systems capable of making decisions based on historic data. Data collection is becoming pervasive, automatic and non-intrusive, instead of spotty, manual, and requiring high levels of interaction. Natural language input will soon decrease the need for manually intensive UIs, and computers will increasingly support decision making proactively.

We recently announced our new digital assistant, Wanda. Wanda and its functional agents epitomize the evolution to semi-intelligent and self-driving solutions. They liberate people from their tedious manual interactions with enterprise software and allow them to focus on running their business and serving their customers.

Should we fear AI?

Human beings all share an important trait – the ability to devise tooling that simplifies tasks and drives our species forward. Our tools liberate us from mundane day-to-day chores, so we can focus on more abstract challenges, solving more problems – problems we solve, ironically, by devising even more tools. This has made us the most successful and adaptable animal on the planet.

Every new technology has given rise to a backlash. Change is difficult, and many people prefer the status quo.

Despite resistance to change, advances in technology always prevail. In time we learn to trust our new tools and use them with enthusiasm. The introduction of AI into enterprise applications will follow the same path; it’s only human. Initially there will be plenty of resistance, because AI will fundamentally change how users interact with software. Large amounts of structured and unstructured new data will enter the ERP system invisibly, through mobile, UI improvements and IoT. Knowledge systems will employ agents – computer programs that decide and act independently on behalf of an employee – and many users will feel threatened. Despite this, technology will prevail. It always does, because companies will always strive to operate more efficiently and focus on their primary objective: serving their customers.

Should we then fear it? No, not at all. New technology brings us better living standards, safer working conditions, more goods, better services, and endless benefits. AI will do likewise. It will remove tedious, repetitious work.  It will empower employees to improve their performance. It will enable companies to provide more goods and better services. A new era is upon us. Embrace it. The impact of AI on ERP will be enormous and wonderful.

Four megatrends that will reshape what software can do for us

Four major enterprise management trends are maturing simultaneously, and their fusion has only begun to transform the way we conduct business. They are big data, mobile access, user experience, and computer intelligence.

Individually, each of these breakthroughs is a game changer. Together they will fuse into a new, vast and complex enterprise software landscape. Mastering it will require a new type of software capable of autonomously emulating how people perform tasks and make decisions. By operating in the background on a massive scale, it will free employees from mundane tasks and empower them to spend their time on high-level decision-making and oversight. It is called self-driving software.

Self-driving enterprise software

By using advanced technologies for pattern recognition, machine learning and computer-aided decision support, as well as new IoT devices, software can make automated decisions on behalf of the user, or guide users toward intelligent decisions based on smart patterns. This is achieved through self-learning software that infers actions from previous or similar scenarios and uses them to pre-populate and self-drive.

Interaction with the software is minimized so that people are only involved when human intelligence and experience is critical to resolving a business issue or driving a business process in the right direction. When user interaction is required, a UI is provided in its simplest form, on the user’s preferred device, intelligently adapted to the situation.

Incredible value can be derived by limiting the amount of manual input required by users so they can focus on serving customers. This is the future of business software.

What is self-driving enterprise software really about?

In a nutshell, self-driving enterprise software is about creating software with which the user does not need to interact, or with which interaction is at least minimized. The reality is that the majority of the interactions users have with their enterprise software are administrative. They are not about doing a job well or running a business, but simply about maintaining the mechanics of sending invoices, doing bookkeeping or filing expense reports.

All these mechanical interactions can be tracked, analyzed and fed into algorithms so the software can learn typical usage patterns within a specific context, whether that’s procurement, invoicing, bookkeeping or expenses. Through pattern recognition and statistical analysis, the software can “learn” typical user actions in specific situations and automate the expected action in the future.

Analyzing past interactions will allow us to construct more semi-intelligent solutions – solutions that dynamically alter behavior based on previous interactions, decisions and user actions. There are a number of different aspects to self-driving software.

Adaptive UI

The fundamental idea of adaptive UI is, as the name suggests, UI that intelligently adapts to the user based on previous interactions. In today’s enterprise software, forms or screens typically contain many fields that need to be filled out by the user. Some fields are linked with other fields, and there are even hard dependencies, meaning that if a certain field has a specific value, others need to be populated exactly right for the system to accept the data as a whole. For a casual user simply entering a purchase order, this can be a horrifying experience – regardless of how nice the UI looks. The adaptive UI records everything the user is doing, when he or she is doing it, and in what order. If a user always populates a specific set of fields with the same values, the system can do that automatically. Even better, if some fields are always left untouched, they can be removed dynamically from the UI to make it less noisy and confusing. By removing unused fields and populating everything automatically based on previous interactions, the system can complete the task on the user’s behalf. For example, if someone orders A4 binders on every second Monday from the same vendor, the system might as well just do it.
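A minimal sketch of that idea, assuming interaction logs are available as a list of previously submitted forms (the field names and values below are hypothetical):

```python
def learn_form_defaults(submissions):
    """Inspect past form submissions: fields the user always fills with the
    same value become defaults; fields never touched become hidden."""
    fields = set().union(*(s.keys() for s in submissions))
    defaults, hidden = {}, set()
    for field in fields:
        values = [s.get(field) for s in submissions]
        if all(v is None for v in values):
            hidden.add(field)            # never used -> remove from the UI
        elif len(set(values)) == 1:
            defaults[field] = values[0]  # always identical -> pre-populate
    return defaults, hidden

# Hypothetical purchase-order history: same vendor every time, notes never used.
history = [
    {"vendor": "ACME", "item": "A4 binders", "qty": 10, "notes": None},
    {"vendor": "ACME", "item": "Staples", "qty": 5, "notes": None},
]
defaults, hidden = learn_form_defaults(history)
print(defaults)  # {'vendor': 'ACME'}
print(hidden)    # {'notes'}
```

A real system would add a confidence threshold before hiding or pre-filling anything, but the principle – mine the interaction log, then simplify the form – is the same.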


Business processes normally require user interaction for approvals – a manager needs to approve a request for a new laptop from one of his reports, for example. If the manager in question always approves requests within certain criteria, most commonly the price, the system can give that approval itself and notify the manager that it has been done.

Historical and Current Data

What I’ve discussed above is all based on historical data. However, by aggregating historical and current data – like your present location – software can automatically populate travel expenses and field service work reports, and even suggest additional maintenance work to be done as part of a field service visit.

Some examples of where self-driving ERP will add significant value:

Expense Reports

Expense reporting is a great example of an administrative task that lends itself to self-driving software. In the future, systems will automatically create line items in an expense report from the scan of a receipt. By extracting the address, the system can gauge what the item is, and based on the time it can tell whether it’s breakfast, lunch or dinner – essentially populating the complete line. Rather than having to store the receipt, go into the system and enter expense type, date and amount, all the user has to do is photograph the receipt.
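Assuming the OCR step has already extracted vendor, amount and timestamp from the receipt, the rest of the line can be derived with simple rules. The meal cut-off hours here are illustrative assumptions:

```python
def classify_meal(hour):
    """Guess the expense type from the receipt's timestamp (assumed cut-offs)."""
    if hour < 11:
        return "breakfast"
    if hour < 16:
        return "lunch"
    return "dinner"

def expense_line(vendor, amount, hour):
    """Build a complete expense line from data extracted off a scanned receipt."""
    return {"vendor": vendor, "amount": amount, "type": classify_meal(hour)}

line = expense_line("Cafe Milano", 14.50, hour=8)
print(line)  # {'vendor': 'Cafe Milano', 'amount': 14.5, 'type': 'breakfast'}
```

In practice the classification would also use the vendor category and the user’s travel itinerary, but the point stands: the user photographs the receipt, and the line fills itself in.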

Field Service

Within field service, systems can schedule maintenance visits for equipment based on the previous maintenance of similar equipment at other customers. And if a visit is already planned, it may prove more economical to proactively service other equipment while already on-site.
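A crude sketch of that scheduling idea: predict the next visit from the average interval between past services of similar equipment. The service-history figures are hypothetical:

```python
from statistics import mean

def next_service_day(service_days):
    """Predict the next maintenance day from past service dates (as day
    numbers) of similar equipment, using the average interval between visits."""
    intervals = [b - a for a, b in zip(service_days, service_days[1:])]
    return service_days[-1] + round(mean(intervals))

# Hypothetical: a pump class historically serviced roughly every 90 days.
print(next_service_day([0, 92, 180, 271]))  # 361
```

A production system would weight recent intervals, usage hours and sensor data more heavily, but even this baseline lets the scheduler propose visits proactively.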

The future is here

Previously, analyzing massive amounts of data was expensive due to the data storage and computational power required. However, with ever-decreasing prices for storage and CPU, running advanced algorithms is becoming more accessible, enabling the inclusion of semi-intelligent agents in the vast majority of the functionality within enterprise software. Customers will benefit by being able to focus on running their businesses and serving their clients rather than on the tedious mechanics of maintaining the ERP.

This is why we are developing enterprise software that lets customers concentrate on their business instead of on the software, by making it completely self-driving. We’re already starting to deliver this in our software.


Pattern Driven Development

Patterns are not a new buzzword. Their use and definition date back to 1994, when the book “Design Patterns” was released by the Gang of Four. Since then, countless books have been published on the subject. Some of the most notable and frequently mentioned are “Enterprise Integration Patterns”, “Patterns of Enterprise Application Architecture”, and the lesser-known “Patterns in Java” and its sibling “Patterns in Objective-C”. All good books and definitely worth spending time browsing through.

I say browsing through because, personally, I never found it especially worthwhile to read any book on patterns from start to finish, mostly because I simply cannot remember all the patterns described after reading the book. By browsing and reading the introductions to different patterns, you can pick up the essence, and if a pattern relates to something you have worked on, you gain a standard terminology for describing to others what a specific area of the code accomplishes.

Over the years I have witnessed a number of projects where the engineering teams adopt what I prefer to call “Pattern Driven Development”. All major pieces of the code end up being structured around specific patterns, to the extent that you can actually recognize in the system the example code from the book in which a particular pattern was found. Moreover, different engineers read different patterns books, which means you can find the same pattern implemented slightly differently in different sections of the system.

The latter, as anyone can envision, leads to a fragmented and confusing code base, with non-cohesive implementations of code solving similar problems – sometimes even the same problem. This contradicts the fundamental idea of patterns: providing a uniform understanding and terminology to use when discussing and addressing specific problems.

To me, the real value of patterns is not the example code, but the problem a given pattern solves and the terminology, which constitutes a protocol between developers when discussing a specific problem to be solved. Simply copying the book examples into your own code is, in my view, not a good plan.

Thoughts on Rewrites (Estimation)

Estimating rewrites is hard, and developers frequently tend to underestimate such a task – as, in their minds, they know exactly how the system should work, having already built it once. This holds only if the rewrite is a 1-to-1 rewrite, meaning you are constructing a similar solution from an architectural, deployment and functional perspective. But if that is the case, why even bother rewriting?

Moving from an on-premise single solution to a super-scalable cloud-based solution requires a completely new architecture and, equally important, new deployment scenarios. The latter need to factor in continuous deployment, so that functional updates can be shipped at a quicker pace than for on-premise solutions, while ensuring that updates don’t break customer-specific configuration or functionality.

These elements, along with others, spill into the architecture, which needs to be more loosely coupled, utilizing a decoupled messaging strategy that allows functional instance clusters to be deployed – by using a service-oriented approach such as MSA (Micro Service Architecture) and patterns like, say, CQRS. Every application will always have some requirement for customer-specific functionality or configuration, which in a super-scalable solution shouldn’t be accomplished through code changes, but by providing facilities in the underlying platform for extending the solution by means of metadata, configuration, rules engines or built-in scripting capabilities. For the latter, it is imperative that such a construct can be isolated within its own cluster to prevent scripts from impacting the base system.
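The CQRS idea mentioned above can be sketched in a few lines: commands mutate state through a write model, while queries are served from a separate read model kept in sync by events. The class and event names are hypothetical and the in-memory dictionaries stand in for real stores:

```python
class OrderReadModel:
    """Denormalized read side, updated by events from the write side."""
    def __init__(self):
        self.summaries = {}

    def apply(self, event, order_id, item, qty):  # projection
        if event == "order_placed":
            self.summaries[order_id] = f"{qty} x {item}"

    def order_summary(self, order_id):            # query
        return self.summaries.get(order_id)

class OrderWriteModel:
    """Write side: handles commands and emits events to the read side."""
    def __init__(self, read_model):
        self._orders = {}
        self._read_model = read_model

    def place_order(self, order_id, item, qty):   # command
        self._orders[order_id] = {"item": item, "qty": qty}
        self._read_model.apply("order_placed", order_id, item, qty)

read = OrderReadModel()
write = OrderWriteModel(read)
write.place_order("po-1", "A4 binders", 10)
print(read.order_summary("po-1"))  # 10 x A4 binders
```

The payoff in a cloud architecture is that the read and write sides can be scaled and deployed as separate clusters, which is exactly the loose coupling the rewrite needs.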

Another element of rewriting is the functional capabilities of the solution going forward. When faced with the opportunity – or challenge – of a rewrite, it is equally important to reconsider and rethink the functionality both at the micro-level (features) and at the macro-level (solution). It is an opportunity to fix previous shortcomings as well as to focus on new capabilities that may not even exist in the current offering.

The above leads to the rule of rewrites: DON’T. If required, rethink and build new, with a sharp focus on the value creation a new solution would bring to the customer.

Focusing initially on constructing a flexible architecture and internal processes that allow for continuous deployment will pave the way for providing functionality to customers earlier in the development phase than can be accomplished in an on-premise model, because you control the update cycles – and are not at the mercy of the customer’s ability or willingness to deploy updates. Moreover, it is crucial to align platform and functional development activities so that visible progress can be shown to stakeholders, instilling confidence that the project is on track. Having tangible deliverables throughout the development phase increases the project’s chance of success significantly, not only because stakeholders tend to get nervous without visible progress, but equally because you will be able to receive customer feedback on a continuous basis, allowing you to adjust or enhance features quickly and redeploy for feasibility testing.

The net is that a transition from a traditional on-premise solution to a cloud-based solution will, in most cases, take as long as it took to develop the solution you are trying to cloud-enable. The question then becomes how to calculate the original effort. It is not straightforward to calculate the total effort, in man-months, invested in building a solution over a multi-year time frame, but it is a good mental exercise to try, as it will give you a better idea of the total scope. In most cases people are surprised by how long it actually took to build a solution.

The next element of the top-level estimation process is somewhat controversial, as it builds on research into programmer productivity conducted by Boehm in 1995. Boehm found that a programmer in a medium-complexity system effectively produces 900 lines of code per month (LOC/pm). Note that this number is an average over the lifetime of a system: initially it is higher, but as the complexity of the system increases, the LOC/pm decreases. The LOC/pm figure can be used as a sanity check against the total estimated time from above.
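The sanity check itself is a one-liner: divide the system’s total line count by the average productivity. The 810,000-line figure below is an illustrative input, not from the cases that follow:

```python
def loc_sanity_check(total_loc, loc_per_month=900):
    """Boehm-style sanity check: total lines of code divided by average
    productivity (900 LOC per programmer-month for a medium-complexity system)."""
    return total_loc / loc_per_month

# Illustrative: an 810,000-line system implies roughly 900 man-months of effort.
print(round(loc_sanity_check(810_000)))  # 900
```

If this number and the historical-effort estimate land in the same ballpark, you can have some confidence in the top-level figure; if they diverge wildly, one of your inputs deserves scrutiny.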

The third element of the top-level estimation is a team estimate, where the development team itself gives a high-level estimate.

Having these numbers will give you a rough idea of the total effort required, and some figures on which to base your reasoning when presenting the effort and investment required to stakeholders.


To illustrate, I will use examples from two previous projects.

Case 1

The team had estimated a total effort of 324 man-months. The estimated historical effort came to 900 man-months, and the LOC/pm calculation gave 920 man-months. In this particular case the actual time ended up being 870 man-months. Here the historical and LOC/pm estimates matched very well, but the team estimate was only 37% of the actual.

Case 2

In another project the team estimate was 180 man-months. The estimated historical effort came to 800 man-months, and LOC/pm gave 1,300. Here the actual time was 1,080 man-months. In this case the team estimate was only 16% of the actual.

In both of the above cases, the hardest part was not convincing stakeholders of the effort required, but getting the development teams to accept the initial estimate based on a combination of historical effort and LOC/pm. Personally I find that intriguing, as it illustrates that developers are inherently optimistic when it comes to estimation – not only at the micro-level, implementing specific features, but also at the macro-level, estimating the whole solution.

How to Predict the Next Disruptive Technology

On a very lazy Saturday I was pondering what the next great disruptor in technology could be – something I could latch onto as one of the first movers. But how would anyone be able to predict that? One of nature’s hard questions, I guess. I quickly dismissed the idea of dreaming something up myself in the hope that whatever irrelevant thought I came up with would end up taking over the world, and instead commenced a small research project, looking at recent disruptive innovations in the technology space to see if I could spot a trend.

As a starting point I listed the technologies or shifts that have had the most profound impact on our lives and have also caused disruption to existing businesses. The first game changer was the rise of the personal computer, around 1970 – exactly when depends on the definition of a PC – which paved the way for affordable computers. The second was the internet, on which web browsers (1990) were built; personally, the biggest game changer here was the web browser and the ability to move from client/server to hosted solutions. Third was the iPhone (2007), which revolutionized mobile phones. The fourth is cloud computing; it is hard to determine exactly when cloud computing started getting real traction, but a reasonable suggestion would be 2006, when Amazon launched Amazon Web Services.

Listing the above four disruptive technologies wasn’t hard. But how would anyone determine whether or not any of these would in fact take off and become truly disruptive technologies and major game changers? Well, after doing some research on the internet using my web browser, I found a number of quotes relating to each of them.

Personal Computer

  • “But what…is it good for?” — Engineer at the Advanced Computing Systems Division of IBM, 1968, commenting on the microchip. Included because the microchip and the personal computer go hand in hand.
  • In 1977, Ken Olsen, CEO of DEC, said, “There is no reason for any individual to have a computer in his home”.

Internet/Web Browsers

  • In 1995, Robert Metcalfe, founder of 3Com, inventor of Ethernet, said, “I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse”
  • In 1993, Bill Gates, CEO of Microsoft, said, “The Internet? We’re not interested in it.”
  • Newsweek wrote in 1995, “The Internet is just a fad.”

iPhone

  • In 2007, Steve Ballmer, CEO of Microsoft, said, “There’s no chance that the iPhone is going to get any significant market share. No chance.”
  • In 2007, David Platt, Author, said, “The iPhone is going to be a bigger marketing flop than Ishtar and Waterworld.”
  • In 2007, Seth Porges, TechCrunch columnist, said, “We predict the iPhone will bomb.”
  • In 2007, Al Ries, Marketing Consultant, wrote an article, “Why the iPhone will fail.”
  • In 2007, Brett Arends, Financial Writer, said, “The iPhone isn’t the future.”
  • In 2007, Todd Sullivan, Investment Advisor, said, “The iPhone: Apple’s first flop.”
  • In 2007, Mitchell Ashley, IT executive, said, “The iPhone is certain to fade into history.”
  • In 2007, John Dvorak, MarketWatch columnist, said, “Apple should pull the plug on the iPhone.”

Cloud Computing

There are tons of similar quotes, all dismissing, to a greater or lesser degree, the truly disruptive technologies mentioned above. Still, the question of how to predict the next great thing persists – and how can the above help? Well, it seems the likelihood of a new trend, technology or shift taking hold and developing into a disruptive innovation is proportional to the amount of bashing it receives from technology pundits and from companies whose business models risk being disrupted by the emerging innovation.

Relating that to my current area, ERP, there have been a couple of disruptive shifts towards cloud-based solutions and lighter clients – responsive web design. The newcomers primarily focus on providing cloud-based solutions over on-premise installations. However, taking into consideration the recent developments in BYOD and the growing number of young people who only use mobile devices – in some cases not even owning a PC – it is very probable that the next disruption of the ERP space, the one that can threaten the incumbents, is a combination of a cloud-based solution and mobile clients.

#erp #cloudcomputing #mobile #disruptive #predictions

Random Thought

As a software professional I keep being astonished by what users accept and put up with when it comes to faulty, buggy and unfathomably useless apps or systems. You download an app from an app store – or, if you’re older than 20, you may even install it on your laptop or desktop. Either way, you start using it, and it works fine, or at least it seems to. Then, for some incomprehensible reason, your device “blows up”. And guess what? We simply shrug and reboot the whole thing – and what’s most astounding is that we don’t even complain.

If you think about it, it’s comparable to getting the milk out of the fridge for your cereal every day, except that at random the milk would explode. Wouldn’t that upset you?