IT is continually and increasingly pressured by the business world to justify the value it brings to the company, given the huge investments it absorbs. Quite rightly so, and thankfully we are moving into a new era of how we think about technology systems, moving away from the duplication of data and platforms that creates both the need for EAI and the problems and costs associated with its implementation. Designing systems to access real-time shared services will eliminate this unnecessary step and simultaneously shape IT to naturally reflect and enable the cross-company business processes it was intended for in the first place.
Building Real-Time Enterprises
Another technically correct term associated with On Demand is the 'Real-Time Enterprise'.
A logistics operation that checks goods in and out of warehouse inventory systems holds second-hand, static data: an item might later be misplaced, but the database will still reflect the old information. It isn't real-time data, it isn't live, because it needs manual updates to reflect reality, and that costs a fortune. Even one misplaced item can cost thousands of dollars because of the business process failures it causes, such as customer orders being based on this old, incorrect data.
Thus 'organic IT' is achieved through models that harness literal information: RFID (Radio Frequency Identification) tags attached to stock items use mobile Location Based Services to transmit their location as it actually is, at all times. Workflow designed to reference these services therefore becomes a real-time process and provides the foundation for self-managing networks. There is no need to spend money and time checking in an item when its location is always known to a real-time IT environment.
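As an illustrative sketch only: the location service endpoint, tag identifiers and response fields below are hypothetical, but they show the shape of a workflow step that reads an item's live position from a shared Location Based Service rather than trusting a manually updated inventory record.

```python
import requests  # assumption: the shared location service exposes a simple HTTP/JSON API

LOCATION_SERVICE = "https://location.example.com/items"  # hypothetical shared service


def current_location(rfid_tag: str) -> dict:
    """Ask the shared Location Based Service where the tagged item is right now."""
    response = requests.get(f"{LOCATION_SERVICE}/{rfid_tag}", timeout=5)
    response.raise_for_status()
    return response.json()  # e.g. {"warehouse": "LHR-2", "zone": "B14", "seen_at": "..."}


def pick_item(rfid_tag: str, expected_warehouse: str) -> bool:
    """A workflow step that references live data instead of a checked-in, static record."""
    location = current_location(rfid_tag)
    if location["warehouse"] != expected_warehouse:
        # The item has been misplaced; the real-time view prevents orders against stale data.
        return False
    return True
```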
Legacy transformation
Of course, the key question is how you update the legacy systems that run the current logistics operations, from the SAP installations through to the old mainframe platforms.
'Legacy transformation' is the function of rejuvenating old, change-resistant environments, and will occur across three dimensions:
The hardware they run on
Their interoperation with other systems
How they are updated by programmers
The On Demand Services Architecture combines these to make IT inherently adaptable to change, and is achieved through the following foundations.
Self-created problems
The key principle of self-organising systems is to reduce, not increase, manual workload, and this is especially important for technology, a domain under constant pressure to demonstrate how it adds real value to the business for what it costs. This is accomplished by understanding how we create this workload in the first place.
IT is designed and deployed from a restricted, top-down point of view: when a department or even an entire organisation decides it needs CRM or other functionality, another duplicated island of information is created as soon as a package or new code is deployed.
Although there is only ever one actual physical customer, or only one actual bicycle being shipped from China to Tescos in London, in IT the answer to every question is to implement a new database: another instance of static, duplicated data that already exists in multiple systems across multiple companies.
Middleware is dead. Long live shared services.
Therefore we create the workload for ourselves. We create the need for EAI, for the middleware required so that "this system can talk to that system": the most common source of cost, pain and change resistance in every IT project.
When we consider the desired end result, we can see how this is a process of going one step back and then one forward. That result is information singularity: data changes being universally reflected throughout all relevant value chain systems, so that when orders are placed or customers' details change, the information is consistently updated everywhere it needs to be.
Multiple different applications are deployed, then enormous sums spent trying to re-unite them. We dig a hole, get in it, then try and fill it back in while we’re still in it.
Like all effective solutions, it's one of simplicity: don't dig the hole to begin with. If there is only one customer, only one bicycle, then design business processes to reflect that physical reality. Ignore artificial organisational boundaries and think in terms of being only part of a broader, end-to-end workflow that spans from customer to manufacturer.
Logistics requires bulk shipping that is broken down incrementally as it tends towards a single user, but information about the contained items doesn’t need to work the same way. Information flow can be entirely frictionless. If we let it.
Don't seek to create another copy of redundant data; access a single, real-time source instead.
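A minimal sketch of the principle, assuming a hypothetical shared customer service rather than any particular product: instead of importing customer rows into yet another local database, the application reads and writes the single live record.

```python
import requests  # assumption: the shared customer service speaks HTTP/JSON

CUSTOMER_SERVICE = "https://customers.example.com/v1"  # hypothetical single source


def get_customer(customer_id: str) -> dict:
    """Read the one live record rather than a locally duplicated copy."""
    response = requests.get(f"{CUSTOMER_SERVICE}/customers/{customer_id}", timeout=5)
    response.raise_for_status()
    return response.json()


def update_address(customer_id: str, new_address: dict) -> None:
    """Write back to the same single source, so every consumer sees the change."""
    response = requests.patch(
        f"{CUSTOMER_SERVICE}/customers/{customer_id}",
        json={"address": new_address},
        timeout=5,
    )
    response.raise_for_status()
```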
Distributed, shared services
These single sources will be distributed, shared services. It’s what really underlies the appeal of Web services: A universal programming environment where we can create new applications faster by embedding patterns of calls to other remote services.
Ironically, Web services have been positioned as a new form of lightweight middleware, but it is this global plug-and-play modularity, which has been calling out to us all along, that will actually spell the end of EAI.
This taken-for-granted habit, and its solution, is outlined by Sun Microsystems' Executive VP of Software, Jonathan Schwartz, who predicts the end of middleware with the rise of shared services (http://sys-con.com/story/?storyid=43550&DE=1).
Jonathan is correct. The purpose of middleware is to shuffle and translate data between numerous different systems that need to access correct information, and the IT industry has fallen into the habit of assuming all this pain and cost is necessary.
It's not. Just design your system to use a universal shared service and you're always working from live, real-time, up-to-date information. No more 'data cleaning', ever again. No more EAI, ever again.
The rationale of shared services, and why they will prove so effective, is one of ultra-simplicity. If there is only one You, there only needs to be one version of the data, and one service to manage and access it. This is most pertinently demonstrated through the function of user authentication. Isn't it frustrating that every web site you use requires yet another username and password, yet another copy of your personal information? That is because, of course, each of them is duplicating CRM and web site databases too.
Therefore a model where this authentication is a network-based, central service that each e-business environment accesses, gaining exactly the same function as if the user logged in locally, solves everyone's problem:
Users only have to log in once to any device to then gain access to any site or service running anywhere
Business and government save enormous sums by eliminating the need for EAI in the first place, re-using software components rather than developing them from scratch each time, and of course delivering streamlined service access for customers
With telecommunications and Digital TV providers using these same shared services, we will gain these benefits across all the media and devices we use, from our mobile phones to our TVs. Any service will be accessible from anywhere, On Demand.
Frictionless e-business, Amazon style, everywhere.
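As a hedged sketch of the shared-authentication model described above (the endpoints and token format are assumptions, not a specific product): each site delegates login to the one network service and simply verifies the token it returns, so no site keeps its own copy of the user's credentials.

```python
import requests  # assumption: the shared authentication service exposes HTTP/JSON endpoints

AUTH_SERVICE = "https://auth.example.com"  # hypothetical network-based identity service


def login(username: str, password: str) -> str:
    """Authenticate once against the shared service and receive a network-wide token."""
    response = requests.post(
        f"{AUTH_SERVICE}/sessions",
        json={"username": username, "password": password},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["token"]


def verify(token: str) -> dict:
    """Any participating site calls this instead of maintaining its own user database."""
    response = requests.get(
        f"{AUTH_SERVICE}/sessions/verify",
        headers={"Authorization": f"Bearer {token}"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"user_id": "...", "entitlements": [...]}
```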
Introducing Model Driven Federation
What we have started working on at the On Demand Network forum is the implementation methodology for distributed shared service architecture, a program called Model Driven Federation.
Shared services are achieved through 'federated architecture': systems that accomplish singularity by subscribing to shared information models. This membership subscription defines how distributed components unite to exchange and replicate data to effect global consistency, and thus delivers a local peer of a universal service.
The subscription defines that each member both accepts and publishes updates, so that the ultimate user and owner of the data can change their information via any network member, anywhere, and see it consistently replicated throughout the network.
For example, they can change their address in Microsoft Outlook and it will be changed on their friends' mobile phones.
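A toy, in-memory sketch of the federation idea just described (the class and method names are illustrative, not part of MDF itself): each member both publishes and accepts updates against the shared model, so a change made at any peer is replicated to every other peer.

```python
class FederationMember:
    """A local peer of the universal service: it accepts updates and publishes its own."""

    def __init__(self, name: str, network: "Federation"):
        self.name = name
        self.records: dict[str, dict] = {}  # local replica of the shared information model
        self.network = network
        network.join(self)

    def change(self, owner_id: str, fields: dict) -> None:
        """The data owner edits their information here; the change is published network-wide."""
        self.accept(owner_id, fields)
        self.network.publish(self, owner_id, fields)

    def accept(self, owner_id: str, fields: dict) -> None:
        """Apply an update received from any member of the federation."""
        self.records.setdefault(owner_id, {}).update(fields)


class Federation:
    """The shared model's membership list and replication fabric."""

    def __init__(self):
        self.members: list[FederationMember] = []

    def join(self, member: FederationMember) -> None:
        self.members.append(member)

    def publish(self, source: FederationMember, owner_id: str, fields: dict) -> None:
        for member in self.members:
            if member is not source:
                member.accept(owner_id, fields)


# Change an address at one peer (say, a desktop contact manager) and the same
# record is replicated to every other peer (say, a friend's phone).
network = Federation()
outlook = FederationMember("outlook", network)
phone = FederationMember("phone", network)
outlook.change("alice", {"address": "1 New Street, London"})
assert phone.records["alice"]["address"] == "1 New Street, London"
```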
This distributed systems approach will allow process data previously locked away on mainframes to be intelligently replicated out onto grid computing platforms, provided by the virtualisation of many small hardware units such as 'blade servers' operating in Internet colocation centres.
Ultra rapid development
MDF is based on Model Driven Architecture, pioneered by the OMG (omg.org), where faster, more effective software development is possible through the re-use of pre-defined 'patterns' of code. Since IT architecture is a function of design, best practices in how application environments are constructed can be encoded into models that can be applied repeatedly, given that all technology scenarios feature common requirements and structures.
MDF builds on this: because the purpose of IT is to facilitate business processes, and shared services are instances of federated workflow, the same model-driven approach can be used for them. For example, user authentication is a workflow service that provides the mechanism for allowing access to a system's resources, generating billing data from its usage, and so forth.
Therefore, establishing how such a service can be blended into an SAP, mainframe or other legacy environment means that this development can be re-used across any SAP installation, since these are common modules in all of them.
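A hedged sketch of that re-use argument, with invented adapter names standing in for real SAP or mainframe interfaces: the workflow pattern is defined once and blended into each legacy environment through a thin adapter, so the same development serves every installation.

```python
from abc import ABC, abstractmethod


class AuthenticationBackend(ABC):
    """The point where the shared pattern is blended into a specific legacy environment."""

    @abstractmethod
    def check_credentials(self, username: str, password: str) -> bool: ...


class SapBackend(AuthenticationBackend):
    def check_credentials(self, username: str, password: str) -> bool:
        # Placeholder: a real adapter would call the installation's own user store.
        return self._lookup(username) == password

    def _lookup(self, username: str) -> str:
        return "secret"  # stand-in for the legacy lookup


class MainframeBackend(AuthenticationBackend):
    def check_credentials(self, username: str, password: str) -> bool:
        return False  # stand-in: delegate to the mainframe's security subsystem


def authentication_pattern(backend: AuthenticationBackend, username: str, password: str) -> dict:
    """The re-usable workflow model: authenticate, then emit billing data for the usage."""
    allowed = backend.check_credentials(username, password)
    return {"access_granted": allowed, "billing_event": {"user": username, "chargeable": allowed}}
```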
When we look at any business model and its enabling IT functions, we can see that there are relatively few workflow building blocks. CRM, content management and ERP might be complex and unique in each scenario, but cutting across each of them are the same Lego bricks of authentication, messaging, information storage and so on.
Therefore creating 'composite applications', where new functionality is built by factoring together these universal components, not only makes software engineering lighter and faster; with pre-defined, nested patterns of code that embed the shared services, naturally customer-centric applications will be built.
Instead of creating another instance of customer data and related processes like authentication, the singular methods they already use can be harnessed. Better software, cheaper and faster.
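A minimal sketch of such a composite application, with hypothetical service clients standing in for the shared authentication, customer and messaging components: the new functionality is only the few lines that factor them together, and no customer data is copied locally.

```python
def place_order(auth_service, customer_service, messaging_service, token: str, item: str) -> str:
    """A composite application: the only new code is the orchestration itself.

    auth_service, customer_service and messaging_service are assumed to be clients
    for shared network services of the kind sketched earlier in this article.
    """
    user = auth_service.verify(token)                           # shared authentication
    customer = customer_service.get_customer(user["user_id"])   # the single customer record
    order_id = customer_service.add_order(customer["id"], item)
    messaging_service.send(customer["email"], f"Order {order_id} confirmed for {item}")
    return order_id
```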
The primary characteristic of the Service Oriented Architecture is, as it sounds, that the service is logically separate from common network methods such as user authentication, meaning that developers will be freed to create only the new service: only the new process itself, with all other common components provided by the network. With these shared services implemented universally by distributed computing, they will be available locally and can be harnessed for 'ultra customisation': creating workflow specifically unique to each and every customer.
Process programming
Because these services are defined at the business process level, programming tools will come into effect that tackle the primary "change bottleneck" in corporate adaptability.
Graphical, drag-and-drop tools that allow non-technical users to modify workflow code will remove the constraint on change implementation that arises when the software development team is the only resource in the company that can change the business systems. With every new business initiative, from a simple marketing campaign to the launch of a new product, requiring changes to these systems, this constraint causes queues of workload to build up, and profit growth programmes are held back.
The business is exposed to the threat of competitors exploiting market opportunities more quickly, and feedback from customers cannot be reflected rapidly back into the infrastructure. In general, it lacks an adaptable IT platform.
When front-line staff, non-technical managers and other personnel can directly change the systems themselves via these tools, the business can respond to the need for change in real time. The two-step process of stipulating business need to IT design can be eliminated and change implemented on demand, allowing call-centre agents to respond to change requests then and there, and marketing teams to exploit opportunities without delay.
With the services architecture including network-level as well as application-level functions, totally unique customisation can be applied to any and all processes. A call-centre operator could create workflow for a customer so that when they phone, their call bypasses the touch-tone greeting and routes straight through to that operator instead. Marketing teams could create e-brochures that adapt themselves locally to each customer, by embedding into the documents only services that dynamically combine with local, distributed preference and profile data, thereby presenting contextually relevant actions such as a one-click hyperlink to order the product when it is relevant to the customer's profile.
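As a sketch of what such a drag-and-drop tool might produce behind the scenes (the rule format and field names are invented for illustration): the call-routing customisation is expressed as declarative process data rather than code, so a call-centre agent could create it without involving a developer.

```python
# A customer-specific routing rule, as a drag-and-drop workflow tool might store it.
ROUTING_RULES = [
    {"customer_id": "C-1042", "skip_ivr": True, "route_to": "agent-julie"},
]


def route_call(customer_id: str, default_queue: str = "touch-tone-menu") -> str:
    """Apply any per-customer workflow before falling back to the standard greeting."""
    for rule in ROUTING_RULES:
        if rule["customer_id"] == customer_id and rule.get("skip_ivr"):
            return rule["route_to"]  # straight to the named agent, no touch-tone greeting
    return default_queue
```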
Conclusion – The Adaptable Enterprise
With singular, universal shared services pervasively consistent across all businesses, and these graphical drag-and-drop tools available to front-line staff and to customers themselves, process services from any organisation can be easily composed into new workflow, and Plug and Play business integration will replace EAI.
The ability to implement change in real-time is fully distributed to every touch-point where the business meets the market and so it becomes an inherently adaptable enterprise, changing on demand.
Neil McEvoy is CEO of the Genesis forum (http://www.webservices-strategy.com), an industry initiative of Service Oriented Architecture vendors describing the business benefits of their technologies. He is the Chief Architect of the On Demand framework, the platform for autonomic business models that match demand and supply perfectly. Neil provides unique consultancy solutions to enterprise end-users and vendor suppliers, customised to deliver ROI within the On Demand market. He can be reached at http://www.ondemand-strategy.com